This is mostly RTH's tcg_constant refactoring work, plus a few
other things.

thanks
-- PMM

The following changes since commit cf6f26d6f9b2015ee12b4604b79359e76784163a:

  Merge tag 'kraxel-20220427-pull-request' of git://git.kraxel.org/qemu into staging (2022-04-27 10:49:28 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220428

for you to fetch changes up to f8e7163d9e6740b5cef02bf73a17a59d0bef8bdb:

  hw/arm/smmuv3: Advertise support for SMMUv3.2-BBML2 (2022-04-28 13:59:23 +0100)

----------------------------------------------------------------
target-arm queue:
 * refactor to use tcg_constant where appropriate
 * Advertise support for FEAT_TTL and FEAT_BBM level 2
 * smmuv3: Cache event fault record
 * smmuv3: Add space in guest error message
 * smmuv3: Advertise support for SMMUv3.2-BBML2

----------------------------------------------------------------
Damien Hedde (1):
      target/arm: Disable cryptographic instructions when neon is disabled

Jean-Philippe Brucker (2):
      hw/arm/smmuv3: Cache event fault record
      hw/arm/smmuv3: Add space in guest error message

Peter Maydell (3):
      target/arm: Advertise support for FEAT_TTL
      target/arm: Advertise support for FEAT_BBM level 2
      hw/arm/smmuv3: Advertise support for SMMUv3.2-BBML2

Richard Henderson (48):
      target/arm: Use tcg_constant in gen_probe_access
      target/arm: Use tcg_constant in gen_mte_check*
      target/arm: Use tcg_constant in gen_exception*
      target/arm: Use tcg_constant in gen_adc_CC
      target/arm: Use tcg_constant in handle_msr_i
      target/arm: Use tcg_constant in handle_sys
      target/arm: Use tcg_constant in disas_exc
      target/arm: Use tcg_constant in gen_compare_and_swap_pair
      target/arm: Use tcg_constant in disas_ld_lit
      target/arm: Use tcg_constant in disas_ldst_*
      target/arm: Use tcg_constant in disas_add_sum_imm*
      target/arm: Use tcg_constant in disas_movw_imm
      target/arm: Use tcg_constant in shift_reg_imm
      target/arm: Use tcg_constant in disas_cond_select
      target/arm: Use tcg_constant in handle_{rev16,crc32}
      target/arm: Use tcg_constant in disas_data_proc_2src
      target/arm: Use tcg_constant in disas_fp*
      target/arm: Use tcg_constant in simd shift expanders
      target/arm: Use tcg_constant in simd fp/int conversion
      target/arm: Use tcg_constant in 2misc expanders
      target/arm: Use tcg_constant in balance of translate-a64.c
      target/arm: Use tcg_constant for aa32 exceptions
      target/arm: Use tcg_constant for disas_iwmmxt_insn
      target/arm: Use tcg_constant for gen_{msr,mrs}
      target/arm: Use tcg_constant for vector shift expanders
      target/arm: Use tcg_constant for do_coproc_insn
      target/arm: Use tcg_constant for gen_srs
      target/arm: Use tcg_constant for op_s_{rri,rxi}_rot
      target/arm: Use tcg_constant for MOVW, UMAAL, CRC32
      target/arm: Use tcg_constant for v7m MRS, MSR
      target/arm: Use tcg_constant for TT, SAT, SMMLA
      target/arm: Use tcg_constant in LDM, STM
      target/arm: Use tcg_constant in CLRM, DLS, WLS, LE
      target/arm: Use tcg_constant in trans_CPS_v7m
      target/arm: Use tcg_constant in trans_CSEL
      target/arm: Use tcg_constant for trans_INDEX_*
      target/arm: Use tcg_constant in SINCDEC, INCDEC
      target/arm: Use tcg_constant in FCPY, CPY
      target/arm: Use tcg_constant in {incr, wrap}_last_active
      target/arm: Use tcg_constant in do_clast_scalar
      target/arm: Use tcg_constant in WHILE
      target/arm: Use tcg_constant in LD1, ST1
      target/arm: Use tcg_constant in SUBR
      target/arm: Use tcg_constant in do_zzi_{sat, ool}, do_fp_imm
      target/arm: Use tcg_constant for predicate descriptors
      target/arm: Use tcg_constant for do_brk{2,3}
      target/arm: Use tcg_constant for vector descriptor
      target/arm: Use field names for accessing DBGWCRn

 docs/system/arm/emulation.rst | 2 +
 hw/arm/smmuv3-internal.h | 2 +-
 include/hw/arm/smmu-common.h | 1 +
 target/arm/internals.h | 12 ++
 hw/arm/smmuv3.c | 17 +--
 target/arm/cpu.c | 9 ++
 target/arm/cpu64.c | 2 +
 target/arm/debug_helper.c | 10 +-
 target/arm/helper.c | 8 +-
 target/arm/kvm64.c | 14 +-
 target/arm/translate-a64.c | 301 +++++++++++++-----------------------------
 target/arm/translate-sve.c | 202 ++++++++++------------------
 target/arm/translate.c | 244 ++++++++++++----------------------
 13 files changed, 293 insertions(+), 531 deletions(-)

Some arm patches; my to-review queue is by no means empty, but
this is a big enough set of patches to be getting on with...

-- PMM

The following changes since commit cb9c6a8e5ad6a1f0ce164d352e3102df46986e22:

  .gitlab-ci.d/windows: Work-around timeout and OpenGL problems of the MSYS2 jobs (2023-01-04 18:58:33 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230105

for you to fetch changes up to 93c9678de9dc7d2e68f9e8477da072bac30ef132:

  hw/net: Fix read of uninitialized memory in imx_fec. (2023-01-05 15:33:00 +0000)

----------------------------------------------------------------
target-arm queue:
 * Implement AArch32 ARMv8-R support
 * Add Cortex-R52 CPU
 * fix handling of HLT semihosting in system mode
 * hw/timer/imx_epit: cleanup and fix bug in compare handling
 * target/arm: Coding style fixes
 * target/arm: Clean up includes
 * nseries: minor code cleanups
 * target/arm: align exposed ID registers with Linux
 * hw/arm/smmu-common: remove unnecessary inlines
 * i.MX7D: Handle GPT timers
 * i.MX7D: Connect IRQs to GPIO devices
 * i.MX6UL: Add a specific GPT timer instance
 * hw/net: Fix read of uninitialized memory in imx_fec

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: fix handling of HLT semihosting in system mode

Axel Heider (8):
      hw/timer/imx_epit: improve comments
      hw/timer/imx_epit: cleanup CR defines
      hw/timer/imx_epit: define SR_OCIF
      hw/timer/imx_epit: update interrupt state on CR write access
      hw/timer/imx_epit: hard reset initializes CR with 0
      hw/timer/imx_epit: factor out register write handlers
      hw/timer/imx_epit: remove explicit fields cnt and freq
      hw/timer/imx_epit: fix compare timer handling

Claudio Fontana (1):
      target/arm: cleanup cpu includes

Fabiano Rosas (5):
      target/arm: Fix checkpatch comment style warnings in helper.c
      target/arm: Fix checkpatch space errors in helper.c
      target/arm: Fix checkpatch brace errors in helper.c
      target/arm: Remove unused includes from m_helper.c
      target/arm: Remove unused includes from helper.c

Jean-Christophe Dubois (4):
      i.MX7D: Connect GPT timers to IRQ
      i.MX7D: Compute clock frequency for the fixed frequency clocks.
      i.MX6UL: Add a specific GPT timer instance for the i.MX6UL
      i.MX7D: Connect IRQs to GPIO devices.

Peter Maydell (1):
      target/arm: Set lg_page_size to 0 if either S1 or S2 asks for it

Philippe Mathieu-Daudé (5):
      hw/input/tsc2xxx: Constify set_transform()'s MouseTransformInfo arg
      hw/arm/nseries: Constify various read-only arrays
      hw/arm/nseries: Silent -Wmissing-field-initializers warning
      hw/arm/smmu-common: Reduce smmu_inv_notifiers_mr() scope
      hw/arm/smmu-common: Avoid using inlined functions with external linkage

Stephen Longfield (1):
      hw/net: Fix read of uninitialized memory in imx_fec.

Tobias Röhmel (7):
      target/arm: Don't add all MIDR aliases for cores that implement PMSA
      target/arm: Make RVBAR available for all ARMv8 CPUs
      target/arm: Make stage_2_format for cache attributes optional
      target/arm: Enable TTBCR_EAE for ARMv8-R AArch32
      target/arm: Add PMSAv8r registers
      target/arm: Add PMSAv8r functionality
      target/arm: Add ARM Cortex-R52 CPU

Zhuojia Shen (1):
      target/arm: align exposed ID registers with Linux

 include/hw/arm/fsl-imx7.h | 20 +
 include/hw/arm/smmu-common.h | 3 -
 include/hw/input/tsc2xxx.h | 4 +-
 include/hw/timer/imx_epit.h | 8 +-
 include/hw/timer/imx_gpt.h | 1 +
 target/arm/cpu.h | 6 +
 target/arm/internals.h | 4 +
 hw/arm/fsl-imx6ul.c | 2 +-
 hw/arm/fsl-imx7.c | 41 +-
 hw/arm/nseries.c | 28 +-
 hw/arm/smmu-common.c | 15 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 3 +-
 hw/misc/imx6ul_ccm.c | 6 -
 hw/misc/imx7_ccm.c | 49 ++-
 hw/net/imx_fec.c | 8 +-
 hw/timer/imx_epit.c | 376 +++++++++-------
 hw/timer/imx_gpt.c | 25 ++
 target/arm/cpu.c | 35 +-
 target/arm/cpu64.c | 6 -
 target/arm/cpu_tcg.c | 42 ++
 target/arm/debug_helper.c | 3 +
 target/arm/helper.c | 871 +++++++++++++++++++++++++++++---------
 target/arm/m_helper.c | 16 -
 target/arm/machine.c | 28 ++
 target/arm/ptw.c | 152 +++++--
 target/arm/tlb_helper.c | 4 +
 target/arm/translate.c | 2 +-
 tests/tcg/aarch64/sysregs.c | 24 +-
 tests/tcg/aarch64/Makefile.target | 7 +-
 30 files changed, 1330 insertions(+), 461 deletions(-)
1
The Arm SMMUv3 includes an optional feature equivalent to the CPU
1
In get_phys_addr_twostage() we set the lg_page_size of the result to
2
FEAT_BBM, which permits an OS to switch a range of memory between
2
the maximum of the stage 1 and stage 2 page sizes. This works for
3
"covered by a huge page" and "covered by a sequence of normal pages"
3
the case where we do want to create a TLB entry, because we know the
4
without having to engage in the traditional 'break-before-make'
4
common TLB code only creates entries of the TARGET_PAGE_SIZE and
5
dance. (This is particularly important for the SMMU, because devices
5
asking for a size larger than that only means that invalidations
6
performing I/O through an SMMU are less likely to be able to cope with
6
invalidate the whole larger area. However, if lg_page_size is
7
the window in the sequence where an access results in a translation
7
smaller than TARGET_PAGE_SIZE this effectively means "don't create a
8
fault.) The SMMU spec explicitly notes that one of the valid ways to
8
TLB entry"; in this case if either S1 or S2 said "this covers less
9
be a BBM level 2 compliant implementation is:
9
than a page and can't go in a TLB" then the final result also should
10
* if there are multiple entries in the TLB for an address,
10
be marked that way. Set the resulting page size to 0 if either
11
choose one of them and use it, ignoring the others
11
stage asked for a less-than-a-page entry, and expand the comment
12
to explain what's going on.
12
13
13
Our SMMU TLB implementation (unlike our CPU TLB) does allow multiple
14
This has no effect for VMSA because currently the VMSA lookup always
14
TLB entries for an address, because the translation table level is
15
returns results that cover at least TARGET_PAGE_SIZE; however when we
15
part of the SMMUIOTLBKey, and so our IOTLB hashtable can include
16
add v8R support it will reuse this code path, and for v8R the S1 and
16
entries for the same address where the leaf was at different levels
17
S2 results can be smaller than TARGET_PAGE_SIZE.
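As a standalone sketch of the rule this ends up implementing (simplified, outside the real QEMU structures; names here are illustrative only):

    /* Illustrative only: combine stage 1 and stage 2 page sizes.
     * Anything below TARGET_PAGE_BITS means "do not create a TLB entry".
     */
    static unsigned combine_lg_page_size(unsigned s1_lg, unsigned s2_lg,
                                         unsigned target_page_bits)
    {
        if (s1_lg < target_page_bits || s2_lg < target_page_bits) {
            return 0;                           /* not safe to cache */
        }
        return s1_lg > s2_lg ? s1_lg : s2_lg;   /* max: invalidation covers both */
    }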
17
(i.e. both hugepage and normal page). Our TLB lookup implementation in
18
smmu_iotlb_lookup() will always find the entry with the lowest level
19
(i.e. it prefers the hugepage over the normal page) and ignore any
20
others. TLB invalidation correctly removes all TLB entries matching
21
the specified address or address range (unless the guest specifies the
22
leaf level explicitly, in which case it gets what it asked for). So we
23
can validly advertise support for BBML level 2.
24
25
Note that we still can't yet advertise ourselves as an SMMU v3.2,
26
because v3.2 requires support for the S2FWB feature, which we don't
27
yet implement.
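To illustrate why several entries can coexist, a minimal sketch of such a keyed lookup (field names are illustrative, not the exact QEMU definitions):

    /* Illustrative only: because the leaf level is part of the key, a
     * level-2 block entry and a level-3 page entry for the same IOVA
     * occupy different hash-table slots.
     */
    typedef struct ExampleIOTLBKey {
        uint64_t iova;
        uint16_t asid;
        uint8_t  level;    /* translation-table level of the leaf */
    } ExampleIOTLBKey;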
28
18
29
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
31
Reviewed-by: Eric Auger <eric.auger@redhat.com>
21
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
32
Message-id: 20220426160422.2353158-4-peter.maydell@linaro.org
33
---
22
---
34
hw/arm/smmuv3-internal.h | 1 +
23
target/arm/ptw.c | 16 +++++++++++++---
35
hw/arm/smmuv3.c | 1 +
24
1 file changed, 13 insertions(+), 3 deletions(-)
36
2 files changed, 2 insertions(+)
37
25
38
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
26
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
39
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/arm/smmuv3-internal.h
28
--- a/target/arm/ptw.c
41
+++ b/hw/arm/smmuv3-internal.h
29
+++ b/target/arm/ptw.c
42
@@ -XXX,XX +XXX,XX @@ REG32(IDR2, 0x8)
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
43
REG32(IDR3, 0xc)
31
}
44
FIELD(IDR3, HAD, 2, 1);
32
45
FIELD(IDR3, RIL, 10, 1);
33
/*
46
+ FIELD(IDR3, BBML, 11, 2);
34
- * Use the maximum of the S1 & S2 page size, so that invalidation
47
REG32(IDR4, 0x10)
35
- * of pages > TARGET_PAGE_SIZE works correctly.
48
REG32(IDR5, 0x14)
36
+ * If either S1 or S2 returned a result smaller than TARGET_PAGE_SIZE,
49
FIELD(IDR5, OAS, 0, 3);
37
+ * this means "don't put this in the TLB"; in this case, return a
50
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
38
+ * result with lg_page_size == 0 to achieve that. Otherwise,
51
index XXXXXXX..XXXXXXX 100644
39
+ * use the maximum of the S1 & S2 page size, so that invalidation
52
--- a/hw/arm/smmuv3.c
40
+ * of pages > TARGET_PAGE_SIZE works correctly. (This works even though
53
+++ b/hw/arm/smmuv3.c
41
+ * we know the combined result permissions etc only cover the minimum
54
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
42
+ * of the S1 and S2 page size, because we know that the common TLB code
55
43
+ * never actually creates TLB entries bigger than TARGET_PAGE_SIZE,
56
s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
44
+ * and passing a larger page size value only affects invalidations.)
57
s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
45
*/
58
+ s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
46
- if (result->f.lg_page_size < s1_lgpgsz) {
59
47
+ if (result->f.lg_page_size < TARGET_PAGE_BITS ||
60
/* 4K, 16K and 64K granule support */
48
+ s1_lgpgsz < TARGET_PAGE_BITS) {
61
s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
49
+ result->f.lg_page_size = 0;
50
+ } else if (result->f.lg_page_size < s1_lgpgsz) {
51
result->f.lg_page_size = s1_lgpgsz;
52
}
53
62
--
54
--
63
2.25.1
55
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Cores with PMSA have the MPUIR register which has the
4
same encoding as the MIDR alias with opc2=4. So we only
5
add that alias if we are not realizing a core that
6
implements PMSA.
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-7-richard.henderson@linaro.org
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221206102504.165775-2-tobias.roehmel@rwth-aachen.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
13
---
8
target/arm/translate-a64.c | 31 +++++++++----------------------
14
target/arm/helper.c | 13 +++++++++----
9
1 file changed, 9 insertions(+), 22 deletions(-)
15
1 file changed, 9 insertions(+), 4 deletions(-)
10
16
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
19
--- a/target/arm/helper.c
14
+++ b/target/arm/translate-a64.c
20
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
21
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
16
/* Emit code to perform further access permissions checks at
22
.access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr,
17
* runtime; this may result in an exception.
23
.fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid),
18
*/
24
.readfn = midr_read },
19
- TCGv_ptr tmpptr;
25
- /* crn = 0 op1 = 0 crm = 0 op2 = 4,7 : AArch32 aliases of MIDR */
20
- TCGv_i32 tcg_syn, tcg_isread;
26
- { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
21
uint32_t syndrome;
27
- .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
22
28
- .access = PL1_R, .resetvalue = cpu->midr },
23
- gen_a64_set_pc_im(s->pc_curr);
29
+ /* crn = 0 op1 = 0 crm = 0 op2 = 7 : AArch32 aliases of MIDR */
24
- tmpptr = tcg_const_ptr(ri);
30
{ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
25
syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
31
.cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 7,
26
- tcg_syn = tcg_const_i32(syndrome);
32
.access = PL1_R, .resetvalue = cpu->midr },
27
- tcg_isread = tcg_const_i32(isread);
33
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
28
- gen_helper_access_check_cp_reg(cpu_env, tmpptr, tcg_syn, tcg_isread);
34
.accessfn = access_aa64_tid1,
29
- tcg_temp_free_ptr(tmpptr);
35
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
30
- tcg_temp_free_i32(tcg_syn);
36
};
31
- tcg_temp_free_i32(tcg_isread);
37
+ ARMCPRegInfo id_v8_midr_alias_cp_reginfo = {
32
+ gen_a64_set_pc_im(s->pc_curr);
38
+ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
33
+ gen_helper_access_check_cp_reg(cpu_env,
39
+ .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
34
+ tcg_constant_ptr(ri),
40
+ .access = PL1_R, .resetvalue = cpu->midr
35
+ tcg_constant_i32(syndrome),
41
+ };
36
+ tcg_constant_i32(isread));
42
ARMCPRegInfo id_cp_reginfo[] = {
37
} else if (ri->type & ARM_CP_RAISES_EXC) {
43
/* These are common to v8 and pre-v8 */
38
/*
44
{ .name = "CTR",
39
* The readfn or writefn might raise an exception;
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
40
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
46
}
41
case ARM_CP_DC_ZVA:
47
if (arm_feature(env, ARM_FEATURE_V8)) {
42
/* Writes clear the aligned block of memory which rt points into. */
48
define_arm_cp_regs(cpu, id_v8_midr_cp_reginfo);
43
if (s->mte_active[0]) {
49
+ if (!arm_feature(env, ARM_FEATURE_PMSA)) {
44
- TCGv_i32 t_desc;
50
+ define_one_arm_cp_reg(cpu, &id_v8_midr_alias_cp_reginfo);
45
int desc = 0;
51
+ }
46
47
desc = FIELD_DP32(desc, MTEDESC, MIDX, get_mem_index(s));
48
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
49
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
50
- t_desc = tcg_const_i32(desc);
51
52
tcg_rt = new_tmp_a64(s);
53
- gen_helper_mte_check_zva(tcg_rt, cpu_env, t_desc, cpu_reg(s, rt));
54
- tcg_temp_free_i32(t_desc);
55
+ gen_helper_mte_check_zva(tcg_rt, cpu_env,
56
+ tcg_constant_i32(desc), cpu_reg(s, rt));
57
} else {
52
} else {
58
tcg_rt = clean_data_tbi(s, cpu_reg(s, rt));
53
define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo);
59
}
60
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
61
if (ri->type & ARM_CP_CONST) {
62
tcg_gen_movi_i64(tcg_rt, ri->resetvalue);
63
} else if (ri->readfn) {
64
- TCGv_ptr tmpptr;
65
- tmpptr = tcg_const_ptr(ri);
66
- gen_helper_get_cp_reg64(tcg_rt, cpu_env, tmpptr);
67
- tcg_temp_free_ptr(tmpptr);
68
+ gen_helper_get_cp_reg64(tcg_rt, cpu_env, tcg_constant_ptr(ri));
69
} else {
70
tcg_gen_ld_i64(tcg_rt, cpu_env, ri->fieldoffset);
71
}
72
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
73
/* If not forbidden by access permissions, treat as WI */
74
return;
75
} else if (ri->writefn) {
76
- TCGv_ptr tmpptr;
77
- tmpptr = tcg_const_ptr(ri);
78
- gen_helper_set_cp_reg64(cpu_env, tmpptr, tcg_rt);
79
- tcg_temp_free_ptr(tmpptr);
80
+ gen_helper_set_cp_reg64(cpu_env, tcg_constant_ptr(ri), tcg_rt);
81
} else {
82
tcg_gen_st_i64(tcg_rt, cpu_env, ri->fieldoffset);
83
}
54
}
84
--
55
--
85
2.25.1
56
2.25.1
57
58
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
RVBAR shadows RVBAR_ELx where x is the highest exception
4
level if the highest EL is not EL3. This patch also allows
5
ARMv8 CPUs to change the reset address with
6
the rvbar property.
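(For context, a board model would normally program this property before realizing the CPU; a hypothetical example, not code from this series:)

    /* Hypothetical board code: start execution from flash at 256MiB. */
    object_property_set_int(OBJECT(cpu), "rvbar", 0x10000000, &error_abort);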
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-20-richard.henderson@linaro.org
10
Message-id: 20221206102504.165775-3-tobias.roehmel@rwth-aachen.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
12
---
8
target/arm/translate-a64.c | 26 ++++++--------------------
13
target/arm/cpu.c | 6 +++++-
9
1 file changed, 6 insertions(+), 20 deletions(-)
14
target/arm/helper.c | 21 ++++++++++++++-------
15
2 files changed, 19 insertions(+), 8 deletions(-)
10
16
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
19
--- a/target/arm/cpu.c
14
+++ b/target/arm/translate-a64.c
20
+++ b/target/arm/cpu.c
15
@@ -XXX,XX +XXX,XX @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
21
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
16
int pass;
22
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
17
23
CPACR, CP11, 3);
18
if (fracbits || size == MO_64) {
24
#endif
19
- tcg_shift = tcg_const_i32(fracbits);
25
+ if (arm_feature(env, ARM_FEATURE_V8)) {
20
+ tcg_shift = tcg_constant_i32(fracbits);
26
+ env->cp15.rvbar = cpu->rvbar_prop;
27
+ env->regs[15] = cpu->rvbar_prop;
28
+ }
21
}
29
}
22
30
23
if (size == MO_64) {
31
#if defined(CONFIG_USER_ONLY)
24
@@ -XXX,XX +XXX,XX @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
32
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
33
qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_hivecs_property);
25
}
34
}
26
35
27
tcg_temp_free_ptr(tcg_fpst);
36
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
28
- if (tcg_shift) {
37
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
29
- tcg_temp_free_i32(tcg_shift);
38
object_property_add_uint64_ptr(obj, "rvbar",
30
- }
39
&cpu->rvbar_prop,
31
40
OBJ_PROP_FLAG_READWRITE);
32
clear_vec_high(s, elements << size == 16, rd);
41
diff --git a/target/arm/helper.c b/target/arm/helper.c
33
}
42
index XXXXXXX..XXXXXXX 100644
34
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
43
--- a/target/arm/helper.c
35
tcg_fpstatus = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
44
+++ b/target/arm/helper.c
36
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
37
fracbits = (16 << size) - immhb;
46
if (!arm_feature(env, ARM_FEATURE_EL3) &&
38
- tcg_shift = tcg_const_i32(fracbits);
47
!arm_feature(env, ARM_FEATURE_EL2)) {
39
+ tcg_shift = tcg_constant_i32(fracbits);
48
ARMCPRegInfo rvbar = {
40
49
- .name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64,
41
if (size == MO_64) {
50
+ .name = "RVBAR_EL1", .state = ARM_CP_STATE_BOTH,
42
int maxpass = is_scalar ? 1 : 2;
51
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
43
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
52
.access = PL1_R,
53
.fieldoffset = offsetof(CPUARMState, cp15.rvbar),
54
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
55
}
56
/* RVBAR_EL2 is only implemented if EL2 is the highest EL */
57
if (!arm_feature(env, ARM_FEATURE_EL3)) {
58
- ARMCPRegInfo rvbar = {
59
- .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
60
- .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
61
- .access = PL2_R,
62
- .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
63
+ ARMCPRegInfo rvbar[] = {
64
+ {
65
+ .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
66
+ .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
67
+ .access = PL2_R,
68
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
69
+ },
70
+ { .name = "RVBAR", .type = ARM_CP_ALIAS,
71
+ .cp = 15, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
72
+ .access = PL2_R,
73
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
74
+ },
75
};
76
- define_one_arm_cp_reg(cpu, &rvbar);
77
+ define_arm_cp_regs(cpu, rvbar);
44
}
78
}
45
}
79
}
46
80
47
- tcg_temp_free_i32(tcg_shift);
48
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
49
tcg_temp_free_ptr(tcg_fpstatus);
50
tcg_temp_free_i32(tcg_rmode);
51
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
52
case 0x1c: /* FCVTAS */
53
case 0x3a: /* FCVTPS */
54
case 0x3b: /* FCVTZS */
55
- {
56
- TCGv_i32 tcg_shift = tcg_const_i32(0);
57
- gen_helper_vfp_tosqd(tcg_rd, tcg_rn, tcg_shift, tcg_fpstatus);
58
- tcg_temp_free_i32(tcg_shift);
59
+ gen_helper_vfp_tosqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
60
break;
61
- }
62
case 0x5a: /* FCVTNU */
63
case 0x5b: /* FCVTMU */
64
case 0x5c: /* FCVTAU */
65
case 0x7a: /* FCVTPU */
66
case 0x7b: /* FCVTZU */
67
- {
68
- TCGv_i32 tcg_shift = tcg_const_i32(0);
69
- gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_shift, tcg_fpstatus);
70
- tcg_temp_free_i32(tcg_shift);
71
+ gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
72
break;
73
- }
74
case 0x18: /* FRINTN */
75
case 0x19: /* FRINTM */
76
case 0x38: /* FRINTP */
77
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
78
79
if (is_double) {
80
TCGv_i64 tcg_op = tcg_temp_new_i64();
81
- TCGv_i64 tcg_zero = tcg_const_i64(0);
82
+ TCGv_i64 tcg_zero = tcg_constant_i64(0);
83
TCGv_i64 tcg_res = tcg_temp_new_i64();
84
NeonGenTwoDoubleOpFn *genfn;
85
bool swap = false;
86
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
87
write_vec_element(s, tcg_res, rd, pass, MO_64);
88
}
89
tcg_temp_free_i64(tcg_res);
90
- tcg_temp_free_i64(tcg_zero);
91
tcg_temp_free_i64(tcg_op);
92
93
clear_vec_high(s, !is_scalar, rd);
94
} else {
95
TCGv_i32 tcg_op = tcg_temp_new_i32();
96
- TCGv_i32 tcg_zero = tcg_const_i32(0);
97
+ TCGv_i32 tcg_zero = tcg_constant_i32(0);
98
TCGv_i32 tcg_res = tcg_temp_new_i32();
99
NeonGenTwoSingleOpFn *genfn;
100
bool swap = false;
101
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
102
}
103
}
104
tcg_temp_free_i32(tcg_res);
105
- tcg_temp_free_i32(tcg_zero);
106
tcg_temp_free_i32(tcg_op);
107
if (!is_scalar) {
108
clear_vec_high(s, is_q, rd);
109
--
81
--
110
2.25.1
82
2.25.1
83
84
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Make the translation error message prettier by adding a missing space
3
The v8R PMSAv8 has a two-stage MPU translation process, but, unlike
4
before the parenthesis.
4
VMSAv8, the stage 2 attributes are in the same format as the stage 1
5
attributes (8-bit MAIR format). Rather than converting the MAIR
6
format to the format used for VMSA stage 2 (bits [5:2] of a VMSA
7
stage 2 descriptor) and then converting back to do the attribute
8
combination, allow combined_attrs_nofwb() to accept s2 attributes
9
that are already in the MAIR format.
5
10
6
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
11
We move the assert() to combined_attrs_fwb(), because that function
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
really does require a VMSA stage 2 attribute format. (We will never
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
13
get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.)
9
Message-id: 20220427111543.124620-2-jean-philippe@linaro.org
14
15
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
19
---
12
hw/arm/smmuv3.c | 2 +-
20
target/arm/ptw.c | 10 ++++++++--
13
1 file changed, 1 insertion(+), 1 deletion(-)
21
1 file changed, 8 insertions(+), 2 deletions(-)
14
22
15
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
23
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/smmuv3.c
25
--- a/target/arm/ptw.c
18
+++ b/hw/arm/smmuv3.c
26
+++ b/target/arm/ptw.c
19
@@ -XXX,XX +XXX,XX @@ epilogue:
27
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_nofwb(uint64_t hcr,
20
break;
28
{
21
case SMMU_TRANS_ERROR:
29
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
22
qemu_log_mask(LOG_GUEST_ERROR,
30
23
- "%s translation failed for iova=0x%"PRIx64"(%s)\n",
31
- s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
24
+ "%s translation failed for iova=0x%"PRIx64" (%s)\n",
32
+ if (s2.is_s2_format) {
25
mr->parent_obj.name, addr, smmu_event_string(event.type));
33
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
26
smmuv3_record_event(s, &event);
34
+ } else {
27
break;
35
+ s2_mair_attrs = s2.attrs;
36
+ }
37
38
s1lo = extract32(s1.attrs, 0, 4);
39
s2lo = extract32(s2_mair_attrs, 0, 4);
40
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
41
*/
42
static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
43
{
44
+ assert(s2.is_s2_format && !s1.is_s2_format);
45
+
46
switch (s2.attrs) {
47
case 7:
48
/* Use stage 1 attributes */
49
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
50
ARMCacheAttrs ret;
51
bool tagged = false;
52
53
- assert(s2.is_s2_format && !s1.is_s2_format);
54
+ assert(!s1.is_s2_format);
55
ret.is_s2_format = false;
56
57
if (s1.attrs == 0xf0) {
28
--
58
--
29
2.25.1
59
2.25.1
60
61
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
While defining these names, use the correct field width of 5 not 4 for
3
ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even
4
DBGWCR.MASK. This typo prevented setting a watchpoint larger than 32k.
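(For reference: MASK is an exponent, so a watchpoint with a nonzero mask covers 2^MASK bytes; a 4-bit field tops out at 2^15 bytes = 32KiB, while the architected 5-bit field allows masks up to 31.)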
4
though they don't have the TTBCR register.
5
See ARM Architecture Reference Manual Supplement - ARMv8, for the ARMv8-R
6
AArch32 architecture profile Version:A.c section C1.2.
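A minimal sketch of the resulting check, assuming simplified inputs (this is not the exact QEMU helper):

    /* Illustrative only: v8R AArch32 behaves as if TTBCR.EAE were always 1. */
    static bool uses_long_descriptor_format(bool is_pmsa, bool is_v8,
                                            bool el_is_aa64, bool ttbcr_eae)
    {
        if (is_pmsa && is_v8) {
            return true;
        }
        return el_is_aa64 || ttbcr_eae;
    }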
5
7
6
Reported-by: Chris Howard <cvz185@web.de>
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 20221206102504.165775-5-tobias.roehmel@rwth-aachen.de
9
Message-id: 20220427051926.295223-1-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
target/arm/internals.h | 12 ++++++++++++
13
target/arm/internals.h | 4 ++++
13
target/arm/debug_helper.c | 10 +++++-----
14
target/arm/debug_helper.c | 3 +++
14
target/arm/helper.c | 8 ++++----
15
target/arm/tlb_helper.c | 4 ++++
15
target/arm/kvm64.c | 14 +++++++-------
16
3 files changed, 11 insertions(+)
16
4 files changed, 28 insertions(+), 16 deletions(-)
17
17
18
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
diff --git a/target/arm/internals.h b/target/arm/internals.h
19
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/internals.h
20
--- a/target/arm/internals.h
21
+++ b/target/arm/internals.h
21
+++ b/target/arm/internals.h
22
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */
22
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu);
23
*/
23
static inline bool extended_addresses_enabled(CPUARMState *env)
24
#define FNC_RETURN_MIN_MAGIC 0xfefffffe
24
{
25
25
uint64_t tcr = env->cp15.tcr_el[arm_is_secure(env) ? 3 : 1];
26
+/* Bit definitions for DBGWCRn and DBGWCRn_EL1 */
26
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
27
+FIELD(DBGWCR, E, 0, 1)
27
+ arm_feature(env, ARM_FEATURE_V8)) {
28
+FIELD(DBGWCR, PAC, 1, 2)
28
+ return true;
29
+FIELD(DBGWCR, LSC, 3, 2)
29
+ }
30
+FIELD(DBGWCR, BAS, 5, 8)
30
return arm_el_is_aa64(env, 1) ||
31
+FIELD(DBGWCR, HMC, 13, 1)
31
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr & TTBCR_EAE));
32
+FIELD(DBGWCR, SSC, 14, 2)
32
}
33
+FIELD(DBGWCR, LBN, 16, 4)
34
+FIELD(DBGWCR, WT, 20, 1)
35
+FIELD(DBGWCR, MASK, 24, 5)
36
+FIELD(DBGWCR, SSCE, 29, 1)
37
+
38
/* We use a few fake FSR values for internal purposes in M profile.
39
* M profile cores don't have A/R format FSRs, but currently our
40
* get_phys_addr() code assumes A/R profile and reports failures via
41
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
33
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
42
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/debug_helper.c
35
--- a/target/arm/debug_helper.c
44
+++ b/target/arm/debug_helper.c
36
+++ b/target/arm/debug_helper.c
45
@@ -XXX,XX +XXX,XX @@ static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
37
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_debug_exception_fsr(CPUARMState *env)
46
* Non-Secure to simplify the code slightly compared to the full
38
47
* table in the ARM ARM.
39
if (target_el == 2 || arm_el_is_aa64(env, target_el)) {
48
*/
40
using_lpae = true;
49
- pac = extract64(cr, 1, 2);
41
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
50
- hmc = extract64(cr, 13, 1);
42
+ arm_feature(env, ARM_FEATURE_V8)) {
51
- ssc = extract64(cr, 14, 2);
43
+ using_lpae = true;
52
+ pac = FIELD_EX64(cr, DBGWCR, PAC);
44
} else {
53
+ hmc = FIELD_EX64(cr, DBGWCR, HMC);
45
if (arm_feature(env, ARM_FEATURE_LPAE) &&
54
+ ssc = FIELD_EX64(cr, DBGWCR, SSC);
46
(env->cp15.tcr_el[target_el] & TTBCR_EAE)) {
55
47
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
56
switch (ssc) {
48
index XXXXXXX..XXXXXXX 100644
57
case 0:
49
--- a/target/arm/tlb_helper.c
58
@@ -XXX,XX +XXX,XX @@ static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
50
+++ b/target/arm/tlb_helper.c
59
g_assert_not_reached();
51
@@ -XXX,XX +XXX,XX @@ bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
52
if (el == 2 || arm_el_is_aa64(env, el)) {
53
return true;
60
}
54
}
61
55
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
62
- wt = extract64(cr, 20, 1);
56
+ arm_feature(env, ARM_FEATURE_V8)) {
63
- lbn = extract64(cr, 16, 4);
57
+ return true;
64
+ wt = FIELD_EX64(cr, DBGWCR, WT);
58
+ }
65
+ lbn = FIELD_EX64(cr, DBGWCR, LBN);
59
if (arm_feature(env, ARM_FEATURE_LPAE)
66
60
&& (regime_tcr(env, mmu_idx) & TTBCR_EAE)) {
67
if (wt && !linked_bp_matches(cpu, lbn)) {
61
return true;
68
return false;
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
73
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
74
env->cpu_watchpoint[n] = NULL;
75
}
76
77
- if (!extract64(wcr, 0, 1)) {
78
+ if (!FIELD_EX64(wcr, DBGWCR, E)) {
79
/* E bit clear : watchpoint disabled */
80
return;
81
}
82
83
- switch (extract64(wcr, 3, 2)) {
84
+ switch (FIELD_EX64(wcr, DBGWCR, LSC)) {
85
case 0:
86
/* LSC 00 is reserved and must behave as if the wp is disabled */
87
return;
88
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
89
* CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,
90
* thus generating a watchpoint for every byte in the masked region.
91
*/
92
- mask = extract64(wcr, 24, 4);
93
+ mask = FIELD_EX64(wcr, DBGWCR, MASK);
94
if (mask == 1 || mask == 2) {
95
/* Reserved values of MASK; we must act as if the mask value was
96
* some non-reserved value, or as if the watchpoint were disabled.
97
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
98
wvr &= ~(len - 1);
99
} else {
100
/* Watchpoint covers bytes defined by the byte address select bits */
101
- int bas = extract64(wcr, 5, 8);
102
+ int bas = FIELD_EX64(wcr, DBGWCR, BAS);
103
int basstart;
104
105
if (extract64(wvr, 2, 1)) {
106
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/kvm64.c
109
+++ b/target/arm/kvm64.c
110
@@ -XXX,XX +XXX,XX @@ static int insert_hw_watchpoint(target_ulong addr,
111
target_ulong len, int type)
112
{
113
HWWatchpoint wp = {
114
- .wcr = 1, /* E=1, enable */
115
+ .wcr = R_DBGWCR_E_MASK, /* E=1, enable */
116
.wvr = addr & (~0x7ULL),
117
.details = { .vaddr = addr, .len = len }
118
};
119
@@ -XXX,XX +XXX,XX @@ static int insert_hw_watchpoint(target_ulong addr,
120
* HMC=0 SSC=0 PAC=3 will hit EL0 or EL1, any security state,
121
* valid whether EL3 is implemented or not
122
*/
123
- wp.wcr = deposit32(wp.wcr, 1, 2, 3);
124
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, PAC, 3);
125
126
switch (type) {
127
case GDB_WATCHPOINT_READ:
128
- wp.wcr = deposit32(wp.wcr, 3, 2, 1);
129
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 1);
130
wp.details.flags = BP_MEM_READ;
131
break;
132
case GDB_WATCHPOINT_WRITE:
133
- wp.wcr = deposit32(wp.wcr, 3, 2, 2);
134
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 2);
135
wp.details.flags = BP_MEM_WRITE;
136
break;
137
case GDB_WATCHPOINT_ACCESS:
138
- wp.wcr = deposit32(wp.wcr, 3, 2, 3);
139
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 3);
140
wp.details.flags = BP_MEM_ACCESS;
141
break;
142
default:
143
@@ -XXX,XX +XXX,XX @@ static int insert_hw_watchpoint(target_ulong addr,
144
int bits = ctz64(len);
145
146
wp.wvr &= ~((1 << bits) - 1);
147
- wp.wcr = deposit32(wp.wcr, 24, 4, bits);
148
- wp.wcr = deposit32(wp.wcr, 5, 8, 0xff);
149
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, MASK, bits);
150
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, BAS, 0xff);
151
} else {
152
return -ENOBUFS;
153
}
154
--
62
--
155
2.25.1
63
2.25.1
156
64
157
65
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Message-id: 20221206102504.165775-6-tobias.roehmel@rwth-aachen.de
5
Message-id: 20220426163043.100432-18-richard.henderson@linaro.org
6
[PMM: Restore incorrectly removed free of t_false in disas_fp_csel()]
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
6
---
9
target/arm/translate-a64.c | 23 +++++++----------------
7
target/arm/cpu.h | 6 +
10
1 file changed, 7 insertions(+), 16 deletions(-)
8
target/arm/cpu.c | 28 +++-
9
target/arm/helper.c | 302 +++++++++++++++++++++++++++++++++++++++++++
10
target/arm/machine.c | 28 ++++
11
4 files changed, 360 insertions(+), 4 deletions(-)
11
12
12
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-a64.c
15
--- a/target/arm/cpu.h
15
+++ b/target/arm/translate-a64.c
16
+++ b/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ static void handle_fp_compare(DisasContext *s, int size,
17
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
17
18
};
18
tcg_vn = read_fp_dreg(s, rn);
19
uint64_t sctlr_el[4];
19
if (cmp_with_zero) {
20
};
20
- tcg_vm = tcg_const_i64(0);
21
+ uint64_t vsctlr; /* Virtualization System control register. */
21
+ tcg_vm = tcg_constant_i64(0);
22
uint64_t cpacr_el1; /* Architectural feature access control register */
22
} else {
23
uint64_t cptr_el[4]; /* ARMv8 feature trap registers */
23
tcg_vm = read_fp_dreg(s, rm);
24
uint32_t c1_xscaleauxcr; /* XScale auxiliary control register. */
25
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
26
*/
27
uint32_t *rbar[M_REG_NUM_BANKS];
28
uint32_t *rlar[M_REG_NUM_BANKS];
29
+ uint32_t *hprbar;
30
+ uint32_t *hprlar;
31
uint32_t mair0[M_REG_NUM_BANKS];
32
uint32_t mair1[M_REG_NUM_BANKS];
33
+ uint32_t hprselr;
34
} pmsav8;
35
36
/* v8M SAU */
37
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
38
bool has_mpu;
39
/* PMSAv7 MPU number of supported regions */
40
uint32_t pmsav7_dregion;
41
+ /* PMSAv8 MPU number of supported hyp regions */
42
+ uint32_t pmsav8r_hdregion;
43
/* v8M SAU number of supported regions */
44
uint32_t sau_sregion;
45
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
51
sizeof(*env->pmsav7.dracr) * cpu->pmsav7_dregion);
52
}
24
}
53
}
25
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
54
+
26
static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
55
+ if (cpu->pmsav8r_hdregion > 0) {
27
{
56
+ memset(env->pmsav8.hprbar, 0,
28
unsigned int mos, type, rm, cond, rn, op, nzcv;
57
+ sizeof(*env->pmsav8.hprbar) * cpu->pmsav8r_hdregion);
29
- TCGv_i64 tcg_flags;
58
+ memset(env->pmsav8.hprlar, 0,
30
TCGLabel *label_continue = NULL;
59
+ sizeof(*env->pmsav8.hprlar) * cpu->pmsav8r_hdregion);
31
int size;
60
+ }
32
61
+
33
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
62
env->pmsav7.rnr[M_REG_NS] = 0;
34
label_continue = gen_new_label();
63
env->pmsav7.rnr[M_REG_S] = 0;
35
arm_gen_test_cc(cond, label_match);
64
env->pmsav8.mair0[M_REG_NS] = 0;
36
/* nomatch: */
65
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
37
- tcg_flags = tcg_const_i64(nzcv << 28);
66
/* MPU can be configured out of a PMSA CPU either by setting has-mpu
38
- gen_set_nzcv(tcg_flags);
67
* to false or by setting pmsav7-dregion to 0.
39
- tcg_temp_free_i64(tcg_flags);
68
*/
40
+ gen_set_nzcv(tcg_constant_i64(nzcv << 28));
69
- if (!cpu->has_mpu) {
41
tcg_gen_br(label_continue);
70
- cpu->pmsav7_dregion = 0;
42
gen_set_label(label_match);
71
- }
72
- if (cpu->pmsav7_dregion == 0) {
73
+ if (!cpu->has_mpu || cpu->pmsav7_dregion == 0) {
74
cpu->has_mpu = false;
75
+ cpu->pmsav7_dregion = 0;
76
+ cpu->pmsav8r_hdregion = 0;
43
}
77
}
44
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
78
45
static void disas_fp_csel(DisasContext *s, uint32_t insn)
79
if (arm_feature(env, ARM_FEATURE_PMSA) &&
46
{
80
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
47
unsigned int mos, type, rm, cond, rn, rd;
81
env->pmsav7.dracr = g_new0(uint32_t, nr);
48
- TCGv_i64 t_true, t_false, t_zero;
82
}
49
+ TCGv_i64 t_true, t_false;
83
}
50
DisasCompare64 c;
84
+
51
MemOp sz;
85
+ if (cpu->pmsav8r_hdregion > 0xff) {
52
86
+ error_setg(errp, "PMSAv8 MPU EL2 #regions invalid %" PRIu32,
53
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
87
+ cpu->pmsav8r_hdregion);
54
read_vec_element(s, t_false, rm, 0, sz);
88
+ return;
55
89
+ }
56
a64_test_cc(&c, cond);
90
+
57
- t_zero = tcg_const_i64(0);
91
+ if (cpu->pmsav8r_hdregion) {
58
- tcg_gen_movcond_i64(c.cond, t_true, c.value, t_zero, t_true, t_false);
92
+ env->pmsav8.hprbar = g_new0(uint32_t,
59
- tcg_temp_free_i64(t_zero);
93
+ cpu->pmsav8r_hdregion);
60
+ tcg_gen_movcond_i64(c.cond, t_true, c.value, tcg_constant_i64(0),
94
+ env->pmsav8.hprlar = g_new0(uint32_t,
61
+ t_true, t_false);
95
+ cpu->pmsav8r_hdregion);
62
tcg_temp_free_i64(t_false);
96
+ }
63
a64_free_cc(&c);
64
65
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
66
int type = extract32(insn, 22, 2);
67
int mos = extract32(insn, 29, 3);
68
uint64_t imm;
69
- TCGv_i64 tcg_res;
70
MemOp sz;
71
72
if (mos || imm5) {
73
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
74
}
97
}
75
98
76
imm = vfp_expand_imm(sz, imm8);
99
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
77
-
100
diff --git a/target/arm/helper.c b/target/arm/helper.c
78
- tcg_res = tcg_const_i64(imm);
101
index XXXXXXX..XXXXXXX 100644
79
- write_fp_dreg(s, rd, tcg_res);
102
--- a/target/arm/helper.c
80
- tcg_temp_free_i64(tcg_res);
103
+++ b/target/arm/helper.c
81
+ write_fp_dreg(s, rd, tcg_constant_i64(imm));
104
@@ -XXX,XX +XXX,XX @@ static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri,
105
raw_write(env, ri, value);
82
}
106
}
83
107
84
/* Handle floating point <=> fixed point conversions. Note that we can
108
+static void prbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
85
@@ -XXX,XX +XXX,XX @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
109
+ uint64_t value)
86
110
+{
87
tcg_fpstatus = fpstatus_ptr(type == 3 ? FPST_FPCR_F16 : FPST_FPCR);
111
+ ARMCPU *cpu = env_archcpu(env);
88
112
+
89
- tcg_shift = tcg_const_i32(64 - scale);
113
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
90
+ tcg_shift = tcg_constant_i32(64 - scale);
114
+ env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
91
115
+}
92
if (itof) {
116
+
93
TCGv_i64 tcg_int = cpu_reg(s, rn);
117
+static uint64_t prbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
94
@@ -XXX,XX +XXX,XX @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
118
+{
119
+ return env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
120
+}
121
+
122
+static void prlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
+ uint64_t value)
124
+{
125
+ ARMCPU *cpu = env_archcpu(env);
126
+
127
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
128
+ env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
129
+}
130
+
131
+static uint64_t prlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
132
+{
133
+ return env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
134
+}
135
+
136
+static void prselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
+ uint64_t value)
138
+{
139
+ ARMCPU *cpu = env_archcpu(env);
140
+
141
+ /*
142
+ * Ignore writes that would select not implemented region.
143
+ * This is architecturally UNPREDICTABLE.
144
+ */
145
+ if (value >= cpu->pmsav7_dregion) {
146
+ return;
147
+ }
148
+
149
+ env->pmsav7.rnr[M_REG_NS] = value;
150
+}
151
+
152
+static void hprbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
153
+ uint64_t value)
154
+{
155
+ ARMCPU *cpu = env_archcpu(env);
156
+
157
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
158
+ env->pmsav8.hprbar[env->pmsav8.hprselr] = value;
159
+}
160
+
161
+static uint64_t hprbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
162
+{
163
+ return env->pmsav8.hprbar[env->pmsav8.hprselr];
164
+}
165
+
166
+static void hprlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
167
+ uint64_t value)
168
+{
169
+ ARMCPU *cpu = env_archcpu(env);
170
+
171
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
172
+ env->pmsav8.hprlar[env->pmsav8.hprselr] = value;
173
+}
174
+
175
+static uint64_t hprlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
176
+{
177
+ return env->pmsav8.hprlar[env->pmsav8.hprselr];
178
+}
179
+
180
+static void hprenr_write(CPUARMState *env, const ARMCPRegInfo *ri,
181
+ uint64_t value)
182
+{
183
+ uint32_t n;
184
+ uint32_t bit;
185
+ ARMCPU *cpu = env_archcpu(env);
186
+
187
+ /* Ignore writes to unimplemented regions */
188
+ int rmax = MIN(cpu->pmsav8r_hdregion, 32);
189
+ value &= MAKE_64BIT_MASK(0, rmax);
190
+
191
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
192
+
193
+ /* Register alias is only valid for first 32 indexes */
194
+ for (n = 0; n < rmax; ++n) {
195
+ bit = extract32(value, n, 1);
196
+ env->pmsav8.hprlar[n] = deposit32(
197
+ env->pmsav8.hprlar[n], 0, 1, bit);
198
+ }
199
+}
200
+
201
+static uint64_t hprenr_read(CPUARMState *env, const ARMCPRegInfo *ri)
202
+{
203
+ uint32_t n;
204
+ uint32_t result = 0x0;
205
+ ARMCPU *cpu = env_archcpu(env);
206
+
207
+ /* Register alias is only valid for first 32 indexes */
208
+ for (n = 0; n < MIN(cpu->pmsav8r_hdregion, 32); ++n) {
209
+ if (env->pmsav8.hprlar[n] & 0x1) {
210
+ result |= (0x1 << n);
211
+ }
212
+ }
213
+ return result;
214
+}
215
+
216
+static void hprselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
217
+ uint64_t value)
218
+{
219
+ ARMCPU *cpu = env_archcpu(env);
220
+
221
+ /*
222
+ * Ignore writes that would select not implemented region.
223
+ * This is architecturally UNPREDICTABLE.
224
+ */
225
+ if (value >= cpu->pmsav8r_hdregion) {
226
+ return;
227
+ }
228
+
229
+ env->pmsav8.hprselr = value;
230
+}
231
+
232
+static void pmsav8r_regn_write(CPUARMState *env, const ARMCPRegInfo *ri,
233
+ uint64_t value)
234
+{
235
+ ARMCPU *cpu = env_archcpu(env);
236
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
237
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
238
+
239
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
240
+
241
+ if (ri->opc1 & 4) {
242
+ if (index >= cpu->pmsav8r_hdregion) {
243
+ return;
244
+ }
245
+ if (ri->opc2 & 0x1) {
246
+ env->pmsav8.hprlar[index] = value;
247
+ } else {
248
+ env->pmsav8.hprbar[index] = value;
249
+ }
250
+ } else {
251
+ if (index >= cpu->pmsav7_dregion) {
252
+ return;
253
+ }
254
+ if (ri->opc2 & 0x1) {
255
+ env->pmsav8.rlar[M_REG_NS][index] = value;
256
+ } else {
257
+ env->pmsav8.rbar[M_REG_NS][index] = value;
258
+ }
259
+ }
260
+}
261
+
262
+static uint64_t pmsav8r_regn_read(CPUARMState *env, const ARMCPRegInfo *ri)
263
+{
264
+ ARMCPU *cpu = env_archcpu(env);
265
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
266
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
267
+
268
+ if (ri->opc1 & 4) {
269
+ if (index >= cpu->pmsav8r_hdregion) {
270
+ return 0x0;
271
+ }
272
+ if (ri->opc2 & 0x1) {
273
+ return env->pmsav8.hprlar[index];
274
+ } else {
275
+ return env->pmsav8.hprbar[index];
276
+ }
277
+ } else {
278
+ if (index >= cpu->pmsav7_dregion) {
279
+ return 0x0;
280
+ }
281
+ if (ri->opc2 & 0x1) {
282
+ return env->pmsav8.rlar[M_REG_NS][index];
283
+ } else {
284
+ return env->pmsav8.rbar[M_REG_NS][index];
285
+ }
286
+ }
287
+}
288
+
289
+static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
290
+ { .name = "PRBAR",
291
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 0,
292
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
293
+ .accessfn = access_tvm_trvm,
294
+ .readfn = prbar_read, .writefn = prbar_write },
295
+ { .name = "PRLAR",
296
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 1,
297
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
298
+ .accessfn = access_tvm_trvm,
299
+ .readfn = prlar_read, .writefn = prlar_write },
300
+ { .name = "PRSELR", .resetvalue = 0,
301
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 2, .opc2 = 1,
302
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
303
+ .writefn = prselr_write,
304
+ .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]) },
305
+ { .name = "HPRBAR", .resetvalue = 0,
306
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 0,
307
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
308
+ .readfn = hprbar_read, .writefn = hprbar_write },
309
+ { .name = "HPRLAR",
310
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 1,
311
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
312
+ .readfn = hprlar_read, .writefn = hprlar_write },
313
+ { .name = "HPRSELR", .resetvalue = 0,
314
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 2, .opc2 = 1,
315
+ .access = PL2_RW,
316
+ .writefn = hprselr_write,
317
+ .fieldoffset = offsetof(CPUARMState, pmsav8.hprselr) },
318
+ { .name = "HPRENR",
319
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 1, .opc2 = 1,
320
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
321
+ .readfn = hprenr_read, .writefn = hprenr_write },
322
+};
323
+
324
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
325
/* Reset for all these registers is handled in arm_cpu_reset(),
326
* because the PMSAv7 is also used by M-profile CPUs, which do
327
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
328
.access = PL1_R, .type = ARM_CP_CONST,
329
.resetvalue = cpu->pmsav7_dregion << 8
330
};
331
+ /* HMPUIR is specific to PMSA V8 */
332
+ ARMCPRegInfo id_hmpuir_reginfo = {
333
+ .name = "HMPUIR",
334
+ .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 4,
335
+ .access = PL2_R, .type = ARM_CP_CONST,
336
+ .resetvalue = cpu->pmsav8r_hdregion
337
+ };
338
static const ARMCPRegInfo crn0_wi_reginfo = {
339
.name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
340
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
341
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
342
define_arm_cp_regs(cpu, id_cp_reginfo);
343
if (!arm_feature(env, ARM_FEATURE_PMSA)) {
344
define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo);
345
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
346
+ arm_feature(env, ARM_FEATURE_V8)) {
347
+ uint32_t i = 0;
348
+ char *tmp_string;
349
+
350
+ define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
351
+ define_one_arm_cp_reg(cpu, &id_hmpuir_reginfo);
352
+ define_arm_cp_regs(cpu, pmsav8r_cp_reginfo);
353
+
354
+ /* Register alias is only valid for first 32 indexes */
355
+ for (i = 0; i < MIN(cpu->pmsav7_dregion, 32); ++i) {
356
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
357
+ uint8_t opc1 = extract32(i, 4, 1);
358
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
359
+
360
+ tmp_string = g_strdup_printf("PRBAR%u", i);
361
+ ARMCPRegInfo tmp_prbarn_reginfo = {
362
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
363
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
364
+ .access = PL1_RW, .resetvalue = 0,
365
+ .accessfn = access_tvm_trvm,
366
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
367
+ };
368
+ define_one_arm_cp_reg(cpu, &tmp_prbarn_reginfo);
369
+ g_free(tmp_string);
370
+
371
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
372
+ tmp_string = g_strdup_printf("PRLAR%u", i);
373
+ ARMCPRegInfo tmp_prlarn_reginfo = {
374
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
375
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
376
+ .access = PL1_RW, .resetvalue = 0,
377
+ .accessfn = access_tvm_trvm,
378
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
379
+ };
380
+ define_one_arm_cp_reg(cpu, &tmp_prlarn_reginfo);
381
+ g_free(tmp_string);
382
+ }
383
+
384
+ /* Register alias is only valid for first 32 indexes */
385
+ for (i = 0; i < MIN(cpu->pmsav8r_hdregion, 32); ++i) {
386
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
387
+ uint8_t opc1 = 0b100 | extract32(i, 4, 1);
388
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
389
+
390
+ tmp_string = g_strdup_printf("HPRBAR%u", i);
391
+ ARMCPRegInfo tmp_hprbarn_reginfo = {
392
+ .name = tmp_string,
393
+ .type = ARM_CP_NO_RAW,
394
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
395
+ .access = PL2_RW, .resetvalue = 0,
396
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
397
+ };
398
+ define_one_arm_cp_reg(cpu, &tmp_hprbarn_reginfo);
399
+ g_free(tmp_string);
400
+
401
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
402
+ tmp_string = g_strdup_printf("HPRLAR%u", i);
403
+ ARMCPRegInfo tmp_hprlarn_reginfo = {
404
+ .name = tmp_string,
405
+ .type = ARM_CP_NO_RAW,
406
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
407
+ .access = PL2_RW, .resetvalue = 0,
408
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
409
+ };
410
+ define_one_arm_cp_reg(cpu, &tmp_hprlarn_reginfo);
411
+ g_free(tmp_string);
412
+ }
413
} else if (arm_feature(env, ARM_FEATURE_V7)) {
414
define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
415
}
416
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
417
sctlr.type |= ARM_CP_SUPPRESS_TB_END;
418
}
419
define_one_arm_cp_reg(cpu, &sctlr);
420
+
421
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
422
+ arm_feature(env, ARM_FEATURE_V8)) {
423
+ ARMCPRegInfo vsctlr = {
424
+ .name = "VSCTLR", .state = ARM_CP_STATE_AA32,
425
+ .cp = 15, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
426
+ .access = PL2_RW, .resetvalue = 0x0,
427
+ .fieldoffset = offsetoflow32(CPUARMState, cp15.vsctlr),
428
+ };
429
+ define_one_arm_cp_reg(cpu, &vsctlr);
430
+ }
95
}
431
}
96
432
97
tcg_temp_free_ptr(tcg_fpstatus);
433
if (cpu_isar_feature(aa64_lor, cpu)) {
98
- tcg_temp_free_i32(tcg_shift);
434
diff --git a/target/arm/machine.c b/target/arm/machine.c
435
index XXXXXXX..XXXXXXX 100644
436
--- a/target/arm/machine.c
437
+++ b/target/arm/machine.c
438
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_needed(void *opaque)
439
arm_feature(env, ARM_FEATURE_V8);
99
}
440
}
100
441
101
/* Floating point <-> fixed point conversions
442
+static bool pmsav8r_needed(void *opaque)
443
+{
444
+ ARMCPU *cpu = opaque;
445
+ CPUARMState *env = &cpu->env;
446
+
447
+ return arm_feature(env, ARM_FEATURE_PMSA) &&
448
+ arm_feature(env, ARM_FEATURE_V8) &&
449
+ !arm_feature(env, ARM_FEATURE_M);
450
+}
451
+
452
+static const VMStateDescription vmstate_pmsav8r = {
453
+ .name = "cpu/pmsav8/pmsav8r",
454
+ .version_id = 1,
455
+ .minimum_version_id = 1,
456
+ .needed = pmsav8r_needed,
457
+ .fields = (VMStateField[]) {
458
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprbar, ARMCPU,
459
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
460
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprlar, ARMCPU,
461
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
462
+ VMSTATE_END_OF_LIST()
463
+ },
464
+};
465
+
466
static const VMStateDescription vmstate_pmsav8 = {
467
.name = "cpu/pmsav8",
468
.version_id = 1,
469
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pmsav8 = {
470
VMSTATE_UINT32(env.pmsav8.mair0[M_REG_NS], ARMCPU),
471
VMSTATE_UINT32(env.pmsav8.mair1[M_REG_NS], ARMCPU),
472
VMSTATE_END_OF_LIST()
473
+ },
474
+ .subsections = (const VMStateDescription * []) {
475
+ &vmstate_pmsav8r,
476
+ NULL
477
}
478
};
479
102
--
480
--
103
2.25.1
481
2.25.1
482
483
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Add PMSAv8r translation.
4
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-4-richard.henderson@linaro.org
7
Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate-a64.c | 11 ++---------
10
target/arm/ptw.c | 126 ++++++++++++++++++++++++++++++++++++++---------
9
1 file changed, 2 insertions(+), 9 deletions(-)
11
1 file changed, 104 insertions(+), 22 deletions(-)
10
12
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
15
--- a/target/arm/ptw.c
14
+++ b/target/arm/translate-a64.c
16
+++ b/target/arm/ptw.c
15
@@ -XXX,XX +XXX,XX @@ static void gen_rebuild_hflags(DisasContext *s)
17
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
16
18
17
static void gen_exception_internal(int excp)
19
if (arm_feature(env, ARM_FEATURE_M)) {
18
{
20
return env->v7m.mpu_ctrl[is_secure] & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
19
- TCGv_i32 tcg_excp = tcg_const_i32(excp);
21
- } else {
20
-
22
- return regime_sctlr(env, mmu_idx) & SCTLR_BR;
21
assert(excp_is_internal(excp));
23
}
22
- gen_helper_exception_internal(cpu_env, tcg_excp);
24
+
23
- tcg_temp_free_i32(tcg_excp);
25
+ if (mmu_idx == ARMMMUIdx_Stage2) {
24
+ gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
26
+ return false;
27
+ }
28
+
29
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
25
}
30
}
26
31
27
static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
32
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
28
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
29
34
return !(result->f.prot & (1 << access_type));
30
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syndrome)
31
{
32
- TCGv_i32 tcg_syn;
33
-
34
gen_a64_set_pc_im(s->pc_curr);
35
- tcg_syn = tcg_const_i32(syndrome);
36
- gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
37
- tcg_temp_free_i32(tcg_syn);
38
+ gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syndrome));
39
s->base.is_jmp = DISAS_NORETURN;
40
}
35
}
41
36
37
+static uint32_t *regime_rbar(CPUARMState *env, ARMMMUIdx mmu_idx,
38
+ uint32_t secure)
39
+{
40
+ if (regime_el(env, mmu_idx) == 2) {
41
+ return env->pmsav8.hprbar;
42
+ } else {
43
+ return env->pmsav8.rbar[secure];
44
+ }
45
+}
46
+
47
+static uint32_t *regime_rlar(CPUARMState *env, ARMMMUIdx mmu_idx,
48
+ uint32_t secure)
49
+{
50
+ if (regime_el(env, mmu_idx) == 2) {
51
+ return env->pmsav8.hprlar;
52
+ } else {
53
+ return env->pmsav8.rlar[secure];
54
+ }
55
+}
56
+
57
bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
58
MMUAccessType access_type, ARMMMUIdx mmu_idx,
59
bool secure, GetPhysAddrResult *result,
60
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
61
bool hit = false;
62
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
63
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
64
+ int region_counter;
65
+
66
+ if (regime_el(env, mmu_idx) == 2) {
67
+ region_counter = cpu->pmsav8r_hdregion;
68
+ } else {
69
+ region_counter = cpu->pmsav7_dregion;
70
+ }
71
72
result->f.lg_page_size = TARGET_PAGE_BITS;
73
result->f.phys_addr = address;
74
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
75
*mregion = -1;
76
}
77
78
+ if (mmu_idx == ARMMMUIdx_Stage2) {
79
+ fi->stage2 = true;
80
+ }
81
+
82
/*
83
* Unlike the ARM ARM pseudocode, we don't need to check whether this
84
* was an exception vector read from the vector table (which is always
85
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
86
hit = true;
87
}
88
89
- for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
90
+ uint32_t bitmask;
91
+ if (arm_feature(env, ARM_FEATURE_M)) {
92
+ bitmask = 0x1f;
93
+ } else {
94
+ bitmask = 0x3f;
95
+ fi->level = 0;
96
+ }
97
+
98
+ for (n = region_counter - 1; n >= 0; n--) {
99
/* region search */
100
/*
101
- * Note that the base address is bits [31:5] from the register
102
- * with bits [4:0] all zeroes, but the limit address is bits
103
- * [31:5] from the register with bits [4:0] all ones.
104
+ * Note that the base address is bits [31:x] from the register
105
+ * with bits [x-1:0] all zeroes, but the limit address is bits
106
+ * [31:x] from the register with bits [x:0] all ones. Where x is
107
+ * 5 for Cortex-M and 6 for Cortex-R
108
*/
109
- uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f;
110
- uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f;
111
+ uint32_t base = regime_rbar(env, mmu_idx, secure)[n] & ~bitmask;
112
+ uint32_t limit = regime_rlar(env, mmu_idx, secure)[n] | bitmask;
113
114
- if (!(env->pmsav8.rlar[secure][n] & 0x1)) {
115
+ if (!(regime_rlar(env, mmu_idx, secure)[n] & 0x1)) {
116
/* Region disabled */
117
continue;
118
}
119
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
120
* PMSAv7 where highest-numbered-region wins)
121
*/
122
fi->type = ARMFault_Permission;
123
- fi->level = 1;
124
+ if (arm_feature(env, ARM_FEATURE_M)) {
125
+ fi->level = 1;
126
+ }
127
return true;
128
}
129
130
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
131
}
132
133
if (!hit) {
134
- /* background fault */
135
- fi->type = ARMFault_Background;
136
+ if (arm_feature(env, ARM_FEATURE_M)) {
137
+ fi->type = ARMFault_Background;
138
+ } else {
139
+ fi->type = ARMFault_Permission;
140
+ }
141
return true;
142
}
143
144
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
145
/* hit using the background region */
146
get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
147
} else {
148
- uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
149
- uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
150
+ uint32_t matched_rbar = regime_rbar(env, mmu_idx, secure)[matchregion];
151
+ uint32_t matched_rlar = regime_rlar(env, mmu_idx, secure)[matchregion];
152
+ uint32_t ap = extract32(matched_rbar, 1, 2);
153
+ uint32_t xn = extract32(matched_rbar, 0, 1);
154
bool pxn = false;
155
156
if (arm_feature(env, ARM_FEATURE_V8_1M)) {
157
- pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
158
+ pxn = extract32(matched_rlar, 4, 1);
159
}
160
161
if (m_is_system_region(env, address)) {
162
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
163
xn = 1;
164
}
165
166
- result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
167
+ if (regime_el(env, mmu_idx) == 2) {
168
+ result->f.prot = simple_ap_to_rw_prot_is_user(ap,
169
+ mmu_idx != ARMMMUIdx_E2);
170
+ } else {
171
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
172
+ }
173
+
174
+ if (!arm_feature(env, ARM_FEATURE_M)) {
175
+ uint8_t attrindx = extract32(matched_rlar, 1, 3);
176
+ uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
177
+ uint8_t sh = extract32(matched_rlar, 3, 2);
178
+
179
+ if (regime_sctlr(env, mmu_idx) & SCTLR_WXN &&
180
+ result->f.prot & PAGE_WRITE && mmu_idx != ARMMMUIdx_Stage2) {
181
+ xn = 0x1;
182
+ }
183
+
184
+ if ((regime_el(env, mmu_idx) == 1) &&
185
+ regime_sctlr(env, mmu_idx) & SCTLR_UWXN && ap == 0x1) {
186
+ pxn = 0x1;
187
+ }
188
+
189
+ result->cacheattrs.is_s2_format = false;
190
+ result->cacheattrs.attrs = extract64(mair, attrindx * 8, 8);
191
+ result->cacheattrs.shareability = sh;
192
+ }
193
+
194
if (result->f.prot && !xn && !(pxn && !is_user)) {
195
result->f.prot |= PAGE_EXEC;
196
}
197
- /*
198
- * We don't need to look the attribute up in the MAIR0/MAIR1
199
- * registers because that only tells us about cacheability.
200
- */
201
+
202
if (mregion) {
203
*mregion = matchregion;
204
}
205
}
206
207
fi->type = ARMFault_Permission;
208
- fi->level = 1;
209
+ if (arm_feature(env, ARM_FEATURE_M)) {
210
+ fi->level = 1;
211
+ }
212
return !(result->f.prot & (1 << access_type));
213
}
214
215
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
216
cacheattrs1 = result->cacheattrs;
217
memset(result, 0, sizeof(*result));
218
219
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
220
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
221
+ ret = get_phys_addr_pmsav8(env, ipa, access_type,
222
+ ptw->in_mmu_idx, is_secure, result, fi);
223
+ } else {
224
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
225
+ is_el0, result, fi);
226
+ }
227
fi->s2addr = ipa;
228
229
/* Combine the S1 and S2 perms. */
42
--
230
--
43
2.25.1
231
2.25.1
232
233
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
In these cases, 't' did double-duty as zero source and
3
All constants are taken from the ARM Cortex-R52 Processor TRM Revision: r1p3
4
temporary destination. Split the two uses and narrow
5
the scope of the temp.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20220426163043.100432-47-richard.henderson@linaro.org
7
Message-id: 20221206102504.165775-8-tobias.roehmel@rwth-aachen.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
target/arm/translate-sve.c | 18 ++++++++++--------
10
target/arm/cpu_tcg.c | 42 ++++++++++++++++++++++++++++++++++++++++++
13
1 file changed, 10 insertions(+), 8 deletions(-)
11
1 file changed, 42 insertions(+)
14
12
15
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-sve.c
15
--- a/target/arm/cpu_tcg.c
18
+++ b/target/arm/translate-sve.c
16
+++ b/target/arm/cpu_tcg.c
19
@@ -XXX,XX +XXX,XX @@ static bool do_brk3(DisasContext *s, arg_rprr_s *a,
17
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
20
TCGv_ptr n = tcg_temp_new_ptr();
18
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
21
TCGv_ptr m = tcg_temp_new_ptr();
22
TCGv_ptr g = tcg_temp_new_ptr();
23
- TCGv_i32 t = tcg_const_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));
24
+ TCGv_i32 desc = tcg_constant_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));
25
26
tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
27
tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
28
@@ -XXX,XX +XXX,XX @@ static bool do_brk3(DisasContext *s, arg_rprr_s *a,
29
tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
30
31
if (a->s) {
32
- fn_s(t, d, n, m, g, t);
33
+ TCGv_i32 t = tcg_temp_new_i32();
34
+ fn_s(t, d, n, m, g, desc);
35
do_pred_flags(t);
36
+ tcg_temp_free_i32(t);
37
} else {
38
- fn(d, n, m, g, t);
39
+ fn(d, n, m, g, desc);
40
}
41
tcg_temp_free_ptr(d);
42
tcg_temp_free_ptr(n);
43
tcg_temp_free_ptr(m);
44
tcg_temp_free_ptr(g);
45
- tcg_temp_free_i32(t);
46
return true;
47
}
19
}
48
20
49
@@ -XXX,XX +XXX,XX @@ static bool do_brk2(DisasContext *s, arg_rpr_s *a,
21
+static void cortex_r52_initfn(Object *obj)
50
TCGv_ptr d = tcg_temp_new_ptr();
22
+{
51
TCGv_ptr n = tcg_temp_new_ptr();
23
+ ARMCPU *cpu = ARM_CPU(obj);
52
TCGv_ptr g = tcg_temp_new_ptr();
24
+
53
- TCGv_i32 t = tcg_const_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));
25
+ set_feature(&cpu->env, ARM_FEATURE_V8);
54
+ TCGv_i32 desc = tcg_constant_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));
26
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
55
27
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
56
tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
28
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
57
tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
29
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
58
tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
30
+ cpu->midr = 0x411fd133; /* r1p3 */
59
31
+ cpu->revidr = 0x00000000;
60
if (a->s) {
32
+ cpu->reset_fpsid = 0x41034023;
61
- fn_s(t, d, n, g, t);
33
+ cpu->isar.mvfr0 = 0x10110222;
62
+ TCGv_i32 t = tcg_temp_new_i32();
34
+ cpu->isar.mvfr1 = 0x12111111;
63
+ fn_s(t, d, n, g, desc);
35
+ cpu->isar.mvfr2 = 0x00000043;
64
do_pred_flags(t);
36
+ cpu->ctr = 0x8144c004;
65
+ tcg_temp_free_i32(t);
37
+ cpu->reset_sctlr = 0x30c50838;
66
} else {
38
+ cpu->isar.id_pfr0 = 0x00000131;
67
- fn(d, n, g, t);
39
+ cpu->isar.id_pfr1 = 0x10111001;
68
+ fn(d, n, g, desc);
40
+ cpu->isar.id_dfr0 = 0x03010006;
69
}
41
+ cpu->id_afr0 = 0x00000000;
70
tcg_temp_free_ptr(d);
42
+ cpu->isar.id_mmfr0 = 0x00211040;
71
tcg_temp_free_ptr(n);
43
+ cpu->isar.id_mmfr1 = 0x40000000;
72
tcg_temp_free_ptr(g);
44
+ cpu->isar.id_mmfr2 = 0x01200000;
73
- tcg_temp_free_i32(t);
45
+ cpu->isar.id_mmfr3 = 0xf0102211;
74
return true;
46
+ cpu->isar.id_mmfr4 = 0x00000010;
75
}
47
+ cpu->isar.id_isar0 = 0x02101110;
76
48
+ cpu->isar.id_isar1 = 0x13112111;
49
+ cpu->isar.id_isar2 = 0x21232142;
50
+ cpu->isar.id_isar3 = 0x01112131;
51
+ cpu->isar.id_isar4 = 0x00010142;
52
+ cpu->isar.id_isar5 = 0x00010001;
53
+ cpu->isar.dbgdidr = 0x77168000;
54
+ cpu->clidr = (1 << 27) | (1 << 24) | 0x3;
55
+ cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
56
+ cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
57
+
58
+ cpu->pmsav7_dregion = 16;
59
+ cpu->pmsav8r_hdregion = 16;
60
+}
61
+
62
static void cortex_r5f_initfn(Object *obj)
63
{
64
ARMCPU *cpu = ARM_CPU(obj);
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_tcg_cpus[] = {
66
.class_init = arm_v7m_class_init },
67
{ .name = "cortex-r5", .initfn = cortex_r5_initfn },
68
{ .name = "cortex-r5f", .initfn = cortex_r5f_initfn },
69
+ { .name = "cortex-r52", .initfn = cortex_r52_initfn },
70
{ .name = "ti925t", .initfn = ti925t_initfn },
71
{ .name = "sa1100", .initfn = sa1100_initfn },
72
{ .name = "sa1110", .initfn = sa1110_initfn },
77
--
73
--
78
2.25.1
74
2.25.1
75
76
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Bennée <alex.bennee@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
The check semihosting_enabled() wants to know if the guest is
4
currently in user mode. Unlike the other cases the test was inverted
5
causing us to block semihosting calls in non-EL0 modes.
6
7
Cc: qemu-stable@nongnu.org
8
Fixes: 19b26317e9 (target/arm: Honour -semihosting-config userspace=on)
9
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-36-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
12
---
8
target/arm/translate.c | 7 +++----
13
target/arm/translate.c | 2 +-
9
1 file changed, 3 insertions(+), 4 deletions(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
10
15
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
18
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
19
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
20
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
16
}
21
* semihosting, to provide some semblance of security
17
22
* (and for consistency with our 32-bit semihosting).
18
/* In this insn input reg fields of 0b1111 mean "zero", not "PC" */
23
*/
19
+ zero = tcg_constant_i32(0);
24
- if (semihosting_enabled(s->current_el != 0) &&
20
if (a->rn == 15) {
25
+ if (semihosting_enabled(s->current_el == 0) &&
21
- rn = tcg_const_i32(0);
26
(imm == (s->thumb ? 0x3c : 0xf000))) {
22
+ rn = zero;
27
gen_exception_internal_insn(s, EXCP_SEMIHOST);
23
} else {
28
return;
24
rn = load_reg(s, a->rn);
25
}
26
if (a->rm == 15) {
27
- rm = tcg_const_i32(0);
28
+ rm = zero;
29
} else {
30
rm = load_reg(s, a->rm);
31
}
32
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
33
}
34
35
arm_test_cc(&c, a->fcond);
36
- zero = tcg_const_i32(0);
37
tcg_gen_movcond_i32(c.cond, rn, c.value, zero, rn, rm);
38
arm_free_cc(&c);
39
- tcg_temp_free_i32(zero);
40
41
store_reg(s, a->rd, rn);
42
tcg_temp_free_i32(rm);
43
--
29
--
44
2.25.1
30
2.25.1
31
32
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Fix typos, add background information
4
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-40-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate-sve.c | 12 ++++--------
9
hw/timer/imx_epit.c | 20 ++++++++++++++++----
9
1 file changed, 4 insertions(+), 8 deletions(-)
10
1 file changed, 16 insertions(+), 4 deletions(-)
10
11
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
14
--- a/hw/timer/imx_epit.c
14
+++ b/target/arm/translate-sve.c
15
+++ b/hw/timer/imx_epit.c
15
@@ -XXX,XX +XXX,XX @@ static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
16
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
16
if (is_power_of_2(vsz)) {
17
tcg_gen_andi_i32(last, last, vsz - 1);
18
} else {
19
- TCGv_i32 max = tcg_const_i32(vsz);
20
- TCGv_i32 zero = tcg_const_i32(0);
21
+ TCGv_i32 max = tcg_constant_i32(vsz);
22
+ TCGv_i32 zero = tcg_constant_i32(0);
23
tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
24
- tcg_temp_free_i32(max);
25
- tcg_temp_free_i32(zero);
26
}
17
}
27
}
18
}
28
19
29
@@ -XXX,XX +XXX,XX @@ static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
20
+/*
30
if (is_power_of_2(vsz)) {
21
+ * This is called both on hardware (device) reset and software reset.
31
tcg_gen_andi_i32(last, last, vsz - 1);
22
+ */
32
} else {
23
static void imx_epit_reset(DeviceState *dev)
33
- TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
24
{
34
- TCGv_i32 zero = tcg_const_i32(0);
25
IMXEPITState *s = IMX_EPIT(dev);
35
+ TCGv_i32 max = tcg_constant_i32(vsz - (1 << esz));
26
36
+ TCGv_i32 zero = tcg_constant_i32(0);
27
- /*
37
tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
28
- * Soft reset doesn't touch some bits; hard reset clears them
38
- tcg_temp_free_i32(max);
29
- */
39
- tcg_temp_free_i32(zero);
30
+ /* Soft reset doesn't touch some bits; hard reset clears them */
40
}
31
s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
32
s->sr = 0;
33
s->lr = EPIT_TIMER_MAX;
34
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
35
ptimer_transaction_begin(s->timer_cmp);
36
ptimer_transaction_begin(s->timer_reload);
37
38
+ /* Update the frequency. Has been done already in case of a reset. */
39
if (!(s->cr & CR_SWR)) {
40
imx_epit_set_freq(s);
41
}
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
43
break;
44
45
case 1: /* SR - ACK*/
46
- /* writing 1 to OCIF clear the OCIF bit */
47
+ /* writing 1 to OCIF clears the OCIF bit */
48
if (value & 0x01) {
49
s->sr = 0;
50
imx_epit_update_int(s);
51
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
52
0x00001000);
53
sysbus_init_mmio(sbd, &s->iomem);
54
55
+ /*
56
+ * The reload timer keeps running when the peripheral is enabled. It is a
57
+ * kind of wall clock that does not generate any interrupts. The callback
58
+ * needs to be provided, but it does nothing as the ptimer already supports
59
+ * all necessary reloading functionality.
60
+ */
61
s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_LEGACY);
62
63
+ /*
64
+ * The compare timer is running only when the peripheral configuration is
65
+ * in a state that will generate compare interrupts.
66
+ */
67
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
41
}
68
}
42
69
43
--
70
--
44
2.25.1
71
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
remove unused defines, add needed defines
4
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-37-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate-sve.c | 12 ++++--------
9
include/hw/timer/imx_epit.h | 4 ++--
9
1 file changed, 4 insertions(+), 8 deletions(-)
10
hw/timer/imx_epit.c | 4 ++--
11
2 files changed, 4 insertions(+), 4 deletions(-)
10
12
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
15
--- a/include/hw/timer/imx_epit.h
14
+++ b/target/arm/translate-sve.c
16
+++ b/include/hw/timer/imx_epit.h
15
@@ -XXX,XX +XXX,XX @@ static void do_index(DisasContext *s, int esz, int rd,
17
@@ -XXX,XX +XXX,XX @@
16
static bool trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a)
18
#define CR_OCIEN (1 << 2)
17
{
19
#define CR_RLD (1 << 3)
18
if (sve_access_check(s)) {
20
#define CR_PRESCALE_SHIFT (4)
19
- TCGv_i64 start = tcg_const_i64(a->imm1);
21
-#define CR_PRESCALE_MASK (0xfff)
20
- TCGv_i64 incr = tcg_const_i64(a->imm2);
22
+#define CR_PRESCALE_BITS (12)
21
+ TCGv_i64 start = tcg_constant_i64(a->imm1);
23
#define CR_SWR (1 << 16)
22
+ TCGv_i64 incr = tcg_constant_i64(a->imm2);
24
#define CR_IOVW (1 << 17)
23
do_index(s, a->esz, a->rd, start, incr);
25
#define CR_DBGEN (1 << 18)
24
- tcg_temp_free_i64(start);
26
@@ -XXX,XX +XXX,XX @@
25
- tcg_temp_free_i64(incr);
27
#define CR_DOZEN (1 << 20)
26
}
28
#define CR_STOPEN (1 << 21)
27
return true;
29
#define CR_CLKSRC_SHIFT (24)
28
}
30
-#define CR_CLKSRC_MASK (0x3 << CR_CLKSRC_SHIFT)
29
@@ -XXX,XX +XXX,XX @@ static bool trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a)
31
+#define CR_CLKSRC_BITS (2)
30
static bool trans_INDEX_ir(DisasContext *s, arg_INDEX_ir *a)
32
31
{
33
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
32
if (sve_access_check(s)) {
34
33
- TCGv_i64 start = tcg_const_i64(a->imm);
35
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
34
+ TCGv_i64 start = tcg_constant_i64(a->imm);
36
index XXXXXXX..XXXXXXX 100644
35
TCGv_i64 incr = cpu_reg(s, a->rm);
37
--- a/hw/timer/imx_epit.c
36
do_index(s, a->esz, a->rd, start, incr);
38
+++ b/hw/timer/imx_epit.c
37
- tcg_temp_free_i64(start);
39
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
38
}
40
uint32_t clksrc;
39
return true;
41
uint32_t prescaler;
40
}
42
41
@@ -XXX,XX +XXX,XX @@ static bool trans_INDEX_ri(DisasContext *s, arg_INDEX_ri *a)
43
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, 2);
42
{
44
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, 12);
43
if (sve_access_check(s)) {
45
+ clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
44
TCGv_i64 start = cpu_reg(s, a->rn);
46
+ prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
45
- TCGv_i64 incr = tcg_const_i64(a->imm);
47
46
+ TCGv_i64 incr = tcg_constant_i64(a->imm);
48
s->freq = imx_ccm_get_clock_frequency(s->ccm,
47
do_index(s, a->esz, a->rd, start, incr);
49
imx_epit_clocks[clksrc]) / prescaler;
48
- tcg_temp_free_i64(incr);
49
}
50
return true;
51
}
52
--
50
--
53
2.25.1
51
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
5
---
8
target/arm/translate-a64.c | 6 +-----
6
include/hw/timer/imx_epit.h | 2 ++
9
1 file changed, 1 insertion(+), 5 deletions(-)
7
hw/timer/imx_epit.c | 12 ++++++------
8
2 files changed, 8 insertions(+), 6 deletions(-)
10
9
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
10
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
12
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
12
--- a/include/hw/timer/imx_epit.h
14
+++ b/target/arm/translate-a64.c
13
+++ b/include/hw/timer/imx_epit.h
15
@@ -XXX,XX +XXX,XX @@ static void shift_reg_imm(TCGv_i64 dst, TCGv_i64 src, int sf,
14
@@ -XXX,XX +XXX,XX @@
16
if (shift_i == 0) {
15
#define CR_CLKSRC_SHIFT (24)
17
tcg_gen_mov_i64(dst, src);
16
#define CR_CLKSRC_BITS (2)
17
18
+#define SR_OCIF (1 << 0)
19
+
20
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
21
22
#define TYPE_IMX_EPIT "imx.epit"
23
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/timer/imx_epit.c
26
+++ b/hw/timer/imx_epit.c
27
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx_epit_clocks[] = {
28
*/
29
static void imx_epit_update_int(IMXEPITState *s)
30
{
31
- if (s->sr && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
32
+ if ((s->sr & SR_OCIF) && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
33
qemu_irq_raise(s->irq);
18
} else {
34
} else {
19
- TCGv_i64 shift_const;
35
qemu_irq_lower(s->irq);
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
37
break;
38
39
case 1: /* SR - ACK*/
40
- /* writing 1 to OCIF clears the OCIF bit */
41
- if (value & 0x01) {
42
- s->sr = 0;
43
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
44
+ if (value & SR_OCIF) {
45
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
46
imx_epit_update_int(s);
47
}
48
break;
49
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
50
IMXEPITState *s = IMX_EPIT(opaque);
51
52
DPRINTF("sr was %d\n", s->sr);
20
-
53
-
21
- shift_const = tcg_const_i64(shift_i);
54
- s->sr = 1;
22
- shift_reg(dst, src, sf, shift_type, shift_const);
55
+ /* Set interrupt status bit SR.OCIF and update the interrupt state */
23
- tcg_temp_free_i64(shift_const);
56
+ s->sr |= SR_OCIF;
24
+ shift_reg(dst, src, sf, shift_type, tcg_constant_i64(shift_i));
57
imx_epit_update_int(s);
25
}
26
}
58
}
27
59
28
--
60
--
29
2.25.1
61
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
The interrupt state can change due to:
4
- reset clears both SR.OCIF and CR.OCIE
5
- write to CR.EN or CR.OCIE
6
7
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-42-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate-sve.c | 20 +++++++-------------
11
hw/timer/imx_epit.c | 16 ++++++++++++----
9
1 file changed, 7 insertions(+), 13 deletions(-)
12
1 file changed, 12 insertions(+), 4 deletions(-)
10
13
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
14
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
16
--- a/hw/timer/imx_epit.c
14
+++ b/target/arm/translate-sve.c
17
+++ b/hw/timer/imx_epit.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a)
18
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
16
static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
19
if (s->cr & CR_SWR) {
17
{
20
/* handle the reset */
18
TCGv_i64 op0, op1, t0, t1, tmax;
21
imx_epit_reset(DEVICE(s));
19
- TCGv_i32 t2, t3;
22
- /*
20
+ TCGv_i32 t2;
23
- * TODO: could we 'break' here? following operations appear
21
TCGv_ptr ptr;
24
- * to duplicate the work imx_epit_reset() already did.
22
unsigned vsz = vec_full_reg_size(s);
25
- */
23
unsigned desc = 0;
24
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
25
}
26
}
26
}
27
27
28
+ /*
28
- tmax = tcg_const_i64(vsz >> a->esz);
29
+ * The interrupt state can change due to:
29
+ tmax = tcg_constant_i64(vsz >> a->esz);
30
+ * - reset clears both SR.OCIF and CR.OCIE
30
if (eq) {
31
+ * - write to CR.EN or CR.OCIE
31
/* Equality means one more iteration. */
32
+ */
32
tcg_gen_addi_i64(t0, t0, 1);
33
+ imx_epit_update_int(s);
33
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
34
+
34
35
+ /*
35
/* Bound to the maximum. */
36
+ * TODO: could we 'break' here for reset? following operations appear
36
tcg_gen_umin_i64(t0, t0, tmax);
37
+ * to duplicate the work imx_epit_reset() already did.
37
- tcg_temp_free_i64(tmax);
38
+ */
38
39
+
39
/* Set the count to zero if the condition is false. */
40
ptimer_transaction_begin(s->timer_cmp);
40
tcg_gen_movi_i64(t1, 0);
41
ptimer_transaction_begin(s->timer_reload);
41
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
42
43
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz / 8);
44
desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
45
- t3 = tcg_const_i32(desc);
46
47
ptr = tcg_temp_new_ptr();
48
tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
49
50
if (a->lt) {
51
- gen_helper_sve_whilel(t2, ptr, t2, t3);
52
+ gen_helper_sve_whilel(t2, ptr, t2, tcg_constant_i32(desc));
53
} else {
54
- gen_helper_sve_whileg(t2, ptr, t2, t3);
55
+ gen_helper_sve_whileg(t2, ptr, t2, tcg_constant_i32(desc));
56
}
57
do_pred_flags(t2);
58
59
tcg_temp_free_ptr(ptr);
60
tcg_temp_free_i32(t2);
61
- tcg_temp_free_i32(t3);
62
return true;
63
}
64
65
static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
66
{
67
TCGv_i64 op0, op1, diff, t1, tmax;
68
- TCGv_i32 t2, t3;
69
+ TCGv_i32 t2;
70
TCGv_ptr ptr;
71
unsigned vsz = vec_full_reg_size(s);
72
unsigned desc = 0;
73
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
74
op0 = read_cpu_reg(s, a->rn, 1);
75
op1 = read_cpu_reg(s, a->rm, 1);
76
77
- tmax = tcg_const_i64(vsz);
78
+ tmax = tcg_constant_i64(vsz);
79
diff = tcg_temp_new_i64();
80
81
if (a->rw) {
82
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
83
84
/* Bound to the maximum. */
85
tcg_gen_umin_i64(diff, diff, tmax);
86
- tcg_temp_free_i64(tmax);
87
88
/* Since we're bounded, pass as a 32-bit type. */
89
t2 = tcg_temp_new_i32();
90
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
91
92
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz / 8);
93
desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
94
- t3 = tcg_const_i32(desc);
95
96
ptr = tcg_temp_new_ptr();
97
tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
98
99
- gen_helper_sve_whilel(t2, ptr, t2, t3);
100
+ gen_helper_sve_whilel(t2, ptr, t2, tcg_constant_i32(desc));
101
do_pred_flags(t2);
102
103
tcg_temp_free_ptr(ptr);
104
tcg_temp_free_i32(t2);
105
- tcg_temp_free_i32(t3);
106
return true;
107
}
108
42
109
--
43
--
110
2.25.1
44
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-25-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
6
---
8
target/arm/translate.c | 22 +++++++++-------------
7
hw/timer/imx_epit.c | 20 ++++++++++++++------
9
1 file changed, 9 insertions(+), 13 deletions(-)
8
1 file changed, 14 insertions(+), 6 deletions(-)
10
9
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
12
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
12
--- a/hw/timer/imx_epit.c
14
+++ b/target/arm/translate.c
13
+++ b/hw/timer/imx_epit.c
15
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
16
tcg_gen_sextract_i32(tcg_el, tcg_el, ctz32(SCR_EEL2), 1);
15
/*
17
tcg_gen_addi_i32(tcg_el, tcg_el, 3);
16
* This is called both on hardware (device) reset and software reset.
18
} else {
17
*/
19
- tcg_el = tcg_const_i32(3);
18
-static void imx_epit_reset(DeviceState *dev)
20
+ tcg_el = tcg_constant_i32(3);
19
+static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
21
}
22
23
gen_exception_el(s, EXCP_UDEF, syn_uncategorized(), tcg_el);
24
@@ -XXX,XX +XXX,XX @@ undef:
25
26
static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)
27
{
20
{
28
- TCGv_i32 tcg_reg, tcg_tgtmode, tcg_regno;
21
- IMXEPITState *s = IMX_EPIT(dev);
29
+ TCGv_i32 tcg_reg;
22
-
30
int tgtmode = 0, regno = 0;
23
/* Soft reset doesn't touch some bits; hard reset clears them */
31
24
- s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
32
if (!msr_banked_access_decode(s, r, sysm, rn, &tgtmode, &regno)) {
25
+ if (is_hard_reset) {
33
@@ -XXX,XX +XXX,XX @@ static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)
26
+ s->cr = 0;
34
gen_set_condexec(s);
27
+ } else {
35
gen_set_pc_im(s, s->pc_curr);
28
+ s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
36
tcg_reg = load_reg(s, rn);
29
+ }
37
- tcg_tgtmode = tcg_const_i32(tgtmode);
30
s->sr = 0;
38
- tcg_regno = tcg_const_i32(regno);
31
s->lr = EPIT_TIMER_MAX;
39
- gen_helper_msr_banked(cpu_env, tcg_reg, tcg_tgtmode, tcg_regno);
32
s->cmp = 0;
40
- tcg_temp_free_i32(tcg_tgtmode);
33
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
41
- tcg_temp_free_i32(tcg_regno);
34
s->cr = value & 0x03ffffff;
42
+ gen_helper_msr_banked(cpu_env, tcg_reg,
35
if (s->cr & CR_SWR) {
43
+ tcg_constant_i32(tgtmode),
36
/* handle the reset */
44
+ tcg_constant_i32(regno));
37
- imx_epit_reset(DEVICE(s));
45
tcg_temp_free_i32(tcg_reg);
38
+ imx_epit_reset(s, false);
46
s->base.is_jmp = DISAS_UPDATE_EXIT;
39
}
40
41
/*
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
43
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
47
}
44
}
48
45
49
static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)
46
+static void imx_epit_dev_reset(DeviceState *dev)
47
+{
48
+ IMXEPITState *s = IMX_EPIT(dev);
49
+ imx_epit_reset(s, true);
50
+}
51
+
52
static void imx_epit_class_init(ObjectClass *klass, void *data)
50
{
53
{
51
- TCGv_i32 tcg_reg, tcg_tgtmode, tcg_regno;
54
DeviceClass *dc = DEVICE_CLASS(klass);
52
+ TCGv_i32 tcg_reg;
55
53
int tgtmode = 0, regno = 0;
56
dc->realize = imx_epit_realize;
54
57
- dc->reset = imx_epit_reset;
55
if (!msr_banked_access_decode(s, r, sysm, rn, &tgtmode, &regno)) {
58
+ dc->reset = imx_epit_dev_reset;
56
@@ -XXX,XX +XXX,XX @@ static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)
59
dc->vmsd = &vmstate_imx_timer_epit;
57
gen_set_condexec(s);
60
dc->desc = "i.MX periodic timer";
58
gen_set_pc_im(s, s->pc_curr);
59
tcg_reg = tcg_temp_new_i32();
60
- tcg_tgtmode = tcg_const_i32(tgtmode);
61
- tcg_regno = tcg_const_i32(regno);
62
- gen_helper_mrs_banked(tcg_reg, cpu_env, tcg_tgtmode, tcg_regno);
63
- tcg_temp_free_i32(tcg_tgtmode);
64
- tcg_temp_free_i32(tcg_regno);
65
+ gen_helper_mrs_banked(tcg_reg, cpu_env,
66
+ tcg_constant_i32(tgtmode),
67
+ tcg_constant_i32(regno));
68
store_reg(s, rn, tcg_reg);
69
s->base.is_jmp = DISAS_UPDATE_EXIT;
70
}
61
}
71
--
62
--
72
2.25.1
63
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Finish conversion of the file to tcg_constant_*.
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20220426163043.100432-22-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
6
---
10
target/arm/translate-a64.c | 20 ++++++++------------
7
hw/timer/imx_epit.c | 215 ++++++++++++++++++++++++--------------------
11
1 file changed, 8 insertions(+), 12 deletions(-)
8
1 file changed, 117 insertions(+), 98 deletions(-)
12
9
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-a64.c
12
--- a/hw/timer/imx_epit.c
16
+++ b/target/arm/translate-a64.c
13
+++ b/hw/timer/imx_epit.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
18
}
19
20
if (is_scalar) {
21
- tcg_res[1] = tcg_const_i64(0);
22
+ tcg_res[1] = tcg_constant_i64(0);
23
}
24
25
for (pass = 0; pass < 2; pass++) {
26
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
27
tcg_op2 = tcg_temp_new_i32();
28
tcg_op3 = tcg_temp_new_i32();
29
tcg_res = tcg_temp_new_i32();
30
- tcg_zero = tcg_const_i32(0);
31
+ tcg_zero = tcg_constant_i32(0);
32
33
read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
34
read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
35
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
36
tcg_temp_free_i32(tcg_op2);
37
tcg_temp_free_i32(tcg_op3);
38
tcg_temp_free_i32(tcg_res);
39
- tcg_temp_free_i32(tcg_zero);
40
}
15
}
41
}
16
}
42
17
43
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
18
+static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
44
gen_helper_yield(cpu_env);
19
+{
45
break;
20
+ uint32_t oldcr = s->cr;
46
case DISAS_WFI:
21
+
47
- {
22
+ s->cr = value & 0x03ffffff;
48
- /* This is a special case because we don't want to just halt the CPU
23
+
49
- * if trying to debug across a WFI.
24
+ if (s->cr & CR_SWR) {
50
+ /*
25
+ /* handle the reset */
51
+ * This is a special case because we don't want to just halt
26
+ imx_epit_reset(s, false);
52
+ * the CPU if trying to debug across a WFI.
27
+ }
53
*/
28
+
54
- TCGv_i32 tmp = tcg_const_i32(4);
29
+ /*
55
-
30
+ * The interrupt state can change due to:
56
gen_a64_set_pc_im(dc->base.pc_next);
31
+ * - reset clears both SR.OCIF and CR.OCIE
57
- gen_helper_wfi(cpu_env, tmp);
32
+ * - write to CR.EN or CR.OCIE
58
- tcg_temp_free_i32(tmp);
33
+ */
59
- /* The helper doesn't necessarily throw an exception, but we
34
+ imx_epit_update_int(s);
60
+ gen_helper_wfi(cpu_env, tcg_constant_i32(4));
35
+
61
+ /*
36
+ /*
62
+ * The helper doesn't necessarily throw an exception, but we
37
+ * TODO: could we 'break' here for reset? following operations appear
63
* must go back to the main loop to check for interrupts anyway.
38
+ * to duplicate the work imx_epit_reset() already did.
64
*/
39
+ */
65
tcg_gen_exit_tb(NULL, 0);
40
+
66
break;
41
+ ptimer_transaction_begin(s->timer_cmp);
67
}
42
+ ptimer_transaction_begin(s->timer_reload);
68
- }
43
+
44
+ /* Update the frequency. Has been done already in case of a reset. */
45
+ if (!(s->cr & CR_SWR)) {
46
+ imx_epit_set_freq(s);
47
+ }
48
+
49
+ if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
50
+ if (s->cr & CR_ENMOD) {
51
+ if (s->cr & CR_RLD) {
52
+ ptimer_set_limit(s->timer_reload, s->lr, 1);
53
+ ptimer_set_limit(s->timer_cmp, s->lr, 1);
54
+ } else {
55
+ ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
56
+ ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
57
+ }
58
+ }
59
+
60
+ imx_epit_reload_compare_timer(s);
61
+ ptimer_run(s->timer_reload, 0);
62
+ if (s->cr & CR_OCIEN) {
63
+ ptimer_run(s->timer_cmp, 0);
64
+ } else {
65
+ ptimer_stop(s->timer_cmp);
66
+ }
67
+ } else if (!(s->cr & CR_EN)) {
68
+ /* stop both timers */
69
+ ptimer_stop(s->timer_reload);
70
+ ptimer_stop(s->timer_cmp);
71
+ } else if (s->cr & CR_OCIEN) {
72
+ if (!(oldcr & CR_OCIEN)) {
73
+ imx_epit_reload_compare_timer(s);
74
+ ptimer_run(s->timer_cmp, 0);
75
+ }
76
+ } else {
77
+ ptimer_stop(s->timer_cmp);
78
+ }
79
+
80
+ ptimer_transaction_commit(s->timer_cmp);
81
+ ptimer_transaction_commit(s->timer_reload);
82
+}
83
+
84
+static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
85
+{
86
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
87
+ if (value & SR_OCIF) {
88
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
89
+ imx_epit_update_int(s);
90
+ }
91
+}
92
+
93
+static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
94
+{
95
+ s->lr = value;
96
+
97
+ ptimer_transaction_begin(s->timer_cmp);
98
+ ptimer_transaction_begin(s->timer_reload);
99
+ if (s->cr & CR_RLD) {
100
+ /* Also set the limit if the LRD bit is set */
101
+ /* If IOVW bit is set then set the timer value */
102
+ ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
103
+ ptimer_set_limit(s->timer_cmp, s->lr, 0);
104
+ } else if (s->cr & CR_IOVW) {
105
+ /* If IOVW bit is set then set the timer value */
106
+ ptimer_set_count(s->timer_reload, s->lr);
107
+ }
108
+ /*
109
+ * Commit the change to s->timer_reload, so it can propagate. Otherwise
110
+ * the timer interrupt may not fire properly. The commit must happen
111
+ * before calling imx_epit_reload_compare_timer(), which reads
112
+ * s->timer_reload internally again.
113
+ */
114
+ ptimer_transaction_commit(s->timer_reload);
115
+ imx_epit_reload_compare_timer(s);
116
+ ptimer_transaction_commit(s->timer_cmp);
117
+}
118
+
119
+static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
120
+{
121
+ s->cmp = value;
122
+
123
+ ptimer_transaction_begin(s->timer_cmp);
124
+ imx_epit_reload_compare_timer(s);
125
+ ptimer_transaction_commit(s->timer_cmp);
126
+}
127
+
128
static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
129
unsigned size)
130
{
131
IMXEPITState *s = IMX_EPIT(opaque);
132
- uint64_t oldcr;
133
134
DPRINTF("(%s, value = 0x%08x)\n", imx_epit_reg_name(offset >> 2),
135
(uint32_t)value);
136
137
switch (offset >> 2) {
138
case 0: /* CR */
139
-
140
- oldcr = s->cr;
141
- s->cr = value & 0x03ffffff;
142
- if (s->cr & CR_SWR) {
143
- /* handle the reset */
144
- imx_epit_reset(s, false);
145
- }
146
-
147
- /*
148
- * The interrupt state can change due to:
149
- * - reset clears both SR.OCIF and CR.OCIE
150
- * - write to CR.EN or CR.OCIE
151
- */
152
- imx_epit_update_int(s);
153
-
154
- /*
155
- * TODO: could we 'break' here for reset? following operations appear
156
- * to duplicate the work imx_epit_reset() already did.
157
- */
158
-
159
- ptimer_transaction_begin(s->timer_cmp);
160
- ptimer_transaction_begin(s->timer_reload);
161
-
162
- /* Update the frequency. Has been done already in case of a reset. */
163
- if (!(s->cr & CR_SWR)) {
164
- imx_epit_set_freq(s);
165
- }
166
-
167
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
168
- if (s->cr & CR_ENMOD) {
169
- if (s->cr & CR_RLD) {
170
- ptimer_set_limit(s->timer_reload, s->lr, 1);
171
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
172
- } else {
173
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
174
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
175
- }
176
- }
177
-
178
- imx_epit_reload_compare_timer(s);
179
- ptimer_run(s->timer_reload, 0);
180
- if (s->cr & CR_OCIEN) {
181
- ptimer_run(s->timer_cmp, 0);
182
- } else {
183
- ptimer_stop(s->timer_cmp);
184
- }
185
- } else if (!(s->cr & CR_EN)) {
186
- /* stop both timers */
187
- ptimer_stop(s->timer_reload);
188
- ptimer_stop(s->timer_cmp);
189
- } else if (s->cr & CR_OCIEN) {
190
- if (!(oldcr & CR_OCIEN)) {
191
- imx_epit_reload_compare_timer(s);
192
- ptimer_run(s->timer_cmp, 0);
193
- }
194
- } else {
195
- ptimer_stop(s->timer_cmp);
196
- }
197
-
198
- ptimer_transaction_commit(s->timer_cmp);
199
- ptimer_transaction_commit(s->timer_reload);
200
+ imx_epit_write_cr(s, (uint32_t)value);
201
break;
202
203
- case 1: /* SR - ACK*/
204
- /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
205
- if (value & SR_OCIF) {
206
- s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
207
- imx_epit_update_int(s);
208
- }
209
+ case 1: /* SR */
210
+ imx_epit_write_sr(s, (uint32_t)value);
211
break;
212
213
- case 2: /* LR - set ticks */
214
- s->lr = value;
215
-
216
- ptimer_transaction_begin(s->timer_cmp);
217
- ptimer_transaction_begin(s->timer_reload);
218
- if (s->cr & CR_RLD) {
219
- /* Also set the limit if the LRD bit is set */
220
- /* If IOVW bit is set then set the timer value */
221
- ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
222
- ptimer_set_limit(s->timer_cmp, s->lr, 0);
223
- } else if (s->cr & CR_IOVW) {
224
- /* If IOVW bit is set then set the timer value */
225
- ptimer_set_count(s->timer_reload, s->lr);
226
- }
227
- /*
228
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
229
- * the timer interrupt may not fire properly. The commit must happen
230
- * before calling imx_epit_reload_compare_timer(), which reads
231
- * s->timer_reload internally again.
232
- */
233
- ptimer_transaction_commit(s->timer_reload);
234
- imx_epit_reload_compare_timer(s);
235
- ptimer_transaction_commit(s->timer_cmp);
236
+ case 2: /* LR */
237
+ imx_epit_write_lr(s, (uint32_t)value);
238
break;
239
240
case 3: /* CMP */
241
- s->cmp = value;
242
-
243
- ptimer_transaction_begin(s->timer_cmp);
244
- imx_epit_reload_compare_timer(s);
245
- ptimer_transaction_commit(s->timer_cmp);
246
-
247
+ imx_epit_write_cmp(s, (uint32_t)value);
248
break;
249
250
default:
251
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
252
HWADDR_PRIx "\n", TYPE_IMX_EPIT, __func__, offset);
253
-
254
break;
69
}
255
}
70
}
256
}
71
257
+
258
static void imx_epit_cmp(void *opaque)
259
{
260
IMXEPITState *s = IMX_EPIT(opaque);
72
--
261
--
73
2.25.1
262
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
The CNT register is a read-only register. There is no need to
4
store its value; it can be calculated on demand.
5
The calculated frequency is needed temporarily only.
6
7
Note that this is a migration compatibility break for all board
8
types that use the EPIT peripheral.
9
10
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-26-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
13
---
8
target/arm/translate.c | 27 +++++++++------------------
14
include/hw/timer/imx_epit.h | 2 -
9
1 file changed, 9 insertions(+), 18 deletions(-)
15
hw/timer/imx_epit.c | 73 ++++++++++++++-----------------------
16
2 files changed, 28 insertions(+), 47 deletions(-)
10
17
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
12
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
20
--- a/include/hw/timer/imx_epit.h
14
+++ b/target/arm/translate.c
21
+++ b/include/hw/timer/imx_epit.h
15
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
22
@@ -XXX,XX +XXX,XX @@ struct IMXEPITState {
16
} \
23
uint32_t sr;
17
static void gen_##NAME##0_vec(unsigned vece, TCGv_vec d, TCGv_vec a) \
24
uint32_t lr;
18
{ \
25
uint32_t cmp;
19
- TCGv_vec zero = tcg_const_zeros_vec_matching(d); \
26
- uint32_t cnt;
20
+ TCGv_vec zero = tcg_constant_vec_matching(d, vece, 0); \
27
21
tcg_gen_cmp_vec(COND, vece, d, a, zero); \
28
- uint32_t freq;
22
- tcg_temp_free_vec(zero); \
29
qemu_irq irq;
23
} \
30
};
24
void gen_gvec_##NAME##0(unsigned vece, uint32_t d, uint32_t m, \
31
25
uint32_t opr_sz, uint32_t max_sz) \
32
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
26
@@ -XXX,XX +XXX,XX @@ void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
33
index XXXXXXX..XXXXXXX 100644
27
TCGv_i32 rval = tcg_temp_new_i32();
34
--- a/hw/timer/imx_epit.c
28
TCGv_i32 lsh = tcg_temp_new_i32();
35
+++ b/hw/timer/imx_epit.c
29
TCGv_i32 rsh = tcg_temp_new_i32();
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_update_int(IMXEPITState *s)
30
- TCGv_i32 zero = tcg_const_i32(0);
37
}
31
- TCGv_i32 max = tcg_const_i32(32);
32
+ TCGv_i32 zero = tcg_constant_i32(0);
33
+ TCGv_i32 max = tcg_constant_i32(32);
34
35
/*
36
* Rely on the TCG guarantee that out of range shifts produce
37
@@ -XXX,XX +XXX,XX @@ void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
38
tcg_temp_free_i32(rval);
39
tcg_temp_free_i32(lsh);
40
tcg_temp_free_i32(rsh);
41
- tcg_temp_free_i32(zero);
42
- tcg_temp_free_i32(max);
43
}
38
}
44
39
45
void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
40
-/*
46
@@ -XXX,XX +XXX,XX @@ void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
41
- * Must be called from within a ptimer_transaction_begin/commit block
47
TCGv_i64 rval = tcg_temp_new_i64();
42
- * for both s->timer_cmp and s->timer_reload.
48
TCGv_i64 lsh = tcg_temp_new_i64();
43
- */
49
TCGv_i64 rsh = tcg_temp_new_i64();
44
-static void imx_epit_set_freq(IMXEPITState *s)
50
- TCGv_i64 zero = tcg_const_i64(0);
45
+static uint32_t imx_epit_get_freq(IMXEPITState *s)
51
- TCGv_i64 max = tcg_const_i64(64);
46
{
52
+ TCGv_i64 zero = tcg_constant_i64(0);
47
- uint32_t clksrc;
53
+ TCGv_i64 max = tcg_constant_i64(64);
48
- uint32_t prescaler;
54
49
-
55
/*
50
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
56
* Rely on the TCG guarantee that out of range shifts produce
51
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
57
@@ -XXX,XX +XXX,XX @@ void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
52
-
58
tcg_temp_free_i64(rval);
53
- s->freq = imx_ccm_get_clock_frequency(s->ccm,
59
tcg_temp_free_i64(lsh);
54
- imx_epit_clocks[clksrc]) / prescaler;
60
tcg_temp_free_i64(rsh);
55
-
61
- tcg_temp_free_i64(zero);
56
- DPRINTF("Setting ptimer frequency to %u\n", s->freq);
62
- tcg_temp_free_i64(max);
57
-
58
- if (s->freq) {
59
- ptimer_set_freq(s->timer_reload, s->freq);
60
- ptimer_set_freq(s->timer_cmp, s->freq);
61
- }
62
+ uint32_t clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
63
+ uint32_t prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
64
+ uint32_t f_in = imx_ccm_get_clock_frequency(s->ccm, imx_epit_clocks[clksrc]);
65
+ uint32_t freq = f_in / prescaler;
66
+ DPRINTF("ptimer frequency is %u\n", freq);
67
+ return freq;
63
}
68
}
64
69
65
static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
70
/*
66
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
71
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
67
TCGv_i32 rval = tcg_temp_new_i32();
72
s->sr = 0;
68
TCGv_i32 lsh = tcg_temp_new_i32();
73
s->lr = EPIT_TIMER_MAX;
69
TCGv_i32 rsh = tcg_temp_new_i32();
74
s->cmp = 0;
70
- TCGv_i32 zero = tcg_const_i32(0);
75
- s->cnt = 0;
71
- TCGv_i32 max = tcg_const_i32(31);
76
ptimer_transaction_begin(s->timer_cmp);
72
+ TCGv_i32 zero = tcg_constant_i32(0);
77
ptimer_transaction_begin(s->timer_reload);
73
+ TCGv_i32 max = tcg_constant_i32(31);
78
- /* stop both timers */
74
79
+
75
/*
80
+ /*
76
* Rely on the TCG guarantee that out of range shifts produce
81
+ * The reset switches off the input clock, so even if the CR.EN is still
77
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
82
+ * set, the timers are no longer running.
78
tcg_temp_free_i32(rval);
83
+ */
79
tcg_temp_free_i32(lsh);
84
+ assert(imx_epit_get_freq(s) == 0);
80
tcg_temp_free_i32(rsh);
85
ptimer_stop(s->timer_cmp);
81
- tcg_temp_free_i32(zero);
86
ptimer_stop(s->timer_reload);
82
- tcg_temp_free_i32(max);
87
- /* compute new frequency */
88
- imx_epit_set_freq(s);
89
/* init both timers to EPIT_TIMER_MAX */
90
ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
91
ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
92
- if (s->freq && (s->cr & CR_EN)) {
93
- /* if the timer is still enabled, restart it */
94
- ptimer_run(s->timer_reload, 0);
95
- }
96
ptimer_transaction_commit(s->timer_cmp);
97
ptimer_transaction_commit(s->timer_reload);
83
}
98
}
84
99
85
void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
100
-static uint32_t imx_epit_update_count(IMXEPITState *s)
86
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
101
-{
87
TCGv_i64 rval = tcg_temp_new_i64();
102
- s->cnt = ptimer_get_count(s->timer_reload);
88
TCGv_i64 lsh = tcg_temp_new_i64();
103
-
89
TCGv_i64 rsh = tcg_temp_new_i64();
104
- return s->cnt;
90
- TCGv_i64 zero = tcg_const_i64(0);
105
-}
91
- TCGv_i64 max = tcg_const_i64(63);
106
-
92
+ TCGv_i64 zero = tcg_constant_i64(0);
107
static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
93
+ TCGv_i64 max = tcg_constant_i64(63);
108
{
94
109
IMXEPITState *s = IMX_EPIT(opaque);
95
/*
110
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
96
* Rely on the TCG guarantee that out of range shifts produce
111
break;
97
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
112
98
tcg_temp_free_i64(rval);
113
case 4: /* CNT */
99
tcg_temp_free_i64(lsh);
114
- imx_epit_update_count(s);
100
tcg_temp_free_i64(rsh);
115
- reg_value = s->cnt;
101
- tcg_temp_free_i64(zero);
116
+ reg_value = ptimer_get_count(s->timer_reload);
102
- tcg_temp_free_i64(max);
117
break;
103
}
118
104
119
default:
105
static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
120
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
121
{
122
if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
123
/* if the compare feature is on and timers are running */
124
- uint32_t tmp = imx_epit_update_count(s);
125
+ uint32_t tmp = ptimer_get_count(s->timer_reload);
126
uint64_t next;
127
if (tmp > s->cmp) {
128
/* It'll fire in this round of the timer */
129
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
130
131
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
132
{
133
+ uint32_t freq = 0;
134
uint32_t oldcr = s->cr;
135
136
s->cr = value & 0x03ffffff;
137
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
138
ptimer_transaction_begin(s->timer_cmp);
139
ptimer_transaction_begin(s->timer_reload);
140
141
- /* Update the frequency. Has been done already in case of a reset. */
142
+ /*
143
+ * Update the frequency. In case of a reset the input clock was
144
+ * switched off, so this can be skipped.
145
+ */
146
if (!(s->cr & CR_SWR)) {
147
- imx_epit_set_freq(s);
148
+ freq = imx_epit_get_freq(s);
149
+ if (freq) {
150
+ ptimer_set_freq(s->timer_reload, freq);
151
+ ptimer_set_freq(s->timer_cmp, freq);
152
+ }
153
}
154
155
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
156
+ if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
157
if (s->cr & CR_ENMOD) {
158
if (s->cr & CR_RLD) {
159
ptimer_set_limit(s->timer_reload, s->lr, 1);
160
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps imx_epit_ops = {
161
162
static const VMStateDescription vmstate_imx_timer_epit = {
163
.name = TYPE_IMX_EPIT,
164
- .version_id = 2,
165
- .minimum_version_id = 2,
166
+ .version_id = 3,
167
+ .minimum_version_id = 3,
168
.fields = (VMStateField[]) {
169
VMSTATE_UINT32(cr, IMXEPITState),
170
VMSTATE_UINT32(sr, IMXEPITState),
171
VMSTATE_UINT32(lr, IMXEPITState),
172
VMSTATE_UINT32(cmp, IMXEPITState),
173
- VMSTATE_UINT32(cnt, IMXEPITState),
174
- VMSTATE_UINT32(freq, IMXEPITState),
175
VMSTATE_PTIMER(timer_reload, IMXEPITState),
176
VMSTATE_PTIMER(timer_cmp, IMXEPITState),
177
VMSTATE_END_OF_LIST()
--
2.25.1
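A note on the tcg_const_*() to tcg_constant_*() conversions in the hunks above and below: tcg_const_*() allocates a fresh temporary holding the value, which the caller must release with tcg_temp_free_*(), whereas tcg_constant_*() returns an interned, read-only constant that needs no free and must never be used as a destination. A minimal sketch of the pattern (hypothetical helper, not code from this series):

    /* Before: the constant lives in a temporary that has to be freed. */
    static void gen_add_four_old(TCGv_i32 dst, TCGv_i32 src)
    {
        TCGv_i32 four = tcg_const_i32(4);
        tcg_gen_add_i32(dst, src, four);
        tcg_temp_free_i32(four);
    }

    /* After: a cached constant; no free, and it must stay read-only. */
    static void gen_add_four_new(TCGv_i32 dst, TCGv_i32 src)
    {
        tcg_gen_add_i32(dst, src, tcg_constant_i32(4));
    }

This is also why places where one temporary did double duty as a constant input and as a result now allocate a separate destination with tcg_temp_new_i32().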
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
In these cases, 't' did double-duty as zero source and
3
- fix #1263 for CR writes
4
temporary destination. Split the two uses.
4
- rework compare timer handling
5
- The compare timer has to run even if CR.OCIEN is not set,
6
as SR.OCIF must be updated.
7
- The compare timer fires exactly once when the
8
compare value is less than the current value, but the
9
reload value is less than the compare value.
10
- The compare timer will never fire if the reload value is
11
less than the compare value. Disable it in this case.
5
12
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
14
[PMM: fixed minor style nits]
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20220426163043.100432-46-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
17
---
11
target/arm/translate-sve.c | 17 ++++++++---------
18
hw/timer/imx_epit.c | 192 ++++++++++++++++++++++++++------------------
12
1 file changed, 8 insertions(+), 9 deletions(-)
19
1 file changed, 116 insertions(+), 76 deletions(-)
13
20
14
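The compare-value rules listed in the commit message amount to a small next-timeout calculation on a down-counter. A simplified, self-contained sketch of that arithmetic (hypothetical names, not the code added below):

    #include <stdint.h>
    #include <stdbool.h>

    /*
     * The counter counts down from 'limit' to 0 and then reloads to
     * 'limit'; a compare event fires each time it passes 'cmp'.
     * Returns the number of ticks until the next event and reports
     * whether further events can follow.
     */
    static uint64_t ticks_to_compare(uint64_t counter, uint64_t limit,
                                     uint64_t cmp, bool *periodic)
    {
        *periodic = (limit >= cmp);
        if (counter >= cmp) {
            /* still above the compare value in the current round */
            return counter - cmp;
        }
        if (*periodic) {
            /* already below cmp: the next hit comes after a reload */
            return counter + (limit - cmp);
        }
        /* reload value below cmp: no further event; caller stops the timer */
        return 0;
    }

In the device model this result is fed to the internal compare ptimer, which runs one-shot when the reload value is below the compare value and periodically otherwise.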
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
21
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-sve.c
23
--- a/hw/timer/imx_epit.c
17
+++ b/target/arm/translate-sve.c
24
+++ b/hw/timer/imx_epit.c
18
@@ -XXX,XX +XXX,XX @@ static void do_predtest(DisasContext *s, int dofs, int gofs, int words)
25
@@ -XXX,XX +XXX,XX @@
19
{
26
* Originally written by Hans Jiang
20
TCGv_ptr dptr = tcg_temp_new_ptr();
27
* Updated by Peter Chubb
21
TCGv_ptr gptr = tcg_temp_new_ptr();
28
* Updated by Jean-Christophe Dubois <jcd@tribudubois.net>
22
- TCGv_i32 t;
29
+ * Updated by Axel Heider
23
+ TCGv_i32 t = tcg_temp_new_i32();
30
*
24
31
* This code is licensed under GPL version 2 or later. See
25
tcg_gen_addi_ptr(dptr, cpu_env, dofs);
32
* the COPYING file in the top-level directory.
26
tcg_gen_addi_ptr(gptr, cpu_env, gofs);
33
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
27
- t = tcg_const_i32(words);
34
return reg_value;
28
35
}
29
- gen_helper_sve_predtest(t, dptr, gptr, t);
36
30
+ gen_helper_sve_predtest(t, dptr, gptr, tcg_constant_i32(words));
37
-/* Must be called from ptimer_transaction_begin/commit block for s->timer_cmp */
31
tcg_temp_free_ptr(dptr);
38
-static void imx_epit_reload_compare_timer(IMXEPITState *s)
32
tcg_temp_free_ptr(gptr);
39
+/*
33
40
+ * Must be called from a ptimer_transaction_begin/commit block for
34
@@ -XXX,XX +XXX,XX @@ static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
41
+ * s->timer_cmp, but outside of a transaction block of s->timer_reload,
35
42
+ * so the proper counter value is read.
36
tcg_gen_addi_ptr(t_pd, cpu_env, pred_full_reg_offset(s, a->rd));
43
+ */
37
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->rn));
44
+static void imx_epit_update_compare_timer(IMXEPITState *s)
38
- t = tcg_const_i32(desc);
45
{
39
+ t = tcg_temp_new_i32();
46
- if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
40
47
- /* if the compare feature is on and timers are running */
41
- gen_fn(t, t_pd, t_pg, t);
48
- uint32_t tmp = ptimer_get_count(s->timer_reload);
42
+ gen_fn(t, t_pd, t_pg, tcg_constant_i32(desc));
49
- uint64_t next;
43
tcg_temp_free_ptr(t_pd);
50
- if (tmp > s->cmp) {
44
tcg_temp_free_ptr(t_pg);
51
- /* It'll fire in this round of the timer */
45
52
- next = tmp - s->cmp;
46
@@ -XXX,XX +XXX,XX @@ static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
53
- } else { /* catch it next time around */
54
- next = tmp - s->cmp + ((s->cr & CR_RLD) ? EPIT_TIMER_MAX : s->lr);
55
+ uint64_t counter = 0;
56
+ bool is_oneshot = false;
57
+ /*
58
+ * The compare timer only has to run if the timer peripheral is active
59
+ * and there is an input clock. Otherwise it can be switched off.
60
+ */
61
+ bool is_active = (s->cr & CR_EN) && imx_epit_get_freq(s);
62
+ if (is_active) {
63
+ /*
64
+ * Calculate next timeout for compare timer. Reading the reload
65
+ * counter returns proper results only if pending transactions
66
+ * on it are committed here. Otherwise stale values are read.
67
+ */
68
+ counter = ptimer_get_count(s->timer_reload);
69
+ uint64_t limit = ptimer_get_limit(s->timer_cmp);
70
+ /*
71
+ * The compare timer is a periodic timer if the limit is at least
72
+ * the compare value. Otherwise it may fire at most once in the
73
+ * current round.
74
+ */
75
+ is_oneshot = (limit < s->cmp);
76
+ if (counter >= s->cmp) {
77
+ /* The compare timer fires in the current round. */
78
+ counter -= s->cmp;
79
+ } else if (!is_oneshot) {
80
+ /*
81
+ * The compare timer fires after a reload, as it is below the
82
+ * compare value already in this round. Note that the counter
83
+ * value calculated below can be above the 32-bit limit, which
84
+ * is legal here because the compare timer is an internal
85
+ * helper ptimer only.
86
+ */
87
+ counter += limit - s->cmp;
88
+ } else {
89
+ /*
90
+ * The compare timer won't fire in this round, and the limit is
91
+ * set to a value below the compare value. This practically means
92
+ * it will never fire, so it can be switched off.
93
+ */
94
+ is_active = false;
95
}
96
- ptimer_set_count(s->timer_cmp, next);
47
}
97
}
48
98
+
49
vsz = vec_full_reg_size(s);
99
+ /*
50
- t = tcg_const_i32(simd_desc(vsz, vsz, 0));
100
+ * Set the compare timer and let it run, or stop it. This is agnostic
51
+ t = tcg_temp_new_i32();
101
+ * of the CR.OCIEN bit, as this bit affects interrupt generation only. The
52
pd = tcg_temp_new_ptr();
102
+ * compare timer needs to run even if no interrupts are to be generated,
53
zn = tcg_temp_new_ptr();
103
+ * because the SR.OCIF bit must be updated also.
54
zm = tcg_temp_new_ptr();
104
+ * Note that the timer might already be stopped or be running with
55
@@ -XXX,XX +XXX,XX @@ static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
105
+ * counter values. However, finding out when an update is needed and
56
tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
106
+ * when not is not trivial. It's much easier applying the setting again,
57
tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
107
+ * as this does not harm either and the overhead is negligible.
58
108
+ */
59
- gen_fn(t, pd, zn, zm, pg, t);
109
+ if (is_active) {
60
+ gen_fn(t, pd, zn, zm, pg, tcg_constant_i32(simd_desc(vsz, vsz, 0)));
110
+ ptimer_set_count(s->timer_cmp, counter);
61
111
+ ptimer_run(s->timer_cmp, is_oneshot ? 1 : 0);
62
tcg_temp_free_ptr(pd);
112
+ } else {
63
tcg_temp_free_ptr(zn);
113
+ ptimer_stop(s->timer_cmp);
64
@@ -XXX,XX +XXX,XX @@ static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
114
+ }
115
+
116
}
117
118
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
119
{
120
- uint32_t freq = 0;
121
uint32_t oldcr = s->cr;
122
123
s->cr = value & 0x03ffffff;
124
125
if (s->cr & CR_SWR) {
126
- /* handle the reset */
127
+ /*
128
+ * Reset clears CR.SWR again. It does not touch CR.EN, but the timers
129
+ * are still stopped because the input clock is disabled.
130
+ */
131
imx_epit_reset(s, false);
132
+ } else {
133
+ uint32_t freq;
134
+ uint32_t toggled_cr_bits = oldcr ^ s->cr;
135
+ /* re-initialize the limits if CR.RLD has changed */
136
+ bool set_limit = toggled_cr_bits & CR_RLD;
137
+ /* set the counter if the timer has just been enabled and CR.ENMOD is set */
138
+ bool is_switched_on = (toggled_cr_bits & s->cr) & CR_EN;
139
+ bool set_counter = is_switched_on && (s->cr & CR_ENMOD);
140
+
141
+ ptimer_transaction_begin(s->timer_cmp);
142
+ ptimer_transaction_begin(s->timer_reload);
143
+ freq = imx_epit_get_freq(s);
144
+ if (freq) {
145
+ ptimer_set_freq(s->timer_reload, freq);
146
+ ptimer_set_freq(s->timer_cmp, freq);
147
+ }
148
+
149
+ if (set_limit || set_counter) {
150
+ uint64_t limit = (s->cr & CR_RLD) ? s->lr : EPIT_TIMER_MAX;
151
+ ptimer_set_limit(s->timer_reload, limit, set_counter ? 1 : 0);
152
+ if (set_limit) {
153
+ ptimer_set_limit(s->timer_cmp, limit, 0);
154
+ }
155
+ }
156
+ /*
157
+ * If there is an input clock and the peripheral is enabled, then
158
+ * ensure the wall clock timer is ticking. Otherwise stop the timers.
159
+ * The compare timer will be updated later.
160
+ */
161
+ if (freq && (s->cr & CR_EN)) {
162
+ ptimer_run(s->timer_reload, 0);
163
+ } else {
164
+ ptimer_stop(s->timer_reload);
165
+ }
166
+ /* Commit changes to reload timer, so they can propagate. */
167
+ ptimer_transaction_commit(s->timer_reload);
168
+ /* Update compare timer based on the committed reload timer value. */
169
+ imx_epit_update_compare_timer(s);
170
+ ptimer_transaction_commit(s->timer_cmp);
65
}
171
}
66
172
67
vsz = vec_full_reg_size(s);
173
/*
68
- t = tcg_const_i32(simd_desc(vsz, vsz, a->imm));
174
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
69
+ t = tcg_temp_new_i32();
175
* - write to CR.EN or CR.OCIE
70
pd = tcg_temp_new_ptr();
176
*/
71
zn = tcg_temp_new_ptr();
177
imx_epit_update_int(s);
72
pg = tcg_temp_new_ptr();
178
-
73
@@ -XXX,XX +XXX,XX @@ static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
179
- /*
74
tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
180
- * TODO: could we 'break' here for reset? following operations appear
75
tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
181
- * to duplicate the work imx_epit_reset() already did.
76
182
- */
77
- gen_fn(t, pd, zn, pg, t);
183
-
78
+ gen_fn(t, pd, zn, pg, tcg_constant_i32(simd_desc(vsz, vsz, a->imm)));
184
- ptimer_transaction_begin(s->timer_cmp);
79
185
- ptimer_transaction_begin(s->timer_reload);
80
tcg_temp_free_ptr(pd);
186
-
81
tcg_temp_free_ptr(zn);
187
- /*
188
- * Update the frequency. In case of a reset the input clock was
189
- * switched off, so this can be skipped.
190
- */
191
- if (!(s->cr & CR_SWR)) {
192
- freq = imx_epit_get_freq(s);
193
- if (freq) {
194
- ptimer_set_freq(s->timer_reload, freq);
195
- ptimer_set_freq(s->timer_cmp, freq);
196
- }
197
- }
198
-
199
- if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
200
- if (s->cr & CR_ENMOD) {
201
- if (s->cr & CR_RLD) {
202
- ptimer_set_limit(s->timer_reload, s->lr, 1);
203
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
204
- } else {
205
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
206
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
207
- }
208
- }
209
-
210
- imx_epit_reload_compare_timer(s);
211
- ptimer_run(s->timer_reload, 0);
212
- if (s->cr & CR_OCIEN) {
213
- ptimer_run(s->timer_cmp, 0);
214
- } else {
215
- ptimer_stop(s->timer_cmp);
216
- }
217
- } else if (!(s->cr & CR_EN)) {
218
- /* stop both timers */
219
- ptimer_stop(s->timer_reload);
220
- ptimer_stop(s->timer_cmp);
221
- } else if (s->cr & CR_OCIEN) {
222
- if (!(oldcr & CR_OCIEN)) {
223
- imx_epit_reload_compare_timer(s);
224
- ptimer_run(s->timer_cmp, 0);
225
- }
226
- } else {
227
- ptimer_stop(s->timer_cmp);
228
- }
229
-
230
- ptimer_transaction_commit(s->timer_cmp);
231
- ptimer_transaction_commit(s->timer_reload);
232
}
233
234
static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
235
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
236
/* If IOVW bit is set then set the timer value */
237
ptimer_set_count(s->timer_reload, s->lr);
238
}
239
- /*
240
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
241
- * the timer interrupt may not fire properly. The commit must happen
242
- * before calling imx_epit_reload_compare_timer(), which reads
243
- * s->timer_reload internally again.
244
- */
245
+ /* Commit the changes to s->timer_reload, so they can propagate. */
246
ptimer_transaction_commit(s->timer_reload);
247
- imx_epit_reload_compare_timer(s);
248
+ /* Update the compare timer based on the committed reload timer value. */
249
+ imx_epit_update_compare_timer(s);
250
ptimer_transaction_commit(s->timer_cmp);
251
}
252
253
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
254
{
255
s->cmp = value;
256
257
+ /* Update the compare timer based on the committed reload timer value. */
258
ptimer_transaction_begin(s->timer_cmp);
259
- imx_epit_reload_compare_timer(s);
260
+ imx_epit_update_compare_timer(s);
261
ptimer_transaction_commit(s->timer_cmp);
262
}
263
264
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
265
{
266
IMXEPITState *s = IMX_EPIT(opaque);
267
268
+ /* The cmp ptimer can't be running when the peripheral is disabled */
269
+ assert(s->cr & CR_EN);
270
+
271
DPRINTF("sr was %d\n", s->sr);
272
/* Set interrupt status bit SR.OCIF and update the interrupt state */
273
s->sr |= SR_OCIF;
82
--
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Fix these:
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
5
Message-id: 20220426163043.100432-30-richard.henderson@linaro.org
5
WARNING: Block comments use a leading /* on a separate line
6
WARNING: Block comments use * on subsequent lines
7
WARNING: Block comments use a trailing */ on a separate line
8
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Reviewed-by: Claudio Fontana <cfontana@suse.de>
11
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
12
Message-id: 20221213190537.511-2-farosas@suse.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
14
---
8
target/arm/translate.c | 11 +++--------
15
target/arm/helper.c | 323 +++++++++++++++++++++++++++++---------------
9
1 file changed, 3 insertions(+), 8 deletions(-)
16
1 file changed, 215 insertions(+), 108 deletions(-)
10
17
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
20
--- a/target/arm/helper.c
14
+++ b/target/arm/translate.c
21
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_ADR(DisasContext *s, arg_ri *a)
22
@@ -XXX,XX +XXX,XX @@ uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri)
16
23
static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
17
static bool trans_MOVW(DisasContext *s, arg_MOVW *a)
24
uint64_t v)
18
{
25
{
19
- TCGv_i32 tmp;
26
- /* Raw write of a coprocessor register (as needed for migration, etc).
20
-
27
+ /*
21
if (!ENABLE_ARCH_6T2) {
28
+ * Raw write of a coprocessor register (as needed for migration, etc).
22
return false;
29
* Note that constant registers are treated as write-ignored; the
23
}
30
* caller should check for success by whether a readback gives the
24
31
* value written.
25
- tmp = tcg_const_i32(a->imm);
32
@@ -XXX,XX +XXX,XX @@ static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
26
- store_reg(s, a->rd, tmp);
33
27
+ store_reg(s, a->rd, tcg_constant_i32(a->imm));
34
static bool raw_accessors_invalid(const ARMCPRegInfo *ri)
28
return true;
35
{
29
}
36
- /* Return true if the regdef would cause an assertion if you called
30
37
+ /*
31
@@ -XXX,XX +XXX,XX @@ static bool trans_UMAAL(DisasContext *s, arg_UMAAL *a)
38
+ * Return true if the regdef would cause an assertion if you called
32
t0 = load_reg(s, a->rm);
39
* read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a
33
t1 = load_reg(s, a->rn);
40
* program bug for it not to have the NO_RAW flag).
34
tcg_gen_mulu2_i32(t0, t1, t0, t1);
41
* NB that returning false here doesn't necessarily mean that calling
35
- zero = tcg_const_i32(0);
42
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
36
+ zero = tcg_constant_i32(0);
43
if (ri->type & ARM_CP_NO_RAW) {
37
t2 = load_reg(s, a->ra);
44
continue;
38
tcg_gen_add2_i32(t0, t1, t0, t1, t2, zero);
45
}
39
tcg_temp_free_i32(t2);
46
- /* Write value and confirm it reads back as written
40
t2 = load_reg(s, a->rd);
47
+ /*
41
tcg_gen_add2_i32(t0, t1, t0, t1, t2, zero);
48
+ * Write value and confirm it reads back as written
42
tcg_temp_free_i32(t2);
49
* (to catch read-only registers and partially read-only
43
- tcg_temp_free_i32(zero);
50
* registers where the incoming migration value doesn't match)
44
store_reg(s, a->ra, t0);
51
*/
45
store_reg(s, a->rd, t1);
52
@@ -XXX,XX +XXX,XX @@ static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
46
return true;
53
47
@@ -XXX,XX +XXX,XX @@ static bool op_crc32(DisasContext *s, arg_rrr *a, bool c, MemOp sz)
54
void init_cpreg_list(ARMCPU *cpu)
55
{
56
- /* Initialise the cpreg_tuples[] array based on the cp_regs hash.
57
+ /*
58
+ * Initialise the cpreg_tuples[] array based on the cp_regs hash.
59
* Note that we require cpreg_tuples[] to be sorted by key ID.
60
*/
61
GList *keys;
62
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_el3_aa32ns(CPUARMState *env,
63
return CP_ACCESS_OK;
64
}
65
66
-/* Some secure-only AArch32 registers trap to EL3 if used from
67
+/*
68
+ * Some secure-only AArch32 registers trap to EL3 if used from
69
* Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts).
70
* Note that an access from Secure EL1 can only happen if EL3 is AArch64.
71
* We assume that the .access field is set to PL1_RW.
72
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
73
return CP_ACCESS_TRAP_UNCATEGORIZED;
74
}
75
76
-/* Check for traps to performance monitor registers, which are controlled
77
+/*
78
+ * Check for traps to performance monitor registers, which are controlled
79
* by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3.
80
*/
81
static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
82
@@ -XXX,XX +XXX,XX @@ static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
83
ARMCPU *cpu = env_archcpu(env);
84
85
if (raw_read(env, ri) != value) {
86
- /* Unlike real hardware the qemu TLB uses virtual addresses,
87
+ /*
88
+ * Unlike real hardware the qemu TLB uses virtual addresses,
89
* not modified virtual addresses, so this causes a TLB flush.
90
*/
91
tlb_flush(CPU(cpu));
92
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
93
94
if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
95
&& !extended_addresses_enabled(env)) {
96
- /* For VMSA (when not using the LPAE long descriptor page table
97
+ /*
98
+ * For VMSA (when not using the LPAE long descriptor page table
99
* format) this register includes the ASID, so do a TLB flush.
100
* For PMSA it is purely a process ID and no action is needed.
101
*/
102
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
}
104
105
static const ARMCPRegInfo cp_reginfo[] = {
106
- /* Define the secure and non-secure FCSE identifier CP registers
107
+ /*
108
+ * Define the secure and non-secure FCSE identifier CP registers
109
* separately because there is no secure bank in V8 (no _EL3). This allows
110
* the secure register to be properly reset and migrated. There is also no
111
* v8 EL1 version of the register so the non-secure instance stands alone.
112
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
113
.access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
114
.fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s),
115
.resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, },
116
- /* Define the secure and non-secure context identifier CP registers
117
+ /*
118
+ * Define the secure and non-secure context identifier CP registers
119
* separately because there is no secure bank in V8 (no _EL3). This allows
120
* the secure register to be properly reset and migrated. In the
121
* non-secure case, the 32-bit register will have reset and migration
122
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
123
};
124
125
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
126
- /* NB: Some of these registers exist in v8 but with more precise
127
+ /*
128
+ * NB: Some of these registers exist in v8 but with more precise
129
* definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]).
130
*/
131
/* MMU Domain access control / MPU write buffer control */
132
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
133
.writefn = dacr_write, .raw_writefn = raw_write,
134
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s),
135
offsetoflow32(CPUARMState, cp15.dacr_ns) } },
136
- /* ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
137
+ /*
138
+ * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
139
* For v6 and v5, these mappings are overly broad.
140
*/
141
{ .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0,
142
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
143
};
144
145
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
146
- /* Not all pre-v6 cores implemented this WFI, so this is slightly
147
+ /*
148
+ * Not all pre-v6 cores implemented this WFI, so this is slightly
149
* over-broad.
150
*/
151
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
152
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
153
};
154
155
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
156
- /* Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
157
+ /*
158
+ * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
159
* is UNPREDICTABLE; we choose to NOP as most implementations do).
160
*/
161
{ .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
162
.access = PL1_W, .type = ARM_CP_WFI },
163
- /* L1 cache lockdown. Not architectural in v6 and earlier but in practice
164
+ /*
165
+ * L1 cache lockdown. Not architectural in v6 and earlier but in practice
166
* implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and
167
* OMAPCP will override this space.
168
*/
169
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
170
{ .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY,
171
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
172
.resetvalue = 0 },
173
- /* We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
174
+ /*
175
+ * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
176
* implementing it as RAZ means the "debug architecture version" bits
177
* will read as a reserved value, which should cause Linux to not try
178
* to use the debug hardware.
179
*/
180
{ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
181
.access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 },
182
- /* MMU TLB control. Note that the wildcarding means we cover not just
183
+ /*
184
+ * MMU TLB control. Note that the wildcarding means we cover not just
185
* the unified TLB ops but also the dside/iside/inner-shareable variants.
186
*/
187
{ .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
188
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
190
/* In ARMv8 most bits of CPACR_EL1 are RES0. */
191
if (!arm_feature(env, ARM_FEATURE_V8)) {
192
- /* ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
193
+ /*
194
+ * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
195
* ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP.
196
* TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell.
197
*/
198
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
199
value |= R_CPACR_ASEDIS_MASK;
200
}
201
202
- /* VFPv3 and upwards with NEON implement 32 double precision
203
+ /*
204
+ * VFPv3 and upwards with NEON implement 32 double precision
205
* registers (D0-D31).
206
*/
207
if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
208
@@ -XXX,XX +XXX,XX @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri)
209
210
static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
211
{
212
- /* Call cpacr_write() so that we reset with the correct RAO bits set
213
+ /*
214
+ * Call cpacr_write() so that we reset with the correct RAO bits set
215
* for our CPU features.
216
*/
217
cpacr_write(env, ri, 0);
218
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
219
{ .name = "MVA_prefetch",
220
.cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1,
221
.access = PL1_W, .type = ARM_CP_NOP },
222
- /* We need to break the TB after ISB to execute self-modifying code
223
+ /*
224
+ * We need to break the TB after ISB to execute self-modifying code
225
* correctly and also to take any pending interrupts immediately.
226
* So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag.
227
*/
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
229
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s),
230
offsetof(CPUARMState, cp15.ifar_ns) },
231
.resetvalue = 0, },
232
- /* Watchpoint Fault Address Register : should actually only be present
233
+ /*
234
+ * Watchpoint Fault Address Register : should actually only be present
235
* for 1136, 1176, 11MPCore.
236
*/
237
{ .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1,
238
@@ -XXX,XX +XXX,XX @@ static bool event_supported(uint16_t number)
239
static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
240
bool isread)
241
{
242
- /* Performance monitor registers user accessibility is controlled
243
+ /*
244
+ * Performance monitor registers user accessibility is controlled
245
* by PMUSERENR. MDCR_EL2.TPM and MDCR_EL3.TPM allow configurable
246
* trapping to EL2 or EL3 for other accesses.
247
*/
248
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access_ccntr(CPUARMState *env,
249
(MDCR_HPME | MDCR_HPMD | MDCR_HPMN | MDCR_HCCD | MDCR_HLP)
250
#define MDCR_EL3_PMU_ENABLE_BITS (MDCR_SPME | MDCR_SCCD)
251
252
-/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
253
+/*
254
+ * Returns true if the counter (pass 31 for PMCCNTR) should count events using
255
* the current EL, security state, and register configuration.
256
*/
257
static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
258
@@ -XXX,XX +XXX,XX @@ static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
259
static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
260
uint64_t value)
261
{
262
- /* The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
263
+ /*
264
+ * The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
265
* PMXEVCNTR. We allow [0..31] to be written to PMSELR here; in the
266
* meanwhile, we check PMSELR.SEL when PMXEVTYPER and PMXEVCNTR are
267
* accessed.
268
@@ -XXX,XX +XXX,XX @@ static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
269
env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK;
270
pmevcntr_op_finish(env, counter);
271
}
272
- /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
273
+ /*
274
+ * Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
275
* PMSELR value is equal to or greater than the number of implemented
276
* counters, but not equal to 0x1f. We opt to behave as a RAZ/WI.
277
*/
278
@@ -XXX,XX +XXX,XX @@ static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri,
279
}
280
return ret;
281
} else {
282
- /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
283
- * are CONSTRAINED UNPREDICTABLE. */
284
+ /*
285
+ * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
286
+ * are CONSTRAINED UNPREDICTABLE.
287
+ */
288
return 0;
289
}
290
}
291
@@ -XXX,XX +XXX,XX @@ static void pmintenclr_write(CPUARMState *env, const ARMCPRegInfo *ri,
292
static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
293
uint64_t value)
294
{
295
- /* Note that even though the AArch64 view of this register has bits
296
+ /*
297
+ * Note that even though the AArch64 view of this register has bits
298
* [10:0] all RES0 we can only mask the bottom 5, to comply with the
299
* architectural requirements for bits which are RES0 only in some
300
* contexts. (ARMv8 would permit us to do no masking at all, but ARMv7
301
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
302
if (!arm_feature(env, ARM_FEATURE_EL2)) {
303
valid_mask &= ~SCR_HCE;
304
305
- /* On ARMv7, SMD (or SCD as it is called in v7) is only
306
+ /*
307
+ * On ARMv7, SMD (or SCD as it is called in v7) is only
308
* supported if EL2 exists. The bit is UNK/SBZP when
309
* EL2 is unavailable. In QEMU ARMv7, we force it to always zero
310
* when EL2 is unavailable.
311
@@ -XXX,XX +XXX,XX @@ static uint64_t ccsidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
312
{
313
ARMCPU *cpu = env_archcpu(env);
314
315
- /* Acquire the CSSELR index from the bank corresponding to the CCSIDR
316
+ /*
317
+ * Acquire the CSSELR index from the bank corresponding to the CCSIDR
318
* bank
319
*/
320
uint32_t index = A32_BANKED_REG_GET(env, csselr,
321
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
322
/* the old v6 WFI, UNPREDICTABLE in v7 but we choose to NOP */
323
{ .name = "NOP", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
324
.access = PL1_W, .type = ARM_CP_NOP },
325
- /* Performance monitors are implementation defined in v7,
326
+ /*
327
+ * Performance monitors are implementation defined in v7,
328
* but with an ARM recommended set of registers, which we
329
* follow.
330
*
331
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
332
.writefn = csselr_write, .resetvalue = 0,
333
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
334
offsetof(CPUARMState, cp15.csselr_ns) } },
335
- /* Auxiliary ID register: this actually has an IMPDEF value but for now
336
+ /*
337
+ * Auxiliary ID register: this actually has an IMPDEF value but for now
338
* just RAZ for all cores:
339
*/
340
{ .name = "AIDR", .state = ARM_CP_STATE_BOTH,
341
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
342
.access = PL1_R, .type = ARM_CP_CONST,
343
.accessfn = access_aa64_tid1,
344
.resetvalue = 0 },
345
- /* Auxiliary fault status registers: these also are IMPDEF, and we
346
+ /*
347
+ * Auxiliary fault status registers: these also are IMPDEF, and we
348
* choose to RAZ/WI for all cores.
349
*/
350
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
351
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
352
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
353
.access = PL1_RW, .accessfn = access_tvm_trvm,
354
.type = ARM_CP_CONST, .resetvalue = 0 },
355
- /* MAIR can just read-as-written because we don't implement caches
356
+ /*
357
+ * MAIR can just read-as-written because we don't implement caches
358
* and so don't need to care about memory attributes.
359
*/
360
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
361
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
362
.opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0,
363
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[3]),
364
.resetvalue = 0 },
365
- /* For non-long-descriptor page tables these are PRRR and NMRR;
366
+ /*
367
+ * For non-long-descriptor page tables these are PRRR and NMRR;
368
* regardless they still act as reads-as-written for QEMU.
369
*/
370
- /* MAIR0/1 are defined separately from their 64-bit counterpart which
371
+ /*
372
+ * MAIR0/1 are defined separately from their 64-bit counterpart which
373
* allows them to assign the correct fieldoffset based on the endianness
374
* handled in the field definitions.
375
*/
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
377
static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri,
378
bool isread)
379
{
380
- /* CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
381
+ /*
382
+ * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
383
* Writable only at the highest implemented exception level.
384
*/
385
int el = arm_current_el(env);
386
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
387
const ARMCPRegInfo *ri,
388
bool isread)
389
{
390
- /* The AArch64 register view of the secure physical timer is
391
+ /*
392
+ * The AArch64 register view of the secure physical timer is
393
* always accessible from EL3, and configurably accessible from
394
* Secure EL1.
395
*/
396
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
397
ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
398
399
if (gt->ctl & 1) {
400
- /* Timer enabled: calculate and set current ISTATUS, irq, and
401
+ /*
402
+ * Timer enabled: calculate and set current ISTATUS, irq, and
403
* reset timer to when ISTATUS next has to change
404
*/
405
uint64_t offset = timeridx == GTIMER_VIRT ?
406
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
407
/* Next transition is when we hit cval */
408
nexttick = gt->cval + offset;
409
}
410
- /* Note that the desired next expiry time might be beyond the
411
+ /*
412
+ * Note that the desired next expiry time might be beyond the
413
* signed-64-bit range of a QEMUTimer -- in this case we just
414
* set the timer for as far in the future as possible. When the
415
* timer expires we will reset the timer for any remaining period.
416
@@ -XXX,XX +XXX,XX @@ static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
417
/* Enable toggled */
418
gt_recalc_timer(cpu, timeridx);
419
} else if ((oldval ^ value) & 2) {
420
- /* IMASK toggled: don't need to recalculate,
421
+ /*
422
+ * IMASK toggled: don't need to recalculate,
423
* just set the interrupt line based on ISTATUS
424
*/
425
int irqstate = (oldval & 4) && !(value & 2);
426
@@ -XXX,XX +XXX,XX @@ static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque)
427
}
428
429
static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
430
- /* Note that CNTFRQ is purely reads-as-written for the benefit
431
+ /*
432
+ * Note that CNTFRQ is purely reads-as-written for the benefit
433
* of software; writing it doesn't actually change the timer frequency.
434
* Our reset value matches the fixed frequency we implement the timer at.
435
*/
436
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
437
.readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read,
438
.writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write,
439
},
440
- /* Secure timer -- this is actually restricted to only EL3
441
+ /*
442
+ * Secure timer -- this is actually restricted to only EL3
443
* and configurably Secure-EL1 via the accessfn.
444
*/
445
{ .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64,
446
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
447
448
#else
449
450
-/* In user-mode most of the generic timer registers are inaccessible
451
+/*
452
+ * In user-mode most of the generic timer registers are inaccessible
453
* however modern kernels (4.12+) allow access to cntvct_el0
454
*/
455
456
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
457
{
458
ARMCPU *cpu = env_archcpu(env);
459
460
- /* Currently we have no support for QEMUTimer in linux-user so we
461
+ /*
462
+ * Currently we have no support for QEMUTimer in linux-user so we
463
* can't call gt_get_countervalue(env), instead we directly
464
* call the lower level functions.
465
*/
466
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
467
bool isread)
468
{
469
if (ri->opc2 & 4) {
470
- /* The ATS12NSO* operations must trap to EL3 or EL2 if executed in
471
+ /*
472
+ * The ATS12NSO* operations must trap to EL3 or EL2 if executed in
473
* Secure EL1 (which can only happen if EL3 is AArch64).
474
* They are simply UNDEF if executed from NS EL1.
475
* They function normally from EL2 or EL3.
476
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
477
}
478
}
479
} else {
480
- /* fsr is a DFSR/IFSR value for the short descriptor
481
+ /*
482
+ * fsr is a DFSR/IFSR value for the short descriptor
483
* translation table format (with WnR always clear).
484
* Convert it to a 32-bit PAR.
485
*/
486
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
487
};
488
489
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
490
- /* Reset for all these registers is handled in arm_cpu_reset(),
491
+ /*
492
+ * Reset for all these registers is handled in arm_cpu_reset(),
493
* because the PMSAv7 is also used by M-profile CPUs, which do
494
* not register cpregs but still need the state to be reset.
495
*/
496
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
497
}
498
499
if (arm_feature(env, ARM_FEATURE_LPAE)) {
500
- /* With LPAE the TTBCR could result in a change of ASID
501
+ /*
502
+ * With LPAE the TTBCR could result in a change of ASID
503
* via the TTBCR.A1 bit, so do a TLB flush.
504
*/
505
tlb_flush(CPU(cpu));
506
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
507
offsetoflow32(CPUARMState, cp15.tcr_el[1])} },
508
};
509
510
-/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
511
+/*
512
+ * Note that unlike TTBCR, writing to TTBCR2 does not require flushing
513
* qemu tlbs nor adjusting cached masks.
514
*/
515
static const ARMCPRegInfo ttbcr2_reginfo = {
516
@@ -XXX,XX +XXX,XX @@ static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri,
517
static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri,
518
uint64_t value)
519
{
520
- /* On OMAP there are registers indicating the max/min index of dcache lines
521
+ /*
522
+ * On OMAP there are registers indicating the max/min index of dcache lines
523
* containing a dirty line; cache flush operations have to reset these.
524
*/
525
env->cp15.c15_i_max = 0x000;
526
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
527
.crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW,
528
.type = ARM_CP_NO_RAW,
529
.readfn = arm_cp_read_zero, .writefn = omap_wfi_write, },
530
- /* TODO: Peripheral port remap register:
531
+ /*
532
+ * TODO: Peripheral port remap register:
533
* On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller
534
* base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff),
535
* when MMU is off.
536
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
537
.cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW,
538
.fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr),
539
.resetvalue = 0, },
540
- /* XScale specific cache-lockdown: since we have no cache we NOP these
541
+ /*
542
+ * XScale specific cache-lockdown: since we have no cache we NOP these
543
* and hope the guest does not really rely on cache behaviour.
544
*/
545
{ .name = "XSCALE_LOCK_ICACHE_LINE",
546
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
547
};
548
549
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
550
- /* RAZ/WI the whole crn=15 space, when we don't have a more specific
551
+ /*
552
+ * RAZ/WI the whole crn=15 space, when we don't have a more specific
553
* implementation of this implementation-defined space.
554
* Ideally this should eventually disappear in favour of actually
555
* implementing the correct behaviour for all cores.
556
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
557
};
558
559
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
560
- /* The cache test-and-clean instructions always return (1 << 30)
561
+ /*
562
+ * The cache test-and-clean instructions always return (1 << 30)
563
* to indicate that there are no dirty cache lines.
564
*/
565
{ .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3,
566
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read_val(CPUARMState *env)
567
568
if (arm_feature(env, ARM_FEATURE_V7MP)) {
569
mpidr |= (1U << 31);
570
- /* Cores which are uniprocessor (non-coherent)
571
+ /*
572
+ * Cores which are uniprocessor (non-coherent)
573
* but still implement the MP extensions set
574
* bit 30. (For instance, Cortex-R5).
575
*/
576
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
577
return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
578
}
579
580
-/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
581
+/*
582
+ * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
583
* Page D4-1736 (DDI0487A.b)
584
*/
585
586
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
587
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
588
uint64_t value)
589
{
590
- /* Invalidate by VA, EL2
591
+ /*
592
+ * Invalidate by VA, EL2
593
* Currently handles both VAE2 and VALE2, since we don't support
594
* flush-last-level-only.
595
*/
596
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
597
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
598
uint64_t value)
599
{
600
- /* Invalidate by VA, EL3
601
+ /*
602
+ * Invalidate by VA, EL3
603
* Currently handles both VAE3 and VALE3, since we don't support
604
* flush-last-level-only.
605
*/
606
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
607
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
608
uint64_t value)
609
{
610
- /* Invalidate by VA, EL1&0 (AArch64 version).
611
+ /*
612
+ * Invalidate by VA, EL1&0 (AArch64 version).
613
* Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
614
* since we don't support flush-for-specific-ASID-only or
615
* flush-last-level-only.
616
@@ -XXX,XX +XXX,XX @@ static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
617
bool isread)
618
{
619
if (!(env->pstate & PSTATE_SP)) {
620
- /* Access to SP_EL0 is undefined if it's being used as
621
+ /*
622
+ * Access to SP_EL0 is undefined if it's being used as
623
* the stack pointer.
624
*/
625
return CP_ACCESS_TRAP_UNCATEGORIZED;
626
@@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
627
}
628
629
if (raw_read(env, ri) == value) {
630
- /* Skip the TLB flush if nothing actually changed; Linux likes
631
+ /*
632
+ * Skip the TLB flush if nothing actually changed; Linux likes
633
* to do a lot of pointless SCTLR writes.
634
*/
635
return;
636
@@ -XXX,XX +XXX,XX @@ static void mdcr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
637
}
638
639
static const ARMCPRegInfo v8_cp_reginfo[] = {
640
- /* Minimal set of EL0-visible registers. This will need to be expanded
641
+ /*
642
+ * Minimal set of EL0-visible registers. This will need to be expanded
643
* significantly for system emulation of AArch64 CPUs.
644
*/
645
{ .name = "NZCV", .state = ARM_CP_STATE_AA64,
646
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
647
.opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0,
648
.access = PL1_RW,
649
.fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) },
650
- /* We rely on the access checks not allowing the guest to write to the
651
+ /*
652
+ * We rely on the access checks not allowing the guest to write to the
653
* state field when SPSel indicates that it's being used as the stack
654
* pointer.
655
*/
656
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
657
if (arm_feature(env, ARM_FEATURE_EL3)) {
658
valid_mask &= ~HCR_HCD;
659
} else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
660
- /* Architecturally HCR.TSC is RES0 if EL3 is not implemented.
661
+ /*
662
+ * Architecturally HCR.TSC is RES0 if EL3 is not implemented.
663
* However, if we're using the SMC PSCI conduit then QEMU is
664
* effectively acting like EL3 firmware and so the guest at
665
* EL2 should retain the ability to prevent EL1 from being
666
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
667
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
668
.writefn = tlbi_aa64_vae2is_write },
669
#ifndef CONFIG_USER_ONLY
670
- /* Unlike the other EL2-related AT operations, these must
671
+ /*
672
+ * Unlike the other EL2-related AT operations, these must
673
* UNDEF from EL3 if EL2 is not implemented, which is why we
674
* define them here rather than with the rest of the AT ops.
675
*/
676
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
677
.access = PL2_W, .accessfn = at_s1e2_access,
678
.type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
679
.writefn = ats_write64 },
680
- /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
681
+ /*
682
+ * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
683
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
684
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
685
* to behave as if SCR.NS was 1.
686
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
687
.writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
688
{ .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
689
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
690
- /* ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
691
+ /*
692
+ * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
693
* reset values as IMPDEF. We choose to reset to 3 to comply with
694
* both ARMv7 and ARMv8.
695
*/
696
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
697
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
698
bool isread)
699
{
700
- /* The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
701
+ /*
702
+ * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
703
* At Secure EL1 it traps to EL3 or EL2.
704
*/
705
if (arm_current_el(env) == 3) {
706
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
707
}
708
}
709
710
-/* We don't know until after realize whether there's a GICv3
711
+/*
712
+ * We don't know until after realize whether there's a GICv3
713
* attached, and that is what registers the gicv3 sysregs.
714
* So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1
715
* at runtime.
716
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
717
}
718
#endif
719
720
-/* Shared logic between LORID and the rest of the LOR* registers.
721
+/*
722
+ * Shared logic between LORID and the rest of the LOR* registers.
723
* Secure state exclusion has already been dealt with.
724
*/
725
static CPAccessResult access_lor_ns(CPUARMState *env,
726
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
727
728
define_arm_cp_regs(cpu, cp_reginfo);
729
if (!arm_feature(env, ARM_FEATURE_V8)) {
730
- /* Must go early as it is full of wildcards that may be
731
+ /*
732
+ * Must go early as it is full of wildcards that may be
733
* overridden by later definitions.
734
*/
735
define_arm_cp_regs(cpu, not_v8_cp_reginfo);
736
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
737
.access = PL1_R, .type = ARM_CP_CONST,
738
.accessfn = access_aa32_tid3,
739
.resetvalue = cpu->isar.id_pfr0 },
740
- /* ID_PFR1 is not a plain ARM_CP_CONST because we don't know
741
+ /*
742
+ * ID_PFR1 is not a plain ARM_CP_CONST because we don't know
743
* the value of the GIC field until after we define these regs.
744
*/
745
{ .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH,
746
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
747
748
define_arm_cp_regs(cpu, el3_regs);
749
}
750
- /* The behaviour of NSACR is sufficiently various that we don't
751
+ /*
752
+ * The behaviour of NSACR is sufficiently various that we don't
753
* try to describe it in a single reginfo:
754
* if EL3 is 64 bit, then trap to EL3 from S EL1,
755
* reads as constant 0xc00 from NS EL1 and NS EL2
756
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
757
if (cpu_isar_feature(aa32_jazelle, cpu)) {
758
define_arm_cp_regs(cpu, jazelle_regs);
759
}
760
- /* Slightly awkwardly, the OMAP and StrongARM cores need all of
761
+ /*
762
+ * Slightly awkwardly, the OMAP and StrongARM cores need all of
763
* cp15 crn=0 to be writes-ignored, whereas for other cores they should
764
* be read-only (ie write causes UNDEF exception).
765
*/
766
{
767
ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = {
768
- /* Pre-v8 MIDR space.
769
+ /*
770
+ * Pre-v8 MIDR space.
771
* Note that the MIDR isn't a simple constant register because
772
* of the TI925 behaviour where writes to another register can
773
* cause the MIDR value to change.
774
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
775
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
776
arm_feature(env, ARM_FEATURE_STRONGARM)) {
777
size_t i;
778
- /* Register the blanket "writes ignored" value first to cover the
779
+ /*
780
+ * Register the blanket "writes ignored" value first to cover the
781
* whole space. Then update the specific ID registers to allow write
782
* access, so that they ignore writes rather than causing them to
783
* UNDEF.
784
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
785
.raw_writefn = raw_write,
786
};
787
if (arm_feature(env, ARM_FEATURE_XSCALE)) {
788
- /* Normally we would always end the TB on an SCTLR write, but Linux
789
+ /*
790
+ * Normally we would always end the TB on an SCTLR write, but Linux
791
* arch/arm/mach-pxa/sleep.S expects two instructions following
792
* an MMU enable to execute from cache. Imitate this behaviour.
793
*/
794
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
795
void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
796
const ARMCPRegInfo *r, void *opaque)
797
{
798
- /* Define implementations of coprocessor registers.
799
+ /*
800
+ * Define implementations of coprocessor registers.
801
* We store these in a hashtable because typically
802
* there are less than 150 registers in a space which
803
* is 16*16*16*8*8 = 262144 in size.
804
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
48
default:
805
default:
49
g_assert_not_reached();
806
g_assert_not_reached();
50
}
807
}
51
- t3 = tcg_const_i32(1 << sz);
808
- /* The AArch64 pseudocode CheckSystemAccess() specifies that op1
52
+ t3 = tcg_constant_i32(1 << sz);
809
+ /*
53
if (c) {
810
+ * The AArch64 pseudocode CheckSystemAccess() specifies that op1
54
gen_helper_crc32c(t1, t1, t2, t3);
811
* encodes a minimum access level for the register. We roll this
812
* runtime check into our general permission check code, so check
813
* here that the reginfo's specified permissions are strict enough
814
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
815
assert((r->access & ~mask) == 0);
816
}
817
818
- /* Check that the register definition has enough info to handle
819
+ /*
820
+ * Check that the register definition has enough info to handle
821
* reads and writes if they are permitted.
822
*/
823
if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
824
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
825
continue;
826
}
827
if (state == ARM_CP_STATE_AA32) {
828
- /* Under AArch32 CP registers can be common
829
+ /*
830
+ * Under AArch32 CP registers can be common
831
* (same for secure and non-secure world) or banked.
832
*/
833
char *name;
834
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
835
g_assert_not_reached();
836
}
837
} else {
838
- /* AArch64 registers get mapped to non-secure instance
839
- * of AArch32 */
840
+ /*
841
+ * AArch64 registers get mapped to non-secure instance
842
+ * of AArch32
843
+ */
844
add_cpreg_to_hashtable(cpu, r, opaque, state,
845
ARM_CP_SECSTATE_NS,
846
crm, opc1, opc2, r->name);
847
@@ -XXX,XX +XXX,XX @@ void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque)
848
849
static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
850
{
851
- /* Return true if it is not valid for us to switch to
852
+ /*
853
+ * Return true if it is not valid for us to switch to
854
* this CPU mode (ie all the UNPREDICTABLE cases in
855
* the ARM ARM CPSRWriteByInstr pseudocode).
856
*/
857
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
858
case ARM_CPU_MODE_UND:
859
case ARM_CPU_MODE_IRQ:
860
case ARM_CPU_MODE_FIQ:
861
- /* Note that we don't implement the IMPDEF NSACR.RFR which in v7
862
+ /*
863
+ * Note that we don't implement the IMPDEF NSACR.RFR which in v7
864
* allows FIQ mode to be Secure-only. (In v8 this doesn't exist.)
865
*/
866
- /* If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
867
+ /*
868
+ * If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
869
* and CPS are treated as illegal mode changes.
870
*/
871
if (write_type == CPSRWriteByInstr &&
872
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
873
env->GE = (val >> 16) & 0xf;
874
}
875
876
- /* In a V7 implementation that includes the security extensions but does
877
+ /*
878
+ * In a V7 implementation that includes the security extensions but does
879
* not include Virtualization Extensions the SCR.FW and SCR.AW bits control
880
* whether non-secure software is allowed to change the CPSR_F and CPSR_A
881
* bits respectively.
882
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
883
changed_daif = (env->daif ^ val) & mask;
884
885
if (changed_daif & CPSR_A) {
886
- /* Check to see if we are allowed to change the masking of async
887
+ /*
888
+ * Check to see if we are allowed to change the masking of async
889
* abort exceptions from a non-secure state.
890
*/
891
if (!(env->cp15.scr_el3 & SCR_AW)) {
892
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
893
}
894
895
if (changed_daif & CPSR_F) {
896
- /* Check to see if we are allowed to change the masking of FIQ
897
+ /*
898
+ * Check to see if we are allowed to change the masking of FIQ
899
* exceptions from a non-secure state.
900
*/
901
if (!(env->cp15.scr_el3 & SCR_FW)) {
902
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
903
mask &= ~CPSR_F;
904
}
905
906
- /* Check whether non-maskable FIQ (NMFI) support is enabled.
907
+ /*
908
+ * Check whether non-maskable FIQ (NMFI) support is enabled.
909
* If this bit is set software is not allowed to mask
910
* FIQs, but is allowed to set CPSR_F to 0.
911
*/
912
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
913
if (write_type != CPSRWriteRaw &&
914
((env->uncached_cpsr ^ val) & mask & CPSR_M)) {
915
if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) {
916
- /* Note that we can only get here in USR mode if this is a
917
+ /*
918
+ * Note that we can only get here in USR mode if this is a
919
* gdb stub write; for this case we follow the architectural
920
* behaviour for guest writes in USR mode of ignoring an attempt
921
* to switch mode. (Those are caught by translate.c for writes
922
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
923
*/
924
mask &= ~CPSR_M;
925
} else if (bad_mode_switch(env, val & CPSR_M, write_type)) {
926
- /* Attempt to switch to an invalid mode: this is UNPREDICTABLE in
927
+ /*
928
+ * Attempt to switch to an invalid mode: this is UNPREDICTABLE in
929
* v7, and has defined behaviour in v8:
930
* + leave CPSR.M untouched
931
* + allow changes to the other CPSR fields
932
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
933
env->regs[14] = env->banked_r14[r14_bank_number(mode)];
934
}
935
936
-/* Physical Interrupt Target EL Lookup Table
937
+/*
938
+ * Physical Interrupt Target EL Lookup Table
939
*
940
* [ From ARM ARM section G1.13.4 (Table G1-15) ]
941
*
942
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
943
if (arm_feature(env, ARM_FEATURE_EL3)) {
944
rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
55
} else {
945
} else {
56
gen_helper_crc32(t1, t1, t2, t3);
946
- /* Either EL2 is the highest EL (and so the EL2 register width
57
}
947
+ /*
58
tcg_temp_free_i32(t2);
948
+ * Either EL2 is the highest EL (and so the EL2 register width
59
- tcg_temp_free_i32(t3);
949
* is given by is64); or there is no EL2 or EL3, in which case
60
store_reg(s, a->rd, t1);
950
* the value of 'rw' does not affect the table lookup anyway.
61
return true;
951
*/
62
}
952
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
953
env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
954
}
955
956
- /* Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
957
+ /*
958
+ * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
959
* mode, then we can copy to r8-r14. Otherwise, we copy to the
960
* FIQ bank for r8-r14.
961
*/
962
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
963
/* High vectors. When enabled, base address cannot be remapped. */
964
addr += 0xffff0000;
965
} else {
966
- /* ARM v7 architectures provide a vector base address register to remap
967
+ /*
968
+ * ARM v7 architectures provide a vector base address register to remap
969
* the interrupt vector table.
970
* This register is only followed in non-monitor mode, and is banked.
971
* Note: only bits 31:5 are valid.
972
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
973
aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
974
975
if (cur_el < new_el) {
976
- /* Entry vector offset depends on whether the implemented EL
977
+ /*
978
+ * Entry vector offset depends on whether the implemented EL
979
* immediately lower than the target level is using AArch32 or AArch64
980
*/
981
bool is_aa64;
982
@@ -XXX,XX +XXX,XX @@ static void handle_semihosting(CPUState *cs)
983
}
984
#endif
985
986
-/* Handle a CPU exception for A and R profile CPUs.
987
+/*
988
+ * Handle a CPU exception for A and R profile CPUs.
989
* Do any appropriate logging, handle PSCI calls, and then hand off
990
* to the AArch64-entry or AArch32-entry function depending on the
991
* target exception level's register width.
992
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
993
}
994
#endif
995
996
- /* Hooks may change global state so BQL should be held, also the
997
+ /*
998
+ * Hooks may change global state so BQL should be held, also the
999
* BQL needs to be held for any modification of
1000
* cs->interrupt_request.
1001
*/
1002
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
1003
};
1004
}
1005
1006
-/* Note that signed overflow is undefined in C. The following routines are
1007
- careful to use unsigned types where modulo arithmetic is required.
1008
- Failure to do so _will_ break on newer gcc. */
1009
+/*
1010
+ * Note that signed overflow is undefined in C. The following routines are
1011
+ * careful to use unsigned types where modulo arithmetic is required.
1012
+ * Failure to do so _will_ break on newer gcc.
1013
+ */
1014
1015
/* Signed saturating arithmetic. */
1016
1017
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
1018
return (a & mask) | (b & ~mask);
1019
}
1020
1021
-/* CRC helpers.
1022
+/*
1023
+ * CRC helpers.
1024
* The upper bytes of val (above the number specified by 'bytes') must have
1025
* been zeroed out by the caller.
1026
*/
1027
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes)
1028
return crc32c(acc, buf, bytes) ^ 0xffffffff;
1029
}
1030
1031
-/* Return the exception level to which FP-disabled exceptions should
1032
+/*
1033
+ * Return the exception level to which FP-disabled exceptions should
1034
* be taken, or 0 if FP is enabled.
1035
*/
1036
int fp_exception_el(CPUARMState *env, int cur_el)
1037
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1038
#ifndef CONFIG_USER_ONLY
1039
uint64_t hcr_el2;
1040
1041
- /* CPACR and the CPTR registers don't exist before v6, so FP is
1042
+ /*
1043
+ * CPACR and the CPTR registers don't exist before v6, so FP is
1044
* always accessible
1045
*/
1046
if (!arm_feature(env, ARM_FEATURE_V6)) {
1047
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1048
1049
hcr_el2 = arm_hcr_el2_eff(env);
1050
1051
- /* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1052
+ /*
1053
+ * The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1054
* 0, 2 : trap EL0 and EL1/PL1 accesses
1055
* 1 : trap only EL0 accesses
1056
* 3 : trap no accesses
63
--
1057
--
64
2.25.1
1058
2.25.1
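The helper.c hunks above are mechanical comment reflows; the checkpatch.pl rule they satisfy is that a multi-line comment opens with a bare /* on its own line. A hand-written sketch of the before/after shape, not code taken from the patch:

/* Old style flagged by checkpatch: text sits on the opening line
 * of a multi-line comment.
 */

/*
 * Preferred style: the opening marker stands alone and each
 * continuation line is aligned on " *".
 */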
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Fix the following:
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
5
Message-id: 20220426163043.100432-16-richard.henderson@linaro.org
5
ERROR: spaces required around that '|' (ctx:VxV)
6
ERROR: space required before the open parenthesis '('
7
ERROR: spaces required around that '+' (ctx:VxB)
8
ERROR: space prohibited between function name and open parenthesis '('
9
10
(the last two still have some occurrences in macros which I left
11
behind because it might impact readability)
12
13
Signed-off-by: Fabiano Rosas <farosas@suse.de>
14
Reviewed-by: Claudio Fontana <cfontana@suse.de>
15
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
16
Message-id: 20221213190537.511-3-farosas@suse.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
18
---
8
target/arm/translate-a64.c | 7 ++-----
19
target/arm/helper.c | 42 +++++++++++++++++++++---------------------
9
1 file changed, 2 insertions(+), 5 deletions(-)
20
1 file changed, 21 insertions(+), 21 deletions(-)
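Most of the Richard Henderson patches in this series, including the translate-a64.c change below, apply the same mechanical conversion: tcg_const_*() allocated a fresh temporary that had to be released with tcg_temp_free_*(), while tcg_constant_*() returns a pooled, read-only constant that must not be written or freed. A hand-written sketch of the shape of each hunk (variable names are illustrative, not taken from the patch):

/* Before: a value temporary the translator must free. */
TCGv_i32 t_bytes = tcg_const_i32(1 << sz);
gen_helper_crc32(t_dst, t_acc, t_val, t_bytes);
tcg_temp_free_i32(t_bytes);

/* After: pooled constant; no free, and it must never be written. */
gen_helper_crc32(t_dst, t_acc, t_val, tcg_constant_i32(1 << sz));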
10
21
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
24
--- a/target/arm/helper.c
14
+++ b/target/arm/translate-a64.c
25
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
16
TCGv_i64 tcg_rd = cpu_reg(s, rd);
27
uint32_t regidx = (uintptr_t)key;
17
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
28
const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
18
TCGv_i64 tcg_rn = read_cpu_reg(s, rn, sf);
29
19
- TCGv_i64 mask = tcg_const_i64(sf ? 0x00ff00ff00ff00ffull : 0x00ff00ff);
30
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
20
+ TCGv_i64 mask = tcg_constant_i64(sf ? 0x00ff00ff00ff00ffull : 0x00ff00ff);
31
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
21
32
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
22
tcg_gen_shri_i64(tcg_tmp, tcg_rn, 8);
33
/* The value array need not be initialized at this point */
23
tcg_gen_and_i64(tcg_rd, tcg_rn, mask);
34
cpu->cpreg_array_len++;
24
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
35
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
25
tcg_gen_shli_i64(tcg_rd, tcg_rd, 8);
36
26
tcg_gen_or_i64(tcg_rd, tcg_rd, tcg_tmp);
37
ri = g_hash_table_lookup(cpu->cp_regs, key);
27
38
28
- tcg_temp_free_i64(mask);
39
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
29
tcg_temp_free_i64(tcg_tmp);
40
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
41
cpu->cpreg_array_len++;
42
}
30
}
43
}
31
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
32
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
45
.resetfn = arm_cp_reset_ignore },
46
{ .name = "TPIDRRO_EL0", .state = ARM_CP_STATE_AA64,
47
.opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0,
48
- .access = PL0_R|PL1_W,
49
+ .access = PL0_R | PL1_W,
50
.fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]),
51
.resetvalue = 0},
52
{ .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3,
53
- .access = PL0_R|PL1_W,
54
+ .access = PL0_R | PL1_W,
55
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s),
56
offsetoflow32(CPUARMState, cp15.tpidruro_ns) },
57
.resetfn = arm_cp_reset_ignore },
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
59
.resetvalue = 0 },
60
/* The cache ops themselves: these all NOP for QEMU */
61
{ .name = "IICR", .cp = 15, .crm = 5, .opc1 = 0,
62
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
63
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
64
{ .name = "IDCR", .cp = 15, .crm = 6, .opc1 = 0,
65
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
66
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
67
{ .name = "CDCR", .cp = 15, .crm = 12, .opc1 = 0,
68
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
69
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
70
{ .name = "PIR", .cp = 15, .crm = 12, .opc1 = 1,
71
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
72
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
73
{ .name = "PDR", .cp = 15, .crm = 12, .opc1 = 2,
74
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
75
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
76
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
77
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
78
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
79
};
80
81
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
82
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
83
ARMCPRegInfo cbar = {
84
.name = "CBAR",
85
.cp = 15, .crn = 15, .crm = 0, .opc1 = 4, .opc2 = 0,
86
- .access = PL1_R|PL3_W, .resetvalue = cpu->reset_cbar,
87
+ .access = PL1_R | PL3_W, .resetvalue = cpu->reset_cbar,
88
.fieldoffset = offsetof(CPUARMState,
89
cp15.c15_config_base_address)
90
};
91
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
92
return;
93
94
if (old_mode == ARM_CPU_MODE_FIQ) {
95
- memcpy (env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
96
- memcpy (env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
97
+ memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
98
+ memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
99
} else if (mode == ARM_CPU_MODE_FIQ) {
100
- memcpy (env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
101
- memcpy (env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
102
+ memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
103
+ memcpy(env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
33
}
104
}
34
105
35
tcg_acc = cpu_reg(s, rn);
106
i = bank_number(old_mode);
36
- tcg_bytes = tcg_const_i32(1 << sz);
107
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
37
+ tcg_bytes = tcg_constant_i32(1 << sz);
108
RESULT(sum, n, 16); \
38
109
if (sum >= 0) \
39
if (crc32c) {
110
ge |= 3 << (n * 2); \
40
gen_helper_crc32c_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
111
- } while(0)
41
} else {
112
+ } while (0)
42
gen_helper_crc32_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
113
43
}
114
#define SARITH8(a, b, n, op) do { \
44
-
115
int32_t sum; \
45
- tcg_temp_free_i32(tcg_bytes);
116
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
46
}
117
RESULT(sum, n, 8); \
47
118
if (sum >= 0) \
48
/* Data-processing (2 source)
119
ge |= 1 << n; \
120
- } while(0)
121
+ } while (0)
122
123
124
#define ADD16(a, b, n) SARITH16(a, b, n, +)
125
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
126
RESULT(sum, n, 16); \
127
if ((sum >> 16) == 1) \
128
ge |= 3 << (n * 2); \
129
- } while(0)
130
+ } while (0)
131
132
#define ADD8(a, b, n) do { \
133
uint32_t sum; \
134
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
135
RESULT(sum, n, 8); \
136
if ((sum >> 8) == 1) \
137
ge |= 1 << n; \
138
- } while(0)
139
+ } while (0)
140
141
#define SUB16(a, b, n) do { \
142
uint32_t sum; \
143
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
144
RESULT(sum, n, 16); \
145
if ((sum >> 16) == 0) \
146
ge |= 3 << (n * 2); \
147
- } while(0)
148
+ } while (0)
149
150
#define SUB8(a, b, n) do { \
151
uint32_t sum; \
152
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
153
RESULT(sum, n, 8); \
154
if ((sum >> 8) == 0) \
155
ge |= 1 << n; \
156
- } while(0)
157
+ } while (0)
158
159
#define PFX u
160
#define ARITH_GE
49
--
161
--
50
2.25.1
162
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Fix this:
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
ERROR: braces {} are necessary for all arms of this statement
5
Message-id: 20220426163043.100432-48-richard.henderson@linaro.org
5
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
9
Message-id: 20221213190537.511-4-farosas@suse.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/translate-sve.c | 54 ++++++++++----------------------------
12
target/arm/helper.c | 67 ++++++++++++++++++++++++++++-----------------
9
1 file changed, 14 insertions(+), 40 deletions(-)
13
1 file changed, 42 insertions(+), 25 deletions(-)
10
14
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
17
--- a/target/arm/helper.c
14
+++ b/target/arm/translate-sve.c
18
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static bool do_vpz_ool(DisasContext *s, arg_rpr_esz *a,
19
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
16
return true;
20
env->CF = (val >> 29) & 1;
21
env->VF = (val << 3) & 0x80000000;
17
}
22
}
18
23
- if (mask & CPSR_Q)
19
- desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
24
+ if (mask & CPSR_Q) {
20
+ desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
25
env->QF = ((val & CPSR_Q) != 0);
21
temp = tcg_temp_new_i64();
26
- if (mask & CPSR_T)
22
t_zn = tcg_temp_new_ptr();
27
+ }
23
t_pg = tcg_temp_new_ptr();
28
+ if (mask & CPSR_T) {
24
@@ -XXX,XX +XXX,XX @@ static bool do_vpz_ool(DisasContext *s, arg_rpr_esz *a,
29
env->thumb = ((val & CPSR_T) != 0);
25
fn(temp, t_zn, t_pg, desc);
30
+ }
26
tcg_temp_free_ptr(t_zn);
31
if (mask & CPSR_IT_0_1) {
27
tcg_temp_free_ptr(t_pg);
32
env->condexec_bits &= ~3;
28
- tcg_temp_free_i32(desc);
33
env->condexec_bits |= (val >> 25) & 3;
29
34
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
30
write_fp_dreg(s, a->rd, temp);
35
int i;
31
tcg_temp_free_i64(temp);
36
32
@@ -XXX,XX +XXX,XX @@ static void do_index(DisasContext *s, int esz, int rd,
37
old_mode = env->uncached_cpsr & CPSR_M;
33
TCGv_i64 start, TCGv_i64 incr)
38
- if (mode == old_mode)
39
+ if (mode == old_mode) {
40
return;
41
+ }
42
43
if (old_mode == ARM_CPU_MODE_FIQ) {
44
memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
45
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
46
new_mode = ARM_CPU_MODE_UND;
47
addr = 0x04;
48
mask = CPSR_I;
49
- if (env->thumb)
50
+ if (env->thumb) {
51
offset = 2;
52
- else
53
+ } else {
54
offset = 4;
55
+ }
56
break;
57
case EXCP_SWI:
58
new_mode = ARM_CPU_MODE_SVC;
59
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_sat(uint16_t a, uint16_t b)
60
61
res = a + b;
62
if (((res ^ a) & 0x8000) && !((a ^ b) & 0x8000)) {
63
- if (a & 0x8000)
64
+ if (a & 0x8000) {
65
res = 0x8000;
66
- else
67
+ } else {
68
res = 0x7fff;
69
+ }
70
}
71
return res;
72
}
73
@@ -XXX,XX +XXX,XX @@ static inline uint8_t add8_sat(uint8_t a, uint8_t b)
74
75
res = a + b;
76
if (((res ^ a) & 0x80) && !((a ^ b) & 0x80)) {
77
- if (a & 0x80)
78
+ if (a & 0x80) {
79
res = 0x80;
80
- else
81
+ } else {
82
res = 0x7f;
83
+ }
84
}
85
return res;
86
}
87
@@ -XXX,XX +XXX,XX @@ static inline uint16_t sub16_sat(uint16_t a, uint16_t b)
88
89
res = a - b;
90
if (((res ^ a) & 0x8000) && ((a ^ b) & 0x8000)) {
91
- if (a & 0x8000)
92
+ if (a & 0x8000) {
93
res = 0x8000;
94
- else
95
+ } else {
96
res = 0x7fff;
97
+ }
98
}
99
return res;
100
}
101
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_sat(uint8_t a, uint8_t b)
102
103
res = a - b;
104
if (((res ^ a) & 0x80) && ((a ^ b) & 0x80)) {
105
- if (a & 0x80)
106
+ if (a & 0x80) {
107
res = 0x80;
108
- else
109
+ } else {
110
res = 0x7f;
111
+ }
112
}
113
return res;
114
}
115
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_usat(uint16_t a, uint16_t b)
34
{
116
{
35
unsigned vsz = vec_full_reg_size(s);
117
uint16_t res;
36
- TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
118
res = a + b;
37
+ TCGv_i32 desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
119
- if (res < a)
38
TCGv_ptr t_zd = tcg_temp_new_ptr();
120
+ if (res < a) {
39
121
res = 0xffff;
40
tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
122
+ }
41
@@ -XXX,XX +XXX,XX @@ static void do_index(DisasContext *s, int esz, int rd,
123
return res;
42
tcg_temp_free_i32(i32);
43
}
44
tcg_temp_free_ptr(t_zd);
45
- tcg_temp_free_i32(desc);
46
}
124
}
47
125
48
static bool trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a)
126
static inline uint16_t sub16_usat(uint16_t a, uint16_t b)
49
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
127
{
50
nptr = tcg_temp_new_ptr();
128
- if (a > b)
51
tcg_gen_addi_ptr(dptr, cpu_env, vec_full_reg_offset(s, rd));
129
+ if (a > b) {
52
tcg_gen_addi_ptr(nptr, cpu_env, vec_full_reg_offset(s, rn));
130
return a - b;
53
- desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
131
- else
54
+ desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
132
+ } else {
55
133
return 0;
56
switch (esz) {
134
+ }
57
case MO_8:
58
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
59
60
tcg_temp_free_ptr(dptr);
61
tcg_temp_free_ptr(nptr);
62
- tcg_temp_free_i32(desc);
63
}
135
}
64
136
65
static bool trans_CNT_r(DisasContext *s, arg_CNT_r *a)
137
static inline uint8_t add8_usat(uint8_t a, uint8_t b)
66
@@ -XXX,XX +XXX,XX @@ static void do_cpy_m(DisasContext *s, int esz, int rd, int rn, int pg,
138
{
67
gen_helper_sve_cpy_m_s, gen_helper_sve_cpy_m_d,
139
uint8_t res;
68
};
140
res = a + b;
69
unsigned vsz = vec_full_reg_size(s);
141
- if (res < a)
70
- TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
142
+ if (res < a) {
71
+ TCGv_i32 desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
143
res = 0xff;
72
TCGv_ptr t_zd = tcg_temp_new_ptr();
144
+ }
73
TCGv_ptr t_zn = tcg_temp_new_ptr();
145
return res;
74
TCGv_ptr t_pg = tcg_temp_new_ptr();
75
@@ -XXX,XX +XXX,XX @@ static void do_cpy_m(DisasContext *s, int esz, int rd, int rn, int pg,
76
tcg_temp_free_ptr(t_zd);
77
tcg_temp_free_ptr(t_zn);
78
tcg_temp_free_ptr(t_pg);
79
- tcg_temp_free_i32(desc);
80
}
146
}
81
147
82
static bool trans_FCPY(DisasContext *s, arg_FCPY *a)
148
static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
83
@@ -XXX,XX +XXX,XX @@ static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
149
{
84
gen_helper_sve_insr_s, gen_helper_sve_insr_d,
150
- if (a > b)
85
};
151
+ if (a > b) {
86
unsigned vsz = vec_full_reg_size(s);
152
return a - b;
87
- TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
153
- else
88
+ TCGv_i32 desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
154
+ } else {
89
TCGv_ptr t_zd = tcg_temp_new_ptr();
155
return 0;
90
TCGv_ptr t_zn = tcg_temp_new_ptr();
156
+ }
91
92
@@ -XXX,XX +XXX,XX @@ static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
93
94
tcg_temp_free_ptr(t_zd);
95
tcg_temp_free_ptr(t_zn);
96
- tcg_temp_free_i32(desc);
97
}
157
}
98
158
99
static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a)
159
#define ADD16(a, b, n) RESULT(add16_usat(a, b), n, 16);
100
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
160
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
101
TCGv_ptr t_d = tcg_temp_new_ptr();
161
102
TCGv_ptr t_n = tcg_temp_new_ptr();
162
static inline uint8_t do_usad(uint8_t a, uint8_t b)
103
TCGv_ptr t_m = tcg_temp_new_ptr();
163
{
104
- TCGv_i32 t_desc;
164
- if (a > b)
105
uint32_t desc = 0;
165
+ if (a > b) {
106
166
return a - b;
107
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
167
- else
108
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
168
+ } else {
109
tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
169
return b - a;
110
tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
170
+ }
111
tcg_gen_addi_ptr(t_m, cpu_env, pred_full_reg_offset(s, a->rm));
112
- t_desc = tcg_const_i32(desc);
113
114
- fn(t_d, t_n, t_m, t_desc);
115
+ fn(t_d, t_n, t_m, tcg_constant_i32(desc));
116
117
tcg_temp_free_ptr(t_d);
118
tcg_temp_free_ptr(t_n);
119
tcg_temp_free_ptr(t_m);
120
- tcg_temp_free_i32(t_desc);
121
return true;
122
}
171
}
123
172
124
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
173
/* Unsigned sum of absolute byte differences. */
125
unsigned vsz = pred_full_reg_size(s);
174
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
126
TCGv_ptr t_d = tcg_temp_new_ptr();
175
uint32_t mask;
127
TCGv_ptr t_n = tcg_temp_new_ptr();
176
128
- TCGv_i32 t_desc;
177
mask = 0;
129
uint32_t desc = 0;
178
- if (flags & 1)
130
179
+ if (flags & 1) {
131
tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
180
mask |= 0xff;
132
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
181
- if (flags & 2)
133
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
182
+ }
134
desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
183
+ if (flags & 2) {
135
desc = FIELD_DP32(desc, PREDDESC, DATA, high_odd);
184
mask |= 0xff00;
136
- t_desc = tcg_const_i32(desc);
185
- if (flags & 4)
137
186
+ }
138
- fn(t_d, t_n, t_desc);
187
+ if (flags & 4) {
139
+ fn(t_d, t_n, tcg_constant_i32(desc));
188
mask |= 0xff0000;
140
189
- if (flags & 8)
141
- tcg_temp_free_i32(t_desc);
190
+ }
142
tcg_temp_free_ptr(t_d);
191
+ if (flags & 8) {
143
tcg_temp_free_ptr(t_n);
192
mask |= 0xff000000;
144
return true;
193
+ }
145
@@ -XXX,XX +XXX,XX @@ static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
194
return (a & mask) | (b & ~mask);
146
* round up, as we do elsewhere, because we need the exact size.
147
*/
148
TCGv_ptr t_p = tcg_temp_new_ptr();
149
- TCGv_i32 t_desc;
150
unsigned desc = 0;
151
152
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, pred_full_reg_size(s));
153
desc = FIELD_DP32(desc, PREDDESC, ESZ, esz);
154
155
tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
156
- t_desc = tcg_const_i32(desc);
157
158
- gen_helper_sve_last_active_element(ret, t_p, t_desc);
159
+ gen_helper_sve_last_active_element(ret, t_p, tcg_constant_i32(desc));
160
161
- tcg_temp_free_i32(t_desc);
162
tcg_temp_free_ptr(t_p);
163
}
195
}
164
196
165
@@ -XXX,XX +XXX,XX @@ static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
166
TCGv_ptr t_pn = tcg_temp_new_ptr();
167
TCGv_ptr t_pg = tcg_temp_new_ptr();
168
unsigned desc = 0;
169
- TCGv_i32 t_desc;
170
171
desc = FIELD_DP32(desc, PREDDESC, OPRSZ, psz);
172
desc = FIELD_DP32(desc, PREDDESC, ESZ, esz);
173
174
tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
175
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
176
- t_desc = tcg_const_i32(desc);
177
178
- gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
179
+ gen_helper_sve_cntp(val, t_pn, t_pg, tcg_constant_i32(desc));
180
tcg_temp_free_ptr(t_pn);
181
tcg_temp_free_ptr(t_pg);
182
- tcg_temp_free_i32(t_desc);
183
}
184
}
185
186
@@ -XXX,XX +XXX,XX @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,
187
{
188
unsigned vsz = vec_full_reg_size(s);
189
unsigned p2vsz = pow2ceil(vsz);
190
- TCGv_i32 t_desc = tcg_const_i32(simd_desc(vsz, vsz, p2vsz));
191
+ TCGv_i32 t_desc = tcg_constant_i32(simd_desc(vsz, vsz, p2vsz));
192
TCGv_ptr t_zn, t_pg, status;
193
TCGv_i64 temp;
194
195
@@ -XXX,XX +XXX,XX @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,
196
tcg_temp_free_ptr(t_zn);
197
tcg_temp_free_ptr(t_pg);
198
tcg_temp_free_ptr(status);
199
- tcg_temp_free_i32(t_desc);
200
201
write_fp_dreg(s, a->rd, temp);
202
tcg_temp_free_i64(temp);
203
@@ -XXX,XX +XXX,XX @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
204
tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
205
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
206
t_fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
207
- t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
208
+ t_desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
209
210
fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
211
212
- tcg_temp_free_i32(t_desc);
213
tcg_temp_free_ptr(t_fpst);
214
tcg_temp_free_ptr(t_pg);
215
tcg_temp_free_ptr(t_rm);
216
@@ -XXX,XX +XXX,XX @@ static void do_fp_scalar(DisasContext *s, int zd, int zn, int pg, bool is_fp16,
217
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
218
219
status = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
220
- desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
221
+ desc = tcg_constant_i32(simd_desc(vsz, vsz, 0));
222
fn(t_zd, t_zn, t_pg, scalar, status, desc);
223
224
- tcg_temp_free_i32(desc);
225
tcg_temp_free_ptr(status);
226
tcg_temp_free_ptr(t_pg);
227
tcg_temp_free_ptr(t_zn);
228
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
229
{
230
unsigned vsz = vec_full_reg_size(s);
231
TCGv_ptr t_pg;
232
- TCGv_i32 t_desc;
233
int desc = 0;
234
235
/*
236
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
237
}
238
239
desc = simd_desc(vsz, vsz, zt | desc);
240
- t_desc = tcg_const_i32(desc);
241
t_pg = tcg_temp_new_ptr();
242
243
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
244
- fn(cpu_env, t_pg, addr, t_desc);
245
+ fn(cpu_env, t_pg, addr, tcg_constant_i32(desc));
246
247
tcg_temp_free_ptr(t_pg);
248
- tcg_temp_free_i32(t_desc);
249
}
250
251
/* Indexed by [mte][be][dtype][nreg] */
252
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
253
TCGv_ptr t_zm = tcg_temp_new_ptr();
254
TCGv_ptr t_pg = tcg_temp_new_ptr();
255
TCGv_ptr t_zt = tcg_temp_new_ptr();
256
- TCGv_i32 t_desc;
257
int desc = 0;
258
259
if (s->mte_active[0]) {
260
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
261
desc <<= SVE_MTEDESC_SHIFT;
262
}
263
desc = simd_desc(vsz, vsz, desc | scale);
264
- t_desc = tcg_const_i32(desc);
265
266
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
267
tcg_gen_addi_ptr(t_zm, cpu_env, vec_full_reg_offset(s, zm));
268
tcg_gen_addi_ptr(t_zt, cpu_env, vec_full_reg_offset(s, zt));
269
- fn(cpu_env, t_zt, t_pg, t_zm, scalar, t_desc);
270
+ fn(cpu_env, t_zt, t_pg, t_zm, scalar, tcg_constant_i32(desc));
271
272
tcg_temp_free_ptr(t_zt);
273
tcg_temp_free_ptr(t_zm);
274
tcg_temp_free_ptr(t_pg);
275
- tcg_temp_free_i32(t_desc);
276
}
277
278
/* Indexed by [mte][be][ff][xs][u][msz]. */
279
--
197
--
280
2.25.1
198
2.25.1
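The brace fixes in the helper.c braces patch are equally mechanical: checkpatch.pl wants braces on every arm of an if/else, even when a body is a single statement. A small hand-written illustration of the rule (mirroring the shape of those hunks, not copied from them):

if (env->thumb)          /* flagged: unbraced arms */
    offset = 2;
else
    offset = 4;

if (env->thumb) {        /* accepted */
    offset = 2;
} else {
    offset = 4;
}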
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Claudio Fontana <cfontana@suse.de>
5
Message-id: 20220426163043.100432-45-richard.henderson@linaro.org
5
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
6
Message-id: 20221213190537.511-5-farosas@suse.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate-sve.c | 15 +++++----------
9
target/arm/m_helper.c | 16 ----------------
9
1 file changed, 5 insertions(+), 10 deletions(-)
10
1 file changed, 16 deletions(-)
10
11
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
14
--- a/target/arm/m_helper.c
14
+++ b/target/arm/translate-sve.c
15
+++ b/target/arm/m_helper.c
15
@@ -XXX,XX +XXX,XX @@ static bool do_zzi_sat(DisasContext *s, arg_rri_esz *a, bool u, bool d)
16
@@ -XXX,XX +XXX,XX @@
16
return false;
17
*/
17
}
18
18
if (sve_access_check(s)) {
19
#include "qemu/osdep.h"
19
- TCGv_i64 val = tcg_const_i64(a->imm);
20
-#include "qemu/units.h"
20
- do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, u, d);
21
-#include "target/arm/idau.h"
21
- tcg_temp_free_i64(val);
22
-#include "trace.h"
22
+ do_sat_addsub_vec(s, a->esz, a->rd, a->rn,
23
#include "cpu.h"
23
+ tcg_constant_i64(a->imm), u, d);
24
#include "internals.h"
24
}
25
-#include "exec/gdbstub.h"
25
return true;
26
#include "exec/helper-proto.h"
26
}
27
-#include "qemu/host-utils.h"
27
@@ -XXX,XX +XXX,XX @@ static bool do_zzi_ool(DisasContext *s, arg_rri_esz *a, gen_helper_gvec_2i *fn)
28
#include "qemu/main-loop.h"
28
{
29
#include "qemu/bitops.h"
29
if (sve_access_check(s)) {
30
-#include "qemu/crc32c.h"
30
unsigned vsz = vec_full_reg_size(s);
31
-#include "qemu/qemu-print.h"
31
- TCGv_i64 c = tcg_const_i64(a->imm);
32
#include "qemu/log.h"
32
-
33
#include "exec/exec-all.h"
33
tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
34
-#include <zlib.h> /* For crc32 */
34
vec_full_reg_offset(s, a->rn),
35
-#include "semihosting/semihost.h"
35
- c, vsz, vsz, 0, fn);
36
-#include "sysemu/cpus.h"
36
- tcg_temp_free_i64(c);
37
-#include "sysemu/kvm.h"
37
+ tcg_constant_i64(a->imm), vsz, vsz, 0, fn);
38
-#include "qemu/range.h"
38
}
39
-#include "qapi/qapi-commands-machine-target.h"
39
return true;
40
-#include "qapi/error.h"
40
}
41
-#include "qemu/guest-random.h"
41
@@ -XXX,XX +XXX,XX @@ static void do_fp_scalar(DisasContext *s, int zd, int zn, int pg, bool is_fp16,
42
#ifdef CONFIG_TCG
42
static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
43
-#include "arm_ldst.h"
43
gen_helper_sve_fp2scalar *fn)
44
#include "exec/cpu_ldst.h"
44
{
45
#include "semihosting/common-semi.h"
45
- TCGv_i64 temp = tcg_const_i64(imm);
46
#endif
46
- do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
47
- tcg_temp_free_i64(temp);
48
+ do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16,
49
+ tcg_constant_i64(imm), fn);
50
}
51
52
#define DO_FP_IMM(NAME, name, const0, const1) \
53
--
47
--
54
2.25.1
48
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Claudio Fontana <cfontana@suse.de>
5
Message-id: 20220426163043.100432-44-richard.henderson@linaro.org
5
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
6
Message-id: 20221213190537.511-6-farosas@suse.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate-sve.c | 4 +---
9
target/arm/helper.c | 7 -------
9
1 file changed, 1 insertion(+), 3 deletions(-)
10
1 file changed, 7 deletions(-)
10
11
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
14
--- a/target/arm/helper.c
14
+++ b/target/arm/translate-sve.c
15
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
16
@@ -XXX,XX +XXX,XX @@
16
}
17
*/
17
if (sve_access_check(s)) {
18
18
unsigned vsz = vec_full_reg_size(s);
19
#include "qemu/osdep.h"
19
- TCGv_i64 c = tcg_const_i64(a->imm);
20
-#include "qemu/units.h"
20
tcg_gen_gvec_2s(vec_full_reg_offset(s, a->rd),
21
#include "qemu/log.h"
21
vec_full_reg_offset(s, a->rn),
22
#include "trace.h"
22
- vsz, vsz, c, &op[a->esz]);
23
#include "cpu.h"
23
- tcg_temp_free_i64(c);
24
#include "internals.h"
24
+ vsz, vsz, tcg_constant_i64(a->imm), &op[a->esz]);
25
#include "exec/helper-proto.h"
25
}
26
-#include "qemu/host-utils.h"
26
return true;
27
#include "qemu/main-loop.h"
27
}
28
#include "qemu/timer.h"
29
#include "qemu/bitops.h"
30
@@ -XXX,XX +XXX,XX @@
31
#include "exec/exec-all.h"
32
#include <zlib.h> /* For crc32 */
33
#include "hw/irq.h"
34
-#include "semihosting/semihost.h"
35
-#include "sysemu/cpus.h"
36
#include "sysemu/cpu-timers.h"
37
#include "sysemu/kvm.h"
38
-#include "qemu/range.h"
39
#include "qapi/qapi-commands-machine-target.h"
40
#include "qapi/error.h"
41
#include "qemu/guest-random.h"
42
#ifdef CONFIG_TCG
43
-#include "arm_ldst.h"
44
-#include "exec/cpu_ldst.h"
45
#include "semihosting/common-semi.h"
46
#endif
47
#include "cpregs.h"
28
--
48
--
29
2.25.1
49
2.25.1
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
As of now, cryptographic instructions ISAR fields are never cleared so
3
Remove some unused headers.
4
we can end up with a cpu with cryptographic instructions but no
5
floating-point/neon instructions which is not a possible configuration
6
according to Arm specifications.
7
4
8
In QEMU, we have 3 kinds of cpus regarding cryptographic instructions:
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
9
+ no support
6
Acked-by: Richard Henderson <richard.henderson@linaro.org>
10
+ cortex-a57/a72: cryptographic extension is optional,
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
11
floating-point/neon is not.
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
12
+ cortex-a53: cryptographic extension is optional as well as
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
13
floating-point/neon. But cryptographic requires
10
Message-id: 20221213190537.511-7-farosas@suse.de
14
floating-point/neon support.
11
[added back some includes that are still needed at this point]
15
12
Signed-off-by: Fabiano Rosas <farosas@suse.de>
16
Therefore we can safely clear the ISAR fields when neon is disabled.
17
18
Note that other Arm cpus seem to follow this. For example cortex-a55 is
19
like cortex-a53 and cortex-a76/cortex-a710 are like cortex-a57/a72.
20
21
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Message-id: 20220427090117.6954-1-damien.hedde@greensocs.com
24
[PMM: fixed commit message typos]
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
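For reference, the clearing relies on the registerfields.h helper FIELD_DP64(reg, REG, FIELD, val), which returns reg with only the named field replaced by val, so depositing 0 erases just that ID-register field. A standalone sketch of the idea (illustration only, not the patch itself; the field layout macros come from target/arm/cpu.h):

uint64_t isar0 = cpu->isar.id_aa64isar0;

/* Clear only the AES field; every other field keeps its value. */
isar0 = FIELD_DP64(isar0, ID_AA64ISAR0, AES, 0);

cpu->isar.id_aa64isar0 = isar0;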
26
---
14
---
27
target/arm/cpu.c | 9 +++++++++
15
target/arm/cpu.c | 1 -
28
1 file changed, 9 insertions(+)
16
target/arm/cpu64.c | 6 ------
17
2 files changed, 7 deletions(-)
29
18
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
19
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
21
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
22
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
23
@@ -XXX,XX +XXX,XX @@
35
unset_feature(env, ARM_FEATURE_NEON);
24
#include "target/arm/idau.h"
36
25
#include "qemu/module.h"
37
t = cpu->isar.id_aa64isar0;
26
#include "qapi/error.h"
38
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 0);
27
-#include "qapi/visitor.h"
39
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 0);
28
#include "cpu.h"
40
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 0);
29
#ifdef CONFIG_TCG
41
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 0);
30
#include "hw/core/tcg-cpu-ops.h"
42
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 0);
31
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
43
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 0);
32
index XXXXXXX..XXXXXXX 100644
44
t = FIELD_DP64(t, ID_AA64ISAR0, DP, 0);
33
--- a/target/arm/cpu64.c
45
cpu->isar.id_aa64isar0 = t;
34
+++ b/target/arm/cpu64.c
46
35
@@ -XXX,XX +XXX,XX @@
47
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
36
#include "qemu/osdep.h"
48
cpu->isar.id_aa64pfr0 = t;
37
#include "qapi/error.h"
49
38
#include "cpu.h"
50
u = cpu->isar.id_isar5;
39
-#ifdef CONFIG_TCG
51
+ u = FIELD_DP32(u, ID_ISAR5, AES, 0);
40
-#include "hw/core/tcg-cpu-ops.h"
52
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 0);
41
-#endif /* CONFIG_TCG */
53
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 0);
42
#include "qemu/module.h"
54
u = FIELD_DP32(u, ID_ISAR5, RDM, 0);
43
-#if !defined(CONFIG_USER_ONLY)
55
u = FIELD_DP32(u, ID_ISAR5, VCMA, 0);
44
-#include "hw/loader.h"
56
cpu->isar.id_isar5 = u;
45
-#endif
46
#include "sysemu/kvm.h"
47
#include "sysemu/hvf.h"
48
#include "kvm_arm.h"
57
--
49
--
58
2.25.1
50
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
The pointed MouseTransformInfo structure is accessed read-only.
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
5
Message-id: 20220426163043.100432-41-richard.henderson@linaro.org
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221220142520.24094-2-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate-sve.c | 7 +++----
10
include/hw/input/tsc2xxx.h | 4 ++--
9
1 file changed, 3 insertions(+), 4 deletions(-)
11
hw/input/tsc2005.c | 2 +-
12
hw/input/tsc210x.c | 3 +--
13
3 files changed, 4 insertions(+), 5 deletions(-)
10
14
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
12
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
17
--- a/include/hw/input/tsc2xxx.h
14
+++ b/target/arm/translate-sve.c
18
+++ b/include/hw/input/tsc2xxx.h
15
@@ -XXX,XX +XXX,XX @@ static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
19
@@ -XXX,XX +XXX,XX @@ uWireSlave *tsc2102_init(qemu_irq pint);
16
bool before, TCGv_i64 reg_val)
20
uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
21
I2SCodec *tsc210x_codec(uWireSlave *chip);
22
uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
23
-void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
24
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info);
25
void tsc210x_key_event(uWireSlave *chip, int key, int down);
26
27
/* tsc2005.c */
28
void *tsc2005_init(qemu_irq pintdav);
29
uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
30
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
31
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info);
32
33
#endif
34
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/hw/input/tsc2005.c
37
+++ b/hw/input/tsc2005.c
38
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav)
39
* from the touchscreen. Assuming 12-bit precision was used during
40
* tslib calibration.
41
*/
42
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info)
43
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info)
17
{
44
{
18
TCGv_i32 last = tcg_temp_new_i32();
45
TSC2005State *s = (TSC2005State *) opaque;
19
- TCGv_i64 ele, cmp, zero;
46
20
+ TCGv_i64 ele, cmp;
47
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
21
48
index XXXXXXX..XXXXXXX 100644
22
find_last_active(s, last, esz, pg);
49
--- a/hw/input/tsc210x.c
23
50
+++ b/hw/input/tsc210x.c
24
@@ -XXX,XX +XXX,XX @@ static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
51
@@ -XXX,XX +XXX,XX @@ I2SCodec *tsc210x_codec(uWireSlave *chip)
25
ele = load_last_active(s, last, rm, esz);
52
* from the touchscreen. Assuming 12-bit precision was used during
26
tcg_temp_free_i32(last);
53
* tslib calibration.
27
54
*/
28
- zero = tcg_const_i64(0);
55
-void tsc210x_set_transform(uWireSlave *chip,
29
- tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
56
- MouseTransformInfo *info)
30
+ tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, tcg_constant_i64(0),
57
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info)
31
+ ele, reg_val);
58
{
32
59
TSC210xState *s = (TSC210xState *) chip->opaque;
33
- tcg_temp_free_i64(zero);
60
#if 0
34
tcg_temp_free_i64(cmp);
35
tcg_temp_free_i64(ele);
36
}
37
--
61
--
38
2.25.1
62
2.25.1
63
64
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20220426163043.100432-10-richard.henderson@linaro.org
5
Message-id: 20221220142520.24094-3-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/translate-a64.c | 3 +--
8
hw/arm/nseries.c | 18 +++++++++---------
9
1 file changed, 1 insertion(+), 2 deletions(-)
9
1 file changed, 9 insertions(+), 9 deletions(-)
10
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
11
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
12
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
13
--- a/hw/arm/nseries.c
14
+++ b/target/arm/translate-a64.c
14
+++ b/hw/arm/nseries.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
15
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
16
17
tcg_rt = cpu_reg(s, rt);
18
19
- clean_addr = tcg_const_i64(s->pc_curr + imm);
20
+ clean_addr = tcg_constant_i64(s->pc_curr + imm);
21
if (is_vector) {
22
do_fp_ld(s, rt, clean_addr, size);
23
} else {
24
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
25
do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
26
false, true, rt, iss_sf, false);
27
}
28
- tcg_temp_free_i64(clean_addr);
29
}
16
}
30
17
31
/*
18
/* Touchscreen and keypad controller */
19
-static MouseTransformInfo n800_pointercal = {
20
+static const MouseTransformInfo n800_pointercal = {
21
.x = 800,
22
.y = 480,
23
.a = { 14560, -68, -3455208, -39, -9621, 35152972, 65536 },
24
};
25
26
-static MouseTransformInfo n810_pointercal = {
27
+static const MouseTransformInfo n810_pointercal = {
28
.x = 800,
29
.y = 480,
30
.a = { 15041, 148, -4731056, 171, -10238, 35933380, 65536 },
31
@@ -XXX,XX +XXX,XX @@ static void n810_key_event(void *opaque, int keycode)
32
33
#define M    0
34
35
-static int n810_keys[0x80] = {
36
+static const int n810_keys[0x80] = {
37
[0x01] = 16,    /* Q */
38
[0x02] = 37,    /* K */
39
[0x03] = 24,    /* O */
40
@@ -XXX,XX +XXX,XX @@ static void n8x0_usb_setup(struct n800_s *s)
41
/* Setup done before the main bootloader starts by some early setup code
42
* - used when we want to run the main bootloader in emulation. This
43
* isn't documented. */
44
-static uint32_t n800_pinout[104] = {
45
+static const uint32_t n800_pinout[104] = {
46
0x080f00d8, 0x00d40808, 0x03080808, 0x080800d0,
47
0x00dc0808, 0x0b0f0f00, 0x080800b4, 0x00c00808,
48
0x08080808, 0x180800c4, 0x00b80000, 0x08080808,
49
@@ -XXX,XX +XXX,XX @@ static void n8x0_boot_init(void *opaque)
50
#define OMAP_TAG_CBUS        0x4e03
51
#define OMAP_TAG_EM_ASIC_BB5    0x4e04
52
53
-static struct omap_gpiosw_info_s {
54
+static const struct omap_gpiosw_info_s {
55
const char *name;
56
int line;
57
int type;
58
@@ -XXX,XX +XXX,XX @@ static struct omap_gpiosw_info_s {
59
{ NULL }
60
};
61
62
-static struct omap_partition_info_s {
63
+static const struct omap_partition_info_s {
64
uint32_t offset;
65
uint32_t size;
66
int mask;
67
@@ -XXX,XX +XXX,XX @@ static struct omap_partition_info_s {
68
{ 0, 0, 0, NULL }
69
};
70
71
-static uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
72
+static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
73
74
static int n8x0_atag_setup(void *p, int model)
75
{
76
uint8_t *b;
77
uint16_t *w;
78
uint32_t *l;
79
- struct omap_gpiosw_info_s *gpiosw;
80
- struct omap_partition_info_s *partition;
81
+ const struct omap_gpiosw_info_s *gpiosw;
82
+ const struct omap_partition_info_s *partition;
83
const char *tag;
84
85
w = p;
32
--
86
--
33
2.25.1
87
2.25.1
88
89
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Silence the following warning when compiling with -Wextra:
4
5
../hw/arm/nseries.c:1081:12: warning: missing field 'line' initializer [-Wmissing-field-initializers]
6
{ NULL }
7
^
8
9
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20221220142520.24094-4-philmd@linaro.org
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-39-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
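The warning fires because { NULL } explicitly initializes only the first member of the terminator entry, so -Wextra's -Wmissing-field-initializers complains about the remaining fields; an empty initializer list zeroes the whole struct without naming any member and stays quiet. A reduced, hand-written example (struct and values shortened for illustration, not the real nseries tables):

struct omap_gpiosw_info_s {
    const char *name;
    int line;
    int type;
};

static const struct omap_gpiosw_info_s table[] = {
    { "example", 1, 0 },
    { NULL }              /* warns: missing 'line'/'type' initializers */
};

static const struct omap_gpiosw_info_s table_fixed[] = {
    { "example", 1, 0 },
    { /* end of list */ } /* empty initializer: no warning */
};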
7
---
13
---
8
target/arm/translate-sve.c | 13 ++++---------
14
hw/arm/nseries.c | 10 ++++------
9
1 file changed, 4 insertions(+), 9 deletions(-)
15
1 file changed, 4 insertions(+), 6 deletions(-)
10
16
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
17
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
12
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
19
--- a/hw/arm/nseries.c
14
+++ b/target/arm/translate-sve.c
20
+++ b/hw/arm/nseries.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_FCPY(DisasContext *s, arg_FCPY *a)
21
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
16
if (sve_access_check(s)) {
22
"headphone", N8X0_HEADPHONE_GPIO,
17
/* Decode the VFP immediate. */
23
OMAP_GPIOSW_TYPE_CONNECTION | OMAP_GPIOSW_INVERTED,
18
uint64_t imm = vfp_expand_imm(a->esz, a->imm);
24
},
19
- TCGv_i64 t_imm = tcg_const_i64(imm);
25
- { NULL }
20
- do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
26
+ { /* end of list */ }
21
- tcg_temp_free_i64(t_imm);
27
}, n810_gpiosw_info[] = {
22
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, tcg_constant_i64(imm));
28
{
23
}
29
"gps_reset", N810_GPS_RESET_GPIO,
24
return true;
30
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
25
}
31
"slide", N810_SLIDE_GPIO,
26
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_i(DisasContext *s, arg_rpri_esz *a)
32
OMAP_GPIOSW_TYPE_COVER | OMAP_GPIOSW_INVERTED,
27
return false;
33
},
28
}
34
- { NULL }
29
if (sve_access_check(s)) {
35
+ { /* end of list */ }
30
- TCGv_i64 t_imm = tcg_const_i64(a->imm);
36
};
31
- do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
37
32
- tcg_temp_free_i64(t_imm);
38
static const struct omap_partition_info_s {
33
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, tcg_constant_i64(a->imm));
39
@@ -XXX,XX +XXX,XX @@ static const struct omap_partition_info_s {
34
}
40
{ 0x00080000, 0x00200000, 0x0, "kernel" },
35
return true;
41
{ 0x00280000, 0x00200000, 0x3, "initfs" },
36
}
42
{ 0x00480000, 0x0fb80000, 0x3, "rootfs" },
37
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a)
43
-
38
}
44
- { 0, 0, 0, NULL }
39
if (sve_access_check(s)) {
45
+ { /* end of list */ }
40
unsigned vsz = vec_full_reg_size(s);
46
}, n810_part_info[] = {
41
- TCGv_i64 t_imm = tcg_const_i64(a->imm);
47
{ 0x00000000, 0x00020000, 0x3, "bootloader" },
42
tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
48
{ 0x00020000, 0x00060000, 0x0, "config" },
43
pred_full_reg_offset(s, a->pg),
49
{ 0x00080000, 0x00220000, 0x0, "kernel" },
44
- t_imm, vsz, vsz, 0, fns[a->esz]);
50
{ 0x002a0000, 0x00400000, 0x0, "initfs" },
45
- tcg_temp_free_i64(t_imm);
51
{ 0x006a0000, 0x0f960000, 0x0, "rootfs" },
46
+ tcg_constant_i64(a->imm),
52
-
47
+ vsz, vsz, 0, fns[a->esz]);
53
- { 0, 0, 0, NULL }
48
}
54
+ { /* end of list */ }
49
return true;
55
};
50
}
56
57
static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
51
--
58
--
52
2.25.1
59
2.25.1
60
61
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Zhuojia Shen <chaosdefinition@hotmail.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
In CPUID registers exposed to userspace, some registers were missing
4
and some fields were not exposed. This patch aligns exposed ID
5
registers and their fields with what the upstream kernel currently
6
exposes.
7
8
Specifically, the following new ID registers/fields are exposed to
9
userspace:
10
11
ID_AA64PFR1_EL1.BT: bits 3-0
12
ID_AA64PFR1_EL1.MTE: bits 11-8
13
ID_AA64PFR1_EL1.SME: bits 27-24
14
15
ID_AA64ZFR0_EL1.SVEver: bits 3-0
16
ID_AA64ZFR0_EL1.AES: bits 7-4
17
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
18
ID_AA64ZFR0_EL1.BF16: bits 23-20
19
ID_AA64ZFR0_EL1.SHA3: bits 35-32
20
ID_AA64ZFR0_EL1.SM4: bits 43-40
21
ID_AA64ZFR0_EL1.I8MM: bits 47-44
22
ID_AA64ZFR0_EL1.F32MM: bits 55-52
23
ID_AA64ZFR0_EL1.F64MM: bits 59-56
24
25
ID_AA64SMFR0_EL1.F32F32: bit 32
26
ID_AA64SMFR0_EL1.B16F32: bit 34
27
ID_AA64SMFR0_EL1.F16F32: bit 35
28
ID_AA64SMFR0_EL1.I8I32: bits 39-36
29
ID_AA64SMFR0_EL1.F64F64: bit 48
30
ID_AA64SMFR0_EL1.I16I64: bits 55-52
31
ID_AA64SMFR0_EL1.FA64: bit 63
32
33
ID_AA64MMFR0_EL1.ECV: bits 63-60
34
35
ID_AA64MMFR1_EL1.AFP: bits 47-44
36
37
ID_AA64MMFR2_EL1.AT: bits 35-32
38
39
ID_AA64ISAR0_EL1.RNDR: bits 63-60
40
41
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
42
ID_AA64ISAR1_EL1.BF16: bits 47-44
43
ID_AA64ISAR1_EL1.DGH: bits 51-48
44
ID_AA64ISAR1_EL1.I8MM: bits 55-52
45
46
ID_AA64ISAR2_EL1.WFxT: bits 3-0
47
ID_AA64ISAR2_EL1.RPRES: bits 7-4
48
ID_AA64ISAR2_EL1.GPA3: bits 11-8
49
ID_AA64ISAR2_EL1.APA3: bits 15-12
50
51
The code is also refactored to use symbolic names for ID register fields
52
for better readability and maintainability.
53
54
The test case in tests/tcg/aarch64/sysregs.c is also updated to match
55
the intended behavior.
56
57
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
58
Message-id: DS7PR12MB6309FB585E10772928F14271ACE79@DS7PR12MB6309.namprd12.prod.outlook.com
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
59
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-38-richard.henderson@linaro.org
60
[PMM: use Sn_n_Cn_Cn_n syntax to work with older assemblers
61
that don't recognize id_aa64isar2_el1 and id_aa64mmfr2_el1]
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
62
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
63
---
8
target/arm/translate-sve.c | 18 ++++++------------
64
target/arm/helper.c | 96 +++++++++++++++++++++++++------
9
1 file changed, 6 insertions(+), 12 deletions(-)
65
tests/tcg/aarch64/sysregs.c | 24 ++++++--
10
66
tests/tcg/aarch64/Makefile.target | 7 ++-
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
67
3 files changed, 103 insertions(+), 24 deletions(-)
68
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
70
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
71
--- a/target/arm/helper.c
14
+++ b/target/arm/translate-sve.c
72
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDEC_r_32(DisasContext *s, arg_incdec_cnt *a)
73
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
16
tcg_gen_ext32s_i64(reg, reg);
74
#ifdef CONFIG_USER_ONLY
17
}
75
static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
18
} else {
76
{ .name = "ID_AA64PFR0_EL1",
19
- TCGv_i64 t = tcg_const_i64(inc);
77
- .exported_bits = 0x000f000f00ff0000,
20
- do_sat_addsub_32(reg, t, a->u, a->d);
78
- .fixed_bits = 0x0000000000000011 },
21
- tcg_temp_free_i64(t);
79
+ .exported_bits = R_ID_AA64PFR0_FP_MASK |
22
+ do_sat_addsub_32(reg, tcg_constant_i64(inc), a->u, a->d);
80
+ R_ID_AA64PFR0_ADVSIMD_MASK |
23
}
81
+ R_ID_AA64PFR0_SVE_MASK |
24
return true;
82
+ R_ID_AA64PFR0_DIT_MASK,
25
}
83
+ .fixed_bits = (0x1u << R_ID_AA64PFR0_EL0_SHIFT) |
26
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDEC_r_64(DisasContext *s, arg_incdec_cnt *a)
84
+ (0x1u << R_ID_AA64PFR0_EL1_SHIFT) },
27
TCGv_i64 reg = cpu_reg(s, a->rd);
85
{ .name = "ID_AA64PFR1_EL1",
28
86
- .exported_bits = 0x00000000000000f0 },
29
if (inc != 0) {
87
+ .exported_bits = R_ID_AA64PFR1_BT_MASK |
30
- TCGv_i64 t = tcg_const_i64(inc);
88
+ R_ID_AA64PFR1_SSBS_MASK |
31
- do_sat_addsub_64(reg, t, a->u, a->d);
89
+ R_ID_AA64PFR1_MTE_MASK |
32
- tcg_temp_free_i64(t);
90
+ R_ID_AA64PFR1_SME_MASK },
33
+ do_sat_addsub_64(reg, tcg_constant_i64(inc), a->u, a->d);
91
{ .name = "ID_AA64PFR*_EL1_RESERVED",
34
}
92
- .is_glob = true },
35
return true;
93
- { .name = "ID_AA64ZFR0_EL1" },
36
}
94
+ .is_glob = true },
37
@@ -XXX,XX +XXX,XX @@ static bool trans_INCDEC_v(DisasContext *s, arg_incdec2_cnt *a)
95
+ { .name = "ID_AA64ZFR0_EL1",
38
96
+ .exported_bits = R_ID_AA64ZFR0_SVEVER_MASK |
39
if (inc != 0) {
97
+ R_ID_AA64ZFR0_AES_MASK |
40
if (sve_access_check(s)) {
98
+ R_ID_AA64ZFR0_BITPERM_MASK |
41
- TCGv_i64 t = tcg_const_i64(a->d ? -inc : inc);
99
+ R_ID_AA64ZFR0_BFLOAT16_MASK |
42
tcg_gen_gvec_adds(a->esz, vec_full_reg_offset(s, a->rd),
100
+ R_ID_AA64ZFR0_SHA3_MASK |
43
vec_full_reg_offset(s, a->rn),
101
+ R_ID_AA64ZFR0_SM4_MASK |
44
- t, fullsz, fullsz);
102
+ R_ID_AA64ZFR0_I8MM_MASK |
45
- tcg_temp_free_i64(t);
103
+ R_ID_AA64ZFR0_F32MM_MASK |
46
+ tcg_constant_i64(a->d ? -inc : inc),
104
+ R_ID_AA64ZFR0_F64MM_MASK },
47
+ fullsz, fullsz);
105
+ { .name = "ID_AA64SMFR0_EL1",
48
}
106
+ .exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
49
} else {
107
+ R_ID_AA64SMFR0_B16F32_MASK |
50
do_mov_z(s, a->rd, a->rn);
108
+ R_ID_AA64SMFR0_F16F32_MASK |
51
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDEC_v(DisasContext *s, arg_incdec2_cnt *a)
109
+ R_ID_AA64SMFR0_I8I32_MASK |
52
110
+ R_ID_AA64SMFR0_F64F64_MASK |
53
if (inc != 0) {
111
+ R_ID_AA64SMFR0_I16I64_MASK |
54
if (sve_access_check(s)) {
112
+ R_ID_AA64SMFR0_FA64_MASK },
55
- TCGv_i64 t = tcg_const_i64(inc);
113
{ .name = "ID_AA64MMFR0_EL1",
56
- do_sat_addsub_vec(s, a->esz, a->rd, a->rn, t, a->u, a->d);
114
- .fixed_bits = 0x00000000ff000000 },
57
- tcg_temp_free_i64(t);
115
- { .name = "ID_AA64MMFR1_EL1" },
58
+ do_sat_addsub_vec(s, a->esz, a->rd, a->rn,
116
+ .exported_bits = R_ID_AA64MMFR0_ECV_MASK,
59
+ tcg_constant_i64(inc), a->u, a->d);
117
+ .fixed_bits = (0xfu << R_ID_AA64MMFR0_TGRAN64_SHIFT) |
60
}
118
+ (0xfu << R_ID_AA64MMFR0_TGRAN4_SHIFT) },
61
} else {
119
+ { .name = "ID_AA64MMFR1_EL1",
62
do_mov_z(s, a->rd, a->rn);
120
+ .exported_bits = R_ID_AA64MMFR1_AFP_MASK },
121
+ { .name = "ID_AA64MMFR2_EL1",
122
+ .exported_bits = R_ID_AA64MMFR2_AT_MASK },
123
{ .name = "ID_AA64MMFR*_EL1_RESERVED",
124
- .is_glob = true },
125
+ .is_glob = true },
126
{ .name = "ID_AA64DFR0_EL1",
127
- .fixed_bits = 0x0000000000000006 },
128
- { .name = "ID_AA64DFR1_EL1" },
129
+ .fixed_bits = (0x6u << R_ID_AA64DFR0_DEBUGVER_SHIFT) },
130
+ { .name = "ID_AA64DFR1_EL1" },
131
{ .name = "ID_AA64DFR*_EL1_RESERVED",
132
- .is_glob = true },
133
+ .is_glob = true },
134
{ .name = "ID_AA64AFR*",
135
- .is_glob = true },
136
+ .is_glob = true },
137
{ .name = "ID_AA64ISAR0_EL1",
138
- .exported_bits = 0x00fffffff0fffff0 },
139
+ .exported_bits = R_ID_AA64ISAR0_AES_MASK |
140
+ R_ID_AA64ISAR0_SHA1_MASK |
141
+ R_ID_AA64ISAR0_SHA2_MASK |
142
+ R_ID_AA64ISAR0_CRC32_MASK |
143
+ R_ID_AA64ISAR0_ATOMIC_MASK |
144
+ R_ID_AA64ISAR0_RDM_MASK |
145
+ R_ID_AA64ISAR0_SHA3_MASK |
146
+ R_ID_AA64ISAR0_SM3_MASK |
147
+ R_ID_AA64ISAR0_SM4_MASK |
148
+ R_ID_AA64ISAR0_DP_MASK |
149
+ R_ID_AA64ISAR0_FHM_MASK |
150
+ R_ID_AA64ISAR0_TS_MASK |
151
+ R_ID_AA64ISAR0_RNDR_MASK },
152
{ .name = "ID_AA64ISAR1_EL1",
153
- .exported_bits = 0x000000f0ffffffff },
154
+ .exported_bits = R_ID_AA64ISAR1_DPB_MASK |
155
+ R_ID_AA64ISAR1_APA_MASK |
156
+ R_ID_AA64ISAR1_API_MASK |
157
+ R_ID_AA64ISAR1_JSCVT_MASK |
158
+ R_ID_AA64ISAR1_FCMA_MASK |
159
+ R_ID_AA64ISAR1_LRCPC_MASK |
160
+ R_ID_AA64ISAR1_GPA_MASK |
161
+ R_ID_AA64ISAR1_GPI_MASK |
162
+ R_ID_AA64ISAR1_FRINTTS_MASK |
163
+ R_ID_AA64ISAR1_SB_MASK |
164
+ R_ID_AA64ISAR1_BF16_MASK |
165
+ R_ID_AA64ISAR1_DGH_MASK |
166
+ R_ID_AA64ISAR1_I8MM_MASK },
167
+ { .name = "ID_AA64ISAR2_EL1",
168
+ .exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
169
+ R_ID_AA64ISAR2_RPRES_MASK |
170
+ R_ID_AA64ISAR2_GPA3_MASK |
171
+ R_ID_AA64ISAR2_APA3_MASK },
172
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
173
- .is_glob = true },
174
+ .is_glob = true },
175
};
176
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
177
#endif
178
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
179
#ifdef CONFIG_USER_ONLY
180
static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
181
{ .name = "MIDR_EL1",
182
- .exported_bits = 0x00000000ffffffff },
183
- { .name = "REVIDR_EL1" },
184
+ .exported_bits = R_MIDR_EL1_REVISION_MASK |
185
+ R_MIDR_EL1_PARTNUM_MASK |
186
+ R_MIDR_EL1_ARCHITECTURE_MASK |
187
+ R_MIDR_EL1_VARIANT_MASK |
188
+ R_MIDR_EL1_IMPLEMENTER_MASK },
189
+ { .name = "REVIDR_EL1" },
190
};
191
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
192
#endif
193
diff --git a/tests/tcg/aarch64/sysregs.c b/tests/tcg/aarch64/sysregs.c
194
index XXXXXXX..XXXXXXX 100644
195
--- a/tests/tcg/aarch64/sysregs.c
196
+++ b/tests/tcg/aarch64/sysregs.c
197
@@ -XXX,XX +XXX,XX @@
198
#define HWCAP_CPUID (1 << 11)
199
#endif
200
201
+/*
202
+ * Older assemblers don't recognize newer system register names,
203
+ * but we can still access them by the Sn_n_Cn_Cn_n syntax.
204
+ */
205
+#define SYS_ID_AA64ISAR2_EL1 S3_0_C0_C6_2
206
+#define SYS_ID_AA64MMFR2_EL1 S3_0_C0_C7_2
207
+
208
int failed_bit_count;
209
210
/* Read and print system register `id' value */
211
@@ -XXX,XX +XXX,XX @@ int main(void)
212
* minimum valid fields - for the purposes of this check allowed
213
* to have non-zero values.
214
*/
215
- get_cpu_reg_check_mask(id_aa64isar0_el1, _m(00ff,ffff,f0ff,fff0));
216
- get_cpu_reg_check_mask(id_aa64isar1_el1, _m(0000,00f0,ffff,ffff));
217
+ get_cpu_reg_check_mask(id_aa64isar0_el1, _m(f0ff,ffff,f0ff,fff0));
218
+ get_cpu_reg_check_mask(id_aa64isar1_el1, _m(00ff,f0ff,ffff,ffff));
219
+ get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(0000,0000,0000,ffff));
220
/* TGran4 & TGran64 as pegged to -1 */
221
- get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(0000,0000,ff00,0000));
222
- get_cpu_reg_check_zero(id_aa64mmfr1_el1);
223
+ get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(f000,0000,ff00,0000));
224
+ get_cpu_reg_check_mask(id_aa64mmfr1_el1, _m(0000,f000,0000,0000));
225
+ get_cpu_reg_check_mask(SYS_ID_AA64MMFR2_EL1, _m(0000,000f,0000,0000));
226
/* EL1/EL0 reported as AA64 only */
227
get_cpu_reg_check_mask(id_aa64pfr0_el1, _m(000f,000f,00ff,0011));
228
- get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0000,00f0));
229
+ get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0f00,0fff));
230
/* all hidden, DebugVer fixed to 0x6 (ARMv8 debug architecture) */
231
get_cpu_reg_check_mask(id_aa64dfr0_el1, _m(0000,0000,0000,0006));
232
get_cpu_reg_check_zero(id_aa64dfr1_el1);
233
- get_cpu_reg_check_zero(id_aa64zfr0_el1);
234
+ get_cpu_reg_check_mask(id_aa64zfr0_el1, _m(0ff0,ff0f,00ff,00ff));
235
+#ifdef HAS_ARMV9_SME
236
+ get_cpu_reg_check_mask(id_aa64smfr0_el1, _m(80f1,00fd,0000,0000));
237
+#endif
238
239
get_cpu_reg_check_zero(id_aa64afr0_el1);
240
get_cpu_reg_check_zero(id_aa64afr1_el1);
241
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
242
index XXXXXXX..XXXXXXX 100644
243
--- a/tests/tcg/aarch64/Makefile.target
244
+++ b/tests/tcg/aarch64/Makefile.target
245
@@ -XXX,XX +XXX,XX @@ config-cc.mak: Makefile
246
     $(call cc-option,-march=armv8.1-a+sve2, CROSS_CC_HAS_SVE2); \
247
     $(call cc-option,-march=armv8.3-a, CROSS_CC_HAS_ARMV8_3); \
248
     $(call cc-option,-mbranch-protection=standard, CROSS_CC_HAS_ARMV8_BTI); \
249
-     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE)) 3> config-cc.mak
250
+     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE); \
251
+     $(call cc-option,-march=armv9-a+sme, CROSS_CC_HAS_ARMV9_SME)) 3> config-cc.mak
252
-include config-cc.mak
253
254
# Pauth Tests
255
@@ -XXX,XX +XXX,XX @@ endif
256
ifneq ($(CROSS_CC_HAS_SVE),)
257
# System Registers Tests
258
AARCH64_TESTS += sysregs
259
+ifneq ($(CROSS_CC_HAS_ARMV9_SME),)
260
+sysregs: CFLAGS+=-march=armv9-a+sme -DHAS_ARMV9_SME
261
+else
262
sysregs: CFLAGS+=-march=armv8.1-a+sve
263
+endif
264
265
# SVE ioctl test
266
AARCH64_TESTS += sve-ioctls
63
--
267
--
64
2.25.1
268
2.25.1
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
The Record bit in the Context Descriptor tells the SMMU to report fault
3
This function is not used anywhere outside this file,
4
events to the event queue. Since we don't cache the Record bit at the
4
so we can make the function "static void".
5
moment, access faults from a cached Context Descriptor are never
6
reported. Store the Record bit in the cached SMMUTransCfg.
7
5
8
Fixes: 9bde7f0674fe ("hw/arm/smmuv3: Implement translate callback")
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
12
Message-id: 20220427111543.124620-1-jean-philippe@linaro.org
9
Message-id: 20221216214924.4711-2-philmd@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/arm/smmuv3-internal.h | 1 -
12
include/hw/arm/smmu-common.h | 3 ---
16
include/hw/arm/smmu-common.h | 1 +
13
hw/arm/smmu-common.c | 2 +-
17
hw/arm/smmuv3.c | 14 +++++++-------
14
2 files changed, 1 insertion(+), 4 deletions(-)
18
3 files changed, 8 insertions(+), 8 deletions(-)
19
15
20
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/smmuv3-internal.h
23
+++ b/hw/arm/smmuv3-internal.h
24
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUEventInfo {
25
SMMUEventType type;
26
uint32_t sid;
27
bool recorded;
28
- bool record_trans_faults;
29
bool inval_ste_allowed;
30
union {
31
struct {
32
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
33
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
34
--- a/include/hw/arm/smmu-common.h
18
--- a/include/hw/arm/smmu-common.h
35
+++ b/include/hw/arm/smmu-common.h
19
+++ b/include/hw/arm/smmu-common.h
36
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTransCfg {
20
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
37
bool disabled; /* smmu is disabled */
21
/* Unmap the range of all the notifiers registered to any IOMMU mr */
38
bool bypassed; /* translation is bypassed */
22
void smmu_inv_notifiers_all(SMMUState *s);
39
bool aborted; /* translation is aborted */
23
40
+ bool record_faults; /* record fault events */
24
-/* Unmap the range of all the notifiers registered to @mr */
41
uint64_t ttb; /* TT base address */
25
-void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr);
42
uint8_t oas; /* output address width */
26
-
43
uint8_t tbi; /* Top Byte Ignore */
27
#endif /* HW_ARM_SMMU_COMMON_H */
44
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
45
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
46
--- a/hw/arm/smmuv3.c
30
--- a/hw/arm/smmu-common.c
47
+++ b/hw/arm/smmuv3.c
31
+++ b/hw/arm/smmu-common.c
48
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
32
@@ -XXX,XX +XXX,XX @@ static void smmu_unmap_notifier_range(IOMMUNotifier *n)
49
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
33
}
50
}
34
51
35
/* Unmap all notifiers attached to @mr */
52
- event->record_trans_faults = CD_R(cd);
36
-inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
53
+ cfg->record_faults = CD_R(cd);
37
+static void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
54
38
{
55
return 0;
39
IOMMUNotifier *n;
56
40
57
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
58
59
tt = select_tt(cfg, addr);
60
if (!tt) {
61
- if (event.record_trans_faults) {
62
+ if (cfg->record_faults) {
63
event.type = SMMU_EVT_F_TRANSLATION;
64
event.u.f_translation.addr = addr;
65
event.u.f_translation.rnw = flag & 0x1;
66
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
67
if (cached_entry) {
68
if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
69
status = SMMU_TRANS_ERROR;
70
- if (event.record_trans_faults) {
71
+ if (cfg->record_faults) {
72
event.type = SMMU_EVT_F_PERMISSION;
73
event.u.f_permission.addr = addr;
74
event.u.f_permission.rnw = flag & 0x1;
75
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
76
event.u.f_walk_eabt.addr2 = ptw_info.addr;
77
break;
78
case SMMU_PTW_ERR_TRANSLATION:
79
- if (event.record_trans_faults) {
80
+ if (cfg->record_faults) {
81
event.type = SMMU_EVT_F_TRANSLATION;
82
event.u.f_translation.addr = addr;
83
event.u.f_translation.rnw = flag & 0x1;
84
}
85
break;
86
case SMMU_PTW_ERR_ADDR_SIZE:
87
- if (event.record_trans_faults) {
88
+ if (cfg->record_faults) {
89
event.type = SMMU_EVT_F_ADDR_SIZE;
90
event.u.f_addr_size.addr = addr;
91
event.u.f_addr_size.rnw = flag & 0x1;
92
}
93
break;
94
case SMMU_PTW_ERR_ACCESS:
95
- if (event.record_trans_faults) {
96
+ if (cfg->record_faults) {
97
event.type = SMMU_EVT_F_ACCESS;
98
event.u.f_access.addr = addr;
99
event.u.f_access.rnw = flag & 0x1;
100
}
101
break;
102
case SMMU_PTW_ERR_PERMISSION:
103
- if (event.record_trans_faults) {
104
+ if (cfg->record_faults) {
105
event.type = SMMU_EVT_F_PERMISSION;
106
event.u.f_permission.addr = addr;
107
event.u.f_permission.rnw = flag & 0x1;
108
--
41
--
109
2.25.1
42
2.25.1
43
44
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
When using Clang ("Apple clang version 14.0.0 (clang-1400.0.29.202)")
4
and building with -Wall we get:
5
6
hw/arm/smmu-common.c:173:33: warning: static function 'smmu_hash_remove_by_asid_iova' is used in an inline function with external linkage [-Wstatic-in-inline]
7
hw/arm/smmu-common.h:170:1: note: use 'static' to give inline function 'smmu_iotlb_inv_iova' internal linkage
8
void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
9
^
10
static
11
12
None of our code base require / use inlined functions with external
13
linkage. Some places use internal inlining in the hot path. These
14
two functions are certainly not in any hot path and don't justify
15
any inlining, so these are likely oversights rather than intentional.
16
17
Reported-by: Stefan Weil <sw@weilnetz.de>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-23-richard.henderson@linaro.org
19
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Eric Auger <eric.auger@redhat.com>
22
Message-id: 20221216214924.4711-3-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
24
---
8
target/arm/translate.c | 32 +++++++-------------------------
25
hw/arm/smmu-common.c | 13 ++++++-------
9
1 file changed, 7 insertions(+), 25 deletions(-)
26
1 file changed, 6 insertions(+), 7 deletions(-)
10
27
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
12
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
30
--- a/hw/arm/smmu-common.c
14
+++ b/target/arm/translate.c
31
+++ b/hw/arm/smmu-common.c
15
@@ -XXX,XX +XXX,XX @@ static void store_sp_checked(DisasContext *s, TCGv_i32 var)
32
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
16
33
g_hash_table_insert(bs->iotlb, key, new);
17
void gen_set_cpsr(TCGv_i32 var, uint32_t mask)
34
}
35
36
-inline void smmu_iotlb_inv_all(SMMUState *s)
37
+void smmu_iotlb_inv_all(SMMUState *s)
18
{
38
{
19
- TCGv_i32 tmp_mask = tcg_const_i32(mask);
39
trace_smmu_iotlb_inv_all();
20
- gen_helper_cpsr_write(cpu_env, var, tmp_mask);
40
g_hash_table_remove_all(s->iotlb);
21
- tcg_temp_free_i32(tmp_mask);
41
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
22
+ gen_helper_cpsr_write(cpu_env, var, tcg_constant_i32(mask));
42
((entry->iova & ~info->mask) == info->iova);
23
}
43
}
24
44
25
static void gen_rebuild_hflags(DisasContext *s, bool new_el)
45
-inline void
26
@@ -XXX,XX +XXX,XX @@ static void gen_rebuild_hflags(DisasContext *s, bool new_el)
46
-smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
27
47
- uint8_t tg, uint64_t num_pages, uint8_t ttl)
28
static void gen_exception_internal(int excp)
48
+void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
49
+ uint8_t tg, uint64_t num_pages, uint8_t ttl)
29
{
50
{
30
- TCGv_i32 tcg_excp = tcg_const_i32(excp);
51
/* if tg is not set we use 4KB range invalidation */
31
-
52
uint8_t granule = tg ? tg * 2 + 10 : 12;
32
assert(excp_is_internal(excp));
53
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
33
- gen_helper_exception_internal(cpu_env, tcg_excp);
54
&info);
34
- tcg_temp_free_i32(tcg_excp);
35
+ gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
36
}
55
}
37
56
38
static void gen_singlestep_exception(DisasContext *s)
57
-inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
39
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
58
+void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
40
/* As with HVC, we may take an exception either before or after
41
* the insn executes.
42
*/
43
- TCGv_i32 tmp;
44
-
45
gen_set_pc_im(s, s->pc_curr);
46
- tmp = tcg_const_i32(syn_aa32_smc());
47
- gen_helper_pre_smc(cpu_env, tmp);
48
- tcg_temp_free_i32(tmp);
49
+ gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa32_smc()));
50
gen_set_pc_im(s, s->base.pc_next);
51
s->base.is_jmp = DISAS_SMC;
52
}
53
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn(DisasContext *s, uint64_t pc, int excp,
54
55
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
56
{
59
{
57
- TCGv_i32 tcg_syn;
60
trace_smmu_iotlb_inv_asid(asid);
58
-
61
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
59
gen_set_condexec(s);
62
@@ -XXX,XX +XXX,XX @@ error:
60
gen_set_pc_im(s, s->pc_curr);
63
*
61
- tcg_syn = tcg_const_i32(syn);
64
* return 0 on success
62
- gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
65
*/
63
- tcg_temp_free_i32(tcg_syn);
66
-inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
64
+ gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syn));
67
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
65
s->base.is_jmp = DISAS_NORETURN;
68
+int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
66
}
69
+ SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
67
68
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
69
static void gen_exception_el(DisasContext *s, int excp, uint32_t syn,
70
TCGv_i32 tcg_el)
71
{
70
{
72
- TCGv_i32 tcg_excp;
71
if (!cfg->aa64) {
73
- TCGv_i32 tcg_syn;
72
/*
74
-
75
gen_set_condexec(s);
76
gen_set_pc_im(s, s->pc_curr);
77
- tcg_excp = tcg_const_i32(excp);
78
- tcg_syn = tcg_const_i32(syn);
79
- gen_helper_exception_with_syndrome(cpu_env, tcg_excp, tcg_syn, tcg_el);
80
- tcg_temp_free_i32(tcg_syn);
81
- tcg_temp_free_i32(tcg_excp);
82
+ gen_helper_exception_with_syndrome(cpu_env,
83
+ tcg_constant_i32(excp),
84
+ tcg_constant_i32(syn), tcg_el);
85
s->base.is_jmp = DISAS_NORETURN;
86
}
87
88
--
73
--
89
2.25.1
74
2.25.1
75
76
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
So far the GPT timers were unable to raise IRQs to the processor.
4
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-35-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate.c | 9 +++------
9
include/hw/arm/fsl-imx7.h | 5 +++++
9
1 file changed, 3 insertions(+), 6 deletions(-)
10
hw/arm/fsl-imx7.c | 10 ++++++++++
11
2 files changed, 15 insertions(+)
10
12
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
15
--- a/include/hw/arm/fsl-imx7.h
14
+++ b/target/arm/translate.c
16
+++ b/include/hw/arm/fsl-imx7.h
15
@@ -XXX,XX +XXX,XX @@ static bool trans_CPS_v7m(DisasContext *s, arg_CPS_v7m *a)
17
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
16
return true;
18
FSL_IMX7_USB2_IRQ = 42,
19
FSL_IMX7_USB3_IRQ = 40,
20
21
+ FSL_IMX7_GPT1_IRQ = 55,
22
+ FSL_IMX7_GPT2_IRQ = 54,
23
+ FSL_IMX7_GPT3_IRQ = 53,
24
+ FSL_IMX7_GPT4_IRQ = 52,
25
+
26
FSL_IMX7_WDOG1_IRQ = 78,
27
FSL_IMX7_WDOG2_IRQ = 79,
28
FSL_IMX7_WDOG3_IRQ = 10,
29
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/arm/fsl-imx7.c
32
+++ b/hw/arm/fsl-imx7.c
33
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
34
FSL_IMX7_GPT4_ADDR,
35
};
36
37
+ static const int FSL_IMX7_GPTn_IRQ[FSL_IMX7_NUM_GPTS] = {
38
+ FSL_IMX7_GPT1_IRQ,
39
+ FSL_IMX7_GPT2_IRQ,
40
+ FSL_IMX7_GPT3_IRQ,
41
+ FSL_IMX7_GPT4_IRQ,
42
+ };
43
+
44
s->gpt[i].ccm = IMX_CCM(&s->ccm);
45
sysbus_realize(SYS_BUS_DEVICE(&s->gpt[i]), &error_abort);
46
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpt[i]), 0, FSL_IMX7_GPTn_ADDR[i]);
47
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpt[i]), 0,
48
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
49
+ FSL_IMX7_GPTn_IRQ[i]));
17
}
50
}
18
51
19
- tmp = tcg_const_i32(a->im);
52
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
20
+ tmp = tcg_constant_i32(a->im);
21
/* FAULTMASK */
22
if (a->F) {
23
- addr = tcg_const_i32(19);
24
+ addr = tcg_constant_i32(19);
25
gen_helper_v7m_msr(cpu_env, addr, tmp);
26
- tcg_temp_free_i32(addr);
27
}
28
/* PRIMASK */
29
if (a->I) {
30
- addr = tcg_const_i32(16);
31
+ addr = tcg_constant_i32(16);
32
gen_helper_v7m_msr(cpu_env, addr, tmp);
33
- tcg_temp_free_i32(addr);
34
}
35
gen_rebuild_hflags(s, false);
36
- tcg_temp_free_i32(tmp);
37
gen_lookup_tb(s);
38
return true;
39
}
40
--
53
--
41
2.25.1
54
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
CCM derived clocks will have to be added later.
4
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate-a64.c | 12 ++++--------
9
hw/misc/imx7_ccm.c | 49 +++++++++++++++++++++++++++++++++++++---------
9
1 file changed, 4 insertions(+), 8 deletions(-)
10
1 file changed, 40 insertions(+), 9 deletions(-)
10
11
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
diff --git a/hw/misc/imx7_ccm.c b/hw/misc/imx7_ccm.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
--- a/hw/misc/imx7_ccm.c
14
+++ b/target/arm/translate-a64.c
15
+++ b/hw/misc/imx7_ccm.c
15
@@ -XXX,XX +XXX,XX @@ static void gen_address_with_allocation_tag0(TCGv_i64 dst, TCGv_i64 src)
16
@@ -XXX,XX +XXX,XX @@
16
static void gen_probe_access(DisasContext *s, TCGv_i64 ptr,
17
#include "hw/misc/imx7_ccm.h"
17
MMUAccessType acc, int log2_size)
18
#include "migration/vmstate.h"
19
20
+#include "trace.h"
21
+
22
+#define CKIH_FREQ 24000000 /* 24MHz crystal input */
23
+
24
static void imx7_analog_reset(DeviceState *dev)
18
{
25
{
19
- TCGv_i32 t_acc = tcg_const_i32(acc);
26
IMX7AnalogState *s = IMX7_ANALOG(dev);
20
- TCGv_i32 t_idx = tcg_const_i32(get_mem_index(s));
27
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx7_ccm = {
21
- TCGv_i32 t_size = tcg_const_i32(1 << log2_size);
28
static uint32_t imx7_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
22
-
29
{
23
- gen_helper_probe_access(cpu_env, ptr, t_acc, t_idx, t_size);
30
/*
24
- tcg_temp_free_i32(t_acc);
31
- * This function is "consumed" by GPT emulation code, however on
25
- tcg_temp_free_i32(t_idx);
32
- * i.MX7 each GPT block can have their own clock root. This means
26
- tcg_temp_free_i32(t_size);
33
- * that this functions needs somehow to know requester's identity
27
+ gen_helper_probe_access(cpu_env, ptr,
34
- * and the way to pass it: be it via additional IMXClk constants
28
+ tcg_constant_i32(acc),
35
- * or by adding another argument to this method needs to be
29
+ tcg_constant_i32(get_mem_index(s)),
36
- * figured out
30
+ tcg_constant_i32(1 << log2_size));
37
+ * This function is "consumed" by GPT emulation code. Some clocks
38
+ * have fixed frequencies and we can provide requested frequency
39
+ * easily. However for CCM provided clocks (like IPG) each GPT
40
+ * timer can have its own clock root.
41
+ * This means we need additional information when calling this
42
+ * function to know the requester's identity.
43
*/
44
- qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Not implemented\n",
45
- TYPE_IMX7_CCM, __func__);
46
- return 0;
47
+ uint32_t freq = 0;
48
+
49
+ switch (clock) {
50
+ case CLK_NONE:
51
+ break;
52
+ case CLK_32k:
53
+ freq = CKIL_FREQ;
54
+ break;
55
+ case CLK_HIGH:
56
+ freq = CKIH_FREQ;
57
+ break;
58
+ case CLK_IPG:
59
+ case CLK_IPG_HIGH:
60
+ /*
61
+ * For now we don't have a way to figure out the device this
62
+ * function is called for. Until then the IPG derived clocks
63
+ * are left unimplemented.
64
+ */
65
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Clock %d Not implemented\n",
66
+ TYPE_IMX7_CCM, __func__, clock);
67
+ break;
68
+ default:
69
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
70
+ TYPE_IMX7_CCM, __func__, clock);
71
+ break;
72
+ }
73
+
74
+ trace_ccm_clock_freq(clock, freq);
75
+
76
+ return freq;
31
}
77
}
32
78
33
/*
79
static void imx7_ccm_class_init(ObjectClass *klass, void *data)
34
--
80
--
35
2.25.1
81
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 10 ++--------
9
1 file changed, 2 insertions(+), 8 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
16
int core_idx)
17
{
18
if (tag_checked && s->mte_active[is_unpriv]) {
19
- TCGv_i32 tcg_desc;
20
TCGv_i64 ret;
21
int desc = 0;
22
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
24
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
25
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
26
desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
27
- tcg_desc = tcg_const_i32(desc);
28
29
ret = new_tmp_a64(s);
30
- gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
31
- tcg_temp_free_i32(tcg_desc);
32
+ gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
33
34
return ret;
35
}
36
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
37
bool tag_checked, int size)
38
{
39
if (tag_checked && s->mte_active[0]) {
40
- TCGv_i32 tcg_desc;
41
TCGv_i64 ret;
42
int desc = 0;
43
44
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
45
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
46
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
47
desc = FIELD_DP32(desc, MTEDESC, SIZEM1, size - 1);
48
- tcg_desc = tcg_const_i32(desc);
49
50
ret = new_tmp_a64(s);
51
- gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
52
- tcg_temp_free_i32(tcg_desc);
53
+ gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
54
55
return ret;
56
}
57
--
58
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Note that tmp was doing double-duty as zero
4
and then later as a temporary in its own right.
5
Split the use of 0 to a new variable 'zero'.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20220426163043.100432-5-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-a64.c | 26 +++++++++++++-------------
13
1 file changed, 13 insertions(+), 13 deletions(-)
14
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
19
@@ -XXX,XX +XXX,XX @@ static void gen_adc(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
20
static void gen_adc_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
21
{
22
if (sf) {
23
- TCGv_i64 result, cf_64, vf_64, tmp;
24
- result = tcg_temp_new_i64();
25
- cf_64 = tcg_temp_new_i64();
26
- vf_64 = tcg_temp_new_i64();
27
- tmp = tcg_const_i64(0);
28
+ TCGv_i64 result = tcg_temp_new_i64();
29
+ TCGv_i64 cf_64 = tcg_temp_new_i64();
30
+ TCGv_i64 vf_64 = tcg_temp_new_i64();
31
+ TCGv_i64 tmp = tcg_temp_new_i64();
32
+ TCGv_i64 zero = tcg_constant_i64(0);
33
34
tcg_gen_extu_i32_i64(cf_64, cpu_CF);
35
- tcg_gen_add2_i64(result, cf_64, t0, tmp, cf_64, tmp);
36
- tcg_gen_add2_i64(result, cf_64, result, cf_64, t1, tmp);
37
+ tcg_gen_add2_i64(result, cf_64, t0, zero, cf_64, zero);
38
+ tcg_gen_add2_i64(result, cf_64, result, cf_64, t1, zero);
39
tcg_gen_extrl_i64_i32(cpu_CF, cf_64);
40
gen_set_NZ64(result);
41
42
@@ -XXX,XX +XXX,XX @@ static void gen_adc_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
43
tcg_temp_free_i64(cf_64);
44
tcg_temp_free_i64(result);
45
} else {
46
- TCGv_i32 t0_32, t1_32, tmp;
47
- t0_32 = tcg_temp_new_i32();
48
- t1_32 = tcg_temp_new_i32();
49
- tmp = tcg_const_i32(0);
50
+ TCGv_i32 t0_32 = tcg_temp_new_i32();
51
+ TCGv_i32 t1_32 = tcg_temp_new_i32();
52
+ TCGv_i32 tmp = tcg_temp_new_i32();
53
+ TCGv_i32 zero = tcg_constant_i32(0);
54
55
tcg_gen_extrl_i64_i32(t0_32, t0);
56
tcg_gen_extrl_i64_i32(t1_32, t1);
57
- tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, tmp, cpu_CF, tmp);
58
- tcg_gen_add2_i32(cpu_NF, cpu_CF, cpu_NF, cpu_CF, t1_32, tmp);
59
+ tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, zero, cpu_CF, zero);
60
+ tcg_gen_add2_i32(cpu_NF, cpu_CF, cpu_NF, cpu_CF, t1_32, zero);
61
62
tcg_gen_mov_i32(cpu_ZF, cpu_NF);
63
tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
64
--
65
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-6-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 13 +++----------
9
1 file changed, 3 insertions(+), 10 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void gen_axflag(void)
16
static void handle_msr_i(DisasContext *s, uint32_t insn,
17
unsigned int op1, unsigned int op2, unsigned int crm)
18
{
19
- TCGv_i32 t1;
20
int op = op1 << 3 | op2;
21
22
/* End the TB by default, chaining is ok. */
23
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
24
if (s->current_el == 0) {
25
goto do_unallocated;
26
}
27
- t1 = tcg_const_i32(crm & PSTATE_SP);
28
- gen_helper_msr_i_spsel(cpu_env, t1);
29
- tcg_temp_free_i32(t1);
30
+ gen_helper_msr_i_spsel(cpu_env, tcg_constant_i32(crm & PSTATE_SP));
31
break;
32
33
case 0x19: /* SSBS */
34
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
35
break;
36
37
case 0x1e: /* DAIFSet */
38
- t1 = tcg_const_i32(crm);
39
- gen_helper_msr_i_daifset(cpu_env, t1);
40
- tcg_temp_free_i32(t1);
41
+ gen_helper_msr_i_daifset(cpu_env, tcg_constant_i32(crm));
42
break;
43
44
case 0x1f: /* DAIFClear */
45
- t1 = tcg_const_i32(crm);
46
- gen_helper_msr_i_daifclear(cpu_env, t1);
47
- tcg_temp_free_i32(t1);
48
+ gen_helper_msr_i_daifclear(cpu_env, tcg_constant_i32(crm));
49
/* For DAIFClear, exit the cpu loop to re-evaluate pending IRQs. */
50
s->base.is_jmp = DISAS_UPDATE_EXIT;
51
break;
52
--
53
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-8-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 5 +----
9
1 file changed, 1 insertion(+), 4 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
16
int opc = extract32(insn, 21, 3);
17
int op2_ll = extract32(insn, 0, 5);
18
int imm16 = extract32(insn, 5, 16);
19
- TCGv_i32 tmp;
20
21
switch (opc) {
22
case 0:
23
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
24
break;
25
}
26
gen_a64_set_pc_im(s->pc_curr);
27
- tmp = tcg_const_i32(syn_aa64_smc(imm16));
28
- gen_helper_pre_smc(cpu_env, tmp);
29
- tcg_temp_free_i32(tmp);
30
+ gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
31
gen_ss_advance(s);
32
gen_exception_insn(s, s->base.pc_next, EXCP_SMC,
33
syn_aa64_smc(imm16), 3);
34
--
35
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-9-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 6 ++----
9
1 file changed, 2 insertions(+), 4 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
16
tcg_temp_free_i64(cmp);
17
} else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
18
if (HAVE_CMPXCHG128) {
19
- TCGv_i32 tcg_rs = tcg_const_i32(rs);
20
+ TCGv_i32 tcg_rs = tcg_constant_i32(rs);
21
if (s->be_data == MO_LE) {
22
gen_helper_casp_le_parallel(cpu_env, tcg_rs,
23
clean_addr, t1, t2);
24
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
25
gen_helper_casp_be_parallel(cpu_env, tcg_rs,
26
clean_addr, t1, t2);
27
}
28
- tcg_temp_free_i32(tcg_rs);
29
} else {
30
gen_helper_exit_atomic(cpu_env);
31
s->base.is_jmp = DISAS_NORETURN;
32
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
33
TCGv_i64 a2 = tcg_temp_new_i64();
34
TCGv_i64 c1 = tcg_temp_new_i64();
35
TCGv_i64 c2 = tcg_temp_new_i64();
36
- TCGv_i64 zero = tcg_const_i64(0);
37
+ TCGv_i64 zero = tcg_constant_i64(0);
38
39
/* Load the two words, in memory order. */
40
tcg_gen_qemu_ld_i64(d1, clean_addr, memidx,
41
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
42
tcg_temp_free_i64(a2);
43
tcg_temp_free_i64(c1);
44
tcg_temp_free_i64(c2);
45
- tcg_temp_free_i64(zero);
46
47
/* Write back the data from memory to Rs. */
48
tcg_gen_mov_i64(s1, d1);
49
--
50
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-11-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 9 +++------
9
1 file changed, 3 insertions(+), 6 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
16
mop = endian | size | align;
17
18
elements = (is_q ? 16 : 8) >> size;
19
- tcg_ebytes = tcg_const_i64(1 << size);
20
+ tcg_ebytes = tcg_constant_i64(1 << size);
21
for (r = 0; r < rpt; r++) {
22
int e;
23
for (e = 0; e < elements; e++) {
24
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
25
}
26
}
27
}
28
- tcg_temp_free_i64(tcg_ebytes);
29
30
if (!is_store) {
31
/* For non-quad operations, setting a slice of the low
32
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
33
total);
34
mop = finalize_memop(s, scale);
35
36
- tcg_ebytes = tcg_const_i64(1 << scale);
37
+ tcg_ebytes = tcg_constant_i64(1 << scale);
38
for (xs = 0; xs < selem; xs++) {
39
if (replicate) {
40
/* Load and replicate to all elements */
41
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
42
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
43
rt = (rt + 1) % 32;
44
}
45
- tcg_temp_free_i64(tcg_ebytes);
46
47
if (is_postidx) {
48
if (rm == 31) {
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
50
51
if (is_zero) {
52
TCGv_i64 clean_addr = clean_data_tbi(s, addr);
53
- TCGv_i64 tcg_zero = tcg_const_i64(0);
54
+ TCGv_i64 tcg_zero = tcg_constant_i64(0);
55
int mem_index = get_mem_index(s);
56
int i, n = (1 + is_pair) << LOG2_TAG_GRANULE;
57
58
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
59
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
60
tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index, MO_UQ);
61
}
62
- tcg_temp_free_i64(tcg_zero);
63
}
64
65
if (index != 0) {
66
--
67
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 12 ++++--------
9
1 file changed, 4 insertions(+), 8 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_imm(DisasContext *s, uint32_t insn)
16
tcg_gen_addi_i64(tcg_result, tcg_rn, imm);
17
}
18
} else {
19
- TCGv_i64 tcg_imm = tcg_const_i64(imm);
20
+ TCGv_i64 tcg_imm = tcg_constant_i64(imm);
21
if (sub_op) {
22
gen_sub_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
23
} else {
24
gen_add_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
25
}
26
- tcg_temp_free_i64(tcg_imm);
27
}
28
29
if (is_64bit) {
30
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_imm_with_tags(DisasContext *s, uint32_t insn)
31
tcg_rd = cpu_reg_sp(s, rd);
32
33
if (s->ata) {
34
- TCGv_i32 offset = tcg_const_i32(imm);
35
- TCGv_i32 tag_offset = tcg_const_i32(uimm4);
36
-
37
- gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn, offset, tag_offset);
38
- tcg_temp_free_i32(tag_offset);
39
- tcg_temp_free_i32(offset);
40
+ gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
41
+ tcg_constant_i32(imm),
42
+ tcg_constant_i32(uimm4));
43
} else {
44
tcg_gen_addi_i64(tcg_rd, tcg_rn, imm);
45
gen_address_with_allocation_tag0(tcg_rd, tcg_rd);
46
--
47
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-13-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 5 +----
9
1 file changed, 1 insertion(+), 4 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_movw_imm(DisasContext *s, uint32_t insn)
16
int opc = extract32(insn, 29, 2);
17
int pos = extract32(insn, 21, 2) << 4;
18
TCGv_i64 tcg_rd = cpu_reg(s, rd);
19
- TCGv_i64 tcg_imm;
20
21
if (!sf && (pos >= 32)) {
22
unallocated_encoding(s);
23
@@ -XXX,XX +XXX,XX @@ static void disas_movw_imm(DisasContext *s, uint32_t insn)
24
tcg_gen_movi_i64(tcg_rd, imm);
25
break;
26
case 3: /* MOVK */
27
- tcg_imm = tcg_const_i64(imm);
28
- tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_imm, pos, 16);
29
- tcg_temp_free_i64(tcg_imm);
30
+ tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_constant_i64(imm), pos, 16);
31
if (!sf) {
32
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
33
}
34
--
35
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-15-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 3 +--
9
1 file changed, 1 insertion(+), 2 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
16
tcg_rd = cpu_reg(s, rd);
17
18
a64_test_cc(&c, cond);
19
- zero = tcg_const_i64(0);
20
+ zero = tcg_constant_i64(0);
21
22
if (rn == 31 && rm == 31 && (else_inc ^ else_inv)) {
23
/* CSET & CSETM. */
24
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
25
tcg_gen_movcond_i64(c.cond, tcg_rd, c.value, zero, t_true, t_false);
26
}
27
28
- tcg_temp_free_i64(zero);
29
a64_free_cc(&c);
30
31
if (!sf) {
32
--
33
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Existing temp usage treats t1 as both zero and as a
4
temporary. Rearrange to only require one temporary,
5
so remove t1 and rename t2.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20220426163043.100432-17-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-a64.c | 12 +++++-------
13
1 file changed, 5 insertions(+), 7 deletions(-)
14
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
19
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
20
if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
21
goto do_unallocated;
22
} else {
23
- TCGv_i64 t1 = tcg_const_i64(1);
24
- TCGv_i64 t2 = tcg_temp_new_i64();
25
+ TCGv_i64 t = tcg_temp_new_i64();
26
27
- tcg_gen_extract_i64(t2, cpu_reg_sp(s, rn), 56, 4);
28
- tcg_gen_shl_i64(t1, t1, t2);
29
- tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t1);
30
+ tcg_gen_extract_i64(t, cpu_reg_sp(s, rn), 56, 4);
31
+ tcg_gen_shl_i64(t, tcg_constant_i64(1), t);
32
+ tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t);
33
34
- tcg_temp_free_i64(t1);
35
- tcg_temp_free_i64(t2);
36
+ tcg_temp_free_i64(t);
37
}
38
break;
39
case 8: /* LSLV */
40
--
41
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 21 +++++----------------
9
1 file changed, 5 insertions(+), 16 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void handle_shri_with_rndacc(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
16
/* Deal with the rounding step */
17
if (round) {
18
if (extended_result) {
19
- TCGv_i64 tcg_zero = tcg_const_i64(0);
20
+ TCGv_i64 tcg_zero = tcg_constant_i64(0);
21
if (!is_u) {
22
/* take care of sign extending tcg_res */
23
tcg_gen_sari_i64(tcg_src_hi, tcg_src, 63);
24
@@ -XXX,XX +XXX,XX @@ static void handle_shri_with_rndacc(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
25
tcg_src, tcg_zero,
26
tcg_rnd, tcg_zero);
27
}
28
- tcg_temp_free_i64(tcg_zero);
29
} else {
30
tcg_gen_add_i64(tcg_src, tcg_src, tcg_rnd);
31
}
32
@@ -XXX,XX +XXX,XX @@ static void handle_scalar_simd_shri(DisasContext *s,
33
}
34
35
if (round) {
36
- uint64_t round_const = 1ULL << (shift - 1);
37
- tcg_round = tcg_const_i64(round_const);
38
+ tcg_round = tcg_constant_i64(1ULL << (shift - 1));
39
} else {
40
tcg_round = NULL;
41
}
42
@@ -XXX,XX +XXX,XX @@ static void handle_scalar_simd_shri(DisasContext *s,
43
44
tcg_temp_free_i64(tcg_rn);
45
tcg_temp_free_i64(tcg_rd);
46
- if (round) {
47
- tcg_temp_free_i64(tcg_round);
48
- }
49
}
50
51
/* SHL/SLI - Scalar shift left */
52
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
53
tcg_final = tcg_const_i64(0);
54
55
if (round) {
56
- uint64_t round_const = 1ULL << (shift - 1);
57
- tcg_round = tcg_const_i64(round_const);
58
+ tcg_round = tcg_constant_i64(1ULL << (shift - 1));
59
} else {
60
tcg_round = NULL;
61
}
62
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
63
write_vec_element(s, tcg_final, rd, 1, MO_64);
64
}
65
66
- if (round) {
67
- tcg_temp_free_i64(tcg_round);
68
- }
69
tcg_temp_free_i64(tcg_rn);
70
tcg_temp_free_i64(tcg_rd);
71
tcg_temp_free_i32(tcg_rd_narrowed);
72
@@ -XXX,XX +XXX,XX @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
73
}
74
75
if (size == 3) {
76
- TCGv_i64 tcg_shift = tcg_const_i64(shift);
77
+ TCGv_i64 tcg_shift = tcg_constant_i64(shift);
78
static NeonGenTwo64OpEnvFn * const fns[2][2] = {
79
{ gen_helper_neon_qshl_s64, gen_helper_neon_qshlu_s64 },
80
{ NULL, gen_helper_neon_qshl_u64 },
81
@@ -XXX,XX +XXX,XX @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
82
83
tcg_temp_free_i64(tcg_op);
84
}
85
- tcg_temp_free_i64(tcg_shift);
86
clear_vec_high(s, is_q, rd);
87
} else {
88
- TCGv_i32 tcg_shift = tcg_const_i32(shift);
89
+ TCGv_i32 tcg_shift = tcg_constant_i32(shift);
90
static NeonGenTwoOpEnvFn * const fns[2][2][3] = {
91
{
92
{ gen_helper_neon_qshl_s8,
93
@@ -XXX,XX +XXX,XX @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
94
95
tcg_temp_free_i32(tcg_op);
96
}
97
- tcg_temp_free_i32(tcg_shift);
98
99
if (!scalar) {
100
clear_vec_high(s, is_q, rd);
101
--
102
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-21-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 40 ++++++++++----------------------------
9
1 file changed, 10 insertions(+), 30 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
16
int passes = scalar ? 1 : 2;
17
18
if (scalar) {
19
- tcg_res[1] = tcg_const_i32(0);
20
+ tcg_res[1] = tcg_constant_i32(0);
21
}
22
23
for (pass = 0; pass < passes; pass++) {
24
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
25
}
26
27
if (is_scalar) {
28
- TCGv_i64 tcg_zero = tcg_const_i64(0);
29
- write_vec_element(s, tcg_zero, rd, 0, MO_64);
30
- tcg_temp_free_i64(tcg_zero);
31
+ write_vec_element(s, tcg_constant_i64(0), rd, 0, MO_64);
32
}
33
write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
34
}
35
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
36
case 0x1c: /* FCVTAS */
37
case 0x3a: /* FCVTPS */
38
case 0x3b: /* FCVTZS */
39
- {
40
- TCGv_i32 tcg_shift = tcg_const_i32(0);
41
- gen_helper_vfp_tosls(tcg_rd, tcg_rn, tcg_shift, tcg_fpstatus);
42
- tcg_temp_free_i32(tcg_shift);
43
+ gen_helper_vfp_tosls(tcg_rd, tcg_rn, tcg_constant_i32(0),
44
+ tcg_fpstatus);
45
break;
46
- }
47
case 0x5a: /* FCVTNU */
48
case 0x5b: /* FCVTMU */
49
case 0x5c: /* FCVTAU */
50
case 0x7a: /* FCVTPU */
51
case 0x7b: /* FCVTZU */
52
- {
53
- TCGv_i32 tcg_shift = tcg_const_i32(0);
54
- gen_helper_vfp_touls(tcg_rd, tcg_rn, tcg_shift, tcg_fpstatus);
55
- tcg_temp_free_i32(tcg_shift);
56
+ gen_helper_vfp_touls(tcg_rd, tcg_rn, tcg_constant_i32(0),
57
+ tcg_fpstatus);
58
break;
59
- }
60
default:
61
g_assert_not_reached();
62
}
63
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
64
read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_64);
65
66
if (round) {
67
- uint64_t round_const = 1ULL << (shift - 1);
68
- tcg_round = tcg_const_i64(round_const);
69
+ tcg_round = tcg_constant_i64(1ULL << (shift - 1));
70
} else {
71
tcg_round = NULL;
72
}
73
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
74
} else {
75
write_vec_element(s, tcg_final, rd, 1, MO_64);
76
}
77
- if (round) {
78
- tcg_temp_free_i64(tcg_round);
79
- }
80
tcg_temp_free_i64(tcg_rn);
81
tcg_temp_free_i64(tcg_rd);
82
tcg_temp_free_i64(tcg_final);
83
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
84
}
85
}
86
if (!is_q) {
87
- tcg_res[1] = tcg_const_i64(0);
88
+ tcg_res[1] = tcg_constant_i64(0);
89
}
90
for (pass = 0; pass < 2; pass++) {
91
write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
92
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
93
case 0x1c: /* FCVTAS */
94
case 0x3a: /* FCVTPS */
95
case 0x3b: /* FCVTZS */
96
- {
97
- TCGv_i32 tcg_shift = tcg_const_i32(0);
98
gen_helper_vfp_tosls(tcg_res, tcg_op,
99
- tcg_shift, tcg_fpstatus);
100
- tcg_temp_free_i32(tcg_shift);
101
+ tcg_constant_i32(0), tcg_fpstatus);
102
break;
103
- }
104
case 0x5a: /* FCVTNU */
105
case 0x5b: /* FCVTMU */
106
case 0x5c: /* FCVTAU */
107
case 0x7a: /* FCVTPU */
108
case 0x7b: /* FCVTZU */
109
- {
110
- TCGv_i32 tcg_shift = tcg_const_i32(0);
111
gen_helper_vfp_touls(tcg_res, tcg_op,
112
- tcg_shift, tcg_fpstatus);
113
- tcg_temp_free_i32(tcg_shift);
114
+ tcg_constant_i32(0), tcg_fpstatus);
115
break;
116
- }
117
case 0x18: /* FRINTN */
118
case 0x19: /* FRINTM */
119
case 0x38: /* FRINTP */
120
--
121
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-24-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate.c | 25 ++++++++++---------------
9
1 file changed, 10 insertions(+), 15 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
16
gen_op_iwmmxt_movq_M0_wRn(wrd);
17
switch ((insn >> 6) & 3) {
18
case 0:
19
- tmp2 = tcg_const_i32(0xff);
20
- tmp3 = tcg_const_i32((insn & 7) << 3);
21
+ tmp2 = tcg_constant_i32(0xff);
22
+ tmp3 = tcg_constant_i32((insn & 7) << 3);
23
break;
24
case 1:
25
- tmp2 = tcg_const_i32(0xffff);
26
- tmp3 = tcg_const_i32((insn & 3) << 4);
27
+ tmp2 = tcg_constant_i32(0xffff);
28
+ tmp3 = tcg_constant_i32((insn & 3) << 4);
29
break;
30
case 2:
31
- tmp2 = tcg_const_i32(0xffffffff);
32
- tmp3 = tcg_const_i32((insn & 1) << 5);
33
+ tmp2 = tcg_constant_i32(0xffffffff);
34
+ tmp3 = tcg_constant_i32((insn & 1) << 5);
35
break;
36
default:
37
- tmp2 = NULL;
38
- tmp3 = NULL;
39
+ g_assert_not_reached();
40
}
41
gen_helper_iwmmxt_insr(cpu_M0, cpu_M0, tmp, tmp2, tmp3);
42
- tcg_temp_free_i32(tmp3);
43
- tcg_temp_free_i32(tmp2);
44
tcg_temp_free_i32(tmp);
45
gen_op_iwmmxt_movq_wRn_M0(wrd);
46
gen_op_iwmmxt_set_mup();
47
@@ -XXX,XX +XXX,XX @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
48
rd0 = (insn >> 16) & 0xf;
49
rd1 = (insn >> 0) & 0xf;
50
gen_op_iwmmxt_movq_M0_wRn(rd0);
51
- tmp = tcg_const_i32((insn >> 20) & 3);
52
iwmmxt_load_reg(cpu_V1, rd1);
53
- gen_helper_iwmmxt_align(cpu_M0, cpu_M0, cpu_V1, tmp);
54
- tcg_temp_free_i32(tmp);
55
+ gen_helper_iwmmxt_align(cpu_M0, cpu_M0, cpu_V1,
56
+ tcg_constant_i32((insn >> 20) & 3));
57
gen_op_iwmmxt_movq_wRn_M0(wrd);
58
gen_op_iwmmxt_set_mup();
59
break;
60
@@ -XXX,XX +XXX,XX @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
61
wrd = (insn >> 12) & 0xf;
62
rd0 = (insn >> 16) & 0xf;
63
gen_op_iwmmxt_movq_M0_wRn(rd0);
64
- tmp = tcg_const_i32(((insn >> 16) & 0xf0) | (insn & 0x0f));
65
+ tmp = tcg_constant_i32(((insn >> 16) & 0xf0) | (insn & 0x0f));
66
gen_helper_iwmmxt_shufh(cpu_M0, cpu_env, cpu_M0, tmp);
67
- tcg_temp_free_i32(tmp);
68
gen_op_iwmmxt_movq_wRn_M0(wrd);
69
gen_op_iwmmxt_set_mup();
70
gen_op_iwmmxt_set_cup();
71
--
72
2.25.1
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20220426163043.100432-27-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate.c | 43 +++++++++++++-----------------------------
9
1 file changed, 13 insertions(+), 30 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
16
* Note that on XScale all cp0..c13 registers do an access check
17
* call in order to handle c15_cpar.
18
*/
19
- TCGv_ptr tmpptr;
20
- TCGv_i32 tcg_syn, tcg_isread;
21
uint32_t syndrome;
22
23
/* Note that since we are an implementation which takes an
24
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
25
26
gen_set_condexec(s);
27
gen_set_pc_im(s, s->pc_curr);
28
- tmpptr = tcg_const_ptr(ri);
29
- tcg_syn = tcg_const_i32(syndrome);
30
- tcg_isread = tcg_const_i32(isread);
31
- gen_helper_access_check_cp_reg(cpu_env, tmpptr, tcg_syn,
32
- tcg_isread);
33
- tcg_temp_free_ptr(tmpptr);
34
- tcg_temp_free_i32(tcg_syn);
35
- tcg_temp_free_i32(tcg_isread);
36
+ gen_helper_access_check_cp_reg(cpu_env,
37
+ tcg_constant_ptr(ri),
38
+ tcg_constant_i32(syndrome),
39
+ tcg_constant_i32(isread));
40
} else if (ri->type & ARM_CP_RAISES_EXC) {
41
/*
42
* The readfn or writefn might raise an exception;
43
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
44
TCGv_i64 tmp64;
45
TCGv_i32 tmp;
46
if (ri->type & ARM_CP_CONST) {
47
- tmp64 = tcg_const_i64(ri->resetvalue);
48
+ tmp64 = tcg_constant_i64(ri->resetvalue);
49
} else if (ri->readfn) {
50
- TCGv_ptr tmpptr;
51
tmp64 = tcg_temp_new_i64();
52
- tmpptr = tcg_const_ptr(ri);
53
- gen_helper_get_cp_reg64(tmp64, cpu_env, tmpptr);
54
- tcg_temp_free_ptr(tmpptr);
55
+ gen_helper_get_cp_reg64(tmp64, cpu_env,
56
+ tcg_constant_ptr(ri));
57
} else {
58
tmp64 = tcg_temp_new_i64();
59
tcg_gen_ld_i64(tmp64, cpu_env, ri->fieldoffset);
60
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
61
} else {
62
TCGv_i32 tmp;
63
if (ri->type & ARM_CP_CONST) {
64
- tmp = tcg_const_i32(ri->resetvalue);
65
+ tmp = tcg_constant_i32(ri->resetvalue);
66
} else if (ri->readfn) {
67
- TCGv_ptr tmpptr;
68
tmp = tcg_temp_new_i32();
69
- tmpptr = tcg_const_ptr(ri);
70
- gen_helper_get_cp_reg(tmp, cpu_env, tmpptr);
71
- tcg_temp_free_ptr(tmpptr);
72
+ gen_helper_get_cp_reg(tmp, cpu_env, tcg_constant_ptr(ri));
73
} else {
74
tmp = load_cpu_offset(ri->fieldoffset);
75
}
76
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
77
tcg_temp_free_i32(tmplo);
78
tcg_temp_free_i32(tmphi);
79
if (ri->writefn) {
80
- TCGv_ptr tmpptr = tcg_const_ptr(ri);
81
- gen_helper_set_cp_reg64(cpu_env, tmpptr, tmp64);
82
- tcg_temp_free_ptr(tmpptr);
83
+ gen_helper_set_cp_reg64(cpu_env, tcg_constant_ptr(ri),
84
+ tmp64);
85
} else {
86
tcg_gen_st_i64(tmp64, cpu_env, ri->fieldoffset);
87
}
88
tcg_temp_free_i64(tmp64);
89
} else {
90
+ TCGv_i32 tmp = load_reg(s, rt);
91
if (ri->writefn) {
92
- TCGv_i32 tmp;
93
- TCGv_ptr tmpptr;
94
- tmp = load_reg(s, rt);
95
- tmpptr = tcg_const_ptr(ri);
96
- gen_helper_set_cp_reg(cpu_env, tmpptr, tmp);
97
- tcg_temp_free_ptr(tmpptr);
98
+ gen_helper_set_cp_reg(cpu_env, tcg_constant_ptr(ri), tmp);
99
tcg_temp_free_i32(tmp);
100
} else {
101
- TCGv_i32 tmp = load_reg(s, rt);
102
store_cpu_offset(tmp, ri->fieldoffset, 4);
103
}
104
}
105
--
106
2.25.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-43-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
gen_helper_gvec_mem_scatter *fn = NULL;
bool be = s->be_data == MO_BE;
bool mte = s->mte_active[0];
- TCGv_i64 imm;

if (a->esz < a->msz || (a->esz == a->msz && !a->u)) {
return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
/* Treat LD1_zpiz (zn[x] + imm) the same way as LD1_zprz (rn + zm[x])
* by loading the immediate into the scalar parameter.
*/
- imm = tcg_const_i64(a->imm << a->msz);
- do_mem_zpz(s, a->rd, a->pg, a->rn, 0, imm, a->msz, false, fn);
- tcg_temp_free_i64(imm);
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0,
+ tcg_constant_i64(a->imm << a->msz), a->msz, false, fn);
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
gen_helper_gvec_mem_scatter *fn = NULL;
bool be = s->be_data == MO_BE;
bool mte = s->mte_active[0];
- TCGv_i64 imm;

if (a->esz < a->msz) {
return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
/* Treat ST1_zpiz (zn[x] + imm) the same way as ST1_zprz (rn + zm[x])
* by loading the immediate into the scalar parameter.
*/
- imm = tcg_const_i64(a->imm << a->msz);
- do_mem_zpz(s, a->rd, a->pg, a->rn, 0, imm, a->msz, true, fn);
- tcg_temp_free_i64(imm);
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0,
+ tcg_constant_i64(a->imm << a->msz), a->msz, true, fn);
return true;
}

--
2.25.1
diff view generated by jsdifflib
From: Jean-Christophe Dubois <jcd@tribudubois.net>

The i.MX6UL doesn't support the CLK_HIGH or CLK_HIGH_DIV clock sources.

Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/timer/imx_gpt.h | 1 +
hw/arm/fsl-imx6ul.c | 2 +-
hw/misc/imx6ul_ccm.c | 6 ------
hw/timer/imx_gpt.c | 25 +++++++++++++++++++++++++
4 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/include/hw/timer/imx_gpt.h b/include/hw/timer/imx_gpt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/timer/imx_gpt.h
+++ b/include/hw/timer/imx_gpt.h
@@ -XXX,XX +XXX,XX @@
#define TYPE_IMX25_GPT "imx25.gpt"
#define TYPE_IMX31_GPT "imx31.gpt"
#define TYPE_IMX6_GPT "imx6.gpt"
+#define TYPE_IMX6UL_GPT "imx6ul.gpt"
#define TYPE_IMX7_GPT "imx7.gpt"

#define TYPE_IMX_GPT TYPE_IMX25_GPT
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6ul.c
+++ b/hw/arm/fsl-imx6ul.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
*/
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
snprintf(name, NAME_SIZE, "gpt%d", i);
- object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX7_GPT);
+ object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX6UL_GPT);
}

/*
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx6ul_ccm.c
+++ b/hw/misc/imx6ul_ccm.c
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6ul_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
case CLK_32k:
freq = CKIL_FREQ;
break;
- case CLK_HIGH:
- freq = CKIH_FREQ;
- break;
- case CLK_HIGH_DIV:
- freq = CKIH_FREQ / 8;
- break;
default:
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
TYPE_IMX6UL_CCM, __func__, clock);
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/imx_gpt.c
+++ b/hw/timer/imx_gpt.c
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx6_gpt_clocks[] = {
CLK_HIGH, /* 111 reference clock */
};

+static const IMXClk imx6ul_gpt_clocks[] = {
+ CLK_NONE, /* 000 No clock source */
+ CLK_IPG, /* 001 ipg_clk, 532MHz*/
+ CLK_IPG_HIGH, /* 010 ipg_clk_highfreq */
+ CLK_EXT, /* 011 External clock */
+ CLK_32k, /* 100 ipg_clk_32k */
+ CLK_NONE, /* 101 not defined */
+ CLK_NONE, /* 110 not defined */
+ CLK_NONE, /* 111 not defined */
+};
+
static const IMXClk imx7_gpt_clocks[] = {
CLK_NONE, /* 000 No clock source */
CLK_IPG, /* 001 ipg_clk, 532MHz*/
@@ -XXX,XX +XXX,XX @@ static void imx6_gpt_init(Object *obj)
s->clocks = imx6_gpt_clocks;
}

+static void imx6ul_gpt_init(Object *obj)
+{
+ IMXGPTState *s = IMX_GPT(obj);
+
+ s->clocks = imx6ul_gpt_clocks;
+}
+
static void imx7_gpt_init(Object *obj)
{
IMXGPTState *s = IMX_GPT(obj);
@@ -XXX,XX +XXX,XX @@ static const TypeInfo imx6_gpt_info = {
.instance_init = imx6_gpt_init,
};

+static const TypeInfo imx6ul_gpt_info = {
+ .name = TYPE_IMX6UL_GPT,
+ .parent = TYPE_IMX25_GPT,
+ .instance_init = imx6ul_gpt_init,
+};
+
static const TypeInfo imx7_gpt_info = {
.name = TYPE_IMX7_GPT,
.parent = TYPE_IMX25_GPT,
@@ -XXX,XX +XXX,XX @@ static void imx_gpt_register_types(void)
type_register_static(&imx25_gpt_info);
type_register_static(&imx31_gpt_info);
type_register_static(&imx6_gpt_info);
+ type_register_static(&imx6ul_gpt_info);
type_register_static(&imx7_gpt_info);
}

--
2.25.1
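The new imx6ul_gpt_clocks[] table plays the same role as the existing imx6 and imx7 tables: it is indexed by the 3-bit clock-source selector taken from the GPT control register, and entries the SoC leaves undefined simply map to CLK_NONE. A self-contained sketch of that lookup (the enum values and the assumed bit position of the CLKSRC field are for illustration only, not copied from hw/timer/imx_gpt.c):

    enum { CLK_NONE, CLK_IPG, CLK_IPG_HIGH, CLK_EXT, CLK_32k };

    /* Resolve a control-register value to a clock id via a per-SoC table
     * such as imx6ul_gpt_clocks[]. */
    static int gpt_clock_for_cr(const int *clock_table, unsigned cr)
    {
        unsigned clksrc = (cr >> 6) & 0x7;   /* assumed CLKSRC field position */
        return clock_table[clksrc];          /* undefined selectors yield CLK_NONE */
    }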
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
}

addr = tcg_temp_new_i32();
- tmp = tcg_const_i32(mode);
/* get_r13_banked() will raise an exception if called from System mode */
gen_set_condexec(s);
gen_set_pc_im(s, s->pc_curr);
- gen_helper_get_r13_banked(addr, cpu_env, tmp);
- tcg_temp_free_i32(tmp);
+ gen_helper_get_r13_banked(addr, cpu_env, tcg_constant_i32(mode));
switch (amode) {
case 0: /* DA */
offset = -4;
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
abort();
}
tcg_gen_addi_i32(addr, addr, offset);
- tmp = tcg_const_i32(mode);
- gen_helper_set_r13_banked(cpu_env, tmp, addr);
- tcg_temp_free_i32(tmp);
+ gen_helper_set_r13_banked(cpu_env, tcg_constant_i32(mode), addr);
}
tcg_temp_free_i32(addr);
s->base.is_jmp = DISAS_UPDATE_EXIT;
--
2.25.1
diff view generated by jsdifflib
From: Jean-Christophe Dubois <jcd@tribudubois.net>

IRQs were not associated with the various GPIO devices inside i.MX7D.
This patch brings the i.MX7D on par with i.MX6.

Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Message-id: 20221226101418.415170-1-jcd@tribudubois.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/fsl-imx7.h | 15 +++++++++++++++
hw/arm/fsl-imx7.c | 31 ++++++++++++++++++++++++++++++-
2 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx7.h
+++ b/include/hw/arm/fsl-imx7.h
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
FSL_IMX7_GPT3_IRQ = 53,
FSL_IMX7_GPT4_IRQ = 52,

+ FSL_IMX7_GPIO1_LOW_IRQ = 64,
+ FSL_IMX7_GPIO1_HIGH_IRQ = 65,
+ FSL_IMX7_GPIO2_LOW_IRQ = 66,
+ FSL_IMX7_GPIO2_HIGH_IRQ = 67,
+ FSL_IMX7_GPIO3_LOW_IRQ = 68,
+ FSL_IMX7_GPIO3_HIGH_IRQ = 69,
+ FSL_IMX7_GPIO4_LOW_IRQ = 70,
+ FSL_IMX7_GPIO4_HIGH_IRQ = 71,
+ FSL_IMX7_GPIO5_LOW_IRQ = 72,
+ FSL_IMX7_GPIO5_HIGH_IRQ = 73,
+ FSL_IMX7_GPIO6_LOW_IRQ = 74,
+ FSL_IMX7_GPIO6_HIGH_IRQ = 75,
+ FSL_IMX7_GPIO7_LOW_IRQ = 76,
+ FSL_IMX7_GPIO7_HIGH_IRQ = 77,
+
FSL_IMX7_WDOG1_IRQ = 78,
FSL_IMX7_WDOG2_IRQ = 79,
FSL_IMX7_WDOG3_IRQ = 10,
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx7.c
+++ b/hw/arm/fsl-imx7.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
FSL_IMX7_GPIO7_ADDR,
};

+ static const int FSL_IMX7_GPIOn_LOW_IRQ[FSL_IMX7_NUM_GPIOS] = {
+ FSL_IMX7_GPIO1_LOW_IRQ,
+ FSL_IMX7_GPIO2_LOW_IRQ,
+ FSL_IMX7_GPIO3_LOW_IRQ,
+ FSL_IMX7_GPIO4_LOW_IRQ,
+ FSL_IMX7_GPIO5_LOW_IRQ,
+ FSL_IMX7_GPIO6_LOW_IRQ,
+ FSL_IMX7_GPIO7_LOW_IRQ,
+ };
+
+ static const int FSL_IMX7_GPIOn_HIGH_IRQ[FSL_IMX7_NUM_GPIOS] = {
+ FSL_IMX7_GPIO1_HIGH_IRQ,
+ FSL_IMX7_GPIO2_HIGH_IRQ,
+ FSL_IMX7_GPIO3_HIGH_IRQ,
+ FSL_IMX7_GPIO4_HIGH_IRQ,
+ FSL_IMX7_GPIO5_HIGH_IRQ,
+ FSL_IMX7_GPIO6_HIGH_IRQ,
+ FSL_IMX7_GPIO7_HIGH_IRQ,
+ };
+
sysbus_realize(SYS_BUS_DEVICE(&s->gpio[i]), &error_abort);
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0, FSL_IMX7_GPIOn_ADDR[i]);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0,
+ FSL_IMX7_GPIOn_ADDR[i]);
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 0,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_GPIOn_LOW_IRQ[i]));
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 1,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_GPIOn_HIGH_IRQ[i]));
}

/*
--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_s_rri_rot(DisasContext *s, arg_s_rri_rot *a,
void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32),
int logic_cc, StoreRegKind kind)
{
- TCGv_i32 tmp1, tmp2;
+ TCGv_i32 tmp1;
uint32_t imm;

imm = ror32(a->imm, a->rot);
if (logic_cc && a->rot) {
tcg_gen_movi_i32(cpu_CF, imm >> 31);
}
- tmp2 = tcg_const_i32(imm);
tmp1 = load_reg(s, a->rn);

- gen(tmp1, tmp1, tmp2);
- tcg_temp_free_i32(tmp2);
+ gen(tmp1, tmp1, tcg_constant_i32(imm));

if (logic_cc) {
gen_logic_CC(tmp1);
@@ -XXX,XX +XXX,XX @@ static bool op_s_rxi_rot(DisasContext *s, arg_s_rri_rot *a,
if (logic_cc && a->rot) {
tcg_gen_movi_i32(cpu_CF, imm >> 31);
}
- tmp = tcg_const_i32(imm);

- gen(tmp, tmp);
+ tmp = tcg_temp_new_i32();
+ gen(tmp, tcg_constant_i32(imm));
+
if (logic_cc) {
gen_logic_CC(tmp);
}
--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_MRS_v7m(DisasContext *s, arg_MRS_v7m *a)
if (!arm_dc_feature(s, ARM_FEATURE_M)) {
return false;
}
- tmp = tcg_const_i32(a->sysm);
- gen_helper_v7m_mrs(tmp, cpu_env, tmp);
+ tmp = tcg_temp_new_i32();
+ gen_helper_v7m_mrs(tmp, cpu_env, tcg_constant_i32(a->sysm));
store_reg(s, a->rd, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_MSR_v7m(DisasContext *s, arg_MSR_v7m *a)
if (!arm_dc_feature(s, ARM_FEATURE_M)) {
return false;
}
- addr = tcg_const_i32((a->mask << 10) | a->sysm);
+ addr = tcg_constant_i32((a->mask << 10) | a->sysm);
reg = load_reg(s, a->rn);
gen_helper_v7m_msr(cpu_env, addr, reg);
- tcg_temp_free_i32(addr);
tcg_temp_free_i32(reg);
/* If we wrote to CONTROL, the EL might have changed */
gen_rebuild_hflags(s, true);
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_TT(DisasContext *s, arg_TT *a)
}

addr = load_reg(s, a->rn);
- tmp = tcg_const_i32((a->A << 1) | a->T);
- gen_helper_v7m_tt(tmp, cpu_env, addr, tmp);
+ tmp = tcg_temp_new_i32();
+ gen_helper_v7m_tt(tmp, cpu_env, addr, tcg_constant_i32((a->A << 1) | a->T));
tcg_temp_free_i32(addr);
store_reg(s, a->rd, tmp);
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_PKH(DisasContext *s, arg_PKH *a)
static bool op_sat(DisasContext *s, arg_sat *a,
void (*gen)(TCGv_i32, TCGv_env, TCGv_i32, TCGv_i32))
{
- TCGv_i32 tmp, satimm;
+ TCGv_i32 tmp;
int shift = a->imm;

if (!ENABLE_ARCH_6) {
@@ -XXX,XX +XXX,XX @@ static bool op_sat(DisasContext *s, arg_sat *a,
tcg_gen_shli_i32(tmp, tmp, shift);
}

- satimm = tcg_const_i32(a->satimm);
- gen(tmp, cpu_env, tmp, satimm);
- tcg_temp_free_i32(satimm);
+ gen(tmp, cpu_env, tmp, tcg_constant_i32(a->satimm));

store_reg(s, a->rd, tmp);
return true;
@@ -XXX,XX +XXX,XX @@ static bool op_smmla(DisasContext *s, arg_rrrr *a, bool round, bool sub)
* a non-zero multiplicand lowpart, and the correct result
* lowpart for rounding.
*/
- TCGv_i32 zero = tcg_const_i32(0);
- tcg_gen_sub2_i32(t2, t1, zero, t3, t2, t1);
- tcg_temp_free_i32(zero);
+ tcg_gen_sub2_i32(t2, t1, tcg_constant_i32(0), t3, t2, t1);
} else {
tcg_gen_add_i32(t1, t1, t3);
}
--
2.25.1
diff view generated by jsdifflib
From: Stephen Longfield <slongfield@google.com>

Size is used at lines 1088/1188 for the loop, which reads the last 4
bytes from the crc_ptr, so it does need to be increased; however, it
shouldn't be increased before the buffer is passed to the CRC
computation, or the crc32 function will access uninitialized memory.

This was pointed out to me by clg@kaod.org during the code review of
a similar patch to hw/net/ftgmac100.c.

Change-Id: Ib0464303b191af1e28abeb2f5105eb25aadb5e9b
Signed-off-by: Stephen Longfield <slongfield@google.com>
Reviewed-by: Patrick Venture <venture@google.com>
Message-id: 20221221183202.3788132-1-slongfield@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/imx_fec.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
return 0;
}

- /* 4 bytes for the CRC. */
- size += 4;
crc = cpu_to_be32(crc32(~0, buf, size));
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
+ size += 4;
crc_ptr = (uint8_t *) &crc;

/* Huge frames are truncated. */
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
return 0;
}

- /* 4 bytes for the CRC. */
- size += 4;
crc = cpu_to_be32(crc32(~0, buf, size));
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
+ size += 4;
crc_ptr = (uint8_t *) &crc;

if (shift16) {
--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_stm(DisasContext *s, arg_ldst_block *a, int min_n)
{
int i, j, n, list, mem_idx;
bool user = a->u;
- TCGv_i32 addr, tmp, tmp2;
+ TCGv_i32 addr, tmp;

if (user) {
/* STM (user) */
@@ -XXX,XX +XXX,XX @@ static bool op_stm(DisasContext *s, arg_ldst_block *a, int min_n)

if (user && i != 15) {
tmp = tcg_temp_new_i32();
- tmp2 = tcg_const_i32(i);
- gen_helper_get_user_reg(tmp, cpu_env, tmp2);
- tcg_temp_free_i32(tmp2);
+ gen_helper_get_user_reg(tmp, cpu_env, tcg_constant_i32(i));
} else {
tmp = load_reg(s, i);
}
@@ -XXX,XX +XXX,XX @@ static bool do_ldm(DisasContext *s, arg_ldst_block *a, int min_n)
bool loaded_base;
bool user = a->u;
bool exc_return = false;
- TCGv_i32 addr, tmp, tmp2, loaded_var;
+ TCGv_i32 addr, tmp, loaded_var;

if (user) {
/* LDM (user), LDM (exception return) */
@@ -XXX,XX +XXX,XX @@ static bool do_ldm(DisasContext *s, arg_ldst_block *a, int min_n)
tmp = tcg_temp_new_i32();
gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
if (user) {
- tmp2 = tcg_const_i32(i);
- gen_helper_set_user_reg(cpu_env, tmp2, tmp);
- tcg_temp_free_i32(tmp2);
+ gen_helper_set_user_reg(cpu_env, tcg_constant_i32(i), tmp);
tcg_temp_free_i32(tmp);
} else if (i == a->rn) {
loaded_var = tmp;
--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 16 +++++-----------
1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)

s->eci_handled = true;

- zero = tcg_const_i32(0);
+ zero = tcg_constant_i32(0);
for (i = 0; i < 15; i++) {
if (extract32(a->list, i, 1)) {
/* Clear R[i] */
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
* Clear APSR (by calling the MSR helper with the same argument
* as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
*/
- TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
- gen_helper_v7m_msr(cpu_env, maskreg, zero);
- tcg_temp_free_i32(maskreg);
+ gen_helper_v7m_msr(cpu_env, tcg_constant_i32(0xc00), zero);
}
- tcg_temp_free_i32(zero);
clear_eci_state(s);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
store_reg(s, 14, tmp);
if (a->size != 4) {
/* DLSTP: set FPSCR.LTPSIZE */
- tmp = tcg_const_i32(a->size);
- store_cpu_field(tmp, v7m.ltpsize);
+ store_cpu_field(tcg_constant_i32(a->size), v7m.ltpsize);
s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
}
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
*/
bool ok = vfp_access_check(s);
assert(ok);
- tmp = tcg_const_i32(a->size);
- store_cpu_field(tmp, v7m.ltpsize);
+ store_cpu_field(tcg_constant_i32(a->size), v7m.ltpsize);
/*
* LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
* when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
gen_set_label(loopend);
if (a->tp) {
/* Exits from tail-pred loops must reset LTPSIZE to 4 */
- tmp = tcg_const_i32(4);
- store_cpu_field(tmp, v7m.ltpsize);
+ store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
}
/* End TB, continuing to following insn */
gen_jmp_tb(s, s->base.pc_next, 1);
--
2.25.1
Deleted patch
The Arm FEAT_TTL architectural feature allows the guest to provide an
optional hint in an AArch64 TLB invalidate operation about which
translation table level holds the leaf entry for the address being
invalidated. QEMU's TLB implementation doesn't need that hint, and
we correctly ignore the (previously RES0) bits in TLB invalidate
operation values that are now used for the TTL field. So we can
simply advertise support for it in our 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220426160422.2353158-2-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
- FEAT_TLBIRANGE (TLB invalidate range instructions)
- FEAT_TTCNP (Translation table Common not private translations)
+- FEAT_TTL (Translation Table Level)
- FEAT_TTST (Small translation tables)
- FEAT_UAO (Unprivileged Access Override control)
- FEAT_VHE (Virtualization Host Extensions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
+ t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
cpu->isar.id_aa64mmfr2 = t;

t = cpu->isar.id_aa64zfr0;
--
2.25.1
Deleted patch
The description in the Arm ARM of the requirements of FEAT_BBM is
admirably clear on the guarantees it provides software, but slightly
more obscure on what that means for implementations. The description
of the equivalent SMMU feature in the SMMU specification (IHI0070D.b
section 3.21.1) is perhaps a bit more detailed and includes some
example valid implementation choices. (The SMMU version of this
feature is slightly tighter than the CPU version: the CPU is permitted
to raise TLB Conflict aborts in some situations that the SMMU may
not. This doesn't matter for QEMU because we don't want to do TLB
Conflict aborts anyway.)

The informal summary of FEAT_BBM is that it is about permitting an OS
to switch a range of memory between "covered by a huge page" and
"covered by a sequence of normal pages" without having to engage in
the 'break-before-make' dance that has traditionally been
necessary. The 'break-before-make' sequence is:

* replace the old translation table entry with an invalid entry
* execute a DSB insn
* execute a broadcast TLB invalidate insn
* execute a DSB insn
* write the new translation table entry
* execute a DSB insn

The point of this is to ensure that no TLB can simultaneously contain
TLB entries for the old and the new entry, which would traditionally
be UNPREDICTABLE (allowing the CPU to generate a TLB Conflict fault
or to use a random mishmash of values from the old and the new
entry). FEAT_BBM level 2 says "for the specific case where the only
thing that changed is the size of the block, the TLB is guaranteed
not to do weird things even if there are multiple entries for an
address", which means that software can now do:

* replace old translation table entry with new entry
* DSB
* broadcast TLB invalidate
* DSB

As the SMMU spec notes, valid ways to do this include:

* if there are multiple entries in the TLB for an address,
choose one of them and use it, ignoring the others
* if there are multiple entries in the TLB for an address,
throw them all out and do a page table walk to get a new one

QEMU's page table walk implementation for Arm CPUs already meets the
requirements for FEAT_BBM level 2. When we cache an entry in our TCG
TLB, we do so only for the specific (non-huge) page that the address
is in, and there is no way for the TLB data structure to ever have
more than one TLB entry for that page. (We handle huge pages only in
that we track what part of the address space is covered by huge pages
so that a TLB invalidate operation for an address in a huge page
results in an invalidation of the whole TLB.) We ignore the Contiguous
bit in page table entries, so we don't have to do anything for the
parts of FEAT_BBM that deal with changes to the Contiguous bit.

FEAT_BBM level 2 also requires that the nT bit in block descriptors
must be ignored; since commit 39a1fd25287f5dece5 we do this.

It's therefore safe for QEMU to advertise FEAT_BBM level 2 by
setting ID_AA64MMFR2_EL1.BBM to 2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220426160422.2353158-3-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_AA32HPD (AArch32 hierarchical permission disables)
- FEAT_AA32I8MM (AArch32 Int8 matrix multiplication instructions)
- FEAT_AES (AESD and AESE instructions)
+- FEAT_BBM at level 2 (Translation table break-before-make levels)
- FEAT_BF16 (AArch64 BFloat16 instructions)
- FEAT_BTI (Branch Target Identification)
- FEAT_DIT (Data Independent Timing instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
+ t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
cpu->isar.id_aa64mmfr2 = t;

t = cpu->isar.id_aa64zfr0;
--
2.25.1
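For reference, the two update sequences described in the commit message look like this in pseudo-C. The dsb() and tlbi_broadcast() functions are empty stand-in stubs so the sketch is self-contained; they are not QEMU or kernel APIs, and this is an illustration of the sequences, not code from the patch:

    #include <stdint.h>

    static void dsb(void) { /* data synchronization barrier */ }
    static void tlbi_broadcast(uint64_t va) { (void)va; /* broadcast TLB invalidate */ }

    /* Traditional break-before-make. */
    static void remap_break_before_make(volatile uint64_t *ptep, uint64_t new_pte, uint64_t va)
    {
        *ptep = 0;           /* break: replace the old entry with an invalid one */
        dsb();
        tlbi_broadcast(va);
        dsb();
        *ptep = new_pte;     /* make: write the new translation table entry */
        dsb();
    }

    /* FEAT_BBM level 2: permitted when only the block size changes. */
    static void remap_bbm_level2(volatile uint64_t *ptep, uint64_t new_pte, uint64_t va)
    {
        *ptep = new_pte;     /* write the new entry directly over the old one */
        dsb();
        tlbi_broadcast(va);
        dsb();
    }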