arm queue: big stuff here is my MVE codegen optimisation,
and Alex's Apple Silicon hvf support.

-- PMM

The following changes since commit 7adb961995a3744f51396502b33ad04a56a317c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20210916' into staging (2021-09-19 18:53:29 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210920

for you to fetch changes up to 1dc5a60bfe406bc1122d68cbdefda38d23134b27:

  target/arm: Optimize MVE 1op-immediate insns (2021-09-20 14:18:01 +0100)

----------------------------------------------------------------
target-arm queue:
 * Optimize codegen for MVE when predication not active
 * hvf: Add Apple Silicon support
 * hw/intc: Set GIC maintenance interrupt level to only 0 or 1
 * Fix mishandling of MVE FPSCR.LTPSIZE reset for usermode emulator
 * elf2dmp: Fix coverity nits

----------------------------------------------------------------
Alexander Graf (7):
      arm: Move PMC register definitions to internals.h
      hvf: Add execute to dirty log permission bitmap
      hvf: Introduce hvf_arch_init() callback
      hvf: Add Apple Silicon support
      hvf: arm: Implement PSCI handling
      arm: Add Hypervisor.framework build target
      hvf: arm: Add rudimentary PMC support

Peter Collingbourne (1):
      arm/hvf: Add a WFI handler

Peter Maydell (18):
      elf2dmp: Check curl_easy_setopt() return value
      elf2dmp: Fail cleanly if PDB file specifies zero block_size
      target/arm: Don't skip M-profile reset entirely in user mode
      target/arm: Always clear exclusive monitor on reset
      target/arm: Consolidate ifdef blocks in reset
      hvf: arm: Implement -cpu host
      target/arm: Avoid goto_tb if we're trying to exit to the main loop
      target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
      target/arm: Add TB flag for "MVE insns not predicated"
      target/arm: Optimize MVE logic ops
      target/arm: Optimize MVE arithmetic ops
      target/arm: Optimize MVE VNEG, VABS
      target/arm: Optimize MVE VDUP
      target/arm: Optimize MVE VMVN
      target/arm: Optimize MVE VSHL, VSHR immediate forms
      target/arm: Optimize MVE VSHLL and VMOVL
      target/arm: Optimize MVE VSLI and VSRI
      target/arm: Optimize MVE 1op-immediate insns

Shashi Mallela (1):
      hw/intc: Set GIC maintenance interrupt level to only 0 or 1

 meson.build | 8 +
 include/sysemu/hvf_int.h | 12 +-
 target/arm/cpu.h | 6 +-
 target/arm/hvf_arm.h | 18 +
 target/arm/internals.h | 44 ++
 target/arm/kvm_arm.h | 2 -
 target/arm/translate.h | 2 +
 accel/hvf/hvf-accel-ops.c | 21 +-
 contrib/elf2dmp/download.c | 22 +-
 contrib/elf2dmp/pdb.c | 4 +
 hw/intc/arm_gicv3_cpuif.c | 5 +-
 target/arm/cpu.c | 56 +-
 target/arm/helper.c | 77 ++-
 target/arm/hvf/hvf.c | 1278 +++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c | 13 +
 target/arm/translate-m-nocp.c | 8 +-
 target/arm/translate-mve.c | 310 +++++++---
 target/arm/translate-vfp.c | 33 +-
 target/arm/translate.c | 42 +-
 target/i386/hvf/hvf.c | 10 +
 MAINTAINERS | 5 +
 target/arm/hvf/meson.build | 3 +
 target/arm/hvf/trace-events | 11 +
 target/arm/meson.build | 2 +
 24 files changed, 1824 insertions(+), 168 deletions(-)
 create mode 100644 target/arm/hvf_arm.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events


Some arm patches; my to-review queue is by no means empty, but
this is a big enough set of patches to be getting on with...

-- PMM

The following changes since commit cb9c6a8e5ad6a1f0ce164d352e3102df46986e22:

  .gitlab-ci.d/windows: Work-around timeout and OpenGL problems of the MSYS2 jobs (2023-01-04 18:58:33 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230105

for you to fetch changes up to 93c9678de9dc7d2e68f9e8477da072bac30ef132:

  hw/net: Fix read of uninitialized memory in imx_fec. (2023-01-05 15:33:00 +0000)

----------------------------------------------------------------
target-arm queue:
 * Implement AArch32 ARMv8-R support
 * Add Cortex-R52 CPU
 * fix handling of HLT semihosting in system mode
 * hw/timer/ixm_epit: cleanup and fix bug in compare handling
 * target/arm: Coding style fixes
 * target/arm: Clean up includes
 * nseries: minor code cleanups
 * target/arm: align exposed ID registers with Linux
 * hw/arm/smmu-common: remove unnecessary inlines
 * i.MX7D: Handle GPT timers
 * i.MX7D: Connect IRQs to GPIO devices
 * i.MX6UL: Add a specific GPT timer instance
 * hw/net: Fix read of uninitialized memory in imx_fec

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: fix handling of HLT semihosting in system mode

Axel Heider (8):
      hw/timer/imx_epit: improve comments
      hw/timer/imx_epit: cleanup CR defines
      hw/timer/imx_epit: define SR_OCIF
      hw/timer/imx_epit: update interrupt state on CR write access
      hw/timer/imx_epit: hard reset initializes CR with 0
      hw/timer/imx_epit: factor out register write handlers
      hw/timer/imx_epit: remove explicit fields cnt and freq
      hw/timer/imx_epit: fix compare timer handling

Claudio Fontana (1):
      target/arm: cleanup cpu includes

Fabiano Rosas (5):
      target/arm: Fix checkpatch comment style warnings in helper.c
      target/arm: Fix checkpatch space errors in helper.c
      target/arm: Fix checkpatch brace errors in helper.c
      target/arm: Remove unused includes from m_helper.c
      target/arm: Remove unused includes from helper.c

Jean-Christophe Dubois (4):
      i.MX7D: Connect GPT timers to IRQ
      i.MX7D: Compute clock frequency for the fixed frequency clocks.
      i.MX6UL: Add a specific GPT timer instance for the i.MX6UL
      i.MX7D: Connect IRQs to GPIO devices.

Peter Maydell (1):
      target/arm:Set lg_page_size to 0 if either S1 or S2 asks for it

Philippe Mathieu-Daudé (5):
      hw/input/tsc2xxx: Constify set_transform()'s MouseTransformInfo arg
      hw/arm/nseries: Constify various read-only arrays
      hw/arm/nseries: Silent -Wmissing-field-initializers warning
      hw/arm/smmu-common: Reduce smmu_inv_notifiers_mr() scope
      hw/arm/smmu-common: Avoid using inlined functions with external linkage

Stephen Longfield (1):
      hw/net: Fix read of uninitialized memory in imx_fec.

Tobias Röhmel (7):
      target/arm: Don't add all MIDR aliases for cores that implement PMSA
      target/arm: Make RVBAR available for all ARMv8 CPUs
      target/arm: Make stage_2_format for cache attributes optional
      target/arm: Enable TTBCR_EAE for ARMv8-R AArch32
      target/arm: Add PMSAv8r registers
      target/arm: Add PMSAv8r functionality
      target/arm: Add ARM Cortex-R52 CPU

Zhuojia Shen (1):
      target/arm: align exposed ID registers with Linux

 include/hw/arm/fsl-imx7.h | 20 +
 include/hw/arm/smmu-common.h | 3 -
 include/hw/input/tsc2xxx.h | 4 +-
 include/hw/timer/imx_epit.h | 8 +-
 include/hw/timer/imx_gpt.h | 1 +
 target/arm/cpu.h | 6 +
 target/arm/internals.h | 4 +
 hw/arm/fsl-imx6ul.c | 2 +-
 hw/arm/fsl-imx7.c | 41 +-
 hw/arm/nseries.c | 28 +-
 hw/arm/smmu-common.c | 15 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 3 +-
 hw/misc/imx6ul_ccm.c | 6 -
 hw/misc/imx7_ccm.c | 49 ++-
 hw/net/imx_fec.c | 8 +-
 hw/timer/imx_epit.c | 376 +++++++-------
 hw/timer/imx_gpt.c | 25 ++
 target/arm/cpu.c | 35 +-
 target/arm/cpu64.c | 6 -
 target/arm/cpu_tcg.c | 42 ++
 target/arm/debug_helper.c | 3 +
 target/arm/helper.c | 871 +++++++++++++++++++++++++++++---------
 target/arm/m_helper.c | 16 -
 target/arm/machine.c | 28 ++
 target/arm/ptw.c | 152 +++++--
 target/arm/tlb_helper.c | 4 +
 target/arm/translate.c | 2 +-
 tests/tcg/aarch64/sysregs.c | 24 +-
 tests/tcg/aarch64/Makefile.target | 7 +-
 30 files changed, 1330 insertions(+), 461 deletions(-)
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 83 +++++++++++++++++++++++++++++---------
 1 file changed, 63 insertions(+), 20 deletions(-)


In get_phys_addr_twostage() we set the lg_page_size of the result to
the maximum of the stage 1 and stage 2 page sizes. This works for
the case where we do want to create a TLB entry, because we know the
common TLB code only creates entries of the TARGET_PAGE_SIZE and
asking for a size larger than that only means that invalidations
invalidate the whole larger area. However, if lg_page_size is
smaller than TARGET_PAGE_SIZE this effectively means "don't create a
TLB entry"; in this case if either S1 or S2 said "this covers less
than a page and can't go in a TLB" then the final result also should
be marked that way. Set the resulting page size to 0 if either
stage asked for a less-than-a-page entry, and expand the comment
to explain what's going on.

This has no effect for VMSA because currently the VMSA lookup always
returns results that cover at least TARGET_PAGE_SIZE; however when we
add v8R support it will reuse this code path, and for v8R the S1 and
S2 results can be smaller than TARGET_PAGE_SIZE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
---
 target/arm/ptw.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)
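As a rough illustration of the page-size combination rule described above
(a hypothetical helper, not the actual QEMU code):

    /*
     * lg_page_size < TARGET_PAGE_BITS means "do not put this translation
     * in the TLB", so the combined S1+S2 result must also stay out of the
     * TLB if either stage asked for that; otherwise keep the maximum so
     * that invalidation of pages > TARGET_PAGE_SIZE still covers the area.
     */
    static unsigned combine_lg_page_size(unsigned s1_lg, unsigned s2_lg)
    {
        if (s1_lg < TARGET_PAGE_BITS || s2_lg < TARGET_PAGE_BITS) {
            return 0;                       /* don't create a TLB entry */
        }
        return s1_lg > s2_lg ? s1_lg : s2_lg;
    }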
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
26
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
12
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
28
--- a/target/arm/ptw.c
14
+++ b/target/arm/translate-mve.c
29
+++ b/target/arm/ptw.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
16
return do_1imm(s, a, fn);
17
}
18
19
-static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
20
- bool negateshift)
21
+static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
22
+ bool negateshift, GVecGen2iFn vecfn)
23
{
24
TCGv_ptr qd, qm;
25
int shift = a->shift;
26
@@ -XXX,XX +XXX,XX @@ static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
27
shift = -shift;
28
}
31
}
29
32
30
- qd = mve_qreg_ptr(a->qd);
33
/*
31
- qm = mve_qreg_ptr(a->qm);
34
- * Use the maximum of the S1 & S2 page size, so that invalidation
32
- fn(cpu_env, qd, qm, tcg_constant_i32(shift));
35
- * of pages > TARGET_PAGE_SIZE works correctly.
33
- tcg_temp_free_ptr(qd);
36
+ * If either S1 or S2 returned a result smaller than TARGET_PAGE_SIZE,
34
- tcg_temp_free_ptr(qm);
37
+ * this means "don't put this in the TLB"; in this case, return a
35
+ if (vecfn && mve_no_predication(s)) {
38
+ * result with lg_page_size == 0 to achieve that. Otherwise,
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm),
39
+ * use the maximum of the S1 & S2 page size, so that invalidation
37
+ shift, 16, 16);
40
+ * of pages > TARGET_PAGE_SIZE works correctly. (This works even though
38
+ } else {
41
+ * we know the combined result permissions etc only cover the minimum
39
+ qd = mve_qreg_ptr(a->qd);
42
+ * of the S1 and S2 page size, because we know that the common TLB code
40
+ qm = mve_qreg_ptr(a->qm);
43
+ * never actually creates TLB entries bigger than TARGET_PAGE_SIZE,
41
+ fn(cpu_env, qd, qm, tcg_constant_i32(shift));
44
+ * and passing a larger page size value only affects invalidations.)
42
+ tcg_temp_free_ptr(qd);
45
*/
43
+ tcg_temp_free_ptr(qm);
46
- if (result->f.lg_page_size < s1_lgpgsz) {
44
+ }
47
+ if (result->f.lg_page_size < TARGET_PAGE_BITS ||
45
mve_update_eci(s);
48
+ s1_lgpgsz < TARGET_PAGE_BITS) {
46
return true;
49
+ result->f.lg_page_size = 0;
47
}
50
+ } else if (result->f.lg_page_size < s1_lgpgsz) {
48
51
result->f.lg_page_size = s1_lgpgsz;
49
-#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
50
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
51
- { \
52
- static MVEGenTwoOpShiftFn * const fns[] = { \
53
- gen_helper_mve_##FN##b, \
54
- gen_helper_mve_##FN##h, \
55
- gen_helper_mve_##FN##w, \
56
- NULL, \
57
- }; \
58
- return do_2shift(s, a, fns[a->size], NEGATESHIFT); \
59
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
60
+ bool negateshift)
61
+{
62
+ return do_2shift_vec(s, a, fn, negateshift, NULL);
63
+}
64
+
65
+#define DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, VECFN) \
66
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
67
+ { \
68
+ static MVEGenTwoOpShiftFn * const fns[] = { \
69
+ gen_helper_mve_##FN##b, \
70
+ gen_helper_mve_##FN##h, \
71
+ gen_helper_mve_##FN##w, \
72
+ NULL, \
73
+ }; \
74
+ return do_2shift_vec(s, a, fns[a->size], NEGATESHIFT, VECFN); \
75
}
52
}
76
53
77
-DO_2SHIFT(VSHLI, vshli_u, false)
78
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
79
+ DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, NULL)
80
+
81
+static void do_gvec_shri_s(unsigned vece, uint32_t dofs, uint32_t aofs,
82
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
83
+{
84
+ /*
85
+ * We get here with a negated shift count, and we must handle
86
+ * shifts by the element size, which tcg_gen_gvec_sari() does not do.
87
+ */
88
+ shift = -shift;
89
+ if (shift == (8 << vece)) {
90
+ shift--;
91
+ }
92
+ tcg_gen_gvec_sari(vece, dofs, aofs, shift, oprsz, maxsz);
93
+}
94
+
95
+static void do_gvec_shri_u(unsigned vece, uint32_t dofs, uint32_t aofs,
96
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
97
+{
98
+ /*
99
+ * We get here with a negated shift count, and we must handle
100
+ * shifts by the element size, which tcg_gen_gvec_shri() does not do.
101
+ */
102
+ shift = -shift;
103
+ if (shift == (8 << vece)) {
104
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, 0);
105
+ } else {
106
+ tcg_gen_gvec_shri(vece, dofs, aofs, shift, oprsz, maxsz);
107
+ }
108
+}
109
+
110
+DO_2SHIFT_VEC(VSHLI, vshli_u, false, tcg_gen_gvec_shli)
111
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
112
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
113
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
114
/* These right shifts use a left-shift helper with negated shift count */
115
-DO_2SHIFT(VSHRI_S, vshli_s, true)
116
-DO_2SHIFT(VSHRI_U, vshli_u, true)
117
+DO_2SHIFT_VEC(VSHRI_S, vshli_s, true, do_gvec_shri_s)
118
+DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
119
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
120
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
121
122
--
54
--
123
2.20.1
55
2.25.1
124
125
1
Our current codegen for MVE always calls out to helper functions,
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
because some byte lanes might be predicated. The common case is that
3
in fact there is no predication active and all lanes should be
4
updated together, so we can produce better code by detecting that and
5
using the TCG generic vector infrastructure.
6
2
7
Add a TB flag that is set when we can guarantee that there is no
3
Cores with PMSA have the MPUIR register which has the
8
active MVE predication, and a bool in the DisasContext. Subsequent
4
same encoding as the MIDR alias with opc2=4. So we only
9
patches will use this flag to generate improved code for some
5
add that alias if we are not realizing a core that
10
instructions.
6
implements PMSA.
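Schematically, the registration ends up gated like this (a sketch of the
idea only; the real change is in the diff below):

    /* On a v8 PMSA core the c0, c0, opc2=4 encoding is MPUIR, so only
     * non-PMSA cores get the MIDR alias at that encoding. */
    if (!arm_feature(env, ARM_FEATURE_PMSA)) {
        define_one_arm_cp_reg(cpu, &id_v8_midr_alias_cp_reginfo);
    }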
11
7
12
In most cases when the predication state changes we simply end the TB
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
13
after that instruction. For the code called from vfp_access_check()
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
that handles lazy state preservation and creating a new FP context,
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
we can usually avoid having to try to end the TB because luckily the
11
Message-id: 20221206102504.165775-2-tobias.roehmel@rwth-aachen.de
16
new value of the flag following the register changes in those
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
sequences doesn't depend on any runtime decisions. We do have to end
13
---
18
the TB if the guest has enabled lazy FP state preservation but not
14
target/arm/helper.c | 13 +++++++++----
19
automatic state preservation, but this is an odd corner case that is
15
1 file changed, 9 insertions(+), 4 deletions(-)
20
not going to be common in real-world code.
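The shape of the fast path this flag enables can be seen in the VDUP patch
later in this queue; abridged:

    if (mve_no_predication(s)) {
        /* no active predication: emit one in-line TCG vector op */
        tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
    } else {
        /* predication may be active: fall back to the out-of-line helper */
        qd = mve_qreg_ptr(a->qd);
        tcg_gen_dup_i32(a->size, rt, rt);
        gen_helper_mve_vdup(cpu_env, qd, rt);
        tcg_temp_free_ptr(qd);
    }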
21
16
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
25
---
26
target/arm/cpu.h | 4 +++-
27
target/arm/translate.h | 2 ++
28
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
29
target/arm/translate-m-nocp.c | 8 +++++++-
30
target/arm/translate-mve.c | 13 ++++++++++++-
31
target/arm/translate-vfp.c | 33 +++++++++++++++++++++++++++------
32
target/arm/translate.c | 8 ++++++++
33
7 files changed, 92 insertions(+), 9 deletions(-)
34
35
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/cpu.h
38
+++ b/target/arm/cpu.h
39
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
40
* | TBFLAG_AM32 | +-----+----------+
41
* | | |TBFLAG_M32|
42
* +-------------+----------------+----------+
43
- * 31 23 5 4 0
44
+ * 31 23 6 5 0
45
*
46
* Unless otherwise noted, these bits are cached in env->hflags.
47
*/
48
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, LSPACT, 2, 1) /* Not cached. */
49
FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
50
/* Set if FPCCR.S does not match current security state */
51
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
52
+/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
53
+FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
54
55
/*
56
* Bit usage when in AArch64 state
57
diff --git a/target/arm/translate.h b/target/arm/translate.h
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate.h
60
+++ b/target/arm/translate.h
61
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
62
bool align_mem;
63
/* True if PSTATE.IL is set */
64
bool pstate_il;
65
+ /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
66
+ bool mve_no_pred;
67
/*
68
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
69
* < 0, set by the current instruction.
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
71
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/helper.c
19
--- a/target/arm/helper.c
73
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper.c
74
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
21
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
75
#endif
22
.access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr,
76
}
23
.fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid),
77
24
.readfn = midr_read },
78
+static bool mve_no_pred(CPUARMState *env)
25
- /* crn = 0 op1 = 0 crm = 0 op2 = 4,7 : AArch32 aliases of MIDR */
79
+{
26
- { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
80
+ /*
27
- .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
81
+ * Return true if there is definitely no predication of MVE
28
- .access = PL1_R, .resetvalue = cpu->midr },
82
+ * instructions by VPR or LTPSIZE. (Returning false even if there
29
+ /* crn = 0 op1 = 0 crm = 0 op2 = 7 : AArch32 aliases of MIDR */
83
+ * isn't any predication is OK; generated code will just be
30
{ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
84
+ * a little worse.)
31
.cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 7,
85
+ * If the CPU does not implement MVE then this TB flag is always 0.
32
.access = PL1_R, .resetvalue = cpu->midr },
86
+ *
33
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
87
+ * NOTE: if you change this logic, the "recalculate s->mve_no_pred"
34
.accessfn = access_aa64_tid1,
88
+ * logic in gen_update_fp_context() needs to be updated to match.
35
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
89
+ *
36
};
90
+ * We do not include the effect of the ECI bits here -- they are
37
+ ARMCPRegInfo id_v8_midr_alias_cp_reginfo = {
91
+ * tracked in other TB flags. This simplifies the logic for
38
+ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
92
+ * "when did we emit code that changes the MVE_NO_PRED TB flag
39
+ .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
93
+ * and thus need to end the TB?".
40
+ .access = PL1_R, .resetvalue = cpu->midr
94
+ */
41
+ };
95
+ if (cpu_isar_feature(aa32_mve, env_archcpu(env))) {
42
ARMCPRegInfo id_cp_reginfo[] = {
96
+ return false;
43
/* These are common to v8 and pre-v8 */
97
+ }
44
{ .name = "CTR",
98
+ if (env->v7m.vpr) {
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
99
+ return false;
46
}
100
+ }
47
if (arm_feature(env, ARM_FEATURE_V8)) {
101
+ if (env->v7m.ltpsize < 4) {
48
define_arm_cp_regs(cpu, id_v8_midr_cp_reginfo);
102
+ return false;
49
+ if (!arm_feature(env, ARM_FEATURE_PMSA)) {
103
+ }
50
+ define_one_arm_cp_reg(cpu, &id_v8_midr_alias_cp_reginfo);
104
+ return true;
105
+}
106
+
107
void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
108
target_ulong *cs_base, uint32_t *pflags)
109
{
110
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
111
if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
112
DP_TBFLAG_M32(flags, LSPACT, 1);
113
}
114
+
115
+ if (mve_no_pred(env)) {
116
+ DP_TBFLAG_M32(flags, MVE_NO_PRED, 1);
117
+ }
51
+ }
118
} else {
52
} else {
119
/*
53
define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo);
120
* Note that XSCALE_CPAR shares bits with VECSTRIDE.
121
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/translate-m-nocp.c
124
+++ b/target/arm/translate-m-nocp.c
125
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
126
127
clear_eci_state(s);
128
129
- /* End the TB, because we have updated FP control bits */
130
+ /*
131
+ * End the TB, because we have updated FP control bits,
132
+ * and possibly VPR or LTPSIZE.
133
+ */
134
s->base.is_jmp = DISAS_UPDATE_EXIT;
135
return true;
136
}
137
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
138
store_cpu_field(control, v7m.control[M_REG_S]);
139
tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
140
gen_helper_vfp_set_fpscr(cpu_env, tmp);
141
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
142
tcg_temp_free_i32(tmp);
143
tcg_temp_free_i32(sfpa);
144
break;
145
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
146
}
54
}
147
tmp = loadfn(s, opaque, true);
148
store_cpu_field(tmp, v7m.vpr);
149
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
150
break;
151
case ARM_VFP_P0:
152
{
153
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
154
tcg_gen_deposit_i32(vpr, vpr, tmp,
155
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
156
store_cpu_field(vpr, v7m.vpr);
157
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
158
tcg_temp_free_i32(tmp);
159
break;
160
}
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
166
DO_LOGIC(VORN, gen_helper_mve_vorn)
167
DO_LOGIC(VEOR, gen_helper_mve_veor)
168
169
-DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
170
+static bool trans_VPSEL(DisasContext *s, arg_2op *a)
171
+{
172
+ /* This insn updates predication bits */
173
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
174
+ return do_2op(s, a, gen_helper_mve_vpsel);
175
+}
176
177
#define DO_2OP(INSN, FN) \
178
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
179
@@ -XXX,XX +XXX,XX @@ static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
180
}
181
182
gen_helper_mve_vpnot(cpu_env);
183
+ /* This insn updates predication bits */
184
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
185
mve_update_eci(s);
186
return true;
187
}
188
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
189
/* VPT */
190
gen_vpst(s, a->mask);
191
}
192
+ /* This insn updates predication bits */
193
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
194
mve_update_eci(s);
195
return true;
196
}
197
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
198
/* VPT */
199
gen_vpst(s, a->mask);
200
}
201
+ /* This insn updates predication bits */
202
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
203
mve_update_eci(s);
204
return true;
205
}
206
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/translate-vfp.c
209
+++ b/target/arm/translate-vfp.c
210
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
211
* Generate code for M-profile lazy FP state preservation if needed;
212
* this corresponds to the pseudocode PreserveFPState() function.
213
*/
214
-static void gen_preserve_fp_state(DisasContext *s)
215
+static void gen_preserve_fp_state(DisasContext *s, bool skip_context_update)
216
{
217
if (s->v7m_lspact) {
218
/*
219
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
220
* any further FP insns in this TB.
221
*/
222
s->v7m_lspact = false;
223
+ /*
224
+ * The helper might have zeroed VPR, so we do not know the
225
+ * correct value for the MVE_NO_PRED TB flag any more.
226
+ * If we're about to create a new fp context then that
227
+ * will precisely determine the MVE_NO_PRED value (see
228
+ * gen_update_fp_context()). Otherwise, we must:
229
+ * - set s->mve_no_pred to false, so this instruction
230
+ * is generated to use helper functions
231
+ * - end the TB now, without chaining to the next TB
232
+ */
233
+ if (skip_context_update || !s->v7m_new_fp_ctxt_needed) {
234
+ s->mve_no_pred = false;
235
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
236
+ }
237
}
238
}
239
240
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
241
TCGv_i32 z32 = tcg_const_i32(0);
242
store_cpu_field(z32, v7m.vpr);
243
}
244
-
245
/*
246
- * We don't need to arrange to end the TB, because the only
247
- * parts of FPSCR which we cache in the TB flags are the VECLEN
248
- * and VECSTRIDE, and those don't exist for M-profile.
249
+ * We just updated the FPSCR and VPR. Some of this state is cached
250
+ * in the MVE_NO_PRED TB flag. We want to avoid having to end the
251
+ * TB here, which means we need the new value of the MVE_NO_PRED
252
+ * flag to be exactly known here and the same for all executions.
253
+ * Luckily FPDSCR.LTPSIZE is always constant 4 and the VPR is
254
+ * always set to 0, so the new MVE_NO_PRED flag is always 1
255
+ * if and only if we have MVE.
256
+ *
257
+ * (The other FPSCR state cached in TB flags is VECLEN and VECSTRIDE,
258
+ * but those do not exist for M-profile, so are not relevant here.)
259
*/
260
+ s->mve_no_pred = dc_isar_feature(aa32_mve, s);
261
262
if (s->v8m_secure) {
263
bits |= R_V7M_CONTROL_SFPA_MASK;
264
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
265
/* Handle M-profile lazy FP state mechanics */
266
267
/* Trigger lazy-state preservation if necessary */
268
- gen_preserve_fp_state(s);
269
+ gen_preserve_fp_state(s, skip_context_update);
270
271
if (!skip_context_update) {
272
/* Update ownership of FP context and create new FP context if needed */
273
diff --git a/target/arm/translate.c b/target/arm/translate.c
274
index XXXXXXX..XXXXXXX 100644
275
--- a/target/arm/translate.c
276
+++ b/target/arm/translate.c
277
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
278
/* DLSTP: set FPSCR.LTPSIZE */
279
tmp = tcg_const_i32(a->size);
280
store_cpu_field(tmp, v7m.ltpsize);
281
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
282
}
283
return true;
284
}
285
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
286
assert(ok);
287
tmp = tcg_const_i32(a->size);
288
store_cpu_field(tmp, v7m.ltpsize);
289
+ /*
290
+ * LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
291
+ * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
292
+ */
293
}
294
gen_jmp_tb(s, s->base.pc_next, 1);
295
296
@@ -XXX,XX +XXX,XX @@ static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
297
gen_helper_mve_vctp(cpu_env, masklen);
298
tcg_temp_free_i32(masklen);
299
tcg_temp_free_i32(rn_shifted);
300
+ /* This insn updates predication bits */
301
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
302
mve_update_eci(s);
303
return true;
304
}
305
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
306
dc->v7m_new_fp_ctxt_needed =
307
EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
308
dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
309
+ dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);
310
} else {
311
dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
312
dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
313
--
55
--
314
2.20.1
56
2.25.1
315
57
316
58
1
Currently all of the M-profile specific code in arm_cpu_reset() is
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
inside a !defined(CONFIG_USER_ONLY) ifdef block. This is
3
unintentional: it happened because originally the only
4
M-profile-specific handling was the setup of the initial SP and PC
5
from the vector table, which is system-emulation only. But then we
6
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
7
code block without noticing that it was all inside a not-user-mode
8
ifdef. This has generally been harmless, but with the addition of
9
v8.1M low-overhead-loop support we ran into a problem: the reset of
10
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
11
if a user-mode guest tried to execute the LE instruction it would
12
incorrectly take a UsageFault.
13
2
14
Adjust the ifdefs so only the really system-emulation specific parts
3
RVBAR shadows RVBAR_ELx where x is the highest exception
15
are covered. Because this means we now run some reset code that sets
4
level if the highest EL is not EL3. This patch also allows
16
up initial values in the FPCCR and similar FPU related registers,
5
ARMv8 CPUs to change the reset address with
17
explicitly set up the registers controlling FPU context handling in
6
the rvbar property.
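For instance, board code could then place the reset vector with something
like the following (illustrative only; the address and error handling are
placeholders):

    object_property_set_uint(OBJECT(cpu), "rvbar", 0x80000000, &error_fatal);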
18
user-emulation mode so that the FPU works by design and not by
19
chance.
20
7
21
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
22
Cc: qemu-stable@nongnu.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20221206102504.165775-3-tobias.roehmel@rwth-aachen.de
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
26
---
12
---
27
target/arm/cpu.c | 19 +++++++++++++++++++
13
target/arm/cpu.c | 6 +++++-
28
1 file changed, 19 insertions(+)
14
target/arm/helper.c | 21 ++++++++++++++-------
15
2 files changed, 19 insertions(+), 8 deletions(-)
29
16
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
17
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
19
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
20
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
21
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
35
env->uncached_cpsr = ARM_CPU_MODE_SVC;
22
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
23
CPACR, CP11, 3);
24
#endif
25
+ if (arm_feature(env, ARM_FEATURE_V8)) {
26
+ env->cp15.rvbar = cpu->rvbar_prop;
27
+ env->regs[15] = cpu->rvbar_prop;
28
+ }
36
}
29
}
37
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
30
38
+#endif
31
#if defined(CONFIG_USER_ONLY)
39
32
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
40
if (arm_feature(env, ARM_FEATURE_M)) {
33
qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_hivecs_property);
41
+#ifndef CONFIG_USER_ONLY
34
}
42
uint32_t initial_msp; /* Loaded from 0x0 */
35
43
uint32_t initial_pc; /* Loaded from 0x4 */
36
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
44
uint8_t *rom;
37
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
45
uint32_t vecbase;
38
object_property_add_uint64_ptr(obj, "rvbar",
46
+#endif
39
&cpu->rvbar_prop,
47
40
OBJ_PROP_FLAG_READWRITE);
48
if (cpu_isar_feature(aa32_lob, cpu)) {
41
diff --git a/target/arm/helper.c b/target/arm/helper.c
49
/*
42
index XXXXXXX..XXXXXXX 100644
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
43
--- a/target/arm/helper.c
51
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
44
+++ b/target/arm/helper.c
52
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
46
if (!arm_feature(env, ARM_FEATURE_EL3) &&
47
!arm_feature(env, ARM_FEATURE_EL2)) {
48
ARMCPRegInfo rvbar = {
49
- .name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64,
50
+ .name = "RVBAR_EL1", .state = ARM_CP_STATE_BOTH,
51
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
52
.access = PL1_R,
53
.fieldoffset = offsetof(CPUARMState, cp15.rvbar),
54
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
53
}
55
}
54
+
56
/* RVBAR_EL2 is only implemented if EL2 is the highest EL */
55
+#ifndef CONFIG_USER_ONLY
57
if (!arm_feature(env, ARM_FEATURE_EL3)) {
56
/* Unlike A/R profile, M profile defines the reset LR value */
58
- ARMCPRegInfo rvbar = {
57
env->regs[14] = 0xffffffff;
59
- .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
58
60
- .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
59
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
61
- .access = PL2_R,
60
env->regs[13] = initial_msp & 0xFFFFFFFC;
62
- .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
61
env->regs[15] = initial_pc & ~1;
63
+ ARMCPRegInfo rvbar[] = {
62
env->thumb = initial_pc & 1;
64
+ {
63
+#else
65
+ .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
64
+ /*
66
+ .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
65
+ * For user mode we run non-secure and with access to the FPU.
67
+ .access = PL2_R,
66
+ * The FPU context is active (ie does not need further setup)
68
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
67
+ * and is owned by non-secure.
69
+ },
68
+ */
70
+ { .name = "RVBAR", .type = ARM_CP_ALIAS,
69
+ env->v7m.secure = false;
71
+ .cp = 15, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
70
+ env->v7m.nsacr = 0xcff;
72
+ .access = PL2_R,
71
+ env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
73
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
72
+ env->v7m.fpccr[M_REG_S] &=
74
+ },
73
+ ~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
75
};
74
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
76
- define_one_arm_cp_reg(cpu, &rvbar);
75
+#endif
77
+ define_arm_cp_regs(cpu, rvbar);
78
}
76
}
79
}
77
80
78
+#ifndef CONFIG_USER_ONLY
79
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
80
* executing as AArch32 then check if highvecs are enabled and
81
* adjust the PC accordingly.
82
--
81
--
83
2.20.1
82
2.25.1
84
83
85
84
1
Optimize the MVE shift-and-insert insns by using TCG
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
vector ops when possible.
3
2
3
The v8R PMSAv8 has a two-stage MPU translation process, but, unlike
4
VMSAv8, the stage 2 attributes are in the same format as the stage 1
5
attributes (8-bit MAIR format). Rather than converting the MAIR
6
format to the format used for VMSA stage 2 (bits [5:2] of a VMSA
7
stage 2 descriptor) and then converting back to do the attribute
8
combination, allow combined_attrs_nofwb() to accept s2 attributes
9
that are already in the MAIR format.
10
11
We move the assert() to combined_attrs_fwb(), because that function
12
really does require a VMSA stage 2 attribute format. (We will never
13
get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.)
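Schematically, the combining code then only converts when the stage 2
attributes really are in the VMSA stage 2 encoding (sketch, mirroring the
hunk below):

    if (s2.is_s2_format) {
        s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs); /* VMSA S2 bits */
    } else {
        s2_mair_attrs = s2.attrs;                            /* already MAIR */
    }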
14
15
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
7
---
19
---
8
target/arm/translate-mve.c | 4 ++--
20
target/arm/ptw.c | 10 ++++++++--
9
1 file changed, 2 insertions(+), 2 deletions(-)
21
1 file changed, 8 insertions(+), 2 deletions(-)
10
22
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
23
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
12
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
25
--- a/target/arm/ptw.c
14
+++ b/target/arm/translate-mve.c
26
+++ b/target/arm/ptw.c
15
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
27
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_nofwb(uint64_t hcr,
16
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
28
{
17
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
29
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
18
30
19
-DO_2SHIFT(VSRI, vsri, false)
31
- s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
20
-DO_2SHIFT(VSLI, vsli, false)
32
+ if (s2.is_s2_format) {
21
+DO_2SHIFT_VEC(VSRI, vsri, false, gen_gvec_sri)
33
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
22
+DO_2SHIFT_VEC(VSLI, vsli, false, gen_gvec_sli)
34
+ } else {
23
35
+ s2_mair_attrs = s2.attrs;
24
#define DO_2SHIFT_FP(INSN, FN) \
36
+ }
25
static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
37
38
s1lo = extract32(s1.attrs, 0, 4);
39
s2lo = extract32(s2_mair_attrs, 0, 4);
40
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
41
*/
42
static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
43
{
44
+ assert(s2.is_s2_format && !s1.is_s2_format);
45
+
46
switch (s2.attrs) {
47
case 7:
48
/* Use stage 1 attributes */
49
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
50
ARMCacheAttrs ret;
51
bool tagged = false;
52
53
- assert(s2.is_s2_format && !s1.is_s2_format);
54
+ assert(!s1.is_s2_format);
55
ret.is_s2_format = false;
56
57
if (s1.attrs == 0xf0) {
26
--
58
--
27
2.20.1
59
2.25.1
28
60
29
61
1
Optimize the MVE VDUP insns by using TCG vector ops when possible.
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even
4
though they don't have the TTBCR register.
5
See ARM Architecture Reference Manual Supplement - ARMv8, for the ARMv8-R
6
AArch32 architecture profile Version:A.c section C1.2.
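In practice this means the "is this regime using the long-descriptor
format?" checks gain a clause of the following shape (a sketch; the patch
applies it in internals.h, debug_helper.c and tlb_helper.c):

    if (arm_feature(env, ARM_FEATURE_PMSA) &&
        arm_feature(env, ARM_FEATURE_V8)) {
        return true;    /* ARMv8-R AArch32: behaves as if TTBCR.EAE == 1 */
    }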
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20221206102504.165775-5-tobias.roehmel@rwth-aachen.de
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
6
---
12
---
7
target/arm/translate-mve.c | 12 ++++++++----
13
target/arm/internals.h | 4 ++++
8
1 file changed, 8 insertions(+), 4 deletions(-)
14
target/arm/debug_helper.c | 3 +++
15
target/arm/tlb_helper.c | 4 ++++
16
3 files changed, 11 insertions(+)
9
17
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
18
diff --git a/target/arm/internals.h b/target/arm/internals.h
11
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-mve.c
20
--- a/target/arm/internals.h
13
+++ b/target/arm/translate-mve.c
21
+++ b/target/arm/internals.h
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
22
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu);
23
static inline bool extended_addresses_enabled(CPUARMState *env)
24
{
25
uint64_t tcr = env->cp15.tcr_el[arm_is_secure(env) ? 3 : 1];
26
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
27
+ arm_feature(env, ARM_FEATURE_V8)) {
28
+ return true;
29
+ }
30
return arm_el_is_aa64(env, 1) ||
31
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr & TTBCR_EAE));
32
}
33
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/debug_helper.c
36
+++ b/target/arm/debug_helper.c
37
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_debug_exception_fsr(CPUARMState *env)
38
39
if (target_el == 2 || arm_el_is_aa64(env, target_el)) {
40
using_lpae = true;
41
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
42
+ arm_feature(env, ARM_FEATURE_V8)) {
43
+ using_lpae = true;
44
} else {
45
if (arm_feature(env, ARM_FEATURE_LPAE) &&
46
(env->cp15.tcr_el[target_el] & TTBCR_EAE)) {
47
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/tlb_helper.c
50
+++ b/target/arm/tlb_helper.c
51
@@ -XXX,XX +XXX,XX @@ bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
52
if (el == 2 || arm_el_is_aa64(env, el)) {
15
return true;
53
return true;
16
}
54
}
17
55
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
18
- qd = mve_qreg_ptr(a->qd);
56
+ arm_feature(env, ARM_FEATURE_V8)) {
19
rt = load_reg(s, a->rt);
57
+ return true;
20
- tcg_gen_dup_i32(a->size, rt, rt);
21
- gen_helper_mve_vdup(cpu_env, qd, rt);
22
- tcg_temp_free_ptr(qd);
23
+ if (mve_no_predication(s)) {
24
+ tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
25
+ } else {
26
+ qd = mve_qreg_ptr(a->qd);
27
+ tcg_gen_dup_i32(a->size, rt, rt);
28
+ gen_helper_mve_vdup(cpu_env, qd, rt);
29
+ tcg_temp_free_ptr(qd);
30
+ }
58
+ }
31
tcg_temp_free_i32(rt);
59
if (arm_feature(env, ARM_FEATURE_LPAE)
32
mve_update_eci(s);
60
&& (regime_tcr(env, mmu_idx) & TTBCR_EAE)) {
33
return true;
61
return true;
34
--
62
--
35
2.20.1
63
2.25.1
36
64
37
65
diff view generated by jsdifflib
1
Now that we have working system register sync, we push more target CPU
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
properties into the virtual machine. That might be useful in some
3
situations, but is not the typical case that users want.
4
2
5
So let's add a -cpu host option that allows them to explicitly pass all
3
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
6
CPU capabilities of their host CPU into the guest.
4
Message-id: 20221206102504.165775-6-tobias.roehmel@rwth-aachen.de
7
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
9
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
10
Reviewed-by: Sergio Lopez <slp@redhat.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20210916155404.86958-7-agraf@csgraf.de
13
[PMM: drop unnecessary #include line from .h file]
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
6
---
16
target/arm/cpu.h | 2 +
7
target/arm/cpu.h | 6 +
17
target/arm/hvf_arm.h | 18 +++++++++
8
target/arm/cpu.c | 28 +++-
18
target/arm/kvm_arm.h | 2 -
9
target/arm/helper.c | 302 +++++++++++++++++++++++++++++++++++++++++++
19
target/arm/cpu.c | 13 ++++--
10
target/arm/machine.c | 28 ++++
20
target/arm/hvf/hvf.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
11
4 files changed, 360 insertions(+), 4 deletions(-)
21
5 files changed, 124 insertions(+), 6 deletions(-)
22
create mode 100644 target/arm/hvf_arm.h
23
12
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
15
--- a/target/arm/cpu.h
27
+++ b/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
17
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
29
#define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
18
};
30
#define CPU_RESOLVING_TYPE TYPE_ARM_CPU
19
uint64_t sctlr_el[4];
31
20
};
32
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
21
+ uint64_t vsctlr; /* Virtualization System control register. */
33
+
22
uint64_t cpacr_el1; /* Architectural feature access control register */
34
#define cpu_signal_handler cpu_arm_signal_handler
23
uint64_t cptr_el[4]; /* ARMv8 feature trap registers */
35
#define cpu_list arm_cpu_list
24
uint32_t c1_xscaleauxcr; /* XScale auxiliary control register. */
36
25
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
37
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
26
*/
38
new file mode 100644
27
uint32_t *rbar[M_REG_NUM_BANKS];
39
index XXXXXXX..XXXXXXX
28
uint32_t *rlar[M_REG_NUM_BANKS];
40
--- /dev/null
29
+ uint32_t *hprbar;
41
+++ b/target/arm/hvf_arm.h
30
+ uint32_t *hprlar;
42
@@ -XXX,XX +XXX,XX @@
31
uint32_t mair0[M_REG_NUM_BANKS];
43
+/*
32
uint32_t mair1[M_REG_NUM_BANKS];
44
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
33
+ uint32_t hprselr;
45
+ *
34
} pmsav8;
46
+ * Copyright (c) 2021 Alexander Graf
35
47
+ *
36
/* v8M SAU */
48
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
37
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
49
+ * See the COPYING file in the top-level directory.
38
bool has_mpu;
50
+ *
39
/* PMSAv7 MPU number of supported regions */
51
+ */
40
uint32_t pmsav7_dregion;
52
+
41
+ /* PMSAv8 MPU number of supported hyp regions */
53
+#ifndef QEMU_HVF_ARM_H
42
+ uint32_t pmsav8r_hdregion;
54
+#define QEMU_HVF_ARM_H
43
/* v8M SAU number of supported regions */
55
+
44
uint32_t sau_sregion;
56
+#include "cpu.h"
45
57
+
58
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
59
+
60
+#endif
61
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/kvm_arm.h
64
+++ b/target/arm/kvm_arm.h
65
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
66
*/
67
void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
68
69
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
70
-
71
/**
72
* ARMHostCPUFeatures: information about the host CPU (identified
73
* by asking the host kernel)
74
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
75
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/cpu.c
48
--- a/target/arm/cpu.c
77
+++ b/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
78
@@ -XXX,XX +XXX,XX @@
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
79
#include "sysemu/tcg.h"
51
sizeof(*env->pmsav7.dracr) * cpu->pmsav7_dregion);
80
#include "sysemu/hw_accel.h"
52
}
81
#include "kvm_arm.h"
53
}
82
+#include "hvf_arm.h"
54
+
83
#include "disas/capstone.h"
55
+ if (cpu->pmsav8r_hdregion > 0) {
84
#include "fpu/softfloat.h"
56
+ memset(env->pmsav8.hprbar, 0,
85
57
+ sizeof(*env->pmsav8.hprbar) * cpu->pmsav8r_hdregion);
58
+ memset(env->pmsav8.hprlar, 0,
59
+ sizeof(*env->pmsav8.hprlar) * cpu->pmsav8r_hdregion);
60
+ }
61
+
62
env->pmsav7.rnr[M_REG_NS] = 0;
63
env->pmsav7.rnr[M_REG_S] = 0;
64
env->pmsav8.mair0[M_REG_NS] = 0;
86
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
65
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
87
* this is the first point where we can report it.
66
/* MPU can be configured out of a PMSA CPU either by setting has-mpu
67
* to false or by setting pmsav7-dregion to 0.
88
*/
68
*/
89
if (cpu->host_cpu_probe_failed) {
69
- if (!cpu->has_mpu) {
90
- if (!kvm_enabled()) {
70
- cpu->pmsav7_dregion = 0;
91
- error_setg(errp, "The 'host' CPU type can only be used with KVM");
71
- }
92
+ if (!kvm_enabled() && !hvf_enabled()) {
72
- if (cpu->pmsav7_dregion == 0) {
93
+ error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
73
+ if (!cpu->has_mpu || cpu->pmsav7_dregion == 0) {
94
} else {
74
cpu->has_mpu = false;
95
error_setg(errp, "Failed to retrieve host CPU features");
75
+ cpu->pmsav7_dregion = 0;
76
+ cpu->pmsav8r_hdregion = 0;
77
}
78
79
if (arm_feature(env, ARM_FEATURE_PMSA) &&
80
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
81
env->pmsav7.dracr = g_new0(uint32_t, nr);
82
}
96
}
83
}
97
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
84
+
98
#endif /* CONFIG_TCG */
85
+ if (cpu->pmsav8r_hdregion > 0xff) {
86
+ error_setg(errp, "PMSAv8 MPU EL2 #regions invalid %" PRIu32,
87
+ cpu->pmsav8r_hdregion);
88
+ return;
89
+ }
90
+
91
+ if (cpu->pmsav8r_hdregion) {
92
+ env->pmsav8.hprbar = g_new0(uint32_t,
93
+ cpu->pmsav8r_hdregion);
94
+ env->pmsav8.hprlar = g_new0(uint32_t,
95
+ cpu->pmsav8r_hdregion);
96
+ }
97
}
98
99
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
100
diff --git a/target/arm/helper.c b/target/arm/helper.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/helper.c
103
+++ b/target/arm/helper.c
104
@@ -XXX,XX +XXX,XX @@ static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri,
105
raw_write(env, ri, value);
99
}
106
}
100
107
101
-#ifdef CONFIG_KVM
108
+static void prbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
102
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
109
+ uint64_t value)
103
static void arm_host_initfn(Object *obj)
110
+{
104
{
111
+ ARMCPU *cpu = env_archcpu(env);
105
ARMCPU *cpu = ARM_CPU(obj);
112
+
106
113
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
107
+#ifdef CONFIG_KVM
114
+ env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
108
kvm_arm_set_cpu_features_from_host(cpu);
115
+}
109
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
116
+
110
aarch64_add_sve_properties(obj);
117
+static uint64_t prbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
118
+{
119
+ return env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
120
+}
121
+
122
+static void prlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
+ uint64_t value)
124
+{
125
+ ARMCPU *cpu = env_archcpu(env);
126
+
127
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
128
+ env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
129
+}
130
+
131
+static uint64_t prlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
132
+{
133
+ return env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
134
+}
135
+
136
+static void prselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
+ uint64_t value)
138
+{
139
+ ARMCPU *cpu = env_archcpu(env);
140
+
141
+ /*
142
+ * Ignore writes that would select not implemented region.
143
+ * This is architecturally UNPREDICTABLE.
144
+ */
145
+ if (value >= cpu->pmsav7_dregion) {
146
+ return;
147
+ }
148
+
149
+ env->pmsav7.rnr[M_REG_NS] = value;
150
+}
151
+
152
+static void hprbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
153
+ uint64_t value)
154
+{
155
+ ARMCPU *cpu = env_archcpu(env);
156
+
157
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
158
+ env->pmsav8.hprbar[env->pmsav8.hprselr] = value;
159
+}
160
+
161
+static uint64_t hprbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
162
+{
163
+ return env->pmsav8.hprbar[env->pmsav8.hprselr];
164
+}
165
+
166
+static void hprlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
167
+ uint64_t value)
168
+{
169
+ ARMCPU *cpu = env_archcpu(env);
170
+
171
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
172
+ env->pmsav8.hprlar[env->pmsav8.hprselr] = value;
173
+}
174
+
175
+static uint64_t hprlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
176
+{
177
+ return env->pmsav8.hprlar[env->pmsav8.hprselr];
178
+}
179
+
180
+static void hprenr_write(CPUARMState *env, const ARMCPRegInfo *ri,
181
+ uint64_t value)
182
+{
183
+ uint32_t n;
184
+ uint32_t bit;
185
+ ARMCPU *cpu = env_archcpu(env);
186
+
187
+ /* Ignore writes to unimplemented regions */
188
+ int rmax = MIN(cpu->pmsav8r_hdregion, 32);
189
+ value &= MAKE_64BIT_MASK(0, rmax);
190
+
191
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
192
+
193
+ /* Register alias is only valid for first 32 indexes */
194
+ for (n = 0; n < rmax; ++n) {
195
+ bit = extract32(value, n, 1);
196
+ env->pmsav8.hprlar[n] = deposit32(
197
+ env->pmsav8.hprlar[n], 0, 1, bit);
198
+ }
199
+}
200
+
201
+static uint64_t hprenr_read(CPUARMState *env, const ARMCPRegInfo *ri)
202
+{
203
+ uint32_t n;
204
+ uint32_t result = 0x0;
205
+ ARMCPU *cpu = env_archcpu(env);
206
+
207
+ /* Register alias is only valid for first 32 indexes */
208
+ for (n = 0; n < MIN(cpu->pmsav8r_hdregion, 32); ++n) {
209
+ if (env->pmsav8.hprlar[n] & 0x1) {
210
+ result |= (0x1 << n);
211
+ }
212
+ }
213
+ return result;
214
+}
215
+
216
+static void hprselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
217
+ uint64_t value)
218
+{
219
+ ARMCPU *cpu = env_archcpu(env);
220
+
221
+ /*
222
+ * Ignore writes that would select not implemented region.
223
+ * This is architecturally UNPREDICTABLE.
224
+ */
225
+ if (value >= cpu->pmsav8r_hdregion) {
226
+ return;
227
+ }
228
+
229
+ env->pmsav8.hprselr = value;
230
+}
231
+
232
+static void pmsav8r_regn_write(CPUARMState *env, const ARMCPRegInfo *ri,
233
+ uint64_t value)
234
+{
235
+ ARMCPU *cpu = env_archcpu(env);
236
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
237
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
238
+
239
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
240
+
241
+ if (ri->opc1 & 4) {
242
+ if (index >= cpu->pmsav8r_hdregion) {
243
+ return;
244
+ }
245
+ if (ri->opc2 & 0x1) {
246
+ env->pmsav8.hprlar[index] = value;
247
+ } else {
248
+ env->pmsav8.hprbar[index] = value;
249
+ }
250
+ } else {
251
+ if (index >= cpu->pmsav7_dregion) {
252
+ return;
253
+ }
254
+ if (ri->opc2 & 0x1) {
255
+ env->pmsav8.rlar[M_REG_NS][index] = value;
256
+ } else {
257
+ env->pmsav8.rbar[M_REG_NS][index] = value;
258
+ }
259
+ }
260
+}
261
+
262
+static uint64_t pmsav8r_regn_read(CPUARMState *env, const ARMCPRegInfo *ri)
263
+{
264
+ ARMCPU *cpu = env_archcpu(env);
265
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
266
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
267
+
268
+ if (ri->opc1 & 4) {
269
+ if (index >= cpu->pmsav8r_hdregion) {
270
+ return 0x0;
271
+ }
272
+ if (ri->opc2 & 0x1) {
273
+ return env->pmsav8.hprlar[index];
274
+ } else {
275
+ return env->pmsav8.hprbar[index];
276
+ }
277
+ } else {
278
+ if (index >= cpu->pmsav7_dregion) {
279
+ return 0x0;
280
+ }
281
+ if (ri->opc2 & 0x1) {
282
+ return env->pmsav8.rlar[M_REG_NS][index];
283
+ } else {
284
+ return env->pmsav8.rbar[M_REG_NS][index];
285
+ }
286
+ }
287
+}
288
+
289
+static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
290
+ { .name = "PRBAR",
291
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 0,
292
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
293
+ .accessfn = access_tvm_trvm,
294
+ .readfn = prbar_read, .writefn = prbar_write },
295
+ { .name = "PRLAR",
296
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 1,
297
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
298
+ .accessfn = access_tvm_trvm,
299
+ .readfn = prlar_read, .writefn = prlar_write },
300
+ { .name = "PRSELR", .resetvalue = 0,
301
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 2, .opc2 = 1,
302
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
303
+ .writefn = prselr_write,
304
+ .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]) },
305
+ { .name = "HPRBAR", .resetvalue = 0,
306
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 0,
307
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
308
+ .readfn = hprbar_read, .writefn = hprbar_write },
309
+ { .name = "HPRLAR",
310
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 1,
311
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
312
+ .readfn = hprlar_read, .writefn = hprlar_write },
313
+ { .name = "HPRSELR", .resetvalue = 0,
314
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 2, .opc2 = 1,
315
+ .access = PL2_RW,
316
+ .writefn = hprselr_write,
317
+ .fieldoffset = offsetof(CPUARMState, pmsav8.hprselr) },
318
+ { .name = "HPRENR",
319
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 1, .opc2 = 1,
320
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
321
+ .readfn = hprenr_read, .writefn = hprenr_write },
322
+};
323
+
324
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
325
/* Reset for all these registers is handled in arm_cpu_reset(),
326
* because the PMSAv7 is also used by M-profile CPUs, which do
327
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
328
.access = PL1_R, .type = ARM_CP_CONST,
329
.resetvalue = cpu->pmsav7_dregion << 8
330
};
331
+ /* HMPUIR is specific to PMSA V8 */
332
+ ARMCPRegInfo id_hmpuir_reginfo = {
333
+ .name = "HMPUIR",
334
+ .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 4,
335
+ .access = PL2_R, .type = ARM_CP_CONST,
336
+ .resetvalue = cpu->pmsav8r_hdregion
337
+ };
338
static const ARMCPRegInfo crn0_wi_reginfo = {
339
.name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
340
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
341
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
342
define_arm_cp_regs(cpu, id_cp_reginfo);
343
if (!arm_feature(env, ARM_FEATURE_PMSA)) {
344
define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo);
345
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
346
+ arm_feature(env, ARM_FEATURE_V8)) {
347
+ uint32_t i = 0;
348
+ char *tmp_string;
349
+
350
+ define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
351
+ define_one_arm_cp_reg(cpu, &id_hmpuir_reginfo);
352
+ define_arm_cp_regs(cpu, pmsav8r_cp_reginfo);
353
+
354
+ /* Register alias is only valid for first 32 indexes */
355
+ for (i = 0; i < MIN(cpu->pmsav7_dregion, 32); ++i) {
356
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
357
+ uint8_t opc1 = extract32(i, 4, 1);
358
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
359
+
360
+ tmp_string = g_strdup_printf("PRBAR%u", i);
361
+ ARMCPRegInfo tmp_prbarn_reginfo = {
362
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
363
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
364
+ .access = PL1_RW, .resetvalue = 0,
365
+ .accessfn = access_tvm_trvm,
366
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
367
+ };
368
+ define_one_arm_cp_reg(cpu, &tmp_prbarn_reginfo);
369
+ g_free(tmp_string);
370
+
371
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
372
+ tmp_string = g_strdup_printf("PRLAR%u", i);
373
+ ARMCPRegInfo tmp_prlarn_reginfo = {
374
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
375
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
376
+ .access = PL1_RW, .resetvalue = 0,
377
+ .accessfn = access_tvm_trvm,
378
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
379
+ };
380
+ define_one_arm_cp_reg(cpu, &tmp_prlarn_reginfo);
381
+ g_free(tmp_string);
382
+ }
383
+
384
+ /* Register alias is only valid for first 32 indexes */
385
+ for (i = 0; i < MIN(cpu->pmsav8r_hdregion, 32); ++i) {
386
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
387
+ uint8_t opc1 = 0b100 | extract32(i, 4, 1);
388
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
389
+
390
+ tmp_string = g_strdup_printf("HPRBAR%u", i);
391
+ ARMCPRegInfo tmp_hprbarn_reginfo = {
392
+ .name = tmp_string,
393
+ .type = ARM_CP_NO_RAW,
394
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
395
+ .access = PL2_RW, .resetvalue = 0,
396
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
397
+ };
398
+ define_one_arm_cp_reg(cpu, &tmp_hprbarn_reginfo);
399
+ g_free(tmp_string);
400
+
401
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
402
+ tmp_string = g_strdup_printf("HPRLAR%u", i);
403
+ ARMCPRegInfo tmp_hprlarn_reginfo = {
404
+ .name = tmp_string,
405
+ .type = ARM_CP_NO_RAW,
406
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
407
+ .access = PL2_RW, .resetvalue = 0,
408
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
409
+ };
410
+ define_one_arm_cp_reg(cpu, &tmp_hprlarn_reginfo);
411
+ g_free(tmp_string);
412
+ }
413
} else if (arm_feature(env, ARM_FEATURE_V7)) {
414
define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
415
}
416
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
417
sctlr.type |= ARM_CP_SUPPRESS_TB_END;
418
}
419
define_one_arm_cp_reg(cpu, &sctlr);
420
+
421
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
422
+ arm_feature(env, ARM_FEATURE_V8)) {
423
+ ARMCPRegInfo vsctlr = {
424
+ .name = "VSCTLR", .state = ARM_CP_STATE_AA32,
425
+ .cp = 15, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
426
+ .access = PL2_RW, .resetvalue = 0x0,
427
+ .fieldoffset = offsetoflow32(CPUARMState, cp15.vsctlr),
428
+ };
429
+ define_one_arm_cp_reg(cpu, &vsctlr);
430
+ }
111
}
431
}
112
+#else
432
113
+ hvf_arm_set_cpu_features_from_host(cpu);
433
if (cpu_isar_feature(aa64_lor, cpu)) {
114
+#endif
434
diff --git a/target/arm/machine.c b/target/arm/machine.c
115
arm_cpu_post_init(obj);
435
index XXXXXXX..XXXXXXX 100644
436
--- a/target/arm/machine.c
437
+++ b/target/arm/machine.c
438
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_needed(void *opaque)
439
arm_feature(env, ARM_FEATURE_V8);
116
}
440
}
117
441
118
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
442
+static bool pmsav8r_needed(void *opaque)
119
{
443
+{
120
type_register_static(&arm_cpu_type_info);
444
+ ARMCPU *cpu = opaque;
121
445
+ CPUARMState *env = &cpu->env;
122
-#ifdef CONFIG_KVM
446
+
123
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
447
+ return arm_feature(env, ARM_FEATURE_PMSA) &&
124
type_register_static(&host_arm_cpu_type_info);
448
+ arm_feature(env, ARM_FEATURE_V8) &&
125
#endif
449
+ !arm_feature(env, ARM_FEATURE_M);
126
}
450
+}
127
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
451
+
128
index XXXXXXX..XXXXXXX 100644
452
+static const VMStateDescription vmstate_pmsav8r = {
129
--- a/target/arm/hvf/hvf.c
453
+ .name = "cpu/pmsav8/pmsav8r",
130
+++ b/target/arm/hvf/hvf.c
454
+ .version_id = 1,
131
@@ -XXX,XX +XXX,XX @@
455
+ .minimum_version_id = 1,
132
#include "sysemu/hvf.h"
456
+ .needed = pmsav8r_needed,
133
#include "sysemu/hvf_int.h"
457
+ .fields = (VMStateField[]) {
134
#include "sysemu/hw_accel.h"
458
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprbar, ARMCPU,
135
+#include "hvf_arm.h"
459
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
136
460
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprlar, ARMCPU,
137
#include <mach/mach_time.h>
461
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
138
462
+ VMSTATE_END_OF_LIST()
139
@@ -XXX,XX +XXX,XX @@ typedef struct HVFVTimer {
463
+ },
140
464
+};
141
static HVFVTimer vtimer;
465
+
142
466
static const VMStateDescription vmstate_pmsav8 = {
143
+typedef struct ARMHostCPUFeatures {
467
.name = "cpu/pmsav8",
144
+ ARMISARegisters isar;
468
.version_id = 1,
145
+ uint64_t features;
469
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pmsav8 = {
146
+ uint64_t midr;
470
VMSTATE_UINT32(env.pmsav8.mair0[M_REG_NS], ARMCPU),
147
+ uint32_t reset_sctlr;
471
VMSTATE_UINT32(env.pmsav8.mair1[M_REG_NS], ARMCPU),
148
+ const char *dtb_compatible;
472
VMSTATE_END_OF_LIST()
149
+} ARMHostCPUFeatures;
473
+ },
150
+
474
+ .subsections = (const VMStateDescription * []) {
151
+static ARMHostCPUFeatures arm_host_cpu_features;
475
+ &vmstate_pmsav8r,
152
+
476
+ NULL
153
struct hvf_reg_match {
477
}
154
int reg;
478
};
155
uint64_t offset;
479
156
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
157
return val;
158
}
159
160
+static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
161
+{
162
+ ARMISARegisters host_isar = {};
163
+ const struct isar_regs {
164
+ int reg;
165
+ uint64_t *val;
166
+ } regs[] = {
167
+ { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
168
+ { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
169
+ { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
170
+ { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
171
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
172
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
173
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
174
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
175
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
176
+ };
177
+ hv_vcpu_t fd;
178
+ hv_return_t r = HV_SUCCESS;
179
+ hv_vcpu_exit_t *exit;
180
+ int i;
181
+
182
+ ahcf->dtb_compatible = "arm,arm-v8";
183
+ ahcf->features = (1ULL << ARM_FEATURE_V8) |
184
+ (1ULL << ARM_FEATURE_NEON) |
185
+ (1ULL << ARM_FEATURE_AARCH64) |
186
+ (1ULL << ARM_FEATURE_PMU) |
187
+ (1ULL << ARM_FEATURE_GENERIC_TIMER);
188
+
189
+ /* We set up a small vcpu to extract host registers */
190
+
191
+ if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
192
+ return false;
193
+ }
194
+
195
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
196
+ r |= hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val);
197
+ }
198
+ r |= hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr);
199
+ r |= hv_vcpu_destroy(fd);
200
+
201
+ ahcf->isar = host_isar;
202
+
203
+ /*
204
+ * A scratch vCPU returns SCTLR 0, so let's fill our default with the M1
205
+ * boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97
206
+ */
207
+ ahcf->reset_sctlr = 0x30100180;
208
+ /*
209
+ * SPAN is disabled by default when SCTLR.SPAN=1. To improve compatibility,
210
+ * let's disable it on boot and then allow guest software to turn it on by
211
+ * setting it to 0.
212
+ */
213
+ ahcf->reset_sctlr |= 0x00800000;
214
+
215
+ /* Make sure we don't advertise AArch32 support for EL0/EL1 */
216
+ if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
217
+ return false;
218
+ }
219
+
220
+ return r == HV_SUCCESS;
221
+}
222
+
223
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
224
+{
225
+ if (!arm_host_cpu_features.dtb_compatible) {
226
+ if (!hvf_enabled() ||
227
+ !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
228
+ /*
229
+ * We can't report this error yet, so flag that we need to
230
+ * in arm_cpu_realizefn().
231
+ */
232
+ cpu->host_cpu_probe_failed = true;
233
+ return;
234
+ }
235
+ }
236
+
237
+ cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
238
+ cpu->isar = arm_host_cpu_features.isar;
239
+ cpu->env.features = arm_host_cpu_features.features;
240
+ cpu->midr = arm_host_cpu_features.midr;
241
+ cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
242
+}
243
+
244
void hvf_arch_vcpu_destroy(CPUState *cpu)
245
{
246
}
247
--
480
--
248
2.20.1
481
2.25.1
249
482
250
483
1
Optimize the MVE VNEG and VABS insns by using TCG
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
vector ops when possible.
2
3
3
Add PMSAv8r translation.
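As a rough standalone illustration of the region arithmetic used below (invented register values, not part of the patch): on v8-R the base/limit registers are 64-byte granular, so the low six bits of PRBAR read as zeroes and the low six bits of PRLAR as ones, versus the 32-byte granule used on M-profile.

/* Standalone sketch, not QEMU code: derive a region's span from PRBAR/PRLAR
 * with the 64-byte Cortex-R granule (the mask would be 0x1f for Cortex-M). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t prbar = 0x20000040;           /* example base register value   */
    uint32_t prlar = 0x2000ffc1;           /* example limit, bit 0 = enable */
    uint32_t mask  = 0x3f;

    if (prlar & 0x1) {                     /* region enabled */
        uint32_t base  = prbar & ~mask;    /* low bits read as zeroes */
        uint32_t limit = prlar | mask;     /* low bits read as ones   */
        printf("region 0x%08x..0x%08x\n", base, limit);
    }
    return 0;
}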
4
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
8
---
9
---
9
target/arm/translate-mve.c | 32 ++++++++++++++++++++++----------
10
target/arm/ptw.c | 126 ++++++++++++++++++++++++++++++++++++++---------
10
1 file changed, 22 insertions(+), 10 deletions(-)
11
1 file changed, 104 insertions(+), 22 deletions(-)
11
12
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
--- a/target/arm/ptw.c
15
+++ b/target/arm/translate-mve.c
16
+++ b/target/arm/ptw.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
17
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
17
return true;
18
19
if (arm_feature(env, ARM_FEATURE_M)) {
20
return env->v7m.mpu_ctrl[is_secure] & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
21
- } else {
22
- return regime_sctlr(env, mmu_idx) & SCTLR_BR;
23
}
24
+
25
+ if (mmu_idx == ARMMMUIdx_Stage2) {
26
+ return false;
27
+ }
28
+
29
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
18
}
30
}
19
31
20
-static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
32
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
21
+static bool do_1op_vec(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn,
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
22
+ GVecGen2Fn vecfn)
34
return !(result->f.prot & (1 << access_type));
23
{
35
}
24
TCGv_ptr qd, qm;
36
25
37
+static uint32_t *regime_rbar(CPUARMState *env, ARMMMUIdx mmu_idx,
26
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
38
+ uint32_t secure)
39
+{
40
+ if (regime_el(env, mmu_idx) == 2) {
41
+ return env->pmsav8.hprbar;
42
+ } else {
43
+ return env->pmsav8.rbar[secure];
44
+ }
45
+}
46
+
47
+static uint32_t *regime_rlar(CPUARMState *env, ARMMMUIdx mmu_idx,
48
+ uint32_t secure)
49
+{
50
+ if (regime_el(env, mmu_idx) == 2) {
51
+ return env->pmsav8.hprlar;
52
+ } else {
53
+ return env->pmsav8.rlar[secure];
54
+ }
55
+}
56
+
57
bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
58
MMUAccessType access_type, ARMMMUIdx mmu_idx,
59
bool secure, GetPhysAddrResult *result,
60
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
61
bool hit = false;
62
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
63
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
64
+ int region_counter;
65
+
66
+ if (regime_el(env, mmu_idx) == 2) {
67
+ region_counter = cpu->pmsav8r_hdregion;
68
+ } else {
69
+ region_counter = cpu->pmsav7_dregion;
70
+ }
71
72
result->f.lg_page_size = TARGET_PAGE_BITS;
73
result->f.phys_addr = address;
74
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
75
*mregion = -1;
76
}
77
78
+ if (mmu_idx == ARMMMUIdx_Stage2) {
79
+ fi->stage2 = true;
80
+ }
81
+
82
/*
83
* Unlike the ARM ARM pseudocode, we don't need to check whether this
84
* was an exception vector read from the vector table (which is always
85
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
86
hit = true;
87
}
88
89
- for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
90
+ uint32_t bitmask;
91
+ if (arm_feature(env, ARM_FEATURE_M)) {
92
+ bitmask = 0x1f;
93
+ } else {
94
+ bitmask = 0x3f;
95
+ fi->level = 0;
96
+ }
97
+
98
+ for (n = region_counter - 1; n >= 0; n--) {
99
/* region search */
100
/*
101
- * Note that the base address is bits [31:5] from the register
102
- * with bits [4:0] all zeroes, but the limit address is bits
103
- * [31:5] from the register with bits [4:0] all ones.
104
+ * Note that the base address is bits [31:x] from the register
105
+ * with bits [x-1:0] all zeroes, but the limit address is bits
106
+ * [31:x] from the register with bits [x:0] all ones. Where x is
107
+ * 5 for Cortex-M and 6 for Cortex-R
108
*/
109
- uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f;
110
- uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f;
111
+ uint32_t base = regime_rbar(env, mmu_idx, secure)[n] & ~bitmask;
112
+ uint32_t limit = regime_rlar(env, mmu_idx, secure)[n] | bitmask;
113
114
- if (!(env->pmsav8.rlar[secure][n] & 0x1)) {
115
+ if (!(regime_rlar(env, mmu_idx, secure)[n] & 0x1)) {
116
/* Region disabled */
117
continue;
118
}
119
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
120
* PMSAv7 where highest-numbered-region wins)
121
*/
122
fi->type = ARMFault_Permission;
123
- fi->level = 1;
124
+ if (arm_feature(env, ARM_FEATURE_M)) {
125
+ fi->level = 1;
126
+ }
127
return true;
128
}
129
130
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
131
}
132
133
if (!hit) {
134
- /* background fault */
135
- fi->type = ARMFault_Background;
136
+ if (arm_feature(env, ARM_FEATURE_M)) {
137
+ fi->type = ARMFault_Background;
138
+ } else {
139
+ fi->type = ARMFault_Permission;
140
+ }
27
return true;
141
return true;
28
}
142
}
29
143
30
- qd = mve_qreg_ptr(a->qd);
144
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
31
- qm = mve_qreg_ptr(a->qm);
145
/* hit using the background region */
32
- fn(cpu_env, qd, qm);
146
get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
33
- tcg_temp_free_ptr(qd);
147
} else {
34
- tcg_temp_free_ptr(qm);
148
- uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
35
+ if (vecfn && mve_no_predication(s)) {
149
- uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm), 16, 16);
150
+ uint32_t matched_rbar = regime_rbar(env, mmu_idx, secure)[matchregion];
37
+ } else {
151
+ uint32_t matched_rlar = regime_rlar(env, mmu_idx, secure)[matchregion];
38
+ qd = mve_qreg_ptr(a->qd);
152
+ uint32_t ap = extract32(matched_rbar, 1, 2);
39
+ qm = mve_qreg_ptr(a->qm);
153
+ uint32_t xn = extract32(matched_rbar, 0, 1);
40
+ fn(cpu_env, qd, qm);
154
bool pxn = false;
41
+ tcg_temp_free_ptr(qd);
155
42
+ tcg_temp_free_ptr(qm);
156
if (arm_feature(env, ARM_FEATURE_V8_1M)) {
43
+ }
157
- pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
44
mve_update_eci(s);
158
+ pxn = extract32(matched_rlar, 4, 1);
45
return true;
159
}
160
161
if (m_is_system_region(env, address)) {
162
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
163
xn = 1;
164
}
165
166
- result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
167
+ if (regime_el(env, mmu_idx) == 2) {
168
+ result->f.prot = simple_ap_to_rw_prot_is_user(ap,
169
+ mmu_idx != ARMMMUIdx_E2);
170
+ } else {
171
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
172
+ }
173
+
174
+ if (!arm_feature(env, ARM_FEATURE_M)) {
175
+ uint8_t attrindx = extract32(matched_rlar, 1, 3);
176
+ uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
177
+ uint8_t sh = extract32(matched_rlar, 3, 2);
178
+
179
+ if (regime_sctlr(env, mmu_idx) & SCTLR_WXN &&
180
+ result->f.prot & PAGE_WRITE && mmu_idx != ARMMMUIdx_Stage2) {
181
+ xn = 0x1;
182
+ }
183
+
184
+ if ((regime_el(env, mmu_idx) == 1) &&
185
+ regime_sctlr(env, mmu_idx) & SCTLR_UWXN && ap == 0x1) {
186
+ pxn = 0x1;
187
+ }
188
+
189
+ result->cacheattrs.is_s2_format = false;
190
+ result->cacheattrs.attrs = extract64(mair, attrindx * 8, 8);
191
+ result->cacheattrs.shareability = sh;
192
+ }
193
+
194
if (result->f.prot && !xn && !(pxn && !is_user)) {
195
result->f.prot |= PAGE_EXEC;
196
}
197
- /*
198
- * We don't need to look the attribute up in the MAIR0/MAIR1
199
- * registers because that only tells us about cacheability.
200
- */
201
+
202
if (mregion) {
203
*mregion = matchregion;
204
}
205
}
206
207
fi->type = ARMFault_Permission;
208
- fi->level = 1;
209
+ if (arm_feature(env, ARM_FEATURE_M)) {
210
+ fi->level = 1;
211
+ }
212
return !(result->f.prot & (1 << access_type));
46
}
213
}
47
214
48
-#define DO_1OP(INSN, FN) \
215
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
49
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
216
cacheattrs1 = result->cacheattrs;
50
+{
217
memset(result, 0, sizeof(*result));
51
+ return do_1op_vec(s, a, fn, NULL);
218
52
+}
219
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
53
+
220
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
54
+#define DO_1OP_VEC(INSN, FN, VECFN) \
221
+ ret = get_phys_addr_pmsav8(env, ipa, access_type,
55
static bool trans_##INSN(DisasContext *s, arg_1op *a) \
222
+ ptw->in_mmu_idx, is_secure, result, fi);
56
{ \
223
+ } else {
57
static MVEGenOneOpFn * const fns[] = { \
224
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
58
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
225
+ is_el0, result, fi);
59
gen_helper_mve_##FN##w, \
226
+ }
60
NULL, \
227
fi->s2addr = ipa;
61
}; \
228
62
- return do_1op(s, a, fns[a->size]); \
229
/* Combine the S1 and S2 perms. */
63
+ return do_1op_vec(s, a, fns[a->size], VECFN); \
64
}
65
66
+#define DO_1OP(INSN, FN) DO_1OP_VEC(INSN, FN, NULL)
67
+
68
DO_1OP(VCLZ, vclz)
69
DO_1OP(VCLS, vcls)
70
-DO_1OP(VABS, vabs)
71
-DO_1OP(VNEG, vneg)
72
+DO_1OP_VEC(VABS, vabs, tcg_gen_gvec_abs)
73
+DO_1OP_VEC(VNEG, vneg, tcg_gen_gvec_neg)
74
DO_1OP(VQABS, vqabs)
75
DO_1OP(VQNEG, vqneg)
76
DO_1OP(VMAXA, vmaxa)
77
--
230
--
78
2.20.1
231
2.25.1
79
232
80
233
1
When not predicating, implement the MVE bitwise logical insns
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
directly using TCG vector operations.
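The shape of the optimisation, as a standalone sketch rather than the actual QEMU code (names and data layout invented): when every lane is enabled, the whole operation can be issued as one plain full-width vector op; otherwise fall back to the slower path that honours the per-lane predicate.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: bitwise AND over 'lanes' elements with optional predication. */
static void vand_sketch(uint32_t *d, const uint32_t *n, const uint32_t *m,
                        const bool *lane_enabled, size_t lanes)
{
    bool all_enabled = true;

    for (size_t i = 0; i < lanes; i++) {
        all_enabled = all_enabled && lane_enabled[i];
    }

    if (all_enabled) {
        /* fast path: no predication, one simple full-width operation */
        for (size_t i = 0; i < lanes; i++) {
            d[i] = n[i] & m[i];
        }
    } else {
        /* slow path: only update the lanes the predicate allows */
        for (size_t i = 0; i < lanes; i++) {
            if (lane_enabled[i]) {
                d[i] = n[i] & m[i];
            }
        }
    }
}

In the patch below the same split is the vecfn-versus-helper choice made in do_2op_vec().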
3
2
3
All constants are taken from the ARM Cortex-R52 Processor TRM, revision r1p3.
4
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221206102504.165775-8-tobias.roehmel@rwth-aachen.de
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
8
---
9
---
9
target/arm/translate-mve.c | 51 +++++++++++++++++++++++++++-----------
10
target/arm/cpu_tcg.c | 42 ++++++++++++++++++++++++++++++++++++++++++
10
1 file changed, 36 insertions(+), 15 deletions(-)
11
1 file changed, 42 insertions(+)
11
12
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
--- a/target/arm/cpu_tcg.c
15
+++ b/target/arm/translate-mve.c
16
+++ b/target/arm/cpu_tcg.c
16
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr mve_qreg_ptr(unsigned reg)
17
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
17
return ret;
18
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
18
}
19
}
19
20
20
+static bool mve_no_predication(DisasContext *s)
21
+static void cortex_r52_initfn(Object *obj)
21
+{
22
+{
22
+ /*
23
+ ARMCPU *cpu = ARM_CPU(obj);
23
+ * Return true if we are executing the entire MVE instruction
24
+
24
+ * with no predication or partial-execution, and so we can safely
25
+ set_feature(&cpu->env, ARM_FEATURE_V8);
25
+ * use an inline TCG vector implementation.
26
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
26
+ */
27
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
27
+ return s->eci == 0 && s->mve_no_pred;
28
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
29
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
30
+ cpu->midr = 0x411fd133; /* r1p3 */
31
+ cpu->revidr = 0x00000000;
32
+ cpu->reset_fpsid = 0x41034023;
33
+ cpu->isar.mvfr0 = 0x10110222;
34
+ cpu->isar.mvfr1 = 0x12111111;
35
+ cpu->isar.mvfr2 = 0x00000043;
36
+ cpu->ctr = 0x8144c004;
37
+ cpu->reset_sctlr = 0x30c50838;
38
+ cpu->isar.id_pfr0 = 0x00000131;
39
+ cpu->isar.id_pfr1 = 0x10111001;
40
+ cpu->isar.id_dfr0 = 0x03010006;
41
+ cpu->id_afr0 = 0x00000000;
42
+ cpu->isar.id_mmfr0 = 0x00211040;
43
+ cpu->isar.id_mmfr1 = 0x40000000;
44
+ cpu->isar.id_mmfr2 = 0x01200000;
45
+ cpu->isar.id_mmfr3 = 0xf0102211;
46
+ cpu->isar.id_mmfr4 = 0x00000010;
47
+ cpu->isar.id_isar0 = 0x02101110;
48
+ cpu->isar.id_isar1 = 0x13112111;
49
+ cpu->isar.id_isar2 = 0x21232142;
50
+ cpu->isar.id_isar3 = 0x01112131;
51
+ cpu->isar.id_isar4 = 0x00010142;
52
+ cpu->isar.id_isar5 = 0x00010001;
53
+ cpu->isar.dbgdidr = 0x77168000;
54
+ cpu->clidr = (1 << 27) | (1 << 24) | 0x3;
55
+ cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
56
+ cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
57
+
58
+ cpu->pmsav7_dregion = 16;
59
+ cpu->pmsav8r_hdregion = 16;
28
+}
60
+}
29
+
61
+
30
static bool mve_check_qreg_bank(DisasContext *s, int qmask)
62
static void cortex_r5f_initfn(Object *obj)
31
{
63
{
32
/*
64
ARMCPU *cpu = ARM_CPU(obj);
33
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_tcg_cpus[] = {
34
return do_1op(s, a, fns[a->size]);
66
.class_init = arm_v7m_class_init },
35
}
67
{ .name = "cortex-r5", .initfn = cortex_r5_initfn },
36
68
{ .name = "cortex-r5f", .initfn = cortex_r5f_initfn },
37
-static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
69
+ { .name = "cortex-r52", .initfn = cortex_r52_initfn },
38
+static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
70
{ .name = "ti925t", .initfn = ti925t_initfn },
39
+ GVecGen3Fn *vecfn)
71
{ .name = "sa1100", .initfn = sa1100_initfn },
40
{
72
{ .name = "sa1110", .initfn = sa1110_initfn },
41
TCGv_ptr qd, qn, qm;
42
43
@@ -XXX,XX +XXX,XX @@ static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
44
return true;
45
}
46
47
- qd = mve_qreg_ptr(a->qd);
48
- qn = mve_qreg_ptr(a->qn);
49
- qm = mve_qreg_ptr(a->qm);
50
- fn(cpu_env, qd, qn, qm);
51
- tcg_temp_free_ptr(qd);
52
- tcg_temp_free_ptr(qn);
53
- tcg_temp_free_ptr(qm);
54
+ if (vecfn && mve_no_predication(s)) {
55
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
56
+ mve_qreg_offset(a->qm), 16, 16);
57
+ } else {
58
+ qd = mve_qreg_ptr(a->qd);
59
+ qn = mve_qreg_ptr(a->qn);
60
+ qm = mve_qreg_ptr(a->qm);
61
+ fn(cpu_env, qd, qn, qm);
62
+ tcg_temp_free_ptr(qd);
63
+ tcg_temp_free_ptr(qn);
64
+ tcg_temp_free_ptr(qm);
65
+ }
66
mve_update_eci(s);
67
return true;
68
}
69
70
-#define DO_LOGIC(INSN, HELPER) \
71
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn *fn)
72
+{
73
+ return do_2op_vec(s, a, fn, NULL);
74
+}
75
+
76
+#define DO_LOGIC(INSN, HELPER, VECFN) \
77
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
78
{ \
79
- return do_2op(s, a, HELPER); \
80
+ return do_2op_vec(s, a, HELPER, VECFN); \
81
}
82
83
-DO_LOGIC(VAND, gen_helper_mve_vand)
84
-DO_LOGIC(VBIC, gen_helper_mve_vbic)
85
-DO_LOGIC(VORR, gen_helper_mve_vorr)
86
-DO_LOGIC(VORN, gen_helper_mve_vorn)
87
-DO_LOGIC(VEOR, gen_helper_mve_veor)
88
+DO_LOGIC(VAND, gen_helper_mve_vand, tcg_gen_gvec_and)
89
+DO_LOGIC(VBIC, gen_helper_mve_vbic, tcg_gen_gvec_andc)
90
+DO_LOGIC(VORR, gen_helper_mve_vorr, tcg_gen_gvec_or)
91
+DO_LOGIC(VORN, gen_helper_mve_vorn, tcg_gen_gvec_orc)
92
+DO_LOGIC(VEOR, gen_helper_mve_veor, tcg_gen_gvec_xor)
93
94
static bool trans_VPSEL(DisasContext *s, arg_2op *a)
95
{
96
--
73
--
97
2.20.1
74
2.25.1
98
75
99
76
1
Currently gen_jmp_tb() assumes that if it is called then the jump it
1
From: Alex Bennée <alex.bennee@linaro.org>
2
is handling is the only reason that we might be trying to end the TB,
3
so it will use goto_tb if it can. This is usually the case: mostly
4
"we did something that means we must end the TB" happens on a
5
non-branch instruction. However, there are cases where we decide
6
early in handling an instruction that we need to end the TB and
7
return to the main loop, and then the insn is a complex one that
8
involves gen_jmp_tb(). For instance, for M-profile FP instructions,
9
in gen_preserve_fp_state() which is called from vfp_access_check() we
10
want to force an exit to the main loop if lazy state preservation is
11
active and we are in icount mode.
12
2
13
Make gen_jmp_tb() look at the current value of is_jmp, and only use
3
The check semihosting_enabled() wants to know if the guest is
14
goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
4
currently in user mode. Unlike the other cases, the test was inverted,
5
causing us to block semihosting calls in non-EL0 modes.
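As a minimal standalone sketch of the failure mode (invented helper, not the real semihosting API): handing a predicate the condition "is the guest privileged?" where it expects "is the guest in user mode?" flips the policy for every exception level.

#include <stdbool.h>
#include <stdio.h>

/* Sketch only: a call from user mode is allowed only when explicitly enabled. */
static bool call_allowed(bool guest_is_user, bool userspace_enabled)
{
    return !guest_is_user || userspace_enabled;
}

int main(void)
{
    int current_el = 1;                    /* privileged guest code */
    bool userspace_enabled = false;

    /* buggy: passes "is privileged" where "is user" is expected -> blocked */
    printf("buggy: %d\n", call_allowed(current_el != 0, userspace_enabled));
    /* fixed: allowed from EL1 as intended */
    printf("fixed: %d\n", call_allowed(current_el == 0, userspace_enabled));
    return 0;
}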
15
6
7
Cc: qemu-stable@nongnu.org
8
Fixes: 19b26317e9 (target/arm: Honour -semihosting-config userspace=on)
9
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
19
---
12
---
20
target/arm/translate.c | 34 +++++++++++++++++++++++++++++++++-
13
target/arm/translate.c | 2 +-
21
1 file changed, 33 insertions(+), 1 deletion(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
22
15
23
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/translate.c
18
--- a/target/arm/translate.c
26
+++ b/target/arm/translate.c
19
+++ b/target/arm/translate.c
27
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
20
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
28
/* An indirect jump so that we still trigger the debug exception. */
21
* semihosting, to provide some semblance of security
29
gen_set_pc_im(s, dest);
22
* (and for consistency with our 32-bit semihosting).
30
s->base.is_jmp = DISAS_JUMP;
23
*/
31
- } else {
24
- if (semihosting_enabled(s->current_el != 0) &&
32
+ return;
25
+ if (semihosting_enabled(s->current_el == 0) &&
33
+ }
26
(imm == (s->thumb ? 0x3c : 0xf000))) {
34
+ switch (s->base.is_jmp) {
27
gen_exception_internal_insn(s, EXCP_SEMIHOST);
35
+ case DISAS_NEXT:
28
return;
36
+ case DISAS_TOO_MANY:
37
+ case DISAS_NORETURN:
38
+ /*
39
+ * The normal case: just go to the destination TB.
40
+ * NB: NORETURN happens if we generate code like
41
+ * gen_brcondi(l);
42
+ * gen_jmp();
43
+ * gen_set_label(l);
44
+ * gen_jmp();
45
+ * on the second call to gen_jmp().
46
+ */
47
gen_goto_tb(s, tbno, dest);
48
+ break;
49
+ case DISAS_UPDATE_NOCHAIN:
50
+ case DISAS_UPDATE_EXIT:
51
+ /*
52
+ * We already decided we're leaving the TB for some other reason.
53
+ * Avoid using goto_tb so we really do exit back to the main loop
54
+ * and don't chain to another TB.
55
+ */
56
+ gen_set_pc_im(s, dest);
57
+ gen_goto_ptr();
58
+ s->base.is_jmp = DISAS_NORETURN;
59
+ break;
60
+ default:
61
+ /*
62
+ * We shouldn't be emitting code for a jump and also have
63
+ * is_jmp set to one of the special cases like DISAS_SWI.
64
+ */
65
+ g_assert_not_reached();
66
}
67
}
68
69
--
29
--
70
2.20.1
30
2.25.1
71
31
72
32
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
We need to handle PSCI calls. Most of the TCG code works for us,
3
Fix typos and add background information.
4
but we can simplify it to handle only aa64 mode, and we need to
5
handle SUSPEND differently.
6
4
7
This patch takes the TCG code as template and duplicates it in HVF.
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
8
9
To tell the guest that we support PSCI 0.2 now, update the check in
10
arm_cpu_initfn() as well.
11
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20210916155404.86958-8-agraf@csgraf.de
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
8
---
18
target/arm/cpu.c | 4 +-
9
hw/timer/imx_epit.c | 20 ++++++++++++++++----
19
target/arm/hvf/hvf.c | 141 ++++++++++++++++++++++++++++++++++--
10
1 file changed, 16 insertions(+), 4 deletions(-)
20
target/arm/hvf/trace-events | 1 +
21
3 files changed, 139 insertions(+), 7 deletions(-)
22
11
23
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
24
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/cpu.c
14
--- a/hw/timer/imx_epit.c
26
+++ b/target/arm/cpu.c
15
+++ b/hw/timer/imx_epit.c
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
16
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
28
cpu->psci_version = 1; /* By default assume PSCI v0.1 */
29
cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
30
31
- if (tcg_enabled()) {
32
- cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
33
+ if (tcg_enabled() || hvf_enabled()) {
34
+ cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
35
}
17
}
36
}
18
}
37
19
38
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
20
+/*
39
index XXXXXXX..XXXXXXX 100644
21
+ * This is called both on hardware (device) reset and software reset.
40
--- a/target/arm/hvf/hvf.c
22
+ */
41
+++ b/target/arm/hvf/hvf.c
23
static void imx_epit_reset(DeviceState *dev)
42
@@ -XXX,XX +XXX,XX @@
24
{
43
#include "hw/irq.h"
25
IMXEPITState *s = IMX_EPIT(dev);
44
#include "qemu/main-loop.h"
26
45
#include "sysemu/cpus.h"
27
- /*
46
+#include "arm-powerctl.h"
28
- * Soft reset doesn't touch some bits; hard reset clears them
47
#include "target/arm/cpu.h"
29
- */
48
#include "target/arm/internals.h"
30
+ /* Soft reset doesn't touch some bits; hard reset clears them */
49
#include "trace/trace-target_arm_hvf.h"
31
s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
50
@@ -XXX,XX +XXX,XX @@
32
s->sr = 0;
51
#define TMR_CTL_IMASK (1 << 1)
33
s->lr = EPIT_TIMER_MAX;
52
#define TMR_CTL_ISTATUS (1 << 2)
34
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
53
35
ptimer_transaction_begin(s->timer_cmp);
54
+static void hvf_wfi(CPUState *cpu);
36
ptimer_transaction_begin(s->timer_reload);
55
+
37
56
typedef struct HVFVTimer {
38
+ /* Update the frequency. Has been done already in case of a reset. */
57
/* Vtimer value during migration and paused state */
39
if (!(s->cr & CR_SWR)) {
58
uint64_t vtimer_val;
40
imx_epit_set_freq(s);
59
@@ -XXX,XX +XXX,XX @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
41
}
60
arm_cpu_do_interrupt(cpu);
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
43
break;
44
45
case 1: /* SR - ACK*/
46
- /* writing 1 to OCIF clear the OCIF bit */
47
+ /* writing 1 to OCIF clears the OCIF bit */
48
if (value & 0x01) {
49
s->sr = 0;
50
imx_epit_update_int(s);
51
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
52
0x00001000);
53
sysbus_init_mmio(sbd, &s->iomem);
54
55
+ /*
56
+ * The reload timer keeps running when the peripheral is enabled. It is a
57
+ * kind of wall clock that does not generate any interrupts. The callback
58
+ * needs to be provided, but it does nothing as the ptimer already supports
59
+ * all necessary reloading functionality.
60
+ */
61
s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_LEGACY);
62
63
+ /*
64
+ * The compare timer is running only when the peripheral configuration is
65
+ * in a state that will generate compare interrupts.
66
+ */
67
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
61
}
68
}
62
69
63
+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
64
+{
65
+ int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
66
+ assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
67
+}
68
+
69
+/*
70
+ * Handle a PSCI call.
71
+ *
72
+ * Returns 0 on success
73
+ * -1 when the PSCI call is unknown,
74
+ */
75
+static bool hvf_handle_psci_call(CPUState *cpu)
76
+{
77
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
78
+ CPUARMState *env = &arm_cpu->env;
79
+ uint64_t param[4] = {
80
+ env->xregs[0],
81
+ env->xregs[1],
82
+ env->xregs[2],
83
+ env->xregs[3]
84
+ };
85
+ uint64_t context_id, mpidr;
86
+ bool target_aarch64 = true;
87
+ CPUState *target_cpu_state;
88
+ ARMCPU *target_cpu;
89
+ target_ulong entry;
90
+ int target_el = 1;
91
+ int32_t ret = 0;
92
+
93
+ trace_hvf_psci_call(param[0], param[1], param[2], param[3],
94
+ arm_cpu->mp_affinity);
95
+
96
+ switch (param[0]) {
97
+ case QEMU_PSCI_0_2_FN_PSCI_VERSION:
98
+ ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
99
+ break;
100
+ case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
101
+ ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
102
+ break;
103
+ case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
104
+ case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
105
+ mpidr = param[1];
106
+
107
+ switch (param[2]) {
108
+ case 0:
109
+ target_cpu_state = arm_get_cpu_by_id(mpidr);
110
+ if (!target_cpu_state) {
111
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
112
+ break;
113
+ }
114
+ target_cpu = ARM_CPU(target_cpu_state);
115
+
116
+ ret = target_cpu->power_state;
117
+ break;
118
+ default:
119
+ /* Everything above affinity level 0 is always on. */
120
+ ret = 0;
121
+ }
122
+ break;
123
+ case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
124
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
125
+ /*
126
+ * QEMU reset and shutdown are async requests, but PSCI
127
+ * mandates that we never return from the reset/shutdown
128
+ * call, so power the CPU off now so it doesn't execute
129
+ * anything further.
130
+ */
131
+ hvf_psci_cpu_off(arm_cpu);
132
+ break;
133
+ case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
134
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
135
+ hvf_psci_cpu_off(arm_cpu);
136
+ break;
137
+ case QEMU_PSCI_0_1_FN_CPU_ON:
138
+ case QEMU_PSCI_0_2_FN_CPU_ON:
139
+ case QEMU_PSCI_0_2_FN64_CPU_ON:
140
+ mpidr = param[1];
141
+ entry = param[2];
142
+ context_id = param[3];
143
+ ret = arm_set_cpu_on(mpidr, entry, context_id,
144
+ target_el, target_aarch64);
145
+ break;
146
+ case QEMU_PSCI_0_1_FN_CPU_OFF:
147
+ case QEMU_PSCI_0_2_FN_CPU_OFF:
148
+ hvf_psci_cpu_off(arm_cpu);
149
+ break;
150
+ case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
151
+ case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
152
+ case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
153
+ /* Affinity levels are not supported in QEMU */
154
+ if (param[1] & 0xfffe0000) {
155
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
156
+ break;
157
+ }
158
+ /* Powerdown is not supported, we always go into WFI */
159
+ env->xregs[0] = 0;
160
+ hvf_wfi(cpu);
161
+ break;
162
+ case QEMU_PSCI_0_1_FN_MIGRATE:
163
+ case QEMU_PSCI_0_2_FN_MIGRATE:
164
+ ret = QEMU_PSCI_RET_NOT_SUPPORTED;
165
+ break;
166
+ default:
167
+ return false;
168
+ }
169
+
170
+ env->xregs[0] = ret;
171
+ return true;
172
+}
173
+
174
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
175
{
176
ARMCPU *arm_cpu = ARM_CPU(cpu);
177
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
178
break;
179
case EC_AA64_HVC:
180
cpu_synchronize_state(cpu);
181
- trace_hvf_unknown_hvc(env->xregs[0]);
182
- /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
183
- env->xregs[0] = -1;
184
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_HVC) {
185
+ if (!hvf_handle_psci_call(cpu)) {
186
+ trace_hvf_unknown_hvc(env->xregs[0]);
187
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
188
+ env->xregs[0] = -1;
189
+ }
190
+ } else {
191
+ trace_hvf_unknown_hvc(env->xregs[0]);
192
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
193
+ }
194
break;
195
case EC_AA64_SMC:
196
cpu_synchronize_state(cpu);
197
- trace_hvf_unknown_smc(env->xregs[0]);
198
- hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
199
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_SMC) {
200
+ advance_pc = true;
201
+
202
+ if (!hvf_handle_psci_call(cpu)) {
203
+ trace_hvf_unknown_smc(env->xregs[0]);
204
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
205
+ env->xregs[0] = -1;
206
+ }
207
+ } else {
208
+ trace_hvf_unknown_smc(env->xregs[0]);
209
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
210
+ }
211
break;
212
default:
213
cpu_synchronize_state(cpu);
214
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
215
index XXXXXXX..XXXXXXX 100644
216
--- a/target/arm/hvf/trace-events
217
+++ b/target/arm/hvf/trace-events
218
@@ -XXX,XX +XXX,XX @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
219
hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
220
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
221
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
222
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
223
--
70
--
224
2.20.1
71
2.25.1
225
226
New patch
1
From: Axel Heider <axel.heider@hensoldt.net>
1
2
3
Remove unused defines and add the defines that are now needed.
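For context, a small worked example of what the field widths feed into (the input clock is an assumed value, not from the patch): imx_epit_set_freq() below divides the selected clock by one more than the 12-bit CR.PRESCALE field.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t clock_hz = 66000000;   /* assumed peripheral clock          */
    uint32_t prescale = 65;         /* example value of the 12-bit field */
    uint32_t timer_hz = clock_hz / (prescale + 1);

    printf("timer ticks at %u Hz\n", timer_hz);   /* 1000000 */
    return 0;
}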
4
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
include/hw/timer/imx_epit.h | 4 ++--
10
hw/timer/imx_epit.c | 4 ++--
11
2 files changed, 4 insertions(+), 4 deletions(-)
12
13
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/timer/imx_epit.h
16
+++ b/include/hw/timer/imx_epit.h
17
@@ -XXX,XX +XXX,XX @@
18
#define CR_OCIEN (1 << 2)
19
#define CR_RLD (1 << 3)
20
#define CR_PRESCALE_SHIFT (4)
21
-#define CR_PRESCALE_MASK (0xfff)
22
+#define CR_PRESCALE_BITS (12)
23
#define CR_SWR (1 << 16)
24
#define CR_IOVW (1 << 17)
25
#define CR_DBGEN (1 << 18)
26
@@ -XXX,XX +XXX,XX @@
27
#define CR_DOZEN (1 << 20)
28
#define CR_STOPEN (1 << 21)
29
#define CR_CLKSRC_SHIFT (24)
30
-#define CR_CLKSRC_MASK (0x3 << CR_CLKSRC_SHIFT)
31
+#define CR_CLKSRC_BITS (2)
32
33
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
34
35
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/timer/imx_epit.c
38
+++ b/hw/timer/imx_epit.c
39
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
40
uint32_t clksrc;
41
uint32_t prescaler;
42
43
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, 2);
44
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, 12);
45
+ clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
46
+ prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
47
48
s->freq = imx_ccm_get_clock_frequency(s->ccm,
49
imx_epit_clocks[clksrc]) / prescaler;
50
--
51
2.25.1
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Hvf's permission bitmap during and after dirty logging does not include
4
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
5
instruction faults once dirty logging is enabled.
6
7
Add the bit to make it work properly.
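The underlying idea, as a standalone sketch with made-up permission names (not the Hypervisor.framework API): dirty logging drops write access so the first store to a page traps, but the reduced permission set must still include execute, otherwise instruction fetch from logged guest memory faults.

#include <stdio.h>

enum { PERM_READ = 1 << 0, PERM_WRITE = 1 << 1, PERM_EXEC = 1 << 2 };

/* Sketch only: permissions applied to a memory slot while it is logged. */
static int logging_perms(void)
{
    /* read-only alone would also break instruction fetch; keep execute so
     * that only writes trap and mark the page dirty */
    return PERM_READ | PERM_EXEC;
}

int main(void)
{
    printf("perms while logging: 0x%x\n", logging_perms());
    return 0;
}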
8
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-3-agraf@csgraf.de
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
5
---
14
accel/hvf/hvf-accel-ops.c | 4 ++--
6
include/hw/timer/imx_epit.h | 2 ++
15
1 file changed, 2 insertions(+), 2 deletions(-)
7
hw/timer/imx_epit.c | 12 ++++++------
8
2 files changed, 8 insertions(+), 6 deletions(-)
16
9
17
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
10
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
18
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/hvf/hvf-accel-ops.c
12
--- a/include/hw/timer/imx_epit.h
20
+++ b/accel/hvf/hvf-accel-ops.c
13
+++ b/include/hw/timer/imx_epit.h
21
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
14
@@ -XXX,XX +XXX,XX @@
22
if (on) {
15
#define CR_CLKSRC_SHIFT (24)
23
slot->flags |= HVF_SLOT_LOG;
16
#define CR_CLKSRC_BITS (2)
24
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
17
25
- HV_MEMORY_READ);
18
+#define SR_OCIF (1 << 0)
26
+ HV_MEMORY_READ | HV_MEMORY_EXEC);
19
+
27
/* stop tracking region*/
20
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
21
22
#define TYPE_IMX_EPIT "imx.epit"
23
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/timer/imx_epit.c
26
+++ b/hw/timer/imx_epit.c
27
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx_epit_clocks[] = {
28
*/
29
static void imx_epit_update_int(IMXEPITState *s)
30
{
31
- if (s->sr && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
32
+ if ((s->sr & SR_OCIF) && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
33
qemu_irq_raise(s->irq);
28
} else {
34
} else {
29
slot->flags &= ~HVF_SLOT_LOG;
35
qemu_irq_lower(s->irq);
30
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
31
- HV_MEMORY_READ | HV_MEMORY_WRITE);
37
break;
32
+ HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
38
33
}
39
case 1: /* SR - ACK*/
40
- /* writing 1 to OCIF clears the OCIF bit */
41
- if (value & 0x01) {
42
- s->sr = 0;
43
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
44
+ if (value & SR_OCIF) {
45
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
46
imx_epit_update_int(s);
47
}
48
break;
49
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
50
IMXEPITState *s = IMX_EPIT(opaque);
51
52
DPRINTF("sr was %d\n", s->sr);
53
-
54
- s->sr = 1;
55
+ /* Set interrupt status bit SR.OCIF and update the interrupt state */
56
+ s->sr |= SR_OCIF;
57
imx_epit_update_int(s);
34
}
58
}
35
59
36
--
60
--
37
2.20.1
61
2.25.1
38
39
New patch
1
From: Axel Heider <axel.heider@hensoldt.net>
1
2
3
The interrupt state can change due to:
4
- reset clears both SR.OCIF and CR.OCIE
5
- write to CR.EN or CR.OCIE
6
7
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/timer/imx_epit.c | 16 ++++++++++++----
12
1 file changed, 12 insertions(+), 4 deletions(-)
13
14
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/timer/imx_epit.c
17
+++ b/hw/timer/imx_epit.c
18
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
19
if (s->cr & CR_SWR) {
20
/* handle the reset */
21
imx_epit_reset(DEVICE(s));
22
- /*
23
- * TODO: could we 'break' here? following operations appear
24
- * to duplicate the work imx_epit_reset() already did.
25
- */
26
}
27
28
+ /*
29
+ * The interrupt state can change due to:
30
+ * - reset clears both SR.OCIF and CR.OCIE
31
+ * - write to CR.EN or CR.OCIE
32
+ */
33
+ imx_epit_update_int(s);
34
+
35
+ /*
36
+ * TODO: could we 'break' here for reset? following operations appear
37
+ * to duplicate the work imx_epit_reset() already did.
38
+ */
39
+
40
ptimer_transaction_begin(s->timer_cmp);
41
ptimer_transaction_begin(s->timer_reload);
42
43
--
44
2.25.1
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
We will need to install a migration helper for the ARM hvf backend.
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Let's introduce an arch callback for the overall hvf init chain to
5
do so.
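The pattern being added, as a standalone sketch with invented names: the common accelerator init path finishes by calling a single per-architecture hook, so a target can hang extra setup, such as registering a migration helper, off that hook without further changes to common code.

#include <stdio.h>

static int accel_arch_init(void);       /* provided by each architecture */

static int accel_common_init(void)      /* common code */
{
    /* ... common setup ... */
    return accel_arch_init();           /* last step: arch-specific hook */
}

static int accel_arch_init(void)        /* a target with nothing extra to do */
{
    return 0;
}

int main(void)
{
    printf("init: %d\n", accel_common_init());
    return 0;
}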
6
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20210916155404.86958-4-agraf@csgraf.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
6
---
12
include/sysemu/hvf_int.h | 1 +
7
hw/timer/imx_epit.c | 20 ++++++++++++++------
13
accel/hvf/hvf-accel-ops.c | 3 ++-
8
1 file changed, 14 insertions(+), 6 deletions(-)
14
target/i386/hvf/hvf.c | 5 +++++
15
3 files changed, 8 insertions(+), 1 deletion(-)
16
9
17
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
18
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
19
--- a/include/sysemu/hvf_int.h
12
--- a/hw/timer/imx_epit.c
20
+++ b/include/sysemu/hvf_int.h
13
+++ b/hw/timer/imx_epit.c
21
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
22
};
15
/*
23
16
* This is called both on hardware (device) reset and software reset.
24
void assert_hvf_ok(hv_return_t ret);
17
*/
25
+int hvf_arch_init(void);
18
-static void imx_epit_reset(DeviceState *dev)
26
int hvf_arch_init_vcpu(CPUState *cpu);
19
+static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
27
void hvf_arch_vcpu_destroy(CPUState *cpu);
20
{
28
int hvf_vcpu_exec(CPUState *);
21
- IMXEPITState *s = IMX_EPIT(dev);
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
22
-
30
index XXXXXXX..XXXXXXX 100644
23
/* Soft reset doesn't touch some bits; hard reset clears them */
31
--- a/accel/hvf/hvf-accel-ops.c
24
- s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
32
+++ b/accel/hvf/hvf-accel-ops.c
25
+ if (is_hard_reset) {
33
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
26
+ s->cr = 0;
34
27
+ } else {
35
hvf_state = s;
28
+ s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
36
memory_listener_register(&hvf_memory_listener, &address_space_memory);
29
+ }
37
- return 0;
30
s->sr = 0;
38
+
31
s->lr = EPIT_TIMER_MAX;
39
+ return hvf_arch_init();
32
s->cmp = 0;
33
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
34
s->cr = value & 0x03ffffff;
35
if (s->cr & CR_SWR) {
36
/* handle the reset */
37
- imx_epit_reset(DEVICE(s));
38
+ imx_epit_reset(s, false);
39
}
40
41
/*
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
43
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
40
}
44
}
41
45
42
static void hvf_accel_class_init(ObjectClass *oc, void *data)
46
+static void imx_epit_dev_reset(DeviceState *dev)
43
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/i386/hvf/hvf.c
46
+++ b/target/i386/hvf/hvf.c
47
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
48
return env->apic_bus_freq != 0;
49
}
50
51
+int hvf_arch_init(void)
52
+{
47
+{
53
+ return 0;
48
+ IMXEPITState *s = IMX_EPIT(dev);
49
+ imx_epit_reset(s, true);
54
+}
50
+}
55
+
51
+
56
int hvf_arch_init_vcpu(CPUState *cpu)
52
static void imx_epit_class_init(ObjectClass *klass, void *data)
57
{
53
{
58
X86CPU *x86cpu = X86_CPU(cpu);
54
DeviceClass *dc = DEVICE_CLASS(klass);
55
56
dc->realize = imx_epit_realize;
57
- dc->reset = imx_epit_reset;
58
+ dc->reset = imx_epit_dev_reset;
59
dc->vmsd = &vmstate_imx_timer_epit;
60
dc->desc = "i.MX periodic timer";
61
}
59
--
62
--
60
2.20.1
63
2.25.1
61
62
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
We can expose cycle counters on the PMU easily. To be as compatible as
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
possible, let's do so, but make sure we don't expose any other architectural
5
counters that we cannot model yet.
6
7
This allows OSes that require PMU support to work.
8
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-10-agraf@csgraf.de
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
6
---
14
target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
7
hw/timer/imx_epit.c | 215 ++++++++++++++++++++++++--------------------
15
1 file changed, 179 insertions(+)
8
1 file changed, 117 insertions(+), 98 deletions(-)
16
9
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
18
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/hvf/hvf.c
12
--- a/hw/timer/imx_epit.c
20
+++ b/target/arm/hvf/hvf.c
13
+++ b/hw/timer/imx_epit.c
21
@@ -XXX,XX +XXX,XX @@
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
22
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
15
}
23
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
24
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
25
+#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
26
+#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
27
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
28
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
29
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
30
+#define SYSREG_PMOVSCLR_EL0 SYSREG(3, 3, 9, 12, 3)
31
+#define SYSREG_PMSWINC_EL0 SYSREG(3, 3, 9, 12, 4)
32
+#define SYSREG_PMSELR_EL0 SYSREG(3, 3, 9, 12, 5)
33
+#define SYSREG_PMCEID0_EL0 SYSREG(3, 3, 9, 12, 6)
34
+#define SYSREG_PMCEID1_EL0 SYSREG(3, 3, 9, 12, 7)
35
+#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
36
+#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
37
38
#define WFX_IS_WFE (1 << 0)
39
40
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
41
val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
42
gt_cntfrq_period_ns(arm_cpu);
43
break;
44
+ case SYSREG_PMCR_EL0:
45
+ val = env->cp15.c9_pmcr;
46
+ break;
47
+ case SYSREG_PMCCNTR_EL0:
48
+ pmu_op_start(env);
49
+ val = env->cp15.c15_ccnt;
50
+ pmu_op_finish(env);
51
+ break;
52
+ case SYSREG_PMCNTENCLR_EL0:
53
+ val = env->cp15.c9_pmcnten;
54
+ break;
55
+ case SYSREG_PMOVSCLR_EL0:
56
+ val = env->cp15.c9_pmovsr;
57
+ break;
58
+ case SYSREG_PMSELR_EL0:
59
+ val = env->cp15.c9_pmselr;
60
+ break;
61
+ case SYSREG_PMINTENCLR_EL1:
62
+ val = env->cp15.c9_pminten;
63
+ break;
64
+ case SYSREG_PMCCFILTR_EL0:
65
+ val = env->cp15.pmccfiltr_el0;
66
+ break;
67
+ case SYSREG_PMCNTENSET_EL0:
68
+ val = env->cp15.c9_pmcnten;
69
+ break;
70
+ case SYSREG_PMUSERENR_EL0:
71
+ val = env->cp15.c9_pmuserenr;
72
+ break;
73
+ case SYSREG_PMCEID0_EL0:
74
+ case SYSREG_PMCEID1_EL0:
75
+ /* We can't really count anything yet, declare all events invalid */
76
+ val = 0;
77
+ break;
78
case SYSREG_OSLSR_EL1:
79
val = env->cp15.oslsr_el1;
80
break;
81
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
82
return 0;
83
}
16
}
84
17
85
+static void pmu_update_irq(CPUARMState *env)
18
+static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
86
+{
19
+{
87
+ ARMCPU *cpu = env_archcpu(env);
20
+ uint32_t oldcr = s->cr;
88
+ qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
21
+
89
+ (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
22
+ s->cr = value & 0x03ffffff;
90
+}
23
+
91
+
24
+ if (s->cr & CR_SWR) {
92
+static bool pmu_event_supported(uint16_t number)
25
+ /* handle the reset */
93
+{
26
+ imx_epit_reset(s, false);
94
+ return false;
27
+ }
95
+}
28
+
96
+
29
+ /*
97
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
30
+ * The interrupt state can change due to:
98
+ * the current EL, security state, and register configuration.
31
+ * - reset clears both SR.OCIF and CR.OCIE
99
+ */
32
+ * - write to CR.EN or CR.OCIE
100
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
33
+ */
101
+{
34
+ imx_epit_update_int(s);
102
+ uint64_t filter;
35
+
103
+ bool enabled, filtered = true;
36
+ /*
104
+ int el = arm_current_el(env);
37
+ * TODO: could we 'break' here for reset? following operations appear
105
+
38
+ * to duplicate the work imx_epit_reset() already did.
106
+ enabled = (env->cp15.c9_pmcr & PMCRE) &&
39
+ */
107
+ (env->cp15.c9_pmcnten & (1 << counter));
40
+
108
+
41
+ ptimer_transaction_begin(s->timer_cmp);
109
+ if (counter == 31) {
42
+ ptimer_transaction_begin(s->timer_reload);
110
+ filter = env->cp15.pmccfiltr_el0;
43
+
111
+ } else {
44
+ /* Update the frequency. Has been done already in case of a reset. */
112
+ filter = env->cp15.c14_pmevtyper[counter];
45
+ if (!(s->cr & CR_SWR)) {
113
+ }
46
+ imx_epit_set_freq(s);
114
+
47
+ }
115
+ if (el == 0) {
48
+
116
+ filtered = filter & PMXEVTYPER_U;
49
+ if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
117
+ } else if (el == 1) {
50
+ if (s->cr & CR_ENMOD) {
118
+ filtered = filter & PMXEVTYPER_P;
51
+ if (s->cr & CR_RLD) {
119
+ }
52
+ ptimer_set_limit(s->timer_reload, s->lr, 1);
120
+
53
+ ptimer_set_limit(s->timer_cmp, s->lr, 1);
121
+ if (counter != 31) {
54
+ } else {
122
+ /*
55
+ ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
123
+ * If not checking PMCCNTR, ensure the counter is setup to an event we
56
+ ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
124
+ * support
125
+ */
126
+ uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
127
+ if (!pmu_event_supported(event)) {
128
+ return false;
129
+ }
130
+ }
131
+
132
+ return enabled && !filtered;
133
+}
134
+
135
+static void pmswinc_write(CPUARMState *env, uint64_t value)
136
+{
137
+ unsigned int i;
138
+ for (i = 0; i < pmu_num_counters(env); i++) {
139
+ /* Increment a counter's count iff: */
140
+ if ((value & (1 << i)) && /* counter's bit is set */
141
+ /* counter is enabled and not filtered */
142
+ pmu_counter_enabled(env, i) &&
143
+ /* counter is SW_INCR */
144
+ (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
145
+ /*
146
+ * Detect if this write causes an overflow since we can't predict
147
+ * PMSWINC overflows like we can for other events
148
+ */
149
+ uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
150
+
151
+ if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
152
+ env->cp15.c9_pmovsr |= (1 << i);
153
+ pmu_update_irq(env);
154
+ }
155
+
156
+ env->cp15.c14_pmevcntr[i] = new_pmswinc;
157
+ }
158
+ }
159
+}
160
+
161
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
162
{
163
ARMCPU *arm_cpu = ARM_CPU(cpu);
164
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
165
val);
166
167
switch (reg) {
168
+ case SYSREG_PMCCNTR_EL0:
169
+ pmu_op_start(env);
170
+ env->cp15.c15_ccnt = val;
171
+ pmu_op_finish(env);
172
+ break;
173
+ case SYSREG_PMCR_EL0:
174
+ pmu_op_start(env);
175
+
176
+ if (val & PMCRC) {
177
+ /* The counter has been reset */
178
+ env->cp15.c15_ccnt = 0;
179
+ }
180
+
181
+ if (val & PMCRP) {
182
+ unsigned int i;
183
+ for (i = 0; i < pmu_num_counters(env); i++) {
184
+ env->cp15.c14_pmevcntr[i] = 0;
185
+ }
57
+ }
186
+ }
58
+ }
187
+
59
+
188
+ env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
60
+ imx_epit_reload_compare_timer(s);
189
+ env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
61
+ ptimer_run(s->timer_reload, 0);
190
+
62
+ if (s->cr & CR_OCIEN) {
191
+ pmu_op_finish(env);
63
+ ptimer_run(s->timer_cmp, 0);
192
+ break;
64
+ } else {
193
+ case SYSREG_PMUSERENR_EL0:
65
+ ptimer_stop(s->timer_cmp);
194
+ env->cp15.c9_pmuserenr = val & 0xf;
66
+ }
195
+ break;
67
+ } else if (!(s->cr & CR_EN)) {
196
+ case SYSREG_PMCNTENSET_EL0:
68
+ /* stop both timers */
197
+ env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
69
+ ptimer_stop(s->timer_reload);
198
+ break;
70
+ ptimer_stop(s->timer_cmp);
199
+ case SYSREG_PMCNTENCLR_EL0:
71
+ } else if (s->cr & CR_OCIEN) {
200
+ env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
72
+ if (!(oldcr & CR_OCIEN)) {
201
+ break;
73
+ imx_epit_reload_compare_timer(s);
202
+ case SYSREG_PMINTENCLR_EL1:
74
+ ptimer_run(s->timer_cmp, 0);
203
+ pmu_op_start(env);
75
+ }
204
+ env->cp15.c9_pminten |= val;
76
+ } else {
205
+ pmu_op_finish(env);
77
+ ptimer_stop(s->timer_cmp);
206
+ break;
78
+ }
207
+ case SYSREG_PMOVSCLR_EL0:
79
+
208
+ pmu_op_start(env);
80
+ ptimer_transaction_commit(s->timer_cmp);
209
+ env->cp15.c9_pmovsr &= ~val;
81
+ ptimer_transaction_commit(s->timer_reload);
210
+ pmu_op_finish(env);
82
+}
211
+ break;
83
+
212
+ case SYSREG_PMSWINC_EL0:
84
+static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
213
+ pmu_op_start(env);
85
+{
214
+ pmswinc_write(env, val);
86
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
215
+ pmu_op_finish(env);
87
+ if (value & SR_OCIF) {
216
+ break;
88
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
217
+ case SYSREG_PMSELR_EL0:
89
+ imx_epit_update_int(s);
218
+ env->cp15.c9_pmselr = val & 0x1f;
90
+ }
219
+ break;
91
+}
220
+ case SYSREG_PMCCFILTR_EL0:
92
+
221
+ pmu_op_start(env);
93
+static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
222
+ env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
94
+{
223
+ pmu_op_finish(env);
95
+ s->lr = value;
224
+ break;
96
+
225
case SYSREG_OSLAR_EL1:
97
+ ptimer_transaction_begin(s->timer_cmp);
226
env->cp15.oslsr_el1 = val & 1;
98
+ ptimer_transaction_begin(s->timer_reload);
227
break;
99
+ if (s->cr & CR_RLD) {
100
+ /* Also set the limit if the RLD bit is set */
101
+ /* If IOVW bit is set then set the timer value */
102
+ ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
103
+ ptimer_set_limit(s->timer_cmp, s->lr, 0);
104
+ } else if (s->cr & CR_IOVW) {
105
+ /* If IOVW bit is set then set the timer value */
106
+ ptimer_set_count(s->timer_reload, s->lr);
107
+ }
108
+ /*
109
+ * Commit the change to s->timer_reload, so it can propagate. Otherwise
110
+ * the timer interrupt may not fire properly. The commit must happen
111
+ * before calling imx_epit_reload_compare_timer(), which reads
112
+ * s->timer_reload internally again.
113
+ */
114
+ ptimer_transaction_commit(s->timer_reload);
115
+ imx_epit_reload_compare_timer(s);
116
+ ptimer_transaction_commit(s->timer_cmp);
117
+}
118
+
119
+static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
120
+{
121
+ s->cmp = value;
122
+
123
+ ptimer_transaction_begin(s->timer_cmp);
124
+ imx_epit_reload_compare_timer(s);
125
+ ptimer_transaction_commit(s->timer_cmp);
126
+}
127
+
128
static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
129
unsigned size)
130
{
131
IMXEPITState *s = IMX_EPIT(opaque);
132
- uint64_t oldcr;
133
134
DPRINTF("(%s, value = 0x%08x)\n", imx_epit_reg_name(offset >> 2),
135
(uint32_t)value);
136
137
switch (offset >> 2) {
138
case 0: /* CR */
139
-
140
- oldcr = s->cr;
141
- s->cr = value & 0x03ffffff;
142
- if (s->cr & CR_SWR) {
143
- /* handle the reset */
144
- imx_epit_reset(s, false);
145
- }
146
-
147
- /*
148
- * The interrupt state can change due to:
149
- * - reset clears both SR.OCIF and CR.OCIE
150
- * - write to CR.EN or CR.OCIE
151
- */
152
- imx_epit_update_int(s);
153
-
154
- /*
155
- * TODO: could we 'break' here for reset? following operations appear
156
- * to duplicate the work imx_epit_reset() already did.
157
- */
158
-
159
- ptimer_transaction_begin(s->timer_cmp);
160
- ptimer_transaction_begin(s->timer_reload);
161
-
162
- /* Update the frequency. Has been done already in case of a reset. */
163
- if (!(s->cr & CR_SWR)) {
164
- imx_epit_set_freq(s);
165
- }
166
-
167
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
168
- if (s->cr & CR_ENMOD) {
169
- if (s->cr & CR_RLD) {
170
- ptimer_set_limit(s->timer_reload, s->lr, 1);
171
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
172
- } else {
173
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
174
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
175
- }
176
- }
177
-
178
- imx_epit_reload_compare_timer(s);
179
- ptimer_run(s->timer_reload, 0);
180
- if (s->cr & CR_OCIEN) {
181
- ptimer_run(s->timer_cmp, 0);
182
- } else {
183
- ptimer_stop(s->timer_cmp);
184
- }
185
- } else if (!(s->cr & CR_EN)) {
186
- /* stop both timers */
187
- ptimer_stop(s->timer_reload);
188
- ptimer_stop(s->timer_cmp);
189
- } else if (s->cr & CR_OCIEN) {
190
- if (!(oldcr & CR_OCIEN)) {
191
- imx_epit_reload_compare_timer(s);
192
- ptimer_run(s->timer_cmp, 0);
193
- }
194
- } else {
195
- ptimer_stop(s->timer_cmp);
196
- }
197
-
198
- ptimer_transaction_commit(s->timer_cmp);
199
- ptimer_transaction_commit(s->timer_reload);
200
+ imx_epit_write_cr(s, (uint32_t)value);
201
break;
202
203
- case 1: /* SR - ACK*/
204
- /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
205
- if (value & SR_OCIF) {
206
- s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
207
- imx_epit_update_int(s);
208
- }
209
+ case 1: /* SR */
210
+ imx_epit_write_sr(s, (uint32_t)value);
211
break;
212
213
- case 2: /* LR - set ticks */
214
- s->lr = value;
215
-
216
- ptimer_transaction_begin(s->timer_cmp);
217
- ptimer_transaction_begin(s->timer_reload);
218
- if (s->cr & CR_RLD) {
219
- /* Also set the limit if the LRD bit is set */
220
- /* If IOVW bit is set then set the timer value */
221
- ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
222
- ptimer_set_limit(s->timer_cmp, s->lr, 0);
223
- } else if (s->cr & CR_IOVW) {
224
- /* If IOVW bit is set then set the timer value */
225
- ptimer_set_count(s->timer_reload, s->lr);
226
- }
227
- /*
228
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
229
- * the timer interrupt may not fire properly. The commit must happen
230
- * before calling imx_epit_reload_compare_timer(), which reads
231
- * s->timer_reload internally again.
232
- */
233
- ptimer_transaction_commit(s->timer_reload);
234
- imx_epit_reload_compare_timer(s);
235
- ptimer_transaction_commit(s->timer_cmp);
236
+ case 2: /* LR */
237
+ imx_epit_write_lr(s, (uint32_t)value);
238
break;
239
240
case 3: /* CMP */
241
- s->cmp = value;
242
-
243
- ptimer_transaction_begin(s->timer_cmp);
244
- imx_epit_reload_compare_timer(s);
245
- ptimer_transaction_commit(s->timer_cmp);
246
-
247
+ imx_epit_write_cmp(s, (uint32_t)value);
248
break;
249
250
default:
251
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
252
HWADDR_PRIx "\n", TYPE_IMX_EPIT, __func__, offset);
253
-
254
break;
255
}
256
}
257
+
258
static void imx_epit_cmp(void *opaque)
259
{
260
IMXEPITState *s = IMX_EPIT(opaque);
228
--
261
--
229
2.20.1
262
2.25.1
230
231
diff view generated by jsdifflib
1
From: Peter Collingbourne <pcc@google.com>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Sleep on WFI until the VTIMER is due but allow ourselves to be woken
3
The CNT register is a read-only register. There is no need to
4
up on IPI.
4
store its value; it can be calculated on demand.
5
The calculated frequency is only needed temporarily.
5
6
6
In this implementation IPI is blocked on the CPU thread at startup and
7
Note that this is a migration compatibility break for all board
7
pselect() is used to atomically unblock the signal and begin sleeping.
8
types that use the EPIT peripheral.
8
The signal is sent unconditionally so there's no need to worry about
9
races between actually sleeping and the "we think we're sleeping"
10
state. It may lead to an extra wakeup but that's better than missing
11
it entirely.
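As an aside, here is a minimal self-contained sketch of the block-the-signal,
then-pselect() pattern described above. The signal number, handler and mask
names are illustrative placeholders, not the QEMU ones:

    #include <pthread.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/select.h>
    #include <time.h>

    #define EXAMPLE_SIG_IPI SIGUSR1          /* placeholder "IPI" signal */

    static sigset_t unblock_ipi_mask;        /* current mask minus the IPI */

    static void ipi_handler(int sig)
    {
        (void)sig;  /* nothing to do; interrupting pselect() is the point */
    }

    /* Run once on the vCPU thread: keep the IPI blocked except while sleeping. */
    static void ipi_setup(void)
    {
        struct sigaction sa;
        sigset_t block;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = ipi_handler;
        sigaction(EXAMPLE_SIG_IPI, &sa, NULL);

        sigemptyset(&block);
        sigaddset(&block, EXAMPLE_SIG_IPI);
        pthread_sigmask(SIG_BLOCK, &block, NULL);

        /* Mask to hand to pselect(): everything currently blocked minus the IPI. */
        pthread_sigmask(SIG_BLOCK, NULL, &unblock_ipi_mask);
        sigdelset(&unblock_ipi_mask, EXAMPLE_SIG_IPI);
    }

    /* Sleep until *ts elapses or an IPI arrives; ts == NULL waits indefinitely. */
    static void wait_for_ipi(const struct timespec *ts)
    {
        /* Atomically unblock the IPI and start waiting. */
        pselect(0, NULL, NULL, NULL, ts, &unblock_ipi_mask);
    }

The key point is that pselect() swaps in the mask and starts waiting in a
single atomic step, so a kick sent at any moment either interrupts the wait
or is seen as soon as the wait begins.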
12
9
13
Signed-off-by: Peter Collingbourne <pcc@google.com>
10
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
17
Message-id: 20210916155404.86958-6-agraf@csgraf.de
18
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
19
support vm stop / continue operations and cntv offsets]
20
Signed-off-by: Alexander Graf <agraf@csgraf.de>
21
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
22
Reviewed-by: Sergio Lopez <slp@redhat.com>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
13
---
25
include/sysemu/hvf_int.h | 1 +
14
include/hw/timer/imx_epit.h | 2 -
26
accel/hvf/hvf-accel-ops.c | 5 +--
15
hw/timer/imx_epit.c | 73 ++++++++++++++-----------------------
27
target/arm/hvf/hvf.c | 79 +++++++++++++++++++++++++++++++++++++++
16
2 files changed, 28 insertions(+), 47 deletions(-)
28
3 files changed, 82 insertions(+), 3 deletions(-)
29
17
30
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
18
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
31
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
32
--- a/include/sysemu/hvf_int.h
20
--- a/include/hw/timer/imx_epit.h
33
+++ b/include/sysemu/hvf_int.h
21
+++ b/include/hw/timer/imx_epit.h
34
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
22
@@ -XXX,XX +XXX,XX @@ struct IMXEPITState {
35
uint64_t fd;
23
uint32_t sr;
36
void *exit;
24
uint32_t lr;
37
bool vtimer_masked;
25
uint32_t cmp;
38
+ sigset_t unblock_ipi_mask;
26
- uint32_t cnt;
27
28
- uint32_t freq;
29
qemu_irq irq;
39
};
30
};
40
31
41
void assert_hvf_ok(hv_return_t ret);
32
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
42
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
43
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
44
--- a/accel/hvf/hvf-accel-ops.c
34
--- a/hw/timer/imx_epit.c
45
+++ b/accel/hvf/hvf-accel-ops.c
35
+++ b/hw/timer/imx_epit.c
46
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_update_int(IMXEPITState *s)
47
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
37
}
48
38
}
49
/* init cpu signals */
39
50
- sigset_t set;
40
-/*
51
struct sigaction sigact;
41
- * Must be called from within a ptimer_transaction_begin/commit block
52
42
- * for both s->timer_cmp and s->timer_reload.
53
memset(&sigact, 0, sizeof(sigact));
43
- */
54
sigact.sa_handler = dummy_signal;
44
-static void imx_epit_set_freq(IMXEPITState *s)
55
sigaction(SIG_IPI, &sigact, NULL);
45
+static uint32_t imx_epit_get_freq(IMXEPITState *s)
56
57
- pthread_sigmask(SIG_BLOCK, NULL, &set);
58
- sigdelset(&set, SIG_IPI);
59
+ pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
60
+ sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
61
62
#ifdef __aarch64__
63
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
64
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/hvf/hvf.c
67
+++ b/target/arm/hvf/hvf.c
68
@@ -XXX,XX +XXX,XX @@
69
* QEMU Hypervisor.framework support for Apple Silicon
70
71
* Copyright 2020 Alexander Graf <agraf@csgraf.de>
72
+ * Copyright 2020 Google LLC
73
*
74
* This work is licensed under the terms of the GNU GPL, version 2 or later.
75
* See the COPYING file in the top-level directory.
76
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
77
78
void hvf_kick_vcpu_thread(CPUState *cpu)
79
{
46
{
80
+ cpus_kick_thread(cpu);
47
- uint32_t clksrc;
81
hv_vcpus_exit(&cpu->hvf->fd, 1);
48
- uint32_t prescaler;
49
-
50
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
51
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
52
-
53
- s->freq = imx_ccm_get_clock_frequency(s->ccm,
54
- imx_epit_clocks[clksrc]) / prescaler;
55
-
56
- DPRINTF("Setting ptimer frequency to %u\n", s->freq);
57
-
58
- if (s->freq) {
59
- ptimer_set_freq(s->timer_reload, s->freq);
60
- ptimer_set_freq(s->timer_cmp, s->freq);
61
- }
62
+ uint32_t clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
63
+ uint32_t prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
64
+ uint32_t f_in = imx_ccm_get_clock_frequency(s->ccm, imx_epit_clocks[clksrc]);
65
+ uint32_t freq = f_in / prescaler;
66
+ DPRINTF("ptimer frequency is %u\n", freq);
67
+ return freq;
82
}
68
}
83
69
84
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_vtimer_val_raw(void)
70
/*
85
return mach_absolute_time() - hvf_state->vtimer_offset;
71
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
86
}
72
s->sr = 0;
87
73
s->lr = EPIT_TIMER_MAX;
88
+static uint64_t hvf_vtimer_val(void)
74
s->cmp = 0;
89
+{
75
- s->cnt = 0;
90
+ if (!runstate_is_running()) {
76
ptimer_transaction_begin(s->timer_cmp);
91
+ /* VM is paused, the vtimer value is in vtimer.vtimer_val */
77
ptimer_transaction_begin(s->timer_reload);
92
+ return vtimer.vtimer_val;
78
- /* stop both timers */
93
+ }
94
+
95
+ return hvf_vtimer_val_raw();
96
+}
97
+
98
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
99
+{
100
+ /*
101
+ * Use pselect to sleep so that other threads can IPI us while we're
102
+ * sleeping.
103
+ */
104
+ qatomic_mb_set(&cpu->thread_kicked, false);
105
+ qemu_mutex_unlock_iothread();
106
+ pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
107
+ qemu_mutex_lock_iothread();
108
+}
109
+
110
+static void hvf_wfi(CPUState *cpu)
111
+{
112
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
113
+ struct timespec ts;
114
+ hv_return_t r;
115
+ uint64_t ctl;
116
+ uint64_t cval;
117
+ int64_t ticks_to_sleep;
118
+ uint64_t seconds;
119
+ uint64_t nanos;
120
+ uint32_t cntfrq;
121
+
122
+ if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
123
+ /* Interrupt pending, no need to wait */
124
+ return;
125
+ }
126
+
127
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
128
+ assert_hvf_ok(r);
129
+
130
+ if (!(ctl & 1) || (ctl & 2)) {
131
+ /* Timer disabled or masked, just wait for an IPI. */
132
+ hvf_wait_for_ipi(cpu, NULL);
133
+ return;
134
+ }
135
+
136
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
137
+ assert_hvf_ok(r);
138
+
139
+ ticks_to_sleep = cval - hvf_vtimer_val();
140
+ if (ticks_to_sleep < 0) {
141
+ return;
142
+ }
143
+
144
+ cntfrq = gt_cntfrq_period_ns(arm_cpu);
145
+ seconds = muldiv64(ticks_to_sleep, cntfrq, NANOSECONDS_PER_SECOND);
146
+ ticks_to_sleep -= muldiv64(seconds, NANOSECONDS_PER_SECOND, cntfrq);
147
+ nanos = ticks_to_sleep * cntfrq;
148
+
79
+
149
+ /*
80
+ /*
150
+ * Don't sleep for less than the time a context switch would take,
81
+ * The reset switches off the input clock, so even if the CR.EN is still
151
+ * so that we can satisfy fast timer requests on the same CPU.
82
+ * set, the timers are no longer running.
152
+ * Measurements on M1 show the sweet spot to be ~2ms.
153
+ */
83
+ */
154
+ if (!seconds && nanos < (2 * SCALE_MS)) {
84
+ assert(imx_epit_get_freq(s) == 0);
155
+ return;
85
ptimer_stop(s->timer_cmp);
156
+ }
86
ptimer_stop(s->timer_reload);
157
+
87
- /* compute new frequency */
158
+ ts = (struct timespec) { seconds, nanos };
88
- imx_epit_set_freq(s);
159
+ hvf_wait_for_ipi(cpu, &ts);
89
/* init both timers to EPIT_TIMER_MAX */
160
+}
90
ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
161
+
91
ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
162
static void hvf_sync_vtimer(CPUState *cpu)
92
- if (s->freq && (s->cr & CR_EN)) {
93
- /* if the timer is still enabled, restart it */
94
- ptimer_run(s->timer_reload, 0);
95
- }
96
ptimer_transaction_commit(s->timer_cmp);
97
ptimer_transaction_commit(s->timer_reload);
98
}
99
100
-static uint32_t imx_epit_update_count(IMXEPITState *s)
101
-{
102
- s->cnt = ptimer_get_count(s->timer_reload);
103
-
104
- return s->cnt;
105
-}
106
-
107
static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
163
{
108
{
164
ARMCPU *arm_cpu = ARM_CPU(cpu);
109
IMXEPITState *s = IMX_EPIT(opaque);
165
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
110
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
111
break;
112
113
case 4: /* CNT */
114
- imx_epit_update_count(s);
115
- reg_value = s->cnt;
116
+ reg_value = ptimer_get_count(s->timer_reload);
117
break;
118
119
default:
120
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
121
{
122
if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
123
/* if the compare feature is on and timers are running */
124
- uint32_t tmp = imx_epit_update_count(s);
125
+ uint32_t tmp = ptimer_get_count(s->timer_reload);
126
uint64_t next;
127
if (tmp > s->cmp) {
128
/* It'll fire in this round of the timer */
129
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
130
131
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
132
{
133
+ uint32_t freq = 0;
134
uint32_t oldcr = s->cr;
135
136
s->cr = value & 0x03ffffff;
137
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
138
ptimer_transaction_begin(s->timer_cmp);
139
ptimer_transaction_begin(s->timer_reload);
140
141
- /* Update the frequency. Has been done already in case of a reset. */
142
+ /*
143
+ * Update the frequency. In case of a reset the input clock was
144
+ * switched off, so this can be skipped.
145
+ */
146
if (!(s->cr & CR_SWR)) {
147
- imx_epit_set_freq(s);
148
+ freq = imx_epit_get_freq(s);
149
+ if (freq) {
150
+ ptimer_set_freq(s->timer_reload, freq);
151
+ ptimer_set_freq(s->timer_cmp, freq);
152
+ }
166
}
153
}
167
case EC_WFX_TRAP:
154
168
advance_pc = true;
155
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
169
+ if (!(syndrome & WFX_IS_WFE)) {
156
+ if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
170
+ hvf_wfi(cpu);
157
if (s->cr & CR_ENMOD) {
171
+ }
158
if (s->cr & CR_RLD) {
172
break;
159
ptimer_set_limit(s->timer_reload, s->lr, 1);
173
case EC_AA64_HVC:
160
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps imx_epit_ops = {
174
cpu_synchronize_state(cpu);
161
162
static const VMStateDescription vmstate_imx_timer_epit = {
163
.name = TYPE_IMX_EPIT,
164
- .version_id = 2,
165
- .minimum_version_id = 2,
166
+ .version_id = 3,
167
+ .minimum_version_id = 3,
168
.fields = (VMStateField[]) {
169
VMSTATE_UINT32(cr, IMXEPITState),
170
VMSTATE_UINT32(sr, IMXEPITState),
171
VMSTATE_UINT32(lr, IMXEPITState),
172
VMSTATE_UINT32(cmp, IMXEPITState),
173
- VMSTATE_UINT32(cnt, IMXEPITState),
174
- VMSTATE_UINT32(freq, IMXEPITState),
175
VMSTATE_PTIMER(timer_reload, IMXEPITState),
176
VMSTATE_PTIMER(timer_cmp, IMXEPITState),
177
VMSTATE_END_OF_LIST()
175
--
178
--
176
2.20.1
179
2.25.1
177
178
diff view generated by jsdifflib
1
Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so
1
From: Axel Heider <axel.heider@hensoldt.net>
2
it can be merged with another earlier one.
3
2
3
- fix #1263 for CR writes
4
- rework compare timer handling
5
- The compare timer has to run even if CR.OCIEN is not set,
6
as SR.OCIF must be updated.
7
- The compare timer fires exactly once when the
8
compare value is less than the current value, but the
9
reload value is less than the compare value.
10
- The compare timer will never fire if the reload value is
11
less than the compare value. Disable it in this case (see the sketch below).
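For illustration only, a hedged sketch of the firing rules listed above for a
down-counting timer. The helper name and types are made up; this is not the
patch's code:

    #include <stdint.h>

    /*
     * Ticks until the output-compare event of a down-counting timer fires,
     * given the current counter value, the compare value and the reload
     * limit; returns -1 if the event can never fire.
     */
    static int64_t ticks_to_compare_event(uint64_t counter, uint64_t cmp,
                                          uint64_t limit)
    {
        if (counter >= cmp) {
            /* Counter is still above the compare value: fires this round. */
            return counter - cmp;
        }
        if (limit >= cmp) {
            /* Fires in the next round, after reloading from 'limit'. */
            return counter + (limit - cmp);
        }
        /* Reload value below the compare value: the event never happens. */
        return -1;
    }

With counter below cmp and limit below cmp this returns -1, matching the last
bullet; with counter at or above cmp but limit below cmp the event fires
exactly once, matching the second bullet.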
12
13
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
14
[PMM: fixed minor style nits]
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
7
---
17
---
8
target/arm/cpu.c | 22 ++++++++++------------
18
hw/timer/imx_epit.c | 192 ++++++++++++++++++++++++++------------------
9
1 file changed, 10 insertions(+), 12 deletions(-)
19
1 file changed, 116 insertions(+), 76 deletions(-)
10
20
11
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
12
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/cpu.c
23
--- a/hw/timer/imx_epit.c
14
+++ b/target/arm/cpu.c
24
+++ b/hw/timer/imx_epit.c
15
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
25
@@ -XXX,XX +XXX,XX @@
16
env->uncached_cpsr = ARM_CPU_MODE_SVC;
26
* Originally written by Hans Jiang
27
* Updated by Peter Chubb
28
* Updated by Jean-Christophe Dubois <jcd@tribudubois.net>
29
+ * Updated by Axel Heider
30
*
31
* This code is licensed under GPL version 2 or later. See
32
* the COPYING file in the top-level directory.
33
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
34
return reg_value;
35
}
36
37
-/* Must be called from ptimer_transaction_begin/commit block for s->timer_cmp */
38
-static void imx_epit_reload_compare_timer(IMXEPITState *s)
39
+/*
40
+ * Must be called from a ptimer_transaction_begin/commit block for
41
+ * s->timer_cmp, but outside of a transaction block of s->timer_reload,
42
+ * so the proper counter value is read.
43
+ */
44
+static void imx_epit_update_compare_timer(IMXEPITState *s)
45
{
46
- if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
47
- /* if the compare feature is on and timers are running */
48
- uint32_t tmp = ptimer_get_count(s->timer_reload);
49
- uint64_t next;
50
- if (tmp > s->cmp) {
51
- /* It'll fire in this round of the timer */
52
- next = tmp - s->cmp;
53
- } else { /* catch it next time around */
54
- next = tmp - s->cmp + ((s->cr & CR_RLD) ? EPIT_TIMER_MAX : s->lr);
55
+ uint64_t counter = 0;
56
+ bool is_oneshot = false;
57
+ /*
58
+ * The compare timer only has to run if the timer peripheral is active
59
+ * and there is an input clock. Otherwise it can be switched off.
60
+ */
61
+ bool is_active = (s->cr & CR_EN) && imx_epit_get_freq(s);
62
+ if (is_active) {
63
+ /*
64
+ * Calculate next timeout for compare timer. Reading the reload
65
+ * counter returns proper results only if pending transactions
66
+ * on it are committed here. Otherwise stale values would be read.
67
+ */
68
+ counter = ptimer_get_count(s->timer_reload);
69
+ uint64_t limit = ptimer_get_limit(s->timer_cmp);
70
+ /*
71
+ * The compare timer is a periodic timer if the limit is at least
72
+ * the compare value. Otherwise it may fire at most once in the
73
+ * current round.
74
+ */
75
+ is_oneshot = (limit < s->cmp);
76
+ if (counter >= s->cmp) {
77
+ /* The compare timer fires in the current round. */
78
+ counter -= s->cmp;
79
+ } else if (!is_oneshot) {
80
+ /*
81
+ * The compare timer fires after a reload, as it is below the
82
+ * compare value already in this round. Note that the counter
83
+ * value calculated below can be above the 32-bit limit, which
84
+ * is legal here because the compare timer is an internal
85
+ * helper ptimer only.
86
+ */
87
+ counter += limit - s->cmp;
88
+ } else {
89
+ /*
90
+ * The compare timer won't fire in this round, and the limit is
91
+ * set to a value below the compare value. This practically means
92
+ * it will never fire, so it can be switched off.
93
+ */
94
+ is_active = false;
95
}
96
- ptimer_set_count(s->timer_cmp, next);
17
}
97
}
18
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
98
+
19
+
99
+ /*
20
+ /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
100
+ * Set the compare timer and let it run, or stop it. This is agnostic
21
+ * executing as AArch32 then check if highvecs are enabled and
101
+ * of CR.OCIEN bit, as this bit affects interrupt generation only. The
22
+ * adjust the PC accordingly.
102
+ * compare timer needs to run even if no interrupts are to be generated,
103
+ * because the SR.OCIF bit must be updated also.
104
+ * Note that the timer might already be stopped or be running with
105
+ * counter values. However, finding out when an update is needed and
106
+ * when not is not trivial. It's much easier applying the setting again,
107
+ * as this does not harm either and the overhead is negligible.
23
+ */
108
+ */
24
+ if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
109
+ if (is_active) {
25
+ env->regs[15] = 0xFFFF0000;
110
+ ptimer_set_count(s->timer_cmp, counter);
111
+ ptimer_run(s->timer_cmp, is_oneshot ? 1 : 0);
112
+ } else {
113
+ ptimer_stop(s->timer_cmp);
26
+ }
114
+ }
27
+
115
+
28
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
116
}
29
#endif
117
30
118
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
31
if (arm_feature(env, ARM_FEATURE_M)) {
119
{
32
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
120
- uint32_t freq = 0;
33
#endif
121
uint32_t oldcr = s->cr;
122
123
s->cr = value & 0x03ffffff;
124
125
if (s->cr & CR_SWR) {
126
- /* handle the reset */
127
+ /*
128
+ * Reset clears CR.SWR again. It does not touch CR.EN, but the timers
129
+ * are still stopped because the input clock is disabled.
130
+ */
131
imx_epit_reset(s, false);
132
+ } else {
133
+ uint32_t freq;
134
+ uint32_t toggled_cr_bits = oldcr ^ s->cr;
135
+ /* re-initialize the limits if CR.RLD has changed */
136
+ bool set_limit = toggled_cr_bits & CR_RLD;
137
+ /* set the counter if the timer got just enabled and CR.ENMOD is set */
138
+ bool is_switched_on = (toggled_cr_bits & s->cr) & CR_EN;
139
+ bool set_counter = is_switched_on && (s->cr & CR_ENMOD);
140
+
141
+ ptimer_transaction_begin(s->timer_cmp);
142
+ ptimer_transaction_begin(s->timer_reload);
143
+ freq = imx_epit_get_freq(s);
144
+ if (freq) {
145
+ ptimer_set_freq(s->timer_reload, freq);
146
+ ptimer_set_freq(s->timer_cmp, freq);
147
+ }
148
+
149
+ if (set_limit || set_counter) {
150
+ uint64_t limit = (s->cr & CR_RLD) ? s->lr : EPIT_TIMER_MAX;
151
+ ptimer_set_limit(s->timer_reload, limit, set_counter ? 1 : 0);
152
+ if (set_limit) {
153
+ ptimer_set_limit(s->timer_cmp, limit, 0);
154
+ }
155
+ }
156
+ /*
157
+ * If there is an input clock and the peripheral is enabled, then
158
+ * ensure the wall clock timer is ticking. Otherwise stop the timers.
159
+ * The compare timer will be updated later.
160
+ */
161
+ if (freq && (s->cr & CR_EN)) {
162
+ ptimer_run(s->timer_reload, 0);
163
+ } else {
164
+ ptimer_stop(s->timer_reload);
165
+ }
166
+ /* Commit changes to reload timer, so they can propagate. */
167
+ ptimer_transaction_commit(s->timer_reload);
168
+ /* Update compare timer based on the committed reload timer value. */
169
+ imx_epit_update_compare_timer(s);
170
+ ptimer_transaction_commit(s->timer_cmp);
34
}
171
}
35
172
36
-#ifndef CONFIG_USER_ONLY
173
/*
37
- /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
174
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
38
- * executing as AArch32 then check if highvecs are enabled and
175
* - write to CR.EN or CR.OCIE
39
- * adjust the PC accordingly.
176
*/
177
imx_epit_update_int(s);
178
-
179
- /*
180
- * TODO: could we 'break' here for reset? following operations appear
181
- * to duplicate the work imx_epit_reset() already did.
40
- */
182
- */
41
- if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
183
-
42
- env->regs[15] = 0xFFFF0000;
184
- ptimer_transaction_begin(s->timer_cmp);
185
- ptimer_transaction_begin(s->timer_reload);
186
-
187
- /*
188
- * Update the frequency. In case of a reset the input clock was
189
- * switched off, so this can be skipped.
190
- */
191
- if (!(s->cr & CR_SWR)) {
192
- freq = imx_epit_get_freq(s);
193
- if (freq) {
194
- ptimer_set_freq(s->timer_reload, freq);
195
- ptimer_set_freq(s->timer_cmp, freq);
196
- }
43
- }
197
- }
44
-
198
-
45
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
199
- if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
46
-#endif
200
- if (s->cr & CR_ENMOD) {
47
-
201
- if (s->cr & CR_RLD) {
48
/* M profile requires that reset clears the exclusive monitor;
202
- ptimer_set_limit(s->timer_reload, s->lr, 1);
49
* A profile does not, but clearing it makes more sense than having it
203
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
50
* set with an exclusive access on address zero.
204
- } else {
205
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
206
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
207
- }
208
- }
209
-
210
- imx_epit_reload_compare_timer(s);
211
- ptimer_run(s->timer_reload, 0);
212
- if (s->cr & CR_OCIEN) {
213
- ptimer_run(s->timer_cmp, 0);
214
- } else {
215
- ptimer_stop(s->timer_cmp);
216
- }
217
- } else if (!(s->cr & CR_EN)) {
218
- /* stop both timers */
219
- ptimer_stop(s->timer_reload);
220
- ptimer_stop(s->timer_cmp);
221
- } else if (s->cr & CR_OCIEN) {
222
- if (!(oldcr & CR_OCIEN)) {
223
- imx_epit_reload_compare_timer(s);
224
- ptimer_run(s->timer_cmp, 0);
225
- }
226
- } else {
227
- ptimer_stop(s->timer_cmp);
228
- }
229
-
230
- ptimer_transaction_commit(s->timer_cmp);
231
- ptimer_transaction_commit(s->timer_reload);
232
}
233
234
static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
235
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
236
/* If IOVW bit is set then set the timer value */
237
ptimer_set_count(s->timer_reload, s->lr);
238
}
239
- /*
240
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
241
- * the timer interrupt may not fire properly. The commit must happen
242
- * before calling imx_epit_reload_compare_timer(), which reads
243
- * s->timer_reload internally again.
244
- */
245
+ /* Commit the changes to s->timer_reload, so they can propagate. */
246
ptimer_transaction_commit(s->timer_reload);
247
- imx_epit_reload_compare_timer(s);
248
+ /* Update the compare timer based on the committed reload timer value. */
249
+ imx_epit_update_compare_timer(s);
250
ptimer_transaction_commit(s->timer_cmp);
251
}
252
253
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
254
{
255
s->cmp = value;
256
257
+ /* Update the compare timer based on the committed reload timer value. */
258
ptimer_transaction_begin(s->timer_cmp);
259
- imx_epit_reload_compare_timer(s);
260
+ imx_epit_update_compare_timer(s);
261
ptimer_transaction_commit(s->timer_cmp);
262
}
263
264
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
265
{
266
IMXEPITState *s = IMX_EPIT(opaque);
267
268
+ /* The cmp ptimer can't be running when the peripheral is disabled */
269
+ assert(s->cr & CR_EN);
270
+
271
DPRINTF("sr was %d\n", s->sr);
272
/* Set interrupt status bit SR.OCIF and update the interrupt state */
273
s->sr |= SR_OCIF;
51
--
274
--
52
2.20.1
275
2.25.1
53
54
diff view generated by jsdifflib
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
We will need PMC register definitions in accel specific code later.
3
Fix these:
4
Move all constant definitions to common arm headers so we can reuse
5
them.
6
4
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
5
WARNING: Block comments use a leading /* on a separate line
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
WARNING: Block comments use * on subsequent lines
9
Message-id: 20210916155404.86958-2-agraf@csgraf.de
7
WARNING: Block comments use a trailing */ on a separate line
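As a purely illustrative before/after pair for the comment style being
applied (made-up comment text, not taken from helper.c):

    /* Before: comment text follows the opening marker and the closing
       marker shares its line with text, which checkpatch flags. */

    /*
     * After: the opening marker is on its own line, every following line
     * starts with '*', and the closing marker is on its own line.
     */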
8
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Reviewed-by: Claudio Fontana <cfontana@suse.de>
11
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
12
Message-id: 20221213190537.511-2-farosas@suse.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
14
---
12
target/arm/internals.h | 44 ++++++++++++++++++++++++++++++++++++++++++
15
target/arm/helper.c | 323 +++++++++++++++++++++++++++++---------------
13
target/arm/helper.c | 44 ------------------------------------------
16
1 file changed, 215 insertions(+), 108 deletions(-)
14
2 files changed, 44 insertions(+), 44 deletions(-)
15
17
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
19
+++ b/target/arm/internals.h
20
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
21
/* All other values reserved */
22
};
23
24
+/* Definitions for the PMU registers */
25
+#define PMCRN_MASK 0xf800
26
+#define PMCRN_SHIFT 11
27
+#define PMCRLC 0x40
28
+#define PMCRDP 0x20
29
+#define PMCRX 0x10
30
+#define PMCRD 0x8
31
+#define PMCRC 0x4
32
+#define PMCRP 0x2
33
+#define PMCRE 0x1
34
+/*
35
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
36
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
37
+ */
38
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
39
+
40
+#define PMXEVTYPER_P 0x80000000
41
+#define PMXEVTYPER_U 0x40000000
42
+#define PMXEVTYPER_NSK 0x20000000
43
+#define PMXEVTYPER_NSU 0x10000000
44
+#define PMXEVTYPER_NSH 0x08000000
45
+#define PMXEVTYPER_M 0x04000000
46
+#define PMXEVTYPER_MT 0x02000000
47
+#define PMXEVTYPER_EVTCOUNT 0x0000ffff
48
+#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
49
+ PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
50
+ PMXEVTYPER_M | PMXEVTYPER_MT | \
51
+ PMXEVTYPER_EVTCOUNT)
52
+
53
+#define PMCCFILTR 0xf8000000
54
+#define PMCCFILTR_M PMXEVTYPER_M
55
+#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
56
+
57
+static inline uint32_t pmu_num_counters(CPUARMState *env)
58
+{
59
+ return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
60
+}
61
+
62
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
63
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
64
+{
65
+ return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
66
+}
67
+
68
#endif
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
20
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
21
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri)
23
static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
24
uint64_t v)
25
{
26
- /* Raw write of a coprocessor register (as needed for migration, etc).
27
+ /*
28
+ * Raw write of a coprocessor register (as needed for migration, etc).
29
* Note that constant registers are treated as write-ignored; the
30
* caller should check for success by whether a readback gives the
31
* value written.
32
@@ -XXX,XX +XXX,XX @@ static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
33
34
static bool raw_accessors_invalid(const ARMCPRegInfo *ri)
35
{
36
- /* Return true if the regdef would cause an assertion if you called
37
+ /*
38
+ * Return true if the regdef would cause an assertion if you called
39
* read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a
40
* program bug for it not to have the NO_RAW flag).
41
* NB that returning false here doesn't necessarily mean that calling
42
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
43
if (ri->type & ARM_CP_NO_RAW) {
44
continue;
45
}
46
- /* Write value and confirm it reads back as written
47
+ /*
48
+ * Write value and confirm it reads back as written
49
* (to catch read-only registers and partially read-only
50
* registers where the incoming migration value doesn't match)
51
*/
52
@@ -XXX,XX +XXX,XX @@ static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
53
54
void init_cpreg_list(ARMCPU *cpu)
55
{
56
- /* Initialise the cpreg_tuples[] array based on the cp_regs hash.
57
+ /*
58
+ * Initialise the cpreg_tuples[] array based on the cp_regs hash.
59
* Note that we require cpreg_tuples[] to be sorted by key ID.
60
*/
61
GList *keys;
62
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_el3_aa32ns(CPUARMState *env,
63
return CP_ACCESS_OK;
64
}
65
66
-/* Some secure-only AArch32 registers trap to EL3 if used from
67
+/*
68
+ * Some secure-only AArch32 registers trap to EL3 if used from
69
* Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts).
70
* Note that an access from Secure EL1 can only happen if EL3 is AArch64.
71
* We assume that the .access field is set to PL1_RW.
72
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
73
return CP_ACCESS_TRAP_UNCATEGORIZED;
74
}
75
76
-/* Check for traps to performance monitor registers, which are controlled
77
+/*
78
+ * Check for traps to performance monitor registers, which are controlled
79
* by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3.
80
*/
81
static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
82
@@ -XXX,XX +XXX,XX @@ static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
83
ARMCPU *cpu = env_archcpu(env);
84
85
if (raw_read(env, ri) != value) {
86
- /* Unlike real hardware the qemu TLB uses virtual addresses,
87
+ /*
88
+ * Unlike real hardware the qemu TLB uses virtual addresses,
89
* not modified virtual addresses, so this causes a TLB flush.
90
*/
91
tlb_flush(CPU(cpu));
92
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
93
94
if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
95
&& !extended_addresses_enabled(env)) {
96
- /* For VMSA (when not using the LPAE long descriptor page table
97
+ /*
98
+ * For VMSA (when not using the LPAE long descriptor page table
99
* format) this register includes the ASID, so do a TLB flush.
100
* For PMSA it is purely a process ID and no action is needed.
101
*/
102
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
}
104
105
static const ARMCPRegInfo cp_reginfo[] = {
106
- /* Define the secure and non-secure FCSE identifier CP registers
107
+ /*
108
+ * Define the secure and non-secure FCSE identifier CP registers
109
* separately because there is no secure bank in V8 (no _EL3). This allows
110
* the secure register to be properly reset and migrated. There is also no
111
* v8 EL1 version of the register so the non-secure instance stands alone.
112
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
113
.access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
114
.fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s),
115
.resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, },
116
- /* Define the secure and non-secure context identifier CP registers
117
+ /*
118
+ * Define the secure and non-secure context identifier CP registers
119
* separately because there is no secure bank in V8 (no _EL3). This allows
120
* the secure register to be properly reset and migrated. In the
121
* non-secure case, the 32-bit register will have reset and migration
122
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
123
};
124
125
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
126
- /* NB: Some of these registers exist in v8 but with more precise
127
+ /*
128
+ * NB: Some of these registers exist in v8 but with more precise
129
* definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]).
130
*/
131
/* MMU Domain access control / MPU write buffer control */
132
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
133
.writefn = dacr_write, .raw_writefn = raw_write,
134
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s),
135
offsetoflow32(CPUARMState, cp15.dacr_ns) } },
136
- /* ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
137
+ /*
138
+ * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
139
* For v6 and v5, these mappings are overly broad.
140
*/
141
{ .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0,
142
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
143
};
144
145
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
146
- /* Not all pre-v6 cores implemented this WFI, so this is slightly
147
+ /*
148
+ * Not all pre-v6 cores implemented this WFI, so this is slightly
149
* over-broad.
150
*/
151
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
152
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
153
};
154
155
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
156
- /* Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
157
+ /*
158
+ * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
159
* is UNPREDICTABLE; we choose to NOP as most implementations do).
160
*/
161
{ .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
162
.access = PL1_W, .type = ARM_CP_WFI },
163
- /* L1 cache lockdown. Not architectural in v6 and earlier but in practice
164
+ /*
165
+ * L1 cache lockdown. Not architectural in v6 and earlier but in practice
166
* implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and
167
* OMAPCP will override this space.
168
*/
169
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
170
{ .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY,
171
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
172
.resetvalue = 0 },
173
- /* We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
174
+ /*
175
+ * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
176
* implementing it as RAZ means the "debug architecture version" bits
177
* will read as a reserved value, which should cause Linux to not try
178
* to use the debug hardware.
179
*/
180
{ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
181
.access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 },
182
- /* MMU TLB control. Note that the wildcarding means we cover not just
183
+ /*
184
+ * MMU TLB control. Note that the wildcarding means we cover not just
185
* the unified TLB ops but also the dside/iside/inner-shareable variants.
186
*/
187
{ .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
188
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
190
/* In ARMv8 most bits of CPACR_EL1 are RES0. */
191
if (!arm_feature(env, ARM_FEATURE_V8)) {
192
- /* ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
193
+ /*
194
+ * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
195
* ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP.
196
* TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell.
197
*/
198
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
199
value |= R_CPACR_ASEDIS_MASK;
200
}
201
202
- /* VFPv3 and upwards with NEON implement 32 double precision
203
+ /*
204
+ * VFPv3 and upwards with NEON implement 32 double precision
205
* registers (D0-D31).
206
*/
207
if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
208
@@ -XXX,XX +XXX,XX @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri)
209
210
static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
211
{
212
- /* Call cpacr_write() so that we reset with the correct RAO bits set
213
+ /*
214
+ * Call cpacr_write() so that we reset with the correct RAO bits set
215
* for our CPU features.
216
*/
217
cpacr_write(env, ri, 0);
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
218
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
74
REGINFO_SENTINEL
219
{ .name = "MVA_prefetch",
220
.cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1,
221
.access = PL1_W, .type = ARM_CP_NOP },
222
- /* We need to break the TB after ISB to execute self-modifying code
223
+ /*
224
+ * We need to break the TB after ISB to execute self-modifying code
225
* correctly and also to take any pending interrupts immediately.
226
* So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag.
227
*/
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
229
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s),
230
offsetof(CPUARMState, cp15.ifar_ns) },
231
.resetvalue = 0, },
232
- /* Watchpoint Fault Address Register : should actually only be present
233
+ /*
234
+ * Watchpoint Fault Address Register : should actually only be present
235
* for 1136, 1176, 11MPCore.
236
*/
237
{ .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1,
238
@@ -XXX,XX +XXX,XX @@ static bool event_supported(uint16_t number)
239
static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
240
bool isread)
241
{
242
- /* Performance monitor registers user accessibility is controlled
243
+ /*
244
+ * Performance monitor registers user accessibility is controlled
245
* by PMUSERENR. MDCR_EL2.TPM and MDCR_EL3.TPM allow configurable
246
* trapping to EL2 or EL3 for other accesses.
247
*/
248
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access_ccntr(CPUARMState *env,
249
(MDCR_HPME | MDCR_HPMD | MDCR_HPMN | MDCR_HCCD | MDCR_HLP)
250
#define MDCR_EL3_PMU_ENABLE_BITS (MDCR_SPME | MDCR_SCCD)
251
252
-/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
253
+/*
254
+ * Returns true if the counter (pass 31 for PMCCNTR) should count events using
255
* the current EL, security state, and register configuration.
256
*/
257
static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
258
@@ -XXX,XX +XXX,XX @@ static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
259
static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
260
uint64_t value)
261
{
262
- /* The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
263
+ /*
264
+ * The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
265
* PMXEVCNTR. We allow [0..31] to be written to PMSELR here; in the
266
* meanwhile, we check PMSELR.SEL when PMXEVTYPER and PMXEVCNTR are
267
* accessed.
268
@@ -XXX,XX +XXX,XX @@ static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
269
env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK;
270
pmevcntr_op_finish(env, counter);
271
}
272
- /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
273
+ /*
274
+ * Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
275
* PMSELR value is equal to or greater than the number of implemented
276
* counters, but not equal to 0x1f. We opt to behave as a RAZ/WI.
277
*/
278
@@ -XXX,XX +XXX,XX @@ static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri,
279
}
280
return ret;
281
} else {
282
- /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
283
- * are CONSTRAINED UNPREDICTABLE. */
284
+ /*
285
+ * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
286
+ * are CONSTRAINED UNPREDICTABLE.
287
+ */
288
return 0;
289
}
290
}
291
@@ -XXX,XX +XXX,XX @@ static void pmintenclr_write(CPUARMState *env, const ARMCPRegInfo *ri,
292
static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
293
uint64_t value)
294
{
295
- /* Note that even though the AArch64 view of this register has bits
296
+ /*
297
+ * Note that even though the AArch64 view of this register has bits
298
* [10:0] all RES0 we can only mask the bottom 5, to comply with the
299
* architectural requirements for bits which are RES0 only in some
300
* contexts. (ARMv8 would permit us to do no masking at all, but ARMv7
301
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
302
if (!arm_feature(env, ARM_FEATURE_EL2)) {
303
valid_mask &= ~SCR_HCE;
304
305
- /* On ARMv7, SMD (or SCD as it is called in v7) is only
306
+ /*
307
+ * On ARMv7, SMD (or SCD as it is called in v7) is only
308
* supported if EL2 exists. The bit is UNK/SBZP when
309
* EL2 is unavailable. In QEMU ARMv7, we force it to always zero
310
* when EL2 is unavailable.
311
@@ -XXX,XX +XXX,XX @@ static uint64_t ccsidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
312
{
313
ARMCPU *cpu = env_archcpu(env);
314
315
- /* Acquire the CSSELR index from the bank corresponding to the CCSIDR
316
+ /*
317
+ * Acquire the CSSELR index from the bank corresponding to the CCSIDR
318
* bank
319
*/
320
uint32_t index = A32_BANKED_REG_GET(env, csselr,
321
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
322
/* the old v6 WFI, UNPREDICTABLE in v7 but we choose to NOP */
323
{ .name = "NOP", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
324
.access = PL1_W, .type = ARM_CP_NOP },
325
- /* Performance monitors are implementation defined in v7,
326
+ /*
327
+ * Performance monitors are implementation defined in v7,
328
* but with an ARM recommended set of registers, which we
329
* follow.
330
*
331
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
332
.writefn = csselr_write, .resetvalue = 0,
333
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
334
offsetof(CPUARMState, cp15.csselr_ns) } },
335
- /* Auxiliary ID register: this actually has an IMPDEF value but for now
336
+ /*
337
+ * Auxiliary ID register: this actually has an IMPDEF value but for now
338
* just RAZ for all cores:
339
*/
340
{ .name = "AIDR", .state = ARM_CP_STATE_BOTH,
341
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
342
.access = PL1_R, .type = ARM_CP_CONST,
343
.accessfn = access_aa64_tid1,
344
.resetvalue = 0 },
345
- /* Auxiliary fault status registers: these also are IMPDEF, and we
346
+ /*
347
+ * Auxiliary fault status registers: these also are IMPDEF, and we
348
* choose to RAZ/WI for all cores.
349
*/
350
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
351
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
352
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
353
.access = PL1_RW, .accessfn = access_tvm_trvm,
354
.type = ARM_CP_CONST, .resetvalue = 0 },
355
- /* MAIR can just read-as-written because we don't implement caches
356
+ /*
357
+ * MAIR can just read-as-written because we don't implement caches
358
* and so don't need to care about memory attributes.
359
*/
360
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
361
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
362
.opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0,
363
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[3]),
364
.resetvalue = 0 },
365
- /* For non-long-descriptor page tables these are PRRR and NMRR;
366
+ /*
367
+ * For non-long-descriptor page tables these are PRRR and NMRR;
368
* regardless they still act as reads-as-written for QEMU.
369
*/
370
- /* MAIR0/1 are defined separately from their 64-bit counterpart which
371
+ /*
372
+ * MAIR0/1 are defined separately from their 64-bit counterpart which
373
* allows them to assign the correct fieldoffset based on the endianness
374
* handled in the field definitions.
375
*/
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
377
static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri,
378
bool isread)
379
{
380
- /* CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
381
+ /*
382
+ * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
383
* Writable only at the highest implemented exception level.
384
*/
385
int el = arm_current_el(env);
386
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
387
const ARMCPRegInfo *ri,
388
bool isread)
389
{
390
- /* The AArch64 register view of the secure physical timer is
391
+ /*
392
+ * The AArch64 register view of the secure physical timer is
393
* always accessible from EL3, and configurably accessible from
394
* Secure EL1.
395
*/
396
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
397
ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
398
399
if (gt->ctl & 1) {
400
- /* Timer enabled: calculate and set current ISTATUS, irq, and
401
+ /*
402
+ * Timer enabled: calculate and set current ISTATUS, irq, and
403
* reset timer to when ISTATUS next has to change
404
*/
405
uint64_t offset = timeridx == GTIMER_VIRT ?
406
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
407
/* Next transition is when we hit cval */
408
nexttick = gt->cval + offset;
409
}
410
- /* Note that the desired next expiry time might be beyond the
411
+ /*
412
+ * Note that the desired next expiry time might be beyond the
413
* signed-64-bit range of a QEMUTimer -- in this case we just
414
* set the timer for as far in the future as possible. When the
415
* timer expires we will reset the timer for any remaining period.
416
@@ -XXX,XX +XXX,XX @@ static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
417
/* Enable toggled */
418
gt_recalc_timer(cpu, timeridx);
419
} else if ((oldval ^ value) & 2) {
420
- /* IMASK toggled: don't need to recalculate,
421
+ /*
422
+ * IMASK toggled: don't need to recalculate,
423
* just set the interrupt line based on ISTATUS
424
*/
425
int irqstate = (oldval & 4) && !(value & 2);
426
@@ -XXX,XX +XXX,XX @@ static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque)
427
}
428
429
static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
430
- /* Note that CNTFRQ is purely reads-as-written for the benefit
431
+ /*
432
+ * Note that CNTFRQ is purely reads-as-written for the benefit
433
* of software; writing it doesn't actually change the timer frequency.
434
* Our reset value matches the fixed frequency we implement the timer at.
435
*/
436
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
437
.readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read,
438
.writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write,
439
},
440
- /* Secure timer -- this is actually restricted to only EL3
441
+ /*
442
+ * Secure timer -- this is actually restricted to only EL3
443
* and configurably Secure-EL1 via the accessfn.
444
*/
445
{ .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64,
446
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
447
448
#else
449
450
-/* In user-mode most of the generic timer registers are inaccessible
451
+/*
452
+ * In user-mode most of the generic timer registers are inaccessible
453
* however modern kernels (4.12+) allow access to cntvct_el0
454
*/
455
456
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
457
{
458
ARMCPU *cpu = env_archcpu(env);
459
460
- /* Currently we have no support for QEMUTimer in linux-user so we
461
+ /*
462
+ * Currently we have no support for QEMUTimer in linux-user so we
463
* can't call gt_get_countervalue(env), instead we directly
464
* call the lower level functions.
465
*/
466
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
467
bool isread)
468
{
469
if (ri->opc2 & 4) {
470
- /* The ATS12NSO* operations must trap to EL3 or EL2 if executed in
471
+ /*
472
+ * The ATS12NSO* operations must trap to EL3 or EL2 if executed in
473
* Secure EL1 (which can only happen if EL3 is AArch64).
474
* They are simply UNDEF if executed from NS EL1.
475
* They function normally from EL2 or EL3.
476
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
477
}
478
}
479
} else {
480
- /* fsr is a DFSR/IFSR value for the short descriptor
481
+ /*
482
+ * fsr is a DFSR/IFSR value for the short descriptor
483
* translation table format (with WnR always clear).
484
* Convert it to a 32-bit PAR.
485
*/
486
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
75
};
487
};
76
488
77
-/* Definitions for the PMU registers */
489
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
78
-#define PMCRN_MASK 0xf800
490
- /* Reset for all these registers is handled in arm_cpu_reset(),
79
-#define PMCRN_SHIFT 11
491
+ /*
80
-#define PMCRLC 0x40
492
+ * Reset for all these registers is handled in arm_cpu_reset(),
81
-#define PMCRDP 0x20
493
* because the PMSAv7 is also used by M-profile CPUs, which do
82
-#define PMCRX 0x10
494
* not register cpregs but still need the state to be reset.
83
-#define PMCRD 0x8
495
*/
84
-#define PMCRC 0x4
496
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
85
-#define PMCRP 0x2
497
}
86
-#define PMCRE 0x1
498
87
-/*
499
if (arm_feature(env, ARM_FEATURE_LPAE)) {
88
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
500
- /* With LPAE the TTBCR could result in a change of ASID
89
- * which can be written as 1 to trigger behaviour but which stay RAZ).
501
+ /*
90
- */
502
+ * With LPAE the TTBCR could result in a change of ASID
91
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
503
* via the TTBCR.A1 bit, so do a TLB flush.
92
-
504
*/
93
-#define PMXEVTYPER_P 0x80000000
505
tlb_flush(CPU(cpu));
94
-#define PMXEVTYPER_U 0x40000000
506
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
95
-#define PMXEVTYPER_NSK 0x20000000
507
offsetoflow32(CPUARMState, cp15.tcr_el[1])} },
96
-#define PMXEVTYPER_NSU 0x10000000
508
};
97
-#define PMXEVTYPER_NSH 0x08000000
509
98
-#define PMXEVTYPER_M 0x04000000
510
-/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
99
-#define PMXEVTYPER_MT 0x02000000
511
+/*
100
-#define PMXEVTYPER_EVTCOUNT 0x0000ffff
512
+ * Note that unlike TTBCR, writing to TTBCR2 does not require flushing
101
-#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
513
* qemu tlbs nor adjusting cached masks.
102
- PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
514
*/
103
- PMXEVTYPER_M | PMXEVTYPER_MT | \
515
static const ARMCPRegInfo ttbcr2_reginfo = {
104
- PMXEVTYPER_EVTCOUNT)
516
@@ -XXX,XX +XXX,XX @@ static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri,
105
-
517
static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri,
106
-#define PMCCFILTR 0xf8000000
518
uint64_t value)
107
-#define PMCCFILTR_M PMXEVTYPER_M
519
{
108
-#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
520
- /* On OMAP there are registers indicating the max/min index of dcache lines
109
-
521
+ /*
110
-static inline uint32_t pmu_num_counters(CPUARMState *env)
522
+ * On OMAP there are registers indicating the max/min index of dcache lines
111
-{
523
* containing a dirty line; cache flush operations have to reset these.
112
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
524
*/
113
-}
525
env->cp15.c15_i_max = 0x000;
114
-
526
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
115
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
527
.crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW,
116
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
528
.type = ARM_CP_NO_RAW,
117
-{
529
.readfn = arm_cp_read_zero, .writefn = omap_wfi_write, },
118
- return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
530
- /* TODO: Peripheral port remap register:
119
-}
531
+ /*
120
-
532
+ * TODO: Peripheral port remap register:
121
typedef struct pm_event {
533
* On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller
122
uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
534
* base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff),
123
/* If the event is supported on this CPU (used to generate PMCEID[01]) */
535
* when MMU is off.
536
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
537
.cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW,
538
.fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr),
539
.resetvalue = 0, },
540
- /* XScale specific cache-lockdown: since we have no cache we NOP these
541
+ /*
542
+ * XScale specific cache-lockdown: since we have no cache we NOP these
543
* and hope the guest does not really rely on cache behaviour.
544
*/
545
{ .name = "XSCALE_LOCK_ICACHE_LINE",
546
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
547
};
548
549
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
550
- /* RAZ/WI the whole crn=15 space, when we don't have a more specific
551
+ /*
552
+ * RAZ/WI the whole crn=15 space, when we don't have a more specific
553
* implementation of this implementation-defined space.
554
* Ideally this should eventually disappear in favour of actually
555
* implementing the correct behaviour for all cores.
556
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
557
};
558
559
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
560
- /* The cache test-and-clean instructions always return (1 << 30)
561
+ /*
562
+ * The cache test-and-clean instructions always return (1 << 30)
563
* to indicate that there are no dirty cache lines.
564
*/
565
{ .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3,
566
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read_val(CPUARMState *env)
567
568
if (arm_feature(env, ARM_FEATURE_V7MP)) {
569
mpidr |= (1U << 31);
570
- /* Cores which are uniprocessor (non-coherent)
571
+ /*
572
+ * Cores which are uniprocessor (non-coherent)
573
* but still implement the MP extensions set
574
* bit 30. (For instance, Cortex-R5).
575
*/
576
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
577
return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
578
}
579
580
-/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
581
+/*
582
+ * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
583
* Page D4-1736 (DDI0487A.b)
584
*/
585
586
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
587
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
588
uint64_t value)
589
{
590
- /* Invalidate by VA, EL2
591
+ /*
592
+ * Invalidate by VA, EL2
593
* Currently handles both VAE2 and VALE2, since we don't support
594
* flush-last-level-only.
595
*/
596
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
597
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
598
uint64_t value)
599
{
600
- /* Invalidate by VA, EL3
601
+ /*
602
+ * Invalidate by VA, EL3
603
* Currently handles both VAE3 and VALE3, since we don't support
604
* flush-last-level-only.
605
*/
606
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
607
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
608
uint64_t value)
609
{
610
- /* Invalidate by VA, EL1&0 (AArch64 version).
611
+ /*
612
+ * Invalidate by VA, EL1&0 (AArch64 version).
613
* Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
614
* since we don't support flush-for-specific-ASID-only or
615
* flush-last-level-only.
616
@@ -XXX,XX +XXX,XX @@ static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
617
bool isread)
618
{
619
if (!(env->pstate & PSTATE_SP)) {
620
- /* Access to SP_EL0 is undefined if it's being used as
621
+ /*
622
+ * Access to SP_EL0 is undefined if it's being used as
623
* the stack pointer.
624
*/
625
return CP_ACCESS_TRAP_UNCATEGORIZED;
626
@@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
627
}
628
629
if (raw_read(env, ri) == value) {
630
- /* Skip the TLB flush if nothing actually changed; Linux likes
631
+ /*
632
+ * Skip the TLB flush if nothing actually changed; Linux likes
633
* to do a lot of pointless SCTLR writes.
634
*/
635
return;
636
@@ -XXX,XX +XXX,XX @@ static void mdcr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
637
}
638
639
static const ARMCPRegInfo v8_cp_reginfo[] = {
640
- /* Minimal set of EL0-visible registers. This will need to be expanded
641
+ /*
642
+ * Minimal set of EL0-visible registers. This will need to be expanded
643
* significantly for system emulation of AArch64 CPUs.
644
*/
645
{ .name = "NZCV", .state = ARM_CP_STATE_AA64,
646
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
647
.opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0,
648
.access = PL1_RW,
649
.fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) },
650
- /* We rely on the access checks not allowing the guest to write to the
651
+ /*
652
+ * We rely on the access checks not allowing the guest to write to the
653
* state field when SPSel indicates that it's being used as the stack
654
* pointer.
655
*/
656
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
657
if (arm_feature(env, ARM_FEATURE_EL3)) {
658
valid_mask &= ~HCR_HCD;
659
} else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
660
- /* Architecturally HCR.TSC is RES0 if EL3 is not implemented.
661
+ /*
662
+ * Architecturally HCR.TSC is RES0 if EL3 is not implemented.
663
* However, if we're using the SMC PSCI conduit then QEMU is
664
* effectively acting like EL3 firmware and so the guest at
665
* EL2 should retain the ability to prevent EL1 from being
666
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
667
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
668
.writefn = tlbi_aa64_vae2is_write },
669
#ifndef CONFIG_USER_ONLY
670
- /* Unlike the other EL2-related AT operations, these must
671
+ /*
672
+ * Unlike the other EL2-related AT operations, these must
673
* UNDEF from EL3 if EL2 is not implemented, which is why we
674
* define them here rather than with the rest of the AT ops.
675
*/
676
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
677
.access = PL2_W, .accessfn = at_s1e2_access,
678
.type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
679
.writefn = ats_write64 },
680
- /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
681
+ /*
682
+ * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
683
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
684
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
685
* to behave as if SCR.NS was 1.
686
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
687
.writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
688
{ .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
689
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
690
- /* ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
691
+ /*
692
+ * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
693
* reset values as IMPDEF. We choose to reset to 3 to comply with
694
* both ARMv7 and ARMv8.
695
*/
696
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
697
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
698
bool isread)
699
{
700
- /* The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
701
+ /*
702
+ * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
703
* At Secure EL1 it traps to EL3 or EL2.
704
*/
705
if (arm_current_el(env) == 3) {
706
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
707
}
708
}
709
710
-/* We don't know until after realize whether there's a GICv3
711
+/*
712
+ * We don't know until after realize whether there's a GICv3
713
* attached, and that is what registers the gicv3 sysregs.
714
* So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1
715
* at runtime.
716
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
717
}
718
#endif
719
720
-/* Shared logic between LORID and the rest of the LOR* registers.
721
+/*
722
+ * Shared logic between LORID and the rest of the LOR* registers.
723
* Secure state exclusion has already been dealt with.
724
*/
725
static CPAccessResult access_lor_ns(CPUARMState *env,
726
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
727
728
define_arm_cp_regs(cpu, cp_reginfo);
729
if (!arm_feature(env, ARM_FEATURE_V8)) {
730
- /* Must go early as it is full of wildcards that may be
731
+ /*
732
+ * Must go early as it is full of wildcards that may be
733
* overridden by later definitions.
734
*/
735
define_arm_cp_regs(cpu, not_v8_cp_reginfo);
736
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
737
.access = PL1_R, .type = ARM_CP_CONST,
738
.accessfn = access_aa32_tid3,
739
.resetvalue = cpu->isar.id_pfr0 },
740
- /* ID_PFR1 is not a plain ARM_CP_CONST because we don't know
741
+ /*
742
+ * ID_PFR1 is not a plain ARM_CP_CONST because we don't know
743
* the value of the GIC field until after we define these regs.
744
*/
745
{ .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH,
746
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
747
748
define_arm_cp_regs(cpu, el3_regs);
749
}
750
- /* The behaviour of NSACR is sufficiently various that we don't
751
+ /*
752
+ * The behaviour of NSACR is sufficiently various that we don't
753
* try to describe it in a single reginfo:
754
* if EL3 is 64 bit, then trap to EL3 from S EL1,
755
* reads as constant 0xc00 from NS EL1 and NS EL2
756
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
757
if (cpu_isar_feature(aa32_jazelle, cpu)) {
758
define_arm_cp_regs(cpu, jazelle_regs);
759
}
760
- /* Slightly awkwardly, the OMAP and StrongARM cores need all of
761
+ /*
762
+ * Slightly awkwardly, the OMAP and StrongARM cores need all of
763
* cp15 crn=0 to be writes-ignored, whereas for other cores they should
764
* be read-only (ie write causes UNDEF exception).
765
*/
766
{
767
ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = {
768
- /* Pre-v8 MIDR space.
769
+ /*
770
+ * Pre-v8 MIDR space.
771
* Note that the MIDR isn't a simple constant register because
772
* of the TI925 behaviour where writes to another register can
773
* cause the MIDR value to change.
774
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
775
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
776
arm_feature(env, ARM_FEATURE_STRONGARM)) {
777
size_t i;
778
- /* Register the blanket "writes ignored" value first to cover the
779
+ /*
780
+ * Register the blanket "writes ignored" value first to cover the
781
* whole space. Then update the specific ID registers to allow write
782
* access, so that they ignore writes rather than causing them to
783
* UNDEF.
784
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
785
.raw_writefn = raw_write,
786
};
787
if (arm_feature(env, ARM_FEATURE_XSCALE)) {
788
- /* Normally we would always end the TB on an SCTLR write, but Linux
789
+ /*
790
+ * Normally we would always end the TB on an SCTLR write, but Linux
791
* arch/arm/mach-pxa/sleep.S expects two instructions following
792
* an MMU enable to execute from cache. Imitate this behaviour.
793
*/
794
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
795
void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
796
const ARMCPRegInfo *r, void *opaque)
797
{
798
- /* Define implementations of coprocessor registers.
799
+ /*
800
+ * Define implementations of coprocessor registers.
801
* We store these in a hashtable because typically
802
* there are less than 150 registers in a space which
803
* is 16*16*16*8*8 = 262144 in size.
804
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
805
default:
806
g_assert_not_reached();
807
}
808
- /* The AArch64 pseudocode CheckSystemAccess() specifies that op1
809
+ /*
810
+ * The AArch64 pseudocode CheckSystemAccess() specifies that op1
811
* encodes a minimum access level for the register. We roll this
812
* runtime check into our general permission check code, so check
813
* here that the reginfo's specified permissions are strict enough
814
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
815
assert((r->access & ~mask) == 0);
816
}
817
818
- /* Check that the register definition has enough info to handle
819
+ /*
820
+ * Check that the register definition has enough info to handle
821
* reads and writes if they are permitted.
822
*/
823
if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
824
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
825
continue;
826
}
827
if (state == ARM_CP_STATE_AA32) {
828
- /* Under AArch32 CP registers can be common
829
+ /*
830
+ * Under AArch32 CP registers can be common
831
* (same for secure and non-secure world) or banked.
832
*/
833
char *name;
834
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
835
g_assert_not_reached();
836
}
837
} else {
838
- /* AArch64 registers get mapped to non-secure instance
839
- * of AArch32 */
840
+ /*
841
+ * AArch64 registers get mapped to non-secure instance
842
+ * of AArch32
843
+ */
844
add_cpreg_to_hashtable(cpu, r, opaque, state,
845
ARM_CP_SECSTATE_NS,
846
crm, opc1, opc2, r->name);
847
@@ -XXX,XX +XXX,XX @@ void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque)
848
849
static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
850
{
851
- /* Return true if it is not valid for us to switch to
852
+ /*
853
+ * Return true if it is not valid for us to switch to
854
* this CPU mode (ie all the UNPREDICTABLE cases in
855
* the ARM ARM CPSRWriteByInstr pseudocode).
856
*/
857
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
858
case ARM_CPU_MODE_UND:
859
case ARM_CPU_MODE_IRQ:
860
case ARM_CPU_MODE_FIQ:
861
- /* Note that we don't implement the IMPDEF NSACR.RFR which in v7
862
+ /*
863
+ * Note that we don't implement the IMPDEF NSACR.RFR which in v7
864
* allows FIQ mode to be Secure-only. (In v8 this doesn't exist.)
865
*/
866
- /* If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
867
+ /*
868
+ * If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
869
* and CPS are treated as illegal mode changes.
870
*/
871
if (write_type == CPSRWriteByInstr &&
872
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
873
env->GE = (val >> 16) & 0xf;
874
}
875
876
- /* In a V7 implementation that includes the security extensions but does
877
+ /*
878
+ * In a V7 implementation that includes the security extensions but does
879
* not include Virtualization Extensions the SCR.FW and SCR.AW bits control
880
* whether non-secure software is allowed to change the CPSR_F and CPSR_A
881
* bits respectively.
882
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
883
changed_daif = (env->daif ^ val) & mask;
884
885
if (changed_daif & CPSR_A) {
886
- /* Check to see if we are allowed to change the masking of async
887
+ /*
888
+ * Check to see if we are allowed to change the masking of async
889
* abort exceptions from a non-secure state.
890
*/
891
if (!(env->cp15.scr_el3 & SCR_AW)) {
892
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
893
}
894
895
if (changed_daif & CPSR_F) {
896
- /* Check to see if we are allowed to change the masking of FIQ
897
+ /*
898
+ * Check to see if we are allowed to change the masking of FIQ
899
* exceptions from a non-secure state.
900
*/
901
if (!(env->cp15.scr_el3 & SCR_FW)) {
902
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
903
mask &= ~CPSR_F;
904
}
905
906
- /* Check whether non-maskable FIQ (NMFI) support is enabled.
907
+ /*
908
+ * Check whether non-maskable FIQ (NMFI) support is enabled.
909
* If this bit is set software is not allowed to mask
910
* FIQs, but is allowed to set CPSR_F to 0.
911
*/
912
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
913
if (write_type != CPSRWriteRaw &&
914
((env->uncached_cpsr ^ val) & mask & CPSR_M)) {
915
if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) {
916
- /* Note that we can only get here in USR mode if this is a
917
+ /*
918
+ * Note that we can only get here in USR mode if this is a
919
* gdb stub write; for this case we follow the architectural
920
* behaviour for guest writes in USR mode of ignoring an attempt
921
* to switch mode. (Those are caught by translate.c for writes
922
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
923
*/
924
mask &= ~CPSR_M;
925
} else if (bad_mode_switch(env, val & CPSR_M, write_type)) {
926
- /* Attempt to switch to an invalid mode: this is UNPREDICTABLE in
927
+ /*
928
+ * Attempt to switch to an invalid mode: this is UNPREDICTABLE in
929
* v7, and has defined behaviour in v8:
930
* + leave CPSR.M untouched
931
* + allow changes to the other CPSR fields
932
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
933
env->regs[14] = env->banked_r14[r14_bank_number(mode)];
934
}
935
936
-/* Physical Interrupt Target EL Lookup Table
937
+/*
938
+ * Physical Interrupt Target EL Lookup Table
939
*
940
* [ From ARM ARM section G1.13.4 (Table G1-15) ]
941
*
942
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
943
if (arm_feature(env, ARM_FEATURE_EL3)) {
944
rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
945
} else {
946
- /* Either EL2 is the highest EL (and so the EL2 register width
947
+ /*
948
+ * Either EL2 is the highest EL (and so the EL2 register width
949
* is given by is64); or there is no EL2 or EL3, in which case
950
* the value of 'rw' does not affect the table lookup anyway.
951
*/
952
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
953
env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
954
}
955
956
- /* Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
957
+ /*
958
+ * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
959
* mode, then we can copy to r8-r14. Otherwise, we copy to the
960
* FIQ bank for r8-r14.
961
*/
962
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
963
/* High vectors. When enabled, base address cannot be remapped. */
964
addr += 0xffff0000;
965
} else {
966
- /* ARM v7 architectures provide a vector base address register to remap
967
+ /*
968
+ * ARM v7 architectures provide a vector base address register to remap
969
* the interrupt vector table.
970
* This register is only followed in non-monitor mode, and is banked.
971
* Note: only bits 31:5 are valid.
972
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
973
aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
974
975
if (cur_el < new_el) {
976
- /* Entry vector offset depends on whether the implemented EL
977
+ /*
978
+ * Entry vector offset depends on whether the implemented EL
979
* immediately lower than the target level is using AArch32 or AArch64
980
*/
981
bool is_aa64;
982
@@ -XXX,XX +XXX,XX @@ static void handle_semihosting(CPUState *cs)
983
}
984
#endif
985
986
-/* Handle a CPU exception for A and R profile CPUs.
987
+/*
988
+ * Handle a CPU exception for A and R profile CPUs.
989
* Do any appropriate logging, handle PSCI calls, and then hand off
990
* to the AArch64-entry or AArch32-entry function depending on the
991
* target exception level's register width.
992
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
993
}
994
#endif
995
996
- /* Hooks may change global state so BQL should be held, also the
997
+ /*
998
+ * Hooks may change global state so BQL should be held, also the
999
* BQL needs to be held for any modification of
1000
* cs->interrupt_request.
1001
*/
1002
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
1003
};
1004
}
1005
1006
-/* Note that signed overflow is undefined in C. The following routines are
1007
- careful to use unsigned types where modulo arithmetic is required.
1008
- Failure to do so _will_ break on newer gcc. */
1009
+/*
1010
+ * Note that signed overflow is undefined in C. The following routines are
1011
+ * careful to use unsigned types where modulo arithmetic is required.
1012
+ * Failure to do so _will_ break on newer gcc.
1013
+ */
1014
1015
/* Signed saturating arithmetic. */
1016
1017
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
1018
return (a & mask) | (b & ~mask);
1019
}
1020
1021
-/* CRC helpers.
1022
+/*
1023
+ * CRC helpers.
1024
* The upper bytes of val (above the number specified by 'bytes') must have
1025
* been zeroed out by the caller.
1026
*/
1027
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes)
1028
return crc32c(acc, buf, bytes) ^ 0xffffffff;
1029
}
1030
1031
-/* Return the exception level to which FP-disabled exceptions should
1032
+/*
1033
+ * Return the exception level to which FP-disabled exceptions should
1034
* be taken, or 0 if FP is enabled.
1035
*/
1036
int fp_exception_el(CPUARMState *env, int cur_el)
1037
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1038
#ifndef CONFIG_USER_ONLY
1039
uint64_t hcr_el2;
1040
1041
- /* CPACR and the CPTR registers don't exist before v6, so FP is
1042
+ /*
1043
+ * CPACR and the CPTR registers don't exist before v6, so FP is
1044
* always accessible
1045
*/
1046
if (!arm_feature(env, ARM_FEATURE_V6)) {
1047
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1048
1049
hcr_el2 = arm_hcr_el2_eff(env);
1050
1051
- /* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1052
+ /*
1053
+ * The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1054
* 0, 2 : trap EL0 and EL1/PL1 accesses
1055
* 1 : trap only EL0 accesses
1056
* 3 : trap no accesses
124
--
1057
--
125
2.20.1
1058
2.25.1
126
127
New patch
From: Fabiano Rosas <farosas@suse.de>

Fix the following:

ERROR: spaces required around that '|' (ctx:VxV)
ERROR: space required before the open parenthesis '('
ERROR: spaces required around that '+' (ctx:VxB)
ERROR: space prohibited between function name and open parenthesis '('

(the last two still have some occurrences in macros which I left
behind because it might impact readability)

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-3-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
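[Editor's note, not part of the patch: a minimal sketch of the shape these
checkpatch rules ask for; the function name and values below are invented
for illustration only.]

#include <stdint.h>
#include <string.h>

/* Illustration only: spaces around binary operators, no space before '(' in calls. */
static uint32_t style_demo(uint32_t a, uint32_t b)
{
    uint32_t flags = a | b;              /* rather than: a|b */
    uint32_t sum = a + b;                /* rather than: a +b */
    uint8_t buf[sizeof(sum)];

    memcpy(buf, &sum, sizeof(sum));      /* rather than: memcpy (buf, &sum, ...) */
    return flags + buf[0];
}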
21
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
27
uint32_t regidx = (uintptr_t)key;
28
const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
29
30
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
31
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
32
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
33
/* The value array need not be initialized at this point */
34
cpu->cpreg_array_len++;
35
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
36
37
ri = g_hash_table_lookup(cpu->cp_regs, key);
38
39
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
40
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
41
cpu->cpreg_array_len++;
42
}
43
}
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
45
.resetfn = arm_cp_reset_ignore },
46
{ .name = "TPIDRRO_EL0", .state = ARM_CP_STATE_AA64,
47
.opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0,
48
- .access = PL0_R|PL1_W,
49
+ .access = PL0_R | PL1_W,
50
.fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]),
51
.resetvalue = 0},
52
{ .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3,
53
- .access = PL0_R|PL1_W,
54
+ .access = PL0_R | PL1_W,
55
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s),
56
offsetoflow32(CPUARMState, cp15.tpidruro_ns) },
57
.resetfn = arm_cp_reset_ignore },
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
59
.resetvalue = 0 },
60
/* The cache ops themselves: these all NOP for QEMU */
61
{ .name = "IICR", .cp = 15, .crm = 5, .opc1 = 0,
62
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
63
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
64
{ .name = "IDCR", .cp = 15, .crm = 6, .opc1 = 0,
65
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
66
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
67
{ .name = "CDCR", .cp = 15, .crm = 12, .opc1 = 0,
68
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
69
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
70
{ .name = "PIR", .cp = 15, .crm = 12, .opc1 = 1,
71
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
72
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
73
{ .name = "PDR", .cp = 15, .crm = 12, .opc1 = 2,
74
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
75
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
76
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
77
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
78
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
79
};
80
81
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
82
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
83
ARMCPRegInfo cbar = {
84
.name = "CBAR",
85
.cp = 15, .crn = 15, .crm = 0, .opc1 = 4, .opc2 = 0,
86
- .access = PL1_R|PL3_W, .resetvalue = cpu->reset_cbar,
87
+ .access = PL1_R | PL3_W, .resetvalue = cpu->reset_cbar,
88
.fieldoffset = offsetof(CPUARMState,
89
cp15.c15_config_base_address)
90
};
91
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
92
return;
93
94
if (old_mode == ARM_CPU_MODE_FIQ) {
95
- memcpy (env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
96
- memcpy (env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
97
+ memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
98
+ memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
99
} else if (mode == ARM_CPU_MODE_FIQ) {
100
- memcpy (env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
101
- memcpy (env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
102
+ memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
103
+ memcpy(env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
104
}
105
106
i = bank_number(old_mode);
107
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
108
RESULT(sum, n, 16); \
109
if (sum >= 0) \
110
ge |= 3 << (n * 2); \
111
- } while(0)
112
+ } while (0)
113
114
#define SARITH8(a, b, n, op) do { \
115
int32_t sum; \
116
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
117
RESULT(sum, n, 8); \
118
if (sum >= 0) \
119
ge |= 1 << n; \
120
- } while(0)
121
+ } while (0)
122
123
124
#define ADD16(a, b, n) SARITH16(a, b, n, +)
125
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
126
RESULT(sum, n, 16); \
127
if ((sum >> 16) == 1) \
128
ge |= 3 << (n * 2); \
129
- } while(0)
130
+ } while (0)
131
132
#define ADD8(a, b, n) do { \
133
uint32_t sum; \
134
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
135
RESULT(sum, n, 8); \
136
if ((sum >> 8) == 1) \
137
ge |= 1 << n; \
138
- } while(0)
139
+ } while (0)
140
141
#define SUB16(a, b, n) do { \
142
uint32_t sum; \
143
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
144
RESULT(sum, n, 16); \
145
if ((sum >> 16) == 0) \
146
ge |= 3 << (n * 2); \
147
- } while(0)
148
+ } while (0)
149
150
#define SUB8(a, b, n) do { \
151
uint32_t sum; \
152
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
153
RESULT(sum, n, 8); \
154
if ((sum >> 8) == 0) \
155
ge |= 1 << n; \
156
- } while(0)
157
+ } while (0)
158
159
#define PFX u
160
#define ARITH_GE
161
--
162
2.25.1
1
Optimize the MVE VMVN insn by using TCG vector ops when possible.
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Fix this:
4
ERROR: braces {} are necessary for all arms of this statement
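[Editor's note, not part of the patch: a minimal sketch of the form checkpatch
wants here, with braces on every arm of the statement; the function name is
invented for illustration only.]

#include <stdint.h>

/* Illustration only: both arms of the if/else carry braces. */
static uint8_t clamp_demo(uint8_t a)
{
    uint8_t res;

    if (a & 0x80) {
        res = 0x80;
    } else {
        res = 0x7f;
    }
    return res;
}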
5
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
9
Message-id: 20221213190537.511-4-farosas@suse.de
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
6
---
11
---
7
target/arm/translate-mve.c | 2 +-
12
target/arm/helper.c | 67 ++++++++++++++++++++++++++++-----------------
8
1 file changed, 1 insertion(+), 1 deletion(-)
13
1 file changed, 42 insertions(+), 25 deletions(-)
9
14
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-mve.c
17
--- a/target/arm/helper.c
13
+++ b/target/arm/translate-mve.c
18
+++ b/target/arm/helper.c
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
19
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
15
20
env->CF = (val >> 29) & 1;
16
static bool trans_VMVN(DisasContext *s, arg_1op *a)
21
env->VF = (val << 3) & 0x80000000;
22
}
23
- if (mask & CPSR_Q)
24
+ if (mask & CPSR_Q) {
25
env->QF = ((val & CPSR_Q) != 0);
26
- if (mask & CPSR_T)
27
+ }
28
+ if (mask & CPSR_T) {
29
env->thumb = ((val & CPSR_T) != 0);
30
+ }
31
if (mask & CPSR_IT_0_1) {
32
env->condexec_bits &= ~3;
33
env->condexec_bits |= (val >> 25) & 3;
34
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
35
int i;
36
37
old_mode = env->uncached_cpsr & CPSR_M;
38
- if (mode == old_mode)
39
+ if (mode == old_mode) {
40
return;
41
+ }
42
43
if (old_mode == ARM_CPU_MODE_FIQ) {
44
memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
45
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
46
new_mode = ARM_CPU_MODE_UND;
47
addr = 0x04;
48
mask = CPSR_I;
49
- if (env->thumb)
50
+ if (env->thumb) {
51
offset = 2;
52
- else
53
+ } else {
54
offset = 4;
55
+ }
56
break;
57
case EXCP_SWI:
58
new_mode = ARM_CPU_MODE_SVC;
59
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_sat(uint16_t a, uint16_t b)
60
61
res = a + b;
62
if (((res ^ a) & 0x8000) && !((a ^ b) & 0x8000)) {
63
- if (a & 0x8000)
64
+ if (a & 0x8000) {
65
res = 0x8000;
66
- else
67
+ } else {
68
res = 0x7fff;
69
+ }
70
}
71
return res;
72
}
73
@@ -XXX,XX +XXX,XX @@ static inline uint8_t add8_sat(uint8_t a, uint8_t b)
74
75
res = a + b;
76
if (((res ^ a) & 0x80) && !((a ^ b) & 0x80)) {
77
- if (a & 0x80)
78
+ if (a & 0x80) {
79
res = 0x80;
80
- else
81
+ } else {
82
res = 0x7f;
83
+ }
84
}
85
return res;
86
}
87
@@ -XXX,XX +XXX,XX @@ static inline uint16_t sub16_sat(uint16_t a, uint16_t b)
88
89
res = a - b;
90
if (((res ^ a) & 0x8000) && ((a ^ b) & 0x8000)) {
91
- if (a & 0x8000)
92
+ if (a & 0x8000) {
93
res = 0x8000;
94
- else
95
+ } else {
96
res = 0x7fff;
97
+ }
98
}
99
return res;
100
}
101
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_sat(uint8_t a, uint8_t b)
102
103
res = a - b;
104
if (((res ^ a) & 0x80) && ((a ^ b) & 0x80)) {
105
- if (a & 0x80)
106
+ if (a & 0x80) {
107
res = 0x80;
108
- else
109
+ } else {
110
res = 0x7f;
111
+ }
112
}
113
return res;
114
}
115
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_usat(uint16_t a, uint16_t b)
17
{
116
{
18
- return do_1op(s, a, gen_helper_mve_vmvn);
117
uint16_t res;
19
+ return do_1op_vec(s, a, gen_helper_mve_vmvn, tcg_gen_gvec_not);
118
res = a + b;
119
- if (res < a)
120
+ if (res < a) {
121
res = 0xffff;
122
+ }
123
return res;
20
}
124
}
21
125
22
static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
126
static inline uint16_t sub16_usat(uint16_t a, uint16_t b)
127
{
128
- if (a > b)
129
+ if (a > b) {
130
return a - b;
131
- else
132
+ } else {
133
return 0;
134
+ }
135
}
136
137
static inline uint8_t add8_usat(uint8_t a, uint8_t b)
138
{
139
uint8_t res;
140
res = a + b;
141
- if (res < a)
142
+ if (res < a) {
143
res = 0xff;
144
+ }
145
return res;
146
}
147
148
static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
149
{
150
- if (a > b)
151
+ if (a > b) {
152
return a - b;
153
- else
154
+ } else {
155
return 0;
156
+ }
157
}
158
159
#define ADD16(a, b, n) RESULT(add16_usat(a, b), n, 16);
160
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
161
162
static inline uint8_t do_usad(uint8_t a, uint8_t b)
163
{
164
- if (a > b)
165
+ if (a > b) {
166
return a - b;
167
- else
168
+ } else {
169
return b - a;
170
+ }
171
}
172
173
/* Unsigned sum of absolute byte differences. */
174
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
175
uint32_t mask;
176
177
mask = 0;
178
- if (flags & 1)
179
+ if (flags & 1) {
180
mask |= 0xff;
181
- if (flags & 2)
182
+ }
183
+ if (flags & 2) {
184
mask |= 0xff00;
185
- if (flags & 4)
186
+ }
187
+ if (flags & 4) {
188
mask |= 0xff0000;
189
- if (flags & 8)
190
+ }
191
+ if (flags & 8) {
192
mask |= 0xff000000;
193
+ }
194
return (a & mask) | (b & ~mask);
195
}
196
23
--
197
--
24
2.20.1
198
2.25.1
25
26
New patch
From: Fabiano Rosas <farosas@suse.de>

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-5-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/m_helper.c | 16 ----------------
1 file changed, 16 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@
 */

#include "qemu/osdep.h"
-#include "qemu/units.h"
-#include "target/arm/idau.h"
-#include "trace.h"
#include "cpu.h"
#include "internals.h"
-#include "exec/gdbstub.h"
#include "exec/helper-proto.h"
-#include "qemu/host-utils.h"
#include "qemu/main-loop.h"
#include "qemu/bitops.h"
-#include "qemu/crc32c.h"
-#include "qemu/qemu-print.h"
#include "qemu/log.h"
#include "exec/exec-all.h"
-#include <zlib.h> /* For crc32 */
-#include "semihosting/semihost.h"
-#include "sysemu/cpus.h"
-#include "sysemu/kvm.h"
-#include "qemu/range.h"
-#include "qapi/qapi-commands-machine-target.h"
-#include "qapi/error.h"
-#include "qemu/guest-random.h"
#ifdef CONFIG_TCG
-#include "arm_ldst.h"
#include "exec/cpu_ldst.h"
#include "semihosting/common-semi.h"
#endif
--
2.25.1
New patch
From: Fabiano Rosas <farosas@suse.de>

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-6-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 7 -------
1 file changed, 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
 */

#include "qemu/osdep.h"
-#include "qemu/units.h"
#include "qemu/log.h"
#include "trace.h"
#include "cpu.h"
#include "internals.h"
#include "exec/helper-proto.h"
-#include "qemu/host-utils.h"
#include "qemu/main-loop.h"
#include "qemu/timer.h"
#include "qemu/bitops.h"
@@ -XXX,XX +XXX,XX @@
#include "exec/exec-all.h"
#include <zlib.h> /* For crc32 */
#include "hw/irq.h"
-#include "semihosting/semihost.h"
-#include "sysemu/cpus.h"
#include "sysemu/cpu-timers.h"
#include "sysemu/kvm.h"
-#include "qemu/range.h"
#include "qapi/qapi-commands-machine-target.h"
#include "qapi/error.h"
#include "qemu/guest-random.h"
#ifdef CONFIG_TCG
-#include "arm_ldst.h"
-#include "exec/cpu_ldst.h"
#include "semihosting/common-semi.h"
#endif
#include "cpregs.h"
--
2.25.1
1
There's no particular reason why the exclusive monitor should
1
From: Claudio Fontana <cfontana@suse.de>
2
be only cleared on reset in system emulation mode. It doesn't
3
hurt if it isn't cleared in user mode, but we might as well
4
reduce the amount of code we have that's inside an ifdef.
5
2
3
Remove some unused headers.
4
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
6
Acked-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Message-id: 20221213190537.511-7-farosas@suse.de
11
[added back some includes that are still needed at this point]
12
Signed-off-by: Fabiano Rosas <farosas@suse.de>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
9
---
14
---
10
target/arm/cpu.c | 6 +++---
15
target/arm/cpu.c | 1 -
11
1 file changed, 3 insertions(+), 3 deletions(-)
16
target/arm/cpu64.c | 6 ------
17
2 files changed, 7 deletions(-)
12
18
13
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
19
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.c
21
--- a/target/arm/cpu.c
16
+++ b/target/arm/cpu.c
22
+++ b/target/arm/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
23
@@ -XXX,XX +XXX,XX @@
18
env->regs[15] = 0xFFFF0000;
24
#include "target/arm/idau.h"
19
}
25
#include "qemu/module.h"
20
26
#include "qapi/error.h"
21
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
27
-#include "qapi/visitor.h"
22
+#endif
28
#include "cpu.h"
23
+
29
#ifdef CONFIG_TCG
24
/* M profile requires that reset clears the exclusive monitor;
30
#include "hw/core/tcg-cpu-ops.h"
25
* A profile does not, but clearing it makes more sense than having it
31
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
26
* set with an exclusive access on address zero.
32
index XXXXXXX..XXXXXXX 100644
27
*/
33
--- a/target/arm/cpu64.c
28
arm_clear_exclusive(env);
34
+++ b/target/arm/cpu64.c
29
35
@@ -XXX,XX +XXX,XX @@
30
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
36
#include "qemu/osdep.h"
37
#include "qapi/error.h"
38
#include "cpu.h"
39
-#ifdef CONFIG_TCG
40
-#include "hw/core/tcg-cpu-ops.h"
41
-#endif /* CONFIG_TCG */
42
#include "qemu/module.h"
43
-#if !defined(CONFIG_USER_ONLY)
44
-#include "hw/loader.h"
31
-#endif
45
-#endif
32
-
46
#include "sysemu/kvm.h"
33
if (arm_feature(env, ARM_FEATURE_PMSA)) {
47
#include "sysemu/hvf.h"
34
if (cpu->pmsav7_dregion > 0) {
48
#include "kvm_arm.h"
35
if (arm_feature(env, ARM_FEATURE_V8)) {
36
--
49
--
37
2.20.1
50
2.25.1
38
39
1
Architecturally, for an M-profile CPU with the LOB feature the
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation
3
enforces this everywhere, except that we don't check that it is true
4
in incoming migration data.
5
2
6
We're going to add code in gen_update_fp_context() which relies on
3
The pointed MouseTransformInfo structure is accessed read-only.
7
the "always 4" property. Since this is TCG-only, we don't actually
8
need to be robust to bogus incoming migration data, and the effect of
9
it being wrong would be wrong code generation rather than a QEMU
10
crash; but if it did ever happen somehow it would be very difficult
11
to track down the cause. Add a check so that we fail the inbound
12
migration if the FPDSCR.LTPSIZE value is incorrect.
13
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221220142520.24094-2-philmd@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
17
---
9
---
18
target/arm/machine.c | 13 +++++++++++++
10
include/hw/input/tsc2xxx.h | 4 ++--
19
1 file changed, 13 insertions(+)
11
hw/input/tsc2005.c | 2 +-
12
hw/input/tsc210x.c | 3 +--
13
3 files changed, 4 insertions(+), 5 deletions(-)
20
14
21
diff --git a/target/arm/machine.c b/target/arm/machine.c
15
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/machine.c
17
--- a/include/hw/input/tsc2xxx.h
24
+++ b/target/arm/machine.c
18
+++ b/include/hw/input/tsc2xxx.h
25
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
19
@@ -XXX,XX +XXX,XX @@ uWireSlave *tsc2102_init(qemu_irq pint);
26
hw_breakpoint_update_all(cpu);
20
uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
27
hw_watchpoint_update_all(cpu);
21
I2SCodec *tsc210x_codec(uWireSlave *chip);
28
22
uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
29
+ /*
23
-void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
30
+ * TCG gen_update_fp_context() relies on the invariant that
24
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info);
31
+ * FPDSCR.LTPSIZE is constant 4 for M-profile with the LOB extension;
25
void tsc210x_key_event(uWireSlave *chip, int key, int down);
32
+ * forbid bogus incoming data with some other value.
26
33
+ */
27
/* tsc2005.c */
34
+ if (arm_feature(env, ARM_FEATURE_M) && cpu_isar_feature(aa32_lob, cpu)) {
28
void *tsc2005_init(qemu_irq pintdav);
35
+ if (extract32(env->v7m.fpdscr[M_REG_NS],
29
uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
36
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4 ||
30
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
37
+ extract32(env->v7m.fpdscr[M_REG_S],
31
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info);
38
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
32
39
+ return -1;
33
#endif
40
+ }
34
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
41
+ }
35
index XXXXXXX..XXXXXXX 100644
42
if (!kvm_enabled()) {
36
--- a/hw/input/tsc2005.c
43
pmu_op_finish(&cpu->env);
37
+++ b/hw/input/tsc2005.c
44
}
38
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav)
39
* from the touchscreen. Assuming 12-bit precision was used during
40
* tslib calibration.
41
*/
42
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info)
43
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info)
44
{
45
TSC2005State *s = (TSC2005State *) opaque;
46
47
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/input/tsc210x.c
50
+++ b/hw/input/tsc210x.c
51
@@ -XXX,XX +XXX,XX @@ I2SCodec *tsc210x_codec(uWireSlave *chip)
52
* from the touchscreen. Assuming 12-bit precision was used during
53
* tslib calibration.
54
*/
55
-void tsc210x_set_transform(uWireSlave *chip,
56
- MouseTransformInfo *info)
57
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info)
58
{
59
TSC210xState *s = (TSC210xState *) chip->opaque;
60
#if 0
45
--
61
--
46
2.20.1
62
2.25.1
47
63
48
64
1
Optimize MVE arithmetic ops when we have a TCG
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
vector operation we can use.
3
2
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221220142520.24094-3-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
8
---
7
---
9
target/arm/translate-mve.c | 20 +++++++++++---------
8
hw/arm/nseries.c | 18 +++++++++---------
10
1 file changed, 11 insertions(+), 9 deletions(-)
9
1 file changed, 9 insertions(+), 9 deletions(-)
11
10
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
11
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
13
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
13
--- a/hw/arm/nseries.c
15
+++ b/target/arm/translate-mve.c
14
+++ b/hw/arm/nseries.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
15
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
17
return do_2op(s, a, gen_helper_mve_vpsel);
18
}
16
}
19
17
20
-#define DO_2OP(INSN, FN) \
18
/* Touchscreen and keypad controller */
21
+#define DO_2OP_VEC(INSN, FN, VECFN) \
19
-static MouseTransformInfo n800_pointercal = {
22
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
20
+static const MouseTransformInfo n800_pointercal = {
23
{ \
21
.x = 800,
24
static MVEGenTwoOpFn * const fns[] = { \
22
.y = 480,
25
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
23
.a = { 14560, -68, -3455208, -39, -9621, 35152972, 65536 },
26
gen_helper_mve_##FN##w, \
24
};
27
NULL, \
25
28
}; \
26
-static MouseTransformInfo n810_pointercal = {
29
- return do_2op(s, a, fns[a->size]); \
27
+static const MouseTransformInfo n810_pointercal = {
30
+ return do_2op_vec(s, a, fns[a->size], VECFN); \
28
.x = 800,
31
}
29
.y = 480,
32
30
.a = { 15041, 148, -4731056, 171, -10238, 35933380, 65536 },
33
-DO_2OP(VADD, vadd)
31
@@ -XXX,XX +XXX,XX @@ static void n810_key_event(void *opaque, int keycode)
34
-DO_2OP(VSUB, vsub)
32
35
-DO_2OP(VMUL, vmul)
33
#define M    0
36
+#define DO_2OP(INSN, FN) DO_2OP_VEC(INSN, FN, NULL)
34
37
+
35
-static int n810_keys[0x80] = {
38
+DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
36
+static const int n810_keys[0x80] = {
39
+DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
37
[0x01] = 16,    /* Q */
40
+DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
38
[0x02] = 37,    /* K */
41
DO_2OP(VMULH_S, vmulhs)
39
[0x03] = 24,    /* O */
42
DO_2OP(VMULH_U, vmulhu)
40
@@ -XXX,XX +XXX,XX @@ static void n8x0_usb_setup(struct n800_s *s)
43
DO_2OP(VRMULH_S, vrmulhs)
41
/* Setup done before the main bootloader starts by some early setup code
44
DO_2OP(VRMULH_U, vrmulhu)
42
* - used when we want to run the main bootloader in emulation. This
45
-DO_2OP(VMAX_S, vmaxs)
43
* isn't documented. */
46
-DO_2OP(VMAX_U, vmaxu)
44
-static uint32_t n800_pinout[104] = {
47
-DO_2OP(VMIN_S, vmins)
45
+static const uint32_t n800_pinout[104] = {
48
-DO_2OP(VMIN_U, vminu)
46
0x080f00d8, 0x00d40808, 0x03080808, 0x080800d0,
49
+DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
47
0x00dc0808, 0x0b0f0f00, 0x080800b4, 0x00c00808,
50
+DO_2OP_VEC(VMAX_U, vmaxu, tcg_gen_gvec_umax)
48
0x08080808, 0x180800c4, 0x00b80000, 0x08080808,
51
+DO_2OP_VEC(VMIN_S, vmins, tcg_gen_gvec_smin)
49
@@ -XXX,XX +XXX,XX @@ static void n8x0_boot_init(void *opaque)
52
+DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
50
#define OMAP_TAG_CBUS        0x4e03
53
DO_2OP(VABD_S, vabds)
51
#define OMAP_TAG_EM_ASIC_BB5    0x4e04
54
DO_2OP(VABD_U, vabdu)
52
55
DO_2OP(VHADD_S, vhadds)
53
-static struct omap_gpiosw_info_s {
54
+static const struct omap_gpiosw_info_s {
55
const char *name;
56
int line;
57
int type;
58
@@ -XXX,XX +XXX,XX @@ static struct omap_gpiosw_info_s {
59
{ NULL }
60
};
61
62
-static struct omap_partition_info_s {
63
+static const struct omap_partition_info_s {
64
uint32_t offset;
65
uint32_t size;
66
int mask;
67
@@ -XXX,XX +XXX,XX @@ static struct omap_partition_info_s {
68
{ 0, 0, 0, NULL }
69
};
70
71
-static uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
72
+static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
73
74
static int n8x0_atag_setup(void *p, int model)
75
{
76
uint8_t *b;
77
uint16_t *w;
78
uint32_t *l;
79
- struct omap_gpiosw_info_s *gpiosw;
80
- struct omap_partition_info_s *partition;
81
+ const struct omap_gpiosw_info_s *gpiosw;
82
+ const struct omap_partition_info_s *partition;
83
const char *tag;
84
85
w = p;
56
--
86
--
57
2.20.1
87
2.25.1
58
88
59
89
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Now that we have all logic in place that we need to handle Hypervisor.framework
3
Silent when compiling with -Wextra:
4
on Apple Silicon systems, let's add CONFIG_HVF for aarch64 as well so that we
5
can build it.
6
4
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
5
../hw/arm/nseries.c:1081:12: warning: missing field 'line' initializer [-Wmissing-field-initializers]
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
6
{ NULL }
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
7
^
8
9
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20221220142520.24094-4-philmd@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210916155404.86958-9-agraf@csgraf.de
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
13
---
15
meson.build | 7 +++++++
14
hw/arm/nseries.c | 10 ++++------
16
target/arm/hvf/meson.build | 3 +++
15
1 file changed, 4 insertions(+), 6 deletions(-)
17
target/arm/meson.build | 2 ++
18
3 files changed, 12 insertions(+)
19
create mode 100644 target/arm/hvf/meson.build
20
16
21
diff --git a/meson.build b/meson.build
17
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/meson.build
19
--- a/hw/arm/nseries.c
24
+++ b/meson.build
20
+++ b/hw/arm/nseries.c
25
@@ -XXX,XX +XXX,XX @@ else
21
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
26
endif
22
"headphone", N8X0_HEADPHONE_GPIO,
27
23
OMAP_GPIOSW_TYPE_CONNECTION | OMAP_GPIOSW_INVERTED,
28
accelerator_targets = { 'CONFIG_KVM': kvm_targets }
24
},
29
+
25
- { NULL }
30
+if cpu in ['aarch64']
26
+ { /* end of list */ }
31
+ accelerator_targets += {
27
}, n810_gpiosw_info[] = {
32
+ 'CONFIG_HVF': ['aarch64-softmmu']
28
{
33
+ }
29
"gps_reset", N810_GPS_RESET_GPIO,
34
+endif
30
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
35
+
31
"slide", N810_SLIDE_GPIO,
36
if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
32
OMAP_GPIOSW_TYPE_COVER | OMAP_GPIOSW_INVERTED,
37
# i386 emulator provides xenpv machine type for multiple architectures
33
},
38
accelerator_targets += {
34
- { NULL }
39
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
35
+ { /* end of list */ }
40
new file mode 100644
36
};
41
index XXXXXXX..XXXXXXX
37
42
--- /dev/null
38
static const struct omap_partition_info_s {
43
+++ b/target/arm/hvf/meson.build
39
@@ -XXX,XX +XXX,XX @@ static const struct omap_partition_info_s {
44
@@ -XXX,XX +XXX,XX @@
40
{ 0x00080000, 0x00200000, 0x0, "kernel" },
45
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
41
{ 0x00280000, 0x00200000, 0x3, "initfs" },
46
+ 'hvf.c',
42
{ 0x00480000, 0x0fb80000, 0x3, "rootfs" },
47
+))
43
-
48
diff --git a/target/arm/meson.build b/target/arm/meson.build
44
- { 0, 0, 0, NULL }
49
index XXXXXXX..XXXXXXX 100644
45
+ { /* end of list */ }
50
--- a/target/arm/meson.build
46
}, n810_part_info[] = {
51
+++ b/target/arm/meson.build
47
{ 0x00000000, 0x00020000, 0x3, "bootloader" },
52
@@ -XXX,XX +XXX,XX @@ arm_softmmu_ss.add(files(
48
{ 0x00020000, 0x00060000, 0x0, "config" },
53
'psci.c',
49
{ 0x00080000, 0x00220000, 0x0, "kernel" },
54
))
50
{ 0x002a0000, 0x00400000, 0x0, "initfs" },
55
51
{ 0x006a0000, 0x0f960000, 0x0, "rootfs" },
56
+subdir('hvf')
52
-
57
+
53
- { 0, 0, 0, NULL }
58
target_arch += {'arm': arm_ss}
54
+ { /* end of list */ }
59
target_softmmu_arch += {'arm': arm_softmmu_ss}
55
};
56
57
static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
60
--
58
--
61
2.20.1
59
2.25.1
62
60
63
61
1
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
1
From: Zhuojia Shen <chaosdefinition@hotmail.com>
2
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
2
3
with zero shift count".
3
In CPUID registers exposed to userspace, some registers were missing
4
4
and some fields were not exposed. This patch aligns exposed ID
5
registers and their fields with what the upstream kernel currently
6
exposes.
7
8
Specifically, the following new ID registers/fields are exposed to
9
userspace:
10
11
ID_AA64PFR1_EL1.BT: bits 3-0
12
ID_AA64PFR1_EL1.MTE: bits 11-8
13
ID_AA64PFR1_EL1.SME: bits 27-24
14
15
ID_AA64ZFR0_EL1.SVEver: bits 3-0
16
ID_AA64ZFR0_EL1.AES: bits 7-4
17
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
18
ID_AA64ZFR0_EL1.BF16: bits 23-20
19
ID_AA64ZFR0_EL1.SHA3: bits 35-32
20
ID_AA64ZFR0_EL1.SM4: bits 43-40
21
ID_AA64ZFR0_EL1.I8MM: bits 47-44
22
ID_AA64ZFR0_EL1.F32MM: bits 55-52
23
ID_AA64ZFR0_EL1.F64MM: bits 59-56
24
25
ID_AA64SMFR0_EL1.F32F32: bit 32
26
ID_AA64SMFR0_EL1.B16F32: bit 34
27
ID_AA64SMFR0_EL1.F16F32: bit 35
28
ID_AA64SMFR0_EL1.I8I32: bits 39-36
29
ID_AA64SMFR0_EL1.F64F64: bit 48
30
ID_AA64SMFR0_EL1.I16I64: bits 55-52
31
ID_AA64SMFR0_EL1.FA64: bit 63
32
33
ID_AA64MMFR0_EL1.ECV: bits 63-60
34
35
ID_AA64MMFR1_EL1.AFP: bits 47-44
36
37
ID_AA64MMFR2_EL1.AT: bits 35-32
38
39
ID_AA64ISAR0_EL1.RNDR: bits 63-60
40
41
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
42
ID_AA64ISAR1_EL1.BF16: bits 47-44
43
ID_AA64ISAR1_EL1.DGH: bits 51-48
44
ID_AA64ISAR1_EL1.I8MM: bits 55-52
45
46
ID_AA64ISAR2_EL1.WFxT: bits 3-0
47
ID_AA64ISAR2_EL1.RPRES: bits 7-4
48
ID_AA64ISAR2_EL1.GPA3: bits 11-8
49
ID_AA64ISAR2_EL1.APA3: bits 15-12
50
51
The code is also refactored to use symbolic names for ID register fields
52
for better readability and maintainability.
53
54
The test case in tests/tcg/aarch64/sysregs.c is also updated to match
55
the intended behavior.
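(Not part of the patch; illustration only.) With HWCAP_CPUID set, Linux traps the MRS and returns only the exported fields, so once the fields above are exposed a userspace program can probe them directly, e.g. the RNDR field of ID_AA64ISAR0_EL1:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_CPUID
    #define HWCAP_CPUID (1 << 11)
    #endif

    int main(void)
    {
        uint64_t isar0;

        if (!(getauxval(AT_HWCAP) & HWCAP_CPUID)) {
            printf("CPUID register emulation not available\n");
            return 0;
        }
        /* Trapped and emulated by the kernel; hidden fields read as zero */
        asm volatile("mrs %0, id_aa64isar0_el1" : "=r"(isar0));
        printf("ID_AA64ISAR0_EL1.RNDR = %u\n",
               (unsigned)((isar0 >> 60) & 0xf));   /* bits [63:60] */
        return 0;
    }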
56
57
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
58
Message-id: DS7PR12MB6309FB585E10772928F14271ACE79@DS7PR12MB6309.namprd12.prod.outlook.com
59
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
60
[PMM: use Sn_n_Cn_Cn_n syntax to work with older assemblers
61
that don't recognize id_aa64isar2_el1 and id_aa64mmfr2_el1]
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
62
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
8
---
63
---
9
target/arm/translate-mve.c | 67 +++++++++++++++++++++++++++++++++-----
64
target/arm/helper.c | 96 +++++++++++++++++++++++++------
10
1 file changed, 59 insertions(+), 8 deletions(-)
65
tests/tcg/aarch64/sysregs.c | 24 ++++++--
11
66
tests/tcg/aarch64/Makefile.target | 7 ++-
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
67
3 files changed, 103 insertions(+), 24 deletions(-)
68
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
70
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
71
--- a/target/arm/helper.c
15
+++ b/target/arm/translate-mve.c
72
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
73
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
17
DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
74
#ifdef CONFIG_USER_ONLY
18
DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
75
static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
19
76
{ .name = "ID_AA64PFR0_EL1",
20
-#define DO_VSHLL(INSN, FN) \
77
- .exported_bits = 0x000f000f00ff0000,
21
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
78
- .fixed_bits = 0x0000000000000011 },
22
- { \
79
+ .exported_bits = R_ID_AA64PFR0_FP_MASK |
23
- static MVEGenTwoOpShiftFn * const fns[] = { \
80
+ R_ID_AA64PFR0_ADVSIMD_MASK |
24
- gen_helper_mve_##FN##b, \
81
+ R_ID_AA64PFR0_SVE_MASK |
25
- gen_helper_mve_##FN##h, \
82
+ R_ID_AA64PFR0_DIT_MASK,
26
- }; \
83
+ .fixed_bits = (0x1u << R_ID_AA64PFR0_EL0_SHIFT) |
27
- return do_2shift(s, a, fns[a->size], false); \
84
+ (0x1u << R_ID_AA64PFR0_EL1_SHIFT) },
28
+#define DO_VSHLL(INSN, FN) \
85
{ .name = "ID_AA64PFR1_EL1",
29
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
86
- .exported_bits = 0x00000000000000f0 },
30
+ { \
87
+ .exported_bits = R_ID_AA64PFR1_BT_MASK |
31
+ static MVEGenTwoOpShiftFn * const fns[] = { \
88
+ R_ID_AA64PFR1_SSBS_MASK |
32
+ gen_helper_mve_##FN##b, \
89
+ R_ID_AA64PFR1_MTE_MASK |
33
+ gen_helper_mve_##FN##h, \
90
+ R_ID_AA64PFR1_SME_MASK },
34
+ }; \
91
{ .name = "ID_AA64PFR*_EL1_RESERVED",
35
+ return do_2shift_vec(s, a, fns[a->size], false, do_gvec_##FN); \
92
- .is_glob = true },
36
}
93
- { .name = "ID_AA64ZFR0_EL1" },
94
+ .is_glob = true },
95
+ { .name = "ID_AA64ZFR0_EL1",
96
+ .exported_bits = R_ID_AA64ZFR0_SVEVER_MASK |
97
+ R_ID_AA64ZFR0_AES_MASK |
98
+ R_ID_AA64ZFR0_BITPERM_MASK |
99
+ R_ID_AA64ZFR0_BFLOAT16_MASK |
100
+ R_ID_AA64ZFR0_SHA3_MASK |
101
+ R_ID_AA64ZFR0_SM4_MASK |
102
+ R_ID_AA64ZFR0_I8MM_MASK |
103
+ R_ID_AA64ZFR0_F32MM_MASK |
104
+ R_ID_AA64ZFR0_F64MM_MASK },
105
+ { .name = "ID_AA64SMFR0_EL1",
106
+ .exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
107
+ R_ID_AA64SMFR0_B16F32_MASK |
108
+ R_ID_AA64SMFR0_F16F32_MASK |
109
+ R_ID_AA64SMFR0_I8I32_MASK |
110
+ R_ID_AA64SMFR0_F64F64_MASK |
111
+ R_ID_AA64SMFR0_I16I64_MASK |
112
+ R_ID_AA64SMFR0_FA64_MASK },
113
{ .name = "ID_AA64MMFR0_EL1",
114
- .fixed_bits = 0x00000000ff000000 },
115
- { .name = "ID_AA64MMFR1_EL1" },
116
+ .exported_bits = R_ID_AA64MMFR0_ECV_MASK,
117
+ .fixed_bits = (0xfu << R_ID_AA64MMFR0_TGRAN64_SHIFT) |
118
+ (0xfu << R_ID_AA64MMFR0_TGRAN4_SHIFT) },
119
+ { .name = "ID_AA64MMFR1_EL1",
120
+ .exported_bits = R_ID_AA64MMFR1_AFP_MASK },
121
+ { .name = "ID_AA64MMFR2_EL1",
122
+ .exported_bits = R_ID_AA64MMFR2_AT_MASK },
123
{ .name = "ID_AA64MMFR*_EL1_RESERVED",
124
- .is_glob = true },
125
+ .is_glob = true },
126
{ .name = "ID_AA64DFR0_EL1",
127
- .fixed_bits = 0x0000000000000006 },
128
- { .name = "ID_AA64DFR1_EL1" },
129
+ .fixed_bits = (0x6u << R_ID_AA64DFR0_DEBUGVER_SHIFT) },
130
+ { .name = "ID_AA64DFR1_EL1" },
131
{ .name = "ID_AA64DFR*_EL1_RESERVED",
132
- .is_glob = true },
133
+ .is_glob = true },
134
{ .name = "ID_AA64AFR*",
135
- .is_glob = true },
136
+ .is_glob = true },
137
{ .name = "ID_AA64ISAR0_EL1",
138
- .exported_bits = 0x00fffffff0fffff0 },
139
+ .exported_bits = R_ID_AA64ISAR0_AES_MASK |
140
+ R_ID_AA64ISAR0_SHA1_MASK |
141
+ R_ID_AA64ISAR0_SHA2_MASK |
142
+ R_ID_AA64ISAR0_CRC32_MASK |
143
+ R_ID_AA64ISAR0_ATOMIC_MASK |
144
+ R_ID_AA64ISAR0_RDM_MASK |
145
+ R_ID_AA64ISAR0_SHA3_MASK |
146
+ R_ID_AA64ISAR0_SM3_MASK |
147
+ R_ID_AA64ISAR0_SM4_MASK |
148
+ R_ID_AA64ISAR0_DP_MASK |
149
+ R_ID_AA64ISAR0_FHM_MASK |
150
+ R_ID_AA64ISAR0_TS_MASK |
151
+ R_ID_AA64ISAR0_RNDR_MASK },
152
{ .name = "ID_AA64ISAR1_EL1",
153
- .exported_bits = 0x000000f0ffffffff },
154
+ .exported_bits = R_ID_AA64ISAR1_DPB_MASK |
155
+ R_ID_AA64ISAR1_APA_MASK |
156
+ R_ID_AA64ISAR1_API_MASK |
157
+ R_ID_AA64ISAR1_JSCVT_MASK |
158
+ R_ID_AA64ISAR1_FCMA_MASK |
159
+ R_ID_AA64ISAR1_LRCPC_MASK |
160
+ R_ID_AA64ISAR1_GPA_MASK |
161
+ R_ID_AA64ISAR1_GPI_MASK |
162
+ R_ID_AA64ISAR1_FRINTTS_MASK |
163
+ R_ID_AA64ISAR1_SB_MASK |
164
+ R_ID_AA64ISAR1_BF16_MASK |
165
+ R_ID_AA64ISAR1_DGH_MASK |
166
+ R_ID_AA64ISAR1_I8MM_MASK },
167
+ { .name = "ID_AA64ISAR2_EL1",
168
+ .exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
169
+ R_ID_AA64ISAR2_RPRES_MASK |
170
+ R_ID_AA64ISAR2_GPA3_MASK |
171
+ R_ID_AA64ISAR2_APA3_MASK },
172
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
173
- .is_glob = true },
174
+ .is_glob = true },
175
};
176
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
177
#endif
178
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
179
#ifdef CONFIG_USER_ONLY
180
static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
181
{ .name = "MIDR_EL1",
182
- .exported_bits = 0x00000000ffffffff },
183
- { .name = "REVIDR_EL1" },
184
+ .exported_bits = R_MIDR_EL1_REVISION_MASK |
185
+ R_MIDR_EL1_PARTNUM_MASK |
186
+ R_MIDR_EL1_ARCHITECTURE_MASK |
187
+ R_MIDR_EL1_VARIANT_MASK |
188
+ R_MIDR_EL1_IMPLEMENTER_MASK },
189
+ { .name = "REVIDR_EL1" },
190
};
191
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
192
#endif
193
diff --git a/tests/tcg/aarch64/sysregs.c b/tests/tcg/aarch64/sysregs.c
194
index XXXXXXX..XXXXXXX 100644
195
--- a/tests/tcg/aarch64/sysregs.c
196
+++ b/tests/tcg/aarch64/sysregs.c
197
@@ -XXX,XX +XXX,XX @@
198
#define HWCAP_CPUID (1 << 11)
199
#endif
37
200
38
+/*
201
+/*
39
+ * For the VSHLL vector helpers, the vece is the size of the input
202
+ * Older assemblers don't recognize newer system register names,
40
+ * (ie MO_8 or MO_16); the helpers want to work in the output size.
203
+ * but we can still access them by the Sn_n_Cn_Cn_n syntax.
41
+ * The shift count can be 0..<input size>, inclusive. (0 is VMOVL.)
42
+ */
204
+ */
43
+static void do_gvec_vshllbs(unsigned vece, uint32_t dofs, uint32_t aofs,
205
+#define SYS_ID_AA64ISAR2_EL1 S3_0_C0_C6_2
44
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
206
+#define SYS_ID_AA64MMFR2_EL1 S3_0_C0_C7_2
45
+{
46
+ unsigned ovece = vece + 1;
47
+ unsigned ibits = vece == MO_8 ? 8 : 16;
48
+ tcg_gen_gvec_shli(ovece, dofs, aofs, ibits, oprsz, maxsz);
49
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
50
+}
51
+
207
+
52
+static void do_gvec_vshllbu(unsigned vece, uint32_t dofs, uint32_t aofs,
208
int failed_bit_count;
53
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
209
54
+{
210
/* Read and print system register `id' value */
55
+ unsigned ovece = vece + 1;
211
@@ -XXX,XX +XXX,XX @@ int main(void)
56
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
212
* minimum valid fields - for the purposes of this check allowed
57
+ ovece == MO_16 ? 0xff : 0xffff, oprsz, maxsz);
213
* to have non-zero values.
58
+ tcg_gen_gvec_shli(ovece, dofs, dofs, shift, oprsz, maxsz);
214
*/
59
+}
215
- get_cpu_reg_check_mask(id_aa64isar0_el1, _m(00ff,ffff,f0ff,fff0));
60
+
216
- get_cpu_reg_check_mask(id_aa64isar1_el1, _m(0000,00f0,ffff,ffff));
61
+static void do_gvec_vshllts(unsigned vece, uint32_t dofs, uint32_t aofs,
217
+ get_cpu_reg_check_mask(id_aa64isar0_el1, _m(f0ff,ffff,f0ff,fff0));
62
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
218
+ get_cpu_reg_check_mask(id_aa64isar1_el1, _m(00ff,f0ff,ffff,ffff));
63
+{
219
+ get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(0000,0000,0000,ffff));
64
+ unsigned ovece = vece + 1;
220
/* TGran4 & TGran64 as pegged to -1 */
65
+ unsigned ibits = vece == MO_8 ? 8 : 16;
221
- get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(0000,0000,ff00,0000));
66
+ if (shift == 0) {
222
- get_cpu_reg_check_zero(id_aa64mmfr1_el1);
67
+ tcg_gen_gvec_sari(ovece, dofs, aofs, ibits, oprsz, maxsz);
223
+ get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(f000,0000,ff00,0000));
68
+ } else {
224
+ get_cpu_reg_check_mask(id_aa64mmfr1_el1, _m(0000,f000,0000,0000));
69
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
225
+ get_cpu_reg_check_mask(SYS_ID_AA64MMFR2_EL1, _m(0000,000f,0000,0000));
70
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
226
/* EL1/EL0 reported as AA64 only */
71
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
227
get_cpu_reg_check_mask(id_aa64pfr0_el1, _m(000f,000f,00ff,0011));
72
+ }
228
- get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0000,00f0));
73
+}
229
+ get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0f00,0fff));
74
+
230
/* all hidden, DebugVer fixed to 0x6 (ARMv8 debug architecture) */
75
+static void do_gvec_vshlltu(unsigned vece, uint32_t dofs, uint32_t aofs,
231
get_cpu_reg_check_mask(id_aa64dfr0_el1, _m(0000,0000,0000,0006));
76
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
232
get_cpu_reg_check_zero(id_aa64dfr1_el1);
77
+{
233
- get_cpu_reg_check_zero(id_aa64zfr0_el1);
78
+ unsigned ovece = vece + 1;
234
+ get_cpu_reg_check_mask(id_aa64zfr0_el1, _m(0ff0,ff0f,00ff,00ff));
79
+ unsigned ibits = vece == MO_8 ? 8 : 16;
235
+#ifdef HAS_ARMV9_SME
80
+ if (shift == 0) {
236
+ get_cpu_reg_check_mask(id_aa64smfr0_el1, _m(80f1,00fd,0000,0000));
81
+ tcg_gen_gvec_shri(ovece, dofs, aofs, ibits, oprsz, maxsz);
237
+#endif
82
+ } else {
238
83
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
239
get_cpu_reg_check_zero(id_aa64afr0_el1);
84
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
240
get_cpu_reg_check_zero(id_aa64afr1_el1);
85
+ tcg_gen_gvec_shri(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
241
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
86
+ }
242
index XXXXXXX..XXXXXXX 100644
87
+}
243
--- a/tests/tcg/aarch64/Makefile.target
88
+
244
+++ b/tests/tcg/aarch64/Makefile.target
89
DO_VSHLL(VSHLL_BS, vshllbs)
245
@@ -XXX,XX +XXX,XX @@ config-cc.mak: Makefile
90
DO_VSHLL(VSHLL_BU, vshllbu)
246
     $(call cc-option,-march=armv8.1-a+sve2, CROSS_CC_HAS_SVE2); \
91
DO_VSHLL(VSHLL_TS, vshllts)
247
     $(call cc-option,-march=armv8.3-a, CROSS_CC_HAS_ARMV8_3); \
248
     $(call cc-option,-mbranch-protection=standard, CROSS_CC_HAS_ARMV8_BTI); \
249
-     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE)) 3> config-cc.mak
250
+     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE); \
251
+     $(call cc-option,-march=armv9-a+sme, CROSS_CC_HAS_ARMV9_SME)) 3> config-cc.mak
252
-include config-cc.mak
253
254
# Pauth Tests
255
@@ -XXX,XX +XXX,XX @@ endif
256
ifneq ($(CROSS_CC_HAS_SVE),)
257
# System Registers Tests
258
AARCH64_TESTS += sysregs
259
+ifneq ($(CROSS_CC_HAS_ARMV9_SME),)
260
+sysregs: CFLAGS+=-march=armv9-a+sme -DHAS_ARMV9_SME
261
+else
262
sysregs: CFLAGS+=-march=armv8.1-a+sve
263
+endif
264
265
# SVE ioctl test
266
AARCH64_TESTS += sve-ioctls
92
--
267
--
93
2.20.1
268
2.25.1
94
95
1
Coverity points out that we aren't checking the return value
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
from curl_easy_setopt().
3
2
4
Fixes: Coverity CID 1458895
3
This function is not used anywhere outside this file,
5
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
4
so we can make the function "static void".
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
7
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210910170656.366592-2-philmd@redhat.com
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Message-id: 20221216214924.4711-2-philmd@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
contrib/elf2dmp/download.c | 22 ++++++++++------------
12
include/hw/arm/smmu-common.h | 3 ---
13
1 file changed, 10 insertions(+), 12 deletions(-)
13
hw/arm/smmu-common.c | 2 +-
14
2 files changed, 1 insertion(+), 4 deletions(-)
14
15
15
diff --git a/contrib/elf2dmp/download.c b/contrib/elf2dmp/download.c
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/contrib/elf2dmp/download.c
18
--- a/include/hw/arm/smmu-common.h
18
+++ b/contrib/elf2dmp/download.c
19
+++ b/include/hw/arm/smmu-common.h
19
@@ -XXX,XX +XXX,XX @@ int download_url(const char *name, const char *url)
20
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
20
goto out_curl;
21
/* Unmap the range of all the notifiers registered to any IOMMU mr */
21
}
22
void smmu_inv_notifiers_all(SMMUState *s);
22
23
23
- curl_easy_setopt(curl, CURLOPT_URL, url);
24
-/* Unmap the range of all the notifiers registered to @mr */
24
- curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL);
25
-void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr);
25
- curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
26
- curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
27
- curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
28
-
26
-
29
- if (curl_easy_perform(curl) != CURLE_OK) {
27
#endif /* HW_ARM_SMMU_COMMON_H */
30
- err = 1;
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
31
- fclose(file);
29
index XXXXXXX..XXXXXXX 100644
32
+ if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
30
--- a/hw/arm/smmu-common.c
33
+ || curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL) != CURLE_OK
31
+++ b/hw/arm/smmu-common.c
34
+ || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
32
@@ -XXX,XX +XXX,XX @@ static void smmu_unmap_notifier_range(IOMMUNotifier *n)
35
+ || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
33
}
36
+ || curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0) != CURLE_OK
34
37
+ || curl_easy_perform(curl) != CURLE_OK) {
35
/* Unmap all notifiers attached to @mr */
38
unlink(name);
36
-inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
39
- goto out_curl;
37
+static void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
40
+ fclose(file);
38
{
41
+ err = 1;
39
IOMMUNotifier *n;
42
+ } else {
43
+ err = fclose(file);
44
}
45
46
- err = fclose(file);
47
-
48
out_curl:
49
curl_easy_cleanup(curl);
50
40
51
--
41
--
52
2.20.1
42
2.25.1
53
43
54
44
1
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
use TCG vector ops when possible.
3
2
3
When using Clang ("Apple clang version 14.0.0 (clang-1400.0.29.202)")
4
and building with -Wall we get:
5
6
hw/arm/smmu-common.c:173:33: warning: static function 'smmu_hash_remove_by_asid_iova' is used in an inline function with external linkage [-Wstatic-in-inline]
7
hw/arm/smmu-common.h:170:1: note: use 'static' to give inline function 'smmu_iotlb_inv_iova' internal linkage
8
void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
9
^
10
static
11
12
Nothing in our code base requires or uses inline functions with external
13
linkage. Some places use internal inlining in the hot path. These
14
two functions are certainly not in any hot path and don't justify
15
any inlining, so these are likely oversights rather than intentional.
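(Not part of the patch; a minimal sketch of the pattern, with made-up names.) A plain 'inline' definition has external linkage in C99/C11, so referencing a static helper from it is what clang flags here:

    static int helper(int x)      /* internal linkage */
    {
        return x + 1;
    }

    inline int api(int x)         /* external linkage: 'inline' without 'static' */
    {
        return helper(x);         /* clang: -Wstatic-in-inline */
    }

    /* The fix below takes the same route: drop 'inline', so the function
     * no longer combines 'inline' with external linkage while calling a
     * static helper. */
    int api_fixed(int x)
    {
        return helper(x);
    }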
16
17
Reported-by: Stefan Weil <sw@weilnetz.de>
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Eric Auger <eric.auger@redhat.com>
22
Message-id: 20221216214924.4711-3-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
7
---
24
---
8
target/arm/translate-mve.c | 26 +++++++++++++++++++++-----
25
hw/arm/smmu-common.c | 13 ++++++-------
9
1 file changed, 21 insertions(+), 5 deletions(-)
26
1 file changed, 6 insertions(+), 7 deletions(-)
10
27
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
12
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
30
--- a/hw/arm/smmu-common.c
14
+++ b/target/arm/translate-mve.c
31
+++ b/hw/arm/smmu-common.c
15
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
32
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
16
return true;
33
g_hash_table_insert(bs->iotlb, key, new);
17
}
34
}
18
35
19
-static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
36
-inline void smmu_iotlb_inv_all(SMMUState *s)
20
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn,
37
+void smmu_iotlb_inv_all(SMMUState *s)
21
+ GVecGen2iFn *vecfn)
22
{
38
{
23
TCGv_ptr qd;
39
trace_smmu_iotlb_inv_all();
24
uint64_t imm;
40
g_hash_table_remove_all(s->iotlb);
25
@@ -XXX,XX +XXX,XX @@ static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
41
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
26
42
((entry->iova & ~info->mask) == info->iova);
27
imm = asimd_imm_const(a->imm, a->cmode, a->op);
28
29
- qd = mve_qreg_ptr(a->qd);
30
- fn(cpu_env, qd, tcg_constant_i64(imm));
31
- tcg_temp_free_ptr(qd);
32
+ if (vecfn && mve_no_predication(s)) {
33
+ vecfn(MO_64, mve_qreg_offset(a->qd), mve_qreg_offset(a->qd),
34
+ imm, 16, 16);
35
+ } else {
36
+ qd = mve_qreg_ptr(a->qd);
37
+ fn(cpu_env, qd, tcg_constant_i64(imm));
38
+ tcg_temp_free_ptr(qd);
39
+ }
40
mve_update_eci(s);
41
return true;
42
}
43
}
43
44
44
+static void gen_gvec_vmovi(unsigned vece, uint32_t dofs, uint32_t aofs,
45
-inline void
45
+ int64_t c, uint32_t oprsz, uint32_t maxsz)
46
-smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
46
+{
47
- uint8_t tg, uint64_t num_pages, uint8_t ttl)
47
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, c);
48
+void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
48
+}
49
+ uint8_t tg, uint64_t num_pages, uint8_t ttl)
49
+
50
static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
51
{
50
{
52
/* Handle decode of cmode/op here between VORR/VBIC/VMOV */
51
/* if tg is not set we use 4KB range invalidation */
53
MVEGenOneOpImmFn *fn;
52
uint8_t granule = tg ? tg * 2 + 10 : 12;
54
+ GVecGen2iFn *vecfn;
53
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
55
54
&info);
56
if ((a->cmode & 1) && a->cmode < 12) {
57
if (a->op) {
58
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
59
* so the VBIC becomes a logical AND operation.
60
*/
61
fn = gen_helper_mve_vandi;
62
+ vecfn = tcg_gen_gvec_andi;
63
} else {
64
fn = gen_helper_mve_vorri;
65
+ vecfn = tcg_gen_gvec_ori;
66
}
67
} else {
68
/* There is one unallocated cmode/op combination in this space */
69
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
70
}
71
/* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
72
fn = gen_helper_mve_vmovi;
73
+ vecfn = gen_gvec_vmovi;
74
}
75
- return do_1imm(s, a, fn);
76
+ return do_1imm(s, a, fn, vecfn);
77
}
55
}
78
56
79
static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
57
-inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
58
+void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
59
{
60
trace_smmu_iotlb_inv_asid(asid);
61
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
62
@@ -XXX,XX +XXX,XX @@ error:
63
*
64
* return 0 on success
65
*/
66
-inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
67
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
68
+int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
69
+ SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
70
{
71
if (!cfg->aa64) {
72
/*
80
--
73
--
81
2.20.1
74
2.25.1
82
75
83
76
diff view generated by jsdifflib
New patch
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
1
2
3
So far the GPT timers were unable to raise IRQs to the processor.
4
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
include/hw/arm/fsl-imx7.h | 5 +++++
10
hw/arm/fsl-imx7.c | 10 ++++++++++
11
2 files changed, 15 insertions(+)
12
13
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/arm/fsl-imx7.h
16
+++ b/include/hw/arm/fsl-imx7.h
17
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
18
FSL_IMX7_USB2_IRQ = 42,
19
FSL_IMX7_USB3_IRQ = 40,
20
21
+ FSL_IMX7_GPT1_IRQ = 55,
22
+ FSL_IMX7_GPT2_IRQ = 54,
23
+ FSL_IMX7_GPT3_IRQ = 53,
24
+ FSL_IMX7_GPT4_IRQ = 52,
25
+
26
FSL_IMX7_WDOG1_IRQ = 78,
27
FSL_IMX7_WDOG2_IRQ = 79,
28
FSL_IMX7_WDOG3_IRQ = 10,
29
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/arm/fsl-imx7.c
32
+++ b/hw/arm/fsl-imx7.c
33
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
34
FSL_IMX7_GPT4_ADDR,
35
};
36
37
+ static const int FSL_IMX7_GPTn_IRQ[FSL_IMX7_NUM_GPTS] = {
38
+ FSL_IMX7_GPT1_IRQ,
39
+ FSL_IMX7_GPT2_IRQ,
40
+ FSL_IMX7_GPT3_IRQ,
41
+ FSL_IMX7_GPT4_IRQ,
42
+ };
43
+
44
s->gpt[i].ccm = IMX_CCM(&s->ccm);
45
sysbus_realize(SYS_BUS_DEVICE(&s->gpt[i]), &error_abort);
46
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpt[i]), 0, FSL_IMX7_GPTn_ADDR[i]);
47
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpt[i]), 0,
48
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
49
+ FSL_IMX7_GPTn_IRQ[i]));
50
}
51
52
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
53
--
54
2.25.1
1
Coverity points out that if the PDB file we're trying to read
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
has a header specifying a block_size of zero then we will
3
end up trying to divide by zero in pdb_ds_read_file().
4
Check for this and fail cleanly instead.
5
2
6
Fixes: Coverity CID 1458869
3
CCM derived clocks will have to be added later.
4
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
9
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
11
Message-id: 20210910170656.366592-3-philmd@redhat.com
12
Message-Id: <20210901143910.17112-3-peter.maydell@linaro.org>
13
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
14
---
8
---
15
contrib/elf2dmp/pdb.c | 4 ++++
9
hw/misc/imx7_ccm.c | 49 +++++++++++++++++++++++++++++++++++++---------
16
1 file changed, 4 insertions(+)
10
1 file changed, 40 insertions(+), 9 deletions(-)
17
11
18
diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
12
diff --git a/hw/misc/imx7_ccm.c b/hw/misc/imx7_ccm.c
19
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
20
--- a/contrib/elf2dmp/pdb.c
14
--- a/hw/misc/imx7_ccm.c
21
+++ b/contrib/elf2dmp/pdb.c
15
+++ b/hw/misc/imx7_ccm.c
22
@@ -XXX,XX +XXX,XX @@ out_symbols:
16
@@ -XXX,XX +XXX,XX @@
23
17
#include "hw/misc/imx7_ccm.h"
24
static int pdb_reader_ds_init(struct pdb_reader *r, PDB_DS_HEADER *hdr)
18
#include "migration/vmstate.h"
19
20
+#include "trace.h"
21
+
22
+#define CKIH_FREQ 24000000 /* 24MHz crystal input */
23
+
24
static void imx7_analog_reset(DeviceState *dev)
25
{
25
{
26
+ if (hdr->block_size == 0) {
26
IMX7AnalogState *s = IMX7_ANALOG(dev);
27
+ return 1;
27
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx7_ccm = {
28
static uint32_t imx7_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
29
{
30
/*
31
- * This function is "consumed" by GPT emulation code, however on
32
- * i.MX7 each GPT block can have their own clock root. This means
33
- * that this functions needs somehow to know requester's identity
34
- * and the way to pass it: be it via additional IMXClk constants
35
- * or by adding another argument to this method needs to be
36
- * figured out
37
+ * This function is "consumed" by GPT emulation code. Some clocks
38
+ * have fixed frequencies and we can provide requested frequency
39
+ * easily. However for CCM provided clocks (like IPG) each GPT
40
+ * timer can have its own clock root.
41
+ * This means we need additional information when calling this
42
+ * function to know the requester's identity.
43
*/
44
- qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Not implemented\n",
45
- TYPE_IMX7_CCM, __func__);
46
- return 0;
47
+ uint32_t freq = 0;
48
+
49
+ switch (clock) {
50
+ case CLK_NONE:
51
+ break;
52
+ case CLK_32k:
53
+ freq = CKIL_FREQ;
54
+ break;
55
+ case CLK_HIGH:
56
+ freq = CKIH_FREQ;
57
+ break;
58
+ case CLK_IPG:
59
+ case CLK_IPG_HIGH:
60
+ /*
61
+ * For now we don't have a way to figure out the device this
62
+ * function is called for. Until then the IPG derived clocks
63
+ * are left unimplemented.
64
+ */
65
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Clock %d Not implemented\n",
66
+ TYPE_IMX7_CCM, __func__, clock);
67
+ break;
68
+ default:
69
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
70
+ TYPE_IMX7_CCM, __func__, clock);
71
+ break;
28
+ }
72
+ }
29
+
73
+
30
memset(r->file_used, 0, sizeof(r->file_used));
74
+ trace_ccm_clock_freq(clock, freq);
31
r->ds.header = hdr;
75
+
32
r->ds.toc = pdb_ds_read(hdr, (uint32_t *)((uint8_t *)hdr +
76
+ return freq;
77
}
78
79
static void imx7_ccm_class_init(ObjectClass *klass, void *data)
33
--
80
--
34
2.20.1
81
2.25.1
35
36
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
With Apple Silicon available to the masses, it's a good time to add support
3
The i.MX6UL doesn't support CLK_HIGH ou CLK_HIGH_DIV clock source.
4
for driving its virtualization extensions from QEMU.
5
4
6
This patch adds all necessary architecture specific code to get basic VMs
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
working, including save/restore.
8
9
Known limitations:
10
11
- WFI handling is missing (follows in later patch)
12
- No watchpoint/breakpoint support
13
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
15
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20210916155404.86958-5-agraf@csgraf.de
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
8
---
21
meson.build | 1 +
9
include/hw/timer/imx_gpt.h | 1 +
22
include/sysemu/hvf_int.h | 10 +-
10
hw/arm/fsl-imx6ul.c | 2 +-
23
accel/hvf/hvf-accel-ops.c | 9 +
11
hw/misc/imx6ul_ccm.c | 6 ------
24
target/arm/hvf/hvf.c | 794 ++++++++++++++++++++++++++++++++++++
12
hw/timer/imx_gpt.c | 25 +++++++++++++++++++++++++
25
target/i386/hvf/hvf.c | 5 +
13
4 files changed, 27 insertions(+), 7 deletions(-)
26
MAINTAINERS | 5 +
27
target/arm/hvf/trace-events | 10 +
28
7 files changed, 833 insertions(+), 1 deletion(-)
29
create mode 100644 target/arm/hvf/hvf.c
30
create mode 100644 target/arm/hvf/trace-events
31
14
32
diff --git a/meson.build b/meson.build
15
diff --git a/include/hw/timer/imx_gpt.h b/include/hw/timer/imx_gpt.h
33
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
34
--- a/meson.build
17
--- a/include/hw/timer/imx_gpt.h
35
+++ b/meson.build
18
+++ b/include/hw/timer/imx_gpt.h
36
@@ -XXX,XX +XXX,XX @@ if have_system or have_user
19
@@ -XXX,XX +XXX,XX @@
37
'accel/tcg',
20
#define TYPE_IMX25_GPT "imx25.gpt"
38
'hw/core',
21
#define TYPE_IMX31_GPT "imx31.gpt"
39
'target/arm',
22
#define TYPE_IMX6_GPT "imx6.gpt"
40
+ 'target/arm/hvf',
23
+#define TYPE_IMX6UL_GPT "imx6ul.gpt"
41
'target/hppa',
24
#define TYPE_IMX7_GPT "imx7.gpt"
42
'target/i386',
25
43
'target/i386/kvm',
26
#define TYPE_IMX_GPT TYPE_IMX25_GPT
44
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
27
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
45
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
46
--- a/include/sysemu/hvf_int.h
29
--- a/hw/arm/fsl-imx6ul.c
47
+++ b/include/sysemu/hvf_int.h
30
+++ b/hw/arm/fsl-imx6ul.c
48
@@ -XXX,XX +XXX,XX @@
31
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
49
#ifndef HVF_INT_H
32
*/
50
#define HVF_INT_H
33
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
51
34
snprintf(name, NAME_SIZE, "gpt%d", i);
52
+#ifdef __aarch64__
35
- object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX7_GPT);
53
+#include <Hypervisor/Hypervisor.h>
36
+ object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX6UL_GPT);
54
+#else
37
}
55
#include <Hypervisor/hv.h>
38
56
+#endif
39
/*
57
40
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
58
/* hvf_slot flags */
41
index XXXXXXX..XXXXXXX 100644
59
#define HVF_SLOT_LOG (1 << 0)
42
--- a/hw/misc/imx6ul_ccm.c
60
@@ -XXX,XX +XXX,XX @@ struct HVFState {
43
+++ b/hw/misc/imx6ul_ccm.c
61
int num_slots;
44
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6ul_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
62
45
case CLK_32k:
63
hvf_vcpu_caps *hvf_caps;
46
freq = CKIL_FREQ;
64
+ uint64_t vtimer_offset;
47
break;
48
- case CLK_HIGH:
49
- freq = CKIH_FREQ;
50
- break;
51
- case CLK_HIGH_DIV:
52
- freq = CKIH_FREQ / 8;
53
- break;
54
default:
55
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
56
TYPE_IMX6UL_CCM, __func__, clock);
57
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/timer/imx_gpt.c
60
+++ b/hw/timer/imx_gpt.c
61
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx6_gpt_clocks[] = {
62
CLK_HIGH, /* 111 reference clock */
65
};
63
};
66
extern HVFState *hvf_state;
64
67
65
+static const IMXClk imx6ul_gpt_clocks[] = {
68
struct hvf_vcpu_state {
66
+ CLK_NONE, /* 000 No clock source */
69
- int fd;
67
+ CLK_IPG, /* 001 ipg_clk, 532MHz*/
70
+ uint64_t fd;
68
+ CLK_IPG_HIGH, /* 010 ipg_clk_highfreq */
71
+ void *exit;
69
+ CLK_EXT, /* 011 External clock */
72
+ bool vtimer_masked;
70
+ CLK_32k, /* 100 ipg_clk_32k */
73
};
71
+ CLK_NONE, /* 101 not defined */
74
72
+ CLK_NONE, /* 110 not defined */
75
void assert_hvf_ok(hv_return_t ret);
73
+ CLK_NONE, /* 111 not defined */
76
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *);
77
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
78
int hvf_put_registers(CPUState *);
79
int hvf_get_registers(CPUState *);
80
+void hvf_kick_vcpu_thread(CPUState *cpu);
81
82
#endif
83
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/accel/hvf/hvf-accel-ops.c
86
+++ b/accel/hvf/hvf-accel-ops.c
87
@@ -XXX,XX +XXX,XX @@
88
89
HVFState *hvf_state;
90
91
+#ifdef __aarch64__
92
+#define HV_VM_DEFAULT NULL
93
+#endif
94
+
95
/* Memory slots */
96
97
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
98
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
99
pthread_sigmask(SIG_BLOCK, NULL, &set);
100
sigdelset(&set, SIG_IPI);
101
102
+#ifdef __aarch64__
103
+ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
104
+#else
105
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
106
+#endif
107
cpu->vcpu_dirty = 1;
108
assert_hvf_ok(r);
109
110
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
111
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
112
113
ops->create_vcpu_thread = hvf_start_vcpu_thread;
114
+ ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
115
116
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
117
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
118
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
119
new file mode 100644
120
index XXXXXXX..XXXXXXX
121
--- /dev/null
122
+++ b/target/arm/hvf/hvf.c
123
@@ -XXX,XX +XXX,XX @@
124
+/*
125
+ * QEMU Hypervisor.framework support for Apple Silicon
126
+
127
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
128
+ *
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
130
+ * See the COPYING file in the top-level directory.
131
+ *
132
+ */
133
+
134
+#include "qemu/osdep.h"
135
+#include "qemu-common.h"
136
+#include "qemu/error-report.h"
137
+
138
+#include "sysemu/runstate.h"
139
+#include "sysemu/hvf.h"
140
+#include "sysemu/hvf_int.h"
141
+#include "sysemu/hw_accel.h"
142
+
143
+#include <mach/mach_time.h>
144
+
145
+#include "exec/address-spaces.h"
146
+#include "hw/irq.h"
147
+#include "qemu/main-loop.h"
148
+#include "sysemu/cpus.h"
149
+#include "target/arm/cpu.h"
150
+#include "target/arm/internals.h"
151
+#include "trace/trace-target_arm_hvf.h"
152
+#include "migration/vmstate.h"
153
+
154
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
155
+ ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
156
+#define PL1_WRITE_MASK 0x4
157
+
158
+#define SYSREG(op0, op1, crn, crm, op2) \
159
+ ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
160
+#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
161
+#define SYSREG_OSLAR_EL1 SYSREG(2, 0, 1, 0, 4)
162
+#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
163
+#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
164
+#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
165
+
166
+#define WFX_IS_WFE (1 << 0)
167
+
168
+#define TMR_CTL_ENABLE (1 << 0)
169
+#define TMR_CTL_IMASK (1 << 1)
170
+#define TMR_CTL_ISTATUS (1 << 2)
171
+
172
+typedef struct HVFVTimer {
173
+ /* Vtimer value during migration and paused state */
174
+ uint64_t vtimer_val;
175
+} HVFVTimer;
176
+
177
+static HVFVTimer vtimer;
178
+
179
+struct hvf_reg_match {
180
+ int reg;
181
+ uint64_t offset;
182
+};
74
+};
183
+
75
+
184
+static const struct hvf_reg_match hvf_reg_match[] = {
76
static const IMXClk imx7_gpt_clocks[] = {
185
+ { HV_REG_X0, offsetof(CPUARMState, xregs[0]) },
77
CLK_NONE, /* 000 No clock source */
186
+ { HV_REG_X1, offsetof(CPUARMState, xregs[1]) },
78
CLK_IPG, /* 001 ipg_clk, 532MHz*/
187
+ { HV_REG_X2, offsetof(CPUARMState, xregs[2]) },
79
@@ -XXX,XX +XXX,XX @@ static void imx6_gpt_init(Object *obj)
188
+ { HV_REG_X3, offsetof(CPUARMState, xregs[3]) },
80
s->clocks = imx6_gpt_clocks;
189
+ { HV_REG_X4, offsetof(CPUARMState, xregs[4]) },
81
}
190
+ { HV_REG_X5, offsetof(CPUARMState, xregs[5]) },
82
191
+ { HV_REG_X6, offsetof(CPUARMState, xregs[6]) },
83
+static void imx6ul_gpt_init(Object *obj)
192
+ { HV_REG_X7, offsetof(CPUARMState, xregs[7]) },
84
+{
193
+ { HV_REG_X8, offsetof(CPUARMState, xregs[8]) },
85
+ IMXGPTState *s = IMX_GPT(obj);
194
+ { HV_REG_X9, offsetof(CPUARMState, xregs[9]) },
86
+
195
+ { HV_REG_X10, offsetof(CPUARMState, xregs[10]) },
87
+ s->clocks = imx6ul_gpt_clocks;
196
+ { HV_REG_X11, offsetof(CPUARMState, xregs[11]) },
88
+}
197
+ { HV_REG_X12, offsetof(CPUARMState, xregs[12]) },
89
+
198
+ { HV_REG_X13, offsetof(CPUARMState, xregs[13]) },
90
static void imx7_gpt_init(Object *obj)
199
+ { HV_REG_X14, offsetof(CPUARMState, xregs[14]) },
91
{
200
+ { HV_REG_X15, offsetof(CPUARMState, xregs[15]) },
92
IMXGPTState *s = IMX_GPT(obj);
201
+ { HV_REG_X16, offsetof(CPUARMState, xregs[16]) },
93
@@ -XXX,XX +XXX,XX @@ static const TypeInfo imx6_gpt_info = {
202
+ { HV_REG_X17, offsetof(CPUARMState, xregs[17]) },
94
.instance_init = imx6_gpt_init,
203
+ { HV_REG_X18, offsetof(CPUARMState, xregs[18]) },
95
};
204
+ { HV_REG_X19, offsetof(CPUARMState, xregs[19]) },
96
205
+ { HV_REG_X20, offsetof(CPUARMState, xregs[20]) },
97
+static const TypeInfo imx6ul_gpt_info = {
206
+ { HV_REG_X21, offsetof(CPUARMState, xregs[21]) },
98
+ .name = TYPE_IMX6UL_GPT,
207
+ { HV_REG_X22, offsetof(CPUARMState, xregs[22]) },
99
+ .parent = TYPE_IMX25_GPT,
208
+ { HV_REG_X23, offsetof(CPUARMState, xregs[23]) },
100
+ .instance_init = imx6ul_gpt_init,
209
+ { HV_REG_X24, offsetof(CPUARMState, xregs[24]) },
210
+ { HV_REG_X25, offsetof(CPUARMState, xregs[25]) },
211
+ { HV_REG_X26, offsetof(CPUARMState, xregs[26]) },
212
+ { HV_REG_X27, offsetof(CPUARMState, xregs[27]) },
213
+ { HV_REG_X28, offsetof(CPUARMState, xregs[28]) },
214
+ { HV_REG_X29, offsetof(CPUARMState, xregs[29]) },
215
+ { HV_REG_X30, offsetof(CPUARMState, xregs[30]) },
216
+ { HV_REG_PC, offsetof(CPUARMState, pc) },
217
+};
101
+};
218
+
102
+
219
+static const struct hvf_reg_match hvf_fpreg_match[] = {
103
static const TypeInfo imx7_gpt_info = {
220
+ { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) },
104
.name = TYPE_IMX7_GPT,
221
+ { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) },
105
.parent = TYPE_IMX25_GPT,
222
+ { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) },
106
@@ -XXX,XX +XXX,XX @@ static void imx_gpt_register_types(void)
223
+ { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) },
107
type_register_static(&imx25_gpt_info);
224
+ { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) },
108
type_register_static(&imx31_gpt_info);
225
+ { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) },
109
type_register_static(&imx6_gpt_info);
226
+ { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) },
110
+ type_register_static(&imx6ul_gpt_info);
227
+ { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) },
111
type_register_static(&imx7_gpt_info);
228
+ { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) },
229
+ { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) },
230
+ { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
231
+ { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
232
+ { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
233
+ { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
234
+ { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
235
+ { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
236
+ { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
237
+ { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
238
+ { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
239
+ { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
240
+ { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
241
+ { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
242
+ { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
243
+ { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
244
+ { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
245
+ { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
246
+ { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
247
+ { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
248
+ { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
249
+ { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
250
+ { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
251
+ { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
252
+};
253
+
254
+struct hvf_sreg_match {
255
+ int reg;
256
+ uint32_t key;
257
+ uint32_t cp_idx;
258
+};
259
+
260
+static struct hvf_sreg_match hvf_sreg_match[] = {
261
+ { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
262
+ { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
263
+ { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
264
+ { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
265
+
266
+ { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
267
+ { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
268
+ { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
269
+ { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
270
+
271
+ { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
272
+ { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
273
+ { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
274
+ { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
275
+
276
+ { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
277
+ { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
278
+ { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
279
+ { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
280
+
281
+ { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
282
+ { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
283
+ { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
284
+ { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
285
+
286
+ { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
287
+ { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
288
+ { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
289
+ { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
290
+
291
+ { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
292
+ { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
293
+ { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
294
+ { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
295
+
296
+ { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
297
+ { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
298
+ { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
299
+ { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
300
+
301
+ { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
302
+ { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
303
+ { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
304
+ { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
305
+
306
+ { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
307
+ { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
308
+ { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
309
+ { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
310
+
311
+ { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
312
+ { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
313
+ { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
314
+ { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
315
+
316
+ { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
317
+ { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
318
+ { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
319
+ { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
320
+
321
+ { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
322
+ { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
323
+ { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
324
+ { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
325
+
326
+ { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
327
+ { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
328
+ { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
329
+ { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
330
+
331
+ { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
332
+ { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
333
+ { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
334
+ { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
335
+
336
+ { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
337
+ { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
338
+ { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
339
+ { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
340
+
341
+#ifdef SYNC_NO_RAW_REGS
342
+ /*
343
+ * The registers below are manually synced on init because they are
344
+ * marked as NO_RAW. We still list them to make number space sync easier.
345
+ */
346
+ { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
347
+ { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
348
+ { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
349
+ { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
350
+#endif
351
+ { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
352
+ { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
353
+ { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
354
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
355
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
356
+#ifdef SYNC_NO_MMFR0
357
+ /* We keep the hardware MMFR0 around. HW limits are there anyway */
358
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
359
+#endif
360
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
361
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
362
+
363
+ { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
364
+ { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
365
+ { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
366
+ { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
367
+ { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
368
+ { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
369
+
370
+ { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
371
+ { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
372
+ { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
373
+ { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
374
+ { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
375
+ { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
376
+ { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
377
+ { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
378
+ { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
379
+ { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
380
+
381
+ { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
382
+ { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
383
+ { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
384
+ { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
385
+ { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
386
+ { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
387
+ { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
388
+ { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
389
+ { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
390
+ { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
391
+ { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
392
+ { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
393
+ { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
394
+ { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
395
+ { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
396
+ { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
397
+ { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
398
+ { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
399
+ { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
400
+ { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
401
+};
402
+
403
+int hvf_get_registers(CPUState *cpu)
404
+{
405
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
406
+ CPUARMState *env = &arm_cpu->env;
407
+ hv_return_t ret;
408
+ uint64_t val;
409
+ hv_simd_fp_uchar16_t fpval;
410
+ int i;
411
+
412
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
413
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
414
+ *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
415
+ assert_hvf_ok(ret);
416
+ }
417
+
418
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
419
+ ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
420
+ &fpval);
421
+ memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
422
+ assert_hvf_ok(ret);
423
+ }
424
+
425
+ val = 0;
426
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
427
+ assert_hvf_ok(ret);
428
+ vfp_set_fpcr(env, val);
429
+
430
+ val = 0;
431
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
432
+ assert_hvf_ok(ret);
433
+ vfp_set_fpsr(env, val);
434
+
435
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
436
+ assert_hvf_ok(ret);
437
+ pstate_write(env, val);
438
+
439
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
440
+ if (hvf_sreg_match[i].cp_idx == -1) {
441
+ continue;
442
+ }
443
+
444
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
445
+ assert_hvf_ok(ret);
446
+
447
+ arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
448
+ }
449
+ assert(write_list_to_cpustate(arm_cpu));
450
+
451
+ aarch64_restore_sp(env, arm_current_el(env));
452
+
453
+ return 0;
454
+}
455
+
456
+int hvf_put_registers(CPUState *cpu)
457
+{
458
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
459
+ CPUARMState *env = &arm_cpu->env;
460
+ hv_return_t ret;
461
+ uint64_t val;
462
+ hv_simd_fp_uchar16_t fpval;
463
+ int i;
464
+
465
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
466
+ val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
467
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
468
+ assert_hvf_ok(ret);
469
+ }
470
+
471
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
472
+ memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
473
+ ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
474
+ fpval);
475
+ assert_hvf_ok(ret);
476
+ }
477
+
478
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
479
+ assert_hvf_ok(ret);
480
+
481
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
482
+ assert_hvf_ok(ret);
483
+
484
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
485
+ assert_hvf_ok(ret);
486
+
487
+ aarch64_save_sp(env, arm_current_el(env));
488
+
489
+ assert(write_cpustate_to_list(arm_cpu, false));
490
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
491
+ if (hvf_sreg_match[i].cp_idx == -1) {
492
+ continue;
493
+ }
494
+
495
+ val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
496
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
497
+ assert_hvf_ok(ret);
498
+ }
499
+
500
+ ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
501
+ assert_hvf_ok(ret);
502
+
503
+ return 0;
504
+}
505
+
506
+static void flush_cpu_state(CPUState *cpu)
507
+{
508
+ if (cpu->vcpu_dirty) {
509
+ hvf_put_registers(cpu);
510
+ cpu->vcpu_dirty = false;
511
+ }
512
+}
513
+
514
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
515
+{
516
+ hv_return_t r;
517
+
518
+ flush_cpu_state(cpu);
519
+
520
+ if (rt < 31) {
521
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
522
+ assert_hvf_ok(r);
523
+ }
524
+}
525
+
526
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
527
+{
528
+ uint64_t val = 0;
529
+ hv_return_t r;
530
+
531
+ flush_cpu_state(cpu);
532
+
533
+ if (rt < 31) {
534
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
535
+ assert_hvf_ok(r);
536
+ }
537
+
538
+ return val;
539
+}
540
+
541
+void hvf_arch_vcpu_destroy(CPUState *cpu)
542
+{
543
+}
544
+
545
+int hvf_arch_init_vcpu(CPUState *cpu)
546
+{
547
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
548
+ CPUARMState *env = &arm_cpu->env;
549
+ uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
550
+ uint32_t sregs_cnt = 0;
551
+ uint64_t pfr;
552
+ hv_return_t ret;
553
+ int i;
554
+
555
+ env->aarch64 = 1;
556
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
557
+
558
+ /* Allocate enough space for our sysreg sync */
559
+ arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
560
+ sregs_match_len);
561
+ arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
562
+ sregs_match_len);
563
+ arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
564
+ arm_cpu->cpreg_vmstate_indexes,
565
+ sregs_match_len);
566
+ arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
567
+ arm_cpu->cpreg_vmstate_values,
568
+ sregs_match_len);
569
+
570
+ memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
571
+
572
+ /* Populate cp list for all known sysregs */
573
+ for (i = 0; i < sregs_match_len; i++) {
574
+ const ARMCPRegInfo *ri;
575
+ uint32_t key = hvf_sreg_match[i].key;
576
+
577
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
578
+ if (ri) {
579
+ assert(!(ri->type & ARM_CP_NO_RAW));
580
+ hvf_sreg_match[i].cp_idx = sregs_cnt;
581
+ arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
582
+ } else {
583
+ hvf_sreg_match[i].cp_idx = -1;
584
+ }
585
+ }
586
+ arm_cpu->cpreg_array_len = sregs_cnt;
587
+ arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
588
+
589
+ assert(write_cpustate_to_list(arm_cpu, false));
590
+
591
+ /* Set CP_NO_RAW system registers on init */
592
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
593
+ arm_cpu->midr);
594
+ assert_hvf_ok(ret);
595
+
596
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
597
+ arm_cpu->mp_affinity);
598
+ assert_hvf_ok(ret);
599
+
600
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
601
+ assert_hvf_ok(ret);
602
+ pfr |= env->gicv3state ? (1 << 24) : 0;
603
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
604
+ assert_hvf_ok(ret);
605
+
606
+ /* We're limited to underlying hardware caps, override internal versions */
607
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
608
+ &arm_cpu->isar.id_aa64mmfr0);
609
+ assert_hvf_ok(ret);
610
+
611
+ return 0;
612
+}
613
+
614
+void hvf_kick_vcpu_thread(CPUState *cpu)
615
+{
616
+ hv_vcpus_exit(&cpu->hvf->fd, 1);
617
+}
618
+
619
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
+ uint32_t syndrome)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+
+ cpu->exception_index = excp;
+ env->exception.target_el = 1;
+ env->exception.syndrome = syndrome;
+
+ arm_cpu_do_interrupt(cpu);
+}
+
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+ uint64_t val = 0;
+
+ switch (reg) {
+ case SYSREG_CNTPCT_EL0:
+ val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
+ gt_cntfrq_period_ns(arm_cpu);
+ break;
+ case SYSREG_OSLSR_EL1:
+ val = env->cp15.oslsr_el1;
+ break;
+ case SYSREG_OSDLR_EL1:
+ /* Dummy register */
+ break;
+ default:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unhandled_sysreg_read(env->pc, reg,
+ (reg >> 20) & 0x3,
+ (reg >> 14) & 0x7,
+ (reg >> 10) & 0xf,
+ (reg >> 1) & 0xf,
+ (reg >> 17) & 0x7);
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ return 1;
+ }
+
+ trace_hvf_sysreg_read(reg,
+ (reg >> 20) & 0x3,
+ (reg >> 14) & 0x7,
+ (reg >> 10) & 0xf,
+ (reg >> 1) & 0xf,
+ (reg >> 17) & 0x7,
+ val);
+ hvf_set_reg(cpu, rt, val);
+
+ return 0;
+}
+
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+
+ trace_hvf_sysreg_write(reg,
+ (reg >> 20) & 0x3,
+ (reg >> 14) & 0x7,
+ (reg >> 10) & 0xf,
+ (reg >> 1) & 0xf,
+ (reg >> 17) & 0x7,
+ val);
+
+ switch (reg) {
+ case SYSREG_OSLAR_EL1:
+ env->cp15.oslsr_el1 = val & 1;
+ break;
+ case SYSREG_OSDLR_EL1:
+ /* Dummy register */
+ break;
+ default:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unhandled_sysreg_write(env->pc, reg,
+ (reg >> 20) & 0x3,
+ (reg >> 14) & 0x7,
+ (reg >> 10) & 0xf,
+ (reg >> 1) & 0xf,
+ (reg >> 17) & 0x7);
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ return 1;
+ }
+
+ return 0;
+}
+
+static int hvf_inject_interrupts(CPUState *cpu)
+{
+ if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
+ trace_hvf_inject_fiq();
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
+ true);
+ }
+
+ if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+ trace_hvf_inject_irq();
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
+ true);
+ }
+
+ return 0;
+}
+
+static uint64_t hvf_vtimer_val_raw(void)
+{
+ /*
+ * mach_absolute_time() returns the vtimer value without the VM
+ * offset that we define. Add our own offset on top.
+ */
+ return mach_absolute_time() - hvf_state->vtimer_offset;
+}
+
+static void hvf_sync_vtimer(CPUState *cpu)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ hv_return_t r;
+ uint64_t ctl;
+ bool irq_state;
+
+ if (!cpu->hvf->vtimer_masked) {
+ /* We will get notified on vtimer changes by hvf, nothing to do */
+ return;
+ }
+
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+ assert_hvf_ok(r);
+
+ irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
+ (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
+
+ if (!irq_state) {
+ /* Timer no longer asserting, we can unmask it */
+ hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
+ cpu->hvf->vtimer_masked = false;
+ }
+}
+
+int hvf_vcpu_exec(CPUState *cpu)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+ hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
+ hv_return_t r;
+ bool advance_pc = false;
+
+ if (hvf_inject_interrupts(cpu)) {
+ return EXCP_INTERRUPT;
+ }
+
+ if (cpu->halted) {
+ return EXCP_HLT;
+ }
+
+ flush_cpu_state(cpu);
+
+ qemu_mutex_unlock_iothread();
+ assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
+
+ /* handle VMEXIT */
+ uint64_t exit_reason = hvf_exit->reason;
+ uint64_t syndrome = hvf_exit->exception.syndrome;
+ uint32_t ec = syn_get_ec(syndrome);
+
+ qemu_mutex_lock_iothread();
+ switch (exit_reason) {
+ case HV_EXIT_REASON_EXCEPTION:
+ /* This is the main one, handle below. */
+ break;
+ case HV_EXIT_REASON_VTIMER_ACTIVATED:
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
+ cpu->hvf->vtimer_masked = true;
+ return 0;
+ case HV_EXIT_REASON_CANCELED:
+ /* we got kicked, no exit to process */
+ return 0;
+ default:
+ assert(0);
+ }
+
+ hvf_sync_vtimer(cpu);
+
+ switch (ec) {
+ case EC_DATAABORT: {
+ bool isv = syndrome & ARM_EL_ISV;
+ bool iswrite = (syndrome >> 6) & 1;
+ bool s1ptw = (syndrome >> 7) & 1;
+ uint32_t sas = (syndrome >> 22) & 3;
+ uint32_t len = 1 << sas;
+ uint32_t srt = (syndrome >> 16) & 0x1f;
+ uint64_t val = 0;
+
+ trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
+ hvf_exit->exception.physical_address, isv,
+ iswrite, s1ptw, len, srt);
+
+ assert(isv);
+
+ if (iswrite) {
+ val = hvf_get_reg(cpu, srt);
+ address_space_write(&address_space_memory,
+ hvf_exit->exception.physical_address,
+ MEMTXATTRS_UNSPECIFIED, &val, len);
+ } else {
+ address_space_read(&address_space_memory,
+ hvf_exit->exception.physical_address,
+ MEMTXATTRS_UNSPECIFIED, &val, len);
+ hvf_set_reg(cpu, srt, val);
+ }
+
+ advance_pc = true;
+ break;
+ }
+ case EC_SYSTEMREGISTERTRAP: {
+ bool isread = (syndrome >> 0) & 1;
+ uint32_t rt = (syndrome >> 5) & 0x1f;
+ uint32_t reg = syndrome & SYSREG_MASK;
+ uint64_t val;
+ int ret = 0;
+
+ if (isread) {
+ ret = hvf_sysreg_read(cpu, reg, rt);
+ } else {
+ val = hvf_get_reg(cpu, rt);
+ ret = hvf_sysreg_write(cpu, reg, val);
+ }
+
+ advance_pc = !ret;
+ break;
+ }
+ case EC_WFX_TRAP:
+ advance_pc = true;
+ break;
+ case EC_AA64_HVC:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unknown_hvc(env->xregs[0]);
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
+ env->xregs[0] = -1;
+ break;
+ case EC_AA64_SMC:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unknown_smc(env->xregs[0]);
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ break;
+ default:
+ cpu_synchronize_state(cpu);
+ trace_hvf_exit(syndrome, ec, env->pc);
+ error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
+ }
+
+ if (advance_pc) {
+ uint64_t pc;
+
+ flush_cpu_state(cpu);
+
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+ assert_hvf_ok(r);
+ pc += 4;
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+ assert_hvf_ok(r);
+ }
+
+ return 0;
+}
+
+static const VMStateDescription vmstate_hvf_vtimer = {
+ .name = "hvf-vtimer",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT64(vtimer_val, HVFVTimer),
+ VMSTATE_END_OF_LIST()
+ },
+};
+
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
+{
+ HVFVTimer *s = opaque;
+
+ if (running) {
+ /* Update vtimer offset on all CPUs */
+ hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
+ cpu_synchronize_all_states();
+ } else {
+ /* Remember vtimer value on every pause */
+ s->vtimer_val = hvf_vtimer_val_raw();
+ }
+}
+
+int hvf_arch_init(void)
+{
+ hvf_state->vtimer_offset = mach_absolute_time();
+ vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
+ qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
+ return 0;
+}
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
 return env->apic_bus_freq != 0;
 }

+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+ cpus_kick_thread(cpu);
+}
+
 int hvf_arch_init(void)
 {
 return 0;
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: accel/accel-*.c
 F: accel/Makefile.objs
 F: accel/stubs/Makefile.objs

+Apple Silicon HVF CPUs
+M: Alexander Graf <agraf@csgraf.de>
+S: Maintained
+F: target/arm/hvf/
+
 X86 HVF CPUs
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <r.bolshakov@yadro.com>
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hvf/trace-events
@@ -XXX,XX +XXX,XX @@
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_inject_fiq(void) "injecting FIQ"
+hvf_inject_irq(void) "injecting IRQ"
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
+hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
--
2.20.1
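Editor's note: the repeated (reg >> N) & M expressions in hvf_sysreg_read()/hvf_sysreg_write() and the hvf_sysreg_* trace events above decode the op0/op1/CRn/CRm/op2 fields of a trapped MRS/MSR access from the syndrome-derived register encoding. The following standalone sketch (not part of the patch, example value chosen for illustration) applies the same shifts and masks to the encoding of CNTPCT_EL0 (op0=3, op1=3, CRn=14, CRm=0, op2=1), which the patch handles as SYSREG_CNTPCT_EL0:

#include <stdint.h>
#include <stdio.h>

/* Decode a trapped-sysreg encoding using the same field layout as the
 * hvf_sysreg_* tracepoints above. */
static void decode_sysreg_trap(uint32_t reg)
{
    uint32_t op0 = (reg >> 20) & 0x3;
    uint32_t op2 = (reg >> 17) & 0x7;
    uint32_t op1 = (reg >> 14) & 0x7;
    uint32_t crn = (reg >> 10) & 0xf;
    uint32_t crm = (reg >> 1) & 0xf;

    printf("sysreg 0x%08x: op0=%u op1=%u crn=%u crm=%u op2=%u\n",
           reg, op0, op1, crn, crm, op2);
}

int main(void)
{
    /* CNTPCT_EL0: op0=3, op1=3, CRn=14, CRm=0, op2=1 */
    uint32_t reg = (3u << 20) | (1u << 17) | (3u << 14) | (14u << 10) | (0u << 1);

    decode_sysreg_trap(reg);
    return 0;
}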
From: Jean-Christophe Dubois <jcd@tribudubois.net>

IRQs were not associated with the various GPIO devices inside i.MX7D.
This patch brings the i.MX7D on par with i.MX6.

Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Message-id: 20221226101418.415170-1-jcd@tribudubois.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx7.h | 15 +++++++++++++++
 hw/arm/fsl-imx7.c | 31 ++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 1 deletion(-)
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx7.h
+++ b/include/hw/arm/fsl-imx7.h
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
 FSL_IMX7_GPT3_IRQ = 53,
 FSL_IMX7_GPT4_IRQ = 52,

+ FSL_IMX7_GPIO1_LOW_IRQ = 64,
+ FSL_IMX7_GPIO1_HIGH_IRQ = 65,
+ FSL_IMX7_GPIO2_LOW_IRQ = 66,
+ FSL_IMX7_GPIO2_HIGH_IRQ = 67,
+ FSL_IMX7_GPIO3_LOW_IRQ = 68,
+ FSL_IMX7_GPIO3_HIGH_IRQ = 69,
+ FSL_IMX7_GPIO4_LOW_IRQ = 70,
+ FSL_IMX7_GPIO4_HIGH_IRQ = 71,
+ FSL_IMX7_GPIO5_LOW_IRQ = 72,
+ FSL_IMX7_GPIO5_HIGH_IRQ = 73,
+ FSL_IMX7_GPIO6_LOW_IRQ = 74,
+ FSL_IMX7_GPIO6_HIGH_IRQ = 75,
+ FSL_IMX7_GPIO7_LOW_IRQ = 76,
+ FSL_IMX7_GPIO7_HIGH_IRQ = 77,
+
 FSL_IMX7_WDOG1_IRQ = 78,
 FSL_IMX7_WDOG2_IRQ = 79,
 FSL_IMX7_WDOG3_IRQ = 10,
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx7.c
+++ b/hw/arm/fsl-imx7.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
 FSL_IMX7_GPIO7_ADDR,
 };

+ static const int FSL_IMX7_GPIOn_LOW_IRQ[FSL_IMX7_NUM_GPIOS] = {
+ FSL_IMX7_GPIO1_LOW_IRQ,
+ FSL_IMX7_GPIO2_LOW_IRQ,
+ FSL_IMX7_GPIO3_LOW_IRQ,
+ FSL_IMX7_GPIO4_LOW_IRQ,
+ FSL_IMX7_GPIO5_LOW_IRQ,
+ FSL_IMX7_GPIO6_LOW_IRQ,
+ FSL_IMX7_GPIO7_LOW_IRQ,
+ };
+
+ static const int FSL_IMX7_GPIOn_HIGH_IRQ[FSL_IMX7_NUM_GPIOS] = {
+ FSL_IMX7_GPIO1_HIGH_IRQ,
+ FSL_IMX7_GPIO2_HIGH_IRQ,
+ FSL_IMX7_GPIO3_HIGH_IRQ,
+ FSL_IMX7_GPIO4_HIGH_IRQ,
+ FSL_IMX7_GPIO5_HIGH_IRQ,
+ FSL_IMX7_GPIO6_HIGH_IRQ,
+ FSL_IMX7_GPIO7_HIGH_IRQ,
+ };
+
 sysbus_realize(SYS_BUS_DEVICE(&s->gpio[i]), &error_abort);
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0, FSL_IMX7_GPIOn_ADDR[i]);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0,
+ FSL_IMX7_GPIOn_ADDR[i]);
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 0,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_GPIOn_LOW_IRQ[i]));
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 1,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_GPIOn_HIGH_IRQ[i]));
 }

 /*
--
2.25.1
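Editor's note: each i.MX GPIO controller exposes two combined interrupt outputs, which is why the patch wires sysbus IRQ 0 ("low", pins 0-15) and sysbus IRQ 1 ("high", pins 16-31) of every instance to a pair of GIC inputs (64/65 for GPIO1, and so on). A minimal standalone sketch of that split, not QEMU code and with the GIC numbers taken from the patch's enum purely for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Derive the two combined output levels from a 32-bit pending/mask pair,
 * the way the low/high IRQ lines summarise the 32 GPIO pins. */
static void update_combined_irqs(uint32_t pending, uint32_t mask)
{
    uint32_t active = pending & mask;
    bool low  = (active & 0x0000ffffu) != 0;  /* pins 0-15  -> e.g. GIC input 64 */
    bool high = (active & 0xffff0000u) != 0;  /* pins 16-31 -> e.g. GIC input 65 */

    printf("low=%d high=%d\n", low, high);
}

int main(void)
{
    update_combined_irqs(1u << 3, ~0u);   /* pin 3 pending: low line only  */
    update_combined_irqs(1u << 20, ~0u);  /* pin 20 pending: high line only */
    return 0;
}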
From: Shashi Mallela <shashi.mallela@linaro.org>

During sbsa acs level 3 testing, it is seen that the GIC maintenance
interrupts are not triggered and the related test cases fail. This
is because we were incorrectly passing the value of the MISR register
(from maintenance_interrupt_state()) to qemu_set_irq() as the level
argument, whereas the device on the other end of this irq line
expects a 0/1 value.

Fix the logic to pass a 0/1 level indication, rather than a
0/not-0 value.

Fixes: c5fc89b36c0 ("hw/intc/arm_gicv3: Implement gicv3_cpuif_virt_update()")
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210915205809.59068-1-shashi.mallela@linaro.org
[PMM: tweaked commit message; collapsed nested if()s into one]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_cpuif.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
 }
 }

- if (cs->ich_hcr_el2 & ICH_HCR_EL2_EN) {
- maintlevel = maintenance_interrupt_state(cs);
+ if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) &&
+ maintenance_interrupt_state(cs) != 0) {
+ maintlevel = 1;
 }

 trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel,
--
2.20.1

From: Stephen Longfield <slongfield@google.com>

Size is used at lines 1088/1188 for the loop, which reads the last 4
bytes from the crc_ptr so it does need to get increased, however it
shouldn't be increased before the buffer is passed to CRC computation,
or the crc32 function will access uninitialized memory.

This was pointed out to me by clg@kaod.org during the code review of
a similar patch to hw/net/ftgmac100.c

Change-Id: Ib0464303b191af1e28abeb2f5105eb25aadb5e9b
Signed-off-by: Stephen Longfield <slongfield@google.com>
Reviewed-by: Patrick Venture <venture@google.com>
Message-id: 20221221183202.3788132-1-slongfield@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/imx_fec.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
 return 0;
 }

- /* 4 bytes for the CRC. */
- size += 4;
 crc = cpu_to_be32(crc32(~0, buf, size));
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
+ size += 4;
 crc_ptr = (uint8_t *) &crc;

 /* Huge frames are truncated. */
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
 return 0;
 }

- /* 4 bytes for the CRC. */
- size += 4;
 crc = cpu_to_be32(crc32(~0, buf, size));
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
+ size += 4;
 crc_ptr = (uint8_t *) &crc;

 if (shift16) {
--
2.25.1
43
diff view generated by jsdifflib