Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst |   1 +
 docs/system/arm/nuvoton.rst   |   4 +-
 target/arm/cpu-param.h        |   2 +-
 target/arm/cpu.h              | 181 ++++++++------
 target/arm/internals.h        | 150 ++++++-----
 hw/arm/boot.c                 |   4 +
 target/arm/helper.c           | 332 ++++++++++++++----------
 target/arm/kvm.c              |   4 +-
 target/arm/m_helper.c         |  29 ++-
 target/arm/ptw.c              | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c       |   9 +-
 target/arm/translate-a64.c    |   8 -
 target/arm/translate.c        |   9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
     if (max_vm_pa_size < 0) {
         max_vm_pa_size = 0;
     }
-    vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    do {
+        vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    } while (vmfd == -1 && errno == EINTR);
    if (vmfd < 0) {
        goto err;
    }
--
2.25.1
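The do/while added by the patch above is the standard retry-on-EINTR idiom for a
syscall that was interrupted before doing any work. As a general illustration only
(a hedged sketch, not QEMU code; the wrapper name retry_kvm_ioctl is invented
here), the same pattern can be written as a reusable helper:

    #include <errno.h>
    #include <sys/ioctl.h>

    /* Reissue an ioctl for as long as it fails with errno == EINTR. */
    static int retry_kvm_ioctl(int fd, unsigned long request, unsigned long arg)
    {
        int ret;

        do {
            ret = ioctl(fd, request, arg);
        } while (ret == -1 && errno == EINTR);

        return ret;
    }

The patch instead open-codes the loop at the single call site, matching what the
commit message says the generic KVM code already does.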
From: Jerome Forissier <jerome.forissier@linaro.org>

Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 54 ++++++++++++++++++++++-----------------------
 target/arm/helper.c |  5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 
 #define HPFAR_NS (1ULL << 63)
 
-#define SCR_NS (1U << 0)
-#define SCR_IRQ (1U << 1)
-#define SCR_FIQ (1U << 2)
-#define SCR_EA (1U << 3)
-#define SCR_FW (1U << 4)
-#define SCR_AW (1U << 5)
-#define SCR_NET (1U << 6)
-#define SCR_SMD (1U << 7)
-#define SCR_HCE (1U << 8)
-#define SCR_SIF (1U << 9)
-#define SCR_RW (1U << 10)
-#define SCR_ST (1U << 11)
-#define SCR_TWI (1U << 12)
-#define SCR_TWE (1U << 13)
-#define SCR_TLOR (1U << 14)
-#define SCR_TERR (1U << 15)
-#define SCR_APK (1U << 16)
-#define SCR_API (1U << 17)
-#define SCR_EEL2 (1U << 18)
-#define SCR_EASE (1U << 19)
-#define SCR_NMEA (1U << 20)
-#define SCR_FIEN (1U << 21)
-#define SCR_ENSCXT (1U << 25)
-#define SCR_ATA (1U << 26)
-#define SCR_FGTEN (1U << 27)
-#define SCR_ECVEN (1U << 28)
-#define SCR_TWEDEN (1U << 29)
+#define SCR_NS (1ULL << 0)
+#define SCR_IRQ (1ULL << 1)
+#define SCR_FIQ (1ULL << 2)
+#define SCR_EA (1ULL << 3)
+#define SCR_FW (1ULL << 4)
+#define SCR_AW (1ULL << 5)
+#define SCR_NET (1ULL << 6)
+#define SCR_SMD (1ULL << 7)
+#define SCR_HCE (1ULL << 8)
+#define SCR_SIF (1ULL << 9)
+#define SCR_RW (1ULL << 10)
+#define SCR_ST (1ULL << 11)
+#define SCR_TWI (1ULL << 12)
+#define SCR_TWE (1ULL << 13)
+#define SCR_TLOR (1ULL << 14)
+#define SCR_TERR (1ULL << 15)
+#define SCR_APK (1ULL << 16)
+#define SCR_API (1ULL << 17)
+#define SCR_EEL2 (1ULL << 18)
+#define SCR_EASE (1ULL << 19)
+#define SCR_NMEA (1ULL << 20)
+#define SCR_FIEN (1ULL << 21)
+#define SCR_ENSCXT (1ULL << 25)
+#define SCR_ATA (1ULL << 26)
+#define SCR_FGTEN (1ULL << 27)
+#define SCR_ECVEN (1ULL << 28)
+#define SCR_TWEDEN (1ULL << 29)
 #define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
 #define SCR_TME (1ULL << 34)
 #define SCR_AMVOFFEN (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     /* Begin with base v8.0 state. */
-    uint32_t valid_mask = 0x3fff;
+    uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_doublefault, cpu)) {
             valid_mask |= SCR_EASE | SCR_NMEA;
         }
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            valid_mask |= SCR_ENTP2;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1
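A note on why the patch above has to widen the SCR_* constants and not just
valid_mask: with a 32-bit constant, ~(1U << n) is only an unsigned int, so it is
zero-extended when combined with a uint64_t and silently clears bits 32..63 of
the mask. A small stand-alone sketch (illustration only, not QEMU code) of the
difference:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BIT_32 (1U << 0)    /* old style: unsigned int constant */
    #define BIT_64 (1ULL << 0)  /* new style: 64-bit constant */

    int main(void)
    {
        uint64_t mask = UINT64_MAX;

        /* ~BIT_32 is 0xfffffffe; zero-extension wipes bits 32..63 of mask. */
        printf("0x%016" PRIx64 "\n", mask & ~BIT_32);  /* 0x00000000fffffffe */

        /* ~BIT_64 is 0xfffffffffffffffe; only bit 0 is cleared. */
        printf("0x%016" PRIx64 "\n", mask & ~BIT_64);  /* 0xfffffffffffffffe */
        return 0;
    }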
From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options
 
 The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
 a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :
 
- https://openpower.xyz/
+ https://jenkins.openbmc.org/
 
 The firmware image should be attached as an MTD drive. Example :
 
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s1_prot;
         int ret;
-        bool ipa_secure;
+        bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         ipa = result->phys;
         ipa_secure = result->attrs.secure;
-        if (arm_is_secure_below_el3(env)) {
-            if (ipa_secure) {
-                result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
+        if (is_secure) {
+            /* Select TCR based on the NS bit from the S1 walk. */
+            s2walk_secure = !(ipa_secure
+                              ? env->cp15.vstcr_el2 & VSTCR_SW
+                              : env->cp15.vtcr_el2 & VTCR_NSW);
         } else {
             assert(!ipa_secure);
+            s2walk_secure = false;
         }
 
-        s2_mmu_idx = (result->attrs.secure
+        s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                                 result->cacheattrs);
 
         /* Check if IPA translates to secure or non-secure PA space. */
-        if (arm_is_secure_below_el3(env)) {
+        if (is_secure) {
             if (ipa_secure) {
                 result->attrs.secure =
                     !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
                                                 result->cacheattrs);
 
-        /* Check if IPA translates to secure or non-secure PA space. */
-        if (is_secure) {
-            if (ipa_secure) {
-                result->attrs.secure =
-                    !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
-            } else {
-                result->attrs.secure =
-                    !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
-                    || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
-            }
-        }
+        /*
+         * Check if IPA translates to secure or non-secure PA space.
+         * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+         */
+        result->attrs.secure =
+            (is_secure
+             && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+             && (ipa_secure
+                 || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
         return 0;
     } else {
         /*
--
2.25.1
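The single expression in the patch above is a pure boolean simplification of the
nested if/else it replaces: the commit message notes that the stage-2 walk never
sets attrs.secure for a non-secure input, so the old path leaves it false in that
case. A throwaway checker (illustration only, not QEMU code -- the VSTCR/VTCR bit
tests are modelled as plain booleans here) that brute-forces all the cases:

    #include <assert.h>
    #include <stdbool.h>

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            bool is_secure = i & 1, ipa_secure = i & 2;
            bool vstcr_sw_sa = i & 4;   /* VSTCR.{SA,SW} nonzero */
            bool vtcr_nsw_nsa = i & 8;  /* VTCR.{NSA,NSW} nonzero */

            /* Old shape: only assigned when is_secure, otherwise false. */
            bool old_secure = false;
            if (is_secure) {
                if (ipa_secure) {
                    old_secure = !vstcr_sw_sa;
                } else {
                    old_secure = !(vtcr_nsw_nsa || vstcr_sw_sa);
                }
            }

            /* New shape: one unconditional expression. */
            bool new_secure = is_secure && !vstcr_sw_sa
                              && (ipa_secure || !vtcr_nsw_nsa);

            assert(old_secure == new_secure);
        }
        return 0;
    }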
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));
 
 /* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         GetPhysAddrResult s2 = {};
         int ret;
 
-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
-                                 &s2, fi);
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+                                 *is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  */
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     /* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
      * remain non-secure. We implement this by just ORing in the NSTable/NS
      * bits at each step.
      */
-    tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+    tableattrs = is_secure ? 0 : (1 << 4);
     for (;;) {
         uint64_t descriptor;
         bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             memset(result, 0, sizeof(*result));
 
             ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                     is_el0, result, fi);
+                                     s2walk_secure, is_el0, result, fi);
             fi->s2addr = ipa;
 
             /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  result, fi);
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                  is_secure, false, result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, address, access_type, mmu_idx,
                                 is_secure, result, fi);
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
-                                          : ARMMMUIdx_Stage2;
+        !regime_translation_disabled(env, s2_mmu_idx)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 }
 
 /* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                        bool is_secure)
 {
     uint64_t hcr_el2;
 
     if (arm_feature(env, ARM_FEATURE_M)) {
-        switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+        switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
             /* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
 
     if (hcr_el2 & HCR_TGE) {
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && regime_el(env, mmu_idx) == 1) {
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
 
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx)) {
+        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
     uint32_t base;
     bool is_user = regime_is_user(env, mmu_idx);
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled. */
         result->phys = address;
         result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     result->page_size = TARGET_PAGE_SIZE;
     result->prot = 0;
 
-    if (regime_translation_disabled(env, mmu_idx) ||
+    if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
         /*
          * MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
      * are done in arm_v7m_load_vector(), which always does a direct
      * read using address_space_ldl(), rather than going via this function.
      */
-    if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+    if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
         hit = true;
     } else if (m_is_ppb_region(env, address)) {
         hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                             result, fi);
 
         /* If S1 fails or S2 is disabled, return early. */
-        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+                                               is_secure)) {
             return ret;
         }
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
     /* Definitely a real MMU, not an MPU */
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         uint64_t hcr;
         uint8_t memattr;
 
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++
 target/arm/ptw.c       | 44 ++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
     ARMCacheAttrs cacheattrs;
 } GetPhysAddrResult;
 
+/**
+ * get_phys_addr_with_secure: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ * * we honour the short vs long DFSR format differences.
+ * * the WnR bit is never set (the caller must do this).
+ * * for PSMAv5 based systems we don't bother to return a full FSR format
+ * value.
+ */
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type,
+                               ARMMMUIdx mmu_idx, bool is_secure,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+    __attribute__((nonnull));
+
+/**
+ * get_phys_addr: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Similarly, but use the security regime of @mmu_idx.
+ */
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     return ret;
 }
 
-/**
- * get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- * * we honour the short vs long DFSR format differences.
- * * the WnR bit is never set (the caller must do this).
- * * for PSMAv5 based systems we don't bother to return a full FSR format
- * value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @result: set on translation success.
- * @fi: set to fault info if the translation fails
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
-                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    bool is_secure = regime_is_secure(env, mmu_idx);
 
     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
 
-        ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
-                            result, fi);
+        ret = get_phys_addr_with_secure(env, address, access_type,
+                                        s1_mmu_idx, is_secure, result, fi);
 
         /* If S1 fails or S2 is disabled, return early. */
         if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
209
index XXXXXXX..XXXXXXX 100644
210
--- a/target/arm/cpu_tcg.c
211
+++ b/target/arm/cpu_tcg.c
212
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa8_cp_reginfo[] = {
213
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
214
{ .name = "L2AUXCR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2,
215
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
216
- REGINFO_SENTINEL
217
};
218
219
static void cortex_a8_initfn(Object *obj)
220
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa9_cp_reginfo[] = {
221
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
222
{ .name = "TLB_ATTR", .cp = 15, .crn = 15, .crm = 7, .opc1 = 5, .opc2 = 2,
223
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
224
- REGINFO_SENTINEL
225
};
226
227
static void cortex_a9_initfn(Object *obj)
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa15_cp_reginfo[] = {
229
#endif
230
{ .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3,
231
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
232
- REGINFO_SENTINEL
233
};
234
235
static void cortex_a7_initfn(Object *obj)
236
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
237
.access = PL1_RW, .type = ARM_CP_CONST },
238
{ .name = "DCACHE_INVAL", .cp = 15, .opc1 = 0, .crn = 15, .crm = 5,
239
.opc2 = 0, .access = PL1_W, .type = ARM_CP_NOP },
240
- REGINFO_SENTINEL
241
};
242
243
static void cortex_r5_initfn(Object *obj)
244
diff --git a/target/arm/helper.c b/target/arm/helper.c
245
index XXXXXXX..XXXXXXX 100644
246
--- a/target/arm/helper.c
247
+++ b/target/arm/helper.c
248
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
249
.secure = ARM_CP_SECSTATE_S,
250
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
251
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
252
- REGINFO_SENTINEL
253
};
254
255
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
256
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
257
{ .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY,
258
.opc1 = 0, .opc2 = CP_ANY, .access = PL1_W,
259
.type = ARM_CP_NOP | ARM_CP_OVERRIDE },
260
- REGINFO_SENTINEL
261
};
262
263
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
264
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
265
*/
266
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
267
.access = PL1_W, .type = ARM_CP_WFI },
268
- REGINFO_SENTINEL
269
};
270
271
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
272
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
273
.opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
274
{ .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
275
.opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP },
276
- REGINFO_SENTINEL
277
};
278
279
static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
280
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
281
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
282
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
283
.resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
284
- REGINFO_SENTINEL
285
};
286
287
typedef struct pm_event {
288
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
289
{ .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
290
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
291
.writefn = tlbimvaa_write },
292
- REGINFO_SENTINEL
293
};
294
295
static const ARMCPRegInfo v7mp_cp_reginfo[] = {
296
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7mp_cp_reginfo[] = {
297
{ .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
298
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
299
.writefn = tlbimvaa_is_write },
300
- REGINFO_SENTINEL
301
};
302
303
static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
304
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
305
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
306
.writefn = pmovsset_write,
307
.raw_writefn = raw_write },
308
- REGINFO_SENTINEL
309
};
310
311
static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri,
312
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo t2ee_cp_reginfo[] = {
313
{ .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0,
314
.access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr),
315
.accessfn = teehbr_access, .resetvalue = 0 },
316
- REGINFO_SENTINEL
317
};
318
319
static const ARMCPRegInfo v6k_cp_reginfo[] = {
320
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
321
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s),
322
offsetoflow32(CPUARMState, cp15.tpidrprw_ns) },
323
.resetvalue = 0 },
324
- REGINFO_SENTINEL
325
};
326
327
#ifndef CONFIG_USER_ONLY
328
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
329
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval),
330
.writefn = gt_sec_cval_write, .raw_writefn = raw_write,
331
},
332
- REGINFO_SENTINEL
333
};
334
335
static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
336
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
337
.access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
338
.readfn = gt_virt_cnt_read,
339
},
340
- REGINFO_SENTINEL
341
};
342
343
#endif
344
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vapa_cp_reginfo[] = {
345
.access = PL1_W, .accessfn = ats_access,
346
.writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
347
#endif
348
- REGINFO_SENTINEL
349
};
350
351
/* Return basic MPU access permission bits. */
352
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
353
.fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]),
354
.writefn = pmsav7_rgnr_write,
355
.resetfn = arm_cp_reset_ignore },
356
- REGINFO_SENTINEL
357
};
358
359
static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
360
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
361
{ .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0,
362
.opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0,
363
.fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
364
- REGINFO_SENTINEL
365
};
366
367
static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
368
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
369
.access = PL1_RW, .accessfn = access_tvm_trvm,
370
.fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
371
.resetvalue = 0, },
372
- REGINFO_SENTINEL
373
};
374
375
static const ARMCPRegInfo vmsa_cp_reginfo[] = {
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
377
/* No offsetoflow32 -- pass the entire TCR to writefn/raw_writefn. */
378
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.tcr_el[3]),
379
offsetof(CPUARMState, cp15.tcr_el[1])} },
380
- REGINFO_SENTINEL
381
};
382
383
/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
384
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
385
{ .name = "C9", .cp = 15, .crn = 9,
386
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW,
387
.type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 },
388
- REGINFO_SENTINEL
389
};
390
391
static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri,
392
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
393
{ .name = "XSCALE_UNLOCK_DCACHE",
394
.cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1,
395
.access = PL1_W, .type = ARM_CP_NOP },
396
- REGINFO_SENTINEL
397
};
398
399
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
400
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
401
.access = PL1_RW,
402
.type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE,
403
.resetvalue = 0 },
404
- REGINFO_SENTINEL
405
};
406
407
static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
408
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
409
{ .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6,
410
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
411
.resetvalue = 0 },
412
- REGINFO_SENTINEL
413
};
414
415
static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
416
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
417
.access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
418
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
419
.access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
420
- REGINFO_SENTINEL
421
};
422
423
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
424
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
425
{ .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3,
426
.access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
427
.resetvalue = (1 << 30) },
428
- REGINFO_SENTINEL
429
};
430
431
static const ARMCPRegInfo strongarm_cp_reginfo[] = {
432
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
433
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY,
434
.access = PL1_RW, .resetvalue = 0,
435
.type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW },
436
- REGINFO_SENTINEL
437
};
438
439
static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
440
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
441
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
442
offsetof(CPUARMState, cp15.ttbr1_ns) },
443
.writefn = vmsa_ttbr_write, },
444
- REGINFO_SENTINEL
445
};
446
447
static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
448
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
449
.access = PL1_RW, .accessfn = access_trap_aa32s_el1,
450
.writefn = sdcr_write,
451
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
452
- REGINFO_SENTINEL
453
};
454
455
/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
456
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
457
.type = ARM_CP_CONST,
458
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
459
.access = PL2_RW, .resetvalue = 0 },
460
- REGINFO_SENTINEL
461
};
462
463
/* Ditto, but for registers which exist in ARMv8 but not v7 */
464
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
465
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
466
.access = PL2_RW,
467
.type = ARM_CP_CONST, .resetvalue = 0 },
468
- REGINFO_SENTINEL
469
};
470
471
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
472
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
473
.cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
474
.access = PL2_RW,
475
.fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) },
476
- REGINFO_SENTINEL
477
};
478
479
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
480
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
481
.access = PL2_RW,
482
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
483
.writefn = hcr_writehigh },
484
- REGINFO_SENTINEL
485
};
486
487
static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
488
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
489
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
490
.access = PL2_RW, .accessfn = sel2_access,
491
.fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
492
- REGINFO_SENTINEL
493
};
494
495
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
496
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
497
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
498
.access = PL3_W, .type = ARM_CP_NO_RAW,
499
.writefn = tlbi_aa64_vae3_write },
500
- REGINFO_SENTINEL
501
};
502
503
#ifndef CONFIG_USER_ONLY
504
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
505
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
506
.access = PL1_RW, .accessfn = access_tda,
507
.type = ARM_CP_NOP },
508
- REGINFO_SENTINEL
509
};
510
511
static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
512
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
513
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
514
{ .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
515
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
516
- REGINFO_SENTINEL
517
};
518
519
/* Return the exception level to which exceptions should be taken
520
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
521
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
522
.writefn = dbgbcr_write, .raw_writefn = raw_write
523
},
524
- REGINFO_SENTINEL
525
};
526
define_arm_cp_regs(cpu, dbgregs);
527
}
528
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
529
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
530
.writefn = dbgwcr_write, .raw_writefn = raw_write
531
},
532
- REGINFO_SENTINEL
533
};
534
define_arm_cp_regs(cpu, dbgregs);
535
}
536
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
537
.type = ARM_CP_IO,
538
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
539
.raw_writefn = pmevtyper_rawwrite },
540
- REGINFO_SENTINEL
541
};
542
define_arm_cp_regs(cpu, pmev_regs);
543
g_free(pmevcntr_name);
544
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
545
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
546
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
547
.resetvalue = extract64(cpu->pmceid1, 32, 32) },
548
- REGINFO_SENTINEL
549
};
550
define_arm_cp_regs(cpu, v81_pmu_regs);
551
}
552
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
553
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
554
.access = PL1_R, .accessfn = access_lor_ns,
555
.type = ARM_CP_CONST, .resetvalue = 0 },
556
- REGINFO_SENTINEL
557
};
558
559
#ifdef TARGET_AARCH64
560
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
561
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
562
.access = PL1_RW, .accessfn = access_pauth,
563
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
564
- REGINFO_SENTINEL
565
};
566
567
static const ARMCPRegInfo tlbirange_reginfo[] = {
568
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
569
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
570
.access = PL3_W, .type = ARM_CP_NO_RAW,
571
.writefn = tlbi_aa64_rvae3_write },
572
- REGINFO_SENTINEL
573
};
574
575
static const ARMCPRegInfo tlbios_reginfo[] = {
576
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
577
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
578
.access = PL3_W, .type = ARM_CP_NO_RAW,
579
.writefn = tlbi_aa64_vae3is_write },
580
- REGINFO_SENTINEL
581
};
582
583
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
584
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
585
.type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO,
586
.opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1,
587
.access = PL0_R, .readfn = rndr_readfn },
588
- REGINFO_SENTINEL
589
};
590
591
#ifndef CONFIG_USER_ONLY
592
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
593
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
594
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
595
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
596
- REGINFO_SENTINEL
597
};
598
599
static const ARMCPRegInfo dcpodp_reg[] = {
600
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
601
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
602
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
603
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
604
- REGINFO_SENTINEL
605
};
606
#endif /*CONFIG_USER_ONLY*/
607
608
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
609
{ .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
610
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
611
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
612
- REGINFO_SENTINEL
613
};
614
615
static const ARMCPRegInfo mte_tco_ro_reginfo[] = {
616
{ .name = "TCO", .state = ARM_CP_STATE_AA64,
617
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
618
.type = ARM_CP_CONST, .access = PL0_RW, },
619
- REGINFO_SENTINEL
620
};
621
622
static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
623
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
624
.accessfn = aa64_zva_access,
625
#endif
626
},
627
- REGINFO_SENTINEL
628
};
629
630
#endif
631
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
632
{ .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
633
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
634
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
635
- REGINFO_SENTINEL
636
};
637
638
static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri)
639
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ccsidr2_reginfo[] = {
640
.access = PL1_R,
641
.accessfn = access_aa64_tid2,
642
.readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
643
- REGINFO_SENTINEL
644
};
645
646
static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
647
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
648
.cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0,
649
.accessfn = access_joscr_jmcr,
650
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
651
- REGINFO_SENTINEL
652
};
653
654
static const ARMCPRegInfo vhe_reginfo[] = {
655
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
656
.access = PL2_RW, .accessfn = e2h_access,
657
.writefn = gt_virt_cval_write, .raw_writefn = raw_write },
658
#endif
659
- REGINFO_SENTINEL
660
};
661
662
#ifndef CONFIG_USER_ONLY
663
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
664
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
665
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
666
.writefn = ats_write64 },
667
- REGINFO_SENTINEL
668
};
669
670
static const ARMCPRegInfo ats1cp_reginfo[] = {
671
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
672
.cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
673
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
674
.writefn = ats_write },
675
- REGINFO_SENTINEL
676
};
677
#endif
678
679
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
680
.cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
681
.access = PL2_RW, .type = ARM_CP_CONST,
682
.resetvalue = 0 },
683
- REGINFO_SENTINEL
684
};
685
686
void register_cp_regs_for_features(ARMCPU *cpu)
687
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
688
.access = PL1_R, .type = ARM_CP_CONST,
689
.accessfn = access_aa32_tid3,
690
.resetvalue = cpu->isar.id_isar6 },
691
- REGINFO_SENTINEL
692
};
693
define_arm_cp_regs(cpu, v6_idregs);
694
define_arm_cp_regs(cpu, v6_cp_reginfo);
695
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
696
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
697
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
698
.resetvalue = cpu->pmceid1 },
699
- REGINFO_SENTINEL
700
};
701
#ifdef CONFIG_USER_ONLY
702
ARMCPRegUserSpaceInfo v8_user_idregs[] = {
703
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
704
.exported_bits = 0x000000f0ffffffff },
705
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
706
.is_glob = true },
707
- REGUSERINFO_SENTINEL
708
};
709
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
710
#endif
711
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
712
.access = PL2_RW,
713
.resetvalue = vmpidr_def,
714
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
715
- REGINFO_SENTINEL
716
};
717
define_arm_cp_regs(cpu, vpidr_regs);
718
define_arm_cp_regs(cpu, el2_cp_reginfo);
719
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
720
.access = PL2_RW, .accessfn = access_el3_aa32ns,
721
.type = ARM_CP_NO_RAW,
722
.writefn = arm_cp_write_ignore, .readfn = mpidr_read },
723
- REGINFO_SENTINEL
724
};
725
define_arm_cp_regs(cpu, vpidr_regs);
726
define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
727
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
728
.raw_writefn = raw_write, .writefn = sctlr_write,
729
.fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]),
730
.resetvalue = cpu->reset_sctlr },
731
- REGINFO_SENTINEL
732
};
733
734
define_arm_cp_regs(cpu, el3_regs);
735
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
736
{ .name = "DUMMY",
737
.cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY,
738
.access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
739
- REGINFO_SENTINEL
740
};
741
ARMCPRegInfo id_v8_midr_cp_reginfo[] = {
742
{ .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
743
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
744
.access = PL1_R,
745
.accessfn = access_aa64_tid1,
746
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
747
- REGINFO_SENTINEL
748
};
749
ARMCPRegInfo id_cp_reginfo[] = {
750
/* These are common to v8 and pre-v8 */
751
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
752
.access = PL1_R,
753
.accessfn = access_aa32_tid1,
754
.type = ARM_CP_CONST, .resetvalue = 0 },
755
- REGINFO_SENTINEL
756
};
757
/* TLBTR is specific to VMSA */
758
ARMCPRegInfo id_tlbtr_reginfo = {
759
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
760
{ .name = "MIDR_EL1",
761
.exported_bits = 0x00000000ffffffff },
762
{ .name = "REVIDR_EL1" },
763
- REGUSERINFO_SENTINEL
764
};
765
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
766
#endif
767
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
768
arm_feature(env, ARM_FEATURE_STRONGARM)) {
769
- ARMCPRegInfo *r;
770
+ size_t i;
771
/* Register the blanket "writes ignored" value first to cover the
772
* whole space. Then update the specific ID registers to allow write
773
* access, so that they ignore writes rather than causing them to
774
* UNDEF.
775
*/
776
define_one_arm_cp_reg(cpu, &crn0_wi_reginfo);
777
- for (r = id_pre_v8_midr_cp_reginfo;
778
- r->type != ARM_CP_SENTINEL; r++) {
779
- r->access = PL1_RW;
780
+ for (i = 0; i < ARRAY_SIZE(id_pre_v8_midr_cp_reginfo); ++i) {
781
+ id_pre_v8_midr_cp_reginfo[i].access = PL1_RW;
782
}
783
- for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) {
784
- r->access = PL1_RW;
785
+ for (i = 0; i < ARRAY_SIZE(id_cp_reginfo); ++i) {
786
+ id_cp_reginfo[i].access = PL1_RW;
787
}
788
id_mpuir_reginfo.access = PL1_RW;
789
id_tlbtr_reginfo.access = PL1_RW;
790
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
791
{ .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
792
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
793
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
794
- REGINFO_SENTINEL
795
};
796
#ifdef CONFIG_USER_ONLY
797
ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
798
{ .name = "MPIDR_EL1",
799
.fixed_bits = 0x0000000080000000 },
800
- REGUSERINFO_SENTINEL
801
};
802
modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo);
803
#endif
804
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
805
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1,
806
.access = PL3_RW, .type = ARM_CP_CONST,
807
.resetvalue = 0 },
808
- REGINFO_SENTINEL
809
};
810
define_arm_cp_regs(cpu, auxcr_reginfo);
811
if (cpu_isar_feature(aa32_ac2, cpu)) {
812
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
813
.type = ARM_CP_CONST,
814
.opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0,
815
.access = PL1_R, .resetvalue = cpu->reset_cbar },
816
- REGINFO_SENTINEL
817
};
818
/* We don't implement a r/w 64 bit CBAR currently */
819
assert(arm_feature(env, ARM_FEATURE_CBAR_RO));
820
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
821
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
822
offsetof(CPUARMState, cp15.vbar_ns) },
823
.resetvalue = 0 },
824
- REGINFO_SENTINEL
825
};
826
define_arm_cp_regs(cpu, vbar_cp_reginfo);
827
}
828
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
829
r->writefn);
830
}
831
}
832
- /* Bad type field probably means missing sentinel at end of reg list */
833
- assert(cptype_valid(r->type));
834
+
835
for (crm = crmmin; crm <= crmmax; crm++) {
836
for (opc1 = opc1min; opc1 <= opc1max; opc1++) {
837
for (opc2 = opc2min; opc2 <= opc2max; opc2++) {
838
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
839
}
123
}
840
}
124
}
841
125
842
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
126
+bool get_phys_addr(CPUARMState *env, target_ulong address,
843
- const ARMCPRegInfo *regs, void *opaque)
127
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
844
+/* Define a whole list of registers */
128
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
845
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
129
+{
846
+ void *opaque, size_t len)
130
+ return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
131
+ regime_is_secure(env, mmu_idx),
132
+ result, fi);
133
+}
134
+
135
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
136
MemTxAttrs *attrs)
847
{
137
{
848
- /* Define a whole list of registers */
849
- const ARMCPRegInfo *r;
850
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
851
- define_one_arm_cp_reg_with_opaque(cpu, r, opaque);
852
+ size_t i;
853
+ for (i = 0; i < len; ++i) {
854
+ define_one_arm_cp_reg_with_opaque(cpu, regs + i, opaque);
855
}
856
}
857
858
@@ -XXX,XX +XXX,XX @@ void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
859
* user-space cannot alter any values and dynamic values pertaining to
860
* execution state are hidden from user space view anyway.
861
*/
862
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
863
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
864
+ const ARMCPRegUserSpaceInfo *mods,
865
+ size_t mods_len)
866
{
867
- const ARMCPRegUserSpaceInfo *m;
868
- ARMCPRegInfo *r;
869
-
870
- for (m = mods; m->name; m++) {
871
+ for (size_t mi = 0; mi < mods_len; ++mi) {
872
+ const ARMCPRegUserSpaceInfo *m = mods + mi;
873
GPatternSpec *pat = NULL;
874
+
875
if (m->is_glob) {
876
pat = g_pattern_spec_new(m->name);
877
}
878
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
879
+ for (size_t ri = 0; ri < regs_len; ++ri) {
880
+ ARMCPRegInfo *r = regs + ri;
881
+
882
if (pat && g_pattern_match_string(pat, r->name)) {
883
r->type = ARM_CP_CONST;
884
r->access = PL0U_R;
885
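
With the sentinel terminators gone, every caller has to supply an explicit element count. The natural way to keep call sites unchanged is a pair of wrapper macros built on ARRAY_SIZE(); the following is a sketch of what they could look like (the _with_len functions come from the hunks above, the macro bodies here are an assumption, not quoted from the patch):

/* Assumed convenience wrappers: derive the lengths with ARRAY_SIZE so
 * existing callers keep passing bare reginfo arrays. */
#define define_arm_cp_regs_with_opaque(CPU, REGS, OPAQUE) \
    define_arm_cp_regs_with_opaque_len(CPU, REGS, OPAQUE, ARRAY_SIZE(REGS))

#define modify_arm_cp_regs(VREGS, MODS) \
    modify_arm_cp_regs_with_len(VREGS, ARRAY_SIZE(VREGS), \
                                MODS, ARRAY_SIZE(MODS))
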
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Rearrange the values of the enumerators of CPAccessResult
so that we may directly extract the target EL. For the two
special cases in access_check_cp_reg, use CPAccessResult.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h    | 26 ++++++++++++--------
 target/arm/op_helper.c | 56 +++++++++++++++++++++---------------------
 2 files changed, 44 insertions(+), 38 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
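
The point of the rearrangement is that each trap value now carries its target EL in its low two bits, so access_check_cp_reg can mask it out instead of switching on every enumerator. A minimal sketch of the encoding, using the values from the cpregs.h hunk below (the "Sketch" names are illustrative, not QEMU code):

typedef enum CPAccessResultSketch {
    CP_ACCESS_OK                     = 0,
    CP_ACCESS_EL_MASK                = 3,        /* low 2 bits: target EL */
    CP_ACCESS_TRAP                   = (1 << 2),
    CP_ACCESS_TRAP_EL2               = CP_ACCESS_TRAP | 2,
    CP_ACCESS_TRAP_EL3               = CP_ACCESS_TRAP | 3,
    CP_ACCESS_TRAP_UNCATEGORIZED     = (2 << 2),
    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
} CPAccessResultSketch;

/* Illustrative helper: 0 means "route to the usual target EL". */
static inline int cp_access_target_el_sketch(CPAccessResultSketch res)
{
    return res & CP_ACCESS_EL_MASK;
}
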
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpregs.h
22
--- a/target/arm/m_helper.c
20
+++ b/target/arm/cpregs.h
23
+++ b/target/arm/m_helper.c
21
@@ -XXX,XX +XXX,XX @@ static inline bool cptype_valid(int cptype)
24
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
22
typedef enum CPAccessResult {
25
return true;
23
/* Access is permitted */
26
}
24
CP_ACCESS_OK = 0,
27
25
+
28
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
26
+ /*
29
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
27
+ * Combined with one of the following, the low 2 bits indicate the
30
uint32_t addr, uint16_t *insn)
28
+ * target exception level. If 0, the exception is taken to the usual
31
{
29
+ * target EL (EL1 or PL1 if in EL0, otherwise to the current EL).
30
+ */
31
+ CP_ACCESS_EL_MASK = 3,
32
+
33
/*
32
/*
34
* Access fails due to a configurable trap or enable which would
33
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
35
* result in a categorized exception syndrome giving information about
34
ARMMMUFaultInfo fi = {};
36
* the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
35
MemTxResult txres;
37
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
36
38
- * PL1 if in EL0, otherwise to the current EL).
37
- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
39
+ * 0xc or 0x18).
38
- regime_is_secure(env, mmu_idx), &sattrs);
40
*/
39
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
41
- CP_ACCESS_TRAP = 1,
40
if (!sattrs.nsc || sattrs.ns) {
42
+ CP_ACCESS_TRAP = (1 << 2),
41
/*
43
+ CP_ACCESS_TRAP_EL2 = CP_ACCESS_TRAP | 2,
42
* This must be the second half of the insn, and it straddles a
44
+ CP_ACCESS_TRAP_EL3 = CP_ACCESS_TRAP | 3,
43
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
45
+
44
/* We want to do the MPU lookup as secure; work out what mmu_idx that is */
46
/*
45
mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
47
* Access fails and results in an exception syndrome 0x0 ("uncategorized").
46
48
* Note that this is not a catch-all case -- the set of cases which may
47
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
49
* result in this failure is specifically defined by the architecture.
48
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
50
*/
49
return false;
51
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
52
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
53
- CP_ACCESS_TRAP_EL2 = 3,
54
- CP_ACCESS_TRAP_EL3 = 4,
55
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
56
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
57
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
58
+ CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
59
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
60
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
61
} CPAccessResult;
62
63
typedef struct ARMCPRegInfo ARMCPRegInfo;
64
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/op_helper.c
67
+++ b/target/arm/op_helper.c
68
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
69
uint32_t isread)
70
{
71
const ARMCPRegInfo *ri = rip;
72
+ CPAccessResult res = CP_ACCESS_OK;
73
int target_el;
74
75
if (arm_feature(env, ARM_FEATURE_XSCALE) && ri->cp < 14
76
&& extract32(env->cp15.c15_cpar, ri->cp, 1) == 0) {
77
- raise_exception(env, EXCP_UDEF, syndrome, exception_target_el(env));
78
+ res = CP_ACCESS_TRAP;
79
+ goto fail;
80
}
50
}
81
51
82
/*
52
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
83
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
53
goto gen_invep;
84
mask &= ~((1 << 4) | (1 << 14));
85
86
if (env->cp15.hstr_el2 & mask) {
87
- target_el = 2;
88
- goto exept;
89
+ res = CP_ACCESS_TRAP_EL2;
90
+ goto fail;
91
}
92
}
54
}
93
55
94
- if (!ri->accessfn) {
56
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
95
+ if (ri->accessfn) {
57
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
96
+ res = ri->accessfn(env, ri, isread);
58
return false;
97
+ }
98
+ if (likely(res == CP_ACCESS_OK)) {
99
return;
100
}
59
}
101
102
- switch (ri->accessfn(env, ri, isread)) {
103
- case CP_ACCESS_OK:
104
- return;
105
+ fail:
106
+ switch (res & ~CP_ACCESS_EL_MASK) {
107
case CP_ACCESS_TRAP:
108
- target_el = exception_target_el(env);
109
- break;
110
- case CP_ACCESS_TRAP_EL2:
111
- /* Requesting a trap to EL2 when we're in EL3 is
112
- * a bug in the access function.
113
- */
114
- assert(arm_current_el(env) != 3);
115
- target_el = 2;
116
- break;
117
- case CP_ACCESS_TRAP_EL3:
118
- target_el = 3;
119
break;
120
case CP_ACCESS_TRAP_UNCATEGORIZED:
121
- target_el = exception_target_el(env);
122
- syndrome = syn_uncategorized();
123
- break;
124
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL2:
125
- target_el = 2;
126
- syndrome = syn_uncategorized();
127
- break;
128
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL3:
129
- target_el = 3;
130
syndrome = syn_uncategorized();
131
break;
132
default:
133
g_assert_not_reached();
134
}
135
136
-exept:
137
+ target_el = res & CP_ACCESS_EL_MASK;
138
+ switch (target_el) {
139
+ case 0:
140
+ target_el = exception_target_el(env);
141
+ break;
142
+ case 2:
143
+ assert(arm_current_el(env) != 3);
144
+ assert(arm_is_el2_enabled(env));
145
+ break;
146
+ case 3:
147
+ assert(arm_feature(env, ARM_FEATURE_EL3));
148
+ break;
149
+ default:
150
+ /* No "direct" traps to EL1 */
151
+ g_assert_not_reached();
152
+ }
153
+
154
raise_exception(env, EXCP_UDEF, syndrome, target_el);
155
}
156
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Move ARMCPRegInfo and all related declarations to a new
internal header, out of the public cpu.h.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h        | 413 +++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h           | 368 ---------------------------------
 hw/arm/pxa2xx.c            |   1 +
 hw/arm/pxa2xx_pic.c        |   1 +
 hw/intc/arm_gicv3_cpuif.c  |   1 +
 hw/intc/arm_gicv3_kvm.c    |   2 +
 target/arm/cpu.c           |   1 +
 target/arm/cpu64.c         |   1 +
 target/arm/cpu_tcg.c       |   1 +
 target/arm/gdbstub.c       |   3 +-
 target/arm/helper.c        |   1 +
 target/arm/op_helper.c     |   1 +
 target/arm/translate-a64.c |   4 +-
 target/arm/translate.c     |   3 +-
 14 files changed, 427 insertions(+), 374 deletions(-)
 create mode 100644 target/arm/cpregs.h

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use the env->v7m.secure directly,
as per arm_mmu_idx_el.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 2 ++
 target/arm/helper.c    | 4 ++++
 target/arm/translate.c | 3 +--
 3 files changed, 7 insertions(+), 2 deletions(-)
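
The many one-line entries in the first diffstat are presumably each ARMCPRegInfo user gaining an #include "cpregs.h" now that the definitions have left cpu.h. For the second patch, the cpu.h hunk below adds FIELD(TBFLAG_M32, SECURE, 6, 1); the translate.c side (not shown in this excerpt) would then read the cached bit instead of recomputing it, along these lines (a sketch, assuming the usual EX_TBFLAG_M32 accessor):

/* Sketched change in arm_tr_init_disas_context(): take the M-profile
 * security state from the cached TB flags rather than calling
 * regime_is_secure(env, dc->mmu_idx). */
dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
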
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
30
new file mode 100644
31
index XXXXXXX..XXXXXXX
32
--- /dev/null
33
+++ b/target/arm/cpregs.h
34
@@ -XXX,XX +XXX,XX @@
35
+/*
36
+ * QEMU ARM CP Register access and descriptions
37
+ *
38
+ * Copyright (c) 2022 Linaro Ltd
39
+ *
40
+ * This program is free software; you can redistribute it and/or
41
+ * modify it under the terms of the GNU General Public License
42
+ * as published by the Free Software Foundation; either version 2
43
+ * of the License, or (at your option) any later version.
44
+ *
45
+ * This program is distributed in the hope that it will be useful,
46
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
47
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
48
+ * GNU General Public License for more details.
49
+ *
50
+ * You should have received a copy of the GNU General Public License
51
+ * along with this program; if not, see
52
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
53
+ */
54
+
55
+#ifndef TARGET_ARM_CPREGS_H
56
+#define TARGET_ARM_CPREGS_H
57
+
58
+/*
59
+ * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
60
+ * special-behaviour cp reg and bits [11..8] indicate what behaviour
61
+ * it has. Otherwise it is a simple cp reg, where CONST indicates that
62
+ * TCG can assume the value to be constant (ie load at translate time)
63
+ * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
64
+ * indicates that the TB should not be ended after a write to this register
65
+ * (the default is that the TB ends after cp writes). OVERRIDE permits
66
+ * a register definition to override a previous definition for the
67
+ * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
68
+ * old must have the OVERRIDE bit set.
69
+ * ALIAS indicates that this register is an alias view of some underlying
70
+ * state which is also visible via another register, and that the other
71
+ * register is handling migration and reset; registers marked ALIAS will not be
72
+ * migrated but may have their state set by syncing of register state from KVM.
73
+ * NO_RAW indicates that this register has no underlying state and does not
74
+ * support raw access for state saving/loading; it will not be used for either
75
+ * migration or KVM state synchronization. (Typically this is for "registers"
76
+ * which are actually used as instructions for cache maintenance and so on.)
77
+ * IO indicates that this register does I/O and therefore its accesses
78
+ * need to be marked with gen_io_start() and also end the TB. In particular,
79
+ * registers which implement clocks or timers require this.
80
+ * RAISES_EXC is for when the read or write hook might raise an exception;
81
+ * the generated code will synchronize the CPU state before calling the hook
82
+ * so that it is safe for the hook to call raise_exception().
83
+ * NEWEL is for writes to registers that might change the exception
84
+ * level - typically on older ARM chips. For those cases we need to
85
+ * re-read the new el when recomputing the translation flags.
86
+ */
87
+#define ARM_CP_SPECIAL 0x0001
88
+#define ARM_CP_CONST 0x0002
89
+#define ARM_CP_64BIT 0x0004
90
+#define ARM_CP_SUPPRESS_TB_END 0x0008
91
+#define ARM_CP_OVERRIDE 0x0010
92
+#define ARM_CP_ALIAS 0x0020
93
+#define ARM_CP_IO 0x0040
94
+#define ARM_CP_NO_RAW 0x0080
95
+#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
96
+#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
97
+#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
98
+#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
99
+#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
100
+#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
101
+#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
102
+#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
103
+#define ARM_CP_FPU 0x1000
104
+#define ARM_CP_SVE 0x2000
105
+#define ARM_CP_NO_GDB 0x4000
106
+#define ARM_CP_RAISES_EXC 0x8000
107
+#define ARM_CP_NEWEL 0x10000
108
+/* Used only as a terminator for ARMCPRegInfo lists */
109
+#define ARM_CP_SENTINEL 0xfffff
110
+/* Mask of only the flag bits in a type field */
111
+#define ARM_CP_FLAG_MASK 0x1f0ff
112
+
113
+/*
114
+ * Valid values for ARMCPRegInfo state field, indicating which of
115
+ * the AArch32 and AArch64 execution states this register is visible in.
116
+ * If the reginfo doesn't explicitly specify then it is AArch32 only.
117
+ * If the reginfo is declared to be visible in both states then a second
118
+ * reginfo is synthesised for the AArch32 view of the AArch64 register,
119
+ * such that the AArch32 view is the lower 32 bits of the AArch64 one.
120
+ * Note that we rely on the values of these enums as we iterate through
121
+ * the various states in some places.
122
+ */
123
+enum {
124
+ ARM_CP_STATE_AA32 = 0,
125
+ ARM_CP_STATE_AA64 = 1,
126
+ ARM_CP_STATE_BOTH = 2,
127
+};
128
+
129
+/*
130
+ * ARM CP register secure state flags. These flags identify security state
131
+ * attributes for a given CP register entry.
132
+ * The existence of both or neither secure and non-secure flags indicates that
133
+ * the register has both a secure and non-secure hash entry. A single one of
134
+ * these flags causes the register to only be hashed for the specified
135
+ * security state.
136
+ * Although definitions may have any combination of the S/NS bits, each
137
+ * registered entry will only have one to identify whether the entry is secure
138
+ * or non-secure.
139
+ */
140
+enum {
141
+ ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
142
+ ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
143
+};
144
+
145
+/*
146
+ * Return true if cptype is a valid type field. This is used to try to
147
+ * catch errors where the sentinel has been accidentally left off the end
148
+ * of a list of registers.
149
+ */
150
+static inline bool cptype_valid(int cptype)
151
+{
152
+ return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
153
+ || ((cptype & ARM_CP_SPECIAL) &&
154
+ ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
155
+}
156
+
157
+/*
158
+ * Access rights:
159
+ * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
160
+ * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
161
+ * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
162
+ * (ie any of the privileged modes in Secure state, or Monitor mode).
163
+ * If a register is accessible in one privilege level it's always accessible
164
+ * in higher privilege levels too. Since "Secure PL1" also follows this rule
165
+ * (ie anything visible in PL2 is visible in S-PL1, some things are only
166
+ * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
167
+ * terminology a little and call this PL3.
168
+ * In AArch64 things are somewhat simpler as the PLx bits line up exactly
169
+ * with the ELx exception levels.
170
+ *
171
+ * If access permissions for a register are more complex than can be
172
+ * described with these bits, then use a laxer set of restrictions, and
173
+ * do the more restrictive/complex check inside a helper function.
174
+ */
175
+#define PL3_R 0x80
176
+#define PL3_W 0x40
177
+#define PL2_R (0x20 | PL3_R)
178
+#define PL2_W (0x10 | PL3_W)
179
+#define PL1_R (0x08 | PL2_R)
180
+#define PL1_W (0x04 | PL2_W)
181
+#define PL0_R (0x02 | PL1_R)
182
+#define PL0_W (0x01 | PL1_W)
183
+
184
+/*
185
+ * For user-mode some registers are accessible to EL0 via a kernel
186
+ * trap-and-emulate ABI. In this case we define the read permissions
187
+ * as actually being PL0_R. However some bits of any given register
188
+ * may still be masked.
189
+ */
190
+#ifdef CONFIG_USER_ONLY
191
+#define PL0U_R PL0_R
192
+#else
193
+#define PL0U_R PL1_R
194
+#endif
195
+
196
+#define PL3_RW (PL3_R | PL3_W)
197
+#define PL2_RW (PL2_R | PL2_W)
198
+#define PL1_RW (PL1_R | PL1_W)
199
+#define PL0_RW (PL0_R | PL0_W)
200
+
201
+typedef enum CPAccessResult {
202
+ /* Access is permitted */
203
+ CP_ACCESS_OK = 0,
204
+ /*
205
+ * Access fails due to a configurable trap or enable which would
206
+ * result in a categorized exception syndrome giving information about
207
+ * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
208
+ * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
209
+ * PL1 if in EL0, otherwise to the current EL).
210
+ */
211
+ CP_ACCESS_TRAP = 1,
212
+ /*
213
+ * Access fails and results in an exception syndrome 0x0 ("uncategorized").
214
+ * Note that this is not a catch-all case -- the set of cases which may
215
+ * result in this failure is specifically defined by the architecture.
216
+ */
217
+ CP_ACCESS_TRAP_UNCATEGORIZED = 2,
218
+ /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
219
+ CP_ACCESS_TRAP_EL2 = 3,
220
+ CP_ACCESS_TRAP_EL3 = 4,
221
+ /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
222
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
223
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
224
+} CPAccessResult;
225
+
226
+typedef struct ARMCPRegInfo ARMCPRegInfo;
227
+
228
+/*
229
+ * Access functions for coprocessor registers. These cannot fail and
230
+ * may not raise exceptions.
231
+ */
232
+typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
233
+typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
234
+ uint64_t value);
235
+/* Access permission check functions for coprocessor registers. */
236
+typedef CPAccessResult CPAccessFn(CPUARMState *env,
237
+ const ARMCPRegInfo *opaque,
238
+ bool isread);
239
+/* Hook function for register reset */
240
+typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
241
+
242
+#define CP_ANY 0xff
243
+
244
+/* Definition of an ARM coprocessor register */
245
+struct ARMCPRegInfo {
246
+ /* Name of register (useful mainly for debugging, need not be unique) */
247
+ const char *name;
248
+ /*
249
+ * Location of register: coprocessor number and (crn,crm,opc1,opc2)
250
+ * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
251
+ * 'wildcard' field -- any value of that field in the MRC/MCR insn
252
+ * will be decoded to this register. The register read and write
253
+ * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
254
+ * used by the program, so it is possible to register a wildcard and
255
+ * then behave differently on read/write if necessary.
256
+ * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
257
+ * must both be zero.
258
+ * For AArch64-visible registers, opc0 is also used.
259
+ * Since there are no "coprocessors" in AArch64, cp is purely used as a
260
+ * way to distinguish (for KVM's benefit) guest-visible system registers
261
+ * from demuxed ones provided to preserve the "no side effects on
262
+ * KVM register read/write from QEMU" semantics. cp==0x13 is guest
263
+ * visible (to match KVM's encoding); cp==0 will be converted to
264
+ * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
265
+ */
266
+ uint8_t cp;
267
+ uint8_t crn;
268
+ uint8_t crm;
269
+ uint8_t opc0;
270
+ uint8_t opc1;
271
+ uint8_t opc2;
272
+ /* Execution state in which this register is visible: ARM_CP_STATE_* */
273
+ int state;
274
+ /* Register type: ARM_CP_* bits/values */
275
+ int type;
276
+ /* Access rights: PL*_[RW] */
277
+ int access;
278
+ /* Security state: ARM_CP_SECSTATE_* bits/values */
279
+ int secure;
280
+ /*
281
+ * The opaque pointer passed to define_arm_cp_regs_with_opaque() when
282
+ * this register was defined: can be used to hand data through to the
283
+ * register read/write functions, since they are passed the ARMCPRegInfo*.
284
+ */
285
+ void *opaque;
286
+ /*
287
+ * Value of this register, if it is ARM_CP_CONST. Otherwise, if
288
+ * fieldoffset is non-zero, the reset value of the register.
289
+ */
290
+ uint64_t resetvalue;
291
+ /*
292
+ * Offset of the field in CPUARMState for this register.
293
+ * This is not needed if either:
294
+ * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
295
+ * 2. both readfn and writefn are specified
296
+ */
297
+ ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
298
+
299
+ /*
300
+ * Offsets of the secure and non-secure fields in CPUARMState for the
301
+ * register if it is banked. These fields are only used during the static
302
+ * registration of a register. During hashing the bank associated
303
+ * with a given security state is copied to fieldoffset which is used from
304
+ * there on out.
305
+ *
306
+ * It is expected that register definitions use either fieldoffset or
307
+ * bank_fieldoffsets in the definition but not both. It is also expected
308
+ * that both bank offsets are set when defining a banked register. This
309
+ * use indicates that a register is banked.
310
+ */
311
+ ptrdiff_t bank_fieldoffsets[2];
312
+
313
+ /*
314
+ * Function for making any access checks for this register in addition to
315
+ * those specified by the 'access' permissions bits. If NULL, no extra
316
+ * checks required. The access check is performed at runtime, not at
317
+ * translate time.
318
+ */
319
+ CPAccessFn *accessfn;
320
+ /*
321
+ * Function for handling reads of this register. If NULL, then reads
322
+ * will be done by loading from the offset into CPUARMState specified
323
+ * by fieldoffset.
324
+ */
325
+ CPReadFn *readfn;
326
+ /*
327
+ * Function for handling writes of this register. If NULL, then writes
328
+ * will be done by writing to the offset into CPUARMState specified
329
+ * by fieldoffset.
330
+ */
331
+ CPWriteFn *writefn;
332
+ /*
333
+ * Function for doing a "raw" read; used when we need to copy
334
+ * coprocessor state to the kernel for KVM or out for
335
+ * migration. This only needs to be provided if there is also a
336
+ * readfn and it has side effects (for instance clear-on-read bits).
337
+ */
338
+ CPReadFn *raw_readfn;
339
+ /*
340
+ * Function for doing a "raw" write; used when we need to copy KVM
341
+ * kernel coprocessor state into userspace, or for inbound
342
+ * migration. This only needs to be provided if there is also a
343
+ * writefn and it masks out "unwritable" bits or has write-one-to-clear
344
+ * or similar behaviour.
345
+ */
346
+ CPWriteFn *raw_writefn;
347
+ /*
348
+ * Function for resetting the register. If NULL, then reset will be done
349
+ * by writing resetvalue to the field specified in fieldoffset. If
350
+ * fieldoffset is 0 then no reset will be done.
351
+ */
352
+ CPResetFn *resetfn;
353
+
354
+ /*
355
+ * "Original" writefn and readfn.
356
+ * For ARMv8.1-VHE register aliases, we overwrite the read/write
357
+ * accessor functions of various EL1/EL0 to perform the runtime
358
+ * check for which sysreg should actually be modified, and then
359
+ * forwards the operation. Before overwriting the accessors,
360
+ * the original function is copied here, so that accesses that
361
+ * really do go to the EL1/EL0 version proceed normally.
362
+ * (The corresponding EL2 register is linked via opaque.)
363
+ */
364
+ CPReadFn *orig_readfn;
365
+ CPWriteFn *orig_writefn;
366
+};
367
+
368
+/*
369
+ * Macros which are lvalues for the field in CPUARMState for the
370
+ * ARMCPRegInfo *ri.
371
+ */
372
+#define CPREG_FIELD32(env, ri) \
373
+ (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
374
+#define CPREG_FIELD64(env, ri) \
375
+ (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
376
+
377
+#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
378
+
379
+void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
380
+ const ARMCPRegInfo *regs, void *opaque);
381
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
382
+ const ARMCPRegInfo *regs, void *opaque);
383
+static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
384
+{
385
+ define_arm_cp_regs_with_opaque(cpu, regs, 0);
386
+}
387
+static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
388
+{
389
+ define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
390
+}
391
+const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
392
+
393
+/*
394
+ * Definition of an ARM co-processor register as viewed from
395
+ * userspace. This is used for presenting sanitised versions of
396
+ * registers to userspace when emulating the Linux AArch64 CPU
397
+ * ID/feature ABI (advertised as HWCAP_CPUID).
398
+ */
399
+typedef struct ARMCPRegUserSpaceInfo {
400
+ /* Name of register */
401
+ const char *name;
402
+
403
+ /* Is the name actually a glob pattern */
404
+ bool is_glob;
405
+
406
+ /* Only some bits are exported to user space */
407
+ uint64_t exported_bits;
408
+
409
+ /* Fixed bits are applied after the mask */
410
+ uint64_t fixed_bits;
411
+} ARMCPRegUserSpaceInfo;
412
+
413
+#define REGUSERINFO_SENTINEL { .name = NULL }
414
+
415
+void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
416
+
417
+/* CPWriteFn that can be used to implement writes-ignored behaviour */
418
+void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
419
+ uint64_t value);
420
+/* CPReadFn that can be used for read-as-zero behaviour */
421
+uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
422
+
423
+/*
424
+ * CPResetFn that does nothing, for use if no reset is required even
425
+ * if fieldoffset is non zero.
426
+ */
427
+void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
428
+
429
+/*
430
+ * Return true if this reginfo struct's field in the cpu state struct
431
+ * is 64 bits wide.
432
+ */
433
+static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
434
+{
435
+ return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
436
+}
437
+
438
+static inline bool cp_access_ok(int current_el,
439
+ const ARMCPRegInfo *ri, int isread)
440
+{
441
+ return (ri->access >> ((current_el * 2) + isread)) & 1;
442
+}
443
+
444
+/* Raw read of a coprocessor register (as needed for migration, etc) */
445
+uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
446
+
447
+#endif /* TARGET_ARM_CPREGS_H */
448
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
449
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
450
--- a/target/arm/cpu.h
20
--- a/target/arm/cpu.h
451
+++ b/target/arm/cpu.h
21
+++ b/target/arm/cpu.h
452
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
22
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
453
return kvmid;
23
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
454
}
24
/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
455
25
FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
456
-/* ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
26
+/* Set if in secure mode */
457
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
27
+FIELD(TBFLAG_M32, SECURE, 6, 1)
458
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
459
- * TCG can assume the value to be constant (ie load at translate time)
460
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
461
- * indicates that the TB should not be ended after a write to this register
462
- * (the default is that the TB ends after cp writes). OVERRIDE permits
463
- * a register definition to override a previous definition for the
464
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
465
- * old must have the OVERRIDE bit set.
466
- * ALIAS indicates that this register is an alias view of some underlying
467
- * state which is also visible via another register, and that the other
468
- * register is handling migration and reset; registers marked ALIAS will not be
469
- * migrated but may have their state set by syncing of register state from KVM.
470
- * NO_RAW indicates that this register has no underlying state and does not
471
- * support raw access for state saving/loading; it will not be used for either
472
- * migration or KVM state synchronization. (Typically this is for "registers"
473
- * which are actually used as instructions for cache maintenance and so on.)
474
- * IO indicates that this register does I/O and therefore its accesses
475
- * need to be marked with gen_io_start() and also end the TB. In particular,
476
- * registers which implement clocks or timers require this.
477
- * RAISES_EXC is for when the read or write hook might raise an exception;
478
- * the generated code will synchronize the CPU state before calling the hook
479
- * so that it is safe for the hook to call raise_exception().
480
- * NEWEL is for writes to registers that might change the exception
481
- * level - typically on older ARM chips. For those cases we need to
482
- * re-read the new el when recomputing the translation flags.
483
- */
484
-#define ARM_CP_SPECIAL 0x0001
485
-#define ARM_CP_CONST 0x0002
486
-#define ARM_CP_64BIT 0x0004
487
-#define ARM_CP_SUPPRESS_TB_END 0x0008
488
-#define ARM_CP_OVERRIDE 0x0010
489
-#define ARM_CP_ALIAS 0x0020
490
-#define ARM_CP_IO 0x0040
491
-#define ARM_CP_NO_RAW 0x0080
492
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
493
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
494
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
495
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
496
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
497
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
498
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
499
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
500
-#define ARM_CP_FPU 0x1000
501
-#define ARM_CP_SVE 0x2000
502
-#define ARM_CP_NO_GDB 0x4000
503
-#define ARM_CP_RAISES_EXC 0x8000
504
-#define ARM_CP_NEWEL 0x10000
505
-/* Used only as a terminator for ARMCPRegInfo lists */
506
-#define ARM_CP_SENTINEL 0xfffff
507
-/* Mask of only the flag bits in a type field */
508
-#define ARM_CP_FLAG_MASK 0x1f0ff
509
-
510
-/* Valid values for ARMCPRegInfo state field, indicating which of
511
- * the AArch32 and AArch64 execution states this register is visible in.
512
- * If the reginfo doesn't explicitly specify then it is AArch32 only.
513
- * If the reginfo is declared to be visible in both states then a second
514
- * reginfo is synthesised for the AArch32 view of the AArch64 register,
515
- * such that the AArch32 view is the lower 32 bits of the AArch64 one.
516
- * Note that we rely on the values of these enums as we iterate through
517
- * the various states in some places.
518
- */
519
-enum {
520
- ARM_CP_STATE_AA32 = 0,
521
- ARM_CP_STATE_AA64 = 1,
522
- ARM_CP_STATE_BOTH = 2,
523
-};
524
-
525
-/* ARM CP register secure state flags. These flags identify security state
526
- * attributes for a given CP register entry.
527
- * The existence of both or neither secure and non-secure flags indicates that
528
- * the register has both a secure and non-secure hash entry. A single one of
529
- * these flags causes the register to only be hashed for the specified
530
- * security state.
531
- * Although definitions may have any combination of the S/NS bits, each
532
- * registered entry will only have one to identify whether the entry is secure
533
- * or non-secure.
534
- */
535
-enum {
536
- ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
537
- ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
538
-};
539
-
540
-/* Return true if cptype is a valid type field. This is used to try to
541
- * catch errors where the sentinel has been accidentally left off the end
542
- * of a list of registers.
543
- */
544
-static inline bool cptype_valid(int cptype)
545
-{
546
- return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
547
- || ((cptype & ARM_CP_SPECIAL) &&
548
- ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
549
-}
550
-
551
-/* Access rights:
552
- * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
553
- * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
554
- * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
555
- * (ie any of the privileged modes in Secure state, or Monitor mode).
556
- * If a register is accessible in one privilege level it's always accessible
557
- * in higher privilege levels too. Since "Secure PL1" also follows this rule
558
- * (ie anything visible in PL2 is visible in S-PL1, some things are only
559
- * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
560
- * terminology a little and call this PL3.
561
- * In AArch64 things are somewhat simpler as the PLx bits line up exactly
562
- * with the ELx exception levels.
563
- *
564
- * If access permissions for a register are more complex than can be
565
- * described with these bits, then use a laxer set of restrictions, and
566
- * do the more restrictive/complex check inside a helper function.
567
- */
568
-#define PL3_R 0x80
569
-#define PL3_W 0x40
570
-#define PL2_R (0x20 | PL3_R)
571
-#define PL2_W (0x10 | PL3_W)
572
-#define PL1_R (0x08 | PL2_R)
573
-#define PL1_W (0x04 | PL2_W)
574
-#define PL0_R (0x02 | PL1_R)
575
-#define PL0_W (0x01 | PL1_W)
576
-
577
-/*
578
- * For user-mode some registers are accessible to EL0 via a kernel
579
- * trap-and-emulate ABI. In this case we define the read permissions
580
- * as actually being PL0_R. However some bits of any given register
581
- * may still be masked.
582
- */
583
-#ifdef CONFIG_USER_ONLY
584
-#define PL0U_R PL0_R
585
-#else
586
-#define PL0U_R PL1_R
587
-#endif
588
-
589
-#define PL3_RW (PL3_R | PL3_W)
590
-#define PL2_RW (PL2_R | PL2_W)
591
-#define PL1_RW (PL1_R | PL1_W)
592
-#define PL0_RW (PL0_R | PL0_W)
593
-
594
/* Return the highest implemented Exception Level */
595
static inline int arm_highest_el(CPUARMState *env)
596
{
597
@@ -XXX,XX +XXX,XX @@ static inline int arm_current_el(CPUARMState *env)
598
}
599
}
600
601
-typedef struct ARMCPRegInfo ARMCPRegInfo;
602
-
603
-typedef enum CPAccessResult {
604
- /* Access is permitted */
605
- CP_ACCESS_OK = 0,
606
- /* Access fails due to a configurable trap or enable which would
607
- * result in a categorized exception syndrome giving information about
608
- * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
609
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
610
- * PL1 if in EL0, otherwise to the current EL).
611
- */
612
- CP_ACCESS_TRAP = 1,
613
- /* Access fails and results in an exception syndrome 0x0 ("uncategorized").
614
- * Note that this is not a catch-all case -- the set of cases which may
615
- * result in this failure is specifically defined by the architecture.
616
- */
617
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
618
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
619
- CP_ACCESS_TRAP_EL2 = 3,
620
- CP_ACCESS_TRAP_EL3 = 4,
621
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
622
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
623
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
624
-} CPAccessResult;
625
-
626
-/* Access functions for coprocessor registers. These cannot fail and
627
- * may not raise exceptions.
628
- */
629
-typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
630
-typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
631
- uint64_t value);
632
-/* Access permission check functions for coprocessor registers. */
633
-typedef CPAccessResult CPAccessFn(CPUARMState *env,
634
- const ARMCPRegInfo *opaque,
635
- bool isread);
636
-/* Hook function for register reset */
637
-typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
638
-
639
-#define CP_ANY 0xff
640
-
641
-/* Definition of an ARM coprocessor register */
642
-struct ARMCPRegInfo {
643
- /* Name of register (useful mainly for debugging, need not be unique) */
644
- const char *name;
645
- /* Location of register: coprocessor number and (crn,crm,opc1,opc2)
646
- * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
647
- * 'wildcard' field -- any value of that field in the MRC/MCR insn
648
- * will be decoded to this register. The register read and write
649
- * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
650
- * used by the program, so it is possible to register a wildcard and
651
- * then behave differently on read/write if necessary.
652
- * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
653
- * must both be zero.
654
- * For AArch64-visible registers, opc0 is also used.
655
- * Since there are no "coprocessors" in AArch64, cp is purely used as a
656
- * way to distinguish (for KVM's benefit) guest-visible system registers
657
- * from demuxed ones provided to preserve the "no side effects on
658
- * KVM register read/write from QEMU" semantics. cp==0x13 is guest
659
- * visible (to match KVM's encoding); cp==0 will be converted to
660
- * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
661
- */
662
- uint8_t cp;
663
- uint8_t crn;
664
- uint8_t crm;
665
- uint8_t opc0;
666
- uint8_t opc1;
667
- uint8_t opc2;
668
- /* Execution state in which this register is visible: ARM_CP_STATE_* */
669
- int state;
670
- /* Register type: ARM_CP_* bits/values */
671
- int type;
672
- /* Access rights: PL*_[RW] */
673
- int access;
674
- /* Security state: ARM_CP_SECSTATE_* bits/values */
675
- int secure;
676
- /* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
677
- * this register was defined: can be used to hand data through to the
678
- * register read/write functions, since they are passed the ARMCPRegInfo*.
679
- */
680
- void *opaque;
681
- /* Value of this register, if it is ARM_CP_CONST. Otherwise, if
682
- * fieldoffset is non-zero, the reset value of the register.
683
- */
684
- uint64_t resetvalue;
685
- /* Offset of the field in CPUARMState for this register.
686
- *
687
- * This is not needed if either:
688
- * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
689
- * 2. both readfn and writefn are specified
690
- */
691
- ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
692
-
693
- /* Offsets of the secure and non-secure fields in CPUARMState for the
694
- * register if it is banked. These fields are only used during the static
695
- * registration of a register. During hashing the bank associated
696
- * with a given security state is copied to fieldoffset which is used from
697
- * there on out.
698
- *
699
- * It is expected that register definitions use either fieldoffset or
700
- * bank_fieldoffsets in the definition but not both. It is also expected
701
- * that both bank offsets are set when defining a banked register. This
702
- * use indicates that a register is banked.
703
- */
704
- ptrdiff_t bank_fieldoffsets[2];
705
-
706
- /* Function for making any access checks for this register in addition to
707
- * those specified by the 'access' permissions bits. If NULL, no extra
708
- * checks required. The access check is performed at runtime, not at
709
- * translate time.
710
- */
711
- CPAccessFn *accessfn;
712
- /* Function for handling reads of this register. If NULL, then reads
713
- * will be done by loading from the offset into CPUARMState specified
714
- * by fieldoffset.
715
- */
716
- CPReadFn *readfn;
717
- /* Function for handling writes of this register. If NULL, then writes
718
- * will be done by writing to the offset into CPUARMState specified
719
- * by fieldoffset.
720
- */
721
- CPWriteFn *writefn;
722
- /* Function for doing a "raw" read; used when we need to copy
723
- * coprocessor state to the kernel for KVM or out for
724
- * migration. This only needs to be provided if there is also a
725
- * readfn and it has side effects (for instance clear-on-read bits).
726
- */
727
- CPReadFn *raw_readfn;
728
- /* Function for doing a "raw" write; used when we need to copy KVM
729
- * kernel coprocessor state into userspace, or for inbound
730
- * migration. This only needs to be provided if there is also a
731
- * writefn and it masks out "unwritable" bits or has write-one-to-clear
732
- * or similar behaviour.
733
- */
734
- CPWriteFn *raw_writefn;
735
- /* Function for resetting the register. If NULL, then reset will be done
736
- * by writing resetvalue to the field specified in fieldoffset. If
737
- * fieldoffset is 0 then no reset will be done.
738
- */
739
- CPResetFn *resetfn;
740
-
741
- /*
742
- * "Original" writefn and readfn.
743
- * For ARMv8.1-VHE register aliases, we overwrite the read/write
744
- * accessor functions of various EL1/EL0 to perform the runtime
745
- * check for which sysreg should actually be modified, and then
746
- * forwards the operation. Before overwriting the accessors,
747
- * the original function is copied here, so that accesses that
748
- * really do go to the EL1/EL0 version proceed normally.
749
- * (The corresponding EL2 register is linked via opaque.)
750
- */
751
- CPReadFn *orig_readfn;
752
- CPWriteFn *orig_writefn;
753
-};
754
-
755
-/* Macros which are lvalues for the field in CPUARMState for the
756
- * ARMCPRegInfo *ri.
757
- */
758
-#define CPREG_FIELD32(env, ri) \
759
- (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
760
-#define CPREG_FIELD64(env, ri) \
761
- (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
762
-
763
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
764
-
765
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
766
- const ARMCPRegInfo *regs, void *opaque);
767
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
768
- const ARMCPRegInfo *regs, void *opaque);
769
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
770
-{
771
- define_arm_cp_regs_with_opaque(cpu, regs, 0);
772
-}
773
-static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
774
-{
775
- define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
776
-}
777
-const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
778
-
779
-/*
780
- * Definition of an ARM co-processor register as viewed from
781
- * userspace. This is used for presenting sanitised versions of
782
- * registers to userspace when emulating the Linux AArch64 CPU
783
- * ID/feature ABI (advertised as HWCAP_CPUID).
784
- */
785
-typedef struct ARMCPRegUserSpaceInfo {
786
- /* Name of register */
787
- const char *name;
788
-
789
- /* Is the name actually a glob pattern */
790
- bool is_glob;
791
-
792
- /* Only some bits are exported to user space */
793
- uint64_t exported_bits;
794
-
795
- /* Fixed bits are applied after the mask */
796
- uint64_t fixed_bits;
797
-} ARMCPRegUserSpaceInfo;
798
-
799
-#define REGUSERINFO_SENTINEL { .name = NULL }
800
-
801
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
802
-
803
-/* CPWriteFn that can be used to implement writes-ignored behaviour */
804
-void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
805
- uint64_t value);
806
-/* CPReadFn that can be used for read-as-zero behaviour */
807
-uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
808
-
809
-/* CPResetFn that does nothing, for use if no reset is required even
810
- * if fieldoffset is non zero.
811
- */
812
-void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
813
-
814
-/* Return true if this reginfo struct's field in the cpu state struct
815
- * is 64 bits wide.
816
- */
817
-static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
818
-{
819
- return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
820
-}
821
-
822
-static inline bool cp_access_ok(int current_el,
823
- const ARMCPRegInfo *ri, int isread)
824
-{
825
- return (ri->access >> ((current_el * 2) + isread)) & 1;
826
-}
827
-
828
-/* Raw read of a coprocessor register (as needed for migration, etc) */
829
-uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
830
-
831
/**
832
* write_list_to_cpustate
833
* @cpu: ARMCPU
834
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
835
index XXXXXXX..XXXXXXX 100644
836
--- a/hw/arm/pxa2xx.c
837
+++ b/hw/arm/pxa2xx.c
838
@@ -XXX,XX +XXX,XX @@
839
#include "qemu/cutils.h"
840
#include "qemu/log.h"
841
#include "qom/object.h"
842
+#include "target/arm/cpregs.h"
843
844
static struct {
845
hwaddr io_base;
846
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
847
index XXXXXXX..XXXXXXX 100644
848
--- a/hw/arm/pxa2xx_pic.c
849
+++ b/hw/arm/pxa2xx_pic.c
850
@@ -XXX,XX +XXX,XX @@
851
#include "hw/sysbus.h"
852
#include "migration/vmstate.h"
853
#include "qom/object.h"
854
+#include "target/arm/cpregs.h"
855
856
#define ICIP    0x00    /* Interrupt Controller IRQ Pending register */
857
#define ICMR    0x04    /* Interrupt Controller Mask register */
858
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
859
index XXXXXXX..XXXXXXX 100644
860
--- a/hw/intc/arm_gicv3_cpuif.c
861
+++ b/hw/intc/arm_gicv3_cpuif.c
862
@@ -XXX,XX +XXX,XX @@
863
#include "gicv3_internal.h"
864
#include "hw/irq.h"
865
#include "cpu.h"
866
+#include "target/arm/cpregs.h"
867
28
868
/*
29
/*
869
* Special case return value from hppvi_index(); must be larger than
30
* Bit usage when in AArch64 state
870
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
871
index XXXXXXX..XXXXXXX 100644
872
--- a/hw/intc/arm_gicv3_kvm.c
873
+++ b/hw/intc/arm_gicv3_kvm.c
874
@@ -XXX,XX +XXX,XX @@
875
#include "vgic_common.h"
876
#include "migration/blocker.h"
877
#include "qom/object.h"
878
+#include "target/arm/cpregs.h"
879
+
880
881
#ifdef DEBUG_GICV3_KVM
882
#define DPRINTF(fmt, ...) \
883
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
884
index XXXXXXX..XXXXXXX 100644
885
--- a/target/arm/cpu.c
886
+++ b/target/arm/cpu.c
887
@@ -XXX,XX +XXX,XX @@
888
#include "kvm_arm.h"
889
#include "disas/capstone.h"
890
#include "fpu/softfloat.h"
891
+#include "cpregs.h"
892
893
static void arm_cpu_set_pc(CPUState *cs, vaddr value)
894
{
895
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
896
index XXXXXXX..XXXXXXX 100644
897
--- a/target/arm/cpu64.c
898
+++ b/target/arm/cpu64.c
899
@@ -XXX,XX +XXX,XX @@
900
#include "hvf_arm.h"
901
#include "qapi/visitor.h"
902
#include "hw/qdev-properties.h"
903
+#include "cpregs.h"
904
905
906
#ifndef CONFIG_USER_ONLY
907
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
908
index XXXXXXX..XXXXXXX 100644
909
--- a/target/arm/cpu_tcg.c
910
+++ b/target/arm/cpu_tcg.c
911
@@ -XXX,XX +XXX,XX @@
912
#if !defined(CONFIG_USER_ONLY)
913
#include "hw/boards.h"
914
#endif
915
+#include "cpregs.h"
916
917
/* CPU models. These are not needed for the AArch64 linux-user build. */
918
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
919
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
920
index XXXXXXX..XXXXXXX 100644
921
--- a/target/arm/gdbstub.c
922
+++ b/target/arm/gdbstub.c
923
@@ -XXX,XX +XXX,XX @@
924
*/
925
#include "qemu/osdep.h"
926
#include "cpu.h"
927
-#include "internals.h"
928
#include "exec/gdbstub.h"
929
+#include "internals.h"
930
+#include "cpregs.h"
931
932
typedef struct RegisterSysregXmlParam {
933
CPUState *cs;
934
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
935
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
936
--- a/target/arm/helper.c
33
--- a/target/arm/helper.c
937
+++ b/target/arm/helper.c
34
+++ b/target/arm/helper.c
938
@@ -XXX,XX +XXX,XX @@
35
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
939
#include "exec/cpu_ldst.h"
36
DP_TBFLAG_M32(flags, STACKCHECK, 1);
940
#include "semihosting/common-semi.h"
37
}
941
#endif
38
942
+#include "cpregs.h"
39
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
943
40
+ DP_TBFLAG_M32(flags, SECURE, 1);
944
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
41
+ }
945
#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
42
+
946
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
43
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
947
index XXXXXXX..XXXXXXX 100644
44
}
948
--- a/target/arm/op_helper.c
949
+++ b/target/arm/op_helper.c
950
@@ -XXX,XX +XXX,XX @@
951
#include "internals.h"
952
#include "exec/exec-all.h"
953
#include "exec/cpu_ldst.h"
954
+#include "cpregs.h"
955
956
#define SIGNBIT (uint32_t)0x80000000
957
#define SIGNBIT64 ((uint64_t)1 << 63)
958
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
959
index XXXXXXX..XXXXXXX 100644
960
--- a/target/arm/translate-a64.c
961
+++ b/target/arm/translate-a64.c
962
@@ -XXX,XX +XXX,XX @@
963
#include "translate.h"
964
#include "internals.h"
965
#include "qemu/host-utils.h"
966
-
967
#include "semihosting/semihost.h"
968
#include "exec/gen-icount.h"
969
-
970
#include "exec/helper-proto.h"
971
#include "exec/helper-gen.h"
972
#include "exec/log.h"
973
-
974
+#include "cpregs.h"
975
#include "translate-a64.h"
976
#include "qemu/atomic128.h"
977
45
978
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
diff --git a/target/arm/translate.c b/target/arm/translate.c
979
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
980
--- a/target/arm/translate.c
48
--- a/target/arm/translate.c
981
+++ b/target/arm/translate.c
49
+++ b/target/arm/translate.c
982
@@ -XXX,XX +XXX,XX @@
50
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
983
#include "qemu/bitops.h"
51
dc->vfp_enabled = 1;
984
#include "arm_ldst.h"
52
dc->be_data = MO_TE;
985
#include "semihosting/semihost.h"
53
dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
986
-
54
- dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
987
#include "exec/helper-proto.h"
55
- regime_is_secure(env, dc->mmu_idx);
988
#include "exec/helper-gen.h"
56
+ dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
989
-
57
dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
990
#include "exec/log.h"
58
dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
991
+#include "cpregs.h"
59
dc->v7m_new_fp_ctxt_needed =
992
993
994
#define ENABLE_ARCH_4T arm_dc_feature(s, ARM_FEATURE_V4T)
995
--
60
--
996
2.25.1
61
2.25.1
997
998
From: Richard Henderson <richard.henderson@linaro.org>

Move the computation of key to the top of the function.
Hoist the resolution of cp as well, as an input to the
computation of key.

This will be required by a subsequent patch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 49 +++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 22 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 42 ----------------------------------------
 target/arm/ptw.c | 44 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 44 deletions(-)
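The "hoist the key" refactoring in the first message is easier to see in isolation. Below is a minimal, self-contained sketch of the pattern the patch applies: resolve the effective cp value and derive the lookup key once, at the top, before any per-register state is touched. RegDef, encode_key and resolve_key are hypothetical stand-ins, not the QEMU ARMCPRegInfo/ENCODE_*_CP_REG definitions; only the cp==0 / STATE_BOTH and cp==0x13 conventions are taken from the patch text.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real reginfo struct and encodings. */
enum { CP_STATE_AA32, CP_STATE_AA64, CP_STATE_BOTH };

typedef struct {
    int state;                      /* CP_STATE_* */
    uint8_t cp, crn, crm, opc0, opc1, opc2;
} RegDef;

/* Toy key encoding: pack the coprocessor/opcode fields into one word. */
static uint32_t encode_key(uint8_t cp, uint8_t crn, uint8_t crm,
                           uint8_t opc0, uint8_t opc1, uint8_t opc2)
{
    return ((uint32_t)cp << 24) | (crn << 16) | (crm << 11) |
           (opc0 << 8) | (opc1 << 4) | opc2;
}

/*
 * Resolve the effective cp value and compute the hash key up front,
 * so the rest of the registration path can consume it as an input.
 */
static uint32_t resolve_key(const RegDef *r, int view)
{
    uint8_t cp = r->cp;

    if (view == CP_STATE_AA32) {
        /* An unset .cp on a BOTH definition means "cp15". */
        if (cp == 0 && r->state == CP_STATE_BOTH) {
            cp = 15;
        }
    } else {
        /* AArch64 view: cp==0 or a BOTH definition means "standard sysreg". */
        if (cp == 0 || r->state == CP_STATE_BOTH) {
            cp = 0x13;
        }
    }
    return encode_key(cp, r->crn, r->crm, r->opc0, r->opc1, r->opc2);
}

int main(void)
{
    RegDef r = { .state = CP_STATE_BOTH, .crn = 1, .crm = 0, .opc1 = 0 };
    printf("aa32 key: %#x\n", (unsigned)resolve_key(&r, CP_STATE_AA32));
    printf("aa64 key: %#x\n", (unsigned)resolve_key(&r, CP_STATE_AA64));
    return 0;
}

Computing the key before allocating or copying the runtime structure is what lets the later patch in the series perform the override check early.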
16
14
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
17
--- a/target/arm/internals.h
20
+++ b/target/arm/helper.c
18
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
19
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
22
ARMCPRegInfo *r2;
20
}
23
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
21
}
24
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
22
25
+ int cp = r->cp;
23
-/* Return true if this address translation regime is secure */
26
size_t name_len;
24
-static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
27
25
-{
28
+ switch (state) {
26
- switch (mmu_idx) {
29
+ case ARM_CP_STATE_AA32:
27
- case ARMMMUIdx_E10_0:
30
+ /* We assume it is a cp15 register if the .cp field is left unset. */
28
- case ARMMMUIdx_E10_1:
31
+ if (cp == 0 && r->state == ARM_CP_STATE_BOTH) {
29
- case ARMMMUIdx_E10_1_PAN:
32
+ cp = 15;
30
- case ARMMMUIdx_E20_0:
33
+ }
31
- case ARMMMUIdx_E20_2:
34
+ key = ENCODE_CP_REG(cp, is64, ns, r->crn, crm, opc1, opc2);
32
- case ARMMMUIdx_E20_2_PAN:
33
- case ARMMMUIdx_Stage1_E0:
34
- case ARMMMUIdx_Stage1_E1:
35
- case ARMMMUIdx_Stage1_E1_PAN:
36
- case ARMMMUIdx_E2:
37
- case ARMMMUIdx_Stage2:
38
- case ARMMMUIdx_MPrivNegPri:
39
- case ARMMMUIdx_MUserNegPri:
40
- case ARMMMUIdx_MPriv:
41
- case ARMMMUIdx_MUser:
42
- return false;
43
- case ARMMMUIdx_SE3:
44
- case ARMMMUIdx_SE10_0:
45
- case ARMMMUIdx_SE10_1:
46
- case ARMMMUIdx_SE10_1_PAN:
47
- case ARMMMUIdx_SE20_0:
48
- case ARMMMUIdx_SE20_2:
49
- case ARMMMUIdx_SE20_2_PAN:
50
- case ARMMMUIdx_Stage1_SE0:
51
- case ARMMMUIdx_Stage1_SE1:
52
- case ARMMMUIdx_Stage1_SE1_PAN:
53
- case ARMMMUIdx_SE2:
54
- case ARMMMUIdx_Stage2_S:
55
- case ARMMMUIdx_MSPrivNegPri:
56
- case ARMMMUIdx_MSUserNegPri:
57
- case ARMMMUIdx_MSPriv:
58
- case ARMMMUIdx_MSUser:
59
- return true;
60
- default:
61
- g_assert_not_reached();
62
- }
63
-}
64
-
65
static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
66
{
67
switch (mmu_idx) {
68
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/ptw.c
71
+++ b/target/arm/ptw.c
72
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
73
MMUAccessType access_type, ARMMMUIdx mmu_idx,
74
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
75
{
76
+ bool is_secure;
77
+
78
+ switch (mmu_idx) {
79
+ case ARMMMUIdx_E10_0:
80
+ case ARMMMUIdx_E10_1:
81
+ case ARMMMUIdx_E10_1_PAN:
82
+ case ARMMMUIdx_E20_0:
83
+ case ARMMMUIdx_E20_2:
84
+ case ARMMMUIdx_E20_2_PAN:
85
+ case ARMMMUIdx_Stage1_E0:
86
+ case ARMMMUIdx_Stage1_E1:
87
+ case ARMMMUIdx_Stage1_E1_PAN:
88
+ case ARMMMUIdx_E2:
89
+ case ARMMMUIdx_Stage2:
90
+ case ARMMMUIdx_MPrivNegPri:
91
+ case ARMMMUIdx_MUserNegPri:
92
+ case ARMMMUIdx_MPriv:
93
+ case ARMMMUIdx_MUser:
94
+ is_secure = false;
35
+ break;
95
+ break;
36
+ case ARM_CP_STATE_AA64:
96
+ case ARMMMUIdx_SE3:
37
+ /*
97
+ case ARMMMUIdx_SE10_0:
38
+ * To allow abbreviation of ARMCPRegInfo definitions, we treat
98
+ case ARMMMUIdx_SE10_1:
39
+ * cp == 0 as equivalent to the value for "standard guest-visible
99
+ case ARMMMUIdx_SE10_1_PAN:
40
+ * sysreg". STATE_BOTH definitions are also always "standard sysreg"
100
+ case ARMMMUIdx_SE20_0:
41
+ * in their AArch64 view (the .cp value may be non-zero for the
101
+ case ARMMMUIdx_SE20_2:
42
+ * benefit of the AArch32 view).
102
+ case ARMMMUIdx_SE20_2_PAN:
43
+ */
103
+ case ARMMMUIdx_Stage1_SE0:
44
+ if (cp == 0 || r->state == ARM_CP_STATE_BOTH) {
104
+ case ARMMMUIdx_Stage1_SE1:
45
+ cp = CP_REG_ARM64_SYSREG_CP;
105
+ case ARMMMUIdx_Stage1_SE1_PAN:
46
+ }
106
+ case ARMMMUIdx_SE2:
47
+ key = ENCODE_AA64_CP_REG(cp, r->crn, crm, r->opc0, opc1, opc2);
107
+ case ARMMMUIdx_Stage2_S:
108
+ case ARMMMUIdx_MSPrivNegPri:
109
+ case ARMMMUIdx_MSUserNegPri:
110
+ case ARMMMUIdx_MSPriv:
111
+ case ARMMMUIdx_MSUser:
112
+ is_secure = true;
48
+ break;
113
+ break;
49
+ default:
114
+ default:
50
+ g_assert_not_reached();
115
+ g_assert_not_reached();
51
+ }
116
+ }
52
+
117
return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
53
/* Combine cpreg and name into one allocation. */
118
- regime_is_secure(env, mmu_idx),
54
name_len = strlen(name) + 1;
119
- result, fi);
55
r2 = g_malloc(sizeof(*r2) + name_len);
120
+ is_secure, result, fi);
56
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
121
}
57
}
122
58
123
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
59
if (r->state == ARM_CP_STATE_BOTH) {
60
- /* We assume it is a cp15 register if the .cp field is left unset.
61
- */
62
- if (r2->cp == 0) {
63
- r2->cp = 15;
64
- }
65
-
66
#if HOST_BIG_ENDIAN
67
if (r2->fieldoffset) {
68
r2->fieldoffset += sizeof(uint32_t);
69
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
#endif
71
}
72
}
73
- if (state == ARM_CP_STATE_AA64) {
74
- /* To allow abbreviation of ARMCPRegInfo
75
- * definitions, we treat cp == 0 as equivalent to
76
- * the value for "standard guest-visible sysreg".
77
- * STATE_BOTH definitions are also always "standard
78
- * sysreg" in their AArch64 view (the .cp value may
79
- * be non-zero for the benefit of the AArch32 view).
80
- */
81
- if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
82
- r2->cp = CP_REG_ARM64_SYSREG_CP;
83
- }
84
- key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
85
- r2->opc0, opc1, opc2);
86
- } else {
87
- key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
88
- }
89
if (opaque) {
90
r2->opaque = opaque;
91
}
92
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
93
/* Make sure reginfo passed to helpers for wildcarded regs
94
* has the correct crm/opc1/opc2 for this reg, not CP_ANY:
95
*/
96
+ r2->cp = cp;
97
r2->crm = crm;
98
r2->opc1 = opc1;
99
r2->opc2 = opc2;
100
--
124
--
101
2.25.1
125
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Perform the override check early, so that it is still done
even when we decide to discard an unreachable cpreg.

Use assert not printf+abort.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)
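For the override check described in the first message, here is a compressed, self-contained sketch of the idea: before inserting a definition, look up the key, and require that a redefinition is explicitly allowed, with assert() rather than printf+abort. Reg, REG_OVERRIDE, lookup and define_reg are hypothetical names standing in for ARMCPRegInfo, ARM_CP_OVERRIDE and the cp_regs hash table; this is not the QEMU code itself.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define REG_OVERRIDE (1u << 7)   /* stand-in for ARM_CP_OVERRIDE */

typedef struct {
    uint32_t key;
    uint32_t type;
} Reg;

#define MAX_REGS 16
static Reg registry[MAX_REGS];
static size_t nregs;

static const Reg *lookup(uint32_t key)
{
    for (size_t i = 0; i < nregs; i++) {
        if (registry[i].key == key) {
            return &registry[i];
        }
    }
    return NULL;
}

static void define_reg(uint32_t key, uint32_t type)
{
    /*
     * Do the override check first, before any allocation or mutation:
     * redefining an existing key is a programming error unless either
     * the new or the old definition explicitly permits overriding, so
     * a plain assert() is sufficient.
     */
    if (!(type & REG_OVERRIDE)) {
        const Reg *old = lookup(key);
        if (old) {
            assert(old->type & REG_OVERRIDE);
        }
    }
    assert(nregs < MAX_REGS);
    registry[nregs++] = (Reg){ .key = key, .type = type };
}

int main(void)
{
    define_reg(0x100, REG_OVERRIDE);  /* marked as overridable */
    define_reg(0x100, 0);             /* ok: the old entry allowed it */
    return 0;
}

Doing the check on the raw key also means it still fires for definitions that are later discarded as unreachable, which is the point of moving it earlier.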
15
13
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
19
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
19
20
#ifdef CONFIG_TCG
21
static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
22
- MMUAccessType access_type, ARMMMUIdx mmu_idx)
23
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
24
+ bool is_secure)
25
{
26
bool ret;
27
uint64_t par64;
28
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
29
ARMMMUFaultInfo fi = {};
30
GetPhysAddrResult res = {};
31
32
- ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi);
33
+ ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
34
+ is_secure, &res, &fi);
35
36
/*
37
* ATS operations only do S1 or S1+S2 translations, so we never
38
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
39
switch (el) {
40
case 3:
41
mmu_idx = ARMMMUIdx_SE3;
42
+ secure = true;
43
break;
44
case 2:
45
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
46
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
47
switch (el) {
48
case 3:
49
mmu_idx = ARMMMUIdx_SE10_0;
50
+ secure = true;
51
break;
52
case 2:
53
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
54
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
55
case 4:
56
/* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
57
mmu_idx = ARMMMUIdx_E10_1;
58
+ secure = false;
59
break;
60
case 6:
61
/* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
62
mmu_idx = ARMMMUIdx_E10_0;
63
+ secure = false;
64
break;
65
default:
21
g_assert_not_reached();
66
g_assert_not_reached();
22
}
67
}
23
68
24
+ /* Overriding of an existing definition must be explicitly requested. */
69
- par64 = do_ats_write(env, value, access_type, mmu_idx);
25
+ if (!(r->type & ARM_CP_OVERRIDE)) {
70
+ par64 = do_ats_write(env, value, access_type, mmu_idx, secure);
26
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
71
27
+ if (oldreg) {
72
A32_BANKED_CURRENT_REG_SET(env, par, par64);
28
+ assert(oldreg->type & ARM_CP_OVERRIDE);
73
#else
29
+ }
74
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
30
+ }
75
MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
31
+
76
uint64_t par64;
32
/* Combine cpreg and name into one allocation. */
77
33
name_len = strlen(name) + 1;
78
- par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
34
r2 = g_malloc(sizeof(*r2) + name_len);
79
+ /* There is no SecureEL2 for AArch32. */
35
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
80
+ par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);
36
assert(!raw_accessors_invalid(r2));
81
82
A32_BANKED_CURRENT_REG_SET(env, par, par64);
83
#else
84
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
85
break;
86
case 6: /* AT S1E3R, AT S1E3W */
87
mmu_idx = ARMMMUIdx_SE3;
88
+ secure = true;
89
break;
90
default:
91
g_assert_not_reached();
92
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
93
g_assert_not_reached();
37
}
94
}
38
95
39
- /* Overriding of an existing definition must be explicitly
96
- env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
40
- * requested.
97
+ env->cp15.par_el[1] = do_ats_write(env, value, access_type,
41
- */
98
+ mmu_idx, secure);
42
- if (!(r->type & ARM_CP_OVERRIDE)) {
99
#else
43
- const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
100
/* Handled by hardware accelerator. */
44
- if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
101
g_assert_not_reached();
45
- fprintf(stderr, "Register redefined: cp=%d %d bit "
46
- "crn=%d crm=%d opc1=%d opc2=%d, "
47
- "was %s, now %s\n", r2->cp, 32 + 32 * is64,
48
- r2->crn, r2->crm, r2->opc1, r2->opc2,
49
- oldreg->name, r2->name);
50
- g_assert_not_reached();
51
- }
52
- }
53
g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
54
}
55
56
--
102
--
57
2.25.1
103
2.25.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>

Instead of defining ARM_CP_FLAG_MASK to remove flags,
define ARM_CP_SPECIAL_MASK to isolate special cases.
Sort the specials to the low bits. Use an enum.

Split the large comment block so as to document each
value separately.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 130 +++++++++++++++++++++++--------------
 target/arm/cpu.c | 4 +-
 target/arm/helper.c | 4 +-
 target/arm/translate-a64.c | 6 +-
 target/arm/translate.c | 6 +-
 5 files changed, 92 insertions(+), 58 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 72 +++++++------------
 target/arm/internals.h | 31 +-------
 target/arm/helper.c | 144 +++++++++++++------------------
 target/arm/ptw.c | 25 ++-----
 target/arm/translate-a64.c | 8 ---
 target/arm/translate.c | 6 +-
 7 files changed, 85 insertions(+), 203 deletions(-)
22
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
28
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
23
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpregs.h
30
--- a/target/arm/cpu-param.h
25
+++ b/target/arm/cpregs.h
31
+++ b/target/arm/cpu-param.h
26
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@
27
#define TARGET_ARM_CPREGS_H
33
# define TARGET_PAGE_BITS_MIN 10
28
34
#endif
29
/*
35
30
- * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
36
-#define NB_MMU_MODES 15
31
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
37
+#define NB_MMU_MODES 8
32
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
38
33
- * TCG can assume the value to be constant (ie load at translate time)
39
#endif
34
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
40
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
35
- * indicates that the TB should not be ended after a write to this register
36
- * (the default is that the TB ends after cp writes). OVERRIDE permits
37
- * a register definition to override a previous definition for the
38
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
39
- * old must have the OVERRIDE bit set.
40
- * ALIAS indicates that this register is an alias view of some underlying
41
- * state which is also visible via another register, and that the other
42
- * register is handling migration and reset; registers marked ALIAS will not be
43
- * migrated but may have their state set by syncing of register state from KVM.
44
- * NO_RAW indicates that this register has no underlying state and does not
45
- * support raw access for state saving/loading; it will not be used for either
46
- * migration or KVM state synchronization. (Typically this is for "registers"
47
- * which are actually used as instructions for cache maintenance and so on.)
48
- * IO indicates that this register does I/O and therefore its accesses
49
- * need to be marked with gen_io_start() and also end the TB. In particular,
50
- * registers which implement clocks or timers require this.
51
- * RAISES_EXC is for when the read or write hook might raise an exception;
52
- * the generated code will synchronize the CPU state before calling the hook
53
- * so that it is safe for the hook to call raise_exception().
54
- * NEWEL is for writes to registers that might change the exception
55
- * level - typically on older ARM chips. For those cases we need to
56
- * re-read the new el when recomputing the translation flags.
57
+ * ARMCPRegInfo type field bits:
58
*/
59
-#define ARM_CP_SPECIAL 0x0001
60
-#define ARM_CP_CONST 0x0002
61
-#define ARM_CP_64BIT 0x0004
62
-#define ARM_CP_SUPPRESS_TB_END 0x0008
63
-#define ARM_CP_OVERRIDE 0x0010
64
-#define ARM_CP_ALIAS 0x0020
65
-#define ARM_CP_IO 0x0040
66
-#define ARM_CP_NO_RAW 0x0080
67
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
68
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
69
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
70
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
71
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
72
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
73
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
74
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
75
-#define ARM_CP_FPU 0x1000
76
-#define ARM_CP_SVE 0x2000
77
-#define ARM_CP_NO_GDB 0x4000
78
-#define ARM_CP_RAISES_EXC 0x8000
79
-#define ARM_CP_NEWEL 0x10000
80
-/* Mask of only the flag bits in a type field */
81
-#define ARM_CP_FLAG_MASK 0x1f0ff
82
+enum {
83
+ /*
84
+ * Register must be handled specially during translation.
85
+ * The method is one of the values below:
86
+ */
87
+ ARM_CP_SPECIAL_MASK = 0x000f,
88
+ /* Special: no change to PE state: writes ignored, reads ignored. */
89
+ ARM_CP_NOP = 0x0001,
90
+ /* Special: sysreg is WFI, for v5 and v6. */
91
+ ARM_CP_WFI = 0x0002,
92
+ /* Special: sysreg is NZCV. */
93
+ ARM_CP_NZCV = 0x0003,
94
+ /* Special: sysreg is CURRENTEL. */
95
+ ARM_CP_CURRENTEL = 0x0004,
96
+ /* Special: sysreg is DC ZVA or similar. */
97
+ ARM_CP_DC_ZVA = 0x0005,
98
+ ARM_CP_DC_GVA = 0x0006,
99
+ ARM_CP_DC_GZVA = 0x0007,
100
+
101
+ /* Flag: reads produce resetvalue; writes ignored. */
102
+ ARM_CP_CONST = 1 << 4,
103
+ /* Flag: For ARM_CP_STATE_AA32, sysreg is 64-bit. */
104
+ ARM_CP_64BIT = 1 << 5,
105
+ /*
106
+ * Flag: TB should not be ended after a write to this register
107
+ * (the default is that the TB ends after cp writes).
108
+ */
109
+ ARM_CP_SUPPRESS_TB_END = 1 << 6,
110
+ /*
111
+ * Flag: Permit a register definition to override a previous definition
112
+ * for the same (cp, is64, crn, crm, opc1, opc2) tuple: either the new
113
+ * or the old must have the ARM_CP_OVERRIDE bit set.
114
+ */
115
+ ARM_CP_OVERRIDE = 1 << 7,
116
+ /*
117
+ * Flag: Register is an alias view of some underlying state which is also
118
+ * visible via another register, and that the other register is handling
119
+ * migration and reset; registers marked ARM_CP_ALIAS will not be migrated
120
+ * but may have their state set by syncing of register state from KVM.
121
+ */
122
+ ARM_CP_ALIAS = 1 << 8,
123
+ /*
124
+ * Flag: Register does I/O and therefore its accesses need to be marked
125
+ * with gen_io_start() and also end the TB. In particular, registers which
126
+ * implement clocks or timers require this.
127
+ */
128
+ ARM_CP_IO = 1 << 9,
129
+ /*
130
+ * Flag: Register has no underlying state and does not support raw access
131
+ * for state saving/loading; it will not be used for either migration or
132
+ * KVM state synchronization. Typically this is for "registers" which are
133
+ * actually used as instructions for cache maintenance and so on.
134
+ */
135
+ ARM_CP_NO_RAW = 1 << 10,
136
+ /*
137
+ * Flag: The read or write hook might raise an exception; the generated
138
+ * code will synchronize the CPU state before calling the hook so that it
139
+ * is safe for the hook to call raise_exception().
140
+ */
141
+ ARM_CP_RAISES_EXC = 1 << 11,
142
+ /*
143
+ * Flag: Writes to the sysreg might change the exception level - typically
144
+ * on older ARM chips. For those cases we need to re-read the new el when
145
+ * recomputing the translation flags.
146
+ */
147
+ ARM_CP_NEWEL = 1 << 12,
148
+ /*
149
+ * Flag: Access check for this sysreg is identical to accessing FPU state
150
+ * from an instruction: use translation fp_access_check().
151
+ */
152
+ ARM_CP_FPU = 1 << 13,
153
+ /*
154
+ * Flag: Access check for this sysreg is identical to accessing SVE state
155
+ * from an instruction: use translation sve_access_check().
156
+ */
157
+ ARM_CP_SVE = 1 << 14,
158
+ /* Flag: Do not expose in gdb sysreg xml. */
159
+ ARM_CP_NO_GDB = 1 << 15,
160
+};
161
162
/*
163
* Valid values for ARMCPRegInfo state field, indicating which of
164
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
165
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
166
--- a/target/arm/cpu.c
42
--- a/target/arm/cpu.h
167
+++ b/target/arm/cpu.c
43
+++ b/target/arm/cpu.h
168
@@ -XXX,XX +XXX,XX @@ static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
169
ARMCPRegInfo *ri = value;
45
* table over and over.
170
ARMCPU *cpu = opaque;
46
* 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
171
47
* Never (PAN) bit within PSTATE.
172
- if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS)) {
48
+ * 7. we fold together the secure and non-secure regimes for A-profile,
173
+ if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS)) {
49
+ * because there are no banked system registers for aarch64, so the
174
return;
50
+ * process of switching between secure and non-secure is
175
}
51
+ * already heavyweight.
176
52
*
177
@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
53
* This gives us the following list of cases:
178
ARMCPU *cpu = opaque;
54
*
179
uint64_t oldvalue, newvalue;
55
- * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
180
56
- * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
181
- if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
57
- * NS EL1 EL1&0 stage 1+2 +PAN
182
+ if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
58
- * NS EL0 EL2&0
183
return;
59
- * NS EL2 EL2&0
184
}
60
- * NS EL2 EL2&0 +PAN
185
61
- * NS EL2 (aka NS PL2)
62
- * S EL0 EL1&0 (aka S PL0)
63
- * S EL1 EL1&0 (not used if EL3 is 32 bit)
64
- * S EL1 EL1&0 +PAN
65
- * S EL3 (aka S PL1)
66
+ * EL0 EL1&0 stage 1+2 (aka NS PL0)
67
+ * EL1 EL1&0 stage 1+2 (aka NS PL1)
68
+ * EL1 EL1&0 stage 1+2 +PAN
69
+ * EL0 EL2&0
70
+ * EL2 EL2&0
71
+ * EL2 EL2&0 +PAN
72
+ * EL2 (aka NS PL2)
73
+ * EL3 (aka S PL1)
74
*
75
- * for a total of 11 different mmu_idx.
76
+ * for a total of 8 different mmu_idx.
77
*
78
* R profile CPUs have an MPU, but can use the same set of MMU indexes
79
- * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
80
- * NS EL2 if we ever model a Cortex-R52).
81
+ * as A profile. They only need to distinguish EL0 and EL1 (and
82
+ * EL2 if we ever model a Cortex-R52).
83
*
84
* M profile CPUs are rather different as they do not have a true MMU.
85
* They have the following different MMU indexes:
86
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
87
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
88
#define ARM_MMU_IDX_M 0x40 /* M profile */
89
90
-/* Meanings of the bits for A profile mmu idx values */
91
-#define ARM_MMU_IDX_A_NS 0x8
92
-
93
/* Meanings of the bits for M profile mmu idx values */
94
#define ARM_MMU_IDX_M_PRIV 0x1
95
#define ARM_MMU_IDX_M_NEGPRI 0x2
96
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
97
/*
98
* A-profile.
99
*/
100
- ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
101
- ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
102
- ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
103
- ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
104
- ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
105
- ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
106
- ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
107
- ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
108
-
109
- ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
110
- ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
111
- ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
112
- ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
113
- ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
114
- ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
115
- ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
116
+ ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
117
+ ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
118
+ ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
119
+ ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A,
120
+ ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A,
121
+ ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
122
+ ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
123
+ ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
124
125
/*
126
* These are not allocated TLBs and are used only for AT system
127
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
128
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
129
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
130
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
131
- ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
132
- ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
133
- ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
134
/*
135
* Not allocated a TLB: used only for second stage of an S12 page
136
* table walk, or for descriptor loads during first stage of an S1
137
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
138
* then various TLB flush insns which currently are no-ops or flush
139
* only stage 1 MMU indexes will need to change to flush stage 2.
140
*/
141
- ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
142
- ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
143
+ ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
144
+ ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
145
146
/*
147
* M-profile.
148
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
149
TO_CORE_BIT(E2),
150
TO_CORE_BIT(E20_2),
151
TO_CORE_BIT(E20_2_PAN),
152
- TO_CORE_BIT(SE10_0),
153
- TO_CORE_BIT(SE20_0),
154
- TO_CORE_BIT(SE10_1),
155
- TO_CORE_BIT(SE20_2),
156
- TO_CORE_BIT(SE10_1_PAN),
157
- TO_CORE_BIT(SE20_2_PAN),
158
- TO_CORE_BIT(SE2),
159
- TO_CORE_BIT(SE3),
160
+ TO_CORE_BIT(E3),
161
162
TO_CORE_BIT(MUser),
163
TO_CORE_BIT(MPriv),
164
diff --git a/target/arm/internals.h b/target/arm/internals.h
165
index XXXXXXX..XXXXXXX 100644
166
--- a/target/arm/internals.h
167
+++ b/target/arm/internals.h
168
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
169
case ARMMMUIdx_Stage1_E0:
170
case ARMMMUIdx_Stage1_E1:
171
case ARMMMUIdx_Stage1_E1_PAN:
172
- case ARMMMUIdx_Stage1_SE0:
173
- case ARMMMUIdx_Stage1_SE1:
174
- case ARMMMUIdx_Stage1_SE1_PAN:
175
case ARMMMUIdx_E10_0:
176
case ARMMMUIdx_E10_1:
177
case ARMMMUIdx_E10_1_PAN:
178
case ARMMMUIdx_E20_0:
179
case ARMMMUIdx_E20_2:
180
case ARMMMUIdx_E20_2_PAN:
181
- case ARMMMUIdx_SE10_0:
182
- case ARMMMUIdx_SE10_1:
183
- case ARMMMUIdx_SE10_1_PAN:
184
- case ARMMMUIdx_SE20_0:
185
- case ARMMMUIdx_SE20_2:
186
- case ARMMMUIdx_SE20_2_PAN:
187
return true;
188
default:
189
return false;
190
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
191
{
192
switch (mmu_idx) {
193
case ARMMMUIdx_Stage1_E1_PAN:
194
- case ARMMMUIdx_Stage1_SE1_PAN:
195
case ARMMMUIdx_E10_1_PAN:
196
case ARMMMUIdx_E20_2_PAN:
197
- case ARMMMUIdx_SE10_1_PAN:
198
- case ARMMMUIdx_SE20_2_PAN:
199
return true;
200
default:
201
return false;
202
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
203
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
204
{
205
switch (mmu_idx) {
206
- case ARMMMUIdx_SE20_0:
207
- case ARMMMUIdx_SE20_2:
208
- case ARMMMUIdx_SE20_2_PAN:
209
case ARMMMUIdx_E20_0:
210
case ARMMMUIdx_E20_2:
211
case ARMMMUIdx_E20_2_PAN:
212
case ARMMMUIdx_Stage2:
213
case ARMMMUIdx_Stage2_S:
214
- case ARMMMUIdx_SE2:
215
case ARMMMUIdx_E2:
216
return 2;
217
- case ARMMMUIdx_SE3:
218
+ case ARMMMUIdx_E3:
219
return 3;
220
- case ARMMMUIdx_SE10_0:
221
- case ARMMMUIdx_Stage1_SE0:
222
- return arm_el_is_aa64(env, 3) ? 1 : 3;
223
- case ARMMMUIdx_SE10_1:
224
- case ARMMMUIdx_SE10_1_PAN:
225
+ case ARMMMUIdx_E10_0:
226
case ARMMMUIdx_Stage1_E0:
227
+ return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
228
case ARMMMUIdx_Stage1_E1:
229
case ARMMMUIdx_Stage1_E1_PAN:
230
- case ARMMMUIdx_Stage1_SE1:
231
- case ARMMMUIdx_Stage1_SE1_PAN:
232
- case ARMMMUIdx_E10_0:
233
case ARMMMUIdx_E10_1:
234
case ARMMMUIdx_E10_1_PAN:
235
case ARMMMUIdx_MPrivNegPri:
236
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
237
case ARMMMUIdx_Stage1_E0:
238
case ARMMMUIdx_Stage1_E1:
239
case ARMMMUIdx_Stage1_E1_PAN:
240
- case ARMMMUIdx_Stage1_SE0:
241
- case ARMMMUIdx_Stage1_SE1:
242
- case ARMMMUIdx_Stage1_SE1_PAN:
243
return true;
244
default:
245
return false;
186
diff --git a/target/arm/helper.c b/target/arm/helper.c
246
diff --git a/target/arm/helper.c b/target/arm/helper.c
187
index XXXXXXX..XXXXXXX 100644
247
index XXXXXXX..XXXXXXX 100644
188
--- a/target/arm/helper.c
248
--- a/target/arm/helper.c
189
+++ b/target/arm/helper.c
249
+++ b/target/arm/helper.c
190
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
250
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
191
* multiple times. Special registers (ie NOP/WFI) are
251
/* Begin with base v8.0 state. */
192
* never migratable and not even raw-accessible.
252
uint64_t valid_mask = 0x3fff;
253
ARMCPU *cpu = env_archcpu(env);
254
+ uint64_t changed;
255
256
/*
257
* Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
258
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
259
260
/* Clear all-context RES0 bits. */
261
value &= valid_mask;
262
- raw_write(env, ri, value);
263
+ changed = env->cp15.scr_el3 ^ value;
264
+ env->cp15.scr_el3 = value;
265
+
266
+ /*
267
+ * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
268
+ * we must invalidate all TLBs below EL3.
269
+ */
270
+ if (changed & SCR_NS) {
271
+ tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
272
+ ARMMMUIdxBit_E20_0 |
273
+ ARMMMUIdxBit_E10_1 |
274
+ ARMMMUIdxBit_E20_2 |
275
+ ARMMMUIdxBit_E10_1_PAN |
276
+ ARMMMUIdxBit_E20_2_PAN |
277
+ ARMMMUIdxBit_E2));
278
+ }
279
}
280
281
static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
282
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
283
case ARMMMUIdx_E20_0:
284
case ARMMMUIdx_E20_2:
285
case ARMMMUIdx_E20_2_PAN:
286
- case ARMMMUIdx_SE20_0:
287
- case ARMMMUIdx_SE20_2:
288
- case ARMMMUIdx_SE20_2_PAN:
289
return GTIMER_HYP;
290
default:
291
return GTIMER_PHYS;
292
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
293
case ARMMMUIdx_E20_0:
294
case ARMMMUIdx_E20_2:
295
case ARMMMUIdx_E20_2_PAN:
296
- case ARMMMUIdx_SE20_0:
297
- case ARMMMUIdx_SE20_2:
298
- case ARMMMUIdx_SE20_2_PAN:
299
return GTIMER_HYPVIRT;
300
default:
301
return GTIMER_VIRT;
302
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
303
/* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
304
switch (el) {
305
case 3:
306
- mmu_idx = ARMMMUIdx_SE3;
307
+ mmu_idx = ARMMMUIdx_E3;
308
secure = true;
309
break;
310
case 2:
311
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
312
/* fall through */
313
case 1:
314
if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
315
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
316
- : ARMMMUIdx_Stage1_E1_PAN);
317
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
318
} else {
319
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
320
+ mmu_idx = ARMMMUIdx_Stage1_E1;
321
}
322
break;
323
default:
324
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
325
/* stage 1 current state PL0: ATS1CUR, ATS1CUW */
326
switch (el) {
327
case 3:
328
- mmu_idx = ARMMMUIdx_SE10_0;
329
+ mmu_idx = ARMMMUIdx_E10_0;
330
secure = true;
331
break;
332
case 2:
333
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
334
mmu_idx = ARMMMUIdx_Stage1_E0;
335
break;
336
case 1:
337
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
338
+ mmu_idx = ARMMMUIdx_Stage1_E0;
339
break;
340
default:
341
g_assert_not_reached();
342
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
343
switch (ri->opc1) {
344
case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
345
if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
346
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
347
- : ARMMMUIdx_Stage1_E1_PAN);
348
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
349
} else {
350
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
351
+ mmu_idx = ARMMMUIdx_Stage1_E1;
352
}
353
break;
354
case 4: /* AT S1E2R, AT S1E2W */
355
- mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
356
+ mmu_idx = ARMMMUIdx_E2;
357
break;
358
case 6: /* AT S1E3R, AT S1E3W */
359
- mmu_idx = ARMMMUIdx_SE3;
360
+ mmu_idx = ARMMMUIdx_E3;
361
secure = true;
362
break;
363
default:
364
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
365
}
366
break;
367
case 2: /* AT S1E0R, AT S1E0W */
368
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
369
+ mmu_idx = ARMMMUIdx_Stage1_E0;
370
break;
371
case 4: /* AT S12E1R, AT S12E1W */
372
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
373
+ mmu_idx = ARMMMUIdx_E10_1;
374
break;
375
case 6: /* AT S12E0R, AT S12E0W */
376
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
377
+ mmu_idx = ARMMMUIdx_E10_0;
378
break;
379
default:
380
g_assert_not_reached();
381
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
382
uint16_t mask = ARMMMUIdxBit_E20_2 |
383
ARMMMUIdxBit_E20_2_PAN |
384
ARMMMUIdxBit_E20_0;
385
-
386
- if (arm_is_secure_below_el3(env)) {
387
- mask >>= ARM_MMU_IDX_A_NS;
388
- }
389
-
390
tlb_flush_by_mmuidx(env_cpu(env), mask);
391
}
392
raw_write(env, ri, value);
393
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
394
uint16_t mask = ARMMMUIdxBit_E10_1 |
395
ARMMMUIdxBit_E10_1_PAN |
396
ARMMMUIdxBit_E10_0;
397
-
398
- if (arm_is_secure_below_el3(env)) {
399
- mask >>= ARM_MMU_IDX_A_NS;
400
- }
401
-
402
tlb_flush_by_mmuidx(cs, mask);
403
raw_write(env, ri, value);
404
}
405
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
406
ARMMMUIdxBit_E10_1_PAN |
407
ARMMMUIdxBit_E10_0;
408
}
409
-
410
- if (arm_is_secure_below_el3(env)) {
411
- mask >>= ARM_MMU_IDX_A_NS;
412
- }
413
-
414
return mask;
415
}
416
417
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
418
mmu_idx = ARMMMUIdx_E10_0;
419
}
420
421
- if (arm_is_secure_below_el3(env)) {
422
- mmu_idx &= ~ARM_MMU_IDX_A_NS;
423
- }
424
-
425
return tlbbits_for_regime(env, mmu_idx, addr);
426
}
427
428
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
429
* stage 2 translations, whereas most other scopes only invalidate
430
* stage 1 translations.
193
*/
431
*/
194
- if ((r->type & ARM_CP_SPECIAL)) {
432
- if (arm_is_secure_below_el3(env)) {
195
+ if (r->type & ARM_CP_SPECIAL_MASK) {
433
- return ARMMMUIdxBit_SE10_1 |
196
r2->type |= ARM_CP_NO_RAW;
434
- ARMMMUIdxBit_SE10_1_PAN |
435
- ARMMMUIdxBit_SE10_0;
436
- } else {
437
- return ARMMMUIdxBit_E10_1 |
438
- ARMMMUIdxBit_E10_1_PAN |
439
- ARMMMUIdxBit_E10_0;
440
- }
441
+ return (ARMMMUIdxBit_E10_1 |
442
+ ARMMMUIdxBit_E10_1_PAN |
443
+ ARMMMUIdxBit_E10_0);
444
}
445
446
static int e2_tlbmask(CPUARMState *env)
447
{
448
- if (arm_is_secure_below_el3(env)) {
449
- return ARMMMUIdxBit_SE20_0 |
450
- ARMMMUIdxBit_SE20_2 |
451
- ARMMMUIdxBit_SE20_2_PAN |
452
- ARMMMUIdxBit_SE2;
453
- } else {
454
- return ARMMMUIdxBit_E20_0 |
455
- ARMMMUIdxBit_E20_2 |
456
- ARMMMUIdxBit_E20_2_PAN |
457
- ARMMMUIdxBit_E2;
458
- }
459
+ return (ARMMMUIdxBit_E20_0 |
460
+ ARMMMUIdxBit_E20_2 |
461
+ ARMMMUIdxBit_E20_2_PAN |
462
+ ARMMMUIdxBit_E2);
463
}
464
465
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
466
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
467
ARMCPU *cpu = env_archcpu(env);
468
CPUState *cs = CPU(cpu);
469
470
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
471
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
472
}
473
474
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
475
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
476
{
477
CPUState *cs = env_cpu(env);
478
479
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
480
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
481
}
482
483
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
484
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
485
CPUState *cs = CPU(cpu);
486
uint64_t pageaddr = sextract64(value << 12, 0, 56);
487
488
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
489
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
490
}
491
492
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
493
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
494
{
495
CPUState *cs = env_cpu(env);
496
uint64_t pageaddr = sextract64(value << 12, 0, 56);
497
- bool secure = arm_is_secure_below_el3(env);
498
- int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
499
- int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
500
- pageaddr);
501
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
502
503
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
504
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
505
+ ARMMMUIdxBit_E2, bits);
506
}
507
508
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
509
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
510
{
511
CPUState *cs = env_cpu(env);
512
uint64_t pageaddr = sextract64(value << 12, 0, 56);
513
- int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
514
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
515
516
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
517
- ARMMMUIdxBit_SE3, bits);
518
+ ARMMMUIdxBit_E3, bits);
519
}
520
521
#ifdef TARGET_AARCH64
522
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,
523
524
static int vae2_tlbmask(CPUARMState *env)
525
{
526
- return (arm_is_secure_below_el3(env)
527
- ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
528
+ return ARMMMUIdxBit_E2;
529
}
530
531
static void tlbi_aa64_rvae2_write(CPUARMState *env,
532
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env,
533
* flush-last-level-only.
534
*/
535
536
- do_rvae_write(env, value, ARMMMUIdxBit_SE3,
537
- tlb_force_broadcast(env));
538
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
539
}
540
541
static void tlbi_aa64_rvae3is_write(CPUARMState *env,
542
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
543
* flush-last-level-only or inner/outer specific flushes.
544
*/
545
546
- do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
547
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
548
}
549
#endif
550
551
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
552
/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
553
if (el == 0) {
554
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
555
- el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
556
- ? 2 : 1;
557
+ el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
197
}
558
}
198
if (((r->crm == CP_ANY) && crm != 0) ||
559
return env->cp15.sctlr_el[el];
199
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
560
}
200
/* Check that the register definition has enough info to handle
561
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
201
* reads and writes if they are permitted.
562
switch (mmu_idx) {
202
*/
563
case ARMMMUIdx_E10_0:
203
- if (!(r->type & (ARM_CP_SPECIAL|ARM_CP_CONST))) {
564
case ARMMMUIdx_E20_0:
204
+ if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
565
- case ARMMMUIdx_SE10_0:
205
if (r->access & PL3_R) {
566
- case ARMMMUIdx_SE20_0:
206
assert((r->fieldoffset ||
567
return 0;
207
(r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) ||
568
case ARMMMUIdx_E10_1:
569
case ARMMMUIdx_E10_1_PAN:
570
- case ARMMMUIdx_SE10_1:
571
- case ARMMMUIdx_SE10_1_PAN:
572
return 1;
573
case ARMMMUIdx_E2:
574
case ARMMMUIdx_E20_2:
575
case ARMMMUIdx_E20_2_PAN:
576
- case ARMMMUIdx_SE2:
577
- case ARMMMUIdx_SE20_2:
578
- case ARMMMUIdx_SE20_2_PAN:
579
return 2;
580
- case ARMMMUIdx_SE3:
581
+ case ARMMMUIdx_E3:
582
return 3;
583
default:
584
g_assert_not_reached();
585
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
586
}
587
break;
588
case 3:
589
- return ARMMMUIdx_SE3;
590
+ return ARMMMUIdx_E3;
591
default:
592
g_assert_not_reached();
593
}
594
595
- if (arm_is_secure_below_el3(env)) {
596
- idx &= ~ARM_MMU_IDX_A_NS;
597
- }
598
-
599
return idx;
600
}
601
602
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
603
switch (mmu_idx) {
604
case ARMMMUIdx_E10_1:
605
case ARMMMUIdx_E10_1_PAN:
606
- case ARMMMUIdx_SE10_1:
607
- case ARMMMUIdx_SE10_1_PAN:
608
/* TODO: ARMv8.3-NV */
609
DP_TBFLAG_A64(flags, UNPRIV, 1);
610
break;
611
case ARMMMUIdx_E20_2:
612
case ARMMMUIdx_E20_2_PAN:
613
- case ARMMMUIdx_SE20_2:
614
- case ARMMMUIdx_SE20_2_PAN:
615
/*
616
* Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
617
* gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
618
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
619
index XXXXXXX..XXXXXXX 100644
620
--- a/target/arm/ptw.c
621
+++ b/target/arm/ptw.c
622
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
623
ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
624
{
625
switch (mmu_idx) {
626
- case ARMMMUIdx_SE10_0:
627
- return ARMMMUIdx_Stage1_SE0;
628
- case ARMMMUIdx_SE10_1:
629
- return ARMMMUIdx_Stage1_SE1;
630
- case ARMMMUIdx_SE10_1_PAN:
631
- return ARMMMUIdx_Stage1_SE1_PAN;
632
case ARMMMUIdx_E10_0:
633
return ARMMMUIdx_Stage1_E0;
634
case ARMMMUIdx_E10_1:
635
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
636
static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
637
{
638
switch (mmu_idx) {
639
- case ARMMMUIdx_SE10_0:
640
case ARMMMUIdx_E20_0:
641
- case ARMMMUIdx_SE20_0:
642
case ARMMMUIdx_Stage1_E0:
643
- case ARMMMUIdx_Stage1_SE0:
644
case ARMMMUIdx_MUser:
645
case ARMMMUIdx_MSUser:
646
case ARMMMUIdx_MUserNegPri:
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
649
s2_mmu_idx = (s2walk_secure
650
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
651
- is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
652
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0;
653
654
/*
655
* S1 is done, now do S2 translation.
656
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
657
case ARMMMUIdx_Stage1_E1:
658
case ARMMMUIdx_Stage1_E1_PAN:
659
case ARMMMUIdx_E2:
660
+ is_secure = arm_is_secure_below_el3(env);
661
+ break;
662
case ARMMMUIdx_Stage2:
663
case ARMMMUIdx_MPrivNegPri:
664
case ARMMMUIdx_MUserNegPri:
665
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
666
case ARMMMUIdx_MUser:
667
is_secure = false;
668
break;
669
- case ARMMMUIdx_SE3:
670
- case ARMMMUIdx_SE10_0:
671
- case ARMMMUIdx_SE10_1:
672
- case ARMMMUIdx_SE10_1_PAN:
673
- case ARMMMUIdx_SE20_0:
674
- case ARMMMUIdx_SE20_2:
675
- case ARMMMUIdx_SE20_2_PAN:
676
- case ARMMMUIdx_Stage1_SE0:
677
- case ARMMMUIdx_Stage1_SE1:
678
- case ARMMMUIdx_Stage1_SE1_PAN:
679
- case ARMMMUIdx_SE2:
680
+ case ARMMMUIdx_E3:
681
case ARMMMUIdx_Stage2_S:
682
case ARMMMUIdx_MSPrivNegPri:
683
case ARMMMUIdx_MSUserNegPri:
208
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
684
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
209
index XXXXXXX..XXXXXXX 100644
685
index XXXXXXX..XXXXXXX 100644
210
--- a/target/arm/translate-a64.c
686
--- a/target/arm/translate-a64.c
211
+++ b/target/arm/translate-a64.c
687
+++ b/target/arm/translate-a64.c
212
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
688
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
213
}
689
case ARMMMUIdx_E20_2_PAN:
214
690
useridx = ARMMMUIdx_E20_0;
215
/* Handle special cases first */
691
break;
216
- switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
692
- case ARMMMUIdx_SE10_1:
217
+ switch (ri->type & ARM_CP_SPECIAL_MASK) {
693
- case ARMMMUIdx_SE10_1_PAN:
218
+ case 0:
694
- useridx = ARMMMUIdx_SE10_0;
219
+ break;
695
- break;
220
case ARM_CP_NOP:
696
- case ARMMMUIdx_SE20_2:
221
return;
697
- case ARMMMUIdx_SE20_2_PAN:
222
case ARM_CP_NZCV:
698
- useridx = ARMMMUIdx_SE20_0;
223
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
699
- break;
700
default:
701
g_assert_not_reached();
224
}
702
}
225
return;
226
default:
227
- break;
228
+ g_assert_not_reached();
229
}
230
if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
231
return;
232
diff --git a/target/arm/translate.c b/target/arm/translate.c
703
diff --git a/target/arm/translate.c b/target/arm/translate.c
233
index XXXXXXX..XXXXXXX 100644
704
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/translate.c
705
--- a/target/arm/translate.c
235
+++ b/target/arm/translate.c
706
+++ b/target/arm/translate.c
236
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
707
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
237
}
708
* otherwise, access as if at PL0.
238
709
*/
239
/* Handle special cases first */
710
switch (s->mmu_idx) {
240
- switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
711
+ case ARMMMUIdx_E3:
241
+ switch (ri->type & ARM_CP_SPECIAL_MASK) {
712
case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */
242
+ case 0:
713
case ARMMMUIdx_E10_0:
243
+ break;
714
case ARMMMUIdx_E10_1:
244
case ARM_CP_NOP:
715
case ARMMMUIdx_E10_1_PAN:
245
return;
716
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
246
case ARM_CP_WFI:
717
- case ARMMMUIdx_SE3:
247
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
718
- case ARMMMUIdx_SE10_0:
248
s->base.is_jmp = DISAS_WFI;
719
- case ARMMMUIdx_SE10_1:
249
return;
720
- case ARMMMUIdx_SE10_1_PAN:
250
default:
721
- return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
251
- break;
722
case ARMMMUIdx_MUser:
252
+ g_assert_not_reached();
723
case ARMMMUIdx_MPriv:
253
}
724
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
254
255
if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
256
--
725
--
257
2.25.1
726
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Cast the uint32_t key into a gpointer directly, which
3
Use a switch on mmu_idx for the a-profile indexes, instead of
4
allows us to avoid allocating storage for each key.
4
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.
5
5
6
Use g_hash_table_lookup when we already have a gpointer
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
(e.g. for callbacks like count_cpreg), or when using
8
get_arm_cp_reginfo would require casting away const.
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
12
Message-id: 20220501055028.646596-12-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
target/arm/cpu.c | 4 ++--
11
target/arm/ptw.c | 32 +++++++++++++++++++++++++-------
16
target/arm/gdbstub.c | 2 +-
12
1 file changed, 25 insertions(+), 7 deletions(-)
17
target/arm/helper.c | 41 ++++++++++++++++++-----------------------
18
3 files changed, 21 insertions(+), 26 deletions(-)
19
13
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.c
16
--- a/target/arm/ptw.c
23
+++ b/target/arm/cpu.c
17
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
25
ARMCPU *cpu = ARM_CPU(obj);
19
26
20
hcr_el2 = arm_hcr_el2_eff(env);
27
cpu_set_cpustate_pointers(cpu);
21
28
- cpu->cp_regs = g_hash_table_new_full(g_int_hash, g_int_equal,
22
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
29
- g_free, cpreg_hashtable_data_destroy);
23
+ switch (mmu_idx) {
30
+ cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
24
+ case ARMMMUIdx_Stage2:
31
+ NULL, cpreg_hashtable_data_destroy);
25
+ case ARMMMUIdx_Stage2_S:
32
26
/* HCR.DC means HCR.VM behaves as 1 */
33
QLIST_INIT(&cpu->pre_el_change_hooks);
27
return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
34
QLIST_INIT(&cpu->el_change_hooks);
28
- }
35
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
29
36
index XXXXXXX..XXXXXXX 100644
30
- if (hcr_el2 & HCR_TGE) {
37
--- a/target/arm/gdbstub.c
31
+ case ARMMMUIdx_E10_0:
38
+++ b/target/arm/gdbstub.c
32
+ case ARMMMUIdx_E10_1:
39
@@ -XXX,XX +XXX,XX @@ static void arm_gen_one_xml_sysreg_tag(GString *s, DynamicGDBXMLInfo *dyn_xml,
33
+ case ARMMMUIdx_E10_1_PAN:
40
static void arm_register_sysreg_for_xml(gpointer key, gpointer value,
34
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
41
gpointer p)
35
- if (!is_secure && regime_el(env, mmu_idx) == 1) {
42
{
36
+ if (!is_secure && (hcr_el2 & HCR_TGE)) {
43
- uint32_t ri_key = *(uint32_t *)key;
37
return true;
44
+ uint32_t ri_key = (uintptr_t)key;
45
ARMCPRegInfo *ri = value;
46
RegisterSysregXmlParam *param = (RegisterSysregXmlParam *)p;
47
GString *s = param->s;
48
diff --git a/target/arm/helper.c b/target/arm/helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/helper.c
51
+++ b/target/arm/helper.c
52
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
53
static void add_cpreg_to_list(gpointer key, gpointer opaque)
54
{
55
ARMCPU *cpu = opaque;
56
- uint64_t regidx;
57
- const ARMCPRegInfo *ri;
58
-
59
- regidx = *(uint32_t *)key;
60
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
61
+ uint32_t regidx = (uintptr_t)key;
62
+ const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
63
64
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
65
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
66
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
67
static void count_cpreg(gpointer key, gpointer opaque)
68
{
69
ARMCPU *cpu = opaque;
70
- uint64_t regidx;
71
const ARMCPRegInfo *ri;
72
73
- regidx = *(uint32_t *)key;
74
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
75
+ ri = g_hash_table_lookup(cpu->cp_regs, key);
76
77
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
78
cpu->cpreg_array_len++;
79
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
80
81
static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
82
{
83
- uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a);
84
- uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b);
85
+ uint64_t aidx = cpreg_to_kvm_id((uintptr_t)a);
86
+ uint64_t bidx = cpreg_to_kvm_id((uintptr_t)b);
87
88
if (aidx > bidx) {
89
return 1;
90
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
91
for (i = 0; i < ARRAY_SIZE(aliases); i++) {
92
const struct E2HAlias *a = &aliases[i];
93
ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
94
- uint32_t *new_key;
95
bool ok;
96
97
if (a->feature && !a->feature(&cpu->isar)) {
98
continue;
99
}
38
}
100
39
- }
101
- src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key);
40
+ break;
102
- dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key);
41
103
+ src_reg = g_hash_table_lookup(cpu->cp_regs,
42
- if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
104
+ (gpointer)(uintptr_t)a->src_key);
43
+ case ARMMMUIdx_Stage1_E0:
105
+ dst_reg = g_hash_table_lookup(cpu->cp_regs,
44
+ case ARMMMUIdx_Stage1_E1:
106
+ (gpointer)(uintptr_t)a->dst_key);
45
+ case ARMMMUIdx_Stage1_E1_PAN:
107
g_assert(src_reg != NULL);
46
/* HCR.DC means SCTLR_EL1.M behaves as 0 */
108
g_assert(dst_reg != NULL);
47
- return true;
109
48
+ if (hcr_el2 & HCR_DC) {
110
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
49
+ return true;
111
50
+ }
112
/* Create alias before redirection so we dup the right data. */
51
+ break;
113
new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
52
+
114
- new_key = g_memdup(&a->new_key, sizeof(uint32_t));
53
+ case ARMMMUIdx_E20_0:
115
54
+ case ARMMMUIdx_E20_2:
116
new_reg->name = a->new_name;
55
+ case ARMMMUIdx_E20_2_PAN:
117
new_reg->type |= ARM_CP_ALIAS;
56
+ case ARMMMUIdx_E2:
118
/* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
57
+ case ARMMMUIdx_E3:
119
new_reg->access &= PL2_RW | PL3_RW;
58
+ break;
120
59
+
121
- ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
60
+ default:
122
+ ok = g_hash_table_insert(cpu->cp_regs,
61
+ g_assert_not_reached();
123
+ (gpointer)(uintptr_t)a->new_key, new_reg);
124
g_assert(ok);
125
126
src_reg->opaque = dst_reg;
127
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
128
/* Private utility function for define_one_arm_cp_reg_with_opaque():
129
* add a single reginfo struct to the hash table.
130
*/
131
- uint32_t *key = g_new(uint32_t, 1);
132
+ uint32_t key;
133
ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
134
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
135
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
136
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
137
if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
138
r2->cp = CP_REG_ARM64_SYSREG_CP;
139
}
140
- *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
141
- r2->opc0, opc1, opc2);
142
+ key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
143
+ r2->opc0, opc1, opc2);
144
} else {
145
- *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
146
+ key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
147
}
62
}
148
if (opaque) {
63
149
r2->opaque = opaque;
64
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
150
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
151
* requested.
152
*/
153
if (!(r->type & ARM_CP_OVERRIDE)) {
154
- ARMCPRegInfo *oldreg;
155
- oldreg = g_hash_table_lookup(cpu->cp_regs, key);
156
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
157
if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
158
fprintf(stderr, "Register redefined: cp=%d %d bit "
159
"crn=%d crm=%d opc1=%d opc2=%d, "
160
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
161
g_assert_not_reached();
162
}
163
}
164
- g_hash_table_insert(cpu->cp_regs, key, r2);
165
+ g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
166
}
167
168
169
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
170
171
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp)
172
{
173
- return g_hash_table_lookup(cpregs, &encoded_cp);
174
+ return g_hash_table_lookup(cpregs, (gpointer)(uintptr_t)encoded_cp);
175
}
176
177
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
178
--
65
--
179
2.25.1
66
2.25.1
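
For reference, the GLib pattern behind the "store cpregs key in the hash table
directly" change above is a table built with g_direct_hash/g_direct_equal whose
keys are integers cast to pointers, so no storage is allocated per key. A
minimal standalone sketch of that pattern follows; it is not code from the
series, and the table name, key and value are invented for illustration:

    #include <glib.h>
    #include <stdio.h>

    /*
     * Standalone sketch: key a GHashTable by an integer cast directly to a
     * pointer, so no per-key allocation is needed.
     */
    int main(void)
    {
        GHashTable *regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
                                                 NULL, g_free);
        guint32 key = 0xc0de;

        g_hash_table_insert(regs, GUINT_TO_POINTER(key), g_strdup("cpreg"));

        const char *val = g_hash_table_lookup(regs, GUINT_TO_POINTER(key));
        printf("%#x -> %s\n", key, val);

        g_hash_table_destroy(regs);
        return 0;
    }

The only constraint is that the key must fit in a pointer, which holds for the
32-bit encoded cpreg key.
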
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Give this enum a name and use it in ARMCPRegInfo and add_cpreg_to_hashtable.
3
The effect of TGE does not apply only to non-secure state,
4
Add the enumerator ARM_CP_SECSTATE_BOTH to clarify how 0
4
now that Secure EL2 exists.
5
is handled in define_one_arm_cp_reg_with_opaque.
6
5
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-10-richard.henderson@linaro.org
8
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/cpregs.h | 7 ++++---
11
target/arm/ptw.c | 4 ++--
13
target/arm/helper.c | 7 +++++--
12
1 file changed, 2 insertions(+), 2 deletions(-)
14
2 files changed, 9 insertions(+), 5 deletions(-)
15
13
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpregs.h
16
--- a/target/arm/ptw.c
19
+++ b/target/arm/cpregs.h
17
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
21
* registered entry will only have one to identify whether the entry is secure
19
case ARMMMUIdx_E10_0:
22
* or non-secure.
20
case ARMMMUIdx_E10_1:
23
*/
21
case ARMMMUIdx_E10_1_PAN:
24
-enum {
22
- /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
25
+typedef enum {
23
- if (!is_secure && (hcr_el2 & HCR_TGE)) {
26
+ ARM_CP_SECSTATE_BOTH = 0, /* define one cpreg for each secstate */
24
+ /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
27
ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
25
+ if (hcr_el2 & HCR_TGE) {
28
ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
26
return true;
29
-};
27
}
30
+} CPSecureState;
28
break;
31
32
/*
33
* Access rights:
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
35
/* Access rights: PL*_[RW] */
36
CPAccessRights access;
37
/* Security state: ARM_CP_SECSTATE_* bits/values */
38
- int secure;
39
+ CPSecureState secure;
40
/*
41
* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
42
* this register was defined: can be used to hand data through to the
43
diff --git a/target/arm/helper.c b/target/arm/helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/helper.c
46
+++ b/target/arm/helper.c
47
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
48
}
49
50
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
51
- void *opaque, CPState state, int secstate,
52
+ void *opaque, CPState state,
53
+ CPSecureState secstate,
54
int crm, int opc1, int opc2,
55
const char *name)
56
{
57
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
58
r->secure, crm, opc1, opc2,
59
r->name);
60
break;
61
- default:
62
+ case ARM_CP_SECSTATE_BOTH:
63
name = g_strdup_printf("%s_S", r->name);
64
add_cpreg_to_hashtable(cpu, r, opaque, state,
65
ARM_CP_SECSTATE_S,
66
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
67
ARM_CP_SECSTATE_NS,
68
crm, opc1, opc2, r->name);
69
break;
70
+ default:
71
+ g_assert_not_reached();
72
}
73
} else {
74
/* AArch64 registers get mapped to non-secure instance
75
--
29
--
76
2.25.1
30
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add the aa64 predicate for detecting RAS support from id registers.
3
For page walking, we may require HCR for a security state
4
We already have the aa32 version from the M-profile work.
4
that is not "current".
5
Add the 'any' predicate for testing both aa64 and aa32.
6
5
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
8
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/cpu.h | 10 ++++++++++
11
target/arm/cpu.h | 20 +++++++++++++-------
13
1 file changed, 10 insertions(+)
12
target/arm/helper.c | 11 ++++++++---
13
2 files changed, 21 insertions(+), 10 deletions(-)
14
14
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
19
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
20
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
20
* Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
21
}
21
* This corresponds to the pseudocode EL2Enabled()
22
22
*/
23
+static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
23
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
24
+{
24
+{
25
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
25
+ return arm_feature(env, ARM_FEATURE_EL2)
26
+ && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
26
+}
27
+}
27
+
28
+
28
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
29
static inline bool arm_is_el2_enabled(CPUARMState *env)
29
{
30
{
30
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
31
- if (arm_feature(env, ARM_FEATURE_EL2)) {
31
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
32
- if (arm_is_secure_below_el3(env)) {
32
return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
33
- return (env->cp15.scr_el3 & SCR_EEL2) != 0;
34
- }
35
- return true;
36
- }
37
- return false;
38
+ return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
33
}
39
}
34
40
35
+static inline bool isar_feature_any_ras(const ARMISARegisters *id)
41
#else
42
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
43
return false;
44
}
45
46
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
36
+{
47
+{
37
+ return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
48
+ return false;
49
+}
50
+
51
static inline bool arm_is_el2_enabled(CPUARMState *env)
52
{
53
return false;
54
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
55
* "for all purposes other than a direct read or write access of HCR_EL2."
56
* Not included here is HCR_RW.
57
*/
58
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
59
uint64_t arm_hcr_el2_eff(CPUARMState *env);
60
uint64_t arm_hcrx_el2_eff(CPUARMState *env);
61
62
diff --git a/target/arm/helper.c b/target/arm/helper.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/helper.c
65
+++ b/target/arm/helper.c
66
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
67
}
68
69
/*
70
- * Return the effective value of HCR_EL2.
71
+ * Return the effective value of HCR_EL2, at the given security state.
72
* Bits that are not included here:
73
* RW (read from SCR_EL3.RW as needed)
74
*/
75
-uint64_t arm_hcr_el2_eff(CPUARMState *env)
76
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
77
{
78
uint64_t ret = env->cp15.hcr_el2;
79
80
- if (!arm_is_el2_enabled(env)) {
81
+ if (!arm_is_el2_enabled_secstate(env, secure)) {
82
/*
83
* "This register has no effect if EL2 is not enabled in the
84
* current Security state". This is ARMv8.4-SecEL2 speak for
85
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
86
return ret;
87
}
88
89
+uint64_t arm_hcr_el2_eff(CPUARMState *env)
90
+{
91
+ return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
38
+}
92
+}
39
+
93
+
40
/*
94
/*
41
* Forward to the above feature tests given an ARMCPU pointer.
95
* Corresponds to ARM pseudocode function ELIsInHost().
42
*/
96
*/
43
--
97
--
44
2.25.1
98
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Create a typedef as well, and use it in ARMCPRegInfo.
3
Rename the argument to is_secure_ptr, and introduce a
4
This won't be perfect for debugging, but it'll nicely
4
local variable is_secure with the value. We only write
5
display the most common cases.
5
back to the pointer toward the end of the function.
6
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-8-richard.henderson@linaro.org
9
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/cpregs.h | 44 +++++++++++++++++++++++---------------------
12
target/arm/ptw.c | 22 ++++++++++++----------
13
target/arm/helper.c | 2 +-
13
1 file changed, 12 insertions(+), 10 deletions(-)
14
2 files changed, 24 insertions(+), 22 deletions(-)
15
14
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpregs.h
17
--- a/target/arm/ptw.c
19
+++ b/target/arm/cpregs.h
18
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ enum {
19
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
21
* described with these bits, then use a laxer set of restrictions, and
20
22
* do the more restrictive/complex check inside a helper function.
21
/* Translate a S1 pagetable walk through S2 if needed. */
23
*/
22
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
24
-#define PL3_R 0x80
23
- hwaddr addr, bool *is_secure,
25
-#define PL3_W 0x40
24
+ hwaddr addr, bool *is_secure_ptr,
26
-#define PL2_R (0x20 | PL3_R)
25
ARMMMUFaultInfo *fi)
27
-#define PL2_W (0x10 | PL3_W)
26
{
28
-#define PL1_R (0x08 | PL2_R)
27
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
29
-#define PL1_W (0x04 | PL2_W)
28
+ bool is_secure = *is_secure_ptr;
30
-#define PL0_R (0x02 | PL1_R)
29
+ ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
31
-#define PL0_W (0x01 | PL1_W)
30
32
+typedef enum {
31
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
33
+ PL3_R = 0x80,
32
- !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
34
+ PL3_W = 0x40,
33
+ !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
35
+ PL2_R = 0x20 | PL3_R,
34
GetPhysAddrResult s2 = {};
36
+ PL2_W = 0x10 | PL3_W,
35
int ret;
37
+ PL1_R = 0x08 | PL2_R,
36
38
+ PL1_W = 0x04 | PL2_W,
37
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
39
+ PL0_R = 0x02 | PL1_R,
38
- *is_secure, false, &s2, fi);
40
+ PL0_W = 0x01 | PL1_W,
39
+ is_secure, false, &s2, fi);
41
40
if (ret) {
42
-/*
41
assert(fi->type != ARMFault_None);
43
- * For user-mode some registers are accessible to EL0 via a kernel
42
fi->s2addr = addr;
44
- * trap-and-emulate ABI. In this case we define the read permissions
43
fi->stage2 = true;
45
- * as actually being PL0_R. However some bits of any given register
44
fi->s1ptw = true;
46
- * may still be masked.
45
- fi->s1ns = !*is_secure;
47
- */
46
+ fi->s1ns = !is_secure;
48
+ /*
47
return ~0;
49
+ * For user-mode some registers are accessible to EL0 via a kernel
48
}
50
+ * trap-and-emulate ABI. In this case we define the read permissions
49
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
51
+ * as actually being PL0_R. However some bits of any given register
50
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
52
+ * may still be masked.
51
fi->s2addr = addr;
53
+ */
52
fi->stage2 = true;
54
#ifdef CONFIG_USER_ONLY
53
fi->s1ptw = true;
55
-#define PL0U_R PL0_R
54
- fi->s1ns = !*is_secure;
56
+ PL0U_R = PL0_R,
55
+ fi->s1ns = !is_secure;
57
#else
56
return ~0;
58
-#define PL0U_R PL1_R
57
}
59
+ PL0U_R = PL1_R,
58
60
#endif
59
if (arm_is_secure_below_el3(env)) {
61
60
/* Check if page table walk is to secure or non-secure PA space. */
62
-#define PL3_RW (PL3_R | PL3_W)
61
- if (*is_secure) {
63
-#define PL2_RW (PL2_R | PL2_W)
62
- *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
64
-#define PL1_RW (PL1_R | PL1_W)
63
+ if (is_secure) {
65
-#define PL0_RW (PL0_R | PL0_W)
64
+ is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
66
+ PL3_RW = PL3_R | PL3_W,
65
} else {
67
+ PL2_RW = PL2_R | PL2_W,
66
- *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
68
+ PL1_RW = PL1_R | PL1_W,
67
+ is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
69
+ PL0_RW = PL0_R | PL0_W,
68
}
70
+} CPAccessRights;
69
+ *is_secure_ptr = is_secure;
71
70
} else {
72
typedef enum CPAccessResult {
71
- assert(!*is_secure);
73
/* Access is permitted */
72
+ assert(!is_secure);
74
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
73
}
75
/* Register type: ARM_CP_* bits/values */
74
76
int type;
75
addr = s2.phys;
77
/* Access rights: PL*_[RW] */
78
- int access;
79
+ CPAccessRights access;
80
/* Security state: ARM_CP_SECSTATE_* bits/values */
81
int secure;
82
/*
83
diff --git a/target/arm/helper.c b/target/arm/helper.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/helper.c
86
+++ b/target/arm/helper.c
87
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
88
* to encompass the generic architectural permission check.
89
*/
90
if (r->state != ARM_CP_STATE_AA32) {
91
- int mask = 0;
92
+ CPAccessRights mask;
93
switch (r->opc1) {
94
case 0:
95
/* min_EL EL1, but some accessible to EL0 via kernel ABI */
96
--
76
--
97
2.25.1
77
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Simplify freeing cp_regs hash table entries by using a single
3
This value is unused.
4
allocation for the entire value.
5
6
This fixes a theoretical bug if we were to ever free the entire
7
hash table, because we've been installing string literal constants
8
into the cpreg structure in define_arm_vh_e2h_redirects_aliases.
9
However, at present we only free entries created for AArch32
10
wildcard cpregs which get overwritten by more specific cpregs,
11
so this bug is never exposed.
12
4
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20220501055028.646596-13-richard.henderson@linaro.org
7
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
9
---
18
target/arm/cpu.c | 16 +---------------
10
target/arm/ptw.c | 5 ++---
19
target/arm/helper.c | 10 ++++++++--
11
1 file changed, 2 insertions(+), 3 deletions(-)
20
2 files changed, 9 insertions(+), 17 deletions(-)
21
12
22
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
23
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.c
15
--- a/target/arm/ptw.c
25
+++ b/target/arm/cpu.c
16
+++ b/target/arm/ptw.c
26
@@ -XXX,XX +XXX,XX @@ uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz)
17
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
27
return (Aff1 << ARM_AFF1_SHIFT) | Aff0;
18
* s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
28
}
19
* combined attributes in MAIR_EL1 format.
29
20
*/
30
-static void cpreg_hashtable_data_destroy(gpointer data)
21
-static uint8_t combined_attrs_fwb(CPUARMState *env,
31
-{
22
- ARMCacheAttrs s1, ARMCacheAttrs s2)
32
- /*
23
+static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
33
- * Destroy function for cpu->cp_regs hashtable data entries.
34
- * We must free the name string because it was g_strdup()ed in
35
- * add_cpreg_to_hashtable(). It's OK to cast away the 'const'
36
- * from r->name because we know we definitely allocated it.
37
- */
38
- ARMCPRegInfo *r = data;
39
-
40
- g_free((void *)r->name);
41
- g_free(r);
42
-}
43
-
44
static void arm_cpu_initfn(Object *obj)
45
{
24
{
46
ARMCPU *cpu = ARM_CPU(obj);
25
switch (s2.attrs) {
47
26
case 7:
48
cpu_set_cpustate_pointers(cpu);
27
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
49
cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
28
50
- NULL, cpreg_hashtable_data_destroy);
29
/* Combine memory type and cacheability attributes */
51
+ NULL, g_free);
30
if (arm_hcr_el2_eff(env) & HCR_FWB) {
52
31
- ret.attrs = combined_attrs_fwb(env, s1, s2);
53
QLIST_INIT(&cpu->pre_el_change_hooks);
32
+ ret.attrs = combined_attrs_fwb(s1, s2);
54
QLIST_INIT(&cpu->el_change_hooks);
33
} else {
55
diff --git a/target/arm/helper.c b/target/arm/helper.c
34
ret.attrs = combined_attrs_nofwb(env, s1, s2);
56
index XXXXXXX..XXXXXXX 100644
35
}
57
--- a/target/arm/helper.c
58
+++ b/target/arm/helper.c
59
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
60
* add a single reginfo struct to the hash table.
61
*/
62
uint32_t key;
63
- ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
64
+ ARMCPRegInfo *r2;
65
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
66
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
67
+ size_t name_len;
68
+
69
+ /* Combine cpreg and name into one allocation. */
70
+ name_len = strlen(name) + 1;
71
+ r2 = g_malloc(sizeof(*r2) + name_len);
72
+ *r2 = *r;
73
+ r2->name = memcpy(r2 + 1, name, name_len);
74
75
- r2->name = g_strdup(name);
76
/* Reset the secure state to the specific incoming state. This is
77
* necessary as the register may have been defined with both states.
78
*/
79
--
36
--
80
2.25.1
37
2.25.1
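
For reference, the single-allocation trick used by the "merge allocation of the
cpreg and its name" change above can be shown in isolation: allocate
sizeof(struct) plus the string, copy the string into the tail, and point the
name field at it, so one g_free() releases both. A minimal standalone sketch;
Reg, reg_dup_with_name and the sample register name are invented, not the QEMU
types:

    #include <glib.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        int type;
        const char *name;
    } Reg;

    static Reg *reg_dup_with_name(const Reg *src, const char *name)
    {
        size_t name_len = strlen(name) + 1;
        Reg *r = g_malloc(sizeof(*r) + name_len);

        *r = *src;
        /* The name is copied into the bytes directly after the struct. */
        r->name = memcpy(r + 1, name, name_len);
        return r;
    }

    int main(void)
    {
        Reg base = { .type = 1 };
        Reg *r = reg_dup_with_name(&base, "SCTLR_EL1");

        printf("%s (type %d)\n", r->name, r->type);
        g_free(r); /* frees the struct and its name together */
        return 0;
    }
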
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Computing isbanked only once makes the code
3
These subroutines did not need ENV for anything except
4
a bit easier to read.
4
retrieving the effective value of HCR anyway.
5
5
6
We have computed the effective value of HCR in the callers,
7
and this will be especially important for interpreting HCR
8
in a non-current security state.
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
8
Message-id: 20220501055028.646596-17-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
14
---
11
target/arm/helper.c | 6 ++++--
15
target/arm/ptw.c | 30 +++++++++++++++++-------------
12
1 file changed, 4 insertions(+), 2 deletions(-)
16
1 file changed, 17 insertions(+), 13 deletions(-)
13
17
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
20
--- a/target/arm/ptw.c
17
+++ b/target/arm/helper.c
21
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
22
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
19
bool is64 = r->type & ARM_CP_64BIT;
23
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
20
bool ns = secstate & ARM_CP_SECSTATE_NS;
24
}
21
int cp = r->cp;
25
22
+ bool isbanked;
26
-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
23
size_t name_len;
27
+static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
24
28
{
25
switch (state) {
29
/*
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
30
* For an S1 page table walk, the stage 1 attributes are always
27
r2->opaque = opaque;
31
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
32
* when cacheattrs.attrs bit [2] is 0.
33
*/
34
assert(cacheattrs.is_s2_format);
35
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
36
+ if (hcr & HCR_FWB) {
37
return (cacheattrs.attrs & 0x4) == 0;
38
} else {
39
return (cacheattrs.attrs & 0xc) == 0;
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
41
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
42
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
43
GetPhysAddrResult s2 = {};
44
+ uint64_t hcr;
45
int ret;
46
47
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
48
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
49
fi->s1ns = !is_secure;
50
return ~0;
51
}
52
- if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
53
- ptw_attrs_are_device(env, s2.cacheattrs)) {
54
+
55
+ hcr = arm_hcr_el2_eff(env);
56
+ if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
57
/*
58
* PTW set and S1 walk touched S2 Device memory:
59
* generate Permission fault.
60
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
61
* ref: shared/translation/attrs/S2AttrDecode()
62
* .../S2ConvertAttrsHints()
63
*/
64
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
65
+static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs)
66
{
67
uint8_t hiattr = extract32(s2attrs, 2, 2);
68
uint8_t loattr = extract32(s2attrs, 0, 2);
69
uint8_t hihint = 0, lohint = 0;
70
71
if (hiattr != 0) { /* normal memory */
72
- if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
73
+ if (hcr & HCR_CD) { /* cache disabled */
74
hiattr = loattr = 1; /* non-cacheable */
75
} else {
76
if (hiattr != 1) { /* Write-through or write-back */
77
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
78
* s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
79
* combined attributes in MAIR_EL1 format.
80
*/
81
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
82
+static uint8_t combined_attrs_nofwb(uint64_t hcr,
83
ARMCacheAttrs s1, ARMCacheAttrs s2)
84
{
85
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
86
87
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
88
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
89
90
s1lo = extract32(s1.attrs, 0, 4);
91
s2lo = extract32(s2_mair_attrs, 0, 4);
92
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
93
* @s1: Attributes from stage 1 walk
94
* @s2: Attributes from stage 2 walk
95
*/
96
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
97
+static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
98
ARMCacheAttrs s1, ARMCacheAttrs s2)
99
{
100
ARMCacheAttrs ret;
101
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
28
}
102
}
29
103
30
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
104
/* Combine memory type and cacheability attributes */
31
+ isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
105
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
32
+ if (isbanked) {
106
+ if (hcr & HCR_FWB) {
33
/* Register is banked (using both entries in array).
107
ret.attrs = combined_attrs_fwb(s1, s2);
34
* Overwriting fieldoffset as the array is only used to define
108
} else {
35
* banked registers but later only fieldoffset is used.
109
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
36
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
110
+ ret.attrs = combined_attrs_nofwb(hcr, s1, s2);
37
}
111
}
38
112
39
if (state == ARM_CP_STATE_AA32) {
113
/*
40
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
114
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
41
+ if (isbanked) {
115
ARMCacheAttrs cacheattrs1;
42
/* If the register is banked then we don't need to migrate or
116
ARMMMUIdx s2_mmu_idx;
43
* reset the 32-bit instance in certain cases:
117
bool is_el0;
44
*
118
+ uint64_t hcr;
119
120
ret = get_phys_addr_with_secure(env, address, access_type,
121
s1_mmu_idx, is_secure, result, fi);
122
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
123
}
124
125
/* Combine the S1 and S2 cache attributes. */
126
- if (arm_hcr_el2_eff(env) & HCR_DC) {
127
+ hcr = arm_hcr_el2_eff(env);
128
+ if (hcr & HCR_DC) {
129
/*
130
* HCR.DC forces the first stage attributes to
131
* Normal Non-Shareable,
132
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
133
}
134
cacheattrs1.shareability = 0;
135
}
136
- result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
137
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
138
result->cacheattrs);
139
140
/*
45
--
141
--
46
2.25.1
142
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Since e03b56863d2bc, our host endian indicator is unconditionally
3
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
4
set, which means that we can use a normal C condition.
4
that we use is_secure instead of the current security state.
5
These AT* operations have been broken since arm_hcr_el2_eff
6
gained a check for "el2 enabled" for Secure EL2.
5
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
8
Message-id: 20220501055028.646596-20-richard.henderson@linaro.org
9
[PMM: quote correct git hash in commit message]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
target/arm/helper.c | 9 +++------
13
target/arm/ptw.c | 8 ++++----
13
1 file changed, 3 insertions(+), 6 deletions(-)
14
1 file changed, 4 insertions(+), 4 deletions(-)
14
15
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
--- a/target/arm/ptw.c
18
+++ b/target/arm/helper.c
19
+++ b/target/arm/ptw.c
19
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
20
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
20
r2->type |= ARM_CP_ALIAS;
21
}
22
23
- if (r->state == ARM_CP_STATE_BOTH) {
24
-#if HOST_BIG_ENDIAN
25
- if (r2->fieldoffset) {
26
- r2->fieldoffset += sizeof(uint32_t);
27
- }
28
-#endif
29
+ if (HOST_BIG_ENDIAN &&
30
+ r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
31
+ r2->fieldoffset += sizeof(uint32_t);
32
}
21
}
33
}
22
}
34
23
24
- hcr_el2 = arm_hcr_el2_eff(env);
25
+ hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);
26
27
switch (mmu_idx) {
28
case ARMMMUIdx_Stage2:
29
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
30
return ~0;
31
}
32
33
- hcr = arm_hcr_el2_eff(env);
34
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
35
if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
36
/*
37
* PTW set and S1 walk touched S2 Device memory:
38
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
39
}
40
41
/* Combine the S1 and S2 cache attributes. */
42
- hcr = arm_hcr_el2_eff(env);
43
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
44
if (hcr & HCR_DC) {
45
/*
46
* HCR.DC forces the first stage attributes to
47
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
48
result->page_size = TARGET_PAGE_SIZE;
49
50
/* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
51
- hcr = arm_hcr_el2_eff(env);
52
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
53
result->cacheattrs.shareability = 0;
54
result->cacheattrs.is_s2_format = false;
55
if (hcr & HCR_DC) {
35
--
56
--
36
2.25.1
57
2.25.1
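
For reference, the idea behind the HOST_BIG_ENDIAN change above is that a macro
which is always defined to 0 or 1 can be tested with an ordinary C if instead
of a preprocessor #if, so both branches are compiled and type-checked on every
host while the dead one is optimized out. A tiny standalone sketch with a
stand-in macro (not the QEMU one):

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in for an endianness indicator that is always 0 or 1. */
    #define HOST_IS_BIG_ENDIAN 0

    int main(void)
    {
        size_t fieldoffset = 16;

        if (HOST_IS_BIG_ENDIAN && fieldoffset) {
            /* On big-endian hosts, point at the low 32 bits of a 64-bit field. */
            fieldoffset += sizeof(uint32_t);
        }
        printf("%zu\n", fieldoffset);
        return 0;
    }
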
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This controls whether the PACI{A,B}SP instructions trap with BTYPE=3
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
(indirect branch from register other than x16/x17). The Linux kernel
5
sets this in bti_enable().
6
7
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/998
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
10
Message-id: 20220427042312.294300-1-richard.henderson@linaro.org
11
[PMM: remove stray change to makefile comment]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
7
---
14
target/arm/cpu.c | 2 ++
8
target/arm/ptw.c | 138 +++++++++++++++++++++++++----------------------
15
tests/tcg/aarch64/bti-3.c | 42 +++++++++++++++++++++++++++++++
9
1 file changed, 74 insertions(+), 64 deletions(-)
16
tests/tcg/aarch64/Makefile.target | 6 ++---
17
3 files changed, 47 insertions(+), 3 deletions(-)
18
create mode 100644 tests/tcg/aarch64/bti-3.c
19
10
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.c
13
--- a/target/arm/ptw.c
23
+++ b/target/arm/cpu.c
14
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
15
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
25
/* Enable all PAC keys. */
16
return ret;
26
env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
17
}
27
SCTLR_EnDA | SCTLR_EnDB);
18
28
+ /* Trap on btype=3 for PACIxSP. */
29
+ env->cp15.sctlr_el[1] |= SCTLR_BT0;
30
/* and to the FP/Neon instructions */
31
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
32
/* and to the SVE instructions */
33
diff --git a/tests/tcg/aarch64/bti-3.c b/tests/tcg/aarch64/bti-3.c
34
new file mode 100644
35
index XXXXXXX..XXXXXXX
36
--- /dev/null
37
+++ b/tests/tcg/aarch64/bti-3.c
38
@@ -XXX,XX +XXX,XX @@
39
+/*
19
+/*
40
+ * BTI vs PACIASP
20
+ * MMU disabled. S1 addresses within aa64 translation regimes are
21
+ * still checked for bounds -- see AArch64.S1DisabledOutput().
41
+ */
22
+ */
23
+static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
24
+ MMUAccessType access_type,
25
+ ARMMMUIdx mmu_idx, bool is_secure,
26
+ GetPhysAddrResult *result,
27
+ ARMMMUFaultInfo *fi)
28
+{
29
+ uint64_t hcr;
30
+ uint8_t memattr;
42
+
31
+
43
+#include "bti-crt.inc.c"
32
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
33
+ int r_el = regime_el(env, mmu_idx);
34
+ if (arm_el_is_aa64(env, r_el)) {
35
+ int pamax = arm_pamax(env_archcpu(env));
36
+ uint64_t tcr = env->cp15.tcr_el[r_el];
37
+ int addrtop, tbi;
44
+
38
+
45
+static void skip2_sigill(int sig, siginfo_t *info, ucontext_t *uc)
39
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
46
+{
40
+ if (access_type == MMU_INST_FETCH) {
47
+ uc->uc_mcontext.pc += 8;
41
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
48
+ uc->uc_mcontext.pstate = 1;
42
+ }
43
+ tbi = (tbi >> extract64(address, 55, 1)) & 1;
44
+ addrtop = (tbi ? 55 : 63);
45
+
46
+ if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
47
+ fi->type = ARMFault_AddressSize;
48
+ fi->level = 0;
49
+ fi->stage2 = false;
50
+ return 1;
51
+ }
52
+
53
+ /*
54
+ * When TBI is disabled, we've just validated that all of the
55
+ * bits above PAMax are zero, so logically we only need to
56
+ * clear the top byte for TBI. But it's clearer to follow
57
+ * the pseudocode set of addrdesc.paddress.
58
+ */
59
+ address = extract64(address, 0, 52);
60
+ }
61
+ }
62
+
63
+ result->phys = address;
64
+ result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
65
+ result->page_size = TARGET_PAGE_SIZE;
66
+
67
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
68
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
69
+ result->cacheattrs.shareability = 0;
70
+ result->cacheattrs.is_s2_format = false;
71
+ if (hcr & HCR_DC) {
72
+ if (hcr & HCR_DCT) {
73
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
74
+ } else {
75
+ memattr = 0xff; /* Normal, WB, RWA */
76
+ }
77
+ } else if (access_type == MMU_INST_FETCH) {
78
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
79
+ memattr = 0xee; /* Normal, WT, RA, NT */
80
+ } else {
81
+ memattr = 0x44; /* Normal, NC, No */
82
+ }
83
+ result->cacheattrs.shareability = 2; /* outer sharable */
84
+ } else {
85
+ memattr = 0x00; /* Device, nGnRnE */
86
+ }
87
+ result->cacheattrs.attrs = memattr;
88
+ return 0;
49
+}
89
+}
50
+
90
+
51
+#define BTYPE_1() \
91
bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
52
+ asm("mov %0,#1; adr x16, 1f; br x16; 1: hint #25; mov %0,#0" \
92
MMUAccessType access_type, ARMMMUIdx mmu_idx,
53
+ : "=r"(skipped) : : "x16", "x30")
93
bool is_secure, GetPhysAddrResult *result,
54
+
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
55
+#define BTYPE_2() \
95
/* Definitely a real MMU, not an MPU */
56
+ asm("mov %0,#1; adr x16, 1f; blr x16; 1: hint #25; mov %0,#0" \
96
57
+ : "=r"(skipped) : : "x16", "x30")
97
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
58
+
98
- uint64_t hcr;
59
+#define BTYPE_3() \
99
- uint8_t memattr;
60
+ asm("mov %0,#1; adr x15, 1f; br x15; 1: hint #25; mov %0,#0" \
100
-
61
+ : "=r"(skipped) : : "x15", "x30")
101
- /*
62
+
102
- * MMU disabled. S1 addresses within aa64 translation regimes are
63
+#define TEST(WHICH, EXPECT) \
103
- * still checked for bounds -- see AArch64.TranslateAddressS1Off.
64
+ do { WHICH(); fail += skipped ^ EXPECT; } while (0)
104
- */
65
+
105
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
66
+int main()
106
- int r_el = regime_el(env, mmu_idx);
67
+{
107
- if (arm_el_is_aa64(env, r_el)) {
68
+ int fail = 0;
108
- int pamax = arm_pamax(env_archcpu(env));
69
+ int skipped;
109
- uint64_t tcr = env->cp15.tcr_el[r_el];
70
+
110
- int addrtop, tbi;
71
+ /* Signal-like with SA_SIGINFO. */
111
-
72
+ signal_info(SIGILL, skip2_sigill);
112
- tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
73
+
113
- if (access_type == MMU_INST_FETCH) {
74
+ /* With SCTLR_EL1.BT0 set, PACIASP is not compatible with type=3. */
114
- tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
75
+ TEST(BTYPE_1, 0);
115
- }
76
+ TEST(BTYPE_2, 0);
116
- tbi = (tbi >> extract64(address, 55, 1)) & 1;
77
+ TEST(BTYPE_3, 1);
117
- addrtop = (tbi ? 55 : 63);
78
+
118
-
79
+ return fail;
119
- if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
80
+}
120
- fi->type = ARMFault_AddressSize;
81
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
121
- fi->level = 0;
82
index XXXXXXX..XXXXXXX 100644
122
- fi->stage2 = false;
83
--- a/tests/tcg/aarch64/Makefile.target
123
- return 1;
84
+++ b/tests/tcg/aarch64/Makefile.target
124
- }
85
@@ -XXX,XX +XXX,XX @@ endif
125
-
86
# BTI Tests
126
- /*
87
# bti-1 tests the elf notes, so we require special compiler support.
127
- * When TBI is disabled, we've just validated that all of the
88
ifneq ($(CROSS_CC_HAS_ARMV8_BTI),)
128
- * bits above PAMax are zero, so logically we only need to
89
-AARCH64_TESTS += bti-1
129
- * clear the top byte for TBI. But it's clearer to follow
90
-bti-1: CFLAGS += -mbranch-protection=standard
130
- * the pseudocode set of addrdesc.paddress.
91
-bti-1: LDFLAGS += -nostdlib
131
- */
92
+AARCH64_TESTS += bti-1 bti-3
132
- address = extract64(address, 0, 52);
93
+bti-1 bti-3: CFLAGS += -mbranch-protection=standard
133
- }
94
+bti-1 bti-3: LDFLAGS += -nostdlib
134
- }
95
endif
135
- result->phys = address;
96
# bti-2 tests PROT_BTI, so no special compiler support required.
136
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
97
AARCH64_TESTS += bti-2
137
- result->page_size = TARGET_PAGE_SIZE;
138
-
139
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
140
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
141
- result->cacheattrs.shareability = 0;
142
- result->cacheattrs.is_s2_format = false;
143
- if (hcr & HCR_DC) {
144
- if (hcr & HCR_DCT) {
145
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
146
- } else {
147
- memattr = 0xff; /* Normal, WB, RWA */
148
- }
149
- } else if (access_type == MMU_INST_FETCH) {
150
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
151
- memattr = 0xee; /* Normal, WT, RA, NT */
152
- } else {
153
- memattr = 0x44; /* Normal, NC, No */
154
- }
155
- result->cacheattrs.shareability = 2; /* outer sharable */
156
- } else {
157
- memattr = 0x00; /* Device, nGnRnE */
158
- }
159
- result->cacheattrs.attrs = memattr;
160
- return 0;
161
+ return get_phys_addr_disabled(env, address, access_type, mmu_idx,
162
+ is_secure, result, fi);
163
}
164
-
165
if (regime_using_lpae_format(env, mmu_idx)) {
166
return get_phys_addr_lpae(env, address, access_type, mmu_idx,
167
is_secure, false, result, fi);
98
--
168
--
99
2.25.1
169
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>

Do not apply memattr or shareability for Stage2 translations.
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
pseudocode in AArch64.S1DisabledOutput.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
                                    GetPhysAddrResult *result,
                                    ARMMMUFaultInfo *fi)
 {
-    uint64_t hcr;
-    uint8_t memattr;
+    uint8_t memattr = 0x00;    /* Device nGnRnE */
+    uint8_t shareability = 0;  /* non-sharable */
 
     if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
         int r_el = regime_el(env, mmu_idx);
+
         if (arm_el_is_aa64(env, r_el)) {
             int pamax = arm_pamax(env_archcpu(env));
             uint64_t tcr = env->cp15.tcr_el[r_el];
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
              */
             address = extract64(address, 0, 52);
         }
+
+        /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
+        if (r_el == 1) {
+            uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+            if (hcr & HCR_DC) {
+                if (hcr & HCR_DCT) {
+                    memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
+                } else {
+                    memattr = 0xff;  /* Normal, WB, RWA */
+                }
+            }
+        }
+        if (memattr == 0 && access_type == MMU_INST_FETCH) {
+            if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
+                memattr = 0xee;  /* Normal, WT, RA, NT */
+            } else {
+                memattr = 0x44;  /* Normal, NC, No */
+            }
+            shareability = 2; /* outer sharable */
+        }
+        result->cacheattrs.is_s2_format = false;
     }
 
     result->phys = address;
     result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
     result->page_size = TARGET_PAGE_SIZE;
-
-    /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
-    hcr = arm_hcr_el2_eff_secstate(env, is_secure);
-    result->cacheattrs.shareability = 0;
-    result->cacheattrs.is_s2_format = false;
-    if (hcr & HCR_DC) {
-        if (hcr & HCR_DCT) {
-            memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
-        } else {
-            memattr = 0xff;  /* Normal, WB, RWA */
-        }
-    } else if (access_type == MMU_INST_FETCH) {
-        if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
-            memattr = 0xee;  /* Normal, WT, RA, NT */
-        } else {
-            memattr = 0x44;  /* Normal, NC, No */
-        }
-        result->cacheattrs.shareability = 2; /* outer sharable */
-    } else {
-        memattr = 0x00;  /* Device, nGnRnE */
-    }
+    result->cacheattrs.shareability = shareability;
     result->cacheattrs.attrs = memattr;
     return 0;
 }
-- 
2.25.1
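As a rough, standalone illustration of the attribute-selection rule the patch above implements for MMU-off stage 1 translations (a sketch only, not QEMU code: the function name and parameters are invented here, while the 0xf0/0xff/0xee/0x44/0x00 encodings are the ones used in the hunk):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Model of the memattr choice in get_phys_addr_disabled() after this patch. */
static uint8_t s1_off_memattr(bool stage2, bool regime_el10, bool hcr_dc,
                              bool hcr_dct, bool ifetch, bool sctlr_i)
{
    uint8_t memattr = 0x00;            /* Device nGnRnE by default */

    if (stage2) {
        return memattr;                /* Stage2 walks: no memattr applied */
    }
    if (regime_el10 && hcr_dc) {       /* HCR_EL2.DC only affects EL1&0 */
        return hcr_dct ? 0xf0 : 0xff;  /* Tagged / untagged Normal WB RWA */
    }
    if (ifetch) {                      /* instruction fetch with MMU off */
        return sctlr_i ? 0xee : 0x44;  /* Normal WT RA / Normal NC */
    }
    return memattr;
}

int main(void)
{
    printf("EL1&0 data, HCR.DC=1:  0x%02x\n",
           (unsigned)s1_off_memattr(false, true, true, false, false, false));
    printf("EL2 ifetch, SCTLR.I=1: 0x%02x\n",
           (unsigned)s1_off_memattr(false, false, false, false, true, true));
    printf("Stage2 walk:           0x%02x\n",
           (unsigned)s1_off_memattr(true, false, false, false, false, false));
    return 0;
}

The point of the restructuring is visible here: HCR_EL2.DC/DCT are consulted only for the EL1&0 regime, and Stage2 walks never pick up memattr or shareability.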
From: Richard Henderson <richard.henderson@linaro.org>

Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
so that it may be passed directly to tlb_set_page_full.

The change is large, but mostly mechanical.  The major
non-mechanical change is page_size -> lg_page_size.
Most of the time this is obvious, and is related to
TARGET_PAGE_BITS.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h  |   5 +-
 target/arm/helper.c     |  12 +--
 target/arm/m_helper.c   |  20 ++---
 target/arm/ptw.c        | 179 ++++++++++++++++++++--------------------
 target/arm/tlb_helper.c |   9 +-
 5 files changed, 111 insertions(+), 114 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
 
 /* Fields that are valid upon success. */
 typedef struct GetPhysAddrResult {
-    hwaddr phys;
-    target_ulong page_size;
-    int prot;
-    MemTxAttrs attrs;
+    CPUTLBEntryFull f;
     ARMCacheAttrs cacheattrs;
 } GetPhysAddrResult;
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         /* Create a 64-bit PAR */
         par64 = (1 << 11); /* LPAE bit always set */
         if (!ret) {
-            par64 |= res.phys & ~0xfffULL;
-            if (!res.attrs.secure) {
+            par64 |= res.f.phys_addr & ~0xfffULL;
+            if (!res.f.attrs.secure) {
                 par64 |= (1 << 9); /* NS */
             }
             par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
          */
         if (!ret) {
             /* We do not set any attribute bits in the PAR */
-            if (res.page_size == (1 << 24)
+            if (res.f.lg_page_size == 24
                 && arm_feature(env, ARM_FEATURE_V7)) {
-                par64 = (res.phys & 0xff000000) | (1 << 1);
+                par64 = (res.f.phys_addr & 0xff000000) | (1 << 1);
             } else {
-                par64 = res.phys & 0xfffff000;
+                par64 = res.f.phys_addr & 0xfffff000;
             }
-            if (!res.attrs.secure) {
+            if (!res.f.attrs.secure) {
                 par64 |= (1 << 9); /* NS */
             }
         } else {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
         }
         goto pend_fault;
     }
-    address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value,
-                         res.attrs, &txres);
+    address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr,
+                         value, res.f.attrs, &txres);
     if (txres != MEMTX_OK) {
         /* BusFault trying to write the data */
         if (mode == STACK_LAZYFP) {
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
         goto pend_fault;
     }
 
-    value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
-                              res.attrs, &txres);
+    value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+                              res.f.phys_addr, res.f.attrs, &txres);
     if (txres != MEMTX_OK) {
         /* BusFault trying to read the data */
         qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
         qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
         return false;
     }
-    *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys,
-                                  res.attrs, &txres);
+    *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs),
+                                  res.f.phys_addr, res.f.attrs, &txres);
     if (txres != MEMTX_OK) {
         env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
         }
         return false;
     }
-    value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
-                              res.attrs, &txres);
+    value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+                              res.f.phys_addr, res.f.attrs, &txres);
     if (txres != MEMTX_OK) {
         /* BusFault trying to read the data */
         qemu_log_mask(CPU_LOG_INT,
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
         } else {
             mrvalid = true;
         }
-        r = res.prot & PAGE_READ;
-        rw = res.prot & PAGE_WRITE;
+        r = res.f.prot & PAGE_READ;
+        rw = res.f.prot & PAGE_WRITE;
     } else {
         r = false;
         rw = false;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             assert(!is_secure);
         }
 
-        addr = s2.phys;
+        addr = s2.f.phys_addr;
     }
     return addr;
 }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
         /* 1Mb section. */
         phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
         ap = (desc >> 10) & 3;
-        result->page_size = 1024 * 1024;
+        result->f.lg_page_size = 20; /* 1MB */
     } else {
         /* Lookup l2 entry. */
         if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
         case 1: /* 64k page. */
             phys_addr = (desc & 0xffff0000) | (address & 0xffff);
             ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
-            result->page_size = 0x10000;
+            result->f.lg_page_size = 16;
             break;
         case 2: /* 4k page. */
             phys_addr = (desc & 0xfffff000) | (address & 0xfff);
             ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
-            result->page_size = 0x1000;
+            result->f.lg_page_size = 12;
             break;
         case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
             if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
                 if (arm_feature(env, ARM_FEATURE_XSCALE)
                     || arm_feature(env, ARM_FEATURE_V6)) {
                     phys_addr = (desc & 0xfffff000) | (address & 0xfff);
-                    result->page_size = 0x1000;
+                    result->f.lg_page_size = 12;
                 } else {
                     /*
                      * UNPREDICTABLE in ARMv5; we choose to take a
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
                 }
             } else {
                 phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
-                result->page_size = 0x400;
+                result->f.lg_page_size = 10;
             }
             ap = (desc >> 4) & 3;
             break;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
             g_assert_not_reached();
         }
     }
-    result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
-    result->prot |= result->prot ? PAGE_EXEC : 0;
-    if (!(result->prot & (1 << access_type))) {
+    result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+    result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
+    if (!(result->f.prot & (1 << access_type))) {
        /* Access permission fault. */
        fi->type = ARMFault_Permission;
        goto do_fault;
    }
-    result->phys = phys_addr;
+    result->f.phys_addr = phys_addr;
     return false;
 do_fault:
     fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
             phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
             phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
             phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
-            result->page_size = 0x1000000;
+            result->f.lg_page_size = 24;  /* 16MB */
         } else {
             /* Section. */
             phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
-            result->page_size = 0x100000;
+            result->f.lg_page_size = 20;  /* 1MB */
         }
         ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
         xn = desc & (1 << 4);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         case 1: /* 64k page. */
             phys_addr = (desc & 0xffff0000) | (address & 0xffff);
             xn = desc & (1 << 15);
-            result->page_size = 0x10000;
+            result->f.lg_page_size = 16;
             break;
         case 2: case 3: /* 4k page. */
             phys_addr = (desc & 0xfffff000) | (address & 0xfff);
             xn = desc & 1;
-            result->page_size = 0x1000;
+            result->f.lg_page_size = 12;
             break;
         default:
             /* Never happens, but compiler isn't smart enough to tell. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         }
     }
     if (domain_prot == 3) {
-        result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+        result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
     } else {
         if (pxn && !regime_is_user(env, mmu_idx)) {
             xn = 1;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
             fi->type = ARMFault_AccessFlag;
             goto do_fault;
         }
-        result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
+        result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
         } else {
-        result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+        result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
         }
-        if (result->prot && !xn) {
-            result->prot |= PAGE_EXEC;
+        if (result->f.prot && !xn) {
+            result->f.prot |= PAGE_EXEC;
         }
-        if (!(result->prot & (1 << access_type))) {
+        if (!(result->f.prot & (1 << access_type))) {
             /* Access permission fault. */
             fi->type = ARMFault_Permission;
             goto do_fault;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
          * the CPU doesn't support TZ or this is a non-secure translation
          * regime, because the attribute will already be non-secure.
          */
-        result->attrs.secure = false;
+        result->f.attrs.secure = false;
     }
-    result->phys = phys_addr;
+    result->f.phys_addr = phys_addr;
     return false;
 do_fault:
     fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         ns = mmu_idx == ARMMMUIdx_Stage2;
         xn = extract32(attrs, 11, 2);
-        result->prot = get_S2prot(env, ap, xn, s1_is_el0);
+        result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
     } else {
         ns = extract32(attrs, 3, 1);
         xn = extract32(attrs, 12, 1);
         pxn = extract32(attrs, 11, 1);
-        result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
+        result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
     }
 
     fault_type = ARMFault_Permission;
-    if (!(result->prot & (1 << access_type))) {
+    if (!(result->f.prot & (1 << access_type))) {
         goto do_fault;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
          * the CPU doesn't support TZ or this is a non-secure translation
          * regime, because the attribute will already be non-secure.
          */
-        result->attrs.secure = false;
+        result->f.attrs.secure = false;
     }
     /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
     if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
-        arm_tlb_bti_gp(&result->attrs) = true;
+        arm_tlb_bti_gp(&result->f.attrs) = true;
     }
 
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         result->cacheattrs.shareability = extract32(attrs, 6, 2);
     }
 
-    result->phys = descaddr;
-    result->page_size = page_size;
+    result->f.phys_addr = descaddr;
+    result->f.lg_page_size = ctz64(page_size);
     return false;
 
 do_fault:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
 
     if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled. */
-        result->phys = address;
-        result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+        result->f.phys_addr = address;
+        result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
         return false;
     }
 
-    result->phys = address;
+    result->f.phys_addr = address;
     for (n = 7; n >= 0; n--) {
         base = env->cp15.c6_region[n];
         if ((base & 1) == 0) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
             fi->level = 1;
             return true;
         }
-        result->prot = PAGE_READ | PAGE_WRITE;
+        result->f.prot = PAGE_READ | PAGE_WRITE;
         break;
     case 2:
-        result->prot = PAGE_READ;
+        result->f.prot = PAGE_READ;
         if (!is_user) {
-            result->prot |= PAGE_WRITE;
+            result->f.prot |= PAGE_WRITE;
         }
         break;
     case 3:
-        result->prot = PAGE_READ | PAGE_WRITE;
+        result->f.prot = PAGE_READ | PAGE_WRITE;
         break;
     case 5:
         if (is_user) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
             fi->level = 1;
             return true;
         }
-        result->prot = PAGE_READ;
+        result->f.prot = PAGE_READ;
         break;
     case 6:
-        result->prot = PAGE_READ;
+        result->f.prot = PAGE_READ;
         break;
     default:
         /* Bad permission. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
         fi->level = 1;
         return true;
     }
-    result->prot |= PAGE_EXEC;
+    result->f.prot |= PAGE_EXEC;
     return false;
 }
 
 static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
-                                         int32_t address, int *prot)
+                                         int32_t address, uint8_t *prot)
 {
     if (!arm_feature(env, ARM_FEATURE_M)) {
         *prot = PAGE_READ | PAGE_WRITE;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     int n;
     bool is_user = regime_is_user(env, mmu_idx);
 
-    result->phys = address;
-    result->page_size = TARGET_PAGE_SIZE;
-    result->prot = 0;
+    result->f.phys_addr = address;
+    result->f.lg_page_size = TARGET_PAGE_BITS;
+    result->f.prot = 0;
 
     if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
          * which always does a direct read using address_space_ldl(), rather
          * than going via this function, so we don't need to check that here.
          */
-        get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+        get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
     } else { /* MPU enabled */
         for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
             /* region search */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
                 if (ranges_overlap(base, rmask,
                                    address & TARGET_PAGE_MASK,
                                    TARGET_PAGE_SIZE)) {
-                    result->page_size = 1;
+                    result->f.lg_page_size = 0;
                 }
                 continue;
             }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
                 continue;
             }
             if (rsize < TARGET_PAGE_BITS) {
-                result->page_size = 1 << rsize;
+                result->f.lg_page_size = rsize;
             }
             break;
         }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
             fi->type = ARMFault_Background;
             return true;
         }
-        get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+        get_phys_addr_pmsav7_default(env, mmu_idx, address,
+                                     &result->f.prot);
     } else { /* a MPU hit! */
         uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
         uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
                 case 5:
                     break; /* no access */
                 case 3:
-                    result->prot |= PAGE_WRITE;
+                    result->f.prot |= PAGE_WRITE;
                     /* fall through */
                 case 2:
                 case 6:
-                    result->prot |= PAGE_READ | PAGE_EXEC;
+                    result->f.prot |= PAGE_READ | PAGE_EXEC;
                     break;
                 case 7:
                     /* for v7M, same as 6; for R profile a reserved value */
                     if (arm_feature(env, ARM_FEATURE_M)) {
-                        result->prot |= PAGE_READ | PAGE_EXEC;
+                        result->f.prot |= PAGE_READ | PAGE_EXEC;
                         break;
                     }
                     /* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
                 case 1:
                 case 2:
                 case 3:
-                    result->prot |= PAGE_WRITE;
+                    result->f.prot |= PAGE_WRITE;
                     /* fall through */
                 case 5:
                 case 6:
-                    result->prot |= PAGE_READ | PAGE_EXEC;
+                    result->f.prot |= PAGE_READ | PAGE_EXEC;
                     break;
                 case 7:
                     /* for v7M, same as 6; for R profile a reserved value */
                     if (arm_feature(env, ARM_FEATURE_M)) {
-                        result->prot |= PAGE_READ | PAGE_EXEC;
+                        result->f.prot |= PAGE_READ | PAGE_EXEC;
                         break;
                     }
                     /* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
 
             /* execute never */
             if (xn) {
-                result->prot &= ~PAGE_EXEC;
+                result->f.prot &= ~PAGE_EXEC;
             }
         }
     }
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    return !(result->prot & (1 << access_type));
+    return !(result->f.prot & (1 << access_type));
 }
 
 bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
     uint32_t addr_page_base = address & TARGET_PAGE_MASK;
     uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
 
-    result->page_size = TARGET_PAGE_SIZE;
-    result->phys = address;
-    result->prot = 0;
+    result->f.lg_page_size = TARGET_PAGE_BITS;
+    result->f.phys_addr = address;
+    result->f.prot = 0;
     if (mregion) {
         *mregion = -1;
     }
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
                 ranges_overlap(base, limit - base + 1,
                                addr_page_base,
                                TARGET_PAGE_SIZE)) {
-                result->page_size = 1;
+                result->f.lg_page_size = 0;
             }
             continue;
         }
 
         if (base > addr_page_base || limit < addr_page_limit) {
-            result->page_size = 1;
+            result->f.lg_page_size = 0;
         }
 
         if (matchregion != -1) {
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 
     if (matchregion == -1) {
         /* hit using the background region */
-        get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+        get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
     } else {
         uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
         uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
             xn = 1;
         }
 
-        result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
-        if (result->prot && !xn && !(pxn && !is_user)) {
-            result->prot |= PAGE_EXEC;
+        result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
+        if (result->f.prot && !xn && !(pxn && !is_user)) {
+            result->f.prot |= PAGE_EXEC;
         }
         /*
          * We don't need to look the attribute up in the MAIR0/MAIR1
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    return !(result->prot & (1 << access_type));
+    return !(result->f.prot & (1 << access_type));
 }
 
 static bool v8m_is_sau_exempt(CPUARMState *env,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
             } else {
                 fi->type = ARMFault_QEMU_SFault;
             }
-            result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
-            result->phys = address;
-            result->prot = 0;
+            result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+            result->f.phys_addr = address;
+            result->f.prot = 0;
             return true;
         }
     } else {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
          * might downgrade a secure access to nonsecure.
          */
         if (sattrs.ns) {
-            result->attrs.secure = false;
+            result->f.attrs.secure = false;
        } else if (!secure) {
            /*
             * NS access to S memory must fault.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
             * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
             */
            fi->type = ARMFault_QEMU_SFault;
-           result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
-           result->phys = address;
-           result->prot = 0;
+           result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+           result->f.phys_addr = address;
+           result->f.prot = 0;
            return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
     ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure,
                             result, fi, NULL);
     if (sattrs.subpage) {
-        result->page_size = 1;
+        result->f.lg_page_size = 0;
     }
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
         result->cacheattrs.is_s2_format = false;
     }
 
-    result->phys = address;
-    result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
-    result->page_size = TARGET_PAGE_SIZE;
+    result->f.phys_addr = address;
+    result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+    result->f.lg_page_size = TARGET_PAGE_BITS;
     result->cacheattrs.shareability = shareability;
     result->cacheattrs.attrs = memattr;
     return 0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             return ret;
         }
 
-        ipa = result->phys;
-        ipa_secure = result->attrs.secure;
+        ipa = result->f.phys_addr;
+        ipa_secure = result->f.attrs.secure;
         if (is_secure) {
             /* Select TCR based on the NS bit from the S1 walk. */
             s2walk_secure = !(ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
          * Save the stage1 results so that we may merge
          * prot and cacheattrs later.
          */
-        s1_prot = result->prot;
+        s1_prot = result->f.prot;
         cacheattrs1 = result->cacheattrs;
         memset(result, 0, sizeof(*result));
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         fi->s2addr = ipa;
 
         /* Combine the S1 and S2 perms. */
-        result->prot &= s1_prot;
+        result->f.prot &= s1_prot;
 
         /* If S2 fails, return early. */
         if (ret) {
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
          * Check if IPA translates to secure or non-secure PA space.
          * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
          */
-        result->attrs.secure =
+        result->f.attrs.secure =
             (is_secure
              && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
              && (ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
      * cannot upgrade an non-secure translation regime's attributes
      * to secure.
      */
-    result->attrs.secure = is_secure;
-    result->attrs.user = regime_is_user(env, mmu_idx);
+    result->f.attrs.secure = is_secure;
+    result->f.attrs.user = regime_is_user(env, mmu_idx);
 
     /*
      * Fast Context Switch Extension. This doesn't exist at all in v8.
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
 
     if (arm_feature(env, ARM_FEATURE_PMSA)) {
         bool ret;
-        result->page_size = TARGET_PAGE_SIZE;
+        result->f.lg_page_size = TARGET_PAGE_BITS;
 
         if (arm_feature(env, ARM_FEATURE_V8)) {
             /* PMSAv8 */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                       (access_type == MMU_DATA_STORE ? "writing" : "execute"),
                       (uint32_t)address, mmu_idx,
                       ret ? "Miss" : "Hit",
-                      result->prot & PAGE_READ ? 'r' : '-',
-                      result->prot & PAGE_WRITE ? 'w' : '-',
-                      result->prot & PAGE_EXEC ? 'x' : '-');
+                      result->f.prot & PAGE_READ ? 'r' : '-',
+                      result->f.prot & PAGE_WRITE ? 'w' : '-',
+                      result->f.prot & PAGE_EXEC ? 'x' : '-');
 
         return ret;
     }
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
     bool ret;
 
     ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
-    *attrs = res.attrs;
+    *attrs = res.f.attrs;
 
     if (ret) {
         return -1;
     }
-    return res.phys;
+    return res.f.phys_addr;
 }
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
          * target page size are handled specially, so for those we
          * pass in the exact addresses.
          */
-        if (res.page_size >= TARGET_PAGE_SIZE) {
-            res.phys &= TARGET_PAGE_MASK;
+        if (res.f.lg_page_size >= TARGET_PAGE_BITS) {
+            res.f.phys_addr &= TARGET_PAGE_MASK;
             address &= TARGET_PAGE_MASK;
         }
         /* Notice and record tagged memory. */
         if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
-            arm_tlb_mte_tagged(&res.attrs) = true;
+            arm_tlb_mte_tagged(&res.f.attrs) = true;
         }
 
-        tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
-                                res.prot, mmu_idx, res.page_size);
+        tlb_set_page_full(cs, mmu_idx, address, &res.f);
         return true;
     } else if (probe) {
         return false;
-- 
2.25.1
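One mostly-mechanical detail from the patch above worth spelling out: page_size becomes lg_page_size (the log2 of the page size), which is why the LPAE path now stores ctz64(page_size). Below is a minimal standalone check of that equivalence; it uses the compiler builtin rather than QEMU's ctz64() helper, so treat it as a sketch rather than QEMU code.

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Power-of-two page sizes and their lg_page_size equivalents. */
    uint64_t sizes[]   = { 0x400, 0x1000, 0x10000, 0x100000, 0x1000000 };
    int expected_lg[]  = { 10,    12,     16,      20,       24 };

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        /* count-trailing-zeros of a power of two is its log2 */
        int lg = __builtin_ctzll(sizes[i]);
        assert(lg == expected_lg[i]);
        printf("page_size 0x%" PRIx64 " -> lg_page_size %d\n", sizes[i], lg);
    }
    return 0;
}

Storing the log2 also lets the sub-page MPU cases be expressed naturally: where the old code set page_size = 1, the hunks above now set lg_page_size = 0.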
From: Jerome Forissier <jerome.forissier@linaro.org>

According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and
SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME
is advertised. This has to be taken care of when QEMU boots directly
into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image").

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             if (cpu_isar_feature(aa64_sve, cpu)) {
                 env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
             }
+            if (cpu_isar_feature(aa64_sme, cpu)) {
+                env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
+                env->cp15.scr_el3 |= SCR_ENTP2;
+            }
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
             /* This hook is only supported for AArch32 currently:
-- 
2.25.1
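For context, here is a standalone sketch of the reset-time rule this patch encodes. It is a model only: the bit positions assumed below (CPTR_EL3.EZ = bit 8, CPTR_EL3.ESM = bit 12, SCR_EL3.EnTP2 = bit 41) come from the Arm ARM rather than from this series, so treat them as illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed register bit positions (see the Arm ARM). */
#define CPTR_EL3_EZ    (1ULL << 8)   /* don't trap SVE at EL3 */
#define CPTR_EL3_ESM   (1ULL << 12)  /* don't trap SME at EL3 */
#define SCR_EL3_ENTP2  (1ULL << 41)  /* don't trap TPIDR2_EL0 accesses */

/* Illustrative model of the EL3 state a direct-kernel boot must set up. */
static void reset_el3_for_linux(bool have_sve, bool have_sme,
                                uint64_t *cptr_el3, uint64_t *scr_el3)
{
    if (have_sve) {
        *cptr_el3 |= CPTR_EL3_EZ;
    }
    if (have_sme) {              /* the new requirement from booting.rst */
        *cptr_el3 |= CPTR_EL3_ESM;
        *scr_el3 |= SCR_EL3_ENTP2;
    }
}

int main(void)
{
    uint64_t cptr = 0, scr = 0;
    reset_el3_for_linux(true, true, &cptr, &scr);
    printf("CPTR_EL3=0x%llx SCR_EL3=0x%llx\n",
           (unsigned long long)cptr, (unsigned long long)scr);
    return 0;
}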
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K.  The guest selects the one it wants using bits in the TCR_ELx
registers.  If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
---
 target/arm/cpu.h       |  33 +++++++++++++
 target/arm/internals.h |   9 ++++
 target/arm/helper.c    | 102 +++++++++++++++++++++++++++++++++++++----
 3 files changed, 136 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
     return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
 }
 
+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
+}
+
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
+}
+
 static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     return valid;
 }
 
+/* Granule size (i.e. page size) */
+typedef enum ARMGranuleSize {
+    /* Same order as TG0 encoding */
+    Gran4K,
+    Gran64K,
+    Gran16K,
+    GranInvalid,
+} ARMGranuleSize;
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
     }
 }
 
+static ARMGranuleSize tg0_to_gran_size(int tg)
+{
+    switch (tg) {
+    case 0:
+        return Gran4K;
+    case 1:
+        return Gran64K;
+    case 2:
+        return Gran16K;
+    default:
+        return GranInvalid;
+    }
+}
+
+static ARMGranuleSize tg1_to_gran_size(int tg)
+{
+    switch (tg) {
+    case 1:
+        return Gran16K;
+    case 2:
+        return Gran4K;
+    case 3:
+        return Gran64K;
+    default:
+        return GranInvalid;
+    }
+}
+
+static inline bool have4k(ARMCPU *cpu, bool stage2)
+{
+    return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu)
+        : cpu_isar_feature(aa64_tgran4, cpu);
+}
+
+static inline bool have16k(ARMCPU *cpu, bool stage2)
+{
+    return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu)
+        : cpu_isar_feature(aa64_tgran16, cpu);
+}
+
+static inline bool have64k(ARMCPU *cpu, bool stage2)
+{
+    return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu)
+        : cpu_isar_feature(aa64_tgran64, cpu);
+}
+
+static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran,
+                                         bool stage2)
+{
+    switch (gran) {
+    case Gran4K:
+        if (have4k(cpu, stage2)) {
+            return gran;
+        }
+        break;
+    case Gran16K:
+        if (have16k(cpu, stage2)) {
+            return gran;
+        }
+        break;
+    case Gran64K:
+        if (have64k(cpu, stage2)) {
+            return gran;
+        }
+        break;
+    case GranInvalid:
+        break;
+    }
+    /*
+     * If the guest selects a granule size that isn't implemented,
+     * the architecture requires that we behave as if it selected one
+     * that is (with an IMPDEF choice of which one to pick). We choose
+     * to implement the smallest supported granule size.
+     */
+    if (have4k(cpu, stage2)) {
+        return Gran4K;
+    }
+    if (have16k(cpu, stage2)) {
+        return Gran16K;
+    }
+    assert(have64k(cpu, stage2));
+    return Gran64K;
+}
+
 ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data)
 {
     uint64_t tcr = regime_tcr(env, mmu_idx);
     bool epd, hpd, using16k, using64k, tsz_oob, ds;
     int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
+    ARMGranuleSize gran;
     ARMCPU *cpu = env_archcpu(env);
+    bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
 
     if (!regime_has_2_ranges(mmu_idx)) {
         select = 0;
         tsz = extract32(tcr, 0, 6);
-        using64k = extract32(tcr, 14, 1);
-        using16k = extract32(tcr, 15, 1);
-        if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+        gran = tg0_to_gran_size(extract32(tcr, 14, 2));
+        if (stage2) {
             /* VTCR_EL2 */
             hpd = false;
         } else {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         select = extract64(va, 55, 1);
         if (!select) {
             tsz = extract32(tcr, 0, 6);
+            gran = tg0_to_gran_size(extract32(tcr, 14, 2));
             epd = extract32(tcr, 7, 1);
             sh = extract32(tcr, 12, 2);
-            using64k = extract32(tcr, 14, 1);
-            using16k = extract32(tcr, 15, 1);
             hpd = extract64(tcr, 41, 1);
         } else {
-            int tg = extract32(tcr, 30, 2);
-            using16k = tg == 1;
-            using64k = tg == 3;
             tsz = extract32(tcr, 16, 6);
+            gran = tg1_to_gran_size(extract32(tcr, 30, 2));
             epd = extract32(tcr, 23, 1);
             sh = extract32(tcr, 28, 2);
             hpd = extract64(tcr, 42, 1);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         ds = extract64(tcr, 59, 1);
     }
 
+    gran = sanitize_gran_size(cpu, gran, stage2);
+    using64k = gran == Gran64K;
+    using16k = gran == Gran16K;
+
     if (cpu_isar_feature(aa64_st, cpu)) {
         max_tsz = 48 - using64k;
     } else {
-- 
2.25.1
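Here is a standalone model of the fallback rule described in the commit message above, mirroring tg0_to_gran_size() and sanitize_gran_size() from the hunk (names and structure simplified; this is not the QEMU code itself):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef enum { Gran4K, Gran64K, Gran16K, GranInvalid } GranuleSize;

/* TCR_ELx.TG0 encoding: 0 -> 4K, 1 -> 64K, 2 -> 16K, 3 -> reserved. */
static GranuleSize tg0_to_gran(int tg0)
{
    switch (tg0) {
    case 0: return Gran4K;
    case 1: return Gran64K;
    case 2: return Gran16K;
    default: return GranInvalid;
    }
}

/* If the requested granule isn't implemented, fall back to the smallest one. */
static GranuleSize sanitize(GranuleSize g, bool have4k, bool have16k,
                            bool have64k)
{
    if ((g == Gran4K && have4k) || (g == Gran16K && have16k) ||
        (g == Gran64K && have64k)) {
        return g;
    }
    return have4k ? Gran4K : (have16k ? Gran16K : Gran64K);
}

int main(void)
{
    /* Guest asks for 16K pages on a CPU that only implements 4K and 64K. */
    GranuleSize g = sanitize(tg0_to_gran(2), true, false, true);
    assert(g == Gran4K);
    printf("16K not implemented -> treated as 4K\n");
    return 0;
}

As in the patch, the IMPDEF choice made when the requested granule is missing is simply the smallest granule the CPU does implement.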
Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
---
 target/arm/internals.h | 23 +++++++++++++++++++++--
 target/arm/helper.c    | 39 ++++++++++++++++++++++++++++-----------
 target/arm/ptw.c       |  8 +-------
 3 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize {
     GranInvalid,
 } ARMGranuleSize;
 
+/**
+ * arm_granule_bits: Return address size of the granule in bits
+ *
+ * Return the address size of the granule in bits. This corresponds
+ * to the pseudocode TGxGranuleBits().
+ */
+static inline int arm_granule_bits(ARMGranuleSize gran)
+{
+    switch (gran) {
+    case Gran64K:
+        return 16;
+    case Gran16K:
+        return 14;
+    case Gran4K:
+        return 12;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
     bool tbi : 1;
     bool epd : 1;
     bool hpd : 1;
-    bool using16k : 1;
-    bool using64k : 1;
     bool tsz_oob : 1;  /* tsz has been clamped to legal range */
     bool ds : 1;
+    ARMGranuleSize gran : 2;
 } ARMVAParameters;
 
 ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
     uint64_t length;
 } TLBIRange;
 
+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
+{
+    /*
+     * Note that the TLBI range TG field encoding differs from both
+     * TG0 and TG1 encodings.
+     */
+    switch (tg) {
+    case 1:
+        return Gran4K;
+    case 2:
+        return Gran16K;
+    case 3:
+        return Gran64K;
+    default:
+        return GranInvalid;
+    }
+}
+
 static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
                                      uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
     uint64_t select = sextract64(value, 36, 1);
     ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true);
     TLBIRange ret = { };
+    ARMGranuleSize gran;
 
     page_size_granule = extract64(value, 46, 2);
+    gran = tlbi_range_tg_to_gran_size(page_size_granule);
 
     /* The granule encoded in value must match the granule in use. */
-    if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) {
+    if (gran != param.gran) {
         qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
                       page_size_granule);
         return ret;
     }
 
-    page_shift = (page_size_granule - 1) * 2 + 12;
+    page_shift = arm_granule_bits(gran);
     num = extract64(value, 39, 5);
     scale = extract64(value, 44, 2);
     exponent = (5 * scale) + 1;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data)
 {
     uint64_t tcr = regime_tcr(env, mmu_idx);
-    bool epd, hpd, using16k, using64k, tsz_oob, ds;
+    bool epd, hpd, tsz_oob, ds;
     int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
     ARMGranuleSize gran;
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
     }
 
     gran = sanitize_gran_size(cpu, gran, stage2);
-    using64k = gran == Gran64K;
-    using16k = gran == Gran16K;
 
     if (cpu_isar_feature(aa64_st, cpu)) {
-        max_tsz = 48 - using64k;
+        max_tsz = 48 - (gran == Gran64K);
     } else {
         max_tsz = 39;
     }
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
      * adjust the effective value of DS, as documented.
      */
     min_tsz = 16;
-    if (using64k) {
+    if (gran == Gran64K) {
         if (cpu_isar_feature(aa64_lva, cpu)) {
             min_tsz = 12;
         }
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
     switch (mmu_idx) {
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_Stage2_S:
-        if (using16k) {
+        if (gran == Gran16K) {
             ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
         } else {
             ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
         }
         break;
     default:
-        if (using16k) {
+        if (gran == Gran16K) {
             ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
         } else {
             ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         .tbi = tbi,
         .epd = epd,
         .hpd = hpd,
-        .using16k = using16k,
-        .using64k = using64k,
         .tsz_oob = tsz_oob,
         .ds = ds,
+        .gran = gran,
     };
 }
 
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         }
     }
 
-    if (param.using64k) {
-        stride = 13;
-    } else if (param.using16k) {
-        stride = 11;
-    } else {
-        stride = 9;
-    }
+    stride = arm_granule_bits(param.gran) - 3;
 
     /*
      * Note that QEMU ignores shareability and cacheability attributes,
-- 
2.25.1
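To illustrate what the new enum buys in ptw.c: the table-descriptor stride is now derived as arm_granule_bits() minus 3 (eight-byte descriptors, so 512 per 4K granule and so on), replacing the using16k/using64k special cases. A small standalone check of those values follows; it mirrors arm_granule_bits() rather than including the QEMU header itself.

#include <assert.h>
#include <stdio.h>

typedef enum { Gran4K, Gran64K, Gran16K, GranInvalid } GranuleSize;

/* Mirrors arm_granule_bits(), i.e. pseudocode TGxGranuleBits(). */
static int granule_bits(GranuleSize g)
{
    switch (g) {
    case Gran4K:  return 12;
    case Gran16K: return 14;
    case Gran64K: return 16;
    default:      return -1;
    }
}

int main(void)
{
    /* stride = granule_bits - 3: 9 for 4K, 11 for 16K, 13 for 64K. */
    assert(granule_bits(Gran4K) - 3 == 9);
    assert(granule_bits(Gran16K) - 3 == 11);
    assert(granule_bits(Gran64K) - 3 == 13);
    printf("4K/16K/64K granules -> descriptor strides 9/11/13\n");
    return 0;
}

These are exactly the 9/11/13 constants the old open-coded if/else chain in get_phys_addr_lpae() used to hard-code.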
FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it
can report a different set of supported granule (page) sizes for
stage 1 and stage 2 translation tables.  As of commit c20281b2a5048
we already report the granule sizes that way for '-cpu max', and now
we also correctly make attempts to use unimplemented granule sizes
fail, so we can report the support of the feature in the
documentation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FRINTTS (Floating-point to integer instructions)
 - FEAT_FlagM (Flag manipulation instructions v2)
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_GTG (Guest translation granule size)
 - FEAT_HCX (Support for the HCRX_EL2 register)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
-- 
2.25.1
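For readers who want to see FEAT_GTG from the guest side, here is a standalone sketch of pulling the stage 1 and stage 2 granule fields out of an ID_AA64MMFR0_EL1 value. The field offsets are assumptions quoted from the Arm ARM, not something this patch defines, so double-check them before relying on this.

#include <stdint.h>
#include <stdio.h>

/* Assumed ID_AA64MMFR0_EL1 field positions (4 bits each). */
#define TGRAN4_SHIFT     28  /* stage 1, 4K  */
#define TGRAN16_SHIFT    20  /* stage 1, 16K */
#define TGRAN4_2_SHIFT   40  /* stage 2, 4K  */
#define TGRAN16_2_SHIFT  32  /* stage 2, 16K */

static unsigned field(uint64_t reg, int shift)
{
    return (unsigned)((reg >> shift) & 0xf);
}

int main(void)
{
    uint64_t mmfr0 = 0;  /* in a real guest this would come from an MRS */

    /* With FEAT_GTG the stage 1 and stage 2 fields may differ. */
    printf("stage1 4K field:  %u\n", field(mmfr0, TGRAN4_SHIFT));
    printf("stage2 4K field:  %u\n", field(mmfr0, TGRAN4_2_SHIFT));
    printf("stage1 16K field: %u\n", field(mmfr0, TGRAN16_SHIFT));
    printf("stage2 16K field: %u\n", field(mmfr0, TGRAN16_2_SHIFT));
    return 0;
}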