Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

thanks
-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst |   1 +
 docs/system/arm/nuvoton.rst   |   4 +-
 target/arm/cpu-param.h        |   2 +-
 target/arm/cpu.h              | 181 ++++++++------
 target/arm/internals.h        | 150 ++++++-----
 hw/arm/boot.c                 |   4 +
 target/arm/helper.c           | 332 ++++++++++++++----------
 target/arm/kvm.c              |   4 +-
 target/arm/m_helper.c         |  29 ++-
 target/arm/ptw.c              | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c       |   9 +-
 target/arm/translate-a64.c    |   8 -
 target/arm/translate.c        |   9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
     if (max_vm_pa_size < 0) {
         max_vm_pa_size = 0;
     }
-    vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    do {
+        vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    } while (vmfd == -1 && errno == EINTR);
     if (vmfd < 0) {
         goto err;
     }
--
2.25.1
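
(For reference, the retry idiom adopted above can be wrapped up generically.
A minimal sketch, similar in spirit to glibc's TEMP_FAILURE_RETRY; this
helper is an illustration only, not part of the patch, and it relies on the
GCC/clang statement-expression extension:)

    #include <errno.h>

    /* Re-issue a syscall-like expression for as long as it fails with EINTR. */
    #define RETRY_ON_EINTR(expr)                        \
        ({                                              \
            long rc_;                                   \
            do {                                        \
                rc_ = (long)(expr);                     \
            } while (rc_ == -1 && errno == EINTR);      \
            rc_;                                        \
        })

    /* e.g.  vmfd = RETRY_ON_EINTR(ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size)); */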
From: Jerome Forissier <jerome.forissier@linaro.org>

Update scr_write() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 54 ++++++++++++++++++++++-----------------------
 target/arm/helper.c |  5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)

 #define HPFAR_NS (1ULL << 63)

-#define SCR_NS (1U << 0)
-#define SCR_IRQ (1U << 1)
-#define SCR_FIQ (1U << 2)
-#define SCR_EA (1U << 3)
-#define SCR_FW (1U << 4)
-#define SCR_AW (1U << 5)
-#define SCR_NET (1U << 6)
-#define SCR_SMD (1U << 7)
-#define SCR_HCE (1U << 8)
-#define SCR_SIF (1U << 9)
-#define SCR_RW (1U << 10)
-#define SCR_ST (1U << 11)
-#define SCR_TWI (1U << 12)
-#define SCR_TWE (1U << 13)
-#define SCR_TLOR (1U << 14)
-#define SCR_TERR (1U << 15)
-#define SCR_APK (1U << 16)
-#define SCR_API (1U << 17)
-#define SCR_EEL2 (1U << 18)
-#define SCR_EASE (1U << 19)
-#define SCR_NMEA (1U << 20)
-#define SCR_FIEN (1U << 21)
-#define SCR_ENSCXT (1U << 25)
-#define SCR_ATA (1U << 26)
-#define SCR_FGTEN (1U << 27)
-#define SCR_ECVEN (1U << 28)
-#define SCR_TWEDEN (1U << 29)
+#define SCR_NS (1ULL << 0)
+#define SCR_IRQ (1ULL << 1)
+#define SCR_FIQ (1ULL << 2)
+#define SCR_EA (1ULL << 3)
+#define SCR_FW (1ULL << 4)
+#define SCR_AW (1ULL << 5)
+#define SCR_NET (1ULL << 6)
+#define SCR_SMD (1ULL << 7)
+#define SCR_HCE (1ULL << 8)
+#define SCR_SIF (1ULL << 9)
+#define SCR_RW (1ULL << 10)
+#define SCR_ST (1ULL << 11)
+#define SCR_TWI (1ULL << 12)
+#define SCR_TWE (1ULL << 13)
+#define SCR_TLOR (1ULL << 14)
+#define SCR_TERR (1ULL << 15)
+#define SCR_APK (1ULL << 16)
+#define SCR_API (1ULL << 17)
+#define SCR_EEL2 (1ULL << 18)
+#define SCR_EASE (1ULL << 19)
+#define SCR_NMEA (1ULL << 20)
+#define SCR_FIEN (1ULL << 21)
+#define SCR_ENSCXT (1ULL << 25)
+#define SCR_ATA (1ULL << 26)
+#define SCR_FGTEN (1ULL << 27)
+#define SCR_ECVEN (1ULL << 28)
+#define SCR_TWEDEN (1ULL << 29)
 #define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
 #define SCR_TME (1ULL << 34)
 #define SCR_AMVOFFEN (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     /* Begin with base v8.0 state.  */
-    uint32_t valid_mask = 0x3fff;
+    uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);

     /*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_doublefault, cpu)) {
             valid_mask |= SCR_EASE | SCR_NMEA;
         }
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            valid_mask |= SCR_ENTP2;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1
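
(To see why the 1U -> 1ULL change matters: with a 32-bit constant the
bitwise not is computed in 32 bits and then zero-extended, so using it to
mask a uint64_t silently clears bits [63:32]. A standalone demonstration,
illustrative only, not QEMU code:)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t scr = (1ULL << 38) | 1ULL;   /* a value with a high bit set */
        uint64_t bad = scr & ~(1U << 0);      /* 32-bit mask: bit 38 is lost */
        uint64_t good = scr & ~(1ULL << 0);   /* 64-bit mask: bit 38 survives */
        /* prints 0000000000000000 vs 0000004000000000 */
        printf("%016" PRIx64 " vs %016" PRIx64 "\n", bad, good);
        return 0;
    }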
From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options

 The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
 a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :

- https://openpower.xyz/
+ https://jenkins.openbmc.org/

 The firmware image should be attached as an MTD drive. Example :

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s1_prot;
         int ret;
-        bool ipa_secure;
+        bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

         ipa = result->phys;
         ipa_secure = result->attrs.secure;
-        if (arm_is_secure_below_el3(env)) {
-            if (ipa_secure) {
-                result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
+        if (is_secure) {
+            /* Select TCR based on the NS bit from the S1 walk. */
+            s2walk_secure = !(ipa_secure
+                              ? env->cp15.vstcr_el2 & VSTCR_SW
+                              : env->cp15.vtcr_el2 & VTCR_NSW);
         } else {
             assert(!ipa_secure);
+            s2walk_secure = false;
         }

-        s2_mmu_idx = (result->attrs.secure
+        s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                                 result->cacheattrs);

         /* Check if IPA translates to secure or non-secure PA space. */
-        if (arm_is_secure_below_el3(env)) {
+        if (is_secure) {
             if (ipa_secure) {
                 result->attrs.secure =
                     !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1
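
(The TCR selection above can be read as a standalone predicate. A sketch
with the register values passed in explicitly rather than read from env;
the bit definitions below are assumptions made purely for illustration:)

    #include <stdbool.h>
    #include <stdint.h>

    /* When set, the stage-2 walk for that IPA space uses the NS PA space. */
    #define VSTCR_SW (1ULL << 29)   /* applies to Secure IPA */
    #define VTCR_NSW (1ULL << 29)   /* applies to Non-secure IPA */

    /* Does the stage-2 table walk itself use the secure PA space? */
    static bool s2walk_secure(bool ipa_secure, uint64_t vstcr, uint64_t vtcr)
    {
        return !(ipa_secure ? (vstcr & VSTCR_SW) : (vtcr & VTCR_NSW));
    }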
From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
                                                 result->cacheattrs);

-        /* Check if IPA translates to secure or non-secure PA space. */
-        if (is_secure) {
-            if (ipa_secure) {
-                result->attrs.secure =
-                    !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
-            } else {
-                result->attrs.secure =
-                    !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
-                    || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
-            }
-        }
+        /*
+         * Check if IPA translates to secure or non-secure PA space.
+         * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+         */
+        result->attrs.secure =
+            (is_secure
+             && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+             && (ipa_secure
+                 || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
         return 0;
     } else {
         /*
--
2.25.1
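
(One way to convince yourself the folded expression matches the old
if/else ladder is to enumerate the booleans. A sketch, with the register
masks reduced to single flags; the !is_secure && ipa_secure case is
excluded because the code asserts !ipa_secure there:)

    #include <assert.h>

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            int is_secure = i & 1;
            int ipa_secure = (i >> 1) & 1;
            int vstcr_sw_sa = (i >> 2) & 1;   /* VSTCR & (SA | SW) != 0 */
            int vtcr_nsw_nsa = (i >> 3) & 1;  /* VTCR & (NSA | NSW) != 0 */

            if (!is_secure && ipa_secure) {
                continue;   /* excluded: assert(!ipa_secure) */
            }
            int old_form = is_secure
                ? (ipa_secure ? !vstcr_sw_sa
                              : !(vtcr_nsw_nsa || vstcr_sw_sa))
                : 0;
            int new_form = is_secure && !vstcr_sw_sa
                           && (ipa_secure || !vtcr_nsw_nsa);
            assert(old_form == new_form);
        }
        return 0;
    }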
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@

 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));

 /* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         GetPhysAddrResult s2 = {};
         int ret;

-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
-                                 &s2, fi);
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+                                 *is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  */
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     /* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
      * remain non-secure. We implement this by just ORing in the NSTable/NS
      * bits at each step.
      */
-    tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+    tableattrs = is_secure ? 0 : (1 << 4);
     for (;;) {
         uint64_t descriptor;
         bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         memset(result, 0, sizeof(*result));

         ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                 is_el0, result, fi);
+                                 s2walk_secure, is_el0, result, fi);
         fi->s2addr = ipa;

         /* Combine the S1 and S2 perms.  */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }

     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  result, fi);
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                  is_secure, false, result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, address, access_type, mmu_idx,
                                 is_secure, result, fi);
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled:
previously the Stage2 vs Stage2_S index was computed only after the
check, so the check itself always used ARMMMUIdx_Stage2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
-                                          : ARMMMUIdx_Stage2;
+        !regime_translation_disabled(env, s2_mmu_idx)) {
         GetPhysAddrResult s2 = {};
         int ret;

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 }

 /* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                        bool is_secure)
 {
     uint64_t hcr_el2;

     if (arm_feature(env, ARM_FEATURE_M)) {
-        switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+        switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
             /* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)

     if (hcr_el2 & HCR_TGE) {
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && regime_el(env, mmu_idx) == 1) {
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx)) {
+        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
     uint32_t base;
     bool is_user = regime_is_user(env, mmu_idx);

-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled.  */
         result->phys = address;
         result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     result->page_size = TARGET_PAGE_SIZE;
     result->prot = 0;

-    if (regime_translation_disabled(env, mmu_idx) ||
+    if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
         /*
          * MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
      * are done in arm_v7m_load_vector(), which always does a direct
      * read using address_space_ldl(), rather than going via this function.
      */
-    if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+    if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
         hit = true;
     } else if (m_is_ppb_region(env, address)) {
         hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                             result, fi);

         /* If S1 fails or S2 is disabled, return early.  */
-        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+                                               is_secure)) {
             return ret;
         }

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

     /* Definitely a real MMU, not an MPU */

-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         uint64_t hcr;
         uint8_t memattr;

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++
 target/arm/ptw.c       | 44 ++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
     ARMCacheAttrs cacheattrs;
 } GetPhysAddrResult;

+/**
+ * get_phys_addr_with_secure: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * we honour the short vs long DFSR format differences.
+ *  * the WnR bit is never set (the caller must do this).
+ *  * for PSMAv5 based systems we don't bother to return a full FSR format
+ *    value.
+ */
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type,
+                               ARMMMUIdx mmu_idx, bool is_secure,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+    __attribute__((nonnull));
+
+/**
+ * get_phys_addr: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Similarly, but use the security regime of @mmu_idx.
+ */
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     return ret;
 }

-/**
- * get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- *  * we honour the short vs long DFSR format differences.
- *  * the WnR bit is never set (the caller must do this).
- *  * for PSMAv5 based systems we don't bother to return a full FSR format
- *    value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @result: set on translation success.
- * @fi: set to fault info if the translation fails
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
-                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    bool is_secure = regime_is_secure(env, mmu_idx);

     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;

-        ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
-                            result, fi);
+        ret = get_phys_addr_with_secure(env, address, access_type,
+                                        s1_mmu_idx, is_secure, result, fi);

         /* If S1 fails or S2 is disabled, return early.  */
         if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 }

+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+{
+    return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
+                                     regime_is_secure(env, mmu_idx),
+                                     result, fi);
+}
+
 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                          MemTxAttrs *attrs)
 {
--
2.25.1
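
(Note the inverted return convention documented above: these functions
return false on success. A hypothetical caller sketch, with the
surrounding declarations elided:)

    GetPhysAddrResult res = {};
    ARMMMUFaultInfo fi = {};

    if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi)) {
        /* translation failed; fi describes the fault */
    } else {
        /* success; res.phys, res.attrs, res.prot, res.page_size are valid */
    }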
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
     return true;
 }

-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
                                uint32_t addr, uint16_t *insn)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
     ARMMMUFaultInfo fi = {};
     MemTxResult txres;

-    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
-                        regime_is_secure(env, mmu_idx), &sattrs);
+    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
     if (!sattrs.nsc || sattrs.ns) {
         /*
          * This must be the second half of the insn, and it straddles a
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
     /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
     mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);

-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
         goto gen_invep;
     }

-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
         return false;
     }

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags,
deriving it from env->v7m.secure (as arm_mmu_idx_el does) rather
than from regime_is_secure.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 2 ++
 target/arm/helper.c    | 4 ++++
 target/arm/translate.c | 3 +--
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
 FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
 /* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
 FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
+/* Set if in secure mode */
+FIELD(TBFLAG_M32, SECURE, 6, 1)

 /*
  * Bit usage when in AArch64 state
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
         DP_TBFLAG_M32(flags, STACKCHECK, 1);
     }

+    if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
+        DP_TBFLAG_M32(flags, SECURE, 1);
+    }
+
     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
 }

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
         dc->vfp_enabled = 1;
         dc->be_data = MO_TE;
         dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
-        dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
-            regime_is_secure(env, dc->mmu_idx);
+        dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
         dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
         dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
         dc->v7m_new_fp_ctxt_needed =
--
2.25.1
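
(The FIELD()/DP_TBFLAG_M32()/EX_TBFLAG_M32() machinery used above comes
from QEMU's register-field macros in include/hw/registerfields.h; the
following is a simplified, self-contained sketch of the idea, not the
exact QEMU definitions:)

    #include <stdint.h>
    #include <stdio.h>

    /* Generate shift and mask constants for a named bit-field. */
    #define FIELD(reg, fld, shift, len)                                   \
        enum { R_##reg##_##fld##_SHIFT = (shift),                         \
               R_##reg##_##fld##_MASK = ((1u << (len)) - 1) << (shift) };

    FIELD(TBFLAG_M32, SECURE, 6, 1)

    int main(void)
    {
        uint32_t flags = 0;
        flags |= 1u << R_TBFLAG_M32_SECURE_SHIFT;              /* deposit */
        printf("%u\n", (flags & R_TBFLAG_M32_SECURE_MASK)
                           >> R_TBFLAG_M32_SECURE_SHIFT);      /* extract -> 1 */
        return 0;
    }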
From: Richard Henderson <richard.henderson@linaro.org>

This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 42 ----------------------------------------
 target/arm/ptw.c       | 44 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 44 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     }
 }

-/* Return true if this address translation regime is secure */
-static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    switch (mmu_idx) {
-    case ARMMMUIdx_E10_0:
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_E20_0:
-    case ARMMMUIdx_E20_2:
-    case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_Stage1_E0:
-    case ARMMMUIdx_Stage1_E1:
-    case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_E2:
-    case ARMMMUIdx_Stage2:
-    case ARMMMUIdx_MPrivNegPri:
-    case ARMMMUIdx_MUserNegPri:
-    case ARMMMUIdx_MPriv:
-    case ARMMMUIdx_MUser:
-        return false;
-    case ARMMMUIdx_SE3:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
-    case ARMMMUIdx_SE2:
-    case ARMMMUIdx_Stage2_S:
-    case ARMMMUIdx_MSPrivNegPri:
-    case ARMMMUIdx_MSUserNegPri:
-    case ARMMMUIdx_MSPriv:
-    case ARMMMUIdx_MSUser:
-        return true;
-    default:
-        g_assert_not_reached();
-    }
-}
-
 static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
+    bool is_secure;
+
+    switch (mmu_idx) {
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_E2:
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_MPrivNegPri:
+    case ARMMMUIdx_MUserNegPri:
+    case ARMMMUIdx_MPriv:
+    case ARMMMUIdx_MUser:
+        is_secure = false;
+        break;
+    case ARMMMUIdx_SE3:
+    case ARMMMUIdx_SE10_0:
+    case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
+    case ARMMMUIdx_Stage1_SE0:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
+    case ARMMMUIdx_SE2:
+    case ARMMMUIdx_Stage2_S:
+    case ARMMMUIdx_MSPrivNegPri:
+    case ARMMMUIdx_MSUserNegPri:
+    case ARMMMUIdx_MSPriv:
+    case ARMMMUIdx_MSUser:
+        is_secure = true;
+        break;
+    default:
+        g_assert_not_reached();
+    }
     return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
-                                     regime_is_secure(env, mmu_idx),
-                                     result, fi);
+                                     is_secure, result, fi);
 }

 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,

 #ifdef CONFIG_TCG
 static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx)
+                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                             bool is_secure)
 {
     bool ret;
     uint64_t par64;
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
     ARMMMUFaultInfo fi = {};
     GetPhysAddrResult res = {};

-    ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi);
+    ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
+                                    is_secure, &res, &fi);

     /*
      * ATS operations only do S1 or S1+S2 translations, so we never
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE3;
+            secure = true;
             break;
         case 2:
             g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE10_0;
+            secure = true;
             break;
         case 2:
             g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     case 4:
         /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
         mmu_idx = ARMMMUIdx_E10_1;
+        secure = false;
         break;
     case 6:
         /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
         mmu_idx = ARMMMUIdx_E10_0;
+        secure = false;
         break;
     default:
         g_assert_not_reached();
     }

-    par64 = do_ats_write(env, value, access_type, mmu_idx);
+    par64 = do_ats_write(env, value, access_type, mmu_idx, secure);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t par64;

-    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
+    /* There is no SecureEL2 for AArch32. */
+    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         break;
     case 6: /* AT S1E3R, AT S1E3W */
         mmu_idx = ARMMMUIdx_SE3;
+        secure = true;
         break;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         g_assert_not_reached();
     }

-    env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
+    env->cp15.par_el[1] = do_ats_write(env, value, access_type,
+                                       mmu_idx, secure);
 #else
     /* Handled by hardware accelerator. */
     g_assert_not_reached();
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h     |   2 +-
 target/arm/cpu.h           |  72 +++++++------------
 target/arm/internals.h     |  31 +-------
 target/arm/helper.c        | 144 +++++++++++++------------------------
 target/arm/ptw.c           |  25 ++-----
 target/arm/translate-a64.c |   8 ---
 target/arm/translate.c     |   6 +-
 7 files changed, 85 insertions(+), 203 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif

-#define NB_MMU_MODES 15
+#define NB_MMU_MODES 8

 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *     table over and over.
  *  6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
  *     Never (PAN) bit within PSTATE.
+ *  7. we fold together the secure and non-secure regimes for A-profile,
+ *     because there are no banked system registers for aarch64, so the
+ *     process of switching between secure and non-secure is
+ *     already heavyweight.
  *
  * This gives us the following list of cases:
  *
- * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
- * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
- * NS EL1 EL1&0 stage 1+2 +PAN
- * NS EL0 EL2&0
- * NS EL2 EL2&0
- * NS EL2 EL2&0 +PAN
- * NS EL2 (aka NS PL2)
- * S EL0 EL1&0 (aka S PL0)
- * S EL1 EL1&0 (not used if EL3 is 32 bit)
- * S EL1 EL1&0 +PAN
- * S EL3 (aka S PL1)
+ * EL0 EL1&0 stage 1+2 (aka NS PL0)
+ * EL1 EL1&0 stage 1+2 (aka NS PL1)
+ * EL1 EL1&0 stage 1+2 +PAN
+ * EL0 EL2&0
+ * EL2 EL2&0
+ * EL2 EL2&0 +PAN
+ * EL2 (aka NS PL2)
+ * EL3 (aka S PL1)
  *
- * for a total of 11 different mmu_idx.
+ * for a total of 8 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
- * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
- * NS EL2 if we ever model a Cortex-R52).
+ * as A profile. They only need to distinguish EL0 and EL1 (and
+ * EL2 if we ever model a Cortex-R52).
  *
  * M profile CPUs are rather different as they do not have a true MMU.
  * They have the following different MMU indexes:
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
 #define ARM_MMU_IDX_M 0x40 /* M profile */

-/* Meanings of the bits for A profile mmu idx values */
-#define ARM_MMU_IDX_A_NS 0x8
-
 /* Meanings of the bits for M profile mmu idx values */
 #define ARM_MMU_IDX_M_PRIV 0x1
 #define ARM_MMU_IDX_M_NEGPRI 0x2
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     /*
      * A-profile.
      */
-    ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
-
-    ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,

     /*
      * These are not allocated TLBs and are used only for AT system
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
     /*
      * Not allocated a TLB: used only for second stage of an S12 page
      * table walk, or for descriptor loads during first stage of an S1
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
      * then various TLB flush insns which currently are no-ops or flush
      * only stage 1 MMU indexes will need to change to flush stage 2.
      */
-    ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,

     /*
      * M-profile.
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E2),
     TO_CORE_BIT(E20_2),
     TO_CORE_BIT(E20_2_PAN),
-    TO_CORE_BIT(SE10_0),
-    TO_CORE_BIT(SE20_0),
-    TO_CORE_BIT(SE10_1),
-    TO_CORE_BIT(SE20_2),
-    TO_CORE_BIT(SE10_1_PAN),
-    TO_CORE_BIT(SE20_2_PAN),
-    TO_CORE_BIT(SE2),
-    TO_CORE_BIT(SE3),
+    TO_CORE_BIT(E3),

     TO_CORE_BIT(MUser),
     TO_CORE_BIT(MPriv),
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_Stage2_S:
-    case ARMMMUIdx_SE2:
     case ARMMMUIdx_E2:
         return 2;
-    case ARMMMUIdx_SE3:
+    case ARMMMUIdx_E3:
         return 3;
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_Stage1_SE0:
-        return arm_el_is_aa64(env, 3) ? 1 : 3;
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_E10_0:
     case ARMMMUIdx_Stage1_E0:
+        return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
-    case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_MPrivNegPri:
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
         return true;
     default:
         return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     /* Begin with base v8.0 state.  */
     uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);
+    uint64_t changed;

     /*
      * Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)

     /* Clear all-context RES0 bits.  */
     value &= valid_mask;
-    raw_write(env, ri, value);
+    changed = env->cp15.scr_el3 ^ value;
+    env->cp15.scr_el3 = value;
+
+    /*
+     * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
+     * we must invalidate all TLBs below EL3.
+     */
+    if (changed & SCR_NS) {
+        tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
+                                           ARMMMUIdxBit_E20_0 |
+                                           ARMMMUIdxBit_E10_1 |
+                                           ARMMMUIdxBit_E20_2 |
+                                           ARMMMUIdxBit_E10_1_PAN |
+                                           ARMMMUIdxBit_E20_2_PAN |
+                                           ARMMMUIdxBit_E2));
+    }
 }

 static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYP;
     default:
         return GTIMER_PHYS;
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYPVIRT;
     default:
         return GTIMER_VIRT;
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_SE3;
+            mmu_idx = ARMMMUIdx_E3;
             secure = true;
             break;
         case 2:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
             /* fall through */
         case 1:
             if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
-                mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
-                           : ARMMMUIdx_Stage1_E1_PAN);
+                mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
             } else {
-                mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
+                mmu_idx = ARMMMUIdx_Stage1_E1;
             }
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL0: ATS1CUR, ATS1CUW */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_SE10_0;
+            mmu_idx = ARMMMUIdx_E10_0;
             secure = true;
             break;
         case 2:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
             mmu_idx = ARMMMUIdx_Stage1_E0;
             break;
         case 1:
-            mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
+            mmu_idx = ARMMMUIdx_Stage1_E0;
             break;
         default:
             g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     switch (ri->opc1) {
     case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
         if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
-            mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
347
- : ARMMMUIdx_Stage1_E1_PAN);
348
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
349
} else {
350
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
351
+ mmu_idx = ARMMMUIdx_Stage1_E1;
352
}
353
break;
354
case 4: /* AT S1E2R, AT S1E2W */
355
- mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
356
+ mmu_idx = ARMMMUIdx_E2;
357
break;
358
case 6: /* AT S1E3R, AT S1E3W */
359
- mmu_idx = ARMMMUIdx_SE3;
360
+ mmu_idx = ARMMMUIdx_E3;
361
secure = true;
362
break;
363
default:
364
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
365
}
366
break;
367
case 2: /* AT S1E0R, AT S1E0W */
368
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
369
+ mmu_idx = ARMMMUIdx_Stage1_E0;
370
break;
371
case 4: /* AT S12E1R, AT S12E1W */
372
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
373
+ mmu_idx = ARMMMUIdx_E10_1;
374
break;
375
case 6: /* AT S12E0R, AT S12E0W */
376
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
377
+ mmu_idx = ARMMMUIdx_E10_0;
378
break;
379
default:
380
g_assert_not_reached();
381
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
382
uint16_t mask = ARMMMUIdxBit_E20_2 |
383
ARMMMUIdxBit_E20_2_PAN |
384
ARMMMUIdxBit_E20_0;
385
-
386
- if (arm_is_secure_below_el3(env)) {
387
- mask >>= ARM_MMU_IDX_A_NS;
388
- }
389
-
390
tlb_flush_by_mmuidx(env_cpu(env), mask);
391
}
392
raw_write(env, ri, value);
393
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
394
uint16_t mask = ARMMMUIdxBit_E10_1 |
395
ARMMMUIdxBit_E10_1_PAN |
396
ARMMMUIdxBit_E10_0;
397
-
398
- if (arm_is_secure_below_el3(env)) {
399
- mask >>= ARM_MMU_IDX_A_NS;
400
- }
401
-
402
tlb_flush_by_mmuidx(cs, mask);
403
raw_write(env, ri, value);
404
}
405
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
406
ARMMMUIdxBit_E10_1_PAN |
407
ARMMMUIdxBit_E10_0;
408
}
409
-
410
- if (arm_is_secure_below_el3(env)) {
411
- mask >>= ARM_MMU_IDX_A_NS;
412
- }
413
-
414
return mask;
415
}
416
417
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
418
mmu_idx = ARMMMUIdx_E10_0;
419
}
420
421
- if (arm_is_secure_below_el3(env)) {
422
- mmu_idx &= ~ARM_MMU_IDX_A_NS;
423
- }
424
-
425
return tlbbits_for_regime(env, mmu_idx, addr);
426
}
427
428
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
429
* stage 2 translations, whereas most other scopes only invalidate
430
* stage 1 translations.
431
*/
432
- if (arm_is_secure_below_el3(env)) {
433
- return ARMMMUIdxBit_SE10_1 |
434
- ARMMMUIdxBit_SE10_1_PAN |
435
- ARMMMUIdxBit_SE10_0;
436
- } else {
437
- return ARMMMUIdxBit_E10_1 |
438
- ARMMMUIdxBit_E10_1_PAN |
439
- ARMMMUIdxBit_E10_0;
440
- }
441
+ return (ARMMMUIdxBit_E10_1 |
442
+ ARMMMUIdxBit_E10_1_PAN |
443
+ ARMMMUIdxBit_E10_0);
444
}
445
446
static int e2_tlbmask(CPUARMState *env)
447
{
448
- if (arm_is_secure_below_el3(env)) {
449
- return ARMMMUIdxBit_SE20_0 |
450
- ARMMMUIdxBit_SE20_2 |
451
- ARMMMUIdxBit_SE20_2_PAN |
452
- ARMMMUIdxBit_SE2;
453
- } else {
454
- return ARMMMUIdxBit_E20_0 |
455
- ARMMMUIdxBit_E20_2 |
456
- ARMMMUIdxBit_E20_2_PAN |
457
- ARMMMUIdxBit_E2;
458
- }
459
+ return (ARMMMUIdxBit_E20_0 |
460
+ ARMMMUIdxBit_E20_2 |
461
+ ARMMMUIdxBit_E20_2_PAN |
462
+ ARMMMUIdxBit_E2);
463
}
464
465
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
466
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
467
ARMCPU *cpu = env_archcpu(env);
468
CPUState *cs = CPU(cpu);
469
470
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
471
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
472
}
473
474
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
475
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
476
{
477
CPUState *cs = env_cpu(env);
478
479
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
480
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
481
}
482
483
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
484
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
485
CPUState *cs = CPU(cpu);
486
uint64_t pageaddr = sextract64(value << 12, 0, 56);
487
488
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
489
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
490
}
491
492
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
493
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
494
{
495
CPUState *cs = env_cpu(env);
496
uint64_t pageaddr = sextract64(value << 12, 0, 56);
497
- bool secure = arm_is_secure_below_el3(env);
498
- int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
499
- int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
500
- pageaddr);
501
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
502
503
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
504
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
505
+ ARMMMUIdxBit_E2, bits);
506
}
507
508
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
509
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
510
{
511
CPUState *cs = env_cpu(env);
512
uint64_t pageaddr = sextract64(value << 12, 0, 56);
513
- int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
514
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
515
516
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
517
- ARMMMUIdxBit_SE3, bits);
518
+ ARMMMUIdxBit_E3, bits);
519
}
520
521
#ifdef TARGET_AARCH64
522
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,
523
524
static int vae2_tlbmask(CPUARMState *env)
525
{
526
- return (arm_is_secure_below_el3(env)
527
- ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
528
+ return ARMMMUIdxBit_E2;
529
}
530
531
static void tlbi_aa64_rvae2_write(CPUARMState *env,
532
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env,
533
* flush-last-level-only.
534
*/
535
536
- do_rvae_write(env, value, ARMMMUIdxBit_SE3,
537
- tlb_force_broadcast(env));
538
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
539
}
540
541
static void tlbi_aa64_rvae3is_write(CPUARMState *env,
542
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
543
* flush-last-level-only or inner/outer specific flushes.
544
*/
545
546
- do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
547
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
548
}
549
#endif
550
551
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
552
/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
553
if (el == 0) {
554
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
555
- el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
556
- ? 2 : 1;
557
+ el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
558
}
559
return env->cp15.sctlr_el[el];
560
}
561
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
562
switch (mmu_idx) {
563
case ARMMMUIdx_E10_0:
564
case ARMMMUIdx_E20_0:
565
- case ARMMMUIdx_SE10_0:
566
- case ARMMMUIdx_SE20_0:
567
return 0;
568
case ARMMMUIdx_E10_1:
569
case ARMMMUIdx_E10_1_PAN:
570
- case ARMMMUIdx_SE10_1:
571
- case ARMMMUIdx_SE10_1_PAN:
572
return 1;
573
case ARMMMUIdx_E2:
574
case ARMMMUIdx_E20_2:
575
case ARMMMUIdx_E20_2_PAN:
576
- case ARMMMUIdx_SE2:
577
- case ARMMMUIdx_SE20_2:
578
- case ARMMMUIdx_SE20_2_PAN:
579
return 2;
580
- case ARMMMUIdx_SE3:
581
+ case ARMMMUIdx_E3:
582
return 3;
583
default:
584
g_assert_not_reached();
585
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
586
}
587
break;
588
case 3:
589
- return ARMMMUIdx_SE3;
590
+ return ARMMMUIdx_E3;
591
default:
592
g_assert_not_reached();
593
}
594
595
- if (arm_is_secure_below_el3(env)) {
596
- idx &= ~ARM_MMU_IDX_A_NS;
597
- }
598
-
599
return idx;
600
}
601
602
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
603
switch (mmu_idx) {
604
case ARMMMUIdx_E10_1:
605
case ARMMMUIdx_E10_1_PAN:
606
- case ARMMMUIdx_SE10_1:
607
- case ARMMMUIdx_SE10_1_PAN:
608
/* TODO: ARMv8.3-NV */
609
DP_TBFLAG_A64(flags, UNPRIV, 1);
610
break;
611
case ARMMMUIdx_E20_2:
612
case ARMMMUIdx_E20_2_PAN:
613
- case ARMMMUIdx_SE20_2:
614
- case ARMMMUIdx_SE20_2_PAN:
615
/*
616
* Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
617
* gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
618
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
619
index XXXXXXX..XXXXXXX 100644
620
--- a/target/arm/ptw.c
621
+++ b/target/arm/ptw.c
622
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
623
ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
624
{
625
switch (mmu_idx) {
626
- case ARMMMUIdx_SE10_0:
627
- return ARMMMUIdx_Stage1_SE0;
628
- case ARMMMUIdx_SE10_1:
629
- return ARMMMUIdx_Stage1_SE1;
630
- case ARMMMUIdx_SE10_1_PAN:
631
- return ARMMMUIdx_Stage1_SE1_PAN;
632
case ARMMMUIdx_E10_0:
633
return ARMMMUIdx_Stage1_E0;
634
case ARMMMUIdx_E10_1:
635
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
636
static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
637
{
638
switch (mmu_idx) {
639
- case ARMMMUIdx_SE10_0:
640
case ARMMMUIdx_E20_0:
641
- case ARMMMUIdx_SE20_0:
642
case ARMMMUIdx_Stage1_E0:
643
- case ARMMMUIdx_Stage1_SE0:
644
case ARMMMUIdx_MUser:
645
case ARMMMUIdx_MSUser:
646
case ARMMMUIdx_MUserNegPri:
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
649
s2_mmu_idx = (s2walk_secure
650
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
651
- is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
652
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0;
653
654
/*
655
* S1 is done, now do S2 translation.
656
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
657
case ARMMMUIdx_Stage1_E1:
658
case ARMMMUIdx_Stage1_E1_PAN:
659
case ARMMMUIdx_E2:
660
+ is_secure = arm_is_secure_below_el3(env);
661
+ break;
662
case ARMMMUIdx_Stage2:
663
case ARMMMUIdx_MPrivNegPri:
664
case ARMMMUIdx_MUserNegPri:
665
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
666
case ARMMMUIdx_MUser:
667
is_secure = false;
668
break;
669
- case ARMMMUIdx_SE3:
670
- case ARMMMUIdx_SE10_0:
671
- case ARMMMUIdx_SE10_1:
672
- case ARMMMUIdx_SE10_1_PAN:
673
- case ARMMMUIdx_SE20_0:
674
- case ARMMMUIdx_SE20_2:
675
- case ARMMMUIdx_SE20_2_PAN:
676
- case ARMMMUIdx_Stage1_SE0:
677
- case ARMMMUIdx_Stage1_SE1:
678
- case ARMMMUIdx_Stage1_SE1_PAN:
679
- case ARMMMUIdx_SE2:
680
+ case ARMMMUIdx_E3:
681
case ARMMMUIdx_Stage2_S:
682
case ARMMMUIdx_MSPrivNegPri:
683
case ARMMMUIdx_MSUserNegPri:
684
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
685
index XXXXXXX..XXXXXXX 100644
686
--- a/target/arm/translate-a64.c
687
+++ b/target/arm/translate-a64.c
688
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
689
case ARMMMUIdx_E20_2_PAN:
690
useridx = ARMMMUIdx_E20_0;
691
break;
692
- case ARMMMUIdx_SE10_1:
693
- case ARMMMUIdx_SE10_1_PAN:
694
- useridx = ARMMMUIdx_SE10_0;
695
- break;
696
- case ARMMMUIdx_SE20_2:
697
- case ARMMMUIdx_SE20_2_PAN:
698
- useridx = ARMMMUIdx_SE20_0;
699
- break;
700
default:
701
g_assert_not_reached();
702
}
703
diff --git a/target/arm/translate.c b/target/arm/translate.c
704
index XXXXXXX..XXXXXXX 100644
705
--- a/target/arm/translate.c
706
+++ b/target/arm/translate.c
707
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
708
* otherwise, access as if at PL0.
709
*/
710
switch (s->mmu_idx) {
711
+ case ARMMMUIdx_E3:
712
case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */
713
case ARMMMUIdx_E10_0:
714
case ARMMMUIdx_E10_1:
715
case ARMMMUIdx_E10_1_PAN:
716
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
717
- case ARMMMUIdx_SE3:
718
- case ARMMMUIdx_SE10_0:
719
- case ARMMMUIdx_SE10_1:
720
- case ARMMMUIdx_SE10_1_PAN:
721
- return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
722
case ARMMMUIdx_MUser:
723
case ARMMMUIdx_MPriv:
724
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
725
--
726
2.25.1
diff view generated by jsdifflib
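A note on the scr_write() change above: XOR-ing the old and new register
values (old ^ new) yields exactly the set of bits that changed, so a single
AND against SCR_NS decides whether the expensive TLB flush has to run at
all. A minimal standalone sketch of that pattern follows; TOY_NS, toy_scr
and toy_flush are made-up stand-ins for illustration, not the QEMU symbols:

    #include <stdint.h>
    #include <stdio.h>

    #define TOY_NS (1u << 0)   /* stand-in for SCR_NS */

    static uint64_t toy_scr;   /* stand-in for env->cp15.scr_el3 */

    static void toy_flush(void)
    {
        puts("flush TLBs below EL3");
    }

    static void toy_scr_write(uint64_t value)
    {
        uint64_t changed = toy_scr ^ value; /* bits that differ, old vs new */

        toy_scr = value;
        if (changed & TOY_NS) {             /* security state flipped */
            toy_flush();
        }
    }

    int main(void)
    {
        toy_scr_write(TOY_NS);  /* NS flips 0 -> 1: flushes */
        toy_scr_write(TOY_NS);  /* same value again: no flush */
        return 0;
    }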
From: Richard Henderson <richard.henderson@linaro.org>

Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,

     hcr_el2 = arm_hcr_el2_eff(env);

-    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Stage2_S:
         /* HCR.DC means HCR.VM behaves as 1 */
         return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
-    }

-    if (hcr_el2 & HCR_TGE) {
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!is_secure && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && (hcr_el2 & HCR_TGE)) {
             return true;
         }
-    }
+        break;

-    if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
         /* HCR.DC means SCTLR_EL1.M behaves as 0 */
-        return true;
+        if (hcr_el2 & HCR_DC) {
+            return true;
+        }
+        break;
+
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_E2:
+    case ARMMMUIdx_E3:
+        break;
+
+    default:
+        g_assert_not_reached();
     }

     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
-        /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!is_secure && (hcr_el2 & HCR_TGE)) {
+        /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
+        if (hcr_el2 & HCR_TGE) {
             return true;
         }
         break;
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

For page walking, we may require HCR for a security state
that is not "current".

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 20 +++++++++++++-------
 target/arm/helper.c | 11 ++++++++---
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
 * Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
 * This corresponds to the pseudocode EL2Enabled()
 */
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+{
+    return arm_feature(env, ARM_FEATURE_EL2)
+        && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
+}
+
 static inline bool arm_is_el2_enabled(CPUARMState *env)
 {
-    if (arm_feature(env, ARM_FEATURE_EL2)) {
-        if (arm_is_secure_below_el3(env)) {
-            return (env->cp15.scr_el3 & SCR_EEL2) != 0;
-        }
-        return true;
-    }
-    return false;
+    return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
 }

 #else
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
     return false;
 }

+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+{
+    return false;
+}
+
 static inline bool arm_is_el2_enabled(CPUARMState *env)
 {
     return false;
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
 * "for all purposes other than a direct read or write access of HCR_EL2."
 * Not included here is HCR_RW.
 */
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
 uint64_t arm_hcr_el2_eff(CPUARMState *env);
 uint64_t arm_hcrx_el2_eff(CPUARMState *env);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
 }

 /*
- * Return the effective value of HCR_EL2.
+ * Return the effective value of HCR_EL2, at the given security state.
 * Bits that are not included here:
 * RW       (read from SCR_EL3.RW as needed)
 */
-uint64_t arm_hcr_el2_eff(CPUARMState *env)
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
 {
     uint64_t ret = env->cp15.hcr_el2;

-    if (!arm_is_el2_enabled(env)) {
+    if (!arm_is_el2_enabled_secstate(env, secure)) {
         /*
          * "This register has no effect if EL2 is not enabled in the
          * current Security state".  This is ARMv8.4-SecEL2 speak for
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
     return ret;
 }

+uint64_t arm_hcr_el2_eff(CPUARMState *env)
+{
+    return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
+}
+
 /*
 * Corresponds to ARM pseudocode function ELIsInHost().
 */
--
2.25.1
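The split above follows a common refactoring: keep the old entry point as a
thin wrapper that supplies the "current" state, and move the real logic into
a variant that takes the state explicitly. A minimal standalone sketch of
that shape, using toy names (ToyEnv, toy_*) rather than the QEMU API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the _secstate split; illustrative only. */
    typedef struct { uint64_t hcr_el2; bool eel2; } ToyEnv;

    static bool toy_el2_enabled_secstate(const ToyEnv *env, bool secure)
    {
        return !secure || env->eel2;   /* EL2 usable in this state? */
    }

    static uint64_t toy_hcr_eff_secstate(const ToyEnv *env, bool secure)
    {
        /* HCR has no effect if EL2 is not enabled in that state. */
        return toy_el2_enabled_secstate(env, secure) ? env->hcr_el2 : 0;
    }

    /* Old entry point: just the wrapper for the current state. */
    static uint64_t toy_hcr_eff(const ToyEnv *env, bool currently_secure)
    {
        return toy_hcr_eff_secstate(env, currently_secure);
    }

    int main(void)
    {
        ToyEnv env = { .hcr_el2 = 0x80000000u, .eel2 = false };
        printf("%llx %llx\n",
               (unsigned long long)toy_hcr_eff(&env, false),
               (unsigned long long)toy_hcr_eff_secstate(&env, true));
        return 0;
    }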
From: Richard Henderson <richard.henderson@linaro.org>

Rename the argument to is_secure_ptr, and introduce a
local variable is_secure with the value.  We only write
back to the pointer toward the end of the function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)

 /* Translate a S1 pagetable walk through S2 if needed.  */
 static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
-                               hwaddr addr, bool *is_secure,
+                               hwaddr addr, bool *is_secure_ptr,
                                ARMMMUFaultInfo *fi)
 {
-    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+    bool is_secure = *is_secure_ptr;
+    ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
+        !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;

         ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
-                                 *is_secure, false, &s2, fi);
+                                 is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
             fi->stage2 = true;
             fi->s1ptw = true;
-            fi->s1ns = !*is_secure;
+            fi->s1ns = !is_secure;
             return ~0;
         }
         if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->s2addr = addr;
             fi->stage2 = true;
             fi->s1ptw = true;
-            fi->s1ns = !*is_secure;
+            fi->s1ns = !is_secure;
             return ~0;
         }

         if (arm_is_secure_below_el3(env)) {
             /* Check if page table walk is to secure or non-secure PA space. */
-            if (*is_secure) {
-                *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
+            if (is_secure) {
+                is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
             } else {
-                *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
+                is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
             }
+            *is_secure_ptr = is_secure;
         } else {
-            assert(!*is_secure);
+            assert(!is_secure);
         }

         addr = s2.phys;
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

This value is unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
 * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
 * combined attributes in MAIR_EL1 format.
 */
-static uint8_t combined_attrs_fwb(CPUARMState *env,
-                                  ARMCacheAttrs s1, ARMCacheAttrs s2)
+static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     switch (s2.attrs) {
     case 7:
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,

     /* Combine memory type and cacheability attributes */
     if (arm_hcr_el2_eff(env) & HCR_FWB) {
-        ret.attrs = combined_attrs_fwb(env, s1, s2);
+        ret.attrs = combined_attrs_fwb(s1, s2);
     } else {
         ret.attrs = combined_attrs_nofwb(env, s1, s2);
     }
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

These subroutines did not need ENV for anything except
retrieving the effective value of HCR anyway.

We have computed the effective value of HCR in the callers,
and this will be especially important for interpreting HCR
in a non-current security state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }

-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
+static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
     * when cacheattrs.attrs bit [2] is 0.
     */
     assert(cacheattrs.is_s2_format);
-    if (arm_hcr_el2_eff(env) & HCR_FWB) {
+    if (hcr & HCR_FWB) {
         return (cacheattrs.attrs & 0x4) == 0;
     } else {
         return (cacheattrs.attrs & 0xc) == 0;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
         !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
         GetPhysAddrResult s2 = {};
+        uint64_t hcr;
         int ret;

         ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->s1ns = !is_secure;
             return ~0;
         }
-        if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
-            ptw_attrs_are_device(env, s2.cacheattrs)) {
+
+        hcr = arm_hcr_el2_eff(env);
+        if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
             /*
              * PTW set and S1 walk touched S2 Device memory:
              * generate Permission fault.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
 * ref: shared/translation/attrs/S2AttrDecode()
 *      .../S2ConvertAttrsHints()
 */
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
+static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs)
 {
     uint8_t hiattr = extract32(s2attrs, 2, 2);
     uint8_t loattr = extract32(s2attrs, 0, 2);
     uint8_t hihint = 0, lohint = 0;

     if (hiattr != 0) { /* normal memory */
-        if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
+        if (hcr & HCR_CD) { /* cache disabled */
             hiattr = loattr = 1; /* non-cacheable */
         } else {
             if (hiattr != 1) { /* Write-through or write-back */
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
 * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
 * combined attributes in MAIR_EL1 format.
 */
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
+static uint8_t combined_attrs_nofwb(uint64_t hcr,
                                     ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;

-    s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
+    s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);

     s1lo = extract32(s1.attrs, 0, 4);
     s2lo = extract32(s2_mair_attrs, 0, 4);
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
 * @s1:      Attributes from stage 1 walk
 * @s2:      Attributes from stage 2 walk
 */
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
+static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
                                         ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     ARMCacheAttrs ret;
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     }

     /* Combine memory type and cacheability attributes */
-    if (arm_hcr_el2_eff(env) & HCR_FWB) {
+    if (hcr & HCR_FWB) {
         ret.attrs = combined_attrs_fwb(s1, s2);
     } else {
-        ret.attrs = combined_attrs_nofwb(env, s1, s2);
+        ret.attrs = combined_attrs_nofwb(hcr, s1, s2);
     }

     /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             ARMCacheAttrs cacheattrs1;
             ARMMMUIdx s2_mmu_idx;
             bool is_el0;
+            uint64_t hcr;

             ret = get_phys_addr_with_secure(env, address, access_type,
                                             s1_mmu_idx, is_secure, result, fi);
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             }

             /* Combine the S1 and S2 cache attributes. */
-            if (arm_hcr_el2_eff(env) & HCR_DC) {
+            hcr = arm_hcr_el2_eff(env);
+            if (hcr & HCR_DC) {
                 /*
                  * HCR.DC forces the first stage attributes to
                  *  Normal Non-Shareable,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                 }
                 cacheattrs1.shareability = 0;
             }
-            result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
+            result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
                                                     result->cacheattrs);

             /*
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
that we use is_secure instead of the current security state.
These AT* operations have been broken since arm_hcr_el2_eff
gained a check for "el2 enabled" for Secure EL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
         }
     }

-    hcr_el2 = arm_hcr_el2_eff(env);
+    hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);

     switch (mmu_idx) {
     case ARMMMUIdx_Stage2:
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             return ~0;
         }

-        hcr = arm_hcr_el2_eff(env);
+        hcr = arm_hcr_el2_eff_secstate(env, is_secure);
         if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
             /*
              * PTW set and S1 walk touched S2 Device memory:
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             }

             /* Combine the S1 and S2 cache attributes. */
-            hcr = arm_hcr_el2_eff(env);
+            hcr = arm_hcr_el2_eff_secstate(env, is_secure);
             if (hcr & HCR_DC) {
                 /*
                  * HCR.DC forces the first stage attributes to
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         result->page_size = TARGET_PAGE_SIZE;

         /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
-        hcr = arm_hcr_el2_eff(env);
+        hcr = arm_hcr_el2_eff_secstate(env, is_secure);
         result->cacheattrs.shareability = 0;
         result->cacheattrs.is_s2_format = false;
         if (hcr & HCR_DC) {
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 138 +++++++++++++++++++++++++----------------------
 1 file changed, 74 insertions(+), 64 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
     return ret;
 }

+/*
+ * MMU disabled.  S1 addresses within aa64 translation regimes are
+ * still checked for bounds -- see AArch64.S1DisabledOutput().
+ */
+static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
+                                   MMUAccessType access_type,
+                                   ARMMMUIdx mmu_idx, bool is_secure,
+                                   GetPhysAddrResult *result,
+                                   ARMMMUFaultInfo *fi)
+{
+    uint64_t hcr;
+    uint8_t memattr;
+
+    if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
+        int r_el = regime_el(env, mmu_idx);
+        if (arm_el_is_aa64(env, r_el)) {
+            int pamax = arm_pamax(env_archcpu(env));
+            uint64_t tcr = env->cp15.tcr_el[r_el];
+            int addrtop, tbi;
+
+            tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
+            if (access_type == MMU_INST_FETCH) {
+                tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
+            }
+            tbi = (tbi >> extract64(address, 55, 1)) & 1;
+            addrtop = (tbi ? 55 : 63);
+
+            if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
+                fi->type = ARMFault_AddressSize;
+                fi->level = 0;
+                fi->stage2 = false;
+                return 1;
+            }
+
+            /*
+             * When TBI is disabled, we've just validated that all of the
+             * bits above PAMax are zero, so logically we only need to
+             * clear the top byte for TBI.  But it's clearer to follow
+             * the pseudocode set of addrdesc.paddress.
+             */
+            address = extract64(address, 0, 52);
+        }
+    }
+
+    result->phys = address;
+    result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+    result->page_size = TARGET_PAGE_SIZE;
+
+    /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
+    hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+    result->cacheattrs.shareability = 0;
+    result->cacheattrs.is_s2_format = false;
+    if (hcr & HCR_DC) {
+        if (hcr & HCR_DCT) {
+            memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
+        } else {
+            memattr = 0xff;  /* Normal, WB, RWA */
+        }
+    } else if (access_type == MMU_INST_FETCH) {
+        if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
+            memattr = 0xee;  /* Normal, WT, RA, NT */
+        } else {
+            memattr = 0x44;  /* Normal, NC, No */
+        }
+        result->cacheattrs.shareability = 2; /* outer sharable */
+    } else {
+        memattr = 0x00;      /* Device, nGnRnE */
+    }
+    result->cacheattrs.attrs = memattr;
+    return 0;
+}
+
 bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
                                bool is_secure, GetPhysAddrResult *result,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
     /* Definitely a real MMU, not an MPU */

     if (regime_translation_disabled(env, mmu_idx, is_secure)) {
-        uint64_t hcr;
-        uint8_t memattr;
-
-        /*
-         * MMU disabled.  S1 addresses within aa64 translation regimes are
-         * still checked for bounds -- see AArch64.TranslateAddressS1Off.
-         */
-        if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
-            int r_el = regime_el(env, mmu_idx);
-            if (arm_el_is_aa64(env, r_el)) {
-                int pamax = arm_pamax(env_archcpu(env));
-                uint64_t tcr = env->cp15.tcr_el[r_el];
-                int addrtop, tbi;
-
-                tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
-                if (access_type == MMU_INST_FETCH) {
-                    tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
-                }
-                tbi = (tbi >> extract64(address, 55, 1)) & 1;
-                addrtop = (tbi ? 55 : 63);
-
-                if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
-                    fi->type = ARMFault_AddressSize;
-                    fi->level = 0;
-                    fi->stage2 = false;
-                    return 1;
-                }
-
-                /*
-                 * When TBI is disabled, we've just validated that all of the
-                 * bits above PAMax are zero, so logically we only need to
-                 * clear the top byte for TBI.  But it's clearer to follow
-                 * the pseudocode set of addrdesc.paddress.
-                 */
-                address = extract64(address, 0, 52);
-            }
-        }
-        result->phys = address;
-        result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
-        result->page_size = TARGET_PAGE_SIZE;
-
-        /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
-        hcr = arm_hcr_el2_eff_secstate(env, is_secure);
-        result->cacheattrs.shareability = 0;
-        result->cacheattrs.is_s2_format = false;
-        if (hcr & HCR_DC) {
-            if (hcr & HCR_DCT) {
-                memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
-            } else {
-                memattr = 0xff;  /* Normal, WB, RWA */
-            }
-        } else if (access_type == MMU_INST_FETCH) {
-            if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
-                memattr = 0xee;  /* Normal, WT, RA, NT */
-            } else {
-                memattr = 0x44;  /* Normal, NC, No */
-            }
-            result->cacheattrs.shareability = 2; /* outer sharable */
-        } else {
-            memattr = 0x00;      /* Device, nGnRnE */
-        }
-        result->cacheattrs.attrs = memattr;
-        return 0;
+        return get_phys_addr_disabled(env, address, access_type, mmu_idx,
+                                      is_secure, result, fi);
     }
-
     if (regime_using_lpae_format(env, mmu_idx)) {
         return get_phys_addr_lpae(env, address, access_type, mmu_idx,
                                   is_secure, false, result, fi);
--
2.25.1

New patch

From: Richard Henderson <richard.henderson@linaro.org>

Do not apply memattr or shareability for Stage2 translations.
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
pseudocode in AArch64.S1DisabledOutput.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 48 +++++++++++++++++++++++++-----------------------
1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
GetPhysAddrResult *result,
ARMMMUFaultInfo *fi)
{
- uint64_t hcr;
- uint8_t memattr;
+ uint8_t memattr = 0x00; /* Device nGnRnE */
+ uint8_t shareability = 0; /* non-sharable */

if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
int r_el = regime_el(env, mmu_idx);
+
if (arm_el_is_aa64(env, r_el)) {
int pamax = arm_pamax(env_archcpu(env));
uint64_t tcr = env->cp15.tcr_el[r_el];
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
*/
address = extract64(address, 0, 52);
}
+
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
+ if (r_el == 1) {
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+ if (hcr & HCR_DC) {
+ if (hcr & HCR_DCT) {
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
+ } else {
+ memattr = 0xff; /* Normal, WB, RWA */
+ }
+ }
+ }
+ if (memattr == 0 && access_type == MMU_INST_FETCH) {
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
+ memattr = 0xee; /* Normal, WT, RA, NT */
+ } else {
+ memattr = 0x44; /* Normal, NC, No */
+ }
+ shareability = 2; /* outer sharable */
+ }
+ result->cacheattrs.is_s2_format = false;
}

result->phys = address;
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
result->page_size = TARGET_PAGE_SIZE;
-
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
- result->cacheattrs.shareability = 0;
- result->cacheattrs.is_s2_format = false;
- if (hcr & HCR_DC) {
- if (hcr & HCR_DCT) {
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
- } else {
- memattr = 0xff; /* Normal, WB, RWA */
- }
- } else if (access_type == MMU_INST_FETCH) {
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
- memattr = 0xee; /* Normal, WT, RA, NT */
- } else {
- memattr = 0x44; /* Normal, NC, No */
- }
- result->cacheattrs.shareability = 2; /* outer sharable */
- } else {
- memattr = 0x00; /* Device, nGnRnE */
- }
+ result->cacheattrs.shareability = shareability;
result->cacheattrs.attrs = memattr;
return 0;
}
--
2.25.1
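
For reference, the attribute selection after this patch amounts to the
following decision tree (an illustrative sketch only, not code from the
patch; r_el, hcr and sctlr stand for the values the real function
computes):

    /* Attributes for a walk with stage 1 translation disabled */
    if (mmu_idx is a Stage2 regime) {
        memattr = 0x00;                            /* Device nGnRnE */
    } else if (r_el == 1 && (hcr & HCR_DC)) {
        memattr = (hcr & HCR_DCT) ? 0xf0 : 0xff;   /* Normal WB, tagged if DCT */
    } else if (access_type == MMU_INST_FETCH) {
        memattr = (sctlr & SCTLR_I) ? 0xee : 0x44; /* Normal WT or NC */
        shareability = 2;                          /* outer shareable */
    } else {
        memattr = 0x00;                            /* Device nGnRnE */
    }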

New patch

From: Richard Henderson <richard.henderson@linaro.org>

Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
so that it may be passed directly to tlb_set_page_full.

The change is large, but mostly mechanical. The major
non-mechanical change is page_size -> lg_page_size.
Most of the time this is obvious, and is related to
TARGET_PAGE_BITS.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 5 +-
target/arm/helper.c | 12 +--
target/arm/m_helper.c | 20 ++---
target/arm/ptw.c | 179 ++++++++++++++++++++--------------------
target/arm/tlb_helper.c | 9 +-
5 files changed, 111 insertions(+), 114 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {

/* Fields that are valid upon success. */
typedef struct GetPhysAddrResult {
- hwaddr phys;
- target_ulong page_size;
- int prot;
- MemTxAttrs attrs;
+ CPUTLBEntryFull f;
ARMCacheAttrs cacheattrs;
} GetPhysAddrResult;

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
/* Create a 64-bit PAR */
par64 = (1 << 11); /* LPAE bit always set */
if (!ret) {
- par64 |= res.phys & ~0xfffULL;
- if (!res.attrs.secure) {
+ par64 |= res.f.phys_addr & ~0xfffULL;
+ if (!res.f.attrs.secure) {
par64 |= (1 << 9); /* NS */
}
par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
*/
if (!ret) {
/* We do not set any attribute bits in the PAR */
- if (res.page_size == (1 << 24)
+ if (res.f.lg_page_size == 24
&& arm_feature(env, ARM_FEATURE_V7)) {
- par64 = (res.phys & 0xff000000) | (1 << 1);
+ par64 = (res.f.phys_addr & 0xff000000) | (1 << 1);
} else {
- par64 = res.phys & 0xfffff000;
+ par64 = res.f.phys_addr & 0xfffff000;
}
- if (!res.attrs.secure) {
+ if (!res.f.attrs.secure) {
par64 |= (1 << 9); /* NS */
}
} else {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
}
goto pend_fault;
}
- address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value,
- res.attrs, &txres);
+ address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr,
+ value, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to write the data */
if (mode == STACK_LAZYFP) {
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
goto pend_fault;
}

- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to read the data */
qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
return false;
}
- *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
}
return false;
}
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to read the data */
qemu_log_mask(CPU_LOG_INT,
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
} else {
mrvalid = true;
}
- r = res.prot & PAGE_READ;
- rw = res.prot & PAGE_WRITE;
+ r = res.f.prot & PAGE_READ;
+ rw = res.f.prot & PAGE_WRITE;
} else {
r = false;
rw = false;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
assert(!is_secure);
}

- addr = s2.phys;
+ addr = s2.f.phys_addr;
}
return addr;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
/* 1Mb section. */
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
ap = (desc >> 10) & 3;
- result->page_size = 1024 * 1024;
+ result->f.lg_page_size = 20; /* 1MB */
} else {
/* Lookup l2 entry. */
if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
case 1: /* 64k page. */
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
- result->page_size = 0x10000;
+ result->f.lg_page_size = 16;
break;
case 2: /* 4k page. */
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
break;
case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
if (arm_feature(env, ARM_FEATURE_XSCALE)
|| arm_feature(env, ARM_FEATURE_V6)) {
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
} else {
/*
* UNPREDICTABLE in ARMv5; we choose to take a
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
}
} else {
phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
- result->page_size = 0x400;
+ result->f.lg_page_size = 10;
}
ap = (desc >> 4) & 3;
break;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
g_assert_not_reached();
}
}
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
- result->prot |= result->prot ? PAGE_EXEC : 0;
- if (!(result->prot & (1 << access_type))) {
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+ result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
+ if (!(result->f.prot & (1 << access_type))) {
/* Access permission fault. */
fi->type = ARMFault_Permission;
goto do_fault;
}
- result->phys = phys_addr;
+ result->f.phys_addr = phys_addr;
return false;
do_fault:
fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
- result->page_size = 0x1000000;
+ result->f.lg_page_size = 24; /* 16MB */
} else {
/* Section. */
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
- result->page_size = 0x100000;
+ result->f.lg_page_size = 20; /* 1MB */
}
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
xn = desc & (1 << 4);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
case 1: /* 64k page. */
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
xn = desc & (1 << 15);
- result->page_size = 0x10000;
+ result->f.lg_page_size = 16;
break;
case 2: case 3: /* 4k page. */
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
xn = desc & 1;
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
break;
default:
/* Never happens, but compiler isn't smart enough to tell. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
}
}
if (domain_prot == 3) {
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
} else {
if (pxn && !regime_is_user(env, mmu_idx)) {
xn = 1;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
fi->type = ARMFault_AccessFlag;
goto do_fault;
}
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
} else {
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
}
- if (result->prot && !xn) {
- result->prot |= PAGE_EXEC;
+ if (result->f.prot && !xn) {
+ result->f.prot |= PAGE_EXEC;
}
- if (!(result->prot & (1 << access_type))) {
+ if (!(result->f.prot & (1 << access_type))) {
/* Access permission fault. */
fi->type = ARMFault_Permission;
goto do_fault;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
* the CPU doesn't support TZ or this is a non-secure translation
* regime, because the attribute will already be non-secure.
*/
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
}
- result->phys = phys_addr;
+ result->f.phys_addr = phys_addr;
return false;
do_fault:
fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
ns = mmu_idx == ARMMMUIdx_Stage2;
xn = extract32(attrs, 11, 2);
- result->prot = get_S2prot(env, ap, xn, s1_is_el0);
+ result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
} else {
ns = extract32(attrs, 3, 1);
xn = extract32(attrs, 12, 1);
pxn = extract32(attrs, 11, 1);
- result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
+ result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
}

fault_type = ARMFault_Permission;
- if (!(result->prot & (1 << access_type))) {
+ if (!(result->f.prot & (1 << access_type))) {
goto do_fault;
}

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
* the CPU doesn't support TZ or this is a non-secure translation
* regime, because the attribute will already be non-secure.
*/
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
}
/* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
- arm_tlb_bti_gp(&result->attrs) = true;
+ arm_tlb_bti_gp(&result->f.attrs) = true;
}

if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
result->cacheattrs.shareability = extract32(attrs, 6, 2);
}

- result->phys = descaddr;
- result->page_size = page_size;
+ result->f.phys_addr = descaddr;
+ result->f.lg_page_size = ctz64(page_size);
return false;

do_fault:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,

if (regime_translation_disabled(env, mmu_idx, is_secure)) {
/* MPU disabled. */
- result->phys = address;
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.phys_addr = address;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
return false;
}

- result->phys = address;
+ result->f.phys_addr = address;
for (n = 7; n >= 0; n--) {
base = env->cp15.c6_region[n];
if ((base & 1) == 0) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot = PAGE_READ | PAGE_WRITE;
+ result->f.prot = PAGE_READ | PAGE_WRITE;
break;
case 2:
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
if (!is_user) {
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
}
break;
case 3:
- result->prot = PAGE_READ | PAGE_WRITE;
+ result->f.prot = PAGE_READ | PAGE_WRITE;
break;
case 5:
if (is_user) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
break;
case 6:
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
break;
default:
/* Bad permission. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot |= PAGE_EXEC;
+ result->f.prot |= PAGE_EXEC;
return false;
}

static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
- int32_t address, int *prot)
+ int32_t address, uint8_t *prot)
{
if (!arm_feature(env, ARM_FEATURE_M)) {
*prot = PAGE_READ | PAGE_WRITE;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
int n;
bool is_user = regime_is_user(env, mmu_idx);

- result->phys = address;
- result->page_size = TARGET_PAGE_SIZE;
- result->prot = 0;
+ result->f.phys_addr = address;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
+ result->f.prot = 0;

if (regime_translation_disabled(env, mmu_idx, secure) ||
m_is_ppb_region(env, address)) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
* which always does a direct read using address_space_ldl(), rather
* than going via this function, so we don't need to check that here.
*/
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
} else { /* MPU enabled */
for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
/* region search */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
if (ranges_overlap(base, rmask,
address & TARGET_PAGE_MASK,
TARGET_PAGE_SIZE)) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
continue;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
continue;
}
if (rsize < TARGET_PAGE_BITS) {
- result->page_size = 1 << rsize;
+ result->f.lg_page_size = rsize;
}
break;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
fi->type = ARMFault_Background;
return true;
}
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address,
+ &result->f.prot);
} else { /* a MPU hit! */
uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
case 5:
break; /* no access */
case 3:
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
/* fall through */
case 2:
case 6:
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
case 7:
/* for v7M, same as 6; for R profile a reserved value */
if (arm_feature(env, ARM_FEATURE_M)) {
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
}
/* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
case 1:
case 2:
case 3:
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
/* fall through */
case 5:
case 6:
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
case 7:
/* for v7M, same as 6; for R profile a reserved value */
if (arm_feature(env, ARM_FEATURE_M)) {
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
}
/* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,

/* execute never */
if (xn) {
- result->prot &= ~PAGE_EXEC;
+ result->f.prot &= ~PAGE_EXEC;
}
}
}

fi->type = ARMFault_Permission;
fi->level = 1;
- return !(result->prot & (1 << access_type));
+ return !(result->f.prot & (1 << access_type));
}

bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);

- result->page_size = TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
if (mregion) {
*mregion = -1;
}
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
ranges_overlap(base, limit - base + 1,
addr_page_base,
TARGET_PAGE_SIZE)) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
continue;
}

if (base > addr_page_base || limit < addr_page_limit) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}

if (matchregion != -1) {
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,

if (matchregion == -1) {
/* hit using the background region */
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
} else {
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
xn = 1;
}

- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
- if (result->prot && !xn && !(pxn && !is_user)) {
- result->prot |= PAGE_EXEC;
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
+ if (result->f.prot && !xn && !(pxn && !is_user)) {
+ result->f.prot |= PAGE_EXEC;
}
/*
* We don't need to look the attribute up in the MAIR0/MAIR1
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,

fi->type = ARMFault_Permission;
fi->level = 1;
- return !(result->prot & (1 << access_type));
+ return !(result->f.prot & (1 << access_type));
}

static bool v8m_is_sau_exempt(CPUARMState *env,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
} else {
fi->type = ARMFault_QEMU_SFault;
}
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
return true;
}
} else {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
* might downgrade a secure access to nonsecure.
*/
if (sattrs.ns) {
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
} else if (!secure) {
/*
* NS access to S memory must fault.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
*/
fi->type = ARMFault_QEMU_SFault;
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
return true;
}
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure,
result, fi, NULL);
if (sattrs.subpage) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
return ret;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
result->cacheattrs.is_s2_format = false;
}

- result->phys = address;
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
- result->page_size = TARGET_PAGE_SIZE;
+ result->f.phys_addr = address;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
result->cacheattrs.shareability = shareability;
result->cacheattrs.attrs = memattr;
return 0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
return ret;
}

- ipa = result->phys;
- ipa_secure = result->attrs.secure;
+ ipa = result->f.phys_addr;
+ ipa_secure = result->f.attrs.secure;
if (is_secure) {
/* Select TCR based on the NS bit from the S1 walk. */
s2walk_secure = !(ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* Save the stage1 results so that we may merge
* prot and cacheattrs later.
*/
- s1_prot = result->prot;
+ s1_prot = result->f.prot;
cacheattrs1 = result->cacheattrs;
memset(result, 0, sizeof(*result));

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
fi->s2addr = ipa;

/* Combine the S1 and S2 perms. */
- result->prot &= s1_prot;
+ result->f.prot &= s1_prot;

/* If S2 fails, return early. */
if (ret) {
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* Check if IPA translates to secure or non-secure PA space.
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
*/
- result->attrs.secure =
+ result->f.attrs.secure =
(is_secure
&& !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
&& (ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* cannot upgrade an non-secure translation regime's attributes
* to secure.
*/
- result->attrs.secure = is_secure;
- result->attrs.user = regime_is_user(env, mmu_idx);
+ result->f.attrs.secure = is_secure;
+ result->f.attrs.user = regime_is_user(env, mmu_idx);

/*
* Fast Context Switch Extension. This doesn't exist at all in v8.
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,

if (arm_feature(env, ARM_FEATURE_PMSA)) {
bool ret;
- result->page_size = TARGET_PAGE_SIZE;
+ result->f.lg_page_size = TARGET_PAGE_BITS;

if (arm_feature(env, ARM_FEATURE_V8)) {
/* PMSAv8 */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
(access_type == MMU_DATA_STORE ? "writing" : "execute"),
(uint32_t)address, mmu_idx,
ret ? "Miss" : "Hit",
- result->prot & PAGE_READ ? 'r' : '-',
- result->prot & PAGE_WRITE ? 'w' : '-',
- result->prot & PAGE_EXEC ? 'x' : '-');
+ result->f.prot & PAGE_READ ? 'r' : '-',
+ result->f.prot & PAGE_WRITE ? 'w' : '-',
+ result->f.prot & PAGE_EXEC ? 'x' : '-');

return ret;
}
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
bool ret;

ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
- *attrs = res.attrs;
+ *attrs = res.f.attrs;

if (ret) {
return -1;
}
- return res.phys;
+ return res.f.phys_addr;
}
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
* target page size are handled specially, so for those we
* pass in the exact addresses.
*/
- if (res.page_size >= TARGET_PAGE_SIZE) {
- res.phys &= TARGET_PAGE_MASK;
+ if (res.f.lg_page_size >= TARGET_PAGE_BITS) {
+ res.f.phys_addr &= TARGET_PAGE_MASK;
address &= TARGET_PAGE_MASK;
}
/* Notice and record tagged memory. */
if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
- arm_tlb_mte_tagged(&res.attrs) = true;
+ arm_tlb_mte_tagged(&res.f.attrs) = true;
}

- tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
- res.prot, mmu_idx, res.page_size);
+ tlb_set_page_full(cs, mmu_idx, address, &res.f);
return true;
} else if (probe) {
return false;
--
2.25.1
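
For reference, the old and new page size representations used in this
patch relate by a base-2 logarithm (an illustrative sketch, not part of
the patch itself):

    /* Old field: a byte count, e.g. 0x10000 for a 64K page.
     * New field: its log2, hence the ctz64() in the lpae hunk above. */
    lg_page_size = ctz64(page_size);          /* e.g. ctz64(0x10000) == 16 */
    page_size = (uint64_t)1 << lg_page_size;  /* back to a byte count */

so a check such as 'page_size >= TARGET_PAGE_SIZE' becomes
'lg_page_size >= TARGET_PAGE_BITS', as in the tlb_helper.c hunk above.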

New patch

From: Jerome Forissier <jerome.forissier@linaro.org>

According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and
SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME
is advertised. This has to be taken care of when QEMU boots directly
into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image").

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
if (cpu_isar_feature(aa64_sve, cpu)) {
env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
}
+ if (cpu_isar_feature(aa64_sme, cpu)) {
+ env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
+ env->cp15.scr_el3 |= SCR_ENTP2;
+ }
/* AArch64 kernels never boot in secure mode */
assert(!info->secure_boot);
/* This hook is only supported for AArch32 currently:
--
2.25.1

New patch

Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K. The guest selects the one it wants using bits in the TCR_ELx
registers. If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 33 +++++++++++++
target/arm/internals.h | 9 ++++
target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++----
3 files changed, 136 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
}

+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
+}
+
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
+}
+
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
return valid;
}

+/* Granule size (i.e. page size) */
+typedef enum ARMGranuleSize {
+ /* Same order as TG0 encoding */
+ Gran4K,
+ Gran64K,
+ Gran16K,
+ GranInvalid,
+} ARMGranuleSize;
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
}
}

+static ARMGranuleSize tg0_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 0:
+ return Gran4K;
+ case 1:
+ return Gran64K;
+ case 2:
+ return Gran16K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static ARMGranuleSize tg1_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 1:
+ return Gran16K;
+ case 2:
+ return Gran4K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static inline bool have4k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu)
+ : cpu_isar_feature(aa64_tgran4, cpu);
+}
+
+static inline bool have16k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu)
+ : cpu_isar_feature(aa64_tgran16, cpu);
+}
+
+static inline bool have64k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu)
+ : cpu_isar_feature(aa64_tgran64, cpu);
+}
+
+static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran,
+ bool stage2)
+{
+ switch (gran) {
+ case Gran4K:
+ if (have4k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran16K:
+ if (have16k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran64K:
+ if (have64k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case GranInvalid:
+ break;
+ }
+ /*
+ * If the guest selects a granule size that isn't implemented,
+ * the architecture requires that we behave as if it selected one
+ * that is (with an IMPDEF choice of which one to pick). We choose
+ * to implement the smallest supported granule size.
+ */
+ if (have4k(cpu, stage2)) {
+ return Gran4K;
+ }
+ if (have16k(cpu, stage2)) {
+ return Gran16K;
+ }
+ assert(have64k(cpu, stage2));
+ return Gran64K;
+}
+
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
bool epd, hpd, using16k, using64k, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
+ ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
+ bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;

if (!regime_has_2_ranges(mmu_idx)) {
select = 0;
tsz = extract32(tcr, 0, 6);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
+ if (stage2) {
/* VTCR_EL2 */
hpd = false;
} else {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
select = extract64(va, 55, 1);
if (!select) {
tsz = extract32(tcr, 0, 6);
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
epd = extract32(tcr, 7, 1);
sh = extract32(tcr, 12, 2);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
hpd = extract64(tcr, 41, 1);
} else {
- int tg = extract32(tcr, 30, 2);
- using16k = tg == 1;
- using64k = tg == 3;
tsz = extract32(tcr, 16, 6);
+ gran = tg1_to_gran_size(extract32(tcr, 30, 2));
epd = extract32(tcr, 23, 1);
sh = extract32(tcr, 28, 2);
hpd = extract64(tcr, 42, 1);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ds = extract64(tcr, 59, 1);
}

+ gran = sanitize_gran_size(cpu, gran, stage2);
+ using64k = gran == Gran64K;
+ using16k = gran == Gran16K;
+
if (cpu_isar_feature(aa64_st, cpu)) {
max_tsz = 48 - using64k;
} else {
--
2.25.1
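
As a worked example of the new sanitizing (illustrative only, not code
from the patch): on a CPU implementing only the 4K and 64K granules, a
guest writing the reserved TG0 value 0b11 to TCR_EL1 would see

    gran = tg0_to_gran_size(3);                  /* GranInvalid */
    gran = sanitize_gran_size(cpu, gran, false); /* falls back to Gran4K */

i.e. the walk proceeds with the smallest implemented granule, which is
the IMPDEF choice described in the comment in sanitize_gran_size().
(The stage 1 tests use FIELD_SEX64() because TGRAN4 and TGRAN64 are
signed ID register fields, where negative values mean 'not implemented'.)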

New patch

Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
---
target/arm/internals.h | 23 +++++++++++++++++++++--
target/arm/helper.c | 39 ++++++++++++++++++++++++++++-----------
target/arm/ptw.c | 8 +-------
3 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize {
GranInvalid,
} ARMGranuleSize;

+/**
+ * arm_granule_bits: Return address size of the granule in bits
+ *
+ * Return the address size of the granule in bits. This corresponds
+ * to the pseudocode TGxGranuleBits().
+ */
+static inline int arm_granule_bits(ARMGranuleSize gran)
+{
+ switch (gran) {
+ case Gran64K:
+ return 16;
+ case Gran16K:
+ return 14;
+ case Gran4K:
+ return 12;
+ default:
+ g_assert_not_reached();
+ }
+}
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
bool tbi : 1;
bool epd : 1;
bool hpd : 1;
- bool using16k : 1;
- bool using64k : 1;
bool tsz_oob : 1; /* tsz has been clamped to legal range */
bool ds : 1;
+ ARMGranuleSize gran : 2;
} ARMVAParameters;

ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
uint64_t length;
} TLBIRange;

+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
+{
+ /*
+ * Note that the TLBI range TG field encoding differs from both
+ * TG0 and TG1 encodings.
+ */
+ switch (tg) {
+ case 1:
+ return Gran4K;
+ case 2:
+ return Gran16K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t select = sextract64(value, 36, 1);
ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true);
TLBIRange ret = { };
+ ARMGranuleSize gran;

page_size_granule = extract64(value, 46, 2);
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);

/* The granule encoded in value must match the granule in use. */
- if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) {
+ if (gran != param.gran) {
qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
page_size_granule);
return ret;
}

- page_shift = (page_size_granule - 1) * 2 + 12;
+ page_shift = arm_granule_bits(gran);
num = extract64(value, 39, 5);
scale = extract64(value, 44, 2);
exponent = (5 * scale) + 1;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
- bool epd, hpd, using16k, using64k, tsz_oob, ds;
+ bool epd, hpd, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
}

gran = sanitize_gran_size(cpu, gran, stage2);
- using64k = gran == Gran64K;
- using16k = gran == Gran16K;

if (cpu_isar_feature(aa64_st, cpu)) {
- max_tsz = 48 - using64k;
+ max_tsz = 48 - (gran == Gran64K);
} else {
max_tsz = 39;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
* adjust the effective value of DS, as documented.
*/
min_tsz = 16;
- if (using64k) {
+ if (gran == Gran64K) {
if (cpu_isar_feature(aa64_lva, cpu)) {
min_tsz = 12;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
switch (mmu_idx) {
case ARMMMUIdx_Stage2:
case ARMMMUIdx_Stage2_S:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
}
break;
default:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
.tbi = tbi,
.epd = epd,
.hpd = hpd,
- .using16k = using16k,
- .using64k = using64k,
.tsz_oob = tsz_oob,
.ds = ds,
+ .gran = gran,
};
}

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
}
}

- if (param.using64k) {
- stride = 13;
- } else if (param.using16k) {
- stride = 11;
- } else {
- stride = 9;
- }
+ stride = arm_granule_bits(param.gran) - 3;

/*
* Note that QEMU ignores shareability and cacheability attributes,
--
2.25.1
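
For reference, the new 'stride = arm_granule_bits(param.gran) - 3'
reproduces the old constants because each translation table descriptor
is 8 bytes, so a granule-sized table level resolves granule_bits - 3
bits of address (a sketch of the arithmetic, not code from the patch):

    /* Gran4K:  12 - 3 ==  9   (old final 'else' branch) */
    /* Gran16K: 14 - 3 == 11   (old using16k branch)     */
    /* Gran64K: 16 - 3 == 13   (old using64k branch)     */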

New patch

FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it
can report a different set of supported granule (page) sizes for
stage 1 and stage 2 translation tables. As of commit c20281b2a5048
we already report the granule sizes that way for '-cpu max', and now
we also correctly make attempts to use unimplemented granule sizes
fail, so we can report the support of the feature in the
documentation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
1 file changed, 1 insertion(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_FRINTTS (Floating-point to integer instructions)
- FEAT_FlagM (Flag manipulation instructions v2)
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_GTG (Guest translation granule size)
- FEAT_HCX (Support for the HCRX_EL2 register)
- FEAT_HPDS (Hierarchical permission disables)
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
--
2.25.1