The following changes since commit 6d940eff4734bcb40b1a25f62d7cec5a396f994a:

  Merge tag 'pull-tpm-2022-06-07-1' of https://github.com/stefanberger/qemu-tpm into staging (2022-06-07 19:22:18 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220609

for you to fetch changes up to 414c54d515dba16bfaef643a8acec200c05f229a:

  target/arm: Add ID_AA64SMFR0_EL1 (2022-06-08 19:38:59 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Declare support for FEAT_RASv1p1
 * target/arm: Implement FEAT_DoubleFault
 * Fix 'writeable' typos
 * xlnx_dp: Implement vblank interrupt
 * target/arm: Move page-table-walk code to ptw.c
 * target/arm: Preparatory patches for SME support

----------------------------------------------------------------

Hi; here's a target-arm pullreq for rc0; these are all bugfixes
and similar minor stuff.

thanks
-- PMM

The following changes since commit 0462a32b4f63b2448b4a196381138afd50719dc4:

  Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging (2025-03-14 09:31:13 +0800)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20250314-1

for you to fetch changes up to a019e15edfd62beae1e2f6adc0fa7415ba20b14c:

  meson.build: Set RUST_BACKTRACE for all tests (2025-03-14 12:54:33 +0000)

----------------------------------------------------------------
target-arm queue:
 * Correctly handle corner cases of guest attempting an exception
   return to AArch32 when target EL is AArch64 only
 * MAINTAINERS: Fix status for Arm boards I "maintain"
 * tests/functional: Bump up arm_replay timeout
 * Revert "hw/char/pl011: Warn when using disabled receiver"
 * util/cacheflush: Make first DSB unconditional on aarch64
 * target/arm: Fix SVE/SME access check logic
 * meson.build: Set RUST_BACKTRACE for all tests

----------------------------------------------------------------

Frederic Konrad (2):
      xlnx_dp: fix the wrong register size
      xlnx-zynqmp: fix the irq mapping for the display port and its dma

Peter Maydell (3):
      target/arm: Declare support for FEAT_RASv1p1
      target/arm: Implement FEAT_DoubleFault
      Fix 'writeable' typos

Richard Henderson (48):
      target/arm: Move stage_1_mmu_idx decl to internals.h
      target/arm: Move get_phys_addr to ptw.c
      target/arm: Move get_phys_addr_v5 to ptw.c
      target/arm: Move get_phys_addr_v6 to ptw.c
      target/arm: Move get_phys_addr_pmsav5 to ptw.c
      target/arm: Move get_phys_addr_pmsav7_default to ptw.c
      target/arm: Move get_phys_addr_pmsav7 to ptw.c
      target/arm: Move get_phys_addr_pmsav8 to ptw.c
      target/arm: Move pmsav8_mpu_lookup to ptw.c
      target/arm: Move pmsav7_use_background_region to ptw.c
      target/arm: Move v8m_security_lookup to ptw.c
      target/arm: Move m_is_{ppb,system}_region to ptw.c
      target/arm: Move get_level1_table_address to ptw.c
      target/arm: Move combine_cacheattrs and subroutines to ptw.c
      target/arm: Move get_phys_addr_lpae to ptw.c
      target/arm: Move arm_{ldl,ldq}_ptw to ptw.c
      target/arm: Move {arm_s1_, }regime_using_lpae_format to tlb_helper.c
      target/arm: Move arm_pamax, pamax_map into ptw.c
      target/arm: Move get_S1prot, get_S2prot to ptw.c
      target/arm: Move check_s2_mmu_setup to ptw.c
      target/arm: Move aa32_va_parameters to ptw.c
      target/arm: Move ap_to_tw_prot etc to ptw.c
      target/arm: Move regime_is_user to ptw.c
      target/arm: Move regime_ttbr to ptw.c
      target/arm: Move regime_translation_disabled to ptw.c
      target/arm: Move arm_cpu_get_phys_page_attrs_debug to ptw.c
      target/arm: Move stage_1_mmu_idx, arm_stage1_mmu_idx to ptw.c
      target/arm: Pass CPUARMState to arm_ld[lq]_ptw
      target/arm: Rename TBFLAG_A64 ZCR_LEN to VL
      linux-user/aarch64: Introduce sve_vq
      target/arm: Remove route_to_el2 check from sve_exception_el
      target/arm: Remove fp checks from sve_exception_el
      target/arm: Add el_is_in_host
      target/arm: Use el_is_in_host for sve_zcr_len_for_el
      target/arm: Use el_is_in_host for sve_exception_el
      target/arm: Hoist arm_is_el2_enabled check in sve_exception_el
      target/arm: Do not use aarch64_sve_zcr_get_valid_len in reset
      target/arm: Merge aarch64_sve_zcr_get_valid_len into caller
      target/arm: Use uint32_t instead of bitmap for sve vq's
      target/arm: Rename sve_zcr_len_for_el to sve_vqm1_for_el
      target/arm: Split out load/store primitives to sve_ldst_internal.h
      target/arm: Export sve contiguous ldst support functions
      target/arm: Move expand_pred_b to vec_internal.h
      target/arm: Use expand_pred_b in mve_helper.c
      target/arm: Move expand_pred_h to vec_internal.h
      target/arm: Export bfdotadd from vec_helper.c
      target/arm: Add isar_feature_aa64_sme
      target/arm: Add ID_AA64SMFR0_EL1

Sai Pavan Boddu (2):
      xlnx_dp: Introduce a vblank signal
      xlnx_dp: Fix the interrupt disable logic

Joe Komlodi (1):
      util/cacheflush: Make first DSB unconditional on aarch64

Paolo Bonzini (1):
      Revert "hw/char/pl011: Warn when using disabled receiver"

Peter Maydell (13):
      target/arm: Move A32_BANKED_REG_{GET,SET} macros to cpregs.h
      target/arm: Un-inline access_secure_reg()
      linux-user/aarch64: Remove unused get/put_user macros
      linux-user/arm: Remove unused get_put_user macros
      target/arm: Move arm_cpu_data_is_big_endian() etc to internals.h
      target/arm: Move arm_current_el() and arm_el_is_aa64() to internals.h
      target/arm: SCR_EL3.RW should be treated as 1 if EL2 doesn't support AArch32
      target/arm: HCR_EL2.RW should be RAO/WI if EL1 doesn't support AArch32
      target/arm: Add cpu local variable to exception_return helper
      target/arm: Forbid return to AArch32 when CPU is AArch64-only
      MAINTAINERS: Fix status for Arm boards I "maintain"
      tests/functional: Bump up arm_replay timeout
      meson.build: Set RUST_BACKTRACE for all tests

Richard Henderson (2):
      target/arm: Make DisasContext.{fp, sve}_access_checked tristate
      target/arm: Simplify pstate_sm check in sve_access_check

docs/interop/vhost-user.rst | 2 +-
docs/specs/vmgenid.txt | 4 +-
docs/system/arm/emulation.rst | 2 +
hw/scsi/mfi.h | 2 +-
include/hw/display/xlnx_dp.h | 12 +-
linux-user/aarch64/target_prctl.h | 20 +-
target/arm/cpu.h | 66 +-
target/arm/internals.h | 45 +-
target/arm/kvm_arm.h | 7 +-
target/arm/sve_ldst_internal.h | 221 +++
target/arm/translate-a64.h | 2 +-
target/arm/translate.h | 2 +-
target/arm/vec_internal.h | 28 +-
target/i386/hvf/vmcs.h | 2 +-
target/i386/hvf/vmx.h | 2 +-
accel/hvf/hvf-accel-ops.c | 4 +-
accel/kvm/kvm-all.c | 4 +-
accel/tcg/user-exec.c | 6 +-
hw/acpi/ghes.c | 2 +-
hw/arm/xlnx-zynqmp.c | 4 +-
hw/display/xlnx_dp.c | 49 +-
hw/intc/arm_gicv3_cpuif.c | 2 +-
hw/intc/arm_gicv3_dist.c | 2 +-
hw/intc/arm_gicv3_redist.c | 4 +-
hw/intc/riscv_aclint.c | 2 +-
hw/intc/riscv_aplic.c | 2 +-
hw/pci/shpc.c | 2 +-
hw/sparc64/sun4u_iommu.c | 2 +-
hw/timer/sse-timer.c | 2 +-
linux-user/aarch64/signal.c | 4 +-
target/arm/arch_dump.c | 2 +-
target/arm/cpu.c | 5 +-
target/arm/cpu64.c | 120 +-
target/arm/gdbstub.c | 2 +-
target/arm/gdbstub64.c | 2 +-
target/arm/helper.c | 2742 ++-----------------------------------
target/arm/hvf/hvf.c | 4 +-
target/arm/kvm64.c | 47 +-
target/arm/mve_helper.c | 6 +-
target/arm/ptw.c | 2540 ++++++++++++++++++++++++++++++++++
target/arm/sve_helper.c | 232 +---
target/arm/tlb_helper.c | 26 +
target/arm/translate-a64.c | 2 +-
target/arm/translate-sve.c | 2 +-
target/arm/vec_helper.c | 28 +-
target/i386/cpu-sysemu.c | 2 +-
target/s390x/ioinst.c | 2 +-
python/qemu/machine/machine.py | 2 +-
target/arm/meson.build | 1 +
tests/tcg/x86_64/system/boot.S | 2 +-
50 files changed, 3240 insertions(+), 3037 deletions(-)
create mode 100644 target/arm/sve_ldst_internal.h
create mode 100644 target/arm/ptw.c

MAINTAINERS | 14 ++--
meson.build | 9 ++-
target/arm/cpregs.h | 28 +++++++
target/arm/cpu.h | 153 +-----------------------------------
target/arm/internals.h | 135 +++++++++++++++++++++++++++++++
target/arm/tcg/translate-a64.h | 2 +-
target/arm/tcg/translate.h | 10 ++-
hw/char/pl011.c | 19 ++---
hw/intc/arm_gicv3_cpuif.c | 1 +
linux-user/aarch64/cpu_loop.c | 48 -----------
linux-user/arm/cpu_loop.c | 43 +---------
target/arm/arch_dump.c | 1 +
target/arm/helper.c | 16 +++-
target/arm/tcg/helper-a64.c | 12 ++-
target/arm/tcg/hflags.c | 9 +++
target/arm/tcg/translate-a64.c | 37 ++++-----
util/cacheflush.c | 4 +-
.gitlab-ci.d/buildtest-template.yml | 1 -
18 files changed, 257 insertions(+), 285 deletions(-)
1
The FEAT_DoubleFault extension adds the following:
1
The A32_BANKED_REG_{GET,SET} macros are only used inside target/arm;
2
2
move their definitions to cpregs.h. There's no need to have them
3
* All external aborts on instruction fetches and translation table
3
defined in all the code that includes cpu.h.
4
walks for instruction fetches must be synchronous. For QEMU this
5
is already true.
6
7
* SCR_EL3 has a new bit NMEA which disables the masking of SError
8
interrupts by PSTATE.A when the SError interrupt is taken to EL3.
9
For QEMU we only need to make the bit writable, because we have no
10
sources of SError interrupts.
11
12
* SCR_EL3 has a new bit EASE which causes synchronous external
13
aborts taken to EL3 to be taken at the same entry point as SError.
14
(Note that this does not mean that they are SErrors for purposes
15
of PSTATE.A masking or that the syndrome register reports them as
16
SErrors: it just means that the vector offset is different.)
17
18
* The existing SCTLR_EL3.IESB has an effective value of 1 when
19
SCR_EL3.NMEA is 1. For QEMU this is a no-op because we don't need
20
different behaviour based on IESB (we don't need to do anything to
21
ensure that error exceptions are synchronized).
22
23
So for QEMU the things we need to change are:
24
* Make SCR_EL3.{NMEA,EASE} writable
25
* When taking a synchronous external abort at EL3, adjust the
26
vector entry point if SCR_EL3.EASE is set
27
* Advertise the feature in the ID registers
28
4
29
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
31
Message-id: 20220531151431.949322-1-peter.maydell@linaro.org
32
---
7
---
33
docs/system/arm/emulation.rst | 1 +
8
target/arm/cpregs.h | 28 ++++++++++++++++++++++++++++
34
target/arm/cpu.h | 5 +++++
9
target/arm/cpu.h | 27 ---------------------------
35
target/arm/cpu64.c | 4 ++--
10
2 files changed, 28 insertions(+), 27 deletions(-)
36
target/arm/helper.c | 36 +++++++++++++++++++++++++++++++++++
37
4 files changed, 44 insertions(+), 2 deletions(-)
38
11
39
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
12
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
40
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
41
--- a/docs/system/arm/emulation.rst
14
--- a/target/arm/cpregs.h
42
+++ b/docs/system/arm/emulation.rst
15
+++ b/target/arm/cpregs.h
43
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
16
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
44
- FEAT_Debugv8p2 (Debug changes for v8.2)
17
return ri->opc1 == 4 || ri->opc1 == 5;
45
- FEAT_Debugv8p4 (Debug changes for v8.4)
18
}
46
- FEAT_DotProd (Advanced SIMD dot product instructions)
19
47
+- FEAT_DoubleFault (Double Fault Extension)
20
+/* Macros for accessing a specified CP register bank */
48
- FEAT_FCMA (Floating-point complex number instructions)
21
+#define A32_BANKED_REG_GET(_env, _regname, _secure) \
49
- FEAT_FHM (Floating-point half-precision multiplication instructions)
22
+ ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
50
- FEAT_FP16 (Half-precision floating-point data processing)
23
+
24
+#define A32_BANKED_REG_SET(_env, _regname, _secure, _val) \
25
+ do { \
26
+ if (_secure) { \
27
+ (_env)->cp15._regname##_s = (_val); \
28
+ } else { \
29
+ (_env)->cp15._regname##_ns = (_val); \
30
+ } \
31
+ } while (0)
32
+
33
+/*
34
+ * Macros for automatically accessing a specific CP register bank depending on
35
+ * the current secure state of the system. These macros are not intended for
36
+ * supporting instruction translation reads/writes as these are dependent
37
+ * solely on the SCR.NS bit and not the mode.
38
+ */
39
+#define A32_BANKED_CURRENT_REG_GET(_env, _regname) \
40
+ A32_BANKED_REG_GET((_env), _regname, \
41
+ (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
42
+
43
+#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val) \
44
+ A32_BANKED_REG_SET((_env), _regname, \
45
+ (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
46
+ (_val))
47
+
48
#endif /* TARGET_ARM_CPREGS_H */
51
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
49
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
52
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/cpu.h
51
--- a/target/arm/cpu.h
54
+++ b/target/arm/cpu.h
52
+++ b/target/arm/cpu.h
55
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
53
@@ -XXX,XX +XXX,XX @@ static inline bool access_secure_reg(CPUARMState *env)
56
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
57
}
58
59
+static inline bool isar_feature_aa64_doublefault(const ARMISARegisters *id)
60
+{
61
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) >= 2;
62
+}
63
+
64
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
65
{
66
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
67
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/arm/cpu64.c
70
+++ b/target/arm/cpu64.c
71
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
72
t = cpu->isar.id_aa64pfr0;
73
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
74
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
75
- t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1); /* FEAT_RAS */
76
+ t = FIELD_DP64(t, ID_AA64PFR0, RAS, 2); /* FEAT_RASv1p1 + FEAT_DoubleFault */
77
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
78
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
79
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
80
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
81
* we do for EL2 with the virtualization=on property.
82
*/
83
t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
84
- t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 1); /* FEAT_RASv1p1 */
85
+ t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0); /* FEAT_RASv1p1 + FEAT_DoubleFault */
86
t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
87
cpu->isar.id_aa64pfr1 = t;
88
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
94
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
95
valid_mask |= SCR_ENSCXT;
96
}
97
+ if (cpu_isar_feature(aa64_doublefault, cpu)) {
98
+ valid_mask |= SCR_EASE | SCR_NMEA;
99
+ }
100
} else {
101
valid_mask &= ~(SCR_RW | SCR_ST);
102
if (cpu_isar_feature(aa32_ras, cpu)) {
103
@@ -XXX,XX +XXX,XX @@ static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env)
104
return ret;
54
return ret;
105
}
55
}
106
56
107
+static bool syndrome_is_sync_extabt(uint32_t syndrome)
57
-/* Macros for accessing a specified CP register bank */
108
+{
58
-#define A32_BANKED_REG_GET(_env, _regname, _secure) \
109
+ /* Return true if this syndrome value is a synchronous external abort */
59
- ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
110
+ switch (syn_get_ec(syndrome)) {
60
-
111
+ case EC_INSNABORT:
61
-#define A32_BANKED_REG_SET(_env, _regname, _secure, _val) \
112
+ case EC_INSNABORT_SAME_EL:
62
- do { \
113
+ case EC_DATAABORT:
63
- if (_secure) { \
114
+ case EC_DATAABORT_SAME_EL:
64
- (_env)->cp15._regname##_s = (_val); \
115
+ /* Look at fault status code for all the synchronous ext abort cases */
65
- } else { \
116
+ switch (syndrome & 0x3f) {
66
- (_env)->cp15._regname##_ns = (_val); \
117
+ case 0x10:
67
- } \
118
+ case 0x13:
68
- } while (0)
119
+ case 0x14:
69
-
120
+ case 0x15:
70
-/* Macros for automatically accessing a specific CP register bank depending on
121
+ case 0x16:
71
- * the current secure state of the system. These macros are not intended for
122
+ case 0x17:
72
- * supporting instruction translation reads/writes as these are dependent
123
+ return true;
73
- * solely on the SCR.NS bit and not the mode.
124
+ default:
74
- */
125
+ return false;
75
-#define A32_BANKED_CURRENT_REG_GET(_env, _regname) \
126
+ }
76
- A32_BANKED_REG_GET((_env), _regname, \
127
+ default:
77
- (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
128
+ return false;
78
-
129
+ }
79
-#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val) \
130
+}
80
- A32_BANKED_REG_SET((_env), _regname, \
131
+
81
- (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
132
/* Handle exception entry to a target EL which is using AArch64 */
82
- (_val))
133
static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
83
-
134
{
84
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
135
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
85
uint32_t cur_el, bool secure);
136
switch (cs->exception_index) {
86
137
case EXCP_PREFETCH_ABORT:
138
case EXCP_DATA_ABORT:
139
+ /*
140
+ * FEAT_DoubleFault allows synchronous external aborts taken to EL3
141
+ * to be taken to the SError vector entrypoint.
142
+ */
143
+ if (new_el == 3 && (env->cp15.scr_el3 & SCR_EASE) &&
144
+ syndrome_is_sync_extabt(env->exception.syndrome)) {
145
+ addr += 0x180;
146
+ }
147
env->cp15.far_el[new_el] = env->exception.vaddress;
148
qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n",
149
env->cp15.far_el[new_el]);
150
--
87
--
151
2.25.1
88
2.43.0
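
As a usage sketch for the macros moved above: the caller below is invented
for illustration (the function names and the choice of the banked ttbr0
field are not from this series; only the A32_BANKED_REG_GET/SET macros
come from cpregs.h), but it shows the intended pattern inside target/arm:

    /* Hypothetical callers, assuming target/arm's "cpu.h" and "cpregs.h". */
    static uint64_t example_read_banked_ttbr0(CPUARMState *env, bool secure)
    {
        /* Expands to env->cp15.ttbr0_s or env->cp15.ttbr0_ns. */
        return A32_BANKED_REG_GET(env, ttbr0, secure);
    }

    static void example_write_banked_ttbr0(CPUARMState *env, bool secure,
                                           uint64_t value)
    {
        A32_BANKED_REG_SET(env, ttbr0, secure, value);
    }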
1
From: Richard Henderson <richard.henderson@linaro.org>
1
We would like to move arm_el_is_aa64() to internals.h; however, it is
2
used by access_secure_reg(). Make that function not be inline, so
3
that it can stay in cpu.h.
2
4
3
This will be used for implementing FEAT_SME.
5
access_secure_reg() is used only in two places:
6
* in hflags.c
7
* in the user-mode arm emulators, to decide whether to store
8
the TLS value in the secure or non-secure banked field
4
9
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
The second of these is not on a super-hot path that would care about
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
the inlining (and incidentally will always use the NS banked field
7
Message-id: 20220607203306.657998-20-richard.henderson@linaro.org
12
because our user-mode CPUs never set ARM_FEATURE_EL3); put the
13
definition of access_secure_reg() in hflags.c, near its only use
14
inside target/arm.
15
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
---
18
---
10
target/arm/cpu.h | 5 +++++
19
target/arm/cpu.h | 12 +++---------
11
1 file changed, 5 insertions(+)
20
target/arm/tcg/hflags.c | 9 +++++++++
21
2 files changed, 12 insertions(+), 9 deletions(-)
12
22
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
25
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
27
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
18
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
28
return aa64;
19
}
29
}
20
30
21
+static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
31
-/* Function for determining whether guest cp register reads and writes should
32
+/*
33
+ * Function for determining whether guest cp register reads and writes should
34
* access the secure or non-secure bank of a cp register. When EL3 is
35
* operating in AArch32 state, the NS-bit determines whether the secure
36
* instance of a cp register should be used. When EL3 is AArch64 (or if
37
* it doesn't exist at all) then there is no register banking, and all
38
* accesses are to the non-secure version.
39
*/
40
-static inline bool access_secure_reg(CPUARMState *env)
41
-{
42
- bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
43
- !arm_el_is_aa64(env, 3) &&
44
- !(env->cp15.scr_el3 & SCR_NS));
45
-
46
- return ret;
47
-}
48
+bool access_secure_reg(CPUARMState *env);
49
50
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
51
uint32_t cur_el, bool secure);
52
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/tcg/hflags.c
55
+++ b/target/arm/tcg/hflags.c
56
@@ -XXX,XX +XXX,XX @@ static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
57
#endif
58
}
59
60
+bool access_secure_reg(CPUARMState *env)
22
+{
61
+{
23
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
62
+ bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
63
+ !arm_el_is_aa64(env, 3) &&
64
+ !(env->cp15.scr_el3 & SCR_NS));
65
+
66
+ return ret;
24
+}
67
+}
25
+
68
+
26
static inline bool isar_feature_aa64_pmu_8_1(const ARMISARegisters *id)
69
static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
27
{
70
ARMMMUIdx mmu_idx,
28
return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
71
CPUARMTBFlags flags)
29
--
72
--
30
2.25.1
73
2.43.0
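
For context, predicates like the isar_feature_aa64_sme() added above are
normally consumed via QEMU's cpu_isar_feature() wrapper; a minimal sketch
of such a guard (the surrounding function is invented for illustration):

    /* Sketch only, assuming target/arm "cpu.h": skip SME-specific setup
     * when ID_AA64PFR1_EL1.SME reads as zero. */
    static void example_init_sme_state(ARMCPU *cpu)
    {
        if (!cpu_isar_feature(aa64_sme, cpu)) {
            return;
        }
        /* ... SME-specific initialisation would go here ... */
    }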
1
From: Richard Henderson <richard.henderson@linaro.org>
1
At the top of linux-user/aarch64/cpu_loop.c we define a set of
2
macros for reading and writing data and code words, but we never
3
use these macros. Delete them.
2
4
3
This (newish) ARM pseudocode function is easier to work with
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
than open-coded tests for HCR_E2H etc. Use of the function
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
will be staged into the code base in parts.
7
---
8
linux-user/aarch64/cpu_loop.c | 48 -----------------------------------
9
1 file changed, 48 deletions(-)
6
10
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220607203306.657998-6-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/internals.h | 2 ++
13
target/arm/helper.c | 28 ++++++++++++++++++++++++++++
14
2 files changed, 30 insertions(+)
15
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
13
--- a/linux-user/aarch64/cpu_loop.c
19
+++ b/target/arm/internals.h
14
+++ b/linux-user/aarch64/cpu_loop.c
20
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
15
@@ -XXX,XX +XXX,XX @@
21
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
16
#include "target/arm/syndrome.h"
22
#endif
17
#include "target/arm/cpu-features.h"
23
18
24
+bool el_is_in_host(CPUARMState *env, int el);
19
-#define get_user_code_u32(x, gaddr, env) \
25
+
20
- ({ abi_long __r = get_user_u32((x), (gaddr)); \
26
void aa32_max_features(ARMCPU *cpu);
21
- if (!__r && bswap_code(arm_sctlr_b(env))) { \
27
22
- (x) = bswap32(x); \
28
#endif
23
- } \
29
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
- __r; \
30
index XXXXXXX..XXXXXXX 100644
25
- })
31
--- a/target/arm/helper.c
26
-
32
+++ b/target/arm/helper.c
27
-#define get_user_code_u16(x, gaddr, env) \
33
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
28
- ({ abi_long __r = get_user_u16((x), (gaddr)); \
34
return ret;
29
- if (!__r && bswap_code(arm_sctlr_b(env))) { \
35
}
30
- (x) = bswap16(x); \
36
31
- } \
37
+/*
32
- __r; \
38
+ * Corresponds to ARM pseudocode function ELIsInHost().
33
- })
39
+ */
34
-
40
+bool el_is_in_host(CPUARMState *env, int el)
35
-#define get_user_data_u32(x, gaddr, env) \
41
+{
36
- ({ abi_long __r = get_user_u32((x), (gaddr)); \
42
+ uint64_t mask;
37
- if (!__r && arm_cpu_bswap_data(env)) { \
43
+
38
- (x) = bswap32(x); \
44
+ /*
39
- } \
45
+ * Since we only care about E2H and TGE, we can skip arm_hcr_el2_eff().
40
- __r; \
46
+ * Perform the simplest bit tests first, and validate EL2 afterward.
41
- })
47
+ */
42
-
48
+ if (el & 1) {
43
-#define get_user_data_u16(x, gaddr, env) \
49
+ return false; /* EL1 or EL3 */
44
- ({ abi_long __r = get_user_u16((x), (gaddr)); \
50
+ }
45
- if (!__r && arm_cpu_bswap_data(env)) { \
51
+
46
- (x) = bswap16(x); \
52
+ /*
47
- } \
53
+ * Note that hcr_write() checks isar_feature_aa64_vh(),
48
- __r; \
54
+ * aka HaveVirtHostExt(), in allowing HCR_E2H to be set.
49
- })
55
+ */
50
-
56
+ mask = el ? HCR_E2H : HCR_E2H | HCR_TGE;
51
-#define put_user_data_u32(x, gaddr, env) \
57
+ if ((env->cp15.hcr_el2 & mask) != mask) {
52
- ({ typeof(x) __x = (x); \
58
+ return false;
53
- if (arm_cpu_bswap_data(env)) { \
59
+ }
54
- __x = bswap32(__x); \
60
+
55
- } \
61
+ /* TGE and/or E2H set: double check those bits are currently legal. */
56
- put_user_u32(__x, (gaddr)); \
62
+ return arm_is_el2_enabled(env) && arm_el_is_aa64(env, 2);
57
- })
63
+}
58
-
64
+
59
-#define put_user_data_u16(x, gaddr, env) \
65
static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
60
- ({ typeof(x) __x = (x); \
66
uint64_t value)
61
- if (arm_cpu_bswap_data(env)) { \
62
- __x = bswap16(__x); \
63
- } \
64
- put_user_u16(__x, (gaddr)); \
65
- })
66
-
67
/* AArch64 main loop */
68
void cpu_loop(CPUARMState *env)
67
{
69
{
68
--
70
--
69
2.25.1
71
2.43.0
1
The architectural feature RASv1p1 introduces the following new
1
In linux-user/arm/cpu_loop.c we define a full set of get/put
2
features:
2
macros for both code and data (since the endianness handling
3
* new registers ERXPFGCDN_EL1, ERXPFGCTL_EL1 and ERXPFGF_EL1
3
is different between the two). However the only one we actually
4
* new bits in the fine-grained trap registers that control traps
4
use is get_user_code_u32(). Remove the rest.
5
for these new registers
6
* new trap bits HCR_EL2.FIEN and SCR_EL3.FIEN that control traps
7
for ERXPFGCDN_EL1, ERXPFGCTL_EL1, ERXPFGF_EL1
8
* a larger number of the ERXMISC<n>_EL1 registers
9
* the format of ERR<n>STATUS registers changes
10
5
11
The architecture permits that if ERRIDR_EL1.NUM is 0 (as it is for
6
We leave a comment noting how data-side accesses should be handled
12
QEMU) then all these new registers may UNDEF, and the HCR_EL2.FIEN
7
for big-endian, because that's a subtle point and we just removed the
13
and SCR_EL3.FIEN bits may be RES0. We don't have any ERR<n>STATUS
8
macros that were effectively documenting it.
14
registers (again, because ERRIDR_EL1.NUM is 0). QEMU does not yet
15
implement the fine-grained-trap extension. So there is nothing we
16
need to implement to be compliant with the feature spec. Make the
17
'max' CPU report the feature in its ID registers, and document it.
18
9
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20220531114258.855804-1-peter.maydell@linaro.org
22
---
12
---
23
docs/system/arm/emulation.rst | 1 +
13
linux-user/arm/cpu_loop.c | 43 ++++-----------------------------------
24
target/arm/cpu64.c | 1 +
14
1 file changed, 4 insertions(+), 39 deletions(-)
25
2 files changed, 2 insertions(+)
26
15
27
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
16
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
28
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
29
--- a/docs/system/arm/emulation.rst
18
--- a/linux-user/arm/cpu_loop.c
30
+++ b/docs/system/arm/emulation.rst
19
+++ b/linux-user/arm/cpu_loop.c
31
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
20
@@ -XXX,XX +XXX,XX @@
32
- FEAT_PMUv3p1 (PMU Extensions v3.1)
21
__r; \
33
- FEAT_PMUv3p4 (PMU Extensions v3.4)
22
})
34
- FEAT_RAS (Reliability, availability, and serviceability)
23
35
+- FEAT_RASv1p1 (RAS Extension v1.1)
24
-#define get_user_code_u16(x, gaddr, env) \
36
- FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
25
- ({ abi_long __r = get_user_u16((x), (gaddr)); \
37
- FEAT_RNG (Random number generator)
26
- if (!__r && bswap_code(arm_sctlr_b(env))) { \
38
- FEAT_S2FWB (Stage 2 forced Write-Back)
27
- (x) = bswap16(x); \
39
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
28
- } \
40
index XXXXXXX..XXXXXXX 100644
29
- __r; \
41
--- a/target/arm/cpu64.c
30
- })
42
+++ b/target/arm/cpu64.c
31
-
43
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
32
-#define get_user_data_u32(x, gaddr, env) \
44
* we do for EL2 with the virtualization=on property.
33
- ({ abi_long __r = get_user_u32((x), (gaddr)); \
45
*/
34
- if (!__r && arm_cpu_bswap_data(env)) { \
46
t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
35
- (x) = bswap32(x); \
47
+ t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 1); /* FEAT_RASv1p1 */
36
- } \
48
t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
37
- __r; \
49
cpu->isar.id_aa64pfr1 = t;
38
- })
50
39
-
40
-#define get_user_data_u16(x, gaddr, env) \
41
- ({ abi_long __r = get_user_u16((x), (gaddr)); \
42
- if (!__r && arm_cpu_bswap_data(env)) { \
43
- (x) = bswap16(x); \
44
- } \
45
- __r; \
46
- })
47
-
48
-#define put_user_data_u32(x, gaddr, env) \
49
- ({ typeof(x) __x = (x); \
50
- if (arm_cpu_bswap_data(env)) { \
51
- __x = bswap32(__x); \
52
- } \
53
- put_user_u32(__x, (gaddr)); \
54
- })
55
-
56
-#define put_user_data_u16(x, gaddr, env) \
57
- ({ typeof(x) __x = (x); \
58
- if (arm_cpu_bswap_data(env)) { \
59
- __x = bswap16(__x); \
60
- } \
61
- put_user_u16(__x, (gaddr)); \
62
- })
63
+/*
64
+ * Note that if we need to do data accesses here, they should do a
65
+ * bswap if arm_cpu_bswap_data() returns true.
66
+ */
67
68
/*
69
* Similar to code in accel/tcg/user-exec.c, but outside the execution loop.
51
--
70
--
52
2.25.1
71
2.43.0
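
The ID-register scheme used above can be summarised as: FEAT_RASv1p1 is
present when ID_AA64PFR0_EL1.RAS >= 2, or when RAS == 1 and
ID_AA64PFR1_EL1.RAS_frac >= 1. A standalone sketch of that check (the
function name is invented; the field offsets are from the Arm ARM):

    #include <stdint.h>
    #include <stdbool.h>

    /* ID_AA64PFR0_EL1.RAS is bits [31:28];
     * ID_AA64PFR1_EL1.RAS_frac is bits [15:12]. */
    static bool example_have_rasv1p1(uint64_t id_aa64pfr0, uint64_t id_aa64pfr1)
    {
        unsigned ras  = (id_aa64pfr0 >> 28) & 0xf;
        unsigned frac = (id_aa64pfr1 >> 12) & 0xf;

        return ras >= 2 || (ras == 1 && frac >= 1);
    }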
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The arm_cpu_data_is_big_endian() and related functions are now used
2
only in target/arm; they can be moved to internals.h.
2
3
3
This register is allocated from the existing block of id registers,
4
The motivation here is that we would like to move arm_current_el()
4
so it is already RES0 for cpus that do not implement SME.
5
to internals.h.
5
6
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-21-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
---
9
---
11
target/arm/cpu.h | 25 +++++++++++++++++++++++++
10
target/arm/cpu.h | 48 ------------------------------------------
12
target/arm/helper.c | 4 ++--
11
target/arm/internals.h | 48 ++++++++++++++++++++++++++++++++++++++++++
13
target/arm/kvm64.c | 11 +++++++----
12
2 files changed, 48 insertions(+), 48 deletions(-)
14
3 files changed, 34 insertions(+), 6 deletions(-)
15
13
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
16
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_sctlr_b(CPUARMState *env)
21
uint64_t id_aa64dfr0;
19
22
uint64_t id_aa64dfr1;
20
uint64_t arm_sctlr(CPUARMState *env, int el);
23
uint64_t id_aa64zfr0;
21
24
+ uint64_t id_aa64smfr0;
22
-static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
25
uint64_t reset_pmcr_el0;
23
- bool sctlr_b)
26
} isar;
24
-{
27
uint64_t midr;
25
-#ifdef CONFIG_USER_ONLY
28
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ZFR0, I8MM, 44, 4)
26
- /*
29
FIELD(ID_AA64ZFR0, F32MM, 52, 4)
27
- * In system mode, BE32 is modelled in line with the
30
FIELD(ID_AA64ZFR0, F64MM, 56, 4)
28
- * architecture (as word-invariant big-endianness), where loads
31
29
- * and stores are done little endian but from addresses which
32
+FIELD(ID_AA64SMFR0, F32F32, 32, 1)
30
- * are adjusted by XORing with the appropriate constant. So the
33
+FIELD(ID_AA64SMFR0, B16F32, 34, 1)
31
- * endianness to use for the raw data access is not affected by
34
+FIELD(ID_AA64SMFR0, F16F32, 35, 1)
32
- * SCTLR.B.
35
+FIELD(ID_AA64SMFR0, I8I32, 36, 4)
33
- * In user mode, however, we model BE32 as byte-invariant
36
+FIELD(ID_AA64SMFR0, F64F64, 48, 1)
34
- * big-endianness (because user-only code cannot tell the
37
+FIELD(ID_AA64SMFR0, I16I64, 52, 4)
35
- * difference), and so we need to use a data access endianness
38
+FIELD(ID_AA64SMFR0, SMEVER, 56, 4)
36
- * that depends on SCTLR.B.
39
+FIELD(ID_AA64SMFR0, FA64, 63, 1)
37
- */
40
+
38
- if (sctlr_b) {
41
FIELD(ID_DFR0, COPDBG, 0, 4)
39
- return true;
42
FIELD(ID_DFR0, COPSDBG, 4, 4)
40
- }
43
FIELD(ID_DFR0, MMAPDBG, 8, 4)
41
-#endif
44
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve_f64mm(const ARMISARegisters *id)
42
- /* In 32bit endianness is determined by looking at CPSR's E bit */
45
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
43
- return env->uncached_cpsr & CPSR_E;
44
-}
45
-
46
-static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
47
-{
48
- return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
49
-}
50
-
51
-/* Return true if the processor is in big-endian mode. */
52
-static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
53
-{
54
- if (!is_a64(env)) {
55
- return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
56
- } else {
57
- int cur_el = arm_current_el(env);
58
- uint64_t sctlr = arm_sctlr(env, cur_el);
59
- return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
60
- }
61
-}
62
-
63
#include "exec/cpu-all.h"
64
65
/*
66
@@ -XXX,XX +XXX,XX @@ static inline bool bswap_code(bool sctlr_b)
67
#endif
46
}
68
}
47
69
48
+static inline bool isar_feature_aa64_sme_f64f64(const ARMISARegisters *id)
70
-#ifdef CONFIG_USER_ONLY
71
-static inline bool arm_cpu_bswap_data(CPUARMState *env)
72
-{
73
- return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
74
-}
75
-#endif
76
-
77
void cpu_get_tb_cpu_state(CPUARMState *env, vaddr *pc,
78
uint64_t *cs_base, uint32_t *flags);
79
80
diff --git a/target/arm/internals.h b/target/arm/internals.h
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/internals.h
83
+++ b/target/arm/internals.h
84
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
85
return arm_rmode_to_sf_map[rmode];
86
}
87
88
+static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
89
+ bool sctlr_b)
49
+{
90
+{
50
+ return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, F64F64);
91
+#ifdef CONFIG_USER_ONLY
92
+ /*
93
+ * In system mode, BE32 is modelled in line with the
94
+ * architecture (as word-invariant big-endianness), where loads
95
+ * and stores are done little endian but from addresses which
96
+ * are adjusted by XORing with the appropriate constant. So the
97
+ * endianness to use for the raw data access is not affected by
98
+ * SCTLR.B.
99
+ * In user mode, however, we model BE32 as byte-invariant
100
+ * big-endianness (because user-only code cannot tell the
101
+ * difference), and so we need to use a data access endianness
102
+ * that depends on SCTLR.B.
103
+ */
104
+ if (sctlr_b) {
105
+ return true;
106
+ }
107
+#endif
108
+ /* In 32bit endianness is determined by looking at CPSR's E bit */
109
+ return env->uncached_cpsr & CPSR_E;
51
+}
110
+}
52
+
111
+
53
+static inline bool isar_feature_aa64_sme_i16i64(const ARMISARegisters *id)
112
+static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
54
+{
113
+{
55
+ return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, I16I64) == 0xf;
114
+ return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
56
+}
115
+}
57
+
116
+
58
+static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
117
+/* Return true if the processor is in big-endian mode. */
118
+static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
59
+{
119
+{
60
+ return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
120
+ if (!is_a64(env)) {
121
+ return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
122
+ } else {
123
+ int cur_el = arm_current_el(env);
124
+ uint64_t sctlr = arm_sctlr(env, cur_el);
125
+ return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
126
+ }
61
+}
127
+}
62
+
128
+
63
/*
129
+#ifdef CONFIG_USER_ONLY
64
* Feature tests for "does this exist in either 32-bit or 64-bit?"
130
+static inline bool arm_cpu_bswap_data(CPUARMState *env)
65
*/
131
+{
66
diff --git a/target/arm/helper.c b/target/arm/helper.c
132
+ return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
67
index XXXXXXX..XXXXXXX 100644
133
+}
68
--- a/target/arm/helper.c
134
+#endif
69
+++ b/target/arm/helper.c
135
+
70
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
136
static inline void aarch64_save_sp(CPUARMState *env, int el)
71
.access = PL1_R, .type = ARM_CP_CONST,
137
{
72
.accessfn = access_aa64_tid3,
138
if (env->pstate & PSTATE_SP) {
73
.resetvalue = cpu->isar.id_aa64zfr0 },
74
- { .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
75
+ { .name = "ID_AA64SMFR0_EL1", .state = ARM_CP_STATE_AA64,
76
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5,
77
.access = PL1_R, .type = ARM_CP_CONST,
78
.accessfn = access_aa64_tid3,
79
- .resetvalue = 0 },
80
+ .resetvalue = cpu->isar.id_aa64smfr0 },
81
{ .name = "ID_AA64PFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
82
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 6,
83
.access = PL1_R, .type = ARM_CP_CONST,
84
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/target/arm/kvm64.c
87
+++ b/target/arm/kvm64.c
88
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
89
} else {
90
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr1,
91
ARM64_SYS_REG(3, 0, 0, 4, 1));
92
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64smfr0,
93
+ ARM64_SYS_REG(3, 0, 0, 4, 5));
94
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr0,
95
ARM64_SYS_REG(3, 0, 0, 5, 0));
96
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr1,
97
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
98
ahcf->isar.id_aa64pfr0 = t;
99
100
/*
101
- * Before v5.1, KVM did not support SVE and did not expose
102
- * ID_AA64ZFR0_EL1 even as RAZ. After v5.1, KVM still does
103
- * not expose the register to "user" requests like this
104
- * unless the host supports SVE.
105
+ * There is a range of kernels between kernel commit 73433762fcae
106
+ * and f81cb2c3ad41 which have a bug where the kernel doesn't expose
107
+ * SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has enabled
108
+ * SVE support, so we only read it here, rather than together with all
109
+ * the other ID registers earlier.
110
*/
111
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
112
ARM64_SYS_REG(3, 0, 0, 4, 4));
113
--
139
--
114
2.25.1
140
2.43.0
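
The ID_AA64SMFR0 fields above follow QEMU's registerfields pattern: FIELD()
records a bit offset and width, and FIELD_EX64() extracts it. A
self-contained sketch of the same extraction for the I16I64 field
(bits [55:52], where 0xf means the full set of 64-bit-integer SME
instructions is implemented; the helper names below are illustrative only):

    #include <stdint.h>
    #include <stdbool.h>

    static inline uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned len)
    {
        return (reg >> shift) & ((1ull << len) - 1);
    }

    static bool example_sme_i16i64(uint64_t id_aa64smfr0)
    {
        /* Mirrors the isar_feature_aa64_sme_i16i64() check in the patch above. */
        return field_ex64(id_aa64smfr0, 52, 4) == 0xf;
    }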
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The functions arm_current_el() and arm_el_is_aa64() are used only in
2
2
target/arm and in hw/intc/arm_gicv3_cpuif.c. They're functions that
3
This is the final user of get_phys_addr_pmsav7_default
3
query internal state of the CPU. Move them out of cpu.h and into
4
within helper.c, so make it static within ptw.c.
4
internals.h.
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
This means we need to include internals.h in arm_gicv3_cpuif.c, but
7
Message-id: 20220604040607.269301-10-richard.henderson@linaro.org
7
this is justifiable because that file is implementing the GICv3 CPU
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
interface, which really is part of the CPU proper; we just ended up
9
implementing it in code in hw/intc/ for historical reasons.
10
11
The motivation for this move is that we'd like to change
12
arm_el_is_aa64() to add a condition that uses cpu_isar_feature();
13
but we don't want to include cpu-features.h in cpu.h.
14
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
---
17
---
11
target/arm/ptw.h | 3 -
18
target/arm/cpu.h | 66 --------------------------------------
12
target/arm/helper.c | 136 -----------------------------------------
19
target/arm/internals.h | 67 +++++++++++++++++++++++++++++++++++++++
13
target/arm/ptw.c | 146 +++++++++++++++++++++++++++++++++++++++++++-
20
hw/intc/arm_gicv3_cpuif.c | 1 +
14
3 files changed, 143 insertions(+), 142 deletions(-)
21
target/arm/arch_dump.c | 1 +
15
22
4 files changed, 69 insertions(+), 66 deletions(-)
16
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
23
17
index XXXXXXX..XXXXXXX 100644
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
--- a/target/arm/ptw.h
25
index XXXXXXX..XXXXXXX 100644
19
+++ b/target/arm/ptw.h
26
--- a/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
27
+++ b/target/arm/cpu.h
21
bool m_is_ppb_region(CPUARMState *env, uint32_t address);
28
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space);
22
bool m_is_system_region(CPUARMState *env, uint32_t address);
29
uint64_t arm_hcr_el2_eff(CPUARMState *env);
23
30
uint64_t arm_hcrx_el2_eff(CPUARMState *env);
24
-void get_phys_addr_pmsav7_default(CPUARMState *env,
31
25
- ARMMMUIdx mmu_idx,
32
-/* Return true if the specified exception level is running in AArch64 state. */
26
- int32_t address, int *prot);
33
-static inline bool arm_el_is_aa64(CPUARMState *env, int el)
27
bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user);
34
-{
28
35
- /* This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
29
bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
36
- * and if we're not in EL0 then the state of EL0 isn't well defined.)
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
37
- */
31
index XXXXXXX..XXXXXXX 100644
38
- assert(el >= 1 && el <= 3);
32
--- a/target/arm/helper.c
39
- bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
33
+++ b/target/arm/helper.c
40
-
34
@@ -XXX,XX +XXX,XX @@ void v8m_security_lookup(CPUARMState *env, uint32_t address,
41
- /* The highest exception level is always at the maximum supported
35
}
42
- * register width, and then lower levels have a register width controlled
43
- * by bits in the SCR or HCR registers.
44
- */
45
- if (el == 3) {
46
- return aa64;
47
- }
48
-
49
- if (arm_feature(env, ARM_FEATURE_EL3) &&
50
- ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
51
- aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
52
- }
53
-
54
- if (el == 2) {
55
- return aa64;
56
- }
57
-
58
- if (arm_is_el2_enabled(env)) {
59
- aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
60
- }
61
-
62
- return aa64;
63
-}
64
-
65
/*
66
* Function for determining whether guest cp register reads and writes should
67
* access the secure or non-secure bank of a cp register. When EL3 is
68
@@ -XXX,XX +XXX,XX @@ static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
69
return env->v7m.exception != 0;
36
}
70
}
37
71
38
-bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
72
-/* Return the current Exception Level (as per ARMv8; note that this differs
39
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
73
- * from the ARMv7 Privilege Level).
40
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
74
- */
41
- int *prot, bool *is_subpage,
75
-static inline int arm_current_el(CPUARMState *env)
42
- ARMMMUFaultInfo *fi, uint32_t *mregion)
43
-{
76
-{
44
- /* Perform a PMSAv8 MPU lookup (without also doing the SAU check
77
- if (arm_feature(env, ARM_FEATURE_M)) {
45
- * that a full phys-to-virt translation does).
78
- return arm_v7m_is_handler_mode(env) ||
46
- * mregion is (if not NULL) set to the region number which matched,
79
- !(env->v7m.control[env->v7m.secure] & 1);
47
- * or -1 if no region number is returned (MPU off, address did not
80
- }
48
- * hit a region, address hit in multiple regions).
81
-
49
- * We set is_subpage to true if the region hit doesn't cover the
82
- if (is_a64(env)) {
50
- * entire TARGET_PAGE the address is within.
83
- return extract32(env->pstate, 2, 2);
51
- */
84
- }
52
- ARMCPU *cpu = env_archcpu(env);
85
-
53
- bool is_user = regime_is_user(env, mmu_idx);
86
- switch (env->uncached_cpsr & 0x1f) {
54
- uint32_t secure = regime_is_secure(env, mmu_idx);
87
- case ARM_CPU_MODE_USR:
55
- int n;
88
- return 0;
56
- int matchregion = -1;
89
- case ARM_CPU_MODE_HYP:
57
- bool hit = false;
90
- return 2;
58
- uint32_t addr_page_base = address & TARGET_PAGE_MASK;
91
- case ARM_CPU_MODE_MON:
59
- uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
92
- return 3;
60
-
93
- default:
61
- *is_subpage = false;
94
- if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
62
- *phys_ptr = address;
95
- /* If EL3 is 32-bit then all secure privileged modes run in
63
- *prot = 0;
96
- * EL3
64
- if (mregion) {
97
- */
65
- *mregion = -1;
98
- return 3;
66
- }
67
-
68
- /* Unlike the ARM ARM pseudocode, we don't need to check whether this
69
- * was an exception vector read from the vector table (which is always
70
- * done using the default system address map), because those accesses
71
- * are done in arm_v7m_load_vector(), which always does a direct
72
- * read using address_space_ldl(), rather than going via this function.
73
- */
74
- if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
75
- hit = true;
76
- } else if (m_is_ppb_region(env, address)) {
77
- hit = true;
78
- } else {
79
- if (pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
80
- hit = true;
81
- }
99
- }
82
-
100
-
83
- for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
101
- return 1;
84
- /* region search */
102
- }
85
- /* Note that the base address is bits [31:5] from the register
86
- * with bits [4:0] all zeroes, but the limit address is bits
87
- * [31:5] from the register with bits [4:0] all ones.
88
- */
89
- uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f;
90
- uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f;
91
-
92
- if (!(env->pmsav8.rlar[secure][n] & 0x1)) {
93
- /* Region disabled */
94
- continue;
95
- }
96
-
97
- if (address < base || address > limit) {
98
- /*
99
- * Address not in this region. We must check whether the
100
- * region covers addresses in the same page as our address.
101
- * In that case we must not report a size that covers the
102
- * whole page for a subsequent hit against a different MPU
103
- * region or the background region, because it would result in
104
- * incorrect TLB hits for subsequent accesses to addresses that
105
- * are in this MPU region.
106
- */
107
- if (limit >= base &&
108
- ranges_overlap(base, limit - base + 1,
109
- addr_page_base,
110
- TARGET_PAGE_SIZE)) {
111
- *is_subpage = true;
112
- }
113
- continue;
114
- }
115
-
116
- if (base > addr_page_base || limit < addr_page_limit) {
117
- *is_subpage = true;
118
- }
119
-
120
- if (matchregion != -1) {
121
- /* Multiple regions match -- always a failure (unlike
122
- * PMSAv7 where highest-numbered-region wins)
123
- */
124
- fi->type = ARMFault_Permission;
125
- fi->level = 1;
126
- return true;
127
- }
128
-
129
- matchregion = n;
130
- hit = true;
131
- }
132
- }
133
-
134
- if (!hit) {
135
- /* background fault */
136
- fi->type = ARMFault_Background;
137
- return true;
138
- }
139
-
140
- if (matchregion == -1) {
141
- /* hit using the background region */
142
- get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
143
- } else {
144
- uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
145
- uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
146
- bool pxn = false;
147
-
148
- if (arm_feature(env, ARM_FEATURE_V8_1M)) {
149
- pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
150
- }
151
-
152
- if (m_is_system_region(env, address)) {
153
- /* System space is always execute never */
154
- xn = 1;
155
- }
156
-
157
- *prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
158
- if (*prot && !xn && !(pxn && !is_user)) {
159
- *prot |= PAGE_EXEC;
160
- }
161
- /* We don't need to look the attribute up in the MAIR0/MAIR1
162
- * registers because that only tells us about cacheability.
163
- */
164
- if (mregion) {
165
- *mregion = matchregion;
166
- }
167
- }
168
-
169
- fi->type = ARMFault_Permission;
170
- fi->level = 1;
171
- return !(*prot & (1 << access_type));
172
-}
103
-}
173
-
104
-
174
/* Combine either inner or outer cacheability attributes for normal
105
/**
175
* memory, according to table D4-42 and pseudocode procedure
106
* write_list_to_cpustate
176
* CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
107
* @cpu: ARMCPU
177
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
108
diff --git a/target/arm/internals.h b/target/arm/internals.h
178
index XXXXXXX..XXXXXXX 100644
109
index XXXXXXX..XXXXXXX 100644
179
--- a/target/arm/ptw.c
110
--- a/target/arm/internals.h
180
+++ b/target/arm/ptw.c
111
+++ b/target/arm/internals.h
181
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
112
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
182
return false;
113
return arm_rmode_to_sf_map[rmode];
183
}
114
}
184
115
185
-void get_phys_addr_pmsav7_default(CPUARMState *env,
116
+/* Return true if the specified exception level is running in AArch64 state. */
186
- ARMMMUIdx mmu_idx,
117
+static inline bool arm_el_is_aa64(CPUARMState *env, int el)
187
- int32_t address, int *prot)
188
+static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
189
+ int32_t address, int *prot)
190
{
191
if (!arm_feature(env, ARM_FEATURE_M)) {
192
*prot = PAGE_READ | PAGE_WRITE;
193
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
194
return !(*prot & (1 << access_type));
195
}
196
197
+bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
198
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
199
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
200
+ int *prot, bool *is_subpage,
201
+ ARMMMUFaultInfo *fi, uint32_t *mregion)
202
+{
118
+{
203
+ /*
119
+ /*
204
+ * Perform a PMSAv8 MPU lookup (without also doing the SAU check
120
+ * This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
205
+ * that a full phys-to-virt translation does).
121
+ * and if we're not in EL0 then the state of EL0 isn't well defined.)
206
+ * mregion is (if not NULL) set to the region number which matched,
207
+ * or -1 if no region number is returned (MPU off, address did not
208
+ * hit a region, address hit in multiple regions).
209
+ * We set is_subpage to true if the region hit doesn't cover the
210
+ * entire TARGET_PAGE the address is within.
211
+ */
122
+ */
212
+ ARMCPU *cpu = env_archcpu(env);
123
+ assert(el >= 1 && el <= 3);
213
+ bool is_user = regime_is_user(env, mmu_idx);
124
+ bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
214
+ uint32_t secure = regime_is_secure(env, mmu_idx);
215
+ int n;
216
+ int matchregion = -1;
217
+ bool hit = false;
218
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
219
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
220
+
221
+ *is_subpage = false;
222
+ *phys_ptr = address;
223
+ *prot = 0;
224
+ if (mregion) {
225
+ *mregion = -1;
226
+ }
227
+
125
+
228
+ /*
126
+ /*
229
+ * Unlike the ARM ARM pseudocode, we don't need to check whether this
127
+ * The highest exception level is always at the maximum supported
230
+ * was an exception vector read from the vector table (which is always
128
+ * register width, and then lower levels have a register width controlled
231
+ * done using the default system address map), because those accesses
129
+ * by bits in the SCR or HCR registers.
232
+ * are done in arm_v7m_load_vector(), which always does a direct
233
+ * read using address_space_ldl(), rather than going via this function.
234
+ */
130
+ */
235
+ if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+ hit = true;
+ } else if (m_is_ppb_region(env, address)) {
+ hit = true;
+ } else {
+ if (pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
+ hit = true;
+ }
+
+ for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
+ /* region search */
+ /*
+ * Note that the base address is bits [31:5] from the register
+ * with bits [4:0] all zeroes, but the limit address is bits
+ * [31:5] from the register with bits [4:0] all ones.
+ */
+ uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f;
+ uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f;
+
+ if (!(env->pmsav8.rlar[secure][n] & 0x1)) {
+ /* Region disabled */
+ continue;
+ }
+
+ if (address < base || address > limit) {
+ /*
+ * Address not in this region. We must check whether the
+ * region covers addresses in the same page as our address.
+ * In that case we must not report a size that covers the
+ * whole page for a subsequent hit against a different MPU
+ * region or the background region, because it would result in
+ * incorrect TLB hits for subsequent accesses to addresses that
+ * are in this MPU region.
+ */
+ if (limit >= base &&
+ ranges_overlap(base, limit - base + 1,
+ addr_page_base,
+ TARGET_PAGE_SIZE)) {
+ *is_subpage = true;
+ }
+ continue;
+ }
+
+ if (base > addr_page_base || limit < addr_page_limit) {
+ *is_subpage = true;
+ }
+
+ if (matchregion != -1) {
+ /*
+ * Multiple regions match -- always a failure (unlike
+ * PMSAv7 where highest-numbered-region wins)
+ */
+ fi->type = ARMFault_Permission;
+ fi->level = 1;
+ return true;
+ }
+
+ matchregion = n;
+ hit = true;
+ }
+ }
+
+ if (!hit) {
+ /* background fault */
+ fi->type = ARMFault_Background;
+ return true;
+ }
+
+ if (matchregion == -1) {
+ /* hit using the background region */
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
+ } else {
+ uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
+ uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
+ bool pxn = false;
+
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+ pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
+ }
+
+ if (m_is_system_region(env, address)) {
+ /* System space is always execute never */
+ xn = 1;
+ }
+
+ *prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
+ if (*prot && !xn && !(pxn && !is_user)) {
+ *prot |= PAGE_EXEC;
+ }
+ /*
+ * We don't need to look the attribute up in the MAIR0/MAIR1
+ * registers because that only tells us about cacheability.
+ */
+ if (mregion) {
+ *mregion = matchregion;
+ }
+ }
+
+ fi->type = ARMFault_Permission;
+ fi->level = 1;
+ return !(*prot & (1 << access_type));
+}
+
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
hwaddr *phys_ptr, MemTxAttrs *txattrs,

+ if (el == 3) {
+ return aa64;
+ }
+
+ if (arm_feature(env, ARM_FEATURE_EL3) &&
+ ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
+ aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
+ }
+
+ if (el == 2) {
+ return aa64;
+ }
+
+ if (arm_is_el2_enabled(env)) {
+ aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
+ }
+
+ return aa64;
+}
+
+/*
+ * Return the current Exception Level (as per ARMv8; note that this differs
+ * from the ARMv7 Privilege Level).
+ */
+static inline int arm_current_el(CPUARMState *env)
+{
+ if (arm_feature(env, ARM_FEATURE_M)) {
+ return arm_v7m_is_handler_mode(env) ||
+ !(env->v7m.control[env->v7m.secure] & 1);
+ }
+
+ if (is_a64(env)) {
+ return extract32(env->pstate, 2, 2);
+ }
+
+ switch (env->uncached_cpsr & 0x1f) {
+ case ARM_CPU_MODE_USR:
+ return 0;
+ case ARM_CPU_MODE_HYP:
+ return 2;
+ case ARM_CPU_MODE_MON:
+ return 3;
+ default:
+ if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+ /* If EL3 is 32-bit then all secure privileged modes run in EL3 */
+ return 3;
+ }
+
+ return 1;
+ }
+}
+
static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
bool sctlr_b)
{
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "target/arm/cpregs.h"
#include "target/arm/cpu-features.h"
+#include "target/arm/internals.h"
#include "system/tcg.h"
#include "system/qtest.h"

diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/arch_dump.c
+++ b/target/arm/arch_dump.c
@@ -XXX,XX +XXX,XX @@
#include "elf.h"
#include "system/dump.h"
#include "cpu-features.h"
+#include "internals.h"

/* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */
struct aarch64_user_regs {
--
2.25.1
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

Begin moving all of the page table walking functions
out of helper.c, starting with get_phys_addr().

Create a temporary header file, "ptw.h", in which to
share declarations between the two C files while we
are moving functions.

Move a few declarations to "internals.h", which will
remain used by multiple C files.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 18 ++-
 target/arm/ptw.h | 51 ++++++
 target/arm/helper.c | 344 +++++------------------------------------
 target/arm/ptw.c | 267 ++++++++++++++++++++++++++++++++
 target/arm/meson.build | 1 +
 5 files changed, 372 insertions(+), 309 deletions(-)
 create mode 100644 target/arm/ptw.h
 create mode 100644 target/arm/ptw.c

The definition of SCR_EL3.RW says that its effective value is 1 if:
- EL2 is implemented and does not support AArch32, and SCR_EL3.NS is 1
- the effective value of SCR_EL3.{EEL2,NS} is {1,0} (i.e. we are
  Secure and Secure EL2 is disabled)

We implement the second of these in arm_el_is_aa64(), but forgot the
first.

Provide a new function arm_scr_rw_eff() to return the effective
value of SCR_EL3.RW, and use it in arm_el_is_aa64() and the other
places that currently look directly at the bit value.

(scr_write() enforces that the RW bit is RAO/WI if neither EL1 nor
EL2 have AArch32 support, but if EL1 does but EL2 does not then the
bit must still be writeable.)

This will mean that if code at EL3 attempts to perform an exception
return to AArch32 EL2 when EL2 is AArch64-only we will correctly
handle this as an illegal exception return: it will be caught by the
"return to an EL which is configured for a different register width"
check in HELPER(exception_return).

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 26 +++++++++++++++++++++++---
 target/arm/helper.c | 4 ++--
 2 files changed, 25 insertions(+), 5 deletions(-)
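
For anyone who wants the two rules above in executable form, here is a
minimal standalone sketch. It is illustrative only, not the code in the
patch below: the SCR_* masks are written out to mirror the architectural
bit positions, and el2_implemented / el2_has_aa32 stand in for the
arm_feature() and cpu_isar_feature() checks that the real arm_scr_rw_eff()
uses.

#include <stdbool.h>
#include <stdint.h>

#define SCR_NS   (1U << 0)    /* SCR_EL3.NS */
#define SCR_RW   (1U << 10)   /* SCR_EL3.RW */
#define SCR_EEL2 (1U << 18)   /* SCR_EL3.EEL2 */

/* Effective value of SCR_EL3.RW per the two rules described above */
static bool scr_rw_effective(uint32_t scr_el3,
                             bool el2_implemented, bool el2_has_aa32)
{
    if (scr_el3 & SCR_RW) {
        return true;
    }
    if (scr_el3 & SCR_NS) {
        /* Non-secure: RW reads as 1 when EL2 cannot run AArch32 */
        return el2_implemented && !el2_has_aa32;
    }
    /* Secure: RW reads as 1 when Secure EL2 is enabled */
    return (scr_el3 & SCR_EEL2) != 0;
}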
diff --git a/target/arm/internals.h b/target/arm/internals.h
37
diff --git a/target/arm/internals.h b/target/arm/internals.h
28
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/internals.h
39
--- a/target/arm/internals.h
30
+++ b/target/arm/internals.h
40
+++ b/target/arm/internals.h
31
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
41
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
32
/* Return the MMU index for a v7M CPU in the specified security state */
42
return arm_rmode_to_sf_map[rmode];
33
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
43
}
34
44
35
-/* Return true if the stage 1 translation regime is using LPAE format page
45
+/* Return the effective value of SCR_EL3.RW */
36
- * tables */
46
+static inline bool arm_scr_rw_eff(CPUARMState *env)
37
+/* Return true if the translation regime is using LPAE format page tables */
47
+{
38
+bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
48
+ /*
49
+ * SCR_EL3.RW has an effective value of 1 if:
50
+ * - we are NS and EL2 is implemented but doesn't support AArch32
51
+ * - we are S and EL2 is enabled (in which case it must be AArch64)
52
+ */
53
+ ARMCPU *cpu = env_archcpu(env);
39
+
54
+
40
+/*
55
+ if (env->cp15.scr_el3 & SCR_RW) {
41
+ * Return true if the stage 1 translation regime is using LPAE
56
+ return true;
42
+ * format page tables
57
+ }
43
+ */
58
+ if (env->cp15.scr_el3 & SCR_NS) {
44
bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
59
+ return arm_feature(env, ARM_FEATURE_EL2) &&
45
60
+ !cpu_isar_feature(aa64_aa32_el2, cpu);
46
/* Raise a data fault alignment exception for the specified virtual address */
61
+ } else {
47
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
62
+ return env->cp15.scr_el3 & SCR_EEL2;
48
}
63
+ }
49
}
50
51
+/* Return the SCTLR value which controls this address translation regime */
52
+static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
53
+{
54
+ return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
55
+}
64
+}
56
+
65
+
57
/* Return the TCR controlling this translation regime */
66
/* Return true if the specified exception level is running in AArch64 state. */
58
static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
67
static inline bool arm_el_is_aa64(CPUARMState *env, int el)
59
{
68
{
60
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
69
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
61
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
70
return aa64;
62
ARMMMUIdx mmu_idx, bool data);
71
}
63
72
64
+int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
73
- if (arm_feature(env, ARM_FEATURE_EL3) &&
65
+int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx);
74
- ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
66
+
75
- aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
67
static inline int exception_target_el(CPUARMState *env)
76
+ if (arm_feature(env, ARM_FEATURE_EL3)) {
68
{
77
+ aa64 = aa64 && arm_scr_rw_eff(env);
69
int target_el = MAX(1, arm_current_el(env));
78
}
70
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
79
71
new file mode 100644
80
if (el == 2) {
72
index XXXXXXX..XXXXXXX
73
--- /dev/null
74
+++ b/target/arm/ptw.h
75
@@ -XXX,XX +XXX,XX @@
76
+/*
77
+ * ARM page table walking.
78
+ *
79
+ * This code is licensed under the GNU GPL v2 or later.
80
+ *
81
+ * SPDX-License-Identifier: GPL-2.0-or-later
82
+ */
83
+
84
+#ifndef TARGET_ARM_PTW_H
85
+#define TARGET_ARM_PTW_H
86
+
87
+#ifndef CONFIG_USER_ONLY
88
+
89
+bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
90
+bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
91
+ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
92
+ ARMCacheAttrs s1, ARMCacheAttrs s2);
93
+
94
+bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
95
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
96
+ hwaddr *phys_ptr, int *prot,
97
+ target_ulong *page_size,
98
+ ARMMMUFaultInfo *fi);
99
+bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
100
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
101
+ hwaddr *phys_ptr, int *prot,
102
+ ARMMMUFaultInfo *fi);
103
+bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
104
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
105
+ hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
106
+ target_ulong *page_size, ARMMMUFaultInfo *fi);
107
+bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
108
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
109
+ hwaddr *phys_ptr, int *prot,
110
+ target_ulong *page_size,
111
+ ARMMMUFaultInfo *fi);
112
+bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
113
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
114
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
115
+ int *prot, target_ulong *page_size,
116
+ ARMMMUFaultInfo *fi);
117
+bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
118
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
119
+ bool s1_is_el0,
120
+ hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
121
+ target_ulong *page_size_ptr,
122
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
123
+ __attribute__((nonnull));
124
+
125
+#endif /* !CONFIG_USER_ONLY */
126
+#endif /* TARGET_ARM_PTW_H */
127
diff --git a/target/arm/helper.c b/target/arm/helper.c
81
diff --git a/target/arm/helper.c b/target/arm/helper.c
128
index XXXXXXX..XXXXXXX 100644
82
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/helper.c
83
--- a/target/arm/helper.c
130
+++ b/target/arm/helper.c
84
+++ b/target/arm/helper.c
131
@@ -XXX,XX +XXX,XX @@
85
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
132
#include "semihosting/common-semi.h"
133
#endif
134
#include "cpregs.h"
135
+#include "ptw.h"
136
137
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
138
139
-#ifndef CONFIG_USER_ONLY
140
-
141
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
142
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
143
- bool s1_is_el0,
144
- hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
145
- target_ulong *page_size_ptr,
146
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
147
- __attribute__((nonnull));
148
-#endif
149
-
150
static void switch_mode(CPUARMState *env, int mode);
151
-static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
152
153
static uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri)
154
{
155
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
156
return env->cp15.sctlr_el[el];
157
}
158
159
-/* Return the SCTLR value which controls this address translation regime */
160
-static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
161
-{
162
- return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
163
-}
164
-
165
#ifndef CONFIG_USER_ONLY
166
167
/* Return true if the specified stage of address translation is disabled */
168
-static inline bool regime_translation_disabled(CPUARMState *env,
169
- ARMMMUIdx mmu_idx)
170
+bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
171
{
172
uint64_t hcr_el2;
86
uint64_t hcr_el2;
173
87
174
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
88
if (arm_feature(env, ARM_FEATURE_EL3)) {
175
#endif /* !CONFIG_USER_ONLY */
89
- rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
176
90
+ rw = arm_scr_rw_eff(env);
177
/* Return true if the translation regime is using LPAE format page tables */
91
} else {
178
-static inline bool regime_using_lpae_format(CPUARMState *env,
92
/*
179
- ARMMMUIdx mmu_idx)
93
* Either EL2 is the highest EL (and so the EL2 register width
180
+bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
94
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
181
{
95
182
int el = regime_el(env, mmu_idx);
96
switch (new_el) {
183
if (el == 2 || arm_el_is_aa64(env, el)) {
97
case 3:
184
@@ -XXX,XX +XXX,XX @@ bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
98
- is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
185
}
99
+ is_aa64 = arm_scr_rw_eff(env);
186
100
break;
187
#ifndef CONFIG_USER_ONLY
101
case 2:
188
-static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
102
hcr = arm_hcr_el2_eff(env);
189
+bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
190
{
191
switch (mmu_idx) {
192
case ARMMMUIdx_SE10_0:
193
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
194
return 0;
195
}
196
197
-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
198
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
199
- hwaddr *phys_ptr, int *prot,
200
- target_ulong *page_size,
201
- ARMMMUFaultInfo *fi)
202
+bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
203
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
204
+ hwaddr *phys_ptr, int *prot,
205
+ target_ulong *page_size,
206
+ ARMMMUFaultInfo *fi)
207
{
208
CPUState *cs = env_cpu(env);
209
int level = 1;
210
@@ -XXX,XX +XXX,XX @@ do_fault:
211
return true;
212
}
213
214
-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
215
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
216
- hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
217
- target_ulong *page_size, ARMMMUFaultInfo *fi)
218
+bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
219
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
220
+ hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
221
+ target_ulong *page_size, ARMMMUFaultInfo *fi)
222
{
223
CPUState *cs = env_cpu(env);
224
ARMCPU *cpu = env_archcpu(env);
225
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
226
return pamax_map[parange];
227
}
228
229
-static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
230
+int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
231
{
232
if (regime_has_2_ranges(mmu_idx)) {
233
return extract64(tcr, 37, 2);
234
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
235
}
236
}
237
238
-static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
239
+int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
240
{
241
if (regime_has_2_ranges(mmu_idx)) {
242
return extract64(tcr, 51, 2);
243
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
244
* @fi: set to fault info if the translation fails
245
* @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
246
*/
247
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
248
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
249
- bool s1_is_el0,
250
- hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
251
- target_ulong *page_size_ptr,
252
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
253
+bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
254
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
255
+ bool s1_is_el0,
256
+ hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
257
+ target_ulong *page_size_ptr,
258
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
259
{
260
ARMCPU *cpu = env_archcpu(env);
261
CPUState *cs = CPU(cpu);
262
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
263
return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
264
}
265
266
-static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
267
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
268
- hwaddr *phys_ptr, int *prot,
269
- target_ulong *page_size,
270
- ARMMMUFaultInfo *fi)
271
+bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
272
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
273
+ hwaddr *phys_ptr, int *prot,
274
+ target_ulong *page_size,
275
+ ARMMMUFaultInfo *fi)
276
{
277
ARMCPU *cpu = env_archcpu(env);
278
int n;
279
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
280
}
281
282
283
-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
284
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
285
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
286
- int *prot, target_ulong *page_size,
287
- ARMMMUFaultInfo *fi)
288
+bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
289
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
290
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
291
+ int *prot, target_ulong *page_size,
292
+ ARMMMUFaultInfo *fi)
293
{
294
uint32_t secure = regime_is_secure(env, mmu_idx);
295
V8M_SAttributes sattrs = {};
296
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
297
return ret;
298
}
299
300
-static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
301
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
302
- hwaddr *phys_ptr, int *prot,
303
- ARMMMUFaultInfo *fi)
304
+bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
305
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
306
+ hwaddr *phys_ptr, int *prot,
307
+ ARMMMUFaultInfo *fi)
308
{
309
int n;
310
uint32_t mask;
311
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(CPUARMState *env,
312
* @s1: Attributes from stage 1 walk
313
* @s2: Attributes from stage 2 walk
314
*/
315
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
316
- ARMCacheAttrs s1, ARMCacheAttrs s2)
317
+ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
318
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
319
{
320
ARMCacheAttrs ret;
321
bool tagged = false;
322
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
323
return ret;
324
}
325
326
-
327
-/* get_phys_addr - get the physical address for this virtual address
328
- *
329
- * Find the physical address corresponding to the given virtual address,
330
- * by doing a translation table walk on MMU based systems or using the
331
- * MPU state on MPU based systems.
332
- *
333
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
334
- * prot and page_size may not be filled in, and the populated fsr value provides
335
- * information on why the translation aborted, in the format of a
336
- * DFSR/IFSR fault register, with the following caveats:
337
- * * we honour the short vs long DFSR format differences.
338
- * * the WnR bit is never set (the caller must do this).
339
- * * for PSMAv5 based systems we don't bother to return a full FSR format
340
- * value.
341
- *
342
- * @env: CPUARMState
343
- * @address: virtual address to get physical address for
344
- * @access_type: 0 for read, 1 for write, 2 for execute
345
- * @mmu_idx: MMU index indicating required translation regime
346
- * @phys_ptr: set to the physical address corresponding to the virtual address
347
- * @attrs: set to the memory transaction attributes to use
348
- * @prot: set to the permissions for the page containing phys_ptr
349
- * @page_size: set to the size of the page containing phys_ptr
350
- * @fi: set to fault info if the translation fails
351
- * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
352
- */
353
-bool get_phys_addr(CPUARMState *env, target_ulong address,
354
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
355
- hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
356
- target_ulong *page_size,
357
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
358
-{
359
- ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
360
-
361
- if (mmu_idx != s1_mmu_idx) {
362
- /* Call ourselves recursively to do the stage 1 and then stage 2
363
- * translations if mmu_idx is a two-stage regime.
364
- */
365
- if (arm_feature(env, ARM_FEATURE_EL2)) {
366
- hwaddr ipa;
367
- int s2_prot;
368
- int ret;
369
- bool ipa_secure;
370
- ARMCacheAttrs cacheattrs2 = {};
371
- ARMMMUIdx s2_mmu_idx;
372
- bool is_el0;
373
-
374
- ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
375
- attrs, prot, page_size, fi, cacheattrs);
376
-
377
- /* If S1 fails or S2 is disabled, return early. */
378
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
379
- *phys_ptr = ipa;
380
- return ret;
381
- }
382
-
383
- ipa_secure = attrs->secure;
384
- if (arm_is_secure_below_el3(env)) {
385
- if (ipa_secure) {
386
- attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
387
- } else {
388
- attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
389
- }
390
- } else {
391
- assert(!ipa_secure);
392
- }
393
-
394
- s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
395
- is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
396
-
397
- /* S1 is done. Now do S2 translation. */
398
- ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
399
- phys_ptr, attrs, &s2_prot,
400
- page_size, fi, &cacheattrs2);
401
- fi->s2addr = ipa;
402
- /* Combine the S1 and S2 perms. */
403
- *prot &= s2_prot;
404
-
405
- /* If S2 fails, return early. */
406
- if (ret) {
407
- return ret;
408
- }
409
-
410
- /* Combine the S1 and S2 cache attributes. */
411
- if (arm_hcr_el2_eff(env) & HCR_DC) {
412
- /*
413
- * HCR.DC forces the first stage attributes to
414
- * Normal Non-Shareable,
415
- * Inner Write-Back Read-Allocate Write-Allocate,
416
- * Outer Write-Back Read-Allocate Write-Allocate.
417
- * Do not overwrite Tagged within attrs.
418
- */
419
- if (cacheattrs->attrs != 0xf0) {
420
- cacheattrs->attrs = 0xff;
421
- }
422
- cacheattrs->shareability = 0;
423
- }
424
- *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
425
-
426
- /* Check if IPA translates to secure or non-secure PA space. */
427
- if (arm_is_secure_below_el3(env)) {
428
- if (ipa_secure) {
429
- attrs->secure =
430
- !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
431
- } else {
432
- attrs->secure =
433
- !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
434
- || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
435
- }
436
- }
437
- return 0;
438
- } else {
439
- /*
440
- * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
441
- */
442
- mmu_idx = stage_1_mmu_idx(mmu_idx);
443
- }
444
- }
445
-
446
- /* The page table entries may downgrade secure to non-secure, but
447
- * cannot upgrade an non-secure translation regime's attributes
448
- * to secure.
449
- */
450
- attrs->secure = regime_is_secure(env, mmu_idx);
451
- attrs->user = regime_is_user(env, mmu_idx);
452
-
453
- /* Fast Context Switch Extension. This doesn't exist at all in v8.
454
- * In v7 and earlier it affects all stage 1 translations.
455
- */
456
- if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2
457
- && !arm_feature(env, ARM_FEATURE_V8)) {
458
- if (regime_el(env, mmu_idx) == 3) {
459
- address += env->cp15.fcseidr_s;
460
- } else {
461
- address += env->cp15.fcseidr_ns;
462
- }
463
- }
464
-
465
- if (arm_feature(env, ARM_FEATURE_PMSA)) {
466
- bool ret;
467
- *page_size = TARGET_PAGE_SIZE;
468
-
469
- if (arm_feature(env, ARM_FEATURE_V8)) {
470
- /* PMSAv8 */
471
- ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
472
- phys_ptr, attrs, prot, page_size, fi);
473
- } else if (arm_feature(env, ARM_FEATURE_V7)) {
474
- /* PMSAv7 */
475
- ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
476
- phys_ptr, prot, page_size, fi);
477
- } else {
478
- /* Pre-v7 MPU */
479
- ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
480
- phys_ptr, prot, fi);
481
- }
482
- qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
483
- " mmu_idx %u -> %s (prot %c%c%c)\n",
484
- access_type == MMU_DATA_LOAD ? "reading" :
485
- (access_type == MMU_DATA_STORE ? "writing" : "execute"),
486
- (uint32_t)address, mmu_idx,
487
- ret ? "Miss" : "Hit",
488
- *prot & PAGE_READ ? 'r' : '-',
489
- *prot & PAGE_WRITE ? 'w' : '-',
490
- *prot & PAGE_EXEC ? 'x' : '-');
491
-
492
- return ret;
493
- }
494
-
495
- /* Definitely a real MMU, not an MPU */
496
-
497
- if (regime_translation_disabled(env, mmu_idx)) {
498
- uint64_t hcr;
499
- uint8_t memattr;
500
-
501
- /*
502
- * MMU disabled. S1 addresses within aa64 translation regimes are
503
- * still checked for bounds -- see AArch64.TranslateAddressS1Off.
504
- */
505
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
506
- int r_el = regime_el(env, mmu_idx);
507
- if (arm_el_is_aa64(env, r_el)) {
508
- int pamax = arm_pamax(env_archcpu(env));
509
- uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr;
510
- int addrtop, tbi;
511
-
512
- tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
513
- if (access_type == MMU_INST_FETCH) {
514
- tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
515
- }
516
- tbi = (tbi >> extract64(address, 55, 1)) & 1;
517
- addrtop = (tbi ? 55 : 63);
518
-
519
- if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
520
- fi->type = ARMFault_AddressSize;
521
- fi->level = 0;
522
- fi->stage2 = false;
523
- return 1;
524
- }
525
-
526
- /*
527
- * When TBI is disabled, we've just validated that all of the
528
- * bits above PAMax are zero, so logically we only need to
529
- * clear the top byte for TBI. But it's clearer to follow
530
- * the pseudocode set of addrdesc.paddress.
531
- */
532
- address = extract64(address, 0, 52);
533
- }
534
- }
535
- *phys_ptr = address;
536
- *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
537
- *page_size = TARGET_PAGE_SIZE;
538
-
539
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
540
- hcr = arm_hcr_el2_eff(env);
541
- cacheattrs->shareability = 0;
542
- cacheattrs->is_s2_format = false;
543
- if (hcr & HCR_DC) {
544
- if (hcr & HCR_DCT) {
545
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
546
- } else {
547
- memattr = 0xff; /* Normal, WB, RWA */
548
- }
549
- } else if (access_type == MMU_INST_FETCH) {
550
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
551
- memattr = 0xee; /* Normal, WT, RA, NT */
552
- } else {
553
- memattr = 0x44; /* Normal, NC, No */
554
- }
555
- cacheattrs->shareability = 2; /* outer sharable */
556
- } else {
557
- memattr = 0x00; /* Device, nGnRnE */
558
- }
559
- cacheattrs->attrs = memattr;
560
- return 0;
561
- }
562
-
563
- if (regime_using_lpae_format(env, mmu_idx)) {
564
- return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
565
- phys_ptr, attrs, prot, page_size,
566
- fi, cacheattrs);
567
- } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
568
- return get_phys_addr_v6(env, address, access_type, mmu_idx,
569
- phys_ptr, attrs, prot, page_size, fi);
570
- } else {
571
- return get_phys_addr_v5(env, address, access_type, mmu_idx,
572
- phys_ptr, prot, page_size, fi);
573
- }
574
-}
575
-
576
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
577
MemTxAttrs *attrs)
578
{
579
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
580
}
581
return phys_addr;
582
}
583
-
584
#endif
585
586
/* Note that signed overflow is undefined in C. The following routines are
587
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
588
new file mode 100644
589
index XXXXXXX..XXXXXXX
590
--- /dev/null
591
+++ b/target/arm/ptw.c
592
@@ -XXX,XX +XXX,XX @@
593
+/*
594
+ * ARM page table walking.
595
+ *
596
+ * This code is licensed under the GNU GPL v2 or later.
597
+ *
598
+ * SPDX-License-Identifier: GPL-2.0-or-later
599
+ */
600
+
601
+#include "qemu/osdep.h"
602
+#include "qemu/log.h"
603
+#include "cpu.h"
604
+#include "internals.h"
605
+#include "ptw.h"
606
+
607
+
608
+/**
609
+ * get_phys_addr - get the physical address for this virtual address
610
+ *
611
+ * Find the physical address corresponding to the given virtual address,
612
+ * by doing a translation table walk on MMU based systems or using the
613
+ * MPU state on MPU based systems.
614
+ *
615
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
616
+ * prot and page_size may not be filled in, and the populated fsr value provides
617
+ * information on why the translation aborted, in the format of a
618
+ * DFSR/IFSR fault register, with the following caveats:
619
+ * * we honour the short vs long DFSR format differences.
620
+ * * the WnR bit is never set (the caller must do this).
621
+ * * for PSMAv5 based systems we don't bother to return a full FSR format
622
+ * value.
623
+ *
624
+ * @env: CPUARMState
625
+ * @address: virtual address to get physical address for
626
+ * @access_type: 0 for read, 1 for write, 2 for execute
627
+ * @mmu_idx: MMU index indicating required translation regime
628
+ * @phys_ptr: set to the physical address corresponding to the virtual address
629
+ * @attrs: set to the memory transaction attributes to use
630
+ * @prot: set to the permissions for the page containing phys_ptr
631
+ * @page_size: set to the size of the page containing phys_ptr
632
+ * @fi: set to fault info if the translation fails
633
+ * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
634
+ */
635
+bool get_phys_addr(CPUARMState *env, target_ulong address,
636
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
637
+ hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
638
+ target_ulong *page_size,
639
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
640
+{
641
+ ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
642
+
643
+ if (mmu_idx != s1_mmu_idx) {
644
+ /*
645
+ * Call ourselves recursively to do the stage 1 and then stage 2
646
+ * translations if mmu_idx is a two-stage regime.
647
+ */
648
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
649
+ hwaddr ipa;
650
+ int s2_prot;
651
+ int ret;
652
+ bool ipa_secure;
653
+ ARMCacheAttrs cacheattrs2 = {};
654
+ ARMMMUIdx s2_mmu_idx;
655
+ bool is_el0;
656
+
657
+ ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
658
+ attrs, prot, page_size, fi, cacheattrs);
659
+
660
+ /* If S1 fails or S2 is disabled, return early. */
661
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
662
+ *phys_ptr = ipa;
663
+ return ret;
664
+ }
665
+
666
+ ipa_secure = attrs->secure;
667
+ if (arm_is_secure_below_el3(env)) {
668
+ if (ipa_secure) {
669
+ attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
670
+ } else {
671
+ attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
672
+ }
673
+ } else {
674
+ assert(!ipa_secure);
675
+ }
676
+
677
+ s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
678
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
679
+
680
+ /* S1 is done. Now do S2 translation. */
681
+ ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
682
+ phys_ptr, attrs, &s2_prot,
683
+ page_size, fi, &cacheattrs2);
684
+ fi->s2addr = ipa;
685
+ /* Combine the S1 and S2 perms. */
686
+ *prot &= s2_prot;
687
+
688
+ /* If S2 fails, return early. */
689
+ if (ret) {
690
+ return ret;
691
+ }
692
+
693
+ /* Combine the S1 and S2 cache attributes. */
694
+ if (arm_hcr_el2_eff(env) & HCR_DC) {
695
+ /*
696
+ * HCR.DC forces the first stage attributes to
697
+ * Normal Non-Shareable,
698
+ * Inner Write-Back Read-Allocate Write-Allocate,
699
+ * Outer Write-Back Read-Allocate Write-Allocate.
700
+ * Do not overwrite Tagged within attrs.
701
+ */
702
+ if (cacheattrs->attrs != 0xf0) {
703
+ cacheattrs->attrs = 0xff;
704
+ }
705
+ cacheattrs->shareability = 0;
706
+ }
707
+ *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
708
+
709
+ /* Check if IPA translates to secure or non-secure PA space. */
710
+ if (arm_is_secure_below_el3(env)) {
711
+ if (ipa_secure) {
712
+ attrs->secure =
713
+ !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
714
+ } else {
715
+ attrs->secure =
716
+ !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
717
+ || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
718
+ }
719
+ }
720
+ return 0;
721
+ } else {
722
+ /*
723
+ * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
724
+ */
725
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
726
+ }
727
+ }
728
+
729
+ /*
730
+ * The page table entries may downgrade secure to non-secure, but
731
+ * cannot upgrade an non-secure translation regime's attributes
732
+ * to secure.
733
+ */
734
+ attrs->secure = regime_is_secure(env, mmu_idx);
735
+ attrs->user = regime_is_user(env, mmu_idx);
736
+
737
+ /*
738
+ * Fast Context Switch Extension. This doesn't exist at all in v8.
739
+ * In v7 and earlier it affects all stage 1 translations.
740
+ */
741
+ if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2
742
+ && !arm_feature(env, ARM_FEATURE_V8)) {
743
+ if (regime_el(env, mmu_idx) == 3) {
744
+ address += env->cp15.fcseidr_s;
745
+ } else {
746
+ address += env->cp15.fcseidr_ns;
747
+ }
748
+ }
749
+
750
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
751
+ bool ret;
752
+ *page_size = TARGET_PAGE_SIZE;
753
+
754
+ if (arm_feature(env, ARM_FEATURE_V8)) {
755
+ /* PMSAv8 */
756
+ ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
757
+ phys_ptr, attrs, prot, page_size, fi);
758
+ } else if (arm_feature(env, ARM_FEATURE_V7)) {
759
+ /* PMSAv7 */
760
+ ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
761
+ phys_ptr, prot, page_size, fi);
762
+ } else {
763
+ /* Pre-v7 MPU */
764
+ ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
765
+ phys_ptr, prot, fi);
766
+ }
767
+ qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
768
+ " mmu_idx %u -> %s (prot %c%c%c)\n",
769
+ access_type == MMU_DATA_LOAD ? "reading" :
770
+ (access_type == MMU_DATA_STORE ? "writing" : "execute"),
771
+ (uint32_t)address, mmu_idx,
772
+ ret ? "Miss" : "Hit",
773
+ *prot & PAGE_READ ? 'r' : '-',
774
+ *prot & PAGE_WRITE ? 'w' : '-',
775
+ *prot & PAGE_EXEC ? 'x' : '-');
776
+
777
+ return ret;
778
+ }
779
+
780
+ /* Definitely a real MMU, not an MPU */
781
+
782
+ if (regime_translation_disabled(env, mmu_idx)) {
783
+ uint64_t hcr;
784
+ uint8_t memattr;
785
+
786
+ /*
787
+ * MMU disabled. S1 addresses within aa64 translation regimes are
788
+ * still checked for bounds -- see AArch64.TranslateAddressS1Off.
789
+ */
790
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
791
+ int r_el = regime_el(env, mmu_idx);
792
+ if (arm_el_is_aa64(env, r_el)) {
793
+ int pamax = arm_pamax(env_archcpu(env));
794
+ uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr;
795
+ int addrtop, tbi;
796
+
797
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
798
+ if (access_type == MMU_INST_FETCH) {
799
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
800
+ }
801
+ tbi = (tbi >> extract64(address, 55, 1)) & 1;
802
+ addrtop = (tbi ? 55 : 63);
803
+
804
+ if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
805
+ fi->type = ARMFault_AddressSize;
806
+ fi->level = 0;
807
+ fi->stage2 = false;
808
+ return 1;
809
+ }
810
+
811
+ /*
812
+ * When TBI is disabled, we've just validated that all of the
813
+ * bits above PAMax are zero, so logically we only need to
814
+ * clear the top byte for TBI. But it's clearer to follow
815
+ * the pseudocode set of addrdesc.paddress.
816
+ */
817
+ address = extract64(address, 0, 52);
818
+ }
819
+ }
820
+ *phys_ptr = address;
821
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
822
+ *page_size = TARGET_PAGE_SIZE;
823
+
824
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
825
+ hcr = arm_hcr_el2_eff(env);
826
+ cacheattrs->shareability = 0;
827
+ cacheattrs->is_s2_format = false;
828
+ if (hcr & HCR_DC) {
829
+ if (hcr & HCR_DCT) {
830
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
831
+ } else {
832
+ memattr = 0xff; /* Normal, WB, RWA */
833
+ }
834
+ } else if (access_type == MMU_INST_FETCH) {
835
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
836
+ memattr = 0xee; /* Normal, WT, RA, NT */
837
+ } else {
838
+ memattr = 0x44; /* Normal, NC, No */
839
+ }
840
+ cacheattrs->shareability = 2; /* outer sharable */
841
+ } else {
842
+ memattr = 0x00; /* Device, nGnRnE */
843
+ }
844
+ cacheattrs->attrs = memattr;
845
+ return 0;
846
+ }
847
+
848
+ if (regime_using_lpae_format(env, mmu_idx)) {
849
+ return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
850
+ phys_ptr, attrs, prot, page_size,
851
+ fi, cacheattrs);
852
+ } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
853
+ return get_phys_addr_v6(env, address, access_type, mmu_idx,
854
+ phys_ptr, attrs, prot, page_size, fi);
855
+ } else {
856
+ return get_phys_addr_v5(env, address, access_type, mmu_idx,
857
+ phys_ptr, prot, page_size, fi);
858
+ }
859
+}
860
diff --git a/target/arm/meson.build b/target/arm/meson.build
861
index XXXXXXX..XXXXXXX 100644
862
--- a/target/arm/meson.build
863
+++ b/target/arm/meson.build
864
@@ -XXX,XX +XXX,XX @@ arm_softmmu_ss.add(files(
865
'machine.c',
866
'monitor.c',
867
'psci.c',
868
+ 'ptw.c',
869
))
870
871
subdir('hvf')
872
--
2.25.1
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

This function has one private helper, v8m_is_sau_exempt,
so move that at the same time.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 123 ------------------------------------------
 target/arm/ptw.c | 126 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 126 insertions(+), 123 deletions(-)

When EL1 doesn't support AArch32, the HCR_EL2.RW bit is supposed to
be RAO/WI. Enforce the RAO/WI behaviour.

Note that we handle "reset value should honour RES1 bits" in the same
way that SCR_EL3 does, via a reset function.

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
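
As a quick illustration of what RAO/WI (read-as-one, writes ignored)
amounts to in practice, here is a standalone sketch. It is not the
hcr_write()/hcr_reset() code in the patch below: HCR_RW is defined here
only to mirror the architectural bit position, and el1_has_aa32 stands in
for the cpu_isar_feature(aa64_aa32_el1, cpu) check.

#include <stdbool.h>
#include <stdint.h>

#define HCR_RW (1ULL << 31)   /* HCR_EL2.RW */

/*
 * RAO/WI: the guest's write to the bit is ignored and reads always see 1,
 * which is equivalent to forcing the bit to 1 on every write and at reset.
 */
static uint64_t hcr_value_after_write(uint64_t written, bool el1_has_aa32)
{
    if (!el1_has_aa32) {
        written |= HCR_RW;
    }
    return written;
}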
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
22
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
23
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
20
#include "qemu/osdep.h"
25
/* Clear RES0 bits. */
21
#include "qemu/units.h"
26
value &= valid_mask;
22
#include "qemu/log.h"
27
23
-#include "target/arm/idau.h"
28
+ /* RW is RAO/WI if EL1 is AArch64 only */
24
#include "trace.h"
29
+ if (!cpu_isar_feature(aa64_aa32_el1, cpu)) {
25
#include "cpu.h"
30
+ value |= HCR_RW;
26
#include "internals.h"
31
+ }
27
@@ -XXX,XX +XXX,XX @@ bool m_is_system_region(CPUARMState *env, uint32_t address)
32
+
28
return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
33
/*
34
* These bits change the MMU setup:
35
* HCR_VM enables stage 2 translation
36
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
37
do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32));
29
}
38
}
30
39
31
-static bool v8m_is_sau_exempt(CPUARMState *env,
40
+static void hcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
32
- uint32_t address, MMUAccessType access_type)
33
-{
34
- /* The architecture specifies that certain address ranges are
35
- * exempt from v8M SAU/IDAU checks.
36
- */
37
- return
38
- (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) ||
39
- (address >= 0xe0000000 && address <= 0xe0002fff) ||
40
- (address >= 0xe000e000 && address <= 0xe000efff) ||
41
- (address >= 0xe002e000 && address <= 0xe002efff) ||
42
- (address >= 0xe0040000 && address <= 0xe0041fff) ||
43
- (address >= 0xe00ff000 && address <= 0xe00fffff);
44
-}
45
-
46
-void v8m_security_lookup(CPUARMState *env, uint32_t address,
47
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
48
- V8M_SAttributes *sattrs)
49
-{
50
- /* Look up the security attributes for this address. Compare the
51
- * pseudocode SecurityCheck() function.
52
- * We assume the caller has zero-initialized *sattrs.
53
- */
54
- ARMCPU *cpu = env_archcpu(env);
55
- int r;
56
- bool idau_exempt = false, idau_ns = true, idau_nsc = true;
57
- int idau_region = IREGION_NOTVALID;
58
- uint32_t addr_page_base = address & TARGET_PAGE_MASK;
59
- uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
60
-
61
- if (cpu->idau) {
62
- IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
63
- IDAUInterface *ii = IDAU_INTERFACE(cpu->idau);
64
-
65
- iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns,
66
- &idau_nsc);
67
- }
68
-
69
- if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
70
- /* 0xf0000000..0xffffffff is always S for insn fetches */
71
- return;
72
- }
73
-
74
- if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) {
75
- sattrs->ns = !regime_is_secure(env, mmu_idx);
76
- return;
77
- }
78
-
79
- if (idau_region != IREGION_NOTVALID) {
80
- sattrs->irvalid = true;
81
- sattrs->iregion = idau_region;
82
- }
83
-
84
- switch (env->sau.ctrl & 3) {
85
- case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
86
- break;
87
- case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */
88
- sattrs->ns = true;
89
- break;
90
- default: /* SAU.ENABLE == 1 */
91
- for (r = 0; r < cpu->sau_sregion; r++) {
92
- if (env->sau.rlar[r] & 1) {
93
- uint32_t base = env->sau.rbar[r] & ~0x1f;
94
- uint32_t limit = env->sau.rlar[r] | 0x1f;
95
-
96
- if (base <= address && limit >= address) {
97
- if (base > addr_page_base || limit < addr_page_limit) {
98
- sattrs->subpage = true;
99
- }
100
- if (sattrs->srvalid) {
101
- /* If we hit in more than one region then we must report
102
- * as Secure, not NS-Callable, with no valid region
103
- * number info.
104
- */
105
- sattrs->ns = false;
106
- sattrs->nsc = false;
107
- sattrs->sregion = 0;
108
- sattrs->srvalid = false;
109
- break;
110
- } else {
111
- if (env->sau.rlar[r] & 2) {
112
- sattrs->nsc = true;
113
- } else {
114
- sattrs->ns = true;
115
- }
116
- sattrs->srvalid = true;
117
- sattrs->sregion = r;
118
- }
119
- } else {
120
- /*
121
- * Address not in this region. We must check whether the
122
- * region covers addresses in the same page as our address.
123
- * In that case we must not report a size that covers the
124
- * whole page for a subsequent hit against a different MPU
125
- * region or the background region, because it would result
126
- * in incorrect TLB hits for subsequent accesses to
127
- * addresses that are in this MPU region.
128
- */
129
- if (limit >= base &&
130
- ranges_overlap(base, limit - base + 1,
131
- addr_page_base,
132
- TARGET_PAGE_SIZE)) {
133
- sattrs->subpage = true;
134
- }
135
- }
136
- }
137
- }
138
- break;
139
- }
140
-
141
- /*
142
- * The IDAU will override the SAU lookup results if it specifies
143
- * higher security than the SAU does.
144
- */
145
- if (!idau_ns) {
146
- if (sattrs->ns || (!idau_nsc && sattrs->nsc)) {
147
- sattrs->ns = false;
148
- sattrs->nsc = idau_nsc;
149
- }
150
- }
151
-}
152
-
153
/* Combine either inner or outer cacheability attributes for normal
154
* memory, according to table D4-42 and pseudocode procedure
155
* CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
156
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/ptw.c
159
+++ b/target/arm/ptw.c
160
@@ -XXX,XX +XXX,XX @@
161
#include "qemu/range.h"
162
#include "cpu.h"
163
#include "internals.h"
164
+#include "idau.h"
165
#include "ptw.h"
166
167
168
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
169
return !(*prot & (1 << access_type));
170
}
171
172
+static bool v8m_is_sau_exempt(CPUARMState *env,
173
+ uint32_t address, MMUAccessType access_type)
174
+{
41
+{
175
+ /*
42
+ /* hcr_write will set the RES1 bits on an AArch64-only CPU */
176
+ * The architecture specifies that certain address ranges are
43
+ hcr_write(env, ri, 0);
177
+ * exempt from v8M SAU/IDAU checks.
178
+ */
179
+ return
180
+ (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) ||
181
+ (address >= 0xe0000000 && address <= 0xe0002fff) ||
182
+ (address >= 0xe000e000 && address <= 0xe000efff) ||
183
+ (address >= 0xe002e000 && address <= 0xe002efff) ||
184
+ (address >= 0xe0040000 && address <= 0xe0041fff) ||
185
+ (address >= 0xe00ff000 && address <= 0xe00fffff);
186
+}
44
+}
187
+
45
+
188
+void v8m_security_lookup(CPUARMState *env, uint32_t address,
46
/*
189
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
47
* Return the effective value of HCR_EL2, at the given security state.
190
+ V8M_SAttributes *sattrs)
48
* Bits that are not included here:
191
+{
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
192
+ /*
50
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
193
+ * Look up the security attributes for this address. Compare the
51
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
194
+ * pseudocode SecurityCheck() function.
52
.nv2_redirect_offset = 0x78,
195
+ * We assume the caller has zero-initialized *sattrs.
53
+ .resetfn = hcr_reset,
196
+ */
54
.writefn = hcr_write, .raw_writefn = raw_write },
197
+ ARMCPU *cpu = env_archcpu(env);
55
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
198
+ int r;
56
.type = ARM_CP_ALIAS | ARM_CP_IO,
199
+ bool idau_exempt = false, idau_ns = true, idau_nsc = true;
200
+ int idau_region = IREGION_NOTVALID;
201
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
202
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
203
+
204
+ if (cpu->idau) {
205
+ IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
206
+ IDAUInterface *ii = IDAU_INTERFACE(cpu->idau);
207
+
208
+ iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns,
209
+ &idau_nsc);
210
+ }
211
+
212
+ if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
213
+ /* 0xf0000000..0xffffffff is always S for insn fetches */
214
+ return;
215
+ }
216
+
217
+ if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) {
218
+ sattrs->ns = !regime_is_secure(env, mmu_idx);
219
+ return;
220
+ }
221
+
222
+ if (idau_region != IREGION_NOTVALID) {
223
+ sattrs->irvalid = true;
224
+ sattrs->iregion = idau_region;
225
+ }
226
+
227
+ switch (env->sau.ctrl & 3) {
228
+ case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
229
+ break;
230
+ case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */
231
+ sattrs->ns = true;
232
+ break;
233
+ default: /* SAU.ENABLE == 1 */
234
+ for (r = 0; r < cpu->sau_sregion; r++) {
235
+ if (env->sau.rlar[r] & 1) {
236
+ uint32_t base = env->sau.rbar[r] & ~0x1f;
237
+ uint32_t limit = env->sau.rlar[r] | 0x1f;
238
+
239
+ if (base <= address && limit >= address) {
240
+ if (base > addr_page_base || limit < addr_page_limit) {
241
+ sattrs->subpage = true;
242
+ }
243
+ if (sattrs->srvalid) {
244
+ /*
245
+ * If we hit in more than one region then we must report
246
+ * as Secure, not NS-Callable, with no valid region
247
+ * number info.
248
+ */
249
+ sattrs->ns = false;
250
+ sattrs->nsc = false;
251
+ sattrs->sregion = 0;
252
+ sattrs->srvalid = false;
253
+ break;
254
+ } else {
255
+ if (env->sau.rlar[r] & 2) {
256
+ sattrs->nsc = true;
257
+ } else {
258
+ sattrs->ns = true;
259
+ }
260
+ sattrs->srvalid = true;
261
+ sattrs->sregion = r;
262
+ }
263
+ } else {
264
+ /*
265
+ * Address not in this region. We must check whether the
266
+ * region covers addresses in the same page as our address.
267
+ * In that case we must not report a size that covers the
268
+ * whole page for a subsequent hit against a different MPU
269
+ * region or the background region, because it would result
270
+ * in incorrect TLB hits for subsequent accesses to
271
+ * addresses that are in this MPU region.
272
+ */
273
+ if (limit >= base &&
274
+ ranges_overlap(base, limit - base + 1,
275
+ addr_page_base,
276
+ TARGET_PAGE_SIZE)) {
277
+ sattrs->subpage = true;
278
+ }
279
+ }
280
+ }
281
+ }
282
+ break;
283
+ }
284
+
285
+ /*
286
+ * The IDAU will override the SAU lookup results if it specifies
287
+ * higher security than the SAU does.
288
+ */
289
+ if (!idau_ns) {
290
+ if (sattrs->ns || (!idau_nsc && sattrs->nsc)) {
291
+ sattrs->ns = false;
292
+ sattrs->nsc = idau_nsc;
293
+ }
294
+ }
295
+}
296
+
297
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
298
MMUAccessType access_type, ARMMMUIdx mmu_idx,
299
hwaddr *phys_ptr, MemTxAttrs *txattrs,
300
--
2.25.1
--
2.43.0
We already call env_archcpu() multiple times within the
exception_return helper function, and we're about to want to
add another use of the ARMCPU pointer. Add a local variable
cpu so we can call env_archcpu() just once.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/helper-a64.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c

From: Richard Henderson <richard.henderson@linaro.org>

The bitmap need only hold 15 bits; bitmap is over-complicated.
We can simplify operations quite a bit with plain logical ops.

The introduction of SVE_VQ_POW2_MAP eliminates the need for
looping in order to search for powers of two. Simply perform
the logical ops and use count leading or trailing zeros as
required to find the result.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 6 +--
 target/arm/internals.h | 5 ++
 target/arm/kvm_arm.h | 7 ++-
 target/arm/cpu64.c | 117 ++++++++++++++---------------
 target/arm/helper.c | 9 +---
 target/arm/kvm64.c | 36 +++----------
 6 files changed, 75 insertions(+), 105 deletions(-)
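
To see why a plain uint32_t plus count-leading/trailing-zeros is enough
here, the toy program below walks the same steps on a made-up vector
length map. It is only a sketch: __builtin_clz stands in for QEMU's
clz32(), and the mask is open-coded rather than using MAKE_64BIT_MASK().
SVE_VQ_POW2_MAP itself is the macro added by the patch below.

#include <stdint.h>
#include <stdio.h>

/* Powers of two among vq = 1..16, as (1 << (vq - 1)) bits */
#define SVE_VQ_POW2_MAP \
    ((1 << (1 - 1)) | (1 << (2 - 1)) | \
     (1 << (4 - 1)) | (1 << (8 - 1)) | (1 << (16 - 1)))

/* Highest enabled vq: "find last set bit" becomes 32 - clz */
static int max_vq_of(uint32_t vq_map)
{
    return vq_map ? 32 - __builtin_clz(vq_map) : 0;
}

int main(void)
{
    uint32_t vq_map = (1 << (4 - 1)) | (1 << (2 - 1)); /* vq 2 and 4 enabled */
    uint32_t vq_init = 0;                              /* nothing explicitly disabled */
    int max_vq = max_vq_of(vq_map);
    uint32_t vq_mask = max_vq ? 0xffffffffu >> (32 - max_vq) : 0;

    /* Propagate enabled bits down through required powers of two */
    vq_map |= SVE_VQ_POW2_MAP & ~vq_init & vq_mask;

    printf("max vq = %d, map = 0x%x\n", max_vq, (unsigned)vq_map);
    return 0;
}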
23
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/helper-a64.c
27
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/helper-a64.c
28
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
16
@@ -XXX,XX +XXX,XX @@ static void cpsr_write_from_spsr_elx(CPUARMState *env,
29
* Bits set in sve_vq_supported represent valid vector lengths for
17
30
* the CPU type.
18
void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
31
*/
32
- DECLARE_BITMAP(sve_vq_map, ARM_MAX_VQ);
33
- DECLARE_BITMAP(sve_vq_init, ARM_MAX_VQ);
34
- DECLARE_BITMAP(sve_vq_supported, ARM_MAX_VQ);
35
+ uint32_t sve_vq_map;
36
+ uint32_t sve_vq_init;
37
+ uint32_t sve_vq_supported;
38
39
/* Generic timer counter frequency, in Hz */
40
uint64_t gt_cntfrq_hz;
41
diff --git a/target/arm/internals.h b/target/arm/internals.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/internals.h
44
+++ b/target/arm/internals.h
45
@@ -XXX,XX +XXX,XX @@ bool el_is_in_host(CPUARMState *env, int el);
46
47
void aa32_max_features(ARMCPU *cpu);
48
49
+/* Powers of 2 for sve_vq_map et al. */
50
+#define SVE_VQ_POW2_MAP \
51
+ ((1 << (1 - 1)) | (1 << (2 - 1)) | \
52
+ (1 << (4 - 1)) | (1 << (8 - 1)) | (1 << (16 - 1)))
53
+
54
#endif
55
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/kvm_arm.h
58
+++ b/target/arm/kvm_arm.h
59
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
60
/**
61
* kvm_arm_sve_get_vls:
62
* @cs: CPUState
63
- * @map: bitmap to fill in
64
*
65
* Get all the SVE vector lengths supported by the KVM host, setting
66
* the bits corresponding to their length in quadwords minus one
67
- * (vq - 1) in @map up to ARM_MAX_VQ.
68
+ * (vq - 1) up to ARM_MAX_VQ. Return the resulting map.
69
*/
70
-void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map);
71
+uint32_t kvm_arm_sve_get_vls(CPUState *cs);
72
73
/**
74
* kvm_arm_set_cpu_features_from_host:
75
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
76
g_assert_not_reached();
77
}
78
79
-static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
80
+static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
81
{
19
{
82
g_assert_not_reached();
20
+ ARMCPU *cpu = env_archcpu(env);
83
}
21
int cur_el = arm_current_el(env);
84
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
22
unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
85
index XXXXXXX..XXXXXXX 100644
23
uint32_t spsr = env->banked_spsr[spsr_idx];
86
--- a/target/arm/cpu64.c
24
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
87
+++ b/target/arm/cpu64.c
88
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
89
* any of the above. Finally, if SVE is not disabled, then at least one
90
* vector length must be enabled.
91
*/
92
- DECLARE_BITMAP(tmp, ARM_MAX_VQ);
93
- uint32_t vq, max_vq = 0;
94
+ uint32_t vq_map = cpu->sve_vq_map;
95
+ uint32_t vq_init = cpu->sve_vq_init;
96
+ uint32_t vq_supported;
97
+ uint32_t vq_mask = 0;
98
+ uint32_t tmp, vq, max_vq = 0;
99
100
/*
101
* CPU models specify a set of supported vector lengths which are
102
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
103
* in the supported bitmap results in an error. When KVM is enabled we
104
* fetch the supported bitmap from the host.
105
*/
106
- if (kvm_enabled() && kvm_arm_sve_supported()) {
107
- kvm_arm_sve_get_vls(CPU(cpu), cpu->sve_vq_supported);
108
- } else if (kvm_enabled()) {
109
- assert(!cpu_isar_feature(aa64_sve, cpu));
110
+ if (kvm_enabled()) {
111
+ if (kvm_arm_sve_supported()) {
112
+ cpu->sve_vq_supported = kvm_arm_sve_get_vls(CPU(cpu));
113
+ vq_supported = cpu->sve_vq_supported;
114
+ } else {
115
+ assert(!cpu_isar_feature(aa64_sve, cpu));
116
+ vq_supported = 0;
117
+ }
118
+ } else {
119
+ vq_supported = cpu->sve_vq_supported;
120
}
25
}
121
26
122
/*
27
bql_lock();
123
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
28
- arm_call_pre_el_change_hook(env_archcpu(env));
124
* From the properties, sve_vq_map<N> implies sve_vq_init<N>.
29
+ arm_call_pre_el_change_hook(cpu);
125
* Check first for any sve<N> enabled.
30
bql_unlock();
126
*/
31
127
- if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) {
32
if (!return_to_aa64) {
128
- max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1;
33
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
129
+ if (vq_map != 0) {
34
int tbii;
130
+ max_vq = 32 - clz32(vq_map);
35
131
+ vq_mask = MAKE_64BIT_MASK(0, max_vq);
36
env->aarch64 = true;
132
37
- spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
133
if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) {
38
+ spsr &= aarch64_pstate_valid_mask(&cpu->isar);
134
error_setg(errp, "cannot enable sve%d", max_vq * 128);
39
pstate_write(env, spsr);
135
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
40
if (!arm_singlestep_active(env)) {
136
* For KVM we have to automatically enable all supported unitialized
41
env->pstate &= ~PSTATE_SS;
137
* lengths, even when the smaller lengths are not all powers-of-two.
42
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
138
*/
43
aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64);
139
- bitmap_andnot(tmp, cpu->sve_vq_supported, cpu->sve_vq_init, max_vq);
44
140
- bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
45
bql_lock();
141
+ vq_map |= vq_supported & ~vq_init & vq_mask;
46
- arm_call_el_change_hook(env_archcpu(env));
142
} else {
47
+ arm_call_el_change_hook(cpu);
143
/* Propagate enabled bits down through required powers-of-two. */
48
bql_unlock();
144
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
49
145
- if (!test_bit(vq - 1, cpu->sve_vq_init)) {
50
return;
146
- set_bit(vq - 1, cpu->sve_vq_map);
147
- }
148
- }
149
+ vq_map |= SVE_VQ_POW2_MAP & ~vq_init & vq_mask;
150
}
151
} else if (cpu->sve_max_vq == 0) {
152
/*
153
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
154
155
if (kvm_enabled()) {
156
/* Disabling a supported length disables all larger lengths. */
157
- for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
158
- if (test_bit(vq - 1, cpu->sve_vq_init) &&
159
- test_bit(vq - 1, cpu->sve_vq_supported)) {
160
- break;
161
- }
162
- }
163
+ tmp = vq_init & vq_supported;
164
} else {
165
/* Disabling a power-of-two disables all larger lengths. */
166
- for (vq = 1; vq <= ARM_MAX_VQ; vq <<= 1) {
167
- if (test_bit(vq - 1, cpu->sve_vq_init)) {
168
- break;
169
- }
170
- }
171
+ tmp = vq_init & SVE_VQ_POW2_MAP;
172
}
173
+ vq = ctz32(tmp) + 1;
174
175
max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
176
- bitmap_andnot(cpu->sve_vq_map, cpu->sve_vq_supported,
177
- cpu->sve_vq_init, max_vq);
178
- if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) {
179
+ vq_mask = MAKE_64BIT_MASK(0, max_vq);
180
+ vq_map = vq_supported & ~vq_init & vq_mask;
181
+
182
+ if (max_vq == 0 || vq_map == 0) {
183
error_setg(errp, "cannot disable sve%d", vq * 128);
184
error_append_hint(errp, "Disabling sve%d results in all "
185
"vector lengths being disabled.\n",
186
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
187
return;
188
}
189
190
- max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1;
191
+ max_vq = 32 - clz32(vq_map);
192
+ vq_mask = MAKE_64BIT_MASK(0, max_vq);
193
}
194
195
/*
196
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
197
*/
198
if (cpu->sve_max_vq != 0) {
199
max_vq = cpu->sve_max_vq;
200
+ vq_mask = MAKE_64BIT_MASK(0, max_vq);
201
202
- if (!test_bit(max_vq - 1, cpu->sve_vq_map) &&
203
- test_bit(max_vq - 1, cpu->sve_vq_init)) {
204
+ if (vq_init & ~vq_map & (1 << (max_vq - 1))) {
205
error_setg(errp, "cannot disable sve%d", max_vq * 128);
206
error_append_hint(errp, "The maximum vector length must be "
207
"enabled, sve-max-vq=%d (%d bits)\n",
208
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
209
}
210
211
/* Set all bits not explicitly set within sve-max-vq. */
212
- bitmap_complement(tmp, cpu->sve_vq_init, max_vq);
213
- bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
214
+ vq_map |= ~vq_init & vq_mask;
215
}
216
217
/*
218
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
219
* are clear, just in case anybody looks.
220
*/
221
assert(max_vq != 0);
222
- bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq);
223
+ assert(vq_mask != 0);
224
+ vq_map &= vq_mask;
225
226
/* Ensure the set of lengths matches what is supported. */
227
- bitmap_xor(tmp, cpu->sve_vq_map, cpu->sve_vq_supported, max_vq);
228
- if (!bitmap_empty(tmp, max_vq)) {
229
- vq = find_last_bit(tmp, max_vq) + 1;
230
- if (test_bit(vq - 1, cpu->sve_vq_map)) {
231
+ tmp = vq_map ^ (vq_supported & vq_mask);
232
+ if (tmp) {
233
+ vq = 32 - clz32(tmp);
234
+ if (vq_map & (1 << (vq - 1))) {
235
if (cpu->sve_max_vq) {
236
error_setg(errp, "cannot set sve-max-vq=%d", cpu->sve_max_vq);
237
error_append_hint(errp, "This CPU does not support "
238
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
239
return;
240
} else {
241
/* Ensure all required powers-of-two are enabled. */
242
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
243
- if (!test_bit(vq - 1, cpu->sve_vq_map)) {
244
- error_setg(errp, "cannot disable sve%d", vq * 128);
245
- error_append_hint(errp, "sve%d is required as it "
246
- "is a power-of-two length smaller "
247
- "than the maximum, sve%d\n",
248
- vq * 128, max_vq * 128);
249
- return;
250
- }
251
+ tmp = SVE_VQ_POW2_MAP & vq_mask & ~vq_map;
252
+ if (tmp) {
253
+ vq = 32 - clz32(tmp);
254
+ error_setg(errp, "cannot disable sve%d", vq * 128);
255
+ error_append_hint(errp, "sve%d is required as it "
256
+ "is a power-of-two length smaller "
257
+ "than the maximum, sve%d\n",
258
+ vq * 128, max_vq * 128);
259
+ return;
260
}
261
}
262
}
263
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
264
265
/* From now on sve_max_vq is the actual maximum supported length. */
266
cpu->sve_max_vq = max_vq;
267
+ cpu->sve_vq_map = vq_map;
268
}
269
270
static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
271
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
272
if (!cpu_isar_feature(aa64_sve, cpu)) {
273
value = false;
274
} else {
275
- value = test_bit(vq - 1, cpu->sve_vq_map);
276
+ value = extract32(cpu->sve_vq_map, vq - 1, 1);
277
}
278
visit_type_bool(v, name, &value, errp);
279
}
280
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
281
return;
282
}
283
284
- if (value) {
285
- set_bit(vq - 1, cpu->sve_vq_map);
286
- } else {
287
- clear_bit(vq - 1, cpu->sve_vq_map);
288
- }
289
- set_bit(vq - 1, cpu->sve_vq_init);
290
+ cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
291
+ cpu->sve_vq_init |= 1 << (vq - 1);
292
}
293
294
static bool cpu_arm_get_sve(Object *obj, Error **errp)
295
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
296
cpu->dcz_blocksize = 7; /* 512 bytes */
297
#endif
298
299
- bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);
300
+ cpu->sve_vq_supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
301
302
aarch64_add_pauth_properties(obj);
303
aarch64_add_sve_properties(obj);
304
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
305
cpu->gic_vprebits = 5;
306
cpu->gic_pribits = 5;
307
308
- /* Suppport of A64FX's vector length are 128,256 and 512bit only */
309
+ /* The A64FX supports only 128, 256 and 512 bit vector lengths */
310
aarch64_add_sve_properties(obj);
311
- bitmap_zero(cpu->sve_vq_supported, ARM_MAX_VQ);
312
- set_bit(0, cpu->sve_vq_supported); /* 128bit */
313
- set_bit(1, cpu->sve_vq_supported); /* 256bit */
314
- set_bit(3, cpu->sve_vq_supported); /* 512bit */
315
+ cpu->sve_vq_supported = (1 << 0) /* 128bit */
316
+ | (1 << 1) /* 256bit */
317
+ | (1 << 3); /* 512bit */
318
319
cpu->isar.reset_pmcr_el0 = 0x46014040;
320
321
diff --git a/target/arm/helper.c b/target/arm/helper.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/helper.c
324
+++ b/target/arm/helper.c
325
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
326
{
327
ARMCPU *cpu = env_archcpu(env);
328
uint32_t len = cpu->sve_max_vq - 1;
329
- uint32_t end_len;
330
331
if (el <= 1 && !el_is_in_host(env, el)) {
332
len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
333
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
334
len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
335
}
336
337
- end_len = len;
338
- if (!test_bit(len, cpu->sve_vq_map)) {
339
- end_len = find_last_bit(cpu->sve_vq_map, len);
340
- assert(end_len < len);
341
- }
342
- return end_len;
343
+ len = 31 - clz32(cpu->sve_vq_map & MAKE_64BIT_MASK(0, len + 1));
344
+ return len;
345
}
346
347
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
348
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
349
index XXXXXXX..XXXXXXX 100644
350
--- a/target/arm/kvm64.c
351
+++ b/target/arm/kvm64.c
352
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_steal_time_supported(void)
353
354
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
355
356
-void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
357
+uint32_t kvm_arm_sve_get_vls(CPUState *cs)
358
{
359
/* Only call this function if kvm_arm_sve_supported() returns true. */
360
static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
361
static bool probed;
362
uint32_t vq = 0;
363
- int i, j;
364
-
365
- bitmap_zero(map, ARM_MAX_VQ);
366
+ int i;
367
368
/*
369
* KVM ensures all host CPUs support the same set of vector lengths.
370
@@ -XXX,XX +XXX,XX @@ void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
371
if (vq > ARM_MAX_VQ) {
372
warn_report("KVM supports vector lengths larger than "
373
"QEMU can enable");
374
+ vls[0] &= MAKE_64BIT_MASK(0, ARM_MAX_VQ);
375
}
376
}
377
378
- for (i = 0; i < KVM_ARM64_SVE_VLS_WORDS; ++i) {
379
- if (!vls[i]) {
380
- continue;
381
- }
382
- for (j = 1; j <= 64; ++j) {
383
- vq = j + i * 64;
384
- if (vq > ARM_MAX_VQ) {
385
- return;
386
- }
387
- if (vls[i] & (1UL << (j - 1))) {
388
- set_bit(vq - 1, map);
389
- }
390
- }
391
- }
392
+ return vls[0];
393
}
394
395
static int kvm_arm_sve_set_vls(CPUState *cs)
396
{
397
- uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = {0};
398
+ ARMCPU *cpu = ARM_CPU(cs);
399
+ uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq_map };
400
struct kvm_one_reg reg = {
401
.id = KVM_REG_ARM64_SVE_VLS,
402
.addr = (uint64_t)&vls[0],
403
};
404
- ARMCPU *cpu = ARM_CPU(cs);
405
- uint32_t vq;
406
- int i, j;
407
408
assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
409
410
- for (vq = 1; vq <= cpu->sve_max_vq; ++vq) {
411
- if (test_bit(vq - 1, cpu->sve_vq_map)) {
412
- i = (vq - 1) / 64;
413
- j = (vq - 1) % 64;
414
- vls[i] |= 1UL << j;
415
- }
416
- }
417
-
418
return kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
419
}
420
421
--
51
--
422
2.25.1
52
2.43.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In the Arm ARM, rule R_TYTWB states that returning to AArch32
2
is an illegal exception return if:
3
* AArch32 is not supported at any exception level
4
* the target EL is configured for AArch64 via SCR_EL3.RW
5
or HCR_EL2.RW or via CPU state at reset
2
6
3
With SME, the vector length does not only come from ZCR_ELx.
7
We check the second of these, but not the first (which can only be
4
Comment that this is either NVL or SVL, like the pseudocode.
8
relevant for the case of a return to EL0, because if AArch32 is not
9
supported at one of the higher ELs then the RW bits will have an
10
effective value of 1 and the "configured for AArch64" condition
11
will hold also).
5
12
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Add the missing condition. Although this is technically a bug
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
(because we have one AArch64-only CPU: a64fx) it isn't worth
8
Message-id: 20220607203306.657998-2-richard.henderson@linaro.org
15
backporting to stable because no sensible guest code will
16
deliberately try to return to a nonexistent execution state
17
to check that it gets an illegal exception return.
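As a rough sketch of rule R_TYTWB (not the exact code in the diff below; the wrapper name is made up for illustration, while cpu_isar_feature(aa64_aa32, ...) and arm_el_is_aa64() are existing helpers):

    /*
     * Illustrative only: a return to AArch32 is an illegal exception
     * return if AArch32 is not implemented at all, or if the target EL
     * is configured for AArch64 (SCR_EL3.RW / HCR_EL2.RW / reset state).
     */
    static bool aa32_return_is_illegal(CPUARMState *env, ARMCPU *cpu, int new_el)
    {
        if (!cpu_isar_feature(aa64_aa32, cpu)) {
            return true;                      /* no AArch32 at any EL */
        }
        return arm_el_is_aa64(env, new_el);   /* target EL runs AArch64 */
    }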
18
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
---
21
---
11
target/arm/cpu.h | 3 ++-
22
target/arm/tcg/helper-a64.c | 5 +++++
12
target/arm/translate-a64.h | 2 +-
23
1 file changed, 5 insertions(+)
13
target/arm/translate.h | 2 +-
14
target/arm/helper.c | 2 +-
15
target/arm/translate-a64.c | 2 +-
16
target/arm/translate-sve.c | 2 +-
17
6 files changed, 7 insertions(+), 6 deletions(-)
18
24
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
20
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
27
--- a/target/arm/tcg/helper-a64.c
22
+++ b/target/arm/cpu.h
28
+++ b/target/arm/tcg/helper-a64.c
23
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
29
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
24
*/
30
goto illegal_return;
25
FIELD(TBFLAG_A64, TBII, 0, 2)
26
FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
27
-FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
28
+/* The current vector length, either NVL or SVL. */
29
+FIELD(TBFLAG_A64, VL, 4, 4)
30
FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
31
FIELD(TBFLAG_A64, BT, 9, 1)
32
FIELD(TBFLAG_A64, BTYPE, 10, 2) /* Not cached. */
33
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-a64.h
36
+++ b/target/arm/translate-a64.h
37
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
38
/* Return the byte size of the "whole" vector register, VL / 8. */
39
static inline int vec_full_reg_size(DisasContext *s)
40
{
41
- return s->sve_len;
42
+ return s->vl;
43
}
44
45
bool disas_sve(DisasContext *, uint32_t);
46
diff --git a/target/arm/translate.h b/target/arm/translate.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate.h
49
+++ b/target/arm/translate.h
50
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
51
bool ns; /* Use non-secure CPREG bank on access */
52
int fp_excp_el; /* FP exception EL or 0 if enabled */
53
int sve_excp_el; /* SVE exception EL or 0 if enabled */
54
- int sve_len; /* SVE vector length in bytes */
55
+ int vl; /* current vector length in bytes */
56
/* Flag indicating that exceptions from secure mode are routed to EL3. */
57
bool secure_routed_to_el3;
58
bool vfp_enabled; /* FP enabled via FPSCR.EN */
59
diff --git a/target/arm/helper.c b/target/arm/helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/helper.c
62
+++ b/target/arm/helper.c
63
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
64
zcr_len = sve_zcr_len_for_el(env, el);
65
}
66
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
67
- DP_TBFLAG_A64(flags, ZCR_LEN, zcr_len);
68
+ DP_TBFLAG_A64(flags, VL, zcr_len);
69
}
31
}
70
32
71
sctlr = regime_sctlr(env, stage1);
33
+ if (!return_to_aa64 && !cpu_isar_feature(aa64_aa32, cpu)) {
72
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
34
+ /* Return to AArch32 when CPU is AArch64-only */
73
index XXXXXXX..XXXXXXX 100644
35
+ goto illegal_return;
74
--- a/target/arm/translate-a64.c
36
+ }
75
+++ b/target/arm/translate-a64.c
37
+
76
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
38
if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
77
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
39
goto illegal_return;
78
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
40
}
79
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
80
- dc->sve_len = (EX_TBFLAG_A64(tb_flags, ZCR_LEN) + 1) * 16;
81
+ dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
82
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
83
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
84
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
85
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/arm/translate-sve.c
88
+++ b/target/arm/translate-sve.c
89
@@ -XXX,XX +XXX,XX @@ static inline int pred_full_reg_offset(DisasContext *s, int regno)
90
/* Return the byte size of the whole predicate register, VL / 64. */
91
static inline int pred_full_reg_size(DisasContext *s)
92
{
93
- return s->sve_len >> 3;
94
+ return s->vl >> 3;
95
}
96
97
/* Round up the size of a register to a size allowed by
98
--
41
--
99
2.25.1
42
2.43.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
I'm down as the only listed maintainer for quite a lot of Arm SoC and
2
board types. In some cases this is only as the "maintainer of last
3
resort" and I'm not in practice doing anything beyond patch review
4
and the odd bit of tidyup.
2
5
3
We will need this over in sme_helper.c.
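For readers unfamiliar with the operation: the helper being exported below, bfdotadd(), implements the Arm pseudocode BFDotAdd. A rough illustration of the arithmetic (the real helper uses softfloat with specific flush-to-zero behaviour; this host-float sketch only shows the data layout, and the function name is made up):

    /*
     * Each 32-bit operand packs two bfloat16 values; a bfloat16 is the
     * top 16 bits of an IEEE float32, so widening is a 16-bit left shift.
     */
    static float bfdotadd_sketch(float sum, uint32_t e1, uint32_t e2)
    {
        for (int i = 0; i < 2; i++) {
            union { uint32_t u; float f; } a, b;
            a.u = ((e1 >> (16 * i)) & 0xffff) << 16;
            b.u = ((e2 >> (16 * i)) & 0xffff) << 16;
            sum += a.f * b.f;
        }
        return sum;
    }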
6
Move these entries in MAINTAINERS from "Maintained" to "Odd Fixes",
7
to better represent reality. Entries for other boards and SoCs where
8
I do more actively care (or where there is a listed co-maintainer)
9
remain as they are.
4
10
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20220607203306.657998-19-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
Message-id: 20250307152838.3226398-1-peter.maydell@linaro.org
9
---
14
---
10
target/arm/vec_internal.h | 13 +++++++++++++
15
MAINTAINERS | 14 +++++++-------
11
target/arm/vec_helper.c | 2 +-
16
1 file changed, 7 insertions(+), 7 deletions(-)
12
2 files changed, 14 insertions(+), 1 deletion(-)
13
17
14
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
18
diff --git a/MAINTAINERS b/MAINTAINERS
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/vec_internal.h
20
--- a/MAINTAINERS
17
+++ b/target/arm/vec_internal.h
21
+++ b/MAINTAINERS
18
@@ -XXX,XX +XXX,XX @@ uint64_t pmull_h(uint64_t op1, uint64_t op2);
22
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/kzm.rst
19
*/
23
Integrator CP
20
uint64_t pmull_w(uint64_t op1, uint64_t op2);
24
M: Peter Maydell <peter.maydell@linaro.org>
21
25
L: qemu-arm@nongnu.org
22
+/**
26
-S: Maintained
23
+ * bfdotadd:
27
+S: Odd Fixes
24
+ * @sum: addend
28
F: hw/arm/integratorcp.c
25
+ * @e1, @e2: multiplicand vectors
29
F: hw/misc/arm_integrator_debug.c
26
+ *
30
F: include/hw/misc/arm_integrator_debug.h
27
+ * BFloat16 2-way dot product of @e1 & @e2, accumulating with @sum.
31
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/mps2.rst
28
+ * The @e1 and @e2 operands correspond to the 32-bit source vector
32
Musca
29
+ * slots and contain two Bfloat16 values each.
33
M: Peter Maydell <peter.maydell@linaro.org>
30
+ *
34
L: qemu-arm@nongnu.org
31
+ * Corresponds to the ARM pseudocode function BFDotAdd.
35
-S: Maintained
32
+ */
36
+S: Odd Fixes
33
+float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2);
37
F: hw/arm/musca.c
34
+
38
F: docs/system/arm/musca.rst
35
#endif /* TARGET_ARM_VEC_INTERNAL_H */
39
36
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
40
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_aarch64_raspi4.py
37
index XXXXXXX..XXXXXXX 100644
41
Real View
38
--- a/target/arm/vec_helper.c
42
M: Peter Maydell <peter.maydell@linaro.org>
39
+++ b/target/arm/vec_helper.c
43
L: qemu-arm@nongnu.org
40
@@ -XXX,XX +XXX,XX @@ DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
44
-S: Maintained
41
* BFloat16 Dot Product
45
+S: Odd Fixes
42
*/
46
F: hw/arm/realview*
43
47
F: hw/cpu/realview_mpcore.c
44
-static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
48
F: hw/intc/realview_gic.c
45
+float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
49
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_collie.py
46
{
50
Stellaris
47
/* FPCR is ignored for BFDOT and BFMMLA. */
51
M: Peter Maydell <peter.maydell@linaro.org>
48
float_status bf_status = {
52
L: qemu-arm@nongnu.org
53
-S: Maintained
54
+S: Odd Fixes
55
F: hw/*/stellaris*
56
F: hw/display/ssd03*
57
F: include/hw/input/gamepad.h
58
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/stm32.rst
59
Versatile Express
60
M: Peter Maydell <peter.maydell@linaro.org>
61
L: qemu-arm@nongnu.org
62
-S: Maintained
63
+S: Odd Fixes
64
F: hw/arm/vexpress.c
65
F: hw/display/sii9022.c
66
F: docs/system/arm/vexpress.rst
67
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_vexpress.py
68
Versatile PB
69
M: Peter Maydell <peter.maydell@linaro.org>
70
L: qemu-arm@nongnu.org
71
-S: Maintained
72
+S: Odd Fixes
73
F: hw/*/versatile*
74
F: hw/i2c/arm_sbcon_i2c.c
75
F: include/hw/i2c/arm_sbcon_i2c.h
76
@@ -XXX,XX +XXX,XX @@ F: include/hw/hyperv/vmbus*.h
77
OMAP
78
M: Peter Maydell <peter.maydell@linaro.org>
79
L: qemu-arm@nongnu.org
80
-S: Maintained
81
+S: Odd Fixes
82
F: hw/*/omap*
83
F: include/hw/arm/omap.h
84
F: docs/system/arm/sx1.rst
49
--
85
--
50
2.25.1
86
2.43.0
87
88
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
The guest does not control whether characters are sent on the UART.
4
Message-id: 20220604040607.269301-4-richard.henderson@linaro.org
4
Sending them before the guest happens to boot will now result in a
5
"guest error" log entry that is only because of timing, even if the
6
guest _would_ later set up the receiver correctly.
7
8
This reverts the bulk of commit abf2b6a028670bd2890bb3aee7e103fe53e4b0df,
9
and instead adds a comment about why we don't check the enable bits.
10
11
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
12
Cc: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
14
Message-id: 20250311153717.206129-1-pbonzini@redhat.com
15
[PMM: expanded comment]
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
18
---
8
target/arm/ptw.h | 15 +++--
19
hw/char/pl011.c | 19 ++++++++++---------
9
target/arm/helper.c | 137 +++-----------------------------------------
20
1 file changed, 10 insertions(+), 9 deletions(-)
10
target/arm/ptw.c | 123 +++++++++++++++++++++++++++++++++++++++
11
3 files changed, 140 insertions(+), 135 deletions(-)
12
21
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
22
diff --git a/hw/char/pl011.c b/hw/char/pl011.c
14
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
24
--- a/hw/char/pl011.c
16
+++ b/target/arm/ptw.h
25
+++ b/hw/char/pl011.c
17
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@ static int pl011_can_receive(void *opaque)
18
27
unsigned fifo_depth = pl011_get_fifo_depth(s);
19
#ifndef CONFIG_USER_ONLY
28
unsigned fifo_available = fifo_depth - s->read_count;
20
29
21
+uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
30
- if (!(s->cr & CR_UARTEN)) {
22
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi);
31
- qemu_log_mask(LOG_GUEST_ERROR,
23
+uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
32
- "PL011 receiving data on disabled UART\n");
24
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi);
33
- }
25
+
34
- if (!(s->cr & CR_RXE)) {
26
bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
35
- qemu_log_mask(LOG_GUEST_ERROR,
27
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
36
- "PL011 receiving data on disabled RX UART\n");
28
ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
37
- }
29
ARMCacheAttrs s1, ARMCacheAttrs s2);
38
- trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
30
39
+ /*
31
-bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
40
+ * In theory we should check the UART and RX enable bits here and
32
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
41
+ * return 0 if they are not set (so the guest can't receive data
33
- hwaddr *phys_ptr, int *prot,
42
+ * until you have enabled the UART). In practice we suspect there
34
- target_ulong *page_size,
43
+ * is at least some guest code out there which has been tested only
35
- ARMMMUFaultInfo *fi);
44
+ * on QEMU and which never bothers to enable the UART because we
36
+bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
45
+ * historically never enforced that. So we effectively keep the
37
+ uint32_t *table, uint32_t address);
46
+ * UART continuously enabled regardless of the enable bits.
38
+int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
47
+ */
39
+ int ap, int domain_prot);
48
40
+
49
+ trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
41
bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
50
return fifo_available;
42
MMUAccessType access_type, ARMMMUIdx mmu_idx,
43
hwaddr *phys_ptr, int *prot,
44
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.c
47
+++ b/target/arm/helper.c
48
@@ -XXX,XX +XXX,XX @@ bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
49
* @ap: The 3-bit access permissions (AP[2:0])
50
* @domain_prot: The 2-bit domain access permissions
51
*/
52
-static inline int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
53
- int ap, int domain_prot)
54
+int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap, int domain_prot)
55
{
56
bool is_user = regime_is_user(env, mmu_idx);
57
58
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
59
return prot_rw | PAGE_EXEC;
60
}
51
}
61
52
62
-static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
63
- uint32_t *table, uint32_t address)
64
+bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
65
+ uint32_t *table, uint32_t address)
66
{
67
/* Note that we can only get here for an AArch32 PL0/PL1 lookup */
68
TCR *tcr = regime_tcr(env, mmu_idx);
69
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
70
}
71
72
/* All loads done in the course of a page table walk go through here. */
73
-static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
74
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
75
+uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
76
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
77
{
78
ARMCPU *cpu = ARM_CPU(cs);
79
CPUARMState *env = &cpu->env;
80
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
81
return 0;
82
}
83
84
-static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
85
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
86
+uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
87
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
88
{
89
ARMCPU *cpu = ARM_CPU(cs);
90
CPUARMState *env = &cpu->env;
91
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
92
return 0;
93
}
94
95
-bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
96
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
97
- hwaddr *phys_ptr, int *prot,
98
- target_ulong *page_size,
99
- ARMMMUFaultInfo *fi)
100
-{
101
- CPUState *cs = env_cpu(env);
102
- int level = 1;
103
- uint32_t table;
104
- uint32_t desc;
105
- int type;
106
- int ap;
107
- int domain = 0;
108
- int domain_prot;
109
- hwaddr phys_addr;
110
- uint32_t dacr;
111
-
112
- /* Pagetable walk. */
113
- /* Lookup l1 descriptor. */
114
- if (!get_level1_table_address(env, mmu_idx, &table, address)) {
115
- /* Section translation fault if page walk is disabled by PD0 or PD1 */
116
- fi->type = ARMFault_Translation;
117
- goto do_fault;
118
- }
119
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
120
- mmu_idx, fi);
121
- if (fi->type != ARMFault_None) {
122
- goto do_fault;
123
- }
124
- type = (desc & 3);
125
- domain = (desc >> 5) & 0x0f;
126
- if (regime_el(env, mmu_idx) == 1) {
127
- dacr = env->cp15.dacr_ns;
128
- } else {
129
- dacr = env->cp15.dacr_s;
130
- }
131
- domain_prot = (dacr >> (domain * 2)) & 3;
132
- if (type == 0) {
133
- /* Section translation fault. */
134
- fi->type = ARMFault_Translation;
135
- goto do_fault;
136
- }
137
- if (type != 2) {
138
- level = 2;
139
- }
140
- if (domain_prot == 0 || domain_prot == 2) {
141
- fi->type = ARMFault_Domain;
142
- goto do_fault;
143
- }
144
- if (type == 2) {
145
- /* 1Mb section. */
146
- phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
147
- ap = (desc >> 10) & 3;
148
- *page_size = 1024 * 1024;
149
- } else {
150
- /* Lookup l2 entry. */
151
- if (type == 1) {
152
- /* Coarse pagetable. */
153
- table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
154
- } else {
155
- /* Fine pagetable. */
156
- table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
157
- }
158
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
159
- mmu_idx, fi);
160
- if (fi->type != ARMFault_None) {
161
- goto do_fault;
162
- }
163
- switch (desc & 3) {
164
- case 0: /* Page translation fault. */
165
- fi->type = ARMFault_Translation;
166
- goto do_fault;
167
- case 1: /* 64k page. */
168
- phys_addr = (desc & 0xffff0000) | (address & 0xffff);
169
- ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
170
- *page_size = 0x10000;
171
- break;
172
- case 2: /* 4k page. */
173
- phys_addr = (desc & 0xfffff000) | (address & 0xfff);
174
- ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
175
- *page_size = 0x1000;
176
- break;
177
- case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
178
- if (type == 1) {
179
- /* ARMv6/XScale extended small page format */
180
- if (arm_feature(env, ARM_FEATURE_XSCALE)
181
- || arm_feature(env, ARM_FEATURE_V6)) {
182
- phys_addr = (desc & 0xfffff000) | (address & 0xfff);
183
- *page_size = 0x1000;
184
- } else {
185
- /* UNPREDICTABLE in ARMv5; we choose to take a
186
- * page translation fault.
187
- */
188
- fi->type = ARMFault_Translation;
189
- goto do_fault;
190
- }
191
- } else {
192
- phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
193
- *page_size = 0x400;
194
- }
195
- ap = (desc >> 4) & 3;
196
- break;
197
- default:
198
- /* Never happens, but compiler isn't smart enough to tell. */
199
- g_assert_not_reached();
200
- }
201
- }
202
- *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
203
- *prot |= *prot ? PAGE_EXEC : 0;
204
- if (!(*prot & (1 << access_type))) {
205
- /* Access permission fault. */
206
- fi->type = ARMFault_Permission;
207
- goto do_fault;
208
- }
209
- *phys_ptr = phys_addr;
210
- return false;
211
-do_fault:
212
- fi->domain = domain;
213
- fi->level = level;
214
- return true;
215
-}
216
-
217
bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
218
MMUAccessType access_type, ARMMMUIdx mmu_idx,
219
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
220
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
221
index XXXXXXX..XXXXXXX 100644
222
--- a/target/arm/ptw.c
223
+++ b/target/arm/ptw.c
224
@@ -XXX,XX +XXX,XX @@
225
#include "ptw.h"
226
227
228
+static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
229
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
230
+ hwaddr *phys_ptr, int *prot,
231
+ target_ulong *page_size,
232
+ ARMMMUFaultInfo *fi)
233
+{
234
+ CPUState *cs = env_cpu(env);
235
+ int level = 1;
236
+ uint32_t table;
237
+ uint32_t desc;
238
+ int type;
239
+ int ap;
240
+ int domain = 0;
241
+ int domain_prot;
242
+ hwaddr phys_addr;
243
+ uint32_t dacr;
244
+
245
+ /* Pagetable walk. */
246
+ /* Lookup l1 descriptor. */
247
+ if (!get_level1_table_address(env, mmu_idx, &table, address)) {
248
+ /* Section translation fault if page walk is disabled by PD0 or PD1 */
249
+ fi->type = ARMFault_Translation;
250
+ goto do_fault;
251
+ }
252
+ desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
253
+ mmu_idx, fi);
254
+ if (fi->type != ARMFault_None) {
255
+ goto do_fault;
256
+ }
257
+ type = (desc & 3);
258
+ domain = (desc >> 5) & 0x0f;
259
+ if (regime_el(env, mmu_idx) == 1) {
260
+ dacr = env->cp15.dacr_ns;
261
+ } else {
262
+ dacr = env->cp15.dacr_s;
263
+ }
264
+ domain_prot = (dacr >> (domain * 2)) & 3;
265
+ if (type == 0) {
266
+ /* Section translation fault. */
267
+ fi->type = ARMFault_Translation;
268
+ goto do_fault;
269
+ }
270
+ if (type != 2) {
271
+ level = 2;
272
+ }
273
+ if (domain_prot == 0 || domain_prot == 2) {
274
+ fi->type = ARMFault_Domain;
275
+ goto do_fault;
276
+ }
277
+ if (type == 2) {
278
+ /* 1Mb section. */
279
+ phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
280
+ ap = (desc >> 10) & 3;
281
+ *page_size = 1024 * 1024;
282
+ } else {
283
+ /* Lookup l2 entry. */
284
+ if (type == 1) {
285
+ /* Coarse pagetable. */
286
+ table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
287
+ } else {
288
+ /* Fine pagetable. */
289
+ table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
290
+ }
291
+ desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
292
+ mmu_idx, fi);
293
+ if (fi->type != ARMFault_None) {
294
+ goto do_fault;
295
+ }
296
+ switch (desc & 3) {
297
+ case 0: /* Page translation fault. */
298
+ fi->type = ARMFault_Translation;
299
+ goto do_fault;
300
+ case 1: /* 64k page. */
301
+ phys_addr = (desc & 0xffff0000) | (address & 0xffff);
302
+ ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
303
+ *page_size = 0x10000;
304
+ break;
305
+ case 2: /* 4k page. */
306
+ phys_addr = (desc & 0xfffff000) | (address & 0xfff);
307
+ ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
308
+ *page_size = 0x1000;
309
+ break;
310
+ case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
311
+ if (type == 1) {
312
+ /* ARMv6/XScale extended small page format */
313
+ if (arm_feature(env, ARM_FEATURE_XSCALE)
314
+ || arm_feature(env, ARM_FEATURE_V6)) {
315
+ phys_addr = (desc & 0xfffff000) | (address & 0xfff);
316
+ *page_size = 0x1000;
317
+ } else {
318
+ /*
319
+ * UNPREDICTABLE in ARMv5; we choose to take a
320
+ * page translation fault.
321
+ */
322
+ fi->type = ARMFault_Translation;
323
+ goto do_fault;
324
+ }
325
+ } else {
326
+ phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
327
+ *page_size = 0x400;
328
+ }
329
+ ap = (desc >> 4) & 3;
330
+ break;
331
+ default:
332
+ /* Never happens, but compiler isn't smart enough to tell. */
333
+ g_assert_not_reached();
334
+ }
335
+ }
336
+ *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
337
+ *prot |= *prot ? PAGE_EXEC : 0;
338
+ if (!(*prot & (1 << access_type))) {
339
+ /* Access permission fault. */
340
+ fi->type = ARMFault_Permission;
341
+ goto do_fault;
342
+ }
343
+ *phys_ptr = phys_addr;
344
+ return false;
345
+do_fault:
346
+ fi->domain = domain;
347
+ fi->level = level;
348
+ return true;
349
+}
350
+
351
/**
352
* get_phys_addr - get the physical address for this virtual address
353
*
354
--
53
--
355
2.25.1
54
2.43.0
55
56
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Joe Komlodi <komlodi@google.com>
2
2
3
This check is buried within arm_hcr_el2_eff(), but since we
3
On ARM hosts with CTR_EL0.DIC and CTR_EL0.IDC set, this would only cause
4
have to have the explicit check for CPTR_EL2.TZ, we might as
4
an ISB to be executed during cache maintenance, which could lead to QEMU
5
well just check it once at the beginning of the block.
5
executing TBs containing garbage instructions.
6
6
7
Once this is done, we can test HCR_EL2.{E2H,TGE} directly,
7
This seems to be because the ISB finishes executing instructions and
8
rather than going through arm_hcr_el2_eff().
8
flushes the pipeline, but the ISB doesn't guarantee that writes from the
9
executed instructions are committed. If a small enough TB is created, it's
10
possible that the writes setting up the TB aren't committed by the time the
11
TB is executed.
9
12
13
This function is intended to be a port of the gcc implementation
14
(https://github.com/gcc-mirror/gcc/blob/85b46d0795ac76bc192cb8f88b646a647acf98c1/libgcc/config/aarch64/sync-cache.c#L67)
15
which makes the first DSB unconditional, so we can fix the synchronization
16
issue by doing that as well.
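To make the required ordering concrete, here is a hedged sketch of the sequence flush_idcache_range() needs on aarch64 (loop bodies elided as comments; the function name here is made up):

    static void flush_idcache_sketch(uintptr_t rx, uintptr_t rw, size_t len)
    {
        /* 1. Clean the D-cache by VA over [rw, rw + len) with "dc cvau"
         *    (skippable when CTR_EL0.IDC is set). */
        /* 2. DSB ISH -- unconditional: this is what guarantees the stores
         *    that wrote the new code have completed before any I-cache
         *    maintenance, or before the final ISB when CTR_EL0.DIC is set. */
        asm volatile("dsb ish" : : : "memory");
        /* 3. Invalidate the I-cache by VA over [rx, rx + len) with "ic ivau"
         *    (skippable when CTR_EL0.DIC is set). */
        /* 4. DSB ISH; ISB -- make the new instructions visible to fetch. */
        asm volatile("dsb ish\n\t"
                     "isb" : : : "memory");
    }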
17
18
Cc: qemu-stable@nongnu.org
19
Fixes: 664a79735e4deb1 ("util: Specialize flush_idcache_range for aarch64")
20
Signed-off-by: Joe Komlodi <komlodi@google.com>
21
Message-id: 20250310203622.1827940-2-komlodi@google.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20220607203306.657998-9-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
25
---
15
target/arm/helper.c | 13 +++++--------
26
util/cacheflush.c | 4 +++-
16
1 file changed, 5 insertions(+), 8 deletions(-)
27
1 file changed, 3 insertions(+), 1 deletion(-)
17
28
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
29
diff --git a/util/cacheflush.c b/util/cacheflush.c
19
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
31
--- a/util/cacheflush.c
21
+++ b/target/arm/helper.c
32
+++ b/util/cacheflush.c
22
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
33
@@ -XXX,XX +XXX,XX @@ void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
34
for (p = rw & -dcache_lsize; p < rw + len; p += dcache_lsize) {
35
asm volatile("dc\tcvau, %0" : : "r" (p) : "memory");
23
}
36
}
37
- asm volatile("dsb\tish" : : : "memory");
24
}
38
}
25
39
26
- /*
40
+ /* DSB unconditionally to ensure any outstanding writes are committed. */
27
- * CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE).
41
+ asm volatile("dsb\tish" : : : "memory");
28
- */
42
+
29
- if (el <= 2) {
43
/*
30
- uint64_t hcr_el2 = arm_hcr_el2_eff(env);
44
* If CTR_EL0.DIC is enabled, Instruction cache cleaning to the Point
31
- if (hcr_el2 & HCR_E2H) {
45
* of Unification is not required for instruction to data coherence.
32
+ if (el <= 2 && arm_is_el2_enabled(env)) {
33
+ /* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE). */
34
+ if (env->cp15.hcr_el2 & HCR_E2H) {
35
switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
36
case 1:
37
- if (el != 0 || !(hcr_el2 & HCR_TGE)) {
38
+ if (el != 0 || !(env->cp15.hcr_el2 & HCR_TGE)) {
39
break;
40
}
41
/* fall through */
42
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
43
case 2:
44
return 2;
45
}
46
- } else if (arm_is_el2_enabled(env)) {
47
+ } else {
48
if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
49
return 2;
50
}
51
--
46
--
52
2.25.1
47
2.43.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The check for fp_excp_el in assert_fp_access_checked is
4
incorrect. For SME, with StreamingMode enabled, the access
5
is really against the streaming mode vectors, and access
6
to the normal fp registers is allowed to be disabled.
7
C.f. sme_enabled_check.
8
9
Convert sve_access_checked to match, even though we don't
10
currently check the exception state.
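A hedged illustration of how the tri-state is intended to be consumed (trans_EXAMPLE is a made-up stand-in for a real decode function; fp_access_check() and assert_fp_access_checked() are the functions touched by the diff below):

    /* 0 = not checked, 1 = checked and allowed, -1 = checked and denied. */
    static bool trans_EXAMPLE(DisasContext *s, void *a)
    {
        if (!fp_access_check(s)) {
            /* fp_access_checked is now -1 and the exception is queued;
             * no FP register may be touched on this path. */
            return true;
        }
        /* fp_access_checked is 1 here, so assert_fp_access_checked()
         * inside the FP register accessors will not fire. */
        return true;
    }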
11
12
Cc: qemu-stable@nongnu.org
13
Fixes: 3d74825f4d6 ("target/arm: Add SME enablement checks")
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-7-richard.henderson@linaro.org
15
Message-id: 20250307190415.982049-2-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
18
---
8
target/arm/ptw.h | 3 +++
19
target/arm/tcg/translate-a64.h | 2 +-
9
target/arm/helper.c | 41 -----------------------------------------
20
target/arm/tcg/translate.h | 10 +++++++---
10
target/arm/ptw.c | 41 +++++++++++++++++++++++++++++++++++++++++
21
target/arm/tcg/translate-a64.c | 17 +++++++++--------
11
3 files changed, 44 insertions(+), 41 deletions(-)
22
3 files changed, 17 insertions(+), 12 deletions(-)
12
23
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
24
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
14
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
26
--- a/target/arm/tcg/translate-a64.h
16
+++ b/target/arm/ptw.h
27
+++ b/target/arm/tcg/translate-a64.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
28
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
18
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
29
static inline void assert_fp_access_checked(DisasContext *s)
19
}
30
{
20
31
#ifdef CONFIG_DEBUG_TCG
21
+void get_phys_addr_pmsav7_default(CPUARMState *env,
32
- if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
22
+ ARMMMUIdx mmu_idx,
33
+ if (unlikely(s->fp_access_checked <= 0)) {
23
+ int32_t address, int *prot);
34
fprintf(stderr, "target-arm: FP access check missing for "
24
bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
35
"instruction 0x%08x\n", s->insn);
25
MMUAccessType access_type, ARMMMUIdx mmu_idx,
36
abort();
26
hwaddr *phys_ptr, int *prot,
37
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
39
--- a/target/arm/tcg/translate.h
30
+++ b/target/arm/helper.c
40
+++ b/target/arm/tcg/translate.h
31
@@ -XXX,XX +XXX,XX @@ do_fault:
41
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
42
bool aarch64;
43
bool thumb;
44
bool lse2;
45
- /* Because unallocated encodings generate different exception syndrome
46
+ /*
47
+ * Because unallocated encodings generate different exception syndrome
48
* information from traps due to FP being disabled, we can't do a single
49
* "is fp access disabled" check at a high level in the decode tree.
50
* To help in catching bugs where the access check was forgotten in some
51
* code path, we set this flag when the access check is done, and assert
52
* that it is set at the point where we actually touch the FP regs.
53
+ * 0: not checked,
54
+ * 1: checked, access ok
55
+ * -1: checked, access denied
56
*/
57
- bool fp_access_checked;
58
- bool sve_access_checked;
59
+ int8_t fp_access_checked;
60
+ int8_t sve_access_checked;
61
/* ARMv8 single-step state (this is distinct from the QEMU gdbstub
62
* single-step support).
63
*/
64
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/tcg/translate-a64.c
67
+++ b/target/arm/tcg/translate-a64.c
68
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
69
{
70
if (s->fp_excp_el) {
71
assert(!s->fp_access_checked);
72
- s->fp_access_checked = true;
73
+ s->fp_access_checked = -1;
74
75
gen_exception_insn_el(s, 0, EXCP_UDEF,
76
syn_fp_access_trap(1, 0xe, false, 0),
77
s->fp_excp_el);
78
return false;
79
}
80
- s->fp_access_checked = true;
81
+ s->fp_access_checked = 1;
32
return true;
82
return true;
33
}
83
}
34
84
35
-static inline void get_phys_addr_pmsav7_default(CPUARMState *env,
85
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
36
- ARMMMUIdx mmu_idx,
86
syn_sve_access_trap(), s->sve_excp_el);
37
- int32_t address, int *prot)
87
goto fail_exit;
38
-{
88
}
39
- if (!arm_feature(env, ARM_FEATURE_M)) {
89
- s->sve_access_checked = true;
40
- *prot = PAGE_READ | PAGE_WRITE;
90
+ s->sve_access_checked = 1;
41
- switch (address) {
91
return fp_access_check(s);
42
- case 0xF0000000 ... 0xFFFFFFFF:
92
43
- if (regime_sctlr(env, mmu_idx) & SCTLR_V) {
93
fail_exit:
44
- /* hivecs execing is ok */
94
/* Assert that we only raise one exception per instruction. */
45
- *prot |= PAGE_EXEC;
95
assert(!s->sve_access_checked);
46
- }
96
- s->sve_access_checked = true;
47
- break;
97
+ s->sve_access_checked = -1;
48
- case 0x00000000 ... 0x7FFFFFFF:
49
- *prot |= PAGE_EXEC;
50
- break;
51
- }
52
- } else {
53
- /* Default system address map for M profile cores.
54
- * The architecture specifies which regions are execute-never;
55
- * at the MPU level no other checks are defined.
56
- */
57
- switch (address) {
58
- case 0x00000000 ... 0x1fffffff: /* ROM */
59
- case 0x20000000 ... 0x3fffffff: /* SRAM */
60
- case 0x60000000 ... 0x7fffffff: /* RAM */
61
- case 0x80000000 ... 0x9fffffff: /* RAM */
62
- *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
63
- break;
64
- case 0x40000000 ... 0x5fffffff: /* Peripheral */
65
- case 0xa0000000 ... 0xbfffffff: /* Device */
66
- case 0xc0000000 ... 0xdfffffff: /* Device */
67
- case 0xe0000000 ... 0xffffffff: /* System */
68
- *prot = PAGE_READ | PAGE_WRITE;
69
- break;
70
- default:
71
- g_assert_not_reached();
72
- }
73
- }
74
-}
75
-
76
static bool pmsav7_use_background_region(ARMCPU *cpu,
77
ARMMMUIdx mmu_idx, bool is_user)
78
{
79
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/ptw.c
82
+++ b/target/arm/ptw.c
83
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
84
return false;
98
return false;
85
}
99
}
86
100
87
+void get_phys_addr_pmsav7_default(CPUARMState *env,
101
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check(DisasContext *s)
88
+ ARMMMUIdx mmu_idx,
102
* sme_excp_el by itself for cpregs access checks.
89
+ int32_t address, int *prot)
103
*/
90
+{
104
if (!s->fp_excp_el || s->sme_excp_el < s->fp_excp_el) {
91
+ if (!arm_feature(env, ARM_FEATURE_M)) {
105
- s->fp_access_checked = true;
92
+ *prot = PAGE_READ | PAGE_WRITE;
106
- return sme_access_check(s);
93
+ switch (address) {
107
+ bool ret = sme_access_check(s);
94
+ case 0xF0000000 ... 0xFFFFFFFF:
108
+ s->fp_access_checked = (ret ? 1 : -1);
95
+ if (regime_sctlr(env, mmu_idx) & SCTLR_V) {
109
+ return ret;
96
+ /* hivecs execing is ok */
110
}
97
+ *prot |= PAGE_EXEC;
111
return fp_access_check_only(s);
98
+ }
112
}
99
+ break;
113
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
100
+ case 0x00000000 ... 0x7FFFFFFF:
114
s->insn = insn;
101
+ *prot |= PAGE_EXEC;
115
s->base.pc_next = pc + 4;
102
+ break;
116
103
+ }
117
- s->fp_access_checked = false;
104
+ } else {
118
- s->sve_access_checked = false;
105
+ /* Default system address map for M profile cores.
119
+ s->fp_access_checked = 0;
106
+ * The architecture specifies which regions are execute-never;
120
+ s->sve_access_checked = 0;
107
+ * at the MPU level no other checks are defined.
121
108
+ */
122
if (s->pstate_il) {
109
+ switch (address) {
123
/*
110
+ case 0x00000000 ... 0x1fffffff: /* ROM */
111
+ case 0x20000000 ... 0x3fffffff: /* SRAM */
112
+ case 0x60000000 ... 0x7fffffff: /* RAM */
113
+ case 0x80000000 ... 0x9fffffff: /* RAM */
114
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
115
+ break;
116
+ case 0x40000000 ... 0x5fffffff: /* Peripheral */
117
+ case 0xa0000000 ... 0xbfffffff: /* Device */
118
+ case 0xc0000000 ... 0xdfffffff: /* Device */
119
+ case 0xe0000000 ... 0xffffffff: /* System */
120
+ *prot = PAGE_READ | PAGE_WRITE;
121
+ break;
122
+ default:
123
+ g_assert_not_reached();
124
+ }
125
+ }
126
+}
127
+
128
/**
129
* get_phys_addr - get the physical address for this virtual address
130
*
131
--
124
--
132
2.25.1
125
2.43.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move the data to vec_helper.c and the inline to vec_internal.h.
3
In StreamingMode, fp_access_checked is handled already.
4
We cannot fall through to fp_access_check lest we fall
5
foul of the double-check assertion.
4
6
7
Cc: qemu-stable@nongnu.org
8
Fixes: 285b1d5fcef ("target/arm: Handle SME in sve_access_check")
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20250307190415.982049-3-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
[PMM: move declaration of 'ret' to top of block]
7
Message-id: 20220607203306.657998-18-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
14
---
10
target/arm/vec_internal.h | 7 +++++++
15
target/arm/tcg/translate-a64.c | 22 +++++++++++-----------
11
target/arm/sve_helper.c | 29 -----------------------------
16
1 file changed, 11 insertions(+), 11 deletions(-)
12
target/arm/vec_helper.c | 26 ++++++++++++++++++++++++++
13
3 files changed, 33 insertions(+), 29 deletions(-)
14
17
15
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
18
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_internal.h
20
--- a/target/arm/tcg/translate-a64.c
18
+++ b/target/arm/vec_internal.h
21
+++ b/target/arm/tcg/translate-a64.c
19
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_b(uint8_t byte)
22
@@ -XXX,XX +XXX,XX @@ static int fp_access_check_vector_hsd(DisasContext *s, bool is_q, MemOp esz)
20
return expand_pred_b_data[byte];
23
bool sve_access_check(DisasContext *s)
24
{
25
if (s->pstate_sm || !dc_isar_feature(aa64_sve, s)) {
26
+ bool ret;
27
+
28
assert(dc_isar_feature(aa64_sme, s));
29
- if (!sme_sm_enabled_check(s)) {
30
- goto fail_exit;
31
- }
32
- } else if (s->sve_excp_el) {
33
+ ret = sme_sm_enabled_check(s);
34
+ s->sve_access_checked = (ret ? 1 : -1);
35
+ return ret;
36
+ }
37
+ if (s->sve_excp_el) {
38
+ /* Assert that we only raise one exception per instruction. */
39
+ assert(!s->sve_access_checked);
40
gen_exception_insn_el(s, 0, EXCP_UDEF,
41
syn_sve_access_trap(), s->sve_excp_el);
42
- goto fail_exit;
43
+ s->sve_access_checked = -1;
44
+ return false;
45
}
46
s->sve_access_checked = 1;
47
return fp_access_check(s);
48
-
49
- fail_exit:
50
- /* Assert that we only raise one exception per instruction. */
51
- assert(!s->sve_access_checked);
52
- s->sve_access_checked = -1;
53
- return false;
21
}
54
}
22
55
23
+/* Similarly for half-word elements. */
56
/*
24
+extern const uint64_t expand_pred_h_data[0x55 + 1];
25
+static inline uint64_t expand_pred_h(uint8_t byte)
26
+{
27
+ return expand_pred_h_data[byte & 0x55];
28
+}
29
+
30
static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
31
{
32
uint64_t *d = vd + opr_sz;
33
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/sve_helper.c
36
+++ b/target/arm/sve_helper.c
37
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
38
return flags;
39
}
40
41
-/* Similarly for half-word elements.
42
- * for (i = 0; i < 256; ++i) {
43
- * unsigned long m = 0;
44
- * if (i & 0xaa) {
45
- * continue;
46
- * }
47
- * for (j = 0; j < 8; j += 2) {
48
- * if ((i >> j) & 1) {
49
- * m |= 0xfffful << (j << 3);
50
- * }
51
- * }
52
- * printf("[0x%x] = 0x%016lx,\n", i, m);
53
- * }
54
- */
55
-static inline uint64_t expand_pred_h(uint8_t byte)
56
-{
57
- static const uint64_t word[] = {
58
- [0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
59
- [0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
60
- [0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
61
- [0x15] = 0x0000ffffffffffff, [0x40] = 0xffff000000000000,
62
- [0x41] = 0xffff00000000ffff, [0x44] = 0xffff0000ffff0000,
63
- [0x45] = 0xffff0000ffffffff, [0x50] = 0xffffffff00000000,
64
- [0x51] = 0xffffffff0000ffff, [0x54] = 0xffffffffffff0000,
65
- [0x55] = 0xffffffffffffffff,
66
- };
67
- return word[byte & 0x55];
68
-}
69
-
70
/* Similarly for single word elements. */
71
static inline uint64_t expand_pred_s(uint8_t byte)
72
{
73
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/target/arm/vec_helper.c
76
+++ b/target/arm/vec_helper.c
77
@@ -XXX,XX +XXX,XX @@ const uint64_t expand_pred_b_data[256] = {
78
0xffffffffffffffff,
79
};
80
81
+/*
82
+ * Similarly for half-word elements.
83
+ * for (i = 0; i < 256; ++i) {
84
+ * unsigned long m = 0;
85
+ * if (i & 0xaa) {
86
+ * continue;
87
+ * }
88
+ * for (j = 0; j < 8; j += 2) {
89
+ * if ((i >> j) & 1) {
90
+ * m |= 0xfffful << (j << 3);
91
+ * }
92
+ * }
93
+ * printf("[0x%x] = 0x%016lx,\n", i, m);
94
+ * }
95
+ */
96
+const uint64_t expand_pred_h_data[0x55 + 1] = {
97
+ [0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
98
+ [0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
99
+ [0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
100
+ [0x15] = 0x0000ffffffffffff, [0x40] = 0xffff000000000000,
101
+ [0x41] = 0xffff00000000ffff, [0x44] = 0xffff0000ffff0000,
102
+ [0x45] = 0xffff0000ffffffff, [0x50] = 0xffffffff00000000,
103
+ [0x51] = 0xffffffff0000ffff, [0x54] = 0xffffffffffff0000,
104
+ [0x55] = 0xffffffffffffffff,
105
+};
106
+
107
/* Signed saturating rounding doubling multiply-accumulate high half, 8-bit */
108
int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
109
bool neg, bool round)
110
--
57
--
111
2.25.1
58
2.43.0
1
We have about 30 instances of the typo/variant spelling 'writeable',
1
We want to capture potential Rust backtraces on panics in our test
2
and over 500 of the more common 'writable'. Standardize on the
2
logs, which isn't Rust's default behaviour. Set RUST_BACKTRACE=1 in
3
latter.
3
the add_test_setup environments, so that all our tests get run with
4
this environment variable set.
4
5
5
Change produced with:
6
This makes the setting of that variable in the gitlab CI template
6
7
redundant, so we can remove it.
7
sed -i -e 's/\([Ww][Rr][Ii][Tt]\)[Ee]\([Aa][Bb][Ll][Ee]\)/\1\2/g' $(git grep -il writeable)
8
9
and then hand-undoing the instance in linux-headers/linux/kvm.h.
10
11
Most of these changes are in comments or documentation; the
12
exceptions are:
13
* a local variable in accel/hvf/hvf-accel-ops.c
14
* a local variable in accel/kvm/kvm-all.c
15
* the PMCR_WRITABLE_MASK macro in target/arm/internals.h
16
* the EPT_VIOLATION_GPA_WRITABLE macro in target/i386/hvf/vmcs.h
17
(which is never used anywhere)
18
* the AR_TYPE_WRITABLE_MASK macro in target/i386/hvf/vmx.h
19
(which is never used anywhere)
20
8
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
23
Reviewed-by: Stefan Weil <sw@weilnetz.de>
11
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
24
Message-id: 20220505095015.2714666-1-peter.maydell@linaro.org
12
Message-id: 20250310102950.3752908-1-peter.maydell@linaro.org
25
---
13
---
26
docs/interop/vhost-user.rst | 2 +-
14
meson.build | 9 ++++++---
27
docs/specs/vmgenid.txt | 4 ++--
15
.gitlab-ci.d/buildtest-template.yml | 1 -
28
hw/scsi/mfi.h | 2 +-
16
2 files changed, 6 insertions(+), 4 deletions(-)
29
target/arm/internals.h | 4 ++--
30
target/i386/hvf/vmcs.h | 2 +-
31
target/i386/hvf/vmx.h | 2 +-
32
accel/hvf/hvf-accel-ops.c | 4 ++--
33
accel/kvm/kvm-all.c | 4 ++--
34
accel/tcg/user-exec.c | 6 +++---
35
hw/acpi/ghes.c | 2 +-
36
hw/intc/arm_gicv3_cpuif.c | 2 +-
37
hw/intc/arm_gicv3_dist.c | 2 +-
38
hw/intc/arm_gicv3_redist.c | 4 ++--
39
hw/intc/riscv_aclint.c | 2 +-
40
hw/intc/riscv_aplic.c | 2 +-
41
hw/pci/shpc.c | 2 +-
42
hw/sparc64/sun4u_iommu.c | 2 +-
43
hw/timer/sse-timer.c | 2 +-
44
target/arm/gdbstub.c | 2 +-
45
target/arm/helper.c | 4 ++--
46
target/arm/hvf/hvf.c | 4 ++--
47
target/i386/cpu-sysemu.c | 2 +-
48
target/s390x/ioinst.c | 2 +-
49
python/qemu/machine/machine.py | 2 +-
50
tests/tcg/x86_64/system/boot.S | 2 +-
51
25 files changed, 34 insertions(+), 34 deletions(-)
52
17
53
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
18
diff --git a/meson.build b/meson.build
54
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
55
--- a/docs/interop/vhost-user.rst
20
--- a/meson.build
56
+++ b/docs/interop/vhost-user.rst
21
+++ b/meson.build
57
@@ -XXX,XX +XXX,XX @@ Virtio device config space
22
@@ -XXX,XX +XXX,XX @@ project('qemu', ['c'], meson_version: '>=1.5.0',
58
:size: a 32-bit configuration space access size in bytes
23
59
24
meson.add_devenv({ 'MESON_BUILD_ROOT' : meson.project_build_root() })
60
:flags: a 32-bit value:
25
61
- - 0: Vhost front-end messages used for writeable fields
26
-add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true)
62
+ - 0: Vhost front-end messages used for writable fields
27
-add_test_setup('slow', exclude_suites: ['thorough'], env: ['G_TEST_SLOW=1', 'SPEED=slow'])
63
- 1: Vhost front-end messages used for live migration
28
-add_test_setup('thorough', env: ['G_TEST_SLOW=1', 'SPEED=thorough'])
64
29
+add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true,
65
:payload: Size bytes array holding the contents of the virtio
30
+ env: ['RUST_BACKTRACE=1'])
66
diff --git a/docs/specs/vmgenid.txt b/docs/specs/vmgenid.txt
31
+add_test_setup('slow', exclude_suites: ['thorough'],
32
+ env: ['G_TEST_SLOW=1', 'SPEED=slow', 'RUST_BACKTRACE=1'])
33
+add_test_setup('thorough',
34
+ env: ['G_TEST_SLOW=1', 'SPEED=thorough', 'RUST_BACKTRACE=1'])
35
36
meson.add_postconf_script(find_program('scripts/symlink-install-tree.py'))
37
38
diff --git a/.gitlab-ci.d/buildtest-template.yml b/.gitlab-ci.d/buildtest-template.yml
67
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
68
--- a/docs/specs/vmgenid.txt
40
--- a/.gitlab-ci.d/buildtest-template.yml
69
+++ b/docs/specs/vmgenid.txt
41
+++ b/.gitlab-ci.d/buildtest-template.yml
70
@@ -XXX,XX +XXX,XX @@ change the contents of the memory at runtime, specifically when starting a
71
backed-up or snapshotted image. In order to do this, QEMU must know the
72
address that has been allocated.
73
74
-The mechanism chosen for this memory sharing is writeable fw_cfg blobs.
75
+The mechanism chosen for this memory sharing is writable fw_cfg blobs.
76
These are data object that are visible to both QEMU and guests, and are
77
addressable as sequential files.
78
79
@@ -XXX,XX +XXX,XX @@ Two fw_cfg blobs are used in this case:
80
/etc/vmgenid_guid - contains the actual VM Generation ID GUID
81
- read-only to the guest
82
/etc/vmgenid_addr - contains the address of the downloaded vmgenid blob
83
- - writeable by the guest
84
+ - writable by the guest
85
86
87
QEMU sends the following commands to the guest at startup:
88
diff --git a/hw/scsi/mfi.h b/hw/scsi/mfi.h
89
index XXXXXXX..XXXXXXX 100644
90
--- a/hw/scsi/mfi.h
91
+++ b/hw/scsi/mfi.h
92
@@ -XXX,XX +XXX,XX @@ struct mfi_ctrl_props {
93
* metadata and user data
94
* 1=5%, 2=10%, 3=15% and so on
95
*/
96
- uint8_t viewSpace; /* snapshot writeable VIEWs
97
+ uint8_t viewSpace; /* snapshot writable VIEWs
98
* capacity as a % of source LD
99
* capacity. 0=READ only
100
* 1=5%, 2=10%, 3=15% and so on
101
diff --git a/target/arm/internals.h b/target/arm/internals.h
102
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/internals.h
104
+++ b/target/arm/internals.h
105
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
106
#define PMCRP 0x2
107
#define PMCRE 0x1
108
/*
109
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
110
+ * Mask of PMCR bits writable by guest (not including WO bits like C, P,
111
* which can be written as 1 to trigger behaviour but which stay RAZ).
112
*/
113
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
114
+#define PMCR_WRITABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
115
116
#define PMXEVTYPER_P 0x80000000
117
#define PMXEVTYPER_U 0x40000000
118
diff --git a/target/i386/hvf/vmcs.h b/target/i386/hvf/vmcs.h
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/i386/hvf/vmcs.h
121
+++ b/target/i386/hvf/vmcs.h
122
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@
123
#define EPT_VIOLATION_DATA_WRITE (1UL << 1)
43
stage: test
124
#define EPT_VIOLATION_INST_FETCH (1UL << 2)
44
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
125
#define EPT_VIOLATION_GPA_READABLE (1UL << 3)
45
script:
126
-#define EPT_VIOLATION_GPA_WRITEABLE (1UL << 4)
46
- - export RUST_BACKTRACE=1
127
+#define EPT_VIOLATION_GPA_WRITABLE (1UL << 4)
47
- source scripts/ci/gitlab-ci-section
128
#define EPT_VIOLATION_GPA_EXECUTABLE (1UL << 5)
48
- section_start buildenv "Setting up to run tests"
129
#define EPT_VIOLATION_GLA_VALID (1UL << 7)
49
- scripts/git-submodule.sh update roms/SLOF
130
#define EPT_VIOLATION_XLAT_VALID (1UL << 8)
131
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
132
index XXXXXXX..XXXXXXX 100644
133
--- a/target/i386/hvf/vmx.h
134
+++ b/target/i386/hvf/vmx.h
135
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cap2ctrl(uint64_t cap, uint64_t ctrl)
136
137
#define AR_TYPE_ACCESSES_MASK 1
138
#define AR_TYPE_READABLE_MASK (1 << 1)
139
-#define AR_TYPE_WRITEABLE_MASK (1 << 2)
140
+#define AR_TYPE_WRITABLE_MASK (1 << 2)
141
#define AR_TYPE_CODE_MASK (1 << 3)
142
#define AR_TYPE_MASK 0x0f
143
#define AR_TYPE_BUSY_64_TSS 11
144
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/accel/hvf/hvf-accel-ops.c
147
+++ b/accel/hvf/hvf-accel-ops.c
148
@@ -XXX,XX +XXX,XX @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
149
{
150
hvf_slot *mem;
151
MemoryRegion *area = section->mr;
152
- bool writeable = !area->readonly && !area->rom_device;
153
+ bool writable = !area->readonly && !area->rom_device;
154
hv_memory_flags_t flags;
155
uint64_t page_size = qemu_real_host_page_size();
156
157
if (!memory_region_is_ram(area)) {
158
- if (writeable) {
159
+ if (writable) {
160
return;
161
} else if (!memory_region_is_romd(area)) {
162
/*
163
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
164
index XXXXXXX..XXXXXXX 100644
165
--- a/accel/kvm/kvm-all.c
166
+++ b/accel/kvm/kvm-all.c
167
@@ -XXX,XX +XXX,XX @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
168
KVMSlot *mem;
169
int err;
170
MemoryRegion *mr = section->mr;
171
- bool writeable = !mr->readonly && !mr->rom_device;
172
+ bool writable = !mr->readonly && !mr->rom_device;
173
hwaddr start_addr, size, slot_size, mr_offset;
174
ram_addr_t ram_start_offset;
175
void *ram;
176
177
if (!memory_region_is_ram(mr)) {
178
- if (writeable || !kvm_readonly_mem_allowed) {
179
+ if (writable || !kvm_readonly_mem_allowed) {
180
return;
181
} else if (!mr->romd_mode) {
182
/* If the memory device is not in romd_mode, then we actually want
183
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
184
index XXXXXXX..XXXXXXX 100644
185
--- a/accel/tcg/user-exec.c
186
+++ b/accel/tcg/user-exec.c
187
@@ -XXX,XX +XXX,XX @@ MMUAccessType adjust_signal_pc(uintptr_t *pc, bool is_write)
188
* Return true if the write fault has been handled, and should be re-tried.
189
*
190
* Note that it is important that we don't call page_unprotect() unless
191
- * this is really a "write to nonwriteable page" fault, because
192
+ * this is really a "write to nonwritable page" fault, because
193
* page_unprotect() assumes that if it is called for an access to
194
- * a page that's writeable this means we had two threads racing and
195
- * another thread got there first and already made the page writeable;
196
+ * a page that's writable this means we had two threads racing and
197
+ * another thread got there first and already made the page writable;
198
* so we will retry the access. If we were to call page_unprotect()
199
* for some other kind of fault that should really be passed to the
200
* guest, we'd end up in an infinite loop of retrying the faulting access.
201
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
202
index XXXXXXX..XXXXXXX 100644
203
--- a/hw/acpi/ghes.c
204
+++ b/hw/acpi/ghes.c
205
@@ -XXX,XX +XXX,XX @@ void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker)
206
for (i = 0; i < ACPI_GHES_ERROR_SOURCE_COUNT; i++) {
207
/*
208
* Initialize the value of read_ack_register to 1, so GHES can be
209
- * writeable after (re)boot.
210
+ * writable after (re)boot.
211
* ACPI 6.2: 18.3.2.8 Generic Hardware Error Source version 2
212
* (GHESv2 - Type 10)
213
*/
214
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
215
index XXXXXXX..XXXXXXX 100644
216
--- a/hw/intc/arm_gicv3_cpuif.c
217
+++ b/hw/intc/arm_gicv3_cpuif.c
218
@@ -XXX,XX +XXX,XX @@ static void icc_ctlr_el3_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
cs->icc_ctlr_el1[GICV3_S] |= ICC_CTLR_EL1_CBPR;
220
}
221
222
- /* The only bit stored in icc_ctlr_el3 which is writeable is EOIMODE_EL3: */
223
+ /* The only bit stored in icc_ctlr_el3 which is writable is EOIMODE_EL3: */
224
mask = ICC_CTLR_EL3_EOIMODE_EL3;
225
226
cs->icc_ctlr_el3 &= ~mask;
227
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
228
index XXXXXXX..XXXXXXX 100644
229
--- a/hw/intc/arm_gicv3_dist.c
230
+++ b/hw/intc/arm_gicv3_dist.c
231
@@ -XXX,XX +XXX,XX @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
232
if (value & mask & GICD_CTLR_DS) {
233
/* We just set DS, so the ARE_NS and EnG1S bits are now RES0.
234
* Note that this is a one-way transition because if DS is set
235
- * then it's not writeable, so it can only go back to 0 with a
236
+ * then it's not writable, so it can only go back to 0 with a
237
* hardware reset.
238
*/
239
s->gicd_ctlr &= ~(GICD_CTLR_EN_GRP1S | GICD_CTLR_ARE_NS);
240
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
241
index XXXXXXX..XXXXXXX 100644
242
--- a/hw/intc/arm_gicv3_redist.c
243
+++ b/hw/intc/arm_gicv3_redist.c
244
@@ -XXX,XX +XXX,XX @@ static void gicr_write_vpendbaser(GICv3CPUState *cs, uint64_t newval)
245
246
/*
247
* The DIRTY bit is read-only and for us is always zero;
248
- * other fields are writeable.
249
+ * other fields are writable.
250
*/
251
newval &= R_GICR_VPENDBASER_INNERCACHE_MASK |
252
R_GICR_VPENDBASER_SHAREABILITY_MASK |
253
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
254
/* RAZ/WI for our implementation */
255
return MEMTX_OK;
256
case GICR_WAKER:
257
- /* Only the ProcessorSleep bit is writeable. When the guest sets
258
+ /* Only the ProcessorSleep bit is writable. When the guest sets
259
* it it requests that we transition the channel between the
260
* redistributor and the cpu interface to quiescent, and that
261
* we set the ChildrenAsleep bit once the inteface has reached the
262
diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
263
index XXXXXXX..XXXXXXX 100644
264
--- a/hw/intc/riscv_aclint.c
265
+++ b/hw/intc/riscv_aclint.c
266
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_swi_realize(DeviceState *dev, Error **errp)
267
/* Claim software interrupt bits */
268
for (i = 0; i < swi->num_harts; i++) {
269
RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(swi->hartid_base + i));
270
- /* We don't claim mip.SSIP because it is writeable by software */
271
+ /* We don't claim mip.SSIP because it is writable by software */
272
if (riscv_cpu_claim_interrupts(cpu, swi->sswi ? 0 : MIP_MSIP) < 0) {
273
error_report("MSIP already claimed");
274
exit(1);
275
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
276
index XXXXXXX..XXXXXXX 100644
277
--- a/hw/intc/riscv_aplic.c
278
+++ b/hw/intc/riscv_aplic.c
279
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_write(void *opaque, hwaddr addr, uint64_t value,
280
}
281
282
if (addr == APLIC_DOMAINCFG) {
283
- /* Only IE bit writeable at the moment */
284
+ /* Only IE bit writable at the moment */
285
value &= APLIC_DOMAINCFG_IE;
286
aplic->domaincfg = value;
287
} else if ((APLIC_SOURCECFG_BASE <= addr) &&
288
diff --git a/hw/pci/shpc.c b/hw/pci/shpc.c
289
index XXXXXXX..XXXXXXX 100644
290
--- a/hw/pci/shpc.c
291
+++ b/hw/pci/shpc.c
292
@@ -XXX,XX +XXX,XX @@ static int shpc_cap_add_config(PCIDevice *d, Error **errp)
293
pci_set_byte(config + SHPC_CAP_CxP, 0);
294
pci_set_long(config + SHPC_CAP_DWORD_DATA, 0);
295
d->shpc->cap = config_offset;
296
- /* Make dword select and data writeable. */
297
+ /* Make dword select and data writable. */
298
pci_set_byte(d->wmask + config_offset + SHPC_CAP_DWORD_SELECT, 0xff);
299
pci_set_long(d->wmask + config_offset + SHPC_CAP_DWORD_DATA, 0xffffffff);
300
return 0;
301
diff --git a/hw/sparc64/sun4u_iommu.c b/hw/sparc64/sun4u_iommu.c
302
index XXXXXXX..XXXXXXX 100644
303
--- a/hw/sparc64/sun4u_iommu.c
304
+++ b/hw/sparc64/sun4u_iommu.c
305
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry sun4u_translate_iommu(IOMMUMemoryRegion *iommu,
306
}
307
308
if (tte & IOMMU_TTE_DATA_W) {
309
- /* Writeable */
310
+ /* Writable */
311
ret.perm = IOMMU_RW;
312
} else {
313
ret.perm = IOMMU_RO;
314
diff --git a/hw/timer/sse-timer.c b/hw/timer/sse-timer.c
315
index XXXXXXX..XXXXXXX 100644
316
--- a/hw/timer/sse-timer.c
317
+++ b/hw/timer/sse-timer.c
318
@@ -XXX,XX +XXX,XX @@ static void sse_timer_write(void *opaque, hwaddr offset, uint64_t value,
319
{
320
uint32_t old_ctl = s->cntp_aival_ctl;
321
322
- /* EN bit is writeable; CLR bit is write-0-to-clear, write-1-ignored */
323
+ /* EN bit is writable; CLR bit is write-0-to-clear, write-1-ignored */
324
s->cntp_aival_ctl &= ~R_CNTP_AIVAL_CTL_EN_MASK;
325
s->cntp_aival_ctl |= value & R_CNTP_AIVAL_CTL_EN_MASK;
326
if (!(value & R_CNTP_AIVAL_CTL_CLR_MASK)) {
327
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
328
index XXXXXXX..XXXXXXX 100644
329
--- a/target/arm/gdbstub.c
330
+++ b/target/arm/gdbstub.c
331
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
332
/*
333
* Don't allow writing to XPSR.Exception as it can cause
334
* a transition into or out of handler mode (it's not
335
- * writeable via the MSR insn so this is a reasonable
336
+ * writable via the MSR insn so this is a reasonable
337
* restriction). Other fields are safe to update.
338
*/
339
xpsr_write(env, tmp, ~XPSR_EXCP);
340
diff --git a/target/arm/helper.c b/target/arm/helper.c
341
index XXXXXXX..XXXXXXX 100644
342
--- a/target/arm/helper.c
343
+++ b/target/arm/helper.c
344
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
345
}
346
}
347
348
- env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
349
- env->cp15.c9_pmcr |= (value & PMCR_WRITEABLE_MASK);
350
+ env->cp15.c9_pmcr &= ~PMCR_WRITABLE_MASK;
351
+ env->cp15.c9_pmcr |= (value & PMCR_WRITABLE_MASK);
352
353
pmu_op_finish(env);
354
}
355
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
356
index XXXXXXX..XXXXXXX 100644
357
--- a/target/arm/hvf/hvf.c
358
+++ b/target/arm/hvf/hvf.c
359
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
360
}
361
}
362
363
- env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
364
- env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
365
+ env->cp15.c9_pmcr &= ~PMCR_WRITABLE_MASK;
366
+ env->cp15.c9_pmcr |= (val & PMCR_WRITABLE_MASK);
367
368
pmu_op_finish(env);
369
break;
370
diff --git a/target/i386/cpu-sysemu.c b/target/i386/cpu-sysemu.c
371
index XXXXXXX..XXXXXXX 100644
372
--- a/target/i386/cpu-sysemu.c
373
+++ b/target/i386/cpu-sysemu.c
374
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_to_dict(X86CPU *cpu, QDict *props)
375
376
/* Convert CPU model data from X86CPU object to a property dictionary
377
* that can recreate exactly the same CPU model, including every
378
- * writeable QOM property.
379
+ * writable QOM property.
380
*/
381
static void x86_cpu_to_dict_full(X86CPU *cpu, QDict *props)
382
{
383
diff --git a/target/s390x/ioinst.c b/target/s390x/ioinst.c
384
index XXXXXXX..XXXXXXX 100644
385
--- a/target/s390x/ioinst.c
386
+++ b/target/s390x/ioinst.c
387
@@ -XXX,XX +XXX,XX @@ void ioinst_handle_stsch(S390CPU *cpu, uint64_t reg1, uint32_t ipb,
388
g_assert(!s390_is_pv());
389
/*
390
* As operand exceptions have a lower priority than access exceptions,
391
- * we check whether the memory area is writeable (injecting the
392
+ * we check whether the memory area is writable (injecting the
393
* access execption if it is not) first.
394
*/
395
if (!s390_cpu_virt_mem_check_write(cpu, addr, ar, sizeof(schib))) {
396
diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py
397
index XXXXXXX..XXXXXXX 100644
398
--- a/python/qemu/machine/machine.py
399
+++ b/python/qemu/machine/machine.py
400
@@ -XXX,XX +XXX,XX @@ def _early_cleanup(self) -> None:
401
"""
402
# If we keep the console socket open, we may deadlock waiting
403
# for QEMU to exit, while QEMU is waiting for the socket to
404
- # become writeable.
405
+ # become writable.
406
if self._console_socket is not None:
407
self._console_socket.close()
408
self._console_socket = None
409
diff --git a/tests/tcg/x86_64/system/boot.S b/tests/tcg/x86_64/system/boot.S
410
index XXXXXXX..XXXXXXX 100644
411
--- a/tests/tcg/x86_64/system/boot.S
412
+++ b/tests/tcg/x86_64/system/boot.S
413
@@ -XXX,XX +XXX,XX @@
414
    *
415
    * - `ebx`: contains the physical memory address where the loader has placed
416
    * the boot start info structure.
417
-    * - `cr0`: bit 0 (PE) must be set. All the other writeable bits are cleared.
418
+    * - `cr0`: bit 0 (PE) must be set. All the other writable bits are cleared.
419
    * - `cr4`: all bits are cleared.
420
    * - `cs `: must be a 32-bit read/execute code segment with a base of ‘0’
421
    * and a limit of ‘0xFFFFFFFF’. The selector value is unspecified.
422
--
50
--
423
2.25.1
51
2.43.0
424
52
425
53
Deleted patch
1
From: Frederic Konrad <fkonrad@amd.com>
2
1
3
The core and vblend register sizes are wrong; they should be
0x3B0 and 0x1E0 respectively, according to:
https://www.xilinx.com/htmldocs/registers/ug1087/ug1087-zynq-ultrascale-registers.html.
6
7
Let's fix that and use macros when creating the mmio region.
8
9
Fixes: 58ac482a66d ("introduce xlnx-dp")
10
Signed-off-by: Frederic Konrad <fkonrad@amd.com>
11
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
12
Acked-by: Alistair Francis <alistair.francis@wdc.com>
13
Message-id: 20220601172353.3220232-2-fkonrad@xilinx.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
include/hw/display/xlnx_dp.h | 9 +++++++--
17
hw/display/xlnx_dp.c | 17 ++++++++++-------
18
2 files changed, 17 insertions(+), 9 deletions(-)
19
20
diff --git a/include/hw/display/xlnx_dp.h b/include/hw/display/xlnx_dp.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/include/hw/display/xlnx_dp.h
23
+++ b/include/hw/display/xlnx_dp.h
24
@@ -XXX,XX +XXX,XX @@
25
#define AUD_CHBUF_MAX_DEPTH (32 * KiB)
26
#define MAX_QEMU_BUFFER_SIZE (4 * KiB)
27
28
-#define DP_CORE_REG_ARRAY_SIZE (0x3AF >> 2)
29
+#define DP_CORE_REG_OFFSET (0x0000)
30
+#define DP_CORE_REG_ARRAY_SIZE (0x3B0 >> 2)
31
+#define DP_AVBUF_REG_OFFSET (0xB000)
32
#define DP_AVBUF_REG_ARRAY_SIZE (0x238 >> 2)
33
-#define DP_VBLEND_REG_ARRAY_SIZE (0x1DF >> 2)
34
+#define DP_VBLEND_REG_OFFSET (0xA000)
35
+#define DP_VBLEND_REG_ARRAY_SIZE (0x1E0 >> 2)
36
+#define DP_AUDIO_REG_OFFSET (0xC000)
37
#define DP_AUDIO_REG_ARRAY_SIZE (0x50 >> 2)
38
+#define DP_CONTAINER_SIZE (0xC050)
39
40
struct PixmanPlane {
41
pixman_format_code_t format;
42
diff --git a/hw/display/xlnx_dp.c b/hw/display/xlnx_dp.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/display/xlnx_dp.c
45
+++ b/hw/display/xlnx_dp.c
46
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_init(Object *obj)
47
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
48
XlnxDPState *s = XLNX_DP(obj);
49
50
- memory_region_init(&s->container, obj, TYPE_XLNX_DP, 0xC050);
51
+ memory_region_init(&s->container, obj, TYPE_XLNX_DP, DP_CONTAINER_SIZE);
52
53
memory_region_init_io(&s->core_iomem, obj, &dp_ops, s, TYPE_XLNX_DP
54
- ".core", 0x3AF);
55
- memory_region_add_subregion(&s->container, 0x0000, &s->core_iomem);
56
+ ".core", sizeof(s->core_registers));
57
+ memory_region_add_subregion(&s->container, DP_CORE_REG_OFFSET,
58
+ &s->core_iomem);
59
60
memory_region_init_io(&s->vblend_iomem, obj, &vblend_ops, s, TYPE_XLNX_DP
61
- ".v_blend", 0x1DF);
62
- memory_region_add_subregion(&s->container, 0xA000, &s->vblend_iomem);
63
+ ".v_blend", sizeof(s->vblend_registers));
64
+ memory_region_add_subregion(&s->container, DP_VBLEND_REG_OFFSET,
65
+ &s->vblend_iomem);
66
67
memory_region_init_io(&s->avbufm_iomem, obj, &avbufm_ops, s, TYPE_XLNX_DP
68
- ".av_buffer_manager", 0x238);
69
- memory_region_add_subregion(&s->container, 0xB000, &s->avbufm_iomem);
70
+ ".av_buffer_manager", sizeof(s->avbufm_registers));
71
+ memory_region_add_subregion(&s->container, DP_AVBUF_REG_OFFSET,
72
+ &s->avbufm_iomem);
73
74
memory_region_init_io(&s->audio_iomem, obj, &audio_ops, s, TYPE_XLNX_DP
75
".audio", sizeof(s->audio_registers));
76
--
77
2.25.1
Deleted patch
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
2
1
3
Add a periodic timer which raises vblank at a frequency of 30Hz.
4
5
Note that this is a migration compatibility break for the
6
xlnx-zcu102 board type.
7
8
Signed-off-by: Sai Pavan Boddu <saipava@xilinx.com>
9
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Signed-off-by: Frederic Konrad <fkonrad@amd.com>
11
Acked-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20220601172353.3220232-3-fkonrad@xilinx.com
13
Changes by fkonrad:
14
- Switched to transaction-based ptimer API.
15
- Added the DP_INT_VBLNK_START macro.
16
Signed-off-by: Frederic Konrad <fkonrad@amd.com>
17
[PMM: bump vmstate version, add commit message note about
18
compat break]
19
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
---
22
include/hw/display/xlnx_dp.h | 3 +++
23
hw/display/xlnx_dp.c | 30 ++++++++++++++++++++++++++----
24
2 files changed, 29 insertions(+), 4 deletions(-)
25
26
diff --git a/include/hw/display/xlnx_dp.h b/include/hw/display/xlnx_dp.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/display/xlnx_dp.h
29
+++ b/include/hw/display/xlnx_dp.h
30
@@ -XXX,XX +XXX,XX @@
31
#include "hw/dma/xlnx_dpdma.h"
32
#include "audio/audio.h"
33
#include "qom/object.h"
34
+#include "hw/ptimer.h"
35
36
#define AUD_CHBUF_MAX_DEPTH (32 * KiB)
37
#define MAX_QEMU_BUFFER_SIZE (4 * KiB)
38
@@ -XXX,XX +XXX,XX @@ struct XlnxDPState {
39
*/
40
DPCDState *dpcd;
41
I2CDDCState *edid;
42
+
43
+ ptimer_state *vblank;
44
};
45
46
#define TYPE_XLNX_DP "xlnx.v-dp"
47
diff --git a/hw/display/xlnx_dp.c b/hw/display/xlnx_dp.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/display/xlnx_dp.c
50
+++ b/hw/display/xlnx_dp.c
51
@@ -XXX,XX +XXX,XX @@
52
#define DP_TX_N_AUD (0x032C >> 2)
53
#define DP_TX_AUDIO_EXT_DATA(n) ((0x0330 + 4 * n) >> 2)
54
#define DP_INT_STATUS (0x03A0 >> 2)
55
+#define DP_INT_VBLNK_START (1 << 13)
56
#define DP_INT_MASK (0x03A4 >> 2)
57
#define DP_INT_EN (0x03A8 >> 2)
58
#define DP_INT_DS (0x03AC >> 2)
59
@@ -XXX,XX +XXX,XX @@ typedef enum DPVideoFmt DPVideoFmt;
60
61
static const VMStateDescription vmstate_dp = {
62
.name = TYPE_XLNX_DP,
63
- .version_id = 1,
64
+ .version_id = 2,
65
.fields = (VMStateField[]){
66
VMSTATE_UINT32_ARRAY(core_registers, XlnxDPState,
67
DP_CORE_REG_ARRAY_SIZE),
68
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_dp = {
69
DP_VBLEND_REG_ARRAY_SIZE),
70
VMSTATE_UINT32_ARRAY(audio_registers, XlnxDPState,
71
DP_AUDIO_REG_ARRAY_SIZE),
72
+ VMSTATE_PTIMER(vblank, XlnxDPState),
73
VMSTATE_END_OF_LIST()
74
}
75
};
76
77
+#define DP_VBLANK_PTIMER_POLICY (PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD | \
78
+ PTIMER_POLICY_CONTINUOUS_TRIGGER | \
79
+ PTIMER_POLICY_NO_IMMEDIATE_TRIGGER)
80
+
81
static void xlnx_dp_update_irq(XlnxDPState *s);
82
83
static uint64_t xlnx_dp_audio_read(void *opaque, hwaddr offset, unsigned size)
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_write(void *opaque, hwaddr offset, uint64_t value,
85
break;
86
case DP_TRANSMITTER_ENABLE:
87
s->core_registers[offset] = value & 0x01;
88
+ ptimer_transaction_begin(s->vblank);
89
+ if (value & 0x1) {
90
+ ptimer_run(s->vblank, 0);
91
+ } else {
92
+ ptimer_stop(s->vblank);
93
+ }
94
+ ptimer_transaction_commit(s->vblank);
95
break;
96
case DP_FORCE_SCRAMBLER_RESET:
97
/*
98
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_update_display(void *opaque)
99
return;
100
}
101
102
- s->core_registers[DP_INT_STATUS] |= (1 << 13);
103
- xlnx_dp_update_irq(s);
104
-
105
xlnx_dpdma_trigger_vsync_irq(s->dpdma);
106
107
/*
108
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_finalize(Object *obj)
109
fifo8_destroy(&s->rx_fifo);
110
}
111
112
+static void vblank_hit(void *opaque)
113
+{
114
+ XlnxDPState *s = XLNX_DP(opaque);
115
+
116
+ s->core_registers[DP_INT_STATUS] |= DP_INT_VBLNK_START;
117
+ xlnx_dp_update_irq(s);
118
+}
119
+
120
static void xlnx_dp_realize(DeviceState *dev, Error **errp)
121
{
122
XlnxDPState *s = XLNX_DP(dev);
123
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_realize(DeviceState *dev, Error **errp)
124
&as);
125
AUD_set_volume_out(s->amixer_output_stream, 0, 255, 255);
126
xlnx_dp_audio_activate(s);
127
+ s->vblank = ptimer_init(vblank_hit, s, DP_VBLANK_PTIMER_POLICY);
128
+ ptimer_transaction_begin(s->vblank);
129
+ ptimer_set_freq(s->vblank, 30);
130
+ ptimer_transaction_commit(s->vblank);
131
}
132
133
static void xlnx_dp_reset(DeviceState *dev)
134
--
135
2.25.1
Deleted patch
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
2
1
3
Fix interrupt disable logic. Mask value 1 indicates that interrupts are
4
disabled.
5
6
Signed-off-by: Sai Pavan Boddu <saipava@xilinx.com>
7
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Signed-off-by: Frederic Konrad <fkonrad@amd.com>
9
Acked-by: Alistair Francis <alistair.francis@wdc.com>
10
Message-id: 20220601172353.3220232-4-fkonrad@xilinx.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/display/xlnx_dp.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
16
diff --git a/hw/display/xlnx_dp.c b/hw/display/xlnx_dp.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/display/xlnx_dp.c
19
+++ b/hw/display/xlnx_dp.c
20
@@ -XXX,XX +XXX,XX @@ static void xlnx_dp_write(void *opaque, hwaddr offset, uint64_t value,
21
xlnx_dp_update_irq(s);
22
break;
23
case DP_INT_DS:
24
- s->core_registers[DP_INT_MASK] |= ~value;
25
+ s->core_registers[DP_INT_MASK] |= value;
26
xlnx_dp_update_irq(s);
27
break;
28
default:
29
--
30
2.25.1
Deleted patch
1
From: Frederic Konrad <fkonrad@amd.com>
2
1
3
When the display port was initially implemented, the device
driver wasn't using interrupts. Now that the display port driver
waits for the vblank interrupt, it has been noticed that the irq
mapping is wrong. So use the value from the Linux device tree and
the UltraScale+ reference manual.
8
9
Signed-off-by: Frederic Konrad <fkonrad@amd.com>
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
11
Acked-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20220601172353.3220232-5-fkonrad@xilinx.com
13
[PMM: refold lines in commit message]
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
hw/arm/xlnx-zynqmp.c | 4 ++--
17
1 file changed, 2 insertions(+), 2 deletions(-)
18
19
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/xlnx-zynqmp.c
22
+++ b/hw/arm/xlnx-zynqmp.c
23
@@ -XXX,XX +XXX,XX @@
24
#define SERDES_SIZE 0x20000
25
26
#define DP_ADDR 0xfd4a0000
27
-#define DP_IRQ 113
28
+#define DP_IRQ 0x77
29
30
#define DPDMA_ADDR 0xfd4c0000
31
-#define DPDMA_IRQ 116
32
+#define DPDMA_IRQ 0x7a
33
34
#define APU_ADDR 0xfd5c0000
35
#define APU_IRQ 153
36
--
37
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Move the decl from ptw.h to internals.h. Provide an inline
4
version for user-only, just as we do for arm_stage1_mmu_idx.
5
Move an #endif down so that the definition in helper.c is
system-only.
7
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220604040607.269301-2-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/internals.h | 5 +++++
14
target/arm/helper.c | 5 ++---
15
2 files changed, 7 insertions(+), 3 deletions(-)
16
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/internals.h
20
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env);
22
* Return the ARMMMUIdx for the stage1 traversal for the current regime.
23
*/
24
#ifdef CONFIG_USER_ONLY
25
+static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
26
+{
27
+ return ARMMMUIdx_Stage1_E0;
28
+}
29
static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
30
{
31
return ARMMMUIdx_Stage1_E0;
32
}
33
#else
34
+ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx);
35
ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
36
#endif
37
38
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper.c
41
+++ b/target/arm/helper.c
42
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
43
}
44
}
45
46
-#endif /* !CONFIG_USER_ONLY */
47
-
48
/* Convert a possible stage1+2 MMU index into the appropriate
49
* stage 1 MMU index
50
*/
51
-static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
52
+ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
53
{
54
switch (mmu_idx) {
55
case ARMMMUIdx_SE10_0:
56
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
57
return mmu_idx;
58
}
59
}
60
+#endif /* !CONFIG_USER_ONLY */
61
62
/* Return true if the translation regime is using LPAE format page tables */
63
static inline bool regime_using_lpae_format(CPUARMState *env,
64
--
65
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-5-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 11 +--
9
target/arm/helper.c | 161 +-------------------------------------------
10
target/arm/ptw.c | 153 +++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 161 insertions(+), 164 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
18
uint32_t *table, uint32_t address);
19
int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
20
int ap, int domain_prot);
21
+int simple_ap_to_rw_prot_is_user(int ap, bool is_user);
22
+
23
+static inline int
24
+simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
25
+{
26
+ return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
27
+}
28
29
bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
30
MMUAccessType access_type, ARMMMUIdx mmu_idx,
31
hwaddr *phys_ptr, int *prot,
32
ARMMMUFaultInfo *fi);
33
-bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
34
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
35
- hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
36
- target_ulong *page_size, ARMMMUFaultInfo *fi);
37
bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
38
MMUAccessType access_type, ARMMMUIdx mmu_idx,
39
hwaddr *phys_ptr, int *prot,
40
diff --git a/target/arm/helper.c b/target/arm/helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.c
43
+++ b/target/arm/helper.c
44
@@ -XXX,XX +XXX,XX @@ int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap, int domain_prot)
45
* @ap: The 2-bit simple AP (AP[2:1])
46
* @is_user: TRUE if accessing from PL0
47
*/
48
-static inline int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
49
+int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
50
{
51
switch (ap) {
52
case 0:
53
@@ -XXX,XX +XXX,XX @@ static inline int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
54
}
55
}
56
57
-static inline int
58
-simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
59
-{
60
- return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
61
-}
62
-
63
/* Translate S2 section/page access permissions to protection flags
64
*
65
* @env: CPUARMState
66
@@ -XXX,XX +XXX,XX @@ uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
67
return 0;
68
}
69
70
-bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
71
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
72
- hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
73
- target_ulong *page_size, ARMMMUFaultInfo *fi)
74
-{
75
- CPUState *cs = env_cpu(env);
76
- ARMCPU *cpu = env_archcpu(env);
77
- int level = 1;
78
- uint32_t table;
79
- uint32_t desc;
80
- uint32_t xn;
81
- uint32_t pxn = 0;
82
- int type;
83
- int ap;
84
- int domain = 0;
85
- int domain_prot;
86
- hwaddr phys_addr;
87
- uint32_t dacr;
88
- bool ns;
89
-
90
- /* Pagetable walk. */
91
- /* Lookup l1 descriptor. */
92
- if (!get_level1_table_address(env, mmu_idx, &table, address)) {
93
- /* Section translation fault if page walk is disabled by PD0 or PD1 */
94
- fi->type = ARMFault_Translation;
95
- goto do_fault;
96
- }
97
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
98
- mmu_idx, fi);
99
- if (fi->type != ARMFault_None) {
100
- goto do_fault;
101
- }
102
- type = (desc & 3);
103
- if (type == 0 || (type == 3 && !cpu_isar_feature(aa32_pxn, cpu))) {
104
- /* Section translation fault, or attempt to use the encoding
105
- * which is Reserved on implementations without PXN.
106
- */
107
- fi->type = ARMFault_Translation;
108
- goto do_fault;
109
- }
110
- if ((type == 1) || !(desc & (1 << 18))) {
111
- /* Page or Section. */
112
- domain = (desc >> 5) & 0x0f;
113
- }
114
- if (regime_el(env, mmu_idx) == 1) {
115
- dacr = env->cp15.dacr_ns;
116
- } else {
117
- dacr = env->cp15.dacr_s;
118
- }
119
- if (type == 1) {
120
- level = 2;
121
- }
122
- domain_prot = (dacr >> (domain * 2)) & 3;
123
- if (domain_prot == 0 || domain_prot == 2) {
124
- /* Section or Page domain fault */
125
- fi->type = ARMFault_Domain;
126
- goto do_fault;
127
- }
128
- if (type != 1) {
129
- if (desc & (1 << 18)) {
130
- /* Supersection. */
131
- phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
132
- phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
133
- phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
134
- *page_size = 0x1000000;
135
- } else {
136
- /* Section. */
137
- phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
138
- *page_size = 0x100000;
139
- }
140
- ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
141
- xn = desc & (1 << 4);
142
- pxn = desc & 1;
143
- ns = extract32(desc, 19, 1);
144
- } else {
145
- if (cpu_isar_feature(aa32_pxn, cpu)) {
146
- pxn = (desc >> 2) & 1;
147
- }
148
- ns = extract32(desc, 3, 1);
149
- /* Lookup l2 entry. */
150
- table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
151
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
152
- mmu_idx, fi);
153
- if (fi->type != ARMFault_None) {
154
- goto do_fault;
155
- }
156
- ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
157
- switch (desc & 3) {
158
- case 0: /* Page translation fault. */
159
- fi->type = ARMFault_Translation;
160
- goto do_fault;
161
- case 1: /* 64k page. */
162
- phys_addr = (desc & 0xffff0000) | (address & 0xffff);
163
- xn = desc & (1 << 15);
164
- *page_size = 0x10000;
165
- break;
166
- case 2: case 3: /* 4k page. */
167
- phys_addr = (desc & 0xfffff000) | (address & 0xfff);
168
- xn = desc & 1;
169
- *page_size = 0x1000;
170
- break;
171
- default:
172
- /* Never happens, but compiler isn't smart enough to tell. */
173
- g_assert_not_reached();
174
- }
175
- }
176
- if (domain_prot == 3) {
177
- *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
178
- } else {
179
- if (pxn && !regime_is_user(env, mmu_idx)) {
180
- xn = 1;
181
- }
182
- if (xn && access_type == MMU_INST_FETCH) {
183
- fi->type = ARMFault_Permission;
184
- goto do_fault;
185
- }
186
-
187
- if (arm_feature(env, ARM_FEATURE_V6K) &&
188
- (regime_sctlr(env, mmu_idx) & SCTLR_AFE)) {
189
- /* The simplified model uses AP[0] as an access control bit. */
190
- if ((ap & 1) == 0) {
191
- /* Access flag fault. */
192
- fi->type = ARMFault_AccessFlag;
193
- goto do_fault;
194
- }
195
- *prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
196
- } else {
197
- *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
198
- }
199
- if (*prot && !xn) {
200
- *prot |= PAGE_EXEC;
201
- }
202
- if (!(*prot & (1 << access_type))) {
203
- /* Access permission fault. */
204
- fi->type = ARMFault_Permission;
205
- goto do_fault;
206
- }
207
- }
208
- if (ns) {
209
- /* The NS bit will (as required by the architecture) have no effect if
210
- * the CPU doesn't support TZ or this is a non-secure translation
211
- * regime, because the attribute will already be non-secure.
212
- */
213
- attrs->secure = false;
214
- }
215
- *phys_ptr = phys_addr;
216
- return false;
217
-do_fault:
218
- fi->domain = domain;
219
- fi->level = level;
220
- return true;
221
-}
222
-
223
/*
224
* check_s2_mmu_setup
225
* @cpu: ARMCPU
226
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
227
index XXXXXXX..XXXXXXX 100644
228
--- a/target/arm/ptw.c
229
+++ b/target/arm/ptw.c
230
@@ -XXX,XX +XXX,XX @@ do_fault:
231
return true;
232
}
233
234
+static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
235
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
236
+ hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
237
+ target_ulong *page_size, ARMMMUFaultInfo *fi)
238
+{
239
+ CPUState *cs = env_cpu(env);
240
+ ARMCPU *cpu = env_archcpu(env);
241
+ int level = 1;
242
+ uint32_t table;
243
+ uint32_t desc;
244
+ uint32_t xn;
245
+ uint32_t pxn = 0;
246
+ int type;
247
+ int ap;
248
+ int domain = 0;
249
+ int domain_prot;
250
+ hwaddr phys_addr;
251
+ uint32_t dacr;
252
+ bool ns;
253
+
254
+ /* Pagetable walk. */
255
+ /* Lookup l1 descriptor. */
256
+ if (!get_level1_table_address(env, mmu_idx, &table, address)) {
257
+ /* Section translation fault if page walk is disabled by PD0 or PD1 */
258
+ fi->type = ARMFault_Translation;
259
+ goto do_fault;
260
+ }
261
+ desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
262
+ mmu_idx, fi);
263
+ if (fi->type != ARMFault_None) {
264
+ goto do_fault;
265
+ }
266
+ type = (desc & 3);
267
+ if (type == 0 || (type == 3 && !cpu_isar_feature(aa32_pxn, cpu))) {
268
+ /* Section translation fault, or attempt to use the encoding
269
+ * which is Reserved on implementations without PXN.
270
+ */
271
+ fi->type = ARMFault_Translation;
272
+ goto do_fault;
273
+ }
274
+ if ((type == 1) || !(desc & (1 << 18))) {
275
+ /* Page or Section. */
276
+ domain = (desc >> 5) & 0x0f;
277
+ }
278
+ if (regime_el(env, mmu_idx) == 1) {
279
+ dacr = env->cp15.dacr_ns;
280
+ } else {
281
+ dacr = env->cp15.dacr_s;
282
+ }
283
+ if (type == 1) {
284
+ level = 2;
285
+ }
286
+ domain_prot = (dacr >> (domain * 2)) & 3;
287
+ if (domain_prot == 0 || domain_prot == 2) {
288
+ /* Section or Page domain fault */
289
+ fi->type = ARMFault_Domain;
290
+ goto do_fault;
291
+ }
292
+ if (type != 1) {
293
+ if (desc & (1 << 18)) {
294
+ /* Supersection. */
295
+ phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
296
+ phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
297
+ phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
298
+ *page_size = 0x1000000;
299
+ } else {
300
+ /* Section. */
301
+ phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
302
+ *page_size = 0x100000;
303
+ }
304
+ ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
305
+ xn = desc & (1 << 4);
306
+ pxn = desc & 1;
307
+ ns = extract32(desc, 19, 1);
308
+ } else {
309
+ if (cpu_isar_feature(aa32_pxn, cpu)) {
310
+ pxn = (desc >> 2) & 1;
311
+ }
312
+ ns = extract32(desc, 3, 1);
313
+ /* Lookup l2 entry. */
314
+ table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
315
+ desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
316
+ mmu_idx, fi);
317
+ if (fi->type != ARMFault_None) {
318
+ goto do_fault;
319
+ }
320
+ ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
321
+ switch (desc & 3) {
322
+ case 0: /* Page translation fault. */
323
+ fi->type = ARMFault_Translation;
324
+ goto do_fault;
325
+ case 1: /* 64k page. */
326
+ phys_addr = (desc & 0xffff0000) | (address & 0xffff);
327
+ xn = desc & (1 << 15);
328
+ *page_size = 0x10000;
329
+ break;
330
+ case 2: case 3: /* 4k page. */
331
+ phys_addr = (desc & 0xfffff000) | (address & 0xfff);
332
+ xn = desc & 1;
333
+ *page_size = 0x1000;
334
+ break;
335
+ default:
336
+ /* Never happens, but compiler isn't smart enough to tell. */
337
+ g_assert_not_reached();
338
+ }
339
+ }
340
+ if (domain_prot == 3) {
341
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
342
+ } else {
343
+ if (pxn && !regime_is_user(env, mmu_idx)) {
344
+ xn = 1;
345
+ }
346
+ if (xn && access_type == MMU_INST_FETCH) {
347
+ fi->type = ARMFault_Permission;
348
+ goto do_fault;
349
+ }
350
+
351
+ if (arm_feature(env, ARM_FEATURE_V6K) &&
352
+ (regime_sctlr(env, mmu_idx) & SCTLR_AFE)) {
353
+ /* The simplified model uses AP[0] as an access control bit. */
354
+ if ((ap & 1) == 0) {
355
+ /* Access flag fault. */
356
+ fi->type = ARMFault_AccessFlag;
357
+ goto do_fault;
358
+ }
359
+ *prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
360
+ } else {
361
+ *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
362
+ }
363
+ if (*prot && !xn) {
364
+ *prot |= PAGE_EXEC;
365
+ }
366
+ if (!(*prot & (1 << access_type))) {
367
+ /* Access permission fault. */
368
+ fi->type = ARMFault_Permission;
369
+ goto do_fault;
370
+ }
371
+ }
372
+ if (ns) {
373
+ /* The NS bit will (as required by the architecture) have no effect if
374
+ * the CPU doesn't support TZ or this is a non-secure translation
375
+ * regime, because the attribute will already be non-secure.
376
+ */
377
+ attrs->secure = false;
378
+ }
379
+ *phys_ptr = phys_addr;
380
+ return false;
381
+do_fault:
382
+ fi->domain = domain;
383
+ fi->level = level;
384
+ return true;
385
+}
386
+
387
/**
388
* get_phys_addr - get the physical address for this virtual address
389
*
390
--
391
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-6-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 4 ---
9
target/arm/helper.c | 85 ---------------------------------------------
10
target/arm/ptw.c | 85 +++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 85 insertions(+), 89 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
19
}
20
21
-bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
22
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
23
- hwaddr *phys_ptr, int *prot,
24
- ARMMMUFaultInfo *fi);
25
bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
26
MMUAccessType access_type, ARMMMUIdx mmu_idx,
27
hwaddr *phys_ptr, int *prot,
28
diff --git a/target/arm/helper.c b/target/arm/helper.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/helper.c
31
+++ b/target/arm/helper.c
32
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
33
return ret;
34
}
35
36
-bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
37
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
38
- hwaddr *phys_ptr, int *prot,
39
- ARMMMUFaultInfo *fi)
40
-{
41
- int n;
42
- uint32_t mask;
43
- uint32_t base;
44
- bool is_user = regime_is_user(env, mmu_idx);
45
-
46
- if (regime_translation_disabled(env, mmu_idx)) {
47
- /* MPU disabled. */
48
- *phys_ptr = address;
49
- *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
50
- return false;
51
- }
52
-
53
- *phys_ptr = address;
54
- for (n = 7; n >= 0; n--) {
55
- base = env->cp15.c6_region[n];
56
- if ((base & 1) == 0) {
57
- continue;
58
- }
59
- mask = 1 << ((base >> 1) & 0x1f);
60
- /* Keep this shift separate from the above to avoid an
61
- (undefined) << 32. */
62
- mask = (mask << 1) - 1;
63
- if (((base ^ address) & ~mask) == 0) {
64
- break;
65
- }
66
- }
67
- if (n < 0) {
68
- fi->type = ARMFault_Background;
69
- return true;
70
- }
71
-
72
- if (access_type == MMU_INST_FETCH) {
73
- mask = env->cp15.pmsav5_insn_ap;
74
- } else {
75
- mask = env->cp15.pmsav5_data_ap;
76
- }
77
- mask = (mask >> (n * 4)) & 0xf;
78
- switch (mask) {
79
- case 0:
80
- fi->type = ARMFault_Permission;
81
- fi->level = 1;
82
- return true;
83
- case 1:
84
- if (is_user) {
85
- fi->type = ARMFault_Permission;
86
- fi->level = 1;
87
- return true;
88
- }
89
- *prot = PAGE_READ | PAGE_WRITE;
90
- break;
91
- case 2:
92
- *prot = PAGE_READ;
93
- if (!is_user) {
94
- *prot |= PAGE_WRITE;
95
- }
96
- break;
97
- case 3:
98
- *prot = PAGE_READ | PAGE_WRITE;
99
- break;
100
- case 5:
101
- if (is_user) {
102
- fi->type = ARMFault_Permission;
103
- fi->level = 1;
104
- return true;
105
- }
106
- *prot = PAGE_READ;
107
- break;
108
- case 6:
109
- *prot = PAGE_READ;
110
- break;
111
- default:
112
- /* Bad permission. */
113
- fi->type = ARMFault_Permission;
114
- fi->level = 1;
115
- return true;
116
- }
117
- *prot |= PAGE_EXEC;
118
- return false;
119
-}
120
-
121
/* Combine either inner or outer cacheability attributes for normal
122
* memory, according to table D4-42 and pseudocode procedure
123
* CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
124
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/ptw.c
127
+++ b/target/arm/ptw.c
128
@@ -XXX,XX +XXX,XX @@ do_fault:
129
return true;
130
}
131
132
+static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
133
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
134
+ hwaddr *phys_ptr, int *prot,
135
+ ARMMMUFaultInfo *fi)
136
+{
137
+ int n;
138
+ uint32_t mask;
139
+ uint32_t base;
140
+ bool is_user = regime_is_user(env, mmu_idx);
141
+
142
+ if (regime_translation_disabled(env, mmu_idx)) {
143
+ /* MPU disabled. */
144
+ *phys_ptr = address;
145
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
146
+ return false;
147
+ }
148
+
149
+ *phys_ptr = address;
150
+ for (n = 7; n >= 0; n--) {
151
+ base = env->cp15.c6_region[n];
152
+ if ((base & 1) == 0) {
153
+ continue;
154
+ }
155
+ mask = 1 << ((base >> 1) & 0x1f);
156
+ /* Keep this shift separate from the above to avoid an
157
+ (undefined) << 32. */
158
+ mask = (mask << 1) - 1;
159
+ if (((base ^ address) & ~mask) == 0) {
160
+ break;
161
+ }
162
+ }
163
+ if (n < 0) {
164
+ fi->type = ARMFault_Background;
165
+ return true;
166
+ }
167
+
168
+ if (access_type == MMU_INST_FETCH) {
169
+ mask = env->cp15.pmsav5_insn_ap;
170
+ } else {
171
+ mask = env->cp15.pmsav5_data_ap;
172
+ }
173
+ mask = (mask >> (n * 4)) & 0xf;
174
+ switch (mask) {
175
+ case 0:
176
+ fi->type = ARMFault_Permission;
177
+ fi->level = 1;
178
+ return true;
179
+ case 1:
180
+ if (is_user) {
181
+ fi->type = ARMFault_Permission;
182
+ fi->level = 1;
183
+ return true;
184
+ }
185
+ *prot = PAGE_READ | PAGE_WRITE;
186
+ break;
187
+ case 2:
188
+ *prot = PAGE_READ;
189
+ if (!is_user) {
190
+ *prot |= PAGE_WRITE;
191
+ }
192
+ break;
193
+ case 3:
194
+ *prot = PAGE_READ | PAGE_WRITE;
195
+ break;
196
+ case 5:
197
+ if (is_user) {
198
+ fi->type = ARMFault_Permission;
199
+ fi->level = 1;
200
+ return true;
201
+ }
202
+ *prot = PAGE_READ;
203
+ break;
204
+ case 6:
205
+ *prot = PAGE_READ;
206
+ break;
207
+ default:
208
+ /* Bad permission. */
209
+ fi->type = ARMFault_Permission;
210
+ fi->level = 1;
211
+ return true;
212
+ }
213
+ *prot |= PAGE_EXEC;
214
+ return false;
215
+}
216
+
217
/**
218
* get_phys_addr - get the physical address for this virtual address
219
*
220
--
221
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-8-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 10 +--
9
target/arm/helper.c | 194 +-------------------------------------------
10
target/arm/ptw.c | 190 +++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 198 insertions(+), 196 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
19
}
20
21
+bool m_is_ppb_region(CPUARMState *env, uint32_t address);
22
+bool m_is_system_region(CPUARMState *env, uint32_t address);
23
+
24
void get_phys_addr_pmsav7_default(CPUARMState *env,
25
ARMMMUIdx mmu_idx,
26
int32_t address, int *prot);
27
-bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
28
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
29
- hwaddr *phys_ptr, int *prot,
30
- target_ulong *page_size,
31
- ARMMMUFaultInfo *fi);
32
+bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user);
33
+
34
bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
35
MMUAccessType access_type, ARMMMUIdx mmu_idx,
36
hwaddr *phys_ptr, MemTxAttrs *txattrs,
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@ do_fault:
42
return true;
43
}
44
45
-static bool pmsav7_use_background_region(ARMCPU *cpu,
46
- ARMMMUIdx mmu_idx, bool is_user)
47
+bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user)
48
{
49
/* Return true if we should use the default memory map as a
50
* "background" region if there are no hits against any MPU regions.
51
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_use_background_region(ARMCPU *cpu,
52
}
53
}
54
55
-static inline bool m_is_ppb_region(CPUARMState *env, uint32_t address)
56
+bool m_is_ppb_region(CPUARMState *env, uint32_t address)
57
{
58
/* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */
59
return arm_feature(env, ARM_FEATURE_M) &&
60
extract32(address, 20, 12) == 0xe00;
61
}
62
63
-static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
64
+bool m_is_system_region(CPUARMState *env, uint32_t address)
65
{
66
/* True if address is in the M profile system region
67
* 0xe0000000 - 0xffffffff
68
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
69
return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
70
}
71
72
-bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
73
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
74
- hwaddr *phys_ptr, int *prot,
75
- target_ulong *page_size,
76
- ARMMMUFaultInfo *fi)
77
-{
78
- ARMCPU *cpu = env_archcpu(env);
79
- int n;
80
- bool is_user = regime_is_user(env, mmu_idx);
81
-
82
- *phys_ptr = address;
83
- *page_size = TARGET_PAGE_SIZE;
84
- *prot = 0;
85
-
86
- if (regime_translation_disabled(env, mmu_idx) ||
87
- m_is_ppb_region(env, address)) {
88
- /* MPU disabled or M profile PPB access: use default memory map.
89
- * The other case which uses the default memory map in the
90
- * v7M ARM ARM pseudocode is exception vector reads from the vector
91
- * table. In QEMU those accesses are done in arm_v7m_load_vector(),
92
- * which always does a direct read using address_space_ldl(), rather
93
- * than going via this function, so we don't need to check that here.
94
- */
95
- get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
96
- } else { /* MPU enabled */
97
- for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
98
- /* region search */
99
- uint32_t base = env->pmsav7.drbar[n];
100
- uint32_t rsize = extract32(env->pmsav7.drsr[n], 1, 5);
101
- uint32_t rmask;
102
- bool srdis = false;
103
-
104
- if (!(env->pmsav7.drsr[n] & 0x1)) {
105
- continue;
106
- }
107
-
108
- if (!rsize) {
109
- qemu_log_mask(LOG_GUEST_ERROR,
110
- "DRSR[%d]: Rsize field cannot be 0\n", n);
111
- continue;
112
- }
113
- rsize++;
114
- rmask = (1ull << rsize) - 1;
115
-
116
- if (base & rmask) {
117
- qemu_log_mask(LOG_GUEST_ERROR,
118
- "DRBAR[%d]: 0x%" PRIx32 " misaligned "
119
- "to DRSR region size, mask = 0x%" PRIx32 "\n",
120
- n, base, rmask);
121
- continue;
122
- }
123
-
124
- if (address < base || address > base + rmask) {
125
- /*
126
- * Address not in this region. We must check whether the
127
- * region covers addresses in the same page as our address.
128
- * In that case we must not report a size that covers the
129
- * whole page for a subsequent hit against a different MPU
130
- * region or the background region, because it would result in
131
- * incorrect TLB hits for subsequent accesses to addresses that
132
- * are in this MPU region.
133
- */
134
- if (ranges_overlap(base, rmask,
135
- address & TARGET_PAGE_MASK,
136
- TARGET_PAGE_SIZE)) {
137
- *page_size = 1;
138
- }
139
- continue;
140
- }
141
-
142
- /* Region matched */
143
-
144
- if (rsize >= 8) { /* no subregions for regions < 256 bytes */
145
- int i, snd;
146
- uint32_t srdis_mask;
147
-
148
- rsize -= 3; /* sub region size (power of 2) */
149
- snd = ((address - base) >> rsize) & 0x7;
150
- srdis = extract32(env->pmsav7.drsr[n], snd + 8, 1);
151
-
152
- srdis_mask = srdis ? 0x3 : 0x0;
153
- for (i = 2; i <= 8 && rsize < TARGET_PAGE_BITS; i *= 2) {
154
- /* This will check in groups of 2, 4 and then 8, whether
155
- * the subregion bits are consistent. rsize is incremented
156
- * back up to give the region size, considering consistent
157
- * adjacent subregions as one region. Stop testing if rsize
158
- * is already big enough for an entire QEMU page.
159
- */
160
- int snd_rounded = snd & ~(i - 1);
161
- uint32_t srdis_multi = extract32(env->pmsav7.drsr[n],
162
- snd_rounded + 8, i);
163
- if (srdis_mask ^ srdis_multi) {
164
- break;
165
- }
166
- srdis_mask = (srdis_mask << i) | srdis_mask;
167
- rsize++;
168
- }
169
- }
170
- if (srdis) {
171
- continue;
172
- }
173
- if (rsize < TARGET_PAGE_BITS) {
174
- *page_size = 1 << rsize;
175
- }
176
- break;
177
- }
178
-
179
- if (n == -1) { /* no hits */
180
- if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
181
- /* background fault */
182
- fi->type = ARMFault_Background;
183
- return true;
184
- }
185
- get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
186
- } else { /* a MPU hit! */
187
- uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
188
- uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
189
-
190
- if (m_is_system_region(env, address)) {
191
- /* System space is always execute never */
192
- xn = 1;
193
- }
194
-
195
- if (is_user) { /* User mode AP bit decoding */
196
- switch (ap) {
197
- case 0:
198
- case 1:
199
- case 5:
200
- break; /* no access */
201
- case 3:
202
- *prot |= PAGE_WRITE;
203
- /* fall through */
204
- case 2:
205
- case 6:
206
- *prot |= PAGE_READ | PAGE_EXEC;
207
- break;
208
- case 7:
209
- /* for v7M, same as 6; for R profile a reserved value */
210
- if (arm_feature(env, ARM_FEATURE_M)) {
211
- *prot |= PAGE_READ | PAGE_EXEC;
212
- break;
213
- }
214
- /* fall through */
215
- default:
216
- qemu_log_mask(LOG_GUEST_ERROR,
217
- "DRACR[%d]: Bad value for AP bits: 0x%"
218
- PRIx32 "\n", n, ap);
219
- }
220
- } else { /* Priv. mode AP bits decoding */
221
- switch (ap) {
222
- case 0:
223
- break; /* no access */
224
- case 1:
225
- case 2:
226
- case 3:
227
- *prot |= PAGE_WRITE;
228
- /* fall through */
229
- case 5:
230
- case 6:
231
- *prot |= PAGE_READ | PAGE_EXEC;
232
- break;
233
- case 7:
234
- /* for v7M, same as 6; for R profile a reserved value */
235
- if (arm_feature(env, ARM_FEATURE_M)) {
236
- *prot |= PAGE_READ | PAGE_EXEC;
237
- break;
238
- }
239
- /* fall through */
240
- default:
241
- qemu_log_mask(LOG_GUEST_ERROR,
242
- "DRACR[%d]: Bad value for AP bits: 0x%"
243
- PRIx32 "\n", n, ap);
244
- }
245
- }
246
-
247
- /* execute never */
248
- if (xn) {
249
- *prot &= ~PAGE_EXEC;
250
- }
251
- }
252
- }
253
-
254
- fi->type = ARMFault_Permission;
255
- fi->level = 1;
256
- return !(*prot & (1 << access_type));
257
-}
258
-
259
static bool v8m_is_sau_exempt(CPUARMState *env,
260
uint32_t address, MMUAccessType access_type)
261
{
262
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
263
index XXXXXXX..XXXXXXX 100644
264
--- a/target/arm/ptw.c
265
+++ b/target/arm/ptw.c
266
@@ -XXX,XX +XXX,XX @@
267
268
#include "qemu/osdep.h"
269
#include "qemu/log.h"
270
+#include "qemu/range.h"
271
#include "cpu.h"
272
#include "internals.h"
273
#include "ptw.h"
274
@@ -XXX,XX +XXX,XX @@ void get_phys_addr_pmsav7_default(CPUARMState *env,
275
}
276
}
277
278
+static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
279
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
280
+ hwaddr *phys_ptr, int *prot,
281
+ target_ulong *page_size,
282
+ ARMMMUFaultInfo *fi)
283
+{
284
+ ARMCPU *cpu = env_archcpu(env);
285
+ int n;
286
+ bool is_user = regime_is_user(env, mmu_idx);
287
+
288
+ *phys_ptr = address;
289
+ *page_size = TARGET_PAGE_SIZE;
290
+ *prot = 0;
291
+
292
+ if (regime_translation_disabled(env, mmu_idx) ||
293
+ m_is_ppb_region(env, address)) {
294
+ /*
295
+ * MPU disabled or M profile PPB access: use default memory map.
296
+ * The other case which uses the default memory map in the
297
+ * v7M ARM ARM pseudocode is exception vector reads from the vector
298
+ * table. In QEMU those accesses are done in arm_v7m_load_vector(),
299
+ * which always does a direct read using address_space_ldl(), rather
300
+ * than going via this function, so we don't need to check that here.
301
+ */
302
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
303
+ } else { /* MPU enabled */
304
+ for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
305
+ /* region search */
306
+ uint32_t base = env->pmsav7.drbar[n];
307
+ uint32_t rsize = extract32(env->pmsav7.drsr[n], 1, 5);
308
+ uint32_t rmask;
309
+ bool srdis = false;
310
+
311
+ if (!(env->pmsav7.drsr[n] & 0x1)) {
312
+ continue;
313
+ }
314
+
315
+ if (!rsize) {
316
+ qemu_log_mask(LOG_GUEST_ERROR,
317
+ "DRSR[%d]: Rsize field cannot be 0\n", n);
318
+ continue;
319
+ }
320
+ rsize++;
321
+ rmask = (1ull << rsize) - 1;
322
+
323
+ if (base & rmask) {
324
+ qemu_log_mask(LOG_GUEST_ERROR,
325
+ "DRBAR[%d]: 0x%" PRIx32 " misaligned "
326
+ "to DRSR region size, mask = 0x%" PRIx32 "\n",
327
+ n, base, rmask);
328
+ continue;
329
+ }
330
+
331
+ if (address < base || address > base + rmask) {
332
+ /*
333
+ * Address not in this region. We must check whether the
334
+ * region covers addresses in the same page as our address.
335
+ * In that case we must not report a size that covers the
336
+ * whole page for a subsequent hit against a different MPU
337
+ * region or the background region, because it would result in
338
+ * incorrect TLB hits for subsequent accesses to addresses that
339
+ * are in this MPU region.
340
+ */
341
+ if (ranges_overlap(base, rmask,
342
+ address & TARGET_PAGE_MASK,
343
+ TARGET_PAGE_SIZE)) {
344
+ *page_size = 1;
345
+ }
346
+ continue;
347
+ }
348
+
349
+ /* Region matched */
350
+
351
+ if (rsize >= 8) { /* no subregions for regions < 256 bytes */
352
+ int i, snd;
353
+ uint32_t srdis_mask;
354
+
355
+ rsize -= 3; /* sub region size (power of 2) */
356
+ snd = ((address - base) >> rsize) & 0x7;
357
+ srdis = extract32(env->pmsav7.drsr[n], snd + 8, 1);
358
+
359
+ srdis_mask = srdis ? 0x3 : 0x0;
360
+ for (i = 2; i <= 8 && rsize < TARGET_PAGE_BITS; i *= 2) {
361
+ /*
362
+ * This will check in groups of 2, 4 and then 8, whether
363
+ * the subregion bits are consistent. rsize is incremented
364
+ * back up to give the region size, considering consistent
365
+ * adjacent subregions as one region. Stop testing if rsize
366
+ * is already big enough for an entire QEMU page.
367
+ */
368
+ int snd_rounded = snd & ~(i - 1);
369
+ uint32_t srdis_multi = extract32(env->pmsav7.drsr[n],
370
+ snd_rounded + 8, i);
371
+ if (srdis_mask ^ srdis_multi) {
372
+ break;
373
+ }
374
+ srdis_mask = (srdis_mask << i) | srdis_mask;
375
+ rsize++;
376
+ }
377
+ }
378
+ if (srdis) {
379
+ continue;
380
+ }
381
+ if (rsize < TARGET_PAGE_BITS) {
382
+ *page_size = 1 << rsize;
383
+ }
384
+ break;
385
+ }
386
+
387
+ if (n == -1) { /* no hits */
388
+ if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
389
+ /* background fault */
390
+ fi->type = ARMFault_Background;
391
+ return true;
392
+ }
393
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
394
+ } else { /* an MPU hit! */
395
+ uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
396
+ uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
397
+
398
+ if (m_is_system_region(env, address)) {
399
+ /* System space is always execute never */
400
+ xn = 1;
401
+ }
402
+
403
+ if (is_user) { /* User mode AP bit decoding */
404
+ switch (ap) {
405
+ case 0:
406
+ case 1:
407
+ case 5:
408
+ break; /* no access */
409
+ case 3:
410
+ *prot |= PAGE_WRITE;
411
+ /* fall through */
412
+ case 2:
413
+ case 6:
414
+ *prot |= PAGE_READ | PAGE_EXEC;
415
+ break;
416
+ case 7:
417
+ /* for v7M, same as 6; for R profile a reserved value */
418
+ if (arm_feature(env, ARM_FEATURE_M)) {
419
+ *prot |= PAGE_READ | PAGE_EXEC;
420
+ break;
421
+ }
422
+ /* fall through */
423
+ default:
424
+ qemu_log_mask(LOG_GUEST_ERROR,
425
+ "DRACR[%d]: Bad value for AP bits: 0x%"
426
+ PRIx32 "\n", n, ap);
427
+ }
428
+ } else { /* Priv. mode AP bits decoding */
429
+ switch (ap) {
430
+ case 0:
431
+ break; /* no access */
432
+ case 1:
433
+ case 2:
434
+ case 3:
435
+ *prot |= PAGE_WRITE;
436
+ /* fall through */
437
+ case 5:
438
+ case 6:
439
+ *prot |= PAGE_READ | PAGE_EXEC;
440
+ break;
441
+ case 7:
442
+ /* for v7M, same as 6; for R profile a reserved value */
443
+ if (arm_feature(env, ARM_FEATURE_M)) {
444
+ *prot |= PAGE_READ | PAGE_EXEC;
445
+ break;
446
+ }
447
+ /* fall through */
448
+ default:
449
+ qemu_log_mask(LOG_GUEST_ERROR,
450
+ "DRACR[%d]: Bad value for AP bits: 0x%"
451
+ PRIx32 "\n", n, ap);
452
+ }
453
+ }
454
+
455
+ /* execute never */
456
+ if (xn) {
457
+ *prot &= ~PAGE_EXEC;
458
+ }
459
+ }
460
+ }
461
+
462
+ fi->type = ARMFault_Permission;
463
+ fi->level = 1;
464
+ return !(*prot & (1 << access_type));
465
+}
466
+
467
/**
468
* get_phys_addr - get the physical address for this virtual address
469
*
470
--
471
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-9-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 5 ---
9
target/arm/helper.c | 75 -------------------------------------------
10
target/arm/ptw.c | 77 +++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 77 insertions(+), 80 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ void get_phys_addr_pmsav7_default(CPUARMState *env,
18
int32_t address, int *prot);
19
bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user);
20
21
-bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
22
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
23
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
24
- int *prot, target_ulong *page_size,
25
- ARMMMUFaultInfo *fi);
26
bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
27
MMUAccessType access_type, ARMMMUIdx mmu_idx,
28
bool s1_is_el0,
29
diff --git a/target/arm/helper.c b/target/arm/helper.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/helper.c
32
+++ b/target/arm/helper.c
33
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
34
return !(*prot & (1 << access_type));
35
}
36
37
-
38
-bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
39
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
40
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
41
- int *prot, target_ulong *page_size,
42
- ARMMMUFaultInfo *fi)
43
-{
44
- uint32_t secure = regime_is_secure(env, mmu_idx);
45
- V8M_SAttributes sattrs = {};
46
- bool ret;
47
- bool mpu_is_subpage;
48
-
49
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
50
- v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
51
- if (access_type == MMU_INST_FETCH) {
52
- /* Instruction fetches always use the MMU bank and the
53
- * transaction attribute determined by the fetch address,
54
- * regardless of CPU state. This is painful for QEMU
55
- * to handle, because it would mean we need to encode
56
- * into the mmu_idx not just the (user, negpri) information
57
- * for the current security state but also that for the
58
- * other security state, which would balloon the number
59
- * of mmu_idx values needed alarmingly.
60
- * Fortunately we can avoid this because it's not actually
61
- * possible to arbitrarily execute code from memory with
62
- * the wrong security attribute: it will always generate
63
- * an exception of some kind or another, apart from the
64
- * special case of an NS CPU executing an SG instruction
65
- * in S&NSC memory. So we always just fail the translation
66
- * here and sort things out in the exception handler
67
- * (including possibly emulating an SG instruction).
68
- */
69
- if (sattrs.ns != !secure) {
70
- if (sattrs.nsc) {
71
- fi->type = ARMFault_QEMU_NSCExec;
72
- } else {
73
- fi->type = ARMFault_QEMU_SFault;
74
- }
75
- *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
76
- *phys_ptr = address;
77
- *prot = 0;
78
- return true;
79
- }
80
- } else {
81
- /* For data accesses we always use the MMU bank indicated
82
- * by the current CPU state, but the security attributes
83
- * might downgrade a secure access to nonsecure.
84
- */
85
- if (sattrs.ns) {
86
- txattrs->secure = false;
87
- } else if (!secure) {
88
- /* NS access to S memory must fault.
89
- * Architecturally we should first check whether the
90
- * MPU information for this address indicates that we
91
- * are doing an unaligned access to Device memory, which
92
- * should generate a UsageFault instead. QEMU does not
93
- * currently check for that kind of unaligned access though.
94
- * If we added it we would need to do so as a special case
95
- * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
96
- */
97
- fi->type = ARMFault_QEMU_SFault;
98
- *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
99
- *phys_ptr = address;
100
- *prot = 0;
101
- return true;
102
- }
103
- }
104
- }
105
-
106
- ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
107
- txattrs, prot, &mpu_is_subpage, fi, NULL);
108
- *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
109
- return ret;
110
-}
111
-
112
/* Combine either inner or outer cacheability attributes for normal
113
* memory, according to table D4-42 and pseudocode procedure
114
* CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
115
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/ptw.c
118
+++ b/target/arm/ptw.c
119
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
120
return !(*prot & (1 << access_type));
121
}
122
123
+static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
124
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
125
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
126
+ int *prot, target_ulong *page_size,
127
+ ARMMMUFaultInfo *fi)
128
+{
129
+ uint32_t secure = regime_is_secure(env, mmu_idx);
130
+ V8M_SAttributes sattrs = {};
131
+ bool ret;
132
+ bool mpu_is_subpage;
133
+
134
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
135
+ v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
136
+ if (access_type == MMU_INST_FETCH) {
137
+ /*
138
+ * Instruction fetches always use the MMU bank and the
139
+ * transaction attribute determined by the fetch address,
140
+ * regardless of CPU state. This is painful for QEMU
141
+ * to handle, because it would mean we need to encode
142
+ * into the mmu_idx not just the (user, negpri) information
143
+ * for the current security state but also that for the
144
+ * other security state, which would balloon the number
145
+ * of mmu_idx values needed alarmingly.
146
+ * Fortunately we can avoid this because it's not actually
147
+ * possible to arbitrarily execute code from memory with
148
+ * the wrong security attribute: it will always generate
149
+ * an exception of some kind or another, apart from the
150
+ * special case of an NS CPU executing an SG instruction
151
+ * in S&NSC memory. So we always just fail the translation
152
+ * here and sort things out in the exception handler
153
+ * (including possibly emulating an SG instruction).
154
+ */
155
+ if (sattrs.ns != !secure) {
156
+ if (sattrs.nsc) {
157
+ fi->type = ARMFault_QEMU_NSCExec;
158
+ } else {
159
+ fi->type = ARMFault_QEMU_SFault;
160
+ }
161
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
162
+ *phys_ptr = address;
163
+ *prot = 0;
164
+ return true;
165
+ }
166
+ } else {
167
+ /*
168
+ * For data accesses we always use the MMU bank indicated
169
+ * by the current CPU state, but the security attributes
170
+ * might downgrade a secure access to nonsecure.
171
+ */
172
+ if (sattrs.ns) {
173
+ txattrs->secure = false;
174
+ } else if (!secure) {
175
+ /*
176
+ * NS access to S memory must fault.
177
+ * Architecturally we should first check whether the
178
+ * MPU information for this address indicates that we
179
+ * are doing an unaligned access to Device memory, which
180
+ * should generate a UsageFault instead. QEMU does not
181
+ * currently check for that kind of unaligned access though.
182
+ * If we added it we would need to do so as a special case
183
+ * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
184
+ */
185
+ fi->type = ARMFault_QEMU_SFault;
186
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
187
+ *phys_ptr = address;
188
+ *prot = 0;
189
+ return true;
190
+ }
191
+ }
192
+ }
193
+
194
+ ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
195
+ txattrs, prot, &mpu_is_subpage, fi, NULL);
196
+ *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
197
+ return ret;
198
+}
199
+
200
/**
201
* get_phys_addr - get the physical address for this virtual address
202
*
203
--
204
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-11-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 2 --
9
target/arm/helper.c | 19 -------------------
10
target/arm/ptw.c | 21 +++++++++++++++++++++
11
3 files changed, 21 insertions(+), 21 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
bool m_is_ppb_region(CPUARMState *env, uint32_t address);
19
bool m_is_system_region(CPUARMState *env, uint32_t address);
20
21
-bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user);
22
-
23
bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
24
MMUAccessType access_type, ARMMMUIdx mmu_idx,
25
bool s1_is_el0,
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ do_fault:
31
return true;
32
}
33
34
-bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user)
35
-{
36
- /* Return true if we should use the default memory map as a
37
- * "background" region if there are no hits against any MPU regions.
38
- */
39
- CPUARMState *env = &cpu->env;
40
-
41
- if (is_user) {
42
- return false;
43
- }
44
-
45
- if (arm_feature(env, ARM_FEATURE_M)) {
46
- return env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)]
47
- & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
48
- } else {
49
- return regime_sctlr(env, mmu_idx) & SCTLR_BR;
50
- }
51
-}
52
-
53
bool m_is_ppb_region(CPUARMState *env, uint32_t address)
54
{
55
/* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */
56
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/ptw.c
59
+++ b/target/arm/ptw.c
60
@@ -XXX,XX +XXX,XX @@ static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
61
}
62
}
63
64
+static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
65
+ bool is_user)
66
+{
67
+ /*
68
+ * Return true if we should use the default memory map as a
69
+ * "background" region if there are no hits against any MPU regions.
70
+ */
71
+ CPUARMState *env = &cpu->env;
72
+
73
+ if (is_user) {
74
+ return false;
75
+ }
76
+
77
+ if (arm_feature(env, ARM_FEATURE_M)) {
78
+ return env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)]
79
+ & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
80
+ } else {
81
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
82
+ }
83
+}
84
+
85
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
86
MMUAccessType access_type, ARMMMUIdx mmu_idx,
87
hwaddr *phys_ptr, int *prot,
88
--
89
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-13-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 3 ---
9
target/arm/helper.c | 15 ---------------
10
target/arm/ptw.c | 16 ++++++++++++++++
11
3 files changed, 16 insertions(+), 18 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
19
}
20
21
-bool m_is_ppb_region(CPUARMState *env, uint32_t address);
22
-bool m_is_system_region(CPUARMState *env, uint32_t address);
23
-
24
bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
25
MMUAccessType access_type, ARMMMUIdx mmu_idx,
26
bool s1_is_el0,
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
30
+++ b/target/arm/helper.c
31
@@ -XXX,XX +XXX,XX @@ do_fault:
32
return true;
33
}
34
35
-bool m_is_ppb_region(CPUARMState *env, uint32_t address)
36
-{
37
- /* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */
38
- return arm_feature(env, ARM_FEATURE_M) &&
39
- extract32(address, 20, 12) == 0xe00;
40
-}
41
-
42
-bool m_is_system_region(CPUARMState *env, uint32_t address)
43
-{
44
- /* True if address is in the M profile system region
45
- * 0xe0000000 - 0xffffffff
46
- */
47
- return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
48
-}
49
-
50
/* Combine either inner or outer cacheability attributes for normal
51
* memory, according to table D4-42 and pseudocode procedure
52
* CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
53
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/ptw.c
56
+++ b/target/arm/ptw.c
57
@@ -XXX,XX +XXX,XX @@ static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
58
}
59
}
60
61
+static bool m_is_ppb_region(CPUARMState *env, uint32_t address)
62
+{
63
+ /* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */
64
+ return arm_feature(env, ARM_FEATURE_M) &&
65
+ extract32(address, 20, 12) == 0xe00;
66
+}
67
+
68
+static bool m_is_system_region(CPUARMState *env, uint32_t address)
69
+{
70
+ /*
71
+ * True if address is in the M profile system region
72
+ * 0xe0000000 - 0xffffffff
73
+ */
74
+ return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
75
+}
76
+
77
static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
78
bool is_user)
79
{
80
--
81
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-14-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 4 ++--
9
target/arm/helper.c | 26 +-------------------------
10
target/arm/ptw.c | 23 +++++++++++++++++++++++
11
3 files changed, 26 insertions(+), 27 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
18
19
bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
20
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
21
+uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
22
+
23
ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
24
ARMCacheAttrs s1, ARMCacheAttrs s2);
25
26
-bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
27
- uint32_t *table, uint32_t address);
28
int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
29
int ap, int domain_prot);
30
int simple_ap_to_rw_prot_is_user(int ap, bool is_user);
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/helper.c
34
+++ b/target/arm/helper.c
35
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_big_endian(CPUARMState *env,
36
}
37
38
/* Return the TTBR associated with this translation regime */
39
-static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
40
- int ttbrn)
41
+uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
42
{
43
if (mmu_idx == ARMMMUIdx_Stage2) {
44
return env->cp15.vttbr_el2;
45
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
46
return prot_rw | PAGE_EXEC;
47
}
48
49
-bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
50
- uint32_t *table, uint32_t address)
51
-{
52
- /* Note that we can only get here for an AArch32 PL0/PL1 lookup */
53
- TCR *tcr = regime_tcr(env, mmu_idx);
54
-
55
- if (address & tcr->mask) {
56
- if (tcr->raw_tcr & TTBCR_PD1) {
57
- /* Translation table walk disabled for TTBR1 */
58
- return false;
59
- }
60
- *table = regime_ttbr(env, mmu_idx, 1) & 0xffffc000;
61
- } else {
62
- if (tcr->raw_tcr & TTBCR_PD0) {
63
- /* Translation table walk disabled for TTBR0 */
64
- return false;
65
- }
66
- *table = regime_ttbr(env, mmu_idx, 0) & tcr->base_mask;
67
- }
68
- *table |= (address >> 18) & 0x3ffc;
69
- return true;
70
-}
71
-
72
static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
73
{
74
/*
75
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/ptw.c
78
+++ b/target/arm/ptw.c
79
@@ -XXX,XX +XXX,XX @@
80
#include "ptw.h"
81
82
83
+static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
84
+ uint32_t *table, uint32_t address)
85
+{
86
+ /* Note that we can only get here for an AArch32 PL0/PL1 lookup */
87
+ TCR *tcr = regime_tcr(env, mmu_idx);
88
+
89
+ if (address & tcr->mask) {
90
+ if (tcr->raw_tcr & TTBCR_PD1) {
91
+ /* Translation table walk disabled for TTBR1 */
92
+ return false;
93
+ }
94
+ *table = regime_ttbr(env, mmu_idx, 1) & 0xffffc000;
95
+ } else {
96
+ if (tcr->raw_tcr & TTBCR_PD0) {
97
+ /* Translation table walk disabled for TTBR0 */
98
+ return false;
99
+ }
100
+ *table = regime_ttbr(env, mmu_idx, 0) & tcr->base_mask;
101
+ }
102
+ *table |= (address >> 18) & 0x3ffc;
103
+ return true;
104
+}
105
+
106
static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
107
MMUAccessType access_type, ARMMMUIdx mmu_idx,
108
hwaddr *phys_ptr, int *prot,
109
--
110
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
There are a handful of helpers for combine_cacheattrs
4
that we can move at the same time as the main entry point.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20220604040607.269301-15-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/ptw.h | 3 -
12
target/arm/helper.c | 218 -------------------------------------------
13
target/arm/ptw.c | 221 ++++++++++++++++++++++++++++++++++++++++++++
14
3 files changed, 221 insertions(+), 221 deletions(-)
15
16
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/ptw.h
19
+++ b/target/arm/ptw.h
20
@@ -XXX,XX +XXX,XX @@ bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
21
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
22
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
23
24
-ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
25
- ARMCacheAttrs s1, ARMCacheAttrs s2);
26
-
27
int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
28
int ap, int domain_prot);
29
int simple_ap_to_rw_prot_is_user(int ap, bool is_user);
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/helper.c
33
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
35
}
36
return true;
37
}
38
-
39
-/* Translate from the 4-bit stage 2 representation of
40
- * memory attributes (without cache-allocation hints) to
41
- * the 8-bit representation of the stage 1 MAIR registers
42
- * (which includes allocation hints).
43
- *
44
- * ref: shared/translation/attrs/S2AttrDecode()
45
- * .../S2ConvertAttrsHints()
46
- */
47
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
48
-{
49
- uint8_t hiattr = extract32(s2attrs, 2, 2);
50
- uint8_t loattr = extract32(s2attrs, 0, 2);
51
- uint8_t hihint = 0, lohint = 0;
52
-
53
- if (hiattr != 0) { /* normal memory */
54
- if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
55
- hiattr = loattr = 1; /* non-cacheable */
56
- } else {
57
- if (hiattr != 1) { /* Write-through or write-back */
58
- hihint = 3; /* RW allocate */
59
- }
60
- if (loattr != 1) { /* Write-through or write-back */
61
- lohint = 3; /* RW allocate */
62
- }
63
- }
64
- }
65
-
66
- return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
67
-}
68
#endif /* !CONFIG_USER_ONLY */
69
70
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
71
@@ -XXX,XX +XXX,XX @@ do_fault:
72
return true;
73
}
74
75
-/* Combine either inner or outer cacheability attributes for normal
76
- * memory, according to table D4-42 and pseudocode procedure
77
- * CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
78
- *
79
- * NB: only stage 1 includes allocation hints (RW bits), leading to
80
- * some asymmetry.
81
- */
82
-static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
83
-{
84
- if (s1 == 4 || s2 == 4) {
85
- /* non-cacheable has precedence */
86
- return 4;
87
- } else if (extract32(s1, 2, 2) == 0 || extract32(s1, 2, 2) == 2) {
88
- /* stage 1 write-through takes precedence */
89
- return s1;
90
- } else if (extract32(s2, 2, 2) == 2) {
91
- /* stage 2 write-through takes precedence, but the allocation hint
92
- * is still taken from stage 1
93
- */
94
- return (2 << 2) | extract32(s1, 0, 2);
95
- } else { /* write-back */
96
- return s1;
97
- }
98
-}
99
-
100
-/*
101
- * Combine the memory type and cacheability attributes of
102
- * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
103
- * combined attributes in MAIR_EL1 format.
104
- */
105
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
106
- ARMCacheAttrs s1, ARMCacheAttrs s2)
107
-{
108
- uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
109
-
110
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
111
-
112
- s1lo = extract32(s1.attrs, 0, 4);
113
- s2lo = extract32(s2_mair_attrs, 0, 4);
114
- s1hi = extract32(s1.attrs, 4, 4);
115
- s2hi = extract32(s2_mair_attrs, 4, 4);
116
-
117
- /* Combine memory type and cacheability attributes */
118
- if (s1hi == 0 || s2hi == 0) {
119
- /* Device has precedence over normal */
120
- if (s1lo == 0 || s2lo == 0) {
121
- /* nGnRnE has precedence over anything */
122
- ret_attrs = 0;
123
- } else if (s1lo == 4 || s2lo == 4) {
124
- /* non-Reordering has precedence over Reordering */
125
- ret_attrs = 4; /* nGnRE */
126
- } else if (s1lo == 8 || s2lo == 8) {
127
- /* non-Gathering has precedence over Gathering */
128
- ret_attrs = 8; /* nGRE */
129
- } else {
130
- ret_attrs = 0xc; /* GRE */
131
- }
132
- } else { /* Normal memory */
133
- /* Outer/inner cacheability combine independently */
134
- ret_attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
135
- | combine_cacheattr_nibble(s1lo, s2lo);
136
- }
137
- return ret_attrs;
138
-}
139
-
140
-static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
141
-{
142
- /*
143
- * Given the 4 bits specifying the outer or inner cacheability
144
- * in MAIR format, return a value specifying Normal Write-Back,
145
- * with the allocation and transient hints taken from the input
146
- * if the input specified some kind of cacheable attribute.
147
- */
148
- if (attr == 0 || attr == 4) {
149
- /*
150
- * 0 == an UNPREDICTABLE encoding
151
- * 4 == Non-cacheable
152
- * Either way, force Write-Back RW allocate non-transient
153
- */
154
- return 0xf;
155
- }
156
- /* Change WriteThrough to WriteBack, keep allocation and transient hints */
157
- return attr | 4;
158
-}
159
-
160
-/*
161
- * Combine the memory type and cacheability attributes of
162
- * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
163
- * combined attributes in MAIR_EL1 format.
164
- */
165
-static uint8_t combined_attrs_fwb(CPUARMState *env,
166
- ARMCacheAttrs s1, ARMCacheAttrs s2)
167
-{
168
- switch (s2.attrs) {
169
- case 7:
170
- /* Use stage 1 attributes */
171
- return s1.attrs;
172
- case 6:
173
- /*
174
- * Force Normal Write-Back. Note that if S1 is Normal cacheable
175
- * then we take the allocation hints from it; otherwise it is
176
- * RW allocate, non-transient.
177
- */
178
- if ((s1.attrs & 0xf0) == 0) {
179
- /* S1 is Device */
180
- return 0xff;
181
- }
182
- /* Need to check the Inner and Outer nibbles separately */
183
- return force_cacheattr_nibble_wb(s1.attrs & 0xf) |
184
- force_cacheattr_nibble_wb(s1.attrs >> 4) << 4;
185
- case 5:
186
- /* If S1 attrs are Device, use them; otherwise Normal Non-cacheable */
187
- if ((s1.attrs & 0xf0) == 0) {
188
- return s1.attrs;
189
- }
190
- return 0x44;
191
- case 0 ... 3:
192
- /* Force Device, of subtype specified by S2 */
193
- return s2.attrs << 2;
194
- default:
195
- /*
196
- * RESERVED values (including RES0 descriptor bit [5] being nonzero);
197
- * arbitrarily force Device.
198
- */
199
- return 0;
200
- }
201
-}
202
-
203
-/* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
204
- * and CombineS1S2Desc()
205
- *
206
- * @env: CPUARMState
207
- * @s1: Attributes from stage 1 walk
208
- * @s2: Attributes from stage 2 walk
209
- */
210
-ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
211
- ARMCacheAttrs s1, ARMCacheAttrs s2)
212
-{
213
- ARMCacheAttrs ret;
214
- bool tagged = false;
215
-
216
- assert(s2.is_s2_format && !s1.is_s2_format);
217
- ret.is_s2_format = false;
218
-
219
- if (s1.attrs == 0xf0) {
220
- tagged = true;
221
- s1.attrs = 0xff;
222
- }
223
-
224
- /* Combine shareability attributes (table D4-43) */
225
- if (s1.shareability == 2 || s2.shareability == 2) {
226
- /* if either are outer-shareable, the result is outer-shareable */
227
- ret.shareability = 2;
228
- } else if (s1.shareability == 3 || s2.shareability == 3) {
229
- /* if either are inner-shareable, the result is inner-shareable */
230
- ret.shareability = 3;
231
- } else {
232
- /* both non-shareable */
233
- ret.shareability = 0;
234
- }
235
-
236
- /* Combine memory type and cacheability attributes */
237
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
238
- ret.attrs = combined_attrs_fwb(env, s1, s2);
239
- } else {
240
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
241
- }
242
-
243
- /*
244
- * Any location for which the resultant memory type is any
245
- * type of Device memory is always treated as Outer Shareable.
246
- * Any location for which the resultant memory type is Normal
247
- * Inner Non-cacheable, Outer Non-cacheable is always treated
248
- * as Outer Shareable.
249
- * TODO: FEAT_XS adds another value (0x40) also meaning iNCoNC
250
- */
251
- if ((ret.attrs & 0xf0) == 0 || ret.attrs == 0x44) {
252
- ret.shareability = 2;
253
- }
254
-
255
- /* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */
256
- if (tagged && ret.attrs == 0xff) {
257
- ret.attrs = 0xf0;
258
- }
259
-
260
- return ret;
261
-}
262
-
263
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
264
MemTxAttrs *attrs)
265
{
266
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
267
index XXXXXXX..XXXXXXX 100644
268
--- a/target/arm/ptw.c
269
+++ b/target/arm/ptw.c
270
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
271
return ret;
272
}
273
274
+/*
275
+ * Translate from the 4-bit stage 2 representation of
276
+ * memory attributes (without cache-allocation hints) to
277
+ * the 8-bit representation of the stage 1 MAIR registers
278
+ * (which includes allocation hints).
279
+ *
280
+ * ref: shared/translation/attrs/S2AttrDecode()
281
+ * .../S2ConvertAttrsHints()
282
+ */
283
+static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
284
+{
285
+ uint8_t hiattr = extract32(s2attrs, 2, 2);
286
+ uint8_t loattr = extract32(s2attrs, 0, 2);
287
+ uint8_t hihint = 0, lohint = 0;
288
+
289
+ if (hiattr != 0) { /* normal memory */
290
+ if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
291
+ hiattr = loattr = 1; /* non-cacheable */
292
+ } else {
293
+ if (hiattr != 1) { /* Write-through or write-back */
294
+ hihint = 3; /* RW allocate */
295
+ }
296
+ if (loattr != 1) { /* Write-through or write-back */
297
+ lohint = 3; /* RW allocate */
298
+ }
299
+ }
300
+ }
301
+
302
+ return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
303
+}
304
+
305
+/*
306
+ * Combine either inner or outer cacheability attributes for normal
307
+ * memory, according to table D4-42 and pseudocode procedure
308
+ * CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM).
309
+ *
310
+ * NB: only stage 1 includes allocation hints (RW bits), leading to
311
+ * some asymmetry.
312
+ */
313
+static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
314
+{
315
+ if (s1 == 4 || s2 == 4) {
316
+ /* non-cacheable has precedence */
317
+ return 4;
318
+ } else if (extract32(s1, 2, 2) == 0 || extract32(s1, 2, 2) == 2) {
319
+ /* stage 1 write-through takes precedence */
320
+ return s1;
321
+ } else if (extract32(s2, 2, 2) == 2) {
322
+ /* stage 2 write-through takes precedence, but the allocation hint
323
+ * is still taken from stage 1
324
+ */
325
+ return (2 << 2) | extract32(s1, 0, 2);
326
+ } else { /* write-back */
327
+ return s1;
328
+ }
329
+}
330
+
331
+/*
332
+ * Combine the memory type and cacheability attributes of
333
+ * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
334
+ * combined attributes in MAIR_EL1 format.
335
+ */
336
+static uint8_t combined_attrs_nofwb(CPUARMState *env,
337
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
338
+{
339
+ uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
340
+
341
+ s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
342
+
343
+ s1lo = extract32(s1.attrs, 0, 4);
344
+ s2lo = extract32(s2_mair_attrs, 0, 4);
345
+ s1hi = extract32(s1.attrs, 4, 4);
346
+ s2hi = extract32(s2_mair_attrs, 4, 4);
347
+
348
+ /* Combine memory type and cacheability attributes */
349
+ if (s1hi == 0 || s2hi == 0) {
350
+ /* Device has precedence over normal */
351
+ if (s1lo == 0 || s2lo == 0) {
352
+ /* nGnRnE has precedence over anything */
353
+ ret_attrs = 0;
354
+ } else if (s1lo == 4 || s2lo == 4) {
355
+ /* non-Reordering has precedence over Reordering */
356
+ ret_attrs = 4; /* nGnRE */
357
+ } else if (s1lo == 8 || s2lo == 8) {
358
+ /* non-Gathering has precedence over Gathering */
359
+ ret_attrs = 8; /* nGRE */
360
+ } else {
361
+ ret_attrs = 0xc; /* GRE */
362
+ }
363
+ } else { /* Normal memory */
364
+ /* Outer/inner cacheability combine independently */
365
+ ret_attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
366
+ | combine_cacheattr_nibble(s1lo, s2lo);
367
+ }
368
+ return ret_attrs;
369
+}
370
+
371
+static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
372
+{
373
+ /*
374
+ * Given the 4 bits specifying the outer or inner cacheability
375
+ * in MAIR format, return a value specifying Normal Write-Back,
376
+ * with the allocation and transient hints taken from the input
377
+ * if the input specified some kind of cacheable attribute.
378
+ */
379
+ if (attr == 0 || attr == 4) {
380
+ /*
381
+ * 0 == an UNPREDICTABLE encoding
382
+ * 4 == Non-cacheable
383
+ * Either way, force Write-Back RW allocate non-transient
384
+ */
385
+ return 0xf;
386
+ }
387
+ /* Change WriteThrough to WriteBack, keep allocation and transient hints */
388
+ return attr | 4;
389
+}
390
+
391
+/*
392
+ * Combine the memory type and cacheability attributes of
393
+ * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
394
+ * combined attributes in MAIR_EL1 format.
395
+ */
396
+static uint8_t combined_attrs_fwb(CPUARMState *env,
397
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
398
+{
399
+ switch (s2.attrs) {
400
+ case 7:
401
+ /* Use stage 1 attributes */
402
+ return s1.attrs;
403
+ case 6:
404
+ /*
405
+ * Force Normal Write-Back. Note that if S1 is Normal cacheable
406
+ * then we take the allocation hints from it; otherwise it is
407
+ * RW allocate, non-transient.
408
+ */
409
+ if ((s1.attrs & 0xf0) == 0) {
410
+ /* S1 is Device */
411
+ return 0xff;
412
+ }
413
+ /* Need to check the Inner and Outer nibbles separately */
414
+ return force_cacheattr_nibble_wb(s1.attrs & 0xf) |
415
+ force_cacheattr_nibble_wb(s1.attrs >> 4) << 4;
416
+ case 5:
417
+ /* If S1 attrs are Device, use them; otherwise Normal Non-cacheable */
418
+ if ((s1.attrs & 0xf0) == 0) {
419
+ return s1.attrs;
420
+ }
421
+ return 0x44;
422
+ case 0 ... 3:
423
+ /* Force Device, of subtype specified by S2 */
424
+ return s2.attrs << 2;
425
+ default:
426
+ /*
427
+ * RESERVED values (including RES0 descriptor bit [5] being nonzero);
428
+ * arbitrarily force Device.
429
+ */
430
+ return 0;
431
+ }
432
+}
433
+
434
+/*
435
+ * Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
436
+ * and CombineS1S2Desc()
437
+ *
438
+ * @env: CPUARMState
439
+ * @s1: Attributes from stage 1 walk
440
+ * @s2: Attributes from stage 2 walk
441
+ */
442
+static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
443
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
444
+{
445
+ ARMCacheAttrs ret;
446
+ bool tagged = false;
447
+
448
+ assert(s2.is_s2_format && !s1.is_s2_format);
449
+ ret.is_s2_format = false;
450
+
451
+ if (s1.attrs == 0xf0) {
452
+ tagged = true;
453
+ s1.attrs = 0xff;
454
+ }
455
+
456
+ /* Combine shareability attributes (table D4-43) */
457
+ if (s1.shareability == 2 || s2.shareability == 2) {
458
+ /* if either are outer-shareable, the result is outer-shareable */
459
+ ret.shareability = 2;
460
+ } else if (s1.shareability == 3 || s2.shareability == 3) {
461
+ /* if either are inner-shareable, the result is inner-shareable */
462
+ ret.shareability = 3;
463
+ } else {
464
+ /* both non-shareable */
465
+ ret.shareability = 0;
466
+ }
467
+
468
+ /* Combine memory type and cacheability attributes */
469
+ if (arm_hcr_el2_eff(env) & HCR_FWB) {
470
+ ret.attrs = combined_attrs_fwb(env, s1, s2);
471
+ } else {
472
+ ret.attrs = combined_attrs_nofwb(env, s1, s2);
473
+ }
474
+
475
+ /*
476
+ * Any location for which the resultant memory type is any
477
+ * type of Device memory is always treated as Outer Shareable.
478
+ * Any location for which the resultant memory type is Normal
479
+ * Inner Non-cacheable, Outer Non-cacheable is always treated
480
+ * as Outer Shareable.
481
+ * TODO: FEAT_XS adds another value (0x40) also meaning iNCoNC
482
+ */
483
+ if ((ret.attrs & 0xf0) == 0 || ret.attrs == 0x44) {
484
+ ret.shareability = 2;
485
+ }
486
+
487
+ /* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */
488
+ if (tagged && ret.attrs == 0xff) {
489
+ ret.attrs = 0xf0;
490
+ }
491
+
492
+ return ret;
493
+}
494
+
495
/**
496
* get_phys_addr - get the physical address for this virtual address
497
*
498
--
499
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-16-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 10 ++
9
target/arm/helper.c | 416 +-------------------------------------------
10
target/arm/ptw.c | 411 +++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 429 insertions(+), 408 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@
18
19
#ifndef CONFIG_USER_ONLY
20
21
+extern const uint8_t pamax_map[7];
22
+
23
uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
24
ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi);
25
uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
26
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
27
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
28
}
29
30
+ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
31
+ ARMMMUIdx mmu_idx);
32
+bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
33
+ int inputsize, int stride, int outputsize);
34
+int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0);
35
+int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
36
+ int ap, int ns, int xn, int pxn);
37
+
38
bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
39
MMUAccessType access_type, ARMMMUIdx mmu_idx,
40
bool s1_is_el0,
41
diff --git a/target/arm/helper.c b/target/arm/helper.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/helper.c
44
+++ b/target/arm/helper.c
45
@@ -XXX,XX +XXX,XX @@ int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
46
* @xn: XN (execute-never) bits
47
* @s1_is_el0: true if this is S2 of an S1+2 walk for EL0
48
*/
49
-static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
50
+int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
51
{
52
int prot = 0;
53
54
@@ -XXX,XX +XXX,XX @@ static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
55
* @xn: XN (execute-never) bit
56
* @pxn: PXN (privileged execute-never) bit
57
*/
58
-static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
59
- int ap, int ns, int xn, int pxn)
60
+int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
61
+ int ap, int ns, int xn, int pxn)
62
{
63
bool is_user = regime_is_user(env, mmu_idx);
64
int prot_rw, user_rw;
65
@@ -XXX,XX +XXX,XX @@ uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
66
* Returns true if the suggested S2 translation parameters are OK and
67
* false otherwise.
68
*/
69
-static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
70
- int inputsize, int stride, int outputsize)
71
+bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
72
+ int inputsize, int stride, int outputsize)
73
{
74
const int grainsize = stride + 3;
75
int startsizecheck;
76
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
77
#endif /* !CONFIG_USER_ONLY */
78
79
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
80
-static const uint8_t pamax_map[] = {
81
+const uint8_t pamax_map[] = {
82
[0] = 32,
83
[1] = 36,
84
[2] = 40,
85
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
86
}
87
88
#ifndef CONFIG_USER_ONLY
89
-static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
90
- ARMMMUIdx mmu_idx)
91
+ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
92
+ ARMMMUIdx mmu_idx)
93
{
94
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
95
uint32_t el = regime_el(env, mmu_idx);
96
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
97
};
98
}
99
100
-/**
101
- * get_phys_addr_lpae: perform one stage of page table walk, LPAE format
102
- *
103
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
104
- * prot and page_size may not be filled in, and the populated fsr value provides
105
- * information on why the translation aborted, in the format of a long-format
106
- * DFSR/IFSR fault register, with the following caveats:
107
- * * the WnR bit is never set (the caller must do this).
108
- *
109
- * @env: CPUARMState
110
- * @address: virtual address to get physical address for
111
- * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
112
- * @mmu_idx: MMU index indicating required translation regime
113
- * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page table
114
- * walk), must be true if this is stage 2 of a stage 1+2 walk for an
115
- * EL0 access). If @mmu_idx is anything else, @s1_is_el0 is ignored.
116
- * @phys_ptr: set to the physical address corresponding to the virtual address
117
- * @attrs: set to the memory transaction attributes to use
118
- * @prot: set to the permissions for the page containing phys_ptr
119
- * @page_size_ptr: set to the size of the page containing phys_ptr
120
- * @fi: set to fault info if the translation fails
121
- * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
122
- */
123
-bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
124
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
125
- bool s1_is_el0,
126
- hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
127
- target_ulong *page_size_ptr,
128
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
129
-{
130
- ARMCPU *cpu = env_archcpu(env);
131
- CPUState *cs = CPU(cpu);
132
- /* Read an LPAE long-descriptor translation table. */
133
- ARMFaultType fault_type = ARMFault_Translation;
134
- uint32_t level;
135
- ARMVAParameters param;
136
- uint64_t ttbr;
137
- hwaddr descaddr, indexmask, indexmask_grainsize;
138
- uint32_t tableattrs;
139
- target_ulong page_size;
140
- uint32_t attrs;
141
- int32_t stride;
142
- int addrsize, inputsize, outputsize;
143
- TCR *tcr = regime_tcr(env, mmu_idx);
144
- int ap, ns, xn, pxn;
145
- uint32_t el = regime_el(env, mmu_idx);
146
- uint64_t descaddrmask;
147
- bool aarch64 = arm_el_is_aa64(env, el);
148
- bool guarded = false;
149
-
150
- /* TODO: This code does not support shareability levels. */
151
- if (aarch64) {
152
- int ps;
153
-
154
- param = aa64_va_parameters(env, address, mmu_idx,
155
- access_type != MMU_INST_FETCH);
156
- level = 0;
157
-
158
- /*
159
- * If TxSZ is programmed to a value larger than the maximum,
160
- * or smaller than the effective minimum, it is IMPLEMENTATION
161
- * DEFINED whether we behave as if the field were programmed
162
- * within bounds, or if a level 0 Translation fault is generated.
163
- *
164
- * With FEAT_LVA, fault on less than minimum becomes required,
165
- * so our choice is to always raise the fault.
166
- */
167
- if (param.tsz_oob) {
168
- fault_type = ARMFault_Translation;
169
- goto do_fault;
170
- }
171
-
172
- addrsize = 64 - 8 * param.tbi;
173
- inputsize = 64 - param.tsz;
174
-
175
- /*
176
- * Bound PS by PARANGE to find the effective output address size.
177
- * ID_AA64MMFR0 is a read-only register so values outside of the
178
- * supported mappings can be considered an implementation error.
179
- */
180
- ps = FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
181
- ps = MIN(ps, param.ps);
182
- assert(ps < ARRAY_SIZE(pamax_map));
183
- outputsize = pamax_map[ps];
184
- } else {
185
- param = aa32_va_parameters(env, address, mmu_idx);
186
- level = 1;
187
- addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32);
188
- inputsize = addrsize - param.tsz;
189
- outputsize = 40;
190
- }
191
-
192
- /*
193
- * We determined the region when collecting the parameters, but we
194
- * have not yet validated that the address is valid for the region.
195
- * Extract the top bits and verify that they all match select.
196
- *
197
- * For aa32, if inputsize == addrsize, then we have selected the
198
- * region by exclusion in aa32_va_parameters and there is no more
199
- * validation to do here.
200
- */
201
- if (inputsize < addrsize) {
202
- target_ulong top_bits = sextract64(address, inputsize,
203
- addrsize - inputsize);
204
- if (-top_bits != param.select) {
205
- /* The gap between the two regions is a Translation fault */
206
- fault_type = ARMFault_Translation;
207
- goto do_fault;
208
- }
209
- }
210
-
211
- if (param.using64k) {
212
- stride = 13;
213
- } else if (param.using16k) {
214
- stride = 11;
215
- } else {
216
- stride = 9;
217
- }
218
-
219
- /* Note that QEMU ignores shareability and cacheability attributes,
220
- * so we don't need to do anything with the SH, ORGN, IRGN fields
221
- * in the TTBCR. Similarly, TTBCR:A1 selects whether we get the
222
- * ASID from TTBR0 or TTBR1, but QEMU's TLB doesn't currently
223
- * implement any ASID-like capability so we can ignore it (instead
224
- * we will always flush the TLB any time the ASID is changed).
225
- */
226
- ttbr = regime_ttbr(env, mmu_idx, param.select);
227
-
228
- /* Here we should have set up all the parameters for the translation:
229
- * inputsize, ttbr, epd, stride, tbi
230
- */
231
-
232
- if (param.epd) {
233
- /* Translation table walk disabled => Translation fault on TLB miss
234
- * Note: This is always 0 on 64-bit EL2 and EL3.
235
- */
236
- goto do_fault;
237
- }
238
-
239
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
240
- /* The starting level depends on the virtual address size (which can
241
- * be up to 48 bits) and the translation granule size. It indicates
242
- * the number of strides (stride bits at a time) needed to
243
- * consume the bits of the input address. In the pseudocode this is:
244
- * level = 4 - RoundUp((inputsize - grainsize) / stride)
245
- * where their 'inputsize' is our 'inputsize', 'grainsize' is
246
- * our 'stride + 3' and 'stride' is our 'stride'.
247
- * Applying the usual "rounded up m/n is (m+n-1)/n" and simplifying:
248
- * = 4 - (inputsize - stride - 3 + stride - 1) / stride
249
- * = 4 - (inputsize - 4) / stride;
250
- */
251
- level = 4 - (inputsize - 4) / stride;
252
- } else {
253
- /* For stage 2 translations the starting level is specified by the
254
- * VTCR_EL2.SL0 field (whose interpretation depends on the page size)
255
- */
256
- uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2);
257
- uint32_t sl2 = extract64(tcr->raw_tcr, 33, 1);
258
- uint32_t startlevel;
259
- bool ok;
260
-
261
- /* SL2 is RES0 unless DS=1 & 4kb granule. */
262
- if (param.ds && stride == 9 && sl2) {
263
- if (sl0 != 0) {
264
- level = 0;
265
- fault_type = ARMFault_Translation;
266
- goto do_fault;
267
- }
268
- startlevel = -1;
269
- } else if (!aarch64 || stride == 9) {
270
- /* AArch32 or 4KB pages */
271
- startlevel = 2 - sl0;
272
-
273
- if (cpu_isar_feature(aa64_st, cpu)) {
274
- startlevel &= 3;
275
- }
276
- } else {
277
- /* 16KB or 64KB pages */
278
- startlevel = 3 - sl0;
279
- }
280
-
281
- /* Check that the starting level is valid. */
282
- ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
283
- inputsize, stride, outputsize);
284
- if (!ok) {
285
- fault_type = ARMFault_Translation;
286
- goto do_fault;
287
- }
288
- level = startlevel;
289
- }
290
-
291
- indexmask_grainsize = MAKE_64BIT_MASK(0, stride + 3);
292
- indexmask = MAKE_64BIT_MASK(0, inputsize - (stride * (4 - level)));
293
-
294
- /* Now we can extract the actual base address from the TTBR */
295
- descaddr = extract64(ttbr, 0, 48);
296
-
297
- /*
298
- * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [5:2] of TTBR.
299
- *
300
- * Otherwise, if the base address is out of range, raise AddressSizeFault.
301
- * In the pseudocode, this is !IsZero(baseregister<47:outputsize>),
302
- * but we've just cleared the bits above 47, so simplify the test.
303
- */
304
- if (outputsize > 48) {
305
- descaddr |= extract64(ttbr, 2, 4) << 48;
306
- } else if (descaddr >> outputsize) {
307
- level = 0;
308
- fault_type = ARMFault_AddressSize;
309
- goto do_fault;
310
- }
311
-
312
- /*
313
- * We rely on this masking to clear the RES0 bits at the bottom of the TTBR
314
- * and also to mask out CnP (bit 0) which could validly be non-zero.
315
- */
316
- descaddr &= ~indexmask;
317
-
318
- /*
319
- * For AArch32, the address field in the descriptor goes up to bit 39
320
- * for both v7 and v8. However, for v8 the SBZ bits [47:40] must be 0
321
- * or an AddressSize fault is raised. So for v8 we extract those SBZ
322
- * bits as part of the address, which will be checked via outputsize.
323
- * For AArch64, the address field goes up to bit 47, or 49 with FEAT_LPA2;
324
- * the highest bits of a 52-bit output are placed elsewhere.
325
- */
326
- if (param.ds) {
327
- descaddrmask = MAKE_64BIT_MASK(0, 50);
328
- } else if (arm_feature(env, ARM_FEATURE_V8)) {
329
- descaddrmask = MAKE_64BIT_MASK(0, 48);
330
- } else {
331
- descaddrmask = MAKE_64BIT_MASK(0, 40);
332
- }
333
- descaddrmask &= ~indexmask_grainsize;
334
-
335
- /* Secure accesses start with the page table in secure memory and
336
- * can be downgraded to non-secure at any step. Non-secure accesses
337
- * remain non-secure. We implement this by just ORing in the NSTable/NS
338
- * bits at each step.
339
- */
340
- tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
341
- for (;;) {
342
- uint64_t descriptor;
343
- bool nstable;
344
-
345
- descaddr |= (address >> (stride * (4 - level))) & indexmask;
346
- descaddr &= ~7ULL;
347
- nstable = extract32(tableattrs, 4, 1);
348
- descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
349
- if (fi->type != ARMFault_None) {
350
- goto do_fault;
351
- }
352
-
353
- if (!(descriptor & 1) ||
354
- (!(descriptor & 2) && (level == 3))) {
355
- /* Invalid, or the Reserved level 3 encoding */
356
- goto do_fault;
357
- }
358
-
359
- descaddr = descriptor & descaddrmask;
360
-
361
- /*
362
- * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
363
- * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
364
- * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
365
- * raise AddressSizeFault.
366
- */
367
- if (outputsize > 48) {
368
- if (param.ds) {
369
- descaddr |= extract64(descriptor, 8, 2) << 50;
370
- } else {
371
- descaddr |= extract64(descriptor, 12, 4) << 48;
372
- }
373
- } else if (descaddr >> outputsize) {
374
- fault_type = ARMFault_AddressSize;
375
- goto do_fault;
376
- }
377
-
378
- if ((descriptor & 2) && (level < 3)) {
379
- /* Table entry. The top five bits are attributes which may
380
- * propagate down through lower levels of the table (and
381
- * which are all arranged so that 0 means "no effect", so
382
- * we can gather them up by ORing in the bits at each level).
383
- */
384
- tableattrs |= extract64(descriptor, 59, 5);
385
- level++;
386
- indexmask = indexmask_grainsize;
387
- continue;
388
- }
389
- /*
390
- * Block entry at level 1 or 2, or page entry at level 3.
391
- * These are basically the same thing, although the number
392
- * of bits we pull in from the vaddr varies. Note that although
393
- * descaddrmask masks enough of the low bits of the descriptor
394
- * to give a correct page or table address, the address field
395
- * in a block descriptor is smaller; so we need to explicitly
396
- * clear the lower bits here before ORing in the low vaddr bits.
397
- */
398
- page_size = (1ULL << ((stride * (4 - level)) + 3));
399
- descaddr &= ~(page_size - 1);
400
- descaddr |= (address & (page_size - 1));
401
- /* Extract attributes from the descriptor */
402
- attrs = extract64(descriptor, 2, 10)
403
- | (extract64(descriptor, 52, 12) << 10);
404
-
405
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
406
- /* Stage 2 table descriptors do not include any attribute fields */
407
- break;
408
- }
409
- /* Merge in attributes from table descriptors */
410
- attrs |= nstable << 3; /* NS */
411
- guarded = extract64(descriptor, 50, 1); /* GP */
412
- if (param.hpd) {
413
- /* HPD disables all the table attributes except NSTable. */
414
- break;
415
- }
416
- attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
417
- /* The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
418
- * means "force PL1 access only", which means forcing AP[1] to 0.
419
- */
420
- attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
421
- attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
422
- break;
423
- }
424
- /* Here descaddr is the final physical address, and attributes
425
- * are all in attrs.
426
- */
427
- fault_type = ARMFault_AccessFlag;
428
- if ((attrs & (1 << 8)) == 0) {
429
- /* Access flag */
430
- goto do_fault;
431
- }
432
-
433
- ap = extract32(attrs, 4, 2);
434
-
435
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
436
- ns = mmu_idx == ARMMMUIdx_Stage2;
437
- xn = extract32(attrs, 11, 2);
438
- *prot = get_S2prot(env, ap, xn, s1_is_el0);
439
- } else {
440
- ns = extract32(attrs, 3, 1);
441
- xn = extract32(attrs, 12, 1);
442
- pxn = extract32(attrs, 11, 1);
443
- *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
444
- }
445
-
446
- fault_type = ARMFault_Permission;
447
- if (!(*prot & (1 << access_type))) {
448
- goto do_fault;
449
- }
450
-
451
- if (ns) {
452
- /* The NS bit will (as required by the architecture) have no effect if
453
- * the CPU doesn't support TZ or this is a non-secure translation
454
- * regime, because the attribute will already be non-secure.
455
- */
456
- txattrs->secure = false;
457
- }
458
- /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
459
- if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
460
- arm_tlb_bti_gp(txattrs) = true;
461
- }
462
-
463
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
464
- cacheattrs->is_s2_format = true;
465
- cacheattrs->attrs = extract32(attrs, 0, 4);
466
- } else {
467
- /* Index into MAIR registers for cache attributes */
468
- uint8_t attrindx = extract32(attrs, 0, 3);
469
- uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
470
- assert(attrindx <= 7);
471
- cacheattrs->is_s2_format = false;
472
- cacheattrs->attrs = extract64(mair, attrindx * 8, 8);
473
- }
474
-
475
- /*
476
- * For FEAT_LPA2 and effective DS, the SH field in the attributes
477
- * was re-purposed for output address bits. The SH attribute in
478
- * that case comes from TCR_ELx, which we extracted earlier.
479
- */
480
- if (param.ds) {
481
- cacheattrs->shareability = param.sh;
482
- } else {
483
- cacheattrs->shareability = extract32(attrs, 6, 2);
484
- }
485
-
486
- *phys_ptr = descaddr;
487
- *page_size_ptr = page_size;
488
- return false;
489
-
490
-do_fault:
491
- fi->type = fault_type;
492
- fi->level = level;
493
- /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
494
- fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
495
- mmu_idx == ARMMMUIdx_Stage2_S);
496
- fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
497
- return true;
498
-}
499
-
500
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
501
MemTxAttrs *attrs)
502
{
503
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
504
index XXXXXXX..XXXXXXX 100644
505
--- a/target/arm/ptw.c
506
+++ b/target/arm/ptw.c
507
@@ -XXX,XX +XXX,XX @@ do_fault:
508
return true;
509
}
510
511
+/**
512
+ * get_phys_addr_lpae: perform one stage of page table walk, LPAE format
513
+ *
514
+ * Returns false if the translation was successful. Otherwise, phys_ptr,
515
+ * attrs, prot and page_size may not be filled in, and the populated fsr
516
+ * value provides information on why the translation aborted, in the format
517
+ * of a long-format DFSR/IFSR fault register, with the following caveat:
518
+ * the WnR bit is never set (the caller must do this).
519
+ *
520
+ * @env: CPUARMState
521
+ * @address: virtual address to get physical address for
522
+ * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
523
+ * @mmu_idx: MMU index indicating required translation regime
524
+ * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page
525
+ * table walk), must be true if this is stage 2 of a stage 1+2
526
+ * walk for an EL0 access. If @mmu_idx is anything else,
527
+ * @s1_is_el0 is ignored.
528
+ * @phys_ptr: set to the physical address corresponding to the virtual address
529
+ * @attrs: set to the memory transaction attributes to use
530
+ * @prot: set to the permissions for the page containing phys_ptr
531
+ * @page_size_ptr: set to the size of the page containing phys_ptr
532
+ * @fi: set to fault info if the translation fails
533
+ * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
534
+ */
535
+bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
536
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
537
+ bool s1_is_el0,
538
+ hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
539
+ target_ulong *page_size_ptr,
540
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
541
+{
542
+ ARMCPU *cpu = env_archcpu(env);
543
+ CPUState *cs = CPU(cpu);
544
+ /* Read an LPAE long-descriptor translation table. */
545
+ ARMFaultType fault_type = ARMFault_Translation;
546
+ uint32_t level;
547
+ ARMVAParameters param;
548
+ uint64_t ttbr;
549
+ hwaddr descaddr, indexmask, indexmask_grainsize;
550
+ uint32_t tableattrs;
551
+ target_ulong page_size;
552
+ uint32_t attrs;
553
+ int32_t stride;
554
+ int addrsize, inputsize, outputsize;
555
+ TCR *tcr = regime_tcr(env, mmu_idx);
556
+ int ap, ns, xn, pxn;
557
+ uint32_t el = regime_el(env, mmu_idx);
558
+ uint64_t descaddrmask;
559
+ bool aarch64 = arm_el_is_aa64(env, el);
560
+ bool guarded = false;
561
+
562
+ /* TODO: This code does not support shareability levels. */
563
+ if (aarch64) {
564
+ int ps;
565
+
566
+ param = aa64_va_parameters(env, address, mmu_idx,
567
+ access_type != MMU_INST_FETCH);
568
+ level = 0;
569
+
570
+ /*
571
+ * If TxSZ is programmed to a value larger than the maximum,
572
+ * or smaller than the effective minimum, it is IMPLEMENTATION
573
+ * DEFINED whether we behave as if the field were programmed
574
+ * within bounds, or if a level 0 Translation fault is generated.
575
+ *
576
+ * With FEAT_LVA, fault on less than minimum becomes required,
577
+ * so our choice is to always raise the fault.
578
+ */
579
+ if (param.tsz_oob) {
580
+ fault_type = ARMFault_Translation;
581
+ goto do_fault;
582
+ }
583
+
584
+ addrsize = 64 - 8 * param.tbi;
585
+ inputsize = 64 - param.tsz;
586
+
587
+ /*
588
+ * Bound PS by PARANGE to find the effective output address size.
589
+ * ID_AA64MMFR0 is a read-only register so values outside of the
590
+ * supported mappings can be considered an implementation error.
591
+ */
592
+ ps = FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
593
+ ps = MIN(ps, param.ps);
594
+ assert(ps < ARRAY_SIZE(pamax_map));
595
+ outputsize = pamax_map[ps];
596
+ } else {
597
+ param = aa32_va_parameters(env, address, mmu_idx);
598
+ level = 1;
599
+ addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32);
600
+ inputsize = addrsize - param.tsz;
601
+ outputsize = 40;
602
+ }
603
+
604
+ /*
605
+ * We determined the region when collecting the parameters, but we
606
+ * have not yet validated that the address is valid for the region.
607
+ * Extract the top bits and verify that they all match select.
608
+ *
609
+ * For aa32, if inputsize == addrsize, then we have selected the
610
+ * region by exclusion in aa32_va_parameters and there is no more
611
+ * validation to do here.
612
+ */
613
+ if (inputsize < addrsize) {
614
+ target_ulong top_bits = sextract64(address, inputsize,
615
+ addrsize - inputsize);
616
+ if (-top_bits != param.select) {
617
+ /* The gap between the two regions is a Translation fault */
618
+ fault_type = ARMFault_Translation;
619
+ goto do_fault;
620
+ }
621
+ }
622
+
623
+ if (param.using64k) {
624
+ stride = 13;
625
+ } else if (param.using16k) {
626
+ stride = 11;
627
+ } else {
628
+ stride = 9;
629
+ }
630
+
631
+ /*
632
+ * Note that QEMU ignores shareability and cacheability attributes,
633
+ * so we don't need to do anything with the SH, ORGN, IRGN fields
634
+ * in the TTBCR. Similarly, TTBCR:A1 selects whether we get the
635
+ * ASID from TTBR0 or TTBR1, but QEMU's TLB doesn't currently
636
+ * implement any ASID-like capability so we can ignore it (instead
637
+ * we will always flush the TLB any time the ASID is changed).
638
+ */
639
+ ttbr = regime_ttbr(env, mmu_idx, param.select);
640
+
641
+ /*
642
+ * Here we should have set up all the parameters for the translation:
643
+ * inputsize, ttbr, epd, stride, tbi
644
+ */
645
+
646
+ if (param.epd) {
647
+ /*
648
+ * Translation table walk disabled => Translation fault on TLB miss
649
+ * Note: This is always 0 on 64-bit EL2 and EL3.
650
+ */
651
+ goto do_fault;
652
+ }
653
+
654
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
655
+ /*
656
+ * The starting level depends on the virtual address size (which can
657
+ * be up to 48 bits) and the translation granule size. It indicates
658
+ * the number of strides (stride bits at a time) needed to
659
+ * consume the bits of the input address. In the pseudocode this is:
660
+ * level = 4 - RoundUp((inputsize - grainsize) / stride)
661
+ * where their 'inputsize' is our 'inputsize', 'grainsize' is
662
+ * our 'stride + 3' and 'stride' is our 'stride'.
663
+ * Applying the usual "rounded up m/n is (m+n-1)/n" and simplifying:
664
+ * = 4 - (inputsize - stride - 3 + stride - 1) / stride
665
+ * = 4 - (inputsize - 4) / stride;
666
+ */
667
+ level = 4 - (inputsize - 4) / stride;
668
+ } else {
669
+ /*
670
+ * For stage 2 translations the starting level is specified by the
671
+ * VTCR_EL2.SL0 field (whose interpretation depends on the page size)
672
+ */
673
+ uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2);
674
+ uint32_t sl2 = extract64(tcr->raw_tcr, 33, 1);
675
+ uint32_t startlevel;
676
+ bool ok;
677
+
678
+ /* SL2 is RES0 unless DS=1 & 4kb granule. */
679
+ if (param.ds && stride == 9 && sl2) {
680
+ if (sl0 != 0) {
681
+ level = 0;
682
+ fault_type = ARMFault_Translation;
683
+ goto do_fault;
684
+ }
685
+ startlevel = -1;
686
+ } else if (!aarch64 || stride == 9) {
687
+ /* AArch32 or 4KB pages */
688
+ startlevel = 2 - sl0;
689
+
690
+ if (cpu_isar_feature(aa64_st, cpu)) {
691
+ startlevel &= 3;
692
+ }
693
+ } else {
694
+ /* 16KB or 64KB pages */
695
+ startlevel = 3 - sl0;
696
+ }
697
+
698
+ /* Check that the starting level is valid. */
699
+ ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
700
+ inputsize, stride, outputsize);
701
+ if (!ok) {
702
+ fault_type = ARMFault_Translation;
703
+ goto do_fault;
704
+ }
705
+ level = startlevel;
706
+ }
707
+
708
+ indexmask_grainsize = MAKE_64BIT_MASK(0, stride + 3);
709
+ indexmask = MAKE_64BIT_MASK(0, inputsize - (stride * (4 - level)));
710
+
711
+ /* Now we can extract the actual base address from the TTBR */
712
+ descaddr = extract64(ttbr, 0, 48);
713
+
714
+ /*
715
+ * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [5:2] of TTBR.
716
+ *
717
+ * Otherwise, if the base address is out of range, raise AddressSizeFault.
718
+ * In the pseudocode, this is !IsZero(baseregister<47:outputsize>),
719
+ * but we've just cleared the bits above 47, so simplify the test.
720
+ */
721
+ if (outputsize > 48) {
722
+ descaddr |= extract64(ttbr, 2, 4) << 48;
723
+ } else if (descaddr >> outputsize) {
724
+ level = 0;
725
+ fault_type = ARMFault_AddressSize;
726
+ goto do_fault;
727
+ }
728
+
729
+ /*
730
+ * We rely on this masking to clear the RES0 bits at the bottom of the TTBR
731
+ * and also to mask out CnP (bit 0) which could validly be non-zero.
732
+ */
733
+ descaddr &= ~indexmask;
734
+
735
+ /*
736
+ * For AArch32, the address field in the descriptor goes up to bit 39
737
+ * for both v7 and v8. However, for v8 the SBZ bits [47:40] must be 0
738
+ * or an AddressSize fault is raised. So for v8 we extract those SBZ
739
+ * bits as part of the address, which will be checked via outputsize.
740
+ * For AArch64, the address field goes up to bit 47, or 49 with FEAT_LPA2;
741
+ * the highest bits of a 52-bit output are placed elsewhere.
742
+ */
743
+ if (param.ds) {
744
+ descaddrmask = MAKE_64BIT_MASK(0, 50);
745
+ } else if (arm_feature(env, ARM_FEATURE_V8)) {
746
+ descaddrmask = MAKE_64BIT_MASK(0, 48);
747
+ } else {
748
+ descaddrmask = MAKE_64BIT_MASK(0, 40);
749
+ }
750
+ descaddrmask &= ~indexmask_grainsize;
751
+
752
+ /*
753
+ * Secure accesses start with the page table in secure memory and
754
+ * can be downgraded to non-secure at any step. Non-secure accesses
755
+ * remain non-secure. We implement this by just ORing in the NSTable/NS
756
+ * bits at each step.
757
+ */
758
+ tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
759
+ for (;;) {
760
+ uint64_t descriptor;
761
+ bool nstable;
762
+
763
+ descaddr |= (address >> (stride * (4 - level))) & indexmask;
764
+ descaddr &= ~7ULL;
765
+ nstable = extract32(tableattrs, 4, 1);
766
+ descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
767
+ if (fi->type != ARMFault_None) {
768
+ goto do_fault;
769
+ }
770
+
771
+ if (!(descriptor & 1) ||
772
+ (!(descriptor & 2) && (level == 3))) {
773
+ /* Invalid, or the Reserved level 3 encoding */
774
+ goto do_fault;
775
+ }
776
+
777
+ descaddr = descriptor & descaddrmask;
778
+
779
+ /*
780
+ * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
781
+ * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
782
+ * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
783
+ * raise AddressSizeFault.
784
+ */
785
+ if (outputsize > 48) {
786
+ if (param.ds) {
787
+ descaddr |= extract64(descriptor, 8, 2) << 50;
788
+ } else {
789
+ descaddr |= extract64(descriptor, 12, 4) << 48;
790
+ }
791
+ } else if (descaddr >> outputsize) {
792
+ fault_type = ARMFault_AddressSize;
793
+ goto do_fault;
794
+ }
795
+
796
+ if ((descriptor & 2) && (level < 3)) {
797
+ /*
798
+ * Table entry. The top five bits are attributes which may
799
+ * propagate down through lower levels of the table (and
800
+ * which are all arranged so that 0 means "no effect", so
801
+ * we can gather them up by ORing in the bits at each level).
802
+ */
803
+ tableattrs |= extract64(descriptor, 59, 5);
804
+ level++;
805
+ indexmask = indexmask_grainsize;
806
+ continue;
807
+ }
808
+ /*
809
+ * Block entry at level 1 or 2, or page entry at level 3.
810
+ * These are basically the same thing, although the number
811
+ * of bits we pull in from the vaddr varies. Note that although
812
+ * descaddrmask masks enough of the low bits of the descriptor
813
+ * to give a correct page or table address, the address field
814
+ * in a block descriptor is smaller; so we need to explicitly
815
+ * clear the lower bits here before ORing in the low vaddr bits.
816
+ */
817
+ page_size = (1ULL << ((stride * (4 - level)) + 3));
818
+ descaddr &= ~(page_size - 1);
819
+ descaddr |= (address & (page_size - 1));
820
+ /* Extract attributes from the descriptor */
821
+ attrs = extract64(descriptor, 2, 10)
822
+ | (extract64(descriptor, 52, 12) << 10);
823
+
824
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
825
+ /* Stage 2 table descriptors do not include any attribute fields */
826
+ break;
827
+ }
828
+ /* Merge in attributes from table descriptors */
829
+ attrs |= nstable << 3; /* NS */
830
+ guarded = extract64(descriptor, 50, 1); /* GP */
831
+ if (param.hpd) {
832
+ /* HPD disables all the table attributes except NSTable. */
833
+ break;
834
+ }
835
+ attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
836
+ /*
837
+ * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
838
+ * means "force PL1 access only", which means forcing AP[1] to 0.
839
+ */
840
+ attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
841
+ attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
842
+ break;
843
+ }
844
+ /*
845
+ * Here descaddr is the final physical address, and attributes
846
+ * are all in attrs.
847
+ */
848
+ fault_type = ARMFault_AccessFlag;
849
+ if ((attrs & (1 << 8)) == 0) {
850
+ /* Access flag */
851
+ goto do_fault;
852
+ }
853
+
854
+ ap = extract32(attrs, 4, 2);
855
+
856
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
857
+ ns = mmu_idx == ARMMMUIdx_Stage2;
858
+ xn = extract32(attrs, 11, 2);
859
+ *prot = get_S2prot(env, ap, xn, s1_is_el0);
860
+ } else {
861
+ ns = extract32(attrs, 3, 1);
862
+ xn = extract32(attrs, 12, 1);
863
+ pxn = extract32(attrs, 11, 1);
864
+ *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
865
+ }
866
+
867
+ fault_type = ARMFault_Permission;
868
+ if (!(*prot & (1 << access_type))) {
869
+ goto do_fault;
870
+ }
871
+
872
+ if (ns) {
873
+ /*
874
+ * The NS bit will (as required by the architecture) have no effect if
875
+ * the CPU doesn't support TZ or this is a non-secure translation
876
+ * regime, because the attribute will already be non-secure.
877
+ */
878
+ txattrs->secure = false;
879
+ }
880
+ /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
881
+ if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
882
+ arm_tlb_bti_gp(txattrs) = true;
883
+ }
884
+
885
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
886
+ cacheattrs->is_s2_format = true;
887
+ cacheattrs->attrs = extract32(attrs, 0, 4);
888
+ } else {
889
+ /* Index into MAIR registers for cache attributes */
890
+ uint8_t attrindx = extract32(attrs, 0, 3);
891
+ uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
892
+ assert(attrindx <= 7);
893
+ cacheattrs->is_s2_format = false;
894
+ cacheattrs->attrs = extract64(mair, attrindx * 8, 8);
895
+ }
896
+
897
+ /*
898
+ * For FEAT_LPA2 and effective DS, the SH field in the attributes
899
+ * was re-purposed for output address bits. The SH attribute in
900
+ * that case comes from TCR_ELx, which we extracted earlier.
901
+ */
902
+ if (param.ds) {
903
+ cacheattrs->shareability = param.sh;
904
+ } else {
905
+ cacheattrs->shareability = extract32(attrs, 6, 2);
906
+ }
907
+
908
+ *phys_ptr = descaddr;
909
+ *page_size_ptr = page_size;
910
+ return false;
911
+
912
+do_fault:
913
+ fi->type = fault_type;
914
+ fi->level = level;
915
+ /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
916
+ fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
917
+ mmu_idx == ARMMMUIdx_Stage2_S);
918
+ fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
919
+ return true;
920
+}
921
+
922
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
923
MMUAccessType access_type, ARMMMUIdx mmu_idx,
924
hwaddr *phys_ptr, int *prot,
925
--
926
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Move the ptw load functions, plus 3 common subroutines:
4
S1_ptw_translate, ptw_attrs_are_device, and regime_translation_big_endian.
5
This also allows get_phys_addr_lpae to become static again.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220604040607.269301-17-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/ptw.h | 13 ----
13
target/arm/helper.c | 141 --------------------------------------
14
target/arm/ptw.c | 160 ++++++++++++++++++++++++++++++++++++++++++--
15
3 files changed, 154 insertions(+), 160 deletions(-)
16
17
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/ptw.h
20
+++ b/target/arm/ptw.h
21
@@ -XXX,XX +XXX,XX @@
22
23
extern const uint8_t pamax_map[7];
24
25
-uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
26
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi);
27
-uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
28
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi);
29
-
30
bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
31
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
32
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
33
@@ -XXX,XX +XXX,XX @@ int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0);
34
int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
35
int ap, int ns, int xn, int pxn);
36
37
-bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
38
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
39
- bool s1_is_el0,
40
- hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
41
- target_ulong *page_size_ptr,
42
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
43
- __attribute__((nonnull));
44
-
45
#endif /* !CONFIG_USER_ONLY */
46
#endif /* TARGET_ARM_PTW_H */
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/helper.c
50
+++ b/target/arm/helper.c
51
@@ -XXX,XX +XXX,XX @@ bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
52
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
53
}
54
55
-static inline bool regime_translation_big_endian(CPUARMState *env,
56
- ARMMMUIdx mmu_idx)
57
-{
58
- return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
59
-}
60
-
61
/* Return the TTBR associated with this translation regime */
62
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
63
{
64
@@ -XXX,XX +XXX,XX @@ int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
65
return prot_rw | PAGE_EXEC;
66
}
67
68
-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
69
-{
70
- /*
71
- * For an S1 page table walk, the stage 1 attributes are always
72
- * some form of "this is Normal memory". The combined S1+S2
73
- * attributes are therefore only Device if stage 2 specifies Device.
74
- * With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
75
- * ie when cacheattrs.attrs bits [3:2] are 0b00.
76
- * With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
77
- * when cacheattrs.attrs bit [2] is 0.
78
- */
79
- assert(cacheattrs.is_s2_format);
80
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
81
- return (cacheattrs.attrs & 0x4) == 0;
82
- } else {
83
- return (cacheattrs.attrs & 0xc) == 0;
84
- }
85
-}
86
-
87
-/* Translate a S1 pagetable walk through S2 if needed. */
88
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
89
- hwaddr addr, bool *is_secure,
90
- ARMMMUFaultInfo *fi)
91
-{
92
- if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
93
- !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
94
- target_ulong s2size;
95
- hwaddr s2pa;
96
- int s2prot;
97
- int ret;
98
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
99
- : ARMMMUIdx_Stage2;
100
- ARMCacheAttrs cacheattrs = {};
101
- MemTxAttrs txattrs = {};
102
-
103
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
104
- &s2pa, &txattrs, &s2prot, &s2size, fi,
105
- &cacheattrs);
106
- if (ret) {
107
- assert(fi->type != ARMFault_None);
108
- fi->s2addr = addr;
109
- fi->stage2 = true;
110
- fi->s1ptw = true;
111
- fi->s1ns = !*is_secure;
112
- return ~0;
113
- }
114
- if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
115
- ptw_attrs_are_device(env, cacheattrs)) {
116
- /*
117
- * PTW set and S1 walk touched S2 Device memory:
118
- * generate Permission fault.
119
- */
120
- fi->type = ARMFault_Permission;
121
- fi->s2addr = addr;
122
- fi->stage2 = true;
123
- fi->s1ptw = true;
124
- fi->s1ns = !*is_secure;
125
- return ~0;
126
- }
127
-
128
- if (arm_is_secure_below_el3(env)) {
129
- /* Check if page table walk is to secure or non-secure PA space. */
130
- if (*is_secure) {
131
- *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
132
- } else {
133
- *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
134
- }
135
- } else {
136
- assert(!*is_secure);
137
- }
138
-
139
- addr = s2pa;
140
- }
141
- return addr;
142
-}
143
-
144
-/* All loads done in the course of a page table walk go through here. */
145
-uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
146
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
147
-{
148
- ARMCPU *cpu = ARM_CPU(cs);
149
- CPUARMState *env = &cpu->env;
150
- MemTxAttrs attrs = {};
151
- MemTxResult result = MEMTX_OK;
152
- AddressSpace *as;
153
- uint32_t data;
154
-
155
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
156
- attrs.secure = is_secure;
157
- as = arm_addressspace(cs, attrs);
158
- if (fi->s1ptw) {
159
- return 0;
160
- }
161
- if (regime_translation_big_endian(env, mmu_idx)) {
162
- data = address_space_ldl_be(as, addr, attrs, &result);
163
- } else {
164
- data = address_space_ldl_le(as, addr, attrs, &result);
165
- }
166
- if (result == MEMTX_OK) {
167
- return data;
168
- }
169
- fi->type = ARMFault_SyncExternalOnWalk;
170
- fi->ea = arm_extabort_type(result);
171
- return 0;
172
-}
173
-
174
-uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
175
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
176
-{
177
- ARMCPU *cpu = ARM_CPU(cs);
178
- CPUARMState *env = &cpu->env;
179
- MemTxAttrs attrs = {};
180
- MemTxResult result = MEMTX_OK;
181
- AddressSpace *as;
182
- uint64_t data;
183
-
184
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
185
- attrs.secure = is_secure;
186
- as = arm_addressspace(cs, attrs);
187
- if (fi->s1ptw) {
188
- return 0;
189
- }
190
- if (regime_translation_big_endian(env, mmu_idx)) {
191
- data = address_space_ldq_be(as, addr, attrs, &result);
192
- } else {
193
- data = address_space_ldq_le(as, addr, attrs, &result);
194
- }
195
- if (result == MEMTX_OK) {
196
- return data;
197
- }
198
- fi->type = ARMFault_SyncExternalOnWalk;
199
- fi->ea = arm_extabort_type(result);
200
- return 0;
201
-}
202
-
203
/*
204
* check_s2_mmu_setup
205
* @cpu: ARMCPU
206
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/ptw.c
209
+++ b/target/arm/ptw.c
210
@@ -XXX,XX +XXX,XX @@
211
#include "ptw.h"
212
213
214
+static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
215
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
216
+ bool s1_is_el0, hwaddr *phys_ptr,
217
+ MemTxAttrs *txattrs, int *prot,
218
+ target_ulong *page_size_ptr,
219
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
220
+ __attribute__((nonnull));
221
+
222
+static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
223
+{
224
+ return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
225
+}
226
+
227
+static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
228
+{
229
+ /*
230
+ * For an S1 page table walk, the stage 1 attributes are always
231
+ * some form of "this is Normal memory". The combined S1+S2
232
+ * attributes are therefore only Device if stage 2 specifies Device.
233
+ * With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
234
+ * ie when cacheattrs.attrs bits [3:2] are 0b00.
235
+ * With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
236
+ * when cacheattrs.attrs bit [2] is 0.
237
+ */
238
+ assert(cacheattrs.is_s2_format);
239
+ if (arm_hcr_el2_eff(env) & HCR_FWB) {
240
+ return (cacheattrs.attrs & 0x4) == 0;
241
+ } else {
242
+ return (cacheattrs.attrs & 0xc) == 0;
243
+ }
244
+}
245
+
246
+/* Translate a S1 pagetable walk through S2 if needed. */
247
+static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
248
+ hwaddr addr, bool *is_secure,
249
+ ARMMMUFaultInfo *fi)
250
+{
251
+ if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
252
+ !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
253
+ target_ulong s2size;
254
+ hwaddr s2pa;
255
+ int s2prot;
256
+ int ret;
257
+ ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
258
+ : ARMMMUIdx_Stage2;
259
+ ARMCacheAttrs cacheattrs = {};
260
+ MemTxAttrs txattrs = {};
261
+
262
+ ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
263
+ &s2pa, &txattrs, &s2prot, &s2size, fi,
264
+ &cacheattrs);
265
+ if (ret) {
266
+ assert(fi->type != ARMFault_None);
267
+ fi->s2addr = addr;
268
+ fi->stage2 = true;
269
+ fi->s1ptw = true;
270
+ fi->s1ns = !*is_secure;
271
+ return ~0;
272
+ }
273
+ if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
274
+ ptw_attrs_are_device(env, cacheattrs)) {
275
+ /*
276
+ * PTW set and S1 walk touched S2 Device memory:
277
+ * generate Permission fault.
278
+ */
279
+ fi->type = ARMFault_Permission;
280
+ fi->s2addr = addr;
281
+ fi->stage2 = true;
282
+ fi->s1ptw = true;
283
+ fi->s1ns = !*is_secure;
284
+ return ~0;
285
+ }
286
+
287
+ if (arm_is_secure_below_el3(env)) {
288
+ /* Check if page table walk is to secure or non-secure PA space. */
289
+ if (*is_secure) {
290
+ *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
291
+ } else {
292
+ *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
293
+ }
294
+ } else {
295
+ assert(!*is_secure);
296
+ }
297
+
298
+ addr = s2pa;
299
+ }
300
+ return addr;
301
+}
302
+
303
+/* All loads done in the course of a page table walk go through here. */
304
+static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
305
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
306
+{
307
+ ARMCPU *cpu = ARM_CPU(cs);
308
+ CPUARMState *env = &cpu->env;
309
+ MemTxAttrs attrs = {};
310
+ MemTxResult result = MEMTX_OK;
311
+ AddressSpace *as;
312
+ uint32_t data;
313
+
314
+ addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
315
+ attrs.secure = is_secure;
316
+ as = arm_addressspace(cs, attrs);
317
+ if (fi->s1ptw) {
318
+ return 0;
319
+ }
320
+ if (regime_translation_big_endian(env, mmu_idx)) {
321
+ data = address_space_ldl_be(as, addr, attrs, &result);
322
+ } else {
323
+ data = address_space_ldl_le(as, addr, attrs, &result);
324
+ }
325
+ if (result == MEMTX_OK) {
326
+ return data;
327
+ }
328
+ fi->type = ARMFault_SyncExternalOnWalk;
329
+ fi->ea = arm_extabort_type(result);
330
+ return 0;
331
+}
332
+
333
+static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
334
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
335
+{
336
+ ARMCPU *cpu = ARM_CPU(cs);
337
+ CPUARMState *env = &cpu->env;
338
+ MemTxAttrs attrs = {};
339
+ MemTxResult result = MEMTX_OK;
340
+ AddressSpace *as;
341
+ uint64_t data;
342
+
343
+ addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
344
+ attrs.secure = is_secure;
345
+ as = arm_addressspace(cs, attrs);
346
+ if (fi->s1ptw) {
347
+ return 0;
348
+ }
349
+ if (regime_translation_big_endian(env, mmu_idx)) {
350
+ data = address_space_ldq_be(as, addr, attrs, &result);
351
+ } else {
352
+ data = address_space_ldq_le(as, addr, attrs, &result);
353
+ }
354
+ if (result == MEMTX_OK) {
355
+ return data;
356
+ }
357
+ fi->type = ARMFault_SyncExternalOnWalk;
358
+ fi->ea = arm_extabort_type(result);
359
+ return 0;
360
+}
361
+
362
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
363
uint32_t *table, uint32_t address)
364
{
365
@@ -XXX,XX +XXX,XX @@ do_fault:
366
* @fi: set to fault info if the translation fails
367
* @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
368
*/
369
-bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
370
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
371
- bool s1_is_el0,
372
- hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
373
- target_ulong *page_size_ptr,
374
- ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
375
+static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
376
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
377
+ bool s1_is_el0, hwaddr *phys_ptr,
378
+ MemTxAttrs *txattrs, int *prot,
379
+ target_ulong *page_size_ptr,
380
+ ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
381
{
382
ARMCPU *cpu = env_archcpu(env);
383
CPUState *cs = CPU(cpu);
384
--
385
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
These functions are used for both page table walking and for
4
deciding what format in which to deliver exception results.
5
Since ptw.c is only present for system mode, put the functions
6
into tlb_helper.c.
7
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220604040607.269301-18-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/helper.c | 24 ------------------------
14
target/arm/tlb_helper.c | 26 ++++++++++++++++++++++++++
15
2 files changed, 26 insertions(+), 24 deletions(-)
16
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
22
}
23
#endif /* !CONFIG_USER_ONLY */
24
25
-/* Return true if the translation regime is using LPAE format page tables */
26
-bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
27
-{
28
- int el = regime_el(env, mmu_idx);
29
- if (el == 2 || arm_el_is_aa64(env, el)) {
30
- return true;
31
- }
32
- if (arm_feature(env, ARM_FEATURE_LPAE)
33
- && (regime_tcr(env, mmu_idx)->raw_tcr & TTBCR_EAE)) {
34
- return true;
35
- }
36
- return false;
37
-}
38
-
39
-/* Returns true if the stage 1 translation regime is using LPAE format page
40
- * tables. Used when raising alignment exceptions, whose FSR changes depending
41
- * on whether the long or short descriptor format is in use. */
42
-bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
43
-{
44
- mmu_idx = stage_1_mmu_idx(mmu_idx);
45
-
46
- return regime_using_lpae_format(env, mmu_idx);
47
-}
48
-
49
#ifndef CONFIG_USER_ONLY
50
bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
51
{
52
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/tlb_helper.c
55
+++ b/target/arm/tlb_helper.c
56
@@ -XXX,XX +XXX,XX @@
57
#include "exec/exec-all.h"
58
#include "exec/helper-proto.h"
59
60
+
61
+/* Return true if the translation regime is using LPAE format page tables */
62
+bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
63
+{
64
+ int el = regime_el(env, mmu_idx);
65
+ if (el == 2 || arm_el_is_aa64(env, el)) {
66
+ return true;
67
+ }
68
+ if (arm_feature(env, ARM_FEATURE_LPAE)
69
+ && (regime_tcr(env, mmu_idx)->raw_tcr & TTBCR_EAE)) {
70
+ return true;
71
+ }
72
+ return false;
73
+}
74
+
75
+/*
76
+ * Returns true if the stage 1 translation regime is using LPAE format page
77
+ * tables. Used when raising alignment exceptions, whose FSR changes depending
78
+ * on whether the long or short descriptor format is in use.
79
+ */
80
+bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
81
+{
82
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
83
+ return regime_using_lpae_format(env, mmu_idx);
84
+}
85
+
86
static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
87
unsigned int target_el,
88
bool same_el, bool ea,
89
--
90
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-19-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 2 --
9
target/arm/helper.c | 25 -------------------------
10
target/arm/ptw.c | 25 +++++++++++++++++++++++++
11
3 files changed, 25 insertions(+), 27 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@
18
19
#ifndef CONFIG_USER_ONLY
20
21
-extern const uint8_t pamax_map[7];
22
-
23
bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
24
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
25
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
31
}
32
#endif /* !CONFIG_USER_ONLY */
33
34
-/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
35
-const uint8_t pamax_map[] = {
36
- [0] = 32,
37
- [1] = 36,
38
- [2] = 40,
39
- [3] = 42,
40
- [4] = 44,
41
- [5] = 48,
42
- [6] = 52,
43
-};
44
-
45
-/* The cpu-specific constant value of PAMax; also used by hw/arm/virt. */
46
-unsigned int arm_pamax(ARMCPU *cpu)
47
-{
48
- unsigned int parange =
49
- FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
50
-
51
- /*
52
- * id_aa64mmfr0 is a read-only register so values outside of the
53
- * supported mappings can be considered an implementation error.
54
- */
55
- assert(parange < ARRAY_SIZE(pamax_map));
56
- return pamax_map[parange];
57
-}
58
-
59
int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
60
{
61
if (regime_has_2_ranges(mmu_idx)) {
62
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/ptw.c
65
+++ b/target/arm/ptw.c
66
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
67
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
68
__attribute__((nonnull));
69
70
+/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
71
+static const uint8_t pamax_map[] = {
72
+ [0] = 32,
73
+ [1] = 36,
74
+ [2] = 40,
75
+ [3] = 42,
76
+ [4] = 44,
77
+ [5] = 48,
78
+ [6] = 52,
79
+};
80
+
81
+/* The cpu-specific constant value of PAMax; also used by hw/arm/virt. */
82
+unsigned int arm_pamax(ARMCPU *cpu)
83
+{
84
+ unsigned int parange =
85
+ FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
86
+
87
+ /*
88
+ * id_aa64mmfr0 is a read-only register so values outside of the
89
+ * supported mappings can be considered an implementation error.
90
+ */
91
+ assert(parange < ARRAY_SIZE(pamax_map));
92
+ return pamax_map[parange];
93
+}
94
+
95
static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
96
{
97
return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
98
--
99
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-20-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 3 --
9
target/arm/helper.c | 128 --------------------------------------------
10
target/arm/ptw.c | 128 ++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 128 insertions(+), 131 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
18
ARMMMUIdx mmu_idx);
19
bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
20
int inputsize, int stride, int outputsize);
21
-int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0);
22
-int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
23
- int ap, int ns, int xn, int pxn);
24
25
#endif /* !CONFIG_USER_ONLY */
26
#endif /* TARGET_ARM_PTW_H */
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
30
+++ b/target/arm/helper.c
31
@@ -XXX,XX +XXX,XX @@ int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
32
}
33
}
34
35
-/* Translate S2 section/page access permissions to protection flags
36
- *
37
- * @env: CPUARMState
38
- * @s2ap: The 2-bit stage2 access permissions (S2AP)
39
- * @xn: XN (execute-never) bits
40
- * @s1_is_el0: true if this is S2 of an S1+2 walk for EL0
41
- */
42
-int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
43
-{
44
- int prot = 0;
45
-
46
- if (s2ap & 1) {
47
- prot |= PAGE_READ;
48
- }
49
- if (s2ap & 2) {
50
- prot |= PAGE_WRITE;
51
- }
52
-
53
- if (cpu_isar_feature(any_tts2uxn, env_archcpu(env))) {
54
- switch (xn) {
55
- case 0:
56
- prot |= PAGE_EXEC;
57
- break;
58
- case 1:
59
- if (s1_is_el0) {
60
- prot |= PAGE_EXEC;
61
- }
62
- break;
63
- case 2:
64
- break;
65
- case 3:
66
- if (!s1_is_el0) {
67
- prot |= PAGE_EXEC;
68
- }
69
- break;
70
- default:
71
- g_assert_not_reached();
72
- }
73
- } else {
74
- if (!extract32(xn, 1, 1)) {
75
- if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) {
76
- prot |= PAGE_EXEC;
77
- }
78
- }
79
- }
80
- return prot;
81
-}
82
-
83
-/* Translate section/page access permissions to protection flags
84
- *
85
- * @env: CPUARMState
86
- * @mmu_idx: MMU index indicating required translation regime
87
- * @is_aa64: TRUE if AArch64
88
- * @ap: The 2-bit simple AP (AP[2:1])
89
- * @ns: NS (non-secure) bit
90
- * @xn: XN (execute-never) bit
91
- * @pxn: PXN (privileged execute-never) bit
92
- */
93
-int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
94
- int ap, int ns, int xn, int pxn)
95
-{
96
- bool is_user = regime_is_user(env, mmu_idx);
97
- int prot_rw, user_rw;
98
- bool have_wxn;
99
- int wxn = 0;
100
-
101
- assert(mmu_idx != ARMMMUIdx_Stage2);
102
- assert(mmu_idx != ARMMMUIdx_Stage2_S);
103
-
104
- user_rw = simple_ap_to_rw_prot_is_user(ap, true);
105
- if (is_user) {
106
- prot_rw = user_rw;
107
- } else {
108
- if (user_rw && regime_is_pan(env, mmu_idx)) {
109
- /* PAN forbids data accesses but doesn't affect insn fetch */
110
- prot_rw = 0;
111
- } else {
112
- prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
113
- }
114
- }
115
-
116
- if (ns && arm_is_secure(env) && (env->cp15.scr_el3 & SCR_SIF)) {
117
- return prot_rw;
118
- }
119
-
120
- /* TODO have_wxn should be replaced with
121
- * ARM_FEATURE_V8 || (ARM_FEATURE_V7 && ARM_FEATURE_EL2)
122
- * when ARM_FEATURE_EL2 starts getting set. For now we assume all LPAE
123
- * compatible processors have EL2, which is required for [U]WXN.
124
- */
125
- have_wxn = arm_feature(env, ARM_FEATURE_LPAE);
126
-
127
- if (have_wxn) {
128
- wxn = regime_sctlr(env, mmu_idx) & SCTLR_WXN;
129
- }
130
-
131
- if (is_aa64) {
132
- if (regime_has_2_ranges(mmu_idx) && !is_user) {
133
- xn = pxn || (user_rw & PAGE_WRITE);
134
- }
135
- } else if (arm_feature(env, ARM_FEATURE_V7)) {
136
- switch (regime_el(env, mmu_idx)) {
137
- case 1:
138
- case 3:
139
- if (is_user) {
140
- xn = xn || !(user_rw & PAGE_READ);
141
- } else {
142
- int uwxn = 0;
143
- if (have_wxn) {
144
- uwxn = regime_sctlr(env, mmu_idx) & SCTLR_UWXN;
145
- }
146
- xn = xn || !(prot_rw & PAGE_READ) || pxn ||
147
- (uwxn && (user_rw & PAGE_WRITE));
148
- }
149
- break;
150
- case 2:
151
- break;
152
- }
153
- } else {
154
- xn = wxn = 0;
155
- }
156
-
157
- if (xn || (wxn && (prot_rw & PAGE_WRITE))) {
158
- return prot_rw;
159
- }
160
- return prot_rw | PAGE_EXEC;
161
-}
162
-
163
/*
164
* check_s2_mmu_setup
165
* @cpu: ARMCPU
166
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
167
index XXXXXXX..XXXXXXX 100644
168
--- a/target/arm/ptw.c
169
+++ b/target/arm/ptw.c
170
@@ -XXX,XX +XXX,XX @@ do_fault:
171
return true;
172
}
173
174
+/*
175
+ * Translate S2 section/page access permissions to protection flags
176
+ * @env: CPUARMState
177
+ * @s2ap: The 2-bit stage2 access permissions (S2AP)
178
+ * @xn: XN (execute-never) bits
179
+ * @s1_is_el0: true if this is S2 of an S1+2 walk for EL0
180
+ */
181
+static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
182
+{
183
+ int prot = 0;
184
+
185
+ if (s2ap & 1) {
186
+ prot |= PAGE_READ;
187
+ }
188
+ if (s2ap & 2) {
189
+ prot |= PAGE_WRITE;
190
+ }
191
+
192
+ if (cpu_isar_feature(any_tts2uxn, env_archcpu(env))) {
193
+ switch (xn) {
194
+ case 0:
195
+ prot |= PAGE_EXEC;
196
+ break;
197
+ case 1:
198
+ if (s1_is_el0) {
199
+ prot |= PAGE_EXEC;
200
+ }
201
+ break;
202
+ case 2:
203
+ break;
204
+ case 3:
205
+ if (!s1_is_el0) {
206
+ prot |= PAGE_EXEC;
207
+ }
208
+ break;
209
+ default:
210
+ g_assert_not_reached();
211
+ }
212
+ } else {
213
+ if (!extract32(xn, 1, 1)) {
214
+ if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) {
215
+ prot |= PAGE_EXEC;
216
+ }
217
+ }
218
+ }
219
+ return prot;
220
+}
221
+
222
+/*
223
+ * Translate section/page access permissions to protection flags
224
+ * @env: CPUARMState
225
+ * @mmu_idx: MMU index indicating required translation regime
226
+ * @is_aa64: TRUE if AArch64
227
+ * @ap: The 2-bit simple AP (AP[2:1])
228
+ * @ns: NS (non-secure) bit
229
+ * @xn: XN (execute-never) bit
230
+ * @pxn: PXN (privileged execute-never) bit
231
+ */
232
+static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
233
+ int ap, int ns, int xn, int pxn)
234
+{
235
+ bool is_user = regime_is_user(env, mmu_idx);
236
+ int prot_rw, user_rw;
237
+ bool have_wxn;
238
+ int wxn = 0;
239
+
240
+ assert(mmu_idx != ARMMMUIdx_Stage2);
241
+ assert(mmu_idx != ARMMMUIdx_Stage2_S);
242
+
243
+ user_rw = simple_ap_to_rw_prot_is_user(ap, true);
244
+ if (is_user) {
245
+ prot_rw = user_rw;
246
+ } else {
247
+ if (user_rw && regime_is_pan(env, mmu_idx)) {
248
+ /* PAN forbids data accesses but doesn't affect insn fetch */
249
+ prot_rw = 0;
250
+ } else {
251
+ prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
252
+ }
253
+ }
254
+
255
+ if (ns && arm_is_secure(env) && (env->cp15.scr_el3 & SCR_SIF)) {
256
+ return prot_rw;
257
+ }
258
+
259
+ /* TODO have_wxn should be replaced with
260
+ * ARM_FEATURE_V8 || (ARM_FEATURE_V7 && ARM_FEATURE_EL2)
261
+ * when ARM_FEATURE_EL2 starts getting set. For now we assume all LPAE
262
+ * compatible processors have EL2, which is required for [U]WXN.
263
+ */
264
+ have_wxn = arm_feature(env, ARM_FEATURE_LPAE);
265
+
266
+ if (have_wxn) {
267
+ wxn = regime_sctlr(env, mmu_idx) & SCTLR_WXN;
268
+ }
269
+
270
+ if (is_aa64) {
271
+ if (regime_has_2_ranges(mmu_idx) && !is_user) {
272
+ xn = pxn || (user_rw & PAGE_WRITE);
273
+ }
274
+ } else if (arm_feature(env, ARM_FEATURE_V7)) {
275
+ switch (regime_el(env, mmu_idx)) {
276
+ case 1:
277
+ case 3:
278
+ if (is_user) {
279
+ xn = xn || !(user_rw & PAGE_READ);
280
+ } else {
281
+ int uwxn = 0;
282
+ if (have_wxn) {
283
+ uwxn = regime_sctlr(env, mmu_idx) & SCTLR_UWXN;
284
+ }
285
+ xn = xn || !(prot_rw & PAGE_READ) || pxn ||
286
+ (uwxn && (user_rw & PAGE_WRITE));
287
+ }
288
+ break;
289
+ case 2:
290
+ break;
291
+ }
292
+ } else {
293
+ xn = wxn = 0;
294
+ }
295
+
296
+ if (xn || (wxn && (prot_rw & PAGE_WRITE))) {
297
+ return prot_rw;
298
+ }
299
+ return prot_rw | PAGE_EXEC;
300
+}
301
+
302
/**
303
* get_phys_addr_lpae: perform one stage of page table walk, LPAE format
304
*
305
--
306
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-21-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 2 --
9
target/arm/helper.c | 70 ---------------------------------------------
10
target/arm/ptw.c | 70 +++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 70 insertions(+), 72 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
19
ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
20
ARMMMUIdx mmu_idx);
21
-bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
22
- int inputsize, int stride, int outputsize);
23
24
#endif /* !CONFIG_USER_ONLY */
25
#endif /* TARGET_ARM_PTW_H */
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
31
g_assert_not_reached();
32
}
33
}
34
-
35
-/*
36
- * check_s2_mmu_setup
37
- * @cpu: ARMCPU
38
- * @is_aa64: True if the translation regime is in AArch64 state
39
- * @startlevel: Suggested starting level
40
- * @inputsize: Bitsize of IPAs
41
- * @stride: Page-table stride (See the ARM ARM)
42
- *
43
- * Returns true if the suggested S2 translation parameters are OK and
44
- * false otherwise.
45
- */
46
-bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
47
- int inputsize, int stride, int outputsize)
48
-{
49
- const int grainsize = stride + 3;
50
- int startsizecheck;
51
-
52
- /*
53
- * Negative levels are usually not allowed...
54
- * Except for FEAT_LPA2, 4k page table, 52-bit address space, which
55
- * begins with level -1. Note that previous feature tests will have
56
- * eliminated this combination if it is not enabled.
57
- */
58
- if (level < (inputsize == 52 && stride == 9 ? -1 : 0)) {
59
- return false;
60
- }
61
-
62
- startsizecheck = inputsize - ((3 - level) * stride + grainsize);
63
- if (startsizecheck < 1 || startsizecheck > stride + 4) {
64
- return false;
65
- }
66
-
67
- if (is_aa64) {
68
- switch (stride) {
69
- case 13: /* 64KB Pages. */
70
- if (level == 0 || (level == 1 && outputsize <= 42)) {
71
- return false;
72
- }
73
- break;
74
- case 11: /* 16KB Pages. */
75
- if (level == 0 || (level == 1 && outputsize <= 40)) {
76
- return false;
77
- }
78
- break;
79
- case 9: /* 4KB Pages. */
80
- if (level == 0 && outputsize <= 42) {
81
- return false;
82
- }
83
- break;
84
- default:
85
- g_assert_not_reached();
86
- }
87
-
88
- /* Inputsize checks. */
89
- if (inputsize > outputsize &&
90
- (arm_el_is_aa64(&cpu->env, 1) || inputsize > 40)) {
91
- /* This is CONSTRAINED UNPREDICTABLE and we choose to fault. */
92
- return false;
93
- }
94
- } else {
95
- /* AArch32 only supports 4KB pages. Assert on that. */
96
- assert(stride == 9);
97
-
98
- if (level == 0) {
99
- return false;
100
- }
101
- }
102
- return true;
103
-}
104
#endif /* !CONFIG_USER_ONLY */
105
106
int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
107
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/ptw.c
110
+++ b/target/arm/ptw.c
111
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
112
return prot_rw | PAGE_EXEC;
113
}
114
115
+/*
116
+ * check_s2_mmu_setup
117
+ * @cpu: ARMCPU
118
+ * @is_aa64: True if the translation regime is in AArch64 state
119
+ * @startlevel: Suggested starting level
120
+ * @inputsize: Bitsize of IPAs
121
+ * @stride: Page-table stride (See the ARM ARM)
122
+ *
123
+ * Returns true if the suggested S2 translation parameters are OK and
124
+ * false otherwise.
125
+ */
126
+static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
127
+ int inputsize, int stride, int outputsize)
128
+{
129
+ const int grainsize = stride + 3;
130
+ int startsizecheck;
131
+
132
+ /*
133
+ * Negative levels are usually not allowed...
134
+ * Except for FEAT_LPA2, 4k page table, 52-bit address space, which
135
+ * begins with level -1. Note that previous feature tests will have
136
+ * eliminated this combination if it is not enabled.
137
+ */
138
+ if (level < (inputsize == 52 && stride == 9 ? -1 : 0)) {
139
+ return false;
140
+ }
141
+
142
+ startsizecheck = inputsize - ((3 - level) * stride + grainsize);
143
+ if (startsizecheck < 1 || startsizecheck > stride + 4) {
144
+ return false;
145
+ }
146
+
147
+ if (is_aa64) {
148
+ switch (stride) {
149
+ case 13: /* 64KB Pages. */
150
+ if (level == 0 || (level == 1 && outputsize <= 42)) {
151
+ return false;
152
+ }
153
+ break;
154
+ case 11: /* 16KB Pages. */
155
+ if (level == 0 || (level == 1 && outputsize <= 40)) {
156
+ return false;
157
+ }
158
+ break;
159
+ case 9: /* 4KB Pages. */
160
+ if (level == 0 && outputsize <= 42) {
161
+ return false;
162
+ }
163
+ break;
164
+ default:
165
+ g_assert_not_reached();
166
+ }
167
+
168
+ /* Inputsize checks. */
169
+ if (inputsize > outputsize &&
170
+ (arm_el_is_aa64(&cpu->env, 1) || inputsize > 40)) {
171
+ /* This is CONSTRAINED UNPREDICTABLE and we choose to fault. */
172
+ return false;
173
+ }
174
+ } else {
175
+ /* AArch32 only supports 4KB pages. Assert on that. */
176
+ assert(stride == 9);
177
+
178
+ if (level == 0) {
179
+ return false;
180
+ }
181
+ }
182
+ return true;
183
+}
184
+
185
/**
186
* get_phys_addr_lpae: perform one stage of page table walk, LPAE format
187
*
188
--
189
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-22-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 3 ---
9
target/arm/helper.c | 64 ---------------------------------------------
10
target/arm/ptw.c | 64 +++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 64 insertions(+), 67 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
18
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
19
}
20
21
-ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
22
- ARMMMUIdx mmu_idx);
23
-
24
#endif /* !CONFIG_USER_ONLY */
25
#endif /* TARGET_ARM_PTW_H */
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
31
}
32
33
#ifndef CONFIG_USER_ONLY
34
-ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
35
- ARMMMUIdx mmu_idx)
36
-{
37
- uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
38
- uint32_t el = regime_el(env, mmu_idx);
39
- int select, tsz;
40
- bool epd, hpd;
41
-
42
- assert(mmu_idx != ARMMMUIdx_Stage2_S);
43
-
44
- if (mmu_idx == ARMMMUIdx_Stage2) {
45
- /* VTCR */
46
- bool sext = extract32(tcr, 4, 1);
47
- bool sign = extract32(tcr, 3, 1);
48
-
49
- /*
50
- * If the sign-extend bit is not the same as t0sz[3], the result
51
- * is unpredictable. Flag this as a guest error.
52
- */
53
- if (sign != sext) {
54
- qemu_log_mask(LOG_GUEST_ERROR,
55
- "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n");
56
- }
57
- tsz = sextract32(tcr, 0, 4) + 8;
58
- select = 0;
59
- hpd = false;
60
- epd = false;
61
- } else if (el == 2) {
62
- /* HTCR */
63
- tsz = extract32(tcr, 0, 3);
64
- select = 0;
65
- hpd = extract64(tcr, 24, 1);
66
- epd = false;
67
- } else {
68
- int t0sz = extract32(tcr, 0, 3);
69
- int t1sz = extract32(tcr, 16, 3);
70
-
71
- if (t1sz == 0) {
72
- select = va > (0xffffffffu >> t0sz);
73
- } else {
74
- /* Note that we will detect errors later. */
75
- select = va >= ~(0xffffffffu >> t1sz);
76
- }
77
- if (!select) {
78
- tsz = t0sz;
79
- epd = extract32(tcr, 7, 1);
80
- hpd = extract64(tcr, 41, 1);
81
- } else {
82
- tsz = t1sz;
83
- epd = extract32(tcr, 23, 1);
84
- hpd = extract64(tcr, 42, 1);
85
- }
86
- /* For aarch32, hpd0 is not enabled without t2e as well. */
87
- hpd &= extract32(tcr, 6, 1);
88
- }
89
-
90
- return (ARMVAParameters) {
91
- .tsz = tsz,
92
- .select = select,
93
- .epd = epd,
94
- .hpd = hpd,
95
- };
96
-}
97
-
98
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
99
MemTxAttrs *attrs)
100
{
101
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
102
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/ptw.c
104
+++ b/target/arm/ptw.c
105
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
106
return prot_rw | PAGE_EXEC;
107
}
108
109
+static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
110
+ ARMMMUIdx mmu_idx)
111
+{
112
+ uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
113
+ uint32_t el = regime_el(env, mmu_idx);
114
+ int select, tsz;
115
+ bool epd, hpd;
116
+
117
+ assert(mmu_idx != ARMMMUIdx_Stage2_S);
118
+
119
+ if (mmu_idx == ARMMMUIdx_Stage2) {
120
+ /* VTCR */
121
+ bool sext = extract32(tcr, 4, 1);
122
+ bool sign = extract32(tcr, 3, 1);
123
+
124
+ /*
125
+ * If the sign-extend bit is not the same as t0sz[3], the result
126
+ * is unpredictable. Flag this as a guest error.
127
+ */
128
+ if (sign != sext) {
129
+ qemu_log_mask(LOG_GUEST_ERROR,
130
+ "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n");
131
+ }
132
+ tsz = sextract32(tcr, 0, 4) + 8;
133
+ select = 0;
134
+ hpd = false;
135
+ epd = false;
136
+ } else if (el == 2) {
137
+ /* HTCR */
138
+ tsz = extract32(tcr, 0, 3);
139
+ select = 0;
140
+ hpd = extract64(tcr, 24, 1);
141
+ epd = false;
142
+ } else {
143
+ int t0sz = extract32(tcr, 0, 3);
144
+ int t1sz = extract32(tcr, 16, 3);
145
+
146
+ if (t1sz == 0) {
147
+ select = va > (0xffffffffu >> t0sz);
148
+ } else {
149
+ /* Note that we will detect errors later. */
150
+ select = va >= ~(0xffffffffu >> t1sz);
151
+ }
152
+ if (!select) {
153
+ tsz = t0sz;
154
+ epd = extract32(tcr, 7, 1);
155
+ hpd = extract64(tcr, 41, 1);
156
+ } else {
157
+ tsz = t1sz;
158
+ epd = extract32(tcr, 23, 1);
159
+ hpd = extract64(tcr, 42, 1);
160
+ }
161
+ /* For aarch32, hpd0 is not enabled without t2e as well. */
162
+ hpd &= extract32(tcr, 6, 1);
163
+ }
164
+
165
+ return (ARMVAParameters) {
166
+ .tsz = tsz,
167
+ .select = select,
168
+ .epd = epd,
169
+ .hpd = hpd,
170
+ };
171
+}
172
+
173
/*
174
* check_s2_mmu_setup
175
* @cpu: ARMCPU
176
--
177
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-23-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 10 ------
9
target/arm/helper.c | 77 ------------------------------------------
10
target/arm/ptw.c | 81 +++++++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 81 insertions(+), 87 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@ bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
18
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
19
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
20
21
-int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
22
- int ap, int domain_prot);
23
-int simple_ap_to_rw_prot_is_user(int ap, bool is_user);
24
-
25
-static inline int
26
-simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
27
-{
28
- return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
29
-}
30
-
31
#endif /* !CONFIG_USER_ONLY */
32
#endif /* TARGET_ARM_PTW_H */
33
diff --git a/target/arm/helper.c b/target/arm/helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/helper.c
36
+++ b/target/arm/helper.c
37
@@ -XXX,XX +XXX,XX @@ bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
38
g_assert_not_reached();
39
}
40
}
41
-
42
-/* Translate section/page access permissions to page
43
- * R/W protection flags
44
- *
45
- * @env: CPUARMState
46
- * @mmu_idx: MMU index indicating required translation regime
47
- * @ap: The 3-bit access permissions (AP[2:0])
48
- * @domain_prot: The 2-bit domain access permissions
49
- */
50
-int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap, int domain_prot)
51
-{
52
- bool is_user = regime_is_user(env, mmu_idx);
53
-
54
- if (domain_prot == 3) {
55
- return PAGE_READ | PAGE_WRITE;
56
- }
57
-
58
- switch (ap) {
59
- case 0:
60
- if (arm_feature(env, ARM_FEATURE_V7)) {
61
- return 0;
62
- }
63
- switch (regime_sctlr(env, mmu_idx) & (SCTLR_S | SCTLR_R)) {
64
- case SCTLR_S:
65
- return is_user ? 0 : PAGE_READ;
66
- case SCTLR_R:
67
- return PAGE_READ;
68
- default:
69
- return 0;
70
- }
71
- case 1:
72
- return is_user ? 0 : PAGE_READ | PAGE_WRITE;
73
- case 2:
74
- if (is_user) {
75
- return PAGE_READ;
76
- } else {
77
- return PAGE_READ | PAGE_WRITE;
78
- }
79
- case 3:
80
- return PAGE_READ | PAGE_WRITE;
81
- case 4: /* Reserved. */
82
- return 0;
83
- case 5:
84
- return is_user ? 0 : PAGE_READ;
85
- case 6:
86
- return PAGE_READ;
87
- case 7:
88
- if (!arm_feature(env, ARM_FEATURE_V6K)) {
89
- return 0;
90
- }
91
- return PAGE_READ;
92
- default:
93
- g_assert_not_reached();
94
- }
95
-}
96
-
97
-/* Translate section/page access permissions to page
98
- * R/W protection flags.
99
- *
100
- * @ap: The 2-bit simple AP (AP[2:1])
101
- * @is_user: TRUE if accessing from PL0
102
- */
103
-int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
104
-{
105
- switch (ap) {
106
- case 0:
107
- return is_user ? 0 : PAGE_READ | PAGE_WRITE;
108
- case 1:
109
- return PAGE_READ | PAGE_WRITE;
110
- case 2:
111
- return is_user ? 0 : PAGE_READ;
112
- case 3:
113
- return PAGE_READ;
114
- default:
115
- g_assert_not_reached();
116
- }
117
-}
118
#endif /* !CONFIG_USER_ONLY */
119
120
int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
121
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/ptw.c
124
+++ b/target/arm/ptw.c
125
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
126
return true;
127
}
128
129
+/*
130
+ * Translate section/page access permissions to page R/W protection flags
131
+ * @env: CPUARMState
132
+ * @mmu_idx: MMU index indicating required translation regime
133
+ * @ap: The 3-bit access permissions (AP[2:0])
134
+ * @domain_prot: The 2-bit domain access permissions
135
+ */
136
+static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
137
+ int ap, int domain_prot)
138
+{
139
+ bool is_user = regime_is_user(env, mmu_idx);
140
+
141
+ if (domain_prot == 3) {
142
+ return PAGE_READ | PAGE_WRITE;
143
+ }
144
+
145
+ switch (ap) {
146
+ case 0:
147
+ if (arm_feature(env, ARM_FEATURE_V7)) {
148
+ return 0;
149
+ }
150
+ switch (regime_sctlr(env, mmu_idx) & (SCTLR_S | SCTLR_R)) {
151
+ case SCTLR_S:
152
+ return is_user ? 0 : PAGE_READ;
153
+ case SCTLR_R:
154
+ return PAGE_READ;
155
+ default:
156
+ return 0;
157
+ }
158
+ case 1:
159
+ return is_user ? 0 : PAGE_READ | PAGE_WRITE;
160
+ case 2:
161
+ if (is_user) {
162
+ return PAGE_READ;
163
+ } else {
164
+ return PAGE_READ | PAGE_WRITE;
165
+ }
166
+ case 3:
167
+ return PAGE_READ | PAGE_WRITE;
168
+ case 4: /* Reserved. */
169
+ return 0;
170
+ case 5:
171
+ return is_user ? 0 : PAGE_READ;
172
+ case 6:
173
+ return PAGE_READ;
174
+ case 7:
175
+ if (!arm_feature(env, ARM_FEATURE_V6K)) {
176
+ return 0;
177
+ }
178
+ return PAGE_READ;
179
+ default:
180
+ g_assert_not_reached();
181
+ }
182
+}
183
+
184
+/*
185
+ * Translate section/page access permissions to page R/W protection flags.
186
+ * @ap: The 2-bit simple AP (AP[2:1])
187
+ * @is_user: TRUE if accessing from PL0
188
+ */
189
+static int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
190
+{
191
+ switch (ap) {
192
+ case 0:
193
+ return is_user ? 0 : PAGE_READ | PAGE_WRITE;
194
+ case 1:
195
+ return PAGE_READ | PAGE_WRITE;
196
+ case 2:
197
+ return is_user ? 0 : PAGE_READ;
198
+ case 3:
199
+ return PAGE_READ;
200
+ default:
201
+ g_assert_not_reached();
202
+ }
203
+}
204
+
205
+static int simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
206
+{
207
+ return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
208
+}
209
+
210
static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
211
MMUAccessType access_type, ARMMMUIdx mmu_idx,
212
hwaddr *phys_ptr, int *prot,
213
--
214
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-24-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 1 -
9
target/arm/helper.c | 24 ------------------------
10
target/arm/ptw.c | 22 ++++++++++++++++++++++
11
3 files changed, 22 insertions(+), 25 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@
18
19
#ifndef CONFIG_USER_ONLY
20
21
-bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
22
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
23
uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
24
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
30
}
31
#endif /* !CONFIG_USER_ONLY */
32
33
-#ifndef CONFIG_USER_ONLY
34
-bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
35
-{
36
- switch (mmu_idx) {
37
- case ARMMMUIdx_SE10_0:
38
- case ARMMMUIdx_E20_0:
39
- case ARMMMUIdx_SE20_0:
40
- case ARMMMUIdx_Stage1_E0:
41
- case ARMMMUIdx_Stage1_SE0:
42
- case ARMMMUIdx_MUser:
43
- case ARMMMUIdx_MSUser:
44
- case ARMMMUIdx_MUserNegPri:
45
- case ARMMMUIdx_MSUserNegPri:
46
- return true;
47
- default:
48
- return false;
49
- case ARMMMUIdx_E10_0:
50
- case ARMMMUIdx_E10_1:
51
- case ARMMMUIdx_E10_1_PAN:
52
- g_assert_not_reached();
53
- }
54
-}
55
-#endif /* !CONFIG_USER_ONLY */
56
-
57
int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
58
{
59
if (regime_has_2_ranges(mmu_idx)) {
60
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/ptw.c
63
+++ b/target/arm/ptw.c
64
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
65
return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
66
}
67
68
+static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
69
+{
70
+ switch (mmu_idx) {
71
+ case ARMMMUIdx_SE10_0:
72
+ case ARMMMUIdx_E20_0:
73
+ case ARMMMUIdx_SE20_0:
74
+ case ARMMMUIdx_Stage1_E0:
75
+ case ARMMMUIdx_Stage1_SE0:
76
+ case ARMMMUIdx_MUser:
77
+ case ARMMMUIdx_MSUser:
78
+ case ARMMMUIdx_MUserNegPri:
79
+ case ARMMMUIdx_MSUserNegPri:
80
+ return true;
81
+ default:
82
+ return false;
83
+ case ARMMMUIdx_E10_0:
84
+ case ARMMMUIdx_E10_1:
85
+ case ARMMMUIdx_E10_1_PAN:
86
+ g_assert_not_reached();
87
+ }
88
+}
89
+
90
static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
91
{
92
/*
93
--
94
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-25-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 1 -
9
target/arm/helper.c | 16 ----------------
10
target/arm/ptw.c | 16 ++++++++++++++++
11
3 files changed, 16 insertions(+), 17 deletions(-)
12
13
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.h
16
+++ b/target/arm/ptw.h
17
@@ -XXX,XX +XXX,XX @@
18
#ifndef CONFIG_USER_ONLY
19
20
bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
21
-uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn);
22
23
#endif /* !CONFIG_USER_ONLY */
24
#endif /* TARGET_ARM_PTW_H */
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
30
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
31
}
32
33
-/* Return the TTBR associated with this translation regime */
34
-uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
35
-{
36
- if (mmu_idx == ARMMMUIdx_Stage2) {
37
- return env->cp15.vttbr_el2;
38
- }
39
- if (mmu_idx == ARMMMUIdx_Stage2_S) {
40
- return env->cp15.vsttbr_el2;
41
- }
42
- if (ttbrn == 0) {
43
- return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
44
- } else {
45
- return env->cp15.ttbr1_el[regime_el(env, mmu_idx)];
46
- }
47
-}
48
-
49
/* Convert a possible stage1+2 MMU index into the appropriate
50
* stage 1 MMU index
51
*/
52
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/ptw.c
55
+++ b/target/arm/ptw.c
56
@@ -XXX,XX +XXX,XX @@ static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
57
}
58
}
59
60
+/* Return the TTBR associated with this translation regime */
61
+static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
62
+{
63
+ if (mmu_idx == ARMMMUIdx_Stage2) {
64
+ return env->cp15.vttbr_el2;
65
+ }
66
+ if (mmu_idx == ARMMMUIdx_Stage2_S) {
67
+ return env->cp15.vsttbr_el2;
68
+ }
69
+ if (ttbrn == 0) {
70
+ return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
71
+ } else {
72
+ return env->cp15.ttbr1_el[regime_el(env, mmu_idx)];
73
+ }
74
+}
75
+
76
static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
77
{
78
/*
79
--
80
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-26-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.h | 17 ----------------
9
target/arm/helper.c | 47 ---------------------------------------------
10
target/arm/ptw.c | 47 ++++++++++++++++++++++++++++++++++++++++++++-
11
3 files changed, 46 insertions(+), 65 deletions(-)
12
delete mode 100644 target/arm/ptw.h
13
14
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
15
deleted file mode 100644
16
index XXXXXXX..XXXXXXX
17
--- a/target/arm/ptw.h
18
+++ /dev/null
19
@@ -XXX,XX +XXX,XX @@
20
-/*
21
- * ARM page table walking.
22
- *
23
- * This code is licensed under the GNU GPL v2 or later.
24
- *
25
- * SPDX-License-Identifier: GPL-2.0-or-later
26
- */
27
-
28
-#ifndef TARGET_ARM_PTW_H
29
-#define TARGET_ARM_PTW_H
30
-
31
-#ifndef CONFIG_USER_ONLY
32
-
33
-bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
34
-
35
-#endif /* !CONFIG_USER_ONLY */
36
-#endif /* TARGET_ARM_PTW_H */
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@
42
#include "semihosting/common-semi.h"
43
#endif
44
#include "cpregs.h"
45
-#include "ptw.h"
46
47
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
48
49
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
50
}
51
52
#ifndef CONFIG_USER_ONLY
53
-
54
-/* Return true if the specified stage of address translation is disabled */
55
-bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
56
-{
57
- uint64_t hcr_el2;
58
-
59
- if (arm_feature(env, ARM_FEATURE_M)) {
60
- switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
61
- (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
62
- case R_V7M_MPU_CTRL_ENABLE_MASK:
63
- /* Enabled, but not for HardFault and NMI */
64
- return mmu_idx & ARM_MMU_IDX_M_NEGPRI;
65
- case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
66
- /* Enabled for all cases */
67
- return false;
68
- case 0:
69
- default:
70
- /* HFNMIENA set and ENABLE clear is UNPREDICTABLE, but
71
- * we warned about that in armv7m_nvic.c when the guest set it.
72
- */
73
- return true;
74
- }
75
- }
76
-
77
- hcr_el2 = arm_hcr_el2_eff(env);
78
-
79
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
80
- /* HCR.DC means HCR.VM behaves as 1 */
81
- return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
82
- }
83
-
84
- if (hcr_el2 & HCR_TGE) {
85
- /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
86
- if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
87
- return true;
88
- }
89
- }
90
-
91
- if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
92
- /* HCR.DC means SCTLR_EL1.M behaves as 0 */
93
- return true;
94
- }
95
-
96
- return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
97
-}
98
-
99
/* Convert a possible stage1+2 MMU index into the appropriate
100
* stage 1 MMU index
101
*/
102
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/ptw.c
105
+++ b/target/arm/ptw.c
106
@@ -XXX,XX +XXX,XX @@
107
#include "cpu.h"
108
#include "internals.h"
109
#include "idau.h"
110
-#include "ptw.h"
111
112
113
static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
114
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
115
}
116
}
117
118
+/* Return true if the specified stage of address translation is disabled */
119
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
120
+{
121
+ uint64_t hcr_el2;
122
+
123
+ if (arm_feature(env, ARM_FEATURE_M)) {
124
+ switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
125
+ (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
126
+ case R_V7M_MPU_CTRL_ENABLE_MASK:
127
+ /* Enabled, but not for HardFault and NMI */
128
+ return mmu_idx & ARM_MMU_IDX_M_NEGPRI;
129
+ case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
130
+ /* Enabled for all cases */
131
+ return false;
132
+ case 0:
133
+ default:
134
+ /*
135
+ * HFNMIENA set and ENABLE clear is UNPREDICTABLE, but
136
+ * we warned about that in armv7m_nvic.c when the guest set it.
137
+ */
138
+ return true;
139
+ }
140
+ }
141
+
142
+ hcr_el2 = arm_hcr_el2_eff(env);
143
+
144
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
145
+ /* HCR.DC means HCR.VM behaves as 1 */
146
+ return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
147
+ }
148
+
149
+ if (hcr_el2 & HCR_TGE) {
150
+ /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
151
+ if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
152
+ return true;
153
+ }
154
+ }
155
+
156
+ if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
157
+ /* HCR.DC means SCTLR_EL1.M behaves as 0 */
158
+ return true;
159
+ }
160
+
161
+ return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
162
+}
163
+
164
static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
165
{
166
/*
167
--
168
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-27-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper.c | 26 --------------------------
9
target/arm/ptw.c | 24 ++++++++++++++++++++++++
10
2 files changed, 24 insertions(+), 26 deletions(-)
11
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
15
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
17
};
18
}
19
20
-#ifndef CONFIG_USER_ONLY
21
-hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
22
- MemTxAttrs *attrs)
23
-{
24
- ARMCPU *cpu = ARM_CPU(cs);
25
- CPUARMState *env = &cpu->env;
26
- hwaddr phys_addr;
27
- target_ulong page_size;
28
- int prot;
29
- bool ret;
30
- ARMMMUFaultInfo fi = {};
31
- ARMMMUIdx mmu_idx = arm_mmu_idx(env);
32
- ARMCacheAttrs cacheattrs = {};
33
-
34
- *attrs = (MemTxAttrs) {};
35
-
36
- ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &phys_addr,
37
- attrs, &prot, &page_size, &fi, &cacheattrs);
38
-
39
- if (ret) {
40
- return -1;
41
- }
42
- return phys_addr;
43
-}
44
-#endif
45
-
46
/* Note that signed overflow is undefined in C. The following routines are
47
careful to use unsigned types where modulo arithmetic is required.
48
Failure to do so _will_ break on newer gcc. */
49
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/ptw.c
52
+++ b/target/arm/ptw.c
53
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
54
phys_ptr, prot, page_size, fi);
55
}
56
}
57
+
58
+hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
59
+ MemTxAttrs *attrs)
60
+{
61
+ ARMCPU *cpu = ARM_CPU(cs);
62
+ CPUARMState *env = &cpu->env;
63
+ hwaddr phys_addr;
64
+ target_ulong page_size;
65
+ int prot;
66
+ bool ret;
67
+ ARMMMUFaultInfo fi = {};
68
+ ARMMMUIdx mmu_idx = arm_mmu_idx(env);
69
+ ARMCacheAttrs cacheattrs = {};
70
+
71
+ *attrs = (MemTxAttrs) {};
72
+
73
+ ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &phys_addr,
74
+ attrs, &prot, &page_size, &fi, &cacheattrs);
75
+
76
+ if (ret) {
77
+ return -1;
78
+ }
79
+ return phys_addr;
80
+}
81
--
82
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20220604040607.269301-28-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper.c | 32 --------------------------------
9
target/arm/ptw.c | 28 ++++++++++++++++++++++++++++
10
2 files changed, 28 insertions(+), 32 deletions(-)
11
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
15
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
17
return env->cp15.sctlr_el[el];
18
}
19
20
-#ifndef CONFIG_USER_ONLY
21
-/* Convert a possible stage1+2 MMU index into the appropriate
22
- * stage 1 MMU index
23
- */
24
-ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
25
-{
26
- switch (mmu_idx) {
27
- case ARMMMUIdx_SE10_0:
28
- return ARMMMUIdx_Stage1_SE0;
29
- case ARMMMUIdx_SE10_1:
30
- return ARMMMUIdx_Stage1_SE1;
31
- case ARMMMUIdx_SE10_1_PAN:
32
- return ARMMMUIdx_Stage1_SE1_PAN;
33
- case ARMMMUIdx_E10_0:
34
- return ARMMMUIdx_Stage1_E0;
35
- case ARMMMUIdx_E10_1:
36
- return ARMMMUIdx_Stage1_E1;
37
- case ARMMMUIdx_E10_1_PAN:
38
- return ARMMMUIdx_Stage1_E1_PAN;
39
- default:
40
- return mmu_idx;
41
- }
42
-}
43
-#endif /* !CONFIG_USER_ONLY */
44
-
45
int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
46
{
47
if (regime_has_2_ranges(mmu_idx)) {
48
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
49
return arm_mmu_idx_el(env, arm_current_el(env));
50
}
51
52
-#ifndef CONFIG_USER_ONLY
53
-ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
54
-{
55
- return stage_1_mmu_idx(arm_mmu_idx(env));
56
-}
57
-#endif
58
-
59
static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
60
ARMMMUIdx mmu_idx,
61
CPUARMTBFlags flags)
62
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/ptw.c
65
+++ b/target/arm/ptw.c
66
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
67
return pamax_map[parange];
68
}
69
70
+/*
71
+ * Convert a possible stage1+2 MMU index into the appropriate stage 1 MMU index
72
+ */
73
+ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
74
+{
75
+ switch (mmu_idx) {
76
+ case ARMMMUIdx_SE10_0:
77
+ return ARMMMUIdx_Stage1_SE0;
78
+ case ARMMMUIdx_SE10_1:
79
+ return ARMMMUIdx_Stage1_SE1;
80
+ case ARMMMUIdx_SE10_1_PAN:
81
+ return ARMMMUIdx_Stage1_SE1_PAN;
82
+ case ARMMMUIdx_E10_0:
83
+ return ARMMMUIdx_Stage1_E0;
84
+ case ARMMMUIdx_E10_1:
85
+ return ARMMMUIdx_Stage1_E1;
86
+ case ARMMMUIdx_E10_1_PAN:
87
+ return ARMMMUIdx_Stage1_E1_PAN;
88
+ default:
89
+ return mmu_idx;
90
+ }
91
+}
92
+
93
+ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
94
+{
95
+ return stage_1_mmu_idx(arm_mmu_idx(env));
96
+}
97
+
98
static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
99
{
100
return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
101
--
102
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The use of ARM_CPU to recover env from cs calls
4
object_class_dynamic_cast, which shows up on the profile.
5
This is pointless, because all callers already have env, and
6
the reverse operation, env_cpu, is only pointer arithmetic.
7
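As a rough illustration of why env_cpu is cheap (a sketch with invented
names and layout, not the QEMU definitions): the Arm CPU structure embeds
both the CPUState parent object and the CPUARMState, so recovering the
CPUState from an env pointer is constant pointer arithmetic with no
runtime type check.

    #include <stddef.h>

    typedef struct CPUState { int dummy; } CPUState;
    typedef struct CPUARMState { int dummy; } CPUARMState;

    typedef struct SketchARMCPU {
        CPUState parent_obj;      /* parent object comes first */
        CPUARMState env;
    } SketchARMCPU;

    static inline CPUState *sketch_env_cpu(CPUARMState *env)
    {
        /* container_of(env, SketchARMCPU, env), then take the parent */
        SketchARMCPU *cpu =
            (SketchARMCPU *)((char *)env - offsetof(SketchARMCPU, env));
        return &cpu->parent_obj;
    }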
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220604040607.269301-29-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/ptw.c | 23 +++++++++--------------
14
1 file changed, 9 insertions(+), 14 deletions(-)
15
16
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/ptw.c
19
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
21
}
22
23
/* All loads done in the course of a page table walk go through here. */
24
-static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
25
+static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
26
ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
27
{
28
- ARMCPU *cpu = ARM_CPU(cs);
29
- CPUARMState *env = &cpu->env;
30
+ CPUState *cs = env_cpu(env);
31
MemTxAttrs attrs = {};
32
MemTxResult result = MEMTX_OK;
33
AddressSpace *as;
34
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
35
return 0;
36
}
37
38
-static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
39
+static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
40
ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
41
{
42
- ARMCPU *cpu = ARM_CPU(cs);
43
- CPUARMState *env = &cpu->env;
44
+ CPUState *cs = env_cpu(env);
45
MemTxAttrs attrs = {};
46
MemTxResult result = MEMTX_OK;
47
AddressSpace *as;
48
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
49
target_ulong *page_size,
50
ARMMMUFaultInfo *fi)
51
{
52
- CPUState *cs = env_cpu(env);
53
int level = 1;
54
uint32_t table;
55
uint32_t desc;
56
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
57
fi->type = ARMFault_Translation;
58
goto do_fault;
59
}
60
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
61
+ desc = arm_ldl_ptw(env, table, regime_is_secure(env, mmu_idx),
62
mmu_idx, fi);
63
if (fi->type != ARMFault_None) {
64
goto do_fault;
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
66
/* Fine pagetable. */
67
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
68
}
69
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
70
+ desc = arm_ldl_ptw(env, table, regime_is_secure(env, mmu_idx),
71
mmu_idx, fi);
72
if (fi->type != ARMFault_None) {
73
goto do_fault;
74
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
75
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
76
target_ulong *page_size, ARMMMUFaultInfo *fi)
77
{
78
- CPUState *cs = env_cpu(env);
79
ARMCPU *cpu = env_archcpu(env);
80
int level = 1;
81
uint32_t table;
82
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
83
fi->type = ARMFault_Translation;
84
goto do_fault;
85
}
86
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
87
+ desc = arm_ldl_ptw(env, table, regime_is_secure(env, mmu_idx),
88
mmu_idx, fi);
89
if (fi->type != ARMFault_None) {
90
goto do_fault;
91
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
92
ns = extract32(desc, 3, 1);
93
/* Lookup l2 entry. */
94
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
95
- desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
96
+ desc = arm_ldl_ptw(env, table, regime_is_secure(env, mmu_idx),
97
mmu_idx, fi);
98
if (fi->type != ARMFault_None) {
99
goto do_fault;
100
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
101
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
102
{
103
ARMCPU *cpu = env_archcpu(env);
104
- CPUState *cs = CPU(cpu);
105
/* Read an LPAE long-descriptor translation table. */
106
ARMFaultType fault_type = ARMFault_Translation;
107
uint32_t level;
108
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
109
descaddr |= (address >> (stride * (4 - level))) & indexmask;
110
descaddr &= ~7ULL;
111
nstable = extract32(tableattrs, 4, 1);
112
- descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
113
+ descriptor = arm_ldq_ptw(env, descaddr, !nstable, mmu_idx, fi);
114
if (fi->type != ARMFault_None) {
115
goto do_fault;
116
}
117
--
118
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Add an interface function to extract the digested vector length
4
rather than the raw zcr_el[1] value. This fixes an incorrect
5
return from do_prctl_set_vl where we didn't take into account
6
the set of vector lengths supported by the cpu.
7
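A sketch of what "digested" means here (the helper name and the
representation of the supported set are invented for the example, not
the QEMU API): the requested vector quadword count is clamped down to
the nearest supported VQ. For instance, with supported VQs {1, 2, 4},
a request of vq=3 digests to 2, so the reported length is 32 bytes
rather than the 48 the raw ZCR field alone would suggest.

    #include <stdbool.h>

    /* supported[i] is true if a length of (i + 1) quadwords,
     * i.e. (i + 1) * 16 bytes, is implemented by the cpu. */
    static int sketch_digest_vq(int requested_vq, const bool *supported)
    {
        int vq;
        for (vq = requested_vq; vq > 1; vq--) {
            if (supported[vq - 1]) {
                return vq;
            }
        }
        return 1;    /* the 128-bit length is always implemented */
    }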
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20220607203306.657998-3-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
linux-user/aarch64/target_prctl.h | 20 +++++++++++++-------
14
target/arm/cpu.h | 11 +++++++++++
15
linux-user/aarch64/signal.c | 4 ++--
16
3 files changed, 26 insertions(+), 9 deletions(-)
17
18
diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/aarch64/target_prctl.h
21
+++ b/linux-user/aarch64/target_prctl.h
22
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_get_vl(CPUArchState *env)
23
{
24
ARMCPU *cpu = env_archcpu(env);
25
if (cpu_isar_feature(aa64_sve, cpu)) {
26
- return ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
27
+ return sve_vq(env) * 16;
28
}
29
return -TARGET_EINVAL;
30
}
31
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_set_vl(CPUArchState *env, abi_long arg2)
32
*/
33
if (cpu_isar_feature(aa64_sve, env_archcpu(env))
34
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
35
- ARMCPU *cpu = env_archcpu(env);
36
uint32_t vq, old_vq;
37
38
- old_vq = (env->vfp.zcr_el[1] & 0xf) + 1;
39
- vq = MAX(arg2 / 16, 1);
40
- vq = MIN(vq, cpu->sve_max_vq);
41
+ old_vq = sve_vq(env);
42
43
+ /*
44
+ * Bound the value of arg2, so that we know that it fits into
45
+ * the 4-bit field in ZCR_EL1. Rely on the hflags rebuild to
46
+ * sort out the length supported by the cpu.
47
+ */
48
+ vq = MAX(arg2 / 16, 1);
49
+ vq = MIN(vq, ARM_MAX_VQ);
50
+ env->vfp.zcr_el[1] = vq - 1;
51
+ arm_rebuild_hflags(env);
52
+
53
+ vq = sve_vq(env);
54
if (vq < old_vq) {
55
aarch64_sve_narrow_vq(env, vq);
56
}
57
- env->vfp.zcr_el[1] = vq - 1;
58
- arm_rebuild_hflags(env);
59
return vq * 16;
60
}
61
return -TARGET_EINVAL;
62
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/cpu.h
65
+++ b/target/arm/cpu.h
66
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
67
return EX_TBFLAG_ANY(env->hflags, MMUIDX);
68
}
69
70
+/**
71
+ * sve_vq
72
+ * @env: the cpu context
73
+ *
74
+ * Return the VL cached within env->hflags, in units of quadwords.
75
+ */
76
+static inline int sve_vq(CPUARMState *env)
77
+{
78
+ return EX_TBFLAG_A64(env->hflags, VL) + 1;
79
+}
80
+
81
static inline bool bswap_code(bool sctlr_b)
82
{
83
#ifdef CONFIG_USER_ONLY
84
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/linux-user/aarch64/signal.c
87
+++ b/linux-user/aarch64/signal.c
88
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
89
90
case TARGET_SVE_MAGIC:
91
if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
92
- vq = (env->vfp.zcr_el[1] & 0xf) + 1;
93
+ vq = sve_vq(env);
94
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
95
if (!sve && size == sve_size) {
96
sve = (struct target_sve_context *)ctx;
97
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
98
99
/* SVE state needs saving only if it exists. */
100
if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
101
- vq = (env->vfp.zcr_el[1] & 0xf) + 1;
102
+ vq = sve_vq(env);
103
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
104
sve_ofs = alloc_sigframe_space(sve_size, &layout);
105
}
106
--
107
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We handle this routing in raise_exception. Promoting the value early
4
means that we can't directly compare FPEXC_EL and SVEEXC_EL.
5
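The routing referred to is the late redirection of an EL1-targeted
exception to EL2 when HCR_EL2.TGE is set. A minimal sketch of that idea
(names invented for illustration; this is not the raise_exception code
itself):

    #include <stdbool.h>

    /* Redirect at delivery time, so the helper above can return 1. */
    static int sketch_route_exception_el(int target_el, bool hcr_tge)
    {
        return (target_el == 1 && hcr_tge) ? 2 : target_el;
    }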
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-4-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.c | 3 +--
12
1 file changed, 1 insertion(+), 2 deletions(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
19
/* fall through */
20
case 0:
21
case 2:
22
- /* route_to_el2 */
23
- return hcr_el2 & HCR_TGE ? 2 : 1;
24
+ return 1;
25
}
26
27
/* Check CPACR.FPEN. */
28
--
29
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Instead of checking these bits in fp_exception_el and
4
also in sve_exception_el, document that we must compare
5
the results. The only place where we have not already
6
checked that FP EL is zero is in rebuild_hflags_a64.
7
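Spelled out as a tiny helper (illustrative only, not code added by this
patch), the case where the FP trap is taken instead of the SVE trap is
exactly the comparison quoted in the new comment below:

    #include <stdbool.h>

    /* True when 0 < fp_exception_el < sve_exception_el. */
    static bool sketch_route_via_fp_trap(int fp_el, int sve_el)
    {
        return fp_el != 0 && fp_el < sve_el;
    }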
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20220607203306.657998-5-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/helper.c | 58 +++++++++++++++------------------------------
14
1 file changed, 19 insertions(+), 39 deletions(-)
15
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
19
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
21
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
22
};
23
24
-/* Return the exception level to which exceptions should be taken
25
- * via SVEAccessTrap. If an exception should be routed through
26
- * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
27
- * take care of raising that exception.
28
- * C.f. the ARM pseudocode function CheckSVEEnabled.
29
+/*
30
+ * Return the exception level to which exceptions should be taken
31
+ * via SVEAccessTrap. This excludes the check for whether the exception
32
+ * should be routed through AArch64.AdvSIMDFPAccessTrap. That can easily
33
+ * be found by testing 0 < fp_exception_el < sve_exception_el.
34
+ *
35
+ * C.f. the ARM pseudocode function CheckSVEEnabled. Note that the
36
+ * pseudocode does *not* separate out the FP trap checks, but has them
37
+ * all in one function.
38
*/
39
int sve_exception_el(CPUARMState *env, int el)
40
{
41
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
42
case 2:
43
return 1;
44
}
45
-
46
- /* Check CPACR.FPEN. */
47
- switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN)) {
48
- case 1:
49
- if (el != 0) {
50
- break;
51
- }
52
- /* fall through */
53
- case 0:
54
- case 2:
55
- return 0;
56
- }
57
}
58
59
/*
60
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
61
case 2:
62
return 2;
63
}
64
-
65
- switch (FIELD_EX32(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
66
- case 1:
67
- if (el == 2 || !(hcr_el2 & HCR_TGE)) {
68
- break;
69
- }
70
- /* fall through */
71
- case 0:
72
- case 2:
73
- return 0;
74
- }
75
} else if (arm_is_el2_enabled(env)) {
76
if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
77
return 2;
78
}
79
- if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
80
- return 0;
81
- }
82
}
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
86
87
if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
88
int sve_el = sve_exception_el(env, el);
89
- uint32_t zcr_len;
90
91
/*
92
- * If SVE is disabled, but FP is enabled,
93
- * then the effective len is 0.
94
+ * If either FP or SVE are disabled, translator does not need len.
95
+ * If SVE EL > FP EL, FP exception has precedence, and translator
96
+ * does not need SVE EL. Save potential re-translations by forcing
97
+ * the unneeded data to zero.
98
*/
99
- if (sve_el != 0 && fp_el == 0) {
100
- zcr_len = 0;
101
- } else {
102
- zcr_len = sve_zcr_len_for_el(env, el);
103
+ if (fp_el != 0) {
104
+ if (sve_el > fp_el) {
105
+ sve_el = 0;
106
+ }
107
+ } else if (sve_el == 0) {
108
+ DP_TBFLAG_A64(flags, VL, sve_zcr_len_for_el(env, el));
109
}
110
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
111
- DP_TBFLAG_A64(flags, VL, zcr_len);
112
}
113
114
sctlr = regime_sctlr(env, stage1);
115
--
116
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The ARM pseudocode function NVL uses this predicate now,
4
and I think it's a bit clearer. Simplify the pseudocode
5
condition by noting that IsInHost is always false for EL1.
6
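For reference, the condition being replaced reduces, for EL0/EL1, to
"HCR_EL2.E2H and HCR_EL2.TGE are both in effect". A sketch of that test
with the bit positions written out (the constants and function name are
made up for the example rather than taken from QEMU's headers):

    #include <stdint.h>
    #include <stdbool.h>

    #define SKETCH_HCR_TGE (1ULL << 27)    /* HCR_EL2.TGE */
    #define SKETCH_HCR_E2H (1ULL << 34)    /* HCR_EL2.E2H */

    static bool sketch_el01_is_in_host(uint64_t hcr_el2_eff)
    {
        return (hcr_el2_eff & (SKETCH_HCR_E2H | SKETCH_HCR_TGE))
               == (SKETCH_HCR_E2H | SKETCH_HCR_TGE);
    }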
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220607203306.657998-7-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.c | 3 +--
13
1 file changed, 1 insertion(+), 2 deletions(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
20
ARMCPU *cpu = env_archcpu(env);
21
uint32_t zcr_len = cpu->sve_max_vq - 1;
22
23
- if (el <= 1 &&
24
- (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
25
+ if (el <= 1 && !el_is_in_host(env, el)) {
26
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
27
}
28
if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
29
--
30
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The ARM pseudocode function CheckNormalSVEEnabled uses this
4
predicate now, and I think it's a bit clearer.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.c | 5 ++---
12
1 file changed, 2 insertions(+), 3 deletions(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
19
int sve_exception_el(CPUARMState *env, int el)
20
{
21
#ifndef CONFIG_USER_ONLY
22
- uint64_t hcr_el2 = arm_hcr_el2_eff(env);
23
-
24
- if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
25
+ if (el <= 1 && !el_is_in_host(env, el)) {
26
switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, ZEN)) {
27
case 1:
28
if (el != 0) {
29
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
30
* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE).
31
*/
32
if (el <= 2) {
33
+ uint64_t hcr_el2 = arm_hcr_el2_eff(env);
34
if (hcr_el2 & HCR_E2H) {
35
switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
36
case 1:
37
--
38
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We don't need to constrain the value set in zcr_el[1],
4
because it will be done by sve_zcr_len_for_el.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/cpu.c | 3 +--
12
1 file changed, 1 insertion(+), 2 deletions(-)
13
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.c
17
+++ b/target/arm/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
19
CPACR_EL1, ZEN, 3);
20
/* with reasonable vector length */
21
if (cpu_isar_feature(aa64_sve, cpu)) {
22
- env->vfp.zcr_el[1] =
23
- aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
24
+ env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
25
}
26
/*
27
* Enable 48-bit address space (TODO: take reserved_va into account).
28
--
29
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This function is used only once, and will need modification
4
for Streaming SVE mode.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/internals.h | 11 -----------
12
target/arm/helper.c | 30 +++++++++++-------------------
13
2 files changed, 11 insertions(+), 30 deletions(-)
14
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/internals.h
18
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
20
void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
21
#endif /* CONFIG_TCG */
22
23
-/**
24
- * aarch64_sve_zcr_get_valid_len:
25
- * @cpu: cpu context
26
- * @start_len: maximum len to consider
27
- *
28
- * Return the maximum supported sve vector length <= @start_len.
29
- * Note that both @start_len and the return value are in units
30
- * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
31
- */
32
-uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);
33
-
34
enum arm_fprounding {
35
FPROUNDING_TIEEVEN,
36
FPROUNDING_POSINF,
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
42
return 0;
43
}
44
45
-uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
46
-{
47
- uint32_t end_len;
48
-
49
- start_len = MIN(start_len, ARM_MAX_VQ - 1);
50
- end_len = start_len;
51
-
52
- if (!test_bit(start_len, cpu->sve_vq_map)) {
53
- end_len = find_last_bit(cpu->sve_vq_map, start_len);
54
- assert(end_len < start_len);
55
- }
56
- return end_len;
57
-}
58
-
59
/*
60
* Given that SVE is enabled, return the vector length for EL.
61
*/
62
uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
63
{
64
ARMCPU *cpu = env_archcpu(env);
65
- uint32_t zcr_len = cpu->sve_max_vq - 1;
66
+ uint32_t len = cpu->sve_max_vq - 1;
67
+ uint32_t end_len;
68
69
if (el <= 1 && !el_is_in_host(env, el)) {
70
- zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
71
+ len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
72
}
73
if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
74
- zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
75
+ len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
76
}
77
if (arm_feature(env, ARM_FEATURE_EL3)) {
78
- zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
79
+ len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
80
}
81
82
- return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
83
+ end_len = len;
84
+ if (!test_bit(len, cpu->sve_vq_map)) {
85
+ end_len = find_last_bit(cpu->sve_vq_map, len);
86
+ assert(end_len < len);
87
+ }
88
+ return end_len;
89
}
90
91
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
--
93
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This will be used for both Normal and Streaming SVE, and the value
4
does not necessarily come from ZCR_ELx. While we're at it, emphasize
5
the units in which the value is returned.
6
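Concretely (a worked example, not code from this patch): a value of 3 in
these units means (3 + 1) quadwords, i.e. 4 * 128 = 512 bits, or 64 bytes,
which is the same scale ZCR_ELx.LEN uses.

    /* Illustrative conversions for the "quadwords minus 1" unit. */
    static inline int sketch_vqm1_to_bits(int vqm1)  { return (vqm1 + 1) * 128; }
    static inline int sketch_vqm1_to_bytes(int vqm1) { return (vqm1 + 1) * 16; }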
7
Patch produced by
8
git grep -l sve_zcr_len_for_el | \
9
xargs -n1 sed -i 's/sve_zcr_len_for_el/sve_vqm1_for_el/g'
10
11
and then adding a function comment.
12
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20220607203306.657998-13-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
target/arm/cpu.h | 11 ++++++++++-
19
target/arm/arch_dump.c | 2 +-
20
target/arm/cpu.c | 2 +-
21
target/arm/gdbstub64.c | 2 +-
22
target/arm/helper.c | 12 ++++++------
23
5 files changed, 19 insertions(+), 10 deletions(-)
24
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
29
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env);
30
31
int fp_exception_el(CPUARMState *env, int cur_el);
32
int sve_exception_el(CPUARMState *env, int cur_el);
33
-uint32_t sve_zcr_len_for_el(CPUARMState *env, int el);
34
+
35
+/**
36
+ * sve_vqm1_for_el:
37
+ * @env: CPUARMState
38
+ * @el: exception level
39
+ *
40
+ * Compute the current SVE vector length for @el, in units of
41
+ * Quadwords Minus 1 -- the same scale used for ZCR_ELx.LEN.
42
+ */
43
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el);
44
45
static inline bool is_a64(CPUARMState *env)
46
{
47
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/arch_dump.c
50
+++ b/target/arm/arch_dump.c
51
@@ -XXX,XX +XXX,XX @@ static off_t sve_fpcr_offset(uint32_t vq)
52
53
static uint32_t sve_current_vq(CPUARMState *env)
54
{
55
- return sve_zcr_len_for_el(env, arm_current_el(env)) + 1;
56
+ return sve_vqm1_for_el(env, arm_current_el(env)) + 1;
57
}
58
59
static size_t sve_size_vq(uint32_t vq)
60
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/cpu.c
63
+++ b/target/arm/cpu.c
64
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
65
vfp_get_fpcr(env), vfp_get_fpsr(env));
66
67
if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
68
- int j, zcr_len = sve_zcr_len_for_el(env, el);
69
+ int j, zcr_len = sve_vqm1_for_el(env, el);
70
71
for (i = 0; i <= FFR_PRED_NUM; i++) {
72
bool eol;
73
diff --git a/target/arm/gdbstub64.c b/target/arm/gdbstub64.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/target/arm/gdbstub64.c
76
+++ b/target/arm/gdbstub64.c
77
@@ -XXX,XX +XXX,XX @@ int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg)
78
* We report in Vector Granules (VG) which is 64bit in a Z reg
79
* while the ZCR works in Vector Quads (VQ) which is 128bit chunks.
80
*/
81
- int vq = sve_zcr_len_for_el(env, arm_current_el(env)) + 1;
82
+ int vq = sve_vqm1_for_el(env, arm_current_el(env)) + 1;
83
return gdb_get_reg64(buf, vq * 2);
84
}
85
default:
86
diff --git a/target/arm/helper.c b/target/arm/helper.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/arm/helper.c
89
+++ b/target/arm/helper.c
90
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
91
/*
92
* Given that SVE is enabled, return the vector length for EL.
93
*/
94
-uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
95
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
96
{
97
ARMCPU *cpu = env_archcpu(env);
98
uint32_t len = cpu->sve_max_vq - 1;
99
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
100
uint64_t value)
101
{
102
int cur_el = arm_current_el(env);
103
- int old_len = sve_zcr_len_for_el(env, cur_el);
104
+ int old_len = sve_vqm1_for_el(env, cur_el);
105
int new_len;
106
107
/* Bits other than [3:0] are RAZ/WI. */
108
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
109
* Because we arrived here, we know both FP and SVE are enabled;
110
* otherwise we would have trapped access to the ZCR_ELn register.
111
*/
112
- new_len = sve_zcr_len_for_el(env, cur_el);
113
+ new_len = sve_vqm1_for_el(env, cur_el);
114
if (new_len < old_len) {
115
aarch64_sve_narrow_vq(env, new_len + 1);
116
}
117
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
118
sve_el = 0;
119
}
120
} else if (sve_el == 0) {
121
- DP_TBFLAG_A64(flags, VL, sve_zcr_len_for_el(env, el));
122
+ DP_TBFLAG_A64(flags, VL, sve_vqm1_for_el(env, el));
123
}
124
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
125
}
126
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
127
*/
128
old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
129
old_len = (old_a64 && !sve_exception_el(env, old_el)
130
- ? sve_zcr_len_for_el(env, old_el) : 0);
131
+ ? sve_vqm1_for_el(env, old_el) : 0);
132
new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
133
new_len = (new_a64 && !sve_exception_el(env, new_el)
134
- ? sve_zcr_len_for_el(env, new_el) : 0);
135
+ ? sve_vqm1_for_el(env, new_el) : 0);
136
137
/* When changing vector length, clear inaccessible state. */
138
if (new_len < old_len) {
139
--
140
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Begin creation of sve_ldst_internal.h by moving the primitives
4
that access host and tlb memory.
5
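For illustration, the byte-sized host-memory load primitive generated by
the DO_LD_HOST macro being moved here expands to roughly the following
(hand-expanded; ldub_p and H1 are assumed from the surrounding QEMU code):

    static inline void sve_ld1bb_host(void *vd, intptr_t reg_off, void *host)
    {
        uint8_t val = ldub_p(host);               /* one byte from host memory */
        *(uint8_t *)(vd + H1(reg_off)) = val;     /* into the vector register */
    }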
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220607203306.657998-14-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/sve_ldst_internal.h | 127 +++++++++++++++++++++++++++++++++
12
target/arm/sve_helper.c | 107 +--------------------------
13
2 files changed, 128 insertions(+), 106 deletions(-)
14
create mode 100644 target/arm/sve_ldst_internal.h
15
16
diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
17
new file mode 100644
18
index XXXXXXX..XXXXXXX
19
--- /dev/null
20
+++ b/target/arm/sve_ldst_internal.h
21
@@ -XXX,XX +XXX,XX @@
22
+/*
23
+ * ARM SVE Load/Store Helpers
24
+ *
25
+ * Copyright (c) 2018-2022 Linaro
26
+ *
27
+ * This library is free software; you can redistribute it and/or
28
+ * modify it under the terms of the GNU Lesser General Public
29
+ * License as published by the Free Software Foundation; either
30
+ * version 2.1 of the License, or (at your option) any later version.
31
+ *
32
+ * This library is distributed in the hope that it will be useful,
33
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
34
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
35
+ * Lesser General Public License for more details.
36
+ *
37
+ * You should have received a copy of the GNU Lesser General Public
38
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
39
+ */
40
+
41
+#ifndef TARGET_ARM_SVE_LDST_INTERNAL_H
42
+#define TARGET_ARM_SVE_LDST_INTERNAL_H
43
+
44
+#include "exec/cpu_ldst.h"
45
+
46
+/*
47
+ * Load one element into @vd + @reg_off from @host.
48
+ * The controlling predicate is known to be true.
49
+ */
50
+typedef void sve_ldst1_host_fn(void *vd, intptr_t reg_off, void *host);
51
+
52
+/*
53
+ * Load one element into @vd + @reg_off from (@env, @vaddr, @ra).
54
+ * The controlling predicate is known to be true.
55
+ */
56
+typedef void sve_ldst1_tlb_fn(CPUARMState *env, void *vd, intptr_t reg_off,
57
+ target_ulong vaddr, uintptr_t retaddr);
58
+
59
+/*
60
+ * Generate the above primitives.
61
+ */
62
+
63
+#define DO_LD_HOST(NAME, H, TYPEE, TYPEM, HOST) \
64
+static inline void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
65
+{ TYPEM val = HOST(host); *(TYPEE *)(vd + H(reg_off)) = val; }
66
+
67
+#define DO_ST_HOST(NAME, H, TYPEE, TYPEM, HOST) \
68
+static inline void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
69
+{ TYPEM val = *(TYPEE *)(vd + H(reg_off)); HOST(host, val); }
70
+
71
+#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, TLB) \
72
+static inline void sve_##NAME##_tlb(CPUARMState *env, void *vd, \
73
+ intptr_t reg_off, target_ulong addr, uintptr_t ra) \
74
+{ \
75
+ TYPEM val = TLB(env, useronly_clean_ptr(addr), ra); \
76
+ *(TYPEE *)(vd + H(reg_off)) = val; \
77
+}
78
+
79
+#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB) \
80
+static inline void sve_##NAME##_tlb(CPUARMState *env, void *vd, \
81
+ intptr_t reg_off, target_ulong addr, uintptr_t ra) \
82
+{ \
83
+ TYPEM val = *(TYPEE *)(vd + H(reg_off)); \
84
+ TLB(env, useronly_clean_ptr(addr), val, ra); \
85
+}
86
+
87
+#define DO_LD_PRIM_1(NAME, H, TE, TM) \
88
+ DO_LD_HOST(NAME, H, TE, TM, ldub_p) \
89
+ DO_LD_TLB(NAME, H, TE, TM, cpu_ldub_data_ra)
90
+
91
+DO_LD_PRIM_1(ld1bb, H1, uint8_t, uint8_t)
92
+DO_LD_PRIM_1(ld1bhu, H1_2, uint16_t, uint8_t)
93
+DO_LD_PRIM_1(ld1bhs, H1_2, uint16_t, int8_t)
94
+DO_LD_PRIM_1(ld1bsu, H1_4, uint32_t, uint8_t)
95
+DO_LD_PRIM_1(ld1bss, H1_4, uint32_t, int8_t)
96
+DO_LD_PRIM_1(ld1bdu, H1_8, uint64_t, uint8_t)
97
+DO_LD_PRIM_1(ld1bds, H1_8, uint64_t, int8_t)
98
+
99
+#define DO_ST_PRIM_1(NAME, H, TE, TM) \
100
+ DO_ST_HOST(st1##NAME, H, TE, TM, stb_p) \
101
+ DO_ST_TLB(st1##NAME, H, TE, TM, cpu_stb_data_ra)
102
+
103
+DO_ST_PRIM_1(bb, H1, uint8_t, uint8_t)
104
+DO_ST_PRIM_1(bh, H1_2, uint16_t, uint8_t)
105
+DO_ST_PRIM_1(bs, H1_4, uint32_t, uint8_t)
106
+DO_ST_PRIM_1(bd, H1_8, uint64_t, uint8_t)
107
+
108
+#define DO_LD_PRIM_2(NAME, H, TE, TM, LD) \
109
+ DO_LD_HOST(ld1##NAME##_be, H, TE, TM, LD##_be_p) \
110
+ DO_LD_HOST(ld1##NAME##_le, H, TE, TM, LD##_le_p) \
111
+ DO_LD_TLB(ld1##NAME##_be, H, TE, TM, cpu_##LD##_be_data_ra) \
112
+ DO_LD_TLB(ld1##NAME##_le, H, TE, TM, cpu_##LD##_le_data_ra)
113
+
114
+#define DO_ST_PRIM_2(NAME, H, TE, TM, ST) \
115
+ DO_ST_HOST(st1##NAME##_be, H, TE, TM, ST##_be_p) \
116
+ DO_ST_HOST(st1##NAME##_le, H, TE, TM, ST##_le_p) \
117
+ DO_ST_TLB(st1##NAME##_be, H, TE, TM, cpu_##ST##_be_data_ra) \
118
+ DO_ST_TLB(st1##NAME##_le, H, TE, TM, cpu_##ST##_le_data_ra)
119
+
120
+DO_LD_PRIM_2(hh, H1_2, uint16_t, uint16_t, lduw)
121
+DO_LD_PRIM_2(hsu, H1_4, uint32_t, uint16_t, lduw)
122
+DO_LD_PRIM_2(hss, H1_4, uint32_t, int16_t, lduw)
123
+DO_LD_PRIM_2(hdu, H1_8, uint64_t, uint16_t, lduw)
124
+DO_LD_PRIM_2(hds, H1_8, uint64_t, int16_t, lduw)
125
+
126
+DO_ST_PRIM_2(hh, H1_2, uint16_t, uint16_t, stw)
127
+DO_ST_PRIM_2(hs, H1_4, uint32_t, uint16_t, stw)
128
+DO_ST_PRIM_2(hd, H1_8, uint64_t, uint16_t, stw)
129
+
130
+DO_LD_PRIM_2(ss, H1_4, uint32_t, uint32_t, ldl)
131
+DO_LD_PRIM_2(sdu, H1_8, uint64_t, uint32_t, ldl)
132
+DO_LD_PRIM_2(sds, H1_8, uint64_t, int32_t, ldl)
133
+
134
+DO_ST_PRIM_2(ss, H1_4, uint32_t, uint32_t, stl)
135
+DO_ST_PRIM_2(sd, H1_8, uint64_t, uint32_t, stl)
136
+
137
+DO_LD_PRIM_2(dd, H1_8, uint64_t, uint64_t, ldq)
138
+DO_ST_PRIM_2(dd, H1_8, uint64_t, uint64_t, stq)
139
+
140
+#undef DO_LD_TLB
141
+#undef DO_ST_TLB
142
+#undef DO_LD_HOST
143
+#undef DO_LD_PRIM_1
144
+#undef DO_ST_PRIM_1
145
+#undef DO_LD_PRIM_2
146
+#undef DO_ST_PRIM_2
147
+
148
+#endif /* TARGET_ARM_SVE_LDST_INTERNAL_H */
149
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
150
index XXXXXXX..XXXXXXX 100644
151
--- a/target/arm/sve_helper.c
152
+++ b/target/arm/sve_helper.c
153
@@ -XXX,XX +XXX,XX @@
154
#include "cpu.h"
155
#include "internals.h"
156
#include "exec/exec-all.h"
157
-#include "exec/cpu_ldst.h"
158
#include "exec/helper-proto.h"
159
#include "tcg/tcg-gvec-desc.h"
160
#include "fpu/softfloat.h"
161
#include "tcg/tcg.h"
162
#include "vec_internal.h"
163
+#include "sve_ldst_internal.h"
164
165
166
/* Return a value for NZCV as per the ARM PredTest pseudofunction.
167
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
168
* Load contiguous data, protected by a governing predicate.
169
*/
170
171
-/*
172
- * Load one element into @vd + @reg_off from @host.
173
- * The controlling predicate is known to be true.
174
- */
175
-typedef void sve_ldst1_host_fn(void *vd, intptr_t reg_off, void *host);
176
-
177
-/*
178
- * Load one element into @vd + @reg_off from (@env, @vaddr, @ra).
179
- * The controlling predicate is known to be true.
180
- */
181
-typedef void sve_ldst1_tlb_fn(CPUARMState *env, void *vd, intptr_t reg_off,
182
- target_ulong vaddr, uintptr_t retaddr);
183
-
184
-/*
185
- * Generate the above primitives.
186
- */
187
-
188
-#define DO_LD_HOST(NAME, H, TYPEE, TYPEM, HOST) \
189
-static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
190
-{ \
191
- TYPEM val = HOST(host); \
192
- *(TYPEE *)(vd + H(reg_off)) = val; \
193
-}
194
-
195
-#define DO_ST_HOST(NAME, H, TYPEE, TYPEM, HOST) \
196
-static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
197
-{ HOST(host, (TYPEM)*(TYPEE *)(vd + H(reg_off))); }
198
-
199
-#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, TLB) \
200
-static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
201
- target_ulong addr, uintptr_t ra) \
202
-{ \
203
- *(TYPEE *)(vd + H(reg_off)) = \
204
- (TYPEM)TLB(env, useronly_clean_ptr(addr), ra); \
205
-}
206
-
207
-#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB) \
208
-static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
209
- target_ulong addr, uintptr_t ra) \
210
-{ \
211
- TLB(env, useronly_clean_ptr(addr), \
212
- (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
213
-}
214
-
215
-#define DO_LD_PRIM_1(NAME, H, TE, TM) \
216
- DO_LD_HOST(NAME, H, TE, TM, ldub_p) \
217
- DO_LD_TLB(NAME, H, TE, TM, cpu_ldub_data_ra)
218
-
219
-DO_LD_PRIM_1(ld1bb, H1, uint8_t, uint8_t)
220
-DO_LD_PRIM_1(ld1bhu, H1_2, uint16_t, uint8_t)
221
-DO_LD_PRIM_1(ld1bhs, H1_2, uint16_t, int8_t)
222
-DO_LD_PRIM_1(ld1bsu, H1_4, uint32_t, uint8_t)
223
-DO_LD_PRIM_1(ld1bss, H1_4, uint32_t, int8_t)
224
-DO_LD_PRIM_1(ld1bdu, H1_8, uint64_t, uint8_t)
225
-DO_LD_PRIM_1(ld1bds, H1_8, uint64_t, int8_t)
226
-
227
-#define DO_ST_PRIM_1(NAME, H, TE, TM) \
228
- DO_ST_HOST(st1##NAME, H, TE, TM, stb_p) \
229
- DO_ST_TLB(st1##NAME, H, TE, TM, cpu_stb_data_ra)
230
-
231
-DO_ST_PRIM_1(bb, H1, uint8_t, uint8_t)
232
-DO_ST_PRIM_1(bh, H1_2, uint16_t, uint8_t)
233
-DO_ST_PRIM_1(bs, H1_4, uint32_t, uint8_t)
234
-DO_ST_PRIM_1(bd, H1_8, uint64_t, uint8_t)
235
-
236
-#define DO_LD_PRIM_2(NAME, H, TE, TM, LD) \
237
- DO_LD_HOST(ld1##NAME##_be, H, TE, TM, LD##_be_p) \
238
- DO_LD_HOST(ld1##NAME##_le, H, TE, TM, LD##_le_p) \
239
- DO_LD_TLB(ld1##NAME##_be, H, TE, TM, cpu_##LD##_be_data_ra) \
240
- DO_LD_TLB(ld1##NAME##_le, H, TE, TM, cpu_##LD##_le_data_ra)
241
-
242
-#define DO_ST_PRIM_2(NAME, H, TE, TM, ST) \
243
- DO_ST_HOST(st1##NAME##_be, H, TE, TM, ST##_be_p) \
244
- DO_ST_HOST(st1##NAME##_le, H, TE, TM, ST##_le_p) \
245
- DO_ST_TLB(st1##NAME##_be, H, TE, TM, cpu_##ST##_be_data_ra) \
246
- DO_ST_TLB(st1##NAME##_le, H, TE, TM, cpu_##ST##_le_data_ra)
247
-
248
-DO_LD_PRIM_2(hh, H1_2, uint16_t, uint16_t, lduw)
249
-DO_LD_PRIM_2(hsu, H1_4, uint32_t, uint16_t, lduw)
250
-DO_LD_PRIM_2(hss, H1_4, uint32_t, int16_t, lduw)
251
-DO_LD_PRIM_2(hdu, H1_8, uint64_t, uint16_t, lduw)
252
-DO_LD_PRIM_2(hds, H1_8, uint64_t, int16_t, lduw)
253
-
254
-DO_ST_PRIM_2(hh, H1_2, uint16_t, uint16_t, stw)
255
-DO_ST_PRIM_2(hs, H1_4, uint32_t, uint16_t, stw)
256
-DO_ST_PRIM_2(hd, H1_8, uint64_t, uint16_t, stw)
257
-
258
-DO_LD_PRIM_2(ss, H1_4, uint32_t, uint32_t, ldl)
259
-DO_LD_PRIM_2(sdu, H1_8, uint64_t, uint32_t, ldl)
260
-DO_LD_PRIM_2(sds, H1_8, uint64_t, int32_t, ldl)
261
-
262
-DO_ST_PRIM_2(ss, H1_4, uint32_t, uint32_t, stl)
263
-DO_ST_PRIM_2(sd, H1_8, uint64_t, uint32_t, stl)
264
-
265
-DO_LD_PRIM_2(dd, H1_8, uint64_t, uint64_t, ldq)
266
-DO_ST_PRIM_2(dd, H1_8, uint64_t, uint64_t, stq)
267
-
268
-#undef DO_LD_TLB
269
-#undef DO_ST_TLB
270
-#undef DO_LD_HOST
271
-#undef DO_LD_PRIM_1
272
-#undef DO_ST_PRIM_1
273
-#undef DO_LD_PRIM_2
274
-#undef DO_ST_PRIM_2
275
-
276
/*
277
* Skip through a sequence of inactive elements in the guarding predicate @vg,
278
* beginning at @reg_off bounded by @reg_max. Return the offset of the active
279
--
280
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Export all of the support functions for performing bulk
fault analysis on a set of elements at contiguous addresses
controlled by a predicate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
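As a rough illustration only (this sketch is not taken from the series;
the function name, element size and fault mode are invented for the
example): a caller outside sve_helper.c, such as the SME load/store
helpers, would be expected to chain the newly exported functions roughly
like this for a predicated contiguous load of 4-byte elements:

  /* Sketch under assumed parameters; not code from this patch. */
  static void example_cont_load(CPUARMState *env, void *vd, uint64_t *vg,
                                target_ulong addr, intptr_t reg_max,
                                uintptr_t retaddr)
  {
      SVEContLdSt info;

      /* Find the active elements; an all-false predicate means no access. */
      if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, 2, 4)) {
          return;
      }
      /* Resolve both pages now, taking any faults up front (FAULT_ALL). */
      sve_cont_ldst_pages(&info, FAULT_ALL, env, addr, MMU_DATA_LOAD, retaddr);
      /* Report watchpoints that cover the active elements. */
      sve_cont_ldst_watchpoints(&info, env, vg, addr, 4, 4,
                                BP_MEM_READ, retaddr);
      /* ... then copy the active elements into vd via info.page[].host ... */
  }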
target/arm/sve_ldst_internal.h | 94 ++++++++++++++++++++++++++++++++++
target/arm/sve_helper.c | 87 ++++++-------------------------
2 files changed, 111 insertions(+), 70 deletions(-)

diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_ldst_internal.h
+++ b/target/arm/sve_ldst_internal.h
@@ -XXX,XX +XXX,XX @@ DO_ST_PRIM_2(dd, H1_8, uint64_t, uint64_t, stq)
#undef DO_LD_PRIM_2
#undef DO_ST_PRIM_2

+/*
+ * Resolve the guest virtual address to info->host and info->flags.
+ * If @nofault, return false if the page is invalid, otherwise
+ * exit via page fault exception.
+ */
+
+typedef struct {
+    void *host;
+    int flags;
+    MemTxAttrs attrs;
+} SVEHostPage;
+
+bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
+                    target_ulong addr, int mem_off, MMUAccessType access_type,
+                    int mmu_idx, uintptr_t retaddr);
+
+/*
+ * Analyse contiguous data, protected by a governing predicate.
+ */
+
+typedef enum {
+    FAULT_NO,
+    FAULT_FIRST,
+    FAULT_ALL,
+} SVEContFault;
+
+typedef struct {
+    /*
+     * First and last element wholly contained within the two pages.
+     * mem_off_first[0] and reg_off_first[0] are always set >= 0.
+     * reg_off_last[0] may be < 0 if the first element crosses pages.
+     * All of mem_off_first[1], reg_off_first[1] and reg_off_last[1]
+     * are set >= 0 only if there are complete elements on a second page.
+     *
+     * The reg_off_* offsets are relative to the internal vector register.
+     * The mem_off_first offset is relative to the memory address; the
+     * two offsets are different when a load operation extends, a store
+     * operation truncates, or for multi-register operations.
+     */
+    int16_t mem_off_first[2];
+    int16_t reg_off_first[2];
+    int16_t reg_off_last[2];
+
+    /*
+     * One element that is misaligned and spans both pages,
+     * or -1 if there is no such active element.
+     */
+    int16_t mem_off_split;
+    int16_t reg_off_split;
+
+    /*
+     * The byte offset at which the entire operation crosses a page boundary.
+     * Set >= 0 if and only if the entire operation spans two pages.
+     */
+    int16_t page_split;
+
+    /* TLB data for the two pages. */
+    SVEHostPage page[2];
+} SVEContLdSt;
+
+/*
+ * Find first active element on each page, and a loose bound for the
+ * final element on each page. Identify any single element that spans
+ * the page boundary. Return true if there are any active elements.
+ */
+bool sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr, uint64_t *vg,
+                            intptr_t reg_max, int esz, int msize);
+
+/*
+ * Resolve the guest virtual addresses to info->page[].
+ * Control the generation of page faults with @fault. Return false if
+ * there is no work to do, which can only happen with @fault == FAULT_NO.
+ */
+bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
+                         CPUARMState *env, target_ulong addr,
+                         MMUAccessType access_type, uintptr_t retaddr);
+
+#ifdef CONFIG_USER_ONLY
+static inline void
+sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env, uint64_t *vg,
+                          target_ulong addr, int esize, int msize,
+                          int wp_access, uintptr_t retaddr)
+{ }
+#else
+void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
+                               uint64_t *vg, target_ulong addr,
+                               int esize, int msize, int wp_access,
+                               uintptr_t retaddr);
+#endif
+
+void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env, uint64_t *vg,
+                             target_ulong addr, int esize, int msize,
+                             uint32_t mtedesc, uintptr_t ra);
+
#endif /* TARGET_ARM_SVE_LDST_INTERNAL_H */
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static intptr_t find_next_active(uint64_t *vg, intptr_t reg_off,
 * exit via page fault exception.
 */

-typedef struct {
-    void *host;
-    int flags;
-    MemTxAttrs attrs;
-} SVEHostPage;
-
-static bool sve_probe_page(SVEHostPage *info, bool nofault,
-                           CPUARMState *env, target_ulong addr,
-                           int mem_off, MMUAccessType access_type,
-                           int mmu_idx, uintptr_t retaddr)
+bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
+                    target_ulong addr, int mem_off, MMUAccessType access_type,
+                    int mmu_idx, uintptr_t retaddr)
{
    int flags;

@@ -XXX,XX +XXX,XX @@ static bool sve_probe_page(SVEHostPage *info, bool nofault,
    return true;
}

-
-/*
- * Analyse contiguous data, protected by a governing predicate.
- */
-
-typedef enum {
-    FAULT_NO,
-    FAULT_FIRST,
-    FAULT_ALL,
-} SVEContFault;
-
-typedef struct {
-    /*
-     * First and last element wholly contained within the two pages.
-     * mem_off_first[0] and reg_off_first[0] are always set >= 0.
-     * reg_off_last[0] may be < 0 if the first element crosses pages.
-     * All of mem_off_first[1], reg_off_first[1] and reg_off_last[1]
-     * are set >= 0 only if there are complete elements on a second page.
-     *
-     * The reg_off_* offsets are relative to the internal vector register.
-     * The mem_off_first offset is relative to the memory address; the
-     * two offsets are different when a load operation extends, a store
-     * operation truncates, or for multi-register operations.
-     */
-    int16_t mem_off_first[2];
-    int16_t reg_off_first[2];
-    int16_t reg_off_last[2];
-
-    /*
-     * One element that is misaligned and spans both pages,
-     * or -1 if there is no such active element.
-     */
-    int16_t mem_off_split;
-    int16_t reg_off_split;
-
-    /*
-     * The byte offset at which the entire operation crosses a page boundary.
-     * Set >= 0 if and only if the entire operation spans two pages.
-     */
-    int16_t page_split;
-
-    /* TLB data for the two pages. */
-    SVEHostPage page[2];
-} SVEContLdSt;
-
/*
 * Find first active element on each page, and a loose bound for the
 * final element on each page. Identify any single element that spans
 * the page boundary. Return true if there are any active elements.
 */
-static bool sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr,
-                                   uint64_t *vg, intptr_t reg_max,
-                                   int esz, int msize)
+bool sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr, uint64_t *vg,
+                            intptr_t reg_max, int esz, int msize)
{
    const int esize = 1 << esz;
    const uint64_t pg_mask = pred_esz_masks[esz];
@@ -XXX,XX +XXX,XX @@ static bool sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr,
 * Control the generation of page faults with @fault. Return false if
 * there is no work to do, which can only happen with @fault == FAULT_NO.
 */
-static bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
-                                CPUARMState *env, target_ulong addr,
-                                MMUAccessType access_type, uintptr_t retaddr)
+bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
+                         CPUARMState *env, target_ulong addr,
+                         MMUAccessType access_type, uintptr_t retaddr)
{
    int mmu_idx = cpu_mmu_index(env, false);
    int mem_off = info->mem_off_first[0];
@@ -XXX,XX +XXX,XX @@ static bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
    return have_work;
}

-static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
-                                      uint64_t *vg, target_ulong addr,
-                                      int esize, int msize, int wp_access,
-                                      uintptr_t retaddr)
-{
#ifndef CONFIG_USER_ONLY
+void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
+                               uint64_t *vg, target_ulong addr,
+                               int esize, int msize, int wp_access,
+                               uintptr_t retaddr)
+{
    intptr_t mem_off, reg_off, reg_last;
    int flags0 = info->page[0].flags;
    int flags1 = info->page[1].flags;
@@ -XXX,XX +XXX,XX @@ static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
        } while (reg_off & 63);
    } while (reg_off <= reg_last);
}
-#endif
}
+#endif

-static void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
-                                    uint64_t *vg, target_ulong addr, int esize,
-                                    int msize, uint32_t mtedesc, uintptr_t ra)
+void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
+                             uint64_t *vg, target_ulong addr, int esize,
+                             int msize, uint32_t mtedesc, uintptr_t ra)
{
    intptr_t mem_off, reg_off, reg_last;

--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Put the inline function near the array declaration.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/vec_internal.h | 8 +++++++-
target/arm/sve_helper.c | 9 ---------
2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -XXX,XX +XXX,XX @@
#define H8(x) (x)
#define H1_8(x) (x)

-/* Data for expanding active predicate bits to bytes, for byte elements. */
+/*
+ * Expand active predicate bits to bytes, for byte elements.
+ */
extern const uint64_t expand_pred_b_data[256];
+static inline uint64_t expand_pred_b(uint8_t byte)
+{
+    return expand_pred_b_data[byte];
+}

static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
{
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
    return flags;
}

-/*
- * Expand active predicate bits to bytes, for byte elements.
- * (The data table itself is in vec_helper.c as MVE also needs it.)
- */
-static inline uint64_t expand_pred_b(uint8_t byte)
-{
-    return expand_pred_b_data[byte];
-}
-
/* Similarly for half-word elements.
 * for (i = 0; i < 256; ++i) {
 *     unsigned long m = 0;
--
2.25.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Use the function instead of the array directly.

Because the function performs its own masking, via the uint8_t
parameter, we need to do nothing extra within the users: the bits
above the first 2 (_uh) or 4 (_uw) will be discarded by assignment
to the local bmask variables, and of course _uq uses the entire
uint64_t result.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
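To make the truncation argument concrete, an illustration (not part of
the patch; the variable names are invented for the example): for the _uh
case only predicate bits 0 and 1 can affect the result, because the
64-bit expansion is narrowed to 16 bits on assignment:

  /* Illustration only: both expressions yield the same 16-bit mask. */
  uint16_t bmask_old = expand_pred_b_data[mask & 3]; /* explicit masking */
  uint16_t bmask_new = expand_pred_b(mask);          /* mask truncated to
                                                        uint8_t by the call,
                                                        result narrowed to
                                                        16 bits on assignment */
  /* Bytes 0 and 1 of the expansion depend only on predicate bits 0 and 1,
   * and only those bytes survive the narrowing, so bmask_old == bmask_new.
   */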
target/arm/mve_helper.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mergemask_sb(int8_t *d, int8_t r, uint16_t mask)

static void mergemask_uh(uint16_t *d, uint16_t r, uint16_t mask)
{
-    uint16_t bmask = expand_pred_b_data[mask & 3];
+    uint16_t bmask = expand_pred_b(mask);
    *d = (*d & ~bmask) | (r & bmask);
}

@@ -XXX,XX +XXX,XX @@ static void mergemask_sh(int16_t *d, int16_t r, uint16_t mask)

static void mergemask_uw(uint32_t *d, uint32_t r, uint16_t mask)
{
-    uint32_t bmask = expand_pred_b_data[mask & 0xf];
+    uint32_t bmask = expand_pred_b(mask);
    *d = (*d & ~bmask) | (r & bmask);
}

@@ -XXX,XX +XXX,XX @@ static void mergemask_sw(int32_t *d, int32_t r, uint16_t mask)

static void mergemask_uq(uint64_t *d, uint64_t r, uint16_t mask)
{
-    uint64_t bmask = expand_pred_b_data[mask & 0xff];
+    uint64_t bmask = expand_pred_b(mask);
    *d = (*d & ~bmask) | (r & bmask);
}

--
2.25.1