Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

thanks
-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst |   1 +
 docs/system/arm/nuvoton.rst   |   4 +-
 target/arm/cpu-param.h        |   2 +-
 target/arm/cpu.h              | 181 ++++++++------
 target/arm/internals.h        | 150 ++++++-----
 hw/arm/boot.c                 |   4 +
 target/arm/helper.c           | 332 ++++++++++++++----------
 target/arm/kvm.c              |   4 +-
 target/arm/m_helper.c         |  29 ++-
 target/arm/ptw.c              | 570 ++++++++++++++++++++++--------------
 target/arm/tlb_helper.c       |   9 +-
 target/arm/translate-a64.c    |   8 -
 target/arm/translate.c        |   9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)

Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).
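
(Editor's aside, not part of the patch: the pattern generalizes to any
ioctl that can fail with EINTR. Below is a minimal sketch of the idiom
as a helper; the function name is invented for illustration. glibc
users could reach for TEMP_FAILURE_RETRY() instead, but open-coding
the loop, as the patch does, stays portable beyond glibc.)

    #include <errno.h>
    #include <sys/ioctl.h>

    /* Retry an ioctl for as long as it fails with EINTR. */
    static int ioctl_retry_eintr(int fd, unsigned long req, unsigned long arg)
    {
        int ret;
        do {
            ret = ioctl(fd, req, arg);
        } while (ret == -1 && errno == EINTR);
        return ret;
    }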

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
     if (max_vm_pa_size < 0) {
         max_vm_pa_size = 0;
     }
-    vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    do {
+        vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    } while (vmfd == -1 && errno == EINTR);
     if (vmfd < 0) {
         goto err;
     }
--
2.25.1

From: Jerome Forissier <jerome.forissier@linaro.org>

Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.
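
(Editor's aside, not part of the patch: a minimal standalone program
showing the pitfall the widening avoids. SCR_NS_32 and SCR_NS_64 are
made-up names standing in for the old and new constant styles.)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCR_NS_32 (1U << 0)    /* old style: 32-bit constant */
    #define SCR_NS_64 (1ULL << 0)  /* new style: 64-bit constant */

    int main(void)
    {
        uint64_t valid_mask = UINT64_MAX;

        /* ~ happens in 32 bits, then zero-extends: bits 32..63 are lost */
        printf("%016" PRIx64 "\n", valid_mask & ~SCR_NS_32); /* 00000000fffffffe */

        /* ~ happens in 64 bits: only bit 0 is cleared, as intended */
        printf("%016" PRIx64 "\n", valid_mask & ~SCR_NS_64); /* fffffffffffffffe */
        return 0;
    }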

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 54 ++++++++++++++++++++++-----------------------
 target/arm/helper.c |  5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 
 #define HPFAR_NS      (1ULL << 63)
 
-#define SCR_NS                (1U << 0)
-#define SCR_IRQ               (1U << 1)
-#define SCR_FIQ               (1U << 2)
-#define SCR_EA                (1U << 3)
-#define SCR_FW                (1U << 4)
-#define SCR_AW                (1U << 5)
-#define SCR_NET               (1U << 6)
-#define SCR_SMD               (1U << 7)
-#define SCR_HCE               (1U << 8)
-#define SCR_SIF               (1U << 9)
-#define SCR_RW                (1U << 10)
-#define SCR_ST                (1U << 11)
-#define SCR_TWI               (1U << 12)
-#define SCR_TWE               (1U << 13)
-#define SCR_TLOR              (1U << 14)
-#define SCR_TERR              (1U << 15)
-#define SCR_APK               (1U << 16)
-#define SCR_API               (1U << 17)
-#define SCR_EEL2              (1U << 18)
-#define SCR_EASE              (1U << 19)
-#define SCR_NMEA              (1U << 20)
-#define SCR_FIEN              (1U << 21)
-#define SCR_ENSCXT            (1U << 25)
-#define SCR_ATA               (1U << 26)
-#define SCR_FGTEN             (1U << 27)
-#define SCR_ECVEN             (1U << 28)
-#define SCR_TWEDEN            (1U << 29)
+#define SCR_NS                (1ULL << 0)
+#define SCR_IRQ               (1ULL << 1)
+#define SCR_FIQ               (1ULL << 2)
+#define SCR_EA                (1ULL << 3)
+#define SCR_FW                (1ULL << 4)
+#define SCR_AW                (1ULL << 5)
+#define SCR_NET               (1ULL << 6)
+#define SCR_SMD               (1ULL << 7)
+#define SCR_HCE               (1ULL << 8)
+#define SCR_SIF               (1ULL << 9)
+#define SCR_RW                (1ULL << 10)
+#define SCR_ST                (1ULL << 11)
+#define SCR_TWI               (1ULL << 12)
+#define SCR_TWE               (1ULL << 13)
+#define SCR_TLOR              (1ULL << 14)
+#define SCR_TERR              (1ULL << 15)
+#define SCR_APK               (1ULL << 16)
+#define SCR_API               (1ULL << 17)
+#define SCR_EEL2              (1ULL << 18)
+#define SCR_EASE              (1ULL << 19)
+#define SCR_NMEA              (1ULL << 20)
+#define SCR_FIEN              (1ULL << 21)
+#define SCR_ENSCXT            (1ULL << 25)
+#define SCR_ATA               (1ULL << 26)
+#define SCR_FGTEN             (1ULL << 27)
+#define SCR_ECVEN             (1ULL << 28)
+#define SCR_TWEDEN            (1ULL << 29)
 #define SCR_TWEDEL            MAKE_64BIT_MASK(30, 4)
 #define SCR_TME               (1ULL << 34)
 #define SCR_AMVOFFEN          (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     /* Begin with base v8.0 state.  */
-    uint32_t valid_mask = 0x3fff;
+    uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_doublefault, cpu)) {
             valid_mask |= SCR_EASE | SCR_NMEA;
         }
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            valid_mask |= SCR_ENTP2;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1

From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options
 
 The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
 a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :
 
-   https://openpower.xyz/
+   https://jenkins.openbmc.org/
 
 The firmware image should be attached as an MTD drive. Example :
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s1_prot;
         int ret;
-        bool ipa_secure;
+        bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         ipa = result->phys;
         ipa_secure = result->attrs.secure;
-        if (arm_is_secure_below_el3(env)) {
-            if (ipa_secure) {
-                result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
+        if (is_secure) {
+            /* Select TCR based on the NS bit from the S1 walk. */
+            s2walk_secure = !(ipa_secure
+                              ? env->cp15.vstcr_el2 & VSTCR_SW
+                              : env->cp15.vtcr_el2 & VTCR_NSW);
         } else {
             assert(!ipa_secure);
+            s2walk_secure = false;
         }
 
-        s2_mmu_idx = (result->attrs.secure
+        s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                                 result->cacheattrs);
 
         /* Check if IPA translates to secure or non-secure PA space. */
-        if (arm_is_secure_below_el3(env)) {
+        if (is_secure) {
             if (ipa_secure) {
                 result->attrs.secure =
                     !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.
[An equivalence-check sketch follows this patch.]

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
                                                 result->cacheattrs);
 
-        /* Check if IPA translates to secure or non-secure PA space. */
-        if (is_secure) {
-            if (ipa_secure) {
-                result->attrs.secure =
-                    !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
-            } else {
-                result->attrs.secure =
-                    !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
-                      || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
-            }
-        }
+        /*
+         * Check if IPA translates to secure or non-secure PA space.
+         * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+         */
+        result->attrs.secure =
+            (is_secure
+             && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+             && (ipa_secure
+                 || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
         return 0;
     } else {
         /*
--
2.25.1
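
(Editor's aside on the patch above, not part of the series: treating
each masked register test as a boolean, the old and new formulations
can be checked for equivalence exhaustively. The variable names below
are ad-hoc stand-ins, and the loop skips the combination excluded by
the assert(!ipa_secure) in the surrounding code.)

    #include <assert.h>
    #include <stdbool.h>

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            bool is_secure = i & 1;   /* security state of the S1 walk */
            bool ipa_secure = i & 2;  /* NS bit from the S1 walk */
            bool sa_sw = i & 4;       /* vstcr_el2 & (VSTCR_SA | VSTCR_SW) */
            bool nsa_nsw = i & 8;     /* vtcr_el2 & (VTCR_NSA | VTCR_NSW) */

            if (!is_secure && ipa_secure) {
                continue;  /* excluded: non-secure input implies non-secure IPA */
            }

            /* old code: attrs.secure only written when is_secure */
            bool old_secure = false;
            if (is_secure) {
                old_secure = ipa_secure ? !sa_sw : !(nsa_nsw || sa_sw);
            }

            /* new code: one unconditional assignment */
            bool new_secure = is_secure && !sa_sw && (ipa_secure || !nsa_nsw);

            assert(old_secure == new_secure);
        }
        return 0;
    }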

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));
 
 /* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         GetPhysAddrResult s2 = {};
         int ret;
 
-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
-                                 &s2, fi);
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+                                 *is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  */
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     /* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
      * remain non-secure. We implement this by just ORing in the NSTable/NS
      * bits at each step.
      */
-    tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+    tableattrs = is_secure ? 0 : (1 << 4);
     for (;;) {
         uint64_t descriptor;
         bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             memset(result, 0, sizeof(*result));
 
             ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                     is_el0, result, fi);
+                                     s2walk_secure, is_el0, result, fi);
             fi->s2addr = ipa;
 
             /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  result, fi);
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                  is_secure, false, result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, address, access_type, mmu_idx,
                                 is_secure, result, fi);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
-                                          : ARMMMUIdx_Stage2;
+        !regime_translation_disabled(env, s2_mmu_idx)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 }
 
 /* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                        bool is_secure)
 {
     uint64_t hcr_el2;
 
     if (arm_feature(env, ARM_FEATURE_M)) {
-        switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+        switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
             /* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
 
     if (hcr_el2 & HCR_TGE) {
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && regime_el(env, mmu_idx) == 1) {
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
 
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx)) {
+        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
     uint32_t base;
     bool is_user = regime_is_user(env, mmu_idx);
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled. */
         result->phys = address;
         result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     result->page_size = TARGET_PAGE_SIZE;
     result->prot = 0;
 
-    if (regime_translation_disabled(env, mmu_idx) ||
+    if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
         /*
          * MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
      * are done in arm_v7m_load_vector(), which always does a direct
      * read using address_space_ldl(), rather than going via this function.
      */
-    if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+    if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
         hit = true;
     } else if (m_is_ppb_region(env, address)) {
         hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                             result, fi);
 
         /* If S1 fails or S2 is disabled, return early. */
-        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+        if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+                                               is_secure)) {
             return ret;
         }
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
     /* Definitely a real MMU, not an MPU */
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         uint64_t hcr;
         uint8_t memattr;
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++
 target/arm/ptw.c       | 44 ++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
     ARMCacheAttrs cacheattrs;
 } GetPhysAddrResult;
 
+/**
+ * get_phys_addr_with_secure: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * we honour the short vs long DFSR format differences.
+ *  * the WnR bit is never set (the caller must do this).
+ *  * for PSMAv5 based systems we don't bother to return a full FSR format
+ *    value.
+ */
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type,
+                               ARMMMUIdx mmu_idx, bool is_secure,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+    __attribute__((nonnull));
+
+/**
+ * get_phys_addr: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Similarly, but use the security regime of @mmu_idx.
+ */
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     return ret;
 }
 
-/**
- * get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- *  * we honour the short vs long DFSR format differences.
- *  * the WnR bit is never set (the caller must do this).
- *  * for PSMAv5 based systems we don't bother to return a full FSR format
- *    value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @result: set on translation success.
- * @fi: set to fault info if the translation fails
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
-                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    bool is_secure = regime_is_secure(env, mmu_idx);
 
     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
 
-        ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
-                            result, fi);
+        ret = get_phys_addr_with_secure(env, address, access_type,
+                                        s1_mmu_idx, is_secure, result, fi);
 
         /* If S1 fails or S2 is disabled, return early. */
         if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 }
 
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+{
+    return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
+                                     regime_is_secure(env, mmu_idx),
+                                     result, fi);
+}
+
 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                          MemTxAttrs *attrs)
 {
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
     return true;
 }
 
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
                                uint32_t addr, uint16_t *insn)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
     ARMMMUFaultInfo fi = {};
     MemTxResult txres;
 
-    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
-                        regime_is_secure(env, mmu_idx), &sattrs);
+    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
     if (!sattrs.nsc || sattrs.ns) {
         /*
          * This must be the second half of the insn, and it straddles a
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
     /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
     mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
 
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
         goto gen_invep;
     }
 
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
         return false;
     }
 
--
2.25.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate_iommu().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
---
 exec.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
  * @is_write: whether the translation operation is for write
  * @is_mmio: whether this can be MMIO, set true if it can
  * @target_as: the address space targeted by the IOMMU
+ * @attrs: transaction attributes
  *
  * This function is called from RCU critical section. It is the common
  * part of flatview_do_translate and address_space_translate_cached.
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
                                                          hwaddr *page_mask_out,
                                                          bool is_write,
                                                          bool is_mmio,
-                                                         AddressSpace **target_as)
+                                                         AddressSpace **target_as,
+                                                         MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     hwaddr page_mask = (hwaddr)-1;
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
         return address_space_translate_iommu(iommu_mr, xlat,
                                              plen_out, page_mask_out,
                                              is_write, is_mmio,
-                                             target_as);
+                                             target_as, attrs);
     }
     if (page_mask_out) {
         /* Not behind an IOMMU, use default page size. */
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
     section = address_space_translate_iommu(iommu_mr, xlat, plen,
                                             NULL, is_write, true,
-                                            &target_as);
+                                            &target_as, attrs);
     return section.mr;
 }

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use env->v7m.secure directly,
as per arm_mmu_idx_el.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 2 ++
 target/arm/helper.c    | 4 ++++
 target/arm/translate.c | 3 +--
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
 FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1)      /* Not cached. */
 /* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
 FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1)        /* Not cached. */
+/* Set if in secure mode */
+FIELD(TBFLAG_M32, SECURE, 6, 1)

 /*
  * Bit usage when in AArch64 state
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
         DP_TBFLAG_M32(flags, STACKCHECK, 1);
     }

+    if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
+        DP_TBFLAG_M32(flags, SECURE, 1);
+    }
+
     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
 }

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     dc->vfp_enabled = 1;
     dc->be_data = MO_TE;
     dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
-    dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
-        regime_is_secure(env, dc->mmu_idx);
+    dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
     dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
     dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
     dc->v7m_new_fp_ctxt_needed =
--
2.25.1

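The set/extract pattern used by this patch is worth seeing end to end. A condensed sketch, assembled from the hunks above (not a complete function, and not itself part of the patch):

    /* Producer: rebuild_hflags_m32() deposits the bit whenever hflags
     * are rebuilt, so CPUARMState need not be consulted at translate time.
     */
    if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
        DP_TBFLAG_M32(flags, SECURE, 1);
    }

    /* Consumer: arm_tr_init_disas_context() just extracts the cached bit. */
    dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
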
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
---
 include/exec/exec-all.h   | 5 +++--
 accel/tcg/translate-all.c | 2 +-
 exec.c                    | 2 +-
 target/xtensa/op_helper.c | 3 ++-
 4 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   hwaddr paddr, int prot,
                   int mmu_idx, target_ulong size);
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
 void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
                  uintptr_t retaddr);
 #else
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                        uint16_t idxmap)
 {
 }
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
+                                           MemTxAttrs attrs)
 {
 }
 #endif
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
 }

 #if !defined(CONFIG_USER_ONLY)
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
 {
     ram_addr_t ram_addr;
     MemoryRegion *mr;
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
     if (phys != -1) {
         /* Locks grabbed by tb_invalidate_phys_addr */
         tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
-                                phys | (pc & ~TARGET_PAGE_MASK));
+                                phys | (pc & ~TARGET_PAGE_MASK), attrs);
     }
 }
 #endif
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/op_helper.c
+++ b/target/xtensa/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
     int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
                                        &paddr, &page_size, &access);
     if (ret == 0) {
-        tb_invalidate_phys_addr(&address_space_memory, paddr);
+        tb_invalidate_phys_addr(&address_space_memory, paddr,
+                                MEMTXATTRS_UNSPECIFIED);
     }
 }
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 42 ----------------------------------------
 target/arm/ptw.c       | 44 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 44 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     }
 }

-/* Return true if this address translation regime is secure */
-static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    switch (mmu_idx) {
-    case ARMMMUIdx_E10_0:
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_E20_0:
-    case ARMMMUIdx_E20_2:
-    case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_Stage1_E0:
-    case ARMMMUIdx_Stage1_E1:
-    case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_E2:
-    case ARMMMUIdx_Stage2:
-    case ARMMMUIdx_MPrivNegPri:
-    case ARMMMUIdx_MUserNegPri:
-    case ARMMMUIdx_MPriv:
-    case ARMMMUIdx_MUser:
-        return false;
-    case ARMMMUIdx_SE3:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
-    case ARMMMUIdx_SE2:
-    case ARMMMUIdx_Stage2_S:
-    case ARMMMUIdx_MSPrivNegPri:
-    case ARMMMUIdx_MSUserNegPri:
-    case ARMMMUIdx_MSPriv:
-    case ARMMMUIdx_MSUser:
-        return true;
-    default:
-        g_assert_not_reached();
-    }
-}
-
 static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
+    bool is_secure;
+
+    switch (mmu_idx) {
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_E2:
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_MPrivNegPri:
+    case ARMMMUIdx_MUserNegPri:
+    case ARMMMUIdx_MPriv:
+    case ARMMMUIdx_MUser:
+        is_secure = false;
+        break;
+    case ARMMMUIdx_SE3:
+    case ARMMMUIdx_SE10_0:
+    case ARMMMUIdx_SE10_1:
+    case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
+    case ARMMMUIdx_Stage1_SE0:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
+    case ARMMMUIdx_SE2:
+    case ARMMMUIdx_Stage2_S:
+    case ARMMMUIdx_MSPrivNegPri:
+    case ARMMMUIdx_MSUserNegPri:
+    case ARMMMUIdx_MSPriv:
+    case ARMMMUIdx_MSUser:
+        is_secure = true;
+        break;
+    default:
+        g_assert_not_reached();
+    }
     return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
-                                     regime_is_secure(env, mmu_idx),
-                                     result, fi);
+                                     is_secure, result, fi);
 }

 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
--
2.25.1

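The shape of the resulting entry point may be easier to read flattened out. A condensed sketch of get_phys_addr() after this patch (abridged from the hunks above; most case labels elided):

    bool get_phys_addr(CPUARMState *env, target_ulong address,
                       MMUAccessType access_type, ARMMMUIdx mmu_idx,
                       GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
    {
        bool is_secure;

        switch (mmu_idx) {
        case ARMMMUIdx_E10_0:          /* ... and the other NS regimes */
            is_secure = false;
            break;
        case ARMMMUIdx_SE3:            /* ... and the other S regimes */
            is_secure = true;
            break;
        default:
            g_assert_not_reached();
        }
        return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
                                         is_secure, result, fi);
    }
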
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_extend_translation().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
---
 exec.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

 static hwaddr
 flatview_extend_translation(FlatView *fv, hwaddr addr,
-                            hwaddr target_len,
-                            MemoryRegion *mr, hwaddr base, hwaddr len,
-                            bool is_write)
+                            hwaddr target_len,
+                            MemoryRegion *mr, hwaddr base, hwaddr len,
+                            bool is_write, MemTxAttrs attrs)
 {
     hwaddr done = 0;
     hwaddr xlat;
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,

     memory_region_ref(mr);
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
-                                        l, is_write);
+                                        l, is_write, attrs);
     ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
     rcu_read_unlock();

@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
     mr = cache->mrs.mr;
     memory_region_ref(mr);
     if (memory_access_is_direct(mr, is_write)) {
+        /* We don't care about the memory attributes here as we're only
+         * doing this if we found actual RAM, which behaves the same
+         * regardless of attributes; so UNSPECIFIED is fine.
+         */
         l = flatview_extend_translation(cache->fv, addr, len, mr,
-                                        cache->xlat, l, is_write);
+                                        cache->xlat, l, is_write,
+                                        MEMTXATTRS_UNSPECIFIED);
         cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
     } else {
         cache->ptr = NULL;
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,

 #ifdef CONFIG_TCG
 static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx)
+                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                             bool is_secure)
 {
     bool ret;
     uint64_t par64;
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
     ARMMMUFaultInfo fi = {};
     GetPhysAddrResult res = {};

-    ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi);
+    ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
+                                    is_secure, &res, &fi);

     /*
      * ATS operations only do S1 or S1+S2 translations, so we never
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE3;
+            secure = true;
             break;
         case 2:
             g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE10_0;
+            secure = true;
             break;
         case 2:
             g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     case 4:
         /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
         mmu_idx = ARMMMUIdx_E10_1;
+        secure = false;
         break;
     case 6:
         /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
         mmu_idx = ARMMMUIdx_E10_0;
+        secure = false;
         break;
     default:
         g_assert_not_reached();
     }

-    par64 = do_ats_write(env, value, access_type, mmu_idx);
+    par64 = do_ats_write(env, value, access_type, mmu_idx, secure);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t par64;

-    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
+    /* There is no SecureEL2 for AArch32. */
+    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         break;
     case 6: /* AT S1E3R, AT S1E3W */
         mmu_idx = ARMMMUIdx_SE3;
+        secure = true;
         break;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         g_assert_not_reached();
     }

-    env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
+    env->cp15.par_el[1] = do_ats_write(env, value, access_type,
+                                       mmu_idx, secure);
 #else
     /* Handled by hardware accelerator. */
     g_assert_not_reached();
--
2.25.1

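After this patch every AT operation resolves both a translation regime and a security state before calling into the walker. A condensed sketch of the call pattern (abridged from the hunks above, using the AT S1E3R/S1E3W case):

    /* ats_write64(), case 6: AT S1E3R, AT S1E3W */
    mmu_idx = ARMMMUIdx_SE3;
    secure = true;
    ...
    env->cp15.par_el[1] = do_ats_write(env, value, access_type,
                                       mmu_idx, secure);
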
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
---
 include/exec/memory.h     |  4 +++-
 accel/tcg/translate-all.c |  2 +-
 exec.c                    | 14 +++++++++-----
 hw/vfio/common.c          |  3 ++-
 memory_ldst.inc.c         | 18 +++++++++---------
 target/riscv/helper.c     |  2 +-
 6 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
  * #MemoryRegion.
  * @len: pointer to length
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
 MemoryRegion *flatview_translate(FlatView *fv,
                                  hwaddr addr, hwaddr *xlat,
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,

 static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     hwaddr addr, hwaddr *xlat,
-                                                    hwaddr *len, bool is_write)
+                                                    hwaddr *len, bool is_write,
+                                                    MemTxAttrs attrs)
 {
     return flatview_translate(address_space_to_flatview(as),
                               addr, xlat, len, is_write);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
     hwaddr l = 1;

     rcu_read_lock();
-    mr = address_space_translate(as, addr, &addr, &l, false);
+    mr = address_space_translate(as, addr, &addr, &l, false, attrs);
     if (!(memory_region_is_ram(mr)
           || memory_region_is_romd(mr))) {
         rcu_read_unlock();
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
     rcu_read_lock();
     while (len > 0) {
         l = len;
-        mr = address_space_translate(as, addr, &addr1, &l, true);
+        mr = address_space_translate(as, addr, &addr1, &l, true,
+                                     MEMTXATTRS_UNSPECIFIED);

         if (!(memory_region_is_ram(mr) ||
               memory_region_is_romd(mr))) {
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
  */
 static inline MemoryRegion *address_space_translate_cached(
     MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
-    hwaddr *plen, bool is_write)
+    hwaddr *plen, bool is_write, MemTxAttrs attrs)
 {
     MemoryRegionSection section;
     MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_read_continue(cache->fv,
                            addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                            addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_write_continue(cache->fv,
                             addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                             addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)

     rcu_read_lock();
     mr = address_space_translate(&address_space_memory,
-                                 phys_addr, &phys_addr, &l, false);
+                                 phys_addr, &phys_addr, &l, false,
+                                 MEMTXATTRS_UNSPECIFIED);

     res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
     rcu_read_unlock();
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
      */
     mr = address_space_translate(&address_space_memory,
                                  iotlb->translated_addr,
-                                 &xlat, &len, writable);
+                                 &xlat, &len, writable,
+                                 MEMTXATTRS_UNSPECIFIED);
     if (!memory_region_is_ram(mr)) {
         error_report("iommu map to non memory area %"HWADDR_PRIx"",
                      xlat);
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 4 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 8 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (!IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 2 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
         r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 2 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 8 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

diff --git a/target/riscv/helper.c b/target/riscv/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.c
+++ b/target/riscv/helper.c
@@ -XXX,XX +XXX,XX @@ restart:
         MemoryRegion *mr;
         hwaddr l = sizeof(target_ulong), addr1;
         mr = address_space_translate(cs->as, pte_addr,
-                                     &addr1, &l, false);
+                                     &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
         if (memory_access_is_direct(mr, true)) {
             target_ulong *pte_pa =
                 qemu_map_ram_ptr(mr->ram_block, addr1);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h     |   2 +-
 target/arm/cpu.h           |  72 +++++++------------
 target/arm/internals.h     |  31 +-------
 target/arm/helper.c        | 144 +++++++++++++------------------------
 target/arm/ptw.c           |  25 ++-----
 target/arm/translate-a64.c |   8 ---
 target/arm/translate.c     |   6 +-
 7 files changed, 85 insertions(+), 203 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif

-#define NB_MMU_MODES 15
+#define NB_MMU_MODES 8

 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *     table over and over.
  *  6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
  *     Never (PAN) bit within PSTATE.
+ *  7. we fold together the secure and non-secure regimes for A-profile,
+ *     because there are no banked system registers for aarch64, so the
+ *     process of switching between secure and non-secure is
+ *     already heavyweight.
  *
  * This gives us the following list of cases:
  *
- * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
- * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
- * NS EL1 EL1&0 stage 1+2 +PAN
- * NS EL0 EL2&0
- * NS EL2 EL2&0
- * NS EL2 EL2&0 +PAN
- * NS EL2 (aka NS PL2)
- * S EL0 EL1&0 (aka S PL0)
- * S EL1 EL1&0 (not used if EL3 is 32 bit)
- * S EL1 EL1&0 +PAN
- * S EL3 (aka S PL1)
+ * EL0 EL1&0 stage 1+2 (aka NS PL0)
+ * EL1 EL1&0 stage 1+2 (aka NS PL1)
+ * EL1 EL1&0 stage 1+2 +PAN
+ * EL0 EL2&0
+ * EL2 EL2&0
+ * EL2 EL2&0 +PAN
+ * EL2 (aka NS PL2)
+ * EL3 (aka S PL1)
  *
- * for a total of 11 different mmu_idx.
+ * for a total of 8 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
- * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
- * NS EL2 if we ever model a Cortex-R52).
+ * as A profile. They only need to distinguish EL0 and EL1 (and
+ * EL2 if we ever model a Cortex-R52).
  *
  * M profile CPUs are rather different as they do not have a true MMU.
  * They have the following different MMU indexes:
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
 #define ARM_MMU_IDX_M     0x40 /* M profile */

-/* Meanings of the bits for A profile mmu idx values */
-#define ARM_MMU_IDX_A_NS  0x8
-
 /* Meanings of the bits for M profile mmu idx values */
 #define ARM_MMU_IDX_M_PRIV   0x1
 #define ARM_MMU_IDX_M_NEGPRI 0x2
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     /*
      * A-profile.
      */
-    ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
-
-    ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
-    ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,

     /*
      * These are not allocated TLBs and are used only for AT system
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
     /*
      * Not allocated a TLB: used only for second stage of an S12 page
      * table walk, or for descriptor loads during first stage of an S1
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
      * then various TLB flush insns which currently are no-ops or flush
      * only stage 1 MMU indexes will need to change to flush stage 2.
      */
-    ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,

     /*
      * M-profile.
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E2),
     TO_CORE_BIT(E20_2),
     TO_CORE_BIT(E20_2_PAN),
-    TO_CORE_BIT(SE10_0),
-    TO_CORE_BIT(SE20_0),
-    TO_CORE_BIT(SE10_1),
-    TO_CORE_BIT(SE20_2),
-    TO_CORE_BIT(SE10_1_PAN),
-    TO_CORE_BIT(SE20_2_PAN),
-    TO_CORE_BIT(SE2),
-    TO_CORE_BIT(SE3),
+    TO_CORE_BIT(E3),

     TO_CORE_BIT(MUser),
     TO_CORE_BIT(MPriv),
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_Stage2_S:
-    case ARMMMUIdx_SE2:
     case ARMMMUIdx_E2:
         return 2;
-    case ARMMMUIdx_SE3:
+    case ARMMMUIdx_E3:
         return 3;
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_Stage1_SE0:
-        return arm_el_is_aa64(env, 3) ? 1 : 3;
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_E10_0:
     case ARMMMUIdx_Stage1_E0:
+        return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
-    case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_MPrivNegPri:
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
         return true;
     default:
         return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     /* Begin with base v8.0 state.  */
     uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);
+    uint64_t changed;

     /*
      * Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)

     /* Clear all-context RES0 bits.  */
     value &= valid_mask;
-    raw_write(env, ri, value);
+    changed = env->cp15.scr_el3 ^ value;
+    env->cp15.scr_el3 = value;
+
+    /*
+     * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
+     * we must invalidate all TLBs below EL3.
+     */
+    if (changed & SCR_NS) {
+        tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
+                                           ARMMMUIdxBit_E20_0 |
+                                           ARMMMUIdxBit_E10_1 |
+                                           ARMMMUIdxBit_E20_2 |
+                                           ARMMMUIdxBit_E10_1_PAN |
+                                           ARMMMUIdxBit_E20_2_PAN |
+                                           ARMMMUIdxBit_E2));
+    }
 }

 static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYP;
     default:
         return GTIMER_PHYS;
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYPVIRT;
     default:
         return GTIMER_VIRT;
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_SE3;
+            mmu_idx = ARMMMUIdx_E3;
             secure = true;
             break;
         case 2:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
             /* fall through */
         case 1:
             if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
-                mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
-                           : ARMMMUIdx_Stage1_E1_PAN);
+                mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
             } else {
-                mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
+                mmu_idx = ARMMMUIdx_Stage1_E1;
             }
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL0: ATS1CUR, ATS1CUW */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_SE10_0;
+            mmu_idx = ARMMMUIdx_E10_0;
             secure = true;
             break;
         case 2:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
             mmu_idx = ARMMMUIdx_Stage1_E0;
             break;
         case 1:
-            mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
+            mmu_idx = ARMMMUIdx_Stage1_E0;
             break;
         default:
             g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         switch (ri->opc1) {
         case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
             if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
-                mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
-                           : ARMMMUIdx_Stage1_E1_PAN);
+                mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
             } else {
-                mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
+                mmu_idx = ARMMMUIdx_Stage1_E1;
             }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
-            mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
+            mmu_idx = ARMMMUIdx_E2;
             break;
         case 6: /* AT S1E3R, AT S1E3W */
-            mmu_idx = ARMMMUIdx_SE3;
+            mmu_idx = ARMMMUIdx_E3;
             secure = true;
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         }
         break;
     case 2: /* AT S1E0R, AT S1E0W */
-        mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
+        mmu_idx = ARMMMUIdx_Stage1_E0;
         break;
     case 4: /* AT S12E1R, AT S12E1W */
-        mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
+        mmu_idx = ARMMMUIdx_E10_1;
         break;
     case 6: /* AT S12E0R, AT S12E0W */
-        mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
+        mmu_idx = ARMMMUIdx_E10_0;
         break;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
         uint16_t mask = ARMMMUIdxBit_E20_2 |
                         ARMMMUIdxBit_E20_2_PAN |
                         ARMMMUIdxBit_E20_0;
-
-        if (arm_is_secure_below_el3(env)) {
-            mask >>= ARM_MMU_IDX_A_NS;
-        }
-
         tlb_flush_by_mmuidx(env_cpu(env), mask);
     }
     raw_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
         uint16_t mask = ARMMMUIdxBit_E10_1 |
                         ARMMMUIdxBit_E10_1_PAN |
                         ARMMMUIdxBit_E10_0;
-
-        if (arm_is_secure_below_el3(env)) {
-            mask >>= ARM_MMU_IDX_A_NS;
-        }
-
         tlb_flush_by_mmuidx(cs, mask);
     raw_write(env, ri, value);
 }
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
                ARMMMUIdxBit_E10_1_PAN |
                ARMMMUIdxBit_E10_0;
     }
-
-    if (arm_is_secure_below_el3(env)) {
-        mask >>= ARM_MMU_IDX_A_NS;
-    }
-
     return mask;
 }

@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
         mmu_idx = ARMMMUIdx_E10_0;
     }

-    if (arm_is_secure_below_el3(env)) {
-        mmu_idx &= ~ARM_MMU_IDX_A_NS;
-    }
-
     return tlbbits_for_regime(env, mmu_idx, addr);
 }

@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
      * stage 2 translations, whereas most other scopes only invalidate
      * stage 1 translations.
      */
-    if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE10_1 |
-               ARMMMUIdxBit_SE10_1_PAN |
-               ARMMMUIdxBit_SE10_0;
-    } else {
-        return ARMMMUIdxBit_E10_1 |
-               ARMMMUIdxBit_E10_1_PAN |
-               ARMMMUIdxBit_E10_0;
-    }
+    return (ARMMMUIdxBit_E10_1 |
+            ARMMMUIdxBit_E10_1_PAN |
+            ARMMMUIdxBit_E10_0);
 }

 static int e2_tlbmask(CPUARMState *env)
 {
-    if (arm_is_secure_below_el3(env)) {
-        return ARMMMUIdxBit_SE20_0 |
-               ARMMMUIdxBit_SE20_2 |
-               ARMMMUIdxBit_SE20_2_PAN |
-               ARMMMUIdxBit_SE2;
-    } else {
-        return ARMMMUIdxBit_E20_0 |
-               ARMMMUIdxBit_E20_2 |
-               ARMMMUIdxBit_E20_2_PAN |
-               ARMMMUIdxBit_E2;
-    }
+    return (ARMMMUIdxBit_E20_0 |
+            ARMMMUIdxBit_E20_2 |
+            ARMMMUIdxBit_E20_2_PAN |
+            ARMMMUIdxBit_E2);
 }

 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
     ARMCPU *cpu = env_archcpu(env);
     CPUState *cs = CPU(cpu);

-    tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
+    tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
 }

 static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     CPUState *cs = env_cpu(env);

-    tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
+    tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
 }

 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);

-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
+    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
 }

 static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     CPUState *cs = env_cpu(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
-    bool secure = arm_is_secure_below_el3(env);
-    int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
-    int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
-                                  pageaddr);
+    int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);

-    tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
+    tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
+                                                  ARMMMUIdxBit_E2, bits);
 }

 static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     CPUState *cs = env_cpu(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
-    int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
+    int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);

     tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                                  ARMMMUIdxBit_SE3, bits);
+                                                  ARMMMUIdxBit_E3, bits);
 }

 #ifdef TARGET_AARCH64
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,

 static int vae2_tlbmask(CPUARMState *env)
 {
-    return (arm_is_secure_below_el3(env)
-            ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
+    return ARMMMUIdxBit_E2;
 }

 static void tlbi_aa64_rvae2_write(CPUARMState *env,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env,
      * flush-last-level-only.
      */

-    do_rvae_write(env, value, ARMMMUIdxBit_SE3,
-                  tlb_force_broadcast(env));
+    do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
 }

 static void tlbi_aa64_rvae3is_write(CPUARMState *env,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
      * flush-last-level-only or inner/outer specific flushes.
      */

-    do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
+    do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
 }
 #endif

@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
     /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
     if (el == 0) {
         ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
-        el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
-             ? 2 : 1;
+        el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
     }
     return env->cp15.sctlr_el[el];
 }
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E20_0:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE20_0:
         return 0;
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
         return 1;
     case ARMMMUIdx_E2:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE2:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         return 2;
-    case ARMMMUIdx_SE3:
+    case ARMMMUIdx_E3:
         return 3;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         }
         break;
     case 3:
-        return ARMMMUIdx_SE3;
+        return ARMMMUIdx_E3;
     default:
         g_assert_not_reached();
     }

-    if (arm_is_secure_below_el3(env)) {
-        idx &= ~ARM_MMU_IDX_A_NS;
-    }
-
     return idx;
 }

@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     switch (mmu_idx) {
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
         /* TODO: ARMv8.3-NV */
         DP_TBFLAG_A64(flags, UNPRIV, 1);
         break;
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
         /*
          * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
          * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
 ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
-    case ARMMMUIdx_SE10_0:
-        return ARMMMUIdx_Stage1_SE0;
-    case ARMMMUIdx_SE10_1:
-        return ARMMMUIdx_Stage1_SE1;
-    case ARMMMUIdx_SE10_1_PAN:
-        return ARMMMUIdx_Stage1_SE1_PAN;
     case ARMMMUIdx_E10_0:
         return ARMMMUIdx_Stage1_E0;
     case ARMMMUIdx_E10_1:
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
 static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
-    case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_E20_0:
-    case ARMMMUIdx_SE20_0:
     case ARMMMUIdx_Stage1_E0:
-    case ARMMMUIdx_Stage1_SE0:
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MSUser:
     case ARMMMUIdx_MUserNegPri:
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,

         s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
-        is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
+        is_el0 = mmu_idx == ARMMMUIdx_E10_0;

         /*
          * S1 is done, now do S2 translation.
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E2:
+        is_secure = arm_is_secure_below_el3(env);
+        break;
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_MPrivNegPri:
     case ARMMMUIdx_MUserNegPri:
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     case ARMMMUIdx_MUser:
         is_secure = false;
         break;
-    case ARMMMUIdx_SE3:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-    case ARMMMUIdx_SE20_0:
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
-    case ARMMMUIdx_Stage1_SE0:
-    case ARMMMUIdx_Stage1_SE1:
-    case ARMMMUIdx_Stage1_SE1_PAN:
-    case ARMMMUIdx_SE2:
+    case ARMMMUIdx_E3:
     case ARMMMUIdx_Stage2_S:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
     case ARMMMUIdx_E20_2_PAN:
         useridx = ARMMMUIdx_E20_0;
         break;
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-        useridx = ARMMMUIdx_SE10_0;
-        break;
-    case ARMMMUIdx_SE20_2:
-    case ARMMMUIdx_SE20_2_PAN:
-        useridx = ARMMMUIdx_SE20_0;
-        break;
     default:
         g_assert_not_reached();
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
      * otherwise, access as if at PL0.
      */
     switch (s->mmu_idx) {
+    case ARMMMUIdx_E3:
     case ARMMMUIdx_E2:        /* this one is UNPREDICTABLE */
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
         return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
-    case ARMMMUIdx_SE3:
-    case ARMMMUIdx_SE10_0:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-        return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
    case ARMMMUIdx_MUser:
     case ARMMMUIdx_MPriv:
         return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
--
2.25.1

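The key invalidation introduced above is compact enough to restate on its own. A condensed sketch of the new scr_write() tail (abridged from the hunk above): once secure and non-secure translations share mmu indexes, flipping SCR_EL3.NS changes what those indexes mean, so every TLB below EL3 must be flushed:

    uint64_t changed = env->cp15.scr_el3 ^ value;
    env->cp15.scr_el3 = value;
    if (changed & SCR_NS) {
        tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
                                           ARMMMUIdxBit_E20_0 |
                                           ARMMMUIdxBit_E10_1 |
                                           ARMMMUIdxBit_E20_2 |
                                           ARMMMUIdxBit_E10_1_PAN |
                                           ARMMMUIdxBit_E20_2_PAN |
                                           ARMMMUIdxBit_E2));
    }
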
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_map().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
---
 include/exec/memory.h   | 3 ++-
 include/sysemu/dma.h    | 3 ++-
 exec.c                  | 6 ++++--
 target/ppc/mmu-hash64.c | 3 ++-
 4 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
  * @addr: address within that address space
  * @plen: pointer to length of buffer; updated on return
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
 void *address_space_map(AddressSpace *as, hwaddr addr,
-                        hwaddr *plen, bool is_write);
+                        hwaddr *plen, bool is_write, MemTxAttrs attrs);

 /* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
  *
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
     hwaddr xlen = *len;
     void *p;

-    p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
+    p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
+                          MEMTXATTRS_UNSPECIFIED);
     *len = xlen;
     return p;
 }
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
 void *address_space_map(AddressSpace *as,
                         hwaddr addr,
                         hwaddr *plen,
-                        bool is_write)
+                        bool is_write,
+                        MemTxAttrs attrs)
 {
     hwaddr len = *plen;
     hwaddr l, xlat;
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
                               hwaddr *plen,
                               int is_write)
 {
-    return address_space_map(&address_space_memory, addr, plen, is_write);
+    return address_space_map(&address_space_memory, addr, plen, is_write,
+                             MEMTXATTRS_UNSPECIFIED);
 }

 void cpu_physical_memory_unmap(void *buffer, hwaddr len,
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
         return NULL;
     }

-    hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
+    hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
+                              MEMTXATTRS_UNSPECIFIED);
     if (plen < (n * HASH_PTE_SIZE_64)) {
         hw_error("%s: Unable to map all requested HPTEs\n", __func__);
     }
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,

     hcr_el2 = arm_hcr_el2_eff(env);

-    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Stage2_S:
         /* HCR.DC means HCR.VM behaves as 1 */
         return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
-    }

-    if (hcr_el2 & HCR_TGE) {
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!is_secure && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && (hcr_el2 & HCR_TGE)) {
             return true;
         }
-    }
+        break;

-    if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_E1:
+    case ARMMMUIdx_Stage1_E1_PAN:
         /* HCR.DC means SCTLR_EL1.M behaves as 0 */
-        return true;
+        if (hcr_el2 & HCR_DC) {
+            return true;
+        }
+        break;
+
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+    case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_E2:
+    case ARMMMUIdx_E3:
+        break;
+
+    default:
+        g_assert_not_reached();
     }

     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
--
2.25.1

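Flattened out, the switch makes the per-regime policy explicit. A condensed sketch of regime_translation_disabled() after this patch (abridged from the hunk above; some case labels elided):

    switch (mmu_idx) {
    case ARMMMUIdx_Stage2:
    case ARMMMUIdx_Stage2_S:
        /* HCR.DC means HCR.VM behaves as 1 */
        return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
    case ARMMMUIdx_E10_0:          /* ... */
        /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
        if (!is_secure && (hcr_el2 & HCR_TGE)) {
            return true;
        }
        break;
    case ARMMMUIdx_Stage1_E0:      /* ... */
        /* HCR.DC means SCTLR_EL1.M behaves as 0 */
        if (hcr_el2 & HCR_DC) {
            return true;
        }
        break;
    /* the EL2&0, EL2 and EL3 regimes fall through to the SCTLR_M test */
    }
    return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
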
From: Shannon Zhao <zhaoshenglong@huawei.com>

kvm_irqchip_create, called by kvm_init, will call kvm_init_irq_routing to
initialize the global capability variables. If we also call
kvm_init_irq_routing in the GIC realize function, the previously
allocated memory will leak.

Fix this by deleting the unnecessary call.

Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527750994-14360-1-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic_kvm.c   | 1 -
 hw/intc/arm_gicv3_kvm.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)

     if (kvm_has_gsi_routing()) {
         /* set up irq routing */
-        kvm_init_irq_routing(kvm_state);
         for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
             kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
         }
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)

     if (kvm_has_gsi_routing()) {
         /* set up irq routing */
-        kvm_init_irq_routing(kvm_state);
         for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
             kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
         }
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
-        /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!is_secure && (hcr_el2 & HCR_TGE)) {
+        /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
+        if (hcr_el2 & HCR_TGE) {
             return true;
         }
         break;
--
2.25.1

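The semantic change is easiest to see as a before/after pair (condensed from the hunk above):

    /* before: TGE only forced the stage 1 MMU off for non-secure EL1&0 */
    if (!is_secure && (hcr_el2 & HCR_TGE)) {
        return true;
    }

    /* after: with Secure EL2, TGE applies whatever the security state */
    if (hcr_el2 & HCR_TGE) {
        return true;
    }
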
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.

Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.

This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.

Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->cp15.cpacr_el1 = value;
 }

+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* Call cpacr_write() so that we reset with the correct RAO bits set
+     * for our CPU features.
+     */
+    cpacr_write(env, ri, 0);
+}
+
 static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
     { .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
       .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
       .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
-      .resetvalue = 0, .writefn = cpacr_write },
+      .resetfn = cpacr_reset, .writefn = cpacr_write },
     REGINFO_SENTINEL
 };

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

For page walking, we may require HCR for a security state
that is not "current".

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 20 +++++++++++++-------
 target/arm/helper.c | 11 ++++++++---
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
  * Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
  * This corresponds to the pseudocode EL2Enabled()
  */
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+{
+    return arm_feature(env, ARM_FEATURE_EL2)
+        && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
+}
+
 static inline bool arm_is_el2_enabled(CPUARMState *env)
 {
-    if (arm_feature(env, ARM_FEATURE_EL2)) {
-        if (arm_is_secure_below_el3(env)) {
-            return (env->cp15.scr_el3 & SCR_EEL2) != 0;
-        }
-        return true;
-    }
-    return false;
+    return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
 }

 #else
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
     return false;
 }

+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+{
+    return false;
+}
+
 static inline bool arm_is_el2_enabled(CPUARMState *env)
 {
     return false;
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
 * "for all purposes other than a direct read or write access of HCR_EL2."
 * Not included here is HCR_RW.
 */
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
 uint64_t arm_hcr_el2_eff(CPUARMState *env);
 uint64_t arm_hcrx_el2_eff(CPUARMState *env);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
 }

 /*
- * Return the effective value of HCR_EL2.
+ * Return the effective value of HCR_EL2, at the given security state.
  * Bits that are not included here:
  * RW (read from SCR_EL3.RW as needed)
  */
-uint64_t arm_hcr_el2_eff(CPUARMState *env)
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
 {
     uint64_t ret = env->cp15.hcr_el2;

-    if (!arm_is_el2_enabled(env)) {
+    if (!arm_is_el2_enabled_secstate(env, secure)) {
         /*
          * "This register has no effect if EL2 is not enabled in the
          * current Security state".  This is ARMv8.4-SecEL2 speak for
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
     return ret;
 }

+uint64_t arm_hcr_el2_eff(CPUARMState *env)
+{
+    return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
+}
+
 /*
  * Corresponds to ARM pseudocode function ELIsInHost().
  */
--
2.25.1

1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
cpregs_keys is an uint32_t* so the allocation should use uint32_t.
3
Rename the argument to is_secure_ptr, and introduce a
4
g_new is even better because it is type-safe.
4
local variable is_secure with the value. We only write
5
back to the pointer toward the end of the function.
5
6
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/gdbstub.c | 3 +--
12
target/arm/ptw.c | 22 ++++++++++++----------
12
1 file changed, 1 insertion(+), 2 deletions(-)
13
1 file changed, 12 insertions(+), 10 deletions(-)
13
14
14
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/gdbstub.c
17
--- a/target/arm/ptw.c
17
+++ b/target/arm/gdbstub.c
18
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
19
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
19
RegisterSysregXmlParam param = {cs, s};
20
20
21
/* Translate a S1 pagetable walk through S2 if needed. */
21
cpu->dyn_xml.num_cpregs = 0;
22
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
22
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
23
- hwaddr addr, bool *is_secure,
23
- g_hash_table_size(cpu->cp_regs));
24
+ hwaddr addr, bool *is_secure_ptr,
24
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
25
ARMMMUFaultInfo *fi)
25
g_string_printf(s, "<?xml version=\"1.0\"?>");
26
{
26
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
27
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
27
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
28
+ bool is_secure = *is_secure_ptr;
29
+ ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
30
31
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
32
- !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
33
+ !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
34
GetPhysAddrResult s2 = {};
35
int ret;
36
37
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
38
- *is_secure, false, &s2, fi);
39
+ is_secure, false, &s2, fi);
40
if (ret) {
41
assert(fi->type != ARMFault_None);
42
fi->s2addr = addr;
43
fi->stage2 = true;
44
fi->s1ptw = true;
45
- fi->s1ns = !*is_secure;
46
+ fi->s1ns = !is_secure;
47
return ~0;
48
}
49
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
50
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
51
fi->s2addr = addr;
52
fi->stage2 = true;
53
fi->s1ptw = true;
54
- fi->s1ns = !*is_secure;
55
+ fi->s1ns = !is_secure;
56
return ~0;
57
}
58
59
if (arm_is_secure_below_el3(env)) {
60
/* Check if page table walk is to secure or non-secure PA space. */
61
- if (*is_secure) {
62
- *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
63
+ if (is_secure) {
64
+ is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
65
} else {
66
- *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
67
+ is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
68
}
69
+ *is_secure_ptr = is_secure;
70
} else {
71
- assert(!*is_secure);
72
+ assert(!is_secure);
73
}
74
75
addr = s2.phys;
28
--
76
--
29
2.17.1
77
2.25.1
30
31
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to memory_region_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
6
The callsite in flatview_access_valid() is part of a recursive
3
This value is unused.
7
loop flatview_access_valid() -> memory_region_access_valid() ->
8
subpage_accepts() -> flatview_access_valid(); we make it pass
9
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
10
have plumbed an attrs parameter through the rest of the loop
11
and we can add an attrs parameter to flatview_access_valid().
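Condensed from the exec.c hunk below — the attrs parameter stops here for now, because the recursion flatview_access_valid() -> memory_region_access_valid() -> subpage_accepts() -> flatview_access_valid() has not yet been converted end to end:

    /* When our callers all have attrs we'll pass them through here */
    if (!memory_region_access_valid(mr, xlat, l, is_write,
                                    MEMTXATTRS_UNSPECIFIED)) {
        return false;
    }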
12
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
17
---
9
---
18
include/exec/memory-internal.h | 3 ++-
10
target/arm/ptw.c | 5 ++---
19
exec.c | 4 +++-
11
1 file changed, 2 insertions(+), 3 deletions(-)
20
hw/s390x/s390-pci-inst.c | 3 ++-
21
memory.c | 7 ++++---
22
4 files changed, 11 insertions(+), 6 deletions(-)
23
12
24
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
25
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory-internal.h
15
--- a/target/arm/ptw.c
27
+++ b/include/exec/memory-internal.h
16
+++ b/target/arm/ptw.c
28
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
17
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
29
extern const MemoryRegionOps unassigned_mem_ops;
18
* s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
30
19
* combined attributes in MAIR_EL1 format.
31
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
20
*/
32
- unsigned size, bool is_write);
21
-static uint8_t combined_attrs_fwb(CPUARMState *env,
33
+ unsigned size, bool is_write,
22
- ARMCacheAttrs s1, ARMCacheAttrs s2)
34
+ MemTxAttrs attrs);
23
+static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
35
36
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
37
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
43
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
44
if (!memory_access_is_direct(mr, is_write)) {
45
l = memory_access_size(mr, l, addr);
46
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
47
+ /* When our callers all have attrs we'll pass them through here */
48
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
49
+ MEMTXATTRS_UNSPECIFIED)) {
50
return false;
51
}
52
}
53
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/s390x/s390-pci-inst.c
56
+++ b/hw/s390x/s390-pci-inst.c
57
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
58
mr = s390_get_subregion(mr, offset, len);
59
offset -= mr->addr;
60
61
- if (!memory_region_access_valid(mr, offset, len, true)) {
62
+ if (!memory_region_access_valid(mr, offset, len, true,
63
+ MEMTXATTRS_UNSPECIFIED)) {
64
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
65
return 0;
66
}
67
diff --git a/memory.c b/memory.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/memory.c
70
+++ b/memory.c
71
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
72
bool memory_region_access_valid(MemoryRegion *mr,
73
hwaddr addr,
74
unsigned size,
75
- bool is_write)
76
+ bool is_write,
77
+ MemTxAttrs attrs)
78
{
24
{
79
int access_size_min, access_size_max;
25
switch (s2.attrs) {
80
int access_size, i;
26
case 7:
81
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
27
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
82
{
28
83
MemTxResult r;
29
/* Combine memory type and cacheability attributes */
84
30
if (arm_hcr_el2_eff(env) & HCR_FWB) {
85
- if (!memory_region_access_valid(mr, addr, size, false)) {
31
- ret.attrs = combined_attrs_fwb(env, s1, s2);
86
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
32
+ ret.attrs = combined_attrs_fwb(s1, s2);
87
*pval = unassigned_mem_read(mr, addr, size);
33
} else {
88
return MEMTX_DECODE_ERROR;
34
ret.attrs = combined_attrs_nofwb(env, s1, s2);
89
}
90
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
91
unsigned size,
92
MemTxAttrs attrs)
93
{
94
- if (!memory_region_access_valid(mr, addr, size, true)) {
95
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
96
unassigned_mem_write(mr, addr, data, size);
97
return MEMTX_DECODE_ERROR;
98
}
35
}
99
--
36
--
100
2.17.1
37
2.25.1
101
102
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
acpi_data_push uses g_array_set_size to resize the array. If there
3
These subroutines did not need ENV for anything except
4
is not enough contiguous memory, the data address will change, so the previous
4
retrieving the effective value of HCR anyway.
5
pointer can no longer be used. The code must update the pointer and use
6
the new one.
7
5
8
Also, the previous code wrongly used the le32-converted iort->node_offset
6
We have computed the effective value of HCR in the callers,
9
for subsequent computations, which yields an incorrect value if the host is
7
and this will be especially important for interpreting HCR
10
not little-endian. Use the non-converted value instead.
8
in a non-current security state.
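For the ACPI fix above, the use-after-realloc hazard in miniature (condensed from the virt-acpi-build.c hunks below):

    AcpiIortTable *iort = acpi_data_push(table_data, sizeof(*iort));

    /* Later acpi_data_push() calls go through g_array_set_size(),
     * which may reallocate and move table_data->data, leaving the
     * iort pointer stale.  Re-derive it before writing through it:
     */
    iort = (AcpiIortTable *)(table_data->data + iort_start);
    iort->length = cpu_to_le32(iort_length);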
11
9
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
12
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
14
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
15
target/arm/ptw.c | 30 +++++++++++++++++-------------
18
1 file changed, 15 insertions(+), 5 deletions(-)
16
1 file changed, 17 insertions(+), 13 deletions(-)
19
17
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
18
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
21
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
20
--- a/target/arm/ptw.c
23
+++ b/hw/arm/virt-acpi-build.c
21
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
22
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
25
AcpiIortItsGroup *its;
23
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
26
AcpiIortTable *iort;
24
}
27
AcpiIortSmmu3 *smmu;
25
28
- size_t node_size, iort_length, smmu_offset = 0;
26
-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
27
+static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
30
AcpiIortRC *rc;
28
{
31
29
/*
32
iort = acpi_data_push(table_data, sizeof(*iort));
30
* For an S1 page table walk, the stage 1 attributes are always
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
31
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
34
32
* when cacheattrs.attrs bit [2] is 0.
35
iort_length = sizeof(*iort);
33
*/
36
iort->node_count = cpu_to_le32(nb_nodes);
34
assert(cacheattrs.is_s2_format);
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
35
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
38
+ /*
36
+ if (hcr & HCR_FWB) {
39
+ * Use a copy in case table_data->data moves during acpi_data_push
37
return (cacheattrs.attrs & 0x4) == 0;
40
+ * operations.
38
} else {
41
+ */
39
return (cacheattrs.attrs & 0xc) == 0;
42
+ iort_node_offset = sizeof(*iort);
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
41
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
44
42
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
45
/* ITS group node */
43
GetPhysAddrResult s2 = {};
46
node_size = sizeof(*its) + sizeof(uint32_t);
44
+ uint64_t hcr;
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
45
int ret;
48
int irq = vms->irqmap[VIRT_SMMU];
46
49
47
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
50
/* SMMUv3 node */
48
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
51
- smmu_offset = iort->node_offset + node_size;
49
fi->s1ns = !is_secure;
52
+ smmu_offset = iort_node_offset + node_size;
50
return ~0;
53
node_size = sizeof(*smmu) + sizeof(*idmap);
51
}
54
iort_length += node_size;
52
- if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
55
smmu = acpi_data_push(table_data, node_size);
53
- ptw_attrs_are_device(env, s2.cacheattrs)) {
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
54
+
57
idmap->id_count = cpu_to_le32(0xFFFF);
55
+ hcr = arm_hcr_el2_eff(env);
58
idmap->output_base = 0;
56
+ if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
59
/* output IORT node is the ITS group node (the first node) */
57
/*
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
58
* PTW set and S1 walk touched S2 Device memory:
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
59
* generate Permission fault.
60
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
61
* ref: shared/translation/attrs/S2AttrDecode()
62
* .../S2ConvertAttrsHints()
63
*/
64
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
65
+static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs)
66
{
67
uint8_t hiattr = extract32(s2attrs, 2, 2);
68
uint8_t loattr = extract32(s2attrs, 0, 2);
69
uint8_t hihint = 0, lohint = 0;
70
71
if (hiattr != 0) { /* normal memory */
72
- if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
73
+ if (hcr & HCR_CD) { /* cache disabled */
74
hiattr = loattr = 1; /* non-cacheable */
75
} else {
76
if (hiattr != 1) { /* Write-through or write-back */
77
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
78
* s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
79
* combined attributes in MAIR_EL1 format.
80
*/
81
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
82
+static uint8_t combined_attrs_nofwb(uint64_t hcr,
83
ARMCacheAttrs s1, ARMCacheAttrs s2)
84
{
85
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
86
87
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
88
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
89
90
s1lo = extract32(s1.attrs, 0, 4);
91
s2lo = extract32(s2_mair_attrs, 0, 4);
92
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
93
* @s1: Attributes from stage 1 walk
94
* @s2: Attributes from stage 2 walk
95
*/
96
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
97
+static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
98
ARMCacheAttrs s1, ARMCacheAttrs s2)
99
{
100
ARMCacheAttrs ret;
101
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
62
}
102
}
63
103
64
/* Root Complex Node */
104
/* Combine memory type and cacheability attributes */
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
105
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
66
idmap->output_reference = cpu_to_le32(smmu_offset);
106
+ if (hcr & HCR_FWB) {
107
ret.attrs = combined_attrs_fwb(s1, s2);
67
} else {
108
} else {
68
/* output IORT node is the ITS group node (the first node) */
109
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
110
+ ret.attrs = combined_attrs_nofwb(hcr, s1, s2);
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
71
}
111
}
72
112
73
+ /*
113
/*
74
+ * Update the pointer address in case table_data->data moves during above
114
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
75
+ * acpi_data_push operations.
115
ARMCacheAttrs cacheattrs1;
76
+ */
116
ARMMMUIdx s2_mmu_idx;
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
117
bool is_el0;
78
iort->length = cpu_to_le32(iort_length);
118
+ uint64_t hcr;
79
119
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
120
ret = get_phys_addr_with_secure(env, address, access_type,
121
s1_mmu_idx, is_secure, result, fi);
122
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
123
}
124
125
/* Combine the S1 and S2 cache attributes. */
126
- if (arm_hcr_el2_eff(env) & HCR_DC) {
127
+ hcr = arm_hcr_el2_eff(env);
128
+ if (hcr & HCR_DC) {
129
/*
130
* HCR.DC forces the first stage attributes to
131
* Normal Non-Shareable,
132
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
133
}
134
cacheattrs1.shareability = 0;
135
}
136
- result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
137
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
138
result->cacheattrs);
139
140
/*
81
--
141
--
82
2.17.1
142
2.25.1
83
84
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to flatview_translate(); all its
3
callers now have attrs available.
4
2
3
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
4
that we use is_secure instead of the current security state.
5
These AT* operations have been broken since arm_hcr_el2_eff
6
gained a check for "el2 enabled" for Secure EL2.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
9
---
12
---
10
include/exec/memory.h | 7 ++++---
13
target/arm/ptw.c | 8 ++++----
11
exec.c | 17 +++++++++--------
14
1 file changed, 4 insertions(+), 4 deletions(-)
12
2 files changed, 13 insertions(+), 11 deletions(-)
13
15
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
16
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
18
--- a/target/arm/ptw.c
17
+++ b/include/exec/memory.h
19
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
20
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
19
*/
20
MemoryRegion *flatview_translate(FlatView *fv,
21
hwaddr addr, hwaddr *xlat,
22
- hwaddr *len, bool is_write);
23
+ hwaddr *len, bool is_write,
24
+ MemTxAttrs attrs);
25
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
27
hwaddr addr, hwaddr *xlat,
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
29
MemTxAttrs attrs)
30
{
31
return flatview_translate(address_space_to_flatview(as),
32
- addr, xlat, len, is_write);
33
+ addr, xlat, len, is_write, attrs);
34
}
35
36
/* address_space_access_valid: check for validity of accessing an address
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
38
rcu_read_lock();
39
fv = address_space_to_flatview(as);
40
l = len;
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
43
if (len == l && memory_access_is_direct(mr, false)) {
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
45
memcpy(buf, ptr, len);
46
diff --git a/exec.c b/exec.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
49
+++ b/exec.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
51
52
/* Called from RCU critical section */
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
54
- hwaddr *plen, bool is_write)
55
+ hwaddr *plen, bool is_write,
56
+ MemTxAttrs attrs)
57
{
58
MemoryRegion *mr;
59
MemoryRegionSection section;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
61
}
21
}
62
63
l = len;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
22
}
67
23
68
return result;
24
- hcr_el2 = arm_hcr_el2_eff(env);
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
25
+ hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);
70
MemTxResult result = MEMTX_OK;
26
71
27
switch (mmu_idx) {
72
l = len;
28
case ARMMMUIdx_Stage2:
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
29
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
30
return ~0;
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
76
addr1, l, mr);
77
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
79
}
31
}
80
32
81
l = len;
33
- hcr = arm_hcr_el2_eff(env);
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
34
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
35
if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
84
}
36
/*
85
37
* PTW set and S1 walk touched S2 Device memory:
86
return result;
38
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
39
}
88
MemoryRegion *mr;
40
89
41
/* Combine the S1 and S2 cache attributes. */
90
l = len;
42
- hcr = arm_hcr_el2_eff(env);
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
43
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
44
if (hcr & HCR_DC) {
93
return flatview_read_continue(fv, addr, attrs, buf, len,
45
/*
94
addr1, l, mr);
46
* HCR.DC forces the first stage attributes to
95
}
47
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
48
result->page_size = TARGET_PAGE_SIZE;
97
49
98
while (len > 0) {
50
/* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
99
l = len;
51
- hcr = arm_hcr_el2_eff(env);
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
52
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
53
result->cacheattrs.shareability = 0;
102
if (!memory_access_is_direct(mr, is_write)) {
54
result->cacheattrs.is_s2_format = false;
103
l = memory_access_size(mr, l, addr);
55
if (hcr & HCR_DC) {
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
56
--
124
2.17.1
57
2.25.1
125
126
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to address_space_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
10
---
7
---
11
include/exec/memory.h | 4 +++-
8
target/arm/ptw.c | 138 +++++++++++++++++++++++++----------------------
12
include/sysemu/dma.h | 3 ++-
9
1 file changed, 74 insertions(+), 64 deletions(-)
13
exec.c | 3 ++-
14
target/s390x/diag.c | 6 ++++--
15
target/s390x/excp_helper.c | 3 ++-
16
target/s390x/mmu_helper.c | 3 ++-
17
target/s390x/sigp.c | 3 ++-
18
7 files changed, 17 insertions(+), 8 deletions(-)
19
10
20
diff --git a/include/exec/memory.h b/include/exec/memory.h
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/memory.h
13
--- a/target/arm/ptw.c
23
+++ b/include/exec/memory.h
14
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
15
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
25
* @addr: address within that address space
16
return ret;
26
* @len: length of the area to be checked
27
* @is_write: indicates the transfer direction
28
+ * @attrs: memory attributes
29
*/
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
32
+ bool is_write, MemTxAttrs attrs);
33
34
/* address_space_map: map a physical memory region into a host virtual address
35
*
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/sysemu/dma.h
39
+++ b/include/sysemu/dma.h
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
41
DMADirection dir)
42
{
43
return address_space_access_valid(as, addr, len,
44
- dir == DMA_DIRECTION_FROM_DEVICE);
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
46
+ MEMTXATTRS_UNSPECIFIED);
47
}
17
}
48
18
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
19
+/*
50
diff --git a/exec.c b/exec.c
20
+ * MMU disabled. S1 addresses within aa64 translation regimes are
51
index XXXXXXX..XXXXXXX 100644
21
+ * still checked for bounds -- see AArch64.S1DisabledOutput().
52
--- a/exec.c
22
+ */
53
+++ b/exec.c
23
+static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
24
+ MMUAccessType access_type,
55
}
25
+ ARMMMUIdx mmu_idx, bool is_secure,
56
26
+ GetPhysAddrResult *result,
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
27
+ ARMMMUFaultInfo *fi)
58
- int len, bool is_write)
28
+{
59
+ int len, bool is_write,
29
+ uint64_t hcr;
60
+ MemTxAttrs attrs)
30
+ uint8_t memattr;
61
{
31
+
62
FlatView *fv;
32
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
63
bool result;
33
+ int r_el = regime_el(env, mmu_idx);
64
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
34
+ if (arm_el_is_aa64(env, r_el)) {
65
index XXXXXXX..XXXXXXX 100644
35
+ int pamax = arm_pamax(env_archcpu(env));
66
--- a/target/s390x/diag.c
36
+ uint64_t tcr = env->cp15.tcr_el[r_el];
67
+++ b/target/s390x/diag.c
37
+ int addrtop, tbi;
68
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
38
+
69
return;
39
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
70
}
40
+ if (access_type == MMU_INST_FETCH) {
71
if (!address_space_access_valid(&address_space_memory, addr,
41
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
72
- sizeof(IplParameterBlock), false)) {
42
+ }
73
+ sizeof(IplParameterBlock), false,
43
+ tbi = (tbi >> extract64(address, 55, 1)) & 1;
74
+ MEMTXATTRS_UNSPECIFIED)) {
44
+ addrtop = (tbi ? 55 : 63);
75
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
45
+
76
return;
46
+ if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
77
}
47
+ fi->type = ARMFault_AddressSize;
78
@@ -XXX,XX +XXX,XX @@ out:
48
+ fi->level = 0;
79
return;
49
+ fi->stage2 = false;
80
}
50
+ return 1;
81
if (!address_space_access_valid(&address_space_memory, addr,
51
+ }
82
- sizeof(IplParameterBlock), true)) {
52
+
83
+ sizeof(IplParameterBlock), true,
53
+ /*
84
+ MEMTXATTRS_UNSPECIFIED)) {
54
+ * When TBI is disabled, we've just validated that all of the
85
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
55
+ * bits above PAMax are zero, so logically we only need to
86
return;
56
+ * clear the top byte for TBI. But it's clearer to follow
87
}
57
+ * the pseudocode set of addrdesc.paddress.
88
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
58
+ */
89
index XXXXXXX..XXXXXXX 100644
59
+ address = extract64(address, 0, 52);
90
--- a/target/s390x/excp_helper.c
60
+ }
91
+++ b/target/s390x/excp_helper.c
61
+ }
92
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,
62
+
93
63
+ result->phys = address;
94
/* check out of RAM access */
64
+ result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
95
if (!address_space_access_valid(&address_space_memory, raddr,
65
+ result->page_size = TARGET_PAGE_SIZE;
96
- TARGET_PAGE_SIZE, rw)) {
66
+
97
+ TARGET_PAGE_SIZE, rw,
67
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
98
+ MEMTXATTRS_UNSPECIFIED)) {
68
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
99
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
69
+ result->cacheattrs.shareability = 0;
100
(uint64_t)raddr, (uint64_t)ram_size);
70
+ result->cacheattrs.is_s2_format = false;
101
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
71
+ if (hcr & HCR_DC) {
102
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
72
+ if (hcr & HCR_DCT) {
103
index XXXXXXX..XXXXXXX 100644
73
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
104
--- a/target/s390x/mmu_helper.c
74
+ } else {
105
+++ b/target/s390x/mmu_helper.c
75
+ memattr = 0xff; /* Normal, WB, RWA */
106
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
76
+ }
107
return ret;
77
+ } else if (access_type == MMU_INST_FETCH) {
108
}
78
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
109
if (!address_space_access_valid(&address_space_memory, pages[i],
79
+ memattr = 0xee; /* Normal, WT, RA, NT */
110
- TARGET_PAGE_SIZE, is_write)) {
80
+ } else {
111
+ TARGET_PAGE_SIZE, is_write,
81
+ memattr = 0x44; /* Normal, NC, No */
112
+ MEMTXATTRS_UNSPECIFIED)) {
82
+ }
113
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
83
+ result->cacheattrs.shareability = 2; /* outer sharable */
114
return -EFAULT;
84
+ } else {
115
}
85
+ memattr = 0x00; /* Device, nGnRnE */
116
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
86
+ }
117
index XXXXXXX..XXXXXXX 100644
87
+ result->cacheattrs.attrs = memattr;
118
--- a/target/s390x/sigp.c
88
+ return 0;
119
+++ b/target/s390x/sigp.c
89
+}
120
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
90
+
121
cpu_synchronize_state(cs);
91
bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
122
92
MMUAccessType access_type, ARMMMUIdx mmu_idx,
123
if (!address_space_access_valid(&address_space_memory, addr,
93
bool is_secure, GetPhysAddrResult *result,
124
- sizeof(struct LowCore), false)) {
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
125
+ sizeof(struct LowCore), false,
95
/* Definitely a real MMU, not an MPU */
126
+ MEMTXATTRS_UNSPECIFIED)) {
96
127
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
97
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
128
return;
98
- uint64_t hcr;
99
- uint8_t memattr;
100
-
101
- /*
102
- * MMU disabled. S1 addresses within aa64 translation regimes are
103
- * still checked for bounds -- see AArch64.TranslateAddressS1Off.
104
- */
105
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
106
- int r_el = regime_el(env, mmu_idx);
107
- if (arm_el_is_aa64(env, r_el)) {
108
- int pamax = arm_pamax(env_archcpu(env));
109
- uint64_t tcr = env->cp15.tcr_el[r_el];
110
- int addrtop, tbi;
111
-
112
- tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
113
- if (access_type == MMU_INST_FETCH) {
114
- tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
115
- }
116
- tbi = (tbi >> extract64(address, 55, 1)) & 1;
117
- addrtop = (tbi ? 55 : 63);
118
-
119
- if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
120
- fi->type = ARMFault_AddressSize;
121
- fi->level = 0;
122
- fi->stage2 = false;
123
- return 1;
124
- }
125
-
126
- /*
127
- * When TBI is disabled, we've just validated that all of the
128
- * bits above PAMax are zero, so logically we only need to
129
- * clear the top byte for TBI. But it's clearer to follow
130
- * the pseudocode set of addrdesc.paddress.
131
- */
132
- address = extract64(address, 0, 52);
133
- }
134
- }
135
- result->phys = address;
136
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
137
- result->page_size = TARGET_PAGE_SIZE;
138
-
139
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
140
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
141
- result->cacheattrs.shareability = 0;
142
- result->cacheattrs.is_s2_format = false;
143
- if (hcr & HCR_DC) {
144
- if (hcr & HCR_DCT) {
145
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
146
- } else {
147
- memattr = 0xff; /* Normal, WB, RWA */
148
- }
149
- } else if (access_type == MMU_INST_FETCH) {
150
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
151
- memattr = 0xee; /* Normal, WT, RA, NT */
152
- } else {
153
- memattr = 0x44; /* Normal, NC, No */
154
- }
155
- result->cacheattrs.shareability = 2; /* outer sharable */
156
- } else {
157
- memattr = 0x00; /* Device, nGnRnE */
158
- }
159
- result->cacheattrs.attrs = memattr;
160
- return 0;
161
+ return get_phys_addr_disabled(env, address, access_type, mmu_idx,
162
+ is_secure, result, fi);
129
}
163
}
164
-
165
if (regime_using_lpae_format(env, mmu_idx)) {
166
return get_phys_addr_lpae(env, address, access_type, mmu_idx,
167
is_secure, false, result, fi);
130
--
168
--
131
2.17.1
169
2.25.1
132
133
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The code forgot to increase clroffset during the loop, so it only cleared the
3
Do not apply memattr or shareability for Stage2 translations.
4
first 4 bytes.
4
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
5
pseudocode in AArch64.S1DisabledOutput.
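For the GICv3 KVM fix above, a paraphrased sketch of the kvm_dist_putbmp() loop — only the clroffset increment is the actual change, and the loop shape is abbreviated here:

    while (irq < s->num_irq) {
        if (clroffset != 0) {
            reg = 0;
            kvm_gicd_access(s, clroffset, &reg, true);
            clroffset += 4;   /* previously missing: without this, every
                               * iteration cleared the same first word */
        }
        reg = *gic_bmp_ptr32(bmp, irq);
        kvm_gicd_access(s, offset, &reg, true);
        /* ... offset and irq advance per 32-bit bitmap word ... */
        irq += 32;
    }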
5
6
6
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Cc: qemu-stable@nongnu.org
8
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/intc/arm_gicv3_kvm.c | 1 +
12
target/arm/ptw.c | 48 +++++++++++++++++++++++++-----------------------
15
1 file changed, 1 insertion(+)
13
1 file changed, 25 insertions(+), 23 deletions(-)
16
14
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
17
--- a/target/arm/ptw.c
20
+++ b/hw/intc/arm_gicv3_kvm.c
18
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
22
if (clroffset != 0) {
20
GetPhysAddrResult *result,
23
reg = 0;
21
ARMMMUFaultInfo *fi)
24
kvm_gicd_access(s, clroffset, &reg, true);
22
{
25
+ clroffset += 4;
23
- uint64_t hcr;
24
- uint8_t memattr;
25
+ uint8_t memattr = 0x00; /* Device nGnRnE */
26
+ uint8_t shareability = 0; /* non-sharable */
27
28
if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
29
int r_el = regime_el(env, mmu_idx);
30
+
31
if (arm_el_is_aa64(env, r_el)) {
32
int pamax = arm_pamax(env_archcpu(env));
33
uint64_t tcr = env->cp15.tcr_el[r_el];
34
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
35
*/
36
address = extract64(address, 0, 52);
26
}
37
}
27
reg = *gic_bmp_ptr32(bmp, irq);
38
+
28
kvm_gicd_access(s, offset, &reg, true);
39
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
40
+ if (r_el == 1) {
41
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
42
+ if (hcr & HCR_DC) {
43
+ if (hcr & HCR_DCT) {
44
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
45
+ } else {
46
+ memattr = 0xff; /* Normal, WB, RWA */
47
+ }
48
+ }
49
+ }
50
+ if (memattr == 0 && access_type == MMU_INST_FETCH) {
51
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
52
+ memattr = 0xee; /* Normal, WT, RA, NT */
53
+ } else {
54
+ memattr = 0x44; /* Normal, NC, No */
55
+ }
56
+ shareability = 2; /* outer sharable */
57
+ }
58
+ result->cacheattrs.is_s2_format = false;
59
}
60
61
result->phys = address;
62
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
63
result->page_size = TARGET_PAGE_SIZE;
64
-
65
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
66
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
67
- result->cacheattrs.shareability = 0;
68
- result->cacheattrs.is_s2_format = false;
69
- if (hcr & HCR_DC) {
70
- if (hcr & HCR_DCT) {
71
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
72
- } else {
73
- memattr = 0xff; /* Normal, WB, RWA */
74
- }
75
- } else if (access_type == MMU_INST_FETCH) {
76
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
77
- memattr = 0xee; /* Normal, WT, RA, NT */
78
- } else {
79
- memattr = 0x44; /* Normal, NC, No */
80
- }
81
- result->cacheattrs.shareability = 2; /* outer sharable */
82
- } else {
83
- memattr = 0x00; /* Device, nGnRnE */
84
- }
85
+ result->cacheattrs.shareability = shareability;
86
result->cacheattrs.attrs = memattr;
87
return 0;
88
}
29
--
89
--
30
2.17.1
90
2.25.1
31
32
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Depending on the host ABI, float16, aka uint16_t, values are
3
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
4
passed and returned either zero-extended in the host register
4
so that it may be passed directly to tlb_set_page_full.
5
or with garbage at the top of the host register.
6
5
7
The TCG code generator has so far been assuming garbage, as that
6
The change is large, but mostly mechanical. The major
8
matches the x86 ABI, but this is incorrect for other host ABIs.
7
non-mechanical change is page_size -> lg_page_size.
9
Further, target/arm has so far been assuming zero-extended results,
8
Most of the time this is obvious, and is related to
10
so that it may store the 16-bit value into a 32-bit slot with the
9
TARGET_PAGE_BITS.
11
high 16 bits already clear.
12
10
13
Rectify both problems by mapping "f16" in the helper definition
14
to uint32_t instead of (a typedef for) uint16_t. This forces
15
the host compiler to assume garbage in the upper 16 bits on input
16
and to zero-extend the result on output.
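In effect (illustrative before/after signatures; frecpx_f16 is one of the helpers converted below):

    /* Before: dh_ctype_f16 was float16 (a uint16_t typedef), so some
     * host ABIs may leave bits 31:16 of the return register undefined.
     */
    float16 HELPER(frecpx_f16)(float16 a, void *fpstp);

    /* After: mapping f16 to uint32_t forces the callee to zero-extend
     * the result and the caller to distrust the high bits on input.
     */
    uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp);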
17
18
Cc: qemu-stable@nongnu.org
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
22
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
15
---
26
include/exec/helper-head.h | 2 +-
16
target/arm/internals.h | 5 +-
27
target/arm/helper-a64.c | 35 +++++++++--------
17
target/arm/helper.c | 12 +--
28
target/arm/helper.c | 80 +++++++++++++++++++-------------------
18
target/arm/m_helper.c | 20 ++---
29
3 files changed, 59 insertions(+), 58 deletions(-)
19
target/arm/ptw.c | 179 ++++++++++++++++++++--------------------
20
target/arm/tlb_helper.c | 9 +-
21
5 files changed, 111 insertions(+), 114 deletions(-)
30
22
31
diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
23
diff --git a/target/arm/internals.h b/target/arm/internals.h
32
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
33
--- a/include/exec/helper-head.h
25
--- a/target/arm/internals.h
34
+++ b/include/exec/helper-head.h
26
+++ b/target/arm/internals.h
35
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
36
#define dh_ctype_int int
28
37
#define dh_ctype_i64 uint64_t
29
/* Fields that are valid upon success. */
38
#define dh_ctype_s64 int64_t
30
typedef struct GetPhysAddrResult {
39
-#define dh_ctype_f16 float16
31
- hwaddr phys;
40
+#define dh_ctype_f16 uint32_t
32
- target_ulong page_size;
41
#define dh_ctype_f32 float32
33
- int prot;
42
#define dh_ctype_f64 float64
34
- MemTxAttrs attrs;
43
#define dh_ctype_ptr void *
35
+ CPUTLBEntryFull f;
44
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
36
ARMCacheAttrs cacheattrs;
45
index XXXXXXX..XXXXXXX 100644
37
} GetPhysAddrResult;
46
--- a/target/arm/helper-a64.c
47
+++ b/target/arm/helper-a64.c
48
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
49
return flags;
50
}
51
52
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
53
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
54
{
55
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
56
}
57
58
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
59
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
60
{
61
return float_rel_to_flags(float16_compare(x, y, fp_status));
62
}
63
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
64
#define float64_three make_float64(0x4008000000000000ULL)
65
#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
66
67
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
68
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
69
{
70
float_status *fpst = fpstp;
71
72
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
73
return float64_muladd(a, b, float64_two, 0, fpst);
74
}
75
76
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
77
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
78
{
79
float_status *fpst = fpstp;
80
81
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
82
}
83
84
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
85
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
86
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
87
{
88
float_status *fpst = fpstp;
89
uint16_t val16, sbit;
90
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
91
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
92
93
#define ADVSIMD_HALFOP(name) \
94
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
95
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
96
{ \
97
float_status *fpst = fpstp; \
98
return float16_ ## name(a, b, fpst); \
99
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
100
ADVSIMD_TWOHALFOP(mulx)
101
102
/* fused multiply-accumulate */
103
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
104
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
105
+ void *fpstp)
106
{
107
float_status *fpst = fpstp;
108
return float16_muladd(a, b, c, 0, fpst);
109
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
110
111
#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
112
113
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
114
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
115
{
116
float_status *fpst = fpstp;
117
int compare = float16_compare_quiet(a, b, fpst);
118
return ADVSIMD_CMPRES(compare == float_relation_equal);
119
}
120
121
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
122
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
123
{
124
float_status *fpst = fpstp;
125
int compare = float16_compare(a, b, fpst);
126
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
127
compare == float_relation_equal);
128
}
129
130
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
131
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
132
{
133
float_status *fpst = fpstp;
134
int compare = float16_compare(a, b, fpst);
135
return ADVSIMD_CMPRES(compare == float_relation_greater);
136
}
137
138
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
139
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
140
{
141
float_status *fpst = fpstp;
142
float16 f0 = float16_abs(a);
143
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
144
compare == float_relation_equal);
145
}
146
147
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
148
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
149
{
150
float_status *fpst = fpstp;
151
float16 f0 = float16_abs(a);
152
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
153
}
154
155
/* round to integral */
156
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
157
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
158
{
159
return float16_round_to_int(x, fp_status);
160
}
161
162
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
163
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
164
{
165
int old_flags = get_float_exception_flags(fp_status), new_flags;
166
float16 ret;
167
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
168
* setting the mode appropriately before calling the helper.
169
*/
170
171
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
172
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
173
{
174
float_status *fpst = fpstp;
175
176
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
177
return float16_to_int16(a, fpst);
178
}
179
180
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
181
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
182
{
183
float_status *fpst = fpstp;
184
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
186
* Square Root and Reciprocal square root
187
*/
188
189
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
190
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
191
{
192
float_status *s = fpstp;
193
38
194
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
195
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
196
--- a/target/arm/helper.c
41
--- a/target/arm/helper.c
197
+++ b/target/arm/helper.c
42
+++ b/target/arm/helper.c
198
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
43
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
199
44
/* Create a 64-bit PAR */
200
/* Integer to float and float to integer conversions */
45
par64 = (1 << 11); /* LPAE bit always set */
201
46
if (!ret) {
202
-#define CONV_ITOF(name, fsz, sign) \
47
- par64 |= res.phys & ~0xfffULL;
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
48
- if (!res.attrs.secure) {
204
-{ \
49
+ par64 |= res.f.phys_addr & ~0xfffULL;
205
- float_status *fpst = fpstp; \
50
+ if (!res.f.attrs.secure) {
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
51
par64 |= (1 << 9); /* NS */
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
52
}
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
53
par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
209
+{ \
54
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
210
+ float_status *fpst = fpstp; \
55
*/
211
if (!ret) {
/* We do not set any attribute bits in the PAR */
- if (res.page_size == (1 << 24)
+ if (res.f.lg_page_size == 24
&& arm_feature(env, ARM_FEATURE_V7)) {
- par64 = (res.phys & 0xff000000) | (1 << 1);
+ par64 = (res.f.phys_addr & 0xff000000) | (1 << 1);
} else {
- par64 = res.phys & 0xfffff000;
+ par64 = res.f.phys_addr & 0xfffff000;
}
- if (!res.attrs.secure) {
+ if (!res.f.attrs.secure) {
par64 |= (1 << 9); /* NS */
}
} else {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
}
goto pend_fault;
}
- address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value,
- res.attrs, &txres);
+ address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr,
+ value, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to write the data */
if (mode == STACK_LAZYFP) {
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
goto pend_fault;
}

- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to read the data */
qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
return false;
}
- *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
}
return false;
}
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
- res.attrs, &txres);
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
+ res.f.phys_addr, res.f.attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to read the data */
qemu_log_mask(CPU_LOG_INT,
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
} else {
mrvalid = true;
}
- r = res.prot & PAGE_READ;
- rw = res.prot & PAGE_WRITE;
+ r = res.f.prot & PAGE_READ;
+ rw = res.f.prot & PAGE_WRITE;
} else {
r = false;
rw = false;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
assert(!is_secure);
}

- addr = s2.phys;
+ addr = s2.f.phys_addr;
}
return addr;
}

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
/* 1Mb section. */
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
ap = (desc >> 10) & 3;
- result->page_size = 1024 * 1024;
+ result->f.lg_page_size = 20; /* 1MB */
} else {
/* Lookup l2 entry. */
if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
case 1: /* 64k page. */
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
- result->page_size = 0x10000;
+ result->f.lg_page_size = 16;
break;
case 2: /* 4k page. */
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
break;
case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
if (type == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
if (arm_feature(env, ARM_FEATURE_XSCALE)
|| arm_feature(env, ARM_FEATURE_V6)) {
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
} else {
/*
* UNPREDICTABLE in ARMv5; we choose to take a
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
}
} else {
phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
- result->page_size = 0x400;
+ result->f.lg_page_size = 10;
}
ap = (desc >> 4) & 3;
break;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
g_assert_not_reached();
}
}
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
- result->prot |= result->prot ? PAGE_EXEC : 0;
- if (!(result->prot & (1 << access_type))) {
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+ result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
+ if (!(result->f.prot & (1 << access_type))) {
/* Access permission fault. */
fi->type = ARMFault_Permission;
goto do_fault;
}
- result->phys = phys_addr;
+ result->f.phys_addr = phys_addr;
return false;
do_fault:
fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
- result->page_size = 0x1000000;
+ result->f.lg_page_size = 24; /* 16MB */
} else {
/* Section. */
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
- result->page_size = 0x100000;
+ result->f.lg_page_size = 20; /* 1MB */
}
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
xn = desc & (1 << 4);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
case 1: /* 64k page. */
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
xn = desc & (1 << 15);
- result->page_size = 0x10000;
+ result->f.lg_page_size = 16;
break;
case 2: case 3: /* 4k page. */
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
xn = desc & 1;
- result->page_size = 0x1000;
+ result->f.lg_page_size = 12;
break;
default:
/* Never happens, but compiler isn't smart enough to tell. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
}
}
if (domain_prot == 3) {
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
} else {
if (pxn && !regime_is_user(env, mmu_idx)) {
xn = 1;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
fi->type = ARMFault_AccessFlag;
goto do_fault;
}
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
} else {
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
}
- if (result->prot && !xn) {
- result->prot |= PAGE_EXEC;
+ if (result->f.prot && !xn) {
+ result->f.prot |= PAGE_EXEC;
}
- if (!(result->prot & (1 << access_type))) {
+ if (!(result->f.prot & (1 << access_type))) {
/* Access permission fault. */
fi->type = ARMFault_Permission;
goto do_fault;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
* the CPU doesn't support TZ or this is a non-secure translation
* regime, because the attribute will already be non-secure.
*/
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
}
- result->phys = phys_addr;
+ result->f.phys_addr = phys_addr;
return false;
do_fault:
fi->domain = domain;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
ns = mmu_idx == ARMMMUIdx_Stage2;
xn = extract32(attrs, 11, 2);
- result->prot = get_S2prot(env, ap, xn, s1_is_el0);
+ result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
} else {
ns = extract32(attrs, 3, 1);
xn = extract32(attrs, 12, 1);
pxn = extract32(attrs, 11, 1);
- result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
+ result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
}

fault_type = ARMFault_Permission;
- if (!(result->prot & (1 << access_type))) {
+ if (!(result->f.prot & (1 << access_type))) {
goto do_fault;
}

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
* the CPU doesn't support TZ or this is a non-secure translation
* regime, because the attribute will already be non-secure.
*/
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
}
/* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
- arm_tlb_bti_gp(&result->attrs) = true;
+ arm_tlb_bti_gp(&result->f.attrs) = true;
}

if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
result->cacheattrs.shareability = extract32(attrs, 6, 2);
}

- result->phys = descaddr;
- result->page_size = page_size;
+ result->f.phys_addr = descaddr;
+ result->f.lg_page_size = ctz64(page_size);
return false;

do_fault:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,

if (regime_translation_disabled(env, mmu_idx, is_secure)) {
/* MPU disabled. */
- result->phys = address;
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.phys_addr = address;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
return false;
}

- result->phys = address;
+ result->f.phys_addr = address;
for (n = 7; n >= 0; n--) {
base = env->cp15.c6_region[n];
if ((base & 1) == 0) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot = PAGE_READ | PAGE_WRITE;
+ result->f.prot = PAGE_READ | PAGE_WRITE;
break;
case 2:
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
if (!is_user) {
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
}
break;
case 3:
- result->prot = PAGE_READ | PAGE_WRITE;
+ result->f.prot = PAGE_READ | PAGE_WRITE;
break;
case 5:
if (is_user) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
break;
case 6:
- result->prot = PAGE_READ;
+ result->f.prot = PAGE_READ;
break;
default:
/* Bad permission. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
fi->level = 1;
return true;
}
- result->prot |= PAGE_EXEC;
+ result->f.prot |= PAGE_EXEC;
return false;
}

static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
- int32_t address, int *prot)
+ int32_t address, uint8_t *prot)
{
if (!arm_feature(env, ARM_FEATURE_M)) {
*prot = PAGE_READ | PAGE_WRITE;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
int n;
bool is_user = regime_is_user(env, mmu_idx);

- result->phys = address;
- result->page_size = TARGET_PAGE_SIZE;
- result->prot = 0;
+ result->f.phys_addr = address;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
+ result->f.prot = 0;

if (regime_translation_disabled(env, mmu_idx, secure) ||
m_is_ppb_region(env, address)) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
* which always does a direct read using address_space_ldl(), rather
* than going via this function, so we don't need to check that here.
*/
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
} else { /* MPU enabled */
for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
/* region search */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
if (ranges_overlap(base, rmask,
address & TARGET_PAGE_MASK,
TARGET_PAGE_SIZE)) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
continue;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
continue;
}
if (rsize < TARGET_PAGE_BITS) {
- result->page_size = 1 << rsize;
+ result->f.lg_page_size = rsize;
}
break;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
fi->type = ARMFault_Background;
return true;
}
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address,
+ &result->f.prot);
} else { /* a MPU hit! */
uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
case 5:
break; /* no access */
case 3:
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
/* fall through */
case 2:
case 6:
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
case 7:
/* for v7M, same as 6; for R profile a reserved value */
if (arm_feature(env, ARM_FEATURE_M)) {
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
}
/* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
case 1:
case 2:
case 3:
- result->prot |= PAGE_WRITE;
+ result->f.prot |= PAGE_WRITE;
/* fall through */
case 5:
case 6:
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
case 7:
/* for v7M, same as 6; for R profile a reserved value */
if (arm_feature(env, ARM_FEATURE_M)) {
- result->prot |= PAGE_READ | PAGE_EXEC;
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
break;
}
/* fall through */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,

/* execute never */
if (xn) {
- result->prot &= ~PAGE_EXEC;
+ result->f.prot &= ~PAGE_EXEC;
}
}
}

fi->type = ARMFault_Permission;
fi->level = 1;
- return !(result->prot & (1 << access_type));
+ return !(result->f.prot & (1 << access_type));
}

bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);

- result->page_size = TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
if (mregion) {
*mregion = -1;
}
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
ranges_overlap(base, limit - base + 1,
addr_page_base,
TARGET_PAGE_SIZE)) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
continue;
}

if (base > addr_page_base || limit < addr_page_limit) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}

if (matchregion != -1) {
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,

if (matchregion == -1) {
/* hit using the background region */
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
} else {
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
xn = 1;
}

- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
- if (result->prot && !xn && !(pxn && !is_user)) {
- result->prot |= PAGE_EXEC;
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
+ if (result->f.prot && !xn && !(pxn && !is_user)) {
+ result->f.prot |= PAGE_EXEC;
}
/*
* We don't need to look the attribute up in the MAIR0/MAIR1
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,

fi->type = ARMFault_Permission;
fi->level = 1;
- return !(result->prot & (1 << access_type));
+ return !(result->f.prot & (1 << access_type));
}

static bool v8m_is_sau_exempt(CPUARMState *env,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
} else {
fi->type = ARMFault_QEMU_SFault;
}
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
return true;
}
} else {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
* might downgrade a secure access to nonsecure.
*/
if (sattrs.ns) {
- result->attrs.secure = false;
+ result->f.attrs.secure = false;
} else if (!secure) {
/*
* NS access to S memory must fault.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
*/
fi->type = ARMFault_QEMU_SFault;
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
- result->phys = address;
- result->prot = 0;
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
+ result->f.phys_addr = address;
+ result->f.prot = 0;
return true;
}
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure,
result, fi, NULL);
if (sattrs.subpage) {
- result->page_size = 1;
+ result->f.lg_page_size = 0;
}
return ret;
}

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
result->cacheattrs.is_s2_format = false;
}

- result->phys = address;
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
- result->page_size = TARGET_PAGE_SIZE;
+ result->f.phys_addr = address;
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ result->f.lg_page_size = TARGET_PAGE_BITS;
result->cacheattrs.shareability = shareability;
result->cacheattrs.attrs = memattr;
return 0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
return ret;
}

- ipa = result->phys;
- ipa_secure = result->attrs.secure;
+ ipa = result->f.phys_addr;
+ ipa_secure = result->f.attrs.secure;
if (is_secure) {
/* Select TCR based on the NS bit from the S1 walk. */
s2walk_secure = !(ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* Save the stage1 results so that we may merge
* prot and cacheattrs later.
*/
- s1_prot = result->prot;
+ s1_prot = result->f.prot;
cacheattrs1 = result->cacheattrs;
memset(result, 0, sizeof(*result));

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
fi->s2addr = ipa;

/* Combine the S1 and S2 perms. */
- result->prot &= s1_prot;
+ result->f.prot &= s1_prot;

/* If S2 fails, return early. */
if (ret) {
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* Check if IPA translates to secure or non-secure PA space.
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
*/
- result->attrs.secure =
+ result->f.attrs.secure =
(is_secure
&& !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
&& (ipa_secure
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
* cannot upgrade an non-secure translation regime's attributes
* to secure.
*/
- result->attrs.secure = is_secure;
- result->attrs.user = regime_is_user(env, mmu_idx);
+ result->f.attrs.secure = is_secure;
+ result->f.attrs.user = regime_is_user(env, mmu_idx);

/*
* Fast Context Switch Extension. This doesn't exist at all in v8.
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,

if (arm_feature(env, ARM_FEATURE_PMSA)) {
bool ret;
- result->page_size = TARGET_PAGE_SIZE;
+ result->f.lg_page_size = TARGET_PAGE_BITS;

if (arm_feature(env, ARM_FEATURE_V8)) {
/* PMSAv8 */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
(access_type == MMU_DATA_STORE ? "writing" : "execute"),
(uint32_t)address, mmu_idx,
ret ? "Miss" : "Hit",
- result->prot & PAGE_READ ? 'r' : '-',
- result->prot & PAGE_WRITE ? 'w' : '-',
- result->prot & PAGE_EXEC ? 'x' : '-');
+ result->f.prot & PAGE_READ ? 'r' : '-',
+ result->f.prot & PAGE_WRITE ? 'w' : '-',
+ result->f.prot & PAGE_EXEC ? 'x' : '-');

return ret;
}
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
bool ret;

ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
- *attrs = res.attrs;
+ *attrs = res.f.attrs;

if (ret) {
return -1;
}
- return res.phys;
+ return res.f.phys_addr;
}

diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
* target page size are handled specially, so for those we
* pass in the exact addresses.
*/
- if (res.page_size >= TARGET_PAGE_SIZE) {
- res.phys &= TARGET_PAGE_MASK;
+ if (res.f.lg_page_size >= TARGET_PAGE_BITS) {
+ res.f.phys_addr &= TARGET_PAGE_MASK;
address &= TARGET_PAGE_MASK;
}
/* Notice and record tagged memory. */
if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
- arm_tlb_mte_tagged(&res.attrs) = true;
+ arm_tlb_mte_tagged(&res.f.attrs) = true;
}

- tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
- res.prot, mmu_idx, res.page_size);
+ tlb_set_page_full(cs, mmu_idx, address, &res.f);
return true;
} else if (probe) {
return false;
--
2.25.1
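[Editor's note: a brief aside on the page_size -> lg_page_size conversion in the
patch above. CPUTLBEntryFull stores the log2 of the page size, so the patch
replaces literal sizes (0x1000, 0x10000, 1 << 24) with their exponents
(12, 16, 24). A minimal sketch of the round-trip; the helper names here are
hypothetical, not QEMU API:]

#include <stdint.h>

/* Hypothetical helpers illustrating how a stored log2 page size maps
 * back to the size and mask used elsewhere in the patch.
 */
static inline uint64_t lg_to_page_size(unsigned lg_page_size)
{
    return UINT64_C(1) << lg_page_size;          /* 12 -> 4K, 16 -> 64K, 24 -> 16MB */
}

static inline uint64_t lg_to_page_mask(unsigned lg_page_size)
{
    return ~(lg_to_page_size(lg_page_size) - 1); /* e.g. 12 -> ~0xfff */
}

[This is also why the lpae path can write result->f.lg_page_size = ctz64(page_size):
for a power of two, count-trailing-zeros is exactly the exponent.]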
From: Jerome Forissier <jerome.forissier@linaro.org>

According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and
SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME
is advertised. This has to be taken care of when QEMU boots directly
into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image").

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
if (cpu_isar_feature(aa64_sve, cpu)) {
env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
}
+ if (cpu_isar_feature(aa64_sme, cpu)) {
+ env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
+ env->cp15.scr_el3 |= SCR_ENTP2;
+ }
/* AArch64 kernels never boot in secure mode */
assert(!info->secure_boot);
/* This hook is only supported for AArch32 currently:
--
2.25.1
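[Editor's note: for context on the hunk above, the two fields live at
CPTR_EL3.ESM (bit 12) and SCR_EL3.EnTP2 (bit 41); those bit positions are my
reading of the Arm ARM, and the macros below are local stand-ins rather than
QEMU's definitions. A standalone sketch of the same effect:]

#include <stdint.h>

#define CPTR_EL3_ESM   (UINT64_C(1) << 12)  /* don't trap SME instructions */
#define SCR_EL3_ENTP2  (UINT64_C(1) << 41)  /* allow TPIDR2_EL0 at lower ELs */

/* Mirrors what the do_cpu_reset() hunk does to the two EL3 system
 * registers when FEAT_SME is implemented and QEMU boots a kernel directly.
 */
static void el3_init_for_sme(uint64_t *cptr_el3, uint64_t *scr_el3)
{
    *cptr_el3 |= CPTR_EL3_ESM;
    *scr_el3  |= SCR_EL3_ENTP2;
}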
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K. The guest selects the one it wants using bits in the TCR_ELx
registers. If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 33 +++++++++++++
target/arm/internals.h | 9 ++++
target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++----
3 files changed, 136 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
}

+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
+}
+
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
+}
+
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
return valid;
}

+/* Granule size (i.e. page size) */
+typedef enum ARMGranuleSize {
+ /* Same order as TG0 encoding */
+ Gran4K,
+ Gran64K,
+ Gran16K,
+ GranInvalid,
+} ARMGranuleSize;
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
}
}

+static ARMGranuleSize tg0_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 0:
+ return Gran4K;
+ case 1:
+ return Gran64K;
+ case 2:
+ return Gran16K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static ARMGranuleSize tg1_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 1:
+ return Gran16K;
+ case 2:
+ return Gran4K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static inline bool have4k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu)
+ : cpu_isar_feature(aa64_tgran4, cpu);
+}
+
+static inline bool have16k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu)
+ : cpu_isar_feature(aa64_tgran16, cpu);
+}
+
+static inline bool have64k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu)
+ : cpu_isar_feature(aa64_tgran64, cpu);
+}
+
+static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran,
+ bool stage2)
+{
+ switch (gran) {
+ case Gran4K:
+ if (have4k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran16K:
+ if (have16k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran64K:
+ if (have64k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case GranInvalid:
+ break;
+ }
+ /*
+ * If the guest selects a granule size that isn't implemented,
+ * the architecture requires that we behave as if it selected one
+ * that is (with an IMPDEF choice of which one to pick). We choose
+ * to implement the smallest supported granule size.
+ */
+ if (have4k(cpu, stage2)) {
+ return Gran4K;
+ }
+ if (have16k(cpu, stage2)) {
+ return Gran16K;
+ }
+ assert(have64k(cpu, stage2));
+ return Gran64K;
+}
+
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
bool epd, hpd, using16k, using64k, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
+ ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
+ bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;

if (!regime_has_2_ranges(mmu_idx)) {
select = 0;
tsz = extract32(tcr, 0, 6);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
+ if (stage2) {
/* VTCR_EL2 */
hpd = false;
} else {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
select = extract64(va, 55, 1);
if (!select) {
tsz = extract32(tcr, 0, 6);
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
epd = extract32(tcr, 7, 1);
sh = extract32(tcr, 12, 2);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
hpd = extract64(tcr, 41, 1);
} else {
- int tg = extract32(tcr, 30, 2);
- using16k = tg == 1;
- using64k = tg == 3;
tsz = extract32(tcr, 16, 6);
+ gran = tg1_to_gran_size(extract32(tcr, 30, 2));
epd = extract32(tcr, 23, 1);
sh = extract32(tcr, 28, 2);
hpd = extract64(tcr, 42, 1);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ds = extract64(tcr, 59, 1);
}

+ gran = sanitize_gran_size(cpu, gran, stage2);
+ using64k = gran == Gran64K;
+ using16k = gran == Gran16K;
+
if (cpu_isar_feature(aa64_st, cpu)) {
max_tsz = 48 - using64k;
} else {
--
2.25.1
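[Editor's note: a worked example of the sanitize_gran_size() fallback in the
patch above. Suppose a CPU implements only 4K and 64K granules and the guest
programs TG0 = 2 (16K): tg0_to_gran_size() yields Gran16K, the Gran16K arm of
the switch fails its have16k() check, and the walk proceeds with Gran4K, the
smallest implemented size. A minimal self-contained sketch of the same
selection logic; the names here are stand-ins, not the QEMU code:]

#include <assert.h>
#include <stdbool.h>

typedef enum { Gran4K, Gran64K, Gran16K, GranInvalid } GranuleSize;

/* Stand-in for the cpu_isar_feature() checks: which granules exist. */
typedef struct { bool has4k, has16k, has64k; } GranuleSupport;

static GranuleSize sanitize(GranuleSupport s, GranuleSize requested)
{
    if ((requested == Gran4K && s.has4k) ||
        (requested == Gran16K && s.has16k) ||
        (requested == Gran64K && s.has64k)) {
        return requested;   /* guest asked for an implemented size */
    }
    /* IMPDEF fallback: smallest implemented granule, as in the patch. */
    if (s.has4k) {
        return Gran4K;
    }
    if (s.has16k) {
        return Gran16K;
    }
    assert(s.has64k);
    return Gran64K;
}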
Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
---
target/arm/internals.h | 23 +++++++++++++++++++++--
target/arm/helper.c | 39 ++++++++++++++++++++++++++++-----------
target/arm/ptw.c | 8 +-------
3 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize {
GranInvalid,
} ARMGranuleSize;

+/**
+ * arm_granule_bits: Return address size of the granule in bits
+ *
+ * Return the address size of the granule in bits. This corresponds
+ * to the pseudocode TGxGranuleBits().
+ */
+static inline int arm_granule_bits(ARMGranuleSize gran)
+{
+ switch (gran) {
+ case Gran64K:
+ return 16;
+ case Gran16K:
+ return 14;
+ case Gran4K:
+ return 12;
+ default:
+ g_assert_not_reached();
+ }
+}
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
bool tbi : 1;
bool epd : 1;
bool hpd : 1;
- bool using16k : 1;
- bool using64k : 1;
bool tsz_oob : 1; /* tsz has been clamped to legal range */
bool ds : 1;
+ ARMGranuleSize gran : 2;
} ARMVAParameters;

ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
uint64_t length;
} TLBIRange;

+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
+{
+ /*
+ * Note that the TLBI range TG field encoding differs from both
+ * TG0 and TG1 encodings.
+ */
+ switch (tg) {
+ case 1:
+ return Gran4K;
+ case 2:
+ return Gran16K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t select = sextract64(value, 36, 1);
ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true);
TLBIRange ret = { };
+ ARMGranuleSize gran;

page_size_granule = extract64(value, 46, 2);
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);

/* The granule encoded in value must match the granule in use. */
- if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) {
+ if (gran != param.gran) {
qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
page_size_granule);
return ret;
}

- page_shift = (page_size_granule - 1) * 2 + 12;
+ page_shift = arm_granule_bits(gran);
num = extract64(value, 39, 5);
scale = extract64(value, 44, 2);
exponent = (5 * scale) + 1;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
- bool epd, hpd, using16k, using64k, tsz_oob, ds;
+ bool epd, hpd, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
}

gran = sanitize_gran_size(cpu, gran, stage2);
- using64k = gran == Gran64K;
- using16k = gran == Gran16K;

if (cpu_isar_feature(aa64_st, cpu)) {
- max_tsz = 48 - using64k;
+ max_tsz = 48 - (gran == Gran64K);
} else {
max_tsz = 39;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
* adjust the effective value of DS, as documented.
*/
min_tsz = 16;
- if (using64k) {
+ if (gran == Gran64K) {
if (cpu_isar_feature(aa64_lva, cpu)) {
min_tsz = 12;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
switch (mmu_idx) {
case ARMMMUIdx_Stage2:
case ARMMMUIdx_Stage2_S:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
}
break;
default:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
.tbi = tbi,
.epd = epd,
.hpd = hpd,
- .using16k = using16k,
- .using64k = using64k,
.tsz_oob = tsz_oob,
.ds = ds,
+ .gran = gran,
};
}

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
}
}

- if (param.using64k) {
- stride = 13;
- } else if (param.using16k) {
- stride = 11;
- } else {
- stride = 9;
- }
+ stride = arm_granule_bits(param.gran) - 3;

/*
* Note that QEMU ignores shareability and cacheability attributes,
--
2.25.1
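[Editor's note: the `stride = arm_granule_bits(param.gran) - 3` line above
compresses the old three-way if. Each translation table occupies one granule
and each descriptor is 8 bytes (2^3), so a 2^N-byte granule holds 2^(N-3)
descriptors and each walk level resolves N-3 address bits: 4K -> 9, 16K -> 11,
64K -> 13. A hedged sketch of the arithmetic; the function name is
hypothetical:]

/* Address bits resolved per table level = granule_bits - log2(8). */
static int level_stride(int granule_bits)
{
    return granule_bits - 3;   /* 12 -> 9, 14 -> 11, 16 -> 13 */
}

/* For reference, the three distinct TG encodings the series keeps apart,
 * as read off the tg0_/tg1_/tlbi_range_tg_to_gran_size() helpers above:
 *   TCR TG0:       0 = 4K, 1 = 64K, 2 = 16K
 *   TCR TG1:       1 = 16K, 2 = 4K, 3 = 64K
 *   TLBI range TG: 1 = 4K, 2 = 16K, 3 = 64K
 */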
FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it
can report a different set of supported granule (page) sizes for
stage 1 and stage 2 translation tables. As of commit c20281b2a5048
we already report the granule sizes that way for '-cpu max', and now
we also correctly make attempts to use unimplemented granule sizes
fail, so we can report the support of the feature in the
documentation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
1 file changed, 1 insertion(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_FRINTTS (Floating-point to integer instructions)
- FEAT_FlagM (Flag manipulation instructions v2)
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_GTG (Guest translation granule size)
- FEAT_HCX (Support for the HCRX_EL2 register)
- FEAT_HPDS (Hierarchical permission disables)
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
--
2.25.1