Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

thanks
-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst | 1 +
 docs/system/arm/nuvoton.rst | 4 +-
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 181 ++++++++------
 target/arm/internals.h | 150 ++++++-----
 hw/arm/boot.c | 4 +
 target/arm/helper.c | 332 ++++++++++++++----------
 target/arm/kvm.c | 4 +-
 target/arm/m_helper.c | 29 ++-
 target/arm/ptw.c | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c | 9 +-
 target/arm/translate-a64.c | 8 -
 target/arm/translate.c | 9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)

Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
if (max_vm_pa_size < 0) {
max_vm_pa_size = 0;
}
- vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+ do {
+ vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+ } while (vmfd == -1 && errno == EINTR);
if (vmfd < 0) {
goto err;
}
--
2.25.1
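
(As an aside, the retry-on-EINTR idiom used here is plain POSIX and easy to
exercise in isolation; the sketch below is illustrative only and is not code
from the QEMU tree -- the wrapper name and arguments are made up for the
example.)

    #include <errno.h>
    #include <sys/ioctl.h>

    /*
     * Illustrative sketch of the retry idiom applied in the patch above:
     * repeat the ioctl while it fails with EINTR, and only treat other
     * failures as real errors.
     */
    static int ioctl_retry_eintr(int fd, unsigned long request, unsigned long arg)
    {
        int ret;

        do {
            ret = ioctl(fd, request, arg);
        } while (ret == -1 && errno == EINTR);

        return ret;
    }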

From: Jerome Forissier <jerome.forissier@linaro.org>

Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 54 ++++++++++++++++++++++-----------------------
 target/arm/helper.c | 5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)

#define HPFAR_NS (1ULL << 63)

-#define SCR_NS (1U << 0)
-#define SCR_IRQ (1U << 1)
-#define SCR_FIQ (1U << 2)
-#define SCR_EA (1U << 3)
-#define SCR_FW (1U << 4)
-#define SCR_AW (1U << 5)
-#define SCR_NET (1U << 6)
-#define SCR_SMD (1U << 7)
-#define SCR_HCE (1U << 8)
-#define SCR_SIF (1U << 9)
-#define SCR_RW (1U << 10)
-#define SCR_ST (1U << 11)
-#define SCR_TWI (1U << 12)
-#define SCR_TWE (1U << 13)
-#define SCR_TLOR (1U << 14)
-#define SCR_TERR (1U << 15)
-#define SCR_APK (1U << 16)
-#define SCR_API (1U << 17)
-#define SCR_EEL2 (1U << 18)
-#define SCR_EASE (1U << 19)
-#define SCR_NMEA (1U << 20)
-#define SCR_FIEN (1U << 21)
-#define SCR_ENSCXT (1U << 25)
-#define SCR_ATA (1U << 26)
-#define SCR_FGTEN (1U << 27)
-#define SCR_ECVEN (1U << 28)
-#define SCR_TWEDEN (1U << 29)
+#define SCR_NS (1ULL << 0)
+#define SCR_IRQ (1ULL << 1)
+#define SCR_FIQ (1ULL << 2)
+#define SCR_EA (1ULL << 3)
+#define SCR_FW (1ULL << 4)
+#define SCR_AW (1ULL << 5)
+#define SCR_NET (1ULL << 6)
+#define SCR_SMD (1ULL << 7)
+#define SCR_HCE (1ULL << 8)
+#define SCR_SIF (1ULL << 9)
+#define SCR_RW (1ULL << 10)
+#define SCR_ST (1ULL << 11)
+#define SCR_TWI (1ULL << 12)
+#define SCR_TWE (1ULL << 13)
+#define SCR_TLOR (1ULL << 14)
+#define SCR_TERR (1ULL << 15)
+#define SCR_APK (1ULL << 16)
+#define SCR_API (1ULL << 17)
+#define SCR_EEL2 (1ULL << 18)
+#define SCR_EASE (1ULL << 19)
+#define SCR_NMEA (1ULL << 20)
+#define SCR_FIEN (1ULL << 21)
+#define SCR_ENSCXT (1ULL << 25)
+#define SCR_ATA (1ULL << 26)
+#define SCR_FGTEN (1ULL << 27)
+#define SCR_ECVEN (1ULL << 28)
+#define SCR_TWEDEN (1ULL << 29)
#define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
#define SCR_TME (1ULL << 34)
#define SCR_AMVOFFEN (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
{
/* Begin with base v8.0 state. */
- uint32_t valid_mask = 0x3fff;
+ uint64_t valid_mask = 0x3fff;
ARMCPU *cpu = env_archcpu(env);

/*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
if (cpu_isar_feature(aa64_doublefault, cpu)) {
valid_mask |= SCR_EASE | SCR_NMEA;
}
+ if (cpu_isar_feature(aa64_sme, cpu)) {
+ valid_mask |= SCR_ENTP2;
+ }
} else {
valid_mask &= ~(SCR_RW | SCR_ST);
if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1
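
(Why widening the constants matters: with a 32-bit 1U constant, bitwise not
is computed in 32 bits before being zero-extended, so masking a 64-bit
register value clears its upper half. A small standalone demonstration --
not QEMU code, the bit positions are made up for the example:)

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 64-bit register value with one bit set in the upper half. */
        uint64_t scr = (1ULL << 41) | 1;

        /*
         * ~(1U << 0) is computed as a 32-bit value (0xFFFFFFFE) and only
         * then zero-extended, so the AND also wipes bits 32..63.
         */
        uint64_t wrong = scr & ~(1U << 0);
        /* With a 64-bit constant the upper bits survive. */
        uint64_t right = scr & ~(1ULL << 0);

        printf("wrong = 0x%016" PRIx64 "\n", wrong); /* 0x0000000000000000 */
        printf("right = 0x%016" PRIx64 "\n", right); /* 0x0000020000000000 */
        return 0;
    }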

From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options

The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :

- https://openpower.xyz/
+ https://jenkins.openbmc.org/

The firmware image should be attached as an MTD drive. Example :

--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
hwaddr ipa;
int s1_prot;
int ret;
- bool ipa_secure;
+ bool ipa_secure, s2walk_secure;
ARMCacheAttrs cacheattrs1;
ARMMMUIdx s2_mmu_idx;
bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

ipa = result->phys;
ipa_secure = result->attrs.secure;
- if (arm_is_secure_below_el3(env)) {
- if (ipa_secure) {
- result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
- } else {
- result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
- }
+ if (is_secure) {
+ /* Select TCR based on the NS bit from the S1 walk. */
+ s2walk_secure = !(ipa_secure
+ ? env->cp15.vstcr_el2 & VSTCR_SW
+ : env->cp15.vtcr_el2 & VTCR_NSW);
} else {
assert(!ipa_secure);
+ s2walk_secure = false;
}

- s2_mmu_idx = (result->attrs.secure
+ s2_mmu_idx = (s2walk_secure
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
result->cacheattrs);

/* Check if IPA translates to secure or non-secure PA space. */
- if (arm_is_secure_below_el3(env)) {
+ if (is_secure) {
if (ipa_secure) {
result->attrs.secure =
!(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
result->cacheattrs);

- /* Check if IPA translates to secure or non-secure PA space. */
- if (is_secure) {
- if (ipa_secure) {
- result->attrs.secure =
- !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
- } else {
- result->attrs.secure =
- !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
- || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
- }
- }
+ /*
+ * Check if IPA translates to secure or non-secure PA space.
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+ */
+ result->attrs.secure =
+ (is_secure
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+ && (ipa_secure
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
return 0;
} else {
/*
--
2.25.1
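
(The collapsed expression is equivalent to the removed if/else ladder; a
quick brute-force check over the relevant booleans confirms it. This is a
standalone sketch with the VSTCR/VTCR register tests reduced to plain flags,
not QEMU code:)

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* sw: VSTCR.{SA,SW} set; nsw: VTCR.{NSA,NSW} set */
        for (int m = 0; m < 16; m++) {
            bool is_secure  = m & 1;
            bool ipa_secure = m & 2;
            bool sw         = m & 4;
            bool nsw        = m & 8;

            /* A non-secure walk never produces a secure IPA. */
            if (!is_secure && ipa_secure) {
                continue;
            }

            /* Old form: nested if/else. */
            bool old_form;
            if (is_secure) {
                old_form = ipa_secure ? !sw : !(nsw || sw);
            } else {
                old_form = false;
            }

            /* New form: single boolean expression. */
            bool new_form = is_secure && !sw && (ipa_secure || !nsw);

            assert(old_form == new_form);
        }
        printf("old and new attrs.secure computations agree\n");
        return 0;
    }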

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@

static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool s1_is_el0, GetPhysAddrResult *result,
- ARMMMUFaultInfo *fi)
+ bool is_secure, bool s1_is_el0,
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
__attribute__((nonnull));

/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
GetPhysAddrResult s2 = {};
int ret;

- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
- &s2, fi);
+ ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+ *is_secure, false, &s2, fi);
if (ret) {
assert(fi->type != ARMFault_None);
fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
*/
static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool s1_is_el0, GetPhysAddrResult *result,
- ARMMMUFaultInfo *fi)
+ bool is_secure, bool s1_is_el0,
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
{
ARMCPU *cpu = env_archcpu(env);
/* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
* remain non-secure. We implement this by just ORing in the NSTable/NS
* bits at each step.
*/
- tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+ tableattrs = is_secure ? 0 : (1 << 4);
for (;;) {
uint64_t descriptor;
bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
memset(result, 0, sizeof(*result));

ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
- is_el0, result, fi);
+ s2walk_secure, is_el0, result, fi);
fi->s2addr = ipa;

/* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
}

if (regime_using_lpae_format(env, mmu_idx)) {
- return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
- result, fi);
+ return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+ is_secure, false, result, fi);
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
return get_phys_addr_v6(env, address, access_type, mmu_idx,
is_secure, result, fi);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
hwaddr addr, bool *is_secure,
ARMMMUFaultInfo *fi)
{
+ ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
- !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
- : ARMMMUIdx_Stage2;
+ !regime_translation_disabled(env, s2_mmu_idx)) {
GetPhysAddrResult s2 = {};
int ret;

--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
}

/* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+ bool is_secure)
{
uint64_t hcr_el2;

if (arm_feature(env, ARM_FEATURE_M)) {
- switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+ switch (env->v7m.mpu_ctrl[is_secure] &
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
case R_V7M_MPU_CTRL_ENABLE_MASK:
/* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)

if (hcr_el2 & HCR_TGE) {
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
- if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+ if (!is_secure && regime_el(env, mmu_idx) == 1) {
return true;
}
}
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
- !regime_translation_disabled(env, s2_mmu_idx)) {
+ !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
GetPhysAddrResult s2 = {};
int ret;

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
uint32_t base;
bool is_user = regime_is_user(env, mmu_idx);

- if (regime_translation_disabled(env, mmu_idx)) {
+ if (regime_translation_disabled(env, mmu_idx, is_secure)) {
/* MPU disabled. */
result->phys = address;
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
result->page_size = TARGET_PAGE_SIZE;
result->prot = 0;

- if (regime_translation_disabled(env, mmu_idx) ||
+ if (regime_translation_disabled(env, mmu_idx, secure) ||
m_is_ppb_region(env, address)) {
/*
* MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
* are done in arm_v7m_load_vector(), which always does a direct
* read using address_space_ldl(), rather than going via this function.
*/
- if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+ if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
hit = true;
} else if (m_is_ppb_region(env, address)) {
hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
result, fi);

/* If S1 fails or S2 is disabled, return early. */
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+ is_secure)) {
return ret;
}

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

/* Definitely a real MMU, not an MPU */

- if (regime_translation_disabled(env, mmu_idx)) {
+ if (regime_translation_disabled(env, mmu_idx, is_secure)) {
uint64_t hcr;
uint8_t memattr;

--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++
 target/arm/ptw.c | 44 ++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
ARMCacheAttrs cacheattrs;
} GetPhysAddrResult;

+/**
+ * get_phys_addr_with_secure: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ * * we honour the short vs long DFSR format differences.
+ * * the WnR bit is never set (the caller must do this).
+ * * for PSMAv5 based systems we don't bother to return a full FSR format
+ * value.
+ */
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type,
+ ARMMMUIdx mmu_idx, bool is_secure,
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+ __attribute__((nonnull));
+
+/**
+ * get_phys_addr: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Similarly, but use the security regime of @mmu_idx.
+ */
bool get_phys_addr(CPUARMState *env, target_ulong address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
return ret;
}

-/**
- * get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- * * we honour the short vs long DFSR format differences.
- * * the WnR bit is never set (the caller must do this).
- * * for PSMAv5 based systems we don't bother to return a full FSR format
- * value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @result: set on translation success.
- * @fi: set to fault info if the translation fails
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
+ bool is_secure, GetPhysAddrResult *result,
+ ARMMMUFaultInfo *fi)
{
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
- bool is_secure = regime_is_secure(env, mmu_idx);

if (mmu_idx != s1_mmu_idx) {
/*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
ARMMMUIdx s2_mmu_idx;
bool is_el0;

- ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
- result, fi);
+ ret = get_phys_addr_with_secure(env, address, access_type,
+ s1_mmu_idx, is_secure, result, fi);

/* If S1 fails or S2 is disabled, return early. */
if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
}
}

+bool get_phys_addr(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+{
+ return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
+ regime_is_secure(env, mmu_idx),
+ result, fi);
+}
+
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
MemTxAttrs *attrs)
{
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
return true;
}

-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
uint32_t addr, uint16_t *insn)
{
/*
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
ARMMMUFaultInfo fi = {};
MemTxResult txres;

- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
- regime_is_secure(env, mmu_idx), &sattrs);
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
if (!sattrs.nsc || sattrs.ns) {
/*
* This must be the second half of the insn, and it straddles a
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
/* We want to do the MPU lookup as secure; work out what mmu_idx that is */
mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);

- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
goto gen_invep;
}

- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
return false;
}

--
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Misaligned thumb PC is architecturally impossible.
3
Remove the use of regime_is_secure from arm_tr_init_disas_context.
4
Assert is better than proceeding, in case we've missed
4
Instead, provide the value of v8m_secure directly from tb_flags.
5
something somewhere.
5
Rather than use regime_is_secure, use the env->v7m.secure directly,
6
6
as per arm_mmu_idx_el.
7
Expand a comment about aligning the pc in gdbstub.
8
Fail an incoming migrate if a thumb pc is misaligned.
9
7
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
12
---
14
target/arm/gdbstub.c | 9 +++++++--
13
target/arm/cpu.h | 2 ++
15
target/arm/machine.c | 10 ++++++++++
14
target/arm/helper.c | 4 ++++
16
target/arm/translate.c | 3 +++
15
target/arm/translate.c | 3 +--
17
3 files changed, 20 insertions(+), 2 deletions(-)
16
3 files changed, 7 insertions(+), 2 deletions(-)
18
17
19
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
18
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/gdbstub.c
20
--- a/target/arm/cpu.h
22
+++ b/target/arm/gdbstub.c
21
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
22
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
24
23
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
25
tmp = ldl_p(mem_buf);
24
/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
26
25
FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
27
- /* Mask out low bit of PC to workaround gdb bugs. This will probably
26
+/* Set if in secure mode */
28
- cause problems if we ever implement the Jazelle DBX extensions. */
27
+FIELD(TBFLAG_M32, SECURE, 6, 1)
29
+ /*
28
30
+ * Mask out low bits of PC to workaround gdb bugs.
29
/*
31
+ * This avoids an assert in thumb_tr_translate_insn, because it is
30
* Bit usage when in AArch64 state
32
+ * architecturally impossible to misalign the pc.
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
33
+ * This will probably cause problems if we ever implement the
32
index XXXXXXX..XXXXXXX 100644
34
+ * Jazelle DBX extensions.
33
--- a/target/arm/helper.c
35
+ */
34
+++ b/target/arm/helper.c
36
if (n == 15) {
35
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
37
tmp &= ~1;
36
DP_TBFLAG_M32(flags, STACKCHECK, 1);
38
}
37
}
39
diff --git a/target/arm/machine.c b/target/arm/machine.c
38
40
index XXXXXXX..XXXXXXX 100644
39
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
41
--- a/target/arm/machine.c
40
+ DP_TBFLAG_M32(flags, SECURE, 1);
42
+++ b/target/arm/machine.c
43
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
44
return -1;
45
}
46
}
47
+
48
+ /*
49
+ * Misaligned thumb pc is architecturally impossible.
50
+ * We have an assert in thumb_tr_translate_insn to verify this.
51
+ * Fail an incoming migrate to avoid this assert.
52
+ */
53
+ if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
54
+ return -1;
55
+ }
41
+ }
56
+
42
+
57
if (!kvm_enabled()) {
43
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
58
pmu_op_finish(&cpu->env);
44
}
59
}
45
60
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
diff --git a/target/arm/translate.c b/target/arm/translate.c
61
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate.c
48
--- a/target/arm/translate.c
63
+++ b/target/arm/translate.c
49
+++ b/target/arm/translate.c
64
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
50
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
65
uint32_t insn;
51
dc->vfp_enabled = 1;
66
bool is_16bit;
52
dc->be_data = MO_TE;
67
53
dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
68
+ /* Misaligned thumb PC is architecturally impossible. */
54
- dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
69
+ assert((dc->base.pc_next & 1) == 0);
55
- regime_is_secure(env, dc->mmu_idx);
70
+
56
+ dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
71
if (arm_check_ss_active(dc) || arm_check_kernelpage(dc)) {
57
dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
72
dc->base.pc_next = pc + 2;
58
dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
73
return;
59
dc->v7m_new_fp_ctxt_needed =
74
--
60
--
75
2.25.1
61
2.25.1
76
77
1
From: Patrick Venture <venture@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Changing the rx_active boolean to true should always trigger a try_read
3
This is the last use of regime_is_secure; remove it
4
call that flushes the queue, so that packets queued while reception was halted are delivered immediately instead of waiting for the next incoming packet.
4
entirely before changing the layout of ARMMMUIdx.
5
5
6
Signed-off-by: Patrick Venture <venture@google.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20211203221002.1719306-1-venture@google.com
8
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
hw/net/npcm7xx_emc.c | 18 ++++++++----------
11
target/arm/internals.h | 42 ----------------------------------------
12
1 file changed, 8 insertions(+), 10 deletions(-)
12
target/arm/ptw.c | 44 ++++++++++++++++++++++++++++++++++++++++--
13
2 files changed, 42 insertions(+), 44 deletions(-)
13
14
14
diff --git a/hw/net/npcm7xx_emc.c b/hw/net/npcm7xx_emc.c
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/net/npcm7xx_emc.c
17
--- a/target/arm/internals.h
17
+++ b/hw/net/npcm7xx_emc.c
18
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ static void emc_halt_rx(NPCM7xxEMCState *emc, uint32_t mista_flag)
19
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
19
emc_set_mista(emc, mista_flag);
20
}
20
}
21
}
21
22
22
+static void emc_enable_rx_and_flush(NPCM7xxEMCState *emc)
23
-/* Return true if this address translation regime is secure */
23
+{
24
-static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
24
+ emc->rx_active = true;
25
+ qemu_flush_queued_packets(qemu_get_queue(emc->nic));
26
+}
27
+
28
static void emc_set_next_tx_descriptor(NPCM7xxEMCState *emc,
29
const NPCM7xxEMCTxDesc *tx_desc,
30
uint32_t desc_addr)
31
@@ -XXX,XX +XXX,XX @@ static ssize_t emc_receive(NetClientState *nc, const uint8_t *buf, size_t len1)
32
return len;
33
}
34
35
-static void emc_try_receive_next_packet(NPCM7xxEMCState *emc)
36
-{
25
-{
37
- if (emc_can_receive(qemu_get_queue(emc->nic))) {
26
- switch (mmu_idx) {
38
- qemu_flush_queued_packets(qemu_get_queue(emc->nic));
27
- case ARMMMUIdx_E10_0:
28
- case ARMMMUIdx_E10_1:
29
- case ARMMMUIdx_E10_1_PAN:
30
- case ARMMMUIdx_E20_0:
31
- case ARMMMUIdx_E20_2:
32
- case ARMMMUIdx_E20_2_PAN:
33
- case ARMMMUIdx_Stage1_E0:
34
- case ARMMMUIdx_Stage1_E1:
35
- case ARMMMUIdx_Stage1_E1_PAN:
36
- case ARMMMUIdx_E2:
37
- case ARMMMUIdx_Stage2:
38
- case ARMMMUIdx_MPrivNegPri:
39
- case ARMMMUIdx_MUserNegPri:
40
- case ARMMMUIdx_MPriv:
41
- case ARMMMUIdx_MUser:
42
- return false;
43
- case ARMMMUIdx_SE3:
44
- case ARMMMUIdx_SE10_0:
45
- case ARMMMUIdx_SE10_1:
46
- case ARMMMUIdx_SE10_1_PAN:
47
- case ARMMMUIdx_SE20_0:
48
- case ARMMMUIdx_SE20_2:
49
- case ARMMMUIdx_SE20_2_PAN:
50
- case ARMMMUIdx_Stage1_SE0:
51
- case ARMMMUIdx_Stage1_SE1:
52
- case ARMMMUIdx_Stage1_SE1_PAN:
53
- case ARMMMUIdx_SE2:
54
- case ARMMMUIdx_Stage2_S:
55
- case ARMMMUIdx_MSPrivNegPri:
56
- case ARMMMUIdx_MSUserNegPri:
57
- case ARMMMUIdx_MSPriv:
58
- case ARMMMUIdx_MSUser:
59
- return true;
60
- default:
61
- g_assert_not_reached();
39
- }
62
- }
40
-}
63
-}
41
-
64
-
42
static uint64_t npcm7xx_emc_read(void *opaque, hwaddr offset, unsigned size)
65
static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
43
{
66
{
44
NPCM7xxEMCState *emc = opaque;
67
switch (mmu_idx) {
45
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_emc_write(void *opaque, hwaddr offset,
68
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
46
emc->regs[REG_MGSTA] |= REG_MGSTA_RXHA;
69
index XXXXXXX..XXXXXXX 100644
47
}
70
--- a/target/arm/ptw.c
48
if (value & REG_MCMDR_RXON) {
71
+++ b/target/arm/ptw.c
49
- emc->rx_active = true;
72
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
50
+ emc_enable_rx_and_flush(emc);
73
MMUAccessType access_type, ARMMMUIdx mmu_idx,
51
} else {
74
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
52
emc_halt_rx(emc, 0);
75
{
53
}
76
+ bool is_secure;
54
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_emc_write(void *opaque, hwaddr offset,
77
+
55
break;
78
+ switch (mmu_idx) {
56
case REG_RSDR:
79
+ case ARMMMUIdx_E10_0:
57
if (emc->regs[REG_MCMDR] & REG_MCMDR_RXON) {
80
+ case ARMMMUIdx_E10_1:
58
- emc->rx_active = true;
81
+ case ARMMMUIdx_E10_1_PAN:
59
- emc_try_receive_next_packet(emc);
82
+ case ARMMMUIdx_E20_0:
60
+ emc_enable_rx_and_flush(emc);
83
+ case ARMMMUIdx_E20_2:
61
}
84
+ case ARMMMUIdx_E20_2_PAN:
62
break;
85
+ case ARMMMUIdx_Stage1_E0:
63
case REG_MIIDA:
86
+ case ARMMMUIdx_Stage1_E1:
87
+ case ARMMMUIdx_Stage1_E1_PAN:
88
+ case ARMMMUIdx_E2:
89
+ case ARMMMUIdx_Stage2:
90
+ case ARMMMUIdx_MPrivNegPri:
91
+ case ARMMMUIdx_MUserNegPri:
92
+ case ARMMMUIdx_MPriv:
93
+ case ARMMMUIdx_MUser:
94
+ is_secure = false;
95
+ break;
96
+ case ARMMMUIdx_SE3:
97
+ case ARMMMUIdx_SE10_0:
98
+ case ARMMMUIdx_SE10_1:
99
+ case ARMMMUIdx_SE10_1_PAN:
100
+ case ARMMMUIdx_SE20_0:
101
+ case ARMMMUIdx_SE20_2:
102
+ case ARMMMUIdx_SE20_2_PAN:
103
+ case ARMMMUIdx_Stage1_SE0:
104
+ case ARMMMUIdx_Stage1_SE1:
105
+ case ARMMMUIdx_Stage1_SE1_PAN:
106
+ case ARMMMUIdx_SE2:
107
+ case ARMMMUIdx_Stage2_S:
108
+ case ARMMMUIdx_MSPrivNegPri:
109
+ case ARMMMUIdx_MSUserNegPri:
110
+ case ARMMMUIdx_MSPriv:
111
+ case ARMMMUIdx_MSUser:
112
+ is_secure = true;
113
+ break;
114
+ default:
115
+ g_assert_not_reached();
116
+ }
117
return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
118
- regime_is_secure(env, mmu_idx),
119
- result, fi);
120
+ is_secure, result, fi);
121
}
122
123
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
64
--
124
--
65
2.25.1
125
2.25.1
66
67
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Both single-step and pc alignment faults have priority over
3
Use get_phys_addr_with_secure directly. For a-profile, this is the
4
breakpoint exceptions.
4
one place where the value of is_secure may not equal arm_is_secure(env).
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/debug_helper.c | 23 +++++++++++++++++++++++
11
target/arm/helper.c | 19 ++++++++++++++-----
11
1 file changed, 23 insertions(+)
12
1 file changed, 14 insertions(+), 5 deletions(-)
12
13
13
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/debug_helper.c
16
--- a/target/arm/helper.c
16
+++ b/target/arm/debug_helper.c
17
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ bool arm_debug_check_breakpoint(CPUState *cs)
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
19
20
#ifdef CONFIG_TCG
21
static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
22
- MMUAccessType access_type, ARMMMUIdx mmu_idx)
23
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
24
+ bool is_secure)
18
{
25
{
19
ARMCPU *cpu = ARM_CPU(cs);
26
bool ret;
20
CPUARMState *env = &cpu->env;
27
uint64_t par64;
21
+ target_ulong pc;
28
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
22
int n;
29
ARMMMUFaultInfo fi = {};
30
GetPhysAddrResult res = {};
31
32
- ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi);
33
+ ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
34
+ is_secure, &res, &fi);
23
35
24
/*
36
/*
25
@@ -XXX,XX +XXX,XX @@ bool arm_debug_check_breakpoint(CPUState *cs)
37
* ATS operations only do S1 or S1+S2 translations, so we never
26
return false;
38
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
39
switch (el) {
40
case 3:
41
mmu_idx = ARMMMUIdx_SE3;
42
+ secure = true;
43
break;
44
case 2:
45
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
46
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
47
switch (el) {
48
case 3:
49
mmu_idx = ARMMMUIdx_SE10_0;
50
+ secure = true;
51
break;
52
case 2:
53
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
54
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
55
case 4:
56
/* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
57
mmu_idx = ARMMMUIdx_E10_1;
58
+ secure = false;
59
break;
60
case 6:
61
/* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
62
mmu_idx = ARMMMUIdx_E10_0;
63
+ secure = false;
64
break;
65
default:
66
g_assert_not_reached();
27
}
67
}
28
68
29
+ /*
69
- par64 = do_ats_write(env, value, access_type, mmu_idx);
30
+ * Single-step exceptions have priority over breakpoint exceptions.
70
+ par64 = do_ats_write(env, value, access_type, mmu_idx, secure);
31
+ * If single-step state is active-pending, suppress the bp.
71
32
+ */
72
A32_BANKED_CURRENT_REG_SET(env, par, par64);
33
+ if (arm_singlestep_active(env) && !(env->pstate & PSTATE_SS)) {
73
#else
34
+ return false;
74
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
+ }
75
MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
36
+
76
uint64_t par64;
37
+ /*
77
38
+ * PC alignment faults have priority over breakpoint exceptions.
78
- par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
39
+ */
79
+ /* There is no SecureEL2 for AArch32. */
40
+ pc = is_a64(env) ? env->pc : env->regs[15];
80
+ par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);
41
+ if ((is_a64(env) || !env->thumb) && (pc & 3) != 0) {
81
42
+ return false;
82
A32_BANKED_CURRENT_REG_SET(env, par, par64);
43
+ }
83
#else
44
+
84
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
45
+ /*
85
break;
46
+ * Instruction aborts have priority over breakpoint exceptions.
86
case 6: /* AT S1E3R, AT S1E3W */
47
+ * TODO: We would need to look up the page for PC and verify that
87
mmu_idx = ARMMMUIdx_SE3;
48
+ * it is present and executable.
88
+ secure = true;
49
+ */
89
break;
50
+
90
default:
51
for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
91
g_assert_not_reached();
52
if (bp_wp_matches(cpu, n, false)) {
92
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
53
return true;
93
g_assert_not_reached();
94
}
95
96
- env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
97
+ env->cp15.par_el[1] = do_ats_write(env, value, access_type,
98
+ mmu_idx, secure);
99
#else
100
/* Handled by hardware accelerator. */
101
g_assert_not_reached();
54
--
102
--
55
2.25.1
103
2.25.1
56
57
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For A64, any input to an indirect branch can cause this fault.
3
For a-profile aarch64, which does not bank system registers, it takes
4
quite a lot of code to switch between security states. In the process,
5
registers such as TCR_EL{1,2} must be swapped, which in itself requires
6
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
7
separate tlbs by security state.
4
8
5
For A32, many indirect branch paths force the branch to be aligned,
9
Retain the distinction between Stage2 and Stage2_S.
6
but BXWritePC does not. This includes the BX instruction but also
7
other interworking changes to PC. Prior to v8, this case is UNDEFINED.
8
With v8, this is CONSTRAINED UNPREDICTABLE and may either raise an
9
exception or force align the PC.
10
10
11
We choose to raise an exception because we have the infrastructure,
11
This will be important as we implement FEAT_RME, and do not wish to
12
it makes the generated code for gen_bx simpler, and it has the
12
add a third set of mmu indexes for Realm state.
13
possibility of catching more guest bugs.
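
As a rough guest-side illustration (not part of this patch; the complete versions, with a SIGBUS handler checking for BUS_ADRALN, are the pcalign tests added later in this series), an A32 sequence like the following now takes the alignment exception rather than the force-align behaviour. It must be built as ARM code (e.g. with -marm); the "+ 2" label offset is just a convenient way to produce an address with bit 1 set and bit 0 clear:

    /* Illustrative sketch only: interwork to a 4-byte-misaligned
     * ARM-state address.  Bit 0 clear keeps us in A32, so BXWritePC
     * leaves the PC misaligned and QEMU now raises the PC-alignment
     * exception (reported as SIGBUS/BUS_ADRALN under linux-user).
     */
    int main(void)
    {
        void *tmp;

        asm volatile("adr %0, 1f + 2\n\t"
                     "bx  %0\n"
                     "1:\n\t"
                     "nop"
                     : "=&r"(tmp));
        return 0;
    }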
14
13
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
18
---
19
target/arm/helper.h | 1 +
19
target/arm/cpu-param.h | 2 +-
20
target/arm/syndrome.h | 5 ++++
20
target/arm/cpu.h | 72 +++++++------------
21
linux-user/aarch64/cpu_loop.c | 46 ++++++++++++++++++++---------------
21
target/arm/internals.h | 31 +-------
22
target/arm/tlb_helper.c | 18 ++++++++++++++
22
target/arm/helper.c | 144 +++++++++++++------------------------
23
target/arm/translate-a64.c | 15 ++++++++++++
23
target/arm/ptw.c | 25 ++-----
24
target/arm/translate.c | 22 ++++++++++++++++-
24
target/arm/translate-a64.c | 8 ---
25
6 files changed, 87 insertions(+), 20 deletions(-)
25
target/arm/translate.c | 6 +-
26
7 files changed, 85 insertions(+), 203 deletions(-)
26
27
27
diff --git a/target/arm/helper.h b/target/arm/helper.h
28
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
28
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.h
30
--- a/target/arm/cpu-param.h
30
+++ b/target/arm/helper.h
31
+++ b/target/arm/cpu-param.h
31
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sel_flags, TCG_CALL_NO_RWG_SE,
32
@@ -XXX,XX +XXX,XX @@
32
DEF_HELPER_2(exception_internal, void, env, i32)
33
# define TARGET_PAGE_BITS_MIN 10
33
DEF_HELPER_4(exception_with_syndrome, void, env, i32, i32, i32)
34
#endif
34
DEF_HELPER_2(exception_bkpt_insn, void, env, i32)
35
35
+DEF_HELPER_2(exception_pc_alignment, noreturn, env, tl)
36
-#define NB_MMU_MODES 15
36
DEF_HELPER_1(setend, void, env)
37
+#define NB_MMU_MODES 8
37
DEF_HELPER_2(wfi, void, env, i32)
38
38
DEF_HELPER_1(wfe, void, env)
39
#endif
39
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
40
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
40
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/syndrome.h
42
--- a/target/arm/cpu.h
42
+++ b/target/arm/syndrome.h
43
+++ b/target/arm/cpu.h
43
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_illegalstate(void)
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
44
return (EC_ILLEGALSTATE << ARM_EL_EC_SHIFT) | ARM_EL_IL;
45
* table over and over.
45
}
46
* 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
46
47
* Never (PAN) bit within PSTATE.
47
+static inline uint32_t syn_pcalignment(void)
48
+ * 7. we fold together the secure and non-secure regimes for A-profile,
48
+{
49
+ * because there are no banked system registers for aarch64, so the
49
+ return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
50
+ * process of switching between secure and non-secure is
50
+}
51
+ * already heavyweight.
51
+
52
*
52
#endif /* TARGET_ARM_SYNDROME_H */
53
* This gives us the following list of cases:
53
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
54
*
55
- * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
56
- * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
57
- * NS EL1 EL1&0 stage 1+2 +PAN
58
- * NS EL0 EL2&0
59
- * NS EL2 EL2&0
60
- * NS EL2 EL2&0 +PAN
61
- * NS EL2 (aka NS PL2)
62
- * S EL0 EL1&0 (aka S PL0)
63
- * S EL1 EL1&0 (not used if EL3 is 32 bit)
64
- * S EL1 EL1&0 +PAN
65
- * S EL3 (aka S PL1)
66
+ * EL0 EL1&0 stage 1+2 (aka NS PL0)
67
+ * EL1 EL1&0 stage 1+2 (aka NS PL1)
68
+ * EL1 EL1&0 stage 1+2 +PAN
69
+ * EL0 EL2&0
70
+ * EL2 EL2&0
71
+ * EL2 EL2&0 +PAN
72
+ * EL2 (aka NS PL2)
73
+ * EL3 (aka S PL1)
74
*
75
- * for a total of 11 different mmu_idx.
76
+ * for a total of 8 different mmu_idx.
77
*
78
* R profile CPUs have an MPU, but can use the same set of MMU indexes
79
- * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
80
- * NS EL2 if we ever model a Cortex-R52).
81
+ * as A profile. They only need to distinguish EL0 and EL1 (and
82
+ * EL2 if we ever model a Cortex-R52).
83
*
84
* M profile CPUs are rather different as they do not have a true MMU.
85
* They have the following different MMU indexes:
86
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
87
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
88
#define ARM_MMU_IDX_M 0x40 /* M profile */
89
90
-/* Meanings of the bits for A profile mmu idx values */
91
-#define ARM_MMU_IDX_A_NS 0x8
92
-
93
/* Meanings of the bits for M profile mmu idx values */
94
#define ARM_MMU_IDX_M_PRIV 0x1
95
#define ARM_MMU_IDX_M_NEGPRI 0x2
96
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
97
/*
98
* A-profile.
99
*/
100
- ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
101
- ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
102
- ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
103
- ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
104
- ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
105
- ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
106
- ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
107
- ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
108
-
109
- ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
110
- ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
111
- ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
112
- ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
113
- ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
114
- ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
115
- ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
116
+ ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
117
+ ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
118
+ ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
119
+ ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A,
120
+ ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A,
121
+ ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
122
+ ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
123
+ ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
124
125
/*
126
* These are not allocated TLBs and are used only for AT system
127
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
128
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
129
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
130
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
131
- ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
132
- ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
133
- ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
134
/*
135
* Not allocated a TLB: used only for second stage of an S12 page
136
* table walk, or for descriptor loads during first stage of an S1
137
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
138
* then various TLB flush insns which currently are no-ops or flush
139
* only stage 1 MMU indexes will need to change to flush stage 2.
140
*/
141
- ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
142
- ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
143
+ ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
144
+ ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
145
146
/*
147
* M-profile.
148
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
149
TO_CORE_BIT(E2),
150
TO_CORE_BIT(E20_2),
151
TO_CORE_BIT(E20_2_PAN),
152
- TO_CORE_BIT(SE10_0),
153
- TO_CORE_BIT(SE20_0),
154
- TO_CORE_BIT(SE10_1),
155
- TO_CORE_BIT(SE20_2),
156
- TO_CORE_BIT(SE10_1_PAN),
157
- TO_CORE_BIT(SE20_2_PAN),
158
- TO_CORE_BIT(SE2),
159
- TO_CORE_BIT(SE3),
160
+ TO_CORE_BIT(E3),
161
162
TO_CORE_BIT(MUser),
163
TO_CORE_BIT(MPriv),
164
diff --git a/target/arm/internals.h b/target/arm/internals.h
54
index XXXXXXX..XXXXXXX 100644
165
index XXXXXXX..XXXXXXX 100644
55
--- a/linux-user/aarch64/cpu_loop.c
166
--- a/target/arm/internals.h
56
+++ b/linux-user/aarch64/cpu_loop.c
167
+++ b/target/arm/internals.h
57
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
168
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
58
break;
169
case ARMMMUIdx_Stage1_E0:
59
case EXCP_PREFETCH_ABORT:
170
case ARMMMUIdx_Stage1_E1:
60
case EXCP_DATA_ABORT:
171
case ARMMMUIdx_Stage1_E1_PAN:
61
- /* We should only arrive here with EC in {DATAABORT, INSNABORT}. */
172
- case ARMMMUIdx_Stage1_SE0:
62
ec = syn_get_ec(env->exception.syndrome);
173
- case ARMMMUIdx_Stage1_SE1:
63
- assert(ec == EC_DATAABORT || ec == EC_INSNABORT);
174
- case ARMMMUIdx_Stage1_SE1_PAN:
64
-
175
case ARMMMUIdx_E10_0:
65
- /* Both EC have the same format for FSC, or close enough. */
176
case ARMMMUIdx_E10_1:
66
- fsc = extract32(env->exception.syndrome, 0, 6);
177
case ARMMMUIdx_E10_1_PAN:
67
- switch (fsc) {
178
case ARMMMUIdx_E20_0:
68
- case 0x04 ... 0x07: /* Translation fault, level {0-3} */
179
case ARMMMUIdx_E20_2:
69
- si_signo = TARGET_SIGSEGV;
180
case ARMMMUIdx_E20_2_PAN:
70
- si_code = TARGET_SEGV_MAPERR;
181
- case ARMMMUIdx_SE10_0:
71
+ switch (ec) {
182
- case ARMMMUIdx_SE10_1:
72
+ case EC_DATAABORT:
183
- case ARMMMUIdx_SE10_1_PAN:
73
+ case EC_INSNABORT:
184
- case ARMMMUIdx_SE20_0:
74
+ /* Both EC have the same format for FSC, or close enough. */
185
- case ARMMMUIdx_SE20_2:
75
+ fsc = extract32(env->exception.syndrome, 0, 6);
186
- case ARMMMUIdx_SE20_2_PAN:
76
+ switch (fsc) {
187
return true;
77
+ case 0x04 ... 0x07: /* Translation fault, level {0-3} */
188
default:
78
+ si_signo = TARGET_SIGSEGV;
189
return false;
79
+ si_code = TARGET_SEGV_MAPERR;
190
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
80
+ break;
191
{
81
+ case 0x09 ... 0x0b: /* Access flag fault, level {1-3} */
192
switch (mmu_idx) {
82
+ case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
193
case ARMMMUIdx_Stage1_E1_PAN:
83
+ si_signo = TARGET_SIGSEGV;
194
- case ARMMMUIdx_Stage1_SE1_PAN:
84
+ si_code = TARGET_SEGV_ACCERR;
195
case ARMMMUIdx_E10_1_PAN:
85
+ break;
196
case ARMMMUIdx_E20_2_PAN:
86
+ case 0x11: /* Synchronous Tag Check Fault */
197
- case ARMMMUIdx_SE10_1_PAN:
87
+ si_signo = TARGET_SIGSEGV;
198
- case ARMMMUIdx_SE20_2_PAN:
88
+ si_code = TARGET_SEGV_MTESERR;
199
return true;
89
+ break;
200
default:
90
+ case 0x21: /* Alignment fault */
201
return false;
91
+ si_signo = TARGET_SIGBUS;
202
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
92
+ si_code = TARGET_BUS_ADRALN;
203
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
93
+ break;
204
{
94
+ default:
205
switch (mmu_idx) {
95
+ g_assert_not_reached();
206
- case ARMMMUIdx_SE20_0:
96
+ }
207
- case ARMMMUIdx_SE20_2:
97
break;
208
- case ARMMMUIdx_SE20_2_PAN:
98
- case 0x09 ... 0x0b: /* Access flag fault, level {1-3} */
209
case ARMMMUIdx_E20_0:
99
- case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
210
case ARMMMUIdx_E20_2:
100
- si_signo = TARGET_SIGSEGV;
211
case ARMMMUIdx_E20_2_PAN:
101
- si_code = TARGET_SEGV_ACCERR;
212
case ARMMMUIdx_Stage2:
102
- break;
213
case ARMMMUIdx_Stage2_S:
103
- case 0x11: /* Synchronous Tag Check Fault */
214
- case ARMMMUIdx_SE2:
104
- si_signo = TARGET_SIGSEGV;
215
case ARMMMUIdx_E2:
105
- si_code = TARGET_SEGV_MTESERR;
216
return 2;
106
- break;
217
- case ARMMMUIdx_SE3:
107
- case 0x21: /* Alignment fault */
218
+ case ARMMMUIdx_E3:
108
+ case EC_PCALIGNMENT:
219
return 3;
109
si_signo = TARGET_SIGBUS;
220
- case ARMMMUIdx_SE10_0:
110
si_code = TARGET_BUS_ADRALN;
221
- case ARMMMUIdx_Stage1_SE0:
111
break;
222
- return arm_el_is_aa64(env, 3) ? 1 : 3;
112
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
223
- case ARMMMUIdx_SE10_1:
224
- case ARMMMUIdx_SE10_1_PAN:
225
+ case ARMMMUIdx_E10_0:
226
case ARMMMUIdx_Stage1_E0:
227
+ return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
228
case ARMMMUIdx_Stage1_E1:
229
case ARMMMUIdx_Stage1_E1_PAN:
230
- case ARMMMUIdx_Stage1_SE1:
231
- case ARMMMUIdx_Stage1_SE1_PAN:
232
- case ARMMMUIdx_E10_0:
233
case ARMMMUIdx_E10_1:
234
case ARMMMUIdx_E10_1_PAN:
235
case ARMMMUIdx_MPrivNegPri:
236
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
237
case ARMMMUIdx_Stage1_E0:
238
case ARMMMUIdx_Stage1_E1:
239
case ARMMMUIdx_Stage1_E1_PAN:
240
- case ARMMMUIdx_Stage1_SE0:
241
- case ARMMMUIdx_Stage1_SE1:
242
- case ARMMMUIdx_Stage1_SE1_PAN:
243
return true;
244
default:
245
return false;
246
diff --git a/target/arm/helper.c b/target/arm/helper.c
113
index XXXXXXX..XXXXXXX 100644
247
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/tlb_helper.c
248
--- a/target/arm/helper.c
115
+++ b/target/arm/tlb_helper.c
249
+++ b/target/arm/helper.c
116
@@ -XXX,XX +XXX,XX @@
250
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
117
#include "cpu.h"
251
/* Begin with base v8.0 state. */
118
#include "internals.h"
252
uint64_t valid_mask = 0x3fff;
119
#include "exec/exec-all.h"
253
ARMCPU *cpu = env_archcpu(env);
120
+#include "exec/helper-proto.h"
254
+ uint64_t changed;
121
255
122
static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
256
/*
123
unsigned int target_el,
257
* Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
124
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
258
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
125
arm_deliver_fault(cpu, vaddr, access_type, mmu_idx, &fi);
259
126
}
260
/* Clear all-context RES0 bits. */
127
261
value &= valid_mask;
128
+void helper_exception_pc_alignment(CPUARMState *env, target_ulong pc)
262
- raw_write(env, ri, value);
129
+{
263
+ changed = env->cp15.scr_el3 ^ value;
130
+ ARMMMUFaultInfo fi = { .type = ARMFault_Alignment };
264
+ env->cp15.scr_el3 = value;
131
+ int target_el = exception_target_el(env);
132
+ int mmu_idx = cpu_mmu_index(env, true);
133
+ uint32_t fsc;
134
+
135
+ env->exception.vaddress = pc;
136
+
265
+
137
+ /*
266
+ /*
138
+ * Note that the fsc is not applicable to this exception,
267
+ * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
139
+ * since any syndrome is pcalignment not insn_abort.
268
+ * we must invalidate all TLBs below EL3.
140
+ */
269
+ */
141
+ env->exception.fsr = compute_fsr_fsc(env, &fi, target_el, mmu_idx, &fsc);
270
+ if (changed & SCR_NS) {
142
+ raise_exception(env, EXCP_PREFETCH_ABORT, syn_pcalignment(), target_el);
271
+ tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
143
+}
272
+ ARMMMUIdxBit_E20_0 |
144
+
273
+ ARMMMUIdxBit_E10_1 |
145
#if !defined(CONFIG_USER_ONLY)
274
+ ARMMMUIdxBit_E20_2 |
146
275
+ ARMMMUIdxBit_E10_1_PAN |
147
/*
276
+ ARMMMUIdxBit_E20_2_PAN |
277
+ ARMMMUIdxBit_E2));
278
+ }
279
}
280
281
static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
282
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
283
case ARMMMUIdx_E20_0:
284
case ARMMMUIdx_E20_2:
285
case ARMMMUIdx_E20_2_PAN:
286
- case ARMMMUIdx_SE20_0:
287
- case ARMMMUIdx_SE20_2:
288
- case ARMMMUIdx_SE20_2_PAN:
289
return GTIMER_HYP;
290
default:
291
return GTIMER_PHYS;
292
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
293
case ARMMMUIdx_E20_0:
294
case ARMMMUIdx_E20_2:
295
case ARMMMUIdx_E20_2_PAN:
296
- case ARMMMUIdx_SE20_0:
297
- case ARMMMUIdx_SE20_2:
298
- case ARMMMUIdx_SE20_2_PAN:
299
return GTIMER_HYPVIRT;
300
default:
301
return GTIMER_VIRT;
302
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
303
/* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
304
switch (el) {
305
case 3:
306
- mmu_idx = ARMMMUIdx_SE3;
307
+ mmu_idx = ARMMMUIdx_E3;
308
secure = true;
309
break;
310
case 2:
311
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
312
/* fall through */
313
case 1:
314
if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
315
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
316
- : ARMMMUIdx_Stage1_E1_PAN);
317
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
318
} else {
319
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
320
+ mmu_idx = ARMMMUIdx_Stage1_E1;
321
}
322
break;
323
default:
324
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
325
/* stage 1 current state PL0: ATS1CUR, ATS1CUW */
326
switch (el) {
327
case 3:
328
- mmu_idx = ARMMMUIdx_SE10_0;
329
+ mmu_idx = ARMMMUIdx_E10_0;
330
secure = true;
331
break;
332
case 2:
333
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
334
mmu_idx = ARMMMUIdx_Stage1_E0;
335
break;
336
case 1:
337
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
338
+ mmu_idx = ARMMMUIdx_Stage1_E0;
339
break;
340
default:
341
g_assert_not_reached();
342
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
343
switch (ri->opc1) {
344
case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
345
if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
346
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
347
- : ARMMMUIdx_Stage1_E1_PAN);
348
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
349
} else {
350
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
351
+ mmu_idx = ARMMMUIdx_Stage1_E1;
352
}
353
break;
354
case 4: /* AT S1E2R, AT S1E2W */
355
- mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
356
+ mmu_idx = ARMMMUIdx_E2;
357
break;
358
case 6: /* AT S1E3R, AT S1E3W */
359
- mmu_idx = ARMMMUIdx_SE3;
360
+ mmu_idx = ARMMMUIdx_E3;
361
secure = true;
362
break;
363
default:
364
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
365
}
366
break;
367
case 2: /* AT S1E0R, AT S1E0W */
368
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
369
+ mmu_idx = ARMMMUIdx_Stage1_E0;
370
break;
371
case 4: /* AT S12E1R, AT S12E1W */
372
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
373
+ mmu_idx = ARMMMUIdx_E10_1;
374
break;
375
case 6: /* AT S12E0R, AT S12E0W */
376
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
377
+ mmu_idx = ARMMMUIdx_E10_0;
378
break;
379
default:
380
g_assert_not_reached();
381
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
382
uint16_t mask = ARMMMUIdxBit_E20_2 |
383
ARMMMUIdxBit_E20_2_PAN |
384
ARMMMUIdxBit_E20_0;
385
-
386
- if (arm_is_secure_below_el3(env)) {
387
- mask >>= ARM_MMU_IDX_A_NS;
388
- }
389
-
390
tlb_flush_by_mmuidx(env_cpu(env), mask);
391
}
392
raw_write(env, ri, value);
393
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
394
uint16_t mask = ARMMMUIdxBit_E10_1 |
395
ARMMMUIdxBit_E10_1_PAN |
396
ARMMMUIdxBit_E10_0;
397
-
398
- if (arm_is_secure_below_el3(env)) {
399
- mask >>= ARM_MMU_IDX_A_NS;
400
- }
401
-
402
tlb_flush_by_mmuidx(cs, mask);
403
raw_write(env, ri, value);
404
}
405
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
406
ARMMMUIdxBit_E10_1_PAN |
407
ARMMMUIdxBit_E10_0;
408
}
409
-
410
- if (arm_is_secure_below_el3(env)) {
411
- mask >>= ARM_MMU_IDX_A_NS;
412
- }
413
-
414
return mask;
415
}
416
417
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
418
mmu_idx = ARMMMUIdx_E10_0;
419
}
420
421
- if (arm_is_secure_below_el3(env)) {
422
- mmu_idx &= ~ARM_MMU_IDX_A_NS;
423
- }
424
-
425
return tlbbits_for_regime(env, mmu_idx, addr);
426
}
427
428
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
429
* stage 2 translations, whereas most other scopes only invalidate
430
* stage 1 translations.
431
*/
432
- if (arm_is_secure_below_el3(env)) {
433
- return ARMMMUIdxBit_SE10_1 |
434
- ARMMMUIdxBit_SE10_1_PAN |
435
- ARMMMUIdxBit_SE10_0;
436
- } else {
437
- return ARMMMUIdxBit_E10_1 |
438
- ARMMMUIdxBit_E10_1_PAN |
439
- ARMMMUIdxBit_E10_0;
440
- }
441
+ return (ARMMMUIdxBit_E10_1 |
442
+ ARMMMUIdxBit_E10_1_PAN |
443
+ ARMMMUIdxBit_E10_0);
444
}
445
446
static int e2_tlbmask(CPUARMState *env)
447
{
448
- if (arm_is_secure_below_el3(env)) {
449
- return ARMMMUIdxBit_SE20_0 |
450
- ARMMMUIdxBit_SE20_2 |
451
- ARMMMUIdxBit_SE20_2_PAN |
452
- ARMMMUIdxBit_SE2;
453
- } else {
454
- return ARMMMUIdxBit_E20_0 |
455
- ARMMMUIdxBit_E20_2 |
456
- ARMMMUIdxBit_E20_2_PAN |
457
- ARMMMUIdxBit_E2;
458
- }
459
+ return (ARMMMUIdxBit_E20_0 |
460
+ ARMMMUIdxBit_E20_2 |
461
+ ARMMMUIdxBit_E20_2_PAN |
462
+ ARMMMUIdxBit_E2);
463
}
464
465
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
466
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
467
ARMCPU *cpu = env_archcpu(env);
468
CPUState *cs = CPU(cpu);
469
470
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
471
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
472
}
473
474
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
475
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
476
{
477
CPUState *cs = env_cpu(env);
478
479
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
480
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
481
}
482
483
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
484
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
485
CPUState *cs = CPU(cpu);
486
uint64_t pageaddr = sextract64(value << 12, 0, 56);
487
488
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
489
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
490
}
491
492
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
493
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
494
{
495
CPUState *cs = env_cpu(env);
496
uint64_t pageaddr = sextract64(value << 12, 0, 56);
497
- bool secure = arm_is_secure_below_el3(env);
498
- int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
499
- int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
500
- pageaddr);
501
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
502
503
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
504
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
505
+ ARMMMUIdxBit_E2, bits);
506
}
507
508
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
509
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
510
{
511
CPUState *cs = env_cpu(env);
512
uint64_t pageaddr = sextract64(value << 12, 0, 56);
513
- int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
514
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
515
516
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
517
- ARMMMUIdxBit_SE3, bits);
518
+ ARMMMUIdxBit_E3, bits);
519
}
520
521
#ifdef TARGET_AARCH64
522
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,
523
524
static int vae2_tlbmask(CPUARMState *env)
525
{
526
- return (arm_is_secure_below_el3(env)
527
- ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
528
+ return ARMMMUIdxBit_E2;
529
}
530
531
static void tlbi_aa64_rvae2_write(CPUARMState *env,
532
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env,
533
* flush-last-level-only.
534
*/
535
536
- do_rvae_write(env, value, ARMMMUIdxBit_SE3,
537
- tlb_force_broadcast(env));
538
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
539
}
540
541
static void tlbi_aa64_rvae3is_write(CPUARMState *env,
542
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
543
* flush-last-level-only or inner/outer specific flushes.
544
*/
545
546
- do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
547
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
548
}
549
#endif
550
551
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
552
/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
553
if (el == 0) {
554
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
555
- el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
556
- ? 2 : 1;
557
+ el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
558
}
559
return env->cp15.sctlr_el[el];
560
}
561
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
562
switch (mmu_idx) {
563
case ARMMMUIdx_E10_0:
564
case ARMMMUIdx_E20_0:
565
- case ARMMMUIdx_SE10_0:
566
- case ARMMMUIdx_SE20_0:
567
return 0;
568
case ARMMMUIdx_E10_1:
569
case ARMMMUIdx_E10_1_PAN:
570
- case ARMMMUIdx_SE10_1:
571
- case ARMMMUIdx_SE10_1_PAN:
572
return 1;
573
case ARMMMUIdx_E2:
574
case ARMMMUIdx_E20_2:
575
case ARMMMUIdx_E20_2_PAN:
576
- case ARMMMUIdx_SE2:
577
- case ARMMMUIdx_SE20_2:
578
- case ARMMMUIdx_SE20_2_PAN:
579
return 2;
580
- case ARMMMUIdx_SE3:
581
+ case ARMMMUIdx_E3:
582
return 3;
583
default:
584
g_assert_not_reached();
585
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
586
}
587
break;
588
case 3:
589
- return ARMMMUIdx_SE3;
590
+ return ARMMMUIdx_E3;
591
default:
592
g_assert_not_reached();
593
}
594
595
- if (arm_is_secure_below_el3(env)) {
596
- idx &= ~ARM_MMU_IDX_A_NS;
597
- }
598
-
599
return idx;
600
}
601
602
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
603
switch (mmu_idx) {
604
case ARMMMUIdx_E10_1:
605
case ARMMMUIdx_E10_1_PAN:
606
- case ARMMMUIdx_SE10_1:
607
- case ARMMMUIdx_SE10_1_PAN:
608
/* TODO: ARMv8.3-NV */
609
DP_TBFLAG_A64(flags, UNPRIV, 1);
610
break;
611
case ARMMMUIdx_E20_2:
612
case ARMMMUIdx_E20_2_PAN:
613
- case ARMMMUIdx_SE20_2:
614
- case ARMMMUIdx_SE20_2_PAN:
615
/*
616
* Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
617
* gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
618
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
619
index XXXXXXX..XXXXXXX 100644
620
--- a/target/arm/ptw.c
621
+++ b/target/arm/ptw.c
622
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
623
ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
624
{
625
switch (mmu_idx) {
626
- case ARMMMUIdx_SE10_0:
627
- return ARMMMUIdx_Stage1_SE0;
628
- case ARMMMUIdx_SE10_1:
629
- return ARMMMUIdx_Stage1_SE1;
630
- case ARMMMUIdx_SE10_1_PAN:
631
- return ARMMMUIdx_Stage1_SE1_PAN;
632
case ARMMMUIdx_E10_0:
633
return ARMMMUIdx_Stage1_E0;
634
case ARMMMUIdx_E10_1:
635
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
636
static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
637
{
638
switch (mmu_idx) {
639
- case ARMMMUIdx_SE10_0:
640
case ARMMMUIdx_E20_0:
641
- case ARMMMUIdx_SE20_0:
642
case ARMMMUIdx_Stage1_E0:
643
- case ARMMMUIdx_Stage1_SE0:
644
case ARMMMUIdx_MUser:
645
case ARMMMUIdx_MSUser:
646
case ARMMMUIdx_MUserNegPri:
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
649
s2_mmu_idx = (s2walk_secure
650
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
651
- is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
652
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0;
653
654
/*
655
* S1 is done, now do S2 translation.
656
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
657
case ARMMMUIdx_Stage1_E1:
658
case ARMMMUIdx_Stage1_E1_PAN:
659
case ARMMMUIdx_E2:
660
+ is_secure = arm_is_secure_below_el3(env);
661
+ break;
662
case ARMMMUIdx_Stage2:
663
case ARMMMUIdx_MPrivNegPri:
664
case ARMMMUIdx_MUserNegPri:
665
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
666
case ARMMMUIdx_MUser:
667
is_secure = false;
668
break;
669
- case ARMMMUIdx_SE3:
670
- case ARMMMUIdx_SE10_0:
671
- case ARMMMUIdx_SE10_1:
672
- case ARMMMUIdx_SE10_1_PAN:
673
- case ARMMMUIdx_SE20_0:
674
- case ARMMMUIdx_SE20_2:
675
- case ARMMMUIdx_SE20_2_PAN:
676
- case ARMMMUIdx_Stage1_SE0:
677
- case ARMMMUIdx_Stage1_SE1:
678
- case ARMMMUIdx_Stage1_SE1_PAN:
679
- case ARMMMUIdx_SE2:
680
+ case ARMMMUIdx_E3:
681
case ARMMMUIdx_Stage2_S:
682
case ARMMMUIdx_MSPrivNegPri:
683
case ARMMMUIdx_MSUserNegPri:
148
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
684
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
149
index XXXXXXX..XXXXXXX 100644
685
index XXXXXXX..XXXXXXX 100644
150
--- a/target/arm/translate-a64.c
686
--- a/target/arm/translate-a64.c
151
+++ b/target/arm/translate-a64.c
687
+++ b/target/arm/translate-a64.c
152
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
688
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
153
uint64_t pc = s->base.pc_next;
689
case ARMMMUIdx_E20_2_PAN:
154
uint32_t insn;
690
useridx = ARMMMUIdx_E20_0;
155
691
break;
156
+ /* Singlestep exceptions have the highest priority. */
692
- case ARMMMUIdx_SE10_1:
157
if (s->ss_active && !s->pstate_ss) {
693
- case ARMMMUIdx_SE10_1_PAN:
158
/* Singlestep state is Active-pending.
694
- useridx = ARMMMUIdx_SE10_0;
159
* If we're in this state at the start of a TB then either
695
- break;
160
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
696
- case ARMMMUIdx_SE20_2:
161
return;
697
- case ARMMMUIdx_SE20_2_PAN:
162
}
698
- useridx = ARMMMUIdx_SE20_0;
163
699
- break;
164
+ if (pc & 3) {
700
default:
165
+ /*
701
g_assert_not_reached();
166
+ * PC alignment fault. This has priority over the instruction abort
702
}
167
+ * that we would receive from a translation fault via arm_ldl_code.
168
+ * This should only be possible after an indirect branch, at the
169
+ * start of the TB.
170
+ */
171
+ assert(s->base.num_insns == 1);
172
+ gen_helper_exception_pc_alignment(cpu_env, tcg_constant_tl(pc));
173
+ s->base.is_jmp = DISAS_NORETURN;
174
+ s->base.pc_next = QEMU_ALIGN_UP(pc, 4);
175
+ return;
176
+ }
177
+
178
s->pc_curr = pc;
179
insn = arm_ldl_code(env, &s->base, pc, s->sctlr_b);
180
s->insn = insn;
181
diff --git a/target/arm/translate.c b/target/arm/translate.c
703
diff --git a/target/arm/translate.c b/target/arm/translate.c
182
index XXXXXXX..XXXXXXX 100644
704
index XXXXXXX..XXXXXXX 100644
183
--- a/target/arm/translate.c
705
--- a/target/arm/translate.c
184
+++ b/target/arm/translate.c
706
+++ b/target/arm/translate.c
185
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
707
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
186
uint32_t pc = dc->base.pc_next;
708
* otherwise, access as if at PL0.
187
unsigned int insn;
709
*/
188
710
switch (s->mmu_idx) {
189
- if (arm_check_ss_active(dc) || arm_check_kernelpage(dc)) {
711
+ case ARMMMUIdx_E3:
190
+ /* Singlestep exceptions have the highest priority. */
712
case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */
191
+ if (arm_check_ss_active(dc)) {
713
case ARMMMUIdx_E10_0:
192
+ dc->base.pc_next = pc + 4;
714
case ARMMMUIdx_E10_1:
193
+ return;
715
case ARMMMUIdx_E10_1_PAN:
194
+ }
716
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
195
+
717
- case ARMMMUIdx_SE3:
196
+ if (pc & 3) {
718
- case ARMMMUIdx_SE10_0:
197
+ /*
719
- case ARMMMUIdx_SE10_1:
198
+ * PC alignment fault. This has priority over the instruction abort
720
- case ARMMMUIdx_SE10_1_PAN:
199
+ * that we would receive from a translation fault via arm_ldl_code
721
- return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
200
+ * (or the execution of the kernelpage entrypoint). This should only
722
case ARMMMUIdx_MUser:
201
+ * be possible after an indirect branch, at the start of the TB.
723
case ARMMMUIdx_MPriv:
202
+ */
724
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
203
+ assert(dc->base.num_insns == 1);
204
+ gen_helper_exception_pc_alignment(cpu_env, tcg_constant_tl(pc));
205
+ dc->base.is_jmp = DISAS_NORETURN;
206
+ dc->base.pc_next = QEMU_ALIGN_UP(pc, 4);
207
+ return;
208
+ }
209
+
210
+ if (arm_check_kernelpage(dc)) {
211
dc->base.pc_next = pc + 4;
212
return;
213
}
214
--
725
--
215
2.25.1
726
2.25.1
216
217
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Use a switch on mmu_idx for the a-profile indexes, instead of
4
three separate if-tests against regime_el and arm_mmu_idx_is_stage1_of_2.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
10
---
7
target/arm/translate.c | 9 +++++----
11
target/arm/ptw.c | 32 +++++++++++++++++++++++++-------
8
1 file changed, 5 insertions(+), 4 deletions(-)
12
1 file changed, 25 insertions(+), 7 deletions(-)
9
13
10
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
11
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate.c
16
--- a/target/arm/ptw.c
13
+++ b/target/arm/translate.c
17
+++ b/target/arm/ptw.c
14
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
15
{
19
16
DisasContext *dc = container_of(dcbase, DisasContext, base);
20
hcr_el2 = arm_hcr_el2_eff(env);
17
CPUARMState *env = cpu->env_ptr;
21
18
+ uint32_t pc = dc->base.pc_next;
22
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
19
unsigned int insn;
23
+ switch (mmu_idx) {
20
24
+ case ARMMMUIdx_Stage2:
21
if (arm_pre_translate_insn(dc)) {
25
+ case ARMMMUIdx_Stage2_S:
22
- dc->base.pc_next += 4;
26
/* HCR.DC means HCR.VM behaves as 1 */
23
+ dc->base.pc_next = pc + 4;
27
return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
24
return;
28
- }
29
30
- if (hcr_el2 & HCR_TGE) {
31
+ case ARMMMUIdx_E10_0:
32
+ case ARMMMUIdx_E10_1:
33
+ case ARMMMUIdx_E10_1_PAN:
34
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
35
- if (!is_secure && regime_el(env, mmu_idx) == 1) {
36
+ if (!is_secure && (hcr_el2 & HCR_TGE)) {
37
return true;
38
}
39
- }
40
+ break;
41
42
- if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
43
+ case ARMMMUIdx_Stage1_E0:
44
+ case ARMMMUIdx_Stage1_E1:
45
+ case ARMMMUIdx_Stage1_E1_PAN:
46
/* HCR.DC means SCTLR_EL1.M behaves as 0 */
47
- return true;
48
+ if (hcr_el2 & HCR_DC) {
49
+ return true;
50
+ }
51
+ break;
52
+
53
+ case ARMMMUIdx_E20_0:
54
+ case ARMMMUIdx_E20_2:
55
+ case ARMMMUIdx_E20_2_PAN:
56
+ case ARMMMUIdx_E2:
57
+ case ARMMMUIdx_E3:
58
+ break;
59
+
60
+ default:
61
+ g_assert_not_reached();
25
}
62
}
26
63
27
- dc->pc_curr = dc->base.pc_next;
64
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
28
- insn = arm_ldl_code(env, &dc->base, dc->base.pc_next, dc->sctlr_b);
29
+ dc->pc_curr = pc;
30
+ insn = arm_ldl_code(env, &dc->base, pc, dc->sctlr_b);
31
dc->insn = insn;
32
- dc->base.pc_next += 4;
33
+ dc->base.pc_next = pc + 4;
34
disas_arm_insn(dc, insn);
35
36
arm_post_translate_insn(dc);
37
--
65
--
38
2.25.1
66
2.25.1
39
40
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The TYPE_ARM_GICV3 device is an emulated one. When using
3
The effect of TGE no longer applies only to non-secure state,
4
KVM, it is recommended to use the TYPE_KVM_ARM_GICV3 device
4
now that Secure EL2 exists.
5
(which uses in-kernel support).
6
5
7
When using --with-devices-FOO, it is possible to build a
8
binary with a specific set of devices. When this binary is
9
restricted to KVM accelerator, the TYPE_ARM_GICV3 device is
10
irrelevant, and it is desirable to remove it from the binary.
11
12
Therefore introduce the CONFIG_ARM_GIC_TCG Kconfig selector
13
which selects the files required to have the TYPE_ARM_GICV3
14
device, while also allowing the device to be de-selected.
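(As a usage illustration, not taken from this patch: a KVM-only --with-devices build should then be able to drop the emulated model by setting CONFIG_ARM_GIC_TCG=n in its devices .mak file, assuming nothing else selects that symbol.)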
15
16
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20211115223619.2599282-3-philmd@redhat.com
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
10
---
21
hw/intc/arm_gicv3.c | 2 +-
11
target/arm/ptw.c | 4 ++--
22
hw/intc/Kconfig | 5 +++++
12
1 file changed, 2 insertions(+), 2 deletions(-)
23
hw/intc/meson.build | 10 ++++++----
24
3 files changed, 12 insertions(+), 5 deletions(-)
25
13
26
diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
27
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/intc/arm_gicv3.c
16
--- a/target/arm/ptw.c
29
+++ b/hw/intc/arm_gicv3.c
17
+++ b/target/arm/ptw.c
30
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
31
/*
19
case ARMMMUIdx_E10_0:
32
- * ARM Generic Interrupt Controller v3
20
case ARMMMUIdx_E10_1:
33
+ * ARM Generic Interrupt Controller v3 (emulation)
21
case ARMMMUIdx_E10_1_PAN:
34
*
22
- /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
35
* Copyright (c) 2015 Huawei.
23
- if (!is_secure && (hcr_el2 & HCR_TGE)) {
36
* Copyright (c) 2016 Linaro Limited
24
+ /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
37
diff --git a/hw/intc/Kconfig b/hw/intc/Kconfig
25
+ if (hcr_el2 & HCR_TGE) {
38
index XXXXXXX..XXXXXXX 100644
26
return true;
39
--- a/hw/intc/Kconfig
27
}
40
+++ b/hw/intc/Kconfig
28
break;
41
@@ -XXX,XX +XXX,XX @@ config APIC
42
select MSI_NONBROKEN
43
select I8259
44
45
+config ARM_GIC_TCG
46
+ bool
47
+ default y
48
+ depends on ARM_GIC && TCG
49
+
50
config ARM_GIC_KVM
51
bool
52
default y
53
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/intc/meson.build
56
+++ b/hw/intc/meson.build
57
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ARM_GIC', if_true: files(
58
'arm_gic.c',
59
'arm_gic_common.c',
60
'arm_gicv2m.c',
61
- 'arm_gicv3.c',
62
'arm_gicv3_common.c',
63
- 'arm_gicv3_dist.c',
64
'arm_gicv3_its_common.c',
65
- 'arm_gicv3_redist.c',
66
+))
67
+softmmu_ss.add(when: 'CONFIG_ARM_GIC_TCG', if_true: files(
68
+ 'arm_gicv3.c',
69
+ 'arm_gicv3_dist.c',
70
'arm_gicv3_its.c',
71
+ 'arm_gicv3_redist.c',
72
))
73
softmmu_ss.add(when: 'CONFIG_ETRAXFS', if_true: files('etraxfs_pic.c'))
74
softmmu_ss.add(when: 'CONFIG_HEATHROW_PIC', if_true: files('heathrow_pic.c'))
75
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP_PMU', if_true: files('xlnx-pmu-iomod-in
76
specific_ss.add(when: 'CONFIG_ALLWINNER_A10_PIC', if_true: files('allwinner-a10-pic.c'))
77
specific_ss.add(when: 'CONFIG_APIC', if_true: files('apic.c', 'apic_common.c'))
78
specific_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common.c'))
79
-specific_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif.c'))
80
+specific_ss.add(when: 'CONFIG_ARM_GIC_TCG', if_true: files('arm_gicv3_cpuif.c'))
81
specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
82
specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
83
specific_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))
84
--
29
--
85
2.25.1
30
2.25.1
86
87
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
For page walking, we may require HCR for a security state
4
that is not "current".
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
10
---
7
tests/tcg/aarch64/pcalign-a64.c | 37 +++++++++++++++++++++++++
11
target/arm/cpu.h | 20 +++++++++++++-------
8
tests/tcg/arm/pcalign-a32.c | 46 +++++++++++++++++++++++++++++++
12
target/arm/helper.c | 11 ++++++++---
9
tests/tcg/aarch64/Makefile.target | 4 +--
13
2 files changed, 21 insertions(+), 10 deletions(-)
10
tests/tcg/arm/Makefile.target | 4 +++
11
4 files changed, 89 insertions(+), 2 deletions(-)
12
create mode 100644 tests/tcg/aarch64/pcalign-a64.c
13
create mode 100644 tests/tcg/arm/pcalign-a32.c
14
14
15
diff --git a/tests/tcg/aarch64/pcalign-a64.c b/tests/tcg/aarch64/pcalign-a64.c
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
new file mode 100644
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX
17
--- a/target/arm/cpu.h
18
--- /dev/null
18
+++ b/target/arm/cpu.h
19
+++ b/tests/tcg/aarch64/pcalign-a64.c
19
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
20
@@ -XXX,XX +XXX,XX @@
20
* Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
21
+/* Test PC misalignment exception */
21
* This corresponds to the pseudocode EL2Enabled()
22
+
22
*/
23
+#include <assert.h>
23
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
24
+#include <signal.h>
25
+#include <stdlib.h>
26
+#include <stdio.h>
27
+
28
+static void *expected;
29
+
30
+static void sigbus(int sig, siginfo_t *info, void *vuc)
31
+{
24
+{
32
+ assert(info->si_code == BUS_ADRALN);
25
+ return arm_feature(env, ARM_FEATURE_EL2)
33
+ assert(info->si_addr == expected);
26
+ && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
34
+ exit(EXIT_SUCCESS);
35
+}
27
+}
36
+
28
+
37
+int main()
29
static inline bool arm_is_el2_enabled(CPUARMState *env)
30
{
31
- if (arm_feature(env, ARM_FEATURE_EL2)) {
32
- if (arm_is_secure_below_el3(env)) {
33
- return (env->cp15.scr_el3 & SCR_EEL2) != 0;
34
- }
35
- return true;
36
- }
37
- return false;
38
+ return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
39
}
40
41
#else
42
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
43
return false;
44
}
45
46
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
38
+{
47
+{
39
+ void *tmp;
48
+ return false;
40
+
41
+ struct sigaction sa = {
42
+ .sa_sigaction = sigbus,
43
+ .sa_flags = SA_SIGINFO
44
+ };
45
+
46
+ if (sigaction(SIGBUS, &sa, NULL) < 0) {
47
+ perror("sigaction");
48
+ return EXIT_FAILURE;
49
+ }
50
+
51
+ asm volatile("adr %0, 1f + 1\n\t"
52
+ "str %0, %1\n\t"
53
+ "br %0\n"
54
+ "1:"
55
+ : "=&r"(tmp), "=m"(expected));
56
+ abort();
57
+}
58
diff --git a/tests/tcg/arm/pcalign-a32.c b/tests/tcg/arm/pcalign-a32.c
59
new file mode 100644
60
index XXXXXXX..XXXXXXX
61
--- /dev/null
62
+++ b/tests/tcg/arm/pcalign-a32.c
63
@@ -XXX,XX +XXX,XX @@
64
+/* Test PC misalignment exception */
65
+
66
+#ifdef __thumb__
67
+#error "This test must be compiled for ARM"
68
+#endif
69
+
70
+#include <assert.h>
71
+#include <signal.h>
72
+#include <stdlib.h>
73
+#include <stdio.h>
74
+
75
+static void *expected;
76
+
77
+static void sigbus(int sig, siginfo_t *info, void *vuc)
78
+{
79
+ assert(info->si_code == BUS_ADRALN);
80
+ assert(info->si_addr == expected);
81
+ exit(EXIT_SUCCESS);
82
+}
49
+}
83
+
50
+
84
+int main()
51
static inline bool arm_is_el2_enabled(CPUARMState *env)
52
{
53
return false;
54
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
55
* "for all purposes other than a direct read or write access of HCR_EL2."
56
* Not included here is HCR_RW.
57
*/
58
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
59
uint64_t arm_hcr_el2_eff(CPUARMState *env);
60
uint64_t arm_hcrx_el2_eff(CPUARMState *env);
61
62
diff --git a/target/arm/helper.c b/target/arm/helper.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/helper.c
65
+++ b/target/arm/helper.c
66
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
67
}
68
69
/*
70
- * Return the effective value of HCR_EL2.
71
+ * Return the effective value of HCR_EL2, at the given security state.
72
* Bits that are not included here:
73
* RW (read from SCR_EL3.RW as needed)
74
*/
75
-uint64_t arm_hcr_el2_eff(CPUARMState *env)
76
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
77
{
78
uint64_t ret = env->cp15.hcr_el2;
79
80
- if (!arm_is_el2_enabled(env)) {
81
+ if (!arm_is_el2_enabled_secstate(env, secure)) {
82
/*
83
* "This register has no effect if EL2 is not enabled in the
84
* current Security state". This is ARMv8.4-SecEL2 speak for
85
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
86
return ret;
87
}
88
89
+uint64_t arm_hcr_el2_eff(CPUARMState *env)
85
+{
90
+{
86
+ void *tmp;
91
+ return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
92
+}
87
+
93
+
88
+ struct sigaction sa = {
94
/*
89
+ .sa_sigaction = sigbus,
95
* Corresponds to ARM pseudocode function ELIsInHost().
90
+ .sa_flags = SA_SIGINFO
96
*/
91
+ };
92
+
93
+ if (sigaction(SIGBUS, &sa, NULL) < 0) {
94
+ perror("sigaction");
95
+ return EXIT_FAILURE;
96
+ }
97
+
98
+ asm volatile("adr %0, 1f + 2\n\t"
99
+ "str %0, %1\n\t"
100
+ "bx %0\n"
101
+ "1:"
102
+ : "=&r"(tmp), "=m"(expected));
103
+
104
+ /*
105
+ * From v8, it is CONSTRAINED UNPREDICTABLE whether BXWritePC aligns
106
+ * the address or not. If so, we can legitimately fall through.
107
+ */
108
+ return EXIT_SUCCESS;
109
+}
110
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
111
index XXXXXXX..XXXXXXX 100644
112
--- a/tests/tcg/aarch64/Makefile.target
113
+++ b/tests/tcg/aarch64/Makefile.target
114
@@ -XXX,XX +XXX,XX @@ VPATH         += $(ARM_SRC)
115
AARCH64_SRC=$(SRC_PATH)/tests/tcg/aarch64
116
VPATH         += $(AARCH64_SRC)
117
118
-# Float-convert Tests
119
-AARCH64_TESTS=fcvt
120
+# Base architecture tests
121
+AARCH64_TESTS=fcvt pcalign-a64
122
123
fcvt: LDFLAGS+=-lm
124
125
diff --git a/tests/tcg/arm/Makefile.target b/tests/tcg/arm/Makefile.target
126
index XXXXXXX..XXXXXXX 100644
127
--- a/tests/tcg/arm/Makefile.target
128
+++ b/tests/tcg/arm/Makefile.target
129
@@ -XXX,XX +XXX,XX @@ run-fcvt: fcvt
130
    $(call run-test,fcvt,$(QEMU) $<,"$< on $(TARGET_NAME)")
131
    $(call diff-out,fcvt,$(ARM_SRC)/fcvt.ref)
132
133
+# PC alignment test
134
+ARM_TESTS += pcalign-a32
135
+pcalign-a32: CFLAGS+=-marm
136
+
137
ifeq ($(CONFIG_ARM_COMPATIBLE_SEMIHOSTING),y)
138
139
# Semihosting smoke test for linux-user
140
--
97
--
141
2.25.1
98
2.25.1
142
143
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
gicv3_set_gicv3state() is used by arm_gicv3_common.c in
3
Rename the argument to is_secure_ptr, and introduce a
4
arm_gicv3_common_realize(). Since we want to restrict
4
local variable is_secure that holds its value. We only write
5
arm_gicv3_cpuif.c to TCG, extract gicv3_set_gicv3state()
5
back to the pointer toward the end of the function.
6
to a new file. Add this file to the meson 'specific'
7
source set, since it needs access to "cpu.h".
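The shape of that change, reduced to a minimal sketch (invented names, not the actual S1_ptw_translate() code), is:

    #include <stdbool.h>

    /*
     * Illustrative sketch of the in/out-pointer pattern described above;
     * the function name and the body are invented, not QEMU code.
     */
    static int walk_with_flag(bool *is_secure_ptr)
    {
        bool is_secure = *is_secure_ptr;    /* work on a local copy */

        if (is_secure) {
            is_secure = false;              /* e.g. the walk switches PA space */
        }

        *is_secure_ptr = is_secure;         /* single write-back at the end */
        return 0;
    }

Keeping a local copy means the body reads and updates one plain value, and the pointer is only touched once, at the point where the caller actually needs the result.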
8
6
9
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20211115223619.2599282-2-philmd@redhat.com
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/intc/arm_gicv3_cpuif.c | 10 +---------
12
target/arm/ptw.c | 22 ++++++++++++----------
15
hw/intc/arm_gicv3_cpuif_common.c | 22 ++++++++++++++++++++++
13
1 file changed, 12 insertions(+), 10 deletions(-)
16
hw/intc/meson.build | 1 +
17
3 files changed, 24 insertions(+), 9 deletions(-)
18
create mode 100644 hw/intc/arm_gicv3_cpuif_common.c
19
14
20
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gicv3_cpuif.c
17
--- a/target/arm/ptw.c
23
+++ b/hw/intc/arm_gicv3_cpuif.c
18
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
25
/*
20
26
- * ARM Generic Interrupt Controller v3
21
/* Translate a S1 pagetable walk through S2 if needed. */
27
+ * ARM Generic Interrupt Controller v3 (emulation)
22
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
28
*
23
- hwaddr addr, bool *is_secure,
29
* Copyright (c) 2016 Linaro Limited
24
+ hwaddr addr, bool *is_secure_ptr,
30
* Written by Peter Maydell
25
ARMMMUFaultInfo *fi)
31
@@ -XXX,XX +XXX,XX @@
32
#include "hw/irq.h"
33
#include "cpu.h"
34
35
-void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s)
36
-{
37
- ARMCPU *arm_cpu = ARM_CPU(cpu);
38
- CPUARMState *env = &arm_cpu->env;
39
-
40
- env->gicv3state = (void *)s;
41
-};
42
-
43
static GICv3CPUState *icc_cs_from_env(CPUARMState *env)
44
{
26
{
45
return env->gicv3state;
27
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
46
diff --git a/hw/intc/arm_gicv3_cpuif_common.c b/hw/intc/arm_gicv3_cpuif_common.c
28
+ bool is_secure = *is_secure_ptr;
47
new file mode 100644
29
+ ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
48
index XXXXXXX..XXXXXXX
30
49
--- /dev/null
31
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
50
+++ b/hw/intc/arm_gicv3_cpuif_common.c
32
- !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
51
@@ -XXX,XX +XXX,XX @@
33
+ !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
52
+/* SPDX-License-Identifier: GPL-2.0-or-later */
34
GetPhysAddrResult s2 = {};
53
+/*
35
int ret;
54
+ * ARM Generic Interrupt Controller v3
36
55
+ *
37
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
56
+ * Copyright (c) 2016 Linaro Limited
38
- *is_secure, false, &s2, fi);
57
+ * Written by Peter Maydell
39
+ is_secure, false, &s2, fi);
58
+ *
40
if (ret) {
59
+ * This code is licensed under the GPL, version 2 or (at your option)
41
assert(fi->type != ARMFault_None);
60
+ * any later version.
42
fi->s2addr = addr;
61
+ */
43
fi->stage2 = true;
62
+
44
fi->s1ptw = true;
63
+#include "qemu/osdep.h"
45
- fi->s1ns = !*is_secure;
64
+#include "gicv3_internal.h"
46
+ fi->s1ns = !is_secure;
65
+#include "cpu.h"
47
return ~0;
66
+
48
}
67
+void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s)
49
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
68
+{
50
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
69
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
51
fi->s2addr = addr;
70
+ CPUARMState *env = &arm_cpu->env;
52
fi->stage2 = true;
71
+
53
fi->s1ptw = true;
72
+ env->gicv3state = (void *)s;
54
- fi->s1ns = !*is_secure;
73
+};
55
+ fi->s1ns = !is_secure;
74
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
56
return ~0;
75
index XXXXXXX..XXXXXXX 100644
57
}
76
--- a/hw/intc/meson.build
58
77
+++ b/hw/intc/meson.build
59
if (arm_is_secure_below_el3(env)) {
78
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP_PMU', if_true: files('xlnx-pmu-iomod-in
60
/* Check if page table walk is to secure or non-secure PA space. */
79
61
- if (*is_secure) {
80
specific_ss.add(when: 'CONFIG_ALLWINNER_A10_PIC', if_true: files('allwinner-a10-pic.c'))
62
- *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
81
specific_ss.add(when: 'CONFIG_APIC', if_true: files('apic.c', 'apic_common.c'))
63
+ if (is_secure) {
82
+specific_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common.c'))
64
+ is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
83
specific_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif.c'))
65
} else {
84
specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
66
- *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
85
specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
67
+ is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
68
}
69
+ *is_secure_ptr = is_secure;
70
} else {
71
- assert(!*is_secure);
72
+ assert(!is_secure);
73
}
74
75
addr = s2.phys;
86
--
76
--
87
2.25.1
77
2.25.1
88
89
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Create arm_check_ss_active and arm_check_kernelpage.
3
This value is unused.
4
5
Reverse the order of the tests. While it doesn't matter in practice,
6
because only user-only has a kernel page and user-only never sets
7
ss_active, the ss_active check has priority over execution exceptions, and it
8
is best to keep them in the proper order.
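A minimal sketch of the resulting call shape (placeholder names and bodies, not the real translate.c code):

    #include <stdbool.h>

    /*
     * With short-circuit ||, the ss_active check is evaluated first, so
     * singlestep keeps priority over the user-only kernel-page check.
     */
    static bool check_ss_active(int *dc)  { (void)dc; return false; /* placeholder */ }
    static bool check_kernelpage(int *dc) { (void)dc; return false; /* placeholder */ }

    static bool skip_insn(int *dc)
    {
        return check_ss_active(dc) || check_kernelpage(dc);
    }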
9
4
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
9
---
14
target/arm/translate.c | 10 +++++++---
10
target/arm/ptw.c | 5 ++---
15
1 file changed, 7 insertions(+), 3 deletions(-)
11
1 file changed, 2 insertions(+), 3 deletions(-)
16
12
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate.c
15
--- a/target/arm/ptw.c
20
+++ b/target/arm/translate.c
16
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
17
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
22
dc->insn_start = tcg_last_op();
18
* s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
23
}
19
* combined attributes in MAIR_EL1 format.
24
20
*/
25
-static bool arm_pre_translate_insn(DisasContext *dc)
21
-static uint8_t combined_attrs_fwb(CPUARMState *env,
26
+static bool arm_check_kernelpage(DisasContext *dc)
22
- ARMCacheAttrs s1, ARMCacheAttrs s2)
23
+static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
27
{
24
{
28
#ifdef CONFIG_USER_ONLY
25
switch (s2.attrs) {
29
/* Intercept jump to the magic kernel page. */
26
case 7:
30
@@ -XXX,XX +XXX,XX @@ static bool arm_pre_translate_insn(DisasContext *dc)
27
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
31
return true;
28
32
}
29
/* Combine memory type and cacheability attributes */
33
#endif
30
if (arm_hcr_el2_eff(env) & HCR_FWB) {
34
+ return false;
31
- ret.attrs = combined_attrs_fwb(env, s1, s2);
35
+}
32
+ ret.attrs = combined_attrs_fwb(s1, s2);
36
33
} else {
37
+static bool arm_check_ss_active(DisasContext *dc)
34
ret.attrs = combined_attrs_nofwb(env, s1, s2);
38
+{
39
if (dc->ss_active && !dc->pstate_ss) {
40
/* Singlestep state is Active-pending.
41
* If we're in this state at the start of a TB then either
42
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
43
uint32_t pc = dc->base.pc_next;
44
unsigned int insn;
45
46
- if (arm_pre_translate_insn(dc)) {
47
+ if (arm_check_ss_active(dc) || arm_check_kernelpage(dc)) {
48
dc->base.pc_next = pc + 4;
49
return;
50
}
51
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
52
uint32_t insn;
53
bool is_16bit;
54
55
- if (arm_pre_translate_insn(dc)) {
56
+ if (arm_check_ss_active(dc) || arm_check_kernelpage(dc)) {
57
dc->base.pc_next = pc + 2;
58
return;
59
}
35
}
60
--
36
--
61
2.25.1
37
2.25.1
62
63
1
From: Olivier Hériveaux <olivier.heriveaux@ledger.fr>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Fix an issue where the data register may be overwritten by the next character
3
These subroutines did not need ENV for anything except
4
reception before being read and returned.
4
retrieving the effective value of HCR anyway.
5
5
6
Signed-off-by: Olivier Hériveaux <olivier.heriveaux@ledger.fr>
6
We have computed the effective value of HCR in the callers,
7
and this will be especially important for interpreting HCR
8
in a non-current security state.
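Reduced to a minimal sketch (the struct and function names below are hypothetical and do not match QEMU's real interfaces), the refactoring looks like:

    #include <stdint.h>

    /* Compute the effective HCR_EL2 value once in the caller, pass it down. */
    typedef struct { uint64_t hcr_el2; } FakeEnv;

    static uint64_t effective_hcr(const FakeEnv *env)
    {
        return env->hcr_el2;                 /* stand-in for arm_hcr_el2_eff() */
    }

    static int helper_using_hcr(uint64_t hcr)
    {
        return (hcr & 1) != 0;               /* helper consumes the value only */
    }

    static int caller(const FakeEnv *env)
    {
        uint64_t hcr = effective_hcr(env);   /* computed once, up front */
        return helper_using_hcr(hcr);        /* passed down; no env pointer */
    }

The helpers then no longer care which security state the value was derived for; the caller decides that once.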
9
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20211128120723.4053-1-olivier.heriveaux@ledger.fr
12
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
14
---
12
hw/char/stm32f2xx_usart.c | 3 ++-
15
target/arm/ptw.c | 30 +++++++++++++++++-------------
13
1 file changed, 2 insertions(+), 1 deletion(-)
16
1 file changed, 17 insertions(+), 13 deletions(-)
14
17
15
diff --git a/hw/char/stm32f2xx_usart.c b/hw/char/stm32f2xx_usart.c
18
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/char/stm32f2xx_usart.c
20
--- a/target/arm/ptw.c
18
+++ b/hw/char/stm32f2xx_usart.c
21
+++ b/target/arm/ptw.c
19
@@ -XXX,XX +XXX,XX @@ static uint64_t stm32f2xx_usart_read(void *opaque, hwaddr addr,
22
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
20
return retvalue;
23
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
21
case USART_DR:
24
}
22
DB_PRINT("Value: 0x%" PRIx32 ", %c\n", s->usart_dr, (char) s->usart_dr);
25
23
+ retvalue = s->usart_dr & 0x3FF;
26
-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
24
s->usart_sr &= ~USART_SR_RXNE;
27
+static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
25
qemu_chr_fe_accept_input(&s->chr);
28
{
26
qemu_set_irq(s->irq, 0);
29
/*
27
- return s->usart_dr & 0x3FF;
30
* For an S1 page table walk, the stage 1 attributes are always
28
+ return retvalue;
31
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
29
case USART_BRR:
32
* when cacheattrs.attrs bit [2] is 0.
30
return s->usart_brr;
33
*/
31
case USART_CR1:
34
assert(cacheattrs.is_s2_format);
35
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
36
+ if (hcr & HCR_FWB) {
37
return (cacheattrs.attrs & 0x4) == 0;
38
} else {
39
return (cacheattrs.attrs & 0xc) == 0;
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
41
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
42
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
43
GetPhysAddrResult s2 = {};
44
+ uint64_t hcr;
45
int ret;
46
47
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
48
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
49
fi->s1ns = !is_secure;
50
return ~0;
51
}
52
- if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
53
- ptw_attrs_are_device(env, s2.cacheattrs)) {
54
+
55
+ hcr = arm_hcr_el2_eff(env);
56
+ if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
57
/*
58
* PTW set and S1 walk touched S2 Device memory:
59
* generate Permission fault.
60
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
61
* ref: shared/translation/attrs/S2AttrDecode()
62
* .../S2ConvertAttrsHints()
63
*/
64
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
65
+static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs)
66
{
67
uint8_t hiattr = extract32(s2attrs, 2, 2);
68
uint8_t loattr = extract32(s2attrs, 0, 2);
69
uint8_t hihint = 0, lohint = 0;
70
71
if (hiattr != 0) { /* normal memory */
72
- if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
73
+ if (hcr & HCR_CD) { /* cache disabled */
74
hiattr = loattr = 1; /* non-cacheable */
75
} else {
76
if (hiattr != 1) { /* Write-through or write-back */
77
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
78
* s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
79
* combined attributes in MAIR_EL1 format.
80
*/
81
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
82
+static uint8_t combined_attrs_nofwb(uint64_t hcr,
83
ARMCacheAttrs s1, ARMCacheAttrs s2)
84
{
85
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
86
87
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
88
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
89
90
s1lo = extract32(s1.attrs, 0, 4);
91
s2lo = extract32(s2_mair_attrs, 0, 4);
92
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
93
* @s1: Attributes from stage 1 walk
94
* @s2: Attributes from stage 2 walk
95
*/
96
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
97
+static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
98
ARMCacheAttrs s1, ARMCacheAttrs s2)
99
{
100
ARMCacheAttrs ret;
101
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
102
}
103
104
/* Combine memory type and cacheability attributes */
105
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
106
+ if (hcr & HCR_FWB) {
107
ret.attrs = combined_attrs_fwb(s1, s2);
108
} else {
109
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
110
+ ret.attrs = combined_attrs_nofwb(hcr, s1, s2);
111
}
112
113
/*
114
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
115
ARMCacheAttrs cacheattrs1;
116
ARMMMUIdx s2_mmu_idx;
117
bool is_el0;
118
+ uint64_t hcr;
119
120
ret = get_phys_addr_with_secure(env, address, access_type,
121
s1_mmu_idx, is_secure, result, fi);
122
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
123
}
124
125
/* Combine the S1 and S2 cache attributes. */
126
- if (arm_hcr_el2_eff(env) & HCR_DC) {
127
+ hcr = arm_hcr_el2_eff(env);
128
+ if (hcr & HCR_DC) {
129
/*
130
* HCR.DC forces the first stage attributes to
131
* Normal Non-Shareable,
132
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
133
}
134
cacheattrs1.shareability = 0;
135
}
136
- result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
137
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
138
result->cacheattrs);
139
140
/*
32
--
141
--
33
2.25.1
142
2.25.1
34
35
1
From: Alex Bennée <alex.bennee@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
While trying to debug a GIC ITS failure I saw some guest errors that
3
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
4
had poor formatting as well as leaving me confused as to what failed.
4
that we use is_secure instead of the current security state.
5
As most of the checks aren't possible without a valid DTE, split that
5
These AT* operations have been broken since arm_hcr_el2_eff
6
check apart and then check the other conditions in steps. This avoids
6
gained a check for "el2 enabled" for Secure EL2.
7
us relying on undefined data.
8
7
9
I still get a failure with the current kvm-unit-tests but at least I
10
know (partially) why now:
11
12
Exception return from AArch64 EL1 to AArch64 EL1 PC 0x40080588
13
PASS: gicv3: its-trigger: inv/invall: dev2/eventid=20 now triggers an LPI
14
ITS: MAPD devid=2 size = 0x8 itt=0x40430000 valid=0
15
INT dev_id=2 event_id=20
16
process_its_cmd: invalid command attributes: invalid dte: 0 for 2 (MEM_TX: 0)
17
PASS: gicv3: its-trigger: mapd valid=false: no LPI after device unmap
18
SUMMARY: 6 tests, 1 unexpected failures
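The resulting structure is roughly the following sketch (plain C with invented parameters, standing in for the qemu_log_mask(LOG_GUEST_ERROR, ...) calls in the diff below):

    #include <stdio.h>
    #include <stdbool.h>

    /* Check each command attribute in turn and name exactly what failed. */
    static bool cmd_attrs_ok(unsigned devid, unsigned max_devids,
                             bool dte_valid, unsigned eventid,
                             unsigned max_eventid)
    {
        if (devid > max_devids) {
            fprintf(stderr, "invalid command attributes: devid %u>%u\n",
                    devid, max_devids);
        } else if (!dte_valid) {
            fprintf(stderr, "invalid command attributes: invalid dte\n");
        } else if (eventid > max_eventid) {
            fprintf(stderr, "invalid command attributes: eventid %u > %u\n",
                    eventid, max_eventid);
        } else {
            return true;    /* all checks passed; process the command */
        }
        return false;       /* on guest error, ignore the command */
    }

Each branch reports only the check that actually failed, instead of one combined condition over possibly undefined data.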
19
20
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Message-id: 20211112170454.3158925-1-alex.bennee@linaro.org
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
23
Cc: Shashi Mallela <shashi.mallela@linaro.org>
10
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
24
Cc: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
12
---
27
hw/intc/arm_gicv3_its.c | 39 +++++++++++++++++++++++++++------------
13
target/arm/ptw.c | 8 ++++----
28
1 file changed, 27 insertions(+), 12 deletions(-)
14
1 file changed, 4 insertions(+), 4 deletions(-)
29
15
30
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
16
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
31
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/intc/arm_gicv3_its.c
18
--- a/target/arm/ptw.c
33
+++ b/hw/intc/arm_gicv3_its.c
19
+++ b/target/arm/ptw.c
34
@@ -XXX,XX +XXX,XX @@ static bool process_its_cmd(GICv3ITSState *s, uint64_t value, uint32_t offset,
20
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
35
if (res != MEMTX_OK) {
36
return result;
37
}
21
}
38
+ } else {
39
+ qemu_log_mask(LOG_GUEST_ERROR,
40
+ "%s: invalid command attributes: "
41
+ "invalid dte: %"PRIx64" for %d (MEM_TX: %d)\n",
42
+ __func__, dte, devid, res);
43
+ return result;
44
}
22
}
45
23
46
- if ((devid > s->dt.maxids.max_devids) || !dte_valid || !ite_valid ||
24
- hcr_el2 = arm_hcr_el2_eff(env);
47
- !cte_valid || (eventid > max_eventid)) {
25
+ hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);
48
+
26
49
+ /*
27
switch (mmu_idx) {
50
+ * In this implementation, in case of guest errors we ignore the
28
case ARMMMUIdx_Stage2:
51
+ * command and move onto the next command in the queue.
29
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
52
+ */
30
return ~0;
53
+ if (devid > s->dt.maxids.max_devids) {
31
}
54
qemu_log_mask(LOG_GUEST_ERROR,
32
55
- "%s: invalid command attributes "
33
- hcr = arm_hcr_el2_eff(env);
56
- "devid %d or eventid %d or invalid dte %d or"
34
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
57
- "invalid cte %d or invalid ite %d\n",
35
if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
58
- __func__, devid, eventid, dte_valid, cte_valid,
36
/*
59
- ite_valid);
37
* PTW set and S1 walk touched S2 Device memory:
60
- /*
38
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
61
- * in this implementation, in case of error
39
}
62
- * we ignore this command and move onto the next
40
63
- * command in the queue
41
/* Combine the S1 and S2 cache attributes. */
64
- */
42
- hcr = arm_hcr_el2_eff(env);
65
+ "%s: invalid command attributes: devid %d>%d",
43
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
66
+ __func__, devid, s->dt.maxids.max_devids);
44
if (hcr & HCR_DC) {
67
+
45
/*
68
+ } else if (!dte_valid || !ite_valid || !cte_valid) {
46
* HCR.DC forces the first stage attributes to
69
+ qemu_log_mask(LOG_GUEST_ERROR,
47
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
70
+ "%s: invalid command attributes: "
48
result->page_size = TARGET_PAGE_SIZE;
71
+ "dte: %s, ite: %s, cte: %s\n",
49
72
+ __func__,
50
/* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
73
+ dte_valid ? "valid" : "invalid",
51
- hcr = arm_hcr_el2_eff(env);
74
+ ite_valid ? "valid" : "invalid",
52
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
75
+ cte_valid ? "valid" : "invalid");
53
result->cacheattrs.shareability = 0;
76
+ } else if (eventid > max_eventid) {
54
result->cacheattrs.is_s2_format = false;
77
+ qemu_log_mask(LOG_GUEST_ERROR,
55
if (hcr & HCR_DC) {
78
+ "%s: invalid command attributes: eventid %d > %d\n",
79
+ __func__, eventid, max_eventid);
80
} else {
81
/*
82
* Current implementation only supports rdbase == procnum
83
--
56
--
84
2.25.1
57
2.25.1
85
86
Deleted patch
1
From: Joel Stanley <joel@jms.id.au>
2
1
3
A common use case for the ASPEED machine is to boot a Linux kernel.
4
Provide a full example command line.
5
6
Reviewed-by: Cédric Le Goater <clg@kaod.org>
7
Signed-off-by: Joel Stanley <joel@jms.id.au>
8
Message-id: 20211117065752.330632-4-joel@jms.id.au
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
docs/system/arm/aspeed.rst | 15 ++++++++++++---
12
1 file changed, 12 insertions(+), 3 deletions(-)
13
14
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
15
index XXXXXXX..XXXXXXX 100644
16
--- a/docs/system/arm/aspeed.rst
17
+++ b/docs/system/arm/aspeed.rst
18
@@ -XXX,XX +XXX,XX @@ Missing devices
19
Boot options
20
------------
21
22
-The Aspeed machines can be started using the ``-kernel`` option to
23
-load a Linux kernel or from a firmware. Images can be downloaded from
24
-the OpenBMC jenkins :
25
+The Aspeed machines can be started using the ``-kernel`` and ``-dtb`` options
26
+to load a Linux kernel or from a firmware. Images can be downloaded from the
27
+OpenBMC jenkins :
28
29
https://jenkins.openbmc.org/job/ci-openbmc/lastSuccessfulBuild/
30
31
@@ -XXX,XX +XXX,XX @@ or directly from the OpenBMC GitHub release repository :
32
33
https://github.com/openbmc/openbmc/releases
34
35
+To boot a kernel directly from a Linux build tree:
36
+
37
+.. code-block:: bash
38
+
39
+ $ qemu-system-arm -M ast2600-evb -nographic \
40
+ -kernel arch/arm/boot/zImage \
41
+ -dtb arch/arm/boot/dts/aspeed-ast2600-evb.dtb \
42
+ -initrd rootfs.cpio
43
+
44
The image should be attached as an MTD drive. Run :
45
46
.. code-block:: bash
47
--
48
2.25.1
49
50
Deleted patch
1
From: Joel Stanley <joel@jms.id.au>
2
1
3
Move it to the supported list.
4
5
Signed-off-by: Joel Stanley <joel@jms.id.au>
6
Message-id: 20211117065752.330632-5-joel@jms.id.au
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
docs/system/arm/aspeed.rst | 2 +-
10
1 file changed, 1 insertion(+), 1 deletion(-)
11
12
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
13
index XXXXXXX..XXXXXXX 100644
14
--- a/docs/system/arm/aspeed.rst
15
+++ b/docs/system/arm/aspeed.rst
16
@@ -XXX,XX +XXX,XX @@ Supported devices
17
* Front LEDs (PCA9552 on I2C bus)
18
* LPC Peripheral Controller (a subset of subdevices are supported)
19
* Hash/Crypto Engine (HACE) - Hash support only. TODO: HMAC and RSA
20
+ * ADC
21
22
23
Missing devices
24
---------------
25
26
* Coprocessor support
27
- * ADC (out of tree implementation)
28
* PWM and Fan Controller
29
* Slave GPIO Controller
30
* Super I/O Controller
31
--
32
2.25.1
33
34
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add the expected blobs of the VIOT and DSDT tables for the VIOT test on the
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
q35 machine.
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.c | 138 +++++++++++++++++++++++++----------------------
9
1 file changed, 74 insertions(+), 64 deletions(-)
5
10
6
Since the test instantiates a virtio device and two PCIe expander
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
7
bridges, DSDT.viot has more blocks than the base DSDT.
12
index XXXXXXX..XXXXXXX 100644
8
13
--- a/target/arm/ptw.c
9
The VIOT table generated for the q35 test is:
14
+++ b/target/arm/ptw.c
10
15
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
11
[000h 0000 4] Signature : "VIOT" [Virtual I/O Translation Table]
16
return ret;
12
[004h 0004 4] Table Length : 00000070
17
}
13
[008h 0008 1] Revision : 00
18
14
[009h 0009 1] Checksum : 3D
19
+/*
15
[00Ah 0010 6] Oem ID : "BOCHS "
20
+ * MMU disabled. S1 addresses within aa64 translation regimes are
16
[010h 0016 8] Oem Table ID : "BXPC "
21
+ * still checked for bounds -- see AArch64.S1DisabledOutput().
17
[018h 0024 4] Oem Revision : 00000001
22
+ */
18
[01Ch 0028 4] Asl Compiler ID : "BXPC"
23
+static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
19
[020h 0032 4] Asl Compiler Revision : 00000001
24
+ MMUAccessType access_type,
20
25
+ ARMMMUIdx mmu_idx, bool is_secure,
21
[024h 0036 2] Node count : 0003
26
+ GetPhysAddrResult *result,
22
[026h 0038 2] Node offset : 0030
27
+ ARMMMUFaultInfo *fi)
23
[028h 0040 8] Reserved : 0000000000000000
28
+{
24
29
+ uint64_t hcr;
25
[030h 0048 1] Type : 03 [VirtIO-PCI IOMMU]
30
+ uint8_t memattr;
26
[031h 0049 1] Reserved : 00
27
[032h 0050 2] Length : 0010
28
29
[034h 0052 2] PCI Segment : 0000
30
[036h 0054 2] PCI BDF number : 0010
31
[038h 0056 8] Reserved : 0000000000000000
32
33
[040h 0064 1] Type : 01 [PCI Range]
34
[041h 0065 1] Reserved : 00
35
[042h 0066 2] Length : 0018
36
37
[044h 0068 4] Endpoint start : 00003000
38
[048h 0072 2] PCI Segment start : 0000
39
[04Ah 0074 2] PCI Segment end : 0000
40
[04Ch 0076 2] PCI BDF start : 3000
41
[04Eh 0078 2] PCI BDF end : 30FF
42
[050h 0080 2] Output node : 0030
43
[052h 0082 6] Reserved : 000000000000
44
45
[058h 0088 1] Type : 01 [PCI Range]
46
[059h 0089 1] Reserved : 00
47
[05Ah 0090 2] Length : 0018
48
49
[05Ch 0092 4] Endpoint start : 00001000
50
[060h 0096 2] PCI Segment start : 0000
51
[062h 0098 2] PCI Segment end : 0000
52
[064h 0100 2] PCI BDF start : 1000
53
[066h 0102 2] PCI BDF end : 10FF
54
[068h 0104 2] Output node : 0030
55
[06Ah 0106 6] Reserved : 000000000000
56
57
And the DSDT diff is:
58
59
@@ -XXX,XX +XXX,XX @@
60
*
61
* Disassembling to symbolic ASL+ operators
62
*
63
- * Disassembly of tests/data/acpi/q35/DSDT, Fri Dec 10 15:03:08 2021
64
+ * Disassembly of /tmp/aml-H9Y5D1, Fri Dec 10 15:02:27 2021
65
*
66
* Original Table Header:
67
* Signature "DSDT"
68
- * Length 0x00002061 (8289)
69
+ * Length 0x000024B6 (9398)
70
* Revision 0x01 **** 32-bit table (V1), no 64-bit math support
71
- * Checksum 0xFA
72
+ * Checksum 0xA7
73
* OEM ID "BOCHS "
74
* OEM Table ID "BXPC "
75
* OEM Revision 0x00000001 (1)
76
@@ -XXX,XX +XXX,XX @@
77
}
78
}
79
80
+ Scope (\_SB)
81
+ {
82
+ Device (PC30)
83
+ {
84
+ Name (_UID, 0x30) // _UID: Unique ID
85
+ Name (_BBN, 0x30) // _BBN: BIOS Bus Number
86
+ Name (_HID, EisaId ("PNP0A08") /* PCI Express Bus */) // _HID: Hardware ID
87
+ Name (_CID, EisaId ("PNP0A03") /* PCI Bus */) // _CID: Compatible ID
88
+ Method (_OSC, 4, NotSerialized) // _OSC: Operating System Capabilities
89
+ {
90
+ CreateDWordField (Arg3, Zero, CDW1)
91
+ If ((Arg0 == ToUUID ("33db4d5b-1ff7-401c-9657-7441c03dd766") /* PCI Host Bridge Device */))
92
+ {
93
+ CreateDWordField (Arg3, 0x04, CDW2)
94
+ CreateDWordField (Arg3, 0x08, CDW3)
95
+ Local0 = CDW3 /* \_SB_.PC30._OSC.CDW3 */
96
+ Local0 &= 0x1F
97
+ If ((Arg1 != One))
98
+ {
99
+ CDW1 |= 0x08
100
+ }
101
+
31
+
102
+ If ((CDW3 != Local0))
32
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
103
+ {
33
+ int r_el = regime_el(env, mmu_idx);
104
+ CDW1 |= 0x10
34
+ if (arm_el_is_aa64(env, r_el)) {
105
+ }
35
+ int pamax = arm_pamax(env_archcpu(env));
36
+ uint64_t tcr = env->cp15.tcr_el[r_el];
37
+ int addrtop, tbi;
106
+
38
+
107
+ CDW3 = Local0
39
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
108
+ }
40
+ if (access_type == MMU_INST_FETCH) {
109
+ Else
41
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
110
+ {
42
+ }
111
+ CDW1 |= 0x04
43
+ tbi = (tbi >> extract64(address, 55, 1)) & 1;
112
+ }
44
+ addrtop = (tbi ? 55 : 63);
113
+
45
+
114
+ Return (Arg3)
46
+ if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
47
+ fi->type = ARMFault_AddressSize;
48
+ fi->level = 0;
49
+ fi->stage2 = false;
50
+ return 1;
115
+ }
51
+ }
116
+
52
+
117
+ Method (_PRT, 0, NotSerialized) // _PRT: PCI Routing Table
53
+ /*
118
+ {
54
+ * When TBI is disabled, we've just validated that all of the
119
+ Local0 = Package (0x80){}
55
+ * bits above PAMax are zero, so logically we only need to
120
+ Local1 = Zero
56
+ * clear the top byte for TBI. But it's clearer to follow
121
+ While ((Local1 < 0x80))
57
+ * the pseudocode set of addrdesc.paddress.
122
+ {
58
+ */
123
+ Local2 = (Local1 >> 0x02)
59
+ address = extract64(address, 0, 52);
124
+ Local3 = ((Local1 + Local2) & 0x03)
125
+ If ((Local3 == Zero))
126
+ {
127
+ Local4 = Package (0x04)
128
+ {
129
+ Zero,
130
+ Zero,
131
+ LNKD,
132
+ Zero
133
+ }
134
+ }
135
+
136
+ If ((Local3 == One))
137
+ {
138
+ Local4 = Package (0x04)
139
+ {
140
+ Zero,
141
+ Zero,
142
+ LNKA,
143
+ Zero
144
+ }
145
+ }
146
+
147
+ If ((Local3 == 0x02))
148
+ {
149
+ Local4 = Package (0x04)
150
+ {
151
+ Zero,
152
+ Zero,
153
+ LNKB,
154
+ Zero
155
+ }
156
+ }
157
+
158
+ If ((Local3 == 0x03))
159
+ {
160
+ Local4 = Package (0x04)
161
+ {
162
+ Zero,
163
+ Zero,
164
+ LNKC,
165
+ Zero
166
+ }
167
+ }
168
+
169
+ Local4 [Zero] = ((Local2 << 0x10) | 0xFFFF)
170
+ Local4 [One] = (Local1 & 0x03)
171
+ Local0 [Local1] = Local4
172
+ Local1++
173
+ }
174
+
175
+ Return (Local0)
176
+ }
177
+
178
+ Name (_CRS, ResourceTemplate () // _CRS: Current Resource Settings
179
+ {
180
+ WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
181
+ 0x0000, // Granularity
182
+ 0x0030, // Range Minimum
183
+ 0x0030, // Range Maximum
184
+ 0x0000, // Translation Offset
185
+ 0x0001, // Length
186
+ ,, )
187
+ })
188
+ }
60
+ }
189
+ }
61
+ }
190
+
62
+
191
+ Scope (\_SB)
63
+ result->phys = address;
192
+ {
64
+ result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
193
+ Device (PC20)
65
+ result->page_size = TARGET_PAGE_SIZE;
194
+ {
195
+ Name (_UID, 0x20) // _UID: Unique ID
196
+ Name (_BBN, 0x20) // _BBN: BIOS Bus Number
197
+ Name (_HID, EisaId ("PNP0A08") /* PCI Express Bus */) // _HID: Hardware ID
198
+ Name (_CID, EisaId ("PNP0A03") /* PCI Bus */) // _CID: Compatible ID
199
+ Method (_OSC, 4, NotSerialized) // _OSC: Operating System Capabilities
200
+ {
201
+ CreateDWordField (Arg3, Zero, CDW1)
202
+ If ((Arg0 == ToUUID ("33db4d5b-1ff7-401c-9657-7441c03dd766") /* PCI Host Bridge Device */))
203
+ {
204
+ CreateDWordField (Arg3, 0x04, CDW2)
205
+ CreateDWordField (Arg3, 0x08, CDW3)
206
+ Local0 = CDW3 /* \_SB_.PC20._OSC.CDW3 */
207
+ Local0 &= 0x1F
208
+ If ((Arg1 != One))
209
+ {
210
+ CDW1 |= 0x08
211
+ }
212
+
66
+
213
+ If ((CDW3 != Local0))
67
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
214
+ {
68
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
215
+ CDW1 |= 0x10
69
+ result->cacheattrs.shareability = 0;
216
+ }
70
+ result->cacheattrs.is_s2_format = false;
71
+ if (hcr & HCR_DC) {
72
+ if (hcr & HCR_DCT) {
73
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
74
+ } else {
75
+ memattr = 0xff; /* Normal, WB, RWA */
76
+ }
77
+ } else if (access_type == MMU_INST_FETCH) {
78
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
79
+ memattr = 0xee; /* Normal, WT, RA, NT */
80
+ } else {
81
+ memattr = 0x44; /* Normal, NC, No */
82
+ }
83
+ result->cacheattrs.shareability = 2; /* outer sharable */
84
+ } else {
85
+ memattr = 0x00; /* Device, nGnRnE */
86
+ }
87
+ result->cacheattrs.attrs = memattr;
88
+ return 0;
89
+}
217
+
90
+
218
+ CDW3 = Local0
91
bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
219
+ }
92
MMUAccessType access_type, ARMMMUIdx mmu_idx,
220
+ Else
93
bool is_secure, GetPhysAddrResult *result,
221
+ {
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
222
+ CDW1 |= 0x04
95
/* Definitely a real MMU, not an MPU */
223
+ }
96
224
+
97
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
225
+ Return (Arg3)
98
- uint64_t hcr;
226
+ }
99
- uint8_t memattr;
227
+
100
-
228
+ Method (_PRT, 0, NotSerialized) // _PRT: PCI Routing Table
101
- /*
229
+ {
102
- * MMU disabled. S1 addresses within aa64 translation regimes are
230
+ Local0 = Package (0x80){}
103
- * still checked for bounds -- see AArch64.TranslateAddressS1Off.
231
+ Local1 = Zero
104
- */
232
+ While ((Local1 < 0x80))
105
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
233
+ {
106
- int r_el = regime_el(env, mmu_idx);
234
+ Local2 = (Local1 >> 0x02)
107
- if (arm_el_is_aa64(env, r_el)) {
235
+ Local3 = ((Local1 + Local2) & 0x03)
108
- int pamax = arm_pamax(env_archcpu(env));
236
+ If ((Local3 == Zero))
109
- uint64_t tcr = env->cp15.tcr_el[r_el];
237
+ {
110
- int addrtop, tbi;
238
+ Local4 = Package (0x04)
111
-
239
+ {
112
- tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
240
+ Zero,
113
- if (access_type == MMU_INST_FETCH) {
241
+ Zero,
114
- tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
242
+ LNKD,
115
- }
243
+ Zero
116
- tbi = (tbi >> extract64(address, 55, 1)) & 1;
244
+ }
117
- addrtop = (tbi ? 55 : 63);
245
+ }
118
-
246
+
119
- if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
247
+ If ((Local3 == One))
120
- fi->type = ARMFault_AddressSize;
248
+ {
121
- fi->level = 0;
249
+ Local4 = Package (0x04)
122
- fi->stage2 = false;
250
+ {
123
- return 1;
251
+ Zero,
124
- }
252
+ Zero,
125
-
253
+ LNKA,
126
- /*
254
+ Zero
127
- * When TBI is disabled, we've just validated that all of the
255
+ }
128
- * bits above PAMax are zero, so logically we only need to
256
+ }
129
- * clear the top byte for TBI. But it's clearer to follow
257
+
130
- * the pseudocode set of addrdesc.paddress.
258
+ If ((Local3 == 0x02))
131
- */
259
+ {
132
- address = extract64(address, 0, 52);
260
+ Local4 = Package (0x04)
133
- }
261
+ {
134
- }
262
+ Zero,
135
- result->phys = address;
263
+ Zero,
136
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
264
+ LNKB,
137
- result->page_size = TARGET_PAGE_SIZE;
265
+ Zero
138
-
266
+ }
139
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
267
+ }
140
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
268
+
141
- result->cacheattrs.shareability = 0;
269
+ If ((Local3 == 0x03))
142
- result->cacheattrs.is_s2_format = false;
270
+ {
143
- if (hcr & HCR_DC) {
271
+ Local4 = Package (0x04)
144
- if (hcr & HCR_DCT) {
272
+ {
145
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
273
+ Zero,
146
- } else {
274
+ Zero,
147
- memattr = 0xff; /* Normal, WB, RWA */
275
+ LNKC,
148
- }
276
+ Zero
149
- } else if (access_type == MMU_INST_FETCH) {
277
+ }
150
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
278
+ }
151
- memattr = 0xee; /* Normal, WT, RA, NT */
279
+
152
- } else {
280
+ Local4 [Zero] = ((Local2 << 0x10) | 0xFFFF)
153
- memattr = 0x44; /* Normal, NC, No */
281
+ Local4 [One] = (Local1 & 0x03)
154
- }
282
+ Local0 [Local1] = Local4
155
- result->cacheattrs.shareability = 2; /* outer sharable */
283
+ Local1++
156
- } else {
284
+ }
157
- memattr = 0x00; /* Device, nGnRnE */
285
+
158
- }
286
+ Return (Local0)
159
- result->cacheattrs.attrs = memattr;
287
+ }
160
- return 0;
288
+
161
+ return get_phys_addr_disabled(env, address, access_type, mmu_idx,
289
+ Name (_CRS, ResourceTemplate () // _CRS: Current Resource Settings
162
+ is_secure, result, fi);
290
+ {
163
}
291
+ WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
164
-
292
+ 0x0000, // Granularity
165
if (regime_using_lpae_format(env, mmu_idx)) {
293
+ 0x0020, // Range Minimum
166
return get_phys_addr_lpae(env, address, access_type, mmu_idx,
294
+ 0x0020, // Range Maximum
167
is_secure, false, result, fi);
295
+ 0x0000, // Translation Offset
296
+ 0x0001, // Length
297
+ ,, )
298
+ })
299
+ }
300
+ }
301
+
302
+ Scope (\_SB)
303
+ {
304
+ Device (PC10)
305
+ {
306
+ Name (_UID, 0x10) // _UID: Unique ID
307
+ Name (_BBN, 0x10) // _BBN: BIOS Bus Number
308
+ Name (_HID, EisaId ("PNP0A08") /* PCI Express Bus */) // _HID: Hardware ID
309
+ Name (_CID, EisaId ("PNP0A03") /* PCI Bus */) // _CID: Compatible ID
310
+ Method (_OSC, 4, NotSerialized) // _OSC: Operating System Capabilities
311
+ {
312
+ CreateDWordField (Arg3, Zero, CDW1)
313
+ If ((Arg0 == ToUUID ("33db4d5b-1ff7-401c-9657-7441c03dd766") /* PCI Host Bridge Device */))
314
+ {
315
+ CreateDWordField (Arg3, 0x04, CDW2)
316
+ CreateDWordField (Arg3, 0x08, CDW3)
317
+ Local0 = CDW3 /* \_SB_.PC10._OSC.CDW3 */
318
+ Local0 &= 0x1F
319
+ If ((Arg1 != One))
320
+ {
321
+ CDW1 |= 0x08
322
+ }
323
+
324
+ If ((CDW3 != Local0))
325
+ {
326
+ CDW1 |= 0x10
327
+ }
328
+
329
+ CDW3 = Local0
330
+ }
331
+ Else
332
+ {
333
+ CDW1 |= 0x04
334
+ }
335
+
336
+ Return (Arg3)
337
+ }
338
+
339
+ Method (_PRT, 0, NotSerialized) // _PRT: PCI Routing Table
340
+ {
341
+ Local0 = Package (0x80){}
342
+ Local1 = Zero
343
+ While ((Local1 < 0x80))
344
+ {
345
+ Local2 = (Local1 >> 0x02)
346
+ Local3 = ((Local1 + Local2) & 0x03)
347
+ If ((Local3 == Zero))
348
+ {
349
+ Local4 = Package (0x04)
350
+ {
351
+ Zero,
352
+ Zero,
353
+ LNKD,
354
+ Zero
355
+ }
356
+ }
357
+
358
+ If ((Local3 == One))
359
+ {
360
+ Local4 = Package (0x04)
361
+ {
362
+ Zero,
363
+ Zero,
364
+ LNKA,
365
+ Zero
366
+ }
367
+ }
368
+
369
+ If ((Local3 == 0x02))
370
+ {
371
+ Local4 = Package (0x04)
372
+ {
373
+ Zero,
374
+ Zero,
375
+ LNKB,
376
+ Zero
377
+ }
378
+ }
379
+
380
+ If ((Local3 == 0x03))
381
+ {
382
+ Local4 = Package (0x04)
383
+ {
384
+ Zero,
385
+ Zero,
386
+ LNKC,
387
+ Zero
388
+ }
389
+ }
390
+
391
+ Local4 [Zero] = ((Local2 << 0x10) | 0xFFFF)
392
+ Local4 [One] = (Local1 & 0x03)
393
+ Local0 [Local1] = Local4
394
+ Local1++
395
+ }
396
+
397
+ Return (Local0)
398
+ }
399
+
400
+ Name (_CRS, ResourceTemplate () // _CRS: Current Resource Settings
401
+ {
402
+ WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
403
+ 0x0000, // Granularity
404
+ 0x0010, // Range Minimum
405
+ 0x0010, // Range Maximum
406
+ 0x0000, // Translation Offset
407
+ 0x0001, // Length
408
+ ,, )
409
+ })
410
+ }
411
+ }
412
+
413
Scope (\_SB.PCI0)
414
{
415
Name (_CRS, ResourceTemplate () // _CRS: Current Resource Settings
416
@@ -XXX,XX +XXX,XX @@
417
WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
418
0x0000, // Granularity
419
0x0000, // Range Minimum
420
- 0x00FF, // Range Maximum
421
+ 0x000F, // Range Maximum
422
0x0000, // Translation Offset
423
- 0x0100, // Length
424
+ 0x0010, // Length
425
,, )
426
IO (Decode16,
427
0x0CF8, // Range Minimum
428
@@ -XXX,XX +XXX,XX @@
429
}
430
}
431
432
+ Device (S10)
433
+ {
434
+ Name (_ADR, 0x00020000) // _ADR: Address
435
+ }
436
+
437
+ Device (S18)
438
+ {
439
+ Name (_ADR, 0x00030000) // _ADR: Address
440
+ }
441
+
442
+ Device (S20)
443
+ {
444
+ Name (_ADR, 0x00040000) // _ADR: Address
445
+ }
446
+
447
+ Device (S28)
448
+ {
449
+ Name (_ADR, 0x00050000) // _ADR: Address
450
+ }
451
+
452
Method (PCNT, 0, NotSerialized)
453
{
454
}
455
456
Reviewed-by: Eric Auger <eric.auger@redhat.com>
457
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
458
Message-id: 20211210170415.583179-8-jean-philippe@linaro.org
459
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
460
---
461
tests/qtest/bios-tables-test-allowed-diff.h | 2 --
462
tests/data/acpi/q35/DSDT.viot | Bin 0 -> 9398 bytes
463
tests/data/acpi/q35/VIOT.viot | Bin 0 -> 112 bytes
464
3 files changed, 2 deletions(-)
465
466
diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
467
index XXXXXXX..XXXXXXX 100644
468
--- a/tests/qtest/bios-tables-test-allowed-diff.h
469
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
470
@@ -XXX,XX +XXX,XX @@
471
/* List of comma-separated changed AML files to ignore */
472
"tests/data/acpi/virt/VIOT",
473
-"tests/data/acpi/q35/DSDT.viot",
474
-"tests/data/acpi/q35/VIOT.viot",
475
diff --git a/tests/data/acpi/q35/DSDT.viot b/tests/data/acpi/q35/DSDT.viot
476
index XXXXXXX..XXXXXXX 100644
477
GIT binary patch
478
literal 9398
479
zcmeHNO>7&-8J*>iv|O&FB}G~Oi$yp||57BBoWHhc5OS9yDTx$CQgH$r;8Idr*-4Q_
480
z5(9Az1F`}niVsB-)<KW7p`g9Br(A2Gm-gmc1N78GFS!;)e2V(MnH_0{q<{#yMgn&C
481
zn|*J-d9yqFhO_H6z19~`FlPL*u<DkZ*}|)JH;X@mF-FI<cPg<fti9tEN*yB^i5czN
482
zNq&q?!OZ;BE3B7{KWzJ-`Tn~f`9?Qj8~2^N8{Oc8J%57{==w%rS#;nOCp*nTr@iZ1
483
zb+?i;JLQUJ=O0?8*>S~D)a>NF1~WVB6^~_B#yhJ`H+JU@=6aXs`?Yv)J2h=N?drcS
484
zeLZ*n<<Bm^n}6`jfBx#u8&(W}1?)}iF9o#mZ~E2+zwdn7yK3AbIzKnxpZ>JRPm3~#
485
z&ICS{+_OayRW-l=Mtk=~uaS3o8z<_udd|(wqg`&JnVPfCe>BUOO`Su3e>pff_^UW%
486
z&JE^NO`)=Amg~iqRB1pPscP?(>#ZuY8GHCmlEvD$9g3%4Db~Dfz2SATnddvrR-Oe^
487
z;s;dJec!hnzi)ri^I6YN9vtkm{^TdUF8h7gX8-<Qe4p)GQ=)AtYx2VcwdLVAEXEjG
488
z^Mj|UHPqkj-LsWuzQem1>F3atdZn=zv3$#RmZzSHN+6-yyU#8cJb=YDilX&sl}vNm
489
znkgAR^O<3kj4if>{ly5fwRfMWuC5=lrlvKPX~i#654Cp}R_d*JS$9laZ$ra6)<ns8
490
zFZy28G%xP(nit&F>LDi%G<tIc=TY=gl$jSD&Uv!Yat~XR46h%rI$!}a%!|xG7u8Zn
491
zeY8_|n=K>xz_v_W8VX$W-Fg-qFWcT}7MCyz{%%{ia7hZ>Law-k6NOr}VI&_48U=2l
492
zwqDKFE8eTwwozDdms#e?x?5a|v>&JF;2_v0L~z5n%BYU^52<*cWuD4|GYUm@1+?))
493
zte^45>Rz)t*<T5V#={r>@t@{%?^i#W{i=HAZ*Dc9y59Va-+#P!jrGs;u38a{fLr`N
494
zvT@rUu>DljxJ?^&Z?-?vyJn3C>3D=qux{Y*bs5|5n)Qmi$TD^Zdn4GU$ocJS2Hh-<
495
z`xPI^^+v0nUVdjMos8k`WGl7hA`{03ju%<lrgAHSpd^DRf-*}_#Ly0mB!LSfVgWcQ
496
z&T$@~G9)JI=hz5m0vkrel+Xy{Oh7pkAu-V!j*W7rY(bO}Q$nMH2`FbGB&N)QaV4<4
497
zo)~9JXiP9=;}NPl<C@MmXG&;XFlFNrsyfFsonxFSp<}vEgsRSQP3O3#b6nSnP}ON_
498
zI!#Tdsp~|j>ckUB>FI=~GokB5sOq#dotCE4(sd$KbtW~PNlj-`*NIToiD#j5J#9^=
499
zt?NXn>YUJYPG~wObe#xQos*i*NloXZt`niEb4t@WrRki~bs|)CI+{*L)9L6s5vn><
500
zn$DD_Go|Z9sOn5>I@6lYw5}7Os&iV?Ij!lO)^#FOb!If38BJ$K*NIToIiu;E(R9w}
501
zIuWWmPiZ<&X*y5oIuWWmF_XaEC!a&Jn$B5WCqh-{X-(&8P3LJ{Cqh-{8P3dyPr@^t
502
zSqL9?X9Uwd3W@23*s~h*tj0X6GZCuHa~kuU#yqDp5vt7d8uPryJg+kms?5hU=3^T3
503
zF`bD}WnSP+=`t5MQ$FJ_2&Q~+BP6E0f^%BVIW6a$o)e+SX~IDBih-7z6{O~7YTy`&
504
zLjy&Cv?7QikV#>n0>>@MV8oK`Gmun34-FKdlm-J8SZSaNlnhir4-FI{S|bfqV8e)V
505
zss<{chX#reE#g=hsKAC%sF6d-Km}BWs!kZFsFpKfpbC@>6rprQGEjt4Ck#|zITHq|
506
zK*>M_l;<P^MJRQ`Kn0dFVW0|>3{*fllMEE0)CmI>Sk8ojDo`>|0p(0GP=xY&!axO<
507
zGhv_#lnhirIg<<&q0|Wj6<E%MfhtfkPyyvkGEjt4Ck#|zITHq|K*>M_lrzad5lWpf
508
zP=V!47^ngz0~JutBm+e#b;3XemNQ|X3X}{~Ksl2P6rt1!0~J`#gn=qhGEf2KOfpb}
509
zQYQ>lU^x>8szAv=1(Y+%KoLrvFi?TzOc<yFB?A>u&LjgxD0RX>1(q{mpbC@>R6seC
510
z3>2Z%2?G^a&V+#~P%=;f<xDbAgi<FARA4z12C6{GKn0XD$v_cGoiI>=<xCi;0wn_#
511
zP|hR+MJRQ`Kn0dFVW0|>3{*fllMEE0)CmI>Sk8ojDo`>|0p(0GP=rz^3{+q_69%e4
512
z$v_2^Gs!>^N}VuJf#pmXr~)Me6;RG314Srx!axxz28u{EP=u<1B2)}iVZuNaCK;&0
513
zBm-5LFi?dF167!0pbC==RAItE6($T+VUmF=Ofpb~2?JG_Fi?d_2C6X0Kouqo6p_5T
514
zFi=FeV!SiSKoR0H$dH(_Z(*Q_WZ%L-5y`$K14StNmJAdjmWs}HV4<vU_xO+1efmLq
515
zZ;W>N_U)fP6Qy6Nw5mbt9Y(#emWSi66=>tq#xoh#Ue=0qyhxi8ZOUe5y0V7VfPUhp
516
zwX=;ymc+i5%sg9Ja~lZ&8oAV@mHc>&CHP9v4R(jhtT?un;O4e9#pno)Xkh7OWgK&a
517
zyj=3Iv0OuoK_;5rOr5f(Kb~ZXDBO+V`OWYo#_C08imwChQxnjdd?wZLDou8aj;$SD
518
zGDYiA3<$Tu<JnHL(KPOChi#zrR32t83}naR$+ym4P_h?z_5#|cW-nw$XD_sOtE62l
519
zrD3@*)NVyiklt0&yF9%+klsBey&I<Y2E<!f(E8TuJte)z(|ZHyy<^gQVfx}=`q&B5
520
z7nSryp1wGczIaUfVwiq$Fn#<4=@*ssi#+|}K>EdF(l3VTOM~ghPLRH&q%ZOGrGfON
521
zW73zx^yR_y<0nX8R??Sw`tm^f@-gYlNFSp|*<gA{q?Zp5Oe-+l#rmyYmKozi9y=P>
522
zVReJU*h=ZuVXiS$ohTbw-O#v9>(yZbGE|)?8(H1ZIKvV!jWa0>vy!3eMA^vdhQ>`s
523
zuMSg{q3T50$m)j1!HixV<}X9liL#N^4c*tL^y)CF8LCc{jjV3yKAqL8!%SzWI#H%q
524
z=bSrQ&)%JCRttF5g4Zf`6l?y@>PzD7MA^D>wBlcH6r1ucwJ<p0O%rZ?JzIY3-QdmZ
525
zzs|n>`a5r3e|z)wcUaqS>nqFQ-8x}eCF4u`OWUxqst-@1rSmUs%WmKP5e0dcb?e2N
526
z;Z|x*!);VwF|Yuhqs^khqOM!@u*jY!WYldISF(V6`BoNd&6Qfk3>X#SuD^7J>p_D=
527
zBPa51y^_n#=cpOt#Zf$ya$Ae9Mfz56n|<i!a=ELS@)%a{^NIH3SDuN<R~sah1km#P
528
zU@?*f%<rG=4W1wgfi;C?_n|W@%lm$&8YfvNOJodIg&IcIpIJQRHr<+ej11GQ6)&eF
529
z2Lam*jIH}#y0>KnY%4JQfOYS$*uU%f#@$U6`N8I3N-lV?5ErFCdv~xDmu2(wexld4
530
z4v^;aVAT2k6GJ^m*FD(Wqc(Qg^)6a<?}h$zLoj}4;PP!+(O{@!a1y-hoAhF_7!z+6
531
zslpAmNtYbjHrw-~#SPVk_FUf>-Obg6yV`8o$8_`PyJe_;bY5_EMBfBfWU!Q=*9HsG
532
z%_Cda{@_Krr!oHVhv9+y+T5qR8zZ2aZ>5r!$*|f$^U%yBUYfR&B!+EYy_PwL!BeUi
533
zJH^}r3r9Q+B)X@Z)fk=P13w&7x#wBtXTZ)g>WITPg5r&pQc!nmyrmk#S(>>b9xnNr
534
zx_b#v9Xv-Y><Wb%?S^0Xe&<)bbKl_=Z|3C$tf|F<bYzE*mfHB;uC)`q-?buaBe?l?
535
zcLTpK*k<49Z32`K?|nSBMFqxTK^_IE-li2fEGdK~(ZdoKBl6ab4a;Hler#`xvEXJG
536
zb?<E%EZExfX>jcOVhS*0rS~RS1dA#xhkv@Nct@#q?LyeKS<$uFec!bw>{@uu$gZ6a
537
zyVen1i{1BKd%~`D7|m$;U0a<I*3I7%^N%N%lGYdU_GS!gaR8T$NA@GzFi~z`l7hdl
538
zarZy6590|88pi(1zq;V(>38zM0sT&<zX;R5$1w3;`_JMG`;&I&0Y23DMx1%@(w(R9
539
z4M$j;D5J+Gy%fijRQsctzFKf&cv|BAz#YLq3CZJWDdtL4u1u1|mkdcUp7|sxJC+?Y
540
z_@@s`v3j}Q7*z>6X~cwUxUL8G1KT)_XTp!KAbs;vCp{K3&~_X@+ew=-D}v`2MbFV0
541
zQsVsL=rXi-pI*G|iiz;VTCutgUs)hDzV1+4?8KcoP3xROf<M%qC6lgVdpFt4<-|uM
542
z=#rl_b1#YjSIl6Toj2z_hOZcKupkdE(LozC(fN=FY(x|sk)ym|;Rq2E1xJWD%Z!ol
543
Gu>S+TT-130
544
545
literal 0
546
HcmV?d00001
547
548
diff --git a/tests/data/acpi/q35/VIOT.viot b/tests/data/acpi/q35/VIOT.viot
549
index XXXXXXX..XXXXXXX 100644
550
GIT binary patch
551
literal 112
552
zcmWIZ^baXu00LVle`k+i1*eDrX9XZ&1PX!JAex!M0Hgv8m>C3sGzdcgBZCA3T-xBj
553
Q0Zb)W9Hva*zW_`e0M!8s0RR91
554
555
literal 0
556
HcmV?d00001
557
558
--
168
--
559
2.25.1
169
2.25.1
560
561
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Do not apply memattr or shareability for Stage2 translations.
4
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
5
pseudocode in AArch64.S1DisabledOutput.
2
6
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
11
---
7
target/arm/translate-a64.c | 7 ++++---
12
target/arm/ptw.c | 48 +++++++++++++++++++++++++-----------------------
8
1 file changed, 4 insertions(+), 3 deletions(-)
13
1 file changed, 25 insertions(+), 23 deletions(-)
9
14
10
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
11
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-a64.c
17
--- a/target/arm/ptw.c
13
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/ptw.c
14
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
20
GetPhysAddrResult *result,
21
ARMMMUFaultInfo *fi)
15
{
22
{
16
DisasContext *s = container_of(dcbase, DisasContext, base);
23
- uint64_t hcr;
17
CPUARMState *env = cpu->env_ptr;
24
- uint8_t memattr;
18
+ uint64_t pc = s->base.pc_next;
25
+ uint8_t memattr = 0x00; /* Device nGnRnE */
19
uint32_t insn;
26
+ uint8_t shareability = 0; /* non-sharable */
20
27
21
if (s->ss_active && !s->pstate_ss) {
28
if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
22
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
29
int r_el = regime_el(env, mmu_idx);
23
return;
30
+
31
if (arm_el_is_aa64(env, r_el)) {
32
int pamax = arm_pamax(env_archcpu(env));
33
uint64_t tcr = env->cp15.tcr_el[r_el];
34
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
35
*/
36
address = extract64(address, 0, 52);
37
}
38
+
39
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
40
+ if (r_el == 1) {
41
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
42
+ if (hcr & HCR_DC) {
43
+ if (hcr & HCR_DCT) {
44
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
45
+ } else {
46
+ memattr = 0xff; /* Normal, WB, RWA */
47
+ }
48
+ }
49
+ }
50
+ if (memattr == 0 && access_type == MMU_INST_FETCH) {
51
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
52
+ memattr = 0xee; /* Normal, WT, RA, NT */
53
+ } else {
54
+ memattr = 0x44; /* Normal, NC, No */
55
+ }
56
+ shareability = 2; /* outer sharable */
57
+ }
58
+ result->cacheattrs.is_s2_format = false;
24
}
59
}
25
60
26
- s->pc_curr = s->base.pc_next;
61
result->phys = address;
27
- insn = arm_ldl_code(env, &s->base, s->base.pc_next, s->sctlr_b);
62
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
28
+ s->pc_curr = pc;
63
result->page_size = TARGET_PAGE_SIZE;
29
+ insn = arm_ldl_code(env, &s->base, pc, s->sctlr_b);
64
-
30
s->insn = insn;
65
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
31
- s->base.pc_next += 4;
66
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
32
+ s->base.pc_next = pc + 4;
67
- result->cacheattrs.shareability = 0;
33
68
- result->cacheattrs.is_s2_format = false;
34
s->fp_access_checked = false;
69
- if (hcr & HCR_DC) {
35
s->sve_access_checked = false;
70
- if (hcr & HCR_DCT) {
71
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
72
- } else {
73
- memattr = 0xff; /* Normal, WB, RWA */
74
- }
75
- } else if (access_type == MMU_INST_FETCH) {
76
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
77
- memattr = 0xee; /* Normal, WT, RA, NT */
78
- } else {
79
- memattr = 0x44; /* Normal, NC, No */
80
- }
81
- result->cacheattrs.shareability = 2; /* outer sharable */
82
- } else {
83
- memattr = 0x00; /* Device, nGnRnE */
84
- }
85
+ result->cacheattrs.shareability = shareability;
86
result->cacheattrs.attrs = memattr;
87
return 0;
88
}
36
--
89
--
37
2.25.1
90
2.25.1
38
39
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We will reuse this section of arm_deliver_fault for
3
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
4
raising pc alignment faults.
4
so that it may be passed directly to tlb_set_page_full.
5
6
The change is large, but mostly mechanical. The major
7
non-mechanical change is page_size -> lg_page_size.
8
Most of the time this is obvious, and is related to
9
TARGET_PAGE_BITS.
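For reference, the conversion is just the base-2 log of the old byte size; a minimal sketch (the helper name is invented):

    #include <stdint.h>
    #include <assert.h>

    /* A power-of-two byte size becomes its log2: 12 for 4K, 16 for 64K, 20 for 1M. */
    static unsigned size_to_lg(uint64_t page_size)
    {
        assert(page_size != 0 && (page_size & (page_size - 1)) == 0);
        return 63 - __builtin_clzll(page_size);    /* log2 of a power of two */
    }

e.g. size_to_lg(0x1000) == 12, which is what the lg_page_size assignments in the diff below store directly.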
5
10
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
15
---
10
target/arm/tlb_helper.c | 45 +++++++++++++++++++++++++----------------
16
target/arm/internals.h | 5 +-
11
1 file changed, 28 insertions(+), 17 deletions(-)
17
target/arm/helper.c | 12 +--
18
target/arm/m_helper.c | 20 ++---
19
target/arm/ptw.c | 179 ++++++++++++++++++++--------------------
20
target/arm/tlb_helper.c | 9 +-
21
5 files changed, 111 insertions(+), 114 deletions(-)
12
22
23
diff --git a/target/arm/internals.h b/target/arm/internals.h
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/internals.h
26
+++ b/target/arm/internals.h
27
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
28
29
/* Fields that are valid upon success. */
30
typedef struct GetPhysAddrResult {
31
- hwaddr phys;
32
- target_ulong page_size;
33
- int prot;
34
- MemTxAttrs attrs;
35
+ CPUTLBEntryFull f;
36
ARMCacheAttrs cacheattrs;
37
} GetPhysAddrResult;
38
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
44
/* Create a 64-bit PAR */
45
par64 = (1 << 11); /* LPAE bit always set */
46
if (!ret) {
47
- par64 |= res.phys & ~0xfffULL;
48
- if (!res.attrs.secure) {
49
+ par64 |= res.f.phys_addr & ~0xfffULL;
50
+ if (!res.f.attrs.secure) {
51
par64 |= (1 << 9); /* NS */
52
}
53
par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
54
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
55
*/
56
if (!ret) {
57
/* We do not set any attribute bits in the PAR */
58
- if (res.page_size == (1 << 24)
59
+ if (res.f.lg_page_size == 24
60
&& arm_feature(env, ARM_FEATURE_V7)) {
61
- par64 = (res.phys & 0xff000000) | (1 << 1);
62
+ par64 = (res.f.phys_addr & 0xff000000) | (1 << 1);
63
} else {
64
- par64 = res.phys & 0xfffff000;
65
+ par64 = res.f.phys_addr & 0xfffff000;
66
}
67
- if (!res.attrs.secure) {
68
+ if (!res.f.attrs.secure) {
69
par64 |= (1 << 9); /* NS */
70
}
71
} else {
72
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/m_helper.c
75
+++ b/target/arm/m_helper.c
76
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
77
}
78
goto pend_fault;
79
}
80
- address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value,
81
- res.attrs, &txres);
82
+ address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr,
83
+ value, res.f.attrs, &txres);
84
if (txres != MEMTX_OK) {
85
/* BusFault trying to write the data */
86
if (mode == STACK_LAZYFP) {
87
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
88
goto pend_fault;
89
}
90
91
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
92
- res.attrs, &txres);
93
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
94
+ res.f.phys_addr, res.f.attrs, &txres);
95
if (txres != MEMTX_OK) {
96
/* BusFault trying to read the data */
97
qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
98
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
99
qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
100
return false;
101
}
102
- *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys,
103
- res.attrs, &txres);
104
+ *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs),
105
+ res.f.phys_addr, res.f.attrs, &txres);
106
if (txres != MEMTX_OK) {
107
env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
108
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
109
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
110
}
111
return false;
112
}
113
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
114
- res.attrs, &txres);
115
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
116
+ res.f.phys_addr, res.f.attrs, &txres);
117
if (txres != MEMTX_OK) {
118
/* BusFault trying to read the data */
119
qemu_log_mask(CPU_LOG_INT,
120
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
121
} else {
122
mrvalid = true;
123
}
124
- r = res.prot & PAGE_READ;
125
- rw = res.prot & PAGE_WRITE;
126
+ r = res.f.prot & PAGE_READ;
127
+ rw = res.f.prot & PAGE_WRITE;
128
} else {
129
r = false;
130
rw = false;
131
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
132
index XXXXXXX..XXXXXXX 100644
133
--- a/target/arm/ptw.c
134
+++ b/target/arm/ptw.c
135
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
136
assert(!is_secure);
137
}
138
139
- addr = s2.phys;
140
+ addr = s2.f.phys_addr;
141
}
142
return addr;
143
}
144
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
145
/* 1Mb section. */
146
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
147
ap = (desc >> 10) & 3;
148
- result->page_size = 1024 * 1024;
149
+ result->f.lg_page_size = 20; /* 1MB */
150
} else {
151
/* Lookup l2 entry. */
152
if (type == 1) {
153
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
154
case 1: /* 64k page. */
155
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
156
ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
157
- result->page_size = 0x10000;
158
+ result->f.lg_page_size = 16;
159
break;
160
case 2: /* 4k page. */
161
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
162
ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
163
- result->page_size = 0x1000;
164
+ result->f.lg_page_size = 12;
165
break;
166
case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
167
if (type == 1) {
168
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
169
if (arm_feature(env, ARM_FEATURE_XSCALE)
170
|| arm_feature(env, ARM_FEATURE_V6)) {
171
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
172
- result->page_size = 0x1000;
173
+ result->f.lg_page_size = 12;
174
} else {
175
/*
176
* UNPREDICTABLE in ARMv5; we choose to take a
177
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
178
}
179
} else {
180
phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
181
- result->page_size = 0x400;
182
+ result->f.lg_page_size = 10;
183
}
184
ap = (desc >> 4) & 3;
185
break;
186
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
187
g_assert_not_reached();
188
}
189
}
190
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
191
- result->prot |= result->prot ? PAGE_EXEC : 0;
192
- if (!(result->prot & (1 << access_type))) {
193
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
194
+ result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
195
+ if (!(result->f.prot & (1 << access_type))) {
196
/* Access permission fault. */
197
fi->type = ARMFault_Permission;
198
goto do_fault;
199
}
200
- result->phys = phys_addr;
201
+ result->f.phys_addr = phys_addr;
202
return false;
203
do_fault:
204
fi->domain = domain;
205
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
206
phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
207
phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
208
phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
209
- result->page_size = 0x1000000;
210
+ result->f.lg_page_size = 24; /* 16MB */
211
} else {
212
/* Section. */
213
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
214
- result->page_size = 0x100000;
215
+ result->f.lg_page_size = 20; /* 1MB */
216
}
217
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
218
xn = desc & (1 << 4);
219
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
220
case 1: /* 64k page. */
221
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
222
xn = desc & (1 << 15);
223
- result->page_size = 0x10000;
224
+ result->f.lg_page_size = 16;
225
break;
226
case 2: case 3: /* 4k page. */
227
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
228
xn = desc & 1;
229
- result->page_size = 0x1000;
230
+ result->f.lg_page_size = 12;
231
break;
232
default:
233
/* Never happens, but compiler isn't smart enough to tell. */
234
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
235
}
236
}
237
if (domain_prot == 3) {
238
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
239
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
240
} else {
241
if (pxn && !regime_is_user(env, mmu_idx)) {
242
xn = 1;
243
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
244
fi->type = ARMFault_AccessFlag;
245
goto do_fault;
246
}
247
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
248
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
249
} else {
250
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
251
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
252
}
253
- if (result->prot && !xn) {
254
- result->prot |= PAGE_EXEC;
255
+ if (result->f.prot && !xn) {
256
+ result->f.prot |= PAGE_EXEC;
257
}
258
- if (!(result->prot & (1 << access_type))) {
259
+ if (!(result->f.prot & (1 << access_type))) {
260
/* Access permission fault. */
261
fi->type = ARMFault_Permission;
262
goto do_fault;
263
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
264
* the CPU doesn't support TZ or this is a non-secure translation
265
* regime, because the attribute will already be non-secure.
266
*/
267
- result->attrs.secure = false;
268
+ result->f.attrs.secure = false;
269
}
270
- result->phys = phys_addr;
271
+ result->f.phys_addr = phys_addr;
272
return false;
273
do_fault:
274
fi->domain = domain;
275
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
276
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
277
ns = mmu_idx == ARMMMUIdx_Stage2;
278
xn = extract32(attrs, 11, 2);
279
- result->prot = get_S2prot(env, ap, xn, s1_is_el0);
280
+ result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
281
} else {
282
ns = extract32(attrs, 3, 1);
283
xn = extract32(attrs, 12, 1);
284
pxn = extract32(attrs, 11, 1);
285
- result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
286
+ result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
287
}
288
289
fault_type = ARMFault_Permission;
290
- if (!(result->prot & (1 << access_type))) {
291
+ if (!(result->f.prot & (1 << access_type))) {
292
goto do_fault;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
296
* the CPU doesn't support TZ or this is a non-secure translation
297
* regime, because the attribute will already be non-secure.
298
*/
299
- result->attrs.secure = false;
300
+ result->f.attrs.secure = false;
301
}
302
/* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
303
if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
304
- arm_tlb_bti_gp(&result->attrs) = true;
305
+ arm_tlb_bti_gp(&result->f.attrs) = true;
306
}
307
308
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
309
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
310
result->cacheattrs.shareability = extract32(attrs, 6, 2);
311
}
312
313
- result->phys = descaddr;
314
- result->page_size = page_size;
315
+ result->f.phys_addr = descaddr;
316
+ result->f.lg_page_size = ctz64(page_size);
317
return false;
318
319
do_fault:
320
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
321
322
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
323
/* MPU disabled. */
324
- result->phys = address;
325
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
326
+ result->f.phys_addr = address;
327
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
328
return false;
329
}
330
331
- result->phys = address;
332
+ result->f.phys_addr = address;
333
for (n = 7; n >= 0; n--) {
334
base = env->cp15.c6_region[n];
335
if ((base & 1) == 0) {
336
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
337
fi->level = 1;
338
return true;
339
}
340
- result->prot = PAGE_READ | PAGE_WRITE;
341
+ result->f.prot = PAGE_READ | PAGE_WRITE;
342
break;
343
case 2:
344
- result->prot = PAGE_READ;
345
+ result->f.prot = PAGE_READ;
346
if (!is_user) {
347
- result->prot |= PAGE_WRITE;
348
+ result->f.prot |= PAGE_WRITE;
349
}
350
break;
351
case 3:
352
- result->prot = PAGE_READ | PAGE_WRITE;
353
+ result->f.prot = PAGE_READ | PAGE_WRITE;
354
break;
355
case 5:
356
if (is_user) {
357
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
358
fi->level = 1;
359
return true;
360
}
361
- result->prot = PAGE_READ;
362
+ result->f.prot = PAGE_READ;
363
break;
364
case 6:
365
- result->prot = PAGE_READ;
366
+ result->f.prot = PAGE_READ;
367
break;
368
default:
369
/* Bad permission. */
370
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
371
fi->level = 1;
372
return true;
373
}
374
- result->prot |= PAGE_EXEC;
375
+ result->f.prot |= PAGE_EXEC;
376
return false;
377
}
378
379
static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
380
- int32_t address, int *prot)
381
+ int32_t address, uint8_t *prot)
382
{
383
if (!arm_feature(env, ARM_FEATURE_M)) {
384
*prot = PAGE_READ | PAGE_WRITE;
385
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
386
int n;
387
bool is_user = regime_is_user(env, mmu_idx);
388
389
- result->phys = address;
390
- result->page_size = TARGET_PAGE_SIZE;
391
- result->prot = 0;
392
+ result->f.phys_addr = address;
393
+ result->f.lg_page_size = TARGET_PAGE_BITS;
394
+ result->f.prot = 0;
395
396
if (regime_translation_disabled(env, mmu_idx, secure) ||
397
m_is_ppb_region(env, address)) {
398
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
399
* which always does a direct read using address_space_ldl(), rather
400
* than going via this function, so we don't need to check that here.
401
*/
402
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
403
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
404
} else { /* MPU enabled */
405
for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
406
/* region search */
407
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
408
if (ranges_overlap(base, rmask,
409
address & TARGET_PAGE_MASK,
410
TARGET_PAGE_SIZE)) {
411
- result->page_size = 1;
412
+ result->f.lg_page_size = 0;
413
}
414
continue;
415
}
416
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
417
continue;
418
}
419
if (rsize < TARGET_PAGE_BITS) {
420
- result->page_size = 1 << rsize;
421
+ result->f.lg_page_size = rsize;
422
}
423
break;
424
}
425
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
426
fi->type = ARMFault_Background;
427
return true;
428
}
429
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
430
+ get_phys_addr_pmsav7_default(env, mmu_idx, address,
431
+ &result->f.prot);
432
} else { /* a MPU hit! */
433
uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
434
uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
435
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
436
case 5:
437
break; /* no access */
438
case 3:
439
- result->prot |= PAGE_WRITE;
440
+ result->f.prot |= PAGE_WRITE;
441
/* fall through */
442
case 2:
443
case 6:
444
- result->prot |= PAGE_READ | PAGE_EXEC;
445
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
446
break;
447
case 7:
448
/* for v7M, same as 6; for R profile a reserved value */
449
if (arm_feature(env, ARM_FEATURE_M)) {
450
- result->prot |= PAGE_READ | PAGE_EXEC;
451
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
452
break;
453
}
454
/* fall through */
455
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
456
case 1:
457
case 2:
458
case 3:
459
- result->prot |= PAGE_WRITE;
460
+ result->f.prot |= PAGE_WRITE;
461
/* fall through */
462
case 5:
463
case 6:
464
- result->prot |= PAGE_READ | PAGE_EXEC;
465
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
466
break;
467
case 7:
468
/* for v7M, same as 6; for R profile a reserved value */
469
if (arm_feature(env, ARM_FEATURE_M)) {
470
- result->prot |= PAGE_READ | PAGE_EXEC;
471
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
472
break;
473
}
474
/* fall through */
475
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
476
477
/* execute never */
478
if (xn) {
479
- result->prot &= ~PAGE_EXEC;
480
+ result->f.prot &= ~PAGE_EXEC;
481
}
482
}
483
}
484
485
fi->type = ARMFault_Permission;
486
fi->level = 1;
487
- return !(result->prot & (1 << access_type));
488
+ return !(result->f.prot & (1 << access_type));
489
}
490
491
bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
492
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
493
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
494
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
495
496
- result->page_size = TARGET_PAGE_SIZE;
497
- result->phys = address;
498
- result->prot = 0;
499
+ result->f.lg_page_size = TARGET_PAGE_BITS;
500
+ result->f.phys_addr = address;
501
+ result->f.prot = 0;
502
if (mregion) {
503
*mregion = -1;
504
}
505
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
506
ranges_overlap(base, limit - base + 1,
507
addr_page_base,
508
TARGET_PAGE_SIZE)) {
509
- result->page_size = 1;
510
+ result->f.lg_page_size = 0;
511
}
512
continue;
513
}
514
515
if (base > addr_page_base || limit < addr_page_limit) {
516
- result->page_size = 1;
517
+ result->f.lg_page_size = 0;
518
}
519
520
if (matchregion != -1) {
521
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
522
523
if (matchregion == -1) {
524
/* hit using the background region */
525
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
526
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
527
} else {
528
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
529
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
530
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
531
xn = 1;
532
}
533
534
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
535
- if (result->prot && !xn && !(pxn && !is_user)) {
536
- result->prot |= PAGE_EXEC;
537
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
538
+ if (result->f.prot && !xn && !(pxn && !is_user)) {
539
+ result->f.prot |= PAGE_EXEC;
540
}
541
/*
542
* We don't need to look the attribute up in the MAIR0/MAIR1
543
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
544
545
fi->type = ARMFault_Permission;
546
fi->level = 1;
547
- return !(result->prot & (1 << access_type));
548
+ return !(result->f.prot & (1 << access_type));
549
}
550
551
static bool v8m_is_sau_exempt(CPUARMState *env,
552
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
553
} else {
554
fi->type = ARMFault_QEMU_SFault;
555
}
556
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
557
- result->phys = address;
558
- result->prot = 0;
559
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
560
+ result->f.phys_addr = address;
561
+ result->f.prot = 0;
562
return true;
563
}
564
} else {
565
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
566
* might downgrade a secure access to nonsecure.
567
*/
568
if (sattrs.ns) {
569
- result->attrs.secure = false;
570
+ result->f.attrs.secure = false;
571
} else if (!secure) {
572
/*
573
* NS access to S memory must fault.
574
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
575
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
576
*/
577
fi->type = ARMFault_QEMU_SFault;
578
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
579
- result->phys = address;
580
- result->prot = 0;
581
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
582
+ result->f.phys_addr = address;
583
+ result->f.prot = 0;
584
return true;
585
}
586
}
587
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
588
ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure,
589
result, fi, NULL);
590
if (sattrs.subpage) {
591
- result->page_size = 1;
592
+ result->f.lg_page_size = 0;
593
}
594
return ret;
595
}
596
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
597
result->cacheattrs.is_s2_format = false;
598
}
599
600
- result->phys = address;
601
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
602
- result->page_size = TARGET_PAGE_SIZE;
603
+ result->f.phys_addr = address;
604
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
605
+ result->f.lg_page_size = TARGET_PAGE_BITS;
606
result->cacheattrs.shareability = shareability;
607
result->cacheattrs.attrs = memattr;
608
return 0;
609
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
610
return ret;
611
}
612
613
- ipa = result->phys;
614
- ipa_secure = result->attrs.secure;
615
+ ipa = result->f.phys_addr;
616
+ ipa_secure = result->f.attrs.secure;
617
if (is_secure) {
618
/* Select TCR based on the NS bit from the S1 walk. */
619
s2walk_secure = !(ipa_secure
620
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
621
* Save the stage1 results so that we may merge
622
* prot and cacheattrs later.
623
*/
624
- s1_prot = result->prot;
625
+ s1_prot = result->f.prot;
626
cacheattrs1 = result->cacheattrs;
627
memset(result, 0, sizeof(*result));
628
629
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
630
fi->s2addr = ipa;
631
632
/* Combine the S1 and S2 perms. */
633
- result->prot &= s1_prot;
634
+ result->f.prot &= s1_prot;
635
636
/* If S2 fails, return early. */
637
if (ret) {
638
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
639
* Check if IPA translates to secure or non-secure PA space.
640
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
641
*/
642
- result->attrs.secure =
643
+ result->f.attrs.secure =
644
(is_secure
645
&& !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
646
&& (ipa_secure
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
* cannot upgrade an non-secure translation regime's attributes
649
* to secure.
650
*/
651
- result->attrs.secure = is_secure;
652
- result->attrs.user = regime_is_user(env, mmu_idx);
653
+ result->f.attrs.secure = is_secure;
654
+ result->f.attrs.user = regime_is_user(env, mmu_idx);
655
656
/*
657
* Fast Context Switch Extension. This doesn't exist at all in v8.
658
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
659
660
if (arm_feature(env, ARM_FEATURE_PMSA)) {
661
bool ret;
662
- result->page_size = TARGET_PAGE_SIZE;
663
+ result->f.lg_page_size = TARGET_PAGE_BITS;
664
665
if (arm_feature(env, ARM_FEATURE_V8)) {
666
/* PMSAv8 */
667
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
668
(access_type == MMU_DATA_STORE ? "writing" : "execute"),
669
(uint32_t)address, mmu_idx,
670
ret ? "Miss" : "Hit",
671
- result->prot & PAGE_READ ? 'r' : '-',
672
- result->prot & PAGE_WRITE ? 'w' : '-',
673
- result->prot & PAGE_EXEC ? 'x' : '-');
674
+ result->f.prot & PAGE_READ ? 'r' : '-',
675
+ result->f.prot & PAGE_WRITE ? 'w' : '-',
676
+ result->f.prot & PAGE_EXEC ? 'x' : '-');
677
678
return ret;
679
}
680
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
681
bool ret;
682
683
ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
684
- *attrs = res.attrs;
685
+ *attrs = res.f.attrs;
686
687
if (ret) {
688
return -1;
689
}
690
- return res.phys;
691
+ return res.f.phys_addr;
692
}
13
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
693
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
14
index XXXXXXX..XXXXXXX 100644
694
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/tlb_helper.c
695
--- a/target/arm/tlb_helper.c
16
+++ b/target/arm/tlb_helper.c
696
+++ b/target/arm/tlb_helper.c
17
@@ -XXX,XX +XXX,XX @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
697
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
18
return syn;
698
* target page size are handled specially, so for those we
19
}
699
* pass in the exact addresses.
20
700
*/
21
-static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
701
- if (res.page_size >= TARGET_PAGE_SIZE) {
22
- MMUAccessType access_type,
702
- res.phys &= TARGET_PAGE_MASK;
23
- int mmu_idx, ARMMMUFaultInfo *fi)
703
+ if (res.f.lg_page_size >= TARGET_PAGE_BITS) {
24
+static uint32_t compute_fsr_fsc(CPUARMState *env, ARMMMUFaultInfo *fi,
704
+ res.f.phys_addr &= TARGET_PAGE_MASK;
25
+ int target_el, int mmu_idx, uint32_t *ret_fsc)
705
address &= TARGET_PAGE_MASK;
26
{
706
}
27
- CPUARMState *env = &cpu->env;
707
/* Notice and record tagged memory. */
28
- int target_el;
708
if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
29
- bool same_el;
709
- arm_tlb_mte_tagged(&res.attrs) = true;
30
- uint32_t syn, exc, fsr, fsc;
710
+ arm_tlb_mte_tagged(&res.f.attrs) = true;
31
ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
711
}
32
-
712
33
- target_el = exception_target_el(env);
713
- tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
34
- if (fi->stage2) {
714
- res.prot, mmu_idx, res.page_size);
35
- target_el = 2;
715
+ tlb_set_page_full(cs, mmu_idx, address, &res.f);
36
- env->cp15.hpfar_el2 = extract64(fi->s2addr, 12, 47) << 4;
716
return true;
37
- if (arm_is_secure_below_el3(env) && fi->s1ns) {
717
} else if (probe) {
38
- env->cp15.hpfar_el2 |= HPFAR_NS;
718
return false;
39
- }
40
- }
41
- same_el = (arm_current_el(env) == target_el);
42
+ uint32_t fsr, fsc;
43
44
if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
45
arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
46
@@ -XXX,XX +XXX,XX @@ static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
47
fsc = 0x3f;
48
}
49
50
+ *ret_fsc = fsc;
51
+ return fsr;
52
+}
53
+
54
+static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
55
+ MMUAccessType access_type,
56
+ int mmu_idx, ARMMMUFaultInfo *fi)
57
+{
58
+ CPUARMState *env = &cpu->env;
59
+ int target_el;
60
+ bool same_el;
61
+ uint32_t syn, exc, fsr, fsc;
62
+
63
+ target_el = exception_target_el(env);
64
+ if (fi->stage2) {
65
+ target_el = 2;
66
+ env->cp15.hpfar_el2 = extract64(fi->s2addr, 12, 47) << 4;
67
+ if (arm_is_secure_below_el3(env) && fi->s1ns) {
68
+ env->cp15.hpfar_el2 |= HPFAR_NS;
69
+ }
70
+ }
71
+ same_el = (arm_current_el(env) == target_el);
72
+
73
+ fsr = compute_fsr_fsc(env, fi, target_el, mmu_idx, &fsc);
74
+
75
if (access_type == MMU_INST_FETCH) {
76
syn = syn_insn_abort(same_el, fi->ea, fi->s1ptw, fsc);
77
exc = EXCP_PREFETCH_ABORT;
78
--
719
--
79
2.25.1
720
2.25.1
80
81
diff view generated by jsdifflib
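As a side note on the page_size -> lg_page_size conversion described above: the
relationship is simply lg_page_size == log2(page_size), matching the
ctz64(page_size) call in get_phys_addr_lpae(). A minimal stand-alone sketch
(illustrative only, not part of the patch; the helper name here is made up):

    #include <assert.h>
    #include <stdint.h>

    /* Convert a power-of-two page size to its log2, e.g. 4K -> 12,
     * 64K -> 16, 1M -> 20, 16M -> 24. */
    static unsigned lg_page_size_of(uint64_t page_size)
    {
        unsigned lg = 0;
        while ((1ULL << lg) < page_size) {
            lg++;
        }
        assert((1ULL << lg) == page_size); /* must be a power of two */
        return lg;
    }
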
Deleted patch

In the SSE decode function gen_sse(), we combine a byte
'b' and a value 'b1' which can be [0..3], and switch on them:
b |= (b1 << 8);
switch (b) {
...
default:
unknown_op:
gen_unknown_opcode(env, s);
return;
}

In three cases inside this switch, we were then also checking for
"if (b1 >= 2) { goto unknown_op; }".
However, this can never happen, because the 'case' values in each place
are 0x0nn or 0x1nn and the switch will have directed the b1 == (2, 3)
cases to the default already.

This check was added in commit c045af25a52e9 in 2010; the added code
was unnecessary then as well, and was apparently intended only to
ensure that we never accidentally ended up indexing off the end
of an sse_op_table with only 2 entries as a result of future bugs
in the decode logic.

Change the checks to assert() instead, and make sure they're always
immediately before the array access they are protecting.

Fixes: Coverity CID 1460207
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/i386/tcg/translate.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
case 0x171: /* shift xmm, im */
case 0x172:
case 0x173:
- if (b1 >= 2) {
- goto unknown_op;
- }
val = x86_ldub_code(env, s);
if (is_xmm) {
tcg_gen_movi_tl(s->T0, val);
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
offsetof(CPUX86State, mmx_t0.MMX_L(1)));
op1_offset = offsetof(CPUX86State,mmx_t0);
}
+ assert(b1 < 2);
sse_fn_epp = sse_op_table2[((b - 1) & 3) * 8 +
(((modrm >> 3)) & 7)][b1];
if (!sse_fn_epp) {
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
rm = modrm & 7;
reg = ((modrm >> 3) & 7) | REX_R(s);
mod = (modrm >> 6) & 3;
- if (b1 >= 2) {
- goto unknown_op;
- }

+ assert(b1 < 2);
sse_fn_epp = sse_op_table6[b].op[b1];
if (!sse_fn_epp) {
goto unknown_op;
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
rm = modrm & 7;
reg = ((modrm >> 3) & 7) | REX_R(s);
mod = (modrm >> 6) & 3;
- if (b1 >= 2) {
- goto unknown_op;
- }

+ assert(b1 < 2);
sse_fn_eppi = sse_op_table7[b].op[b1];
if (!sse_fn_eppi) {
goto unknown_op;
--
2.25.1

Deleted patch

The qemu-common.h header is not supposed to be included from any
other header files, only from .c files (as documented in a comment at
the start of it).

include/hw/i386/x86.h and include/hw/i386/microvm.h break this rule.
In fact, the include is not required at all, so we can just drop it
from both files.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20211129200510.1233037-2-peter.maydell@linaro.org
---
include/hw/i386/microvm.h | 1 -
include/hw/i386/x86.h | 1 -
2 files changed, 2 deletions(-)

diff --git a/include/hw/i386/microvm.h b/include/hw/i386/microvm.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/i386/microvm.h
+++ b/include/hw/i386/microvm.h
@@ -XXX,XX +XXX,XX @@
#ifndef HW_I386_MICROVM_H
#define HW_I386_MICROVM_H

-#include "qemu-common.h"
#include "exec/hwaddr.h"
#include "qemu/notify.h"

diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/i386/x86.h
+++ b/include/hw/i386/x86.h
@@ -XXX,XX +XXX,XX @@
#ifndef HW_I386_X86_H
#define HW_I386_X86_H

-#include "qemu-common.h"
#include "exec/hwaddr.h"
#include "qemu/notify.h"

--
2.25.1

Deleted patch

The qemu-common.h header is not supposed to be included from any
other header files, only from .c files (as documented in a comment at
the start of it).

Move the include to linux-user/hexagon/cpu_loop.c, which needs it for
the declaration of cpu_exec_step_atomic().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20211129200510.1233037-3-peter.maydell@linaro.org
---
target/hexagon/cpu.h | 1 -
linux-user/hexagon/cpu_loop.c | 1 +
2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hexagon/cpu.h b/target/hexagon/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.h
+++ b/target/hexagon/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUHexagonState CPUHexagonState;

#include "fpu/softfloat-types.h"

-#include "qemu-common.h"
#include "exec/cpu-defs.h"
#include "hex_regs.h"
#include "mmvec/mmvec.h"
diff --git a/linux-user/hexagon/cpu_loop.c b/linux-user/hexagon/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/hexagon/cpu_loop.c
+++ b/linux-user/hexagon/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
*/

#include "qemu/osdep.h"
+#include "qemu-common.h"
#include "qemu.h"
#include "user-internals.h"
#include "cpu_loop-common.h"
--
2.25.1

From: Jerome Forissier <jerome.forissier@linaro.org>

According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and
SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME
is advertised. This has to be taken care of when QEMU boots directly
into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image").

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
if (cpu_isar_feature(aa64_sve, cpu)) {
env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
}
+ if (cpu_isar_feature(aa64_sme, cpu)) {
+ env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
+ env->cp15.scr_el3 |= SCR_ENTP2;
+ }
/* AArch64 kernels never boot in secure mode */
assert(!info->secure_boot);
/* This hook is only supported for AArch32 currently:
--
2.25.1

Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K. The guest selects the one it wants using bits in the TCR_ELx
registers. If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 33 +++++++++++++
target/arm/internals.h | 9 ++++
target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++----
3 files changed, 136 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
}

+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
+{
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
+}
+
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
+{
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
+}
+
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
return valid;
}

+/* Granule size (i.e. page size) */
+typedef enum ARMGranuleSize {
+ /* Same order as TG0 encoding */
+ Gran4K,
+ Gran64K,
+ Gran16K,
+ GranInvalid,
+} ARMGranuleSize;
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
}
}

+static ARMGranuleSize tg0_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 0:
+ return Gran4K;
+ case 1:
+ return Gran64K;
+ case 2:
+ return Gran16K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static ARMGranuleSize tg1_to_gran_size(int tg)
+{
+ switch (tg) {
+ case 1:
+ return Gran16K;
+ case 2:
+ return Gran4K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
+static inline bool have4k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu)
+ : cpu_isar_feature(aa64_tgran4, cpu);
+}
+
+static inline bool have16k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu)
+ : cpu_isar_feature(aa64_tgran16, cpu);
+}
+
+static inline bool have64k(ARMCPU *cpu, bool stage2)
+{
+ return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu)
+ : cpu_isar_feature(aa64_tgran64, cpu);
+}
+
+static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran,
+ bool stage2)
+{
+ switch (gran) {
+ case Gran4K:
+ if (have4k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran16K:
+ if (have16k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case Gran64K:
+ if (have64k(cpu, stage2)) {
+ return gran;
+ }
+ break;
+ case GranInvalid:
+ break;
+ }
+ /*
+ * If the guest selects a granule size that isn't implemented,
+ * the architecture requires that we behave as if it selected one
+ * that is (with an IMPDEF choice of which one to pick). We choose
+ * to implement the smallest supported granule size.
+ */
+ if (have4k(cpu, stage2)) {
+ return Gran4K;
+ }
+ if (have16k(cpu, stage2)) {
+ return Gran16K;
+ }
+ assert(have64k(cpu, stage2));
+ return Gran64K;
+}
+
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
bool epd, hpd, using16k, using64k, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
+ ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
+ bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;

if (!regime_has_2_ranges(mmu_idx)) {
select = 0;
tsz = extract32(tcr, 0, 6);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
+ if (stage2) {
/* VTCR_EL2 */
hpd = false;
} else {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
select = extract64(va, 55, 1);
if (!select) {
tsz = extract32(tcr, 0, 6);
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
epd = extract32(tcr, 7, 1);
sh = extract32(tcr, 12, 2);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
hpd = extract64(tcr, 41, 1);
} else {
- int tg = extract32(tcr, 30, 2);
- using16k = tg == 1;
- using64k = tg == 3;
tsz = extract32(tcr, 16, 6);
+ gran = tg1_to_gran_size(extract32(tcr, 30, 2));
epd = extract32(tcr, 23, 1);
sh = extract32(tcr, 28, 2);
hpd = extract64(tcr, 42, 1);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ds = extract64(tcr, 59, 1);
}
}

+ gran = sanitize_gran_size(cpu, gran, stage2);
+ using64k = gran == Gran64K;
+ using16k = gran == Gran16K;
+
if (cpu_isar_feature(aa64_st, cpu)) {
max_tsz = 48 - using64k;
} else {
--
2.25.1

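To illustrate the TG0 decode that the patch above adds (a hedged sketch only;
tg0_to_gran_size() in target/arm/helper.c is the authoritative version, and the
TCR value below is made up):

    #include <stdio.h>
    #include <stdint.h>

    /* TG0 encoding, as in tg0_to_gran_size(): 0 -> 4K, 1 -> 64K, 2 -> 16K,
     * anything else is a reserved encoding. */
    static int tg0_bits_to_granule_bits(unsigned tg0)
    {
        switch (tg0) {
        case 0: return 12;  /* 4K */
        case 1: return 16;  /* 64K */
        case 2: return 14;  /* 16K */
        default: return -1; /* reserved; sanitize_gran_size() picks a real one */
        }
    }

    int main(void)
    {
        uint64_t tcr = 0x4000;          /* made-up TCR value with TG0 = 0b01 */
        unsigned tg0 = (tcr >> 14) & 3; /* TG0 is TCR_ELx bits [15:14] */
        printf("TG0=%u -> granule of 2^%d bytes\n",
               tg0, tg0_bits_to_granule_bits(tg0));
        return 0;
    }
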
Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
---
target/arm/internals.h | 23 +++++++++++++++++++++--
target/arm/helper.c | 39 ++++++++++++++++++++++++++++-----------
target/arm/ptw.c | 8 +-------
3 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize {
GranInvalid,
} ARMGranuleSize;

+/**
+ * arm_granule_bits: Return address size of the granule in bits
+ *
+ * Return the address size of the granule in bits. This corresponds
+ * to the pseudocode TGxGranuleBits().
+ */
+static inline int arm_granule_bits(ARMGranuleSize gran)
+{
+ switch (gran) {
+ case Gran64K:
+ return 16;
+ case Gran16K:
+ return 14;
+ case Gran4K:
+ return 12;
+ default:
+ g_assert_not_reached();
+ }
+}
+
/*
* Parameters of a given virtual address, as extracted from the
* translation control register (TCR) for a given regime.
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
bool tbi : 1;
bool epd : 1;
bool hpd : 1;
- bool using16k : 1;
- bool using64k : 1;
bool tsz_oob : 1; /* tsz has been clamped to legal range */
bool ds : 1;
+ ARMGranuleSize gran : 2;
} ARMVAParameters;

ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
uint64_t length;
} TLBIRange;

+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
+{
+ /*
+ * Note that the TLBI range TG field encoding differs from both
+ * TG0 and TG1 encodings.
+ */
+ switch (tg) {
+ case 1:
+ return Gran4K;
+ case 2:
+ return Gran16K;
+ case 3:
+ return Gran64K;
+ default:
+ return GranInvalid;
+ }
+}
+
static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
uint64_t select = sextract64(value, 36, 1);
ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true);
TLBIRange ret = { };
+ ARMGranuleSize gran;

page_size_granule = extract64(value, 46, 2);
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);

/* The granule encoded in value must match the granule in use. */
- if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) {
+ if (gran != param.gran) {
qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
page_size_granule);
return ret;
}

- page_shift = (page_size_granule - 1) * 2 + 12;
+ page_shift = arm_granule_bits(gran);
num = extract64(value, 39, 5);
scale = extract64(value, 44, 2);
exponent = (5 * scale) + 1;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
- bool epd, hpd, using16k, using64k, tsz_oob, ds;
+ bool epd, hpd, tsz_oob, ds;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
}
}

gran = sanitize_gran_size(cpu, gran, stage2);
- using64k = gran == Gran64K;
- using16k = gran == Gran16K;

if (cpu_isar_feature(aa64_st, cpu)) {
- max_tsz = 48 - using64k;
+ max_tsz = 48 - (gran == Gran64K);
} else {
max_tsz = 39;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
* adjust the effective value of DS, as documented.
*/
min_tsz = 16;
- if (using64k) {
+ if (gran == Gran64K) {
if (cpu_isar_feature(aa64_lva, cpu)) {
min_tsz = 12;
}
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
switch (mmu_idx) {
case ARMMMUIdx_Stage2:
case ARMMMUIdx_Stage2_S:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
}
break;
default:
- if (using16k) {
+ if (gran == Gran16K) {
ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
} else {
ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
.tbi = tbi,
.epd = epd,
.hpd = hpd,
- .using16k = using16k,
- .using64k = using64k,
.tsz_oob = tsz_oob,
.ds = ds,
+ .gran = gran,
};
}

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
}
}

- if (param.using64k) {
- stride = 13;
- } else if (param.using16k) {
- stride = 11;
- } else {
- stride = 9;
- }
+ stride = arm_granule_bits(param.gran) - 3;

/*
* Note that QEMU ignores shareability and cacheability attributes,
--
2.25.1

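A quick worked example of the arm_granule_bits()/stride relationship used at
the end of the patch above (a sketch, not project code; it only restates the
values visible in the diff, where each table descriptor is 8 bytes, i.e. 2^3):

    #include <assert.h>

    /* granule address bits: 4K -> 12, 16K -> 14, 64K -> 16 */
    static int stride_for_granule_bits(int granule_bits)
    {
        return granule_bits - 3; /* 8-byte descriptors per table entry */
    }

    int main(void)
    {
        assert(stride_for_granule_bits(12) == 9);  /* 4K, was "stride = 9" */
        assert(stride_for_granule_bits(14) == 11); /* 16K, was "stride = 11" */
        assert(stride_for_granule_bits(16) == 13); /* 64K, was "stride = 13" */
        return 0;
    }
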
FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it
can report a different set of supported granule (page) sizes for
stage 1 and stage 2 translation tables. As of commit c20281b2a5048
we already report the granule sizes that way for '-cpu max', and now
we also correctly make attempts to use unimplemented granule sizes
fail, so we can report the support of the feature in the
documentation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
1 file changed, 1 insertion(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_FRINTTS (Floating-point to integer instructions)
- FEAT_FlagM (Flag manipulation instructions v2)
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_GTG (Guest translation granule size)
- FEAT_HCX (Support for the HCRX_EL2 register)
- FEAT_HPDS (Hierarchical permission disables)
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
--
2.25.1