Hi; here's the latest batch of arm changes. The big thing
in here is the SMMUv3 changes to add stage-2 translation support.

thanks
-- PMM

The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:

  Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu into staging (2023-05-29 14:31:52 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230530

for you to fetch changes up to b03d0d4f531a8b867e0aac1fab0b876903015680:

  docs: sbsa: correct graphics card name (2023-05-30 13:32:46 +0100)

----------------------------------------------------------------
target-arm queue:
 * fsl-imx6: Add SNVS support for i.MX6 boards
 * smmuv3: Add support for stage 2 translations
 * hw/dma/xilinx_axidma: Check DMASR.HALTED to prevent infinite loop
 * hw/arm/xlnx-zynqmp: fix unsigned error when checking the RPUs number
 * cleanups for recent Kconfig changes
 * target/arm: Explicitly select short-format FSR for M-profile
 * tests/qtest: Run arm-specific tests only if the required machine is available
 * hw/arm/sbsa-ref: add GIC node into DT
 * docs: sbsa: correct graphics card name
 * Update copyright dates to 2023

----------------------------------------------------------------
Clément Chigot (1):
      hw/arm/xlnx-zynqmp: fix unsigned error when checking the RPUs number

Enze Li (1):
      Update copyright dates to 2023

Fabiano Rosas (3):
      target/arm: Explain why we need to select ARM_V7M
      arm/Kconfig: Keep Kconfig default entries in default.mak as documentation
      arm/Kconfig: Make TCG dependence explicit

Marcin Juszkiewicz (2):
      hw/arm/sbsa-ref: add GIC node into DT
      docs: sbsa: correct graphics card name

Mostafa Saleh (10):
      hw/arm/smmuv3: Add missing fields for IDR0
      hw/arm/smmuv3: Update translation config to hold stage-2
      hw/arm/smmuv3: Refactor stage-1 PTW
      hw/arm/smmuv3: Add page table walk for stage-2
      hw/arm/smmuv3: Parse STE config for stage-2
      hw/arm/smmuv3: Make TLB lookup work for stage-2
      hw/arm/smmuv3: Add VMID to TLB tagging
      hw/arm/smmuv3: Add CMDs related to stage-2
      hw/arm/smmuv3: Add stage-2 support in iova notifier
      hw/arm/smmuv3: Add knob to choose translation stage and enable stage-2

Peter Maydell (1):
      target/arm: Explicitly select short-format FSR for M-profile

Thomas Huth (1):
      tests/qtest: Run arm-specific tests only if the required machine is available

Tommy Wu (1):
      hw/dma/xilinx_axidma: Check DMASR.HALTED to prevent infinite loop.

Vitaly Cheptsov (1):
      fsl-imx6: Add SNVS support for i.MX6 boards

 docs/conf.py                                |   2 +-
 docs/system/arm/sbsa.rst                    |   2 +-
 configs/devices/aarch64-softmmu/default.mak |   6 +
 configs/devices/arm-softmmu/default.mak     |  40 ++++
 hw/arm/smmu-internal.h                      |  37 +++
 hw/arm/smmuv3-internal.h                    |  12 +-
 include/hw/arm/fsl-imx6.h                   |   2 +
 include/hw/arm/smmu-common.h                |  45 +++-
 include/hw/arm/smmuv3.h                     |   4 +
 include/qemu/help-texts.h                   |   2 +-
 hw/arm/fsl-imx6.c                           |   8 +
 hw/arm/sbsa-ref.c                           |  19 +-
 hw/arm/smmu-common.c                        | 209 ++++++++++++++--
 hw/arm/smmuv3.c                             | 357 ++++++++++++++++++++++++----
 hw/arm/xlnx-zynqmp.c                        |   2 +-
 hw/dma/xilinx_axidma.c                      |  11 +-
 target/arm/tcg/tlb_helper.c                 |  13 +-
 hw/arm/Kconfig                              | 123 ++++++----
 hw/arm/trace-events                         |  14 +-
 target/arm/Kconfig                          |   3 +
 tests/qtest/meson.build                     |   7 +-
 21 files changed, 773 insertions(+), 145 deletions(-)

From: Vitaly Cheptsov <cheptsov@ispras.ru>

SNVS is supported on both i.MX6 and i.MX6UL and is needed
to support shutdown on the board.

Cc: Peter Maydell <peter.maydell@linaro.org> (odd fixer:SABRELITE / i.MX6)
Cc: Jean-Christophe Dubois <jcd@tribudubois.net> (reviewer:SABRELITE / i.MX6)
Cc: qemu-arm@nongnu.org (open list:SABRELITE / i.MX6)
Cc: qemu-devel@nongnu.org (open list:All patches CC here)
Signed-off-by: Vitaly Cheptsov <cheptsov@ispras.ru>
Message-id: 20230515095015.66860-1-cheptsov@ispras.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx6.h | 2 ++
 hw/arm/fsl-imx6.c         | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/include/hw/arm/fsl-imx6.h b/include/hw/arm/fsl-imx6.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx6.h
+++ b/include/hw/arm/fsl-imx6.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/cpu/a9mpcore.h"
 #include "hw/misc/imx6_ccm.h"
 #include "hw/misc/imx6_src.h"
+#include "hw/misc/imx7_snvs.h"
 #include "hw/watchdog/wdt_imx2.h"
 #include "hw/char/imx_serial.h"
 #include "hw/timer/imx_gpt.h"
@@ -XXX,XX +XXX,XX @@ struct FslIMX6State {
     A9MPPrivState  a9mpcore;
     IMX6CCMState   ccm;
     IMX6SRCState   src;
+    IMX7SNVSState  snvs;
     IMXSerialState uart[FSL_IMX6_NUM_UARTS];
     IMXGPTState    gpt;
     IMXEPITState   epit[FSL_IMX6_NUM_EPITS];
diff --git a/hw/arm/fsl-imx6.c b/hw/arm/fsl-imx6.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6.c
+++ b/hw/arm/fsl-imx6.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_init(Object *obj)

     object_initialize_child(obj, "src", &s->src, TYPE_IMX6_SRC);

+    object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
+
     for (i = 0; i < FSL_IMX6_NUM_UARTS; i++) {
         snprintf(name, NAME_SIZE, "uart%d", i + 1);
         object_initialize_child(obj, name, &s->uart[i], TYPE_IMX_SERIAL);
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_realize(DeviceState *dev, Error **errp)
                        qdev_get_gpio_in(DEVICE(&s->a9mpcore),
                                         FSL_IMX6_ENET_MAC_1588_IRQ));

+    /*
+     * SNVS
+     */
+    sysbus_realize(SYS_BUS_DEVICE(&s->snvs), &error_abort);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX6_SNVSHP_ADDR);
+
     /*
      * Watchdog
      */
--
2.34.1

From: Mostafa Saleh <smostafa@google.com>

In preparation for adding stage-2 support.
Add IDR0 fields related to stage-2.

VMID16: 16-bit VMID supported.
S2P: Stage-2 translation supported.

They are described in 6.3.1 SMMU_IDR0.

No functional change intended.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-2-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ typedef enum SMMUTranslationStatus {
 /* MMIO Registers */

 REG32(IDR0, 0x0)
+    FIELD(IDR0, S2P,         0 , 1)
     FIELD(IDR0, S1P,         1 , 1)
     FIELD(IDR0, TTF,         2 , 2)
     FIELD(IDR0, COHACC,      4 , 1)
     FIELD(IDR0, ASID16,      12, 1)
+    FIELD(IDR0, VMID16,      18, 1)
     FIELD(IDR0, TTENDIAN,    21, 2)
     FIELD(IDR0, STALL_MODEL, 24, 2)
     FIELD(IDR0, TERM_MODEL,  26, 1)
--
2.34.1

From: Mostafa Saleh <smostafa@google.com>

In preparation for adding stage-2 support, add an S2 config
struct (SMMUS2Cfg), composed of the following fields and embedded in
the main SMMUTransCfg:
 -tsz: Size of IPA input region (S2T0SZ)
 -sl0: Start level of translation (S2SL0)
 -affd: AF Fault Disable (S2AFFD)
 -record_faults: Record fault events (S2R)
 -granule_sz: Granule page shift (based on S2TG)
 -vmid: Virtual Machine ID (S2VMID)
 -vttb: Address of translation table base (S2TTB)
 -eff_ps: Effective PA output range (based on S2PS)

They will be used in the next patches in stage-2 address translation.

The fields in SMMUS2Cfg are reordered to put the shared and stage-1
fields next to each other; this reordering didn't change the struct
size (104 bytes before and after).

Stage-1 only fields: aa64, asid, tt, ttb, tbi, record_faults, oas.
oas is stage-1 output address size. However, it is used to check
input address in case stage-1 is unimplemented or bypassed according
to SMMUv3 manual IHI0070.E "3.4. Address sizes"

Shared fields: stage, disabled, bypassed, aborted, iotlb_*.

No functional change intended.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-3-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTLBEntry {
     uint8_t granule;
 } SMMUTLBEntry;

+/* Stage-2 configuration. */
+typedef struct SMMUS2Cfg {
+    uint8_t tsz;            /* Size of IPA input region (S2T0SZ) */
+    uint8_t sl0;            /* Start level of translation (S2SL0) */
+    bool affd;              /* AF Fault Disable (S2AFFD) */
+    bool record_faults;     /* Record fault events (S2R) */
+    uint8_t granule_sz;     /* Granule page shift (based on S2TG) */
+    uint8_t eff_ps;         /* Effective PA output range (based on S2PS) */
+    uint16_t vmid;          /* Virtual Machine ID (S2VMID) */
+    uint64_t vttb;          /* Address of translation table base (S2TTB) */
+} SMMUS2Cfg;
+
 /*
  * Generic structure populated by derived SMMU devices
  * after decoding the configuration information and used as
  * input to the page table walk
  */
 typedef struct SMMUTransCfg {
+    /* Shared fields between stage-1 and stage-2. */
     int stage;              /* translation stage */
-    bool aa64;              /* arch64 or aarch32 translation table */
     bool disabled;          /* smmu is disabled */
     bool bypassed;          /* translation is bypassed */
     bool aborted;           /* translation is aborted */
+    uint32_t iotlb_hits;    /* counts IOTLB hits */
+    uint32_t iotlb_misses;  /* counts IOTLB misses*/
+    /* Used by stage-1 only. */
+    bool aa64;              /* arch64 or aarch32 translation table */
     bool record_faults;     /* record fault events */
     uint64_t ttb;           /* TT base address */
     uint8_t oas;            /* output address width */
     uint8_t tbi;            /* Top Byte Ignore */
     uint16_t asid;
     SMMUTransTableInfo tt[2];
-    uint32_t iotlb_hits;    /* counts IOTLB hits for this asid */
-    uint32_t iotlb_misses;  /* counts IOTLB misses for this asid */
+    /* Used by stage-2 only. */
+    struct SMMUS2Cfg s2cfg;
 } SMMUTransCfg;

 typedef struct SMMUDevice {
--
2.34.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
Separate S1 translation from the actual lookup.
3
In preparation for adding stage-2 support, rename smmu_ptw_64 to
4
Will enable lpae hardware updates.
4
smmu_ptw_64_s1 and refactor some of the code so it can be reused in
5
stage-2 page table walk.
5
6
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Remove AA64 check from PTW as decode_cd already ensures that AA64 is
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
used, otherwise it faults with C_BAD_CD.
8
Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org
9
10
A stage member is added to SMMUPTWEventInfo to differentiate
11
between stage-1 and stage-2 ptw faults.
12
13
Add stage argument to trace_smmu_ptw_level be consistent with other
14
trace events.
15
16
Signed-off-by: Mostafa Saleh <smostafa@google.com>
17
Reviewed-by: Eric Auger <eric.auger@redhat.com>
18
Tested-by: Eric Auger <eric.auger@redhat.com>
19
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
20
Message-id: 20230516203327.2051088-4-smostafa@google.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
22
---
11
target/arm/ptw.c | 41 ++++++++++++++++++++++-------------------
23
include/hw/arm/smmu-common.h | 16 +++++++++++++---
12
1 file changed, 22 insertions(+), 19 deletions(-)
24
hw/arm/smmu-common.c | 27 ++++++++++-----------------
25
hw/arm/smmuv3.c | 2 ++
26
hw/arm/trace-events | 2 +-
27
4 files changed, 26 insertions(+), 21 deletions(-)
13
28
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
29
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
15
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/ptw.c
31
--- a/include/hw/arm/smmu-common.h
17
+++ b/target/arm/ptw.c
32
+++ b/include/hw/arm/smmu-common.h
18
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
33
@@ -XXX,XX +XXX,XX @@
34
#include "hw/pci/pci.h"
35
#include "qom/object.h"
36
37
-#define SMMU_PCI_BUS_MAX 256
38
-#define SMMU_PCI_DEVFN_MAX 256
39
-#define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
40
+#define SMMU_PCI_BUS_MAX 256
41
+#define SMMU_PCI_DEVFN_MAX 256
42
+#define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
43
+
44
+/* VMSAv8-64 Translation constants and functions */
45
+#define VMSA_LEVELS 4
46
+
47
+#define VMSA_STRIDE(gran) ((gran) - VMSA_LEVELS + 1)
48
+#define VMSA_BIT_LVL(isz, strd, lvl) ((isz) - (strd) * \
49
+ (VMSA_LEVELS - (lvl)))
50
+#define VMSA_IDXMSK(isz, strd, lvl) ((1ULL << \
51
+ VMSA_BIT_LVL(isz, strd, lvl)) - 1)
52
53
/*
54
* Page table walk error types
55
@@ -XXX,XX +XXX,XX @@ typedef enum {
56
} SMMUPTWEventType;
57
58
typedef struct SMMUPTWEventInfo {
59
+ int stage;
60
SMMUPTWEventType type;
61
dma_addr_t addr; /* fetched address that induced an abort, if any */
62
} SMMUPTWEventInfo;
63
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/smmu-common.c
66
+++ b/hw/arm/smmu-common.c
67
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
19
}
68
}
20
69
21
/* All loads done in the course of a page table walk go through here. */
70
/**
22
-static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
71
- * smmu_ptw_64 - VMSAv8-64 Walk of the page tables for a given IOVA
23
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw,
72
+ * smmu_ptw_64_s1 - VMSAv8-64 Walk of the page tables for a given IOVA
24
ARMMMUFaultInfo *fi)
73
* @cfg: translation config
74
* @iova: iova to translate
75
* @perm: access type
76
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
77
* Upon success, @tlbe is filled with translated_addr and entry
78
* permission rights.
79
*/
80
-static int smmu_ptw_64(SMMUTransCfg *cfg,
81
- dma_addr_t iova, IOMMUAccessFlags perm,
82
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
83
+static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
84
+ dma_addr_t iova, IOMMUAccessFlags perm,
85
+ SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
25
{
86
{
26
CPUState *cs = env_cpu(env);
87
dma_addr_t baseaddr, indexmask;
27
uint32_t data;
88
int stage = cfg->stage;
28
89
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64(SMMUTransCfg *cfg,
29
- if (!S1_ptw_translate(env, ptw, addr, fi)) {
90
}
30
- /* Failure. */
91
31
- assert(fi->s1ptw);
92
granule_sz = tt->granule_sz;
32
- return 0;
93
- stride = granule_sz - 3;
94
+ stride = VMSA_STRIDE(granule_sz);
95
inputsize = 64 - tt->tsz;
96
level = 4 - (inputsize - 4) / stride;
97
- indexmask = (1ULL << (inputsize - (stride * (4 - level)))) - 1;
98
+ indexmask = VMSA_IDXMSK(inputsize, stride, level);
99
baseaddr = extract64(tt->ttb, 0, 48);
100
baseaddr &= ~indexmask;
101
102
- while (level <= 3) {
103
+ while (level < VMSA_LEVELS) {
104
uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
105
uint64_t mask = subpage_size - 1;
106
uint32_t offset = iova_level_offset(iova, inputsize, level, granule_sz);
107
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64(SMMUTransCfg *cfg,
108
if (get_pte(baseaddr, offset, &pte, info)) {
109
goto error;
110
}
111
- trace_smmu_ptw_level(level, iova, subpage_size,
112
+ trace_smmu_ptw_level(stage, level, iova, subpage_size,
113
baseaddr, offset, pte);
114
115
if (is_invalid_pte(pte) || is_reserved_pte(pte, level)) {
116
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64(SMMUTransCfg *cfg,
117
info->type = SMMU_PTW_ERR_TRANSLATION;
118
119
error:
120
+ info->stage = 1;
121
tlbe->entry.perm = IOMMU_NONE;
122
return -EINVAL;
123
}
124
@@ -XXX,XX +XXX,XX @@ error:
125
int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
126
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
127
{
128
- if (!cfg->aa64) {
129
- /*
130
- * This code path is not entered as we check this while decoding
131
- * the configuration data in the derived SMMU model.
132
- */
133
- g_assert_not_reached();
33
- }
134
- }
34
-
135
-
35
if (likely(ptw->out_host)) {
136
- return smmu_ptw_64(cfg, iova, perm, tlbe, info);
36
/* Page tables are in RAM, and we have the host address. */
137
+ return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
37
if (ptw->out_be) {
38
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
39
return data;
40
}
138
}
41
139
42
-static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
140
/**
43
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
141
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
44
ARMMMUFaultInfo *fi)
142
index XXXXXXX..XXXXXXX 100644
45
{
143
--- a/hw/arm/smmuv3.c
46
CPUState *cs = env_cpu(env);
144
+++ b/hw/arm/smmuv3.c
47
uint64_t data;
145
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
48
146
cached_entry = g_new0(SMMUTLBEntry, 1);
49
- if (!S1_ptw_translate(env, ptw, addr, fi)) {
147
50
- /* Failure. */
148
if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
51
- assert(fi->s1ptw);
149
+ /* All faults from PTW has S2 field. */
52
- return 0;
150
+ event.u.f_walk_eabt.s2 = (ptw_info.stage == 2);
53
- }
151
g_free(cached_entry);
54
-
152
switch (ptw_info.type) {
55
if (likely(ptw->out_host)) {
153
case SMMU_PTW_ERR_WALK_EABT:
56
/* Page tables are in RAM, and we have the host address. */
154
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
57
if (ptw->out_be) {
155
index XXXXXXX..XXXXXXX 100644
58
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
156
--- a/hw/arm/trace-events
59
fi->type = ARMFault_Translation;
157
+++ b/hw/arm/trace-events
60
goto do_fault;
158
@@ -XXX,XX +XXX,XX @@ virt_acpi_setup(void) "No fw cfg or ACPI disabled. Bailing out."
61
}
159
62
- desc = arm_ldl_ptw(env, ptw, table, fi);
160
# smmu-common.c
63
+ if (!S1_ptw_translate(env, ptw, table, fi)) {
161
smmu_add_mr(const char *name) "%s"
64
+ goto do_fault;
162
-smmu_ptw_level(int level, uint64_t iova, size_t subpage_size, uint64_t baseaddr, uint32_t offset, uint64_t pte) "level=%d iova=0x%"PRIx64" subpage_sz=0x%zx baseaddr=0x%"PRIx64" offset=%d => pte=0x%"PRIx64
65
+ }
163
+smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_t baseaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d iova=0x%"PRIx64" subpage_sz=0x%zx baseaddr=0x%"PRIx64" offset=%d => pte=0x%"PRIx64
66
+ desc = arm_ldl_ptw(env, ptw, fi);
164
smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
--
2.34.1
From: Mostafa Saleh <smostafa@google.com>

In preparation for adding stage-2 support, add stage-2 PTW code.
Only AArch64 format is supported as stage-1.

Nesting stage-1 and stage-2 is not supported right now.

HTTU is not supported; SW is expected to maintain the Access flag.
This is described in the SMMUv3 manual (IHI 0070.E.a)
"5.2. Stream Table Entry" in "[181] S2AFFD".
This flag determines the behavior on access of a stage-2 page whose
descriptor has AF == 0:
- 0b0: An Access flag fault occurs (stall not supported).
- 0b1: An Access flag fault never occurs.
An Access fault takes priority over a Permission fault.

There are 3 address size checks for stage-2 according to
(IHI 0070.E.a) in "3.4. Address sizes":

- As nesting is not supported, the input address is passed directly to
stage-2 and is checked against IAS. We use cfg->oas to hold the OAS
when stage-1 is not used; this is set in the next patch.
This check is done outside of smmu_ptw_64_s2 as it is not part of
stage-2 (it throws a stage-1 fault), and the stage-2 function shouldn't
change its behavior when nesting is supported.
When nesting is supported and we figure out how to combine the TLBs for
stage-1 and stage-2 we can move this check into the stage-1 function
as described in the ARM DDI0487I.a pseudocode:
aarch64/translation/vmsa_translation/AArch64.S1Translate
aarch64/translation/vmsa_translation/AArch64.S1DisabledOutput

- Input to stage-2 is checked against s2t0sz, and throws a stage-2
translation fault if it exceeds it.

- Output of stage-2 is checked against the effective PA output range.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-5-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-internal.h |  35 ++++++++++
 hw/arm/smmu-common.c   | 142 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 176 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-internal.h
+++ b/hw/arm/smmu-internal.h
@@ -XXX,XX +XXX,XX @@
 #define PTE_APTABLE(pte) \
     (extract64(pte, 61, 2))
 
+#define PTE_AF(pte) \
+    (extract64(pte, 10, 1))
 /*
  * TODO: At the moment all transactions are considered as privileged (EL1)
  * as IOMMU translation callback does not pass user/priv attributes.
@@ -XXX,XX +XXX,XX @@
 #define is_permission_fault(ap, perm) \
     (((perm) & IOMMU_WO) && ((ap) & 0x2))
 
+#define is_permission_fault_s2(s2ap, perm) \
+    (!(((s2ap) & (perm)) == (perm)))
+
 #define PTE_AP_TO_PERM(ap) \
     (IOMMU_ACCESS_FLAG(true, !((ap) & 0x2)))
 
@@ -XXX,XX +XXX,XX @@ uint64_t iova_level_offset(uint64_t iova, int inputsize,
            MAKE_64BIT_MASK(0, gsz - 3);
 }
 
+/* FEAT_LPA2 and FEAT_TTST are not implemented. */
+static inline int get_start_level(int sl0, int granule_sz)
+{
+    /* ARM DDI0487I.a: Table D8-12. */
+    if (granule_sz == 12) {
+        return 2 - sl0;
+    }
+    /* ARM DDI0487I.a: Table D8-22 and Table D8-31. */
+    return 3 - sl0;
+}
+
+/*
+ * Index in a concatenated first level stage-2 page table.
+ * ARM DDI0487I.a: D8.2.2 Concatenated translation tables.
+ */
+static inline int pgd_concat_idx(int start_level, int granule_sz,
+                                 dma_addr_t ipa)
+{
+    uint64_t ret;
+    /*
+     * Get the number of bits handled by next levels, then any extra bits in
+     * the address should index the concatenated tables. This relation can be
+     * deduced from tables in ARM DDI0487I.a: D8.2.7-9
+     */
+    int shift = level_shift(start_level - 1, granule_sz);
+
+    ret = ipa >> shift;
+    return ret;
+}
+
 #define SMMU_IOTLB_ASID(key) ((key).asid)
 
 typedef struct SMMUIOTLBPageInvInfo {
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ error:
     return -EINVAL;
 }
 
+/**
+ * smmu_ptw_64_s2 - VMSAv8-64 Walk of the page tables for a given ipa
+ * for stage-2.
+ * @cfg: translation config
+ * @ipa: ipa to translate
+ * @perm: access type
+ * @tlbe: SMMUTLBEntry (out)
+ * @info: handle to an error info
+ *
+ * Return 0 on success, < 0 on error. In case of error, @info is filled
+ * and tlbe->perm is set to IOMMU_NONE.
+ * Upon success, @tlbe is filled with translated_addr and entry
+ * permission rights.
+ */
+static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
+                          dma_addr_t ipa, IOMMUAccessFlags perm,
+                          SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
+{
+    const int stage = 2;
+    int granule_sz = cfg->s2cfg.granule_sz;
+    /* ARM DDI0487I.a: Table D8-7. */
+    int inputsize = 64 - cfg->s2cfg.tsz;
+    int level = get_start_level(cfg->s2cfg.sl0, granule_sz);
+    int stride = VMSA_STRIDE(granule_sz);
+    int idx = pgd_concat_idx(level, granule_sz, ipa);
+    /*
+     * Get the ttb from concatenated structure.
+     * The offset is the idx * size of each ttb(number of ptes * (sizeof(pte))
+     */
+    uint64_t baseaddr = extract64(cfg->s2cfg.vttb, 0, 48) + (1 << stride) *
+                        idx * sizeof(uint64_t);
+    dma_addr_t indexmask = VMSA_IDXMSK(inputsize, stride, level);
+
+    baseaddr &= ~indexmask;
+
+    /*
+     * On input, a stage 2 Translation fault occurs if the IPA is outside the
+     * range configured by the relevant S2T0SZ field of the STE.
+     */
+    if (ipa >= (1ULL << inputsize)) {
+        info->type = SMMU_PTW_ERR_TRANSLATION;
+        goto error;
+    }
+
+    while (level < VMSA_LEVELS) {
+        uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
+        uint64_t mask = subpage_size - 1;
+        uint32_t offset = iova_level_offset(ipa, inputsize, level, granule_sz);
+        uint64_t pte, gpa;
+        dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
+        uint8_t s2ap;
+
+        if (get_pte(baseaddr, offset, &pte, info)) {
+            goto error;
+        }
+        trace_smmu_ptw_level(stage, level, ipa, subpage_size,
+                             baseaddr, offset, pte);
+        if (is_invalid_pte(pte) || is_reserved_pte(pte, level)) {
+            trace_smmu_ptw_invalid_pte(stage, level, baseaddr,
+                                       pte_addr, offset, pte);
+            break;
+        }
+
+        if (is_table_pte(pte, level)) {
+            baseaddr = get_table_pte_address(pte, granule_sz);
+            level++;
+            continue;
+        } else if (is_page_pte(pte, level)) {
+            gpa = get_page_pte_address(pte, granule_sz);
+            trace_smmu_ptw_page_pte(stage, level, ipa,
+                                    baseaddr, pte_addr, pte, gpa);
+        } else {
+            uint64_t block_size;
+
+            gpa = get_block_pte_address(pte, level, granule_sz,
+                                        &block_size);
+            trace_smmu_ptw_block_pte(stage, level, baseaddr,
+                                     pte_addr, pte, ipa, gpa,
+                                     block_size >> 20);
+        }
+
+        /*
+         * If S2AFFD and PTE.AF are 0 => fault. (5.2. Stream Table Entry)
+         * An Access fault takes priority over a Permission fault.
+         */
+        if (!PTE_AF(pte) && !cfg->s2cfg.affd) {
+            info->type = SMMU_PTW_ERR_ACCESS;
+            goto error;
+        }
+
+        s2ap = PTE_AP(pte);
+        if (is_permission_fault_s2(s2ap, perm)) {
+            info->type = SMMU_PTW_ERR_PERMISSION;
+            goto error;
+        }
+
+        /*
+         * The address output from the translation causes a stage 2 Address
+         * Size fault if it exceeds the effective PA output range.
+         */
+        if (gpa >= (1ULL << cfg->s2cfg.eff_ps)) {
+            info->type = SMMU_PTW_ERR_ADDR_SIZE;
+            goto error;
+        }
+
+        tlbe->entry.translated_addr = gpa;
+        tlbe->entry.iova = ipa & ~mask;
+        tlbe->entry.addr_mask = mask;
+        tlbe->entry.perm = s2ap;
+        tlbe->level = level;
+        tlbe->granule = granule_sz;
+        return 0;
+    }
+    info->type = SMMU_PTW_ERR_TRANSLATION;
+
+error:
+    info->stage = 2;
+    tlbe->entry.perm = IOMMU_NONE;
+    return -EINVAL;
+}
+
 /**
  * smmu_ptw - Walk the page tables for an IOVA, according to @cfg
  *
@@ -XXX,XX +XXX,XX @@ error:
 int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
              SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
 {
-    return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
+    if (cfg->stage == 1) {
+        return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
+    } else if (cfg->stage == 2) {
+        /*
+         * If bypassing stage 1(or unimplemented), the input address is passed
+         * directly to stage 2 as IPA. If the input address of a transaction
+         * exceeds the size of the IAS, a stage 1 Address Size fault occurs.
+         * For AA64, IAS = OAS according to (IHI 0070.E.a) "3.4 Address sizes"
+         */
+        if (iova >= (1ULL << cfg->oas)) {
+            info->type = SMMU_PTW_ERR_ADDR_SIZE;
+            info->stage = 1;
+            tlbe->entry.perm = IOMMU_NONE;
+            return -EINVAL;
+        }
+
+        return smmu_ptw_64_s2(cfg, iova, perm, tlbe, info);
+    }
+
+    g_assert_not_reached();
 }
 
 /**
--
2.34.1
From: Mostafa Saleh <smostafa@google.com>

Parse stage-2 configuration from STE and populate it in SMMUS2Cfg.
Validity of field values is checked where possible.

Only AA64 tables are supported and Small Translation Tables (STT) are
not supported.

According to the SMMUv3 UM (IHI0070E) "5.2 Stream Table Entry": All fields
with an S2 prefix (with the exception of S2VMID) are IGNORED when
stage-2 bypasses translation (Config[1] == 0).

This means that the VMID can be used (for TLB tagging) even if stage-2 is
bypassed, so we parse it unconditionally when S2P exists. Otherwise
it is set to -1 (only S1P).

As stall is not supported, if S2S is set the translation would abort.
For S2R, we reuse the same code used for stage-1 with the flag
record_faults. However, when nested translation is supported we would
need to separate stage-1 and stage-2 faults.

Fix a wrong shift in STE_S2HD, STE_S2HA, STE_S2S.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230516203327.2051088-6-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h     |  10 +-
 include/hw/arm/smmu-common.h |   1 +
 include/hw/arm/smmuv3.h      |   3 +
 hw/arm/smmuv3.c              | 181 +++++++++++++++++++++++++++++++++--
 4 files changed, 185 insertions(+), 10 deletions(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ typedef struct CD {
 #define STE_S2TG(x)        extract32((x)->word[5], 14, 2)
 #define STE_S2PS(x)        extract32((x)->word[5], 16, 3)
 #define STE_S2AA64(x)      extract32((x)->word[5], 19, 1)
-#define STE_S2HD(x)        extract32((x)->word[5], 24, 1)
-#define STE_S2HA(x)        extract32((x)->word[5], 25, 1)
-#define STE_S2S(x)         extract32((x)->word[5], 26, 1)
+#define STE_S2ENDI(x)      extract32((x)->word[5], 20, 1)
+#define STE_S2AFFD(x)      extract32((x)->word[5], 21, 1)
+#define STE_S2HD(x)        extract32((x)->word[5], 23, 1)
+#define STE_S2HA(x)        extract32((x)->word[5], 24, 1)
+#define STE_S2S(x)         extract32((x)->word[5], 25, 1)
+#define STE_S2R(x)         extract32((x)->word[5], 26, 1)
+
 #define STE_CTXPTR(x)                                           \
     ({                                                          \
         unsigned long addr;                                     \
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@
 
 /* VMSAv8-64 Translation constants and functions */
 #define VMSA_LEVELS                 4
+#define VMSA_MAX_S2_CONCAT          16
 
 #define VMSA_STRIDE(gran)           ((gran) - VMSA_LEVELS + 1)
 #define VMSA_BIT_LVL(isz, strd, lvl) ((isz) - (strd) * \
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -XXX,XX +XXX,XX @@ struct SMMUv3Class {
 #define TYPE_ARM_SMMUV3   "arm-smmuv3"
 OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
 
+#define STAGE1_SUPPORTED(s)      FIELD_EX32(s->idr[0], IDR0, S1P)
+#define STAGE2_SUPPORTED(s)      FIELD_EX32(s->idr[0], IDR0, S2P)
+
 #endif
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@
 #include "smmuv3-internal.h"
 #include "smmu-internal.h"
 
+#define PTW_RECORD_FAULT(cfg)   (((cfg)->stage == 1) ? (cfg)->record_faults : \
+                                 (cfg)->s2cfg.record_faults)
+
 /**
  * smmuv3_trigger_irq - pulse @irq if enabled and update
  * GERROR register in case of GERROR interrupt
@@ -XXX,XX +XXX,XX @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, uint32_t ssid,
     return 0;
 }
 
+/*
+ * Max valid value is 39 when SMMU_IDR3.STT == 0.
+ * In architectures after SMMUv3.0:
+ * - If STE.S2TG selects a 4KB or 16KB granule, the minimum valid value for this
+ *   field is MAX(16, 64-IAS)
+ * - If STE.S2TG selects a 64KB granule, the minimum valid value for this field
+ *   is (64-IAS).
+ * As we only support AA64, IAS = OAS.
+ */
+static bool s2t0sz_valid(SMMUTransCfg *cfg)
+{
+    if (cfg->s2cfg.tsz > 39) {
+        return false;
+    }
+
+    if (cfg->s2cfg.granule_sz == 16) {
+        return (cfg->s2cfg.tsz >= 64 - oas2bits(SMMU_IDR5_OAS));
+    }
+
+    return (cfg->s2cfg.tsz >= MAX(64 - oas2bits(SMMU_IDR5_OAS), 16));
+}
+
+/*
+ * Return true if s2 page table config is valid.
+ * This checks with the configured start level, ias_bits and granularity we can
+ * have a valid page table as described in ARM ARM D8.2 Translation process.
+ * The idea here is to see for the highest possible number of IPA bits, how
+ * many concatenated tables we would need, if it is more than 16, then this is
+ * not possible.
+ */
+static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
+{
+    int level = get_start_level(sl0, gran);
+    uint64_t ipa_bits = 64 - t0sz;
+    uint64_t max_ipa = (1ULL << ipa_bits) - 1;
+    int nr_concat = pgd_concat_idx(level, gran, max_ipa) + 1;
+
+    return nr_concat <= VMSA_MAX_S2_CONCAT;
+}
+
+static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
+{
+    cfg->stage = 2;
+
+    if (STE_S2AA64(ste) == 0x0) {
+        qemu_log_mask(LOG_UNIMP,
+                      "SMMUv3 AArch32 tables not supported\n");
+        g_assert_not_reached();
+    }
+
+    switch (STE_S2TG(ste)) {
+    case 0x0: /* 4KB */
+        cfg->s2cfg.granule_sz = 12;
+        break;
+    case 0x1: /* 64KB */
+        cfg->s2cfg.granule_sz = 16;
+        break;
+    case 0x2: /* 16KB */
+        cfg->s2cfg.granule_sz = 14;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "SMMUv3 bad STE S2TG: %x\n", STE_S2TG(ste));
+        goto bad_ste;
+    }
+
+    cfg->s2cfg.vttb = STE_S2TTB(ste);
+
+    cfg->s2cfg.sl0 = STE_S2SL0(ste);
+    /* FEAT_TTST not supported. */
+    if (cfg->s2cfg.sl0 == 0x3) {
+        qemu_log_mask(LOG_UNIMP, "SMMUv3 S2SL0 = 0x3 has no meaning!\n");
+        goto bad_ste;
+    }
+
+    /* For AA64, The effective S2PS size is capped to the OAS. */
+    cfg->s2cfg.eff_ps = oas2bits(MIN(STE_S2PS(ste), SMMU_IDR5_OAS));
+    /*
+     * It is ILLEGAL for the address in S2TTB to be outside the range
+     * described by the effective S2PS value.
+     */
+    if (cfg->s2cfg.vttb & ~(MAKE_64BIT_MASK(0, cfg->s2cfg.eff_ps))) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "SMMUv3 S2TTB too large 0x%lx, effective PS %d bits\n",
+                      cfg->s2cfg.vttb, cfg->s2cfg.eff_ps);
+        goto bad_ste;
+    }
+
+    cfg->s2cfg.tsz = STE_S2T0SZ(ste);
+
+    if (!s2t0sz_valid(cfg)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "SMMUv3 bad STE S2T0SZ = %d\n",
+                      cfg->s2cfg.tsz);
+        goto bad_ste;
+    }
+
+    if (!s2_pgtable_config_valid(cfg->s2cfg.sl0, cfg->s2cfg.tsz,
+                                 cfg->s2cfg.granule_sz)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "SMMUv3 STE stage 2 config not valid!\n");
+        goto bad_ste;
+    }
+
+    /* Only LE supported(IDR0.TTENDIAN). */
+    if (STE_S2ENDI(ste)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "SMMUv3 STE_S2ENDI only supports LE!\n");
+        goto bad_ste;
+    }
+
+    cfg->s2cfg.affd = STE_S2AFFD(ste);
+
+    cfg->s2cfg.record_faults = STE_S2R(ste);
+    /* As stall is not supported. */
+    if (STE_S2S(ste)) {
+        qemu_log_mask(LOG_UNIMP, "SMMUv3 Stall not implemented!\n");
+        goto bad_ste;
+    }
+
+    /* This is still here as stage 2 has not been fully enabled yet. */
+    qemu_log_mask(LOG_UNIMP, "SMMUv3 does not support stage 2 yet\n");
+    goto bad_ste;
+
+    return 0;
+
+bad_ste:
+    return -EINVAL;
+}
+
 /* Returns < 0 in case of invalid STE, 0 otherwise */
 static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
                       STE *ste, SMMUEventInfo *event)
 {
     uint32_t config;
+    int ret;
 
     if (!STE_VALID(ste)) {
         if (!event->inval_ste_allowed) {
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
         return 0;
     }
 
-    if (STE_CFG_S2_ENABLED(config)) {
-        qemu_log_mask(LOG_UNIMP, "SMMUv3 does not support stage 2 yet\n");
+    /*
+     * If a stage is enabled in SW while not advertised, throw bad ste
+     * according to user manual(IHI0070E) "5.2 Stream Table Entry".
+     */
+    if (!STAGE1_SUPPORTED(s) && STE_CFG_S1_ENABLED(config)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "SMMUv3 S1 used but not supported.\n");
         goto bad_ste;
     }
+    if (!STAGE2_SUPPORTED(s) && STE_CFG_S2_ENABLED(config)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "SMMUv3 S2 used but not supported.\n");
+        goto bad_ste;
+    }
+
+    if (STAGE2_SUPPORTED(s)) {
+        /* VMID is considered even if s2 is disabled. */
+        cfg->s2cfg.vmid = STE_S2VMID(ste);
+    } else {
+        /* Default to -1 */
+        cfg->s2cfg.vmid = -1;
+    }
+
+    if (STE_CFG_S2_ENABLED(config)) {
+        /*
+         * Stage-1 OAS defaults to OAS even if not enabled as it would be used
+         * in input address check for stage-2.
+         */
+        cfg->oas = oas2bits(SMMU_IDR5_OAS);
+        ret = decode_ste_s2_cfg(cfg, ste);
+        if (ret) {
+            goto bad_ste;
+        }
+    }
 
     if (STE_S1CDMAX(ste) != 0) {
         qemu_log_mask(LOG_UNIMP,
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     if (cached_entry) {
         if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
             status = SMMU_TRANS_ERROR;
-            if (cfg->record_faults) {
+            /*
+             * We know that the TLB only contains either stage-1 or stage-2 as
+             * nesting is not supported. So it is sufficient to check the
+             * translation stage to know the TLB stage for now.
+             */
+            event.u.f_walk_eabt.s2 = (cfg->stage == 2);
+            if (PTW_RECORD_FAULT(cfg)) {
                 event.type = SMMU_EVT_F_PERMISSION;
                 event.u.f_permission.addr = addr;
                 event.u.f_permission.rnw = flag & 0x1;
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
             event.u.f_walk_eabt.addr2 = ptw_info.addr;
             break;
         case SMMU_PTW_ERR_TRANSLATION:
-            if (cfg->record_faults) {
+            if (PTW_RECORD_FAULT(cfg)) {
                 event.type = SMMU_EVT_F_TRANSLATION;
                 event.u.f_translation.addr = addr;
                 event.u.f_translation.rnw = flag & 0x1;
             }
             break;
         case SMMU_PTW_ERR_ADDR_SIZE:
-            if (cfg->record_faults) {
+            if (PTW_RECORD_FAULT(cfg)) {
event.type = SMMU_EVT_F_ADDR_SIZE;
309
event.u.f_addr_size.addr = addr;
310
event.u.f_addr_size.rnw = flag & 0x1;
311
}
312
break;
313
case SMMU_PTW_ERR_ACCESS:
314
- if (cfg->record_faults) {
315
+ if (PTW_RECORD_FAULT(cfg)) {
316
event.type = SMMU_EVT_F_ACCESS;
317
event.u.f_access.addr = addr;
318
event.u.f_access.rnw = flag & 0x1;
319
}
320
break;
321
case SMMU_PTW_ERR_PERMISSION:
322
- if (cfg->record_faults) {
323
+ if (PTW_RECORD_FAULT(cfg)) {
324
event.type = SMMU_EVT_F_PERMISSION;
325
event.u.f_permission.addr = addr;
326
event.u.f_permission.rnw = flag & 0x1;
364
--
327
--
365
2.25.1
328
2.34.1
From: Mostafa Saleh <smostafa@google.com>

Right now, either stage-1 or stage-2 are supported, this simplifies
how we can deal with TLBs.
This patch makes TLB lookup work if stage-2 is enabled instead of
stage-1.
TLB lookup is done before a PTW, if a valid entry is found we won't
do the PTW.
To be able to do TLB lookup, we need the correct tagging info, as
granularity and input size, so we get this based on the supported
translation stage. The TLB entries are added correctly from each
stage PTW.

When nested translation is supported, this would need to change, for
example if we go with a combined TLB implementation, we would need to
use the min of the granularities in TLB.

As stage-2 shouldn't be tagged by ASID, it will be set to -1 if S1P
is not enabled.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-7-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/smmuv3.c | 44 +++++++++++++++++++++++++++++-----------
1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
STE ste;
CD cd;

+ /* ASID defaults to -1 (if s1 is not supported). */
+ cfg->asid = -1;
+
ret = smmu_find_ste(s, sid, &ste, event);
if (ret) {
return ret;
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
.addr_mask = ~(hwaddr)0,
.perm = IOMMU_NONE,
};
+ /*
+ * Combined attributes used for TLB lookup, as only one stage is supported,
+ * it will hold attributes based on the enabled stage.
+ */
+ SMMUTransTableInfo tt_combined;

qemu_mutex_lock(&s->mutex);

@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
goto epilogue;
}

- tt = select_tt(cfg, addr);
- if (!tt) {
- if (cfg->record_faults) {
- event.type = SMMU_EVT_F_TRANSLATION;
- event.u.f_translation.addr = addr;
- event.u.f_translation.rnw = flag & 0x1;
+ if (cfg->stage == 1) {
+ /* Select stage1 translation table. */
+ tt = select_tt(cfg, addr);
+ if (!tt) {
+ if (cfg->record_faults) {
+ event.type = SMMU_EVT_F_TRANSLATION;
+ event.u.f_translation.addr = addr;
+ event.u.f_translation.rnw = flag & 0x1;
+ }
+ status = SMMU_TRANS_ERROR;
+ goto epilogue;
}
- status = SMMU_TRANS_ERROR;
- goto epilogue;
- }
+ tt_combined.granule_sz = tt->granule_sz;
+ tt_combined.tsz = tt->tsz;

- page_mask = (1ULL << (tt->granule_sz)) - 1;
+ } else {
+ /* Stage2. */
+ tt_combined.granule_sz = cfg->s2cfg.granule_sz;
+ tt_combined.tsz = cfg->s2cfg.tsz;
+ }
+ /*
+ * TLB lookup looks for granule and input size for a translation stage,
+ * as only one stage is supported right now, choose the right values
+ * from the configuration.
+ */
+ page_mask = (1ULL << tt_combined.granule_sz) - 1;
aligned_addr = addr & ~page_mask;

- cached_entry = smmu_iotlb_lookup(bs, cfg, tt, aligned_addr);
+ cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, aligned_addr);
if (cached_entry) {
if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
status = SMMU_TRANS_ERROR;
--
2.34.1
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
When the system reboots, the rng-seed that the FDT has should be
3
Allow TLB to be tagged with VMID.
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
4
5
the ROM region at this point, we add a hook right after the ROM has been
5
If stage-1 is only supported, VMID is set to -1 and ignored from STE
6
added, so that we have a pointer to that copy of the FDT.
6
and CMD_TLBI_NH* cmds.
7
7
8
Cc: Palmer Dabbelt <palmer@dabbelt.com>
8
Update smmu_iotlb_insert trace event to have vmid.
9
Cc: Alistair Francis <alistair.francis@wdc.com>
9
10
Cc: Bin Meng <bin.meng@windriver.com>
10
Signed-off-by: Mostafa Saleh <smostafa@google.com>
11
Cc: qemu-riscv@nongnu.org
11
Reviewed-by: Eric Auger <eric.auger@redhat.com>
12
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
12
Tested-by: Eric Auger <eric.auger@redhat.com>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
13
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
14
Message-id: 20221025004327.568476-6-Jason@zx2c4.com
14
Message-id: 20230516203327.2051088-8-smostafa@google.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
16
---
17
hw/riscv/boot.c | 3 +++
17
hw/arm/smmu-internal.h | 2 ++
18
1 file changed, 3 insertions(+)
18
include/hw/arm/smmu-common.h | 5 +++--
19
19
hw/arm/smmu-common.c | 36 ++++++++++++++++++++++--------------
20
diff --git a/hw/riscv/boot.c b/hw/riscv/boot.c
20
hw/arm/smmuv3.c | 12 +++++++++---
21
index XXXXXXX..XXXXXXX 100644
21
hw/arm/trace-events | 6 +++---
22
--- a/hw/riscv/boot.c
22
5 files changed, 39 insertions(+), 22 deletions(-)
23
+++ b/hw/riscv/boot.c
23
24
@@ -XXX,XX +XXX,XX @@
24
diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
25
#include "sysemu/device_tree.h"
25
index XXXXXXX..XXXXXXX 100644
26
#include "sysemu/qtest.h"
26
--- a/hw/arm/smmu-internal.h
27
#include "sysemu/kvm.h"
27
+++ b/hw/arm/smmu-internal.h
28
+#include "sysemu/reset.h"
28
@@ -XXX,XX +XXX,XX @@ static inline int pgd_concat_idx(int start_level, int granule_sz,
29
29
}
30
#include <libfdt.h>
30
31
31
#define SMMU_IOTLB_ASID(key) ((key).asid)
32
@@ -XXX,XX +XXX,XX @@ uint64_t riscv_load_fdt(hwaddr dram_base, uint64_t mem_size, void *fdt)
32
+#define SMMU_IOTLB_VMID(key) ((key).vmid)
33
33
34
rom_add_blob_fixed_as("fdt", fdt, fdtsize, fdt_addr,
34
typedef struct SMMUIOTLBPageInvInfo {
35
&address_space_memory);
35
int asid;
36
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
36
+ int vmid;
37
+ rom_ptr_for_as(&address_space_memory, fdt_addr, fdtsize));
37
uint64_t iova;
38
38
uint64_t mask;
39
return fdt_addr;
39
} SMMUIOTLBPageInvInfo;
40
}
40
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
41
index XXXXXXX..XXXXXXX 100644
42
--- a/include/hw/arm/smmu-common.h
43
+++ b/include/hw/arm/smmu-common.h
44
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUPciBus {
45
typedef struct SMMUIOTLBKey {
46
uint64_t iova;
47
uint16_t asid;
48
+ uint16_t vmid;
49
uint8_t tg;
50
uint8_t level;
51
} SMMUIOTLBKey;
52
@@ -XXX,XX +XXX,XX @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
53
SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
54
SMMUTransTableInfo *tt, hwaddr iova);
55
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
56
-SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
57
+SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint16_t vmid, uint64_t iova,
58
uint8_t tg, uint8_t level);
59
void smmu_iotlb_inv_all(SMMUState *s);
60
void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
61
-void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
62
+void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
63
uint8_t tg, uint64_t num_pages, uint8_t ttl);
64
65
/* Unmap the range of all the notifiers registered to any IOMMU mr */
66
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/arm/smmu-common.c
69
+++ b/hw/arm/smmu-common.c
70
@@ -XXX,XX +XXX,XX @@ static guint smmu_iotlb_key_hash(gconstpointer v)
71
72
/* Jenkins hash */
73
a = b = c = JHASH_INITVAL + sizeof(*key);
74
- a += key->asid + key->level + key->tg;
75
+ a += key->asid + key->vmid + key->level + key->tg;
76
b += extract64(key->iova, 0, 32);
77
c += extract64(key->iova, 32, 32);
78
79
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
80
SMMUIOTLBKey *k1 = (SMMUIOTLBKey *)v1, *k2 = (SMMUIOTLBKey *)v2;
81
82
return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
83
- (k1->level == k2->level) && (k1->tg == k2->tg);
84
+ (k1->level == k2->level) && (k1->tg == k2->tg) &&
85
+ (k1->vmid == k2->vmid);
86
}
87
88
-SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
89
+SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint16_t vmid, uint64_t iova,
90
uint8_t tg, uint8_t level)
91
{
92
- SMMUIOTLBKey key = {.asid = asid, .iova = iova, .tg = tg, .level = level};
93
+ SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
94
+ .tg = tg, .level = level};
95
96
return key;
97
}
98
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
99
uint64_t mask = subpage_size - 1;
100
SMMUIOTLBKey key;
101
102
- key = smmu_get_iotlb_key(cfg->asid, iova & ~mask, tg, level);
103
+ key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
104
+ iova & ~mask, tg, level);
105
entry = g_hash_table_lookup(bs->iotlb, &key);
106
if (entry) {
107
break;
108
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
109
110
if (entry) {
111
cfg->iotlb_hits++;
112
- trace_smmu_iotlb_lookup_hit(cfg->asid, iova,
113
+ trace_smmu_iotlb_lookup_hit(cfg->asid, cfg->s2cfg.vmid, iova,
114
cfg->iotlb_hits, cfg->iotlb_misses,
115
100 * cfg->iotlb_hits /
116
(cfg->iotlb_hits + cfg->iotlb_misses));
117
} else {
118
cfg->iotlb_misses++;
119
- trace_smmu_iotlb_lookup_miss(cfg->asid, iova,
120
+ trace_smmu_iotlb_lookup_miss(cfg->asid, cfg->s2cfg.vmid, iova,
121
cfg->iotlb_hits, cfg->iotlb_misses,
122
100 * cfg->iotlb_hits /
123
(cfg->iotlb_hits + cfg->iotlb_misses));
124
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
125
smmu_iotlb_inv_all(bs);
126
}
127
128
- *key = smmu_get_iotlb_key(cfg->asid, new->entry.iova, tg, new->level);
129
- trace_smmu_iotlb_insert(cfg->asid, new->entry.iova, tg, new->level);
130
+ *key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
131
+ tg, new->level);
132
+ trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
133
+ tg, new->level);
134
g_hash_table_insert(bs->iotlb, key, new);
135
}
136
137
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
138
139
return SMMU_IOTLB_ASID(*iotlb_key) == asid;
140
}
141
-
142
-static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
143
+static gboolean smmu_hash_remove_by_asid_vmid_iova(gpointer key, gpointer value,
144
gpointer user_data)
145
{
146
SMMUTLBEntry *iter = (SMMUTLBEntry *)value;
147
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
148
if (info->asid >= 0 && info->asid != SMMU_IOTLB_ASID(iotlb_key)) {
149
return false;
150
}
151
+ if (info->vmid >= 0 && info->vmid != SMMU_IOTLB_VMID(iotlb_key)) {
152
+ return false;
153
+ }
154
return ((info->iova & ~entry->addr_mask) == entry->iova) ||
155
((entry->iova & ~info->mask) == info->iova);
156
}
157
158
-void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
159
+void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
160
uint8_t tg, uint64_t num_pages, uint8_t ttl)
161
{
162
/* if tg is not set we use 4KB range invalidation */
163
uint8_t granule = tg ? tg * 2 + 10 : 12;
164
165
if (ttl && (num_pages == 1) && (asid >= 0)) {
166
- SMMUIOTLBKey key = smmu_get_iotlb_key(asid, iova, tg, ttl);
167
+ SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
168
169
if (g_hash_table_remove(s->iotlb, &key)) {
170
return;
171
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
172
173
SMMUIOTLBPageInvInfo info = {
174
.asid = asid, .iova = iova,
175
+ .vmid = vmid,
176
.mask = (num_pages * 1 << granule) - 1};
177
178
g_hash_table_foreach_remove(s->iotlb,
179
- smmu_hash_remove_by_asid_iova,
180
+ smmu_hash_remove_by_asid_vmid_iova,
181
&info);
182
}
183
184
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
185
index XXXXXXX..XXXXXXX 100644
186
--- a/hw/arm/smmuv3.c
187
+++ b/hw/arm/smmuv3.c
188
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
189
{
190
dma_addr_t end, addr = CMD_ADDR(cmd);
191
uint8_t type = CMD_TYPE(cmd);
192
- uint16_t vmid = CMD_VMID(cmd);
193
+ int vmid = -1;
194
uint8_t scale = CMD_SCALE(cmd);
195
uint8_t num = CMD_NUM(cmd);
196
uint8_t ttl = CMD_TTL(cmd);
197
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
198
uint64_t num_pages;
199
uint8_t granule;
200
int asid = -1;
201
+ SMMUv3State *smmuv3 = ARM_SMMUV3(s);
202
+
203
+ /* Only consider VMID if stage-2 is supported. */
204
+ if (STAGE2_SUPPORTED(smmuv3)) {
205
+ vmid = CMD_VMID(cmd);
206
+ }
207
208
if (type == SMMU_CMD_TLBI_NH_VA) {
209
asid = CMD_ASID(cmd);
210
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
211
if (!tg) {
212
trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
213
smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
214
- smmu_iotlb_inv_iova(s, asid, addr, tg, 1, ttl);
215
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
216
return;
217
}
218
219
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
220
num_pages = (mask + 1) >> granule;
221
trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
222
smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
223
- smmu_iotlb_inv_iova(s, asid, addr, tg, num_pages, ttl);
224
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
225
addr += mask + 1;
226
}
227
}
228
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
229
index XXXXXXX..XXXXXXX 100644
230
--- a/hw/arm/trace-events
231
+++ b/hw/arm/trace-events
232
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_inv_all(void) "IOTLB invalidate all"
233
smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
234
smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
235
smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
236
-smmu_iotlb_lookup_hit(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
237
-smmu_iotlb_lookup_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
238
-smmu_iotlb_insert(uint16_t asid, uint64_t addr, uint8_t tg, uint8_t level) "IOTLB ++ asid=%d addr=0x%"PRIx64" tg=%d level=%d"
239
+smmu_iotlb_lookup_hit(uint16_t asid, uint16_t vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
240
+smmu_iotlb_lookup_miss(uint16_t asid, uint16_t vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
241
+smmu_iotlb_insert(uint16_t asid, uint16_t vmid, uint64_t addr, uint8_t tg, uint8_t level) "IOTLB ++ asid=%d vmid=%d addr=0x%"PRIx64" tg=%d level=%d"
242
243
# smmuv3.c
244
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
41
--
245
--
42
2.25.1
246
2.34.1
From: Mostafa Saleh <smostafa@google.com>

CMD_TLBI_S2_IPA: As S1+S2 is not enabled, for now this can be the
same as CMD_TLBI_NH_VAA.

CMD_TLBI_S12_VMALL: Added new function to invalidate TLB by VMID.

For stage-1 only commands, add a check to throw CERROR_ILL if used
when stage-1 is not supported.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230516203327.2051088-9-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/smmu-common.h | 1 +
hw/arm/smmu-common.c | 16 +++++++++++
hw/arm/smmuv3.c | 55 ++++++++++++++++++++++++++++++------
hw/arm/trace-events | 4 ++-
4 files changed, 67 insertions(+), 9 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@ SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint16_t vmid, uint64_t iova,
uint8_t tg, uint8_t level);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
+void smmu_iotlb_inv_vmid(SMMUState *s, uint16_t vmid);
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl);

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,

return SMMU_IOTLB_ASID(*iotlb_key) == asid;
}
+
+static gboolean smmu_hash_remove_by_vmid(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ uint16_t vmid = *(uint16_t *)user_data;
+ SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
+
+ return SMMU_IOTLB_VMID(*iotlb_key) == vmid;
+}
+
static gboolean smmu_hash_remove_by_asid_vmid_iova(gpointer key, gpointer value,
gpointer user_data)
{
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
}

+inline void smmu_iotlb_inv_vmid(SMMUState *s, uint16_t vmid)
+{
+ trace_smmu_iotlb_inv_vmid(vmid);
+ g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid, &vmid);
+}
+
/* VMSAv8-64 Translation */

/**
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
}
}

-static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
+static void smmuv3_range_inval(SMMUState *s, Cmd *cmd)
{
dma_addr_t end, addr = CMD_ADDR(cmd);
uint8_t type = CMD_TYPE(cmd);
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
}

if (!tg) {
- trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
+ trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
return;
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
uint64_t mask = dma_aligned_pow2_mask(addr, end, 64);

num_pages = (mask + 1) >> granule;
- trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
+ trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
{
uint16_t asid = CMD_ASID(&cmd);

+ if (!STAGE1_SUPPORTED(s)) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
+
trace_smmuv3_cmdq_tlbi_nh_asid(asid);
smmu_inv_notifiers_all(&s->smmu_state);
smmu_iotlb_inv_asid(bs, asid);
break;
}
case SMMU_CMD_TLBI_NH_ALL:
+ if (!STAGE1_SUPPORTED(s)) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
+ QEMU_FALLTHROUGH;
case SMMU_CMD_TLBI_NSNH_ALL:
trace_smmuv3_cmdq_tlbi_nh();
smmu_inv_notifiers_all(&s->smmu_state);
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
break;
case SMMU_CMD_TLBI_NH_VAA:
case SMMU_CMD_TLBI_NH_VA:
- smmuv3_s1_range_inval(bs, &cmd);
+ if (!STAGE1_SUPPORTED(s)) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
+ smmuv3_range_inval(bs, &cmd);
+ break;
+ case SMMU_CMD_TLBI_S12_VMALL:
+ {
+ uint16_t vmid = CMD_VMID(&cmd);
+
+ if (!STAGE2_SUPPORTED(s)) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
+
+ trace_smmuv3_cmdq_tlbi_s12_vmid(vmid);
+ smmu_inv_notifiers_all(&s->smmu_state);
+ smmu_iotlb_inv_vmid(bs, vmid);
+ break;
+ }
+ case SMMU_CMD_TLBI_S2_IPA:
+ if (!STAGE2_SUPPORTED(s)) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
+ /*
+ * As currently only either s1 or s2 are supported
+ * we can reuse same function for s2.
+ */
+ smmuv3_range_inval(bs, &cmd);
break;
case SMMU_CMD_TLBI_EL3_ALL:
case SMMU_CMD_TLBI_EL3_VA:
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
case SMMU_CMD_TLBI_EL2_ASID:
case SMMU_CMD_TLBI_EL2_VA:
case SMMU_CMD_TLBI_EL2_VAA:
- case SMMU_CMD_TLBI_S12_VMALL:
- case SMMU_CMD_TLBI_S2_IPA:
case SMMU_CMD_ATC_INV:
case SMMU_CMD_PRI_RESP:
case SMMU_CMD_RESUME:
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
break;
default:
cmd_error = SMMU_CERROR_ILL;
- qemu_log_mask(LOG_GUEST_ERROR,
- "Illegal command type: %d\n", CMD_TYPE(&cmd));
break;
}
qemu_mutex_unlock(&s->mutex);
if (cmd_error) {
+ if (cmd_error == SMMU_CERROR_ILL) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "Illegal command type: %d\n", CMD_TYPE(&cmd));
+ }
break;
}
/*
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -XXX,XX +XXX,XX @@ smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, ui
smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
+smmu_iotlb_inv_vmid(uint16_t vmid) "IOTLB invalidate vmid=%d"
smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
smmu_iotlb_lookup_hit(uint16_t asid, uint16_t vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%x - end=0x%x"
smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
-smmuv3_s1_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
+smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
smmuv3_cmdq_tlbi_nh(void) ""
smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
+smmuv3_cmdq_tlbi_s12_vmid(uint16_t vmid) "vmid=%d"
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
--
2.34.1
148
- case ARMMMUIdx_E10_1:
149
- case ARMMMUIdx_E10_1_PAN:
150
- g_assert_not_reached();
151
- }
152
-}
153
-
154
/* Return the TTBR associated with this translation regime */
155
static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
156
{
157
--
212
--
158
2.25.1
213
2.34.1
diff view generated by jsdifflib
Deleted patch
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
2
1
3
The "PCI Bus Binding to: IEEE Std 1275-1994" defines the compatible
4
string for a PCIe bus or endpoint as "pci<vendorid>,<deviceid>" or
5
similar. Since the initial binding for PCI virtio-iommu didn't follow
6
this rule, it was modified to accept both strings and ensure backward
7
compatibility. Also, the unit-name for the node should be
8
"device,function".
9
10
Fix corresponding dt-validate and dtc warnings:
11
12
pcie@10000000: virtio_iommu@16:compatible: ['virtio,pci-iommu'] does not contain items matching the given schema
13
pcie@10000000: Unevaluated properties are not allowed (... 'virtio_iommu@16' were unexpected)
14
From schema: linux/Documentation/devicetree/bindings/pci/host-generic-pci.yaml
15
virtio_iommu@16: compatible: 'oneOf' conditional failed, one must be fixed:
16
['virtio,pci-iommu'] is too short
17
'pci1af4,1057' was expected
18
From schema: dtschema/schemas/pci/pci-bus.yaml
19
20
Warning (pci_device_reg): /pcie@10000000/virtio_iommu@16: PCI unit address format error, expected "2,0"
21
22
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
26
hw/arm/virt.c | 5 +++--
27
1 file changed, 3 insertions(+), 2 deletions(-)
28
29
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/arm/virt.c
32
+++ b/hw/arm/virt.c
33
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const VirtMachineState *vms,
34
35
static void create_virtio_iommu_dt_bindings(VirtMachineState *vms)
36
{
37
- const char compat[] = "virtio,pci-iommu";
38
+ const char compat[] = "virtio,pci-iommu\0pci1af4,1057";
39
uint16_t bdf = vms->virtio_iommu_bdf;
40
MachineState *ms = MACHINE(vms);
41
char *node;
42
43
vms->iommu_phandle = qemu_fdt_alloc_phandle(ms->fdt);
44
45
- node = g_strdup_printf("%s/virtio_iommu@%d", vms->pciehb_nodename, bdf);
46
+ node = g_strdup_printf("%s/virtio_iommu@%x,%x", vms->pciehb_nodename,
47
+ PCI_SLOT(bdf), PCI_FUNC(bdf));
48
qemu_fdt_add_subnode(ms->fdt, node);
49
qemu_fdt_setprop(ms->fdt, node, "compatible", compat, sizeof(compat));
50
qemu_fdt_setprop_sized_cells(ms->fdt, node, "reg",
51
--
52
2.25.1
Deleted patch
1
From: Ake Koomsin <ake@igel.co.jp>
2
1
3
An exception targeting EL2 from a lower EL is actually maskable when
4
HCR_E2H and HCR_TGE are both set. This applies to both secure and
5
non-secure Security states.
6
7
We can remove the conditions that try to suppress masking of
8
interrupts when we are Secure and the exception targets EL2 and
9
Secure EL2 is disabled. This is OK because in that situation
10
arm_phys_excp_target_el() will never return 2 as the target EL. The
11
'not if secure' check in this function was originally written before
12
arm_hcr_el2_eff(), and back then the target EL returned by
13
arm_phys_excp_target_el() could be 2 even if we were in Secure
14
EL0/EL1; but it is no longer needed.
15
16
Signed-off-by: Ake Koomsin <ake@igel.co.jp>
17
Message-id: 20221017092432.546881-1-ake@igel.co.jp
18
[PMM: Add commit message paragraph explaining why it's OK to
19
remove the checks on secure and SCR_EEL2]
20
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
23
target/arm/cpu.c | 24 +++++++++++++++++-------
24
1 file changed, 17 insertions(+), 7 deletions(-)
25
26
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu.c
29
+++ b/target/arm/cpu.c
30
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
31
if ((target_el > cur_el) && (target_el != 1)) {
32
/* Exceptions targeting a higher EL may not be maskable */
33
if (arm_feature(env, ARM_FEATURE_AARCH64)) {
34
- /*
35
- * 64-bit masking rules are simple: exceptions to EL3
36
- * can't be masked, and exceptions to EL2 can only be
37
- * masked from Secure state. The HCR and SCR settings
38
- * don't affect the masking logic, only the interrupt routing.
39
- */
40
- if (target_el == 3 || !secure || (env->cp15.scr_el3 & SCR_EEL2)) {
41
+ switch (target_el) {
42
+ case 2:
43
+ /*
44
+ * According to ARM DDI 0487H.a, an interrupt can be masked
45
+ * when HCR_E2H and HCR_TGE are both set regardless of the
46
+ * current Security state. Note that we need to revisit this
47
+ * part again once we need to support NMI.
48
+ */
49
+ if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
50
+ unmasked = true;
51
+ }
52
+ break;
53
+ case 3:
54
+ /* Interrupt cannot be masked when the target EL is 3 */
55
unmasked = true;
56
+ break;
57
+ default:
58
+ g_assert_not_reached();
59
}
60
} else {
61
/*
62
--
63
2.25.1
Deleted patch
1
From: Damien Hedde <damien.hedde@greensocs.com>
2
1
3
The code for handling the reset level count in the Resettable code
4
has two issues:
5
6
The reset count is only decremented for the 1->0 case. This means
7
that if there's ever a nested reset that takes the count to 2 then it
8
will never again be decremented. Eventually the count will exceed
9
the '50' limit in resettable_phase_enter() and QEMU will trip over
10
the assertion failure. The repro case in issue 1266 is an example of
11
this that happens now the SCSI subsystem uses three-phase reset.
12
13
Secondly, the count is decremented only after the exit phase handler
14
is called. Moving the reset count decrement from "just after" to
15
"just before" calling the exit phase handler allows
16
resettable_is_in_reset() to return false during the handler
17
execution.
18
19
This simplifies reset handling in resettable devices. Typically, a
20
function that updates the device state will just need to read the
21
current reset state and not anymore treat the "in a reset-exit
22
transition" as a special case.
23
24
Note that the change to the semantics of the *_is_in_reset() functions
25
will have no effect on the current codebase, because only two
26
devices (hw/char/cadence_uart.c and hw/misc/zynq_sclr.c) currently
27
call those functions, and in neither case do they do it from the
28
device's exit phase method.
29
30
Fixes: 4a5fc890 ("scsi: Use device_cold_reset() and bus_cold_reset()")
31
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1266
32
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Reported-by: Michael Peter <michael.peter@hensoldt-cyber.com>
35
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
36
Message-id: 20221020142749.3357951-1-peter.maydell@linaro.org
37
Buglink: https://bugs.launchpad.net/qemu/+bug/1905297
38
Reported-by: Michael Peter <michael.peter@hensoldt-cyber.com>
39
[PMM: adjust the docs paragraph changed to get the name of the
40
'enter' phase right and to clarify exactly when the count is
41
adjusted; rewrite the commit message]
42
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
43
---
44
docs/devel/reset.rst | 8 +++++---
45
hw/core/resettable.c | 3 +--
46
2 files changed, 6 insertions(+), 5 deletions(-)
47
48
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
49
index XXXXXXX..XXXXXXX 100644
50
--- a/docs/devel/reset.rst
51
+++ b/docs/devel/reset.rst
52
@@ -XXX,XX +XXX,XX @@ Polling the reset state
53
Resettable interface provides the ``resettable_is_in_reset()`` function.
54
This function returns true if the object parameter is currently under reset.
55
56
-An object is under reset from the beginning of the *init* phase to the end of
57
-the *exit* phase. During all three phases, the function will return that the
58
-object is in reset.
59
+An object is under reset from the beginning of the *enter* phase (before
60
+either its children or its own enter method is called) to the *exit*
61
+phase. During *enter* and *hold* phase only, the function will return that the
62
+object is in reset. The state is changed after the *exit* is propagated to
63
+its children and just before calling the object's own *exit* method.
64
65
This function may be used if the object behavior has to be adapted
66
while in reset state. For example if a device has an irq input,
67
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/hw/core/resettable.c
70
+++ b/hw/core/resettable.c
71
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
72
resettable_child_foreach(rc, obj, resettable_phase_exit, NULL, type);
73
74
assert(s->count > 0);
75
- if (s->count == 1) {
76
+ if (--s->count == 0) {
77
trace_resettable_phase_exit_exec(obj, obj_typename, !!rc->phases.exit);
78
if (rc->phases.exit && !resettable_get_tr_func(rc, obj)) {
79
rc->phases.exit(obj);
80
}
81
- s->count = 0;
82
}
83
s->exit_phase_in_progress = false;
84
trace_resettable_phase_exit_end(obj, obj_typename, s->count);
85
--
86
2.25.1
87
88
Deleted patch
1
The semantic difference between the deprecated device_legacy_reset()
2
function and the newer device_cold_reset() function is that the new
3
function resets both the device itself and any qbuses it owns,
4
whereas the legacy function resets just the device itself and nothing
5
else. In hyperv_synic_reset() we reset a SynICState, which has no
6
qbuses, so for this purpose the two functions behave identically and
7
we can stop using the deprecated one.
8
1
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
11
Message-id: 20221013171817.1447562-1-peter.maydell@linaro.org
12
---
13
hw/hyperv/hyperv.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
16
diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/hyperv/hyperv.c
19
+++ b/hw/hyperv/hyperv.c
20
@@ -XXX,XX +XXX,XX @@ void hyperv_synic_reset(CPUState *cs)
21
SynICState *synic = get_synic(cs);
22
23
if (synic) {
24
- device_legacy_reset(DEVICE(synic));
25
+ device_cold_reset(DEVICE(synic));
26
}
27
}
28
29
--
30
2.25.1
Deleted patch
1
From: Axel Heider <axel.heider@hensoldt.net>
2
1
3
When running seL4 tests (https://docs.sel4.systems/projects/sel4test)
4
on the sabrelight platform, the timer tests fail. The arm/imx6 EPIT
5
timer interrupt does not fire properly; instead of e.g. a second, it
6
can take up to a minute to finally see the interrupt.
7
8
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1263
9
10
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
11
Message-id: 166663118138.13362.1229967229046092876-0@git.sr.ht
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
hw/timer/imx_epit.c | 9 +++++++--
16
1 file changed, 7 insertions(+), 2 deletions(-)
17
18
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/timer/imx_epit.c
21
+++ b/hw/timer/imx_epit.c
22
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
23
/* If IOVW bit is set then set the timer value */
24
ptimer_set_count(s->timer_reload, s->lr);
25
}
26
-
27
+ /*
28
+ * Commit the change to s->timer_reload, so it can propagate. Otherwise
29
+ * the timer interrupt may not fire properly. The commit must happen
30
+ * before calling imx_epit_reload_compare_timer(), which reads
31
+ * s->timer_reload internally again.
32
+ */
33
+ ptimer_transaction_commit(s->timer_reload);
34
imx_epit_reload_compare_timer(s);
35
ptimer_transaction_commit(s->timer_cmp);
36
- ptimer_transaction_commit(s->timer_reload);
37
break;
38
39
case 3: /* CMP */
40
--
41
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
Perform the atomic update for hardware management of the access flag.
3
In smmuv3_notify_iova, read the granule based on the translation stage
4
and use the VMID if a valid value is passed.
4
5
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Mostafa Saleh <smostafa@google.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org
8
Tested-by: Eric Auger <eric.auger@redhat.com>
9
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
10
Message-id: 20230516203327.2051088-10-smostafa@google.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
12
---
10
docs/system/arm/emulation.rst | 1 +
13
hw/arm/smmuv3.c | 39 ++++++++++++++++++++++++++-------------
11
target/arm/cpu64.c | 1 +
14
hw/arm/trace-events | 2 +-
12
target/arm/ptw.c | 176 +++++++++++++++++++++++++++++-----
15
2 files changed, 27 insertions(+), 14 deletions(-)
13
3 files changed, 156 insertions(+), 22 deletions(-)
14
16
15
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/docs/system/arm/emulation.rst
19
--- a/hw/arm/smmuv3.c
18
+++ b/docs/system/arm/emulation.rst
20
+++ b/hw/arm/smmuv3.c
19
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
@@ -XXX,XX +XXX,XX @@ epilogue:
20
- FEAT_FlagM (Flag manipulation instructions v2)
22
* @mr: IOMMU mr region handle
21
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
23
* @n: notifier to be called
22
- FEAT_GTG (Guest translation granule size)
24
* @asid: address space ID or negative value if we don't care
23
+- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
25
+ * @vmid: virtual machine ID or negative value if we don't care
24
- FEAT_HCX (Support for the HCRX_EL2 register)
26
* @iova: iova
25
- FEAT_HPDS (Hierarchical permission disables)
27
* @tg: translation granule (if communicated through range invalidation)
26
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
28
* @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
27
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
29
*/
28
index XXXXXXX..XXXXXXX 100644
30
static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
29
--- a/target/arm/cpu64.c
31
IOMMUNotifier *n,
30
+++ b/target/arm/cpu64.c
32
- int asid, dma_addr_t iova,
31
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
33
- uint8_t tg, uint64_t num_pages)
32
cpu->isar.id_aa64mmfr0 = t;
34
+ int asid, int vmid,
33
35
+ dma_addr_t iova, uint8_t tg,
34
t = cpu->isar.id_aa64mmfr1;
36
+ uint64_t num_pages)
35
+ t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 1); /* FEAT_HAFDBS, AF only */
37
{
36
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
38
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
37
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
39
IOMMUTLBEvent event;
38
t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
40
uint8_t granule;
39
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
41
+ SMMUv3State *s = sdev->smmu;
40
index XXXXXXX..XXXXXXX 100644
42
41
--- a/target/arm/ptw.c
43
if (!tg) {
42
+++ b/target/arm/ptw.c
44
SMMUEventInfo event = {.inval_ste_allowed = true};
43
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
45
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
44
bool in_secure;
46
return;
45
bool in_debug;
47
}
46
bool out_secure;
48
47
+ bool out_rw;
49
- tt = select_tt(cfg, iova);
48
bool out_be;
50
- if (!tt) {
49
+ hwaddr out_virt;
51
+ if (vmid >= 0 && cfg->s2cfg.vmid != vmid) {
50
hwaddr out_phys;
52
return;
51
void *out_host;
53
}
52
} S1Translate;
54
- granule = tt->granule_sz;
53
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
54
uint8_t pte_attrs;
55
bool pte_secure;
56
57
+ ptw->out_virt = addr;
58
+
55
+
59
if (unlikely(ptw->in_debug)) {
56
+ if (STAGE1_SUPPORTED(s)) {
60
/*
57
+ tt = select_tt(cfg, iova);
61
* From gdbstub, do not use softmmu so that we don't modify the
58
+ if (!tt) {
62
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
59
+ return;
63
pte_secure = is_secure;
60
+ }
64
}
61
+ granule = tt->granule_sz;
65
ptw->out_host = NULL;
66
+ ptw->out_rw = false;
67
} else {
68
CPUTLBEntryFull *full;
69
int flags;
70
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
71
goto fail;
72
}
73
ptw->out_phys = full->phys_addr;
74
+ ptw->out_rw = full->prot & PROT_WRITE;
75
pte_attrs = full->pte_attrs;
76
pte_secure = full->attrs.secure;
77
}
78
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw,
79
ARMMMUFaultInfo *fi)
80
{
81
CPUState *cs = env_cpu(env);
82
+ void *host = ptw->out_host;
83
uint32_t data;
84
85
- if (likely(ptw->out_host)) {
86
+ if (likely(host)) {
87
/* Page tables are in RAM, and we have the host address. */
88
+ data = qatomic_read((uint32_t *)host);
89
if (ptw->out_be) {
90
- data = ldl_be_p(ptw->out_host);
91
+ data = be32_to_cpu(data);
92
} else {
93
- data = ldl_le_p(ptw->out_host);
94
+ data = le32_to_cpu(data);
95
}
96
} else {
97
/* Page tables are in MMIO. */
98
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
99
ARMMMUFaultInfo *fi)
100
{
101
CPUState *cs = env_cpu(env);
102
+ void *host = ptw->out_host;
103
uint64_t data;
104
105
- if (likely(ptw->out_host)) {
106
+ if (likely(host)) {
107
/* Page tables are in RAM, and we have the host address. */
108
+#ifdef CONFIG_ATOMIC64
109
+ data = qatomic_read__nocheck((uint64_t *)host);
110
if (ptw->out_be) {
111
- data = ldq_be_p(ptw->out_host);
112
+ data = be64_to_cpu(data);
113
} else {
114
- data = ldq_le_p(ptw->out_host);
115
+ data = le64_to_cpu(data);
116
}
117
+#else
118
+ if (ptw->out_be) {
119
+ data = ldq_be_p(host);
120
+ } else {
62
+ } else {
121
+ data = ldq_le_p(host);
63
+ granule = cfg->s2cfg.granule_sz;
122
+ }
123
+#endif
124
} else {
125
/* Page tables are in MMIO. */
126
MemTxAttrs attrs = { .secure = ptw->out_secure };
127
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
128
return data;
129
}
130
131
+static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
132
+ uint64_t new_val, S1Translate *ptw,
133
+ ARMMMUFaultInfo *fi)
134
+{
135
+ uint64_t cur_val;
136
+ void *host = ptw->out_host;
137
+
138
+ if (unlikely(!host)) {
139
+ fi->type = ARMFault_UnsuppAtomicUpdate;
140
+ fi->s1ptw = true;
141
+ return 0;
142
+ }
143
+
144
+ /*
145
+ * Raising a stage2 Protection fault for an atomic update to a read-only
146
+ * page is delayed until it is certain that there is a change to make.
147
+ */
148
+ if (unlikely(!ptw->out_rw)) {
149
+ int flags;
150
+ void *discard;
151
+
152
+ env->tlb_fi = fi;
153
+ flags = probe_access_flags(env, ptw->out_virt, MMU_DATA_STORE,
154
+ arm_to_core_mmu_idx(ptw->in_ptw_idx),
155
+ true, &discard, 0);
156
+ env->tlb_fi = NULL;
157
+
158
+ if (unlikely(flags & TLB_INVALID_MASK)) {
159
+ assert(fi->type != ARMFault_None);
160
+ fi->s2addr = ptw->out_virt;
161
+ fi->stage2 = true;
162
+ fi->s1ptw = true;
163
+ fi->s1ns = !ptw->in_secure;
164
+ return 0;
165
+ }
64
+ }
166
+
65
+
167
+ /* In case CAS mismatches and we loop, remember writability. */
66
} else {
168
+ ptw->out_rw = true;
67
granule = tg * 2 + 10;
169
+ }
68
}
170
+
69
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
171
+#ifdef CONFIG_ATOMIC64
70
memory_region_notify_iommu_one(n, &event);
172
+ if (ptw->out_be) {
71
}
173
+ old_val = cpu_to_be64(old_val);
72
174
+ new_val = cpu_to_be64(new_val);
73
-/* invalidate an asid/iova range tuple in all mr's */
175
+ cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
74
-static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
176
+ cur_val = be64_to_cpu(cur_val);
75
- uint8_t tg, uint64_t num_pages)
177
+ } else {
76
+/* invalidate an asid/vmid/iova range tuple in all mr's */
178
+ old_val = cpu_to_le64(old_val);
77
+static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
179
+ new_val = cpu_to_le64(new_val);
78
+ dma_addr_t iova, uint8_t tg,
180
+ cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
79
+ uint64_t num_pages)
181
+ cur_val = le64_to_cpu(cur_val);
182
+ }
183
+#else
184
+ /*
185
+ * We can't support the full 64-bit atomic cmpxchg on the host.
186
+ * Because this is only used for FEAT_HAFDBS, which is only for AA64,
187
+ * we know that TCG_OVERSIZED_GUEST is set, which means that we are
188
+ * running in round-robin mode and could only race with dma i/o.
189
+ */
190
+#ifndef TCG_OVERSIZED_GUEST
191
+# error "Unexpected configuration"
192
+#endif
193
+ bool locked = qemu_mutex_iothread_locked();
194
+ if (!locked) {
195
+ qemu_mutex_lock_iothread();
196
+ }
197
+ if (ptw->out_be) {
198
+ cur_val = ldq_be_p(host);
199
+ if (cur_val == old_val) {
200
+ stq_be_p(host, new_val);
201
+ }
202
+ } else {
203
+ cur_val = ldq_le_p(host);
204
+ if (cur_val == old_val) {
205
+ stq_le_p(host, new_val);
206
+ }
207
+ }
208
+ if (!locked) {
209
+ qemu_mutex_unlock_iothread();
210
+ }
211
+#endif
212
+
213
+ return cur_val;
214
+}
215
+
216
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
217
uint32_t *table, uint32_t address)
218
{
80
{
219
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
81
SMMUDevice *sdev;
220
uint32_t el = regime_el(env, mmu_idx);
82
221
uint64_t descaddrmask;
83
@@ -XXX,XX +XXX,XX @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
222
bool aarch64 = arm_el_is_aa64(env, el);
84
IOMMUMemoryRegion *mr = &sdev->iommu;
223
- uint64_t descriptor;
85
IOMMUNotifier *n;
224
+ uint64_t descriptor, new_descriptor;
86
225
bool nstable;
87
- trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova,
226
88
- tg, num_pages);
227
/* TODO: This code does not support shareability levels. */
89
+ trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, vmid,
228
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
90
+ iova, tg, num_pages);
229
if (fi->type != ARMFault_None) {
91
230
goto do_fault;
92
IOMMU_NOTIFIER_FOREACH(n, mr) {
231
}
93
- smmuv3_notify_iova(mr, n, asid, iova, tg, num_pages);
232
+ new_descriptor = descriptor;
94
+ smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages);
233
234
+ restart_atomic_update:
235
if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
236
/* Invalid, or the Reserved level 3 encoding */
237
goto do_translation_fault;
238
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
239
* to give a correct page or table address, the address field
240
* in a block descriptor is smaller; so we need to explicitly
241
* clear the lower bits here before ORing in the low vaddr bits.
242
+ *
243
+ * Afterward, descaddr is the final physical address.
244
*/
245
page_size = (1ULL << ((stride * (4 - level)) + 3));
246
descaddr &= ~(hwaddr)(page_size - 1);
247
descaddr |= (address & (page_size - 1));
248
249
+ if (likely(!ptw->in_debug)) {
250
+ /*
251
+ * Access flag.
252
+ * If HA is enabled, prepare to update the descriptor below.
253
+ * Otherwise, pass the access fault on to software.
254
+ */
255
+ if (!(descriptor & (1 << 10))) {
256
+ if (param.ha) {
257
+ new_descriptor |= 1 << 10; /* AF */
258
+ } else {
259
+ fi->type = ARMFault_AccessFlag;
260
+ goto do_fault;
261
+ }
262
+ }
263
+ }
264
+
265
/*
266
- * Extract attributes from the descriptor, and apply table descriptors.
267
- * Stage 2 table descriptors do not include any attribute fields.
268
- * HPD disables all the table attributes except NSTable.
269
+ * Extract attributes from the (modified) descriptor, and apply
270
+ * table descriptors. Stage 2 table descriptors do not include
271
+ * any attribute fields. HPD disables all the table attributes
272
+ * except NSTable.
273
*/
274
- attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
275
+ attrs = new_descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
276
if (!regime_is_stage2(mmu_idx)) {
277
attrs |= nstable << 5; /* NS */
278
if (!param.hpd) {
279
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
280
}
95
}
281
}
96
}
282
97
}
283
- /*
98
@@ -XXX,XX +XXX,XX @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd)
284
- * Here descaddr is the final physical address, and attributes
99
285
- * are all in attrs.
100
if (!tg) {
286
- */
101
trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
287
- if ((attrs & (1 << 10)) == 0) {
102
- smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
288
- /* Access flag */
103
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1);
289
- fi->type = ARMFault_AccessFlag;
104
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
290
- goto do_fault;
105
return;
291
- }
292
-
293
ap = extract32(attrs, 6, 2);
294
-
295
if (regime_is_stage2(mmu_idx)) {
296
ns = mmu_idx == ARMMMUIdx_Stage2;
297
xn = extract64(attrs, 53, 2);
298
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
299
goto do_fault;
300
}
106
}
301
107
@@ -XXX,XX +XXX,XX @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd)
302
+ /* If FEAT_HAFDBS has made changes, update the PTE. */
108
303
+ if (new_descriptor != descriptor) {
109
num_pages = (mask + 1) >> granule;
304
+ new_descriptor = arm_casq_ptw(env, descriptor, new_descriptor, ptw, fi);
110
trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
305
+ if (fi->type != ARMFault_None) {
111
- smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
306
+ goto do_fault;
112
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages);
307
+ }
113
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
308
+ /*
114
addr += mask + 1;
309
+ * I_YZSVV says that if the in-memory descriptor has changed,
115
}
310
+ * then we must use the information in that new value
116
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
311
+ * (which might include a different output address, different
117
index XXXXXXX..XXXXXXX 100644
312
+ * attributes, or generate a fault).
118
--- a/hw/arm/trace-events
313
+ * Restart the handling of the descriptor value from scratch.
119
+++ b/hw/arm/trace-events
314
+ */
120
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_tlbi_s12_vmid(uint16_t vmid) "vmid=%d"
315
+ if (new_descriptor != descriptor) {
121
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
316
+ descriptor = new_descriptor;
122
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
317
+ goto restart_atomic_update;
123
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
318
+ }
124
-smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
319
+ }
125
+smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint16_t vmid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
320
+
126
321
if (ns) {
322
/*
323
* The NS bit will (as required by the architecture) have no effect if
324
--
127
--
325
2.25.1
128
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
Always overriding fi->type was incorrect, as we would not properly
3
As everything is in place, we can use a new system property to
4
propagate the fault type from S1_ptw_translate, or arm_ldq_ptw.
4
advertise which stage is supported and remove bad_ste from STE
5
Simplify things by providing a new label for a translation fault.
5
stage2 config.
6
For other faults, store into fi directly.
7
6
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
The added arm-smmuv3.stage property can have two values:
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
- "1": Stage-1 only is advertised.
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
- "2": Stage-2 only is advertised.
11
Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org
10
11
If not passed or an unsupported value is passed, it will default to
12
stage-1.
13
14
Advertise VMID16.
15
16
Don't try to decode CD, if stage-2 is configured.
17
18
Reviewed-by: Eric Auger <eric.auger@redhat.com>
19
Signed-off-by: Mostafa Saleh <smostafa@google.com>
20
Tested-by: Eric Auger <eric.auger@redhat.com>
21
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
22
Message-id: 20230516203327.2051088-11-smostafa@google.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
24
---
14
target/arm/ptw.c | 31 +++++++++++++------------------
25
include/hw/arm/smmuv3.h | 1 +
15
1 file changed, 13 insertions(+), 18 deletions(-)
26
hw/arm/smmuv3.c | 32 ++++++++++++++++++++++----------
27
2 files changed, 23 insertions(+), 10 deletions(-)
16
28
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
29
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
18
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/ptw.c
31
--- a/include/hw/arm/smmuv3.h
20
+++ b/target/arm/ptw.c
32
+++ b/include/hw/arm/smmuv3.h
21
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
33
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
22
ARMCPU *cpu = env_archcpu(env);
34
23
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
35
qemu_irq irq[4];
24
bool is_secure = ptw->in_secure;
36
QemuMutex mutex;
25
- /* Read an LPAE long-descriptor translation table. */
37
+ char *stage;
26
- ARMFaultType fault_type = ARMFault_Translation;
38
};
27
uint32_t level;
39
28
ARMVAParameters param;
40
typedef enum {
29
uint64_t ttbr;
41
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
42
index XXXXXXX..XXXXXXX 100644
31
* so our choice is to always raise the fault.
43
--- a/hw/arm/smmuv3.c
32
*/
44
+++ b/hw/arm/smmuv3.c
33
if (param.tsz_oob) {
45
@@ -XXX,XX +XXX,XX @@
34
- fault_type = ARMFault_Translation;
46
#include "hw/irq.h"
35
- goto do_fault;
47
#include "hw/sysbus.h"
36
+ goto do_translation_fault;
48
#include "migration/vmstate.h"
37
}
49
+#include "hw/qdev-properties.h"
38
50
#include "hw/qdev-core.h"
39
addrsize = 64 - 8 * param.tbi;
51
#include "hw/pci/pci.h"
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
52
#include "cpu.h"
41
addrsize - inputsize);
53
@@ -XXX,XX +XXX,XX @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
42
if (-top_bits != param.select) {
54
43
/* The gap between the two regions is a Translation fault */
55
static void smmuv3_init_regs(SMMUv3State *s)
44
- fault_type = ARMFault_Translation;
56
{
45
- goto do_fault;
57
- /**
46
+ goto do_translation_fault;
58
- * IDR0: stage1 only, AArch64 only, coherent access, 16b ASID,
47
}
59
- * multi-level stream table
60
- */
61
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1); /* stage 1 supported */
62
+ /* Based on sys property, the stages supported in smmu will be advertised.*/
63
+ if (s->stage && !strcmp("2", s->stage)) {
64
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
65
+ } else {
66
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
67
+ }
68
+
69
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
70
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
71
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
72
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
73
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
74
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
75
/* terminated transaction will always be aborted/error returned */
76
@@ -XXX,XX +XXX,XX @@ static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
77
goto bad_ste;
48
}
78
}
49
79
50
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
80
- /* This is still here as stage 2 has not been fully enabled yet. */
51
* Translation table walk disabled => Translation fault on TLB miss
81
- qemu_log_mask(LOG_UNIMP, "SMMUv3 does not support stage 2 yet\n");
52
* Note: This is always 0 on 64-bit EL2 and EL3.
82
- goto bad_ste;
53
*/
83
-
54
- goto do_fault;
84
return 0;
55
+ goto do_translation_fault;
85
86
bad_ste:
87
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
88
return ret;
56
}
89
}
57
90
58
if (!regime_is_stage2(mmu_idx)) {
91
- if (cfg->aborted || cfg->bypassed) {
59
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
92
+ if (cfg->aborted || cfg->bypassed || (cfg->stage == 2)) {
60
if (param.ds && stride == 9 && sl2) {
93
return 0;
61
if (sl0 != 0) {
62
level = 0;
63
- fault_type = ARMFault_Translation;
64
- goto do_fault;
65
+ goto do_translation_fault;
66
}
67
startlevel = -1;
68
} else if (!aarch64 || stride == 9) {
69
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
70
ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
71
inputsize, stride, outputsize);
72
if (!ok) {
73
- fault_type = ARMFault_Translation;
74
- goto do_fault;
75
+ goto do_translation_fault;
76
}
77
level = startlevel;
78
}
94
}
79
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
95
80
descaddr |= extract64(ttbr, 2, 4) << 48;
96
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {
81
} else if (descaddr >> outputsize) {
82
level = 0;
83
- fault_type = ARMFault_AddressSize;
84
+ fi->type = ARMFault_AddressSize;
85
goto do_fault;
86
}
97
}
87
98
};
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
99
89
100
+static Property smmuv3_properties[] = {
90
if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
101
+ /*
91
/* Invalid, or the Reserved level 3 encoding */
102
+ * Stages of translation advertised.
92
- goto do_fault;
103
+ * "1": Stage 1
93
+ goto do_translation_fault;
104
+ * "2": Stage 2
94
}
105
+ * Defaults to stage 1
95
106
+ */
96
descaddr = descriptor & descaddrmask;
107
+ DEFINE_PROP_STRING("stage", SMMUv3State, stage),
97
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
108
+ DEFINE_PROP_END_OF_LIST()
98
descaddr |= extract64(descriptor, 12, 4) << 48;
109
+};
99
}
110
+
100
} else if (descaddr >> outputsize) {
111
static void smmuv3_instance_init(Object *obj)
101
- fault_type = ARMFault_AddressSize;
112
{
102
+ fi->type = ARMFault_AddressSize;
113
/* Nothing much to do here as of now */
103
goto do_fault;
114
@@ -XXX,XX +XXX,XX @@ static void smmuv3_class_init(ObjectClass *klass, void *data)
104
}
115
&c->parent_phases);
105
116
c->parent_realize = dc->realize;
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
117
dc->realize = smmu_realize;
107
* Here descaddr is the final physical address, and attributes
118
+ device_class_set_props(dc, smmuv3_properties);
108
* are all in attrs.
119
}
109
*/
120
110
- fault_type = ARMFault_AccessFlag;
121
static int smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
111
if ((attrs & (1 << 8)) == 0) {
112
/* Access flag */
113
+ fi->type = ARMFault_AccessFlag;
114
goto do_fault;
115
}
116
117
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
118
result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
119
}
120
121
- fault_type = ARMFault_Permission;
122
if (!(result->f.prot & (1 << access_type))) {
123
+ fi->type = ARMFault_Permission;
124
goto do_fault;
125
}
126
127
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
128
result->f.lg_page_size = ctz64(page_size);
129
return false;
130
131
-do_fault:
132
- fi->type = fault_type;
133
+ do_translation_fault:
134
+ fi->type = ARMFault_Translation;
135
+ do_fault:
136
fi->level = level;
137
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
138
fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
139
--
122
--
140
2.25.1
123
2.34.1
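As a rough illustration of the property's effect, here is a toy model (not the QEMU source; the bit positions are invented, and only the S1P/S2P/VMID16 field names follow the patch) of how the stage string picks what IDR0 advertises:

```c
#include <assert.h>
#include <string.h>

/* Illustrative model of the arm-smmuv3.stage handling in
 * smmuv3_init_regs(): only the S1P/S2P choice depends on the property;
 * VMID16 is newly advertised either way.  Bit values are made up. */
enum { IDR0_S1P = 1 << 0, IDR0_S2P = 1 << 1, IDR0_VMID16 = 1 << 2 };

static unsigned advertised_idr0(const char *stage)
{
    unsigned idr0 = IDR0_VMID16;

    /* NULL (property not set) or any unsupported value → stage 1. */
    if (stage && strcmp(stage, "2") == 0) {
        idr0 |= IDR0_S2P;
    } else {
        idr0 |= IDR0_S1P;
    }
    return idr0;
}
```

The point of the shape is that the two stages are mutually exclusive here: the model advertises one or the other, never both.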
141
142
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Tommy Wu <tommy.wu@sifive.com>
2
2
3
Reduce the amount of typing required for this check.
3
When we receive a packet from the xilinx_axienet and then try to push
it to memory (s2mem) through the xilinx_axidma, if the descriptor ring
buffer is full in the xilinx axidma driver, we'll assert DMASR.HALTED
in the function stream_process_s2mem and return 0. In the end, we'll
be stuck in an infinite loop in axienet_eth_rx_notify.
4
8
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
This patch checks the DMASR.HALTED state when we try to push data
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
from xilinx axi-enet to xilinx axi-dma. When DMASR.HALTED is asserted,
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
we will not keep pushing the data, which prevents the infinite loop.
8
Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org
12
13
Signed-off-by: Tommy Wu <tommy.wu@sifive.com>
14
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
15
Reviewed-by: Frank Chang <frank.chang@sifive.com>
16
Message-id: 20230519062137.1251741-1-tommy.wu@sifive.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
18
---
11
target/arm/internals.h | 5 +++++
19
hw/dma/xilinx_axidma.c | 11 ++++++++---
12
target/arm/helper.c | 14 +++++---------
20
1 file changed, 8 insertions(+), 3 deletions(-)
13
target/arm/ptw.c | 14 ++++++--------
14
3 files changed, 16 insertions(+), 17 deletions(-)
15
21
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
22
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
17
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
24
--- a/hw/dma/xilinx_axidma.c
19
+++ b/target/arm/internals.h
25
+++ b/hw/dma/xilinx_axidma.c
20
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
26
@@ -XXX,XX +XXX,XX @@ static inline int stream_idle(struct Stream *s)
21
}
27
return !!(s->regs[R_DMASR] & DMASR_IDLE);
22
}
28
}
23
29
24
+static inline bool regime_is_stage2(ARMMMUIdx mmu_idx)
30
+static inline int stream_halted(struct Stream *s)
25
+{
31
+{
26
+ return mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
32
+ return !!(s->regs[R_DMASR] & DMASR_HALTED);
27
+}
33
+}
28
+
34
+
29
/* Return the exception level which controls this address translation regime */
35
static void stream_reset(struct Stream *s)
30
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
31
{
36
{
32
diff --git a/target/arm/helper.c b/target/arm/helper.c
37
s->regs[R_DMASR] = DMASR_HALTED; /* starts up halted. */
33
index XXXXXXX..XXXXXXX 100644
38
@@ -XXX,XX +XXX,XX @@ static void stream_process_mem2s(struct Stream *s, StreamSink *tx_data_dev,
34
--- a/target/arm/helper.c
39
uint64_t addr;
35
+++ b/target/arm/helper.c
40
bool eop;
36
@@ -XXX,XX +XXX,XX @@ int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
41
37
{
42
- if (!stream_running(s) || stream_idle(s)) {
38
if (regime_has_2_ranges(mmu_idx)) {
43
+ if (!stream_running(s) || stream_idle(s) || stream_halted(s)) {
39
return extract64(tcr, 37, 2);
44
return;
40
- } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
41
+ } else if (regime_is_stage2(mmu_idx)) {
42
return 0; /* VTCR_EL2 */
43
} else {
44
/* Replicate the single TBI bit so we always have 2 bits. */
45
@@ -XXX,XX +XXX,XX @@ int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
46
{
47
if (regime_has_2_ranges(mmu_idx)) {
48
return extract64(tcr, 51, 2);
49
- } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
50
+ } else if (regime_is_stage2(mmu_idx)) {
51
return 0; /* VTCR_EL2 */
52
} else {
53
/* Replicate the single TBID bit so we always have 2 bits. */
54
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
55
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
56
ARMGranuleSize gran;
57
ARMCPU *cpu = env_archcpu(env);
58
- bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
59
+ bool stage2 = regime_is_stage2(mmu_idx);
60
61
if (!regime_has_2_ranges(mmu_idx)) {
62
select = 0;
63
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
64
}
65
ds = false;
66
} else if (ds) {
67
- switch (mmu_idx) {
68
- case ARMMMUIdx_Stage2:
69
- case ARMMMUIdx_Stage2_S:
70
+ if (regime_is_stage2(mmu_idx)) {
71
if (gran == Gran16K) {
72
ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
73
} else {
74
ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
75
}
76
- break;
77
- default:
78
+ } else {
79
if (gran == Gran16K) {
80
ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
81
} else {
82
ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
83
}
84
- break;
85
}
86
if (ds) {
87
min_tsz = 12;
88
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/ptw.c
91
+++ b/target/arm/ptw.c
92
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
93
bool have_wxn;
94
int wxn = 0;
95
96
- assert(mmu_idx != ARMMMUIdx_Stage2);
97
- assert(mmu_idx != ARMMMUIdx_Stage2_S);
98
+ assert(!regime_is_stage2(mmu_idx));
99
100
user_rw = simple_ap_to_rw_prot_is_user(ap, true);
101
if (is_user) {
102
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
103
goto do_fault;
104
}
45
}
105
46
106
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
47
@@ -XXX,XX +XXX,XX @@ static size_t stream_process_s2mem(struct Stream *s, unsigned char *buf,
107
+ if (!regime_is_stage2(mmu_idx)) {
48
unsigned int rxlen;
108
/*
49
size_t pos = 0;
109
* The starting level depends on the virtual address size (which can
50
110
* be up to 48 bits) and the translation granule size. It indicates
51
- if (!stream_running(s) || stream_idle(s)) {
111
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
52
+ if (!stream_running(s) || stream_idle(s) || stream_halted(s)) {
112
attrs = extract64(descriptor, 2, 10)
53
return 0;
113
| (extract64(descriptor, 52, 12) << 10);
114
115
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
116
+ if (regime_is_stage2(mmu_idx)) {
117
/* Stage 2 table descriptors do not include any attribute fields */
118
break;
119
}
120
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
121
122
ap = extract32(attrs, 4, 2);
123
124
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
125
+ if (regime_is_stage2(mmu_idx)) {
126
ns = mmu_idx == ARMMMUIdx_Stage2;
127
xn = extract32(attrs, 11, 2);
128
result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
129
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
130
result->f.guarded = guarded;
131
}
54
}
132
55
133
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
56
@@ -XXX,XX +XXX,XX @@ xilinx_axidma_data_stream_can_push(StreamSink *obj,
134
+ if (regime_is_stage2(mmu_idx)) {
57
XilinxAXIDMAStreamSink *ds = XILINX_AXI_DMA_DATA_STREAM(obj);
135
result->cacheattrs.is_s2_format = true;
58
struct Stream *s = &ds->dma->streams[1];
136
result->cacheattrs.attrs = extract32(attrs, 0, 4);
59
137
} else {
60
- if (!stream_running(s) || stream_idle(s)) {
138
@@ -XXX,XX +XXX,XX @@ do_fault:
61
+ if (!stream_running(s) || stream_idle(s) || stream_halted(s)) {
139
fi->type = fault_type;
62
ds->dma->notify = notify;
140
fi->level = level;
63
ds->dma->notify_opaque = notify_opaque;
141
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
64
return false;
142
- fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
143
- mmu_idx == ARMMMUIdx_Stage2_S);
144
+ fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
145
fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
146
return true;
147
}
148
--
65
--
149
2.25.1
66
2.34.1
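The stuck loop described above can be sketched as a standalone model (not the QEMU code; names loosely follow hw/dma/xilinx_axidma.c): once the ring fills and DMASR.HALTED is set, the can-push check must fail so the ethernet model stops re-notifying:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the fix: without the stream_halted() check, a halted
 * stream still reports "pushable" and the sender retries forever. */
enum { DMASR_HALTED = 1 << 0, DMASR_IDLE = 1 << 1 };

struct stream {
    unsigned dmasr;
    bool running;
};

static bool stream_halted(const struct stream *s)
{
    return s->dmasr & DMASR_HALTED;
}

/* Mirrors the patched condition: running, not idle, and not halted. */
static bool stream_can_push(const struct stream *s)
{
    return s->running && !(s->dmasr & DMASR_IDLE) && !stream_halted(s);
}

/* s2mem path: a full descriptor ring halts the stream, accepts 0 bytes. */
static int stream_s2mem(struct stream *s, bool ring_full)
{
    if (ring_full) {
        s->dmasr |= DMASR_HALTED;
        return 0;
    }
    return 1;
}
```

After a failed push, stream_can_push() now returns false, so the notify loop terminates instead of spinning.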
150
67
151
68
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Clément Chigot <chigot@adacore.com>
2
2
3
Replace some gotos with some nested if statements.
3
When passing --smp with a number lower than XLNX_ZYNQMP_NUM_APU_CPUS,
4
the expression (ms->smp.cpus - XLNX_ZYNQMP_NUM_APU_CPUS) will result
5
in a large positive number, as ms->smp.cpus is an unsigned int.
6
This will raise the following error afterwards, as QEMU will try to
7
instantiate some additional RPUs.
8
| $ qemu-system-aarch64 --smp 1 -M xlnx-zcu102
9
| **
10
| ERROR:../src/tcg/tcg.c:777:tcg_register_thread:
11
| assertion failed: (n < tcg_max_ctxs)
4
12
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Signed-off-by: Clément Chigot <chigot@adacore.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org
15
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
16
Message-id: 20230524143714.565792-1-chigot@adacore.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
18
---
10
target/arm/ptw.c | 34 ++++++++++++++++------------------
19
hw/arm/xlnx-zynqmp.c | 2 +-
11
1 file changed, 16 insertions(+), 18 deletions(-)
20
1 file changed, 1 insertion(+), 1 deletion(-)
12
21
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
22
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
14
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.c
24
--- a/hw/arm/xlnx-zynqmp.c
16
+++ b/target/arm/ptw.c
25
+++ b/hw/arm/xlnx-zynqmp.c
17
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
26
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_rpu(MachineState *ms, XlnxZynqMPState *s,
18
page_size = (1ULL << ((stride * (4 - level)) + 3));
27
const char *boot_cpu, Error **errp)
19
descaddr &= ~(hwaddr)(page_size - 1);
28
{
20
descaddr |= (address & (page_size - 1));
29
int i;
21
- /* Extract attributes from the descriptor */
30
- int num_rpus = MIN(ms->smp.cpus - XLNX_ZYNQMP_NUM_APU_CPUS,
22
- attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
31
+ int num_rpus = MIN((int)(ms->smp.cpus - XLNX_ZYNQMP_NUM_APU_CPUS),
23
32
XLNX_ZYNQMP_NUM_RPU_CPUS);
24
- if (regime_is_stage2(mmu_idx)) {
33
25
- /* Stage 2 table descriptors do not include any attribute fields */
34
if (num_rpus <= 0) {
26
- goto skip_attrs;
27
- }
28
- /* Merge in attributes from table descriptors */
29
- attrs |= nstable << 5; /* NS */
30
- if (param.hpd) {
31
- /* HPD disables all the table attributes except NSTable. */
32
- goto skip_attrs;
33
- }
34
- attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
35
/*
36
- * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
37
- * means "force PL1 access only", which means forcing AP[1] to 0.
38
+ * Extract attributes from the descriptor, and apply table descriptors.
39
+ * Stage 2 table descriptors do not include any attribute fields.
40
+ * HPD disables all the table attributes except NSTable.
41
*/
42
- attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
43
- attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
44
- skip_attrs:
45
+ attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
46
+ if (!regime_is_stage2(mmu_idx)) {
47
+ attrs |= nstable << 5; /* NS */
48
+ if (!param.hpd) {
49
+ attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
50
+ /*
51
+ * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
52
+ * means "force PL1 access only", which means forcing AP[1] to 0.
53
+ */
54
+ attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
55
+ attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
56
+ }
57
+ }
58
59
/*
60
* Here descaddr is the final physical address, and attributes
61
--
35
--
62
2.25.1
36
2.34.1
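The wraparound being fixed here is easy to demonstrate in isolation. This standalone sketch (not the QEMU source; the constants and helper names are made up to mirror the patch) shows how the unsigned subtraction defeats MIN(), and how casting to int first restores a negative value that the existing "num_rpus <= 0" check can reject:

```c
#include <assert.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define NUM_APU_CPUS 4
#define NUM_RPU_CPUS 2

/* Buggy version: for smp_cpus < NUM_APU_CPUS the subtraction wraps to
 * a huge unsigned value, so MIN() returns NUM_RPU_CPUS and extra RPUs
 * get instantiated. */
static int num_rpus_broken(unsigned int smp_cpus)
{
    return MIN(smp_cpus - NUM_APU_CPUS, NUM_RPU_CPUS);
}

/* Patched version: casting to int first yields e.g. -3 for --smp 1
 * (relying on the usual two's-complement conversion). */
static int num_rpus_fixed(unsigned int smp_cpus)
{
    return MIN((int)(smp_cpus - NUM_APU_CPUS), NUM_RPU_CPUS);
}
```

Note the subtlety: the comparison inside MIN() happens in unsigned arithmetic in the broken version, so the "smaller" operand is never the wrapped value.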
63
37
64
38
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
1
From: Thomas Huth <thuth@redhat.com>
2
2
3
When the system reboots, the rng-seed that the FDT has should be
3
pflash-cfi02-test.c always uses the "musicpal" machine for testing,
4
re-randomized, so that the new boot gets a new seed. Several
4
test-arm-mptimer.c always uses the "vexpress-a9" machine, and
5
architectures require this functionality, so export a function for
5
microbit-test.c requires the "microbit" machine, so we should only
6
injecting a new seed into the given FDT.
6
run these tests if the machines have been enabled in the configuration.
7
7
8
Cc: Alistair Francis <alistair.francis@wdc.com>
8
Signed-off-by: Thomas Huth <thuth@redhat.com>
9
Cc: David Gibson <david@gibson.dropbear.id.au>
9
Reviewed-by: Fabiano Rosas <farosas@suse.de>
10
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
10
Message-id: 20230524080600.1618137-1-thuth@redhat.com
11
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20221025004327.568476-3-Jason@zx2c4.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
12
---
15
include/sysemu/device_tree.h | 9 +++++++++
13
tests/qtest/meson.build | 7 ++++---
16
softmmu/device_tree.c | 21 +++++++++++++++++++++
14
1 file changed, 4 insertions(+), 3 deletions(-)
17
2 files changed, 30 insertions(+)
18
15
19
diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
16
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
20
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
21
--- a/include/sysemu/device_tree.h
18
--- a/tests/qtest/meson.build
22
+++ b/include/sysemu/device_tree.h
19
+++ b/tests/qtest/meson.build
23
@@ -XXX,XX +XXX,XX @@ int qemu_fdt_setprop_sized_cells_from_array(void *fdt,
20
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
24
qdt_tmp); \
21
(config_all_devices.has_key('CONFIG_CMSDK_APB_DUALTIMER') ? ['cmsdk-apb-dualtimer-test'] : []) + \
25
})
22
(config_all_devices.has_key('CONFIG_CMSDK_APB_TIMER') ? ['cmsdk-apb-timer-test'] : []) + \
26
23
(config_all_devices.has_key('CONFIG_CMSDK_APB_WATCHDOG') ? ['cmsdk-apb-watchdog-test'] : []) + \
27
+
24
- (config_all_devices.has_key('CONFIG_PFLASH_CFI02') ? ['pflash-cfi02-test'] : []) + \
28
+/**
25
+ (config_all_devices.has_key('CONFIG_PFLASH_CFI02') and
29
+ * qemu_fdt_randomize_seeds:
26
+ config_all_devices.has_key('CONFIG_MUSICPAL') ? ['pflash-cfi02-test'] : []) + \
30
+ * @fdt: device tree blob
27
(config_all_devices.has_key('CONFIG_ASPEED_SOC') ? qtests_aspeed : []) + \
31
+ *
28
(config_all_devices.has_key('CONFIG_NPCM7XX') ? qtests_npcm7xx : []) + \
32
+ * Re-randomize all "rng-seed" properties with new seeds.
29
(config_all_devices.has_key('CONFIG_GENERIC_LOADER') ? ['hexloader-test'] : []) + \
33
+ */
30
(config_all_devices.has_key('CONFIG_TPM_TIS_I2C') ? ['tpm-tis-i2c-test'] : []) + \
34
+void qemu_fdt_randomize_seeds(void *fdt);
31
+ (config_all_devices.has_key('CONFIG_VEXPRESS') ? ['test-arm-mptimer'] : []) + \
35
+
32
+ (config_all_devices.has_key('CONFIG_MICROBIT') ? ['microbit-test'] : []) + \
36
#define FDT_PCI_RANGE_RELOCATABLE 0x80000000
33
['arm-cpu-features',
37
#define FDT_PCI_RANGE_PREFETCHABLE 0x40000000
34
- 'microbit-test',
38
#define FDT_PCI_RANGE_ALIASED 0x20000000
35
- 'test-arm-mptimer',
39
diff --git a/softmmu/device_tree.c b/softmmu/device_tree.c
36
'boot-serial-test']
40
index XXXXXXX..XXXXXXX 100644
37
41
--- a/softmmu/device_tree.c
38
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
42
+++ b/softmmu/device_tree.c
43
@@ -XXX,XX +XXX,XX @@
44
#include "qemu/option.h"
45
#include "qemu/bswap.h"
46
#include "qemu/cutils.h"
47
+#include "qemu/guest-random.h"
48
#include "sysemu/device_tree.h"
49
#include "hw/loader.h"
50
#include "hw/boards.h"
51
@@ -XXX,XX +XXX,XX @@ void hmp_dumpdtb(Monitor *mon, const QDict *qdict)
52
53
info_report("dtb dumped to %s", filename);
54
}
55
+
56
+void qemu_fdt_randomize_seeds(void *fdt)
57
+{
58
+ int noffset, poffset, len;
59
+ const char *name;
60
+ uint8_t *data;
61
+
62
+ for (noffset = fdt_next_node(fdt, 0, NULL);
63
+ noffset >= 0;
64
+ noffset = fdt_next_node(fdt, noffset, NULL)) {
65
+ for (poffset = fdt_first_property_offset(fdt, noffset);
66
+ poffset >= 0;
67
+ poffset = fdt_next_property_offset(fdt, poffset)) {
68
+ data = (uint8_t *)fdt_getprop_by_offset(fdt, poffset, &name, &len);
69
+ if (!data || strcmp(name, "rng-seed"))
70
+ continue;
71
+ qemu_guest_getrandom_nofail(data, len);
72
+ }
73
+ }
74
+}
75
--
39
--
76
2.25.1
40
2.34.1
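The walk performed by qemu_fdt_randomize_seeds() can be sketched without libfdt. In this toy model (illustrative names; a flat property table stands in for the real fdt_next_node()/fdt_next_property_offset() iteration), every "rng-seed" property is refreshed in place and everything else is untouched:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for an FDT: a flat list of named properties. */
struct prop {
    const char *name;
    unsigned char data[32];
    int len;
};

/* Overwrite every property called "rng-seed" with fresh random bytes,
 * preserving its length; other properties are left alone.  (The real
 * code uses qemu_guest_getrandom_nofail(); rand() is just for the
 * sketch.) */
static void randomize_seeds(struct prop *props, int nprops)
{
    for (int i = 0; i < nprops; i++) {
        if (strcmp(props[i].name, "rng-seed") == 0) {
            for (int j = 0; j < props[i].len; j++) {
                props[i].data[j] = rand() & 0xff;
            }
        }
    }
}
```

Keeping the length fixed matters: the real code rewrites the seed bytes inside the existing FDT blob rather than resizing the property.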
1
From: Richard Henderson <richard.henderson@linaro.org>
1
For M-profile, there is no guest-facing A-profile format FSR, but we
2
still use the env->exception.fsr field to pass fault information from
3
the point where a fault is raised to the code in
4
arm_v7m_cpu_do_interrupt() which interprets it and sets the M-profile
5
specific fault status registers. So it doesn't matter whether we
6
fill in env->exception.fsr in the short format or the LPAE format, as
7
long as both sides agree. As it happens arm_v7m_cpu_do_interrupt()
8
assumes short-form.
2
9
3
Perform the atomic update for hardware management of the dirty bit.
10
In compute_fsr_fsc() we weren't explicitly choosing short-form for
11
M-profile, but instead relied on it falling out in the wash because
12
arm_s1_regime_using_lpae_format() would be false. This was broken in
13
commit 452c67a4 when we added v8R support, because we said "PMSAv8 is
14
always LPAE format" (as it is for v8R), forgetting that we were
15
implicitly using this code path on M-profile. At that point we would
16
hit a g_assert_not_reached():
17
ERROR:../../target/arm/internals.h:549:arm_fi_to_lfsc: code should not be reached
4
18
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
19
#7 0x0000555555e055f7 in arm_fi_to_lfsc (fi=0x7fffecff9a90) at ../../target/arm/internals.h:549
6
Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org
20
#8 0x0000555555e05a27 in compute_fsr_fsc (env=0x555557356670, fi=0x7fffecff9a90, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff9a1c)
21
at ../../target/arm/tlb_helper.c:95
22
#9 0x0000555555e05b62 in arm_deliver_fault (cpu=0x555557354800, addr=268961344, access_type=MMU_INST_FETCH, mmu_idx=1, fi=0x7fffecff9a90)
23
at ../../target/arm/tlb_helper.c:132
24
#10 0x0000555555e06095 in arm_cpu_tlb_fill (cs=0x555557354800, address=268961344, size=1, access_type=MMU_INST_FETCH, mmu_idx=1, probe=false, retaddr=0)
25
at ../../target/arm/tlb_helper.c:260
26
27
The specific assertion changed when commit fcc7404eff24b4c added
28
"assert not M-profile" to arm_is_secure_below_el3(), because the
29
conditions being checked in compute_fsr_fsc() include
30
arm_el_is_aa64(), which will end up calling arm_is_secure_below_el3()
31
and asserting before we try to call arm_fi_to_lfsc():
32
33
#7 0x0000555555efaf43 in arm_is_secure_below_el3 (env=0x5555574665a0) at ../../target/arm/cpu.h:2396
34
#8 0x0000555555efb103 in arm_is_el2_enabled (env=0x5555574665a0) at ../../target/arm/cpu.h:2448
35
#9 0x0000555555efb204 in arm_el_is_aa64 (env=0x5555574665a0, el=1) at ../../target/arm/cpu.h:2509
36
#10 0x0000555555efbdfd in compute_fsr_fsc (env=0x5555574665a0, fi=0x7fffecff99e0, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff996c)
37
38
Avoid the assertion and the incorrect FSR format selection by
39
explicitly making M-profile use the short-format in this function.
40
41
Fixes: 452c67a42704 ("target/arm: Enable TTBCR_EAE for ARMv8-R AArch32")a
42
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1658
43
Cc: qemu-stable@nongnu.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
44
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
46
Message-id: 20230523131726.866635-1-peter.maydell@linaro.org
8
---
47
---
9
target/arm/cpu64.c | 2 +-
48
target/arm/tcg/tlb_helper.c | 13 +++++++++++--
10
target/arm/ptw.c | 16 ++++++++++++++++
49
1 file changed, 11 insertions(+), 2 deletions(-)
11
2 files changed, 17 insertions(+), 1 deletion(-)
12
50
13
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
51
diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c
14
index XXXXXXX..XXXXXXX 100644
52
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu64.c
53
--- a/target/arm/tcg/tlb_helper.c
16
+++ b/target/arm/cpu64.c
54
+++ b/target/arm/tcg/tlb_helper.c
17
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
55
@@ -XXX,XX +XXX,XX @@ static uint32_t compute_fsr_fsc(CPUARMState *env, ARMMMUFaultInfo *fi,
18
cpu->isar.id_aa64mmfr0 = t;
56
ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
19
57
uint32_t fsr, fsc;
20
t = cpu->isar.id_aa64mmfr1;
58
21
- t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 1); /* FEAT_HAFDBS, AF only */
59
- if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
22
+ t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 2); /* FEAT_HAFDBS */
60
- arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
23
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
61
+ /*
24
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
62
+ * For M-profile there is no guest-facing FSR. We compute a
25
t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
63
+ * short-form value for env->exception.fsr which we will then
26
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
64
+ * examine in arm_v7m_cpu_do_interrupt(). In theory we could
27
index XXXXXXX..XXXXXXX 100644
65
+ * use the LPAE format instead as long as both bits of code agree
28
--- a/target/arm/ptw.c
66
+ * (and arm_fi_to_lfsc() handled the M-profile specific
29
+++ b/target/arm/ptw.c
67
+ * ARMFault_QEMU_NSCExec and ARMFault_QEMU_SFault cases).
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
68
+ */
31
goto do_fault;
69
+ if (!arm_feature(env, ARM_FEATURE_M) &&
32
}
70
+ (target_el == 2 || arm_el_is_aa64(env, target_el) ||
33
}
71
+ arm_s1_regime_using_lpae_format(env, arm_mmu_idx))) {
34
+
72
/*
35
+ /*
73
* LPAE format fault status register : bottom 6 bits are
36
+ * Dirty Bit.
74
* status code in the same form as needed for syndrome
37
+ * If HD is enabled, pre-emptively set/clear the appropriate AP/S2AP
38
+ * bit for writeback. The actual write protection test may still be
39
+ * overridden by tableattrs, to be merged below.
40
+ */
41
+ if (param.hd
42
+ && extract64(descriptor, 51, 1) /* DBM */
43
+ && access_type == MMU_DATA_STORE) {
44
+ if (regime_is_stage2(mmu_idx)) {
45
+ new_descriptor |= 1ull << 7; /* set S2AP[1] */
46
+ } else {
47
+ new_descriptor &= ~(1ull << 7); /* clear AP[2] */
48
+ }
49
+ }
50
}
51
52
/*
53
--
75
--
54
2.25.1
76
2.34.1
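The pre-emptive AP/S2AP update in the HAFDBS hunk above can be modelled on its own. This sketch (not the QEMU source) applies the same bit manipulation the patch adds: for a store to a page whose descriptor has DBM (bit 51) set, with HD enabled, the written-back descriptor is made writable up front, where bit 7 means S2AP[1] at stage 2 and AP[2] at stage 1:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DESC_DBM  (1ull << 51)
#define DESC_BIT7 (1ull << 7)  /* AP[2] (stage 1) / S2AP[1] (stage 2) */

/* Return the descriptor as it would be written back: a qualifying
 * store pre-sets the dirty state so the write doesn't fault. */
static uint64_t hw_dirty_update(uint64_t desc, bool hd, bool is_store,
                                bool is_stage2)
{
    if (hd && (desc & DESC_DBM) && is_store) {
        if (is_stage2) {
            desc |= DESC_BIT7;   /* set S2AP[1]: stage-2 writable */
        } else {
            desc &= ~DESC_BIT7;  /* clear AP[2]: drop write protection */
        }
    }
    return desc;
}
```

The asymmetry (set at stage 2, clear at stage 1) comes from the opposite polarity of the two fields: S2AP[1]=1 grants writes, while AP[2]=1 forbids them.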
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
When the system reboots, the rng-seed that the FDT has should be
3
We currently need to select ARM_V7M unconditionally when TCG is
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
4
present in the build because some translate.c helpers and the whole of
5
m_helpers.c are not yet under CONFIG_ARM_V7M.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230523180525.29994-2-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/Kconfig | 3 +++
1 file changed, 3 insertions(+)

diff --git a/target/arm/Kconfig b/target/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/Kconfig
+++ b/target/arm/Kconfig
@@ -XXX,XX +XXX,XX @@
config ARM
bool
select ARM_COMPATIBLE_SEMIHOSTING if TCG
+
+ # We need to select this until we move m_helper.c and the
+ # translate.c v7m helpers under ARM_V7M.
select ARM_V7M if TCG

config AARCH64
--
2.34.1
diff view generated by jsdifflib
From: Fabiano Rosas <farosas@suse.de>

When we moved the arm default CONFIGs into Kconfig and removed them
from default.mak, we made it harder to identify which CONFIGs are
selected by default in case users want to disable them.

Bring back the default entries into default.mak, but keep them
commented out. This way users can keep their workflows of editing
default.mak to remove build options without needing to search through
Kconfig.

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20230523180525.29994-3-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
configs/devices/aarch64-softmmu/default.mak | 6 ++++
configs/devices/arm-softmmu/default.mak | 40 +++++++++++++++++++++
2 files changed, 46 insertions(+)

diff --git a/configs/devices/aarch64-softmmu/default.mak b/configs/devices/aarch64-softmmu/default.mak
index XXXXXXX..XXXXXXX 100644
--- a/configs/devices/aarch64-softmmu/default.mak
+++ b/configs/devices/aarch64-softmmu/default.mak
@@ -XXX,XX +XXX,XX @@

# We support all the 32 bit boards so need all their config
include ../arm-softmmu/default.mak
+
+# These are selected by default when TCG is enabled, uncomment them to
+# keep out of the build.
+# CONFIG_XLNX_ZYNQMP_ARM=n
+# CONFIG_XLNX_VERSAL=n
+# CONFIG_SBSA_REF=n
diff --git a/configs/devices/arm-softmmu/default.mak b/configs/devices/arm-softmmu/default.mak
index XXXXXXX..XXXXXXX 100644
--- a/configs/devices/arm-softmmu/default.mak
+++ b/configs/devices/arm-softmmu/default.mak
@@ -XXX,XX +XXX,XX @@
# CONFIG_TEST_DEVICES=n

CONFIG_ARM_VIRT=y
+
+# These are selected by default when TCG is enabled, uncomment them to
+# keep out of the build.
+# CONFIG_CUBIEBOARD=n
+# CONFIG_EXYNOS4=n
+# CONFIG_HIGHBANK=n
+# CONFIG_INTEGRATOR=n
+# CONFIG_FSL_IMX31=n
+# CONFIG_MUSICPAL=n
+# CONFIG_MUSCA=n
+# CONFIG_CHEETAH=n
+# CONFIG_SX1=n
+# CONFIG_NSERIES=n
+# CONFIG_STELLARIS=n
+# CONFIG_STM32VLDISCOVERY=n
+# CONFIG_REALVIEW=n
+# CONFIG_VERSATILE=n
+# CONFIG_VEXPRESS=n
+# CONFIG_ZYNQ=n
+# CONFIG_MAINSTONE=n
+# CONFIG_GUMSTIX=n
+# CONFIG_SPITZ=n
+# CONFIG_TOSA=n
+# CONFIG_Z2=n
+# CONFIG_NPCM7XX=n
+# CONFIG_COLLIE=n
+# CONFIG_ASPEED_SOC=n
+# CONFIG_NETDUINO2=n
+# CONFIG_NETDUINOPLUS2=n
+# CONFIG_OLIMEX_STM32_H405=n
+# CONFIG_MPS2=n
+# CONFIG_RASPI=n
+# CONFIG_DIGIC=n
+# CONFIG_SABRELITE=n
+# CONFIG_EMCRAFT_SF2=n
+# CONFIG_MICROBIT=n
+# CONFIG_FSL_IMX25=n
+# CONFIG_FSL_IMX7=n
+# CONFIG_FSL_IMX6UL=n
+# CONFIG_ALLWINNER_H3=n
--
2.34.1
diff view generated by jsdifflib
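Concretely, the workflow this patch preserves is editing default.mak by hand; a hypothetical before/after for one of the entries above (CONFIG_MUSICPAL chosen arbitrarily) would look like this:

```
# Before: the entry ships commented out, so the board is still
# built by default via Kconfig.
# CONFIG_MUSICPAL=n

# After: the user uncomments it to keep the board out of the build.
CONFIG_MUSICPAL=n
```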
From: Fabiano Rosas <farosas@suse.de>

Replace the 'default y if TCG' pattern with 'default y; depends on
TCG'.

That makes explicit that there is a dependence on TCG and enabling
these CONFIGs via .mak files without TCG present will fail earlier.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230523180525.29994-4-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/Kconfig | 123 ++++++++++++++++++++++++++++++++-----------------
1 file changed, 82 insertions(+), 41 deletions(-)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config ARM_VIRT

config CHEETAH
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select OMAP
select TSC210X

config CUBIEBOARD
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ALLWINNER_A10

config DIGIC
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PTIMER
select PFLASH_CFI02

config EXYNOS4
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select A9MPCORE
select I2C
@@ -XXX,XX +XXX,XX @@ config EXYNOS4

config HIGHBANK
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select A9MPCORE
select A15MPCORE
select AHCI
@@ -XXX,XX +XXX,XX @@ config HIGHBANK

config INTEGRATOR
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ARM_TIMER
select INTEGRATOR_DEBUG
select PL011 # UART
@@ -XXX,XX +XXX,XX @@ config INTEGRATOR

config MAINSTONE
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PXA2XX
select PFLASH_CFI01
select SMC91C111

config MUSCA
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ARMSSE
select PL011
select PL031
@@ -XXX,XX +XXX,XX @@ config MARVELL_88W8618

config MUSICPAL
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select OR_IRQ
select BITBANG_I2C
select MARVELL_88W8618
@@ -XXX,XX +XXX,XX @@ config MUSICPAL

config NETDUINO2
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select STM32F205_SOC

config NETDUINOPLUS2
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select STM32F405_SOC

config OLIMEX_STM32_H405
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select STM32F405_SOC

config NSERIES
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select OMAP
select TMP105 # temperature sensor
select BLIZZARD # LCD/TV controller
@@ -XXX,XX +XXX,XX @@ config PXA2XX

config GUMSTIX
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PFLASH_CFI01
select SMC91C111
select PXA2XX

config TOSA
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ZAURUS # scoop
select MICRODRIVE
select PXA2XX
@@ -XXX,XX +XXX,XX @@ config TOSA

config SPITZ
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ADS7846 # touch-screen controller
select MAX111X # A/D converter
select WM8750 # audio codec
@@ -XXX,XX +XXX,XX @@ config SPITZ

config Z2
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PFLASH_CFI01
select WM8750
select PL011 # UART
@@ -XXX,XX +XXX,XX @@ config Z2

config REALVIEW
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply PCI_DEVICES
imply PCI_TESTDEV
imply I2C_DEVICES
@@ -XXX,XX +XXX,XX @@ config REALVIEW

config SBSA_REF
bool
- default y if TCG && AARCH64
+ default y
+ depends on TCG && AARCH64
imply PCI_DEVICES
select AHCI
select ARM_SMMUV3
@@ -XXX,XX +XXX,XX @@ config SBSA_REF

config SABRELITE
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select FSL_IMX6
select SSI_M25P80

config STELLARIS
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select ARM_V7M
select CMSDK_APB_WATCHDOG
@@ -XXX,XX +XXX,XX @@ config STELLARIS

config STM32VLDISCOVERY
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select STM32F100_SOC

config STRONGARM
@@ -XXX,XX +XXX,XX @@ config STRONGARM

config COLLIE
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PFLASH_CFI01
select ZAURUS # scoop
select STRONGARM

config SX1
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select OMAP

config VERSATILE
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ARM_TIMER # sp804
select PFLASH_CFI01
select LSI_SCSI_PCI
@@ -XXX,XX +XXX,XX @@ config VERSATILE

config VEXPRESS
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select A9MPCORE
select A15MPCORE
select ARM_MPTIMER
@@ -XXX,XX +XXX,XX @@ config VEXPRESS

config ZYNQ
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select A9MPCORE
select CADENCE # UART
select PFLASH_CFI02
@@ -XXX,XX +XXX,XX @@ config ZYNQ
config ARM_V7M
bool
# currently v7M must be included in a TCG build due to translate.c
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select PTIMER

config ALLWINNER_A10
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10

config ALLWINNER_H3
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select ALLWINNER_A10_PIT
select ALLWINNER_SUN8I_EMAC
select ALLWINNER_I2C
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_H3

config RASPI
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select FRAMEBUFFER
select PL011 # UART
select SDHCI
@@ -XXX,XX +XXX,XX @@ config STM32F405_SOC

config XLNX_ZYNQMP_ARM
bool
- default y if TCG && AARCH64
+ default y
+ depends on TCG && AARCH64
select AHCI
select ARM_GIC
select CADENCE
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM

config XLNX_VERSAL
bool
- default y if TCG && AARCH64
+ default y
+ depends on TCG && AARCH64
select ARM_GIC
select PL011
select CADENCE
@@ -XXX,XX +XXX,XX @@ config XLNX_VERSAL

config NPCM7XX
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select A9MPCORE
select ADM1272
select ARM_GIC
@@ -XXX,XX +XXX,XX @@ config NPCM7XX

config FSL_IMX25
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select IMX
select IMX_FEC
@@ -XXX,XX +XXX,XX @@ config FSL_IMX25

config FSL_IMX31
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select SERIAL
select IMX
@@ -XXX,XX +XXX,XX @@ config FSL_IMX6

config ASPEED_SOC
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select DS1338
select FTGMAC100
select I2C
@@ -XXX,XX +XXX,XX @@ config ASPEED_SOC

config MPS2
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select ARMSSE
select LAN9118
@@ -XXX,XX +XXX,XX @@ config MPS2

config FSL_IMX7
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply PCI_DEVICES
imply TEST_DEVICES
imply I2C_DEVICES
@@ -XXX,XX +XXX,XX @@ config ARM_SMMUV3

config FSL_IMX6UL
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
imply I2C_DEVICES
select A15MPCORE
select IMX
@@ -XXX,XX +XXX,XX @@ config FSL_IMX6UL

config MICROBIT
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select NRF51_SOC

config NRF51_SOC
@@ -XXX,XX +XXX,XX @@ config NRF51_SOC

config EMCRAFT_SF2
bool
- default y if TCG && ARM
+ default y
+ depends on TCG && ARM
select MSF2
select SSI_M25P80

--
2.34.1
diff view generated by jsdifflib
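For readers less familiar with Kconfig semantics, the difference the commit message describes is that `default y if TCG` silently leaves the symbol unset when TCG is absent, while `default y` plus `depends on TCG` makes any attempt to enable the symbol without TCG a configuration error. A minimal sketch of the resulting pattern, using a hypothetical symbol name for illustration:

```
config EXAMPLE_BOARD      # hypothetical symbol, not part of the patch
    bool
    default y
    depends on TCG && ARM
    select PL011
```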
From: Enze Li <lienze@kylinos.cn>

I noticed that in the latest version, the copyright string is still
2022, even though 2023 is halfway through. This patch fixes that and
fixes the documentation along with it.

Signed-off-by: Enze Li <lienze@kylinos.cn>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230525064345.1152801-1-lienze@kylinos.cn
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/conf.py | 2 +-
include/qemu/help-texts.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/conf.py b/docs/conf.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -XXX,XX +XXX,XX @@

# General information about the project.
project = u'QEMU'
-copyright = u'2022, The QEMU Project Developers'
+copyright = u'2023, The QEMU Project Developers'
author = u'The QEMU Project Developers'

# The version info for the project you're documenting, acts as replacement for
diff --git a/include/qemu/help-texts.h b/include/qemu/help-texts.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/help-texts.h
+++ b/include/qemu/help-texts.h
@@ -XXX,XX +XXX,XX @@
#define QEMU_HELP_TEXTS_H

/* Copyright string for -version arguments, About dialogs, etc */
-#define QEMU_COPYRIGHT "Copyright (c) 2003-2022 " \
+#define QEMU_COPYRIGHT "Copyright (c) 2003-2023 " \
"Fabrice Bellard and the QEMU Project developers"

/* Bug reporting information for --help arguments, About dialogs, etc */
--
2.34.1
diff view generated by jsdifflib
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 2 ++
target/arm/helper.c | 8 +++++++-
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
bool hpd : 1;
bool tsz_oob : 1; /* tsz has been clamped to legal range */
bool ds : 1;
+ bool ha : 1;
+ bool hd : 1;
ARMGranuleSize gran : 2;
} ARMVAParameters;

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx);
- bool epd, hpd, tsz_oob, ds;
+ bool epd, hpd, tsz_oob, ds, ha, hd;
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
ARMGranuleSize gran;
ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
epd = false;
sh = extract32(tcr, 12, 2);
ps = extract32(tcr, 16, 3);
+ ha = extract32(tcr, 21, 1) && cpu_isar_feature(aa64_hafs, cpu);
+ hd = extract32(tcr, 22, 1) && cpu_isar_feature(aa64_hdbs, cpu);
ds = extract64(tcr, 32, 1);
} else {
bool e0pd;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
e0pd = extract64(tcr, 56, 1);
}
ps = extract64(tcr, 32, 3);
+ ha = extract64(tcr, 39, 1) && cpu_isar_feature(aa64_hafs, cpu);
+ hd = extract64(tcr, 40, 1) && cpu_isar_feature(aa64_hdbs, cpu);
ds = extract64(tcr, 59, 1);

if (e0pd && cpu_isar_feature(aa64_e0pd, cpu) &&
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
.hpd = hpd,
.tsz_oob = tsz_oob,
.ds = ds,
+ .ha = ha,
+ .hd = ha && hd,
.gran = gran,
};
}
--
2.25.1
diff view generated by jsdifflib
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This fault type is to be used with FEAT_HAFDBS when
the guest enables hw updates, but places the tables
in memory where atomic updates are unsupported.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMFaultType {
ARMFault_AsyncExternal,
ARMFault_Debug,
ARMFault_TLBConflict,
+ ARMFault_UnsuppAtomicUpdate,
ARMFault_Lockdown,
ARMFault_Exclusive,
ARMFault_ICacheMaint,
@@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_fi_to_lfsc(ARMMMUFaultInfo *fi)
case ARMFault_TLBConflict:
fsc = 0x30;
break;
+ case ARMFault_UnsuppAtomicUpdate:
+ fsc = 0x31;
+ break;
case ARMFault_Lockdown:
fsc = 0x34;
break;
--
2.25.1
diff view generated by jsdifflib
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

The unconditional loop was used both to iterate over levels
and to control parsing of attributes. Use an explicit goto
in both cases.

While this appears less clean for iterating over levels, we
will need to jump back into the middle of this loop for
atomic updates, which is even uglier.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 192 +++++++++++++++++++++++------------------------
1 file changed, 96 insertions(+), 96 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
uint64_t descaddrmask;
bool aarch64 = arm_el_is_aa64(env, el);
bool guarded = false;
+ uint64_t descriptor;
+ bool nstable;

/* TODO: This code does not support shareability levels. */
if (aarch64) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
* bits at each step.
*/
tableattrs = is_secure ? 0 : (1 << 4);
- for (;;) {
- uint64_t descriptor;
- bool nstable;
-
- descaddr |= (address >> (stride * (4 - level))) & indexmask;
- descaddr &= ~7ULL;
- nstable = extract32(tableattrs, 4, 1);
- if (!nstable) {
- /*
- * Stage2_S -> Stage2 or Phys_S -> Phys_NS
- * Assert that the non-secure idx are even, and relative order.
- */
- QEMU_BUILD_BUG_ON((ARMMMUIdx_Phys_NS & 1) != 0);
- QEMU_BUILD_BUG_ON((ARMMMUIdx_Stage2 & 1) != 0);
- QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_NS + 1 != ARMMMUIdx_Phys_S);
- QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2 + 1 != ARMMMUIdx_Stage2_S);
- ptw->in_ptw_idx &= ~1;
- ptw->in_secure = false;
- }
- if (!S1_ptw_translate(env, ptw, descaddr, fi)) {
- goto do_fault;
- }
- descriptor = arm_ldq_ptw(env, ptw, fi);
- if (fi->type != ARMFault_None) {
- goto do_fault;
- }
-
- if (!(descriptor & 1) ||
- (!(descriptor & 2) && (level == 3))) {
- /* Invalid, or the Reserved level 3 encoding */
- goto do_fault;
- }
-
- descaddr = descriptor & descaddrmask;

+ next_level:
+ descaddr |= (address >> (stride * (4 - level))) & indexmask;
+ descaddr &= ~7ULL;
+ nstable = extract32(tableattrs, 4, 1);
+ if (!nstable) {
/*
- * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
- * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
- * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
- * raise AddressSizeFault.
+ * Stage2_S -> Stage2 or Phys_S -> Phys_NS
+ * Assert that the non-secure idx are even, and relative order.
*/
- if (outputsize > 48) {
- if (param.ds) {
- descaddr |= extract64(descriptor, 8, 2) << 50;
- } else {
- descaddr |= extract64(descriptor, 12, 4) << 48;
- }
- } else if (descaddr >> outputsize) {
- fault_type = ARMFault_AddressSize;
- goto do_fault;
- }
-
- if ((descriptor & 2) && (level < 3)) {
- /*
- * Table entry. The top five bits are attributes which may
- * propagate down through lower levels of the table (and
- * which are all arranged so that 0 means "no effect", so
- * we can gather them up by ORing in the bits at each level).
- */
- tableattrs |= extract64(descriptor, 59, 5);
- level++;
- indexmask = indexmask_grainsize;
- continue;
- }
- /*
- * Block entry at level 1 or 2, or page entry at level 3.
- * These are basically the same thing, although the number
- * of bits we pull in from the vaddr varies. Note that although
- * descaddrmask masks enough of the low bits of the descriptor
- * to give a correct page or table address, the address field
- * in a block descriptor is smaller; so we need to explicitly
- * clear the lower bits here before ORing in the low vaddr bits.
- */
- page_size = (1ULL << ((stride * (4 - level)) + 3));
- descaddr &= ~(hwaddr)(page_size - 1);
- descaddr |= (address & (page_size - 1));
- /* Extract attributes from the descriptor */
- attrs = extract64(descriptor, 2, 10)
- | (extract64(descriptor, 52, 12) << 10);
-
- if (regime_is_stage2(mmu_idx)) {
- /* Stage 2 table descriptors do not include any attribute fields */
- break;
- }
- /* Merge in attributes from table descriptors */
- attrs |= nstable << 3; /* NS */
- guarded = extract64(descriptor, 50, 1); /* GP */
- if (param.hpd) {
- /* HPD disables all the table attributes except NSTable. */
- break;
- }
- attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
- /*
- * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
- * means "force PL1 access only", which means forcing AP[1] to 0.
- */
- attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
- attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
- break;
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Phys_NS & 1) != 0);
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Stage2 & 1) != 0);
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_NS + 1 != ARMMMUIdx_Phys_S);
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2 + 1 != ARMMMUIdx_Stage2_S);
+ ptw->in_ptw_idx &= ~1;
+ ptw->in_secure = false;
}
+ if (!S1_ptw_translate(env, ptw, descaddr, fi)) {
+ goto do_fault;
+ }
+ descriptor = arm_ldq_ptw(env, ptw, fi);
+ if (fi->type != ARMFault_None) {
+ goto do_fault;
+ }
+
+ if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
+ /* Invalid, or the Reserved level 3 encoding */
+ goto do_fault;
+ }
+
+ descaddr = descriptor & descaddrmask;
+
+ /*
+ * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
+ * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
+ * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
+ * raise AddressSizeFault.
+ */
+ if (outputsize > 48) {
+ if (param.ds) {
+ descaddr |= extract64(descriptor, 8, 2) << 50;
+ } else {
+ descaddr |= extract64(descriptor, 12, 4) << 48;
+ }
+ } else if (descaddr >> outputsize) {
+ fault_type = ARMFault_AddressSize;
+ goto do_fault;
+ }
+
+ if ((descriptor & 2) && (level < 3)) {
+ /*
+ * Table entry. The top five bits are attributes which may
+ * propagate down through lower levels of the table (and
+ * which are all arranged so that 0 means "no effect", so
+ * we can gather them up by ORing in the bits at each level).
+ */
+ tableattrs |= extract64(descriptor, 59, 5);
+ level++;
+ indexmask = indexmask_grainsize;
+ goto next_level;
+ }
+
+ /*
+ * Block entry at level 1 or 2, or page entry at level 3.
+ * These are basically the same thing, although the number
+ * of bits we pull in from the vaddr varies. Note that although
+ * descaddrmask masks enough of the low bits of the descriptor
+ * to give a correct page or table address, the address field
+ * in a block descriptor is smaller; so we need to explicitly
* clear the lower bits here before ORing in the low vaddr bits.
202
+ */
203
+ page_size = (1ULL << ((stride * (4 - level)) + 3));
204
+ descaddr &= ~(hwaddr)(page_size - 1);
205
+ descaddr |= (address & (page_size - 1));
206
+ /* Extract attributes from the descriptor */
207
+ attrs = extract64(descriptor, 2, 10)
208
+ | (extract64(descriptor, 52, 12) << 10);
209
+
210
+ if (regime_is_stage2(mmu_idx)) {
211
+ /* Stage 2 table descriptors do not include any attribute fields */
212
+ goto skip_attrs;
213
+ }
214
+ /* Merge in attributes from table descriptors */
215
+ attrs |= nstable << 3; /* NS */
216
+ guarded = extract64(descriptor, 50, 1); /* GP */
217
+ if (param.hpd) {
218
+ /* HPD disables all the table attributes except NSTable. */
219
+ goto skip_attrs;
220
+ }
221
+ attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
222
+ /*
223
+ * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
224
+ * means "force PL1 access only", which means forcing AP[1] to 0.
225
+ */
226
+ attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
227
+ attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
228
+ skip_attrs:
229
+
230
/*
231
* Here descaddr is the final physical address, and attributes
232
* are all in attrs.
233
--
234
2.25.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Leave the upper and lower attributes in the place they originate
from in the descriptor. Shifting them around is confusing, since
one cannot read the bit numbers out of the manual. Also, new
attributes have been added which would alter the shifts.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
 hwaddr descaddr, indexmask, indexmask_grainsize;
 uint32_t tableattrs;
 target_ulong page_size;
- uint32_t attrs;
+ uint64_t attrs;
 int32_t stride;
 int addrsize, inputsize, outputsize;
 uint64_t tcr = regime_tcr(env, mmu_idx);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
 descaddr &= ~(hwaddr)(page_size - 1);
 descaddr |= (address & (page_size - 1));
 /* Extract attributes from the descriptor */
- attrs = extract64(descriptor, 2, 10)
- | (extract64(descriptor, 52, 12) << 10);
+ attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(52, 12));

 if (regime_is_stage2(mmu_idx)) {
 /* Stage 2 table descriptors do not include any attribute fields */
 goto skip_attrs;
 }
 /* Merge in attributes from table descriptors */
- attrs |= nstable << 3; /* NS */
+ attrs |= nstable << 5; /* NS */
 guarded = extract64(descriptor, 50, 1); /* GP */
 if (param.hpd) {
 /* HPD disables all the table attributes except NSTable. */
 goto skip_attrs;
 }
- attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
+ attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
 /*
 * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
 * means "force PL1 access only", which means forcing AP[1] to 0.
 */
- attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
- attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
+ attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
+ attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
 skip_attrs:

 /*
 * Here descaddr is the final physical address, and attributes
 * are all in attrs.
 */
- if ((attrs & (1 << 8)) == 0) {
+ if ((attrs & (1 << 10)) == 0) {
 /* Access flag */
 fi->type = ARMFault_AccessFlag;
 goto do_fault;
 }

- ap = extract32(attrs, 4, 2);
+ ap = extract32(attrs, 6, 2);

 if (regime_is_stage2(mmu_idx)) {
 ns = mmu_idx == ARMMMUIdx_Stage2;
- xn = extract32(attrs, 11, 2);
+ xn = extract64(attrs, 53, 2);
 result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
 } else {
- ns = extract32(attrs, 3, 1);
- xn = extract32(attrs, 12, 1);
- pxn = extract32(attrs, 11, 1);
+ ns = extract32(attrs, 5, 1);
+ xn = extract64(attrs, 54, 1);
+ pxn = extract64(attrs, 53, 1);
 result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
 }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,

 if (regime_is_stage2(mmu_idx)) {
 result->cacheattrs.is_s2_format = true;
- result->cacheattrs.attrs = extract32(attrs, 0, 4);
+ result->cacheattrs.attrs = extract32(attrs, 2, 4);
 } else {
 /* Index into MAIR registers for cache attributes */
- uint8_t attrindx = extract32(attrs, 0, 3);
+ uint8_t attrindx = extract32(attrs, 2, 3);
 uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
 assert(attrindx <= 7);
 result->cacheattrs.is_s2_format = false;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
 if (param.ds) {
 result->cacheattrs.shareability = param.sh;
 } else {
- result->cacheattrs.shareability = extract32(attrs, 6, 2);
+ result->cacheattrs.shareability = extract32(attrs, 8, 2);
 }

 result->f.phys_addr = descaddr;
--
2.25.1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>

Snapshot loading is supposed to be deterministic, so we shouldn't
re-randomize the various seeds used.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-id: 20221025004327.568476-7-Jason@zx2c4.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/m68k/virt.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/hw/m68k/virt.c b/hw/m68k/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/virt.c
+++ b/hw/m68k/virt.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
 M68kCPU *cpu;
 hwaddr initial_pc;
 hwaddr initial_stack;
- struct bi_record *rng_seed;
 } ResetInfo;

 static void main_cpu_reset(void *opaque)
@@ -XXX,XX +XXX,XX @@ static void main_cpu_reset(void *opaque)
 M68kCPU *cpu = reset_info->cpu;
 CPUState *cs = CPU(cpu);

- if (reset_info->rng_seed) {
- qemu_guest_getrandom_nofail((void *)reset_info->rng_seed->data + 2,
- be16_to_cpu(*(uint16_t *)reset_info->rng_seed->data));
- }
-
 cpu_reset(cs);
 cpu->env.aregs[7] = reset_info->initial_stack;
 cpu->env.pc = reset_info->initial_pc;
 }

+static void rerandomize_rng_seed(void *opaque)
+{
+ struct bi_record *rng_seed = opaque;
+ qemu_guest_getrandom_nofail((void *)rng_seed->data + 2,
+ be16_to_cpu(*(uint16_t *)rng_seed->data));
+}
+
 static void virt_init(MachineState *machine)
 {
 M68kCPU *cpu = NULL;
@@ -XXX,XX +XXX,XX @@ static void virt_init(MachineState *machine)
 BOOTINFO0(param_ptr, BI_LAST);
 rom_add_blob_fixed_as("bootinfo", param_blob, param_ptr - param_blob,
 parameters_base, cs->as);
- reset_info->rng_seed = rom_ptr_for_as(cs->as, parameters_base,
- param_ptr - param_blob) +
- (param_rng_seed - param_blob);
+ qemu_register_reset_nosnapshotload(rerandomize_rng_seed,
+ rom_ptr_for_as(cs->as, parameters_base,
+ param_ptr - param_blob) +
+ (param_rng_seed - param_blob));
 g_free(param_blob);
 }
--
2.25.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Add GIC information to the DeviceTree as part of SBSA-REF versioning.

Trusted Firmware will read it and provide it to the next firmware level.

Bump the platform version to 0.1 so we can check whether the node is present.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/sbsa-ref.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/hwaddr.h"
 #include "kvm_arm.h"
 #include "hw/arm/boot.h"
+#include "hw/arm/fdt.h"
 #include "hw/arm/smmuv3.h"
 #include "hw/block/flash.h"
 #include "hw/boards.h"
@@ -XXX,XX +XXX,XX @@ static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
 return arm_cpu_mp_affinity(idx, clustersz);
 }

+static void sbsa_fdt_add_gic_node(SBSAMachineState *sms)
+{
+ char *nodename;
+
+ nodename = g_strdup_printf("/intc");
+ qemu_fdt_add_subnode(sms->fdt, nodename);
+ qemu_fdt_setprop_sized_cells(sms->fdt, nodename, "reg",
+ 2, sbsa_ref_memmap[SBSA_GIC_DIST].base,
+ 2, sbsa_ref_memmap[SBSA_GIC_DIST].size,
+ 2, sbsa_ref_memmap[SBSA_GIC_REDIST].base,
+ 2, sbsa_ref_memmap[SBSA_GIC_REDIST].size);
+
+ g_free(nodename);
+}
 /*
 * Firmware on this machine only uses ACPI table to load OS, these limited
 * device tree nodes are just to let firmware know the info which varies from
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)
 * fw compatibility.
 */
 qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
- qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
+ qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 1);

 if (ms->numa_state->have_numa_distance) {
 int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)

 g_free(nodename);
 }
+
+ sbsa_fdt_add_gic_node(sms);
 }

 #define SBSA_FLASH_SECTOR_SIZE (256 * KiB)
--
2.34.1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>

Snapshot loading is supposed to be deterministic, so we shouldn't
re-randomize the various seeds used.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-id: 20221025004327.568476-4-Jason@zx2c4.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/i386/x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -XXX,XX +XXX,XX @@ void x86_load_linux(X86MachineState *x86ms,
 setup_data->type = cpu_to_le32(SETUP_RNG_SEED);
 setup_data->len = cpu_to_le32(RNG_SEED_LENGTH);
 qemu_guest_getrandom_nofail(setup_data->data, RNG_SEED_LENGTH);
- qemu_register_reset(reset_rng_seed, setup_data);
+ qemu_register_reset_nosnapshotload(reset_rng_seed, setup_data);
 fw_cfg_add_bytes_callback(fw_cfg, FW_CFG_KERNEL_DATA, reset_rng_seed, NULL,
 setup_data, kernel, kernel_size, true);
 } else {
--
2.25.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

We moved from VGA to Bochs to have a PCIe card.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/sbsa.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/system/arm/sbsa.rst b/docs/system/arm/sbsa.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/sbsa.rst
+++ b/docs/system/arm/sbsa.rst
@@ -XXX,XX +XXX,XX @@ The sbsa-ref board supports:
 - System bus EHCI controller
 - CDROM and hard disc on AHCI bus
 - E1000E ethernet card on PCIe bus
- - VGA display adaptor on PCIe bus
+ - Bochs display adapter on PCIe bus
 - A generic SBSA watchdog device
--
2.34.1