Hi; this is the latest target-arm queue. Most of the patches
here are RTH's FEAT_HAFDBS finally landing. I've also included
the RNG-seed randomization patches from Jason, as well as a few
more minor things. The patches include a couple of regression
fixes:
 * the resettable patch fixes a SCSI reset regression
 * the 'do not re-randomize on snapshot load' patches fix
   record-and-replay regressions

thanks
-- PMM

The following changes since commit e750a7ace492f0b450653d4ad368a77d6f660fb8:

  Merge tag 'pull-9p-20221024' of https://github.com/cschoenebeck/qemu into staging (2022-10-24 14:27:12 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221025

for you to fetch changes up to e2114f701c78f76246e4b1872639dad94a6bdd21:

  rx: re-randomize rng-seed on reboot (2022-10-25 17:32:24 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_E0PD
 * Implement FEAT_HAFDBS
 * honor HCR_E2H and HCR_TGE in arm_excp_unmasked()
 * hw/arm/virt: Fix devicetree warnings about the virtio-iommu node
 * hw/core/resettable: fix reset level counting
 * hw/hyperv/hyperv.c: Use device_cold_reset() instead of device_legacy_reset()
 * imx: reload cmp timer outside of the reload ptimer transaction
 * x86: do not re-randomize RNG seed on snapshot load
 * m68k/virt: do not re-randomize RNG seed on snapshot load
 * m68k/q800: do not re-randomize RNG seed on snapshot load
 * arm: re-randomize rng-seed on reboot
 * riscv: re-randomize rng-seed on reboot
 * mips/boston: re-randomize rng-seed on reboot
 * openrisc: re-randomize rng-seed on reboot
 * rx: re-randomize rng-seed on reboot

----------------------------------------------------------------
Ake Koomsin (1):
      target/arm: honor HCR_E2H and HCR_TGE in arm_excp_unmasked()

Axel Heider (1):
      target/imx: reload cmp timer outside of the reload ptimer transaction

Damien Hedde (1):
      hw/core/resettable: fix reset level counting

Jason A. Donenfeld (10):
      reset: allow registering handlers that aren't called by snapshot loading
      device-tree: add re-randomization helper function
      x86: do not re-randomize RNG seed on snapshot load
      arm: re-randomize rng-seed on reboot
      riscv: re-randomize rng-seed on reboot
      m68k/virt: do not re-randomize RNG seed on snapshot load
      m68k/q800: do not re-randomize RNG seed on snapshot load
      mips/boston: re-randomize rng-seed on reboot
      openrisc: re-randomize rng-seed on reboot
      rx: re-randomize rng-seed on reboot

Jean-Philippe Brucker (1):
      hw/arm/virt: Fix devicetree warnings about the virtio-iommu node

Peter Maydell (2):
      target/arm: Implement FEAT_E0PD
      hw/hyperv/hyperv.c: Use device_cold_reset() instead of device_legacy_reset()

Richard Henderson (14):
      target/arm: Introduce regime_is_stage2
      target/arm: Add ptw_idx to S1Translate
      target/arm: Add isar predicates for FEAT_HAFDBS
      target/arm: Extract HA and HD in aa64_va_parameters
      target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw
      target/arm: Add ARMFault_UnsuppAtomicUpdate
      target/arm: Remove loop from get_phys_addr_lpae
      target/arm: Fix fault reporting in get_phys_addr_lpae
      target/arm: Don't shift attrs in get_phys_addr_lpae
      target/arm: Consider GP an attribute in get_phys_addr_lpae
      target/arm: Tidy merging of attributes from descriptor and table
      target/arm: Implement FEAT_HAFDBS, access flag portion
      target/arm: Implement FEAT_HAFDBS, dirty bit portion
      target/arm: Use the max page size in a 2-stage ptw

 docs/devel/reset.rst          |   8 +-
 docs/system/arm/emulation.rst |   2 +
 qapi/run-state.json           |   6 +-
 include/hw/boards.h           |   2 +-
 include/sysemu/device_tree.h  |   9 +
 include/sysemu/reset.h        |   5 +-
 target/arm/cpu.h              |  15 ++
 target/arm/internals.h        |  30 +++
 hw/arm/aspeed.c               |   4 +-
 hw/arm/boot.c                 |   2 +
 hw/arm/mps2-tz.c              |   4 +-
 hw/arm/virt.c                 |   5 +-
 hw/core/reset.c               |  17 +-
 hw/core/resettable.c          |   3 +-
 hw/hppa/machine.c             |   4 +-
 hw/hyperv/hyperv.c            |   2 +-
 hw/i386/microvm.c             |   4 +-
 hw/i386/pc.c                  |   6 +-
 hw/i386/x86.c                 |   2 +-
 hw/m68k/q800.c                |  33 ++-
 hw/m68k/virt.c                |  20 +-
 hw/mips/boston.c              |   3 +
 hw/openrisc/boot.c            |   3 +
 hw/ppc/pegasos2.c             |   4 +-
 hw/ppc/pnv.c                  |   4 +-
 hw/ppc/spapr.c                |   4 +-
 hw/riscv/boot.c               |   3 +
 hw/rx/rx-gdbsim.c             |   3 +
 hw/s390x/s390-virtio-ccw.c    |   4 +-
 hw/timer/imx_epit.c           |   9 +-
 migration/savevm.c            |   2 +-
 softmmu/device_tree.c         |  21 ++
 softmmu/runstate.c            |  11 +-
 target/arm/cpu.c              |  24 +-
 target/arm/cpu64.c            |   2 +
 target/arm/helper.c           |  31 ++-
 target/arm/ptw.c              | 524 +++++++++++++++++++++++++++---------------
 37 files changed, 572 insertions(+), 263 deletions(-)

FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the
OS to forbid EL0 access to half of the address space. Since this is
an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can
implement it entirely in aa64_va_parameters().

This requires moving the existing regime_is_user() to internals.h
so that the code in helper.c can get at it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst |  1 +
 target/arm/cpu.h              |  5 +++++
 target/arm/internals.h        | 19 +++++++++++++++++++
 target/arm/cpu64.c            |  1 +
 target/arm/helper.c           |  9 +++++++++
 target/arm/ptw.c              | 19 -------------------
 6 files changed, 35 insertions(+), 19 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_DoubleFault (Double Fault Extension)
+- FEAT_E0PD (Preventing EL0 access to halves of address maps)
 - FEAT_ETS (Enhanced Translation Synchronization)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
 }
 
+static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
+}
+
 static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     }
 }
 
+static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
+{
+    switch (mmu_idx) {
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_MUser:
+    case ARMMMUIdx_MSUser:
+    case ARMMMUIdx_MUserNegPri:
+    case ARMMMUIdx_MSUserNegPri:
+        return true;
+    default:
+        return false;
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
+    case ARMMMUIdx_E10_1_PAN:
+        g_assert_not_reached();
+    }
+}
+
 /* Return the SCTLR value which controls this address translation regime */
 static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1);      /* FEAT_S2FWB */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);      /* FEAT_TTL */
     t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2);      /* FEAT_BBM at level 2 */
+    t = FIELD_DP64(t, ID_AA64MMFR2, E0PD, 1);     /* FEAT_E0PD */
     cpu->isar.id_aa64mmfr2 = t;
 
     t = cpu->isar.id_aa64zfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         ps = extract32(tcr, 16, 3);
         ds = extract64(tcr, 32, 1);
     } else {
+        bool e0pd;
+
         /*
          * Bit 55 is always between the two regions, and is canonical for
          * determining if address tagging is enabled.
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
             epd = extract32(tcr, 7, 1);
             sh = extract32(tcr, 12, 2);
             hpd = extract64(tcr, 41, 1);
+            e0pd = extract64(tcr, 55, 1);
         } else {
             tsz = extract32(tcr, 16, 6);
             gran = tg1_to_gran_size(extract32(tcr, 30, 2));
             epd = extract32(tcr, 23, 1);
             sh = extract32(tcr, 28, 2);
             hpd = extract64(tcr, 42, 1);
+            e0pd = extract64(tcr, 56, 1);
         }
         ps = extract64(tcr, 32, 3);
         ds = extract64(tcr, 59, 1);
+
+        if (e0pd && cpu_isar_feature(aa64_e0pd, cpu) &&
+            regime_is_user(env, mmu_idx)) {
+            epd = true;
+        }
     }
 
     gran = sanitize_gran_size(cpu, gran, stage2);
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
     return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
 }
 
-static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    switch (mmu_idx) {
-    case ARMMMUIdx_E20_0:
-    case ARMMMUIdx_Stage1_E0:
-    case ARMMMUIdx_MUser:
-    case ARMMMUIdx_MSUser:
-    case ARMMMUIdx_MUserNegPri:
-    case ARMMMUIdx_MSUserNegPri:
-        return true;
-    default:
-        return false;
-    case ARMMMUIdx_E10_0:
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-        g_assert_not_reached();
-    }
-}
-
 /* Return the TTBR associated with this translation regime */
 static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 {
-- 
2.25.1

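As a rough standalone illustration of the rule this patch implements (not QEMU
code; the helper below and its arguments are made up for the example), the
E0PDx bits simply behave like an extra EPDx bit that is only honoured for
EL0 translation regimes:

    /* Illustrative sketch only: how E0PD folds into EPD for one walk,
     * given the raw TCR_EL1 value.  'high_half' selects the TTBR1 region
     * (EPD1/E0PD1); bit positions match the patch above.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static bool walk_disabled(uint64_t tcr, bool high_half, bool is_el0_regime)
    {
        bool epd  = high_half ? (tcr >> 23) & 1 : (tcr >> 7) & 1;   /* EPD1 : EPD0 */
        bool e0pd = high_half ? (tcr >> 56) & 1 : (tcr >> 55) & 1;  /* E0PD1 : E0PD0 */

        /* FEAT_E0PD: for EL0 accesses only, E0PDx acts like EPDx */
        return epd || (e0pd && is_el0_regime);
    }
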
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

The "PCI Bus Binding to: IEEE Std 1275-1994" defines the compatible
string for a PCIe bus or endpoint as "pci<vendorid>,<deviceid>" or
similar. Since the initial binding for PCI virtio-iommu didn't follow
this rule, it was modified to accept both strings and ensure backward
compatibility. Also, the unit-name for the node should be
"device,function".

Fix corresponding dt-validate and dtc warnings:

 pcie@10000000: virtio_iommu@16:compatible: ['virtio,pci-iommu'] does not contain items matching the given schema
 pcie@10000000: Unevaluated properties are not allowed (... 'virtio_iommu@16' were unexpected)
  From schema: linux/Documentation/devicetree/bindings/pci/host-generic-pci.yaml
 virtio_iommu@16: compatible: 'oneOf' conditional failed, one must be fixed:
  ['virtio,pci-iommu'] is too short
  'pci1af4,1057' was expected
  From schema: dtschema/schemas/pci/pci-bus.yaml

 Warning (pci_device_reg): /pcie@10000000/virtio_iommu@16: PCI unit address format error, expected "2,0"

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const VirtMachineState *vms,
 
 static void create_virtio_iommu_dt_bindings(VirtMachineState *vms)
 {
-    const char compat[] = "virtio,pci-iommu";
+    const char compat[] = "virtio,pci-iommu\0pci1af4,1057";
     uint16_t bdf = vms->virtio_iommu_bdf;
     MachineState *ms = MACHINE(vms);
     char *node;
 
     vms->iommu_phandle = qemu_fdt_alloc_phandle(ms->fdt);
 
-    node = g_strdup_printf("%s/virtio_iommu@%d", vms->pciehb_nodename, bdf);
+    node = g_strdup_printf("%s/virtio_iommu@%x,%x", vms->pciehb_nodename,
+                           PCI_SLOT(bdf), PCI_FUNC(bdf));
     qemu_fdt_add_subnode(ms->fdt, node);
     qemu_fdt_setprop(ms->fdt, node, "compatible", compat, sizeof(compat));
     qemu_fdt_setprop_sized_cells(ms->fdt, node, "reg",
-- 
2.25.1

From: Ake Koomsin <ake@igel.co.jp>

An exception targeting EL2 from lower EL is actually maskable when
HCR_E2H and HCR_TGE are both set. This applies to both secure and
non-secure Security state.

We can remove the conditions that try to suppress masking of
interrupts when we are Secure and the exception targets EL2 and
Secure EL2 is disabled.  This is OK because in that situation
arm_phys_excp_target_el() will never return 2 as the target EL.  The
'not if secure' check in this function was originally written before
arm_hcr_el2_eff(), and back then the target EL returned by
arm_phys_excp_target_el() could be 2 even if we were in Secure
EL0/EL1; but it is no longer needed.

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221017092432.546881-1-ake@igel.co.jp
[PMM: Add commit message paragraph explaining why it's OK to
 remove the checks on secure and SCR_EEL2]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
     if ((target_el > cur_el) && (target_el != 1)) {
         /* Exceptions targeting a higher EL may not be maskable */
         if (arm_feature(env, ARM_FEATURE_AARCH64)) {
-            /*
-             * 64-bit masking rules are simple: exceptions to EL3
-             * can't be masked, and exceptions to EL2 can only be
-             * masked from Secure state. The HCR and SCR settings
-             * don't affect the masking logic, only the interrupt routing.
-             */
-            if (target_el == 3 || !secure || (env->cp15.scr_el3 & SCR_EEL2)) {
+            switch (target_el) {
+            case 2:
+                /*
+                 * According to ARM DDI 0487H.a, an interrupt can be masked
+                 * when HCR_E2H and HCR_TGE are both set regardless of the
+                 * current Security state. Note that we need to revisit this
+                 * part again once we need to support NMI.
+                 */
+                if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+                    unmasked = true;
+                }
+                break;
+            case 3:
+                /* Interrupt cannot be masked when the target EL is 3 */
                 unmasked = true;
+                break;
+            default:
+                g_assert_not_reached();
             }
         } else {
             /*
-- 
2.25.1

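For reference, a minimal standalone sketch of the new rule for target EL 2
(not QEMU code; the bit positions are the architectural HCR_EL2.TGE bit 27
and HCR_EL2.E2H bit 34):

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_TGE (1ULL << 27)
    #define HCR_E2H (1ULL << 34)

    /* An exception routed to EL2 from a lower EL on an AArch64 CPU can be
     * masked by the PSTATE mask bits only when E2H and TGE are both set;
     * otherwise it is taken regardless of the mask bits.
     */
    static bool el2_exception_maskable(uint64_t hcr_el2)
    {
        return (hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);
    }
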
From: Damien Hedde <damien.hedde@greensocs.com>

The code for handling the reset level count in the Resettable code
has two issues:

The reset count is only decremented for the 1->0 case. This means
that if there's ever a nested reset that takes the count to 2 then it
will never again be decremented. Eventually the count will exceed
the '50' limit in resettable_phase_enter() and QEMU will trip over
the assertion failure. The repro case in issue 1266 is an example of
this that happens now the SCSI subsystem uses three-phase reset.

Secondly, the count is decremented only after the exit phase handler
is called. Moving the reset count decrement from "just after" to
"just before" calling the exit phase handler allows
resettable_is_in_reset() to return false during the handler
execution.

This simplifies reset handling in resettable devices. Typically, a
function that updates the device state will just need to read the
current reset state and not anymore treat the "in a reset-exit
transition" as a special case.

Note that the semantics change to the *_is_in_reset() functions
will have no effect on the current codebase, because only two
devices (hw/char/cadence_uart.c and hw/misc/zynq_sclr.c) currently
call those functions, and in neither case do they do it from the
device's exit phase method.

Fixes: 4a5fc890 ("scsi: Use device_cold_reset() and bus_cold_reset()")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1266
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Michael Peter <michael.peter@hensoldt-cyber.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221020142749.3357951-1-peter.maydell@linaro.org
Buglink: https://bugs.launchpad.net/qemu/+bug/1905297
Reported-by: Michael Peter <michael.peter@hensoldt-cyber.com>
[PMM: adjust the docs paragraph changed to get the name of the
 'enter' phase right and to clarify exactly when the count is
 adjusted; rewrite the commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/devel/reset.rst | 8 +++++---
 hw/core/resettable.c | 3 +--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/reset.rst
+++ b/docs/devel/reset.rst
@@ -XXX,XX +XXX,XX @@ Polling the reset state
 Resettable interface provides the ``resettable_is_in_reset()`` function.
 This function returns true if the object parameter is currently under reset.
 
-An object is under reset from the beginning of the *init* phase to the end of
-the *exit* phase. During all three phases, the function will return that the
-object is in reset.
+An object is under reset from the beginning of the *enter* phase (before
+either its children or its own enter method is called) to the *exit*
+phase. During *enter* and *hold* phase only, the function will return that the
+object is in reset. The state is changed after the *exit* is propagated to
+its children and just before calling the object's own *exit* method.
 
 This function may be used if the object behavior has to be adapted
 while in reset state. For example if a device has an irq input,
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/resettable.c
+++ b/hw/core/resettable.c
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
     resettable_child_foreach(rc, obj, resettable_phase_exit, NULL, type);
 
     assert(s->count > 0);
-    if (s->count == 1) {
+    if (--s->count == 0) {
         trace_resettable_phase_exit_exec(obj, obj_typename, !!rc->phases.exit);
         if (rc->phases.exit && !resettable_get_tr_func(rc, obj)) {
             rc->phases.exit(obj);
         }
-        s->count = 0;
     }
     s->exit_phase_in_progress = false;
     trace_resettable_phase_exit_end(obj, obj_typename, s->count);
-- 
2.25.1

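A minimal standalone illustration of the counting problem fixed above (not
the QEMU Resettable API, just a toy counter):

    #include <assert.h>
    #include <stdio.h>

    static int count;

    static void reset_enter(void) { count++; }

    /* Old exit logic: only handled the 1 -> 0 transition. */
    static void reset_exit_old(void) { if (count == 1) { count = 0; } }

    /* New exit logic: decrement unconditionally, leave reset at 0. */
    static void reset_exit_new(void) { assert(count > 0); --count; }

    int main(void)
    {
        reset_enter(); reset_enter();            /* nested reset: count == 2 */
        reset_exit_old(); reset_exit_old();
        printf("old logic: count = %d\n", count); /* 2: stuck in reset forever */

        count = 0;
        reset_enter(); reset_enter();
        reset_exit_new(); reset_exit_new();
        printf("new logic: count = %d\n", count); /* 0: reset fully unwound */
        return 0;
    }
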
The semantic difference between the deprecated device_legacy_reset()
function and the newer device_cold_reset() function is that the new
function resets both the device itself and any qbuses it owns,
whereas the legacy function resets just the device itself and nothing
else. In hyperv_synic_reset() we reset a SynICState, which has no
qbuses, so for this purpose the two functions behave identically and
we can stop using the deprecated one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-id: 20221013171817.1447562-1-peter.maydell@linaro.org
---
 hw/hyperv/hyperv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/hyperv/hyperv.c
+++ b/hw/hyperv/hyperv.c
@@ -XXX,XX +XXX,XX @@ void hyperv_synic_reset(CPUState *cs)
     SynICState *synic = get_synic(cs);
 
     if (synic) {
-        device_legacy_reset(DEVICE(synic));
+        device_cold_reset(DEVICE(synic));
     }
 }
-- 
2.25.1

From: Axel Heider <axel.heider@hensoldt.net>

When running seL4 tests (https://docs.sel4.systems/projects/sel4test)
on the sabrelight platform, the timer tests fail. The arm/imx6 EPIT
timer interrupt does not fire properly: instead of after e.g. a second,
it can take up to a minute to finally see the interrupt.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1263

Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
Message-id: 166663118138.13362.1229967229046092876-0@git.sr.ht
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/timer/imx_epit.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/imx_epit.c
+++ b/hw/timer/imx_epit.c
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
             /* If IOVW bit is set then set the timer value */
             ptimer_set_count(s->timer_reload, s->lr);
         }
-
+        /*
+         * Commit the change to s->timer_reload, so it can propagate. Otherwise
+         * the timer interrupt may not fire properly. The commit must happen
+         * before calling imx_epit_reload_compare_timer(), which reads
+         * s->timer_reload internally again.
+         */
+        ptimer_transaction_commit(s->timer_reload);
         imx_epit_reload_compare_timer(s);
         ptimer_transaction_commit(s->timer_cmp);
-        ptimer_transaction_commit(s->timer_reload);
         break;
 
     case 3: /* CMP */
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reduce the amount of typing required for this check.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |  5 +++++
 target/arm/helper.c    | 14 +++++---------
 target/arm/ptw.c       | 14 ++++++--------
 3 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
     }
 }
 
+static inline bool regime_is_stage2(ARMMMUIdx mmu_idx)
+{
+    return mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
+}
+
 /* Return the exception level which controls this address translation regime */
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 37, 2);
-    } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+    } else if (regime_is_stage2(mmu_idx)) {
         return 0; /* VTCR_EL2 */
     } else {
         /* Replicate the single TBI bit so we always have 2 bits. */
@@ -XXX,XX +XXX,XX @@ int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 51, 2);
-    } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+    } else if (regime_is_stage2(mmu_idx)) {
         return 0; /* VTCR_EL2 */
     } else {
         /* Replicate the single TBID bit so we always have 2 bits. */
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
     int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
     ARMGranuleSize gran;
     ARMCPU *cpu = env_archcpu(env);
-    bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
+    bool stage2 = regime_is_stage2(mmu_idx);
 
     if (!regime_has_2_ranges(mmu_idx)) {
         select = 0;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         }
         ds = false;
     } else if (ds) {
-        switch (mmu_idx) {
-        case ARMMMUIdx_Stage2:
-        case ARMMMUIdx_Stage2_S:
+        if (regime_is_stage2(mmu_idx)) {
             if (gran == Gran16K) {
                 ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
             } else {
                 ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
             }
-            break;
-        default:
+        } else {
             if (gran == Gran16K) {
                 ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
             } else {
                 ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
             }
-            break;
         }
         if (ds) {
             min_tsz = 12;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
     bool have_wxn;
     int wxn = 0;
 
-    assert(mmu_idx != ARMMMUIdx_Stage2);
-    assert(mmu_idx != ARMMMUIdx_Stage2_S);
+    assert(!regime_is_stage2(mmu_idx));
 
     user_rw = simple_ap_to_rw_prot_is_user(ap, true);
     if (is_user) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
103
goto do_fault;
44
}
104
}
45
if (a->rn == 13 || a->rn == 15) {
105
46
- /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
106
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
47
+ /*
107
+ if (!regime_is_stage2(mmu_idx)) {
48
+ * For DLSTP rn == 15 is a related encoding (LCTP); the
108
/*
49
+ * other cases caught by this condition are all
109
* The starting level depends on the virtual address size (which can
50
+ * CONSTRAINED UNPREDICTABLE: we choose to UNDEF
110
* be up to 48 bits) and the translation granule size. It indicates
51
+ */
111
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
52
return false;
112
attrs = extract64(descriptor, 2, 10)
113
| (extract64(descriptor, 52, 12) << 10);
114
115
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
116
+ if (regime_is_stage2(mmu_idx)) {
117
/* Stage 2 table descriptors do not include any attribute fields */
118
break;
119
}
120
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
121
122
ap = extract32(attrs, 4, 2);
123
124
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
125
+ if (regime_is_stage2(mmu_idx)) {
126
ns = mmu_idx == ARMMMUIdx_Stage2;
127
xn = extract32(attrs, 11, 2);
128
result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
129
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
130
result->f.guarded = guarded;
53
}
131
}
54
132
55
- /* Not a while loop, no tail predication: just set LR to the count */
133
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
56
+ if (a->size != 4) {
134
+ if (regime_is_stage2(mmu_idx)) {
57
+ /* DLSTP */
135
result->cacheattrs.is_s2_format = true;
58
+ if (!dc_isar_feature(aa32_mve, s)) {
136
result->cacheattrs.attrs = extract32(attrs, 0, 4);
59
+ return false;
137
} else {
60
+ }
138
@@ -XXX,XX +XXX,XX @@ do_fault:
61
+ if (!vfp_access_check(s)) {
139
fi->type = fault_type;
62
+ return true;
140
fi->level = level;
63
+ }
141
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
64
+ }
142
- fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
65
+
143
- mmu_idx == ARMMMUIdx_Stage2_S);
66
+ /* Not a while loop: set LR to the count, and set LTPSIZE for DLSTP */
144
+ fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
67
tmp = load_reg(s, a->rn);
145
fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
68
store_reg(s, 14, tmp);
69
+ if (a->size != 4) {
70
+ /* DLSTP: set FPSCR.LTPSIZE */
71
+ tmp = tcg_const_i32(a->size);
72
+ store_cpu_field(tmp, v7m.ltpsize);
73
+ }
74
return true;
146
return true;
75
}
147
}
76
77
--
148
--
78
2.20.1
149
2.25.1
79
150
80
151
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Commit 382c7160d1cd ("hw/intc/arm_gicv3_cpuif: Fix EOIR write access
3
Hoist the computation of the mmu_idx for the ptw up to
4
check logic") added an assert_not_reached() if the guest writes the EOIR
4
get_phys_addr_with_struct and get_phys_addr_twostage.
5
register while no interrupt is active.
5
This removes the duplicate check for stage2 disabled
6
from the middle of the walk, performing it only once.
6
7
7
It turns out some software does this: EDK2, in
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
GicV3ExitBootServicesEvent(), unconditionally writes EOIR for all
9
interrupts that it manages. This now causes QEMU to abort when running
10
UEFI on a VM with GICv3. Although it is UNPREDICTABLE behavior and EDK2
11
does need fixing, the punishment seems a little harsh, especially since
12
icc_eoir_write() already tolerates writes of nonexistent interrupt
13
numbers. Display a guest error and tolerate spurious EOIR writes.
14
15
Fixes: 382c7160d1cd ("hw/intc/arm_gicv3_cpuif: Fix EOIR write access check logic")
16
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
17
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
18
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
19
Tested-by: Alex Bennée <alex.bennee@linaro.org>
10
Tested-by: Alex Bennée <alex.bennee@linaro.org>
20
Message-id: 20210604130352.1887560-1-jean-philippe@linaro.org
11
Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
---
13
---
24
hw/intc/arm_gicv3_cpuif.c | 5 ++++-
14
target/arm/ptw.c | 71 ++++++++++++++++++++++++++++++++++++------------
25
1 file changed, 4 insertions(+), 1 deletion(-)
15
1 file changed, 54 insertions(+), 17 deletions(-)
26
16
27
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
28
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/intc/arm_gicv3_cpuif.c
19
--- a/target/arm/ptw.c
30
+++ b/hw/intc/arm_gicv3_cpuif.c
20
+++ b/target/arm/ptw.c
31
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
32
22
33
#include "qemu/osdep.h"
23
typedef struct S1Translate {
34
#include "qemu/bitops.h"
24
ARMMMUIdx in_mmu_idx;
35
+#include "qemu/log.h"
25
+ ARMMMUIdx in_ptw_idx;
36
#include "qemu/main-loop.h"
26
bool in_secure;
37
#include "trace.h"
27
bool in_debug;
38
#include "gicv3_internal.h"
28
bool out_secure;
39
@@ -XXX,XX +XXX,XX @@ static void icc_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
30
{
31
bool is_secure = ptw->in_secure;
32
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
33
- ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
34
- bool s2_phys = false;
35
+ ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
36
uint8_t pte_attrs;
37
bool pte_secure;
38
39
- if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
40
- || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
41
- s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
42
- s2_phys = true;
43
- }
44
-
45
if (unlikely(ptw->in_debug)) {
46
/*
47
* From gdbstub, do not use softmmu so that we don't modify the
48
* state of the cpu at all, including softmmu tlb contents.
49
*/
50
- if (s2_phys) {
51
- ptw->out_phys = addr;
52
- pte_attrs = 0;
53
- pte_secure = is_secure;
54
- } else {
55
+ if (regime_is_stage2(s2_mmu_idx)) {
56
S1Translate s2ptw = {
57
.in_mmu_idx = s2_mmu_idx,
58
+ .in_ptw_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS,
59
.in_secure = is_secure,
60
.in_debug = true,
61
};
62
GetPhysAddrResult s2 = { };
63
+
64
if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
65
false, &s2, fi)) {
66
goto fail;
67
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
68
ptw->out_phys = s2.f.phys_addr;
69
pte_attrs = s2.cacheattrs.attrs;
70
pte_secure = s2.f.attrs.secure;
71
+ } else {
72
+ /* Regime is physical. */
73
+ ptw->out_phys = addr;
74
+ pte_attrs = 0;
75
+ pte_secure = is_secure;
40
}
76
}
41
break;
77
ptw->out_host = NULL;
42
default:
78
} else {
43
- g_assert_not_reached();
79
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
44
+ qemu_log_mask(LOG_GUEST_ERROR,
80
pte_secure = full->attrs.secure;
45
+ "%s: IRQ %d isn't active\n", __func__, irq);
46
+ return;
47
}
81
}
48
82
49
icc_drop_prio(cs, grp);
83
- if (!s2_phys) {
84
+ if (regime_is_stage2(s2_mmu_idx)) {
85
uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
86
87
if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
89
descaddr |= (address >> (stride * (4 - level))) & indexmask;
90
descaddr &= ~7ULL;
91
nstable = extract32(tableattrs, 4, 1);
92
- ptw->in_secure = !nstable;
93
+ if (!nstable) {
94
+ /*
95
+ * Stage2_S -> Stage2 or Phys_S -> Phys_NS
96
+ * Assert that the non-secure idx are even, and relative order.
97
+ */
98
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Phys_NS & 1) != 0);
99
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Stage2 & 1) != 0);
100
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_NS + 1 != ARMMMUIdx_Phys_S);
101
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2 + 1 != ARMMMUIdx_Stage2_S);
102
+ ptw->in_ptw_idx &= ~1;
103
+ ptw->in_secure = false;
104
+ }
105
descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
106
if (fi->type != ARMFault_None) {
107
goto do_fault;
108
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
109
110
is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
111
ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
112
+ ptw->in_ptw_idx = s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
113
ptw->in_secure = s2walk_secure;
114
115
/*
116
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
117
ARMMMUFaultInfo *fi)
118
{
119
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
120
- ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
121
bool is_secure = ptw->in_secure;
122
+ ARMMMUIdx s1_mmu_idx;
123
124
- if (mmu_idx != s1_mmu_idx) {
125
+ switch (mmu_idx) {
126
+ case ARMMMUIdx_Phys_S:
127
+ case ARMMMUIdx_Phys_NS:
128
+ /* Checking Phys early avoids special casing later vs regime_el. */
129
+ return get_phys_addr_disabled(env, address, access_type, mmu_idx,
130
+ is_secure, result, fi);
131
+
132
+ case ARMMMUIdx_Stage1_E0:
133
+ case ARMMMUIdx_Stage1_E1:
134
+ case ARMMMUIdx_Stage1_E1_PAN:
135
+ /* First stage lookup uses second stage for ptw. */
136
+ ptw->in_ptw_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
137
+ break;
138
+
139
+ case ARMMMUIdx_E10_0:
140
+ s1_mmu_idx = ARMMMUIdx_Stage1_E0;
141
+ goto do_twostage;
142
+ case ARMMMUIdx_E10_1:
143
+ s1_mmu_idx = ARMMMUIdx_Stage1_E1;
144
+ goto do_twostage;
145
+ case ARMMMUIdx_E10_1_PAN:
146
+ s1_mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
147
+ do_twostage:
148
/*
149
* Call ourselves recursively to do the stage 1 and then stage 2
150
* translations if mmu_idx is a two-stage regime, and EL2 present.
151
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
152
return get_phys_addr_twostage(env, ptw, address, access_type,
153
result, fi);
154
}
155
+ /* fall through */
156
+
157
+ default:
158
+ /* Single stage and second stage uses physical for ptw. */
159
+ ptw->in_ptw_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
160
+ break;
161
}
162
163
/*
50
--
164
--
51
2.20.1
165
2.25.1
52
166
53
167
1
int128_make64() creates an Int128 from an unsigned 64 bit value; add
1
From: Richard Henderson <richard.henderson@linaro.org>
2
a function int128_makes64() creating an Int128 from a signed 64 bit
3
value.
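
A quick illustration of the difference (a sketch, not part of the patch):

    Int128 a = int128_make64(UINT64_MAX);  /* lo = 0xffffffffffffffff, hi = 0  */
    Int128 b = int128_makes64(-1);         /* lo = 0xffffffffffffffff, hi = -1 */

int128_makes64() sign-extends its argument into the high half, so b
represents -1 as a 128-bit value, whereas a represents 2^64 - 1.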
4
2
3
The MMFR1 field may indicate support for hardware update of
4
access flag alone, or access flag and dirty bit.
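
For reference, ID_AA64MMFR1.HAFDBS encodes this as 0 (no hardware update),
1 (Access flag only) or 2 (Access flag and dirty state), which is why the
two feature tests added below boil down to (paraphrasing the hunks):

    hafs:  FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0  /* AF at least    */
    hdbs:  FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2  /* AF + dirty bit */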
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221024051851.3074715-4-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210614151007.4545-34-peter.maydell@linaro.org
9
---
10
---
10
include/qemu/int128.h | 10 ++++++++++
11
target/arm/cpu.h | 10 ++++++++++
11
1 file changed, 10 insertions(+)
12
1 file changed, 10 insertions(+)
12
13
13
diff --git a/include/qemu/int128.h b/include/qemu/int128.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/include/qemu/int128.h
16
--- a/target/arm/cpu.h
16
+++ b/include/qemu/int128.h
17
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_make64(uint64_t a)
18
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
18
return a;
19
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
19
}
20
}
20
21
21
+static inline Int128 int128_makes64(int64_t a)
22
+static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
22
+{
23
+{
23
+ return a;
24
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
24
+}
25
+}
25
+
26
+
26
static inline Int128 int128_make128(uint64_t lo, uint64_t hi)
27
+static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
27
{
28
return (__uint128_t)hi << 64 | lo;
29
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_make64(uint64_t a)
30
return (Int128) { a, 0 };
31
}
32
33
+static inline Int128 int128_makes64(int64_t a)
34
+{
28
+{
35
+ return (Int128) { a, a >> 63 };
29
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
36
+}
30
+}
37
+
31
+
38
static inline Int128 int128_make128(uint64_t lo, uint64_t hi)
32
static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
39
{
33
{
40
return (Int128) { lo, hi };
34
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
41
--
35
--
42
2.20.1
36
2.25.1
43
44
1
Implement the MVE WLSTP insn; this is like the existing WLS insn,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
except that it specifies a size value which is used to set
3
FPSCR.LTPSIZE.
4
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20221024051851.3074715-5-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210614151007.4545-8-peter.maydell@linaro.org
8
---
8
---
9
target/arm/t32.decode | 8 ++++++--
9
target/arm/internals.h | 2 ++
10
target/arm/translate.c | 37 ++++++++++++++++++++++++++++++++++++-
10
target/arm/helper.c | 8 +++++++-
11
2 files changed, 42 insertions(+), 3 deletions(-)
11
2 files changed, 9 insertions(+), 1 deletion(-)
12
12
13
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/t32.decode
15
--- a/target/arm/internals.h
16
+++ b/target/arm/t32.decode
16
+++ b/target/arm/internals.h
17
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
17
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
18
%lob_imm 1:10 11:1 !function=times_2
18
bool hpd : 1;
19
19
bool tsz_oob : 1; /* tsz has been clamped to legal range */
20
DLS 1111 0 0000 100 rn:4 1110 0000 0000 0001
20
bool ds : 1;
21
- WLS 1111 0 0000 100 rn:4 1100 . .......... 1 imm=%lob_imm
21
+ bool ha : 1;
22
- LE 1111 0 0000 0 f:1 0 1111 1100 . .......... 1 imm=%lob_imm
22
+ bool hd : 1;
23
+ WLS 1111 0 0000 100 rn:4 1100 . .......... 1 imm=%lob_imm size=4
23
ARMGranuleSize gran : 2;
24
+ {
24
} ARMVAParameters;
25
+ LE 1111 0 0000 0 f:1 0 1111 1100 . .......... 1 imm=%lob_imm
25
26
+ # This is WLSTP
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
+ WLS 1111 0 0000 0 size:2 rn:4 1100 . .......... 1 imm=%lob_imm
28
+ }
29
30
LCTP 1111 0 0000 000 1111 1110 0000 0000 0001
31
]
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
33
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
28
--- a/target/arm/helper.c
35
+++ b/target/arm/translate.c
29
+++ b/target/arm/helper.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
30
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
37
return false;
31
ARMMMUIdx mmu_idx, bool data)
38
}
32
{
39
if (a->rn == 13 || a->rn == 15) {
33
uint64_t tcr = regime_tcr(env, mmu_idx);
40
- /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
34
- bool epd, hpd, tsz_oob, ds;
41
+ /*
35
+ bool epd, hpd, tsz_oob, ds, ha, hd;
42
+ * For WLSTP rn == 15 is a related encoding (LE); the
36
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
43
+ * other cases caught by this condition are all
37
ARMGranuleSize gran;
44
+ * CONSTRAINED UNPREDICTABLE: we choose to UNDEF
38
ARMCPU *cpu = env_archcpu(env);
45
+ */
39
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
46
return false;
40
epd = false;
47
}
41
sh = extract32(tcr, 12, 2);
48
if (s->condexec_mask) {
42
ps = extract32(tcr, 16, 3);
49
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
43
+ ha = extract32(tcr, 21, 1) && cpu_isar_feature(aa64_hafs, cpu);
50
*/
44
+ hd = extract32(tcr, 22, 1) && cpu_isar_feature(aa64_hdbs, cpu);
51
return false;
45
ds = extract64(tcr, 32, 1);
52
}
46
} else {
53
+ if (a->size != 4) {
47
bool e0pd;
54
+ /* WLSTP */
48
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
55
+ if (!dc_isar_feature(aa32_mve, s)) {
49
e0pd = extract64(tcr, 56, 1);
56
+ return false;
50
}
57
+ }
51
ps = extract64(tcr, 32, 3);
58
+ /*
52
+ ha = extract64(tcr, 39, 1) && cpu_isar_feature(aa64_hafs, cpu);
59
+ * We need to check that the FPU is enabled here, but mustn't
53
+ hd = extract64(tcr, 40, 1) && cpu_isar_feature(aa64_hdbs, cpu);
60
+ * call vfp_access_check() to do that because we don't want to
54
ds = extract64(tcr, 59, 1);
61
+ * do the lazy state preservation in the "loop count is zero" case.
55
62
+ * Do the check-and-raise-exception by hand.
56
if (e0pd && cpu_isar_feature(aa64_e0pd, cpu) &&
63
+ */
57
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
64
+ if (s->fp_excp_el) {
58
.hpd = hpd,
65
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
59
.tsz_oob = tsz_oob,
66
+ syn_uncategorized(), s->fp_excp_el);
60
.ds = ds,
67
+ return true;
61
+ .ha = ha,
68
+ }
62
+ .hd = ha && hd,
69
+ }
63
.gran = gran,
70
+
64
};
71
nextlabel = gen_new_label();
65
}
72
tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
73
tmp = load_reg(s, a->rn);
74
store_reg(s, 14, tmp);
75
+ if (a->size != 4) {
76
+ /*
77
+ * WLSTP: set FPSCR.LTPSIZE. This requires that we do the
78
+ * lazy state preservation, new FP context creation, etc,
79
+ * that vfp_access_check() does. We know that the actual
80
+ * access check will succeed (ie it won't generate code that
81
+ * throws an exception) because we did that check by hand earlier.
82
+ */
83
+ bool ok = vfp_access_check(s);
84
+ assert(ok);
85
+ tmp = tcg_const_i32(a->size);
86
+ store_cpu_field(tmp, v7m.ltpsize);
87
+ }
88
gen_jmp_tb(s, s->base.pc_next, 1);
89
90
gen_set_label(nextlabel);
91
--
66
--
92
2.20.1
67
2.25.1
93
68
94
69
1
Implement the MVE LCTP instruction.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We put its decode and implementation with the other
3
Separate S1 translation from the actual lookup.
4
low-overhead-branch insns because although it is only present if MVE
4
Will enable lpae hardware updates.
5
is implemented it is logically in the same group as the other LOB
6
insns.
7
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210614151007.4545-7-peter.maydell@linaro.org
11
---
10
---
12
target/arm/t32.decode | 2 ++
11
target/arm/ptw.c | 41 ++++++++++++++++++++++-------------------
13
target/arm/translate.c | 24 ++++++++++++++++++++++++
12
1 file changed, 22 insertions(+), 19 deletions(-)
14
2 files changed, 26 insertions(+)
15
13
16
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/t32.decode
16
--- a/target/arm/ptw.c
19
+++ b/target/arm/t32.decode
17
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
18
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
21
DLS 1111 0 0000 100 rn:4 1110 0000 0000 0001
22
WLS 1111 0 0000 100 rn:4 1100 . .......... 1 imm=%lob_imm
23
LE 1111 0 0000 0 f:1 0 1111 1100 . .......... 1 imm=%lob_imm
24
+
25
+ LCTP 1111 0 0000 000 1111 1110 0000 0000 0001
26
]
27
}
19
}
28
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
29
index XXXXXXX..XXXXXXX 100644
21
/* All loads done in the course of a page table walk go through here. */
30
--- a/target/arm/translate.c
22
-static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
31
+++ b/target/arm/translate.c
23
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw,
32
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
24
ARMMMUFaultInfo *fi)
33
return true;
25
{
26
CPUState *cs = env_cpu(env);
27
uint32_t data;
28
29
- if (!S1_ptw_translate(env, ptw, addr, fi)) {
30
- /* Failure. */
31
- assert(fi->s1ptw);
32
- return 0;
33
- }
34
-
35
if (likely(ptw->out_host)) {
36
/* Page tables are in RAM, and we have the host address. */
37
if (ptw->out_be) {
38
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
39
return data;
34
}
40
}
35
41
36
+static bool trans_LCTP(DisasContext *s, arg_LCTP *a)
42
-static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
37
+{
43
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
38
+ /*
44
ARMMMUFaultInfo *fi)
39
+ * M-profile Loop Clear with Tail Predication. Since our implementation
45
{
40
+ * doesn't cache branch information, all we need to do is reset
46
CPUState *cs = env_cpu(env);
41
+ * FPSCR.LTPSIZE to 4.
47
uint64_t data;
42
+ */
48
43
+ TCGv_i32 ltpsize;
49
- if (!S1_ptw_translate(env, ptw, addr, fi)) {
44
+
50
- /* Failure. */
45
+ if (!dc_isar_feature(aa32_lob, s) ||
51
- assert(fi->s1ptw);
46
+ !dc_isar_feature(aa32_mve, s)) {
52
- return 0;
47
+ return false;
53
- }
54
-
55
if (likely(ptw->out_host)) {
56
/* Page tables are in RAM, and we have the host address. */
57
if (ptw->out_be) {
58
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
59
fi->type = ARMFault_Translation;
60
goto do_fault;
61
}
62
- desc = arm_ldl_ptw(env, ptw, table, fi);
63
+ if (!S1_ptw_translate(env, ptw, table, fi)) {
64
+ goto do_fault;
48
+ }
65
+ }
49
+
66
+ desc = arm_ldl_ptw(env, ptw, fi);
50
+ if (!vfp_access_check(s)) {
67
if (fi->type != ARMFault_None) {
51
+ return true;
68
goto do_fault;
69
}
70
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
71
/* Fine pagetable. */
72
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
73
}
74
- desc = arm_ldl_ptw(env, ptw, table, fi);
75
+ if (!S1_ptw_translate(env, ptw, table, fi)) {
76
+ goto do_fault;
77
+ }
78
+ desc = arm_ldl_ptw(env, ptw, fi);
79
if (fi->type != ARMFault_None) {
80
goto do_fault;
81
}
82
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
83
fi->type = ARMFault_Translation;
84
goto do_fault;
85
}
86
- desc = arm_ldl_ptw(env, ptw, table, fi);
87
+ if (!S1_ptw_translate(env, ptw, table, fi)) {
88
+ goto do_fault;
52
+ }
89
+ }
53
+
90
+ desc = arm_ldl_ptw(env, ptw, fi);
54
+ ltpsize = tcg_const_i32(4);
91
if (fi->type != ARMFault_None) {
55
+ store_cpu_field(ltpsize, v7m.ltpsize);
92
goto do_fault;
56
+ return true;
93
}
57
+}
94
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
58
+
95
ns = extract32(desc, 3, 1);
59
+
96
/* Lookup l2 entry. */
60
static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
97
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
61
{
98
- desc = arm_ldl_ptw(env, ptw, table, fi);
62
TCGv_i32 addr, tmp;
99
+ if (!S1_ptw_translate(env, ptw, table, fi)) {
100
+ goto do_fault;
101
+ }
102
+ desc = arm_ldl_ptw(env, ptw, fi);
103
if (fi->type != ARMFault_None) {
104
goto do_fault;
105
}
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
107
ptw->in_ptw_idx &= ~1;
108
ptw->in_secure = false;
109
}
110
- descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
111
+ if (!S1_ptw_translate(env, ptw, descaddr, fi)) {
112
+ goto do_fault;
113
+ }
114
+ descriptor = arm_ldq_ptw(env, ptw, fi);
115
if (fi->type != ARMFault_None) {
116
goto do_fault;
117
}
63
--
118
--
64
2.20.1
119
2.25.1
65
66
1
In commit a3494d4671797c we reworked the M-profile handling of its
1
From: Richard Henderson <richard.henderson@linaro.org>
2
checks for when the NOCP exception should be raised because the FPU
3
is disabled, so that (in line with the architecture) the NOCP check
4
is done early over a large range of the encoding space, and takes
5
precedence over UNDEF exceptions. As part of this, we removed the
6
code from full_vfp_access_check() which raised an exception there for
7
M-profile with the FPU disabled, because it was no longer reachable.
8
2
9
For MVE, some instructions which are outside the "coprocessor space"
3
This fault type is to be used with FEAT_HAFDBS when
10
region of the encoding space must nonetheless do "is the FPU enabled"
4
the guest enables hw updates, but places the tables
11
checks and possibly raise a NOCP exception. (In particular this
5
in memory where atomic updates are unsupported.
12
covers the MVE-specific low-overhead branch insns LCTP, DLSTP and
13
WLSTP.) To support these insns, reinstate the code in
14
full_vfp_access_check(), so that their trans functions can call
15
vfp_access_check() and get the correct behaviour.
16
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 20221024051851.3074715-7-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20210614151007.4545-6-peter.maydell@linaro.org
20
---
12
---
21
target/arm/translate-vfp.c | 20 +++++++++++++++-----
13
target/arm/internals.h | 4 ++++
22
1 file changed, 15 insertions(+), 5 deletions(-)
14
1 file changed, 4 insertions(+)
23
15
24
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/translate-vfp.c
18
--- a/target/arm/internals.h
27
+++ b/target/arm/translate-vfp.c
19
+++ b/target/arm/internals.h
28
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
20
@@ -XXX,XX +XXX,XX @@ typedef enum ARMFaultType {
29
static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
21
ARMFault_AsyncExternal,
30
{
22
ARMFault_Debug,
31
if (s->fp_excp_el) {
23
ARMFault_TLBConflict,
32
- /* M-profile handled this earlier, in disas_m_nocp() */
24
+ ARMFault_UnsuppAtomicUpdate,
33
- assert (!arm_dc_feature(s, ARM_FEATURE_M));
25
ARMFault_Lockdown,
34
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
26
ARMFault_Exclusive,
35
- syn_fp_access_trap(1, 0xe, false),
27
ARMFault_ICacheMaint,
36
- s->fp_excp_el);
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_fi_to_lfsc(ARMMMUFaultInfo *fi)
37
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
29
case ARMFault_TLBConflict:
38
+ /*
30
fsc = 0x30;
39
+ * M-profile mostly catches the "FPU disabled" case early, in
31
break;
40
+ * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
32
+ case ARMFault_UnsuppAtomicUpdate:
41
+ * which do coprocessor-checks are outside the large ranges of
33
+ fsc = 0x31;
42
+ * the encoding space handled by the patterns in m-nocp.decode,
34
+ break;
43
+ * and for them we may need to raise NOCP here.
35
case ARMFault_Lockdown:
44
+ */
36
fsc = 0x34;
45
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
37
break;
46
+ syn_uncategorized(), s->fp_excp_el);
47
+ } else {
48
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
49
+ syn_fp_access_trap(1, 0xe, false),
50
+ s->fp_excp_el);
51
+ }
52
return false;
53
}
54
55
--
38
--
56
2.20.1
39
2.25.1
57
40
58
41
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The test was off-by-one, because tag_last points to the
3
The unconditional loop was used both to iterate over levels
4
last byte of the tag to check, thus tag_last - prev_page
4
and to control parsing of attributes. Use an explicit goto
5
will equal TARGET_PAGE_SIZE when we use the first byte
5
in both cases.
6
of the next page.
6
7
7
While this appears less clean for iterating over levels, we
8
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/403
8
will need to jump back into the middle of this loop for
9
Reported-by: Peter Collingbourne <pcc@google.com>
9
atomic updates, which is even uglier.
10
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210612195707.840217-1-richard.henderson@linaro.org
13
Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
---
15
target/arm/mte_helper.c | 2 +-
16
target/arm/ptw.c | 192 +++++++++++++++++++++++------------------------
16
tests/tcg/aarch64/mte-7.c | 31 +++++++++++++++++++++++++++++++
17
1 file changed, 96 insertions(+), 96 deletions(-)
17
tests/tcg/aarch64/Makefile.target | 2 +-
18
18
3 files changed, 33 insertions(+), 2 deletions(-)
19
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
19
create mode 100644 tests/tcg/aarch64/mte-7.c
20
21
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
22
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/mte_helper.c
21
--- a/target/arm/ptw.c
24
+++ b/target/arm/mte_helper.c
22
+++ b/target/arm/ptw.c
25
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
26
prev_page = ptr & TARGET_PAGE_MASK;
24
uint64_t descaddrmask;
27
next_page = prev_page + TARGET_PAGE_SIZE;
25
bool aarch64 = arm_el_is_aa64(env, el);
28
26
bool guarded = false;
29
- if (likely(tag_last - prev_page <= TARGET_PAGE_SIZE)) {
27
+ uint64_t descriptor;
30
+ if (likely(tag_last - prev_page < TARGET_PAGE_SIZE)) {
28
+ bool nstable;
31
/* Memory access stays on one page. */
29
32
tag_size = ((tag_byte_last - tag_byte_first) / (2 * TAG_GRANULE)) + 1;
30
/* TODO: This code does not support shareability levels. */
33
mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
31
if (aarch64) {
34
diff --git a/tests/tcg/aarch64/mte-7.c b/tests/tcg/aarch64/mte-7.c
32
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
35
new file mode 100644
33
* bits at each step.
36
index XXXXXXX..XXXXXXX
34
*/
37
--- /dev/null
35
tableattrs = is_secure ? 0 : (1 << 4);
38
+++ b/tests/tcg/aarch64/mte-7.c
36
- for (;;) {
39
@@ -XXX,XX +XXX,XX @@
37
- uint64_t descriptor;
40
+/*
38
- bool nstable;
41
+ * Memory tagging, unaligned access crossing pages.
39
-
42
+ * https://gitlab.com/qemu-project/qemu/-/issues/403
40
- descaddr |= (address >> (stride * (4 - level))) & indexmask;
43
+ *
41
- descaddr &= ~7ULL;
44
+ * Copyright (c) 2021 Linaro Ltd
42
- nstable = extract32(tableattrs, 4, 1);
45
+ * SPDX-License-Identifier: GPL-2.0-or-later
43
- if (!nstable) {
46
+ */
44
- /*
47
+
45
- * Stage2_S -> Stage2 or Phys_S -> Phys_NS
48
+#include "mte.h"
46
- * Assert that the non-secure idx are even, and relative order.
49
+
47
- */
50
+int main(int ac, char **av)
48
- QEMU_BUILD_BUG_ON((ARMMMUIdx_Phys_NS & 1) != 0);
51
+{
49
- QEMU_BUILD_BUG_ON((ARMMMUIdx_Stage2 & 1) != 0);
52
+ void *p;
50
- QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_NS + 1 != ARMMMUIdx_Phys_S);
53
+
51
- QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2 + 1 != ARMMMUIdx_Stage2_S);
54
+ enable_mte(PR_MTE_TCF_SYNC);
52
- ptw->in_ptw_idx &= ~1;
55
+ p = alloc_mte_mem(2 * 0x1000);
53
- ptw->in_secure = false;
56
+
54
- }
57
+ /* Tag the pointer. */
55
- if (!S1_ptw_translate(env, ptw, descaddr, fi)) {
58
+ p = (void *)((unsigned long)p | (1ul << 56));
56
- goto do_fault;
59
+
57
- }
60
+ /* Store tag in sequential granules. */
58
- descriptor = arm_ldq_ptw(env, ptw, fi);
61
+ asm("stg %0, [%0]" : : "r"(p + 0x0ff0));
59
- if (fi->type != ARMFault_None) {
62
+ asm("stg %0, [%0]" : : "r"(p + 0x1000));
60
- goto do_fault;
61
- }
62
-
63
- if (!(descriptor & 1) ||
64
- (!(descriptor & 2) && (level == 3))) {
65
- /* Invalid, or the Reserved level 3 encoding */
66
- goto do_fault;
67
- }
68
-
69
- descaddr = descriptor & descaddrmask;
70
71
+ next_level:
72
+ descaddr |= (address >> (stride * (4 - level))) & indexmask;
73
+ descaddr &= ~7ULL;
74
+ nstable = extract32(tableattrs, 4, 1);
75
+ if (!nstable) {
76
/*
77
- * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
78
- * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
79
- * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
80
- * raise AddressSizeFault.
81
+ * Stage2_S -> Stage2 or Phys_S -> Phys_NS
82
+ * Assert that the non-secure idx are even, and relative order.
83
*/
84
- if (outputsize > 48) {
85
- if (param.ds) {
86
- descaddr |= extract64(descriptor, 8, 2) << 50;
87
- } else {
88
- descaddr |= extract64(descriptor, 12, 4) << 48;
89
- }
90
- } else if (descaddr >> outputsize) {
91
- fault_type = ARMFault_AddressSize;
92
- goto do_fault;
93
- }
94
-
95
- if ((descriptor & 2) && (level < 3)) {
96
- /*
97
- * Table entry. The top five bits are attributes which may
98
- * propagate down through lower levels of the table (and
99
- * which are all arranged so that 0 means "no effect", so
100
- * we can gather them up by ORing in the bits at each level).
101
- */
102
- tableattrs |= extract64(descriptor, 59, 5);
103
- level++;
104
- indexmask = indexmask_grainsize;
105
- continue;
106
- }
107
- /*
108
- * Block entry at level 1 or 2, or page entry at level 3.
109
- * These are basically the same thing, although the number
110
- * of bits we pull in from the vaddr varies. Note that although
111
- * descaddrmask masks enough of the low bits of the descriptor
112
- * to give a correct page or table address, the address field
113
- * in a block descriptor is smaller; so we need to explicitly
114
- * clear the lower bits here before ORing in the low vaddr bits.
115
- */
116
- page_size = (1ULL << ((stride * (4 - level)) + 3));
117
- descaddr &= ~(hwaddr)(page_size - 1);
118
- descaddr |= (address & (page_size - 1));
119
- /* Extract attributes from the descriptor */
120
- attrs = extract64(descriptor, 2, 10)
121
- | (extract64(descriptor, 52, 12) << 10);
122
-
123
- if (regime_is_stage2(mmu_idx)) {
124
- /* Stage 2 table descriptors do not include any attribute fields */
125
- break;
126
- }
127
- /* Merge in attributes from table descriptors */
128
- attrs |= nstable << 3; /* NS */
129
- guarded = extract64(descriptor, 50, 1); /* GP */
130
- if (param.hpd) {
131
- /* HPD disables all the table attributes except NSTable. */
132
- break;
133
- }
134
- attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
135
- /*
136
- * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
137
- * means "force PL1 access only", which means forcing AP[1] to 0.
138
- */
139
- attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
140
- attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
141
- break;
142
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Phys_NS & 1) != 0);
143
+ QEMU_BUILD_BUG_ON((ARMMMUIdx_Stage2 & 1) != 0);
144
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_NS + 1 != ARMMMUIdx_Phys_S);
145
+ QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2 + 1 != ARMMMUIdx_Stage2_S);
146
+ ptw->in_ptw_idx &= ~1;
147
+ ptw->in_secure = false;
148
}
149
+ if (!S1_ptw_translate(env, ptw, descaddr, fi)) {
150
+ goto do_fault;
151
+ }
152
+ descriptor = arm_ldq_ptw(env, ptw, fi);
153
+ if (fi->type != ARMFault_None) {
154
+ goto do_fault;
155
+ }
156
+
157
+ if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
158
+ /* Invalid, or the Reserved level 3 encoding */
159
+ goto do_fault;
160
+ }
161
+
162
+ descaddr = descriptor & descaddrmask;
63
+
163
+
64
+ /*
164
+ /*
65
+ * Perform an unaligned store with tag 1 crossing the pages.
165
+ * For FEAT_LPA and PS=6, bits [51:48] of descaddr are in [15:12]
66
+ * Failure dies with SIGSEGV.
166
+ * of descriptor. For FEAT_LPA2 and effective DS, bits [51:50] of
167
+ * descaddr are in [9:8]. Otherwise, if descaddr is out of range,
168
+ * raise AddressSizeFault.
67
+ */
169
+ */
68
+ asm("str %0, [%0]" : : "r"(p + 0x0ffc));
170
+ if (outputsize > 48) {
69
+ return 0;
171
+ if (param.ds) {
70
+}
172
+ descaddr |= extract64(descriptor, 8, 2) << 50;
71
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
173
+ } else {
72
index XXXXXXX..XXXXXXX 100644
174
+ descaddr |= extract64(descriptor, 12, 4) << 48;
73
--- a/tests/tcg/aarch64/Makefile.target
175
+ }
74
+++ b/tests/tcg/aarch64/Makefile.target
176
+ } else if (descaddr >> outputsize) {
75
@@ -XXX,XX +XXX,XX @@ AARCH64_TESTS += bti-2
177
+ fault_type = ARMFault_AddressSize;
76
178
+ goto do_fault;
77
# MTE Tests
179
+ }
78
ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
180
+
79
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6
181
+ if ((descriptor & 2) && (level < 3)) {
80
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6 mte-7
182
+ /*
81
mte-%: CFLAGS += -march=armv8.5-a+memtag
183
+ * Table entry. The top five bits are attributes which may
82
endif
184
+ * propagate down through lower levels of the table (and
83
185
+ * which are all arranged so that 0 means "no effect", so
186
+ * we can gather them up by ORing in the bits at each level).
187
+ */
188
+ tableattrs |= extract64(descriptor, 59, 5);
189
+ level++;
190
+ indexmask = indexmask_grainsize;
191
+ goto next_level;
192
+ }
193
+
194
+ /*
195
+ * Block entry at level 1 or 2, or page entry at level 3.
196
+ * These are basically the same thing, although the number
197
+ * of bits we pull in from the vaddr varies. Note that although
198
+ * descaddrmask masks enough of the low bits of the descriptor
199
+ * to give a correct page or table address, the address field
200
+ * in a block descriptor is smaller; so we need to explicitly
201
+ * clear the lower bits here before ORing in the low vaddr bits.
202
+ */
203
+ page_size = (1ULL << ((stride * (4 - level)) + 3));
204
+ descaddr &= ~(hwaddr)(page_size - 1);
205
+ descaddr |= (address & (page_size - 1));
206
+ /* Extract attributes from the descriptor */
207
+ attrs = extract64(descriptor, 2, 10)
208
+ | (extract64(descriptor, 52, 12) << 10);
209
+
210
+ if (regime_is_stage2(mmu_idx)) {
211
+ /* Stage 2 table descriptors do not include any attribute fields */
212
+ goto skip_attrs;
213
+ }
214
+ /* Merge in attributes from table descriptors */
215
+ attrs |= nstable << 3; /* NS */
216
+ guarded = extract64(descriptor, 50, 1); /* GP */
217
+ if (param.hpd) {
218
+ /* HPD disables all the table attributes except NSTable. */
219
+ goto skip_attrs;
220
+ }
221
+ attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
222
+ /*
223
+ * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
224
+ * means "force PL1 access only", which means forcing AP[1] to 0.
225
+ */
226
+ attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
227
+ attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
228
+ skip_attrs:
229
+
230
/*
231
* Here descaddr is the final physical address, and attributes
232
* are all in attrs.
84
--
233
--
85
2.20.1
234
2.25.1
86
87
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The default of this switch is truly unreachable.
3
Always overriding fi->type was incorrect, as we would not properly
4
The switch selector is 3 bits, and all 8 cases are present.
4
propagate the fault type from S1_ptw_translate, or arm_ldq_ptw.
5
Simplify things by providing a new label for a translation fault.
6
For other faults, store into fi directly.
5
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Message-id: 20210604183506.916654-3-richard.henderson@linaro.org
11
Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
target/arm/translate-a64.c | 1 -
14
target/arm/ptw.c | 31 +++++++++++++------------------
13
1 file changed, 1 deletion(-)
15
1 file changed, 13 insertions(+), 18 deletions(-)
14
16
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
19
--- a/target/arm/ptw.c
18
+++ b/target/arm/translate-a64.c
20
+++ b/target/arm/ptw.c
19
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
21
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
22
ARMCPU *cpu = env_archcpu(env);
23
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
24
bool is_secure = ptw->in_secure;
25
- /* Read an LPAE long-descriptor translation table. */
26
- ARMFaultType fault_type = ARMFault_Translation;
27
uint32_t level;
28
ARMVAParameters param;
29
uint64_t ttbr;
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
31
* so our choice is to always raise the fault.
32
*/
33
if (param.tsz_oob) {
34
- fault_type = ARMFault_Translation;
35
- goto do_fault;
36
+ goto do_translation_fault;
20
}
37
}
21
break;
38
22
default:
39
addrsize = 64 - 8 * param.tbi;
23
- fprintf(stderr, "%s: cmode_3_1: %x\n", __func__, cmode_3_1);
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
24
g_assert_not_reached();
41
addrsize - inputsize);
42
if (-top_bits != param.select) {
43
/* The gap between the two regions is a Translation fault */
44
- fault_type = ARMFault_Translation;
45
- goto do_fault;
46
+ goto do_translation_fault;
47
}
25
}
48
}
26
49
50
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
51
* Translation table walk disabled => Translation fault on TLB miss
52
* Note: This is always 0 on 64-bit EL2 and EL3.
53
*/
54
- goto do_fault;
55
+ goto do_translation_fault;
56
}
57
58
if (!regime_is_stage2(mmu_idx)) {
59
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
60
if (param.ds && stride == 9 && sl2) {
61
if (sl0 != 0) {
62
level = 0;
63
- fault_type = ARMFault_Translation;
64
- goto do_fault;
65
+ goto do_translation_fault;
66
}
67
startlevel = -1;
68
} else if (!aarch64 || stride == 9) {
69
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
70
ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
71
inputsize, stride, outputsize);
72
if (!ok) {
73
- fault_type = ARMFault_Translation;
74
- goto do_fault;
75
+ goto do_translation_fault;
76
}
77
level = startlevel;
78
}
79
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
80
descaddr |= extract64(ttbr, 2, 4) << 48;
81
} else if (descaddr >> outputsize) {
82
level = 0;
83
- fault_type = ARMFault_AddressSize;
84
+ fi->type = ARMFault_AddressSize;
85
goto do_fault;
86
}
87
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
89
90
if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
91
/* Invalid, or the Reserved level 3 encoding */
92
- goto do_fault;
93
+ goto do_translation_fault;
94
}
95
96
descaddr = descriptor & descaddrmask;
97
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
98
descaddr |= extract64(descriptor, 12, 4) << 48;
99
}
100
} else if (descaddr >> outputsize) {
101
- fault_type = ARMFault_AddressSize;
102
+ fi->type = ARMFault_AddressSize;
103
goto do_fault;
104
}
105
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
107
* Here descaddr is the final physical address, and attributes
108
* are all in attrs.
109
*/
110
- fault_type = ARMFault_AccessFlag;
111
if ((attrs & (1 << 8)) == 0) {
112
/* Access flag */
113
+ fi->type = ARMFault_AccessFlag;
114
goto do_fault;
115
}
116
117
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
118
result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
119
}
120
121
- fault_type = ARMFault_Permission;
122
if (!(result->f.prot & (1 << access_type))) {
123
+ fi->type = ARMFault_Permission;
124
goto do_fault;
125
}
126
127
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
128
result->f.lg_page_size = ctz64(page_size);
129
return false;
130
131
-do_fault:
132
- fi->type = fault_type;
133
+ do_translation_fault:
134
+ fi->type = ARMFault_Translation;
135
+ do_fault:
136
fi->level = level;
137
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
138
fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
27
--
139
--
28
2.20.1
140
2.25.1
29
141
30
142
diff view generated by jsdifflib
1
On A-profile, PSR bits [15:10][26:25] are always the IT state bits.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
On M-profile, some of the reserved encodings of the IT state are used
3
to instead indicate partial progress through instructions that were
4
interrupted partway through by an exception and can be resumed.
5
2
6
These resumable instructions fall into two categories:
3
Leave the upper and lower attributes in the place they originate
4
from in the descriptor. Shifting them around is confusing, since
5
one cannot read the bit numbers out of the manual. Also, new
6
attributes have been added which would alter the shifts.
7
7
8
(1) load/store multiple instructions, where these bits are called
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
"ICI" and specify the register in the ldm/stm list where execution
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
should resume. (Specifically: LDM, STM, VLDM, VSTM, VLLDM, VLSTM,
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
CLRM, VSCCLRM.)
11
Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/ptw.c | 31 +++++++++++++++----------------
15
1 file changed, 15 insertions(+), 16 deletions(-)
12
16
13
(2) MVE instructions subject to beatwise execution, where these bits
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
14
are called "ECI" and specify which beats in this and possibly also
15
the following MVE insn have been executed.
16
17
There are also a few insns (LE, LETP, and BKPT) which do not use the
18
ICI/ECI bits but must leave them alone.
19
20
Otherwise, we should raise an INVSTATE UsageFault for any attempt to
21
execute an insn with non-zero ICI/ECI bits.
22
23
So far we have been able to ignore ECI/ICI, because the architecture
24
allows the IMPDEF choice of "always restart load/store multiple from
25
the beginning regardless of ICI state", so the only thing we have
26
been missing is that we don't raise the INVSTATE fault for bad guest
27
code. However, MVE requires that we honour ECI bits and do not
28
re-execute beats of an insn that have already been executed.
29
30
Add the support in the decoder for handling ECI/ICI:
31
* identify the ECI/ICI case in the CONDEXEC TB flags
32
* when a load/store multiple insn succeeds, it updates the ECI/ICI
33
state (both in DisasContext and in the CPU state), and sets a flag
34
to say that the ECI/ICI state was handled
35
* if we find that the insn we just decoded did not handle the
36
ECI/ICI state, we delete all the code that we just generated for
37
it and instead emit the code to raise the INVFAULT. This allows
38
us to avoid having to update every non-MVE non-LDM/STM insn to
39
make it check for "is ECI/ICI set?".
40
41
We continue with our existing IMPDEF choice of not caring about the
42
ICI state for the load/store multiples and simply restarting them
43
from the beginning. Because we don't allow interrupts in the middle
44
of an insn, the only way we would see this state is if the guest set
45
ICI manually on return from an exception handler, so it's a corner
46
case which doesn't merit optimisation.
47
48
ICI update for LDM/STM is simple -- it always zeroes the state. ECI
49
update for MVE beatwise insns will be a little more complex, since
50
the ECI state may include information for the following insn.
51
52
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
53
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
54
Message-id: 20210614151007.4545-5-peter.maydell@linaro.org
55
---
56
target/arm/translate-a32.h | 1 +
57
target/arm/translate.h | 9 +++
58
target/arm/translate-m-nocp.c | 11 ++++
59
target/arm/translate-vfp.c | 6 ++
60
target/arm/translate.c | 111 ++++++++++++++++++++++++++++++++--
61
5 files changed, 133 insertions(+), 5 deletions(-)
62
63
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
64
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/translate-a32.h
19
--- a/target/arm/ptw.c
66
+++ b/target/arm/translate-a32.h
20
+++ b/target/arm/ptw.c
67
@@ -XXX,XX +XXX,XX @@ long vfp_reg_offset(bool dp, unsigned reg);
21
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
68
long neon_full_reg_offset(unsigned reg);
22
hwaddr descaddr, indexmask, indexmask_grainsize;
69
long neon_element_offset(int reg, int element, MemOp memop);
23
uint32_t tableattrs;
70
void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
24
target_ulong page_size;
71
+void clear_eci_state(DisasContext *s);
25
- uint32_t attrs;
72
26
+ uint64_t attrs;
73
static inline TCGv_i32 load_cpu_offset(int offset)
27
int32_t stride;
74
{
28
int addrsize, inputsize, outputsize;
75
diff --git a/target/arm/translate.h b/target/arm/translate.h
29
uint64_t tcr = regime_tcr(env, mmu_idx);
76
index XXXXXXX..XXXXXXX 100644
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
77
--- a/target/arm/translate.h
31
descaddr &= ~(hwaddr)(page_size - 1);
78
+++ b/target/arm/translate.h
32
descaddr |= (address & (page_size - 1));
79
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
33
/* Extract attributes from the descriptor */
80
/* Thumb-2 conditional execution bits. */
34
- attrs = extract64(descriptor, 2, 10)
81
int condexec_mask;
35
- | (extract64(descriptor, 52, 12) << 10);
82
int condexec_cond;
36
+ attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(52, 12));
83
+ /* M-profile ECI/ICI exception-continuable instruction state */
37
84
+ int eci;
38
if (regime_is_stage2(mmu_idx)) {
85
+ /*
39
/* Stage 2 table descriptors do not include any attribute fields */
86
+ * trans_ functions for insns which are continuable should set this true
40
goto skip_attrs;
87
+ * after decode (ie after any UNDEF checks)
88
+ */
89
+ bool eci_handled;
90
+ /* TCG op to rewind to if this turns out to be an invalid ECI state */
91
+ TCGOp *insn_eci_rewind;
92
int thumb;
93
int sctlr_b;
94
MemOp be_data;
95
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/arm/translate-m-nocp.c
98
+++ b/target/arm/translate-m-nocp.c
99
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
100
unallocated_encoding(s);
101
return true;
102
}
41
}
103
+
42
/* Merge in attributes from table descriptors */
104
+ s->eci_handled = true;
43
- attrs |= nstable << 3; /* NS */
105
+
44
+ attrs |= nstable << 5; /* NS */
106
/* If no fpu, NOP. */
45
guarded = extract64(descriptor, 50, 1); /* GP */
107
if (!dc_isar_feature(aa32_vfp, s)) {
46
if (param.hpd) {
108
+ clear_eci_state(s);
47
/* HPD disables all the table attributes except NSTable. */
109
return true;
48
goto skip_attrs;
110
}
49
}
111
50
- attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */
112
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
51
+ attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
52
/*
53
* The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
54
* means "force PL1 access only", which means forcing AP[1] to 0.
55
*/
56
- attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */
57
- attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */
58
+ attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
59
+ attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
60
skip_attrs:
61
62
/*
63
* Here descaddr is the final physical address, and attributes
64
* are all in attrs.
65
*/
66
- if ((attrs & (1 << 8)) == 0) {
67
+ if ((attrs & (1 << 10)) == 0) {
68
/* Access flag */
69
fi->type = ARMFault_AccessFlag;
70
goto do_fault;
113
}
71
}
114
tcg_temp_free_i32(fptr);
72
115
73
- ap = extract32(attrs, 4, 2);
116
+ clear_eci_state(s);
74
+ ap = extract32(attrs, 6, 2);
117
+
75
118
/* End the TB, because we have updated FP control bits */
76
if (regime_is_stage2(mmu_idx)) {
119
s->base.is_jmp = DISAS_UPDATE_EXIT;
77
ns = mmu_idx == ARMMMUIdx_Stage2;
120
return true;
78
- xn = extract32(attrs, 11, 2);
121
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
79
+ xn = extract64(attrs, 53, 2);
122
return true;
80
result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
81
} else {
82
- ns = extract32(attrs, 3, 1);
83
- xn = extract32(attrs, 12, 1);
84
- pxn = extract32(attrs, 11, 1);
85
+ ns = extract32(attrs, 5, 1);
86
+ xn = extract64(attrs, 54, 1);
87
+ pxn = extract64(attrs, 53, 1);
88
result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
123
}
89
}
124
90
125
+ s->eci_handled = true;
91
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
126
+
92
127
if (!dc_isar_feature(aa32_vfp_simd, s)) {
93
if (regime_is_stage2(mmu_idx)) {
128
/* NOP if we have neither FP nor MVE */
94
result->cacheattrs.is_s2_format = true;
129
+ clear_eci_state(s);
95
- result->cacheattrs.attrs = extract32(attrs, 0, 4);
130
return true;
96
+ result->cacheattrs.attrs = extract32(attrs, 2, 4);
97
} else {
98
/* Index into MAIR registers for cache attributes */
99
- uint8_t attrindx = extract32(attrs, 0, 3);
100
+ uint8_t attrindx = extract32(attrs, 2, 3);
101
uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
102
assert(attrindx <= 7);
103
result->cacheattrs.is_s2_format = false;
104
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
105
if (param.ds) {
106
result->cacheattrs.shareability = param.sh;
107
} else {
108
- result->cacheattrs.shareability = extract32(attrs, 6, 2);
109
+ result->cacheattrs.shareability = extract32(attrs, 8, 2);
131
}
110
}
132
111
133
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
112
result->f.phys_addr = descaddr;
134
TCGv_i32 z32 = tcg_const_i32(0);
135
store_cpu_field(z32, v7m.vpr);
136
}
137
+
138
+ clear_eci_state(s);
139
return true;
140
}
141
142
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
143
index XXXXXXX..XXXXXXX 100644
144
--- a/target/arm/translate-vfp.c
145
+++ b/target/arm/translate-vfp.c
146
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
147
return false;
148
}
149
150
+ s->eci_handled = true;
151
+
152
if (!vfp_access_check(s)) {
153
return true;
154
}
155
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
156
tcg_temp_free_i32(addr);
157
}
158
159
+ clear_eci_state(s);
160
return true;
161
}
162
163
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
164
return false;
165
}
166
167
+ s->eci_handled = true;
168
+
169
if (!vfp_access_check(s)) {
170
return true;
171
}
172
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
173
tcg_temp_free_i32(addr);
174
}
175
176
+ clear_eci_state(s);
177
return true;
178
}
179
180
diff --git a/target/arm/translate.c b/target/arm/translate.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/translate.c
183
+++ b/target/arm/translate.c
184
@@ -XXX,XX +XXX,XX @@ static inline bool is_singlestepping(DisasContext *s)
185
return s->base.singlestep_enabled || s->ss_active;
186
}
187
188
+void clear_eci_state(DisasContext *s)
189
+{
190
+ /*
191
+ * Clear any ECI/ICI state: used when a load multiple/store
192
+ * multiple insn executes.
193
+ */
194
+ if (s->eci) {
195
+ TCGv_i32 tmp = tcg_const_i32(0);
196
+ store_cpu_field(tmp, condexec_bits);
197
+ s->eci = 0;
198
+ }
199
+}
200
+
201
static void gen_smul_dual(TCGv_i32 a, TCGv_i32 b)
202
{
203
TCGv_i32 tmp1 = tcg_temp_new_i32();
204
@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
205
if (!ENABLE_ARCH_5) {
206
return false;
207
}
208
+ /* BKPT is OK with ECI set and leaves it untouched */
209
+ s->eci_handled = true;
210
if (arm_dc_feature(s, ARM_FEATURE_M) &&
211
semihosting_enabled() &&
212
#ifndef CONFIG_USER_ONLY
213
@@ -XXX,XX +XXX,XX @@ static bool op_stm(DisasContext *s, arg_ldst_block *a, int min_n)
214
return true;
215
}
216
217
+ s->eci_handled = true;
218
+
219
addr = op_addr_block_pre(s, a, n);
220
mem_idx = get_mem_index(s);
221
222
@@ -XXX,XX +XXX,XX @@ static bool op_stm(DisasContext *s, arg_ldst_block *a, int min_n)
223
}
224
225
op_addr_block_post(s, a, addr, n);
226
+ clear_eci_state(s);
227
return true;
228
}
229
230
@@ -XXX,XX +XXX,XX @@ static bool do_ldm(DisasContext *s, arg_ldst_block *a, int min_n)
231
return true;
232
}
233
234
+ s->eci_handled = true;
235
+
236
addr = op_addr_block_pre(s, a, n);
237
mem_idx = get_mem_index(s);
238
loaded_base = false;
239
@@ -XXX,XX +XXX,XX @@ static bool do_ldm(DisasContext *s, arg_ldst_block *a, int min_n)
240
/* Must exit loop to check un-masked IRQs */
241
s->base.is_jmp = DISAS_EXIT;
242
}
243
+ clear_eci_state(s);
244
return true;
245
}
246
247
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
248
return false;
249
}
250
251
+ s->eci_handled = true;
252
+
253
zero = tcg_const_i32(0);
254
for (i = 0; i < 15; i++) {
255
if (extract32(a->list, i, 1)) {
256
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
257
tcg_temp_free_i32(maskreg);
258
}
259
tcg_temp_free_i32(zero);
260
+ clear_eci_state(s);
261
return true;
262
}
263
264
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
265
return false;
266
}
267
268
+ /* LE/LETP is OK with ECI set and leaves it untouched */
269
+ s->eci_handled = true;
270
+
271
if (!a->f) {
272
/* Not loop-forever. If LR <= 1 this is the last loop: do nothing. */
273
arm_gen_condlabel(s);
274
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
275
dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
276
dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
277
condexec = EX_TBFLAG_AM32(tb_flags, CONDEXEC);
278
- dc->condexec_mask = (condexec & 0xf) << 1;
279
- dc->condexec_cond = condexec >> 4;
280
+ /*
281
+ * the CONDEXEC TB flags are CPSR bits [15:10][26:25]. On A-profile this
282
+ * is always the IT bits. On M-profile, some of the reserved encodings
283
+ * of IT are used instead to indicate either ICI or ECI, which
284
+ * indicate partial progress of a restartable insn that was interrupted
285
+ * partway through by an exception:
286
+ * * if CONDEXEC[3:0] != 0b0000 : CONDEXEC is IT bits
287
+ * * if CONDEXEC[3:0] == 0b0000 : CONDEXEC is ICI or ECI bits
288
+ * In all cases CONDEXEC == 0 means "not in IT block or restartable
289
+ * insn, behave normally".
290
+ */
291
+ dc->eci = dc->condexec_mask = dc->condexec_cond = 0;
292
+ dc->eci_handled = false;
293
+ dc->insn_eci_rewind = NULL;
294
+ if (condexec & 0xf) {
295
+ dc->condexec_mask = (condexec & 0xf) << 1;
296
+ dc->condexec_cond = condexec >> 4;
297
+ } else {
298
+ if (arm_feature(env, ARM_FEATURE_M)) {
299
+ dc->eci = condexec >> 4;
300
+ }
301
+ }
302
303
core_mmu_idx = EX_TBFLAG_ANY(tb_flags, MMUIDX);
304
dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
305
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
306
static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
307
{
308
DisasContext *dc = container_of(dcbase, DisasContext, base);
309
+ /*
310
+ * The ECI/ICI bits share PSR bits with the IT bits, so we
311
+ * need to reconstitute the bits from the split-out DisasContext
312
+ * fields here.
313
+ */
314
+ uint32_t condexec_bits;
315
316
- tcg_gen_insn_start(dc->base.pc_next,
317
- (dc->condexec_cond << 4) | (dc->condexec_mask >> 1),
318
- 0);
319
+ if (dc->eci) {
320
+ condexec_bits = dc->eci << 4;
321
+ } else {
322
+ condexec_bits = (dc->condexec_cond << 4) | (dc->condexec_mask >> 1);
323
+ }
324
+ tcg_gen_insn_start(dc->base.pc_next, condexec_bits, 0);
325
dc->insn_start = tcg_last_op();
326
}
327
328
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
329
}
330
dc->insn = insn;
331
332
+ if (dc->eci) {
333
+ /*
334
+ * For M-profile continuable instructions, ECI/ICI handling
335
+ * falls into these cases:
336
+ * - interrupt-continuable instructions
337
+ * These are the various load/store multiple insns (both
338
+ * integer and fp). The ICI bits indicate the register
339
+ * where the load/store can resume. We make the IMPDEF
340
+ * choice to always do "instruction restart", ie ignore
341
+ * the ICI value and always execute the ldm/stm from the
342
+ * start. So all we need to do is zero PSR.ICI if the
343
+ * insn executes.
344
+ * - MVE instructions subject to beat-wise execution
345
+ * Here the ECI bits indicate which beats have already been
346
+ * executed, and we must honour this. Each insn of this
347
+ * type will handle it correctly. We will update PSR.ECI
348
+ * in the helper function for the insn (some ECI values
349
+ * mean that the following insn also has been partially
350
+ * executed).
351
+ * - Special cases which don't advance ECI
352
+ * The insns LE, LETP and BKPT leave the ECI/ICI state
353
+ * bits untouched.
354
+ * - all other insns (the common case)
355
+ * Non-zero ECI/ICI means an INVSTATE UsageFault.
356
+ * We place a rewind-marker here. Insns in the previous
357
+ * three categories will set a flag in the DisasContext.
358
+ * If the flag isn't set after we call disas_thumb_insn()
359
+ * or disas_thumb2_insn() then we know we have a "some other
360
+ * insn" case. We will rewind to the marker (ie throwing away
361
+ * all the generated code) and instead emit "take exception".
362
+ */
363
+ dc->insn_eci_rewind = tcg_last_op();
364
+ }
365
+
366
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
367
uint32_t cond = dc->condexec_cond;
368
369
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
370
}
371
}
372
373
+ if (dc->eci && !dc->eci_handled) {
374
+ /*
375
+ * Insn wasn't valid for ECI/ICI at all: undo what we
376
+ * just generated and instead emit an exception
377
+ */
378
+ tcg_remove_ops_after(dc->insn_eci_rewind);
379
+ dc->condjmp = 0;
380
+ gen_exception_insn(dc, dc->pc_curr, EXCP_INVSTATE, syn_uncategorized(),
381
+ default_exception_el(dc));
382
+ }
383
+
384
arm_post_translate_insn(dc);
385
386
/* Thumb is a variable-length ISA. Stop translation when the next insn
387
--
2.20.1

--
2.25.1
MVE has an FPSCR.QC bit similar to the A-profile Neon one; when MVE
is implemented make the bit writeable, both in the generic "load and
store FPSCR" helper functions and in the code for handling the NZCVQC
sysreg which we had previously left as "TODO when we implement MVE".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210614151007.4545-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 30 +++++++++++++++++++++---------
 target/arm/vfp_helper.c | 3 ++-
 2 files changed, 23 insertions(+), 10 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Both GP and DBM are in the upper attribute block.
Extend the computation of attrs to include them,
then simplify the setting of guarded.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)
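As a rough standalone illustration of the attrs handling described above (not part of the patch): the idea is to keep the descriptor's lower and upper attribute blocks at their architectural bit positions rather than repacking them into a small integer. MAKE_64BIT_MASK and extract64 are re-implemented locally here with the same semantics as QEMU's helpers, and the descriptor value is made up for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Local stand-ins for QEMU's helpers, for illustration only. */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    static inline uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    int main(void)
    {
        /* A made-up stage-1 page descriptor with AF, AP[2:1] and XN set. */
        uint64_t descriptor = (1ULL << 10)   /* AF */
                            | (2ULL << 6)    /* AP */
                            | (1ULL << 54);  /* XN */

        /*
         * Keep the lower (bits [11:2]) and upper (bits [63:50]) attribute
         * blocks in place instead of repacking them.
         */
        uint64_t attrs = descriptor & (MAKE_64BIT_MASK(2, 10) |
                                       MAKE_64BIT_MASK(50, 14));

        /* Fields read back at their architectural positions. */
        printf("AF=%llu AP=%llu XN=%llu\n",
               (unsigned long long)extract64(attrs, 10, 1),
               (unsigned long long)extract64(attrs, 6, 2),
               (unsigned long long)extract64(attrs, 54, 1));
        return 0;
    }

This is why the extract offsets in the earlier ptw.c hunks move from the packed positions (AF at bit 8, AP at 4, XN at 11) to the descriptor positions (AF at 10, AP at 6, XN at 53/54).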
13
16
14
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-vfp.c
19
--- a/target/arm/ptw.c
17
+++ b/target/arm/translate-vfp.c
20
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
21
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
19
{
22
uint32_t el = regime_el(env, mmu_idx);
20
TCGv_i32 fpscr;
23
uint64_t descaddrmask;
21
tmp = loadfn(s, opaque);
24
bool aarch64 = arm_el_is_aa64(env, el);
22
- /*
25
- bool guarded = false;
23
- * TODO: when we implement MVE, write the QC bit.
26
uint64_t descriptor;
24
- * For non-MVE, QC is RES0.
27
bool nstable;
25
- */
28
26
+ if (dc_isar_feature(aa32_mve, s)) {
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
27
+ /* QC is only present for MVE; otherwise RES0 */
30
descaddr &= ~(hwaddr)(page_size - 1);
28
+ TCGv_i32 qc = tcg_temp_new_i32();
31
descaddr |= (address & (page_size - 1));
29
+ tcg_gen_andi_i32(qc, tmp, FPCR_QC);
32
/* Extract attributes from the descriptor */
30
+ /*
33
- attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(52, 12));
31
+ * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
34
+ attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
32
+ * here writing the same value into all elements is simplest.
35
33
+ */
36
if (regime_is_stage2(mmu_idx)) {
34
+ tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
37
/* Stage 2 table descriptors do not include any attribute fields */
35
+ 16, 16, qc);
38
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
36
+ }
37
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
38
fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
39
tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
40
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
41
break;
42
}
39
}
43
40
/* Merge in attributes from table descriptors */
44
+ if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
41
attrs |= nstable << 5; /* NS */
45
+ /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
42
- guarded = extract64(descriptor, 50, 1); /* GP */
46
+ regno = QEMU_VFP_FPSCR_NZCV;
43
if (param.hpd) {
47
+ }
44
/* HPD disables all the table attributes except NSTable. */
48
+
45
goto skip_attrs;
49
switch (regno) {
46
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
50
case ARM_VFP_FPSCR:
47
51
tmp = tcg_temp_new_i32();
48
/* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
52
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
49
if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
53
storefn(s, opaque, tmp);
50
- result->f.guarded = guarded;
54
break;
51
+ result->f.guarded = extract64(attrs, 50, 1); /* GP */
55
case ARM_VFP_FPSCR_NZCVQC:
56
- /*
57
- * TODO: MVE has a QC bit, which we probably won't store
58
- * in the xregs[] field. For non-MVE, where QC is RES0,
59
- * we can just fall through to the FPSCR_NZCV case.
60
- */
61
+ tmp = tcg_temp_new_i32();
62
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
63
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
64
+ storefn(s, opaque, tmp);
65
+ break;
66
case QEMU_VFP_FPSCR_NZCV:
67
/*
68
* Read just NZCV; this is a special case to avoid the
69
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/vfp_helper.c
72
+++ b/target/arm/vfp_helper.c
73
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
74
FPCR_LTPSIZE_LENGTH);
75
}
52
}
76
53
77
- if (arm_feature(env, ARM_FEATURE_NEON)) {
54
if (regime_is_stage2(mmu_idx)) {
78
+ if (arm_feature(env, ARM_FEATURE_NEON) ||
79
+ cpu_isar_feature(aa32_mve, cpu)) {
80
/*
81
* The bit we set within fpscr_q is arbitrary; the register as a
82
* whole being zero/non-zero is what counts.
83
--
55
--
84
2.20.1
56
2.25.1
85
57
86
58
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This fprintf+assert has been in place since the beginning.
3
Replace some gotos with some nested if statements.
4
It is after to the fp_access_check, so we need to move the
5
check up. Fold that in to the pairwise filter.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 20210604183506.916654-4-richard.henderson@linaro.org
7
Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
target/arm/translate-a64.c | 82 +++++++++++++++++++++++---------------
10
target/arm/ptw.c | 34 ++++++++++++++++------------------
14
1 file changed, 50 insertions(+), 32 deletions(-)
11
1 file changed, 16 insertions(+), 18 deletions(-)
15
12
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
15
--- a/target/arm/ptw.c
19
+++ b/target/arm/translate-a64.c
16
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
21
*/
18
page_size = (1ULL << ((stride * (4 - level)) + 3));
22
static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
19
descaddr &= ~(hwaddr)(page_size - 1);
23
{
20
descaddr |= (address & (page_size - 1));
24
- int opcode, fpopcode;
21
- /* Extract attributes from the descriptor */
25
- int is_q, u, a, rm, rn, rd;
22
- attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
26
- int datasize, elements;
23
27
- int pass;
24
- if (regime_is_stage2(mmu_idx)) {
28
+ int opcode = extract32(insn, 11, 3);
25
- /* Stage 2 table descriptors do not include any attribute fields */
29
+ int u = extract32(insn, 29, 1);
26
- goto skip_attrs;
30
+ int a = extract32(insn, 23, 1);
27
- }
31
+ int is_q = extract32(insn, 30, 1);
28
- /* Merge in attributes from table descriptors */
32
+ int rm = extract32(insn, 16, 5);
29
- attrs |= nstable << 5; /* NS */
33
+ int rn = extract32(insn, 5, 5);
30
- if (param.hpd) {
34
+ int rd = extract32(insn, 0, 5);
31
- /* HPD disables all the table attributes except NSTable. */
35
+ /*
32
- goto skip_attrs;
36
+ * For these floating point ops, the U, a and opcode bits
33
- }
37
+ * together indicate the operation.
34
- attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
38
+ */
35
/*
39
+ int fpopcode = opcode | (a << 3) | (u << 4);
36
- * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
40
+ int datasize = is_q ? 128 : 64;
37
- * means "force PL1 access only", which means forcing AP[1] to 0.
41
+ int elements = datasize / 16;
38
+ * Extract attributes from the descriptor, and apply table descriptors.
42
+ bool pairwise;
39
+ * Stage 2 table descriptors do not include any attribute fields.
43
TCGv_ptr fpst;
40
+ * HPD disables all the table attributes except NSTable.
44
- bool pairwise = false;
41
*/
45
+ int pass;
42
- attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
46
+
43
- attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
47
+ switch (fpopcode) {
44
- skip_attrs:
48
+ case 0x0: /* FMAXNM */
45
+ attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
49
+ case 0x1: /* FMLA */
46
+ if (!regime_is_stage2(mmu_idx)) {
50
+ case 0x2: /* FADD */
47
+ attrs |= nstable << 5; /* NS */
51
+ case 0x3: /* FMULX */
48
+ if (!param.hpd) {
52
+ case 0x4: /* FCMEQ */
49
+ attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
53
+ case 0x6: /* FMAX */
50
+ /*
54
+ case 0x7: /* FRECPS */
51
+ * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1
55
+ case 0x8: /* FMINNM */
52
+ * means "force PL1 access only", which means forcing AP[1] to 0.
56
+ case 0x9: /* FMLS */
53
+ */
57
+ case 0xa: /* FSUB */
54
+ attrs &= ~(extract64(tableattrs, 2, 1) << 6); /* !APT[0] => AP[1] */
58
+ case 0xe: /* FMIN */
55
+ attrs |= extract32(tableattrs, 3, 1) << 7; /* APT[1] => AP[2] */
59
+ case 0xf: /* FRSQRTS */
56
+ }
60
+ case 0x13: /* FMUL */
61
+ case 0x14: /* FCMGE */
62
+ case 0x15: /* FACGE */
63
+ case 0x17: /* FDIV */
64
+ case 0x1a: /* FABD */
65
+ case 0x1c: /* FCMGT */
66
+ case 0x1d: /* FACGT */
67
+ pairwise = false;
68
+ break;
69
+ case 0x10: /* FMAXNMP */
70
+ case 0x12: /* FADDP */
71
+ case 0x16: /* FMAXP */
72
+ case 0x18: /* FMINNMP */
73
+ case 0x1e: /* FMINP */
74
+ pairwise = true;
75
+ break;
76
+ default:
77
+ unallocated_encoding(s);
78
+ return;
79
+ }
57
+ }
80
58
81
if (!dc_isar_feature(aa64_fp16, s)) {
59
/*
82
unallocated_encoding(s);
60
* Here descaddr is the final physical address, and attributes
83
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
84
return;
85
}
86
87
- /* For these floating point ops, the U, a and opcode bits
88
- * together indicate the operation.
89
- */
90
- opcode = extract32(insn, 11, 3);
91
- u = extract32(insn, 29, 1);
92
- a = extract32(insn, 23, 1);
93
- is_q = extract32(insn, 30, 1);
94
- rm = extract32(insn, 16, 5);
95
- rn = extract32(insn, 5, 5);
96
- rd = extract32(insn, 0, 5);
97
-
98
- fpopcode = opcode | (a << 3) | (u << 4);
99
- datasize = is_q ? 128 : 64;
100
- elements = datasize / 16;
101
-
102
- switch (fpopcode) {
103
- case 0x10: /* FMAXNMP */
104
- case 0x12: /* FADDP */
105
- case 0x16: /* FMAXP */
106
- case 0x18: /* FMINNMP */
107
- case 0x1e: /* FMINP */
108
- pairwise = true;
109
- break;
110
- }
111
-
112
fpst = fpstatus_ptr(FPST_FPCR_F16);
113
114
if (pairwise) {
115
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
116
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
117
break;
118
default:
119
- fprintf(stderr, "%s: insn 0x%04x, fpop 0x%2x @ 0x%" PRIx64 "\n",
120
- __func__, insn, fpopcode, s->pc_curr);
121
g_assert_not_reached();
122
}
123
124
--
61
--
125
2.20.1
62
2.25.1
126
63
127
64
From: Patrick Venture <venture@google.com>

Tested: Quanta-gsj firmware booted.

i2c /dev entries driver
I2C init bus 1 freq 100000
I2C init bus 2 freq 100000
I2C init bus 3 freq 100000
I2C init bus 4 freq 100000
I2C init bus 8 freq 100000
I2C init bus 9 freq 100000
at24 9-0055: 8192 byte 24c64 EEPROM, writable, 1 bytes/write
I2C init bus 10 freq 100000
at24 10-0055: 8192 byte 24c64 EEPROM, writable, 1 bytes/write
I2C init bus 12 freq 100000
I2C init bus 15 freq 100000
i2c i2c-15: Added multiplexed i2c bus 16
i2c i2c-15: Added multiplexed i2c bus 17
i2c i2c-15: Added multiplexed i2c bus 18
i2c i2c-15: Added multiplexed i2c bus 19
i2c i2c-15: Added multiplexed i2c bus 20
i2c i2c-15: Added multiplexed i2c bus 21
i2c i2c-15: Added multiplexed i2c bus 22
i2c i2c-15: Added multiplexed i2c bus 23
pca954x 15-0075: registered 8 multiplexed busses for I2C switch pca9548

Signed-off-by: Patrick Venture <venture@google.com>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20210608202522.2677850-3-venture@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/npcm7xx_boards.c | 6 ++----
 hw/arm/Kconfig | 1 +
 2 files changed, 3 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Perform the atomic update for hardware management of the access flag.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/ptw.c | 176 +++++++++++++++++++++++++++++-----
 3 files changed, 156 insertions(+), 22 deletions(-)
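As a simplified sketch of the access-flag decision described above (not the patch's code; the helper name and fault enum are made up for illustration): when the descriptor's AF bit is clear, the walker either prepares a hardware update of the bit (when HA is enabled) or reports an Access-flag fault to software.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: real QEMU uses ARMMMUFaultInfo and S1Translate. */
    enum fault { FAULT_NONE, FAULT_ACCESS_FLAG };

    #define DESC_AF (1ULL << 10)   /* Access Flag bit in an LPAE descriptor */

    /*
     * Return the (possibly updated) descriptor and report whether the
     * walk must fault.  'ha_enabled' models the HA control bit.
     */
    static uint64_t handle_access_flag(uint64_t descriptor, bool ha_enabled,
                                       enum fault *fault)
    {
        uint64_t new_descriptor = descriptor;

        *fault = FAULT_NONE;
        if (!(descriptor & DESC_AF)) {
            if (ha_enabled) {
                /* Hardware-managed: set AF, to be written back to the PTE. */
                new_descriptor |= DESC_AF;
            } else {
                /* Software-managed: deliver an Access flag fault. */
                *fault = FAULT_ACCESS_FLAG;
            }
        }
        return new_descriptor;
    }

    int main(void)
    {
        enum fault f;
        uint64_t d = handle_access_flag(0, true, &f);
        printf("AF now %d, fault %d\n", (int)!!(d & DESC_AF), (int)f);
        return 0;
    }

In the patch itself the modified descriptor is then written back with a compare-and-swap (arm_casq_ptw) and the walk restarts from the new value if the in-memory PTE changed underneath us.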
36
14
37
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
15
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
38
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
39
--- a/hw/arm/npcm7xx_boards.c
17
--- a/docs/system/arm/emulation.rst
40
+++ b/hw/arm/npcm7xx_boards.c
18
+++ b/docs/system/arm/emulation.rst
41
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
42
20
- FEAT_FlagM (Flag manipulation instructions v2)
43
#include "hw/arm/npcm7xx.h"
21
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
44
#include "hw/core/cpu.h"
22
- FEAT_GTG (Guest translation granule size)
45
+#include "hw/i2c/i2c_mux_pca954x.h"
23
+- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
46
#include "hw/i2c/smbus_eeprom.h"
24
- FEAT_HCX (Support for the HCRX_EL2 register)
47
#include "hw/loader.h"
25
- FEAT_HPDS (Hierarchical permission disables)
48
#include "hw/qdev-core.h"
26
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
49
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_i2c_init(NPCM7xxState *soc)
27
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
50
* - ucd90160@6b
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu64.c
30
+++ b/target/arm/cpu64.c
31
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
32
cpu->isar.id_aa64mmfr0 = t;
33
34
t = cpu->isar.id_aa64mmfr1;
35
+ t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 1); /* FEAT_HAFDBS, AF only */
36
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
37
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
38
t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
39
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/ptw.c
42
+++ b/target/arm/ptw.c
43
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
44
bool in_secure;
45
bool in_debug;
46
bool out_secure;
47
+ bool out_rw;
48
bool out_be;
49
+ hwaddr out_virt;
50
hwaddr out_phys;
51
void *out_host;
52
} S1Translate;
53
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
54
uint8_t pte_attrs;
55
bool pte_secure;
56
57
+ ptw->out_virt = addr;
58
+
59
if (unlikely(ptw->in_debug)) {
60
/*
61
* From gdbstub, do not use softmmu so that we don't modify the
62
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
63
pte_secure = is_secure;
64
}
65
ptw->out_host = NULL;
66
+ ptw->out_rw = false;
67
} else {
68
CPUTLBEntryFull *full;
69
int flags;
70
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
71
goto fail;
72
}
73
ptw->out_phys = full->phys_addr;
74
+ ptw->out_rw = full->prot & PROT_WRITE;
75
pte_attrs = full->pte_attrs;
76
pte_secure = full->attrs.secure;
77
}
78
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw,
79
ARMMMUFaultInfo *fi)
80
{
81
CPUState *cs = env_cpu(env);
82
+ void *host = ptw->out_host;
83
uint32_t data;
84
85
- if (likely(ptw->out_host)) {
86
+ if (likely(host)) {
87
/* Page tables are in RAM, and we have the host address. */
88
+ data = qatomic_read((uint32_t *)host);
89
if (ptw->out_be) {
90
- data = ldl_be_p(ptw->out_host);
91
+ data = be32_to_cpu(data);
92
} else {
93
- data = ldl_le_p(ptw->out_host);
94
+ data = le32_to_cpu(data);
95
}
96
} else {
97
/* Page tables are in MMIO. */
98
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
99
ARMMMUFaultInfo *fi)
100
{
101
CPUState *cs = env_cpu(env);
102
+ void *host = ptw->out_host;
103
uint64_t data;
104
105
- if (likely(ptw->out_host)) {
106
+ if (likely(host)) {
107
/* Page tables are in RAM, and we have the host address. */
108
+#ifdef CONFIG_ATOMIC64
109
+ data = qatomic_read__nocheck((uint64_t *)host);
110
if (ptw->out_be) {
111
- data = ldq_be_p(ptw->out_host);
112
+ data = be64_to_cpu(data);
113
} else {
114
- data = ldq_le_p(ptw->out_host);
115
+ data = le64_to_cpu(data);
116
}
117
+#else
118
+ if (ptw->out_be) {
119
+ data = ldq_be_p(host);
120
+ } else {
121
+ data = ldq_le_p(host);
122
+ }
123
+#endif
124
} else {
125
/* Page tables are in MMIO. */
126
MemTxAttrs attrs = { .secure = ptw->out_secure };
127
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
128
return data;
129
}
130
131
+static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
132
+ uint64_t new_val, S1Translate *ptw,
133
+ ARMMMUFaultInfo *fi)
134
+{
135
+ uint64_t cur_val;
136
+ void *host = ptw->out_host;
137
+
138
+ if (unlikely(!host)) {
139
+ fi->type = ARMFault_UnsuppAtomicUpdate;
140
+ fi->s1ptw = true;
141
+ return 0;
142
+ }
143
+
144
+ /*
145
+ * Raising a stage2 Protection fault for an atomic update to a read-only
146
+ * page is delayed until it is certain that there is a change to make.
147
+ */
148
+ if (unlikely(!ptw->out_rw)) {
149
+ int flags;
150
+ void *discard;
151
+
152
+ env->tlb_fi = fi;
153
+ flags = probe_access_flags(env, ptw->out_virt, MMU_DATA_STORE,
154
+ arm_to_core_mmu_idx(ptw->in_ptw_idx),
155
+ true, &discard, 0);
156
+ env->tlb_fi = NULL;
157
+
158
+ if (unlikely(flags & TLB_INVALID_MASK)) {
159
+ assert(fi->type != ARMFault_None);
160
+ fi->s2addr = ptw->out_virt;
161
+ fi->stage2 = true;
162
+ fi->s1ptw = true;
163
+ fi->s1ns = !ptw->in_secure;
164
+ return 0;
165
+ }
166
+
167
+ /* In case CAS mismatches and we loop, remember writability. */
168
+ ptw->out_rw = true;
169
+ }
170
+
171
+#ifdef CONFIG_ATOMIC64
172
+ if (ptw->out_be) {
173
+ old_val = cpu_to_be64(old_val);
174
+ new_val = cpu_to_be64(new_val);
175
+ cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
176
+ cur_val = be64_to_cpu(cur_val);
177
+ } else {
178
+ old_val = cpu_to_le64(old_val);
179
+ new_val = cpu_to_le64(new_val);
180
+ cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
181
+ cur_val = le64_to_cpu(cur_val);
182
+ }
183
+#else
184
+ /*
185
+ * We can't support the full 64-bit atomic cmpxchg on the host.
186
+ * Because this is only used for FEAT_HAFDBS, which is only for AA64,
187
+ * we know that TCG_OVERSIZED_GUEST is set, which means that we are
188
+ * running in round-robin mode and could only race with dma i/o.
189
+ */
190
+#ifndef TCG_OVERSIZED_GUEST
191
+# error "Unexpected configuration"
192
+#endif
193
+ bool locked = qemu_mutex_iothread_locked();
194
+ if (!locked) {
195
+ qemu_mutex_lock_iothread();
196
+ }
197
+ if (ptw->out_be) {
198
+ cur_val = ldq_be_p(host);
199
+ if (cur_val == old_val) {
200
+ stq_be_p(host, new_val);
201
+ }
202
+ } else {
203
+ cur_val = ldq_le_p(host);
204
+ if (cur_val == old_val) {
205
+ stq_le_p(host, new_val);
206
+ }
207
+ }
208
+ if (!locked) {
209
+ qemu_mutex_unlock_iothread();
210
+ }
211
+#endif
212
+
213
+ return cur_val;
214
+}
215
+
216
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
217
uint32_t *table, uint32_t address)
218
{
219
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
220
uint32_t el = regime_el(env, mmu_idx);
221
uint64_t descaddrmask;
222
bool aarch64 = arm_el_is_aa64(env, el);
223
- uint64_t descriptor;
224
+ uint64_t descriptor, new_descriptor;
225
bool nstable;
226
227
/* TODO: This code does not support shareability levels. */
228
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
229
if (fi->type != ARMFault_None) {
230
goto do_fault;
231
}
232
+ new_descriptor = descriptor;
233
234
+ restart_atomic_update:
235
if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
236
/* Invalid, or the Reserved level 3 encoding */
237
goto do_translation_fault;
238
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
239
* to give a correct page or table address, the address field
240
* in a block descriptor is smaller; so we need to explicitly
241
* clear the lower bits here before ORing in the low vaddr bits.
242
+ *
243
+ * Afterward, descaddr is the final physical address.
51
*/
244
*/
245
page_size = (1ULL << ((stride * (4 - level)) + 3));
246
descaddr &= ~(hwaddr)(page_size - 1);
247
descaddr |= (address & (page_size - 1));
248
249
+ if (likely(!ptw->in_debug)) {
250
+ /*
251
+ * Access flag.
252
+ * If HA is enabled, prepare to update the descriptor below.
253
+ * Otherwise, pass the access fault on to software.
254
+ */
255
+ if (!(descriptor & (1 << 10))) {
256
+ if (param.ha) {
257
+ new_descriptor |= 1 << 10; /* AF */
258
+ } else {
259
+ fi->type = ARMFault_AccessFlag;
260
+ goto do_fault;
261
+ }
262
+ }
263
+ }
264
+
265
/*
266
- * Extract attributes from the descriptor, and apply table descriptors.
267
- * Stage 2 table descriptors do not include any attribute fields.
268
- * HPD disables all the table attributes except NSTable.
269
+ * Extract attributes from the (modified) descriptor, and apply
270
+ * table descriptors. Stage 2 table descriptors do not include
271
+ * any attribute fields. HPD disables all the table attributes
272
+ * except NSTable.
273
*/
274
- attrs = descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
275
+ attrs = new_descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
276
if (!regime_is_stage2(mmu_idx)) {
277
attrs |= nstable << 5; /* NS */
278
if (!param.hpd) {
279
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
280
}
281
}
52
282
53
- /*
283
- /*
54
- * i2c-15:
284
- * Here descaddr is the final physical address, and attributes
55
- * - pca9548@75
285
- * are all in attrs.
56
- */
286
- */
57
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 15), "pca9548", 0x75);
287
- if ((attrs & (1 << 10)) == 0) {
58
}
288
- /* Access flag */
59
289
- fi->type = ARMFault_AccessFlag;
60
static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
290
- goto do_fault;
61
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
291
- }
62
index XXXXXXX..XXXXXXX 100644
292
-
63
--- a/hw/arm/Kconfig
293
ap = extract32(attrs, 6, 2);
64
+++ b/hw/arm/Kconfig
294
-
65
@@ -XXX,XX +XXX,XX @@ config NPCM7XX
295
if (regime_is_stage2(mmu_idx)) {
66
select SERIAL
296
ns = mmu_idx == ARMMMUIdx_Stage2;
67
select SSI
297
xn = extract64(attrs, 53, 2);
68
select UNIMP
298
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
69
+ select PCA954X
299
goto do_fault;
70
300
}
71
config FSL_IMX25
301
72
bool
302
+ /* If FEAT_HAFDBS has made changes, update the PTE. */
303
+ if (new_descriptor != descriptor) {
304
+ new_descriptor = arm_casq_ptw(env, descriptor, new_descriptor, ptw, fi);
305
+ if (fi->type != ARMFault_None) {
306
+ goto do_fault;
307
+ }
308
+ /*
309
+ * I_YZSVV says that if the in-memory descriptor has changed,
310
+ * then we must use the information in that new value
311
+ * (which might include a different output address, different
312
+ * attributes, or generate a fault).
313
+ * Restart the handling of the descriptor value from scratch.
314
+ */
315
+ if (new_descriptor != descriptor) {
316
+ descriptor = new_descriptor;
317
+ goto restart_atomic_update;
318
+ }
319
+ }
320
+
321
if (ns) {
322
/*
323
* The NS bit will (as required by the architecture) have no effect if
73
--
324
--
74
2.20.1
325
2.25.1
75
76
1
When MVE is supported, the VPR register has a place on the exception
1
From: Richard Henderson <richard.henderson@linaro.org>
2
stack frame in a previously reserved slot just above the FPSCR.
3
It must also be zeroed in various situations when we invalidate
4
FPU context.
5
2
6
Update the code which handles the stack frames (exception entry and
3
Perform the atomic update for hardware management of the dirty bit.
7
exit code, VLLDM, and VLSTM) to save/restore VPR.
8
4
9
Update code which invalidates FP registers (mostly also exception
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
entry and exit code, but also VSCCLRM and the code in
6
Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org
11
full_vfp_access_check() that corresponds to the ExecuteFPCheck()
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
pseudocode) to zero VPR.
8
---
9
target/arm/cpu64.c | 2 +-
10
target/arm/ptw.c | 16 ++++++++++++++++
11
2 files changed, 17 insertions(+), 1 deletion(-)
13
12
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210614151007.4545-4-peter.maydell@linaro.org
17
---
18
target/arm/m_helper.c | 54 +++++++++++++++++++++++++++++------
19
target/arm/translate-m-nocp.c | 5 +++-
20
target/arm/translate-vfp.c | 9 ++++--
21
3 files changed, 57 insertions(+), 11 deletions(-)
22
23
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
24
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/m_helper.c
15
--- a/target/arm/cpu64.c
26
+++ b/target/arm/m_helper.c
16
+++ b/target/arm/cpu64.c
27
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
17
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
28
uint32_t shi = extract64(dn, 32, 32);
18
cpu->isar.id_aa64mmfr0 = t;
29
19
30
if (i >= 16) {
20
t = cpu->isar.id_aa64mmfr1;
31
- faddr += 8; /* skip the slot for the FPSCR */
21
- t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 1); /* FEAT_HAFDBS, AF only */
32
+ faddr += 8; /* skip the slot for the FPSCR/VPR */
22
+ t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 2); /* FEAT_HAFDBS */
23
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
24
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
25
t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
26
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/ptw.c
29
+++ b/target/arm/ptw.c
30
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
31
goto do_fault;
33
}
32
}
34
stacked_ok = stacked_ok &&
33
}
35
v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
34
+
36
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
35
+ /*
37
stacked_ok = stacked_ok &&
36
+ * Dirty Bit.
38
v7m_stack_write(cpu, fpcar + 0x40,
37
+ * If HD is enabled, pre-emptively set/clear the appropriate AP/S2AP
39
vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
38
+ * bit for writeback. The actual write protection test may still be
40
+ if (cpu_isar_feature(aa32_mve, cpu)) {
39
+ * overridden by tableattrs, to be merged below.
41
+ stacked_ok = stacked_ok &&
40
+ */
42
+ v7m_stack_write(cpu, fpcar + 0x44,
41
+ if (param.hd
43
+ env->v7m.vpr, mmu_idx, STACK_LAZYFP);
42
+ && extract64(descriptor, 51, 1) /* DBM */
43
+ && access_type == MMU_DATA_STORE) {
44
+ if (regime_is_stage2(mmu_idx)) {
45
+ new_descriptor |= 1ull << 7; /* set S2AP[1] */
46
+ } else {
47
+ new_descriptor &= ~(1ull << 7); /* clear AP[2] */
48
+ }
44
+ }
49
+ }
45
}
50
}
46
51
47
/*
52
/*
48
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
49
env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
50
51
if (ts) {
52
- /* Clear s0 to s31 and the FPSCR */
53
+ /* Clear s0 to s31 and the FPSCR and VPR */
54
int i;
55
56
for (i = 0; i < 32; i += 2) {
57
*aa32_vfp_dreg(env, i / 2) = 0;
58
}
59
vfp_set_fpscr(env, 0);
60
+ if (cpu_isar_feature(aa32_mve, cpu)) {
61
+ env->v7m.vpr = 0;
62
+ }
63
}
64
/*
65
- * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
66
+ * Otherwise s0 to s15, FPSCR and VPR are UNKNOWN; we choose to leave them
67
* unchanged.
68
*/
69
}
70
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
71
void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
72
{
73
/* fptr is the value of Rn, the frame pointer we store the FP regs to */
74
+ ARMCPU *cpu = env_archcpu(env);
75
bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
76
bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
77
uintptr_t ra = GETPC();
78
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
79
cpu_stl_data_ra(env, faddr + 4, shi, ra);
80
}
81
cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra);
82
+ if (cpu_isar_feature(aa32_mve, cpu)) {
83
+ cpu_stl_data_ra(env, fptr + 0x44, env->v7m.vpr, ra);
84
+ }
85
86
/*
87
- * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
88
+ * If TS is 0 then s0 to s15, FPSCR and VPR are UNKNOWN; we choose to
89
* leave them unchanged, matching our choice in v7m_preserve_fp_state.
90
*/
91
if (ts) {
92
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
93
*aa32_vfp_dreg(env, i / 2) = 0;
94
}
95
vfp_set_fpscr(env, 0);
96
+ if (cpu_isar_feature(aa32_mve, cpu)) {
97
+ env->v7m.vpr = 0;
98
+ }
99
}
100
} else {
101
v7m_update_fpccr(env, fptr, false);
102
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
103
104
void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
105
{
106
+ ARMCPU *cpu = env_archcpu(env);
107
uintptr_t ra = GETPC();
108
109
/* fptr is the value of Rn, the frame pointer we load the FP regs from */
110
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
111
uint32_t faddr = fptr + 4 * i;
112
113
if (i >= 16) {
114
- faddr += 8; /* skip the slot for the FPSCR */
115
+ faddr += 8; /* skip the slot for the FPSCR and VPR */
116
}
117
118
slo = cpu_ldl_data_ra(env, faddr, ra);
119
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
120
}
121
fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra);
122
vfp_set_fpscr(env, fpscr);
123
+ if (cpu_isar_feature(aa32_mve, cpu)) {
124
+ env->v7m.vpr = cpu_ldl_data_ra(env, fptr + 0x44, ra);
125
+ }
126
}
127
128
env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
129
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
130
uint32_t shi = extract64(dn, 32, 32);
131
132
if (i >= 16) {
133
- faddr += 8; /* skip the slot for the FPSCR */
134
+ faddr += 8; /* skip the slot for the FPSCR and VPR */
135
}
136
stacked_ok = stacked_ok &&
137
v7m_stack_write(cpu, faddr, slo,
138
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
139
stacked_ok = stacked_ok &&
140
v7m_stack_write(cpu, frameptr + 0x60,
141
vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
142
+ if (cpu_isar_feature(aa32_mve, cpu)) {
143
+ stacked_ok = stacked_ok &&
144
+ v7m_stack_write(cpu, frameptr + 0x64,
145
+ env->v7m.vpr, mmu_idx, STACK_NORMAL);
146
+ }
147
if (cpacr_pass) {
148
for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
149
*aa32_vfp_dreg(env, i / 2) = 0;
150
}
151
vfp_set_fpscr(env, 0);
152
+ if (cpu_isar_feature(aa32_mve, cpu)) {
153
+ env->v7m.vpr = 0;
154
+ }
155
}
156
} else {
157
/* Lazy stacking enabled, save necessary info to stack later */
158
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
159
v7m_exception_taken(cpu, excret, true, false);
160
}
161
}
162
- /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
163
+ /* Clear s0..s15, FPSCR and VPR */
164
int i;
165
166
for (i = 0; i < 16; i += 2) {
167
*aa32_vfp_dreg(env, i / 2) = 0;
168
}
169
vfp_set_fpscr(env, 0);
170
+ if (cpu_isar_feature(aa32_mve, cpu)) {
171
+ env->v7m.vpr = 0;
172
+ }
173
}
174
}
175
176
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
177
uint32_t faddr = frameptr + 0x20 + 4 * i;
178
179
if (i >= 16) {
180
- faddr += 8; /* Skip the slot for the FPSCR */
181
+ faddr += 8; /* Skip the slot for the FPSCR and VPR */
182
}
183
184
pop_ok = pop_ok &&
185
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
186
if (pop_ok) {
187
vfp_set_fpscr(env, fpscr);
188
}
189
+ if (cpu_isar_feature(aa32_mve, cpu)) {
190
+ pop_ok = pop_ok &&
191
+ v7m_stack_read(cpu, &env->v7m.vpr,
192
+ frameptr + 0x64, mmu_idx);
193
+ }
194
if (!pop_ok) {
195
/*
196
* These regs are 0 if security extension present;
197
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
198
*aa32_vfp_dreg(env, i / 2) = 0;
199
}
200
vfp_set_fpscr(env, 0);
201
+ if (cpu_isar_feature(aa32_mve, cpu)) {
202
+ env->v7m.vpr = 0;
203
+ }
204
}
205
}
206
}
207
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
208
index XXXXXXX..XXXXXXX 100644
209
--- a/target/arm/translate-m-nocp.c
210
+++ b/target/arm/translate-m-nocp.c
211
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
212
btmreg++;
213
}
214
assert(btmreg == topreg + 1);
215
- /* TODO: when MVE is implemented, zero VPR here */
216
+ if (dc_isar_feature(aa32_mve, s)) {
217
+ TCGv_i32 z32 = tcg_const_i32(0);
218
+ store_cpu_field(z32, v7m.vpr);
219
+ }
220
return true;
221
}
222
223
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
224
index XXXXXXX..XXXXXXX 100644
225
--- a/target/arm/translate-vfp.c
226
+++ b/target/arm/translate-vfp.c
227
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
228
229
if (s->v7m_new_fp_ctxt_needed) {
230
/*
231
- * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA
232
- * and the FPSCR.
233
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
234
+ * the FPSCR, and VPR.
235
*/
236
TCGv_i32 control, fpscr;
237
uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
238
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
239
fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
240
gen_helper_vfp_set_fpscr(cpu_env, fpscr);
241
tcg_temp_free_i32(fpscr);
242
+ if (dc_isar_feature(aa32_mve, s)) {
243
+ TCGv_i32 z32 = tcg_const_i32(0);
244
+ store_cpu_field(z32, v7m.vpr);
245
+ }
246
+
247
/*
248
* We don't need to arrange to end the TB, because the only
249
* parts of FPSCR which we cache in the TB flags are the VECLEN
250
--
53
--
251
2.20.1
54
2.25.1
252
253
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This fprintf+assert has been in place since the beginning.
3
We had only been reporting the stage2 page size. This causes
4
It is prior to the fp_access_check, so we're still good to
4
problems if stage1 is using a larger page size (16k, 2M, etc),
5
raise sigill here.
5
but stage2 is using a smaller page size, because cputlb does
6
not set large_page_{addr,mask} properly.
6
7
7
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/381
8
Fix by using the max of the two page sizes.
9
10
Reported-by: Marc Zyngier <maz@kernel.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org
10
Message-id: 20210604183506.916654-2-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
15
---
14
target/arm/translate-a64.c | 4 ++--
16
target/arm/ptw.c | 11 ++++++++++-
15
1 file changed, 2 insertions(+), 2 deletions(-)
17
1 file changed, 10 insertions(+), 1 deletion(-)
16
18
17
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
19
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate-a64.c
21
--- a/target/arm/ptw.c
20
+++ b/target/arm/translate-a64.c
22
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
22
case 0x7f: /* FSQRT (vector) */
24
ARMMMUFaultInfo *fi)
23
break;
25
{
24
default:
26
hwaddr ipa;
25
- fprintf(stderr, "%s: insn 0x%04x fpop 0x%2x\n", __func__, insn, fpop);
27
- int s1_prot;
26
- g_assert_not_reached();
28
+ int s1_prot, s1_lgpgsz;
27
+ unallocated_encoding(s);
29
bool is_secure = ptw->in_secure;
28
+ return;
30
bool ret, ipa_secure, s2walk_secure;
31
ARMCacheAttrs cacheattrs1;
32
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
33
* Save the stage1 results so that we may merge prot and cacheattrs later.
34
*/
35
s1_prot = result->f.prot;
36
+ s1_lgpgsz = result->f.lg_page_size;
37
cacheattrs1 = result->cacheattrs;
38
memset(result, 0, sizeof(*result));
39
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
41
return ret;
29
}
42
}
30
43
31
44
+ /*
45
+ * Use the maximum of the S1 & S2 page size, so that invalidation
46
+ * of pages > TARGET_PAGE_SIZE works correctly.
47
+ */
48
+ if (result->f.lg_page_size < s1_lgpgsz) {
49
+ result->f.lg_page_size = s1_lgpgsz;
50
+ }
51
+
52
/* Combine the S1 and S2 cache attributes. */
53
hcr = arm_hcr_el2_eff_secstate(env, is_secure);
54
if (hcr & HCR_DC) {
32
--
55
--
33
2.20.1
56
2.25.1
34
35
1
From: Patrick Venture <venture@google.com>
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
2
3
Adds the pca954x muxes expected.
3
Snapshot loading only expects to call deterministic handlers, not
4
non-deterministic ones. So introduce a way of registering handlers that
5
won't be called when reseting for snapshots.
4
6
5
Tested: Booted quanta-q71l image to userspace.
7
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
6
Signed-off-by: Patrick Venture <venture@google.com>
8
Message-id: 20221025004327.568476-2-Jason@zx2c4.com
7
Reviewed-by: Hao Wu <wuhaotsh@google.com>
9
[PMM: updated json doc comment with Markus' text; fixed
8
Reviewed-by: Joel Stanley <joel@jms.id.au>
10
checkpatch style nit]
9
Reviewed-by: Cédric Le Goater <clg@kaod.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20210608202522.2677850-4-venture@google.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
hw/arm/aspeed.c | 11 ++++++++---
14
qapi/run-state.json | 6 +++++-
14
hw/arm/Kconfig | 1 +
15
include/hw/boards.h | 2 +-
15
2 files changed, 9 insertions(+), 3 deletions(-)
16
include/sysemu/reset.h | 5 ++++-
17
hw/arm/aspeed.c | 4 ++--
18
hw/arm/mps2-tz.c | 4 ++--
19
hw/core/reset.c | 17 ++++++++++++++++-
20
hw/hppa/machine.c | 4 ++--
21
hw/i386/microvm.c | 4 ++--
22
hw/i386/pc.c | 6 +++---
23
hw/ppc/pegasos2.c | 4 ++--
24
hw/ppc/pnv.c | 4 ++--
25
hw/ppc/spapr.c | 4 ++--
26
hw/s390x/s390-virtio-ccw.c | 4 ++--
27
migration/savevm.c | 2 +-
28
softmmu/runstate.c | 11 ++++++++---
29
15 files changed, 54 insertions(+), 27 deletions(-)
16
30
31
diff --git a/qapi/run-state.json b/qapi/run-state.json
32
index XXXXXXX..XXXXXXX 100644
33
--- a/qapi/run-state.json
34
+++ b/qapi/run-state.json
35
@@ -XXX,XX +XXX,XX @@
36
# ignores --no-reboot. This is useful for sanitizing
37
# hypercalls on s390 that are used during kexec/kdump/boot
38
#
39
+# @snapshot-load: A snapshot is being loaded by the record & replay
40
+# subsystem. This value is used only within QEMU. It
41
+# doesn't occur in QMP. (since 7.2)
42
+#
43
##
44
{ 'enum': 'ShutdownCause',
45
# Beware, shutdown_caused_by_guest() depends on enumeration order
46
'data': [ 'none', 'host-error', 'host-qmp-quit', 'host-qmp-system-reset',
47
'host-signal', 'host-ui', 'guest-shutdown', 'guest-reset',
48
- 'guest-panic', 'subsystem-reset'] }
49
+ 'guest-panic', 'subsystem-reset', 'snapshot-load'] }
50
51
##
52
# @StatusInfo:
53
diff --git a/include/hw/boards.h b/include/hw/boards.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/include/hw/boards.h
56
+++ b/include/hw/boards.h
57
@@ -XXX,XX +XXX,XX @@ struct MachineClass {
58
const char *deprecation_reason;
59
60
void (*init)(MachineState *state);
61
- void (*reset)(MachineState *state);
62
+ void (*reset)(MachineState *state, ShutdownCause reason);
63
void (*wakeup)(MachineState *state);
64
int (*kvm_type)(MachineState *machine, const char *arg);
65
66
diff --git a/include/sysemu/reset.h b/include/sysemu/reset.h
67
index XXXXXXX..XXXXXXX 100644
68
--- a/include/sysemu/reset.h
69
+++ b/include/sysemu/reset.h
70
@@ -XXX,XX +XXX,XX @@
71
#ifndef QEMU_SYSEMU_RESET_H
72
#define QEMU_SYSEMU_RESET_H
73
74
+#include "qapi/qapi-events-run-state.h"
75
+
76
typedef void QEMUResetHandler(void *opaque);
77
78
void qemu_register_reset(QEMUResetHandler *func, void *opaque);
79
+void qemu_register_reset_nosnapshotload(QEMUResetHandler *func, void *opaque);
80
void qemu_unregister_reset(QEMUResetHandler *func, void *opaque);
81
-void qemu_devices_reset(void);
82
+void qemu_devices_reset(ShutdownCause reason);
83
84
#endif
17
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
85
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
18
index XXXXXXX..XXXXXXX 100644
86
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/aspeed.c
87
--- a/hw/arm/aspeed.c
20
+++ b/hw/arm/aspeed.c
88
+++ b/hw/arm/aspeed.c
21
@@ -XXX,XX +XXX,XX @@
89
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_bletchley_class_init(ObjectClass *oc, void *data)
22
#include "hw/arm/boot.h"
90
aspeed_soc_num_cpus(amc->soc_name);
23
#include "hw/arm/aspeed.h"
91
}
24
#include "hw/arm/aspeed_soc.h"
92
25
+#include "hw/i2c/i2c_mux_pca954x.h"
93
-static void fby35_reset(MachineState *state)
26
#include "hw/i2c/smbus_eeprom.h"
94
+static void fby35_reset(MachineState *state, ShutdownCause reason)
27
#include "hw/misc/pca9552.h"
95
{
28
#include "hw/misc/tmp105.h"
96
AspeedMachineState *bmc = ASPEED_MACHINE(state);
29
@@ -XXX,XX +XXX,XX @@ static void quanta_q71l_bmc_i2c_init(AspeedMachineState *bmc)
97
AspeedGPIOState *gpio = &bmc->soc.gpio;
30
/* TODO: i2c-1: Add Frontpanel FRU eeprom@57 24c64 */
98
31
/* TODO: Add Memory Riser i2c mux and eeproms. */
99
- qemu_devices_reset();
32
100
+ qemu_devices_reset(reason);
33
- /* TODO: i2c-2: pca9546@74 */
101
34
- /* TODO: i2c-2: pca9548@77 */
102
/* Board ID: 7 (Class-1, 4 slots) */
35
+ i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 2), "pca9546", 0x74);
103
object_property_set_bool(OBJECT(gpio), "gpioV4", true, &error_fatal);
36
+ i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 2), "pca9548", 0x77);
104
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/hw/arm/mps2-tz.c
107
+++ b/hw/arm/mps2-tz.c
108
@@ -XXX,XX +XXX,XX @@ static void mps2_set_remap(Object *obj, const char *value, Error **errp)
109
}
110
}
111
112
-static void mps2_machine_reset(MachineState *machine)
113
+static void mps2_machine_reset(MachineState *machine, ShutdownCause reason)
114
{
115
MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
116
117
@@ -XXX,XX +XXX,XX @@ static void mps2_machine_reset(MachineState *machine)
118
* reset see the correct mapping.
119
*/
120
remap_memory(mms, mms->remap);
121
- qemu_devices_reset();
122
+ qemu_devices_reset(reason);
123
}
124
125
static void mps2tz_class_init(ObjectClass *oc, void *data)
126
diff --git a/hw/core/reset.c b/hw/core/reset.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/hw/core/reset.c
129
+++ b/hw/core/reset.c
130
@@ -XXX,XX +XXX,XX @@ typedef struct QEMUResetEntry {
131
QTAILQ_ENTRY(QEMUResetEntry) entry;
132
QEMUResetHandler *func;
133
void *opaque;
134
+ bool skip_on_snapshot_load;
135
} QEMUResetEntry;
136
137
static QTAILQ_HEAD(, QEMUResetEntry) reset_handlers =
138
@@ -XXX,XX +XXX,XX @@ void qemu_register_reset(QEMUResetHandler *func, void *opaque)
139
QTAILQ_INSERT_TAIL(&reset_handlers, re, entry);
140
}
141
142
+void qemu_register_reset_nosnapshotload(QEMUResetHandler *func, void *opaque)
143
+{
144
+ QEMUResetEntry *re = g_new0(QEMUResetEntry, 1);
37
+
145
+
38
/* TODO: i2c-3: Add BIOS FRU eeprom@56 24c64 */
146
+ re->func = func;
39
- /* TODO: i2c-7: Add pca9546@70 */
147
+ re->opaque = opaque;
148
+ re->skip_on_snapshot_load = true;
149
+ QTAILQ_INSERT_TAIL(&reset_handlers, re, entry);
150
+}
40
+
151
+
41
+ /* i2c-7 */
152
void qemu_unregister_reset(QEMUResetHandler *func, void *opaque)
42
+ i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 7), "pca9546", 0x70);
153
{
43
/* - i2c@0: pmbus@59 */
154
QEMUResetEntry *re;
44
/* - i2c@1: pmbus@58 */
155
@@ -XXX,XX +XXX,XX @@ void qemu_unregister_reset(QEMUResetHandler *func, void *opaque)
45
/* - i2c@2: pmbus@58 */
156
}
46
/* - i2c@3: pmbus@59 */
157
}
47
+
158
48
/* TODO: i2c-7: Add PDB FRU eeprom@52 */
159
-void qemu_devices_reset(void)
49
/* TODO: i2c-8: Add BMC FRU eeprom@50 */
160
+void qemu_devices_reset(ShutdownCause reason)
50
}
161
{
51
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
162
QEMUResetEntry *re, *nre;
52
index XXXXXXX..XXXXXXX 100644
163
53
--- a/hw/arm/Kconfig
164
/* reset all devices */
54
+++ b/hw/arm/Kconfig
165
QTAILQ_FOREACH_SAFE(re, &reset_handlers, entry, nre) {
55
@@ -XXX,XX +XXX,XX @@ config ASPEED_SOC
166
+ if (reason == SHUTDOWN_CAUSE_SNAPSHOT_LOAD &&
56
select PCA9552
167
+ re->skip_on_snapshot_load) {
57
select SERIAL
168
+ continue;
58
select SMBUS_EEPROM
169
+ }
59
+ select PCA954X
170
re->func(re->opaque);
60
select SSI
171
}
61
select SSI_M25P80
172
}
62
select TMP105
173
diff --git a/hw/hppa/machine.c b/hw/hppa/machine.c
174
index XXXXXXX..XXXXXXX 100644
175
--- a/hw/hppa/machine.c
176
+++ b/hw/hppa/machine.c
177
@@ -XXX,XX +XXX,XX @@ static void machine_hppa_init(MachineState *machine)
178
cpu[0]->env.gr[19] = FW_CFG_IO_BASE;
179
}
180
181
-static void hppa_machine_reset(MachineState *ms)
182
+static void hppa_machine_reset(MachineState *ms, ShutdownCause reason)
183
{
184
unsigned int smp_cpus = ms->smp.cpus;
185
int i;
186
187
- qemu_devices_reset();
188
+ qemu_devices_reset(reason);
189
190
/* Start all CPUs at the firmware entry point.
191
* Monarch CPU will initialize firmware, secondary CPUs
192
diff --git a/hw/i386/microvm.c b/hw/i386/microvm.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/hw/i386/microvm.c
195
+++ b/hw/i386/microvm.c
196
@@ -XXX,XX +XXX,XX @@ static void microvm_machine_state_init(MachineState *machine)
197
microvm_devices_init(mms);
198
}
199
200
-static void microvm_machine_reset(MachineState *machine)
201
+static void microvm_machine_reset(MachineState *machine, ShutdownCause reason)
202
{
203
MicrovmMachineState *mms = MICROVM_MACHINE(machine);
204
CPUState *cs;
205
@@ -XXX,XX +XXX,XX @@ static void microvm_machine_reset(MachineState *machine)
206
mms->kernel_cmdline_fixed = true;
207
}
208
209
- qemu_devices_reset();
210
+ qemu_devices_reset(reason);
211
212
CPU_FOREACH(cs) {
213
cpu = X86_CPU(cs);
214
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
215
index XXXXXXX..XXXXXXX 100644
216
--- a/hw/i386/pc.c
217
+++ b/hw/i386/pc.c
218
@@ -XXX,XX +XXX,XX @@ static void pc_machine_initfn(Object *obj)
219
cxl_machine_init(obj, &pcms->cxl_devices_state);
220
}
221
222
-static void pc_machine_reset(MachineState *machine)
223
+static void pc_machine_reset(MachineState *machine, ShutdownCause reason)
224
{
225
CPUState *cs;
226
X86CPU *cpu;
227
228
- qemu_devices_reset();
229
+ qemu_devices_reset(reason);
230
231
/* Reset APIC after devices have been reset to cancel
232
* any changes that qemu_devices_reset() might have done.
233
@@ -XXX,XX +XXX,XX @@ static void pc_machine_reset(MachineState *machine)
234
static void pc_machine_wakeup(MachineState *machine)
235
{
236
cpu_synchronize_all_states();
237
- pc_machine_reset(machine);
238
+ pc_machine_reset(machine, SHUTDOWN_CAUSE_NONE);
239
cpu_synchronize_all_post_reset();
240
}
241
242
diff --git a/hw/ppc/pegasos2.c b/hw/ppc/pegasos2.c
243
index XXXXXXX..XXXXXXX 100644
244
--- a/hw/ppc/pegasos2.c
245
+++ b/hw/ppc/pegasos2.c
246
@@ -XXX,XX +XXX,XX @@ static void pegasos2_pci_config_write(Pegasos2MachineState *pm, int bus,
247
pegasos2_mv_reg_write(pm, pcicfg + 4, len, val);
248
}
249
250
-static void pegasos2_machine_reset(MachineState *machine)
251
+static void pegasos2_machine_reset(MachineState *machine, ShutdownCause reason)
252
{
253
Pegasos2MachineState *pm = PEGASOS2_MACHINE(machine);
254
void *fdt;
255
uint64_t d[2];
256
int sz;
257
258
- qemu_devices_reset();
259
+ qemu_devices_reset(reason);
260
if (!pm->vof) {
261
return; /* Firmware should set up machine so nothing to do */
262
}
263
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
264
index XXXXXXX..XXXXXXX 100644
265
--- a/hw/ppc/pnv.c
266
+++ b/hw/ppc/pnv.c
267
@@ -XXX,XX +XXX,XX @@ static void pnv_powerdown_notify(Notifier *n, void *opaque)
268
}
269
}
270
271
-static void pnv_reset(MachineState *machine)
272
+static void pnv_reset(MachineState *machine, ShutdownCause reason)
273
{
274
PnvMachineState *pnv = PNV_MACHINE(machine);
275
IPMIBmc *bmc;
276
void *fdt;
277
278
- qemu_devices_reset();
279
+ qemu_devices_reset(reason);
280
281
/*
282
* The machine should provide by default an internal BMC simulator.
283
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
284
index XXXXXXX..XXXXXXX 100644
285
--- a/hw/ppc/spapr.c
286
+++ b/hw/ppc/spapr.c
287
@@ -XXX,XX +XXX,XX @@ void spapr_check_mmu_mode(bool guest_radix)
288
}
289
}
290
291
-static void spapr_machine_reset(MachineState *machine)
292
+static void spapr_machine_reset(MachineState *machine, ShutdownCause reason)
293
{
294
SpaprMachineState *spapr = SPAPR_MACHINE(machine);
295
PowerPCCPU *first_ppc_cpu;
296
@@ -XXX,XX +XXX,XX @@ static void spapr_machine_reset(MachineState *machine)
297
spapr_setup_hpt(spapr);
298
}
299
300
- qemu_devices_reset();
301
+ qemu_devices_reset(reason);
302
303
spapr_ovec_cleanup(spapr->ov5_cas);
304
spapr->ov5_cas = spapr_ovec_new();
305
diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
306
index XXXXXXX..XXXXXXX 100644
307
--- a/hw/s390x/s390-virtio-ccw.c
308
+++ b/hw/s390x/s390-virtio-ccw.c
309
@@ -XXX,XX +XXX,XX @@ static void s390_pv_prepare_reset(S390CcwMachineState *ms)
310
s390_pv_prep_reset();
311
}
312
313
-static void s390_machine_reset(MachineState *machine)
314
+static void s390_machine_reset(MachineState *machine, ShutdownCause reason)
315
{
316
S390CcwMachineState *ms = S390_CCW_MACHINE(machine);
317
enum s390_reset reset_type;
318
@@ -XXX,XX +XXX,XX @@ static void s390_machine_reset(MachineState *machine)
319
s390_machine_unprotect(ms);
320
}
321
322
- qemu_devices_reset();
323
+ qemu_devices_reset(reason);
324
s390_crypto_reset();
325
326
/* configure and start the ipl CPU only */
327
diff --git a/migration/savevm.c b/migration/savevm.c
328
index XXXXXXX..XXXXXXX 100644
329
--- a/migration/savevm.c
330
+++ b/migration/savevm.c
331
@@ -XXX,XX +XXX,XX @@ bool load_snapshot(const char *name, const char *vmstate,
332
goto err_drain;
333
}
334
335
- qemu_system_reset(SHUTDOWN_CAUSE_NONE);
336
+ qemu_system_reset(SHUTDOWN_CAUSE_SNAPSHOT_LOAD);
337
mis->from_src_file = f;
338
339
if (!yank_register_instance(MIGRATION_YANK_INSTANCE, errp)) {
340
diff --git a/softmmu/runstate.c b/softmmu/runstate.c
341
index XXXXXXX..XXXXXXX 100644
342
--- a/softmmu/runstate.c
343
+++ b/softmmu/runstate.c
344
@@ -XXX,XX +XXX,XX @@ void qemu_system_reset(ShutdownCause reason)
345
cpu_synchronize_all_states();
346
347
if (mc && mc->reset) {
348
- mc->reset(current_machine);
349
+ mc->reset(current_machine, reason);
350
} else {
351
- qemu_devices_reset();
352
+ qemu_devices_reset(reason);
353
}
354
- if (reason && reason != SHUTDOWN_CAUSE_SUBSYSTEM_RESET) {
355
+ switch (reason) {
356
+ case SHUTDOWN_CAUSE_NONE:
357
+ case SHUTDOWN_CAUSE_SUBSYSTEM_RESET:
358
+ case SHUTDOWN_CAUSE_SNAPSHOT_LOAD:
359
+ break;
360
+ default:
361
qapi_event_send_reset(shutdown_caused_by_guest(reason), reason);
362
}
363
cpu_synchronize_all_post_reset();
63
--
364
--
64
2.20.1
365
2.25.1
65
66
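As a usage illustration of the API changed above: a board that wants some state refreshed on every real reset but left untouched across loadvm can register its handler with the new variant. This is only a minimal sketch; the handler name and its body are hypothetical, and only the two declarations from include/sysemu/reset.h in the hunks above are taken from the series.

    #include "qemu/osdep.h"
    #include "sysemu/reset.h"

    /* Hypothetical handler: regenerate data that must differ on each reboot. */
    static void my_board_refresh_boot_data(void *opaque)
    {
        /* ... rewrite the data pointed to by opaque ... */
    }

    static void my_board_init(void)
    {
        /*
         * Runs from qemu_devices_reset() on normal resets, but is skipped
         * when the reason is SHUTDOWN_CAUSE_SNAPSHOT_LOAD, so snapshot
         * loading stays deterministic.
         */
        qemu_register_reset_nosnapshotload(my_board_refresh_boot_data, NULL);
    }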
1
Allow code elsewhere in the system to check whether the ACPI GHES
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
table is present, so it can determine whether it is OK to try to
3
record an error by calling acpi_ghes_record_errors().
4
2
5
(We don't need to migrate the new 'present' field in AcpiGhesState,
3
When the system reboots, the rng-seed that the FDT has should be
6
because it is set once at system initialization and doesn't change.)
4
re-randomized, so that the new boot gets a new seed. Several
5
architectures require this functionality, so export a function for
6
injecting a new seed into the given FDT.
7
7
8
Cc: Alistair Francis <alistair.francis@wdc.com>
9
Cc: David Gibson <david@gibson.dropbear.id.au>
10
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
11
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20221025004327.568476-3-Jason@zx2c4.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
11
Message-id: 20210603171259.27962-3-peter.maydell@linaro.org
12
---
14
---
13
include/hw/acpi/ghes.h | 9 +++++++++
15
include/sysemu/device_tree.h | 9 +++++++++
14
hw/acpi/ghes-stub.c | 5 +++++
16
softmmu/device_tree.c | 21 +++++++++++++++++++++
15
hw/acpi/ghes.c | 17 +++++++++++++++++
17
2 files changed, 30 insertions(+)
16
3 files changed, 31 insertions(+)
17
18
18
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
19
diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/acpi/ghes.h
21
--- a/include/sysemu/device_tree.h
21
+++ b/include/hw/acpi/ghes.h
22
+++ b/include/sysemu/device_tree.h
22
@@ -XXX,XX +XXX,XX @@ enum {
23
@@ -XXX,XX +XXX,XX @@ int qemu_fdt_setprop_sized_cells_from_array(void *fdt,
23
24
qdt_tmp); \
24
typedef struct AcpiGhesState {
25
})
25
uint64_t ghes_addr_le;
26
26
+ bool present; /* True if GHES is present at all on this board */
27
} AcpiGhesState;
28
29
void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
30
@@ -XXX,XX +XXX,XX @@ void acpi_build_hest(GArray *table_data, BIOSLinker *linker,
31
void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
32
GArray *hardware_errors);
33
int acpi_ghes_record_errors(uint8_t notify, uint64_t error_physical_addr);
34
+
27
+
35
+/**
28
+/**
36
+ * acpi_ghes_present: Report whether ACPI GHES table is present
29
+ * qemu_fdt_randomize_seeds:
30
+ * @fdt: device tree blob
37
+ *
31
+ *
38
+ * Returns: true if the system has an ACPI GHES table and it is
32
+ * Re-randomize all "rng-seed" properties with new seeds.
39
+ * safe to call acpi_ghes_record_errors() to record a memory error.
40
+ */
33
+ */
41
+bool acpi_ghes_present(void);
34
+void qemu_fdt_randomize_seeds(void *fdt);
42
#endif
35
+
43
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
36
#define FDT_PCI_RANGE_RELOCATABLE 0x80000000
37
#define FDT_PCI_RANGE_PREFETCHABLE 0x40000000
38
#define FDT_PCI_RANGE_ALIASED 0x20000000
39
diff --git a/softmmu/device_tree.c b/softmmu/device_tree.c
44
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/acpi/ghes-stub.c
41
--- a/softmmu/device_tree.c
46
+++ b/hw/acpi/ghes-stub.c
42
+++ b/softmmu/device_tree.c
47
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
43
@@ -XXX,XX +XXX,XX @@
48
{
44
#include "qemu/option.h"
49
return -1;
45
#include "qemu/bswap.h"
46
#include "qemu/cutils.h"
47
+#include "qemu/guest-random.h"
48
#include "sysemu/device_tree.h"
49
#include "hw/loader.h"
50
#include "hw/boards.h"
51
@@ -XXX,XX +XXX,XX @@ void hmp_dumpdtb(Monitor *mon, const QDict *qdict)
52
53
info_report("dtb dumped to %s", filename);
50
}
54
}
51
+
55
+
52
+bool acpi_ghes_present(void)
56
+void qemu_fdt_randomize_seeds(void *fdt)
53
+{
57
+{
54
+ return false;
58
+ int noffset, poffset, len;
55
+}
59
+ const char *name;
56
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
60
+ uint8_t *data;
57
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/acpi/ghes.c
59
+++ b/hw/acpi/ghes.c
60
@@ -XXX,XX +XXX,XX @@ void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
61
/* Create a read-write fw_cfg file for Address */
62
fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
63
NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
64
+
61
+
65
+ ags->present = true;
62
+ for (noffset = fdt_next_node(fdt, 0, NULL);
66
}
63
+ noffset >= 0;
67
64
+ noffset = fdt_next_node(fdt, noffset, NULL)) {
68
int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
65
+ for (poffset = fdt_first_property_offset(fdt, noffset);
69
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
66
+ poffset >= 0;
70
67
+ poffset = fdt_next_property_offset(fdt, poffset)) {
71
return ret;
68
+ data = (uint8_t *)fdt_getprop_by_offset(fdt, poffset, &name, &len);
72
}
69
+ if (!data || strcmp(name, "rng-seed"))
73
+
70
+ continue;
74
+bool acpi_ghes_present(void)
71
+ qemu_guest_getrandom_nofail(data, len);
75
+{
72
+ }
76
+ AcpiGedState *acpi_ged_state;
77
+ AcpiGhesState *ags;
78
+
79
+ acpi_ged_state = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED,
80
+ NULL));
81
+
82
+ if (!acpi_ged_state) {
83
+ return false;
84
+ }
73
+ }
85
+ ags = &acpi_ged_state->ghes_state;
86
+ return ags->present;
87
+}
74
+}
88
--
75
--
89
2.20.1
76
2.25.1
90
91
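As a usage illustration of the acpi_ghes_present() helper: a caller that wants to report a guest memory error can guard the record call as sketched below. This is only a sketch; the function name is hypothetical, and the source-id constant is assumed to be the SEA notification source already used by existing callers.

    #include "qemu/osdep.h"
    #include "hw/acpi/ghes.h"

    /* Hypothetical caller: record an error only when a GHES table exists. */
    static void maybe_report_memory_error(uint64_t paddr)
    {
        if (acpi_ghes_present()) {
            /* Assumed source id; adjust to the board's notification source. */
            acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr);
        }
    }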
New patch
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
1
2
3
Snapshot loading is supposed to be deterministic, so we shouldn't
4
re-randomize the various seeds used.
5
6
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
7
Message-id: 20221025004327.568476-4-Jason@zx2c4.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/i386/x86.c | 2 +-
12
1 file changed, 1 insertion(+), 1 deletion(-)
13
14
diff --git a/hw/i386/x86.c b/hw/i386/x86.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/i386/x86.c
17
+++ b/hw/i386/x86.c
18
@@ -XXX,XX +XXX,XX @@ void x86_load_linux(X86MachineState *x86ms,
19
setup_data->type = cpu_to_le32(SETUP_RNG_SEED);
20
setup_data->len = cpu_to_le32(RNG_SEED_LENGTH);
21
qemu_guest_getrandom_nofail(setup_data->data, RNG_SEED_LENGTH);
22
- qemu_register_reset(reset_rng_seed, setup_data);
23
+ qemu_register_reset_nosnapshotload(reset_rng_seed, setup_data);
24
fw_cfg_add_bytes_callback(fw_cfg, FW_CFG_KERNEL_DATA, reset_rng_seed, NULL,
25
setup_data, kernel, kernel_size, true);
26
} else {
27
--
28
2.25.1
1
Currently we provide Hn and H1_n macros for accessing the correct
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
data within arrays of vector elements of size 1, 2 and 4, accounting
3
for host endianness. We don't provide any macros for elements of
4
size 8 because there the host endianness doesn't matter. However,
5
this does result in awkwardness where we need to pass empty arguments
6
to macros, because checkpatch complains about them. The empty
7
argument is a little confusing for humans to read as well.
8
2
9
Add H8() and H1_8() macros and use them where we were previously
3
When the system reboots, the rng-seed that the FDT has should be
10
passing empty arguments to macros.
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
5
the ROM region at this point, we add a hook right after the ROM has been
6
added, so that we have a pointer to that copy of the FDT.
11
7
12
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
8
Cc: Peter Maydell <peter.maydell@linaro.org>
9
Cc: qemu-arm@nongnu.org
10
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
11
Message-id: 20221025004327.568476-5-Jason@zx2c4.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210614151007.4545-2-peter.maydell@linaro.org
17
Message-id: 20210610132505.5827-1-peter.maydell@linaro.org
18
---
14
---
19
target/arm/vec_internal.h | 8 +-
15
hw/arm/boot.c | 2 ++
20
target/arm/sve_helper.c | 258 +++++++++++++++++++-------------------
16
1 file changed, 2 insertions(+)
21
target/arm/vec_helper.c | 14 +--
22
3 files changed, 143 insertions(+), 137 deletions(-)
23
17
24
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
18
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
25
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/vec_internal.h
20
--- a/hw/arm/boot.c
27
+++ b/target/arm/vec_internal.h
21
+++ b/hw/arm/boot.c
28
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
29
#define H2(x) (x)
23
* the DTB is copied again upon reset, even if addr points into RAM.
30
#define H4(x) (x)
24
*/
31
#endif
25
rom_add_blob_fixed_as("dtb", fdt, size, addr, as);
32
-
26
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
33
+/*
27
+ rom_ptr_for_as(as, addr, size));
34
+ * Access to 64-bit elements isn't host-endian dependent; we provide H8
28
35
+ * and H1_8 so that when a function is being generated from a macro we
29
g_free(fdt);
36
+ * can pass these rather than an empty macro argument, for clarity.
37
+ */
38
+#define H8(x) (x)
39
+#define H1_8(x) (x)
40
41
static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
42
{
43
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/sve_helper.c
46
+++ b/target/arm/sve_helper.c
47
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
48
49
DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_h, float16, H1_2, float16_add)
50
DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_s, float32, H1_4, float32_add)
51
-DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_d, float64, , float64_add)
52
+DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_d, float64, H1_8, float64_add)
53
54
DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_h, float16, H1_2, float16_maxnum)
55
DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_s, float32, H1_4, float32_maxnum)
56
-DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_d, float64, , float64_maxnum)
57
+DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_d, float64, H1_8, float64_maxnum)
58
59
DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_h, float16, H1_2, float16_minnum)
60
DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_s, float32, H1_4, float32_minnum)
61
-DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_d, float64, , float64_minnum)
62
+DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_d, float64, H1_8, float64_minnum)
63
64
DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_h, float16, H1_2, float16_max)
65
DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_s, float32, H1_4, float32_max)
66
-DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_d, float64, , float64_max)
67
+DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_d, float64, H1_8, float64_max)
68
69
DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_h, float16, H1_2, float16_min)
70
DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_s, float32, H1_4, float32_min)
71
-DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_d, float64, , float64_min)
72
+DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_d, float64, H1_8, float64_min)
73
74
#undef DO_ZPZZ_PAIR_FP
75
76
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
77
78
DO_ZZZ_TB(sve2_saddl_h, int16_t, int8_t, H1_2, H1, DO_ADD)
79
DO_ZZZ_TB(sve2_saddl_s, int32_t, int16_t, H1_4, H1_2, DO_ADD)
80
-DO_ZZZ_TB(sve2_saddl_d, int64_t, int32_t, , H1_4, DO_ADD)
81
+DO_ZZZ_TB(sve2_saddl_d, int64_t, int32_t, H1_8, H1_4, DO_ADD)
82
83
DO_ZZZ_TB(sve2_ssubl_h, int16_t, int8_t, H1_2, H1, DO_SUB)
84
DO_ZZZ_TB(sve2_ssubl_s, int32_t, int16_t, H1_4, H1_2, DO_SUB)
85
-DO_ZZZ_TB(sve2_ssubl_d, int64_t, int32_t, , H1_4, DO_SUB)
86
+DO_ZZZ_TB(sve2_ssubl_d, int64_t, int32_t, H1_8, H1_4, DO_SUB)
87
88
DO_ZZZ_TB(sve2_sabdl_h, int16_t, int8_t, H1_2, H1, DO_ABD)
89
DO_ZZZ_TB(sve2_sabdl_s, int32_t, int16_t, H1_4, H1_2, DO_ABD)
90
-DO_ZZZ_TB(sve2_sabdl_d, int64_t, int32_t, , H1_4, DO_ABD)
91
+DO_ZZZ_TB(sve2_sabdl_d, int64_t, int32_t, H1_8, H1_4, DO_ABD)
92
93
DO_ZZZ_TB(sve2_uaddl_h, uint16_t, uint8_t, H1_2, H1, DO_ADD)
94
DO_ZZZ_TB(sve2_uaddl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ADD)
95
-DO_ZZZ_TB(sve2_uaddl_d, uint64_t, uint32_t, , H1_4, DO_ADD)
96
+DO_ZZZ_TB(sve2_uaddl_d, uint64_t, uint32_t, H1_8, H1_4, DO_ADD)
97
98
DO_ZZZ_TB(sve2_usubl_h, uint16_t, uint8_t, H1_2, H1, DO_SUB)
99
DO_ZZZ_TB(sve2_usubl_s, uint32_t, uint16_t, H1_4, H1_2, DO_SUB)
100
-DO_ZZZ_TB(sve2_usubl_d, uint64_t, uint32_t, , H1_4, DO_SUB)
101
+DO_ZZZ_TB(sve2_usubl_d, uint64_t, uint32_t, H1_8, H1_4, DO_SUB)
102
103
DO_ZZZ_TB(sve2_uabdl_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
104
DO_ZZZ_TB(sve2_uabdl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
105
-DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
106
+DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, H1_8, H1_4, DO_ABD)
107
108
DO_ZZZ_TB(sve2_smull_zzz_h, int16_t, int8_t, H1_2, H1, DO_MUL)
109
DO_ZZZ_TB(sve2_smull_zzz_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
110
-DO_ZZZ_TB(sve2_smull_zzz_d, int64_t, int32_t, , H1_4, DO_MUL)
111
+DO_ZZZ_TB(sve2_smull_zzz_d, int64_t, int32_t, H1_8, H1_4, DO_MUL)
112
113
DO_ZZZ_TB(sve2_umull_zzz_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
114
DO_ZZZ_TB(sve2_umull_zzz_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
115
-DO_ZZZ_TB(sve2_umull_zzz_d, uint64_t, uint32_t, , H1_4, DO_MUL)
116
+DO_ZZZ_TB(sve2_umull_zzz_d, uint64_t, uint32_t, H1_8, H1_4, DO_MUL)
117
118
/* Note that the multiply cannot overflow, but the doubling can. */
119
static inline int16_t do_sqdmull_h(int16_t n, int16_t m)
120
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqdmull_d(int64_t n, int64_t m)
121
122
DO_ZZZ_TB(sve2_sqdmull_zzz_h, int16_t, int8_t, H1_2, H1, do_sqdmull_h)
123
DO_ZZZ_TB(sve2_sqdmull_zzz_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
124
-DO_ZZZ_TB(sve2_sqdmull_zzz_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
125
+DO_ZZZ_TB(sve2_sqdmull_zzz_d, int64_t, int32_t, H1_8, H1_4, do_sqdmull_d)
126
127
#undef DO_ZZZ_TB
128
129
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
130
131
DO_ZZZ_WTB(sve2_saddw_h, int16_t, int8_t, H1_2, H1, DO_ADD)
132
DO_ZZZ_WTB(sve2_saddw_s, int32_t, int16_t, H1_4, H1_2, DO_ADD)
133
-DO_ZZZ_WTB(sve2_saddw_d, int64_t, int32_t, , H1_4, DO_ADD)
134
+DO_ZZZ_WTB(sve2_saddw_d, int64_t, int32_t, H1_8, H1_4, DO_ADD)
135
136
DO_ZZZ_WTB(sve2_ssubw_h, int16_t, int8_t, H1_2, H1, DO_SUB)
137
DO_ZZZ_WTB(sve2_ssubw_s, int32_t, int16_t, H1_4, H1_2, DO_SUB)
138
-DO_ZZZ_WTB(sve2_ssubw_d, int64_t, int32_t, , H1_4, DO_SUB)
139
+DO_ZZZ_WTB(sve2_ssubw_d, int64_t, int32_t, H1_8, H1_4, DO_SUB)
140
141
DO_ZZZ_WTB(sve2_uaddw_h, uint16_t, uint8_t, H1_2, H1, DO_ADD)
142
DO_ZZZ_WTB(sve2_uaddw_s, uint32_t, uint16_t, H1_4, H1_2, DO_ADD)
143
-DO_ZZZ_WTB(sve2_uaddw_d, uint64_t, uint32_t, , H1_4, DO_ADD)
144
+DO_ZZZ_WTB(sve2_uaddw_d, uint64_t, uint32_t, H1_8, H1_4, DO_ADD)
145
146
DO_ZZZ_WTB(sve2_usubw_h, uint16_t, uint8_t, H1_2, H1, DO_SUB)
147
DO_ZZZ_WTB(sve2_usubw_s, uint32_t, uint16_t, H1_4, H1_2, DO_SUB)
148
-DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
149
+DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, H1_8, H1_4, DO_SUB)
150
151
#undef DO_ZZZ_WTB
152
153
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
154
DO_ZZZ_NTB(sve2_eoril_b, uint8_t, H1, DO_EOR)
155
DO_ZZZ_NTB(sve2_eoril_h, uint16_t, H1_2, DO_EOR)
156
DO_ZZZ_NTB(sve2_eoril_s, uint32_t, H1_4, DO_EOR)
157
-DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
158
+DO_ZZZ_NTB(sve2_eoril_d, uint64_t, H1_8, DO_EOR)
159
160
#undef DO_ZZZ_NTB
161
162
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
163
164
DO_ZZZW_ACC(sve2_sabal_h, int16_t, int8_t, H1_2, H1, DO_ABD)
165
DO_ZZZW_ACC(sve2_sabal_s, int32_t, int16_t, H1_4, H1_2, DO_ABD)
166
-DO_ZZZW_ACC(sve2_sabal_d, int64_t, int32_t, , H1_4, DO_ABD)
167
+DO_ZZZW_ACC(sve2_sabal_d, int64_t, int32_t, H1_8, H1_4, DO_ABD)
168
169
DO_ZZZW_ACC(sve2_uabal_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
170
DO_ZZZW_ACC(sve2_uabal_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
171
-DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
172
+DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, H1_8, H1_4, DO_ABD)
173
174
DO_ZZZW_ACC(sve2_smlal_zzzw_h, int16_t, int8_t, H1_2, H1, DO_MUL)
175
DO_ZZZW_ACC(sve2_smlal_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
176
-DO_ZZZW_ACC(sve2_smlal_zzzw_d, int64_t, int32_t, , H1_4, DO_MUL)
177
+DO_ZZZW_ACC(sve2_smlal_zzzw_d, int64_t, int32_t, H1_8, H1_4, DO_MUL)
178
179
DO_ZZZW_ACC(sve2_umlal_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
180
DO_ZZZW_ACC(sve2_umlal_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
181
-DO_ZZZW_ACC(sve2_umlal_zzzw_d, uint64_t, uint32_t, , H1_4, DO_MUL)
182
+DO_ZZZW_ACC(sve2_umlal_zzzw_d, uint64_t, uint32_t, H1_8, H1_4, DO_MUL)
183
184
#define DO_NMUL(N, M) -(N * M)
185
186
DO_ZZZW_ACC(sve2_smlsl_zzzw_h, int16_t, int8_t, H1_2, H1, DO_NMUL)
187
DO_ZZZW_ACC(sve2_smlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_NMUL)
188
-DO_ZZZW_ACC(sve2_smlsl_zzzw_d, int64_t, int32_t, , H1_4, DO_NMUL)
189
+DO_ZZZW_ACC(sve2_smlsl_zzzw_d, int64_t, int32_t, H1_8, H1_4, DO_NMUL)
190
191
DO_ZZZW_ACC(sve2_umlsl_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_NMUL)
192
DO_ZZZW_ACC(sve2_umlsl_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_NMUL)
193
-DO_ZZZW_ACC(sve2_umlsl_zzzw_d, uint64_t, uint32_t, , H1_4, DO_NMUL)
194
+DO_ZZZW_ACC(sve2_umlsl_zzzw_d, uint64_t, uint32_t, H1_8, H1_4, DO_NMUL)
195
196
#undef DO_ZZZW_ACC
197
198
@@ -XXX,XX +XXX,XX @@ DO_SQDMLAL(sve2_sqdmlal_zzzw_h, int16_t, int8_t, H1_2, H1,
199
do_sqdmull_h, DO_SQADD_H)
200
DO_SQDMLAL(sve2_sqdmlal_zzzw_s, int32_t, int16_t, H1_4, H1_2,
201
do_sqdmull_s, DO_SQADD_S)
202
-DO_SQDMLAL(sve2_sqdmlal_zzzw_d, int64_t, int32_t, , H1_4,
203
+DO_SQDMLAL(sve2_sqdmlal_zzzw_d, int64_t, int32_t, H1_8, H1_4,
204
do_sqdmull_d, do_sqadd_d)
205
206
DO_SQDMLAL(sve2_sqdmlsl_zzzw_h, int16_t, int8_t, H1_2, H1,
207
do_sqdmull_h, DO_SQSUB_H)
208
DO_SQDMLAL(sve2_sqdmlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2,
209
do_sqdmull_s, DO_SQSUB_S)
210
-DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, , H1_4,
211
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, H1_8, H1_4,
212
do_sqdmull_d, do_sqsub_d)
213
214
#undef DO_SQDMLAL
215
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
216
DO_CMLA_FUNC(sve2_cmla_zzzz_b, uint8_t, H1, DO_CMLA)
217
DO_CMLA_FUNC(sve2_cmla_zzzz_h, uint16_t, H2, DO_CMLA)
218
DO_CMLA_FUNC(sve2_cmla_zzzz_s, uint32_t, H4, DO_CMLA)
219
-DO_CMLA_FUNC(sve2_cmla_zzzz_d, uint64_t, , DO_CMLA)
220
+DO_CMLA_FUNC(sve2_cmla_zzzz_d, uint64_t, H8, DO_CMLA)
221
222
#define DO_SQRDMLAH_B(N, M, A, S) \
223
do_sqrdmlah_b(N, M, A, S, true)
224
@@ -XXX,XX +XXX,XX @@ DO_CMLA_FUNC(sve2_cmla_zzzz_d, uint64_t, , DO_CMLA)
225
DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_b, int8_t, H1, DO_SQRDMLAH_B)
226
DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_h, int16_t, H2, DO_SQRDMLAH_H)
227
DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_s, int32_t, H4, DO_SQRDMLAH_S)
228
-DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_d, int64_t, , DO_SQRDMLAH_D)
229
+DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_d, int64_t, H8, DO_SQRDMLAH_D)
230
231
#define DO_CMLA_IDX_FUNC(NAME, TYPE, H, OP) \
232
void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
233
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
234
235
DO_ZZXZ(sve2_sqrdmlah_idx_h, int16_t, H2, DO_SQRDMLAH_H)
236
DO_ZZXZ(sve2_sqrdmlah_idx_s, int32_t, H4, DO_SQRDMLAH_S)
237
-DO_ZZXZ(sve2_sqrdmlah_idx_d, int64_t, , DO_SQRDMLAH_D)
238
+DO_ZZXZ(sve2_sqrdmlah_idx_d, int64_t, H8, DO_SQRDMLAH_D)
239
240
#define DO_SQRDMLSH_H(N, M, A) \
241
({ uint32_t discard; do_sqrdmlah_h(N, M, A, true, true, &discard); })
242
@@ -XXX,XX +XXX,XX @@ DO_ZZXZ(sve2_sqrdmlah_idx_d, int64_t, , DO_SQRDMLAH_D)
243
244
DO_ZZXZ(sve2_sqrdmlsh_idx_h, int16_t, H2, DO_SQRDMLSH_H)
245
DO_ZZXZ(sve2_sqrdmlsh_idx_s, int32_t, H4, DO_SQRDMLSH_S)
246
-DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t, , DO_SQRDMLSH_D)
247
+DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t, H8, DO_SQRDMLSH_D)
248
249
#undef DO_ZZXZ
250
251
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
252
#define DO_MLA(N, M, A) (A + N * M)
253
254
DO_ZZXW(sve2_smlal_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MLA)
255
-DO_ZZXW(sve2_smlal_idx_d, int64_t, int32_t, , H1_4, DO_MLA)
256
+DO_ZZXW(sve2_smlal_idx_d, int64_t, int32_t, H1_8, H1_4, DO_MLA)
257
DO_ZZXW(sve2_umlal_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MLA)
258
-DO_ZZXW(sve2_umlal_idx_d, uint64_t, uint32_t, , H1_4, DO_MLA)
259
+DO_ZZXW(sve2_umlal_idx_d, uint64_t, uint32_t, H1_8, H1_4, DO_MLA)
260
261
#define DO_MLS(N, M, A) (A - N * M)
262
263
DO_ZZXW(sve2_smlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MLS)
264
-DO_ZZXW(sve2_smlsl_idx_d, int64_t, int32_t, , H1_4, DO_MLS)
265
+DO_ZZXW(sve2_smlsl_idx_d, int64_t, int32_t, H1_8, H1_4, DO_MLS)
266
DO_ZZXW(sve2_umlsl_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MLS)
267
-DO_ZZXW(sve2_umlsl_idx_d, uint64_t, uint32_t, , H1_4, DO_MLS)
268
+DO_ZZXW(sve2_umlsl_idx_d, uint64_t, uint32_t, H1_8, H1_4, DO_MLS)
269
270
#define DO_SQDMLAL_S(N, M, A) DO_SQADD_S(A, do_sqdmull_s(N, M))
271
#define DO_SQDMLAL_D(N, M, A) do_sqadd_d(A, do_sqdmull_d(N, M))
272
273
DO_ZZXW(sve2_sqdmlal_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLAL_S)
274
-DO_ZZXW(sve2_sqdmlal_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLAL_D)
275
+DO_ZZXW(sve2_sqdmlal_idx_d, int64_t, int32_t, H1_8, H1_4, DO_SQDMLAL_D)
276
277
#define DO_SQDMLSL_S(N, M, A) DO_SQSUB_S(A, do_sqdmull_s(N, M))
278
#define DO_SQDMLSL_D(N, M, A) do_sqsub_d(A, do_sqdmull_d(N, M))
279
280
DO_ZZXW(sve2_sqdmlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLSL_S)
281
-DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)
282
+DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, H1_8, H1_4, DO_SQDMLSL_D)
283
284
#undef DO_MLA
285
#undef DO_MLS
286
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
287
}
288
289
DO_ZZX(sve2_sqdmull_idx_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
290
-DO_ZZX(sve2_sqdmull_idx_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
291
+DO_ZZX(sve2_sqdmull_idx_d, int64_t, int32_t, H1_8, H1_4, do_sqdmull_d)
292
293
DO_ZZX(sve2_smull_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
294
-DO_ZZX(sve2_smull_idx_d, int64_t, int32_t, , H1_4, DO_MUL)
295
+DO_ZZX(sve2_smull_idx_d, int64_t, int32_t, H1_8, H1_4, DO_MUL)
296
297
DO_ZZX(sve2_umull_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
298
-DO_ZZX(sve2_umull_idx_d, uint64_t, uint32_t, , H1_4, DO_MUL)
299
+DO_ZZX(sve2_umull_idx_d, uint64_t, uint32_t, H1_8, H1_4, DO_MUL)
300
301
#undef DO_ZZX
302
303
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
304
DO_CADD(sve2_cadd_b, int8_t, H1, DO_ADD, DO_SUB)
305
DO_CADD(sve2_cadd_h, int16_t, H1_2, DO_ADD, DO_SUB)
306
DO_CADD(sve2_cadd_s, int32_t, H1_4, DO_ADD, DO_SUB)
307
-DO_CADD(sve2_cadd_d, int64_t, , DO_ADD, DO_SUB)
308
+DO_CADD(sve2_cadd_d, int64_t, H1_8, DO_ADD, DO_SUB)
309
310
DO_CADD(sve2_sqcadd_b, int8_t, H1, DO_SQADD_B, DO_SQSUB_B)
311
DO_CADD(sve2_sqcadd_h, int16_t, H1_2, DO_SQADD_H, DO_SQSUB_H)
312
DO_CADD(sve2_sqcadd_s, int32_t, H1_4, DO_SQADD_S, DO_SQSUB_S)
313
-DO_CADD(sve2_sqcadd_d, int64_t, , do_sqadd_d, do_sqsub_d)
314
+DO_CADD(sve2_sqcadd_d, int64_t, H1_8, do_sqadd_d, do_sqsub_d)
315
316
#undef DO_CADD
317
318
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
319
320
DO_ZZI_SHLL(sve2_sshll_h, int16_t, int8_t, H1_2, H1)
321
DO_ZZI_SHLL(sve2_sshll_s, int32_t, int16_t, H1_4, H1_2)
322
-DO_ZZI_SHLL(sve2_sshll_d, int64_t, int32_t, , H1_4)
323
+DO_ZZI_SHLL(sve2_sshll_d, int64_t, int32_t, H1_8, H1_4)
324
325
DO_ZZI_SHLL(sve2_ushll_h, uint16_t, uint8_t, H1_2, H1)
326
DO_ZZI_SHLL(sve2_ushll_s, uint32_t, uint16_t, H1_4, H1_2)
327
-DO_ZZI_SHLL(sve2_ushll_d, uint64_t, uint32_t, , H1_4)
328
+DO_ZZI_SHLL(sve2_ushll_d, uint64_t, uint32_t, H1_8, H1_4)
329
330
#undef DO_ZZI_SHLL
331
332
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_shrnb_d, uint64_t, uint32_t, DO_SHR)
333
334
DO_SHRNT(sve2_shrnt_h, uint16_t, uint8_t, H1_2, H1, DO_SHR)
335
DO_SHRNT(sve2_shrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_SHR)
336
-DO_SHRNT(sve2_shrnt_d, uint64_t, uint32_t, , H1_4, DO_SHR)
337
+DO_SHRNT(sve2_shrnt_d, uint64_t, uint32_t, H1_8, H1_4, DO_SHR)
338
339
DO_SHRNB(sve2_rshrnb_h, uint16_t, uint8_t, do_urshr)
340
DO_SHRNB(sve2_rshrnb_s, uint32_t, uint16_t, do_urshr)
341
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_rshrnb_d, uint64_t, uint32_t, do_urshr)
342
343
DO_SHRNT(sve2_rshrnt_h, uint16_t, uint8_t, H1_2, H1, do_urshr)
344
DO_SHRNT(sve2_rshrnt_s, uint32_t, uint16_t, H1_4, H1_2, do_urshr)
345
-DO_SHRNT(sve2_rshrnt_d, uint64_t, uint32_t, , H1_4, do_urshr)
346
+DO_SHRNT(sve2_rshrnt_d, uint64_t, uint32_t, H1_8, H1_4, do_urshr)
347
348
#define DO_SQSHRUN_H(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT8_MAX)
349
#define DO_SQSHRUN_S(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT16_MAX)
350
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_sqshrunb_d, int64_t, uint32_t, DO_SQSHRUN_D)
351
352
DO_SHRNT(sve2_sqshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRUN_H)
353
DO_SHRNT(sve2_sqshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRUN_S)
354
-DO_SHRNT(sve2_sqshrunt_d, int64_t, uint32_t, , H1_4, DO_SQSHRUN_D)
355
+DO_SHRNT(sve2_sqshrunt_d, int64_t, uint32_t, H1_8, H1_4, DO_SQSHRUN_D)
356
357
#define DO_SQRSHRUN_H(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT8_MAX)
358
#define DO_SQRSHRUN_S(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT16_MAX)
359
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_sqrshrunb_d, int64_t, uint32_t, DO_SQRSHRUN_D)
360
361
DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
362
DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
363
-DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
364
+DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, H1_8, H1_4, DO_SQRSHRUN_D)
365
366
#define DO_SQSHRN_H(x, sh) do_sat_bhs(x >> sh, INT8_MIN, INT8_MAX)
367
#define DO_SQSHRN_S(x, sh) do_sat_bhs(x >> sh, INT16_MIN, INT16_MAX)
368
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_sqshrnb_d, int64_t, uint32_t, DO_SQSHRN_D)
369
370
DO_SHRNT(sve2_sqshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRN_H)
371
DO_SHRNT(sve2_sqshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRN_S)
372
-DO_SHRNT(sve2_sqshrnt_d, int64_t, uint32_t, , H1_4, DO_SQSHRN_D)
373
+DO_SHRNT(sve2_sqshrnt_d, int64_t, uint32_t, H1_8, H1_4, DO_SQSHRN_D)
374
375
#define DO_SQRSHRN_H(x, sh) do_sat_bhs(do_srshr(x, sh), INT8_MIN, INT8_MAX)
376
#define DO_SQRSHRN_S(x, sh) do_sat_bhs(do_srshr(x, sh), INT16_MIN, INT16_MAX)
377
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_sqrshrnb_d, int64_t, uint32_t, DO_SQRSHRN_D)
378
379
DO_SHRNT(sve2_sqrshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRN_H)
380
DO_SHRNT(sve2_sqrshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRN_S)
381
-DO_SHRNT(sve2_sqrshrnt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRN_D)
382
+DO_SHRNT(sve2_sqrshrnt_d, int64_t, uint32_t, H1_8, H1_4, DO_SQRSHRN_D)
383
384
#define DO_UQSHRN_H(x, sh) MIN(x >> sh, UINT8_MAX)
385
#define DO_UQSHRN_S(x, sh) MIN(x >> sh, UINT16_MAX)
386
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_uqshrnb_d, uint64_t, uint32_t, DO_UQSHRN_D)
387
388
DO_SHRNT(sve2_uqshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQSHRN_H)
389
DO_SHRNT(sve2_uqshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQSHRN_S)
390
-DO_SHRNT(sve2_uqshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQSHRN_D)
391
+DO_SHRNT(sve2_uqshrnt_d, uint64_t, uint32_t, H1_8, H1_4, DO_UQSHRN_D)
392
393
#define DO_UQRSHRN_H(x, sh) MIN(do_urshr(x, sh), UINT8_MAX)
394
#define DO_UQRSHRN_S(x, sh) MIN(do_urshr(x, sh), UINT16_MAX)
395
@@ -XXX,XX +XXX,XX @@ DO_SHRNB(sve2_uqrshrnb_d, uint64_t, uint32_t, DO_UQRSHRN_D)
396
397
DO_SHRNT(sve2_uqrshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQRSHRN_H)
398
DO_SHRNT(sve2_uqrshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQRSHRN_S)
399
-DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQRSHRN_D)
400
+DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, H1_8, H1_4, DO_UQRSHRN_D)
401
402
#undef DO_SHRNB
403
#undef DO_SHRNT
404
@@ -XXX,XX +XXX,XX @@ DO_BINOPNB(sve2_addhnb_d, uint64_t, uint32_t, 32, DO_ADDHN)
405
406
DO_BINOPNT(sve2_addhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_ADDHN)
407
DO_BINOPNT(sve2_addhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_ADDHN)
408
-DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_ADDHN)
409
+DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, H1_8, H1_4, DO_ADDHN)
410
411
DO_BINOPNB(sve2_raddhnb_h, uint16_t, uint8_t, 8, DO_RADDHN)
412
DO_BINOPNB(sve2_raddhnb_s, uint32_t, uint16_t, 16, DO_RADDHN)
413
@@ -XXX,XX +XXX,XX @@ DO_BINOPNB(sve2_raddhnb_d, uint64_t, uint32_t, 32, DO_RADDHN)
414
415
DO_BINOPNT(sve2_raddhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RADDHN)
416
DO_BINOPNT(sve2_raddhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RADDHN)
417
-DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RADDHN)
418
+DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, H1_8, H1_4, DO_RADDHN)
419
420
DO_BINOPNB(sve2_subhnb_h, uint16_t, uint8_t, 8, DO_SUBHN)
421
DO_BINOPNB(sve2_subhnb_s, uint32_t, uint16_t, 16, DO_SUBHN)
422
@@ -XXX,XX +XXX,XX @@ DO_BINOPNB(sve2_subhnb_d, uint64_t, uint32_t, 32, DO_SUBHN)
423
424
DO_BINOPNT(sve2_subhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_SUBHN)
425
DO_BINOPNT(sve2_subhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_SUBHN)
426
-DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_SUBHN)
427
+DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, H1_8, H1_4, DO_SUBHN)
428
429
DO_BINOPNB(sve2_rsubhnb_h, uint16_t, uint8_t, 8, DO_RSUBHN)
430
DO_BINOPNB(sve2_rsubhnb_s, uint32_t, uint16_t, 16, DO_RSUBHN)
431
@@ -XXX,XX +XXX,XX @@ DO_BINOPNB(sve2_rsubhnb_d, uint64_t, uint32_t, 32, DO_RSUBHN)
432
433
DO_BINOPNT(sve2_rsubhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RSUBHN)
434
DO_BINOPNT(sve2_rsubhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RSUBHN)
435
-DO_BINOPNT(sve2_rsubhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RSUBHN)
436
+DO_BINOPNT(sve2_rsubhnt_d, uint64_t, uint32_t, 32, H1_8, H1_4, DO_RSUBHN)
437
438
#undef DO_RSUBHN
439
#undef DO_SUBHN
440
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc) \
441
DO_INSR(sve_insr_b, uint8_t, H1)
442
DO_INSR(sve_insr_h, uint16_t, H1_2)
443
DO_INSR(sve_insr_s, uint32_t, H1_4)
444
-DO_INSR(sve_insr_d, uint64_t, )
445
+DO_INSR(sve_insr_d, uint64_t, H1_8)
446
447
#undef DO_INSR
448
449
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_tbx_##SUFF)(void *vd, void *vn, void *vm, uint32_t desc) \
450
DO_TB(b, uint8_t, H1)
451
DO_TB(h, uint16_t, H2)
452
DO_TB(s, uint32_t, H4)
453
-DO_TB(d, uint64_t, )
454
+DO_TB(d, uint64_t, H8)
455
456
#undef DO_TB
457
458
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
459
460
DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
461
DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
462
-DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
463
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, H8, H4)
464
465
DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
466
DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
467
-DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
468
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, H8, H4)
469
470
#undef DO_UNPK
471
472
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
473
DO_ZIP(sve_zip_b, uint8_t, H1)
474
DO_ZIP(sve_zip_h, uint16_t, H1_2)
475
DO_ZIP(sve_zip_s, uint32_t, H1_4)
476
-DO_ZIP(sve_zip_d, uint64_t, )
477
+DO_ZIP(sve_zip_d, uint64_t, H1_8)
478
DO_ZIP(sve2_zip_q, Int128, )
479
480
#define DO_UZP(NAME, TYPE, H) \
481
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
482
DO_UZP(sve_uzp_b, uint8_t, H1)
483
DO_UZP(sve_uzp_h, uint16_t, H1_2)
484
DO_UZP(sve_uzp_s, uint32_t, H1_4)
485
-DO_UZP(sve_uzp_d, uint64_t, )
486
+DO_UZP(sve_uzp_d, uint64_t, H1_8)
487
DO_UZP(sve2_uzp_q, Int128, )
488
489
#define DO_TRN(NAME, TYPE, H) \
490
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
491
DO_TRN(sve_trn_b, uint8_t, H1)
492
DO_TRN(sve_trn_h, uint16_t, H1_2)
493
DO_TRN(sve_trn_s, uint32_t, H1_4)
494
-DO_TRN(sve_trn_d, uint64_t, )
495
+DO_TRN(sve_trn_d, uint64_t, H1_8)
496
DO_TRN(sve2_trn_q, Int128, )
497
498
#undef DO_ZIP
499
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
500
#define DO_CMP_PPZZ_S(NAME, TYPE, OP) \
501
DO_CMP_PPZZ(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
502
#define DO_CMP_PPZZ_D(NAME, TYPE, OP) \
503
- DO_CMP_PPZZ(NAME, TYPE, OP, , 0x0101010101010101ull)
504
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1_8, 0x0101010101010101ull)
505
506
DO_CMP_PPZZ_B(sve_cmpeq_ppzz_b, uint8_t, ==)
507
DO_CMP_PPZZ_H(sve_cmpeq_ppzz_h, uint16_t, ==)
508
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
509
#define DO_CMP_PPZI_S(NAME, TYPE, OP) \
510
DO_CMP_PPZI(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
511
#define DO_CMP_PPZI_D(NAME, TYPE, OP) \
512
- DO_CMP_PPZI(NAME, TYPE, OP, , 0x0101010101010101ull)
513
+ DO_CMP_PPZI(NAME, TYPE, OP, H1_8, 0x0101010101010101ull)
514
515
DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t, ==)
516
DO_CMP_PPZI_H(sve_cmpeq_ppzi_h, uint16_t, ==)
517
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc) \
518
519
DO_REDUCE(sve_faddv_h, float16, H1_2, add, float16_zero)
520
DO_REDUCE(sve_faddv_s, float32, H1_4, add, float32_zero)
521
-DO_REDUCE(sve_faddv_d, float64, , add, float64_zero)
522
+DO_REDUCE(sve_faddv_d, float64, H1_8, add, float64_zero)
523
524
/* Identity is floatN_default_nan, without the function call. */
525
DO_REDUCE(sve_fminnmv_h, float16, H1_2, minnum, 0x7E00)
526
DO_REDUCE(sve_fminnmv_s, float32, H1_4, minnum, 0x7FC00000)
527
-DO_REDUCE(sve_fminnmv_d, float64, , minnum, 0x7FF8000000000000ULL)
528
+DO_REDUCE(sve_fminnmv_d, float64, H1_8, minnum, 0x7FF8000000000000ULL)
529
530
DO_REDUCE(sve_fmaxnmv_h, float16, H1_2, maxnum, 0x7E00)
531
DO_REDUCE(sve_fmaxnmv_s, float32, H1_4, maxnum, 0x7FC00000)
532
-DO_REDUCE(sve_fmaxnmv_d, float64, , maxnum, 0x7FF8000000000000ULL)
533
+DO_REDUCE(sve_fmaxnmv_d, float64, H1_8, maxnum, 0x7FF8000000000000ULL)
534
535
DO_REDUCE(sve_fminv_h, float16, H1_2, min, float16_infinity)
536
DO_REDUCE(sve_fminv_s, float32, H1_4, min, float32_infinity)
537
-DO_REDUCE(sve_fminv_d, float64, , min, float64_infinity)
538
+DO_REDUCE(sve_fminv_d, float64, H1_8, min, float64_infinity)
539
540
DO_REDUCE(sve_fmaxv_h, float16, H1_2, max, float16_chs(float16_infinity))
541
DO_REDUCE(sve_fmaxv_s, float32, H1_4, max, float32_chs(float32_infinity))
542
-DO_REDUCE(sve_fmaxv_d, float64, , max, float64_chs(float64_infinity))
543
+DO_REDUCE(sve_fmaxv_d, float64, H1_8, max, float64_chs(float64_infinity))
544
545
#undef DO_REDUCE
546
547
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
548
549
DO_ZPZZ_FP(sve_fadd_h, uint16_t, H1_2, float16_add)
550
DO_ZPZZ_FP(sve_fadd_s, uint32_t, H1_4, float32_add)
551
-DO_ZPZZ_FP(sve_fadd_d, uint64_t, , float64_add)
552
+DO_ZPZZ_FP(sve_fadd_d, uint64_t, H1_8, float64_add)
553
554
DO_ZPZZ_FP(sve_fsub_h, uint16_t, H1_2, float16_sub)
555
DO_ZPZZ_FP(sve_fsub_s, uint32_t, H1_4, float32_sub)
556
-DO_ZPZZ_FP(sve_fsub_d, uint64_t, , float64_sub)
557
+DO_ZPZZ_FP(sve_fsub_d, uint64_t, H1_8, float64_sub)
558
559
DO_ZPZZ_FP(sve_fmul_h, uint16_t, H1_2, float16_mul)
560
DO_ZPZZ_FP(sve_fmul_s, uint32_t, H1_4, float32_mul)
561
-DO_ZPZZ_FP(sve_fmul_d, uint64_t, , float64_mul)
562
+DO_ZPZZ_FP(sve_fmul_d, uint64_t, H1_8, float64_mul)
563
564
DO_ZPZZ_FP(sve_fdiv_h, uint16_t, H1_2, float16_div)
565
DO_ZPZZ_FP(sve_fdiv_s, uint32_t, H1_4, float32_div)
566
-DO_ZPZZ_FP(sve_fdiv_d, uint64_t, , float64_div)
567
+DO_ZPZZ_FP(sve_fdiv_d, uint64_t, H1_8, float64_div)
568
569
DO_ZPZZ_FP(sve_fmin_h, uint16_t, H1_2, float16_min)
570
DO_ZPZZ_FP(sve_fmin_s, uint32_t, H1_4, float32_min)
571
-DO_ZPZZ_FP(sve_fmin_d, uint64_t, , float64_min)
572
+DO_ZPZZ_FP(sve_fmin_d, uint64_t, H1_8, float64_min)
573
574
DO_ZPZZ_FP(sve_fmax_h, uint16_t, H1_2, float16_max)
575
DO_ZPZZ_FP(sve_fmax_s, uint32_t, H1_4, float32_max)
576
-DO_ZPZZ_FP(sve_fmax_d, uint64_t, , float64_max)
577
+DO_ZPZZ_FP(sve_fmax_d, uint64_t, H1_8, float64_max)
578
579
DO_ZPZZ_FP(sve_fminnum_h, uint16_t, H1_2, float16_minnum)
580
DO_ZPZZ_FP(sve_fminnum_s, uint32_t, H1_4, float32_minnum)
581
-DO_ZPZZ_FP(sve_fminnum_d, uint64_t, , float64_minnum)
582
+DO_ZPZZ_FP(sve_fminnum_d, uint64_t, H1_8, float64_minnum)
583
584
DO_ZPZZ_FP(sve_fmaxnum_h, uint16_t, H1_2, float16_maxnum)
585
DO_ZPZZ_FP(sve_fmaxnum_s, uint32_t, H1_4, float32_maxnum)
586
-DO_ZPZZ_FP(sve_fmaxnum_d, uint64_t, , float64_maxnum)
587
+DO_ZPZZ_FP(sve_fmaxnum_d, uint64_t, H1_8, float64_maxnum)
588
589
static inline float16 abd_h(float16 a, float16 b, float_status *s)
590
{
591
@@ -XXX,XX +XXX,XX @@ static inline float64 abd_d(float64 a, float64 b, float_status *s)
592
593
DO_ZPZZ_FP(sve_fabd_h, uint16_t, H1_2, abd_h)
594
DO_ZPZZ_FP(sve_fabd_s, uint32_t, H1_4, abd_s)
595
-DO_ZPZZ_FP(sve_fabd_d, uint64_t, , abd_d)
596
+DO_ZPZZ_FP(sve_fabd_d, uint64_t, H1_8, abd_d)
597
598
static inline float64 scalbn_d(float64 a, int64_t b, float_status *s)
599
{
600
@@ -XXX,XX +XXX,XX @@ static inline float64 scalbn_d(float64 a, int64_t b, float_status *s)
601
602
DO_ZPZZ_FP(sve_fscalbn_h, int16_t, H1_2, float16_scalbn)
603
DO_ZPZZ_FP(sve_fscalbn_s, int32_t, H1_4, float32_scalbn)
604
-DO_ZPZZ_FP(sve_fscalbn_d, int64_t, , scalbn_d)
605
+DO_ZPZZ_FP(sve_fscalbn_d, int64_t, H1_8, scalbn_d)
606
607
DO_ZPZZ_FP(sve_fmulx_h, uint16_t, H1_2, helper_advsimd_mulxh)
608
DO_ZPZZ_FP(sve_fmulx_s, uint32_t, H1_4, helper_vfp_mulxs)
609
-DO_ZPZZ_FP(sve_fmulx_d, uint64_t, , helper_vfp_mulxd)
610
+DO_ZPZZ_FP(sve_fmulx_d, uint64_t, H1_8, helper_vfp_mulxd)
611
612
#undef DO_ZPZZ_FP
613
614
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, uint64_t scalar, \
615
616
DO_ZPZS_FP(sve_fadds_h, float16, H1_2, float16_add)
617
DO_ZPZS_FP(sve_fadds_s, float32, H1_4, float32_add)
618
-DO_ZPZS_FP(sve_fadds_d, float64, , float64_add)
619
+DO_ZPZS_FP(sve_fadds_d, float64, H1_8, float64_add)
620
621
DO_ZPZS_FP(sve_fsubs_h, float16, H1_2, float16_sub)
622
DO_ZPZS_FP(sve_fsubs_s, float32, H1_4, float32_sub)
623
-DO_ZPZS_FP(sve_fsubs_d, float64, , float64_sub)
624
+DO_ZPZS_FP(sve_fsubs_d, float64, H1_8, float64_sub)
625
626
DO_ZPZS_FP(sve_fmuls_h, float16, H1_2, float16_mul)
627
DO_ZPZS_FP(sve_fmuls_s, float32, H1_4, float32_mul)
628
-DO_ZPZS_FP(sve_fmuls_d, float64, , float64_mul)
629
+DO_ZPZS_FP(sve_fmuls_d, float64, H1_8, float64_mul)
630
631
static inline float16 subr_h(float16 a, float16 b, float_status *s)
632
{
633
@@ -XXX,XX +XXX,XX @@ static inline float64 subr_d(float64 a, float64 b, float_status *s)
634
635
DO_ZPZS_FP(sve_fsubrs_h, float16, H1_2, subr_h)
636
DO_ZPZS_FP(sve_fsubrs_s, float32, H1_4, subr_s)
637
-DO_ZPZS_FP(sve_fsubrs_d, float64, , subr_d)
638
+DO_ZPZS_FP(sve_fsubrs_d, float64, H1_8, subr_d)
639
640
DO_ZPZS_FP(sve_fmaxnms_h, float16, H1_2, float16_maxnum)
641
DO_ZPZS_FP(sve_fmaxnms_s, float32, H1_4, float32_maxnum)
642
-DO_ZPZS_FP(sve_fmaxnms_d, float64, , float64_maxnum)
643
+DO_ZPZS_FP(sve_fmaxnms_d, float64, H1_8, float64_maxnum)
644
645
DO_ZPZS_FP(sve_fminnms_h, float16, H1_2, float16_minnum)
646
DO_ZPZS_FP(sve_fminnms_s, float32, H1_4, float32_minnum)
647
-DO_ZPZS_FP(sve_fminnms_d, float64, , float64_minnum)
648
+DO_ZPZS_FP(sve_fminnms_d, float64, H1_8, float64_minnum)
649
650
DO_ZPZS_FP(sve_fmaxs_h, float16, H1_2, float16_max)
651
DO_ZPZS_FP(sve_fmaxs_s, float32, H1_4, float32_max)
652
-DO_ZPZS_FP(sve_fmaxs_d, float64, , float64_max)
653
+DO_ZPZS_FP(sve_fmaxs_d, float64, H1_8, float64_max)
654
655
DO_ZPZS_FP(sve_fmins_h, float16, H1_2, float16_min)
656
DO_ZPZS_FP(sve_fmins_s, float32, H1_4, float32_min)
657
-DO_ZPZS_FP(sve_fmins_d, float64, , float64_min)
658
+DO_ZPZS_FP(sve_fmins_d, float64, H1_8, float64_min)
659
660
/* Fully general two-operand expander, controlled by a predicate,
661
* With the extra float_status parameter.
662
@@ -XXX,XX +XXX,XX @@ static inline uint64_t vfp_float64_to_uint64_rtz(float64 f, float_status *s)
663
DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
664
DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
665
DO_ZPZ_FP(sve_bfcvt, uint32_t, H1_4, float32_to_bfloat16)
666
-DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
667
-DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
668
-DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
669
-DO_ZPZ_FP(sve_fcvt_sd, uint64_t, , float32_to_float64)
670
+DO_ZPZ_FP(sve_fcvt_dh, uint64_t, H1_8, sve_f64_to_f16)
671
+DO_ZPZ_FP(sve_fcvt_hd, uint64_t, H1_8, sve_f16_to_f64)
672
+DO_ZPZ_FP(sve_fcvt_ds, uint64_t, H1_8, float64_to_float32)
673
+DO_ZPZ_FP(sve_fcvt_sd, uint64_t, H1_8, float32_to_float64)
674
675
DO_ZPZ_FP(sve_fcvtzs_hh, uint16_t, H1_2, vfp_float16_to_int16_rtz)
676
DO_ZPZ_FP(sve_fcvtzs_hs, uint32_t, H1_4, helper_vfp_tosizh)
677
DO_ZPZ_FP(sve_fcvtzs_ss, uint32_t, H1_4, helper_vfp_tosizs)
678
-DO_ZPZ_FP(sve_fcvtzs_hd, uint64_t, , vfp_float16_to_int64_rtz)
679
-DO_ZPZ_FP(sve_fcvtzs_sd, uint64_t, , vfp_float32_to_int64_rtz)
680
-DO_ZPZ_FP(sve_fcvtzs_ds, uint64_t, , helper_vfp_tosizd)
681
-DO_ZPZ_FP(sve_fcvtzs_dd, uint64_t, , vfp_float64_to_int64_rtz)
682
+DO_ZPZ_FP(sve_fcvtzs_hd, uint64_t, H1_8, vfp_float16_to_int64_rtz)
683
+DO_ZPZ_FP(sve_fcvtzs_sd, uint64_t, H1_8, vfp_float32_to_int64_rtz)
684
+DO_ZPZ_FP(sve_fcvtzs_ds, uint64_t, H1_8, helper_vfp_tosizd)
685
+DO_ZPZ_FP(sve_fcvtzs_dd, uint64_t, H1_8, vfp_float64_to_int64_rtz)
686
687
DO_ZPZ_FP(sve_fcvtzu_hh, uint16_t, H1_2, vfp_float16_to_uint16_rtz)
688
DO_ZPZ_FP(sve_fcvtzu_hs, uint32_t, H1_4, helper_vfp_touizh)
689
DO_ZPZ_FP(sve_fcvtzu_ss, uint32_t, H1_4, helper_vfp_touizs)
690
-DO_ZPZ_FP(sve_fcvtzu_hd, uint64_t, , vfp_float16_to_uint64_rtz)
691
-DO_ZPZ_FP(sve_fcvtzu_sd, uint64_t, , vfp_float32_to_uint64_rtz)
692
-DO_ZPZ_FP(sve_fcvtzu_ds, uint64_t, , helper_vfp_touizd)
693
-DO_ZPZ_FP(sve_fcvtzu_dd, uint64_t, , vfp_float64_to_uint64_rtz)
694
+DO_ZPZ_FP(sve_fcvtzu_hd, uint64_t, H1_8, vfp_float16_to_uint64_rtz)
695
+DO_ZPZ_FP(sve_fcvtzu_sd, uint64_t, H1_8, vfp_float32_to_uint64_rtz)
696
+DO_ZPZ_FP(sve_fcvtzu_ds, uint64_t, H1_8, helper_vfp_touizd)
697
+DO_ZPZ_FP(sve_fcvtzu_dd, uint64_t, H1_8, vfp_float64_to_uint64_rtz)
698
699
DO_ZPZ_FP(sve_frint_h, uint16_t, H1_2, helper_advsimd_rinth)
700
DO_ZPZ_FP(sve_frint_s, uint32_t, H1_4, helper_rints)
701
-DO_ZPZ_FP(sve_frint_d, uint64_t, , helper_rintd)
702
+DO_ZPZ_FP(sve_frint_d, uint64_t, H1_8, helper_rintd)
703
704
DO_ZPZ_FP(sve_frintx_h, uint16_t, H1_2, float16_round_to_int)
705
DO_ZPZ_FP(sve_frintx_s, uint32_t, H1_4, float32_round_to_int)
706
-DO_ZPZ_FP(sve_frintx_d, uint64_t, , float64_round_to_int)
707
+DO_ZPZ_FP(sve_frintx_d, uint64_t, H1_8, float64_round_to_int)
708
709
DO_ZPZ_FP(sve_frecpx_h, uint16_t, H1_2, helper_frecpx_f16)
710
DO_ZPZ_FP(sve_frecpx_s, uint32_t, H1_4, helper_frecpx_f32)
711
-DO_ZPZ_FP(sve_frecpx_d, uint64_t, , helper_frecpx_f64)
712
+DO_ZPZ_FP(sve_frecpx_d, uint64_t, H1_8, helper_frecpx_f64)
713
714
DO_ZPZ_FP(sve_fsqrt_h, uint16_t, H1_2, float16_sqrt)
715
DO_ZPZ_FP(sve_fsqrt_s, uint32_t, H1_4, float32_sqrt)
716
-DO_ZPZ_FP(sve_fsqrt_d, uint64_t, , float64_sqrt)
717
+DO_ZPZ_FP(sve_fsqrt_d, uint64_t, H1_8, float64_sqrt)
718
719
DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
720
DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
721
DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
722
-DO_ZPZ_FP(sve_scvt_sd, uint64_t, , int32_to_float64)
723
-DO_ZPZ_FP(sve_scvt_dh, uint64_t, , int64_to_float16)
724
-DO_ZPZ_FP(sve_scvt_ds, uint64_t, , int64_to_float32)
725
-DO_ZPZ_FP(sve_scvt_dd, uint64_t, , int64_to_float64)
726
+DO_ZPZ_FP(sve_scvt_sd, uint64_t, H1_8, int32_to_float64)
727
+DO_ZPZ_FP(sve_scvt_dh, uint64_t, H1_8, int64_to_float16)
728
+DO_ZPZ_FP(sve_scvt_ds, uint64_t, H1_8, int64_to_float32)
729
+DO_ZPZ_FP(sve_scvt_dd, uint64_t, H1_8, int64_to_float64)
730
731
DO_ZPZ_FP(sve_ucvt_hh, uint16_t, H1_2, uint16_to_float16)
732
DO_ZPZ_FP(sve_ucvt_sh, uint32_t, H1_4, uint32_to_float16)
733
DO_ZPZ_FP(sve_ucvt_ss, uint32_t, H1_4, uint32_to_float32)
734
-DO_ZPZ_FP(sve_ucvt_sd, uint64_t, , uint32_to_float64)
735
-DO_ZPZ_FP(sve_ucvt_dh, uint64_t, , uint64_to_float16)
736
-DO_ZPZ_FP(sve_ucvt_ds, uint64_t, , uint64_to_float32)
737
-DO_ZPZ_FP(sve_ucvt_dd, uint64_t, , uint64_to_float64)
738
+DO_ZPZ_FP(sve_ucvt_sd, uint64_t, H1_8, uint32_to_float64)
739
+DO_ZPZ_FP(sve_ucvt_dh, uint64_t, H1_8, uint64_to_float16)
740
+DO_ZPZ_FP(sve_ucvt_ds, uint64_t, H1_8, uint64_to_float32)
741
+DO_ZPZ_FP(sve_ucvt_dd, uint64_t, H1_8, uint64_to_float64)
742
743
static int16_t do_float16_logb_as_int(float16 a, float_status *s)
744
{
745
@@ -XXX,XX +XXX,XX @@ static int64_t do_float64_logb_as_int(float64 a, float_status *s)
746
747
DO_ZPZ_FP(flogb_h, float16, H1_2, do_float16_logb_as_int)
748
DO_ZPZ_FP(flogb_s, float32, H1_4, do_float32_logb_as_int)
749
-DO_ZPZ_FP(flogb_d, float64, , do_float64_logb_as_int)
750
+DO_ZPZ_FP(flogb_d, float64, H1_8, do_float64_logb_as_int)
751
752
#undef DO_ZPZ_FP
753
754
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
755
#define DO_FPCMP_PPZZ_S(NAME, OP) \
756
DO_FPCMP_PPZZ(NAME##_s, float32, H1_4, OP)
757
#define DO_FPCMP_PPZZ_D(NAME, OP) \
758
- DO_FPCMP_PPZZ(NAME##_d, float64, , OP)
759
+ DO_FPCMP_PPZZ(NAME##_d, float64, H1_8, OP)
760
761
#define DO_FPCMP_PPZZ_ALL(NAME, OP) \
762
DO_FPCMP_PPZZ_H(NAME, OP) \
763
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, \
764
#define DO_FPCMP_PPZ0_S(NAME, OP) \
765
DO_FPCMP_PPZ0(NAME##_s, float32, H1_4, OP)
766
#define DO_FPCMP_PPZ0_D(NAME, OP) \
767
- DO_FPCMP_PPZ0(NAME##_d, float64, , OP)
768
+ DO_FPCMP_PPZ0(NAME##_d, float64, H1_8, OP)
769
770
#define DO_FPCMP_PPZ0_ALL(NAME, OP) \
771
DO_FPCMP_PPZ0_H(NAME, OP) \
772
@@ -XXX,XX +XXX,XX @@ DO_LD_PRIM_1(ld1bhu, H1_2, uint16_t, uint8_t)
773
DO_LD_PRIM_1(ld1bhs, H1_2, uint16_t, int8_t)
774
DO_LD_PRIM_1(ld1bsu, H1_4, uint32_t, uint8_t)
775
DO_LD_PRIM_1(ld1bss, H1_4, uint32_t, int8_t)
776
-DO_LD_PRIM_1(ld1bdu, , uint64_t, uint8_t)
777
-DO_LD_PRIM_1(ld1bds, , uint64_t, int8_t)
778
+DO_LD_PRIM_1(ld1bdu, H1_8, uint64_t, uint8_t)
779
+DO_LD_PRIM_1(ld1bds, H1_8, uint64_t, int8_t)
780
781
#define DO_ST_PRIM_1(NAME, H, TE, TM) \
782
DO_ST_HOST(st1##NAME, H, TE, TM, stb_p) \
783
@@ -XXX,XX +XXX,XX @@ DO_LD_PRIM_1(ld1bds, , uint64_t, int8_t)
784
DO_ST_PRIM_1(bb, H1, uint8_t, uint8_t)
785
DO_ST_PRIM_1(bh, H1_2, uint16_t, uint8_t)
786
DO_ST_PRIM_1(bs, H1_4, uint32_t, uint8_t)
787
-DO_ST_PRIM_1(bd, , uint64_t, uint8_t)
788
+DO_ST_PRIM_1(bd, H1_8, uint64_t, uint8_t)
789
790
#define DO_LD_PRIM_2(NAME, H, TE, TM, LD) \
791
DO_LD_HOST(ld1##NAME##_be, H, TE, TM, LD##_be_p) \
792
@@ -XXX,XX +XXX,XX @@ DO_ST_PRIM_1(bd, , uint64_t, uint8_t)
793
DO_LD_PRIM_2(hh, H1_2, uint16_t, uint16_t, lduw)
794
DO_LD_PRIM_2(hsu, H1_4, uint32_t, uint16_t, lduw)
795
DO_LD_PRIM_2(hss, H1_4, uint32_t, int16_t, lduw)
796
-DO_LD_PRIM_2(hdu, , uint64_t, uint16_t, lduw)
797
-DO_LD_PRIM_2(hds, , uint64_t, int16_t, lduw)
798
+DO_LD_PRIM_2(hdu, H1_8, uint64_t, uint16_t, lduw)
799
+DO_LD_PRIM_2(hds, H1_8, uint64_t, int16_t, lduw)
800
801
DO_ST_PRIM_2(hh, H1_2, uint16_t, uint16_t, stw)
802
DO_ST_PRIM_2(hs, H1_4, uint32_t, uint16_t, stw)
803
-DO_ST_PRIM_2(hd, , uint64_t, uint16_t, stw)
804
+DO_ST_PRIM_2(hd, H1_8, uint64_t, uint16_t, stw)
805
806
DO_LD_PRIM_2(ss, H1_4, uint32_t, uint32_t, ldl)
807
-DO_LD_PRIM_2(sdu, , uint64_t, uint32_t, ldl)
808
-DO_LD_PRIM_2(sds, , uint64_t, int32_t, ldl)
809
+DO_LD_PRIM_2(sdu, H1_8, uint64_t, uint32_t, ldl)
810
+DO_LD_PRIM_2(sds, H1_8, uint64_t, int32_t, ldl)
811
812
DO_ST_PRIM_2(ss, H1_4, uint32_t, uint32_t, stl)
813
-DO_ST_PRIM_2(sd, , uint64_t, uint32_t, stl)
814
+DO_ST_PRIM_2(sd, H1_8, uint64_t, uint32_t, stl)
815
816
-DO_LD_PRIM_2(dd, , uint64_t, uint64_t, ldq)
817
-DO_ST_PRIM_2(dd, , uint64_t, uint64_t, stq)
818
+DO_LD_PRIM_2(dd, H1_8, uint64_t, uint64_t, ldq)
819
+DO_ST_PRIM_2(dd, H1_8, uint64_t, uint64_t, stq)
820
821
#undef DO_LD_TLB
822
#undef DO_ST_TLB
823
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
824
825
DO_FCVTNT(sve_bfcvtnt, uint32_t, uint16_t, H1_4, H1_2, float32_to_bfloat16)
826
DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
827
-DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, , H1_4, float64_to_float32)
828
+DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, H1_8, H1_4, float64_to_float32)
829
830
#define DO_FCVTLT(NAME, TYPEW, TYPEN, HW, HN, OP) \
831
void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
832
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
833
}
834
835
DO_FCVTLT(sve2_fcvtlt_hs, uint32_t, uint16_t, H1_4, H1_2, sve_f16_to_f32)
836
-DO_FCVTLT(sve2_fcvtlt_sd, uint64_t, uint32_t, , H1_4, float32_to_float64)
837
+DO_FCVTLT(sve2_fcvtlt_sd, uint64_t, uint32_t, H1_8, H1_4, float32_to_float64)
838
839
#undef DO_FCVTLT
840
#undef DO_FCVTNT
841
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
842
index XXXXXXX..XXXXXXX 100644
843
--- a/target/arm/vec_helper.c
844
+++ b/target/arm/vec_helper.c
845
@@ -XXX,XX +XXX,XX @@ DO_DOT_IDX(gvec_sdot_idx_b, int32_t, int8_t, int8_t, H4)
846
DO_DOT_IDX(gvec_udot_idx_b, uint32_t, uint8_t, uint8_t, H4)
847
DO_DOT_IDX(gvec_sudot_idx_b, int32_t, int8_t, uint8_t, H4)
848
DO_DOT_IDX(gvec_usdot_idx_b, int32_t, uint8_t, int8_t, H4)
849
-DO_DOT_IDX(gvec_sdot_idx_h, int64_t, int16_t, int16_t, )
850
-DO_DOT_IDX(gvec_udot_idx_h, uint64_t, uint16_t, uint16_t, )
851
+DO_DOT_IDX(gvec_sdot_idx_h, int64_t, int16_t, int16_t, H8)
852
+DO_DOT_IDX(gvec_udot_idx_h, uint64_t, uint16_t, uint16_t, H8)
853
854
void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
855
void *vfpst, uint32_t desc)
856
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
857
858
DO_MUL_IDX(gvec_mul_idx_h, uint16_t, H2)
859
DO_MUL_IDX(gvec_mul_idx_s, uint32_t, H4)
860
-DO_MUL_IDX(gvec_mul_idx_d, uint64_t, )
861
+DO_MUL_IDX(gvec_mul_idx_d, uint64_t, H8)
862
863
#undef DO_MUL_IDX
864
865
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
866
867
DO_MLA_IDX(gvec_mla_idx_h, uint16_t, +, H2)
868
DO_MLA_IDX(gvec_mla_idx_s, uint32_t, +, H4)
869
-DO_MLA_IDX(gvec_mla_idx_d, uint64_t, +, )
870
+DO_MLA_IDX(gvec_mla_idx_d, uint64_t, +, H8)
871
872
DO_MLA_IDX(gvec_mls_idx_h, uint16_t, -, H2)
873
DO_MLA_IDX(gvec_mls_idx_s, uint32_t, -, H4)
874
-DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, )
875
+DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, H8)
876
877
#undef DO_MLA_IDX
878
879
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
880
881
DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16, H2)
882
DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32, H4)
883
-DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64, )
884
+DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64, H8)
885
886
/*
887
* Non-fused multiply-accumulate operations, for Neon. NB that unlike
888
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
889
890
DO_FMLA_IDX(gvec_fmla_idx_h, float16, H2)
891
DO_FMLA_IDX(gvec_fmla_idx_s, float32, H4)
892
-DO_FMLA_IDX(gvec_fmla_idx_d, float64, )
893
+DO_FMLA_IDX(gvec_fmla_idx_d, float64, H8)
894
895
#undef DO_FMLA_IDX
896
30
897
--
31
--
898
2.20.1
32
2.25.1
899
900
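A note on the H-macro changes in the hunks above: the previously-empty macro argument can be spelled H1_8/H8 because 64-bit elements never need a host-endianness index adjustment, so those macros are expected to be plain identity maps. A minimal sketch of the definitions this relies on (the exact header they sit in is an assumption here):

    /*
     * Sketch of the element-index macros assumed above. Sub-64-bit
     * element indices are XOR-swizzled on big-endian hosts (e.g. H4(x)
     * is ((x) ^ 1) there and (x) on little-endian hosts); 64-bit
     * elements need no adjustment, so the new macros are identity:
     */
    #define H8(x)   (x)
    #define H1_8(x) (x)

Passing an explicit H8/H1_8 rather than an empty argument keeps every DO_*_IDX and DO_*_PRIM instantiation uniform and makes the 64-bit cases easy to grep for.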
1
From: Patrick Venture <venture@google.com>
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
2
3
Adds comments to the board init to identify missing i2c devices.
3
When the system reboots, the rng-seed that the FDT has should be
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
5
the ROM region at this point, we add a hook right after the ROM has been
6
added, so that we have a pointer to that copy of the FDT.
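The hook registered in the hunk below is the generic helper added earlier in this series. Roughly, it walks the ROM copy of the FDT and overwrites the data of every "rng-seed" property with fresh randomness; a sketch of that helper (the real code lives with the other device-tree helpers and may differ in detail):

    /*
     * Sketch: re-randomize every "rng-seed" property of an FDT in place.
     * Registered via qemu_register_reset_nosnapshotload(), with the
     * opaque pointer being the in-ROM copy of the FDT, so it runs on
     * every reboot except when a snapshot is being loaded.
     */
    void qemu_fdt_randomize_seeds(void *fdt)
    {
        int noffset, poffset, len;
        const char *name;
        uint8_t *data;

        for (noffset = fdt_next_node(fdt, 0, NULL);
             noffset >= 0;
             noffset = fdt_next_node(fdt, noffset, NULL)) {
            for (poffset = fdt_first_property_offset(fdt, noffset);
                 poffset >= 0;
                 poffset = fdt_next_property_offset(fdt, poffset)) {
                data = (uint8_t *)fdt_getprop_by_offset(fdt, poffset,
                                                        &name, &len);
                if (!data || strcmp(name, "rng-seed") != 0) {
                    continue;
                }
                /* Overwrite the stale seed with fresh guest randomness. */
                qemu_guest_getrandom_nofail(data, len);
            }
        }
    }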
4
7
5
Signed-off-by: Patrick Venture <venture@google.com>
8
Cc: Palmer Dabbelt <palmer@dabbelt.com>
6
Reviewed-by: Hao Wu <wuhaotsh@google.com>
9
Cc: Alistair Francis <alistair.francis@wdc.com>
7
Reviewed-by: Joel Stanley <joel@jms.id.au>
10
Cc: Bin Meng <bin.meng@windriver.com>
8
Message-id: 20210608202522.2677850-2-venture@google.com
11
Cc: qemu-riscv@nongnu.org
12
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Message-id: 20221025004327.568476-6-Jason@zx2c4.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
16
---
11
hw/arm/npcm7xx_boards.c | 16 +++++++++++++++-
17
hw/riscv/boot.c | 3 +++
12
1 file changed, 15 insertions(+), 1 deletion(-)
18
1 file changed, 3 insertions(+)
13
19
14
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
20
diff --git a/hw/riscv/boot.c b/hw/riscv/boot.c
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/npcm7xx_boards.c
22
--- a/hw/riscv/boot.c
17
+++ b/hw/arm/npcm7xx_boards.c
23
+++ b/hw/riscv/boot.c
18
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_i2c_init(NPCM7xxState *soc)
24
@@ -XXX,XX +XXX,XX @@
19
at24c_eeprom_init(soc, 9, 0x55, 8192);
25
#include "sysemu/device_tree.h"
20
at24c_eeprom_init(soc, 10, 0x55, 8192);
26
#include "sysemu/qtest.h"
21
27
#include "sysemu/kvm.h"
22
- /* TODO: Add additional i2c devices. */
28
+#include "sysemu/reset.h"
23
+ /*
29
24
+ * i2c-11:
30
#include <libfdt.h>
25
+ * - power-brick@36: delta,dps800
31
26
+ * - hotswap@15: ti,lm5066i
32
@@ -XXX,XX +XXX,XX @@ uint64_t riscv_load_fdt(hwaddr dram_base, uint64_t mem_size, void *fdt)
27
+ */
33
28
+
34
rom_add_blob_fixed_as("fdt", fdt, fdtsize, fdt_addr,
29
+ /*
35
&address_space_memory);
30
+ * i2c-12:
36
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
31
+ * - ucd90160@6b
37
+ rom_ptr_for_as(&address_space_memory, fdt_addr, fdtsize));
32
+ */
38
33
+
39
return fdt_addr;
34
+ /*
35
+ * i2c-15:
36
+ * - pca9548@75
37
+ */
38
}
40
}
39
40
static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
41
--
41
--
42
2.20.1
42
2.25.1
43
44
1
From: Patrick Venture <venture@google.com>
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
2
3
Add a comment and i2c method that describes the board layout.
3
Snapshot loading is supposed to be deterministic, so we shouldn't
4
re-randomize the various seeds used.
4
5
5
Tested: firmware booted to userspace.
6
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
6
Signed-off-by: Patrick Venture <venture@google.com>
7
Message-id: 20221025004327.568476-7-Jason@zx2c4.com
7
Reviewed-by: Brandon Kim <brandonkim@google.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Hao Wu <wuhaotsh@google.com>
9
Message-id: 20210608193605.2611114-3-venture@google.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
hw/arm/npcm7xx_boards.c | 60 +++++++++++++++++++++++++++++++++++++++++
11
hw/m68k/virt.c | 20 +++++++++++---------
13
1 file changed, 60 insertions(+)
12
1 file changed, 11 insertions(+), 9 deletions(-)
14
13
15
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
14
diff --git a/hw/m68k/virt.c b/hw/m68k/virt.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/npcm7xx_boards.c
16
--- a/hw/m68k/virt.c
18
+++ b/hw/arm/npcm7xx_boards.c
17
+++ b/hw/m68k/virt.c
19
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
18
@@ -XXX,XX +XXX,XX @@ typedef struct {
20
npcm7xx_connect_pwm_fan(soc, &splitter[2], 0x05, 1);
19
M68kCPU *cpu;
20
hwaddr initial_pc;
21
hwaddr initial_stack;
22
- struct bi_record *rng_seed;
23
} ResetInfo;
24
25
static void main_cpu_reset(void *opaque)
26
@@ -XXX,XX +XXX,XX @@ static void main_cpu_reset(void *opaque)
27
M68kCPU *cpu = reset_info->cpu;
28
CPUState *cs = CPU(cpu);
29
30
- if (reset_info->rng_seed) {
31
- qemu_guest_getrandom_nofail((void *)reset_info->rng_seed->data + 2,
32
- be16_to_cpu(*(uint16_t *)reset_info->rng_seed->data));
33
- }
34
-
35
cpu_reset(cs);
36
cpu->env.aregs[7] = reset_info->initial_stack;
37
cpu->env.pc = reset_info->initial_pc;
21
}
38
}
22
39
23
+static void quanta_gbs_i2c_init(NPCM7xxState *soc)
40
+static void rerandomize_rng_seed(void *opaque)
24
+{
41
+{
25
+ /*
42
+ struct bi_record *rng_seed = opaque;
26
+ * i2c-0:
43
+ qemu_guest_getrandom_nofail((void *)rng_seed->data + 2,
27
+ * pca9546@71
44
+ be16_to_cpu(*(uint16_t *)rng_seed->data));
28
+ *
29
+ * i2c-1:
30
+ * pca9535@24
31
+ * pca9535@20
32
+ * pca9535@21
33
+ * pca9535@22
34
+ * pca9535@23
35
+ * pca9535@25
36
+ * pca9535@26
37
+ *
38
+ * i2c-2:
39
+ * sbtsi@4c
40
+ *
41
+ * i2c-5:
42
+ * atmel,24c64@50 mb_fru
43
+ * pca9546@71
44
+ * - channel 0: max31725@54
45
+ * - channel 1: max31725@55
46
+ * - channel 2: max31725@5d
47
+ * atmel,24c64@51 fan_fru
48
+ * - channel 3: atmel,24c64@52 hsbp_fru
49
+ *
50
+ * i2c-6:
51
+ * pca9545@73
52
+ *
53
+ * i2c-7:
54
+ * pca9545@72
55
+ *
56
+ * i2c-8:
57
+ * adi,adm1272@10
58
+ *
59
+ * i2c-9:
60
+ * pca9546@71
61
+ * - channel 0: isil,isl68137@60
62
+ * - channel 1: isil,isl68137@61
63
+ * - channel 2: isil,isl68137@63
64
+ * - channel 3: isil,isl68137@45
65
+ *
66
+ * i2c-10:
67
+ * pca9545@71
68
+ *
69
+ * i2c-11:
70
+ * pca9545@76
71
+ *
72
+ * i2c-12:
73
+ * maxim,max34451@4e
74
+ * isil,isl68137@5d
75
+ * isil,isl68137@5e
76
+ *
77
+ * i2c-14:
78
+ * pca9545@70
79
+ */
80
+}
45
+}
81
+
46
+
82
static void npcm750_evb_init(MachineState *machine)
47
static void virt_init(MachineState *machine)
83
{
48
{
84
NPCM7xxState *soc;
49
M68kCPU *cpu = NULL;
85
@@ -XXX,XX +XXX,XX @@ static void quanta_gbs_init(MachineState *machine)
50
@@ -XXX,XX +XXX,XX @@ static void virt_init(MachineState *machine)
86
npcm7xx_connect_flash(&soc->fiu[0], 0, "mx66u51235f",
51
BOOTINFO0(param_ptr, BI_LAST);
87
drive_get(IF_MTD, 0, 0));
52
rom_add_blob_fixed_as("bootinfo", param_blob, param_ptr - param_blob,
88
53
parameters_base, cs->as);
89
+ quanta_gbs_i2c_init(soc);
54
- reset_info->rng_seed = rom_ptr_for_as(cs->as, parameters_base,
90
npcm7xx_load_kernel(machine, soc);
55
- param_ptr - param_blob) +
56
- (param_rng_seed - param_blob);
57
+ qemu_register_reset_nosnapshotload(rerandomize_rng_seed,
58
+ rom_ptr_for_as(cs->as, parameters_base,
59
+ param_ptr - param_blob) +
60
+ (param_rng_seed - param_blob));
61
g_free(param_blob);
62
}
91
}
63
}
92
93
--
64
--
94
2.20.1
65
2.25.1
95
96
1
From: Patrick Venture <venture@google.com>
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
2
3
Adds initial quanta-gbs-bmc machine support.
3
Snapshot loading is supposed to be deterministic, so we shouldn't
4
re-randomize the various seeds used.
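The reason a dedicated registration function exists at all: an ordinary qemu_register_reset() handler also runs when a snapshot is loaded (loadvm goes through the reset path), which would clobber the seed recorded in the snapshot. Conceptually the series tags such handlers and skips them for snapshot-load resets; a sketch of the idea (the entry struct and field names here are illustrative, not necessarily those used in hw/core/reset.c):

    typedef struct QEMUResetEntry {
        QTAILQ_ENTRY(QEMUResetEntry) entry;
        QEMUResetHandler *func;
        void *opaque;
        bool skip_on_snapshot_load;  /* set only by the _nosnapshotload variant */
    } QEMUResetEntry;

    void qemu_register_reset_nosnapshotload(QEMUResetHandler *func, void *opaque)
    {
        QEMUResetEntry *re = g_new0(QEMUResetEntry, 1);

        re->func = func;
        re->opaque = opaque;
        re->skip_on_snapshot_load = true;
        QTAILQ_INSERT_TAIL(&reset_handlers, re, entry);
    }

    /*
     * When the handler list is walked, entries with skip_on_snapshot_load
     * set are ignored if the "reset" was triggered by loading a snapshot,
     * so snapshot loads stay deterministic while real reboots reseed.
     */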
4
5
5
Tested: Boots to userspace.
6
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
6
Signed-off-by: Patrick Venture <venture@google.com>
7
Message-id: 20221025004327.568476-8-Jason@zx2c4.com
7
Reviewed-by: Brandon Kim <brandonkim@google.com>
8
Reviewed-by: Hao Wu <wuhaotsh@google.com>
9
Message-id: 20210608193605.2611114-2-venture@google.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
hw/arm/npcm7xx_boards.c | 33 +++++++++++++++++++++++++++++++++
11
hw/m68k/q800.c | 33 +++++++++++++--------------------
14
1 file changed, 33 insertions(+)
12
1 file changed, 13 insertions(+), 20 deletions(-)
15
13
16
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
14
diff --git a/hw/m68k/q800.c b/hw/m68k/q800.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/npcm7xx_boards.c
16
--- a/hw/m68k/q800.c
19
+++ b/hw/arm/npcm7xx_boards.c
17
+++ b/hw/m68k/q800.c
20
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static const TypeInfo glue_info = {
21
19
},
22
#define NPCM750_EVB_POWER_ON_STRAPS 0x00001ff7
20
};
23
#define QUANTA_GSJ_POWER_ON_STRAPS 0x00001fff
21
24
+#define QUANTA_GBS_POWER_ON_STRAPS 0x000017ff
22
-typedef struct {
25
23
- M68kCPU *cpu;
26
static const char npcm7xx_default_bootrom[] = "npcm7xx_bootrom.bin";
24
- struct bi_record *rng_seed;
27
25
-} ResetInfo;
28
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_init(MachineState *machine)
26
-
29
npcm7xx_load_kernel(machine, soc);
27
static void main_cpu_reset(void *opaque)
28
{
29
- ResetInfo *reset_info = opaque;
30
- M68kCPU *cpu = reset_info->cpu;
31
+ M68kCPU *cpu = opaque;
32
CPUState *cs = CPU(cpu);
33
34
- if (reset_info->rng_seed) {
35
- qemu_guest_getrandom_nofail((void *)reset_info->rng_seed->data + 2,
36
- be16_to_cpu(*(uint16_t *)reset_info->rng_seed->data));
37
- }
38
-
39
cpu_reset(cs);
40
cpu->env.aregs[7] = ldl_phys(cs->as, 0);
41
cpu->env.pc = ldl_phys(cs->as, 4);
30
}
42
}
31
43
32
+static void quanta_gbs_init(MachineState *machine)
44
+static void rerandomize_rng_seed(void *opaque)
33
+{
45
+{
34
+ NPCM7xxState *soc;
46
+ struct bi_record *rng_seed = opaque;
35
+
47
+ qemu_guest_getrandom_nofail((void *)rng_seed->data + 2,
36
+ soc = npcm7xx_create_soc(machine, QUANTA_GBS_POWER_ON_STRAPS);
48
+ be16_to_cpu(*(uint16_t *)rng_seed->data));
37
+ npcm7xx_connect_dram(soc, machine->ram);
38
+ qdev_realize(DEVICE(soc), NULL, &error_fatal);
39
+
40
+ npcm7xx_load_bootrom(machine, soc);
41
+
42
+ npcm7xx_connect_flash(&soc->fiu[0], 0, "mx66u51235f",
43
+ drive_get(IF_MTD, 0, 0));
44
+
45
+ npcm7xx_load_kernel(machine, soc);
46
+}
49
+}
47
+
50
+
48
static void npcm7xx_set_soc_type(NPCM7xxMachineClass *nmc, const char *type)
51
static uint8_t fake_mac_rom[] = {
49
{
52
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
50
NPCM7xxClass *sc = NPCM7XX_CLASS(object_class_by_name(type));
53
51
@@ -XXX,XX +XXX,XX @@ static void gsj_machine_class_init(ObjectClass *oc, void *data)
54
@@ -XXX,XX +XXX,XX @@ static void q800_init(MachineState *machine)
52
mc->default_ram_size = 512 * MiB;
55
NubusBus *nubus;
53
};
56
DeviceState *glue;
54
57
DriveInfo *dinfo;
55
+static void gbs_bmc_machine_class_init(ObjectClass *oc, void *data)
58
- ResetInfo *reset_info;
56
+{
59
uint8_t rng_seed[32];
57
+ NPCM7xxMachineClass *nmc = NPCM7XX_MACHINE_CLASS(oc);
60
58
+ MachineClass *mc = MACHINE_CLASS(oc);
61
linux_boot = (kernel_filename != NULL);
59
+
62
@@ -XXX,XX +XXX,XX @@ static void q800_init(MachineState *machine)
60
+ npcm7xx_set_soc_type(nmc, TYPE_NPCM730);
63
exit(1);
61
+
64
}
62
+ mc->desc = "Quanta GBS (Cortex-A9)";
65
63
+ mc->init = quanta_gbs_init;
66
- reset_info = g_new0(ResetInfo, 1);
64
+ mc->default_ram_size = 1 * GiB;
67
-
65
+}
68
/* init CPUs */
66
+
69
cpu = M68K_CPU(cpu_create(machine->cpu_type));
67
static const TypeInfo npcm7xx_machine_types[] = {
70
- reset_info->cpu = cpu;
68
{
71
- qemu_register_reset(main_cpu_reset, reset_info);
69
.name = TYPE_NPCM7XX_MACHINE,
72
+ qemu_register_reset(main_cpu_reset, cpu);
70
@@ -XXX,XX +XXX,XX @@ static const TypeInfo npcm7xx_machine_types[] = {
73
71
.name = MACHINE_TYPE_NAME("quanta-gsj"),
74
/* RAM */
72
.parent = TYPE_NPCM7XX_MACHINE,
75
memory_region_add_subregion(get_system_memory(), 0, machine->ram);
73
.class_init = gsj_machine_class_init,
76
@@ -XXX,XX +XXX,XX @@ static void q800_init(MachineState *machine)
74
+ }, {
77
BOOTINFO0(param_ptr, BI_LAST);
75
+ .name = MACHINE_TYPE_NAME("quanta-gbs-bmc"),
78
rom_add_blob_fixed_as("bootinfo", param_blob, param_ptr - param_blob,
76
+ .parent = TYPE_NPCM7XX_MACHINE,
79
parameters_base, cs->as);
77
+ .class_init = gbs_bmc_machine_class_init,
80
- reset_info->rng_seed = rom_ptr_for_as(cs->as, parameters_base,
78
},
81
- param_ptr - param_blob) +
79
};
82
- (param_rng_seed - param_blob);
80
83
+ qemu_register_reset_nosnapshotload(rerandomize_rng_seed,
84
+ rom_ptr_for_as(cs->as, parameters_base,
85
+ param_ptr - param_blob) +
86
+ (param_rng_seed - param_blob));
87
g_free(param_blob);
88
} else {
89
uint8_t *ptr;
81
--
90
--
82
2.20.1
91
2.25.1
83
84
1
The virt_is_acpi_enabled() function is specific to the virt board, as
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
is the check for its 'ras' property. Use the new acpi_ghes_present()
3
function to check whether we should report memory errors via
4
acpi_ghes_record_errors().
5
2
6
This avoids a link error if QEMU was built without support for the
3
When the system reboots, the rng-seed that the FDT has should be
7
virt board, and provides a mechanism that can be used by any future
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
8
board models that want to add ACPI memory error reporting support
5
the ROM region at this point, we add a hook right after the ROM has been
9
(they only need to call acpi_ghes_add_fw_cfg()).
6
added, so that we have a pointer to that copy of the FDT.
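For context on the check used below: acpi_ghes_present() is added earlier in this series as a board-agnostic way to ask whether a GHES error source has been wired up. A rough sketch of what such a helper can look like (the real one lives in hw/acpi/ghes.c; the exact lookup is an assumption here):

    /*
     * Sketch: report whether ACPI GHES error reporting is available,
     * i.e. whether some board called acpi_ghes_add_fw_cfg() and the
     * GED device recorded that fact.
     */
    bool acpi_ghes_present(void)
    {
        AcpiGedState *ged;

        ged = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED, NULL));
        if (!ged) {
            return false;
        }
        return ged->ghes_state.present;
    }

The kvm64.c hunk below then only needs this boolean, which is what removes the link-time dependency on the virt board.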
10
7
8
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
9
Cc: Paul Burton <paulburton@kernel.org>
10
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
12
Message-id: 20221025004327.568476-9-Jason@zx2c4.com
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
14
Message-id: 20210603171259.27962-4-peter.maydell@linaro.org
15
---
15
---
16
target/arm/kvm64.c | 6 +-----
16
hw/mips/boston.c | 3 +++
17
1 file changed, 1 insertion(+), 5 deletions(-)
17
1 file changed, 3 insertions(+)
18
18
19
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
19
diff --git a/hw/mips/boston.c b/hw/mips/boston.c
20
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm64.c
21
--- a/hw/mips/boston.c
22
+++ b/target/arm/kvm64.c
22
+++ b/hw/mips/boston.c
23
@@ -XXX,XX +XXX,XX @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
23
@@ -XXX,XX +XXX,XX @@
24
{
24
#include "sysemu/sysemu.h"
25
ram_addr_t ram_addr;
25
#include "sysemu/qtest.h"
26
hwaddr paddr;
26
#include "sysemu/runstate.h"
27
- Object *obj = qdev_get_machine();
27
+#include "sysemu/reset.h"
28
- VirtMachineState *vms = VIRT_MACHINE(obj);
28
29
- bool acpi_enabled = virt_is_acpi_enabled(vms);
29
#include <libfdt.h>
30
30
#include "qom/object.h"
31
assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);
31
@@ -XXX,XX +XXX,XX @@ static void boston_mach_init(MachineState *machine)
32
32
/* Calculate real fdt size after filter */
33
- if (acpi_enabled && addr &&
33
dt_size = fdt_totalsize(dtb_load_data);
34
- object_property_get_bool(obj, "ras", NULL)) {
34
rom_add_blob_fixed("dtb", dtb_load_data, dt_size, dtb_paddr);
35
+ if (acpi_ghes_present() && addr) {
35
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
36
ram_addr = qemu_ram_addr_from_host(addr);
36
+ rom_ptr(dtb_paddr, dt_size));
37
if (ram_addr != RAM_ADDR_INVALID &&
37
} else {
38
kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
38
/* Try to load file as FIT */
39
fit_err = load_fit(&boston_fit_loader, machine->kernel_filename, s);
39
--
40
--
40
2.20.1
41
2.25.1
41
42
42
43
1
Generic code in target/arm wants to call acpi_ghes_record_errors();
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
provide a stub version so that we don't fail to link when
3
CONFIG_ACPI_APEI is not set. This requires us to add a new
4
ghes-stub.c file to contain it and the meson.build mechanics
5
to use it when appropriate.
6
2
3
When the system reboots, the rng-seed that the FDT has should be
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
5
the ROM region at this point, we add a hook right after the ROM has been
6
added, so that we have a pointer to that copy of the FDT.
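The stub approach works because both implementations satisfy one shared prototype and meson links exactly one of them into the build; callers such as the kvm64.c code in the previous patch only ever see the declaration, so builds without CONFIG_ACPI_APEI still link. A sketch of the shared declaration (assumed to sit in include/hw/acpi/ghes.h):

    /* Provided by either hw/acpi/ghes.c (real implementation) or
     * hw/acpi/ghes-stub.c (always returns failure), selected in
     * hw/acpi/meson.build. */
    int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address);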
7
8
Cc: Stafford Horne <shorne@gmail.com>
9
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
10
Message-id: 20221025004327.568476-11-Jason@zx2c4.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
10
Message-id: 20210603171259.27962-2-peter.maydell@linaro.org
11
---
13
---
12
hw/acpi/ghes-stub.c | 17 +++++++++++++++++
14
hw/openrisc/boot.c | 3 +++
13
hw/acpi/meson.build | 6 +++---
15
1 file changed, 3 insertions(+)
14
2 files changed, 20 insertions(+), 3 deletions(-)
15
create mode 100644 hw/acpi/ghes-stub.c
16
16
17
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
17
diff --git a/hw/openrisc/boot.c b/hw/openrisc/boot.c
18
new file mode 100644
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX
19
--- a/hw/openrisc/boot.c
20
--- /dev/null
20
+++ b/hw/openrisc/boot.c
21
+++ b/hw/acpi/ghes-stub.c
22
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
23
+/*
22
#include "hw/openrisc/boot.h"
24
+ * Support for generating APEI tables and recording CPER for Guests:
23
#include "sysemu/device_tree.h"
25
+ * stub functions.
24
#include "sysemu/qtest.h"
26
+ *
25
+#include "sysemu/reset.h"
27
+ * Copyright (c) 2021 Linaro, Ltd
26
28
+ *
27
#include <libfdt.h>
29
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
28
30
+ * See the COPYING file in the top-level directory.
29
@@ -XXX,XX +XXX,XX @@ uint32_t openrisc_load_fdt(void *fdt, hwaddr load_start,
31
+ */
30
32
+
31
rom_add_blob_fixed_as("fdt", fdt, fdtsize, fdt_addr,
33
+#include "qemu/osdep.h"
32
&address_space_memory);
34
+#include "hw/acpi/ghes.h"
33
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
35
+
34
+ rom_ptr_for_as(&address_space_memory, fdt_addr, fdtsize));
36
+int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
35
37
+{
36
return fdt_addr;
38
+ return -1;
37
}
39
+}
40
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/acpi/meson.build
43
+++ b/hw/acpi/meson.build
44
@@ -XXX,XX +XXX,XX @@ acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
45
acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
46
acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
47
acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
48
-acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'))
49
+acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'), if_false: files('ghes-stub.c'))
50
acpi_ss.add(when: 'CONFIG_ACPI_X86', if_true: files('core.c', 'piix4.c', 'pcihp.c'), if_false: files('acpi-stub.c'))
51
acpi_ss.add(when: 'CONFIG_ACPI_X86_ICH', if_true: files('ich9.c', 'tco.c'))
52
acpi_ss.add(when: 'CONFIG_IPMI', if_true: files('ipmi.c'), if_false: files('ipmi-stub.c'))
53
acpi_ss.add(when: 'CONFIG_PC', if_false: files('acpi-x86-stub.c'))
54
acpi_ss.add(when: 'CONFIG_TPM', if_true: files('tpm.c'))
55
-softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c'))
56
+softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c', 'ghes-stub.c'))
57
softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
58
softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
59
- 'acpi-x86-stub.c', 'ipmi-stub.c'))
60
+ 'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c'))
61
--
38
--
62
2.20.1
39
2.25.1
63
64
1
In commit da6d674e509f0939b we split the NVIC code out from the GIC.
1
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
2
This allowed us to specify the NVIC's default value for the num-irq
3
property (64) in the usual way in its property list, and we deleted
4
the previous hack where we updated the value in the state struct in
5
the instance init function. Remove a stale comment about that hack
6
which we forgot to delete at that time.
7
2
3
When the system reboots, the rng-seed that the FDT has should be
4
re-randomized, so that the new boot gets a new seed. Since the FDT is in
5
the ROM region at this point, we add a hook right after the ROM has been
6
added, so that we have a pointer to that copy of the FDT.
7
8
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
9
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
10
Message-id: 20221025004327.568476-12-Jason@zx2c4.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210614161243.14211-1-peter.maydell@linaro.org
12
---
13
---
13
hw/intc/armv7m_nvic.c | 6 ------
14
hw/rx/rx-gdbsim.c | 3 +++
14
1 file changed, 6 deletions(-)
15
1 file changed, 3 insertions(+)
15
16
16
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
17
diff --git a/hw/rx/rx-gdbsim.c b/hw/rx/rx-gdbsim.c
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/armv7m_nvic.c
19
--- a/hw/rx/rx-gdbsim.c
19
+++ b/hw/intc/armv7m_nvic.c
20
+++ b/hw/rx/rx-gdbsim.c
20
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
21
@@ -XXX,XX +XXX,XX @@
21
22
#include "hw/rx/rx62n.h"
22
static void armv7m_nvic_instance_init(Object *obj)
23
#include "sysemu/qtest.h"
23
{
24
#include "sysemu/device_tree.h"
24
- /* We have a different default value for the num-irq property
25
+#include "sysemu/reset.h"
25
- * than our superclass. This function runs after qdev init
26
#include "hw/boards.h"
26
- * has set the defaults from the Property array and before
27
#include "qom/object.h"
27
- * any user-specified property setting, so just modify the
28
28
- * value in the GICState struct.
29
@@ -XXX,XX +XXX,XX @@ static void rx_gdbsim_init(MachineState *machine)
29
- */
30
dtb_offset = ROUND_DOWN(machine->ram_size - dtb_size, 16);
30
DeviceState *dev = DEVICE(obj);
31
rom_add_blob_fixed("dtb", dtb, dtb_size,
31
NVICState *nvic = NVIC(obj);
32
SDRAM_BASE + dtb_offset);
32
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
33
+ qemu_register_reset_nosnapshotload(qemu_fdt_randomize_seeds,
34
+ rom_ptr(SDRAM_BASE + dtb_offset, dtb_size));
35
/* Set dtb address to R1 */
36
RX_CPU(first_cpu)->env.regs[1] = SDRAM_BASE + dtb_offset;
37
}
33
--
38
--
34
2.20.1
39
2.25.1
35
36