target-arm queue. This has the "plumb txattrs through various
bits of exec.c" patches, and a collection of bug fixes from
various people.

thanks
-- PMM


The following changes since commit a3ac12fba028df90f7b3dbec924995c126c41022:

  Merge remote-tracking branch 'remotes/ehabkost/tags/numa-next-pull-request' into staging (2018-05-31 11:12:36 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180531

for you to fetch changes up to 49d1dca0520ea71bc21867fab6647f474fcf857b:

  KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice (2018-05-31 14:52:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Honour FPCR.FZ in FRECPX
 * MAINTAINERS: Add entries for newer MPS2 boards and devices
 * hw/intc/arm_gicv3: Fix APxR<n> register dispatching
 * arm_gicv3_kvm: fix bug in writing zero bits back to the in-kernel
   GIC state
 * tcg: Fix helper function vs host abi for float16
 * arm: fix qemu crash on startup with -bios option
 * arm: fix malloc type mismatch
 * xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors
 * Correct CPACR reset value for v7 cores
 * memory.h: Improve IOMMU related documentation
 * exec: Plumb transaction attributes through various functions in
   preparation for allowing IOMMUs to see them
 * vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY
 * ARM: ACPI: Fix use-after-free due to memory realloc
 * KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

----------------------------------------------------------------
Francisco Iglesias (1):
      xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors

Igor Mammedov (1):
      arm: fix qemu crash on startup with -bios option

Jan Kiszka (1):
      hw/intc/arm_gicv3: Fix APxR<n> register dispatching

Paolo Bonzini (1):
      arm: fix malloc type mismatch

Peter Maydell (17):
      target/arm: Honour FPCR.FZ in FRECPX
      MAINTAINERS: Add entries for newer MPS2 boards and devices
      Correct CPACR reset value for v7 cores
      memory.h: Improve IOMMU related documentation
      Make tb_invalidate_phys_addr() take a MemTxAttrs argument
      Make address_space_translate{, _cached}() take a MemTxAttrs argument
      Make address_space_map() take a MemTxAttrs argument
      Make address_space_access_valid() take a MemTxAttrs argument
      Make flatview_extend_translation() take a MemTxAttrs argument
      Make memory_region_access_valid() take a MemTxAttrs argument
      Make MemoryRegion valid.accepts callback take a MemTxAttrs argument
      Make flatview_access_valid() take a MemTxAttrs argument
      Make flatview_translate() take a MemTxAttrs argument
      Make address_space_get_iotlb_entry() take a MemTxAttrs argument
      Make flatview_do_translate() take a MemTxAttrs argument
      Make address_space_translate_iommu take a MemTxAttrs argument
      vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY

Richard Henderson (1):
      tcg: Fix helper function vs host abi for float16

Shannon Zhao (3):
      arm_gicv3_kvm: increase clroffset accordingly
      ARM: ACPI: Fix use-after-free due to memory realloc
      KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

 include/exec/exec-all.h | 5 +-
 include/exec/helper-head.h | 2 +-
 include/exec/memory-internal.h | 3 +-
 include/exec/memory.h | 128 +++++++++++++++++++++++++++++++++++------
 include/migration/vmstate.h | 3 +
 include/sysemu/dma.h | 6 +-
 accel/tcg/translate-all.c | 4 +-
 exec.c | 95 ++++++++++++++++++------------
 hw/arm/boot.c | 18 +++---
 hw/arm/virt-acpi-build.c | 20 +++++--
 hw/dma/xlnx-zdma.c | 10 +++-
 hw/hppa/dino.c | 3 +-
 hw/intc/arm_gic_kvm.c | 1 -
 hw/intc/arm_gicv3_cpuif.c | 12 ++--
 hw/intc/arm_gicv3_kvm.c | 2 +-
 hw/nvram/fw_cfg.c | 12 ++--
 hw/s390x/s390-pci-inst.c | 3 +-
 hw/scsi/esp.c | 3 +-
 hw/vfio/common.c | 3 +-
 hw/virtio/vhost.c | 3 +-
 hw/xen/xen_pt_msi.c | 3 +-
 memory.c | 12 ++--
 memory_ldst.inc.c | 18 +++---
 target/arm/gdbstub.c | 3 +-
 target/arm/helper-a64.c | 41 +++++++------
 target/arm/helper.c | 90 ++++++++++++++++-------------
 target/ppc/mmu-hash64.c | 3 +-
 target/riscv/helper.c | 2 +-
 target/s390x/diag.c | 6 +-
 target/s390x/excp_helper.c | 3 +-
 target/s390x/mmu_helper.c | 3 +-
 target/s390x/sigp.c | 3 +-
 target/xtensa/op_helper.c | 3 +-
 MAINTAINERS | 9 ++-
 34 files changed, 353 insertions(+), 182 deletions(-)


The following changes since commit a97978bcc2d1f650c7d411428806e5b03082b8c7:

  Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210603' into staging (2021-06-03 10:00:35 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210603

for you to fetch changes up to 1c861885894d840235954060050d240259f5340b:

  tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed (2021-06-03 16:43:27 +0100)

----------------------------------------------------------------
target-arm queue:
 * Some not-yet-enabled preliminaries for M-profile MVE support
 * Consistently use "Cortex-Axx", not "Cortex Axx" in docs, comments
 * docs: Fix installation of man pages with Sphinx 4.x
 * Mark LDS{MIN,MAX} as signed operations
 * Fix missing syndrome value for DAIF and PAC check exceptions
 * Implement BFloat16 extensions
 * Refactoring of hvf accelerator code in preparation for aarch64 support
 * Fix some coverity nits in test code

----------------------------------------------------------------
Alexander Graf (12):
      hvf: Move assert_hvf_ok() into common directory
      hvf: Move vcpu thread functions into common directory
      hvf: Move cpu functions into common directory
      hvf: Move hvf internal definitions into common header
      hvf: Make hvf_set_phys_mem() static
      hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
      hvf: Split out common code on vcpu init and destroy
      hvf: Use cpu_synchronize_state()
      hvf: Make synchronize functions static
      hvf: Remove hvf-accel-ops.h
      hvf: Introduce hvf vcpu struct
      hvf: Simplify post reset/init/loadvm hooks

Damien Goutte-Gattat (1):
      docs: Fix installation of man pages with Sphinx 4.x

Jamie Iles (4):
      target/arm: fix missing exception class
      target/arm: fold do_raise_exception into raise_exception
      target/arm: use raise_exception_ra for MTE check failure
      target/arm: use raise_exception_ra for stack limit exception

Peter Maydell (15):
      target/arm: Add isar feature check functions for MVE
      target/arm: Update feature checks for insns which are "MVE or FP"
      target/arm: Move fpsp/fpdp isar check into callers of do_vfp_2op_sp/dp
      target/arm: Add MVE check to VMOV_reg_sp and VMOV_reg_dp
      target/arm: Fix return values in fp_sysreg_checks()
      target/arm: Implement M-profile VPR register
      target/arm: Make FPSCR.LTPSIZE writable for MVE
      target/arm: Allow board models to specify initial NS VTOR
      arm: Consistently use "Cortex-Axx", not "Cortex Axx"
      tests/qtest/bios-tables-test: Check for dup2() failure
      tests/qtest/e1000e-test: Check qemu_recv() succeeded
      tests/qtest/hd-geo-test: Fix checks on mkstemp() return value
      tests/qtest/pflash-cfi02-test: Avoid potential integer overflow
      tests/qtest/tpm-tests: Remove unnecessary NULL checks
      tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed

Richard Henderson (13):
      target/arm: Mark LDS{MIN,MAX} as signed operations
      target/arm: Add isar_feature_{aa32, aa64, aa64_sve}_bf16
      target/arm: Unify unallocated path in disas_fp_1src
      target/arm: Implement scalar float32 to bfloat16 conversion
      target/arm: Implement vector float32 to bfloat16 conversion
      softfpu: Add float_round_to_odd_inf
      target/arm: Implement bfloat16 dot product (vector)
      target/arm: Implement bfloat16 dot product (indexed)
      target/arm: Implement bfloat16 matrix multiply accumulate
      target/arm: Implement bfloat widening fma (vector)
      target/arm: Implement bfloat widening fma (indexed)
      linux-user/aarch64: Enable hwcap bits for bfloat16
      target/arm: Enable BFloat16 extensions

 docs/conf.py | 1 +
 docs/system/arm/aspeed.rst | 4 +-
 docs/system/arm/nuvoton.rst | 6 +-
 docs/system/arm/sabrelite.rst | 2 +-
 include/fpu/softfloat-types.h | 4 +-
 include/hw/arm/allwinner-h3.h | 2 +-
 include/hw/arm/armv7m.h | 2 +
 include/hw/core/cpu.h | 3 +-
 include/sysemu/hvf_int.h | 58 +++++
 target/arm/cpu.h | 48 +++-
 target/arm/helper-sve.h | 4 +
 target/arm/helper.h | 15 ++
 target/i386/hvf/hvf-accel-ops.h | 23 --
 target/i386/hvf/hvf-i386.h | 33 +--
 target/i386/hvf/vmx.h | 24 +-
 target/i386/hvf/x86hvf.h | 2 -
 target/arm/neon-dp.decode | 1 +
 target/arm/neon-shared.decode | 11 +
 target/arm/sve.decode | 19 +-
 target/arm/vfp.decode | 2 +
 accel/hvf/hvf-accel-ops.c | 471 ++++++++++++++++++++++++++++++++++++++++
 accel/hvf/hvf-all.c | 47 ++++
 hw/arm/armv7m.c | 7 +
 hw/arm/aspeed.c | 6 +-
 hw/arm/mcimx6ul-evk.c | 2 +-
 hw/arm/mcimx7d-sabre.c | 2 +-
 hw/arm/npcm7xx_boards.c | 4 +-
 hw/arm/sabrelite.c | 2 +-
 hw/misc/npcm7xx_clk.c | 2 +-
 linux-user/elfload.c | 2 +
 target/arm/cpu.c | 13 ++
 target/arm/cpu64.c | 3 +
 target/arm/cpu_tcg.c | 1 +
 target/arm/m_helper.c | 5 +-
 target/arm/machine.c | 20 ++
 target/arm/mte_helper.c | 12 +-
 target/arm/op_helper.c | 32 ++-
 target/arm/sve_helper.c | 2 +
 target/arm/translate-a64.c | 155 +++++++++++--
 target/arm/translate-neon.c | 91 ++++++++
 target/arm/translate-sve.c | 112 ++++++++++
 target/arm/translate-vfp.c | 164 ++++++++++----
 target/arm/vec_helper.c | 140 +++++++++++-
 target/arm/vfp_helper.c | 21 +-
 target/i386/hvf/hvf-accel-ops.c | 146 -------------
 target/i386/hvf/hvf.c | 464 +++++----------------------------------
 target/i386/hvf/x86.c | 28 +--
 target/i386/hvf/x86_descr.c | 26 +--
 target/i386/hvf/x86_emu.c | 62 +++---
 target/i386/hvf/x86_mmu.c | 4 +-
 target/i386/hvf/x86_task.c | 12 +-
 target/i386/hvf/x86hvf.c | 222 +++++++++----------
 tests/qtest/bios-tables-test.c | 8 +-
 tests/qtest/e1000e-test.c | 3 +-
 tests/qtest/hd-geo-test.c | 4 +-
 tests/qtest/pflash-cfi02-test.c | 2 +-
 tests/qtest/tpm-tests.c | 12 +-
 tests/unit/test-vmstate.c | 5 +-
 fpu/softfloat-parts.c.inc | 6 +-
 MAINTAINERS | 8 +
 accel/hvf/meson.build | 7 +
 accel/meson.build | 1 +
 target/i386/hvf/meson.build | 1 -
 63 files changed, 1666 insertions(+), 935 deletions(-)
 create mode 100644 include/sysemu/hvf_int.h
 delete mode 100644 target/i386/hvf/hvf-accel-ops.h
 create mode 100644 accel/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/hvf-all.c
 delete mode 100644 target/i386/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/meson.build
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_get_iotlb_entry().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
---
 include/exec/memory.h | 2 +-
 exec.c | 2 +-
 hw/virtio/vhost.c | 3 ++-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
  * entry. Should be called from an RCU critical section.
  */
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
-                                            bool is_write);
+                                            bool is_write, MemTxAttrs attrs);

 /* address_space_translate: translate an address range into an address space
  * into a MemoryRegion and an address range into that section. Should be
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,

 /* Called from RCU critical section */
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
-                                            bool is_write)
+                                            bool is_write, MemTxAttrs attrs)
 {
     MemoryRegionSection section;
     hwaddr xlat, page_mask;
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
     trace_vhost_iotlb_miss(dev, 1);

     iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
-                                          iova, write);
+                                          iova, write,
+                                          MEMTXATTRS_UNSPECIFIED);
     if (iotlb.target_as != NULL) {
         ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
                                          &uaddr, &len);
--
2.17.1


Add the isar feature check functions we will need for v8.1M MVE:
 * a check for MVE present: this corresponds to the pseudocode's
   CheckDecodeFaults(ExtType_Mve)
 * a check for the optional floating-point part of MVE: this
   corresponds to CheckDecodeFaults(ExtType_MveFp)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
     }
 }

+static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) > 0;
+}
+
+static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
+}
+
 static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
 {
     /*
--
2.20.1
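[Editor's illustration] For readers unfamiliar with QEMU's ID-register plumbing, the sketch below shows the shape of the check the MVE patch above adds: MVE support is advertised in an MVFR1 bitfield, and the predicate is "M-profile and field non-zero" (or ">= 2" for the floating-point variant). This is a standalone, hedged illustration, not QEMU code: the helper names, the exact bit positions and the sample register value are assumptions made for the example.

/* illustrative only: field position and sample value are assumptions */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* extract bits [start, start+len) from a 32-bit value,
 * similar in spirit to QEMU's FIELD_EX32() */
static uint32_t extract_field(uint32_t value, int start, int len)
{
    return (value >> start) & ((1u << len) - 1);
}

#define MVFR1_MVE_SHIFT  8   /* assumed position of the MVE field */
#define MVFR1_MVE_LENGTH 4

static bool has_mve(uint32_t mvfr1, bool is_mprofile)
{
    /* integer MVE present: field > 0, and only meaningful on M-profile */
    return is_mprofile &&
        extract_field(mvfr1, MVFR1_MVE_SHIFT, MVFR1_MVE_LENGTH) > 0;
}

static bool has_mve_fp(uint32_t mvfr1, bool is_mprofile)
{
    /* floating-point MVE: field >= 2 */
    return is_mprofile &&
        extract_field(mvfr1, MVFR1_MVE_SHIFT, MVFR1_MVE_LENGTH) >= 2;
}

int main(void)
{
    uint32_t mvfr1 = 0x00000200;  /* hypothetical value: MVE field == 2 */
    printf("mve: %d, mve_fp: %d\n",
           has_mve(mvfr1, true), has_mve_fp(mvfr1, true));
    return 0;
}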
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).
We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
---
 target/arm/helper-a64.c | 6 ++++++
 1 file changed, 6 insertions(+)


Some v8M instructions are present if either the floating point
extension or MVE is implemented. Update our implementation of them
to check for MVE as well as for FP.

This is all the insns which use CheckDecodeFaults(ExtType_MveOrFp) or
CheckDecodeFaults(ExtType_MveOrDpFp) in their pseudocode, which are
essentially the loads and stores, moves and sysreg accesses, except
for VMOV_reg_sp and VMOV_reg_dp, which we handle in subsequent
patches because they need a refactor to provide a place to put the
new MVE check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 48 +++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 19 deletions(-)
14
18
15
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
19
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-a64.c
21
--- a/target/arm/translate-vfp.c
18
+++ b/target/arm/helper-a64.c
22
+++ b/target/arm/translate-vfp.c
19
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
23
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
20
return nan;
24
/* VMOV scalar to general purpose register */
25
TCGv_i32 tmp;
26
27
- /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
28
- if (a->size == MO_32
29
- ? !dc_isar_feature(aa32_fpsp_v2, s)
30
- : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
31
- return false;
32
+ /*
33
+ * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
34
+ * all sizes, whether the CPU has fp or not.
35
+ */
36
+ if (!dc_isar_feature(aa32_mve, s)) {
37
+ if (a->size == MO_32
38
+ ? !dc_isar_feature(aa32_fpsp_v2, s)
39
+ : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
40
+ return false;
41
+ }
21
}
42
}
22
43
23
+ a = float16_squash_input_denormal(a, fpst);
44
/* UNDEF accesses to D16-D31 if they don't exist */
24
+
45
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
25
val16 = float16_val(a);
46
/* VMOV general purpose register to scalar */
26
sbit = 0x8000 & val16;
47
TCGv_i32 tmp;
27
exp = extract32(val16, 10, 5);
48
28
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
49
- /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
29
return nan;
50
- if (a->size == MO_32
51
- ? !dc_isar_feature(aa32_fpsp_v2, s)
52
- : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
53
- return false;
54
+ /*
55
+ * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
56
+ * all sizes, whether the CPU has fp or not.
57
+ */
58
+ if (!dc_isar_feature(aa32_mve, s)) {
59
+ if (a->size == MO_32
60
+ ? !dc_isar_feature(aa32_fpsp_v2, s)
61
+ : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
62
+ return false;
63
+ }
30
}
64
}
31
65
32
+ a = float32_squash_input_denormal(a, fpst);
66
/* UNDEF accesses to D16-D31 if they don't exist */
33
+
67
@@ -XXX,XX +XXX,XX @@ typedef enum FPSysRegCheckResult {
34
val32 = float32_val(a);
68
35
sbit = 0x80000000ULL & val32;
69
static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
36
exp = extract32(val32, 23, 8);
70
{
37
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
71
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
38
return nan;
72
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
73
return FPSysRegCheckFailed;
39
}
74
}
40
75
41
+ a = float64_squash_input_denormal(a, fpst);
76
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
42
+
77
{
43
val64 = float64_val(a);
78
TCGv_i32 tmp;
44
sbit = 0x8000000000000000ULL & val64;
79
45
exp = extract64(float64_val(a), 52, 11);
80
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
81
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
82
return false;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
86
{
87
TCGv_i32 tmp;
88
89
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
90
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
91
return false;
92
}
93
94
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
95
* floating point register. Note that this does not require support
96
* for double precision arithmetic.
97
*/
98
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
99
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
100
return false;
101
}
102
103
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
104
uint32_t offset;
105
TCGv_i32 addr, tmp;
106
107
- if (!dc_isar_feature(aa32_fp16_arith, s)) {
108
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
109
return false;
110
}
111
112
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
113
uint32_t offset;
114
TCGv_i32 addr, tmp;
115
116
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
117
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
118
return false;
119
}
120
121
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
122
TCGv_i64 tmp;
123
124
/* Note that this does not require support for double arithmetic. */
125
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
126
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
127
return false;
128
}
129
130
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
131
TCGv_i32 addr, tmp;
132
int i, n;
133
134
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
135
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
136
return false;
137
}
138
139
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
140
int i, n;
141
142
/* Note that this does not require support for double arithmetic. */
143
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
144
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
145
return false;
146
}
147
46
--
148
--
47
2.17.1
149
2.20.1
48
150
49
151
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_access_valid().
Its callers now all have an attrs value to hand, so we can
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
---
 exec.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)


The do_vfp_2op_sp() and do_vfp_2op_dp() functions currently check
whether floating point is supported via the aa32_fpdp_v2 and
aa32_fpsp_v2 isar checks. For v8.1M MVE support, the VMOV_reg trans
functions (but not any of the others) need to update this to also
allow the insn if MVE is implemented. Move the check out of the do_
function and into its callsites (which are all implemented via the
DO_VFP_2OP macro), so we have a place to change the check for the
VMOV insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-4-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)
13
16
14
diff --git a/exec.c b/exec.c
17
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
19
--- a/target/arm/translate-vfp.c
17
+++ b/exec.c
20
+++ b/target/arm/translate-vfp.c
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
21
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
19
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
22
int veclen = s->vec_len;
20
const uint8_t *buf, int len);
23
TCGv_i32 f0, fd;
21
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
24
22
- bool is_write);
25
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
23
+ bool is_write, MemTxAttrs attrs);
26
- return false;
24
27
- }
25
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
28
+ /* Note that the caller must check the aa32_fpsp_v2 feature. */
26
unsigned len, MemTxAttrs attrs)
29
27
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
30
if (!dc_isar_feature(aa32_fpshvec, s) &&
28
#endif
31
(veclen != 0 || s->vec_stride != 0)) {
29
32
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
30
return flatview_access_valid(subpage->fv, addr + subpage->base,
33
*/
31
- len, is_write);
34
TCGv_i32 f0;
32
+ len, is_write, attrs);
35
36
+ /* Note that the caller must check the aa32_fp16_arith feature */
37
+
38
if (!dc_isar_feature(aa32_fp16_arith, s)) {
39
return false;
40
}
41
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
42
int veclen = s->vec_len;
43
TCGv_i64 f0, fd;
44
45
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
46
- return false;
47
- }
48
+ /* Note that the caller must check the aa32_fpdp_v2 feature. */
49
50
/* UNDEF accesses to D16-D31 if they don't exist */
51
if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
52
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
53
return true;
33
}
54
}
34
55
35
static const MemoryRegionOps subpage_ops = {
56
-#define DO_VFP_2OP(INSN, PREC, FN) \
36
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
57
+#define DO_VFP_2OP(INSN, PREC, FN, CHECK) \
58
static bool trans_##INSN##_##PREC(DisasContext *s, \
59
arg_##INSN##_##PREC *a) \
60
{ \
61
+ if (!dc_isar_feature(CHECK, s)) { \
62
+ return false; \
63
+ } \
64
return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
65
}
66
67
-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32)
68
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64)
69
+DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
70
+DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
71
72
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh)
73
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss)
74
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd)
75
+DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
76
+DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
77
+DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
78
79
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh)
80
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs)
81
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd)
82
+DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
83
+DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
84
+DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
85
86
static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
87
{
88
@@ -XXX,XX +XXX,XX @@ static void gen_VSQRT_dp(TCGv_i64 vd, TCGv_i64 vm)
89
gen_helper_vfp_sqrtd(vd, vm, cpu_env);
37
}
90
}
38
91
39
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
92
-DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp)
40
- bool is_write)
93
-DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp)
41
+ bool is_write, MemTxAttrs attrs)
94
-DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp)
95
+DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp, aa32_fp16_arith)
96
+DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp, aa32_fpsp_v2)
97
+DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp, aa32_fpdp_v2)
98
99
static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
42
{
100
{
43
MemoryRegion *mr;
44
hwaddr l, xlat;
45
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
46
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
47
if (!memory_access_is_direct(mr, is_write)) {
48
l = memory_access_size(mr, l, addr);
49
- /* When our callers all have attrs we'll pass them through here */
50
- if (!memory_region_access_valid(mr, xlat, l, is_write,
51
- MEMTXATTRS_UNSPECIFIED)) {
52
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
53
return false;
54
}
55
}
56
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
57
58
rcu_read_lock();
59
fv = address_space_to_flatview(as);
60
- result = flatview_access_valid(fv, addr, len, is_write);
61
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
62
rcu_read_unlock();
63
return result;
64
}
65
--
101
--
66
2.17.1
102
2.20.1
67
103
68
104
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_do_translate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
---
 exec.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ unassigned:
 * @is_write: whether the translation operation is for write
 * @is_mmio: whether this can be MMIO, set true if it can
 * @target_as: the address space targeted by the IOMMU
+ * @attrs: memory transaction attributes
 *
 * This function is called from RCU critical section
 */
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
                                                  hwaddr *page_mask_out,
                                                  bool is_write,
                                                  bool is_mmio,
-                                                 AddressSpace **target_as)
+                                                 AddressSpace **target_as,
+                                                 MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     IOMMUMemoryRegion *iommu_mr;
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
      * but page mask.
      */
     section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
-                                    NULL, &page_mask, is_write, false, &as);
+                                    NULL, &page_mask, is_write, false, &as,
+                                    attrs);

     /* Illegal translation */
     if (section.mr == &io_mem_unassigned) {
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,

     /* This can be MMIO, so setup MMIO bit. */
     section = flatview_do_translate(fv, addr, xlat, plen, NULL,
-                                    is_write, true, &as);
+                                    is_write, true, &as, attrs);
     mr = section.mr;

     if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
--
2.17.1


Split out the handling of VMOV_reg_sp and VMOV_reg_dp so that we can
permit the insns if either FP or MVE are present.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-5-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         return do_vfp_2op_##PREC(s, FN, a->vd, a->vm);    \
     }

-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
+#define DO_VFP_VMOV(INSN, PREC, FN)                       \
+    static bool trans_##INSN##_##PREC(DisasContext *s,   \
+                                      arg_##INSN##_##PREC *a)  \
+    {                                                     \
+        if (!dc_isar_feature(aa32_fp##PREC##_v2, s) &&    \
+            !dc_isar_feature(aa32_mve, s)) {              \
+            return false;                                 \
+        }                                                 \
+        return do_vfp_2op_##PREC(s, FN, a->vd, a->vm);    \
+    }
+
+DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
+DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)

 DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
 DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
--
2.20.1
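[Editor's illustration] The DO_VFP_VMOV macro in the patch above is an instance of a common pattern: generate one small trans_* function per instruction from a macro, so each expansion can carry its own feature gate. Below is a minimal, self-contained sketch of that pattern; the type and function names are invented for the example and are not the QEMU definitions.

/* illustrative only: names and the fake decode context are assumptions */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool have_fp;
    bool have_mve;
} DisasCtx;

typedef struct { int vd, vm; } arg_2op;

static bool do_2op(DisasCtx *s, const char *name, int vd, int vm)
{
    (void)s;  /* a real translator would emit ops into the context */
    printf("emit %s d%d, d%d\n", name, vd, vm);
    return true;
}

/* Each expansion creates one trans_* function with its own feature
 * check, mirroring how DO_VFP_VMOV allows either FP or MVE. */
#define DO_VMOV(INSN)                                    \
    static bool trans_##INSN(DisasCtx *s, arg_2op *a)    \
    {                                                    \
        if (!s->have_fp && !s->have_mve) {               \
            return false;                                \
        }                                                \
        return do_2op(s, #INSN, a->vd, a->vm);           \
    }

DO_VMOV(VMOV_reg_sp)
DO_VMOV(VMOV_reg_dp)

int main(void)
{
    DisasCtx s = { .have_fp = false, .have_mve = true };
    arg_2op a = { .vd = 0, .vm = 1 };
    printf("%d %d\n", trans_VMOV_reg_sp(&s, &a), trans_VMOV_reg_dp(&s, &a));
    return 0;
}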
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_extend_translation().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
---
 exec.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

 static hwaddr
 flatview_extend_translation(FlatView *fv, hwaddr addr,
-                            hwaddr target_len,
-                            MemoryRegion *mr, hwaddr base, hwaddr len,
-                            bool is_write)
+                            hwaddr target_len,
+                            MemoryRegion *mr, hwaddr base, hwaddr len,
+                            bool is_write, MemTxAttrs attrs)
 {
     hwaddr done = 0;
     hwaddr xlat;
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,

     memory_region_ref(mr);
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
-                                        l, is_write);
+                                        l, is_write, attrs);
     ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
     rcu_read_unlock();

@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
     mr = cache->mrs.mr;
     memory_region_ref(mr);
     if (memory_access_is_direct(mr, is_write)) {
+        /* We don't care about the memory attributes here as we're only
+         * doing this if we found actual RAM, which behaves the same
+         * regardless of attributes; so UNSPECIFIED is fine.
+         */
         l = flatview_extend_translation(cache->fv, addr, len, mr,
-                                        cache->xlat, l, is_write);
+                                        cache->xlat, l, is_write,
+                                        MEMTXATTRS_UNSPECIFIED);
         cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
     } else {
         cache->ptr = NULL;
--
2.17.1


The fp_sysreg_checks() function is supposed to be returning an
FPSysRegCheckResult, which is an enum with three possible values.
However, three places in the function "return false" (a hangover from
a previous iteration of the design where the function just returned a
bool). Make these return FPSysRegCheckFailed instead (for no
functional change, since both false and FPSysRegCheckFailed are
zero).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-6-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
         break;
     case ARM_VFP_FPSCR_NZCVQC:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     case ARM_VFP_FPCXT_S:
     case ARM_VFP_FPCXT_NS:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         if (!s->v8m_secure) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     default:
--
2.20.1
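[Editor's illustration] The fp_sysreg_checks() fix above is purely about stating intent: returning false from a function whose return type is a three-valued enum only works because the failure enumerator happens to be zero. A minimal standalone sketch of the difference, with names invented for the example:

/* illustrative only: not the QEMU enum or function */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    CheckFailed = 0,   /* happens to have the same value as false */
    CheckDone,
    CheckContinue,
} CheckResult;

static CheckResult check_v1(bool ok)
{
    if (!ok) {
        return false;       /* legal C, but hides that this means CheckFailed */
    }
    return CheckContinue;
}

static CheckResult check_v2(bool ok)
{
    if (!ok) {
        return CheckFailed; /* same value, intent is explicit */
    }
    return CheckContinue;
}

int main(void)
{
    printf("%d %d\n", check_v1(false), check_v2(false));
    return 0;
}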
Add more detail to the documentation for memory_region_init_iommu()
and other IOMMU-related functions and data structures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 10 deletions(-)


If MVE is implemented for an M-profile CPU then it has a VPR
register, which tracks predication information.

Implement the read and write handling of this register, and
the migration of its state.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-7-peter.maydell@linaro.org
---
 target/arm/cpu.h | 6 ++++++
 target/arm/machine.c | 19 +++++++++++++++++++
 target/arm/translate-vfp.c | 38 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 63 insertions(+)
12
15
13
diff --git a/include/exec/memory.h b/include/exec/memory.h
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/memory.h
18
--- a/target/arm/cpu.h
16
+++ b/include/exec/memory.h
19
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
20
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
18
IOMMU_ATTR_SPAPR_TCE_FD
21
uint32_t cpacr[M_REG_NUM_BANKS];
22
uint32_t nsacr;
23
int ltpsize;
24
+ uint32_t vpr;
25
} v7m;
26
27
/* Information associated with an exception about to be taken:
28
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_FPCCR, ASPEN, 31, 1)
29
R_V7M_FPCCR_UFRDY_MASK | \
30
R_V7M_FPCCR_ASPEN_MASK)
31
32
+/* v7M VPR bits */
33
+FIELD(V7M_VPR, P0, 0, 16)
34
+FIELD(V7M_VPR, MASK01, 16, 4)
35
+FIELD(V7M_VPR, MASK23, 20, 4)
36
+
37
/*
38
* System register ID fields.
39
*/
40
diff --git a/target/arm/machine.c b/target/arm/machine.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/machine.c
43
+++ b/target/arm/machine.c
44
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_fp = {
45
}
19
};
46
};
20
47
21
+/**
48
+static bool mve_needed(void *opaque)
22
+ * IOMMUMemoryRegionClass:
49
+{
23
+ *
50
+ ARMCPU *cpu = opaque;
24
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
51
+
25
+ * and provide an implementation of at least the @translate method here
52
+ return cpu_isar_feature(aa32_mve, cpu);
26
+ * to handle requests to the memory region. Other methods are optional.
53
+}
27
+ *
54
+
28
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
55
+static const VMStateDescription vmstate_m_mve = {
29
+ * to report whenever mappings are changed, by calling
56
+ .name = "cpu/m/mve",
30
+ * memory_region_notify_iommu() (or, if necessary, by calling
57
+ .version_id = 1,
31
+ * memory_region_notify_one() for each registered notifier).
58
+ .minimum_version_id = 1,
32
+ */
59
+ .needed = mve_needed,
33
typedef struct IOMMUMemoryRegionClass {
60
+ .fields = (VMStateField[]) {
34
/* private */
61
+ VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
35
struct DeviceClass parent_class;
62
+ VMSTATE_END_OF_LIST()
36
63
+ },
37
/*
64
+};
38
- * Return a TLB entry that contains a given address. Flag should
65
+
39
- * be the access permission of this translation operation. We can
66
static const VMStateDescription vmstate_m = {
40
- * set flag to IOMMU_NONE to mean that we don't need any
67
.name = "cpu/m",
41
- * read/write permission checks, like, when for region replay.
68
.version_id = 4,
42
+ * Return a TLB entry that contains a given address.
69
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
43
+ *
70
&vmstate_m_other_sp,
44
+ * The IOMMUAccessFlags indicated via @flag are optional and may
71
&vmstate_m_v8m,
45
+ * be specified as IOMMU_NONE to indicate that the caller needs
72
&vmstate_m_fp,
46
+ * the full translation information for both reads and writes. If
73
+ &vmstate_m_mve,
47
+ * the access flags are specified then the IOMMU implementation
74
NULL
48
+ * may use this as an optimization, to stop doing a page table
75
}
49
+ * walk as soon as it knows that the requested permissions are not
76
};
50
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
77
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
51
+ * full page table walk and report the permissions in the returned
78
index XXXXXXX..XXXXXXX 100644
52
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
79
--- a/target/arm/translate-vfp.c
53
+ * return different mappings for reads and writes.)
80
+++ b/target/arm/translate-vfp.c
54
+ *
81
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
55
+ * The returned information remains valid while the caller is
82
return FPSysRegCheckFailed;
56
+ * holding the big QEMU lock or is inside an RCU critical section;
83
}
57
+ * if the caller wishes to cache the mapping beyond that it must
84
break;
58
+ * register an IOMMU notifier so it can invalidate its cached
85
+ case ARM_VFP_VPR:
59
+ * information when the IOMMU mapping changes.
86
+ case ARM_VFP_P0:
60
+ *
87
+ if (!dc_isar_feature(aa32_mve, s)) {
61
+ * @iommu: the IOMMUMemoryRegion
88
+ return FPSysRegCheckFailed;
62
+ * @hwaddr: address to be translated within the memory region
89
+ }
63
+ * @flag: requested access permissions
90
+ break;
64
*/
91
default:
65
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
92
return FPSysRegCheckFailed;
66
IOMMUAccessFlags flag);
93
}
67
- /* Returns minimum supported page size */
94
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
68
+ /* Returns minimum supported page size in bytes.
95
tcg_temp_free_i32(sfpa);
69
+ * If this method is not provided then the minimum is assumed to
96
break;
70
+ * be TARGET_PAGE_SIZE.
97
}
71
+ *
98
+ case ARM_VFP_VPR:
72
+ * @iommu: the IOMMUMemoryRegion
99
+ /* Behaves as NOP if not privileged */
73
+ */
100
+ if (IS_USER(s)) {
74
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
101
+ break;
75
- /* Called when IOMMU Notifier flag changed */
102
+ }
76
+ /* Called when IOMMU Notifier flag changes (ie when the set of
103
+ tmp = loadfn(s, opaque);
77
+ * events which IOMMU users are requesting notification for changes).
104
+ store_cpu_field(tmp, v7m.vpr);
78
+ * Optional method -- need not be provided if the IOMMU does not
105
+ break;
79
+ * need to know exactly which events must be notified.
106
+ case ARM_VFP_P0:
80
+ *
107
+ {
81
+ * @iommu: the IOMMUMemoryRegion
108
+ TCGv_i32 vpr;
82
+ * @old_flags: events which previously needed to be notified
109
+ tmp = loadfn(s, opaque);
83
+ * @new_flags: events which now need to be notified
110
+ vpr = load_cpu_field(v7m.vpr);
84
+ */
111
+ tcg_gen_deposit_i32(vpr, vpr, tmp,
85
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
112
+ R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
86
IOMMUNotifierFlag old_flags,
113
+ store_cpu_field(vpr, v7m.vpr);
87
IOMMUNotifierFlag new_flags);
114
+ tcg_temp_free_i32(tmp);
88
- /* Set this up to provide customized IOMMU replay function */
115
+ break;
89
+ /* Called to handle memory_region_iommu_replay().
116
+ }
90
+ *
117
default:
91
+ * The default implementation of memory_region_iommu_replay() is to
118
g_assert_not_reached();
92
+ * call the IOMMU translate method for every page in the address space
119
}
93
+ * with flag == IOMMU_NONE and then call the notifier if translate
120
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
94
+ * returns a valid mapping. If this method is implemented then it
121
tcg_temp_free_i32(fpscr);
95
+ * overrides the default behaviour, and must provide the full semantics
122
break;
96
+ * of memory_region_iommu_replay(), by calling @notifier for every
123
}
97
+ * translation present in the IOMMU.
124
+ case ARM_VFP_VPR:
98
+ *
125
+ /* Behaves as NOP if not privileged */
99
+ * Optional method -- an IOMMU only needs to provide this method
126
+ if (IS_USER(s)) {
100
+ * if the default is inefficient or produces undesirable side effects.
127
+ break;
101
+ *
128
+ }
102
+ * Note: this is not related to record-and-replay functionality.
129
+ tmp = load_cpu_field(v7m.vpr);
103
+ */
130
+ storefn(s, opaque, tmp);
104
void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);
131
+ break;
105
132
+ case ARM_VFP_P0:
106
- /* Get IOMMU misc attributes */
133
+ tmp = load_cpu_field(v7m.vpr);
107
- int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
134
+ tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
108
+ /* Get IOMMU misc attributes. This is an optional method that
135
+ storefn(s, opaque, tmp);
109
+ * can be used to allow users of the IOMMU to get implementation-specific
136
+ break;
110
+ * information. The IOMMU implements this method to handle calls
137
default:
111
+ * by IOMMU users to memory_region_iommu_get_attr() by filling in
138
g_assert_not_reached();
112
+ * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
139
}
113
+ * the IOMMU supports. If the method is unimplemented then
114
+ * memory_region_iommu_get_attr() will always return -EINVAL.
115
+ *
116
+ * @iommu: the IOMMUMemoryRegion
117
+ * @attr: attribute being queried
118
+ * @data: memory to fill in with the attribute data
119
+ *
120
+ * Returns 0 on success, or a negative errno; in particular
121
+ * returns -EINVAL for unrecognized or unimplemented attribute types.
122
+ */
123
+ int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
124
void *data);
125
} IOMMUMemoryRegionClass;
126
127
@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
128
* An IOMMU region translates addresses and forwards accesses to a target
129
* memory region.
130
*
131
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
132
+ * @_iommu_mr should be a pointer to enough memory for an instance of
133
+ * that subclass, @instance_size is the size of that subclass, and
134
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
135
+ * instance of the subclass, and its methods will then be called to handle
136
+ * accesses to the memory region. See the documentation of
137
+ * #IOMMUMemoryRegionClass for further details.
138
+ *
139
* @_iommu_mr: the #IOMMUMemoryRegion to be initialized
140
* @instance_size: the IOMMUMemoryRegion subclass instance size
141
* @mrtypename: the type name of the #IOMMUMemoryRegion
142
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
143
* a notifier with the minimum page granularity returned by
144
* mr->iommu_ops->get_page_size().
145
*
146
+ * Note: this is not related to record-and-replay functionality.
147
+ *
148
* @iommu_mr: the memory region to observe
149
* @n: the notifier to which to replay iommu mappings
150
*/
151
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
152
* memory_region_iommu_replay_all: replay existing IOMMU translations
153
* to all the notifiers registered.
154
*
155
+ * Note: this is not related to record-and-replay functionality.
156
+ *
157
* @iommu_mr: the memory region to observe
158
*/
159
void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
160
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
161
* memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
162
* defined on the IOMMU.
163
*
164
- * Returns 0 if succeded, error code otherwise.
165
+ * Returns 0 on success, or a negative errno otherwise. In particular,
166
+ * -EINVAL indicates that the IOMMU does not support the requested
167
+ * attribute.
168
*
169
* @iommu_mr: the memory region
170
* @attr: the requested attribute
171
--
140
--
172
2.17.1
141
2.20.1
173
142
174
143
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_translate(); all its
callers now have attrs available.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
---
 include/exec/memory.h | 7 ++++---
 exec.c | 17 +++++++++--------
 2 files changed, 13 insertions(+), 11 deletions(-)


The M-profile FPSCR has an LTPSIZE field, but if MVE is not
implemented it is read-only and always reads as 4; this is how QEMU
currently handles it.

Make the field writable when MVE is implemented.

We can safely add the field to the MVE migration struct because
currently no CPUs enable MVE and so the migration struct is never
used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-8-peter.maydell@linaro.org
---
 target/arm/cpu.h | 3 ++-
 target/arm/machine.c | 1 +
 target/arm/vfp_helper.c | 9 ++++++---
 3 files changed, 9 insertions(+), 4 deletions(-)
13
19
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
22
--- a/target/arm/cpu.h
17
+++ b/include/exec/memory.h
23
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
24
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
19
*/
25
uint32_t fpdscr[M_REG_NUM_BANKS];
20
MemoryRegion *flatview_translate(FlatView *fv,
26
uint32_t cpacr[M_REG_NUM_BANKS];
21
hwaddr addr, hwaddr *xlat,
27
uint32_t nsacr;
22
- hwaddr *len, bool is_write);
28
- int ltpsize;
23
+ hwaddr *len, bool is_write,
29
+ uint32_t ltpsize;
24
+ MemTxAttrs attrs);
30
uint32_t vpr;
25
31
} v7m;
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
32
27
hwaddr addr, hwaddr *xlat,
33
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
34
29
MemTxAttrs attrs)
35
#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
36
#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
37
+#define FPCR_LTPSIZE_LENGTH 3
38
39
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
40
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
41
diff --git a/target/arm/machine.c b/target/arm/machine.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/machine.c
44
+++ b/target/arm/machine.c
45
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_mve = {
46
.needed = mve_needed,
47
.fields = (VMStateField[]) {
48
VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
49
+ VMSTATE_UINT32(env.v7m.ltpsize, ARMCPU),
50
VMSTATE_END_OF_LIST()
51
},
52
};
53
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/vfp_helper.c
56
+++ b/target/arm/vfp_helper.c
57
@@ -XXX,XX +XXX,XX @@ uint32_t vfp_get_fpscr(CPUARMState *env)
58
59
void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
30
{
60
{
31
return flatview_translate(address_space_to_flatview(as),
61
+ ARMCPU *cpu = env_archcpu(env);
32
- addr, xlat, len, is_write);
62
+
33
+ addr, xlat, len, is_write, attrs);
63
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
34
}
64
- if (!cpu_isar_feature(any_fp16, env_archcpu(env))) {
35
65
+ if (!cpu_isar_feature(any_fp16, cpu)) {
36
/* address_space_access_valid: check for validity of accessing an address
66
val &= ~FPCR_FZ16;
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
38
rcu_read_lock();
39
fv = address_space_to_flatview(as);
40
l = len;
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
43
if (len == l && memory_access_is_direct(mr, false)) {
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
45
memcpy(buf, ptr, len);
46
diff --git a/exec.c b/exec.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
49
+++ b/exec.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
51
52
/* Called from RCU critical section */
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
54
- hwaddr *plen, bool is_write)
55
+ hwaddr *plen, bool is_write,
56
+ MemTxAttrs attrs)
57
{
58
MemoryRegion *mr;
59
MemoryRegionSection section;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
61
}
62
63
l = len;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
67
}
67
68
68
return result;
69
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
70
* because in v7A no-short-vector-support cores still had to
70
MemTxResult result = MEMTX_OK;
71
* allow Stride/Len to be written with the only effect that
71
72
* some insns are required to UNDEF if the guest sets them.
72
l = len;
73
- *
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
74
- * TODO: if M-profile MVE implemented, set LTPSIZE.
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
75
*/
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
76
env->vfp.vec_len = extract32(val, 16, 3);
76
addr1, l, mr);
77
env->vfp.vec_stride = extract32(val, 20, 2);
77
78
+ } else if (cpu_isar_feature(aa32_mve, cpu)) {
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
79
+ env->v7m.ltpsize = extract32(val, FPCR_LTPSIZE_SHIFT,
79
}
80
+ FPCR_LTPSIZE_LENGTH);
80
81
l = len;
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
84
}
81
}
85
82
86
return result;
83
if (arm_feature(env, ARM_FEATURE_NEON)) {
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
93
return flatview_read_continue(fv, addr, attrs, buf, len,
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
84
--
124
2.17.1
85
2.20.1
125
86
126
87
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
---
 include/exec/exec-all.h | 5 +++--
 accel/tcg/translate-all.c | 2 +-
 exec.c | 2 +-
 target/xtensa/op_helper.c | 3 ++-
 4 files changed, 7 insertions(+), 5 deletions(-)


Currently we allow board models to specify the initial value of the
Secure VTOR register, using an init-svtor property on the TYPE_ARMV7M
object which is plumbed through to the CPU. Allow board models to
also specify the initial value of the Non-secure VTOR via a similar
init-nsvtor property.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-10-peter.maydell@linaro.org
---
 include/hw/arm/armv7m.h | 2 ++
 target/arm/cpu.h | 2 ++
 hw/arm/armv7m.c | 7 +++++++
 target/arm/cpu.c | 10 ++++++++++
 4 files changed, 21 insertions(+)
16
16
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
17
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
19
--- a/include/hw/arm/armv7m.h
20
+++ b/include/exec/exec-all.h
20
+++ b/include/hw/arm/armv7m.h
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
21
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(ARMv7MState, ARMV7M)
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
22
* devices will be automatically layered on top of this view.)
23
hwaddr paddr, int prot,
23
* + Property "idau": IDAU interface (forwarded to CPU object)
24
int mmu_idx, target_ulong size);
24
* + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
25
+ * + Property "init-nsvtor": non-secure VTOR reset value (forwarded to CPU object)
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
26
* + Property "vfp": enable VFP (forwarded to CPU object)
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
27
* + Property "dsp": enable DSP (forwarded to CPU object)
28
uintptr_t retaddr);
28
* + Property "enable-bitband": expose bitbanded IO
29
#else
29
@@ -XXX,XX +XXX,XX @@ struct ARMv7MState {
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
30
MemoryRegion *board_memory;
31
uint16_t idxmap)
31
Object *idau;
32
{
32
uint32_t init_svtor;
33
}
33
+ uint32_t init_nsvtor;
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
34
bool enable_bitband;
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
35
bool start_powered_off;
36
+ MemTxAttrs attrs)
36
bool vfp;
37
{
37
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
38
}
39
#endif
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
41
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
42
--- a/accel/tcg/translate-all.c
39
--- a/target/arm/cpu.h
43
+++ b/accel/tcg/translate-all.c
40
+++ b/target/arm/cpu.h
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
41
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
45
}
42
46
43
/* For v8M, initial value of the Secure VTOR */
47
#if !defined(CONFIG_USER_ONLY)
44
uint32_t init_svtor;
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
45
+ /* For v8M, initial value of the Non-secure VTOR */
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
46
+ uint32_t init_nsvtor;
50
{
47
51
ram_addr_t ram_addr;
48
/* [QEMU_]KVM_ARM_TARGET_* constant for this CPU, or
52
MemoryRegion *mr;
49
* QEMU_KVM_ARM_TARGET_NONE if the kernel doesn't support this CPU type.
53
diff --git a/exec.c b/exec.c
50
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
54
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
55
--- a/exec.c
52
--- a/hw/arm/armv7m.c
56
+++ b/exec.c
53
+++ b/hw/arm/armv7m.c
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
54
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
58
if (phys != -1) {
55
return;
59
/* Locks grabbed by tb_invalidate_phys_addr */
56
}
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
61
- phys | (pc & ~TARGET_PAGE_MASK));
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
63
}
57
}
64
}
58
+ if (object_property_find(OBJECT(s->cpu), "init-nsvtor")) {
65
#endif
59
+ if (!object_property_set_uint(OBJECT(s->cpu), "init-nsvtor",
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
60
+ s->init_nsvtor, errp)) {
61
+ return;
62
+ }
63
+ }
64
if (object_property_find(OBJECT(s->cpu), "start-powered-off")) {
65
if (!object_property_set_bool(OBJECT(s->cpu), "start-powered-off",
66
s->start_powered_off, errp)) {
67
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
68
MemoryRegion *),
69
DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
70
DEFINE_PROP_UINT32("init-svtor", ARMv7MState, init_svtor, 0),
71
+ DEFINE_PROP_UINT32("init-nsvtor", ARMv7MState, init_nsvtor, 0),
72
DEFINE_PROP_BOOL("enable-bitband", ARMv7MState, enable_bitband, false),
73
DEFINE_PROP_BOOL("start-powered-off", ARMv7MState, start_powered_off,
74
false),
75
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
67
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
68
--- a/target/xtensa/op_helper.c
77
--- a/target/arm/cpu.c
69
+++ b/target/xtensa/op_helper.c
78
+++ b/target/arm/cpu.c
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
79
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
80
env->regs[14] = 0xffffffff;
72
&paddr, &page_size, &access);
81
73
if (ret == 0) {
82
env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
83
+ env->v7m.vecbase[M_REG_NS] = cpu->init_nsvtor & 0xffffff80;
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
84
76
+ MEMTXATTRS_UNSPECIFIED);
85
/* Load the initial SP and PC from offset 0 and 4 in the vector table */
86
vecbase = env->v7m.vecbase[env->v7m.secure];
87
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
88
&cpu->init_svtor,
89
OBJ_PROP_FLAG_READWRITE);
77
}
90
}
78
}
91
+ if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
92
+ /*
93
+ * Initial value of the NS VTOR (for cores without the Security
94
+ * extension, this is the only VTOR)
95
+ */
96
+ object_property_add_uint32_ptr(obj, "init-nsvtor",
97
+ &cpu->init_nsvtor,
98
+ OBJ_PROP_FLAG_READWRITE);
99
+ }
100
101
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property);
79
102
80
--
103
--
81
2.17.1
104
2.20.1
82
105
83
106
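For illustration, a board model using this container could place its Non-secure vector table somewhere other than address 0 by setting the new property before realize. This is only a sketch: the board field name and the address are hypothetical, and only the "init-nsvtor" property name comes from the patch.

    /* Hypothetical board init code (sketch only): 's->armv7m' and the
     * address are illustrative; "init-nsvtor" is the property added above.
     */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-nsvtor", 0x00100000);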
diff view generated by jsdifflib
New patch
1
1
The official punctuation for Arm CPU names uses a hyphen, like
2
"Cortex-A9". We mostly follow this, but in a few places usage
3
without the hyphen has crept in. Fix those so we consistently
4
use the same way of writing the CPU name.
5
6
This commit was created with:
7
git grep -z -l 'Cortex ' | xargs -0 sed -i 's/Cortex /Cortex-/'
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 20210527095152.10968-1-peter.maydell@linaro.org
14
---
15
docs/system/arm/aspeed.rst | 4 ++--
16
docs/system/arm/nuvoton.rst | 6 +++---
17
docs/system/arm/sabrelite.rst | 2 +-
18
include/hw/arm/allwinner-h3.h | 2 +-
19
hw/arm/aspeed.c | 6 +++---
20
hw/arm/mcimx6ul-evk.c | 2 +-
21
hw/arm/mcimx7d-sabre.c | 2 +-
22
hw/arm/npcm7xx_boards.c | 4 ++--
23
hw/arm/sabrelite.c | 2 +-
24
hw/misc/npcm7xx_clk.c | 2 +-
25
10 files changed, 16 insertions(+), 16 deletions(-)
26
27
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
28
index XXXXXXX..XXXXXXX 100644
29
--- a/docs/system/arm/aspeed.rst
30
+++ b/docs/system/arm/aspeed.rst
31
@@ -XXX,XX +XXX,XX @@ The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
32
Aspeed evaluation boards. They are based on different releases of the
33
Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
34
AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
35
-with dual cores ARM Cortex A7 CPUs (1.2GHz).
36
+with dual cores ARM Cortex-A7 CPUs (1.2GHz).
37
38
The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
39
etc.
40
@@ -XXX,XX +XXX,XX @@ AST2500 SoC based machines :
41
42
AST2600 SoC based machines :
43
44
-- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
45
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex-A7)
46
- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
47
48
Supported devices
49
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
50
index XXXXXXX..XXXXXXX 100644
51
--- a/docs/system/arm/nuvoton.rst
52
+++ b/docs/system/arm/nuvoton.rst
53
@@ -XXX,XX +XXX,XX @@ Nuvoton iBMC boards (``npcm750-evb``, ``quanta-gsj``)
54
55
The `Nuvoton iBMC`_ chips (NPCM7xx) are a family of ARM-based SoCs that are
56
designed to be used as Baseboard Management Controllers (BMCs) in various
57
-servers. They all feature one or two ARM Cortex A9 CPU cores, as well as an
58
+servers. They all feature one or two ARM Cortex-A9 CPU cores, as well as an
59
assortment of peripherals targeted for either Enterprise or Data Center /
60
Hyperscale applications. The former is a superset of the latter, so NPCM750 has
61
all the peripherals of NPCM730 and more.
62
63
.. _Nuvoton iBMC: https://www.nuvoton.com/products/cloud-computing/ibmc/
64
65
-The NPCM750 SoC has two Cortex A9 cores and is targeted for the Enterprise
66
+The NPCM750 SoC has two Cortex-A9 cores and is targeted for the Enterprise
67
segment. The following machines are based on this chip :
68
69
- ``npcm750-evb`` Nuvoton NPCM750 Evaluation board
70
71
-The NPCM730 SoC has two Cortex A9 cores and is targeted for Data Center and
72
+The NPCM730 SoC has two Cortex-A9 cores and is targeted for Data Center and
73
Hyperscale applications. The following machines are based on this chip :
74
75
- ``quanta-gsj`` Quanta GSJ server BMC
76
diff --git a/docs/system/arm/sabrelite.rst b/docs/system/arm/sabrelite.rst
77
index XXXXXXX..XXXXXXX 100644
78
--- a/docs/system/arm/sabrelite.rst
79
+++ b/docs/system/arm/sabrelite.rst
80
@@ -XXX,XX +XXX,XX @@ Supported devices
81
82
The SABRE Lite machine supports the following devices:
83
84
- * Up to 4 Cortex A9 cores
85
+ * Up to 4 Cortex-A9 cores
86
* Generic Interrupt Controller
87
* 1 Clock Controller Module
88
* 1 System Reset Controller
89
diff --git a/include/hw/arm/allwinner-h3.h b/include/hw/arm/allwinner-h3.h
90
index XXXXXXX..XXXXXXX 100644
91
--- a/include/hw/arm/allwinner-h3.h
92
+++ b/include/hw/arm/allwinner-h3.h
93
@@ -XXX,XX +XXX,XX @@
94
*/
95
96
/*
97
- * The Allwinner H3 is a System on Chip containing four ARM Cortex A7
98
+ * The Allwinner H3 is a System on Chip containing four ARM Cortex-A7
99
* processor cores. Features and specifications include DDR2/DDR3 memory,
100
* SD/MMC storage cards, 10/100/1000Mbit Ethernet, USB 2.0, HDMI and
101
* various I/O modules.
102
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/hw/arm/aspeed.c
105
+++ b/hw/arm/aspeed.c
106
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_ast2600_evb_class_init(ObjectClass *oc, void *data)
107
MachineClass *mc = MACHINE_CLASS(oc);
108
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
109
110
- mc->desc = "Aspeed AST2600 EVB (Cortex A7)";
111
+ mc->desc = "Aspeed AST2600 EVB (Cortex-A7)";
112
amc->soc_name = "ast2600-a1";
113
amc->hw_strap1 = AST2600_EVB_HW_STRAP1;
114
amc->hw_strap2 = AST2600_EVB_HW_STRAP2;
115
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_tacoma_class_init(ObjectClass *oc, void *data)
116
MachineClass *mc = MACHINE_CLASS(oc);
117
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
118
119
- mc->desc = "OpenPOWER Tacoma BMC (Cortex A7)";
120
+ mc->desc = "OpenPOWER Tacoma BMC (Cortex-A7)";
121
amc->soc_name = "ast2600-a1";
122
amc->hw_strap1 = TACOMA_BMC_HW_STRAP1;
123
amc->hw_strap2 = TACOMA_BMC_HW_STRAP2;
124
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_rainier_class_init(ObjectClass *oc, void *data)
125
MachineClass *mc = MACHINE_CLASS(oc);
126
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
127
128
- mc->desc = "IBM Rainier BMC (Cortex A7)";
129
+ mc->desc = "IBM Rainier BMC (Cortex-A7)";
130
amc->soc_name = "ast2600-a1";
131
amc->hw_strap1 = RAINIER_BMC_HW_STRAP1;
132
amc->hw_strap2 = RAINIER_BMC_HW_STRAP2;
133
diff --git a/hw/arm/mcimx6ul-evk.c b/hw/arm/mcimx6ul-evk.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/hw/arm/mcimx6ul-evk.c
136
+++ b/hw/arm/mcimx6ul-evk.c
137
@@ -XXX,XX +XXX,XX @@ static void mcimx6ul_evk_init(MachineState *machine)
138
139
static void mcimx6ul_evk_machine_init(MachineClass *mc)
140
{
141
- mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex A7)";
142
+ mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex-A7)";
143
mc->init = mcimx6ul_evk_init;
144
mc->max_cpus = FSL_IMX6UL_NUM_CPUS;
145
mc->default_ram_id = "mcimx6ul-evk.ram";
146
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/mcimx7d-sabre.c
149
+++ b/hw/arm/mcimx7d-sabre.c
150
@@ -XXX,XX +XXX,XX @@ static void mcimx7d_sabre_init(MachineState *machine)
151
152
static void mcimx7d_sabre_machine_init(MachineClass *mc)
153
{
154
- mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex A7)";
155
+ mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex-A7)";
156
mc->init = mcimx7d_sabre_init;
157
mc->max_cpus = FSL_IMX7_NUM_CPUS;
158
mc->default_ram_id = "mcimx7d-sabre.ram";
159
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/hw/arm/npcm7xx_boards.c
162
+++ b/hw/arm/npcm7xx_boards.c
163
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_machine_class_init(ObjectClass *oc, void *data)
164
165
npcm7xx_set_soc_type(nmc, TYPE_NPCM750);
166
167
- mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex A9)";
168
+ mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex-A9)";
169
mc->init = npcm750_evb_init;
170
mc->default_ram_size = 512 * MiB;
171
};
172
@@ -XXX,XX +XXX,XX @@ static void gsj_machine_class_init(ObjectClass *oc, void *data)
173
174
npcm7xx_set_soc_type(nmc, TYPE_NPCM730);
175
176
- mc->desc = "Quanta GSJ (Cortex A9)";
177
+ mc->desc = "Quanta GSJ (Cortex-A9)";
178
mc->init = quanta_gsj_init;
179
mc->default_ram_size = 512 * MiB;
180
};
181
diff --git a/hw/arm/sabrelite.c b/hw/arm/sabrelite.c
182
index XXXXXXX..XXXXXXX 100644
183
--- a/hw/arm/sabrelite.c
184
+++ b/hw/arm/sabrelite.c
185
@@ -XXX,XX +XXX,XX @@ static void sabrelite_init(MachineState *machine)
186
187
static void sabrelite_machine_init(MachineClass *mc)
188
{
189
- mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex A9)";
190
+ mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex-A9)";
191
mc->init = sabrelite_init;
192
mc->max_cpus = FSL_IMX6_NUM_CPUS;
193
mc->ignore_memory_transaction_failures = true;
194
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/hw/misc/npcm7xx_clk.c
197
+++ b/hw/misc/npcm7xx_clk.c
198
@@ -XXX,XX +XXX,XX @@
199
#define NPCM7XX_CLOCK_REF_HZ (25000000)
200
201
/* Register Field Definitions */
202
-#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex A9 Cores */
203
+#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex-A9 Cores */
204
205
#define PLLCON_LOKI BIT(31)
206
#define PLLCON_LOKS BIT(30)
207
--
208
2.20.1
209
210
diff view generated by jsdifflib
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Damien Goutte-Gattat <dgouttegattat@incenp.org>
2
2
3
It forgot to increase clroffset during the loop, so it only clears the
3
The 4.x branch of Sphinx introduces a breaking change, as generated man
4
first 4 bytes.
4
pages are now written to subdirectories corresponding to the manual
5
section they belong to. This results in `make install` erroring out when
6
attempting to install the man pages, because they are not where it
7
expects to find them.
5
8
6
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
9
This patch restores the behavior of Sphinx 3.x regarding man pages.
7
Cc: qemu-stable@nongnu.org
10
8
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
11
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/256
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
12
Signed-off-by: Damien Goutte-Gattat <dgouttegattat@incenp.org>
10
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
13
Message-id: 20210503161422.15028-1-dgouttegattat@incenp.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
16
---
14
hw/intc/arm_gicv3_kvm.c | 1 +
17
docs/conf.py | 1 +
15
1 file changed, 1 insertion(+)
18
1 file changed, 1 insertion(+)
16
19
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
20
diff --git a/docs/conf.py b/docs/conf.py
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
22
--- a/docs/conf.py
20
+++ b/hw/intc/arm_gicv3_kvm.c
23
+++ b/docs/conf.py
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
24
@@ -XXX,XX +XXX,XX @@
22
if (clroffset != 0) {
25
['Stefan Hajnoczi <stefanha@redhat.com>',
23
reg = 0;
26
'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
24
kvm_gicd_access(s, clroffset, &reg, true);
27
]
25
+ clroffset += 4;
28
+man_make_section_directory = False
26
}
29
27
reg = *gic_bmp_ptr32(bmp, irq);
30
# -- Options for Texinfo output -------------------------------------------
28
kvm_gicd_access(s, offset, &reg, true);
31
29
--
32
--
30
2.17.1
33
2.20.1
31
34
32
35
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
The operands to tcg_gen_atomic_fetch_s{min,max}_i64 must
4
be signed, so that the inputs are properly extended.
5
Zero extend the result afterward, as needed.
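As a standalone illustration (not QEMU code) of why the extension matters for the 32-bit forms: 0xffffffff is the largest unsigned 32-bit value, but viewed as a signed value it is -1 and must lose a signed-minimum comparison against 1.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t mem = 0xffffffffu;                               /* -1 as int32_t */
        uint32_t rs = 1;
        uint32_t umin = (mem < rs) ? mem : rs;                    /* picks 1 */
        uint32_t smin = ((int32_t)mem < (int32_t)rs) ? mem : rs;  /* picks 0xffffffff */
        printf("unsigned min %#x, signed min %#x\n", (unsigned)umin, (unsigned)smin);
        return 0;
    }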
6
7
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/364
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 20210602020720.47679-1-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/translate-a64.c | 13 ++++++++++---
14
1 file changed, 10 insertions(+), 3 deletions(-)
15
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
19
+++ b/target/arm/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
21
int o3_opc = extract32(insn, 12, 4);
22
bool r = extract32(insn, 22, 1);
23
bool a = extract32(insn, 23, 1);
24
- TCGv_i64 tcg_rs, clean_addr;
25
+ TCGv_i64 tcg_rs, tcg_rt, clean_addr;
26
AtomicThreeOpFn *fn = NULL;
27
+ MemOp mop = s->be_data | size | MO_ALIGN;
28
29
if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
30
unallocated_encoding(s);
31
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
32
break;
33
case 004: /* LDSMAX */
34
fn = tcg_gen_atomic_fetch_smax_i64;
35
+ mop |= MO_SIGN;
36
break;
37
case 005: /* LDSMIN */
38
fn = tcg_gen_atomic_fetch_smin_i64;
39
+ mop |= MO_SIGN;
40
break;
41
case 006: /* LDUMAX */
42
fn = tcg_gen_atomic_fetch_umax_i64;
43
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
44
}
45
46
tcg_rs = read_cpu_reg(s, rs, true);
47
+ tcg_rt = cpu_reg(s, rt);
48
49
if (o3_opc == 1) { /* LDCLR */
50
tcg_gen_not_i64(tcg_rs, tcg_rs);
51
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
52
/* The tcg atomic primitives are all full barriers. Therefore we
53
* can ignore the Acquire and Release bits of this instruction.
54
*/
55
- fn(cpu_reg(s, rt), clean_addr, tcg_rs, get_mem_index(s),
56
- s->be_data | size | MO_ALIGN);
57
+ fn(tcg_rt, clean_addr, tcg_rs, get_mem_index(s), mop);
58
+
59
+ if ((mop & MO_SIGN) && size != MO_64) {
60
+ tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
61
+ }
62
}
63
64
/*
65
--
66
2.20.1
67
68
diff view generated by jsdifflib
New patch
1
From: Jamie Iles <jamie@nuviainc.com>
1
2
3
The DAIF and PAC checks used raise_exception_ra to raise an exception
4
and unwind CPU state, but raise_exception_ra is currently designed for
5
handling data aborts, as the syndrome is partially precomputed and
6
encoded in the TB and then merged in merge_syn_data_abort when handling
7
the data abort. Using raise_exception_ra for DAIF and PAC checks
8
results in an empty syndrome being retrieved from data[2] in
9
restore_state_to_opc and setting ESR to 0. This manifested as:
10
11
kvm [571]: Unknown exception class: esr: 0x000000 –
12
Unknown/Uncategorized
13
14
when launching a KVM guest with the host qemu using a CPU supporting
15
EL2+pointer authentication and enabling pointer authentication in the
16
guest.
17
18
Rework raise_exception_ra such that the state is restored before raising
19
the exception so that the exception is not clobbered by
20
restore_state_to_opc.
21
22
Fixes: 0d43e1a2d29a ("target/arm: Add PAuth helpers")
23
Cc: Richard Henderson <richard.henderson@linaro.org>
24
Cc: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
26
[PMM: added comment]
27
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
---
30
target/arm/op_helper.c | 11 +++++++++--
31
1 file changed, 9 insertions(+), 2 deletions(-)
32
33
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/op_helper.c
36
+++ b/target/arm/op_helper.c
37
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
38
void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
39
uint32_t target_el, uintptr_t ra)
40
{
41
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
42
- cpu_loop_exit_restore(cs, ra);
43
+ CPUState *cs = env_cpu(env);
44
+
45
+ /*
46
+ * restore_state_to_opc() will set env->exception.syndrome, so
47
+ * we must restore CPU state here before setting the syndrome
48
+ * the caller passed us, and cannot use cpu_loop_exit_restore().
49
+ */
50
+ cpu_restore_state(cs, ra, true);
51
+ raise_exception(env, excp, syndrome, target_el);
52
}
53
54
uint64_t HELPER(neon_tbl)(CPUARMState *env, uint32_t desc,
55
--
56
2.20.1
57
58
diff view generated by jsdifflib
New patch
1
From: Jamie Iles <jamie@nuviainc.com>
1
2
3
Now that there are no other users of do_raise_exception, fold it into
4
raise_exception.
5
6
Cc: Richard Henderson <richard.henderson@linaro.org>
7
Cc: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/op_helper.c | 12 ++----------
13
1 file changed, 2 insertions(+), 10 deletions(-)
14
15
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/op_helper.c
18
+++ b/target/arm/op_helper.c
19
@@ -XXX,XX +XXX,XX @@
20
#define SIGNBIT (uint32_t)0x80000000
21
#define SIGNBIT64 ((uint64_t)1 << 63)
22
23
-static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
24
- uint32_t syndrome, uint32_t target_el)
25
+void raise_exception(CPUARMState *env, uint32_t excp,
26
+ uint32_t syndrome, uint32_t target_el)
27
{
28
CPUState *cs = env_cpu(env);
29
30
@@ -XXX,XX +XXX,XX @@ static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
31
cs->exception_index = excp;
32
env->exception.syndrome = syndrome;
33
env->exception.target_el = target_el;
34
-
35
- return cs;
36
-}
37
-
38
-void raise_exception(CPUARMState *env, uint32_t excp,
39
- uint32_t syndrome, uint32_t target_el)
40
-{
41
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
42
cpu_loop_exit(cs);
43
}
44
45
--
46
2.20.1
47
48
diff view generated by jsdifflib
New patch
1
From: Jamie Iles <jamie@nuviainc.com>
1
2
3
Now that raise_exception_ra restores the state before raising the
4
exception, we can use raise_exception_ra to perform the state restore +
5
exception raising without clobbering the syndrome.
6
7
Cc: Richard Henderson <richard.henderson@linaro.org>
8
Cc: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
10
[PMM: Keep the one line of the comment that is still relevant]
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/mte_helper.c | 12 +++---------
15
1 file changed, 3 insertions(+), 9 deletions(-)
16
17
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/mte_helper.c
20
+++ b/target/arm/mte_helper.c
21
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
22
23
switch (tcf) {
24
case 1:
25
- /*
26
- * Tag check fail causes a synchronous exception.
27
- *
28
- * In restore_state_to_opc, we set the exception syndrome
29
- * for the load or store operation. Unwind first so we
30
- * may overwrite that with the syndrome for the tag check.
31
- */
32
- cpu_restore_state(env_cpu(env), ra, true);
33
+ /* Tag check fail causes a synchronous exception. */
34
env->exception.vaddress = dirty_ptr;
35
36
is_write = FIELD_EX32(desc, MTEDESC, WRITE);
37
syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
38
is_write, 0x11);
39
- raise_exception(env, EXCP_DATA_ABORT, syn, exception_target_el(env));
40
+ raise_exception_ra(env, EXCP_DATA_ABORT, syn,
41
+ exception_target_el(env), ra);
42
/* noreturn, but fall through to the assert anyway */
43
44
case 0:
45
--
46
2.20.1
47
48
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Jamie Iles <jamie@nuviainc.com>
2
add MemTxAttrs as an argument to address_space_translate_iommu().
3
2
3
The sequence cpu_restore_state() + raise_exception() is equivalent to
4
raise_exception_ra(), so use that instead. (In this case we never
5
cared about the syndrome value, because M-profile doesn't use the
6
syndrome; the old code was just written unnecessarily awkwardly.)
7
8
Cc: Richard Henderson <richard.henderson@linaro.org>
9
Cc: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
11
[PMM: Retain edited version of comment; rewrite commit message]
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
8
---
14
---
9
exec.c | 8 +++++---
15
target/arm/m_helper.c | 5 +----
10
1 file changed, 5 insertions(+), 3 deletions(-)
16
target/arm/op_helper.c | 9 +++------
17
2 files changed, 4 insertions(+), 10 deletions(-)
11
18
12
diff --git a/exec.c b/exec.c
19
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
13
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
21
--- a/target/arm/m_helper.c
15
+++ b/exec.c
22
+++ b/target/arm/m_helper.c
16
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
23
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
17
* @is_write: whether the translation operation is for write
24
limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
18
* @is_mmio: whether this can be MMIO, set true if it can
25
19
* @target_as: the address space targeted by the IOMMU
26
if (val < limit) {
20
+ * @attrs: transaction attributes
27
- CPUState *cs = env_cpu(env);
21
*
28
-
22
* This function is called from RCU critical section. It is the common
29
- cpu_restore_state(cs, GETPC(), true);
23
* part of flatview_do_translate and address_space_translate_cached.
30
- raise_exception(env, EXCP_STKOF, 0, 1);
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
31
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
25
hwaddr *page_mask_out,
32
}
26
bool is_write,
33
27
bool is_mmio,
34
if (is_psp) {
28
- AddressSpace **target_as)
35
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
29
+ AddressSpace **target_as,
36
index XXXXXXX..XXXXXXX 100644
30
+ MemTxAttrs attrs)
37
--- a/target/arm/op_helper.c
31
{
38
+++ b/target/arm/op_helper.c
32
MemoryRegionSection *section;
39
@@ -XXX,XX +XXX,XX @@ void HELPER(v8m_stackcheck)(CPUARMState *env, uint32_t newvalue)
33
hwaddr page_mask = (hwaddr)-1;
40
* raising an exception if the limit is breached.
34
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
41
*/
35
return address_space_translate_iommu(iommu_mr, xlat,
42
if (newvalue < v7m_sp_limit(env)) {
36
plen_out, page_mask_out,
43
- CPUState *cs = env_cpu(env);
37
is_write, is_mmio,
44
-
38
- target_as);
45
/*
39
+ target_as, attrs);
46
* Stack limit exceptions are a rare case, so rather than syncing
47
- * PC/condbits before the call, we use cpu_restore_state() to
48
- * get them right before raising the exception.
49
+ * PC/condbits before the call, we use raise_exception_ra() so
50
+ * that cpu_restore_state() will sort them out.
51
*/
52
- cpu_restore_state(cs, GETPC(), true);
53
- raise_exception(env, EXCP_STKOF, 0, 1);
54
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
40
}
55
}
41
if (page_mask_out) {
42
/* Not behind an IOMMU, use default page size. */
43
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
44
45
section = address_space_translate_iommu(iommu_mr, xlat, plen,
46
NULL, is_write, true,
47
- &target_as);
48
+ &target_as, attrs);
49
return section.mr;
50
}
56
}
51
57
52
--
58
--
53
2.17.1
59
2.20.1
54
60
55
61
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Note that the SVE BFLOAT16 support does not require SVE2,
4
it is an independent extension.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-2-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/cpu.h | 15 +++++++++++++++
12
1 file changed, 15 insertions(+)
13
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
19
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
20
}
21
22
+static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
23
+{
24
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
25
+}
26
+
27
static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
28
{
29
return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
30
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
31
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
32
}
33
34
+static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
35
+{
36
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
37
+}
38
+
39
static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
40
{
41
/* We always set the AdvSIMD and FP fields identically. */
42
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
43
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
44
}
45
46
+static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
47
+{
48
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
49
+}
50
+
51
static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
52
{
53
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
54
--
55
2.20.1
56
57
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525225817.400336-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 15 ++++++---------
9
1 file changed, 6 insertions(+), 9 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
16
int rd = extract32(insn, 0, 5);
17
18
if (mos) {
19
- unallocated_encoding(s);
20
- return;
21
+ goto do_unallocated;
22
}
23
24
switch (opcode) {
25
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
26
/* FCVT between half, single and double precision */
27
int dtype = extract32(opcode, 0, 2);
28
if (type == 2 || dtype == type) {
29
- unallocated_encoding(s);
30
- return;
31
+ goto do_unallocated;
32
}
33
if (!fp_access_check(s)) {
34
return;
35
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
36
37
case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
38
if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
39
- unallocated_encoding(s);
40
- return;
41
+ goto do_unallocated;
42
}
43
/* fall through */
44
case 0x0 ... 0x3:
45
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
46
break;
47
case 3:
48
if (!dc_isar_feature(aa64_fp16, s)) {
49
- unallocated_encoding(s);
50
- return;
51
+ goto do_unallocated;
52
}
53
54
if (!fp_access_check(s)) {
55
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
56
handle_fp_1src_half(s, opcode, rd, rn);
57
break;
58
default:
59
- unallocated_encoding(s);
60
+ goto do_unallocated;
61
}
62
break;
63
64
default:
65
+ do_unallocated:
66
unallocated_encoding(s);
67
break;
68
}
69
--
70
2.20.1
71
72
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is the 64-bit BFCVT and the 32-bit VCVT{B,T}.BF16.F32.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525225817.400336-4-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 1 +
11
target/arm/vfp.decode | 2 ++
12
target/arm/translate-a64.c | 19 +++++++++++++++++++
13
target/arm/translate-vfp.c | 24 ++++++++++++++++++++++++
14
target/arm/vfp_helper.c | 5 +++++
15
5 files changed, 51 insertions(+)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
22
23
DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
24
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
25
+DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
26
27
DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
28
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
29
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/vfp.decode
32
+++ b/target/arm/vfp.decode
33
@@ -XXX,XX +XXX,XX @@ VCVT_f64_f16 ---- 1110 1.11 0010 .... 1011 t:1 1.0 .... \
34
35
# VCVTB and VCVTT to f16: Vd format is always vd_sp;
36
# Vm format depends on size bit
37
+VCVT_b16_f32 ---- 1110 1.11 0011 .... 1001 t:1 1.0 .... \
38
+ vd=%vd_sp vm=%vm_sp
39
VCVT_f16_f32 ---- 1110 1.11 0011 .... 1010 t:1 1.0 .... \
40
vd=%vd_sp vm=%vm_sp
41
VCVT_f16_f64 ---- 1110 1.11 0011 .... 1011 t:1 1.0 .... \
42
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate-a64.c
45
+++ b/target/arm/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
47
case 0x3: /* FSQRT */
48
gen_helper_vfp_sqrts(tcg_res, tcg_op, cpu_env);
49
goto done;
50
+ case 0x6: /* BFCVT */
51
+ gen_fpst = gen_helper_bfcvt;
52
+ break;
53
case 0x8: /* FRINTN */
54
case 0x9: /* FRINTP */
55
case 0xa: /* FRINTM */
56
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
57
}
58
break;
59
60
+ case 0x6:
61
+ switch (type) {
62
+ case 1: /* BFCVT */
63
+ if (!dc_isar_feature(aa64_bf16, s)) {
64
+ goto do_unallocated;
65
+ }
66
+ if (!fp_access_check(s)) {
67
+ return;
68
+ }
69
+ handle_fp_1src_single(s, opcode, rd, rn);
70
+ break;
71
+ default:
72
+ goto do_unallocated;
73
+ }
74
+ break;
75
+
76
default:
77
do_unallocated:
78
unallocated_encoding(s);
79
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-vfp.c
82
+++ b/target/arm/translate-vfp.c
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
84
return true;
85
}
86
87
+static bool trans_VCVT_b16_f32(DisasContext *s, arg_VCVT_b16_f32 *a)
88
+{
89
+ TCGv_ptr fpst;
90
+ TCGv_i32 tmp;
91
+
92
+ if (!dc_isar_feature(aa32_bf16, s)) {
93
+ return false;
94
+ }
95
+
96
+ if (!vfp_access_check(s)) {
97
+ return true;
98
+ }
99
+
100
+ fpst = fpstatus_ptr(FPST_FPCR);
101
+ tmp = tcg_temp_new_i32();
102
+
103
+ vfp_load_reg32(tmp, a->vm);
104
+ gen_helper_bfcvt(tmp, tmp, fpst);
105
+ tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
106
+ tcg_temp_free_ptr(fpst);
107
+ tcg_temp_free_i32(tmp);
108
+ return true;
109
+}
110
+
111
static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
112
{
113
TCGv_ptr fpst;
114
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/vfp_helper.c
117
+++ b/target/arm/vfp_helper.c
118
@@ -XXX,XX +XXX,XX @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
119
return float64_to_float32(x, &env->vfp.fp_status);
120
}
121
122
+uint32_t HELPER(bfcvt)(float32 x, void *status)
123
+{
124
+ return float32_to_bfloat16(x, status);
125
+}
126
+
127
/*
128
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
129
* must always round-to-nearest; the AArch64 ones honour the FPSCR
130
--
131
2.20.1
132
133
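As a rough illustration of the relationship these helpers rely on: bfloat16 is the top half of an IEEE binary32 encoding, so a truncating conversion is just a 16-bit shift. The real helper goes through softfloat's float32_to_bfloat16 so that rounding and exception flags are handled correctly; the sketch below only shows the bit layout.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Truncating sketch only; the QEMU helper rounds via softfloat. */
    static uint16_t bf16_truncate(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof(bits));
        return bits >> 16;   /* sign, 8 exponent bits, top 7 mantissa bits */
    }

    int main(void)
    {
        printf("%#x\n", (unsigned)bf16_truncate(1.0f));   /* prints 0x3f80 */
        return 0;
    }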
diff view generated by jsdifflib
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
This is BFCVT{N,T} for both AArch64 AdvSIMD and SVE,
4
and VCVT.BF16.F32 for AArch32 NEON.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-5-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 4 ++++
12
target/arm/helper.h | 1 +
13
target/arm/neon-dp.decode | 1 +
14
target/arm/sve.decode | 2 ++
15
target/arm/sve_helper.c | 2 ++
16
target/arm/translate-a64.c | 17 ++++++++++++++
17
target/arm/translate-neon.c | 45 +++++++++++++++++++++++++++++++++++++
18
target/arm/translate-sve.c | 16 +++++++++++++
19
target/arm/vfp_helper.c | 7 ++++++
20
9 files changed, 95 insertions(+)
21
22
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper-sve.h
25
+++ b/target/arm/helper-sve.h
26
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
27
void, ptr, ptr, ptr, ptr, i32)
28
DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
29
void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve_bfcvt, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
33
DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
34
void, ptr, ptr, ptr, ptr, i32)
35
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
36
void, ptr, ptr, ptr, ptr, i32)
37
DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
38
void, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_5(sve_bfcvtnt, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
43
void, ptr, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/helper.h b/target/arm/helper.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.h
47
+++ b/target/arm/helper.h
48
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
49
DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
50
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
51
DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
52
+DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, ptr)
53
54
DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
55
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
56
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/neon-dp.decode
59
+++ b/target/arm/neon-dp.decode
60
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
61
VRINTZ 1111 001 11 . 11 .. 10 .... 0 1011 . . 0 .... @2misc
62
63
VCVT_F16_F32 1111 001 11 . 11 .. 10 .... 0 1100 0 . 0 .... @2misc_q0
64
+ VCVT_B16_F32 1111 001 11 . 11 .. 10 .... 0 1100 1 . 0 .... @2misc_q0
65
66
VRINTM 1111 001 11 . 11 .. 10 .... 0 1101 . . 0 .... @2misc
67
68
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/sve.decode
71
+++ b/target/arm/sve.decode
72
@@ -XXX,XX +XXX,XX @@ FNMLS_zpzzz 01100101 .. 1 ..... 111 ... ..... ..... @rdn_pg_rm_ra
73
# SVE floating-point convert precision
74
FCVT_sh 01100101 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
75
FCVT_hs 01100101 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
76
+BFCVT 01100101 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
77
FCVT_dh 01100101 11 0010 00 101 ... ..... ..... @rd_pg_rn_e0
78
FCVT_hd 01100101 11 0010 01 101 ... ..... ..... @rd_pg_rn_e0
79
FCVT_ds 01100101 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
80
@@ -XXX,XX +XXX,XX @@ RAX1 01000101 00 1 ..... 11110 1 ..... ..... @rd_rn_rm_e0
81
FCVTXNT_ds 01100100 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
82
FCVTX_ds 01100101 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
83
FCVTNT_sh 01100100 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
84
+BFCVTNT 01100100 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
85
FCVTLT_hs 01100100 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
86
FCVTNT_ds 01100100 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
87
FCVTLT_sd 01100100 11 0010 11 101 ... ..... ..... @rd_pg_rn_e0
88
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/sve_helper.c
91
+++ b/target/arm/sve_helper.c
92
@@ -XXX,XX +XXX,XX @@ static inline uint64_t vfp_float64_to_uint64_rtz(float64 f, float_status *s)
93
94
DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
95
DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
96
+DO_ZPZ_FP(sve_bfcvt, uint32_t, H1_4, float32_to_bfloat16)
97
DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
98
DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
99
DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
100
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
101
} while (i != 0); \
102
}
103
104
+DO_FCVTNT(sve_bfcvtnt, uint32_t, uint16_t, H1_4, H1_2, float32_to_bfloat16)
105
DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
106
DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, , H1_4, float64_to_float32)
107
108
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
109
index XXXXXXX..XXXXXXX 100644
110
--- a/target/arm/translate-a64.c
111
+++ b/target/arm/translate-a64.c
112
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
113
tcg_temp_free_i32(ahp);
114
}
115
break;
116
+ case 0x36: /* BFCVTN, BFCVTN2 */
117
+ {
118
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
119
+ gen_helper_bfcvt_pair(tcg_res[pass], tcg_op, fpst);
120
+ tcg_temp_free_ptr(fpst);
121
+ }
122
+ break;
123
case 0x56: /* FCVTXN, FCVTXN2 */
124
/* 64 bit to 32 bit float conversion
125
* with von Neumann rounding (round to odd)
126
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
127
}
128
handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
129
return;
130
+ case 0x36: /* BFCVTN, BFCVTN2 */
131
+ if (!dc_isar_feature(aa64_bf16, s) || size != 2) {
132
+ unallocated_encoding(s);
133
+ return;
134
+ }
135
+ if (!fp_access_check(s)) {
136
+ return;
137
+ }
138
+ handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
139
+ return;
140
case 0x17: /* FCVTL, FCVTL2 */
141
if (!fp_access_check(s)) {
142
return;
143
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/target/arm/translate-neon.c
146
+++ b/target/arm/translate-neon.c
147
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
148
return true;
149
}
150
151
+static bool trans_VCVT_B16_F32(DisasContext *s, arg_2misc *a)
152
+{
153
+ TCGv_ptr fpst;
154
+ TCGv_i64 tmp;
155
+ TCGv_i32 dst0, dst1;
156
+
157
+ if (!dc_isar_feature(aa32_bf16, s)) {
158
+ return false;
159
+ }
160
+
161
+ /* UNDEF accesses to D16-D31 if they don't exist. */
162
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
163
+ ((a->vd | a->vm) & 0x10)) {
164
+ return false;
165
+ }
166
+
167
+ if ((a->vm & 1) || (a->size != 1)) {
168
+ return false;
169
+ }
170
+
171
+ if (!vfp_access_check(s)) {
172
+ return true;
173
+ }
174
+
175
+ fpst = fpstatus_ptr(FPST_STD);
176
+ tmp = tcg_temp_new_i64();
177
+ dst0 = tcg_temp_new_i32();
178
+ dst1 = tcg_temp_new_i32();
179
+
180
+ read_neon_element64(tmp, a->vm, 0, MO_64);
181
+ gen_helper_bfcvt_pair(dst0, tmp, fpst);
182
+
183
+ read_neon_element64(tmp, a->vm, 1, MO_64);
184
+ gen_helper_bfcvt_pair(dst1, tmp, fpst);
185
+
186
+ write_neon_element32(dst0, a->vd, 0, MO_32);
187
+ write_neon_element32(dst1, a->vd, 1, MO_32);
188
+
189
+ tcg_temp_free_i64(tmp);
190
+ tcg_temp_free_i32(dst0);
191
+ tcg_temp_free_i32(dst1);
192
+ tcg_temp_free_ptr(fpst);
193
+ return true;
194
+}
195
+
196
static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
197
{
198
TCGv_ptr fpst;
199
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/target/arm/translate-sve.c
202
+++ b/target/arm/translate-sve.c
203
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_hs(DisasContext *s, arg_rpr_esz *a)
204
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hs);
205
}
206
207
+static bool trans_BFCVT(DisasContext *s, arg_rpr_esz *a)
208
+{
209
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
210
+ return false;
211
+ }
212
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvt);
213
+}
214
+
215
static bool trans_FCVT_dh(DisasContext *s, arg_rpr_esz *a)
216
{
217
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_dh);
218
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTNT_sh(DisasContext *s, arg_rpr_esz *a)
219
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_sh);
220
}
221
222
+static bool trans_BFCVTNT(DisasContext *s, arg_rpr_esz *a)
223
+{
224
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
225
+ return false;
226
+ }
227
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvtnt);
228
+}
229
+
230
static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
231
{
232
if (!dc_isar_feature(aa64_sve2, s)) {
233
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
234
index XXXXXXX..XXXXXXX 100644
235
--- a/target/arm/vfp_helper.c
236
+++ b/target/arm/vfp_helper.c
237
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(bfcvt)(float32 x, void *status)
238
return float32_to_bfloat16(x, status);
239
}
240
241
+uint32_t HELPER(bfcvt_pair)(uint64_t pair, void *status)
242
+{
243
+ bfloat16 lo = float32_to_bfloat16(extract64(pair, 0, 32), status);
244
+ bfloat16 hi = float32_to_bfloat16(extract64(pair, 32, 32), status);
245
+ return deposit32(lo, 16, 16, hi);
246
+}
247
+
248
/*
249
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
250
* must always round-to-nearest; the AArch64 ones honour the FPSCR
251
--
252
2.20.1
253
254
diff view generated by jsdifflib
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
acpi_data_push uses g_array_set_size to resize the underlying memory. If there
3
For Arm BFDOT and BFMMLA, we need a version of round-to-odd
4
is not enough contiguous memory, the data address will change, so the previous
4
that overflows to infinity, instead of the max normal number.
5
pointer can no longer be used. The code must update the pointer and use
6
the new one.
7
5
8
Also, previous codes wrongly use le32 conversion of iort->node_offset
6
Cc: Alex Bennée <alex.bennee@linaro.org>
9
for subsequent computations that will result incorrect value if host is
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
not little endian. So use the non-converted value instead.
8
Message-id: 20210525225817.400336-6-richard.henderson@linaro.org
11
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
12
include/fpu/softfloat-types.h | 4 +++-
18
1 file changed, 15 insertions(+), 5 deletions(-)
13
fpu/softfloat-parts.c.inc | 6 ++++--
14
2 files changed, 7 insertions(+), 3 deletions(-)
19
15
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
16
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
18
--- a/include/fpu/softfloat-types.h
23
+++ b/hw/arm/virt-acpi-build.c
19
+++ b/include/fpu/softfloat-types.h
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
20
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
25
AcpiIortItsGroup *its;
21
float_round_up = 2,
26
AcpiIortTable *iort;
22
float_round_to_zero = 3,
27
AcpiIortSmmu3 *smmu;
23
float_round_ties_away = 4,
28
- size_t node_size, iort_length, smmu_offset = 0;
24
- /* Not an IEEE rounding mode: round to the closest odd mantissa value */
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
25
+ /* Not an IEEE rounding mode: round to closest odd, overflow to max */
30
AcpiIortRC *rc;
26
float_round_to_odd = 5,
31
27
+ /* Not an IEEE rounding mode: round to closest odd, overflow to inf */
32
iort = acpi_data_push(table_data, sizeof(*iort));
28
+ float_round_to_odd_inf = 6,
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
29
} FloatRoundMode;
34
30
35
iort_length = sizeof(*iort);
31
/*
36
iort->node_count = cpu_to_le32(nb_nodes);
32
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
33
index XXXXXXX..XXXXXXX 100644
38
+ /*
34
--- a/fpu/softfloat-parts.c.inc
39
+ * Use a copy in case table_data->data moves during acpi_data_push
35
+++ b/fpu/softfloat-parts.c.inc
40
+ * operations.
36
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
41
+ */
37
g_assert_not_reached();
42
+ iort_node_offset = sizeof(*iort);
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
44
45
/* ITS group node */
46
node_size = sizeof(*its) + sizeof(uint32_t);
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
48
int irq = vms->irqmap[VIRT_SMMU];
49
50
/* SMMUv3 node */
51
- smmu_offset = iort->node_offset + node_size;
52
+ smmu_offset = iort_node_offset + node_size;
53
node_size = sizeof(*smmu) + sizeof(*idmap);
54
iort_length += node_size;
55
smmu = acpi_data_push(table_data, node_size);
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
57
idmap->id_count = cpu_to_le32(0xFFFF);
58
idmap->output_base = 0;
59
/* output IORT node is the ITS group node (the first node) */
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
62
}
38
}
63
39
64
/* Root Complex Node */
40
+ overflow_norm = false;
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
41
switch (s->float_rounding_mode) {
66
idmap->output_reference = cpu_to_le32(smmu_offset);
42
case float_round_nearest_even:
67
} else {
43
- overflow_norm = false;
68
/* output IORT node is the ITS group node (the first node) */
44
inc = ((p->frac_lo & roundeven_mask) != frac_lsbm1 ? frac_lsbm1 : 0);
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
45
break;
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
46
case float_round_ties_away:
71
}
47
- overflow_norm = false;
72
48
inc = frac_lsbm1;
73
+ /*
49
break;
74
+ * Update the pointer address in case table_data->data moves during above
50
case float_round_to_zero:
75
+ * acpi_data_push operations.
51
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
76
+ */
52
break;
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
53
case float_round_to_odd:
78
iort->length = cpu_to_le32(iort_length);
54
overflow_norm = true;
79
55
+ /* fall through */
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
56
+ case float_round_to_odd_inf:
57
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
58
break;
59
default:
60
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
61
? frac_lsbm1 : 0);
62
break;
63
case float_round_to_odd:
64
+ case float_round_to_odd_inf:
65
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
66
break;
67
default:
81
--
68
--
82
2.17.1
69
2.20.1
83
70
84
71
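The stale-pointer hazard fixed in the IORT builder above is generic to any GArray-backed buffer; a minimal glib-only sketch (not QEMU code) of the same pattern:

    #include <glib.h>

    int main(void)
    {
        GArray *buf = g_array_new(FALSE, TRUE, 1);
        g_array_set_size(buf, 64);
        char *hdr = buf->data;            /* points into the current allocation */
        g_array_set_size(buf, 1 << 20);   /* may g_realloc() and move the data */
        hdr = buf->data;                  /* re-derive the pointer after resizing */
        hdr[0] = 0;
        g_array_free(buf, TRUE);
        return 0;
    }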
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is BFDOT for both AArch64 AdvSIMD and SVE,
4
and VDOT.BF16 for AArch32 NEON.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525225817.400336-7-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 3 +++
12
target/arm/neon-shared.decode | 2 ++
13
target/arm/sve.decode | 3 +++
14
target/arm/translate-a64.c | 20 ++++++++++++++++++
15
target/arm/translate-neon.c | 9 ++++++++
16
target/arm/translate-sve.c | 12 +++++++++++
17
target/arm/vec_helper.c | 40 +++++++++++++++++++++++++++++++++++
18
7 files changed, 89 insertions(+)
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_ummla_b, TCG_CALL_NO_RWG,
25
DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
26
void, ptr, ptr, ptr, ptr, i32)
27
28
+DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
#include "helper-sve.h"
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/neon-shared.decode
37
+++ b/target/arm/neon-shared.decode
38
@@ -XXX,XX +XXX,XX @@ VUDOT 1111 110 00 . 10 .... .... 1101 . q:1 . 1 .... \
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
40
VUSDOT 1111 110 01 . 10 .... .... 1101 . q:1 . 0 .... \
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
42
+VDOT_b16 1111 110 00 . 00 .... .... 1101 . q:1 . 0 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
44
45
# VFM[AS]L
46
VFML 1111 110 0 s:1 . 10 .... .... 1000 . 0 . 1 .... \
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/sve.decode
50
+++ b/target/arm/sve.decode
51
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
52
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
53
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
54
55
+### SVE2 floating-point bfloat16 dot-product
56
+BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
57
+
58
### SVE2 floating-point multiply-add long (indexed)
59
FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
60
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
61
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/translate-a64.c
64
+++ b/target/arm/translate-a64.c
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
66
}
67
feature = dc_isar_feature(aa64_fcma, s);
68
break;
69
+ case 0x1f: /* BFDOT */
70
+ switch (size) {
71
+ case 1:
72
+ feature = dc_isar_feature(aa64_bf16, s);
73
+ break;
74
+ default:
75
+ unallocated_encoding(s);
76
+ return;
77
+ }
78
+ break;
79
default:
80
unallocated_encoding(s);
81
return;
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
83
}
84
return;
85
86
+ case 0xf: /* BFDOT */
87
+ switch (size) {
88
+ case 1:
89
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
90
+ break;
91
+ default:
92
+ g_assert_not_reached();
93
+ }
94
+ return;
95
+
96
default:
97
g_assert_not_reached();
98
}
99
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/translate-neon.c
102
+++ b/target/arm/translate-neon.c
103
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSDOT(DisasContext *s, arg_VUSDOT *a)
104
gen_helper_gvec_usdot_b);
105
}
106
107
+static bool trans_VDOT_b16(DisasContext *s, arg_VDOT_b16 *a)
108
+{
109
+ if (!dc_isar_feature(aa32_bf16, s)) {
110
+ return false;
111
+ }
112
+ return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
113
+ gen_helper_gvec_bfdot);
114
+}
115
+
116
static bool trans_VFML(DisasContext *s, arg_VFML *a)
117
{
118
int opr_sz;
119
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate-sve.c
122
+++ b/target/arm/translate-sve.c
123
@@ -XXX,XX +XXX,XX @@ static bool trans_UMMLA(DisasContext *s, arg_rrrr_esz *a)
124
{
125
return do_i8mm_zzzz_ool(s, a, gen_helper_gvec_ummla_b, 0);
126
}
127
+
128
+static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
129
+{
130
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
131
+ return false;
132
+ }
133
+ if (sve_access_check(s)) {
134
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot,
135
+ a->rd, a->rn, a->rm, a->ra, 0);
136
+ }
137
+ return true;
138
+}
139
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/target/arm/vec_helper.c
142
+++ b/target/arm/vec_helper.c
143
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
144
DO_MMLA_B(gvec_smmla_b, do_smmla_b)
145
DO_MMLA_B(gvec_ummla_b, do_ummla_b)
146
DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
147
+
148
+/*
149
+ * BFloat16 Dot Product
150
+ */
151
+
152
+static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
153
+{
154
+ /* FPCR is ignored for BFDOT and BFMMLA. */
155
+ float_status bf_status = {
156
+ .tininess_before_rounding = float_tininess_before_rounding,
157
+ .float_rounding_mode = float_round_to_odd_inf,
158
+ .flush_to_zero = true,
159
+ .flush_inputs_to_zero = true,
160
+ .default_nan_mode = true,
161
+ };
162
+ float32 t1, t2;
163
+
164
+ /*
165
+ * Extract each BFloat16 from the element pair, and shift
166
+ * them such that they become float32.
167
+ */
168
+ t1 = float32_mul(e1 << 16, e2 << 16, &bf_status);
169
+ t2 = float32_mul(e1 & 0xffff0000u, e2 & 0xffff0000u, &bf_status);
170
+ t1 = float32_add(t1, t2, &bf_status);
171
+ t1 = float32_add(sum, t1, &bf_status);
172
+
173
+ return t1;
174
+}
175
+
176
+void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
177
+{
178
+ intptr_t i, opr_sz = simd_oprsz(desc);
179
+ float32 *d = vd, *a = va;
180
+ uint32_t *n = vn, *m = vm;
181
+
182
+ for (i = 0; i < opr_sz / 4; ++i) {
183
+ d[i] = bfdotadd(a[i], n[i], m[i]);
184
+ }
185
+ clear_tail(d, opr_sz, simd_maxsz(desc));
186
+}
187
--
188
2.20.1
189
190
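A note for readers unfamiliar with the BFloat16 format used by bfdotadd() above: a bfloat16 value is simply the top 16 bits of the corresponding float32, which is why the helper only needs shifts to widen its inputs. Below is a minimal standalone sketch of the same idea using host float arithmetic (the bf16_to_f32 helper and the test values are invented for illustration, and host rounding is used rather than the round-to-odd, flush-to-zero and default-NaN behaviour that bf_status selects above):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Widen a bfloat16 bit pattern to float32: the payload is the top 16 bits. */
    static float bf16_to_f32(uint16_t bf)
    {
        uint32_t bits = (uint32_t)bf << 16;
        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main(void)
    {
        /* bfloat16 encodings: 0x3f80 = 1.0, 0x3fc0 = 1.5, 0x4000 = 2.0 */
        float sum = 10.0f;                                /* accumulator, like 'sum' in bfdotadd() */
        sum += bf16_to_f32(0x3f80) * bf16_to_f32(0x4000); /* 1.0 * 2.0 */
        sum += bf16_to_f32(0x3fc0) * bf16_to_f32(0x4000); /* 1.5 * 2.0 */
        printf("%f\n", sum);                              /* prints 15.000000 */
        return 0;
    }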
1
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
1
From: Richard Henderson <richard.henderson@linaro.org>
2
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
3
we forgot to also update the register's reset value. The effect
4
was that (a) a guest that read CPACR on reset would not see ones in
5
the RAO bits, and (b) if you did a migration before the guest did
6
a write to the CPACR then the migration would fail because the
7
destination would enforce the RAO bits and then complain that they
8
didn't match the zero value from the source.
9
2
10
Implement reset for the CPACR using a custom reset function
3
This is BFDOT for both AArch64 AdvSIMD and SVE,
11
that just calls cpacr_write(), to avoid having to duplicate
4
and VDOT.BF16 for AArch32 NEON.
12
the logic for which bits are RAO.
13
5
14
This bug would affect migration for TCG CPUs which are ARMv7
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
with VFP but without one of Neon or VFPv3.
7
Message-id: 20210525225817.400336-8-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 2 ++
12
target/arm/neon-shared.decode | 2 ++
13
target/arm/sve.decode | 3 +++
14
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++--------
15
target/arm/translate-neon.c | 9 ++++++++
16
target/arm/translate-sve.c | 12 ++++++++++
17
target/arm/vec_helper.c | 20 +++++++++++++++++
18
7 files changed, 80 insertions(+), 9 deletions(-)
16
19
17
Reported-by: Cédric Le Goater <clg@kaod.org>
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Tested-by: Cédric Le Goater <clg@kaod.org>
20
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
21
---
22
target/arm/helper.c | 10 +++++++++-
23
1 file changed, 9 insertions(+), 1 deletion(-)
24
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
22
--- a/target/arm/helper.h
28
+++ b/target/arm/helper.c
23
+++ b/target/arm/helper.h
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
30
env->cp15.cpacr_el1 = value;
25
26
DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
27
void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/neon-shared.decode
36
+++ b/target/arm/neon-shared.decode
37
@@ -XXX,XX +XXX,XX @@ VUSDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
38
vn=%vn_dp vd=%vd_dp
39
VSUDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 1 vm:4 \
40
vn=%vn_dp vd=%vd_dp
41
+VDOT_b16_scal 1111 1110 0 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
42
+ vn=%vn_dp vd=%vd_dp
43
44
%vfml_scalar_q0_rm 0:3 5:1
45
%vfml_scalar_q1_index 5:1 3:1
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve.decode
49
+++ b/target/arm/sve.decode
50
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
51
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
52
FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
53
FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
54
+
55
+### SVE2 floating-point bfloat16 dot-product (indexed)
56
+BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
57
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate-a64.c
60
+++ b/target/arm/translate-a64.c
61
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
62
return;
63
}
64
break;
65
- case 0x0f: /* SUDOT, USDOT */
66
- if (is_scalar || (size & 1) || !dc_isar_feature(aa64_i8mm, s)) {
67
+ case 0x0f:
68
+ switch (size) {
69
+ case 0: /* SUDOT */
70
+ case 2: /* USDOT */
71
+ if (is_scalar || !dc_isar_feature(aa64_i8mm, s)) {
72
+ unallocated_encoding(s);
73
+ return;
74
+ }
75
+ break;
76
+ case 1: /* BFDOT */
77
+ if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
78
+ unallocated_encoding(s);
79
+ return;
80
+ }
81
+ break;
82
+ default:
83
unallocated_encoding(s);
84
return;
85
}
86
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
87
u ? gen_helper_gvec_udot_idx_b
88
: gen_helper_gvec_sdot_idx_b);
89
return;
90
- case 0x0f: /* SUDOT, USDOT */
91
- gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
92
- extract32(insn, 23, 1)
93
- ? gen_helper_gvec_usdot_idx_b
94
- : gen_helper_gvec_sudot_idx_b);
95
- return;
96
-
97
+ case 0x0f:
98
+ switch (extract32(insn, 22, 2)) {
99
+ case 0: /* SUDOT */
100
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
101
+ gen_helper_gvec_sudot_idx_b);
102
+ return;
103
+ case 1: /* BFDOT */
104
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
105
+ gen_helper_gvec_bfdot_idx);
106
+ return;
107
+ case 2: /* USDOT */
108
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
109
+ gen_helper_gvec_usdot_idx_b);
110
+ return;
111
+ }
112
+ g_assert_not_reached();
113
case 0x11: /* FCMLA #0 */
114
case 0x13: /* FCMLA #90 */
115
case 0x15: /* FCMLA #180 */
116
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate-neon.c
119
+++ b/target/arm/translate-neon.c
120
@@ -XXX,XX +XXX,XX @@ static bool trans_VSUDOT_scalar(DisasContext *s, arg_VSUDOT_scalar *a)
121
gen_helper_gvec_sudot_idx_b);
31
}
122
}
32
123
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
124
+static bool trans_VDOT_b16_scal(DisasContext *s, arg_VDOT_b16_scal *a)
34
+{
125
+{
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
126
+ if (!dc_isar_feature(aa32_bf16, s)) {
36
+ * for our CPU features.
127
+ return false;
37
+ */
128
+ }
38
+ cpacr_write(env, ri, 0);
129
+ return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
130
+ gen_helper_gvec_bfdot_idx);
39
+}
131
+}
40
+
132
+
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
133
static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
42
bool isread)
43
{
134
{
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
135
int opr_sz;
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
136
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
137
index XXXXXXX..XXXXXXX 100644
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
138
--- a/target/arm/translate-sve.c
48
- .resetvalue = 0, .writefn = cpacr_write },
139
+++ b/target/arm/translate-sve.c
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
140
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
50
REGINFO_SENTINEL
141
}
51
};
142
return true;
52
143
}
144
+
145
+static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
146
+{
147
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
148
+ return false;
149
+ }
150
+ if (sve_access_check(s)) {
151
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot_idx,
152
+ a->rd, a->rn, a->rm, a->ra, a->index);
153
+ }
154
+ return true;
155
+}
156
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/vec_helper.c
159
+++ b/target/arm/vec_helper.c
160
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
161
}
162
clear_tail(d, opr_sz, simd_maxsz(desc));
163
}
164
+
165
+void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
166
+ void *va, uint32_t desc)
167
+{
168
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
169
+ intptr_t index = simd_data(desc);
170
+ intptr_t elements = opr_sz / 4;
171
+ intptr_t eltspersegment = MIN(16 / 4, elements);
172
+ float32 *d = vd, *a = va;
173
+ uint32_t *n = vn, *m = vm;
174
+
175
+ for (i = 0; i < elements; i += eltspersegment) {
176
+ uint32_t m_idx = m[i + H4(index)];
177
+
178
+ for (j = i; j < i + eltspersegment; j++) {
179
+ d[j] = bfdotadd(a[j], n[j], m_idx);
180
+ }
181
+ }
182
+ clear_tail(d, opr_sz, simd_maxsz(desc));
183
+}
53
--
184
--
54
2.17.1
185
2.20.1
55
186
56
187
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is BFMMLA for both AArch64 AdvSIMD and SVE,
4
and VMMLA.BF16 for AArch32 NEON.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-9-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 3 +++
12
target/arm/neon-shared.decode | 2 ++
13
target/arm/sve.decode | 6 +++--
14
target/arm/translate-a64.c | 10 +++++++++
15
target/arm/translate-neon.c | 9 ++++++++
16
target/arm/translate-sve.c | 12 ++++++++++
17
target/arm/vec_helper.c | 42 ++++++++++++++++++++++++++++++++++-
18
7 files changed, 81 insertions(+), 3 deletions(-)
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
25
DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
26
void, ptr, ptr, ptr, ptr, i32)
27
28
+DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
#include "helper-sve.h"
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/neon-shared.decode
37
+++ b/target/arm/neon-shared.decode
38
@@ -XXX,XX +XXX,XX @@ VUMMLA 1111 1100 0.10 .... .... 1100 .1.1 .... \
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
40
VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
42
+VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
44
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
46
vn=%vn_dp vd=%vd_dp size=1
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/sve.decode
50
+++ b/target/arm/sve.decode
51
@@ -XXX,XX +XXX,XX @@ SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
52
USDOT_zzzz 01000100 .. 0 ..... 011 110 ..... ..... @rda_rn_rm
53
54
### SVE2 floating point matrix multiply accumulate
55
-
56
-FMMLA 01100100 .. 1 ..... 111001 ..... ..... @rda_rn_rm
57
+{
58
+ BFMMLA 01100100 01 1 ..... 111 001 ..... ..... @rda_rn_rm_e0
59
+ FMMLA 01100100 .. 1 ..... 111 001 ..... ..... @rda_rn_rm
60
+}
61
62
### SVE2 Memory Gather Load Group
63
64
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate-a64.c
67
+++ b/target/arm/translate-a64.c
68
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
69
}
70
feature = dc_isar_feature(aa64_fcma, s);
71
break;
72
+ case 0x1d: /* BFMMLA */
73
+ if (size != MO_16 || !is_q) {
74
+ unallocated_encoding(s);
75
+ return;
76
+ }
77
+ feature = dc_isar_feature(aa64_bf16, s);
78
+ break;
79
case 0x1f: /* BFDOT */
80
switch (size) {
81
case 1:
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
83
}
84
return;
85
86
+ case 0xd: /* BFMMLA */
87
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
88
+ return;
89
case 0xf: /* BFDOT */
90
switch (size) {
91
case 1:
92
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate-neon.c
95
+++ b/target/arm/translate-neon.c
96
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSMMLA(DisasContext *s, arg_VUSMMLA *a)
97
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
98
gen_helper_gvec_usmmla_b);
99
}
100
+
101
+static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
102
+{
103
+ if (!dc_isar_feature(aa32_bf16, s)) {
104
+ return false;
105
+ }
106
+ return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
107
+ gen_helper_gvec_bfmmla);
108
+}
109
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/translate-sve.c
112
+++ b/target/arm/translate-sve.c
113
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
114
}
115
return true;
116
}
117
+
118
+static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
119
+{
120
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
121
+ return false;
122
+ }
123
+ if (sve_access_check(s)) {
124
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfmmla,
125
+ a->rd, a->rn, a->rm, a->ra, 0);
126
+ }
127
+ return true;
128
+}
129
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/target/arm/vec_helper.c
132
+++ b/target/arm/vec_helper.c
133
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
134
* Process the entire segment at once, writing back the
135
* results only after we've consumed all of the inputs.
136
*
137
- * Key to indicies by column:
138
+ * Key to indices by column:
139
* i j i j
140
*/
141
sum0 = a[H4(0 + 0)];
142
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
143
}
144
clear_tail(d, opr_sz, simd_maxsz(desc));
145
}
146
+
147
+void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
148
+{
149
+ intptr_t s, opr_sz = simd_oprsz(desc);
150
+ float32 *d = vd, *a = va;
151
+ uint32_t *n = vn, *m = vm;
152
+
153
+ for (s = 0; s < opr_sz / 4; s += 4) {
154
+ float32 sum00, sum01, sum10, sum11;
155
+
156
+ /*
157
+ * Process the entire segment at once, writing back the
158
+ * results only after we've consumed all of the inputs.
159
+ *
160
+ * Key to indices by column:
161
+ * i j i k j k
162
+ */
163
+ sum00 = a[s + H4(0 + 0)];
164
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 0)], m[s + H4(0 + 0)]);
165
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 1)], m[s + H4(0 + 1)]);
166
+
167
+ sum01 = a[s + H4(0 + 1)];
168
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 0)], m[s + H4(2 + 0)]);
169
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 1)], m[s + H4(2 + 1)]);
170
+
171
+ sum10 = a[s + H4(2 + 0)];
172
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 0)], m[s + H4(0 + 0)]);
173
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 1)], m[s + H4(0 + 1)]);
174
+
175
+ sum11 = a[s + H4(2 + 1)];
176
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 0)], m[s + H4(2 + 0)]);
177
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 1)], m[s + H4(2 + 1)]);
178
+
179
+ d[s + H4(0 + 0)] = sum00;
180
+ d[s + H4(0 + 1)] = sum01;
181
+ d[s + H4(2 + 0)] = sum10;
182
+ d[s + H4(2 + 1)] = sum11;
183
+ }
184
+ clear_tail(d, opr_sz, simd_maxsz(desc));
185
+}
186
--
187
2.20.1
188
189
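The indexing in gvec_bfmmla() above is easier to follow as plain matrix arithmetic: per 128-bit segment, Zn and Zm each hold a 2x4 matrix of bfloat16 elements, the destination holds a 2x2 matrix of float32 accumulators, and the result is A + N * transpose(M) with non-fused products. A minimal sketch under those assumptions, using host floats instead of the round-to-odd arithmetic of bfdotadd() (function and variable names are invented for the example):

    #include <stdio.h>

    /* One BFMMLA segment: C(2x2) = A(2x2) + N(2x4) * transpose(M(2x4)). */
    static void bfmmla_segment(float c[2][2], const float a[2][2],
                               const float n[2][4], const float m[2][4])
    {
        for (int i = 0; i < 2; i++) {
            for (int j = 0; j < 2; j++) {
                float sum = a[i][j];
                for (int k = 0; k < 4; k++) {
                    sum += n[i][k] * m[j][k];   /* row i of N dot row j of M */
                }
                c[i][j] = sum;
            }
        }
    }

    int main(void)
    {
        const float a[2][2] = { { 0, 0 }, { 0, 0 } };
        const float n[2][4] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 } };
        const float m[2][4] = { { 1, 0, 0, 0 }, { 0, 1, 0, 0 } };
        float c[2][2];

        bfmmla_segment(c, a, n, m);
        printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);  /* 1 2 / 5 6 */
        return 0;
    }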
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
4
and VFMA{B,T}.BF16 for AArch32 NEON.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 3 +++
12
target/arm/neon-shared.decode | 3 +++
13
target/arm/sve.decode | 3 +++
14
target/arm/translate-a64.c | 13 +++++++++----
15
target/arm/translate-neon.c | 9 +++++++++
16
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
17
target/arm/vec_helper.c | 16 ++++++++++++++++
18
7 files changed, 73 insertions(+), 4 deletions(-)
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
25
DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
26
void, ptr, ptr, ptr, ptr, i32)
27
28
+DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
30
+
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
#include "helper-sve.h"
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/neon-shared.decode
37
+++ b/target/arm/neon-shared.decode
38
@@ -XXX,XX +XXX,XX @@ VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
39
VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
40
vm=%vm_dp vn=%vn_dp vd=%vd_dp
41
42
+VFMA_b16 1111 110 0 0.11 .... .... 1000 . q:1 . 1 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
44
+
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
46
vn=%vn_dp vd=%vd_dp size=1
47
VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
48
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/sve.decode
51
+++ b/target/arm/sve.decode
52
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
53
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
54
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
55
56
+BFMLALB_zzzw 01100100 11 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
57
+BFMLALT_zzzw 01100100 11 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
58
+
59
### SVE2 floating-point bfloat16 dot-product
60
BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
61
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/translate-a64.c
65
+++ b/target/arm/translate-a64.c
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
}
68
feature = dc_isar_feature(aa64_bf16, s);
69
break;
70
- case 0x1f: /* BFDOT */
71
+ case 0x1f:
72
switch (size) {
73
- case 1:
74
+ case 1: /* BFDOT */
75
+ case 3: /* BFMLAL{B,T} */
76
feature = dc_isar_feature(aa64_bf16, s);
77
break;
78
default:
79
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
80
case 0xd: /* BFMMLA */
81
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
82
return;
83
- case 0xf: /* BFDOT */
84
+ case 0xf:
85
switch (size) {
86
- case 1:
87
+ case 1: /* BFDOT */
88
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
89
break;
90
+ case 3: /* BFMLAL{B,T} */
91
+ gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, false, is_q,
92
+ gen_helper_gvec_bfmlal);
93
+ break;
94
default:
95
g_assert_not_reached();
96
}
97
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
98
index XXXXXXX..XXXXXXX 100644
99
--- a/target/arm/translate-neon.c
100
+++ b/target/arm/translate-neon.c
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
102
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
103
gen_helper_gvec_bfmmla);
104
}
105
+
106
+static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
107
+{
108
+ if (!dc_isar_feature(aa32_bf16, s)) {
109
+ return false;
110
+ }
111
+ return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
112
+ gen_helper_gvec_bfmlal);
113
+}
114
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate-sve.c
117
+++ b/target/arm/translate-sve.c
118
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
119
}
120
return true;
121
}
122
+
123
+static bool do_BFMLAL_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
124
+{
125
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
126
+ return false;
127
+ }
128
+ if (sve_access_check(s)) {
129
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
130
+ unsigned vsz = vec_full_reg_size(s);
131
+
132
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
133
+ vec_full_reg_offset(s, a->rn),
134
+ vec_full_reg_offset(s, a->rm),
135
+ vec_full_reg_offset(s, a->ra),
136
+ status, vsz, vsz, sel,
137
+ gen_helper_gvec_bfmlal);
138
+ tcg_temp_free_ptr(status);
139
+ }
140
+ return true;
141
+}
142
+
143
+static bool trans_BFMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
144
+{
145
+ return do_BFMLAL_zzzw(s, a, false);
146
+}
147
+
148
+static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
149
+{
150
+ return do_BFMLAL_zzzw(s, a, true);
151
+}
152
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/vec_helper.c
155
+++ b/target/arm/vec_helper.c
156
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
157
}
158
clear_tail(d, opr_sz, simd_maxsz(desc));
159
}
160
+
161
+void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
162
+ void *stat, uint32_t desc)
163
+{
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
165
+ intptr_t sel = simd_data(desc);
166
+ float32 *d = vd, *a = va;
167
+ bfloat16 *n = vn, *m = vm;
168
+
169
+ for (i = 0; i < opr_sz / 4; ++i) {
170
+ float32 nn = n[H2(i * 2 + sel)] << 16;
171
+ float32 mm = m[H2(i * 2 + sel)] << 16;
172
+ d[H4(i)] = float32_muladd(nn, mm, a[H4(i)], 0, stat);
173
+ }
174
+ clear_tail(d, opr_sz, simd_maxsz(desc));
175
+}
176
--
177
2.20.1
178
179
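The sel argument above (0 for BFMLALB / VFMAB.BF16, 1 for BFMLALT / VFMAT.BF16) picks the even or odd bfloat16 element of each 32-bit lane. A minimal sketch of that selection with host floats and a non-fused multiply-add (the real helper uses float32_muladd() under the guest FP status; names and values here are invented):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float bf16_to_f32(uint16_t bf)
    {
        uint32_t bits = (uint32_t)bf << 16;
        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }

    /* d[i] += widen(n[2*i + sel]) * widen(m[2*i + sel]) for each float32 lane i. */
    static void bfmlal(float *d, const uint16_t *n, const uint16_t *m,
                       int lanes, int sel)
    {
        for (int i = 0; i < lanes; i++) {
            d[i] += bf16_to_f32(n[2 * i + sel]) * bf16_to_f32(m[2 * i + sel]);
        }
    }

    int main(void)
    {
        /* Two float32 lanes -> four bfloat16 elements per source register. */
        uint16_t n[4] = { 0x3f80, 0x4000, 0x4040, 0x4080 };  /* 1.0, 2.0, 3.0, 4.0 */
        uint16_t m[4] = { 0x4000, 0x4000, 0x4000, 0x4000 };  /* all 2.0 */
        float d[2] = { 0, 0 };

        bfmlal(d, n, m, 2, 1);           /* sel=1, the "top" elements: 2.0 and 4.0 */
        printf("%g %g\n", d[0], d[1]);   /* prints 4 8 */
        return 0;
    }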
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
4
and VFMA{B,T}.BF16 for AArch32 NEON.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 2 ++
12
target/arm/neon-shared.decode | 2 ++
13
target/arm/sve.decode | 2 ++
14
target/arm/translate-a64.c | 15 ++++++++++++++-
15
target/arm/translate-neon.c | 10 ++++++++++
16
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
17
target/arm/vec_helper.c | 22 ++++++++++++++++++++++
18
7 files changed, 82 insertions(+), 1 deletion(-)
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
25
26
DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
27
void, ptr, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
30
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/neon-shared.decode
36
+++ b/target/arm/neon-shared.decode
37
@@ -XXX,XX +XXX,XX @@ VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 0 . 1 index:1 ... \
38
rm=%vfml_scalar_q0_rm vn=%vn_sp vd=%vd_dp q=0
39
VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 1 . 1 . rm:3 \
40
index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp q=1
41
+VFMA_b16_scal 1111 1110 0.11 .... .... 1000 . q:1 . 1 . vm:3 \
42
+ index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp
43
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/sve.decode
46
+++ b/target/arm/sve.decode
47
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
48
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
49
FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
50
FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
51
+BFMLALB_zzxw 01100100 11 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
52
+BFMLALT_zzxw 01100100 11 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
53
54
### SVE2 floating-point bfloat16 dot-product (indexed)
55
BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
56
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/translate-a64.c
59
+++ b/target/arm/translate-a64.c
60
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
61
unallocated_encoding(s);
62
return;
63
}
64
+ size = MO_32;
65
break;
66
case 1: /* BFDOT */
67
if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
68
unallocated_encoding(s);
69
return;
70
}
71
+ size = MO_32;
72
+ break;
73
+ case 3: /* BFMLAL{B,T} */
74
+ if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
75
+ unallocated_encoding(s);
76
+ return;
77
+ }
78
+ /* can't set is_fp without other incorrect size checks */
79
+ size = MO_16;
80
break;
81
default:
82
unallocated_encoding(s);
83
return;
84
}
85
- size = MO_32;
86
break;
87
case 0x11: /* FCMLA #0 */
88
case 0x13: /* FCMLA #90 */
89
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
90
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
91
gen_helper_gvec_usdot_idx_b);
92
return;
93
+ case 3: /* BFMLAL{B,T} */
94
+ gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, 0, (index << 1) | is_q,
95
+ gen_helper_gvec_bfmlal_idx);
96
+ return;
97
}
98
g_assert_not_reached();
99
case 0x11: /* FCMLA #0 */
100
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/translate-neon.c
103
+++ b/target/arm/translate-neon.c
104
@@ -XXX,XX +XXX,XX @@ static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
105
return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
106
gen_helper_gvec_bfmlal);
107
}
108
+
109
+static bool trans_VFMA_b16_scal(DisasContext *s, arg_VFMA_b16_scal *a)
110
+{
111
+ if (!dc_isar_feature(aa32_bf16, s)) {
112
+ return false;
113
+ }
114
+ return do_neon_ddda_fpst(s, 6, a->vd, a->vn, a->vm,
115
+ (a->index << 1) | a->q, FPST_STD,
116
+ gen_helper_gvec_bfmlal_idx);
117
+}
118
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/arm/translate-sve.c
121
+++ b/target/arm/translate-sve.c
122
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
123
{
124
return do_BFMLAL_zzzw(s, a, true);
125
}
126
+
127
+static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sel)
128
+{
129
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
130
+ return false;
131
+ }
132
+ if (sve_access_check(s)) {
133
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
134
+ unsigned vsz = vec_full_reg_size(s);
135
+
136
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
137
+ vec_full_reg_offset(s, a->rn),
138
+ vec_full_reg_offset(s, a->rm),
139
+ vec_full_reg_offset(s, a->ra),
140
+ status, vsz, vsz, (a->index << 1) | sel,
141
+ gen_helper_gvec_bfmlal_idx);
142
+ tcg_temp_free_ptr(status);
143
+ }
144
+ return true;
145
+}
146
+
147
+static bool trans_BFMLALB_zzxw(DisasContext *s, arg_rrxr_esz *a)
148
+{
149
+ return do_BFMLAL_zzxw(s, a, false);
150
+}
151
+
152
+static bool trans_BFMLALT_zzxw(DisasContext *s, arg_rrxr_esz *a)
153
+{
154
+ return do_BFMLAL_zzxw(s, a, true);
155
+}
156
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/vec_helper.c
159
+++ b/target/arm/vec_helper.c
160
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
161
}
162
clear_tail(d, opr_sz, simd_maxsz(desc));
163
}
164
+
165
+void HELPER(gvec_bfmlal_idx)(void *vd, void *vn, void *vm,
166
+ void *va, void *stat, uint32_t desc)
167
+{
168
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
169
+ intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1);
170
+ intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 1, 3);
171
+ intptr_t elements = opr_sz / 4;
172
+ intptr_t eltspersegment = MIN(16 / 4, elements);
173
+ float32 *d = vd, *a = va;
174
+ bfloat16 *n = vn, *m = vm;
175
+
176
+ for (i = 0; i < elements; i += eltspersegment) {
177
+ float32 m_idx = m[H2(2 * i + index)] << 16;
178
+
179
+ for (j = i; j < i + eltspersegment; j++) {
180
+ float32 n_j = n[H2(2 * j + sel)] << 16;
181
+ d[H4(j)] = float32_muladd(n_j, m_idx, a[H4(j)], 0, stat);
182
+ }
183
+ }
184
+ clear_tail(d, opr_sz, simd_maxsz(desc));
185
+}
186
--
187
2.20.1
188
189
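In the indexed form above, index picks one of the eight bfloat16 elements in each 128-bit segment of Zm and that single value is reused for all four float32 lanes of the segment, while sel still chooses the even or odd element of each lane of Zn. A rough sketch of that loop structure (inputs already widened to host floats; the H2/H4 byte-order adjustments and guest FP status are omitted):

    #include <stdio.h>

    /* One 128-bit segment: 4 float32 accumulators, 8 bfloat16 source elements. */
    static void bfmlal_idx_segment(float d[4], const float n[8], const float m[8],
                                   int index, int sel)
    {
        float m_idx = m[index];              /* one element, shared by the whole segment */

        for (int j = 0; j < 4; j++) {
            d[j] += n[2 * j + sel] * m_idx;  /* per-lane element from Zn */
        }
    }

    int main(void)
    {
        float d[4] = { 0, 0, 0, 0 };
        float n[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        float m[8] = { 9, 9, 9, 9, 9, 0.5f, 9, 9 };

        bfmlal_idx_segment(d, n, m, 5, 0);   /* index 5 -> 0.5, sel 0 -> even n elements */
        printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);   /* 0.5 1.5 2.5 3.5 */
        return 0;
    }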
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525225817.400336-12-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
linux-user/elfload.c | 2 ++
9
1 file changed, 2 insertions(+)
10
11
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/linux-user/elfload.c
14
+++ b/linux-user/elfload.c
15
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
16
GET_FEATURE_ID(aa64_sve_i8mm, ARM_HWCAP2_A64_SVEI8MM);
17
GET_FEATURE_ID(aa64_sve_f32mm, ARM_HWCAP2_A64_SVEF32MM);
18
GET_FEATURE_ID(aa64_sve_f64mm, ARM_HWCAP2_A64_SVEF64MM);
19
+ GET_FEATURE_ID(aa64_sve_bf16, ARM_HWCAP2_A64_SVEBF16);
20
GET_FEATURE_ID(aa64_i8mm, ARM_HWCAP2_A64_I8MM);
21
+ GET_FEATURE_ID(aa64_bf16, ARM_HWCAP2_A64_BF16);
22
GET_FEATURE_ID(aa64_rndr, ARM_HWCAP2_A64_RNG);
23
GET_FEATURE_ID(aa64_bti, ARM_HWCAP2_A64_BTI);
24
GET_FEATURE_ID(aa64_mte, ARM_HWCAP2_A64_MTE);
25
--
26
2.20.1
27
28
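Once these hwcap bits are exposed, guest userspace can test for the new features in the usual way. A small example of such a check (it uses the HWCAP2_BF16 and HWCAP2_SVEBF16 macros from the guest kernel's <asm/hwcap.h>, so it only builds against sufficiently new headers; the #ifdef guards cover older ones):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

    #ifdef HWCAP2_BF16
        printf("BF16:    %s\n", (hwcap2 & HWCAP2_BF16) ? "yes" : "no");
    #endif
    #ifdef HWCAP2_SVEBF16
        printf("SVEBF16: %s\n", (hwcap2 & HWCAP2_SVEBF16) ? "yes" : "no");
    #endif
        return 0;
    }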
1
From: Jan Kiszka <jan.kiszka@siemens.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
There was a nasty flip in identifying which register group an access is
3
Disable BF16 again for !have_neon and !have_vfp during realize.
4
targeting. The issue caused spuriously raised priorities of the guest
5
when handing CPUs over in the Jailhouse hypervisor.
6
4
7
Cc: qemu-stable@nongnu.org
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
6
Message-id: 20210525225817.400336-13-richard.henderson@linaro.org
9
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
10
target/arm/cpu.c | 3 +++
14
1 file changed, 6 insertions(+), 6 deletions(-)
11
target/arm/cpu64.c | 3 +++
12
target/arm/cpu_tcg.c | 1 +
13
3 files changed, 7 insertions(+)
15
14
16
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gicv3_cpuif.c
17
--- a/target/arm/cpu.c
19
+++ b/hw/intc/arm_gicv3_cpuif.c
18
+++ b/target/arm/cpu.c
20
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
21
{
20
22
GICv3CPUState *cs = icc_cs_from_env(env);
21
u = cpu->isar.id_isar6;
23
int regno = ri->opc2 & 3;
22
u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
24
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
23
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
25
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
24
cpu->isar.id_isar6 = u;
26
uint64_t value = cs->ich_apr[grp][regno];
25
27
26
u = cpu->isar.mvfr0;
28
trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
29
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
30
{
29
t = cpu->isar.id_aa64isar1;
31
GICv3CPUState *cs = icc_cs_from_env(env);
30
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 0);
32
int regno = ri->opc2 & 3;
31
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 0);
33
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
32
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 0);
34
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
33
cpu->isar.id_aa64isar1 = t;
35
34
36
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
35
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
37
36
u = cpu->isar.id_isar6;
38
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
37
u = FIELD_DP32(u, ID_ISAR6, DP, 0);
39
uint64_t value;
38
u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
40
39
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
41
int regno = ri->opc2 & 3;
40
u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
42
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
41
cpu->isar.id_isar6 = u;
43
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
42
44
43
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
45
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
44
index XXXXXXX..XXXXXXX 100644
46
return icv_ap_read(env, ri);
45
--- a/target/arm/cpu64.c
47
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
+++ b/target/arm/cpu64.c
48
GICv3CPUState *cs = icc_cs_from_env(env);
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
49
48
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
50
int regno = ri->opc2 & 3;
49
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
51
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
50
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
52
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
51
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
53
52
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
54
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
53
t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
55
icv_ap_write(env, ri, value);
54
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
56
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
55
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
57
{
56
t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
58
GICv3CPUState *cs = icc_cs_from_env(env);
57
t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
59
int regno = ri->opc2 & 3;
58
t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
60
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
59
+ t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
61
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
60
t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
62
uint64_t value;
61
t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
63
62
t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
64
value = cs->ich_apr[grp][regno];
63
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
65
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
64
u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
66
{
65
u = FIELD_DP32(u, ID_ISAR6, SB, 1);
67
GICv3CPUState *cs = icc_cs_from_env(env);
66
u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
68
int regno = ri->opc2 & 3;
67
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
69
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
68
u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
70
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
69
cpu->isar.id_isar6 = u;
71
70
72
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
71
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/cpu_tcg.c
74
+++ b/target/arm/cpu_tcg.c
75
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
76
t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
77
t = FIELD_DP32(t, ID_ISAR6, SB, 1);
78
t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
79
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
80
t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
81
cpu->isar.id_isar6 = t;
73
82
74
--
83
--
75
2.17.1
84
2.20.1
76
85
77
86
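For readers not familiar with the FIELD_DP32()/FIELD_DP64() helpers used above: they return their first argument with the named ID-register field replaced by the given value. A rough standalone equivalent (the 4-bit field at bits [23:20] is assumed here to be ID_ISAR6.BF16; check the field definitions in target/arm/cpu.h rather than relying on this sketch):

    #include <stdint.h>
    #include <stdio.h>

    /* Rough stand-in for FIELD_DP32(): deposit 'val' into a bit field of 'reg'. */
    static uint32_t deposit_field(uint32_t reg, unsigned shift, unsigned width,
                                  uint32_t val)
    {
        uint32_t mask = ((1u << width) - 1) << shift;
        return (reg & ~mask) | ((val << shift) & mask);
    }

    int main(void)
    {
        uint32_t id_isar6 = 0;

        /* Assumed: ID_ISAR6.BF16 is the 4-bit field at bits [23:20]. */
        id_isar6 = deposit_field(id_isar6, 20, 4, 1);
        printf("ID_ISAR6 = 0x%08x\n", id_isar6);   /* 0x00100000 */
        return 0;
    }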
1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
1
From: Alexander Graf <agraf@csgraf.de>
2
the new devices they use.
2
3
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
7
8
This patch moves assert_hvf_ok() and introduces generic build infrastructure.
9
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-2-agraf@csgraf.de
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
6
---
15
---
7
MAINTAINERS | 9 +++++++--
16
include/sysemu/hvf_int.h | 18 +++++++++++++++
8
1 file changed, 7 insertions(+), 2 deletions(-)
17
accel/hvf/hvf-all.c | 47 ++++++++++++++++++++++++++++++++++++++++
9
18
target/i386/hvf/hvf.c | 33 +---------------------------
19
MAINTAINERS | 8 +++++++
20
accel/hvf/meson.build | 6 +++++
21
accel/meson.build | 1 +
22
6 files changed, 81 insertions(+), 32 deletions(-)
23
create mode 100644 include/sysemu/hvf_int.h
24
create mode 100644 accel/hvf/hvf-all.c
25
create mode 100644 accel/hvf/meson.build
26
27
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
28
new file mode 100644
29
index XXXXXXX..XXXXXXX
30
--- /dev/null
31
+++ b/include/sysemu/hvf_int.h
32
@@ -XXX,XX +XXX,XX @@
33
+/*
34
+ * QEMU Hypervisor.framework (HVF) support
35
+ *
36
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
37
+ * See the COPYING file in the top-level directory.
38
+ *
39
+ */
40
+
41
+/* header to be included in HVF-specific code */
42
+
43
+#ifndef HVF_INT_H
44
+#define HVF_INT_H
45
+
46
+#include <Hypervisor/hv.h>
47
+
48
+void assert_hvf_ok(hv_return_t ret);
49
+
50
+#endif
51
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
52
new file mode 100644
53
index XXXXXXX..XXXXXXX
54
--- /dev/null
55
+++ b/accel/hvf/hvf-all.c
56
@@ -XXX,XX +XXX,XX @@
57
+/*
58
+ * QEMU Hypervisor.framework support
59
+ *
60
+ * This work is licensed under the terms of the GNU GPL, version 2. See
61
+ * the COPYING file in the top-level directory.
62
+ *
63
+ * Contributions after 2012-01-13 are licensed under the terms of the
64
+ * GNU GPL, version 2 or (at your option) any later version.
65
+ */
66
+
67
+#include "qemu/osdep.h"
68
+#include "qemu-common.h"
69
+#include "qemu/error-report.h"
70
+#include "sysemu/hvf.h"
71
+#include "sysemu/hvf_int.h"
72
+
73
+void assert_hvf_ok(hv_return_t ret)
74
+{
75
+ if (ret == HV_SUCCESS) {
76
+ return;
77
+ }
78
+
79
+ switch (ret) {
80
+ case HV_ERROR:
81
+ error_report("Error: HV_ERROR");
82
+ break;
83
+ case HV_BUSY:
84
+ error_report("Error: HV_BUSY");
85
+ break;
86
+ case HV_BAD_ARGUMENT:
87
+ error_report("Error: HV_BAD_ARGUMENT");
88
+ break;
89
+ case HV_NO_RESOURCES:
90
+ error_report("Error: HV_NO_RESOURCES");
91
+ break;
92
+ case HV_NO_DEVICE:
93
+ error_report("Error: HV_NO_DEVICE");
94
+ break;
95
+ case HV_UNSUPPORTED:
96
+ error_report("Error: HV_UNSUPPORTED");
97
+ break;
98
+ default:
99
+ error_report("Unknown Error");
100
+ }
101
+
102
+ abort();
103
+}
104
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/i386/hvf/hvf.c
107
+++ b/target/i386/hvf/hvf.c
108
@@ -XXX,XX +XXX,XX @@
109
#include "qemu/error-report.h"
110
111
#include "sysemu/hvf.h"
112
+#include "sysemu/hvf_int.h"
113
#include "sysemu/runstate.h"
114
#include "hvf-i386.h"
115
#include "vmcs.h"
116
@@ -XXX,XX +XXX,XX @@
117
118
HVFState *hvf_state;
119
120
-static void assert_hvf_ok(hv_return_t ret)
121
-{
122
- if (ret == HV_SUCCESS) {
123
- return;
124
- }
125
-
126
- switch (ret) {
127
- case HV_ERROR:
128
- error_report("Error: HV_ERROR");
129
- break;
130
- case HV_BUSY:
131
- error_report("Error: HV_BUSY");
132
- break;
133
- case HV_BAD_ARGUMENT:
134
- error_report("Error: HV_BAD_ARGUMENT");
135
- break;
136
- case HV_NO_RESOURCES:
137
- error_report("Error: HV_NO_RESOURCES");
138
- break;
139
- case HV_NO_DEVICE:
140
- error_report("Error: HV_NO_DEVICE");
141
- break;
142
- case HV_UNSUPPORTED:
143
- error_report("Error: HV_UNSUPPORTED");
144
- break;
145
- default:
146
- error_report("Unknown Error");
147
- }
148
-
149
- abort();
150
-}
151
-
152
/* Memory slots */
153
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
154
{
10
diff --git a/MAINTAINERS b/MAINTAINERS
155
diff --git a/MAINTAINERS b/MAINTAINERS
11
index XXXXXXX..XXXXXXX 100644
156
index XXXXXXX..XXXXXXX 100644
12
--- a/MAINTAINERS
157
--- a/MAINTAINERS
13
+++ b/MAINTAINERS
158
+++ b/MAINTAINERS
14
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
159
@@ -XXX,XX +XXX,XX @@ M: Roman Bolshakov <r.bolshakov@yadro.com>
15
F: include/hw/timer/cmsdk-apb-timer.h
160
W: https://wiki.qemu.org/Features/HVF
16
F: hw/char/cmsdk-apb-uart.c
17
F: include/hw/char/cmsdk-apb-uart.h
18
+F: hw/misc/tz-ppc.c
19
+F: include/hw/misc/tz-ppc.h
20
21
ARM cores
22
M: Peter Maydell <peter.maydell@linaro.org>
23
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
24
L: qemu-arm@nongnu.org
25
S: Maintained
161
S: Maintained
26
F: hw/arm/mps2.c
162
F: target/i386/hvf/
27
-F: hw/misc/mps2-scc.c
163
+
28
-F: include/hw/misc/mps2-scc.h
164
+HVF
29
+F: hw/arm/mps2-tz.c
165
+M: Cameron Esfahani <dirty@apple.com>
30
+F: hw/misc/mps2-*.c
166
+M: Roman Bolshakov <r.bolshakov@yadro.com>
31
+F: include/hw/misc/mps2-*.h
167
+W: https://wiki.qemu.org/Features/HVF
32
+F: hw/arm/iotkit.c
168
+S: Maintained
33
+F: include/hw/arm/iotkit.h
169
+F: accel/hvf/
34
170
F: include/sysemu/hvf.h
35
Musicpal
171
+F: include/sysemu/hvf_int.h
36
M: Jan Kiszka <jan.kiszka@web.de>
172
173
WHPX CPUs
174
M: Sunil Muthuswamy <sunilmut@microsoft.com>
175
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
176
new file mode 100644
177
index XXXXXXX..XXXXXXX
178
--- /dev/null
179
+++ b/accel/hvf/meson.build
180
@@ -XXX,XX +XXX,XX @@
181
+hvf_ss = ss.source_set()
182
+hvf_ss.add(files(
183
+ 'hvf-all.c',
184
+))
185
+
186
+specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
187
diff --git a/accel/meson.build b/accel/meson.build
188
index XXXXXXX..XXXXXXX 100644
189
--- a/accel/meson.build
190
+++ b/accel/meson.build
191
@@ -XXX,XX +XXX,XX @@ specific_ss.add(files('accel-common.c'))
192
softmmu_ss.add(files('accel-softmmu.c'))
193
user_ss.add(files('accel-user.c'))
194
195
+subdir('hvf')
196
subdir('qtest')
197
subdir('kvm')
198
subdir('tcg')
37
--
199
--
38
2.17.1
200
2.20.1
39
201
40
202
1
From: Alexander Graf <agraf@csgraf.de>
1
2
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
7
8
This patch moves the vCPU thread loop over.
9
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-3-agraf@csgraf.de
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
{target/i386 => accel}/hvf/hvf-accel-ops.h | 0
17
{target/i386 => accel}/hvf/hvf-accel-ops.c | 0
18
target/i386/hvf/x86hvf.c | 2 +-
19
accel/hvf/meson.build | 1 +
20
target/i386/hvf/meson.build | 1 -
21
5 files changed, 2 insertions(+), 2 deletions(-)
22
rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)
23
rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)
24
25
diff --git a/target/i386/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
26
similarity index 100%
27
rename from target/i386/hvf/hvf-accel-ops.h
28
rename to accel/hvf/hvf-accel-ops.h
29
diff --git a/target/i386/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
30
similarity index 100%
31
rename from target/i386/hvf/hvf-accel-ops.c
32
rename to accel/hvf/hvf-accel-ops.c
33
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/i386/hvf/x86hvf.c
36
+++ b/target/i386/hvf/x86hvf.c
37
@@ -XXX,XX +XXX,XX @@
38
#include <Hypervisor/hv.h>
39
#include <Hypervisor/hv_vmx.h>
40
41
-#include "hvf-accel-ops.h"
42
+#include "accel/hvf/hvf-accel-ops.h"
43
44
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
45
SegmentCache *qseg, bool is_tr)
46
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
47
index XXXXXXX..XXXXXXX 100644
48
--- a/accel/hvf/meson.build
49
+++ b/accel/hvf/meson.build
50
@@ -XXX,XX +XXX,XX @@
51
hvf_ss = ss.source_set()
52
hvf_ss.add(files(
53
'hvf-all.c',
54
+ 'hvf-accel-ops.c',
55
))
56
57
specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
58
diff --git a/target/i386/hvf/meson.build b/target/i386/hvf/meson.build
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/i386/hvf/meson.build
61
+++ b/target/i386/hvf/meson.build
62
@@ -XXX,XX +XXX,XX @@
63
i386_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
64
'hvf.c',
65
- 'hvf-accel-ops.c',
66
'x86.c',
67
'x86_cpuid.c',
68
'x86_decode.c',
69
--
70
2.20.1
71
72
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Alexander Graf <agraf@csgraf.de>
2
add MemTxAttrs as an argument to address_space_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
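For callers the change is mechanical; a sketch of a typical call-site update (the surrounding condition is invented for illustration, only the function and macro names come from the patch):

    /* Before this patch: */
    if (!address_space_access_valid(as, addr, len, is_write)) {
        return false;
    }

    /* After this patch: pass the attributes you have, or the catch-all value. */
    if (!address_space_access_valid(as, addr, len, is_write,
                                    MEMTXATTRS_UNSPECIFIED)) {
        return false;
    }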
5
2
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
7
8
This patch moves CPU and memory operations over. While at it, make sure
9
the code is consumable on non-i386 systems.
10
11
Signed-off-by: Alexander Graf <agraf@csgraf.de>
12
Reviewed-by: Sergio Lopez <slp@redhat.com>
13
Message-id: 20210519202253.76782-4-agraf@csgraf.de
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
10
---
16
---
11
include/exec/memory.h | 4 +++-
17
include/sysemu/hvf_int.h | 4 +
12
include/sysemu/dma.h | 3 ++-
18
target/i386/hvf/hvf-i386.h | 2 -
13
exec.c | 3 ++-
19
target/i386/hvf/x86hvf.h | 2 -
14
target/s390x/diag.c | 6 ++++--
20
accel/hvf/hvf-accel-ops.c | 308 ++++++++++++++++++++++++++++++++++++-
15
target/s390x/excp_helper.c | 3 ++-
21
target/i386/hvf/hvf.c | 302 ------------------------------------
16
target/s390x/mmu_helper.c | 3 ++-
22
5 files changed, 311 insertions(+), 307 deletions(-)
17
target/s390x/sigp.c | 3 ++-
18
7 files changed, 17 insertions(+), 8 deletions(-)
19
23
20
diff --git a/include/exec/memory.h b/include/exec/memory.h
24
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
21
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/memory.h
26
--- a/include/sysemu/hvf_int.h
23
+++ b/include/exec/memory.h
27
+++ b/include/sysemu/hvf_int.h
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
28
@@ -XXX,XX +XXX,XX @@
25
* @addr: address within that address space
29
26
* @len: length of the area to be checked
30
#include <Hypervisor/hv.h>
27
* @is_write: indicates the transfer direction
31
28
+ * @attrs: memory attributes
32
+void hvf_set_phys_mem(MemoryRegionSection *, bool);
29
*/
33
void assert_hvf_ok(hv_return_t ret);
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
34
+hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
35
+int hvf_put_registers(CPUState *);
32
+ bool is_write, MemTxAttrs attrs);
36
+int hvf_get_registers(CPUState *);
33
37
34
/* address_space_map: map a physical memory region into a host virtual address
38
#endif
35
*
39
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
37
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
38
--- a/include/sysemu/dma.h
41
--- a/target/i386/hvf/hvf-i386.h
39
+++ b/include/sysemu/dma.h
42
+++ b/target/i386/hvf/hvf-i386.h
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
43
@@ -XXX,XX +XXX,XX @@ struct HVFState {
41
DMADirection dir)
44
};
45
extern HVFState *hvf_state;
46
47
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
48
void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
49
-hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
50
51
#ifdef NEED_CPU_H
52
/* Functions exported to host specific mode */
53
diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/i386/hvf/x86hvf.h
56
+++ b/target/i386/hvf/x86hvf.h
57
@@ -XXX,XX +XXX,XX @@
58
#include "x86_descr.h"
59
60
int hvf_process_events(CPUState *);
61
-int hvf_put_registers(CPUState *);
62
-int hvf_get_registers(CPUState *);
63
bool hvf_inject_interrupts(CPUState *);
64
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
65
SegmentCache *qseg, bool is_tr);
66
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/accel/hvf/hvf-accel-ops.c
69
+++ b/accel/hvf/hvf-accel-ops.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "qemu/osdep.h"
72
#include "qemu/error-report.h"
73
#include "qemu/main-loop.h"
74
+#include "exec/address-spaces.h"
75
+#include "exec/exec-all.h"
76
+#include "sysemu/cpus.h"
77
#include "sysemu/hvf.h"
78
+#include "sysemu/hvf_int.h"
79
#include "sysemu/runstate.h"
80
-#include "target/i386/cpu.h"
81
#include "qemu/guest-random.h"
82
83
#include "hvf-accel-ops.h"
84
85
+HVFState *hvf_state;
86
+
87
+/* Memory slots */
88
+
89
+hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
90
+{
91
+ hvf_slot *slot;
92
+ int x;
93
+ for (x = 0; x < hvf_state->num_slots; ++x) {
94
+ slot = &hvf_state->slots[x];
95
+ if (slot->size && start < (slot->start + slot->size) &&
96
+ (start + size) > slot->start) {
97
+ return slot;
98
+ }
99
+ }
100
+ return NULL;
101
+}
102
+
103
+struct mac_slot {
104
+ int present;
105
+ uint64_t size;
106
+ uint64_t gpa_start;
107
+ uint64_t gva;
108
+};
109
+
110
+struct mac_slot mac_slots[32];
111
+
112
+static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
113
+{
114
+ struct mac_slot *macslot;
115
+ hv_return_t ret;
116
+
117
+ macslot = &mac_slots[slot->slot_id];
118
+
119
+ if (macslot->present) {
120
+ if (macslot->size != slot->size) {
121
+ macslot->present = 0;
122
+ ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
123
+ assert_hvf_ok(ret);
124
+ }
125
+ }
126
+
127
+ if (!slot->size) {
128
+ return 0;
129
+ }
130
+
131
+ macslot->present = 1;
132
+ macslot->gpa_start = slot->start;
133
+ macslot->size = slot->size;
134
+ ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
135
+ assert_hvf_ok(ret);
136
+ return 0;
137
+}
138
+
139
+void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
140
+{
141
+ hvf_slot *mem;
142
+ MemoryRegion *area = section->mr;
143
+ bool writeable = !area->readonly && !area->rom_device;
144
+ hv_memory_flags_t flags;
145
+
146
+ if (!memory_region_is_ram(area)) {
147
+ if (writeable) {
148
+ return;
149
+ } else if (!memory_region_is_romd(area)) {
150
+ /*
151
+ * If the memory device is not in romd_mode, then we actually want
152
+ * to remove the hvf memory slot so all accesses will trap.
153
+ */
154
+ add = false;
155
+ }
156
+ }
157
+
158
+ mem = hvf_find_overlap_slot(
159
+ section->offset_within_address_space,
160
+ int128_get64(section->size));
161
+
162
+ if (mem && add) {
163
+ if (mem->size == int128_get64(section->size) &&
164
+ mem->start == section->offset_within_address_space &&
165
+ mem->mem == (memory_region_get_ram_ptr(area) +
166
+ section->offset_within_region)) {
167
+ return; /* Same region was attempted to register, go away. */
168
+ }
169
+ }
170
+
171
+ /* Region needs to be reset. set the size to 0 and remap it. */
172
+ if (mem) {
173
+ mem->size = 0;
174
+ if (do_hvf_set_memory(mem, 0)) {
175
+ error_report("Failed to reset overlapping slot");
176
+ abort();
177
+ }
178
+ }
179
+
180
+ if (!add) {
181
+ return;
182
+ }
183
+
184
+ if (area->readonly ||
185
+ (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
186
+ flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
187
+ } else {
188
+ flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
189
+ }
190
+
191
+ /* Now make a new slot. */
192
+ int x;
193
+
194
+ for (x = 0; x < hvf_state->num_slots; ++x) {
195
+ mem = &hvf_state->slots[x];
196
+ if (!mem->size) {
197
+ break;
198
+ }
199
+ }
200
+
201
+ if (x == hvf_state->num_slots) {
202
+ error_report("No free slots");
203
+ abort();
204
+ }
205
+
206
+ mem->size = int128_get64(section->size);
207
+ mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
208
+ mem->start = section->offset_within_address_space;
209
+ mem->region = area;
210
+
211
+ if (do_hvf_set_memory(mem, flags)) {
212
+ error_report("Error registering new memory slot");
213
+ abort();
214
+ }
215
+}
216
+
217
+static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
218
+{
219
+ if (!cpu->vcpu_dirty) {
220
+ hvf_get_registers(cpu);
221
+ cpu->vcpu_dirty = true;
222
+ }
223
+}
224
+
225
+void hvf_cpu_synchronize_state(CPUState *cpu)
226
+{
227
+ if (!cpu->vcpu_dirty) {
228
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
229
+ }
230
+}
231
+
232
+static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
233
+ run_on_cpu_data arg)
234
+{
235
+ hvf_put_registers(cpu);
236
+ cpu->vcpu_dirty = false;
237
+}
238
+
239
+void hvf_cpu_synchronize_post_reset(CPUState *cpu)
240
+{
241
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
242
+}
243
+
244
+static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
245
+ run_on_cpu_data arg)
246
+{
247
+ hvf_put_registers(cpu);
248
+ cpu->vcpu_dirty = false;
249
+}
250
+
251
+void hvf_cpu_synchronize_post_init(CPUState *cpu)
252
+{
253
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
254
+}
255
+
256
+static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
257
+ run_on_cpu_data arg)
258
+{
259
+ cpu->vcpu_dirty = true;
260
+}
261
+
262
+void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
263
+{
264
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
265
+}
266
+
267
+static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
268
+{
269
+ hvf_slot *slot;
270
+
271
+ slot = hvf_find_overlap_slot(
272
+ section->offset_within_address_space,
273
+ int128_get64(section->size));
274
+
275
+ /* protect region against writes; begin tracking it */
276
+ if (on) {
277
+ slot->flags |= HVF_SLOT_LOG;
278
+ hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
279
+ HV_MEMORY_READ);
280
+ /* stop tracking region*/
281
+ } else {
282
+ slot->flags &= ~HVF_SLOT_LOG;
283
+ hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
284
+ HV_MEMORY_READ | HV_MEMORY_WRITE);
285
+ }
286
+}
287
+
288
+static void hvf_log_start(MemoryListener *listener,
289
+ MemoryRegionSection *section, int old, int new)
290
+{
291
+ if (old != 0) {
292
+ return;
293
+ }
294
+
295
+ hvf_set_dirty_tracking(section, 1);
296
+}
297
+
298
+static void hvf_log_stop(MemoryListener *listener,
299
+ MemoryRegionSection *section, int old, int new)
300
+{
301
+ if (new != 0) {
302
+ return;
303
+ }
304
+
305
+ hvf_set_dirty_tracking(section, 0);
306
+}
307
+
308
+static void hvf_log_sync(MemoryListener *listener,
309
+ MemoryRegionSection *section)
310
+{
311
+ /*
312
+ * sync of dirty pages is handled elsewhere; just make sure we keep
313
+ * tracking the region.
314
+ */
315
+ hvf_set_dirty_tracking(section, 1);
316
+}
317
+
318
+static void hvf_region_add(MemoryListener *listener,
319
+ MemoryRegionSection *section)
320
+{
321
+ hvf_set_phys_mem(section, true);
322
+}
323
+
324
+static void hvf_region_del(MemoryListener *listener,
325
+ MemoryRegionSection *section)
326
+{
327
+ hvf_set_phys_mem(section, false);
328
+}
329
+
330
+static MemoryListener hvf_memory_listener = {
331
+ .priority = 10,
332
+ .region_add = hvf_region_add,
333
+ .region_del = hvf_region_del,
334
+ .log_start = hvf_log_start,
335
+ .log_stop = hvf_log_stop,
336
+ .log_sync = hvf_log_sync,
337
+};
338
+
339
+static void dummy_signal(int sig)
340
+{
341
+}
342
+
343
+bool hvf_allowed;
344
+
345
+static int hvf_accel_init(MachineState *ms)
346
+{
347
+ int x;
348
+ hv_return_t ret;
349
+ HVFState *s;
350
+
351
+ ret = hv_vm_create(HV_VM_DEFAULT);
352
+ assert_hvf_ok(ret);
353
+
354
+ s = g_new0(HVFState, 1);
355
+
356
+ s->num_slots = 32;
357
+ for (x = 0; x < s->num_slots; ++x) {
358
+ s->slots[x].size = 0;
359
+ s->slots[x].slot_id = x;
360
+ }
361
+
362
+ hvf_state = s;
363
+ memory_listener_register(&hvf_memory_listener, &address_space_memory);
364
+ return 0;
365
+}
366
+
367
+static void hvf_accel_class_init(ObjectClass *oc, void *data)
368
+{
369
+ AccelClass *ac = ACCEL_CLASS(oc);
370
+ ac->name = "HVF";
371
+ ac->init_machine = hvf_accel_init;
372
+ ac->allowed = &hvf_allowed;
373
+}
374
+
375
+static const TypeInfo hvf_accel_type = {
376
+ .name = TYPE_HVF_ACCEL,
377
+ .parent = TYPE_ACCEL,
378
+ .class_init = hvf_accel_class_init,
379
+};
380
+
381
+static void hvf_type_init(void)
382
+{
383
+ type_register_static(&hvf_accel_type);
384
+}
385
+
386
+type_init(hvf_type_init);
387
+
388
/*
389
* The HVF-specific vCPU thread function. This one should only run when the host
390
* CPU supports the VMX "unrestricted guest" feature.
391
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
392
index XXXXXXX..XXXXXXX 100644
393
--- a/target/i386/hvf/hvf.c
394
+++ b/target/i386/hvf/hvf.c
395
@@ -XXX,XX +XXX,XX @@
396
397
#include "hvf-accel-ops.h"
398
399
-HVFState *hvf_state;
400
-
401
-/* Memory slots */
402
-hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
403
-{
404
- hvf_slot *slot;
405
- int x;
406
- for (x = 0; x < hvf_state->num_slots; ++x) {
407
- slot = &hvf_state->slots[x];
408
- if (slot->size && start < (slot->start + slot->size) &&
409
- (start + size) > slot->start) {
410
- return slot;
411
- }
412
- }
413
- return NULL;
414
-}
415
-
416
-struct mac_slot {
417
- int present;
418
- uint64_t size;
419
- uint64_t gpa_start;
420
- uint64_t gva;
421
-};
422
-
423
-struct mac_slot mac_slots[32];
424
-
425
-static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
426
-{
427
- struct mac_slot *macslot;
428
- hv_return_t ret;
429
-
430
- macslot = &mac_slots[slot->slot_id];
431
-
432
- if (macslot->present) {
433
- if (macslot->size != slot->size) {
434
- macslot->present = 0;
435
- ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
436
- assert_hvf_ok(ret);
437
- }
438
- }
439
-
440
- if (!slot->size) {
441
- return 0;
442
- }
443
-
444
- macslot->present = 1;
445
- macslot->gpa_start = slot->start;
446
- macslot->size = slot->size;
447
- ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
448
- assert_hvf_ok(ret);
449
- return 0;
450
-}
451
-
452
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
453
-{
454
- hvf_slot *mem;
455
- MemoryRegion *area = section->mr;
456
- bool writeable = !area->readonly && !area->rom_device;
457
- hv_memory_flags_t flags;
458
-
459
- if (!memory_region_is_ram(area)) {
460
- if (writeable) {
461
- return;
462
- } else if (!memory_region_is_romd(area)) {
463
- /*
464
- * If the memory device is not in romd_mode, then we actually want
465
- * to remove the hvf memory slot so all accesses will trap.
466
- */
467
- add = false;
468
- }
469
- }
470
-
471
- mem = hvf_find_overlap_slot(
472
- section->offset_within_address_space,
473
- int128_get64(section->size));
474
-
475
- if (mem && add) {
476
- if (mem->size == int128_get64(section->size) &&
477
- mem->start == section->offset_within_address_space &&
478
- mem->mem == (memory_region_get_ram_ptr(area) +
479
- section->offset_within_region)) {
480
- return; /* Same region was attempted to register, go away. */
481
- }
482
- }
483
-
484
- /* Region needs to be reset. set the size to 0 and remap it. */
485
- if (mem) {
486
- mem->size = 0;
487
- if (do_hvf_set_memory(mem, 0)) {
488
- error_report("Failed to reset overlapping slot");
489
- abort();
490
- }
491
- }
492
-
493
- if (!add) {
494
- return;
495
- }
496
-
497
- if (area->readonly ||
498
- (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
499
- flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
500
- } else {
501
- flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
502
- }
503
-
504
- /* Now make a new slot. */
505
- int x;
506
-
507
- for (x = 0; x < hvf_state->num_slots; ++x) {
508
- mem = &hvf_state->slots[x];
509
- if (!mem->size) {
510
- break;
511
- }
512
- }
513
-
514
- if (x == hvf_state->num_slots) {
515
- error_report("No free slots");
516
- abort();
517
- }
518
-
519
- mem->size = int128_get64(section->size);
520
- mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
521
- mem->start = section->offset_within_address_space;
522
- mem->region = area;
523
-
524
- if (do_hvf_set_memory(mem, flags)) {
525
- error_report("Error registering new memory slot");
526
- abort();
527
- }
528
-}
529
-
530
void vmx_update_tpr(CPUState *cpu)
42
{
531
{
43
return address_space_access_valid(as, addr, len,
532
/* TODO: need integrate APIC handling */
44
- dir == DMA_DIRECTION_FROM_DEVICE);
533
@@ -XXX,XX +XXX,XX @@ void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer,
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
534
}
46
+ MEMTXATTRS_UNSPECIFIED);
47
}
535
}
48
536
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
537
-static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
50
diff --git a/exec.c b/exec.c
538
-{
51
index XXXXXXX..XXXXXXX 100644
539
- if (!cpu->vcpu_dirty) {
52
--- a/exec.c
540
- hvf_get_registers(cpu);
53
+++ b/exec.c
541
- cpu->vcpu_dirty = true;
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
542
- }
543
-}
544
-
545
-void hvf_cpu_synchronize_state(CPUState *cpu)
546
-{
547
- if (!cpu->vcpu_dirty) {
548
- run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
549
- }
550
-}
551
-
552
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
553
- run_on_cpu_data arg)
554
-{
555
- hvf_put_registers(cpu);
556
- cpu->vcpu_dirty = false;
557
-}
558
-
559
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
560
-{
561
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
562
-}
563
-
564
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
565
- run_on_cpu_data arg)
566
-{
567
- hvf_put_registers(cpu);
568
- cpu->vcpu_dirty = false;
569
-}
570
-
571
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
572
-{
573
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
574
-}
575
-
576
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
577
- run_on_cpu_data arg)
578
-{
579
- cpu->vcpu_dirty = true;
580
-}
581
-
582
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
583
-{
584
- run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
585
-}
586
-
587
static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
588
{
589
int read, write;
590
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
591
return false;
55
}
592
}
56
593
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
594
-static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
58
- int len, bool is_write)
595
-{
59
+ int len, bool is_write,
596
- hvf_slot *slot;
60
+ MemTxAttrs attrs)
597
-
598
- slot = hvf_find_overlap_slot(
599
- section->offset_within_address_space,
600
- int128_get64(section->size));
601
-
602
- /* protect region against writes; begin tracking it */
603
- if (on) {
604
- slot->flags |= HVF_SLOT_LOG;
605
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
606
- HV_MEMORY_READ);
607
- /* stop tracking region*/
608
- } else {
609
- slot->flags &= ~HVF_SLOT_LOG;
610
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
611
- HV_MEMORY_READ | HV_MEMORY_WRITE);
612
- }
613
-}
614
-
615
-static void hvf_log_start(MemoryListener *listener,
616
- MemoryRegionSection *section, int old, int new)
617
-{
618
- if (old != 0) {
619
- return;
620
- }
621
-
622
- hvf_set_dirty_tracking(section, 1);
623
-}
624
-
625
-static void hvf_log_stop(MemoryListener *listener,
626
- MemoryRegionSection *section, int old, int new)
627
-{
628
- if (new != 0) {
629
- return;
630
- }
631
-
632
- hvf_set_dirty_tracking(section, 0);
633
-}
634
-
635
-static void hvf_log_sync(MemoryListener *listener,
636
- MemoryRegionSection *section)
637
-{
638
- /*
639
- * sync of dirty pages is handled elsewhere; just make sure we keep
640
- * tracking the region.
641
- */
642
- hvf_set_dirty_tracking(section, 1);
643
-}
644
-
645
-static void hvf_region_add(MemoryListener *listener,
646
- MemoryRegionSection *section)
647
-{
648
- hvf_set_phys_mem(section, true);
649
-}
650
-
651
-static void hvf_region_del(MemoryListener *listener,
652
- MemoryRegionSection *section)
653
-{
654
- hvf_set_phys_mem(section, false);
655
-}
656
-
657
-static MemoryListener hvf_memory_listener = {
658
- .priority = 10,
659
- .region_add = hvf_region_add,
660
- .region_del = hvf_region_del,
661
- .log_start = hvf_log_start,
662
- .log_stop = hvf_log_stop,
663
- .log_sync = hvf_log_sync,
664
-};
665
-
666
void hvf_vcpu_destroy(CPUState *cpu)
61
{
667
{
62
FlatView *fv;
668
X86CPU *x86_cpu = X86_CPU(cpu);
63
bool result;
669
@@ -XXX,XX +XXX,XX @@ void hvf_vcpu_destroy(CPUState *cpu)
64
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
670
assert_hvf_ok(ret);
65
index XXXXXXX..XXXXXXX 100644
671
}
66
--- a/target/s390x/diag.c
672
67
+++ b/target/s390x/diag.c
673
-static void dummy_signal(int sig)
68
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
674
-{
69
return;
675
-}
70
}
676
-
71
if (!address_space_access_valid(&address_space_memory, addr,
677
static void init_tsc_freq(CPUX86State *env)
72
- sizeof(IplParameterBlock), false)) {
678
{
73
+ sizeof(IplParameterBlock), false,
679
size_t length;
74
+ MEMTXATTRS_UNSPECIFIED)) {
680
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
75
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
681
76
return;
682
return ret;
77
}
683
}
78
@@ -XXX,XX +XXX,XX @@ out:
684
-
79
return;
685
-bool hvf_allowed;
80
}
686
-
81
if (!address_space_access_valid(&address_space_memory, addr,
687
-static int hvf_accel_init(MachineState *ms)
82
- sizeof(IplParameterBlock), true)) {
688
-{
83
+ sizeof(IplParameterBlock), true,
689
- int x;
84
+ MEMTXATTRS_UNSPECIFIED)) {
690
- hv_return_t ret;
85
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
691
- HVFState *s;
86
return;
692
-
87
}
693
- ret = hv_vm_create(HV_VM_DEFAULT);
88
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
694
- assert_hvf_ok(ret);
89
index XXXXXXX..XXXXXXX 100644
695
-
90
--- a/target/s390x/excp_helper.c
696
- s = g_new0(HVFState, 1);
91
+++ b/target/s390x/excp_helper.c
697
-
92
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,
698
- s->num_slots = 32;
93
699
- for (x = 0; x < s->num_slots; ++x) {
94
/* check out of RAM access */
700
- s->slots[x].size = 0;
95
if (!address_space_access_valid(&address_space_memory, raddr,
701
- s->slots[x].slot_id = x;
96
- TARGET_PAGE_SIZE, rw)) {
702
- }
97
+ TARGET_PAGE_SIZE, rw,
703
-
98
+ MEMTXATTRS_UNSPECIFIED)) {
704
- hvf_state = s;
99
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
705
- memory_listener_register(&hvf_memory_listener, &address_space_memory);
100
(uint64_t)raddr, (uint64_t)ram_size);
706
- return 0;
101
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
707
-}
102
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
708
-
103
index XXXXXXX..XXXXXXX 100644
709
-static void hvf_accel_class_init(ObjectClass *oc, void *data)
104
--- a/target/s390x/mmu_helper.c
710
-{
105
+++ b/target/s390x/mmu_helper.c
711
- AccelClass *ac = ACCEL_CLASS(oc);
106
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
712
- ac->name = "HVF";
107
return ret;
713
- ac->init_machine = hvf_accel_init;
108
}
714
- ac->allowed = &hvf_allowed;
109
if (!address_space_access_valid(&address_space_memory, pages[i],
715
-}
110
- TARGET_PAGE_SIZE, is_write)) {
716
-
111
+ TARGET_PAGE_SIZE, is_write,
717
-static const TypeInfo hvf_accel_type = {
112
+ MEMTXATTRS_UNSPECIFIED)) {
718
- .name = TYPE_HVF_ACCEL,
113
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
719
- .parent = TYPE_ACCEL,
114
return -EFAULT;
720
- .class_init = hvf_accel_class_init,
115
}
721
-};
116
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
722
-
117
index XXXXXXX..XXXXXXX 100644
723
-static void hvf_type_init(void)
118
--- a/target/s390x/sigp.c
724
-{
119
+++ b/target/s390x/sigp.c
725
- type_register_static(&hvf_accel_type);
120
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
726
-}
121
cpu_synchronize_state(cs);
727
-
122
728
-type_init(hvf_type_init);
123
if (!address_space_access_valid(&address_space_memory, addr,
124
- sizeof(struct LowCore), false)) {
125
+ sizeof(struct LowCore), false,
126
+ MEMTXATTRS_UNSPECIFIED)) {
127
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
128
return;
129
}
130
--
729
--
131
2.17.1
730
2.20.1
132
731
133
732
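One detail of the hvf register-synchronization code above is worth spelling out: every synchronize hook funnels its work through run_on_cpu() so that it executes on the vCPU's own thread, and cpu->vcpu_dirty records whether QEMU's cached copy of the guest registers is newer than the hypervisor's. The sketch below is illustrative only and not part of this series (read_guest_rip() is a made-up helper); it shows how accelerator-independent code reaches do_hvf_cpu_synchronize_state() via the generic cpu_synchronize_state() entry point:

  /* Illustrative sketch, not from the series. */
  #include "qemu/osdep.h"
  #include "sysemu/hw_accel.h"   /* cpu_synchronize_state() */
  #include "cpu.h"               /* X86CPU, on an x86 target */

  static uint64_t read_guest_rip(CPUState *cs)
  {
      /*
       * Fetches registers from the hypervisor only when vcpu_dirty is
       * false; under hvf this lands in do_hvf_cpu_synchronize_state().
       */
      cpu_synchronize_state(cs);
      return X86_CPU(cs)->env.eip;
  }
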
From: Alexander Graf <agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves a few internal struct and constant defines over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-5-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf_int.h   | 30 ++++++++++++++++++++++++++++++
 target/i386/hvf/hvf-i386.h | 31 +------------------------------
 2 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@

 #include <Hypervisor/hv.h>

+/* hvf_slot flags */
+#define HVF_SLOT_LOG (1 << 0)
+
+typedef struct hvf_slot {
+    uint64_t start;
+    uint64_t size;
+    uint8_t *mem;
+    int slot_id;
+    uint32_t flags;
+    MemoryRegion *region;
+} hvf_slot;
+
+typedef struct hvf_vcpu_caps {
+    uint64_t vmx_cap_pinbased;
+    uint64_t vmx_cap_procbased;
+    uint64_t vmx_cap_procbased2;
+    uint64_t vmx_cap_entry;
+    uint64_t vmx_cap_exit;
+    uint64_t vmx_cap_preemption_timer;
+} hvf_vcpu_caps;
+
+struct HVFState {
+    AccelState parent;
+    hvf_slot slots[32];
+    int num_slots;
+
+    hvf_vcpu_caps *hvf_caps;
+};
+extern HVFState *hvf_state;
+
 void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -XXX,XX +XXX,XX @@

 #include "qemu/accel.h"
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "cpu.h"
 #include "x86.h"

-/* hvf_slot flags */
-#define HVF_SLOT_LOG (1 << 0)
-
-typedef struct hvf_slot {
-    uint64_t start;
-    uint64_t size;
-    uint8_t *mem;
-    int slot_id;
-    uint32_t flags;
-    MemoryRegion *region;
-} hvf_slot;
-
-typedef struct hvf_vcpu_caps {
-    uint64_t vmx_cap_pinbased;
-    uint64_t vmx_cap_procbased;
-    uint64_t vmx_cap_procbased2;
-    uint64_t vmx_cap_entry;
-    uint64_t vmx_cap_exit;
-    uint64_t vmx_cap_preemption_timer;
-} hvf_vcpu_caps;
-
-struct HVFState {
-    AccelState parent;
-    hvf_slot slots[32];
-    int num_slots;
-
-    hvf_vcpu_caps *hvf_caps;
-};
-extern HVFState *hvf_state;
-
 void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);

 #ifdef NEED_CPU_H
--
2.20.1

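As a usage illustration for the definitions this patch moves into include/sysemu/hvf_int.h: once hvf_slot and HVFState are visible to common code, helpers outside target/i386 can walk the slot table directly. The function below is a minimal sketch under that assumption, not code from the series (hvf_slots_in_use() is a made-up name):

  /* Illustrative sketch, not from the series. */
  #include "qemu/osdep.h"
  #include "sysemu/hvf_int.h"

  static int hvf_slots_in_use(void)
  {
      int used = 0;

      for (int i = 0; i < hvf_state->num_slots; i++) {
          if (hvf_state->slots[i].size) {   /* size == 0 marks a free slot */
              used++;
          }
      }
      return used;
  }
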
From: Alexander Graf <agraf@csgraf.de>

The hvf_set_phys_mem() function is only called within the same file.
Make it static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-6-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf_int.h  | 1 -
 accel/hvf/hvf-accel-ops.c | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@ struct HVFState {
 };
 extern HVFState *hvf_state;

-void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
     return 0;
 }

-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
+static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
 {
     hvf_slot *mem;
     MemoryRegion *area = section->mr;
--
2.20.1

1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
kvm_irqchip_create called by kvm_init will call kvm_init_irq_routing to
3
The ARM version of Hypervisor.framework no longer defines these two
4
initialize global capability variables. If we call kvm_init_irq_routing in
4
types, so let's just revert to standard ones.
5
GIC realize function, previously allocated memory will leak.
6
5
7
Fix this by deleting the unnecessary call.
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
9
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
8
Message-id: 20210519202253.76782-7-agraf@csgraf.de
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 1527750994-14360-1-git-send-email-zhaoshenglong@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/intc/arm_gic_kvm.c | 1 -
12
accel/hvf/hvf-accel-ops.c | 6 +++---
15
hw/intc/arm_gicv3_kvm.c | 1 -
13
1 file changed, 3 insertions(+), 3 deletions(-)
16
2 files changed, 2 deletions(-)
17
14
18
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
15
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gic_kvm.c
17
--- a/accel/hvf/hvf-accel-ops.c
21
+++ b/hw/intc/arm_gic_kvm.c
18
+++ b/accel/hvf/hvf-accel-ops.c
22
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
19
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
23
20
macslot->present = 1;
24
if (kvm_has_gsi_routing()) {
21
macslot->gpa_start = slot->start;
25
/* set up irq routing */
22
macslot->size = slot->size;
26
- kvm_init_irq_routing(kvm_state);
23
- ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
27
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
24
+ ret = hv_vm_map(slot->mem, slot->start, slot->size, flags);
28
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
25
assert_hvf_ok(ret);
29
}
26
return 0;
30
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
27
}
31
index XXXXXXX..XXXXXXX 100644
28
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
32
--- a/hw/intc/arm_gicv3_kvm.c
29
/* protect region against writes; begin tracking it */
33
+++ b/hw/intc/arm_gicv3_kvm.c
30
if (on) {
34
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
31
slot->flags |= HVF_SLOT_LOG;
35
32
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
36
if (kvm_has_gsi_routing()) {
33
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
37
/* set up irq routing */
34
HV_MEMORY_READ);
38
- kvm_init_irq_routing(kvm_state);
35
/* stop tracking region*/
39
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
36
} else {
40
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
37
slot->flags &= ~HVF_SLOT_LOG;
41
}
38
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
39
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
40
HV_MEMORY_READ | HV_MEMORY_WRITE);
41
}
42
}
42
--
43
--
43
2.17.1
44
2.20.1
44
45
45
46
1
From: Igor Mammedov <imammedo@redhat.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
When QEMU is started with following CLI
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
-machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
it crashes with abort at
5
prepare for support for multiple architectures, let's start moving common
6
accel/kvm/kvm-all.c:2164:
6
code out into its own accel directory.
7
KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument
8
7
9
Which is caused by implicit dependency of kvm_arm_gicv3_reset() on
8
This patch splits the vcpu init and destroy functions into a generic and
10
arm_gicv3_icc_reset() where the latter is called by CPU reset
9
an architecture specific portion. This also allows us to move the generic
11
reset callback.
10
functions into the generic hvf code, removing exported functions.
12
11
13
However commit:
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
14
3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
15
broke CPU reset callback registration in case
14
Message-id: 20210519202253.76782-8-agraf@csgraf.de
16
17
arm_load_kernel()
18
...
19
if (!info->kernel_filename || info->firmware_loaded)
20
21
branch is taken, i.e. it's sufficient to provide a firmware
22
or do not provide kernel on CLI to skip cpu reset callback
23
registration, where before offending commit the callback
24
has been registered unconditionally.
25
26
Fix it by registering the callback right at the beginning of
27
arm_load_kernel() unconditionally instead of doing it at the end.
28
29
NOTE:
30
we probably should eliminate that dependency anyways as well as
31
separate arch CPU reset parts from arm_load_kernel() into CPU
32
itself, but that refactoring that I probably would have to do
33
anyways later for CPU hotplug to work.
34
35
Reported-by: Auger Eric <eric.auger@redhat.com>
36
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
37
Reviewed-by: Eric Auger <eric.auger@redhat.com>
38
Tested-by: Eric Auger <eric.auger@redhat.com>
39
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
40
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
41
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
42
---
17
---
43
hw/arm/boot.c | 18 +++++++++---------
18
accel/hvf/hvf-accel-ops.h | 2 --
44
1 file changed, 9 insertions(+), 9 deletions(-)
19
include/sysemu/hvf_int.h | 2 ++
20
accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
21
target/i386/hvf/hvf.c | 23 ++---------------------
22
4 files changed, 34 insertions(+), 23 deletions(-)
45
23
46
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
24
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
47
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/boot.c
26
--- a/accel/hvf/hvf-accel-ops.h
49
+++ b/hw/arm/boot.c
27
+++ b/accel/hvf/hvf-accel-ops.h
50
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
28
@@ -XXX,XX +XXX,XX @@
51
static const ARMInsnFixup *primary_loader;
29
52
AddressSpace *as = arm_boot_address_space(cpu, info);
30
#include "sysemu/cpus.h"
53
31
54
+ /* CPU objects (unlike devices) are not automatically reset on system
32
-int hvf_init_vcpu(CPUState *);
55
+ * reset, so we must always register a handler to do so. If we're
33
int hvf_vcpu_exec(CPUState *);
56
+ * actually loading a kernel, the handler is also responsible for
34
void hvf_cpu_synchronize_state(CPUState *);
57
+ * arranging that we start it correctly.
35
void hvf_cpu_synchronize_post_reset(CPUState *);
58
+ */
36
void hvf_cpu_synchronize_post_init(CPUState *);
59
+ for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
37
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
60
+ qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
38
-void hvf_vcpu_destroy(CPUState *);
61
+ }
39
40
#endif /* HVF_CPUS_H */
41
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/include/sysemu/hvf_int.h
44
+++ b/include/sysemu/hvf_int.h
45
@@ -XXX,XX +XXX,XX @@ struct HVFState {
46
extern HVFState *hvf_state;
47
48
void assert_hvf_ok(hv_return_t ret);
49
+int hvf_arch_init_vcpu(CPUState *cpu);
50
+void hvf_arch_vcpu_destroy(CPUState *cpu);
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
52
int hvf_put_registers(CPUState *);
53
int hvf_get_registers(CPUState *);
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/accel/hvf/hvf-accel-ops.c
57
+++ b/accel/hvf/hvf-accel-ops.c
58
@@ -XXX,XX +XXX,XX @@ static void hvf_type_init(void)
59
60
type_init(hvf_type_init);
61
62
+static void hvf_vcpu_destroy(CPUState *cpu)
63
+{
64
+ hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
65
+ assert_hvf_ok(ret);
62
+
66
+
63
/* The board code is not supposed to set secure_board_setup unless
67
+ hvf_arch_vcpu_destroy(cpu);
64
* running its code in secure mode is actually possible, and KVM
68
+}
65
* doesn't support secure.
69
+
66
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
70
+static int hvf_init_vcpu(CPUState *cpu)
67
ARM_CPU(cs)->env.boot_info = info;
71
+{
72
+ int r;
73
+
74
+ /* init cpu signals */
75
+ sigset_t set;
76
+ struct sigaction sigact;
77
+
78
+ memset(&sigact, 0, sizeof(sigact));
79
+ sigact.sa_handler = dummy_signal;
80
+ sigaction(SIG_IPI, &sigact, NULL);
81
+
82
+ pthread_sigmask(SIG_BLOCK, NULL, &set);
83
+ sigdelset(&set, SIG_IPI);
84
+
85
+ r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
86
+ cpu->vcpu_dirty = 1;
87
+ assert_hvf_ok(r);
88
+
89
+ return hvf_arch_init_vcpu(cpu);
90
+}
91
+
92
/*
93
* The HVF-specific vCPU thread function. This one should only run when the host
94
* CPU supports the VMX "unrestricted guest" feature.
95
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/i386/hvf/hvf.c
98
+++ b/target/i386/hvf/hvf.c
99
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
100
return false;
101
}
102
103
-void hvf_vcpu_destroy(CPUState *cpu)
104
+void hvf_arch_vcpu_destroy(CPUState *cpu)
105
{
106
X86CPU *x86_cpu = X86_CPU(cpu);
107
CPUX86State *env = &x86_cpu->env;
108
109
- hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
110
g_free(env->hvf_mmio_buf);
111
- assert_hvf_ok(ret);
112
}
113
114
static void init_tsc_freq(CPUX86State *env)
115
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
116
return env->apic_bus_freq != 0;
117
}
118
119
-int hvf_init_vcpu(CPUState *cpu)
120
+int hvf_arch_init_vcpu(CPUState *cpu)
121
{
122
-
123
X86CPU *x86cpu = X86_CPU(cpu);
124
CPUX86State *env = &x86cpu->env;
125
- int r;
126
-
127
- /* init cpu signals */
128
- sigset_t set;
129
- struct sigaction sigact;
130
-
131
- memset(&sigact, 0, sizeof(sigact));
132
- sigact.sa_handler = dummy_signal;
133
- sigaction(SIG_IPI, &sigact, NULL);
134
-
135
- pthread_sigmask(SIG_BLOCK, NULL, &set);
136
- sigdelset(&set, SIG_IPI);
137
138
init_emu();
139
init_decoder();
140
@@ -XXX,XX +XXX,XX @@ int hvf_init_vcpu(CPUState *cpu)
141
}
68
}
142
}
69
143
70
- /* CPU objects (unlike devices) are not automatically reset on system
144
- r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
71
- * reset, so we must always register a handler to do so. If we're
145
- cpu->vcpu_dirty = 1;
72
- * actually loading a kernel, the handler is also responsible for
146
- assert_hvf_ok(r);
73
- * arranging that we start it correctly.
74
- */
75
- for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
76
- qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
77
- }
78
-
147
-
79
if (!info->skip_dtb_autoload && have_dtb(info)) {
148
if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
80
if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
149
&hvf_state->hvf_caps->vmx_cap_pinbased)) {
81
exit(1);
150
abort();
82
--
151
--
83
2.17.1
152
2.20.1
84
153
85
154
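The boot.c change above relies on a QEMU convention that is easy to miss: CPU objects, unlike devices, are not reset automatically on system reset, so whoever needs the CPUs back in a known state must register a reset handler explicitly. A minimal sketch of that registration pattern follows; it is illustrative only, not part of the series, and my_cpu_reset()/register_cpu_reset_handlers() are made-up names:

  /* Illustrative sketch, not from the series. */
  #include "qemu/osdep.h"
  #include "hw/core/cpu.h"       /* CPUState, CPU_FOREACH(), cpu_reset() */
  #include "sysemu/reset.h"      /* qemu_register_reset() */

  static void my_cpu_reset(void *opaque)
  {
      cpu_reset(CPU(opaque));
  }

  static void register_cpu_reset_handlers(void)
  {
      CPUState *cs;

      CPU_FOREACH(cs) {
          qemu_register_reset(my_cpu_reset, cs);
      }
  }
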
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Alexander Graf <agraf@csgraf.de>
2
add MemTxAttrs as an argument to address_space_map().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
There is no reason to call the hvf specific hvf_cpu_synchronize_state()
4
when we can just use the generic cpu_synchronize_state() instead. This
5
allows us to have less dependency on internal function definitions and
6
allows us to make hvf_cpu_synchronize_state() static.
7
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
9
Reviewed-by: Sergio Lopez <slp@redhat.com>
10
Message-id: 20210519202253.76782-9-agraf@csgraf.de
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
10
---
13
---
11
include/exec/memory.h | 3 ++-
14
accel/hvf/hvf-accel-ops.h | 1 -
12
include/sysemu/dma.h | 3 ++-
15
accel/hvf/hvf-accel-ops.c | 2 +-
13
exec.c | 6 ++++--
16
target/i386/hvf/x86hvf.c | 9 ++++-----
14
target/ppc/mmu-hash64.c | 3 ++-
17
3 files changed, 5 insertions(+), 7 deletions(-)
15
4 files changed, 10 insertions(+), 5 deletions(-)
16
18
17
diff --git a/include/exec/memory.h b/include/exec/memory.h
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/memory.h
21
--- a/accel/hvf/hvf-accel-ops.h
20
+++ b/include/exec/memory.h
22
+++ b/accel/hvf/hvf-accel-ops.h
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
23
@@ -XXX,XX +XXX,XX @@
22
* @addr: address within that address space
24
#include "sysemu/cpus.h"
23
* @plen: pointer to length of buffer; updated on return
25
24
* @is_write: indicates the transfer direction
26
int hvf_vcpu_exec(CPUState *);
25
+ * @attrs: memory attributes
27
-void hvf_cpu_synchronize_state(CPUState *);
26
*/
28
void hvf_cpu_synchronize_post_reset(CPUState *);
27
void *address_space_map(AddressSpace *as, hwaddr addr,
29
void hvf_cpu_synchronize_post_init(CPUState *);
28
- hwaddr *plen, bool is_write);
30
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
31
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
30
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
32
*
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
34
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
35
--- a/include/sysemu/dma.h
33
--- a/accel/hvf/hvf-accel-ops.c
36
+++ b/include/sysemu/dma.h
34
+++ b/accel/hvf/hvf-accel-ops.c
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
35
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
38
hwaddr xlen = *len;
36
}
39
void *p;
40
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
43
+ MEMTXATTRS_UNSPECIFIED);
44
*len = xlen;
45
return p;
46
}
37
}
47
diff --git a/exec.c b/exec.c
38
39
-void hvf_cpu_synchronize_state(CPUState *cpu)
40
+static void hvf_cpu_synchronize_state(CPUState *cpu)
41
{
42
if (!cpu->vcpu_dirty) {
43
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
44
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
48
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
49
--- a/exec.c
46
--- a/target/i386/hvf/x86hvf.c
50
+++ b/exec.c
47
+++ b/target/i386/hvf/x86hvf.c
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
48
@@ -XXX,XX +XXX,XX @@
52
void *address_space_map(AddressSpace *as,
49
#include "cpu.h"
53
hwaddr addr,
50
#include "x86_descr.h"
54
hwaddr *plen,
51
#include "x86_decode.h"
55
- bool is_write)
52
+#include "sysemu/hw_accel.h"
56
+ bool is_write,
53
57
+ MemTxAttrs attrs)
54
#include "hw/i386/apic_internal.h"
55
56
#include <Hypervisor/hv.h>
57
#include <Hypervisor/hv_vmx.h>
58
59
-#include "accel/hvf/hvf-accel-ops.h"
60
-
61
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
62
SegmentCache *qseg, bool is_tr)
58
{
63
{
59
hwaddr len = *plen;
64
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
60
hwaddr l, xlat;
65
env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
66
62
hwaddr *plen,
67
if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
63
int is_write)
68
- hvf_cpu_synchronize_state(cpu_state);
64
{
69
+ cpu_synchronize_state(cpu_state);
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
70
do_cpu_init(cpu);
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
67
+ MEMTXATTRS_UNSPECIFIED);
68
}
69
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/ppc/mmu-hash64.c
74
+++ b/target/ppc/mmu-hash64.c
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
76
return NULL;
77
}
71
}
78
72
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
73
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
74
cpu_state->halted = 0;
81
+ MEMTXATTRS_UNSPECIFIED);
75
}
82
if (plen < (n * HASH_PTE_SIZE_64)) {
76
if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
77
- hvf_cpu_synchronize_state(cpu_state);
78
+ cpu_synchronize_state(cpu_state);
79
do_cpu_sipi(cpu);
80
}
81
if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
82
cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
83
- hvf_cpu_synchronize_state(cpu_state);
84
+ cpu_synchronize_state(cpu_state);
85
apic_handle_tpr_access_report(cpu->apic_state, env->eip,
86
env->tpr_access_type);
84
}
87
}
85
--
88
--
86
2.17.1
89
2.20.1
87
90
88
91
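For callers, the address_space_map() change above is mechanical: code with no specific transaction attributes passes MEMTXATTRS_UNSPECIFIED, and code that does have attributes to hand forwards them. A hedged sketch of a post-change caller (map_guest_buffer() is a made-up wrapper, not from the series):

  /* Illustrative sketch, not from the series. */
  #include "qemu/osdep.h"
  #include "exec/memory.h"

  static void *map_guest_buffer(AddressSpace *as, hwaddr addr, hwaddr *len,
                                bool is_write)
  {
      /* No attributes to hand here, so pass MEMTXATTRS_UNSPECIFIED. */
      return address_space_map(as, addr, len, is_write, MEMTXATTRS_UNSPECIFIED);
  }
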
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Alexander Graf <agraf@csgraf.de>
2
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
3
callback. We'll need this for subpage_accepts().
4
2
5
We could take the approach we used with the read and write
3
The hvf accel synchronize functions are only used as input for local
6
callbacks and add new a new _with_attrs version, but since there
4
callback functions, so we can make them static.
7
are so few implementations of the accepts hook we just change
8
them all.
9
5
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20210519202253.76782-10-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
14
---
11
---
15
include/exec/memory.h | 3 ++-
12
accel/hvf/hvf-accel-ops.h | 3 ---
16
exec.c | 9 ++++++---
13
accel/hvf/hvf-accel-ops.c | 6 +++---
17
hw/hppa/dino.c | 3 ++-
14
2 files changed, 3 insertions(+), 6 deletions(-)
18
hw/nvram/fw_cfg.c | 12 ++++++++----
19
hw/scsi/esp.c | 3 ++-
20
hw/xen/xen_pt_msi.c | 3 ++-
21
memory.c | 5 +++--
22
7 files changed, 25 insertions(+), 13 deletions(-)
23
15
24
diff --git a/include/exec/memory.h b/include/exec/memory.h
16
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory.h
18
--- a/accel/hvf/hvf-accel-ops.h
27
+++ b/include/exec/memory.h
19
+++ b/accel/hvf/hvf-accel-ops.h
28
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
20
@@ -XXX,XX +XXX,XX @@
29
* as a machine check exception).
21
#include "sysemu/cpus.h"
30
*/
22
31
bool (*accepts)(void *opaque, hwaddr addr,
23
int hvf_vcpu_exec(CPUState *);
32
- unsigned size, bool is_write);
24
-void hvf_cpu_synchronize_post_reset(CPUState *);
33
+ unsigned size, bool is_write,
25
-void hvf_cpu_synchronize_post_init(CPUState *);
34
+ MemTxAttrs attrs);
26
-void hvf_cpu_synchronize_pre_loadvm(CPUState *);
35
} valid;
27
36
/* Internal implementation constraints: */
28
#endif /* HVF_CPUS_H */
37
struct {
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
31
--- a/accel/hvf/hvf-accel-ops.c
41
+++ b/exec.c
32
+++ b/accel/hvf/hvf-accel-ops.c
42
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
33
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
34
cpu->vcpu_dirty = false;
43
}
35
}
44
36
45
static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
37
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
46
- unsigned size, bool is_write)
38
+static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
47
+ unsigned size, bool is_write,
48
+ MemTxAttrs attrs)
49
{
39
{
50
return is_write;
40
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
51
}
41
}
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
42
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
43
cpu->vcpu_dirty = false;
53
}
44
}
54
45
55
static bool subpage_accepts(void *opaque, hwaddr addr,
46
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
56
- unsigned len, bool is_write)
47
+static void hvf_cpu_synchronize_post_init(CPUState *cpu)
57
+ unsigned len, bool is_write,
58
+ MemTxAttrs attrs)
59
{
48
{
60
subpage_t *subpage = opaque;
49
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
61
#if defined(DEBUG_SUBPAGE)
62
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
63
}
50
}
64
51
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
65
static bool readonly_mem_accepts(void *opaque, hwaddr addr,
52
cpu->vcpu_dirty = true;
66
- unsigned size, bool is_write)
53
}
67
+ unsigned size, bool is_write,
54
68
+ MemTxAttrs attrs)
55
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
56
+static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
69
{
57
{
70
return is_write;
58
run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
71
}
59
}
72
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/hppa/dino.c
75
+++ b/hw/hppa/dino.c
76
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
77
}
78
79
static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
80
- unsigned size, bool is_write)
81
+ unsigned size, bool is_write,
82
+ MemTxAttrs attrs)
83
{
84
switch (addr) {
85
case DINO_IAR0:
86
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/hw/nvram/fw_cfg.c
89
+++ b/hw/nvram/fw_cfg.c
90
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
91
}
92
93
static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
94
- unsigned size, bool is_write)
95
+ unsigned size, bool is_write,
96
+ MemTxAttrs attrs)
97
{
98
return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
99
(size == 8 && addr == 0));
100
}
101
102
static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
103
- unsigned size, bool is_write)
104
+ unsigned size, bool is_write,
105
+ MemTxAttrs attrs)
106
{
107
return addr == 0;
108
}
109
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
110
}
111
112
static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
113
- unsigned size, bool is_write)
114
+ unsigned size, bool is_write,
115
+ MemTxAttrs attrs)
116
{
117
return is_write && size == 2;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
120
}
121
122
static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
123
- unsigned size, bool is_write)
124
+ unsigned size, bool is_write,
125
+ MemTxAttrs attrs)
126
{
127
return (size == 1) || (is_write && size == 2);
128
}
129
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/scsi/esp.c
132
+++ b/hw/scsi/esp.c
133
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
134
}
135
136
static bool esp_mem_accepts(void *opaque, hwaddr addr,
137
- unsigned size, bool is_write)
138
+ unsigned size, bool is_write,
139
+ MemTxAttrs attrs)
140
{
141
return (size == 1) || (is_write && size == 4);
142
}
143
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/xen/xen_pt_msi.c
146
+++ b/hw/xen/xen_pt_msi.c
147
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
148
}
149
150
static bool pci_msix_accepts(void *opaque, hwaddr addr,
151
- unsigned size, bool is_write)
152
+ unsigned size, bool is_write,
153
+ MemTxAttrs attrs)
154
{
155
return !(addr & (size - 1));
156
}
157
diff --git a/memory.c b/memory.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/memory.c
160
+++ b/memory.c
161
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
162
}
163
164
static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
165
- unsigned size, bool is_write)
166
+ unsigned size, bool is_write,
167
+ MemTxAttrs attrs)
168
{
169
return false;
170
}
171
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
172
access_size = MAX(MIN(size, access_size_max), access_size_min);
173
for (i = 0; i < size; i += access_size) {
174
if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
175
- is_write)) {
176
+ is_write, attrs)) {
177
return false;
178
}
179
}
180
--
60
--
181
2.17.1
61
2.20.1
182
62
183
63
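After the valid.accepts change above, every accepts hook sees the transaction attributes and can make its validity decision depend on them. The sketch below shows the new callback shape for a hypothetical device (mydev_mem_accepts() is a made-up name, not from the series):

  /* Illustrative sketch, not from the series. */
  static bool mydev_mem_accepts(void *opaque, hwaddr addr,
                                unsigned size, bool is_write,
                                MemTxAttrs attrs)
  {
      /* e.g. require natural alignment and refuse unprivileged accesses */
      return !(addr & (size - 1)) && !attrs.user;
  }
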
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Coverity found that the string returned by 'object_get_canonical_path' was not
3
We can move the definition of hvf_vcpu_exec() into our internal
4
being freed at two locations in the model (CID 1391294 and CID 1391293) and
4
hvf header, obsoleting the need for hvf-accel-ops.h.
5
also that a memset was being called with a value greater than the max of a byte
6
on the second argument (CID 1391286). This patch corrects this by adding the
7
freeing of the strings and also changing the memset to zero instead on
8
descriptor unaligned errors.
9
5
10
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210519202253.76782-11-agraf@csgraf.de
13
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
hw/dma/xlnx-zdma.c | 10 +++++++---
12
accel/hvf/hvf-accel-ops.h | 17 -----------------
18
1 file changed, 7 insertions(+), 3 deletions(-)
13
include/sysemu/hvf_int.h | 1 +
14
accel/hvf/hvf-accel-ops.c | 2 --
15
target/i386/hvf/hvf.c | 2 --
16
4 files changed, 1 insertion(+), 21 deletions(-)
17
delete mode 100644 accel/hvf/hvf-accel-ops.h
19
18
20
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
20
deleted file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- a/accel/hvf/hvf-accel-ops.h
23
+++ /dev/null
24
@@ -XXX,XX +XXX,XX @@
25
-/*
26
- * Accelerator CPUS Interface
27
- *
28
- * Copyright 2020 SUSE LLC
29
- *
30
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
31
- * See the COPYING file in the top-level directory.
32
- */
33
-
34
-#ifndef HVF_CPUS_H
35
-#define HVF_CPUS_H
36
-
37
-#include "sysemu/cpus.h"
38
-
39
-int hvf_vcpu_exec(CPUState *);
40
-
41
-#endif /* HVF_CPUS_H */
42
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
21
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/dma/xlnx-zdma.c
44
--- a/include/sysemu/hvf_int.h
23
+++ b/hw/dma/xlnx-zdma.c
45
+++ b/include/sysemu/hvf_int.h
24
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
46
@@ -XXX,XX +XXX,XX @@ extern HVFState *hvf_state;
25
qemu_log_mask(LOG_GUEST_ERROR,
47
void assert_hvf_ok(hv_return_t ret);
26
"zdma: unaligned descriptor at %" PRIx64,
48
int hvf_arch_init_vcpu(CPUState *cpu);
27
addr);
49
void hvf_arch_vcpu_destroy(CPUState *cpu);
28
- memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
50
+int hvf_vcpu_exec(CPUState *);
29
+ memset(buf, 0x0, sizeof(XlnxZDMADescr));
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
30
s->error = true;
52
int hvf_put_registers(CPUState *);
31
return false;
53
int hvf_get_registers(CPUState *);
32
}
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
33
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
55
index XXXXXXX..XXXXXXX 100644
34
RegisterInfo *r = &s->regs_info[addr / 4];
56
--- a/accel/hvf/hvf-accel-ops.c
35
57
+++ b/accel/hvf/hvf-accel-ops.c
36
if (!r->data) {
58
@@ -XXX,XX +XXX,XX @@
37
+ gchar *path = object_get_canonical_path(OBJECT(s));
59
#include "sysemu/runstate.h"
38
qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
60
#include "qemu/guest-random.h"
39
- object_get_canonical_path(OBJECT(s)),
61
40
+ path,
62
-#include "hvf-accel-ops.h"
41
addr);
63
-
42
+ g_free(path);
64
HVFState *hvf_state;
43
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
65
44
zdma_ch_imr_update_irq(s);
66
/* Memory slots */
45
return 0;
67
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
46
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
68
index XXXXXXX..XXXXXXX 100644
47
RegisterInfo *r = &s->regs_info[addr / 4];
69
--- a/target/i386/hvf/hvf.c
48
70
+++ b/target/i386/hvf/hvf.c
49
if (!r->data) {
71
@@ -XXX,XX +XXX,XX @@
50
+ gchar *path = object_get_canonical_path(OBJECT(s));
72
#include "qemu/accel.h"
51
qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
73
#include "target/i386/cpu.h"
52
- object_get_canonical_path(OBJECT(s)),
74
53
+ path,
75
-#include "hvf-accel-ops.h"
54
addr, value);
76
-
55
+ g_free(path);
77
void vmx_update_tpr(CPUState *cpu)
56
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
78
{
57
zdma_ch_imr_update_irq(s);
79
/* TODO: need integrate APIC handling */
58
return;
59
--
80
--
60
2.17.1
81
2.20.1
61
82
62
83
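The xlnx-zdma patch above fixes two easy-to-repeat mistakes: object_get_canonical_path() returns a string the caller owns and must free, and memset() truncates its value argument to one byte, so memset(buf, 0xdeadbeef, len) was really filling the descriptor with 0xef. A hedged sketch of the corrected pattern (zdma_log_bad_descriptor() is a made-up helper, not the patch itself):

  /* Illustrative sketch, not from the series. */
  static void zdma_log_bad_descriptor(XlnxZDMA *s, void *buf)
  {
      /* Caller-owned string: g_autofree (or an explicit g_free) avoids
       * the leak Coverity flagged. */
      g_autofree gchar *path = object_get_canonical_path(OBJECT(s));

      qemu_log("%s: unaligned descriptor\n", path);
      /* Zero-fill the descriptor; a multi-byte pattern would be silently
       * truncated by memset() anyway. */
      memset(buf, 0, sizeof(XlnxZDMADescr));
  }
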
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
cpregs_keys is an uint32_t* so the allocation should use uint32_t.
3
We will need more than a single field for hvf going forward. To keep
4
g_new is even better because it is type-safe.
4
the global vcpu struct uncluttered, let's allocate a special hvf vcpu
5
struct, similar to how hax does it.
5
6
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-12-agraf@csgraf.de
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
target/arm/gdbstub.c | 3 +--
16
include/hw/core/cpu.h | 3 +-
12
1 file changed, 1 insertion(+), 2 deletions(-)
17
include/sysemu/hvf_int.h | 4 +
18
target/i386/hvf/vmx.h | 24 +++--
19
accel/hvf/hvf-accel-ops.c | 8 +-
20
target/i386/hvf/hvf.c | 104 +++++++++---------
21
target/i386/hvf/x86.c | 28 ++---
22
target/i386/hvf/x86_descr.c | 26 ++---
23
target/i386/hvf/x86_emu.c | 62 +++++------
24
target/i386/hvf/x86_mmu.c | 4 +-
25
target/i386/hvf/x86_task.c | 12 +--
26
target/i386/hvf/x86hvf.c | 210 ++++++++++++++++++------------------
27
11 files changed, 248 insertions(+), 237 deletions(-)
13
28
14
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
29
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
15
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/gdbstub.c
31
--- a/include/hw/core/cpu.h
17
+++ b/target/arm/gdbstub.c
32
+++ b/include/hw/core/cpu.h
18
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
33
@@ -XXX,XX +XXX,XX @@ struct KVMState;
19
RegisterSysregXmlParam param = {cs, s};
34
struct kvm_run;
20
35
21
cpu->dyn_xml.num_cpregs = 0;
36
struct hax_vcpu_state;
22
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
37
+struct hvf_vcpu_state;
23
- g_hash_table_size(cpu->cp_regs));
38
24
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
39
#define TB_JMP_CACHE_BITS 12
25
g_string_printf(s, "<?xml version=\"1.0\"?>");
40
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
26
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
41
@@ -XXX,XX +XXX,XX @@ struct CPUState {
27
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
42
43
struct hax_vcpu_state *hax_vcpu;
44
45
- int hvf_fd;
46
+ struct hvf_vcpu_state *hvf;
47
48
/* track IOMMUs whose translations we've cached in the TCG TLB */
49
GArray *iommu_notifiers;
50
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
51
index XXXXXXX..XXXXXXX 100644
52
--- a/include/sysemu/hvf_int.h
53
+++ b/include/sysemu/hvf_int.h
54
@@ -XXX,XX +XXX,XX @@ struct HVFState {
55
};
56
extern HVFState *hvf_state;
57
58
+struct hvf_vcpu_state {
59
+ int fd;
60
+};
61
+
62
void assert_hvf_ok(hv_return_t ret);
63
int hvf_arch_init_vcpu(CPUState *cpu);
64
void hvf_arch_vcpu_destroy(CPUState *cpu);
65
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/i386/hvf/vmx.h
68
+++ b/target/i386/hvf/vmx.h
69
@@ -XXX,XX +XXX,XX @@
70
#include "vmcs.h"
71
#include "cpu.h"
72
#include "x86.h"
73
+#include "sysemu/hvf.h"
74
+#include "sysemu/hvf_int.h"
75
76
#include "exec/address-spaces.h"
77
78
@@ -XXX,XX +XXX,XX @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
79
uint64_t val;
80
81
/* BUG, should take considering overlap.. */
82
- wreg(cpu->hvf_fd, HV_X86_RIP, rip);
83
+ wreg(cpu->hvf->fd, HV_X86_RIP, rip);
84
env->eip = rip;
85
86
/* after moving forward in rip, we need to clean INTERRUPTABILITY */
87
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
88
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
89
if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
90
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
91
env->hflags &= ~HF_INHIBIT_IRQ_MASK;
92
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
93
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY,
94
val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
95
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
96
}
97
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
98
CPUX86State *env = &x86_cpu->env;
99
100
env->hflags2 &= ~HF2_NMI_MASK;
101
- uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
102
+ uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
103
gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
104
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
105
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
106
}
107
108
static inline void vmx_set_nmi_blocking(CPUState *cpu)
109
@@ -XXX,XX +XXX,XX @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
110
CPUX86State *env = &x86_cpu->env;
111
112
env->hflags2 |= HF2_NMI_MASK;
113
- uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
114
+ uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
115
gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
116
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
117
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
118
}
119
120
static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
121
{
122
uint64_t val;
123
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
124
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
125
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
126
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
127
VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
128
129
}
130
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
131
{
132
133
uint64_t val;
134
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
135
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
136
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
137
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
138
~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
139
}
140
141
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ type_init(hvf_type_init);

 static void hvf_vcpu_destroy(CPUState *cpu)
 {
-    hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
+    hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
     assert_hvf_ok(ret);

     hvf_arch_vcpu_destroy(cpu);
+    g_free(cpu->hvf);
+    cpu->hvf = NULL;
 }

 static int hvf_init_vcpu(CPUState *cpu)
 {
     int r;

+    cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
+
     /* init cpu signals */
     sigset_t set;
     struct sigaction sigact;
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
     pthread_sigmask(SIG_BLOCK, NULL, &set);
     sigdelset(&set, SIG_IPI);

-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);

diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
177
index XXXXXXX..XXXXXXX 100644
178
--- a/target/i386/hvf/hvf.c
179
+++ b/target/i386/hvf/hvf.c
180
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
181
int tpr = cpu_get_apic_tpr(x86_cpu->apic_state) << 4;
182
int irr = apic_get_highest_priority_irr(x86_cpu->apic_state);
183
184
- wreg(cpu->hvf_fd, HV_X86_TPR, tpr);
185
+ wreg(cpu->hvf->fd, HV_X86_TPR, tpr);
186
if (irr == -1) {
187
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
188
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
189
} else {
190
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
191
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
192
irr >> 4);
193
}
194
}
195
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
196
static void update_apic_tpr(CPUState *cpu)
197
{
198
X86CPU *x86_cpu = X86_CPU(cpu);
199
- int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4;
200
+ int tpr = rreg(cpu->hvf->fd, HV_X86_TPR) >> 4;
201
cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
202
}
203
204
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
205
}
206
207
/* set VMCS control fields */
208
- wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
209
+ wvmcs(cpu->hvf->fd, VMCS_PIN_BASED_CTLS,
210
cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased,
211
VMCS_PIN_BASED_CTLS_EXTINT |
212
VMCS_PIN_BASED_CTLS_NMI |
213
VMCS_PIN_BASED_CTLS_VNMI));
214
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
215
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS,
216
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased,
217
VMCS_PRI_PROC_BASED_CTLS_HLT |
218
VMCS_PRI_PROC_BASED_CTLS_MWAIT |
219
VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET |
220
VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) |
221
VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL);
222
- wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
223
+ wvmcs(cpu->hvf->fd, VMCS_SEC_PROC_BASED_CTLS,
224
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2,
225
VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES));
226
227
- wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
228
+ wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
229
0));
230
- wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
231
+ wvmcs(cpu->hvf->fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
232
233
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
234
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
235
236
x86cpu = X86_CPU(cpu);
237
x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
238
239
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1);
240
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1);
241
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1);
242
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1);
243
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1);
244
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1);
245
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1);
246
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1);
247
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);
248
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1);
249
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
250
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
251
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
252
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
253
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_CSTAR, 1);
254
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FMASK, 1);
255
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FSBASE, 1);
256
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_GSBASE, 1);
257
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_KERNELGSBASE, 1);
258
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_TSC_AUX, 1);
259
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_TSC, 1);
260
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_CS, 1);
261
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_EIP, 1);
262
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_ESP, 1);
263
264
return 0;
265
}
266
@@ -XXX,XX +XXX,XX @@ static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idtvec_in
267
}
268
if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) {
269
env->has_error_code = true;
270
- env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR);
271
+ env->error_code = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_ERROR);
272
}
273
}
274
- if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
275
+ if ((rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
276
VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) {
277
env->hflags2 |= HF2_NMI_MASK;
278
} else {
279
env->hflags2 &= ~HF2_NMI_MASK;
280
}
281
- if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
282
+ if (rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
283
(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
284
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
285
env->hflags |= HF_INHIBIT_IRQ_MASK;
286
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
287
return EXCP_HLT;
288
}
289
290
- hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
291
+ hv_return_t r = hv_vcpu_run(cpu->hvf->fd);
292
assert_hvf_ok(r);
293
294
/* handle VMEXIT */
295
- uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
296
- uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION);
297
- uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
298
+ uint64_t exit_reason = rvmcs(cpu->hvf->fd, VMCS_EXIT_REASON);
299
+ uint64_t exit_qual = rvmcs(cpu->hvf->fd, VMCS_EXIT_QUALIFICATION);
300
+ uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf->fd,
301
VMCS_EXIT_INSTRUCTION_LENGTH);
302
303
- uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
304
+ uint64_t idtvec_info = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
305
306
hvf_store_events(cpu, ins_len, idtvec_info);
307
- rip = rreg(cpu->hvf_fd, HV_X86_RIP);
308
- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
309
+ rip = rreg(cpu->hvf->fd, HV_X86_RIP);
310
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
311
312
qemu_mutex_lock_iothread();
313
314
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
315
case EXIT_REASON_EPT_FAULT:
316
{
317
hvf_slot *slot;
318
- uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
319
+ uint64_t gpa = rvmcs(cpu->hvf->fd, VMCS_GUEST_PHYSICAL_ADDRESS);
320
321
if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) &&
322
((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) {
323
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
324
store_regs(cpu);
325
break;
326
} else if (!string && !in) {
327
- RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX);
328
+ RAX(env) = rreg(cpu->hvf->fd, HV_X86_RAX);
329
hvf_handle_io(env, port, &RAX(env), 1, size, 1);
330
macvm_set_rip(cpu, rip + ins_len);
331
break;
332
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
333
break;
334
}
335
case EXIT_REASON_CPUID: {
336
- uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
337
- uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX);
338
- uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
339
- uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
340
+ uint32_t rax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
341
+ uint32_t rbx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RBX);
342
+ uint32_t rcx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
343
+ uint32_t rdx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
344
345
if (rax == 1) {
346
/* CPUID1.ecx.OSXSAVE needs to know CR4 */
347
- env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
348
+ env->cr[4] = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
349
}
350
hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx);
351
352
- wreg(cpu->hvf_fd, HV_X86_RAX, rax);
353
- wreg(cpu->hvf_fd, HV_X86_RBX, rbx);
354
- wreg(cpu->hvf_fd, HV_X86_RCX, rcx);
355
- wreg(cpu->hvf_fd, HV_X86_RDX, rdx);
356
+ wreg(cpu->hvf->fd, HV_X86_RAX, rax);
357
+ wreg(cpu->hvf->fd, HV_X86_RBX, rbx);
358
+ wreg(cpu->hvf->fd, HV_X86_RCX, rcx);
359
+ wreg(cpu->hvf->fd, HV_X86_RDX, rdx);
360
361
macvm_set_rip(cpu, rip + ins_len);
362
break;
363
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
364
case EXIT_REASON_XSETBV: {
365
X86CPU *x86_cpu = X86_CPU(cpu);
366
CPUX86State *env = &x86_cpu->env;
367
- uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
368
- uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
369
- uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
370
+ uint32_t eax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
371
+ uint32_t ecx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
372
+ uint32_t edx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
373
374
if (ecx) {
375
macvm_set_rip(cpu, rip + ins_len);
376
break;
377
}
378
env->xcr0 = ((uint64_t)edx << 32) | eax;
379
- wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
380
+ wreg(cpu->hvf->fd, HV_X86_XCR0, env->xcr0 | 1);
381
macvm_set_rip(cpu, rip + ins_len);
382
break;
383
}
384
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
385
386
switch (cr) {
387
case 0x0: {
388
- macvm_set_cr0(cpu->hvf_fd, RRX(env, reg));
389
+ macvm_set_cr0(cpu->hvf->fd, RRX(env, reg));
390
break;
391
}
392
case 4: {
393
- macvm_set_cr4(cpu->hvf_fd, RRX(env, reg));
394
+ macvm_set_cr4(cpu->hvf->fd, RRX(env, reg));
395
break;
396
}
397
case 8: {
398
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
399
break;
400
}
401
case EXIT_REASON_TASK_SWITCH: {
402
- uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
403
+ uint64_t vinfo = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
404
x68_segment_selector sel = {.sel = exit_qual & 0xffff};
405
vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
406
vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
407
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
408
break;
409
}
410
case EXIT_REASON_RDPMC:
411
- wreg(cpu->hvf_fd, HV_X86_RAX, 0);
412
- wreg(cpu->hvf_fd, HV_X86_RDX, 0);
413
+ wreg(cpu->hvf->fd, HV_X86_RAX, 0);
414
+ wreg(cpu->hvf->fd, HV_X86_RDX, 0);
415
macvm_set_rip(cpu, rip + ins_len);
416
break;
417
case VMX_REASON_VMCALL:
418
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
419
index XXXXXXX..XXXXXXX 100644
420
--- a/target/i386/hvf/x86.c
421
+++ b/target/i386/hvf/x86.c
422
@@ -XXX,XX +XXX,XX @@ bool x86_read_segment_descriptor(struct CPUState *cpu,
423
}
424
425
if (GDT_SEL == sel.ti) {
426
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
427
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
428
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
429
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
430
} else {
431
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
432
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
433
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
434
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
435
}
436
437
if (sel.index * 8 >= limit) {
438
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
439
uint32_t limit;
440
441
if (GDT_SEL == sel.ti) {
442
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
443
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
444
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
445
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
446
} else {
447
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
448
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
449
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
450
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
451
}
452
453
if (sel.index * 8 >= limit) {
454
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
455
bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
456
int gate)
457
{
458
- target_ulong base = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
459
- uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
460
+ target_ulong base = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_BASE);
461
+ uint32_t limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
462
463
memset(idt_desc, 0, sizeof(*idt_desc));
464
if (gate * 8 >= limit) {
465
@@ -XXX,XX +XXX,XX @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
466
467
bool x86_is_protected(struct CPUState *cpu)
468
{
469
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
470
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
471
return cr0 & CR0_PE;
472
}
473
474
@@ -XXX,XX +XXX,XX @@ bool x86_is_v8086(struct CPUState *cpu)
475
476
bool x86_is_long_mode(struct CPUState *cpu)
477
{
478
- return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
479
+ return rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
480
}
481
482
bool x86_is_long64_mode(struct CPUState *cpu)
483
@@ -XXX,XX +XXX,XX @@ bool x86_is_long64_mode(struct CPUState *cpu)
484
485
bool x86_is_paging_mode(struct CPUState *cpu)
486
{
487
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
488
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
489
return cr0 & CR0_PG;
490
}
491
492
bool x86_is_pae_enabled(struct CPUState *cpu)
493
{
494
- uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
495
+ uint64_t cr4 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
496
return cr4 & CR4_PAE;
497
}
498
499
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
500
index XXXXXXX..XXXXXXX 100644
501
--- a/target/i386/hvf/x86_descr.c
502
+++ b/target/i386/hvf/x86_descr.c
503
@@ -XXX,XX +XXX,XX @@ static const struct vmx_segment_field {
504
505
uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg)
506
{
507
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
508
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
509
}
510
511
uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg)
512
{
513
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
514
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
515
}
516
517
uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
518
{
519
- return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
520
+ return rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
521
}
522
523
x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
524
{
525
x68_segment_selector sel;
526
- sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
527
+ sel.sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
528
return sel;
529
}
530
531
void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg)
532
{
533
- wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel);
534
+ wvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector, selector.sel);
535
}
536
537
void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
538
{
539
- desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
540
- desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
541
- desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
542
- desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
543
+ desc->sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
544
+ desc->base = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
545
+ desc->limit = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
546
+ desc->ar = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
547
}
548
549
void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
550
{
551
const struct vmx_segment_field *sf = &vmx_segment_fields[seg];
552
553
- wvmcs(cpu->hvf_fd, sf->base, desc->base);
554
- wvmcs(cpu->hvf_fd, sf->limit, desc->limit);
555
- wvmcs(cpu->hvf_fd, sf->selector, desc->sel);
556
- wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar);
557
+ wvmcs(cpu->hvf->fd, sf->base, desc->base);
558
+ wvmcs(cpu->hvf->fd, sf->limit, desc->limit);
559
+ wvmcs(cpu->hvf->fd, sf->selector, desc->sel);
560
+ wvmcs(cpu->hvf->fd, sf->ar_bytes, desc->ar);
561
}
562
563
void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc)
564
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
565
index XXXXXXX..XXXXXXX 100644
566
--- a/target/i386/hvf/x86_emu.c
567
+++ b/target/i386/hvf/x86_emu.c
568
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)
569
570
switch (msr) {
571
case MSR_IA32_TSC:
572
- val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET);
573
+ val = rdtscp() + rvmcs(cpu->hvf->fd, VMCS_TSC_OFFSET);
574
break;
575
case MSR_IA32_APICBASE:
576
val = cpu_get_apic_base(X86_CPU(cpu)->apic_state);
577
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)
578
val = x86_cpu->ucode_rev;
579
break;
580
case MSR_EFER:
581
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER);
582
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER);
583
break;
584
case MSR_FSBASE:
585
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE);
586
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE);
587
break;
588
case MSR_GSBASE:
589
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE);
590
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE);
591
break;
592
case MSR_KERNELGSBASE:
593
- val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE);
594
+ val = rvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE);
595
break;
596
case MSR_STAR:
597
abort();
598
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
599
cpu_set_apic_base(X86_CPU(cpu)->apic_state, data);
600
break;
601
case MSR_FSBASE:
602
- wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
603
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE, data);
604
break;
605
case MSR_GSBASE:
606
- wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
607
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE, data);
608
break;
609
case MSR_KERNELGSBASE:
610
- wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data);
611
+ wvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE, data);
612
break;
613
case MSR_STAR:
614
abort();
615
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
616
break;
617
case MSR_EFER:
618
/*printf("new efer %llx\n", EFER(cpu));*/
619
- wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
620
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER, data);
621
if (data & MSR_EFER_NXE) {
622
- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
623
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
624
}
625
break;
626
case MSR_MTRRphysBase(0):
627
@@ -XXX,XX +XXX,XX @@ void load_regs(struct CPUState *cpu)
628
CPUX86State *env = &x86_cpu->env;
629
630
int i = 0;
631
- RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX);
632
- RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX);
633
- RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX);
634
- RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX);
635
- RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI);
636
- RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI);
637
- RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP);
638
- RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP);
639
+ RRX(env, R_EAX) = rreg(cpu->hvf->fd, HV_X86_RAX);
640
+ RRX(env, R_EBX) = rreg(cpu->hvf->fd, HV_X86_RBX);
641
+ RRX(env, R_ECX) = rreg(cpu->hvf->fd, HV_X86_RCX);
642
+ RRX(env, R_EDX) = rreg(cpu->hvf->fd, HV_X86_RDX);
643
+ RRX(env, R_ESI) = rreg(cpu->hvf->fd, HV_X86_RSI);
644
+ RRX(env, R_EDI) = rreg(cpu->hvf->fd, HV_X86_RDI);
645
+ RRX(env, R_ESP) = rreg(cpu->hvf->fd, HV_X86_RSP);
646
+ RRX(env, R_EBP) = rreg(cpu->hvf->fd, HV_X86_RBP);
647
for (i = 8; i < 16; i++) {
648
- RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i);
649
+ RRX(env, i) = rreg(cpu->hvf->fd, HV_X86_RAX + i);
650
}
651
652
- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
653
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
654
rflags_to_lflags(env);
655
- env->eip = rreg(cpu->hvf_fd, HV_X86_RIP);
656
+ env->eip = rreg(cpu->hvf->fd, HV_X86_RIP);
657
}
658
659
void store_regs(struct CPUState *cpu)
660
@@ -XXX,XX +XXX,XX @@ void store_regs(struct CPUState *cpu)
661
CPUX86State *env = &x86_cpu->env;
662
663
int i = 0;
664
- wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env));
665
- wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env));
666
- wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env));
667
- wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env));
668
- wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env));
669
- wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env));
670
- wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env));
671
- wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env));
672
+ wreg(cpu->hvf->fd, HV_X86_RAX, RAX(env));
673
+ wreg(cpu->hvf->fd, HV_X86_RBX, RBX(env));
674
+ wreg(cpu->hvf->fd, HV_X86_RCX, RCX(env));
675
+ wreg(cpu->hvf->fd, HV_X86_RDX, RDX(env));
676
+ wreg(cpu->hvf->fd, HV_X86_RSI, RSI(env));
677
+ wreg(cpu->hvf->fd, HV_X86_RDI, RDI(env));
678
+ wreg(cpu->hvf->fd, HV_X86_RBP, RBP(env));
679
+ wreg(cpu->hvf->fd, HV_X86_RSP, RSP(env));
680
for (i = 8; i < 16; i++) {
681
- wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
682
+ wreg(cpu->hvf->fd, HV_X86_RAX + i, RRX(env, i));
683
}
684
685
lflags_to_rflags(env);
686
- wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
687
+ wreg(cpu->hvf->fd, HV_X86_RFLAGS, env->eflags);
688
macvm_set_rip(cpu, env->eip);
689
}
690
691
diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
692
index XXXXXXX..XXXXXXX 100644
693
--- a/target/i386/hvf/x86_mmu.c
694
+++ b/target/i386/hvf/x86_mmu.c
695
@@ -XXX,XX +XXX,XX @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
696
pt->err_code |= MMU_PAGE_PT;
697
}
698
699
- uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
700
+ uint32_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
701
/* check protection */
702
if (cr0 & CR0_WP) {
703
if (pt->write_access && !pte_write_access(pte)) {
704
@@ -XXX,XX +XXX,XX @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
705
{
706
int top_level, level;
707
bool is_large = false;
708
- target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
709
+ target_ulong cr3 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR3);
710
uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
711
712
memset(pt, 0, sizeof(*pt));
713
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
714
index XXXXXXX..XXXXXXX 100644
715
--- a/target/i386/hvf/x86_task.c
716
+++ b/target/i386/hvf/x86_task.c
717
@@ -XXX,XX +XXX,XX @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
718
X86CPU *x86_cpu = X86_CPU(cpu);
719
CPUX86State *env = &x86_cpu->env;
720
721
- wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
722
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_CR3, tss->cr3);
723
724
env->eip = tss->eip;
725
env->eflags = tss->eflags | 2;
726
@@ -XXX,XX +XXX,XX @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme
727
728
void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
729
{
730
- uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
731
+ uint64_t rip = rreg(cpu->hvf->fd, HV_X86_RIP);
732
if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
733
gate_type != VMCS_INTR_T_HWINTR &&
734
gate_type != VMCS_INTR_T_NMI)) {
735
- int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH);
736
+ int ins_len = rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH);
737
macvm_set_rip(cpu, rip + ins_len);
738
return;
739
}
740
@@ -XXX,XX +XXX,XX @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
741
//ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc);
742
VM_PANIC("task_switch_16");
743
744
- macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
745
+ macvm_set_cr0(cpu->hvf->fd, rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0) | CR0_TS);
746
x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg);
747
vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR);
748
749
store_regs(cpu);
750
751
- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
752
- hv_vcpu_flush(cpu->hvf_fd);
753
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
754
+ hv_vcpu_flush(cpu->hvf->fd);
755
}
756
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
757
index XXXXXXX..XXXXXXX 100644
758
--- a/target/i386/hvf/x86hvf.c
759
+++ b/target/i386/hvf/x86hvf.c
760
@@ -XXX,XX +XXX,XX @@ void hvf_put_xsave(CPUState *cpu_state)
761
762
x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
763
764
- if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
765
+ if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
766
abort();
767
}
768
}
769
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
770
CPUX86State *env = &X86_CPU(cpu_state)->env;
771
struct vmx_segment seg;
772
773
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
774
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
775
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
776
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
777
778
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
779
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
780
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
781
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
782
783
- /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
784
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
785
+ /* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
786
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
787
vmx_update_tpr(cpu_state);
788
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
789
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);
790
791
- macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]);
792
- macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]);
793
+ macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]);
794
+ macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]);
795
796
hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
797
vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
798
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
799
hvf_set_segment(cpu_state, &seg, &env->ldt, false);
800
vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
801
802
- hv_vcpu_flush(cpu_state->hvf_fd);
803
+ hv_vcpu_flush(cpu_state->hvf->fd);
804
}
805
806
void hvf_put_msrs(CPUState *cpu_state)
807
{
808
CPUX86State *env = &X86_CPU(cpu_state)->env;
809
810
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS,
811
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS,
812
env->sysenter_cs);
813
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP,
814
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP,
815
env->sysenter_esp);
816
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP,
817
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP,
818
env->sysenter_eip);
819
820
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star);
821
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_STAR, env->star);
822
823
#ifdef TARGET_X86_64
824
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar);
825
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
826
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask);
827
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar);
828
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_CSTAR, env->cstar);
829
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, env->kernelgsbase);
830
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FMASK, env->fmask);
831
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_LSTAR, env->lstar);
832
#endif
833
834
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
835
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
836
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_GSBASE, env->segs[R_GS].base);
837
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FSBASE, env->segs[R_FS].base);
838
}
839
840
841
@@ -XXX,XX +XXX,XX @@ void hvf_get_xsave(CPUState *cpu_state)
842
843
xsave = X86_CPU(cpu_state)->env.xsave_buf;
844
845
- if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
846
+ if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
847
abort();
848
}
849
850
@@ -XXX,XX +XXX,XX @@ void hvf_get_segments(CPUState *cpu_state)
851
vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR);
852
hvf_get_segment(&env->ldt, &seg);
853
854
- env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
855
- env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE);
856
- env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
857
- env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE);
858
+ env->idt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
859
+ env->idt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE);
860
+ env->gdt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
861
+ env->gdt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE);
862
863
- env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0);
864
+ env->cr[0] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR0);
865
env->cr[2] = 0;
866
- env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
867
- env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
868
+ env->cr[3] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3);
869
+ env->cr[4] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR4);
870
871
- env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
872
+ env->efer = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER);
873
}
874
875
void hvf_get_msrs(CPUState *cpu_state)
876
@@ -XXX,XX +XXX,XX @@ void hvf_get_msrs(CPUState *cpu_state)
877
CPUX86State *env = &X86_CPU(cpu_state)->env;
878
uint64_t tmp;
879
880
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
881
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, &tmp);
882
env->sysenter_cs = tmp;
883
884
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
885
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, &tmp);
886
env->sysenter_esp = tmp;
887
888
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
889
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, &tmp);
890
env->sysenter_eip = tmp;
891
892
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star);
893
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_STAR, &env->star);
894
895
#ifdef TARGET_X86_64
896
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar);
897
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
898
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask);
899
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar);
900
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_CSTAR, &env->cstar);
901
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, &env->kernelgsbase);
902
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_FMASK, &env->fmask);
903
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_LSTAR, &env->lstar);
904
#endif
905
906
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
907
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_APICBASE, &tmp);
908
909
- env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
910
+ env->tsc = rdtscp() + rvmcs(cpu_state->hvf->fd, VMCS_TSC_OFFSET);
911
}
912
913
int hvf_put_registers(CPUState *cpu_state)
914
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)
915
X86CPU *x86cpu = X86_CPU(cpu_state);
916
CPUX86State *env = &x86cpu->env;
917
918
- wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
919
- wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
920
- wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
921
- wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
922
- wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
923
- wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
924
- wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
925
- wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
926
- wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]);
927
- wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]);
928
- wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]);
929
- wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]);
930
- wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]);
931
- wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]);
932
- wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]);
933
- wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
934
- wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
935
- wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
936
+ wreg(cpu_state->hvf->fd, HV_X86_RAX, env->regs[R_EAX]);
937
+ wreg(cpu_state->hvf->fd, HV_X86_RBX, env->regs[R_EBX]);
938
+ wreg(cpu_state->hvf->fd, HV_X86_RCX, env->regs[R_ECX]);
939
+ wreg(cpu_state->hvf->fd, HV_X86_RDX, env->regs[R_EDX]);
940
+ wreg(cpu_state->hvf->fd, HV_X86_RBP, env->regs[R_EBP]);
941
+ wreg(cpu_state->hvf->fd, HV_X86_RSP, env->regs[R_ESP]);
942
+ wreg(cpu_state->hvf->fd, HV_X86_RSI, env->regs[R_ESI]);
943
+ wreg(cpu_state->hvf->fd, HV_X86_RDI, env->regs[R_EDI]);
944
+ wreg(cpu_state->hvf->fd, HV_X86_R8, env->regs[8]);
945
+ wreg(cpu_state->hvf->fd, HV_X86_R9, env->regs[9]);
946
+ wreg(cpu_state->hvf->fd, HV_X86_R10, env->regs[10]);
947
+ wreg(cpu_state->hvf->fd, HV_X86_R11, env->regs[11]);
948
+ wreg(cpu_state->hvf->fd, HV_X86_R12, env->regs[12]);
949
+ wreg(cpu_state->hvf->fd, HV_X86_R13, env->regs[13]);
950
+ wreg(cpu_state->hvf->fd, HV_X86_R14, env->regs[14]);
951
+ wreg(cpu_state->hvf->fd, HV_X86_R15, env->regs[15]);
952
+ wreg(cpu_state->hvf->fd, HV_X86_RFLAGS, env->eflags);
953
+ wreg(cpu_state->hvf->fd, HV_X86_RIP, env->eip);
954
955
- wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
956
+ wreg(cpu_state->hvf->fd, HV_X86_XCR0, env->xcr0);
957
958
hvf_put_xsave(cpu_state);
959
960
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)
961
962
hvf_put_msrs(cpu_state);
963
964
- wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
965
- wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
966
- wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
967
- wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]);
968
- wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]);
969
- wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
970
- wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
971
- wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
972
+ wreg(cpu_state->hvf->fd, HV_X86_DR0, env->dr[0]);
973
+ wreg(cpu_state->hvf->fd, HV_X86_DR1, env->dr[1]);
974
+ wreg(cpu_state->hvf->fd, HV_X86_DR2, env->dr[2]);
975
+ wreg(cpu_state->hvf->fd, HV_X86_DR3, env->dr[3]);
976
+ wreg(cpu_state->hvf->fd, HV_X86_DR4, env->dr[4]);
977
+ wreg(cpu_state->hvf->fd, HV_X86_DR5, env->dr[5]);
978
+ wreg(cpu_state->hvf->fd, HV_X86_DR6, env->dr[6]);
979
+ wreg(cpu_state->hvf->fd, HV_X86_DR7, env->dr[7]);
980
981
return 0;
982
}
983
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
984
X86CPU *x86cpu = X86_CPU(cpu_state);
985
CPUX86State *env = &x86cpu->env;
986
987
- env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX);
988
- env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX);
989
- env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX);
990
- env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX);
991
- env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP);
992
- env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP);
993
- env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI);
994
- env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI);
995
- env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8);
996
- env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9);
997
- env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10);
998
- env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11);
999
- env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12);
1000
- env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
1001
- env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
1002
- env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
1003
+ env->regs[R_EAX] = rreg(cpu_state->hvf->fd, HV_X86_RAX);
1004
+ env->regs[R_EBX] = rreg(cpu_state->hvf->fd, HV_X86_RBX);
1005
+ env->regs[R_ECX] = rreg(cpu_state->hvf->fd, HV_X86_RCX);
1006
+ env->regs[R_EDX] = rreg(cpu_state->hvf->fd, HV_X86_RDX);
1007
+ env->regs[R_EBP] = rreg(cpu_state->hvf->fd, HV_X86_RBP);
1008
+ env->regs[R_ESP] = rreg(cpu_state->hvf->fd, HV_X86_RSP);
1009
+ env->regs[R_ESI] = rreg(cpu_state->hvf->fd, HV_X86_RSI);
1010
+ env->regs[R_EDI] = rreg(cpu_state->hvf->fd, HV_X86_RDI);
1011
+ env->regs[8] = rreg(cpu_state->hvf->fd, HV_X86_R8);
1012
+ env->regs[9] = rreg(cpu_state->hvf->fd, HV_X86_R9);
1013
+ env->regs[10] = rreg(cpu_state->hvf->fd, HV_X86_R10);
1014
+ env->regs[11] = rreg(cpu_state->hvf->fd, HV_X86_R11);
1015
+ env->regs[12] = rreg(cpu_state->hvf->fd, HV_X86_R12);
1016
+ env->regs[13] = rreg(cpu_state->hvf->fd, HV_X86_R13);
1017
+ env->regs[14] = rreg(cpu_state->hvf->fd, HV_X86_R14);
1018
+ env->regs[15] = rreg(cpu_state->hvf->fd, HV_X86_R15);
1019
1020
- env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
1021
- env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
1022
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
1023
+ env->eip = rreg(cpu_state->hvf->fd, HV_X86_RIP);
1024
1025
hvf_get_xsave(cpu_state);
1026
- env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
1027
+ env->xcr0 = rreg(cpu_state->hvf->fd, HV_X86_XCR0);
1028
1029
hvf_get_segments(cpu_state);
1030
hvf_get_msrs(cpu_state);
1031
1032
- env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
1033
- env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
1034
- env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
1035
- env->dr[3] = rreg(cpu_state->hvf_fd, HV_X86_DR3);
1036
- env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4);
1037
- env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
1038
- env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
1039
- env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
1040
+ env->dr[0] = rreg(cpu_state->hvf->fd, HV_X86_DR0);
1041
+ env->dr[1] = rreg(cpu_state->hvf->fd, HV_X86_DR1);
1042
+ env->dr[2] = rreg(cpu_state->hvf->fd, HV_X86_DR2);
1043
+ env->dr[3] = rreg(cpu_state->hvf->fd, HV_X86_DR3);
1044
+ env->dr[4] = rreg(cpu_state->hvf->fd, HV_X86_DR4);
1045
+ env->dr[5] = rreg(cpu_state->hvf->fd, HV_X86_DR5);
1046
+ env->dr[6] = rreg(cpu_state->hvf->fd, HV_X86_DR6);
1047
+ env->dr[7] = rreg(cpu_state->hvf->fd, HV_X86_DR7);
1048
1049
x86_update_hflags(env);
1050
return 0;
1051
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
1052
static void vmx_set_int_window_exiting(CPUState *cpu)
1053
{
1054
uint64_t val;
1055
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
1056
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
1057
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
1058
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
1059
VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
1060
}
1061
1062
void vmx_clear_int_window_exiting(CPUState *cpu)
1063
{
1064
uint64_t val;
1065
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
1066
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
1067
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
1068
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
1069
~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
1070
}
1071
1072
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
1073
uint64_t info = 0;
1074
if (have_event) {
1075
info = vector | intr_type | VMCS_INTR_VALID;
1076
- uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
1077
+ uint64_t reason = rvmcs(cpu_state->hvf->fd, VMCS_EXIT_REASON);
1078
if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
1079
vmx_clear_nmi_blocking(cpu_state);
1080
}
1081
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
1082
info &= ~(1 << 12); /* clear undefined bit */
1083
if (intr_type == VMCS_INTR_T_SWINTR ||
1084
intr_type == VMCS_INTR_T_SWEXCEPTION) {
1085
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
1086
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
1087
}
1088
1089
if (env->has_error_code) {
1090
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
1091
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_EXCEPTION_ERROR,
1092
env->error_code);
1093
/* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */
1094
info |= VMCS_INTR_DEL_ERRCODE;
1095
}
1096
/*printf("reinject %lx err %d\n", info, err);*/
1097
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
1098
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
1099
};
1100
}
1101
1102
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
1103
if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
1104
cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
1105
info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
1106
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
1107
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
1108
} else {
1109
vmx_set_nmi_window_exiting(cpu_state);
1110
}
1111
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
1112
int line = cpu_get_pic_interrupt(&x86cpu->env);
1113
cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
1114
if (line >= 0) {
1115
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
1116
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, line |
1117
VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
1118
}
1119
}
1120
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
1121
X86CPU *cpu = X86_CPU(cpu_state);
1122
CPUX86State *env = &cpu->env;
1123
1124
- env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
1125
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
1126
1127
if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
1128
cpu_synchronize_state(cpu_state);
--
2.17.1

--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.

The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.

Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/helper-head.h |  2 +-
 target/arm/helper-a64.c    | 35 +++++++++--------
 target/arm/helper.c        | 80 +++++++++++++++++++-------------------
 3 files changed, 59 insertions(+), 58 deletions(-)

From: Alexander Graf <agraf@csgraf.de>

The hooks we have that call us after reset, init and loadvm really all
just want to say "The reference of all register state is in the QEMU
vcpu struct, please push it".

We already have a working pushing mechanism though called cpu->vcpu_dirty,
so we can just reuse that for all of the above, syncing state properly the
next time we actually execute a vCPU.

This fixes PSCI resets on ARM, as they modify CPU state even after the
post init call has completed, but before we execute the vCPU again.

To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/hvf/hvf-accel-ops.c | 27 +++++++--------------------
 target/i386/hvf/x86hvf.c  |  5 ++++-
 2 files changed, 11 insertions(+), 21 deletions(-)
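Before the diffs themselves, a minimal standalone sketch of the calling-convention point made above; the function name is invented for illustration only and is not a QEMU helper:

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch only: with a 16-bit parameter type the caller may leave garbage in
 * the upper bits of the host register on some ABIs. Declaring the helper
 * argument and result as uint32_t makes the callee ignore the high bits on
 * input and zero-extend on output, which is what mapping dh_ctype_f16 to
 * uint32_t achieves.
 */
static uint32_t frecpx_f16_sketch(uint32_t a)
{
    uint16_t val16 = (uint16_t)a;   /* only the low 16 bits carry the float16 */
    /* ... operate on val16 ... */
    return val16;                   /* promotion back to uint32_t clears bits 31:16 */
}

int main(void)
{
    /* garbage in the upper argument bits is harmless and never comes back out */
    printf("0x%08x\n", frecpx_f16_sketch(0xdead3c00u));   /* prints 0x00003c00 */
    return 0;
}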
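Likewise, the vcpu_dirty scheme the second message describes boils down to the pattern below; the toy_* names are stand-ins, only the vcpu_dirty idea itself comes from the patch:

#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    bool vcpu_dirty;     /* QEMU-side state is the reference when true */
    int regs_in_hv;      /* stands in for register state held by the hypervisor */
    int regs_in_qemu;    /* stands in for register state held in QEMU structs */
};

/* post-reset, post-init and pre-loadvm hooks all reduce to this */
static void synchronize_set_dirty(struct toy_vcpu *cpu)
{
    cpu->vcpu_dirty = true;
}

static void vcpu_exec(struct toy_vcpu *cpu)
{
    if (cpu->vcpu_dirty) {
        cpu->regs_in_hv = cpu->regs_in_qemu;  /* push QEMU state once, lazily */
        cpu->vcpu_dirty = false;
    }
    /* ... enter the guest ... */
}

int main(void)
{
    struct toy_vcpu cpu = { .regs_in_qemu = 42 };
    synchronize_set_dirty(&cpu);     /* e.g. after a PSCI reset */
    vcpu_exec(&cpu);
    printf("%d\n", cpu.regs_in_hv);  /* 42: state pushed before the first run */
    return 0;
}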
diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
28
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
32
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
33
--- a/include/exec/helper-head.h
30
--- a/accel/hvf/hvf-accel-ops.c
34
+++ b/include/exec/helper-head.h
31
+++ b/accel/hvf/hvf-accel-ops.c
35
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ static void hvf_cpu_synchronize_state(CPUState *cpu)
36
#define dh_ctype_int int
37
#define dh_ctype_i64 uint64_t
38
#define dh_ctype_s64 int64_t
39
-#define dh_ctype_f16 float16
40
+#define dh_ctype_f16 uint32_t
41
#define dh_ctype_f32 float32
42
#define dh_ctype_f64 float64
43
#define dh_ctype_ptr void *
44
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper-a64.c
47
+++ b/target/arm/helper-a64.c
48
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
49
return flags;
50
}
51
52
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
53
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
54
{
55
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
56
}
57
58
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
59
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
60
{
61
return float_rel_to_flags(float16_compare(x, y, fp_status));
62
}
63
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
64
#define float64_three make_float64(0x4008000000000000ULL)
65
#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
66
67
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
68
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
69
{
70
float_status *fpst = fpstp;
71
72
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
73
return float64_muladd(a, b, float64_two, 0, fpst);
74
}
75
76
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
77
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
78
{
79
float_status *fpst = fpstp;
80
81
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
82
}
83
84
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
85
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
86
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
87
{
88
float_status *fpst = fpstp;
89
uint16_t val16, sbit;
90
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
91
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
92
93
#define ADVSIMD_HALFOP(name) \
94
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
95
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
96
{ \
97
float_status *fpst = fpstp; \
98
return float16_ ## name(a, b, fpst); \
99
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
100
ADVSIMD_TWOHALFOP(mulx)
101
102
/* fused multiply-accumulate */
103
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
104
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
105
+ void *fpstp)
106
{
107
float_status *fpst = fpstp;
108
return float16_muladd(a, b, c, 0, fpst);
109
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
110
111
#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
112
113
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
114
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
115
{
116
float_status *fpst = fpstp;
117
int compare = float16_compare_quiet(a, b, fpst);
118
return ADVSIMD_CMPRES(compare == float_relation_equal);
119
}
120
121
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
122
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
123
{
124
float_status *fpst = fpstp;
125
int compare = float16_compare(a, b, fpst);
126
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
127
compare == float_relation_equal);
128
}
129
130
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
131
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
132
{
133
float_status *fpst = fpstp;
134
int compare = float16_compare(a, b, fpst);
135
return ADVSIMD_CMPRES(compare == float_relation_greater);
136
}
137
138
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
139
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
140
{
141
float_status *fpst = fpstp;
142
float16 f0 = float16_abs(a);
143
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
144
compare == float_relation_equal);
145
}
146
147
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
148
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
149
{
150
float_status *fpst = fpstp;
151
float16 f0 = float16_abs(a);
152
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
153
}
154
155
/* round to integral */
156
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
157
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
158
{
159
return float16_round_to_int(x, fp_status);
160
}
161
162
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
163
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
164
{
165
int old_flags = get_float_exception_flags(fp_status), new_flags;
166
float16 ret;
167
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
168
* setting the mode appropriately before calling the helper.
169
*/
170
171
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
172
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
173
{
174
float_status *fpst = fpstp;
175
176
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
177
return float16_to_int16(a, fpst);
178
}
179
180
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
181
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
182
{
183
float_status *fpst = fpstp;
184
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
186
* Square Root and Reciprocal square root
187
*/
188
189
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
190
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
191
{
192
float_status *s = fpstp;
193
194
diff --git a/target/arm/helper.c b/target/arm/helper.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/target/arm/helper.c
197
+++ b/target/arm/helper.c
198
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
199
200
/* Integer to float and float to integer conversions */
201
202
-#define CONV_ITOF(name, fsz, sign) \
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
204
-{ \
205
- float_status *fpst = fpstp; \
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
209
+{ \
210
+ float_status *fpst = fpstp; \
211
+ return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
212
}
213
214
-#define CONV_FTOI(name, fsz, sign, round) \
215
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
216
-{ \
217
- float_status *fpst = fpstp; \
218
- if (float##fsz##_is_any_nan(x)) { \
219
- float_raise(float_flag_invalid, fpst); \
220
- return 0; \
221
- } \
222
- return float##fsz##_to_##sign##int32##round(x, fpst); \
223
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
224
+uint32_t HELPER(name)(ftype x, void *fpstp) \
225
+{ \
226
+ float_status *fpst = fpstp; \
227
+ if (float##fsz##_is_any_nan(x)) { \
228
+ float_raise(float_flag_invalid, fpst); \
229
+ return 0; \
230
+ } \
231
+ return float##fsz##_to_##sign##int32##round(x, fpst); \
232
}
233
234
-#define FLOAT_CONVS(name, p, fsz, sign) \
235
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
236
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
+ CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
+ CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
+ CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)

-FLOAT_CONVS(si, h, 16, )
-FLOAT_CONVS(si, s, 32, )
-FLOAT_CONVS(si, d, 64, )
-FLOAT_CONVS(ui, h, 16, u)
-FLOAT_CONVS(ui, s, 32, u)
-FLOAT_CONVS(ui, d, 64, u)
+FLOAT_CONVS(si, h, uint32_t, 16, )
+FLOAT_CONVS(si, s, float32, 32, )
+FLOAT_CONVS(si, d, float64, 64, )
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
+FLOAT_CONVS(ui, s, float32, 32, u)
+FLOAT_CONVS(ui, d, float64, 64, u)

#undef CONV_ITOF
#undef CONV_FTOI
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
}

-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
}

-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
}

-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
}

-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
}
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
}
}

-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
}

-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
}

-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
}

-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
}

-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
}

-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
{
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
}
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
}

/* Half precision conversions. */
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
{
/* Squash FZ16 to 0 for the duration of conversion. In this case,
* it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
return r;
}

-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
{
/* Squash FZ16 to 0 for the duration of conversion. In this case,
* it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
return r;
}

-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
{
/* Squash FZ16 to 0 for the duration of conversion. In this case,
* it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
return r;
}

-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
{
/* Squash FZ16 to 0 for the duration of conversion. In this case,
* it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
g_assert_not_reached();
}

-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
{
float_status *fpst = fpstp;
float16 f16 = float16_squash_input_denormal(input, fpst);
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
return extract64(estimate, 0, 8) << 44;
}

-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
{
float_status *s = fpstp;
float16 f16 = float16_squash_input_denormal(input, s);
--
2.17.1

}
}

-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
- run_on_cpu_data arg)
+static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
+ run_on_cpu_data arg)
{
- hvf_put_registers(cpu);
- cpu->vcpu_dirty = false;
+ /* QEMU state is the reference, push it to HVF now and on next entry */
+ cpu->vcpu_dirty = true;
}

static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
- run_on_cpu_data arg)
-{
- hvf_put_registers(cpu);
- cpu->vcpu_dirty = false;
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_cpu_synchronize_post_init(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
- run_on_cpu_data arg)
-{
- cpu->vcpu_dirty = true;
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
X86CPU *cpu = X86_CPU(cpu_state);
CPUX86State *env = &cpu->env;

- env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+ if (!cpu_state->vcpu_dirty) {
+ /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+ }

if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
cpu_synchronize_state(cpu_state);
--
2.20.1
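The effect of routing all three hooks to do_hvf_cpu_synchronize_set_dirty() is that the register push becomes lazy: QEMU's copy of the state is only written back to the hypervisor the next time the vcpu is about to run. The consumer side of that flag looks roughly like the sketch below; this is an illustration only, not the actual hvf code, and example_vcpu_enter() is a made-up name (hvf_put_registers() and cpu->vcpu_dirty are the real identifiers used in the patch).

    static void example_vcpu_enter(CPUState *cpu)
    {
        if (cpu->vcpu_dirty) {
            /* QEMU's copy of the register state is the reference: push it */
            hvf_put_registers(cpu);
            cpu->vcpu_dirty = false;
        }
        /* ... enter the guest ... */
    }
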
Coverity notes that we don't check for dup2() failing. Add some
assertions so that if it does ever happen we get some indication.
(This is similar to how we handle other "don't expect this syscall to
fail" checks in this test code.)

Fixes: Coverity CID 1432346
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-2-peter.maydell@linaro.org
---
tests/qtest/bios-tables-test.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/bios-tables-test.c
+++ b/tests/qtest/bios-tables-test.c
@@ -XXX,XX +XXX,XX @@ static void test_acpi_asl(test_data *data)
exp_sdt->asl_file, sdt->asl_file);
int out = dup(STDOUT_FILENO);
int ret G_GNUC_UNUSED;
+ int dupret;

- dup2(STDERR_FILENO, STDOUT_FILENO);
+ g_assert(out >= 0);
+ dupret = dup2(STDERR_FILENO, STDOUT_FILENO);
+ g_assert(dupret >= 0);
ret = system(diff) ;
- dup2(out, STDOUT_FILENO);
+ dupret = dup2(out, STDOUT_FILENO);
+ g_assert(dupret >= 0);
close(out);
g_free(diff);
}
--
2.20.1
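For reference, the shape of the save/redirect/restore idiom the patch is hardening, with the added checks in place. This is a standalone sketch, not the test's code; run_diff_to_stderr() is a made-up name.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void run_diff_to_stderr(const char *cmd)
    {
        int saved = dup(STDOUT_FILENO);                  /* remember current stdout */
        assert(saved >= 0);
        assert(dup2(STDERR_FILENO, STDOUT_FILENO) >= 0); /* send stdout to stderr */
        if (system(cmd) == -1) {
            perror("system");
        }
        assert(dup2(saved, STDOUT_FILENO) >= 0);         /* put stdout back */
        close(saved);
    }
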
The e1000e_send_verify() test calls qemu_recv() but doesn't
check that the call succeeded, which annoys Coverity. Add
an explicit test check for the length of the data.

(This is a test check, not a "we assume this syscall always
succeeds", so we use g_assert_cmpint() rather than g_assert().)

Fixes: Coverity CID 1432324
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-3-peter.maydell@linaro.org
---
tests/qtest/e1000e-test.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/e1000e-test.c b/tests/qtest/e1000e-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/e1000e-test.c
+++ b/tests/qtest/e1000e-test.c
@@ -XXX,XX +XXX,XX @@ static void e1000e_send_verify(QE1000E *d, int *test_sockets, QGuestAllocator *a
/* Check data sent to the backend */
ret = qemu_recv(test_sockets[0], &recv_len, sizeof(recv_len), 0);
g_assert_cmpint(ret, == , sizeof(recv_len));
- qemu_recv(test_sockets[0], buffer, 64, 0);
+ ret = qemu_recv(test_sockets[0], buffer, 64, 0);
+ g_assert_cmpint(ret, >=, 5);
g_assert_cmpstr(buffer, == , "TEST");

/* Free test data buffer */
--
2.20.1
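The same pattern in isolation: capture and check the byte count recv() actually returned before looking at the buffer contents. A sketch using plain recv() rather than the test's qemu_recv() wrapper; check_reply() is a made-up name.

    #include <glib.h>
    #include <sys/socket.h>

    static void check_reply(int sockfd)
    {
        char buffer[64] = "";
        ssize_t len = recv(sockfd, buffer, sizeof(buffer) - 1, 0);

        g_assert_cmpint(len, >=, 5);          /* "TEST" plus its NUL terminator */
        g_assert_cmpstr(buffer, ==, "TEST");
    }
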
Coverity notices that the checks against mkstemp() failing in
create_qcow2_with_mbr() are wrong: mkstemp returns -1 on failure but
the check is just "g_assert(fd)". Fix to use "g_assert(fd >= 0)",
matching the correct check in create_test_img().

Fixes: Coverity CID 1432274
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-4-peter.maydell@linaro.org
---
tests/qtest/hd-geo-test.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/hd-geo-test.c b/tests/qtest/hd-geo-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/hd-geo-test.c
+++ b/tests/qtest/hd-geo-test.c
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
}

fd = mkstemp(raw_path);
- g_assert(fd);
+ g_assert(fd >= 0);
close(fd);

fd = open(raw_path, O_WRONLY);
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
close(fd);

fd = mkstemp(qcow2_path);
- g_assert(fd);
+ g_assert(fd >= 0);
close(fd);

qemu_img_path = getenv("QTEST_QEMU_IMG");
--
2.20.1
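The distinction matters because mkstemp() returns a file descriptor: 0 is a valid descriptor and -1 signals failure, so only a ">= 0" comparison catches the error. A minimal sketch of the corrected check; make_temp_file() and the path are made up, not taken from the test.

    #include <glib.h>
    #include <stdlib.h>
    #include <unistd.h>

    static char *make_temp_file(void)
    {
        char *path = g_strdup("/tmp/qtest-example.XXXXXX");
        int fd = mkstemp(path);

        /* g_assert(fd) would pass for -1 and fail for a valid fd of 0 */
        g_assert(fd >= 0);
        close(fd);
        return path;
    }
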
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to memory_region_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

The callsite in flatview_access_valid() is part of a recursive
loop flatview_access_valid() -> memory_region_access_valid() ->
subpage_accepts() -> flatview_access_valid(); we make it pass
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
have plumbed an attrs parameter through the rest of the loop
and we can add an attrs parameter to flatview_access_valid().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
---
include/exec/memory-internal.h | 3 ++-
exec.c | 4 +++-
hw/s390x/s390-pci-inst.c | 3 ++-
memory.c | 7 ++++---
4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
extern const MemoryRegionOps unassigned_mem_ops;

bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
- unsigned size, bool is_write);
+ unsigned size, bool is_write,
+ MemTxAttrs attrs);

void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
if (!memory_access_is_direct(mr, is_write)) {
l = memory_access_size(mr, l, addr);
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
+ /* When our callers all have attrs we'll pass them through here */
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
+ MEMTXATTRS_UNSPECIFIED)) {
return false;
}
}
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
mr = s390_get_subregion(mr, offset, len);
offset -= mr->addr;

- if (!memory_region_access_valid(mr, offset, len, true)) {
+ if (!memory_region_access_valid(mr, offset, len, true,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
return 0;
}
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
bool memory_region_access_valid(MemoryRegion *mr,
hwaddr addr,
unsigned size,
- bool is_write)
+ bool is_write,
+ MemTxAttrs attrs)
{
int access_size_min, access_size_max;
int access_size, i;
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
{
MemTxResult r;

- if (!memory_region_access_valid(mr, addr, size, false)) {
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
*pval = unassigned_mem_read(mr, addr, size);
return MEMTX_DECODE_ERROR;
}
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
unsigned size,
MemTxAttrs attrs)
{
- if (!memory_region_access_valid(mr, addr, size, true)) {
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
unassigned_mem_write(mr, addr, data, size);
return MEMTX_DECODE_ERROR;
}
--
2.17.1

Coverity points out that we calculate a 64-bit value using 32-bit
arithmetic; add the cast to force the multiply to be done as 64-bits.
(The overflow will never happen with the current test data.)

Fixes: Coverity CID 1432320
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-5-peter.maydell@linaro.org
---
tests/qtest/pflash-cfi02-test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qtest/pflash-cfi02-test.c b/tests/qtest/pflash-cfi02-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/pflash-cfi02-test.c
+++ b/tests/qtest/pflash-cfi02-test.c
@@ -XXX,XX +XXX,XX @@ static void test_geometry(const void *opaque)

for (int region = 0; region < nb_erase_regions; ++region) {
for (uint32_t i = 0; i < c->nb_blocs[region]; ++i) {
- uint64_t byte_addr = i * c->sector_len[region];
+ uint64_t byte_addr = (uint64_t)i * c->sector_len[region];
g_assert_cmphex(flash_read(c, byte_addr), ==, bank_mask(c));
}
}
--
2.20.1
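The underlying C rule: when both operands of the multiply are 32-bit, the arithmetic is performed in 32 bits and only the (possibly already truncated) result is widened for the assignment; casting one operand first makes the whole multiply 64-bit. An illustrative sketch, not test code:

    #include <stdint.h>

    uint64_t byte_offset(uint32_t index, uint32_t sector_len)
    {
        /* uint64_t wrong = index * sector_len;   -- product can wrap at 2^32 */
        return (uint64_t)index * sector_len;      /* multiply done in 64 bits */
    }
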
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
accel/tcg/translate-all.c | 2 +-
exec.c | 14 +++++++++-----
hw/vfio/common.c | 3 ++-
memory_ldst.inc.c | 18 +++++++++---------
target/riscv/helper.c | 2 +-
6 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
* #MemoryRegion.
* @len: pointer to length
* @is_write: indicates the transfer direction
+ * @attrs: memory attributes
*/
MemoryRegion *flatview_translate(FlatView *fv,
hwaddr addr, hwaddr *xlat,
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,

static inline MemoryRegion *address_space_translate(AddressSpace *as,
hwaddr addr, hwaddr *xlat,
- hwaddr *len, bool is_write)
+ hwaddr *len, bool is_write,
+ MemTxAttrs attrs)
{
return flatview_translate(address_space_to_flatview(as),
addr, xlat, len, is_write);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
hwaddr l = 1;

rcu_read_lock();
- mr = address_space_translate(as, addr, &addr, &l, false);
+ mr = address_space_translate(as, addr, &addr, &l, false, attrs);
if (!(memory_region_is_ram(mr)
|| memory_region_is_romd(mr))) {
rcu_read_unlock();
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
rcu_read_lock();
while (len > 0) {
l = len;
- mr = address_space_translate(as, addr, &addr1, &l, true);
+ mr = address_space_translate(as, addr, &addr1, &l, true,
+ MEMTXATTRS_UNSPECIFIED);

if (!(memory_region_is_ram(mr) ||
memory_region_is_romd(mr))) {
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
*/
static inline MemoryRegion *address_space_translate_cached(
MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
- hwaddr *plen, bool is_write)
+ hwaddr *plen, bool is_write, MemTxAttrs attrs)
{
MemoryRegionSection section;
MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
MemoryRegion *mr;

l = len;
- mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
+ MEMTXATTRS_UNSPECIFIED);
flatview_read_continue(cache->fv,
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
MemoryRegion *mr;

l = len;
- mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
+ MEMTXATTRS_UNSPECIFIED);
flatview_write_continue(cache->fv,
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)

rcu_read_lock();
mr = address_space_translate(&address_space_memory,
- phys_addr, &phys_addr, &l, false);
+ phys_addr, &phys_addr, &l, false,
+ MEMTXATTRS_UNSPECIFIED);

res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
rcu_read_unlock();
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
*/
mr = address_space_translate(&address_space_memory,
iotlb->translated_addr,
- &xlat, &len, writable);
+ &xlat, &len, writable,
+ MEMTXATTRS_UNSPECIFIED);
if (!memory_region_is_ram(mr)) {
error_report("iommu map to non memory area %"HWADDR_PRIx"",
xlat);
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, false);
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
if (l < 4 || !IS_DIRECT(mr, false)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, false);
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
if (l < 8 || !IS_DIRECT(mr, false)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, false);
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
if (!IS_DIRECT(mr, false)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, false);
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
if (l < 2 || !IS_DIRECT(mr, false)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, true);
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
if (l < 4 || !IS_DIRECT(mr, true)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, true);
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
if (l < 4 || !IS_DIRECT(mr, true)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, true);
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
if (!IS_DIRECT(mr, true)) {
release_lock |= prepare_mmio_access(mr);
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, true);
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
if (l < 2 || !IS_DIRECT(mr, true)) {
release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
bool release_lock = false;

RCU_READ_LOCK();
- mr = TRANSLATE(addr, &addr1, &l, true);
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
if (l < 8 || !IS_DIRECT(mr, true)) {
release_lock |= prepare_mmio_access(mr);

diff --git a/target/riscv/helper.c b/target/riscv/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.c
+++ b/target/riscv/helper.c
@@ -XXX,XX +XXX,XX @@ restart:
MemoryRegion *mr;
hwaddr l = sizeof(target_ulong), addr1;
mr = address_space_translate(cs->as, pte_addr,
- &addr1, &l, false);
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
if (memory_access_is_direct(mr, true)) {
target_ulong *pte_pa =
qemu_map_ram_ptr(mr->ram_block, addr1);
--
2.17.1

Coverity points out that in tpm_test_swtpm_migration_test() we
assume that src_tpm_addr and dst_tpm_addr are non-NULL (we
pass them to tpm_util_migration_start_qemu() which will
unconditionally dereference them) but then later explicitly
check them for NULL. Remove the pointless checks.

Fixes: Coverity CID 1432367, 1432359

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-6-peter.maydell@linaro.org
---
tests/qtest/tpm-tests.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tests/qtest/tpm-tests.c b/tests/qtest/tpm-tests.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/tpm-tests.c
+++ b/tests/qtest/tpm-tests.c
@@ -XXX,XX +XXX,XX @@ void tpm_test_swtpm_migration_test(const char *src_tpm_path,
qtest_quit(src_qemu);

tpm_util_swtpm_kill(dst_tpm_pid);
- if (dst_tpm_addr) {
- g_unlink(dst_tpm_addr->u.q_unix.path);
- qapi_free_SocketAddress(dst_tpm_addr);
- }
+ g_unlink(dst_tpm_addr->u.q_unix.path);
+ qapi_free_SocketAddress(dst_tpm_addr);

tpm_util_swtpm_kill(src_tpm_pid);
- if (src_tpm_addr) {
- g_unlink(src_tpm_addr->u.q_unix.path);
- qapi_free_SocketAddress(src_tpm_addr);
- }
+ g_unlink(src_tpm_addr->u.q_unix.path);
+ qapi_free_SocketAddress(src_tpm_addr);
}
--
2.20.1
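The pattern Coverity is flagging, reduced to its essentials: once a pointer has been dereferenced unconditionally, a later NULL check on the same pointer is dead code. A made-up sketch (cleanup() is not a real function from the test; SocketAddress, g_unlink() and qapi_free_SocketAddress() are the identifiers used in the patch and assume QEMU's declarations):

    static void cleanup(SocketAddress *addr)
    {
        g_unlink(addr->u.q_unix.path);   /* dereferences addr unconditionally */

        if (addr != NULL) {              /* dead check: addr cannot be NULL here */
            qapi_free_SocketAddress(addr);
        }
    }
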
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
and friends.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
---
include/migration/vmstate.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index XXXXXXX..XXXXXXX 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)

+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
+
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)

--
2.17.1

Coverity complains that we don't check for failures from dup()
and mkstemp(); add asserts that these syscalls succeeded.

Fixes: Coverity CID 1432516, 1432574
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210525134458.6675-7-peter.maydell@linaro.org
---
tests/unit/test-vmstate.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/unit/test-vmstate.c b/tests/unit/test-vmstate.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/unit/test-vmstate.c
+++ b/tests/unit/test-vmstate.c
@@ -XXX,XX +XXX,XX @@ static int temp_fd;
/* Duplicate temp_fd and seek to the beginning of the file */
static QEMUFile *open_test_file(bool write)
{
- int fd = dup(temp_fd);
+ int fd;
QIOChannel *ioc;
QEMUFile *f;

+ fd = dup(temp_fd);
+ g_assert(fd >= 0);
lseek(fd, 0, SEEK_SET);
if (write) {
g_assert_cmpint(ftruncate(fd, 0), ==, 0);
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
g_autofree char *temp_file = g_strdup_printf("%s/vmst.test.XXXXXX",
g_get_tmp_dir());
temp_fd = mkstemp(temp_file);
+ g_assert(temp_fd >= 0);

module_call_init(MODULE_INIT_QOM);

--
2.20.1
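For context on the vmstate.h change above: a sub-array macro like VMSTATE_BOOL_SUB_ARRAY is normally used in a device's VMStateDescription to migrate only a slice of a fixed-size array. A hypothetical example follows; DemoState and its lines[] field are invented for illustration and assume QEMU's vmstate declarations.

    typedef struct DemoState {
        bool lines[16];
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* migrate only elements 4..11 of lines[] */
            VMSTATE_BOOL_SUB_ARRAY(lines, DemoState, 4, 8),
            VMSTATE_END_OF_LIST()
        }
    };
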