arm queue: big stuff here is my MVE codegen optimisation,
and Alex's Apple Silicon hvf support.

thanks
-- PMM

The following changes since commit 7adb961995a3744f51396502b33ad04a56a317c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20210916' into staging (2021-09-19 18:53:29 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210920

for you to fetch changes up to 1dc5a60bfe406bc1122d68cbdefda38d23134b27:

  target/arm: Optimize MVE 1op-immediate insns (2021-09-20 14:18:01 +0100)

----------------------------------------------------------------
target-arm queue:
 * Optimize codegen for MVE when predication not active
 * hvf: Add Apple Silicon support
 * hw/intc: Set GIC maintenance interrupt level to only 0 or 1
 * Fix mishandling of MVE FPSCR.LTPSIZE reset for usermode emulator
 * elf2dmp: Fix coverity nits

----------------------------------------------------------------
Alexander Graf (7):
      arm: Move PMC register definitions to internals.h
      hvf: Add execute to dirty log permission bitmap
      hvf: Introduce hvf_arch_init() callback
      hvf: Add Apple Silicon support
      hvf: arm: Implement PSCI handling
      arm: Add Hypervisor.framework build target
      hvf: arm: Add rudimentary PMC support

Peter Collingbourne (1):
      arm/hvf: Add a WFI handler

Peter Maydell (18):
      elf2dmp: Check curl_easy_setopt() return value
      elf2dmp: Fail cleanly if PDB file specifies zero block_size
      target/arm: Don't skip M-profile reset entirely in user mode
      target/arm: Always clear exclusive monitor on reset
      target/arm: Consolidate ifdef blocks in reset
      hvf: arm: Implement -cpu host
      target/arm: Avoid goto_tb if we're trying to exit to the main loop
      target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
      target/arm: Add TB flag for "MVE insns not predicated"
      target/arm: Optimize MVE logic ops
      target/arm: Optimize MVE arithmetic ops
      target/arm: Optimize MVE VNEG, VABS
      target/arm: Optimize MVE VDUP
      target/arm: Optimize MVE VMVN
      target/arm: Optimize MVE VSHL, VSHR immediate forms
      target/arm: Optimize MVE VSHLL and VMOVL
      target/arm: Optimize MVE VSLI and VSRI
      target/arm: Optimize MVE 1op-immediate insns

Shashi Mallela (1):
      hw/intc: Set GIC maintenance interrupt level to only 0 or 1

 meson.build                   |    8 +
 include/sysemu/hvf_int.h      |   12 +-
 target/arm/cpu.h              |    6 +-
 target/arm/hvf_arm.h          |   18 +
 target/arm/internals.h        |   44 ++
 target/arm/kvm_arm.h          |    2 -
 target/arm/translate.h        |    2 +
 accel/hvf/hvf-accel-ops.c     |   21 +-
 contrib/elf2dmp/download.c    |   22 +-
 contrib/elf2dmp/pdb.c         |    4 +
 hw/intc/arm_gicv3_cpuif.c     |    5 +-
 target/arm/cpu.c              |   56 +-
 target/arm/helper.c           |   77 ++-
 target/arm/hvf/hvf.c          | 1278 +++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c          |   13 +
 target/arm/translate-m-nocp.c |    8 +-
 target/arm/translate-mve.c    |  310 +++++++---
 target/arm/translate-vfp.c    |   33 +-
 target/arm/translate.c        |   42 +-
 target/i386/hvf/hvf.c         |   10 +
 MAINTAINERS                   |    5 +
 target/arm/hvf/meson.build    |    3 +
 target/arm/hvf/trace-events   |   11 +
 target/arm/meson.build        |    2 +
 24 files changed, 1824 insertions(+), 168 deletions(-)
 create mode 100644 target/arm/hvf_arm.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events
New patch
Coverity points out that we aren't checking the return value
from curl_easy_setopt().
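As a self-contained illustration of the same pattern (a sketch only, not part
of the patch; download_to_file() is an invented helper name):

    #include <curl/curl.h>
    #include <stdio.h>

    /* Chain every setup call so any CURLE_* failure aborts the download. */
    static int download_to_file(const char *url, FILE *file)
    {
        CURL *curl = curl_easy_init();
        int err = 0;

        if (!curl) {
            return 1;
        }
        if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L) != CURLE_OK
            || curl_easy_perform(curl) != CURLE_OK) {
            err = 1;
        }
        curl_easy_cleanup(curl);
        return err;
    }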

Fixes: Coverity CID 1458895
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 contrib/elf2dmp/download.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
15
diff --git a/contrib/elf2dmp/download.c b/contrib/elf2dmp/download.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/contrib/elf2dmp/download.c
18
+++ b/contrib/elf2dmp/download.c
19
@@ -XXX,XX +XXX,XX @@ int download_url(const char *name, const char *url)
20
goto out_curl;
21
}
22
23
- curl_easy_setopt(curl, CURLOPT_URL, url);
24
- curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL);
25
- curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
26
- curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
27
- curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
28
-
29
- if (curl_easy_perform(curl) != CURLE_OK) {
30
- err = 1;
31
- fclose(file);
32
+ if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
33
+ || curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL) != CURLE_OK
34
+ || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
35
+ || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
36
+ || curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0) != CURLE_OK
37
+ || curl_easy_perform(curl) != CURLE_OK) {
38
unlink(name);
39
- goto out_curl;
40
+ fclose(file);
41
+ err = 1;
42
+ } else {
43
+ err = fclose(file);
44
}
45
46
- err = fclose(file);
47
-
48
out_curl:
49
curl_easy_cleanup(curl);
50
51
--
52
2.20.1
53
54
New patch
Coverity points out that if the PDB file we're trying to read
has a header specifying a block_size of zero then we will
end up trying to divide by zero in pdb_ds_read_file().
Check for this and fail cleanly instead.
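To make the failure mode concrete, a simplified sketch (names are approximate;
this is not the real pdb_ds_read_file() code):

    #include <stdint.h>

    /* Rounding-up block count: faults if the header claims block_size == 0. */
    static uint32_t pdb_block_count(uint32_t file_size, uint32_t block_size)
    {
        return (file_size + block_size - 1) / block_size;
    }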

Fixes: Coverity CID 1458869
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-3-philmd@redhat.com
Message-Id: <20210901143910.17112-3-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 contrib/elf2dmp/pdb.c | 4 ++++
 1 file changed, 4 insertions(+)
18
diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/contrib/elf2dmp/pdb.c
21
+++ b/contrib/elf2dmp/pdb.c
22
@@ -XXX,XX +XXX,XX @@ out_symbols:
23
24
static int pdb_reader_ds_init(struct pdb_reader *r, PDB_DS_HEADER *hdr)
25
{
26
+ if (hdr->block_size == 0) {
27
+ return 1;
28
+ }
29
+
30
memset(r->file_used, 0, sizeof(r->file_used));
31
r->ds.header = hdr;
32
r->ds.toc = pdb_ds_read(hdr, (uint32_t *)((uint8_t *)hdr +
33
--
34
2.20.1
35
36
New patch
Currently all of the M-profile specific code in arm_cpu_reset() is
inside a !defined(CONFIG_USER_ONLY) ifdef block. This is
unintentional: it happened because originally the only
M-profile-specific handling was the setup of the initial SP and PC
from the vector table, which is system-emulation only. But then we
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
code block without noticing that it was all inside a not-user-mode
ifdef. This has generally been harmless, but with the addition of
v8.1M low-overhead-loop support we ran into a problem: the reset of
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
if a user-mode guest tried to execute the LE instruction it would
incorrectly take a UsageFault.

Adjust the ifdefs so only the really system-emulation specific parts
are covered. Because this means we now run some reset code that sets
up initial values in the FPCCR and similar FPU related registers,
explicitly set up the registers controlling FPU context handling in
user-emulation mode so that the FPU works by design and not by
chance.
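For orientation, the reset logic at the centre of the bug is the LTPSIZE
setup, roughly as follows (a paraphrase of arm_cpu_reset(), shown only as a
sketch):

    /* Sketch: this must also run in CONFIG_USER_ONLY builds, otherwise
     * the LE insn takes a spurious UsageFault under linux-user.
     */
    if (cpu_isar_feature(aa32_lob, cpu)) {
        /* FPDSCR.LTPSIZE reads as constant 4 when MVE is absent */
        env->v7m.ltpsize = 4;
        env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
        env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
    }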

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
---
 target/arm/cpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
29
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
35
env->uncached_cpsr = ARM_CPU_MODE_SVC;
36
}
37
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
38
+#endif
39
40
if (arm_feature(env, ARM_FEATURE_M)) {
41
+#ifndef CONFIG_USER_ONLY
42
uint32_t initial_msp; /* Loaded from 0x0 */
43
uint32_t initial_pc; /* Loaded from 0x4 */
44
uint8_t *rom;
45
uint32_t vecbase;
46
+#endif
47
48
if (cpu_isar_feature(aa32_lob, cpu)) {
49
/*
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
51
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
52
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
53
}
54
+
55
+#ifndef CONFIG_USER_ONLY
56
/* Unlike A/R profile, M profile defines the reset LR value */
57
env->regs[14] = 0xffffffff;
58
59
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
60
env->regs[13] = initial_msp & 0xFFFFFFFC;
61
env->regs[15] = initial_pc & ~1;
62
env->thumb = initial_pc & 1;
63
+#else
64
+ /*
65
+ * For user mode we run non-secure and with access to the FPU.
66
+ * The FPU context is active (ie does not need further setup)
67
+ * and is owned by non-secure.
68
+ */
69
+ env->v7m.secure = false;
70
+ env->v7m.nsacr = 0xcff;
71
+ env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
72
+ env->v7m.fpccr[M_REG_S] &=
73
+ ~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
74
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
75
+#endif
76
}
77
78
+#ifndef CONFIG_USER_ONLY
79
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
80
* executing as AArch32 then check if highvecs are enabled and
81
* adjust the PC accordingly.
82
--
83
2.20.1
84
85
There's no particular reason why the exclusive monitor should
be only cleared on reset in system emulation mode. It doesn't
hurt if it isn't cleared in user mode, but we might as well
reduce the amount of code we have that's inside an ifdef.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
---
 target/arm/cpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         env->regs[15] = 0xFFFF0000;
     }

+    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
+#endif
+
     /* M profile requires that reset clears the exclusive monitor;
      * A profile does not, but clearing it makes more sense than having it
      * set with an exclusive access on address zero.
      */
     arm_clear_exclusive(env);

-    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
-#endif
-
     if (arm_feature(env, ARM_FEATURE_PMSA)) {
         if (cpu->pmsav7_dregion > 0) {
             if (arm_feature(env, ARM_FEATURE_V8)) {
--
2.20.1
38
41
39
New patch
Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so
it can be merged with another earlier one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
---
 target/arm/cpu.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
11
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/cpu.c
14
+++ b/target/arm/cpu.c
15
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
16
env->uncached_cpsr = ARM_CPU_MODE_SVC;
17
}
18
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
19
+
20
+ /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
21
+ * executing as AArch32 then check if highvecs are enabled and
22
+ * adjust the PC accordingly.
23
+ */
24
+ if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
25
+ env->regs[15] = 0xFFFF0000;
26
+ }
27
+
28
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
29
#endif
30
31
if (arm_feature(env, ARM_FEATURE_M)) {
32
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
33
#endif
34
}
35
36
-#ifndef CONFIG_USER_ONLY
37
- /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
38
- * executing as AArch32 then check if highvecs are enabled and
39
- * adjust the PC accordingly.
40
- */
41
- if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
42
- env->regs[15] = 0xFFFF0000;
43
- }
44
-
45
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
46
-#endif
47
-
48
/* M profile requires that reset clears the exclusive monitor;
49
* A profile does not, but clearing it makes more sense than having it
50
* set with an exclusive access on address zero.
51
--
52
2.20.1
53
54
From: Shashi Mallela <shashi.mallela@linaro.org>

During sbsa acs level 3 testing, it is seen that the GIC maintenance
interrupts are not triggered and the related test cases fail. This
is because we were incorrectly passing the value of the MISR register
(from maintenance_interrupt_state()) to qemu_set_irq() as the level
argument, whereas the device on the other end of this irq line
expects a 0/1 value.

Fix the logic to pass a 0/1 level indication, rather than a
0/not-0 value.
13
Fixes: c5fc89b36c0 ("hw/intc/arm_gicv3: Implement gicv3_cpuif_virt_update()")
14
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
15
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
16
Message-id: 20210915205809.59068-1-shashi.mallela@linaro.org
17
[PMM: tweaked commit message; collapsed nested if()s into one]
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 20190520162809.2677-5-peter.maydell@linaro.org
11
---
20
---
12
hw/intc/arm_gicv3_cpuif.c | 4 ++--
21
hw/intc/arm_gicv3_cpuif.c | 5 +++--
13
1 file changed, 2 insertions(+), 2 deletions(-)
22
1 file changed, 3 insertions(+), 2 deletions(-)
14
23
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
24
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
16
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gicv3_cpuif.c
26
--- a/hw/intc/arm_gicv3_cpuif.c
18
+++ b/hw/intc/arm_gicv3_cpuif.c
27
+++ b/hw/intc/arm_gicv3_cpuif.c
19
@@ -XXX,XX +XXX,XX @@ static void icc_ctlr_el3_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
20
trace_gicv3_icc_ctlr_el3_write(gicv3_redist_affid(cs), value);
29
}
21
22
/* *_EL1NS and *_EL1S bits are aliases into the ICC_CTLR_EL1 bits. */
23
- cs->icc_ctlr_el1[GICV3_NS] &= (ICC_CTLR_EL1_CBPR | ICC_CTLR_EL1_EOIMODE);
24
+ cs->icc_ctlr_el1[GICV3_NS] &= ~(ICC_CTLR_EL1_CBPR | ICC_CTLR_EL1_EOIMODE);
25
if (value & ICC_CTLR_EL3_EOIMODE_EL1NS) {
26
cs->icc_ctlr_el1[GICV3_NS] |= ICC_CTLR_EL1_EOIMODE;
27
}
30
}
28
@@ -XXX,XX +XXX,XX @@ static void icc_ctlr_el3_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
29
cs->icc_ctlr_el1[GICV3_NS] |= ICC_CTLR_EL1_CBPR;
32
- if (cs->ich_hcr_el2 & ICH_HCR_EL2_EN) {
33
- maintlevel = maintenance_interrupt_state(cs);
34
+ if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) &&
35
+ maintenance_interrupt_state(cs) != 0) {
36
+ maintlevel = 1;
30
}
37
}
31
38
32
- cs->icc_ctlr_el1[GICV3_S] &= (ICC_CTLR_EL1_CBPR | ICC_CTLR_EL1_EOIMODE);
39
trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel,
33
+ cs->icc_ctlr_el1[GICV3_S] &= ~(ICC_CTLR_EL1_CBPR | ICC_CTLR_EL1_EOIMODE);
34
if (value & ICC_CTLR_EL3_EOIMODE_EL1S) {
35
cs->icc_ctlr_el1[GICV3_S] |= ICC_CTLR_EL1_EOIMODE;
36
}
37
--
40
--
38
2.20.1
41
2.20.1
39
42
40
43
From: Alexander Graf <agraf@csgraf.de>

We will need PMC register definitions in accel specific code later.
Move all constant definitions to common arm headers so we can reuse
them.
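As a worked example of what these shared definitions encode (a stand-alone
sketch, not code from this patch):

    #include <inttypes.h>
    #include <stdio.h>

    #define PMCRN_MASK  0xf800
    #define PMCRN_SHIFT 11

    int main(void)
    {
        /* A PMCR value advertising N = 4 event counters */
        uint64_t pmcr = 4u << PMCRN_SHIFT;
        unsigned n = (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
        /* Counter mask: bit 31 (cycle counter) plus one bit per event counter */
        uint64_t mask = (1u << 31) | ((1u << n) - 1);

        printf("N=%u mask=0x%08" PRIx64 "\n", n, mask);
        return 0;
    }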

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-2-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 44 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c    | 44 ------------------------------------------
 2 files changed, 44 insertions(+), 44 deletions(-)
12
15
13
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/exynos4_boards.c
18
--- a/target/arm/internals.h
16
+++ b/hw/arm/exynos4_boards.c
19
+++ b/target/arm/internals.h
17
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
18
*/
21
/* All other values reserved */
19
20
#include "qemu/osdep.h"
21
+#include "qemu/units.h"
22
#include "qapi/error.h"
23
#include "qemu/error-report.h"
24
#include "qemu-common.h"
25
@@ -XXX,XX +XXX,XX @@ static int exynos4_board_smp_bootreg_addr[EXYNOS4_NUM_OF_BOARDS] = {
26
};
22
};
27
23
28
static unsigned long exynos4_board_ram_size[EXYNOS4_NUM_OF_BOARDS] = {
24
+/* Definitions for the PMU registers */
29
- [EXYNOS4_BOARD_NURI] = 0x40000000,
25
+#define PMCRN_MASK 0xf800
30
- [EXYNOS4_BOARD_SMDKC210] = 0x40000000,
26
+#define PMCRN_SHIFT 11
31
+ [EXYNOS4_BOARD_NURI] = 1 * GiB,
27
+#define PMCRLC 0x40
32
+ [EXYNOS4_BOARD_SMDKC210] = 1 * GiB,
28
+#define PMCRDP 0x20
29
+#define PMCRX 0x10
30
+#define PMCRD 0x8
31
+#define PMCRC 0x4
32
+#define PMCRP 0x2
33
+#define PMCRE 0x1
34
+/*
35
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
36
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
37
+ */
38
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
39
+
40
+#define PMXEVTYPER_P 0x80000000
41
+#define PMXEVTYPER_U 0x40000000
42
+#define PMXEVTYPER_NSK 0x20000000
43
+#define PMXEVTYPER_NSU 0x10000000
44
+#define PMXEVTYPER_NSH 0x08000000
45
+#define PMXEVTYPER_M 0x04000000
46
+#define PMXEVTYPER_MT 0x02000000
47
+#define PMXEVTYPER_EVTCOUNT 0x0000ffff
48
+#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
49
+ PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
50
+ PMXEVTYPER_M | PMXEVTYPER_MT | \
51
+ PMXEVTYPER_EVTCOUNT)
52
+
53
+#define PMCCFILTR 0xf8000000
54
+#define PMCCFILTR_M PMXEVTYPER_M
55
+#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
56
+
57
+static inline uint32_t pmu_num_counters(CPUARMState *env)
58
+{
59
+ return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
60
+}
61
+
62
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
63
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
64
+{
65
+ return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
66
+}
67
+
68
#endif
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
74
REGINFO_SENTINEL
33
};
75
};
34
76
35
static struct arm_boot_info exynos4_board_binfo = {
77
-/* Definitions for the PMU registers */
78
-#define PMCRN_MASK 0xf800
79
-#define PMCRN_SHIFT 11
80
-#define PMCRLC 0x40
81
-#define PMCRDP 0x20
82
-#define PMCRX 0x10
83
-#define PMCRD 0x8
84
-#define PMCRC 0x4
85
-#define PMCRP 0x2
86
-#define PMCRE 0x1
87
-/*
88
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
89
- * which can be written as 1 to trigger behaviour but which stay RAZ).
90
- */
91
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
92
-
93
-#define PMXEVTYPER_P 0x80000000
94
-#define PMXEVTYPER_U 0x40000000
95
-#define PMXEVTYPER_NSK 0x20000000
96
-#define PMXEVTYPER_NSU 0x10000000
97
-#define PMXEVTYPER_NSH 0x08000000
98
-#define PMXEVTYPER_M 0x04000000
99
-#define PMXEVTYPER_MT 0x02000000
100
-#define PMXEVTYPER_EVTCOUNT 0x0000ffff
101
-#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
102
- PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
103
- PMXEVTYPER_M | PMXEVTYPER_MT | \
104
- PMXEVTYPER_EVTCOUNT)
105
-
106
-#define PMCCFILTR 0xf8000000
107
-#define PMCCFILTR_M PMXEVTYPER_M
108
-#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
109
-
110
-static inline uint32_t pmu_num_counters(CPUARMState *env)
111
-{
112
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
113
-}
114
-
115
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
116
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
117
-{
118
- return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
119
-}
120
-
121
typedef struct pm_event {
122
uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
123
/* If the event is supported on this CPU (used to generate PMCEID[01]) */
36
--
124
--
37
2.20.1
125
2.20.1
38
126
39
127
From: Alexander Graf <agraf@csgraf.de>

Hvf's permission bitmap during and after dirty logging does not include
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
instruction faults once dirty logging was enabled.

Add the bit to make it work properly.
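Schematically, the two protection states involved are (an illustrative sketch,
not the patch hunk itself):

    /* While a slot is dirty-logged: writes must fault so they can be
     * recorded, but reads and instruction fetches must keep working.
     */
    hv_memory_flags_t logging_flags = HV_MEMORY_READ | HV_MEMORY_EXEC;

    /* Normal operation: full access restored. */
    hv_memory_flags_t normal_flags  = HV_MEMORY_READ | HV_MEMORY_WRITE |
                                      HV_MEMORY_EXEC;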

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-3-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/hvf/hvf-accel-ops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
10
16
11
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
17
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
12
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/exynos4_boards.c
19
--- a/accel/hvf/hvf-accel-ops.c
14
+++ b/hw/arm/exynos4_boards.c
20
+++ b/accel/hvf/hvf-accel-ops.c
15
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
16
#include "hw/net/lan9118.h"
22
if (on) {
17
#include "hw/boards.h"
23
slot->flags |= HVF_SLOT_LOG;
18
24
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
19
-#undef DEBUG
25
- HV_MEMORY_READ);
20
-
26
+ HV_MEMORY_READ | HV_MEMORY_EXEC);
21
-//#define DEBUG
27
/* stop tracking region*/
22
-
28
} else {
23
-#ifdef DEBUG
29
slot->flags &= ~HVF_SLOT_LOG;
24
- #undef PRINT_DEBUG
30
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
25
- #define PRINT_DEBUG(fmt, args...) \
31
- HV_MEMORY_READ | HV_MEMORY_WRITE);
26
- do { \
32
+ HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
27
- fprintf(stderr, " [%s:%d] "fmt, __func__, __LINE__, ##args); \
33
}
28
- } while (0)
34
}
29
-#else
30
- #define PRINT_DEBUG(fmt, args...) do {} while (0)
31
-#endif
32
-
33
#define SMDK_LAN9118_BASE_ADDR 0x05000000
34
35
typedef enum Exynos4BoardType {
36
@@ -XXX,XX +XXX,XX @@ exynos4_boards_init_common(MachineState *machine,
37
exynos4_board_binfo.gic_cpu_if_addr =
38
EXYNOS4210_SMP_PRIVATE_BASE_ADDR + 0x100;
39
40
- PRINT_DEBUG("\n ram_size: %luMiB [0x%08lx]\n"
41
- " kernel_filename: %s\n"
42
- " kernel_cmdline: %s\n"
43
- " initrd_filename: %s\n",
44
- exynos4_board_ram_size[board_type] / 1048576,
45
- exynos4_board_ram_size[board_type],
46
- machine->kernel_filename,
47
- machine->kernel_cmdline,
48
- machine->initrd_filename);
49
-
50
exynos4_boards_init_ram(s, get_system_memory(),
51
exynos4_board_ram_size[board_type]);
52
35
53
--
36
--
54
2.20.1
37
2.20.1
55
38
56
39
New patch
1
From: Alexander Graf <agraf@csgraf.de>
1
2
3
We will need to install a migration helper for the ARM hvf backend.
4
Let's introduce an arch callback for the overall hvf init chain to
5
do so.
6
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20210916155404.86958-4-agraf@csgraf.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/sysemu/hvf_int.h | 1 +
13
accel/hvf/hvf-accel-ops.c | 3 ++-
14
target/i386/hvf/hvf.c | 5 +++++
15
3 files changed, 8 insertions(+), 1 deletion(-)
16
17
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/sysemu/hvf_int.h
20
+++ b/include/sysemu/hvf_int.h
21
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
22
};
23
24
void assert_hvf_ok(hv_return_t ret);
25
+int hvf_arch_init(void);
26
int hvf_arch_init_vcpu(CPUState *cpu);
27
void hvf_arch_vcpu_destroy(CPUState *cpu);
28
int hvf_vcpu_exec(CPUState *);
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/accel/hvf/hvf-accel-ops.c
32
+++ b/accel/hvf/hvf-accel-ops.c
33
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
34
35
hvf_state = s;
36
memory_listener_register(&hvf_memory_listener, &address_space_memory);
37
- return 0;
38
+
39
+ return hvf_arch_init();
40
}
41
42
static void hvf_accel_class_init(ObjectClass *oc, void *data)
43
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/i386/hvf/hvf.c
46
+++ b/target/i386/hvf/hvf.c
47
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
48
return env->apic_bus_freq != 0;
49
}
50
51
+int hvf_arch_init(void)
52
+{
53
+ return 0;
54
+}
55
+
56
int hvf_arch_init_vcpu(CPUState *cpu)
57
{
58
X86CPU *x86cpu = X86_CPU(cpu);
59
--
60
2.20.1
61
62
New patch
From: Alexander Graf <agraf@csgraf.de>

With Apple Silicon available to the masses, it's a good time to add support
for driving its virtualization extensions from QEMU.

This patch adds all necessary architecture specific code to get basic VMs
working, including save/restore.

Known limitations:

- WFI handling is missing (follows in later patch)
- No watchpoint/breakpoint support
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
15
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20210916155404.86958-5-agraf@csgraf.de
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
meson.build | 1 +
22
include/sysemu/hvf_int.h | 10 +-
23
accel/hvf/hvf-accel-ops.c | 9 +
24
target/arm/hvf/hvf.c | 794 ++++++++++++++++++++++++++++++++++++
25
target/i386/hvf/hvf.c | 5 +
26
MAINTAINERS | 5 +
27
target/arm/hvf/trace-events | 10 +
28
7 files changed, 833 insertions(+), 1 deletion(-)
29
create mode 100644 target/arm/hvf/hvf.c
30
create mode 100644 target/arm/hvf/trace-events
31
32
diff --git a/meson.build b/meson.build
33
index XXXXXXX..XXXXXXX 100644
34
--- a/meson.build
35
+++ b/meson.build
36
@@ -XXX,XX +XXX,XX @@ if have_system or have_user
37
'accel/tcg',
38
'hw/core',
39
'target/arm',
40
+ 'target/arm/hvf',
41
'target/hppa',
42
'target/i386',
43
'target/i386/kvm',
44
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/include/sysemu/hvf_int.h
47
+++ b/include/sysemu/hvf_int.h
48
@@ -XXX,XX +XXX,XX @@
49
#ifndef HVF_INT_H
50
#define HVF_INT_H
51
52
+#ifdef __aarch64__
53
+#include <Hypervisor/Hypervisor.h>
54
+#else
55
#include <Hypervisor/hv.h>
56
+#endif
57
58
/* hvf_slot flags */
59
#define HVF_SLOT_LOG (1 << 0)
60
@@ -XXX,XX +XXX,XX @@ struct HVFState {
61
int num_slots;
62
63
hvf_vcpu_caps *hvf_caps;
64
+ uint64_t vtimer_offset;
65
};
66
extern HVFState *hvf_state;
67
68
struct hvf_vcpu_state {
69
- int fd;
70
+ uint64_t fd;
71
+ void *exit;
72
+ bool vtimer_masked;
73
};
74
75
void assert_hvf_ok(hv_return_t ret);
76
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *);
77
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
78
int hvf_put_registers(CPUState *);
79
int hvf_get_registers(CPUState *);
80
+void hvf_kick_vcpu_thread(CPUState *cpu);
81
82
#endif
83
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/accel/hvf/hvf-accel-ops.c
86
+++ b/accel/hvf/hvf-accel-ops.c
87
@@ -XXX,XX +XXX,XX @@
88
89
HVFState *hvf_state;
90
91
+#ifdef __aarch64__
92
+#define HV_VM_DEFAULT NULL
93
+#endif
94
+
95
/* Memory slots */
96
97
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
98
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
99
pthread_sigmask(SIG_BLOCK, NULL, &set);
100
sigdelset(&set, SIG_IPI);
101
102
+#ifdef __aarch64__
103
+ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
104
+#else
105
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
106
+#endif
107
cpu->vcpu_dirty = 1;
108
assert_hvf_ok(r);
109
110
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
111
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
112
113
ops->create_vcpu_thread = hvf_start_vcpu_thread;
114
+ ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
115
116
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
117
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
118
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
119
new file mode 100644
120
index XXXXXXX..XXXXXXX
121
--- /dev/null
122
+++ b/target/arm/hvf/hvf.c
123
@@ -XXX,XX +XXX,XX @@
124
+/*
125
+ * QEMU Hypervisor.framework support for Apple Silicon
126
+
127
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
128
+ *
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
130
+ * See the COPYING file in the top-level directory.
131
+ *
132
+ */
133
+
134
+#include "qemu/osdep.h"
135
+#include "qemu-common.h"
136
+#include "qemu/error-report.h"
137
+
138
+#include "sysemu/runstate.h"
139
+#include "sysemu/hvf.h"
140
+#include "sysemu/hvf_int.h"
141
+#include "sysemu/hw_accel.h"
142
+
143
+#include <mach/mach_time.h>
144
+
145
+#include "exec/address-spaces.h"
146
+#include "hw/irq.h"
147
+#include "qemu/main-loop.h"
148
+#include "sysemu/cpus.h"
149
+#include "target/arm/cpu.h"
150
+#include "target/arm/internals.h"
151
+#include "trace/trace-target_arm_hvf.h"
152
+#include "migration/vmstate.h"
153
+
154
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
155
+ ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
156
+#define PL1_WRITE_MASK 0x4
157
+
158
+#define SYSREG(op0, op1, crn, crm, op2) \
159
+ ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
160
+#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
161
+#define SYSREG_OSLAR_EL1 SYSREG(2, 0, 1, 0, 4)
162
+#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
163
+#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
164
+#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
165
+
166
+#define WFX_IS_WFE (1 << 0)
167
+
168
+#define TMR_CTL_ENABLE (1 << 0)
169
+#define TMR_CTL_IMASK (1 << 1)
170
+#define TMR_CTL_ISTATUS (1 << 2)
171
+
172
+typedef struct HVFVTimer {
173
+ /* Vtimer value during migration and paused state */
174
+ uint64_t vtimer_val;
175
+} HVFVTimer;
176
+
177
+static HVFVTimer vtimer;
178
+
179
+struct hvf_reg_match {
180
+ int reg;
181
+ uint64_t offset;
182
+};
183
+
184
+static const struct hvf_reg_match hvf_reg_match[] = {
185
+ { HV_REG_X0, offsetof(CPUARMState, xregs[0]) },
186
+ { HV_REG_X1, offsetof(CPUARMState, xregs[1]) },
187
+ { HV_REG_X2, offsetof(CPUARMState, xregs[2]) },
188
+ { HV_REG_X3, offsetof(CPUARMState, xregs[3]) },
189
+ { HV_REG_X4, offsetof(CPUARMState, xregs[4]) },
190
+ { HV_REG_X5, offsetof(CPUARMState, xregs[5]) },
191
+ { HV_REG_X6, offsetof(CPUARMState, xregs[6]) },
192
+ { HV_REG_X7, offsetof(CPUARMState, xregs[7]) },
193
+ { HV_REG_X8, offsetof(CPUARMState, xregs[8]) },
194
+ { HV_REG_X9, offsetof(CPUARMState, xregs[9]) },
195
+ { HV_REG_X10, offsetof(CPUARMState, xregs[10]) },
196
+ { HV_REG_X11, offsetof(CPUARMState, xregs[11]) },
197
+ { HV_REG_X12, offsetof(CPUARMState, xregs[12]) },
198
+ { HV_REG_X13, offsetof(CPUARMState, xregs[13]) },
199
+ { HV_REG_X14, offsetof(CPUARMState, xregs[14]) },
200
+ { HV_REG_X15, offsetof(CPUARMState, xregs[15]) },
201
+ { HV_REG_X16, offsetof(CPUARMState, xregs[16]) },
202
+ { HV_REG_X17, offsetof(CPUARMState, xregs[17]) },
203
+ { HV_REG_X18, offsetof(CPUARMState, xregs[18]) },
204
+ { HV_REG_X19, offsetof(CPUARMState, xregs[19]) },
205
+ { HV_REG_X20, offsetof(CPUARMState, xregs[20]) },
206
+ { HV_REG_X21, offsetof(CPUARMState, xregs[21]) },
207
+ { HV_REG_X22, offsetof(CPUARMState, xregs[22]) },
208
+ { HV_REG_X23, offsetof(CPUARMState, xregs[23]) },
209
+ { HV_REG_X24, offsetof(CPUARMState, xregs[24]) },
210
+ { HV_REG_X25, offsetof(CPUARMState, xregs[25]) },
211
+ { HV_REG_X26, offsetof(CPUARMState, xregs[26]) },
212
+ { HV_REG_X27, offsetof(CPUARMState, xregs[27]) },
213
+ { HV_REG_X28, offsetof(CPUARMState, xregs[28]) },
214
+ { HV_REG_X29, offsetof(CPUARMState, xregs[29]) },
215
+ { HV_REG_X30, offsetof(CPUARMState, xregs[30]) },
216
+ { HV_REG_PC, offsetof(CPUARMState, pc) },
217
+};
218
+
219
+static const struct hvf_reg_match hvf_fpreg_match[] = {
220
+ { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) },
221
+ { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) },
222
+ { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) },
223
+ { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) },
224
+ { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) },
225
+ { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) },
226
+ { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) },
227
+ { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) },
228
+ { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) },
229
+ { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) },
230
+ { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
231
+ { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
232
+ { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
233
+ { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
234
+ { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
235
+ { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
236
+ { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
237
+ { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
238
+ { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
239
+ { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
240
+ { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
241
+ { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
242
+ { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
243
+ { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
244
+ { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
245
+ { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
246
+ { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
247
+ { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
248
+ { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
249
+ { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
250
+ { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
251
+ { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
252
+};
253
+
254
+struct hvf_sreg_match {
255
+ int reg;
256
+ uint32_t key;
257
+ uint32_t cp_idx;
258
+};
259
+
260
+static struct hvf_sreg_match hvf_sreg_match[] = {
261
+ { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
262
+ { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
263
+ { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
264
+ { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
265
+
266
+ { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
267
+ { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
268
+ { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
269
+ { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
270
+
271
+ { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
272
+ { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
273
+ { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
274
+ { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
275
+
276
+ { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
277
+ { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
278
+ { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
279
+ { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
280
+
281
+ { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
282
+ { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
283
+ { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
284
+ { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
285
+
286
+ { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
287
+ { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
288
+ { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
289
+ { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
290
+
291
+ { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
292
+ { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
293
+ { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
294
+ { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
295
+
296
+ { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
297
+ { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
298
+ { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
299
+ { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
300
+
301
+ { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
302
+ { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
303
+ { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
304
+ { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
305
+
306
+ { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
307
+ { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
308
+ { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
309
+ { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
310
+
311
+ { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
312
+ { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
313
+ { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
314
+ { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
315
+
316
+ { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
317
+ { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
318
+ { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
319
+ { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
320
+
321
+ { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
322
+ { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
323
+ { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
324
+ { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
325
+
326
+ { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
327
+ { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
328
+ { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
329
+ { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
330
+
331
+ { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
332
+ { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
333
+ { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
334
+ { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
335
+
336
+ { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
337
+ { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
338
+ { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
339
+ { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
340
+
341
+#ifdef SYNC_NO_RAW_REGS
342
+ /*
343
+ * The registers below are manually synced on init because they are
344
+ * marked as NO_RAW. We still list them to make number space sync easier.
345
+ */
346
+ { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
347
+ { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
348
+ { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
349
+ { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
350
+#endif
351
+ { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
352
+ { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
353
+ { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
354
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
355
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
356
+#ifdef SYNC_NO_MMFR0
357
+ /* We keep the hardware MMFR0 around. HW limits are there anyway */
358
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
359
+#endif
360
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
361
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
362
+
363
+ { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
364
+ { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
365
+ { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
366
+ { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
367
+ { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
368
+ { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
369
+
370
+ { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
371
+ { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
372
+ { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
373
+ { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
374
+ { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
375
+ { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
376
+ { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
377
+ { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
378
+ { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
379
+ { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
380
+
381
+ { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
382
+ { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
383
+ { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
384
+ { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
385
+ { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
386
+ { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
387
+ { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
388
+ { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
389
+ { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
390
+ { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
391
+ { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
392
+ { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
393
+ { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
394
+ { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
395
+ { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
396
+ { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
397
+ { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
398
+ { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
399
+ { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
400
+ { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
401
+};
402
+
403
+int hvf_get_registers(CPUState *cpu)
404
+{
405
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
406
+ CPUARMState *env = &arm_cpu->env;
407
+ hv_return_t ret;
408
+ uint64_t val;
409
+ hv_simd_fp_uchar16_t fpval;
410
+ int i;
411
+
412
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
413
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
414
+ *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
415
+ assert_hvf_ok(ret);
416
+ }
417
+
418
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
419
+ ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
420
+ &fpval);
421
+ memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
422
+ assert_hvf_ok(ret);
423
+ }
424
+
425
+ val = 0;
426
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
427
+ assert_hvf_ok(ret);
428
+ vfp_set_fpcr(env, val);
429
+
430
+ val = 0;
431
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
432
+ assert_hvf_ok(ret);
433
+ vfp_set_fpsr(env, val);
434
+
435
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
436
+ assert_hvf_ok(ret);
437
+ pstate_write(env, val);
438
+
439
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
440
+ if (hvf_sreg_match[i].cp_idx == -1) {
441
+ continue;
442
+ }
443
+
444
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
445
+ assert_hvf_ok(ret);
446
+
447
+ arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
448
+ }
449
+ assert(write_list_to_cpustate(arm_cpu));
450
+
451
+ aarch64_restore_sp(env, arm_current_el(env));
452
+
453
+ return 0;
454
+}
455
+
456
+int hvf_put_registers(CPUState *cpu)
457
+{
458
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
459
+ CPUARMState *env = &arm_cpu->env;
460
+ hv_return_t ret;
461
+ uint64_t val;
462
+ hv_simd_fp_uchar16_t fpval;
463
+ int i;
464
+
465
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
466
+ val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
467
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
468
+ assert_hvf_ok(ret);
469
+ }
470
+
471
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
472
+ memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
473
+ ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
474
+ fpval);
475
+ assert_hvf_ok(ret);
476
+ }
477
+
478
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
479
+ assert_hvf_ok(ret);
480
+
481
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
482
+ assert_hvf_ok(ret);
483
+
484
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
485
+ assert_hvf_ok(ret);
486
+
487
+ aarch64_save_sp(env, arm_current_el(env));
488
+
489
+ assert(write_cpustate_to_list(arm_cpu, false));
490
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
491
+ if (hvf_sreg_match[i].cp_idx == -1) {
492
+ continue;
493
+ }
494
+
495
+ val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
496
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
497
+ assert_hvf_ok(ret);
498
+ }
499
+
500
+ ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
501
+ assert_hvf_ok(ret);
502
+
503
+ return 0;
504
+}
505
+
506
+static void flush_cpu_state(CPUState *cpu)
507
+{
508
+ if (cpu->vcpu_dirty) {
509
+ hvf_put_registers(cpu);
510
+ cpu->vcpu_dirty = false;
511
+ }
512
+}
513
+
514
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
515
+{
516
+ hv_return_t r;
517
+
518
+ flush_cpu_state(cpu);
519
+
520
+ if (rt < 31) {
521
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
522
+ assert_hvf_ok(r);
523
+ }
524
+}
525
+
526
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
527
+{
528
+ uint64_t val = 0;
529
+ hv_return_t r;
530
+
531
+ flush_cpu_state(cpu);
532
+
533
+ if (rt < 31) {
534
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
535
+ assert_hvf_ok(r);
536
+ }
537
+
538
+ return val;
539
+}
540
+
541
+void hvf_arch_vcpu_destroy(CPUState *cpu)
542
+{
543
+}
544
+
545
+int hvf_arch_init_vcpu(CPUState *cpu)
546
+{
547
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
548
+ CPUARMState *env = &arm_cpu->env;
549
+ uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
550
+ uint32_t sregs_cnt = 0;
551
+ uint64_t pfr;
552
+ hv_return_t ret;
553
+ int i;
554
+
555
+ env->aarch64 = 1;
556
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
557
+
558
+ /* Allocate enough space for our sysreg sync */
559
+ arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
560
+ sregs_match_len);
561
+ arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
562
+ sregs_match_len);
563
+ arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
564
+ arm_cpu->cpreg_vmstate_indexes,
565
+ sregs_match_len);
566
+ arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
567
+ arm_cpu->cpreg_vmstate_values,
568
+ sregs_match_len);
569
+
570
+ memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
571
+
572
+ /* Populate cp list for all known sysregs */
573
+ for (i = 0; i < sregs_match_len; i++) {
574
+ const ARMCPRegInfo *ri;
575
+ uint32_t key = hvf_sreg_match[i].key;
576
+
577
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
578
+ if (ri) {
579
+ assert(!(ri->type & ARM_CP_NO_RAW));
580
+ hvf_sreg_match[i].cp_idx = sregs_cnt;
581
+ arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
582
+ } else {
583
+ hvf_sreg_match[i].cp_idx = -1;
584
+ }
585
+ }
586
+ arm_cpu->cpreg_array_len = sregs_cnt;
587
+ arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
588
+
589
+ assert(write_cpustate_to_list(arm_cpu, false));
590
+
591
+ /* Set CP_NO_RAW system registers on init */
592
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
593
+ arm_cpu->midr);
594
+ assert_hvf_ok(ret);
595
+
596
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
597
+ arm_cpu->mp_affinity);
598
+ assert_hvf_ok(ret);
599
+
600
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
601
+ assert_hvf_ok(ret);
602
+ pfr |= env->gicv3state ? (1 << 24) : 0;
603
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
604
+ assert_hvf_ok(ret);
605
+
606
+ /* We're limited to underlying hardware caps, override internal versions */
607
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
608
+ &arm_cpu->isar.id_aa64mmfr0);
609
+ assert_hvf_ok(ret);
610
+
611
+ return 0;
612
+}
613
+
614
+void hvf_kick_vcpu_thread(CPUState *cpu)
615
+{
616
+ hv_vcpus_exit(&cpu->hvf->fd, 1);
617
+}
618
+
619
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
620
+ uint32_t syndrome)
621
+{
622
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
623
+ CPUARMState *env = &arm_cpu->env;
624
+
625
+ cpu->exception_index = excp;
626
+ env->exception.target_el = 1;
627
+ env->exception.syndrome = syndrome;
628
+
629
+ arm_cpu_do_interrupt(cpu);
630
+}
631
+
632
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
633
+{
634
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
635
+ CPUARMState *env = &arm_cpu->env;
636
+ uint64_t val = 0;
637
+
638
+ switch (reg) {
639
+ case SYSREG_CNTPCT_EL0:
640
+ val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
641
+ gt_cntfrq_period_ns(arm_cpu);
642
+ break;
643
+ case SYSREG_OSLSR_EL1:
644
+ val = env->cp15.oslsr_el1;
645
+ break;
646
+ case SYSREG_OSDLR_EL1:
647
+ /* Dummy register */
648
+ break;
649
+ default:
650
+ cpu_synchronize_state(cpu);
651
+ trace_hvf_unhandled_sysreg_read(env->pc, reg,
652
+ (reg >> 20) & 0x3,
653
+ (reg >> 14) & 0x7,
654
+ (reg >> 10) & 0xf,
655
+ (reg >> 1) & 0xf,
656
+ (reg >> 17) & 0x7);
657
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
658
+ return 1;
659
+ }
660
+
661
+ trace_hvf_sysreg_read(reg,
662
+ (reg >> 20) & 0x3,
663
+ (reg >> 14) & 0x7,
664
+ (reg >> 10) & 0xf,
665
+ (reg >> 1) & 0xf,
666
+ (reg >> 17) & 0x7,
667
+ val);
668
+ hvf_set_reg(cpu, rt, val);
669
+
670
+ return 0;
671
+}
672
+
673
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
674
+{
675
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
676
+ CPUARMState *env = &arm_cpu->env;
677
+
678
+ trace_hvf_sysreg_write(reg,
679
+ (reg >> 20) & 0x3,
680
+ (reg >> 14) & 0x7,
681
+ (reg >> 10) & 0xf,
682
+ (reg >> 1) & 0xf,
683
+ (reg >> 17) & 0x7,
684
+ val);
685
+
686
+ switch (reg) {
687
+ case SYSREG_OSLAR_EL1:
688
+ env->cp15.oslsr_el1 = val & 1;
689
+ break;
690
+ case SYSREG_OSDLR_EL1:
691
+ /* Dummy register */
692
+ break;
693
+ default:
694
+ cpu_synchronize_state(cpu);
695
+ trace_hvf_unhandled_sysreg_write(env->pc, reg,
696
+ (reg >> 20) & 0x3,
697
+ (reg >> 14) & 0x7,
698
+ (reg >> 10) & 0xf,
699
+ (reg >> 1) & 0xf,
700
+ (reg >> 17) & 0x7);
701
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
702
+ return 1;
703
+ }
704
+
705
+ return 0;
706
+}
707
+
708
+static int hvf_inject_interrupts(CPUState *cpu)
709
+{
710
+ if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
711
+ trace_hvf_inject_fiq();
712
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
713
+ true);
714
+ }
715
+
716
+ if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
717
+ trace_hvf_inject_irq();
718
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
719
+ true);
720
+ }
721
+
722
+ return 0;
723
+}
724
+
725
+static uint64_t hvf_vtimer_val_raw(void)
726
+{
727
+ /*
728
+ * mach_absolute_time() returns the vtimer value without the VM
729
+ * offset that we define. Apply our own offset on top.
730
+ */
731
+ return mach_absolute_time() - hvf_state->vtimer_offset;
732
+}
733
+
734
+static void hvf_sync_vtimer(CPUState *cpu)
735
+{
736
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
737
+ hv_return_t r;
738
+ uint64_t ctl;
739
+ bool irq_state;
740
+
741
+ if (!cpu->hvf->vtimer_masked) {
742
+ /* We will get notified on vtimer changes by hvf, nothing to do */
743
+ return;
744
+ }
745
+
746
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
747
+ assert_hvf_ok(r);
748
+
749
+ irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
750
+ (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
751
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
752
+
753
+ if (!irq_state) {
754
+ /* Timer no longer asserting, we can unmask it */
755
+ hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
756
+ cpu->hvf->vtimer_masked = false;
757
+ }
758
+}
759
+
760
+int hvf_vcpu_exec(CPUState *cpu)
761
+{
762
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
763
+ CPUARMState *env = &arm_cpu->env;
764
+ hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
765
+ hv_return_t r;
766
+ bool advance_pc = false;
767
+
768
+ if (hvf_inject_interrupts(cpu)) {
769
+ return EXCP_INTERRUPT;
770
+ }
771
+
772
+ if (cpu->halted) {
773
+ return EXCP_HLT;
774
+ }
775
+
776
+ flush_cpu_state(cpu);
777
+
778
+ qemu_mutex_unlock_iothread();
779
+ assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
780
+
781
+ /* handle VMEXIT */
782
+ uint64_t exit_reason = hvf_exit->reason;
783
+ uint64_t syndrome = hvf_exit->exception.syndrome;
784
+ uint32_t ec = syn_get_ec(syndrome);
785
+
786
+ qemu_mutex_lock_iothread();
787
+ switch (exit_reason) {
788
+ case HV_EXIT_REASON_EXCEPTION:
789
+ /* This is the main one, handle below. */
790
+ break;
791
+ case HV_EXIT_REASON_VTIMER_ACTIVATED:
792
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
793
+ cpu->hvf->vtimer_masked = true;
794
+ return 0;
795
+ case HV_EXIT_REASON_CANCELED:
796
+ /* we got kicked, no exit to process */
797
+ return 0;
798
+ default:
799
+ assert(0);
800
+ }
801
+
802
+ hvf_sync_vtimer(cpu);
803
+
804
+ switch (ec) {
805
+ case EC_DATAABORT: {
806
+ bool isv = syndrome & ARM_EL_ISV;
807
+ bool iswrite = (syndrome >> 6) & 1;
808
+ bool s1ptw = (syndrome >> 7) & 1;
809
+ uint32_t sas = (syndrome >> 22) & 3;
810
+ uint32_t len = 1 << sas;
811
+ uint32_t srt = (syndrome >> 16) & 0x1f;
812
+ uint64_t val = 0;
813
+
814
+ trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
815
+ hvf_exit->exception.physical_address, isv,
816
+ iswrite, s1ptw, len, srt);
817
+
818
+ assert(isv);
819
+
820
+ if (iswrite) {
821
+ val = hvf_get_reg(cpu, srt);
822
+ address_space_write(&address_space_memory,
823
+ hvf_exit->exception.physical_address,
824
+ MEMTXATTRS_UNSPECIFIED, &val, len);
825
+ } else {
826
+ address_space_read(&address_space_memory,
827
+ hvf_exit->exception.physical_address,
828
+ MEMTXATTRS_UNSPECIFIED, &val, len);
829
+ hvf_set_reg(cpu, srt, val);
830
+ }
831
+
832
+ advance_pc = true;
833
+ break;
834
+ }
835
+ case EC_SYSTEMREGISTERTRAP: {
836
+ bool isread = (syndrome >> 0) & 1;
837
+ uint32_t rt = (syndrome >> 5) & 0x1f;
838
+ uint32_t reg = syndrome & SYSREG_MASK;
839
+ uint64_t val;
840
+ int ret = 0;
841
+
842
+ if (isread) {
843
+ ret = hvf_sysreg_read(cpu, reg, rt);
844
+ } else {
845
+ val = hvf_get_reg(cpu, rt);
846
+ ret = hvf_sysreg_write(cpu, reg, val);
847
+ }
848
+
849
+ advance_pc = !ret;
850
+ break;
851
+ }
852
+ case EC_WFX_TRAP:
853
+ advance_pc = true;
854
+ break;
855
+ case EC_AA64_HVC:
856
+ cpu_synchronize_state(cpu);
857
+ trace_hvf_unknown_hvc(env->xregs[0]);
858
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
859
+ env->xregs[0] = -1;
860
+ break;
861
+ case EC_AA64_SMC:
862
+ cpu_synchronize_state(cpu);
863
+ trace_hvf_unknown_smc(env->xregs[0]);
864
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
865
+ break;
866
+ default:
867
+ cpu_synchronize_state(cpu);
868
+ trace_hvf_exit(syndrome, ec, env->pc);
869
+ error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
870
+ }
871
+
872
+ if (advance_pc) {
873
+ uint64_t pc;
874
+
875
+ flush_cpu_state(cpu);
876
+
877
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
878
+ assert_hvf_ok(r);
879
+ pc += 4;
880
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
881
+ assert_hvf_ok(r);
882
+ }
883
+
884
+ return 0;
885
+}
886
+
887
+static const VMStateDescription vmstate_hvf_vtimer = {
888
+ .name = "hvf-vtimer",
889
+ .version_id = 1,
890
+ .minimum_version_id = 1,
891
+ .fields = (VMStateField[]) {
892
+ VMSTATE_UINT64(vtimer_val, HVFVTimer),
893
+ VMSTATE_END_OF_LIST()
894
+ },
895
+};
896
+
897
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
898
+{
899
+ HVFVTimer *s = opaque;
900
+
901
+ if (running) {
902
+ /* Update vtimer offset on all CPUs */
903
+ hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
904
+ cpu_synchronize_all_states();
905
+ } else {
906
+ /* Remember vtimer value on every pause */
907
+ s->vtimer_val = hvf_vtimer_val_raw();
908
+ }
909
+}
910
+
911
+int hvf_arch_init(void)
912
+{
913
+ hvf_state->vtimer_offset = mach_absolute_time();
914
+ vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
915
+ qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
916
+ return 0;
917
+}
918
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
919
index XXXXXXX..XXXXXXX 100644
920
--- a/target/i386/hvf/hvf.c
921
+++ b/target/i386/hvf/hvf.c
922
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
923
return env->apic_bus_freq != 0;
924
}
925
926
+void hvf_kick_vcpu_thread(CPUState *cpu)
927
+{
928
+ cpus_kick_thread(cpu);
929
+}
930
+
931
int hvf_arch_init(void)
932
{
933
return 0;
934
diff --git a/MAINTAINERS b/MAINTAINERS
935
index XXXXXXX..XXXXXXX 100644
936
--- a/MAINTAINERS
937
+++ b/MAINTAINERS
938
@@ -XXX,XX +XXX,XX @@ F: accel/accel-*.c
939
F: accel/Makefile.objs
940
F: accel/stubs/Makefile.objs
941
942
+Apple Silicon HVF CPUs
943
+M: Alexander Graf <agraf@csgraf.de>
944
+S: Maintained
945
+F: target/arm/hvf/
946
+
947
X86 HVF CPUs
948
M: Cameron Esfahani <dirty@apple.com>
949
M: Roman Bolshakov <r.bolshakov@yadro.com>
950
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
951
new file mode 100644
952
index XXXXXXX..XXXXXXX
953
--- /dev/null
954
+++ b/target/arm/hvf/trace-events
955
@@ -XXX,XX +XXX,XX @@
956
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
957
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
958
+hvf_inject_fiq(void) "injecting FIQ"
959
+hvf_inject_irq(void) "injecting IRQ"
960
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
961
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
962
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
963
+hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
964
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
965
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
966
--
967
2.20.1
968
969
New patch
1
From: Peter Collingbourne <pcc@google.com>
1
2
3
Sleep on WFI until the VTIMER is due but allow ourselves to be woken
4
up on IPI.
5
6
In this implementation IPI is blocked on the CPU thread at startup and
7
pselect() is used to atomically unblock the signal and begin sleeping.
8
The signal is sent unconditionally so there's no need to worry about
9
races between actually sleeping and the "we think we're sleeping"
10
state. It may lead to an extra wakeup but that's better than missing
11
it entirely.
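
As a rough standalone illustration of that pattern (a plain-POSIX sketch, not
the QEMU code itself; SIGUSR1 stands in for SIG_IPI), the key point is that
pselect() swaps the signal mask and starts sleeping atomically, so a kick sent
after the swap interrupts the sleep instead of being lost:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/select.h>
    #include <time.h>

    static void ipi_handler(int sig)
    {
        (void)sig; /* only needs to interrupt pselect() */
    }

    int main(void)
    {
        sigset_t blocked, unblock_ipi_mask;
        struct sigaction sa;
        struct timespec ts = { .tv_sec = 2, .tv_nsec = 0 };

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = ipi_handler;
        sigaction(SIGUSR1, &sa, NULL);

        /* Keep the "IPI" signal blocked while running... */
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &blocked, NULL);

        /* ...and keep a copy of the mask with it unblocked, used only while sleeping. */
        pthread_sigmask(SIG_BLOCK, NULL, &unblock_ipi_mask);
        sigdelset(&unblock_ipi_mask, SIGUSR1);

        /* Atomically install the unblocked mask and sleep until timeout or signal. */
        pselect(0, NULL, NULL, NULL, &ts, &unblock_ipi_mask);
        printf("woke up (timeout or IPI)\n");
        return 0;
    }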
12
13
Signed-off-by: Peter Collingbourne <pcc@google.com>
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
15
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
17
Message-id: 20210916155404.86958-6-agraf@csgraf.de
18
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
19
support vm stop / continue operations and cntv offsets]
20
Signed-off-by: Alexander Graf <agraf@csgraf.de>
21
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
22
Reviewed-by: Sergio Lopez <slp@redhat.com>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
25
include/sysemu/hvf_int.h | 1 +
26
accel/hvf/hvf-accel-ops.c | 5 +--
27
target/arm/hvf/hvf.c | 79 +++++++++++++++++++++++++++++++++++++++
28
3 files changed, 82 insertions(+), 3 deletions(-)
29
30
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/include/sysemu/hvf_int.h
33
+++ b/include/sysemu/hvf_int.h
34
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
35
uint64_t fd;
36
void *exit;
37
bool vtimer_masked;
38
+ sigset_t unblock_ipi_mask;
39
};
40
41
void assert_hvf_ok(hv_return_t ret);
42
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/accel/hvf/hvf-accel-ops.c
45
+++ b/accel/hvf/hvf-accel-ops.c
46
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
47
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
48
49
/* init cpu signals */
50
- sigset_t set;
51
struct sigaction sigact;
52
53
memset(&sigact, 0, sizeof(sigact));
54
sigact.sa_handler = dummy_signal;
55
sigaction(SIG_IPI, &sigact, NULL);
56
57
- pthread_sigmask(SIG_BLOCK, NULL, &set);
58
- sigdelset(&set, SIG_IPI);
59
+ pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
60
+ sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
61
62
#ifdef __aarch64__
63
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
64
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/hvf/hvf.c
67
+++ b/target/arm/hvf/hvf.c
68
@@ -XXX,XX +XXX,XX @@
69
* QEMU Hypervisor.framework support for Apple Silicon
70
71
* Copyright 2020 Alexander Graf <agraf@csgraf.de>
72
+ * Copyright 2020 Google LLC
73
*
74
* This work is licensed under the terms of the GNU GPL, version 2 or later.
75
* See the COPYING file in the top-level directory.
76
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
77
78
void hvf_kick_vcpu_thread(CPUState *cpu)
79
{
80
+ cpus_kick_thread(cpu);
81
hv_vcpus_exit(&cpu->hvf->fd, 1);
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_vtimer_val_raw(void)
85
return mach_absolute_time() - hvf_state->vtimer_offset;
86
}
87
88
+static uint64_t hvf_vtimer_val(void)
89
+{
90
+ if (!runstate_is_running()) {
91
+ /* VM is paused, the vtimer value is in vtimer.vtimer_val */
92
+ return vtimer.vtimer_val;
93
+ }
94
+
95
+ return hvf_vtimer_val_raw();
96
+}
97
+
98
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
99
+{
100
+ /*
101
+ * Use pselect to sleep so that other threads can IPI us while we're
102
+ * sleeping.
103
+ */
104
+ qatomic_mb_set(&cpu->thread_kicked, false);
105
+ qemu_mutex_unlock_iothread();
106
+ pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
107
+ qemu_mutex_lock_iothread();
108
+}
109
+
110
+static void hvf_wfi(CPUState *cpu)
111
+{
112
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
113
+ struct timespec ts;
114
+ hv_return_t r;
115
+ uint64_t ctl;
116
+ uint64_t cval;
117
+ int64_t ticks_to_sleep;
118
+ uint64_t seconds;
119
+ uint64_t nanos;
120
+ uint32_t cntfrq;
121
+
122
+ if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
123
+ /* Interrupt pending, no need to wait */
124
+ return;
125
+ }
126
+
127
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
128
+ assert_hvf_ok(r);
129
+
130
+ if (!(ctl & 1) || (ctl & 2)) {
131
+ /* Timer disabled or masked, just wait for an IPI. */
132
+ hvf_wait_for_ipi(cpu, NULL);
133
+ return;
134
+ }
135
+
136
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
137
+ assert_hvf_ok(r);
138
+
139
+ ticks_to_sleep = cval - hvf_vtimer_val();
140
+ if (ticks_to_sleep < 0) {
141
+ return;
142
+ }
143
+
144
+ cntfrq = gt_cntfrq_period_ns(arm_cpu);
145
+ seconds = muldiv64(ticks_to_sleep, cntfrq, NANOSECONDS_PER_SECOND);
146
+ ticks_to_sleep -= muldiv64(seconds, NANOSECONDS_PER_SECOND, cntfrq);
147
+ nanos = ticks_to_sleep * cntfrq;
148
+
149
+ /*
150
+ * Don't sleep for less than the time a context switch would take,
151
+ * so that we can satisfy fast timer requests on the same CPU.
152
+ * Measurements on M1 show the sweet spot to be ~2ms.
153
+ */
154
+ if (!seconds && nanos < (2 * SCALE_MS)) {
155
+ return;
156
+ }
157
+
158
+ ts = (struct timespec) { seconds, nanos };
159
+ hvf_wait_for_ipi(cpu, &ts);
160
+}
161
+
162
static void hvf_sync_vtimer(CPUState *cpu)
163
{
164
ARMCPU *arm_cpu = ARM_CPU(cpu);
165
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
166
}
167
case EC_WFX_TRAP:
168
advance_pc = true;
169
+ if (!(syndrome & WFX_IS_WFE)) {
170
+ hvf_wfi(cpu);
171
+ }
172
break;
173
case EC_AA64_HVC:
174
cpu_synchronize_state(cpu);
175
--
176
2.20.1
177
178
diff view generated by jsdifflib
1
The hw/arm/arm.h header now only includes declarations relating
1
Now that we have working system register sync, we push more target CPU
2
to boot.c code, so it is only needed by Arm board or SoC code.
2
properties into the virtual machine. That might be useful in some
3
Remove some unnecessary inclusions of it from target/arm files
3
situations, but is not the typical case that users want.
4
and from hw/intc/armv7m_nvic.c.
4
5
5
So let's add a -cpu host option that allows them to explicitly pass all
6
CPU capabilities of their host CPU into the guest.
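
As a usage sketch (exact machine options will vary by setup), this ends up
being invoked along the lines of:

    qemu-system-aarch64 -machine virt -accel hvf -cpu host ...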
7
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
9
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
10
Reviewed-by: Sergio Lopez <slp@redhat.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20210916155404.86958-7-agraf@csgraf.de
13
[PMM: drop unnecessary #include line from .h file]
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190516163857.6430-3-peter.maydell@linaro.org
10
---
15
---
11
hw/intc/armv7m_nvic.c | 1 -
16
target/arm/cpu.h | 2 +
12
target/arm/arm-semi.c | 1 -
17
target/arm/hvf_arm.h | 18 +++++++++
13
target/arm/cpu.c | 1 -
18
target/arm/kvm_arm.h | 2 -
14
target/arm/cpu64.c | 1 -
19
target/arm/cpu.c | 13 ++++--
15
target/arm/kvm.c | 1 -
20
target/arm/hvf/hvf.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
16
target/arm/kvm32.c | 1 -
21
5 files changed, 124 insertions(+), 6 deletions(-)
17
target/arm/kvm64.c | 1 -
22
create mode 100644 target/arm/hvf_arm.h
18
7 files changed, 7 deletions(-)
23
19
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
25
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
22
--- a/hw/intc/armv7m_nvic.c
27
+++ b/target/arm/cpu.h
23
+++ b/hw/intc/armv7m_nvic.c
28
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
29
#define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
30
#define CPU_RESOLVING_TYPE TYPE_ARM_CPU
31
32
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
33
+
34
#define cpu_signal_handler cpu_arm_signal_handler
35
#define cpu_list arm_cpu_list
36
37
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
38
new file mode 100644
39
index XXXXXXX..XXXXXXX
40
--- /dev/null
41
+++ b/target/arm/hvf_arm.h
24
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@
25
#include "cpu.h"
43
+/*
26
#include "hw/sysbus.h"
44
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
27
#include "qemu/timer.h"
45
+ *
28
-#include "hw/arm/arm.h"
46
+ * Copyright (c) 2021 Alexander Graf
29
#include "hw/intc/armv7m_nvic.h"
47
+ *
30
#include "target/arm/cpu.h"
48
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
31
#include "exec/exec-all.h"
49
+ * See the COPYING file in the top-level directory.
32
diff --git a/target/arm/arm-semi.c b/target/arm/arm-semi.c
50
+ *
33
index XXXXXXX..XXXXXXX 100644
51
+ */
34
--- a/target/arm/arm-semi.c
52
+
35
+++ b/target/arm/arm-semi.c
53
+#ifndef QEMU_HVF_ARM_H
36
@@ -XXX,XX +XXX,XX @@
54
+#define QEMU_HVF_ARM_H
37
#else
55
+
38
#include "qemu-common.h"
56
+#include "cpu.h"
39
#include "exec/gdbstub.h"
57
+
40
-#include "hw/arm/arm.h"
58
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
41
#include "qemu/cutils.h"
59
+
42
#endif
60
+#endif
43
61
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/kvm_arm.h
64
+++ b/target/arm/kvm_arm.h
65
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
66
*/
67
void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
68
69
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
70
-
71
/**
72
* ARMHostCPUFeatures: information about the host CPU (identified
73
* by asking the host kernel)
44
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
74
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
index XXXXXXX..XXXXXXX 100644
75
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/cpu.c
76
--- a/target/arm/cpu.c
47
+++ b/target/arm/cpu.c
77
+++ b/target/arm/cpu.c
48
@@ -XXX,XX +XXX,XX @@
78
@@ -XXX,XX +XXX,XX @@
49
#if !defined(CONFIG_USER_ONLY)
79
#include "sysemu/tcg.h"
50
#include "hw/loader.h"
51
#endif
52
-#include "hw/arm/arm.h"
53
#include "sysemu/sysemu.h"
54
#include "sysemu/hw_accel.h"
80
#include "sysemu/hw_accel.h"
55
#include "kvm_arm.h"
81
#include "kvm_arm.h"
56
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
82
+#include "hvf_arm.h"
57
index XXXXXXX..XXXXXXX 100644
83
#include "disas/capstone.h"
58
--- a/target/arm/cpu64.c
84
#include "fpu/softfloat.h"
59
+++ b/target/arm/cpu64.c
85
86
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
87
* this is the first point where we can report it.
88
*/
89
if (cpu->host_cpu_probe_failed) {
90
- if (!kvm_enabled()) {
91
- error_setg(errp, "The 'host' CPU type can only be used with KVM");
92
+ if (!kvm_enabled() && !hvf_enabled()) {
93
+ error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
94
} else {
95
error_setg(errp, "Failed to retrieve host CPU features");
96
}
97
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
98
#endif /* CONFIG_TCG */
99
}
100
101
-#ifdef CONFIG_KVM
102
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
103
static void arm_host_initfn(Object *obj)
104
{
105
ARMCPU *cpu = ARM_CPU(obj);
106
107
+#ifdef CONFIG_KVM
108
kvm_arm_set_cpu_features_from_host(cpu);
109
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
110
aarch64_add_sve_properties(obj);
111
}
112
+#else
113
+ hvf_arm_set_cpu_features_from_host(cpu);
114
+#endif
115
arm_cpu_post_init(obj);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
119
{
120
type_register_static(&arm_cpu_type_info);
121
122
-#ifdef CONFIG_KVM
123
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
124
type_register_static(&host_arm_cpu_type_info);
125
#endif
126
}
127
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/hvf/hvf.c
130
+++ b/target/arm/hvf/hvf.c
60
@@ -XXX,XX +XXX,XX @@
131
@@ -XXX,XX +XXX,XX @@
61
#if !defined(CONFIG_USER_ONLY)
132
#include "sysemu/hvf.h"
62
#include "hw/loader.h"
133
#include "sysemu/hvf_int.h"
63
#endif
134
#include "sysemu/hw_accel.h"
64
-#include "hw/arm/arm.h"
135
+#include "hvf_arm.h"
65
#include "sysemu/sysemu.h"
136
66
#include "sysemu/kvm.h"
137
#include <mach/mach_time.h>
67
#include "kvm_arm.h"
138
68
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
139
@@ -XXX,XX +XXX,XX @@ typedef struct HVFVTimer {
69
index XXXXXXX..XXXXXXX 100644
140
70
--- a/target/arm/kvm.c
141
static HVFVTimer vtimer;
71
+++ b/target/arm/kvm.c
142
72
@@ -XXX,XX +XXX,XX @@
143
+typedef struct ARMHostCPUFeatures {
73
#include "cpu.h"
144
+ ARMISARegisters isar;
74
#include "trace.h"
145
+ uint64_t features;
75
#include "internals.h"
146
+ uint64_t midr;
76
-#include "hw/arm/arm.h"
147
+ uint32_t reset_sctlr;
77
#include "hw/pci/pci.h"
148
+ const char *dtb_compatible;
78
#include "exec/memattrs.h"
149
+} ARMHostCPUFeatures;
79
#include "exec/address-spaces.h"
150
+
80
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
151
+static ARMHostCPUFeatures arm_host_cpu_features;
81
index XXXXXXX..XXXXXXX 100644
152
+
82
--- a/target/arm/kvm32.c
153
struct hvf_reg_match {
83
+++ b/target/arm/kvm32.c
154
int reg;
84
@@ -XXX,XX +XXX,XX @@
155
uint64_t offset;
85
#include "sysemu/kvm.h"
156
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
86
#include "kvm_arm.h"
157
return val;
87
#include "internals.h"
158
}
88
-#include "hw/arm/arm.h"
159
89
#include "qemu/log.h"
160
+static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
90
161
+{
91
static inline void set_feature(uint64_t *features, int feature)
162
+ ARMISARegisters host_isar = {};
92
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
163
+ const struct isar_regs {
93
index XXXXXXX..XXXXXXX 100644
164
+ int reg;
94
--- a/target/arm/kvm64.c
165
+ uint64_t *val;
95
+++ b/target/arm/kvm64.c
166
+ } regs[] = {
96
@@ -XXX,XX +XXX,XX @@
167
+ { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
97
#include "sysemu/kvm.h"
168
+ { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
98
#include "kvm_arm.h"
169
+ { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
99
#include "internals.h"
170
+ { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
100
-#include "hw/arm/arm.h"
171
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
101
172
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
102
static bool have_guest_debug;
173
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
103
174
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
175
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
176
+ };
177
+ hv_vcpu_t fd;
178
+ hv_return_t r = HV_SUCCESS;
179
+ hv_vcpu_exit_t *exit;
180
+ int i;
181
+
182
+ ahcf->dtb_compatible = "arm,arm-v8";
183
+ ahcf->features = (1ULL << ARM_FEATURE_V8) |
184
+ (1ULL << ARM_FEATURE_NEON) |
185
+ (1ULL << ARM_FEATURE_AARCH64) |
186
+ (1ULL << ARM_FEATURE_PMU) |
187
+ (1ULL << ARM_FEATURE_GENERIC_TIMER);
188
+
189
+ /* We set up a small vcpu to extract host registers */
190
+
191
+ if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
192
+ return false;
193
+ }
194
+
195
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
196
+ r |= hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val);
197
+ }
198
+ r |= hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr);
199
+ r |= hv_vcpu_destroy(fd);
200
+
201
+ ahcf->isar = host_isar;
202
+
203
+ /*
204
+ * A scratch vCPU returns SCTLR 0, so let's fill our default with the M1
205
+ * boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97
206
+ */
207
+ ahcf->reset_sctlr = 0x30100180;
208
+ /*
209
+ * SPAN is disabled by default when SCTLR.SPAN=1. To improve compatibility,
210
+ * let's disable it on boot and then allow guest software to turn it on by
211
+ * setting it to 0.
212
+ */
213
+ ahcf->reset_sctlr |= 0x00800000;
214
+
215
+ /* Make sure we don't advertise AArch32 support for EL0/EL1 */
216
+ if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
217
+ return false;
218
+ }
219
+
220
+ return r == HV_SUCCESS;
221
+}
222
+
223
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
224
+{
225
+ if (!arm_host_cpu_features.dtb_compatible) {
226
+ if (!hvf_enabled() ||
227
+ !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
228
+ /*
229
+ * We can't report this error yet, so flag that we need to
230
+ * in arm_cpu_realizefn().
231
+ */
232
+ cpu->host_cpu_probe_failed = true;
233
+ return;
234
+ }
235
+ }
236
+
237
+ cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
238
+ cpu->isar = arm_host_cpu_features.isar;
239
+ cpu->env.features = arm_host_cpu_features.features;
240
+ cpu->midr = arm_host_cpu_features.midr;
241
+ cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
242
+}
243
+
244
void hvf_arch_vcpu_destroy(CPUState *cpu)
245
{
246
}
104
--
247
--
105
2.20.1
248
2.20.1
106
249
107
250
1
From: Guenter Roeck <linux@roeck-us.net>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
QEMU already supports pl330. Instantiate it for Exynos4210.
3
We need to handle PSCI calls. Most of the TCG code works for us,
4
4
but we can simplify it to only handle aa64 mode and we need to
5
Relevant part of Linux arch/arm/boot/dts/exynos4.dtsi:
5
handle SUSPEND differently.
6
6
7
/ {
7
This patch takes the TCG code as template and duplicates it in HVF.
8
soc: soc {
8
9
amba {
9
To tell the guest that we support PSCI 0.2 now, update the check in
10
pdma0: pdma@12680000 {
10
arm_cpu_initfn() as well.
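
For reference, a guest-side sketch of the kind of call this handles (not part
of the patch; it assumes the HVC conduit, and 0x84000000 is the SMCCC function
ID for PSCI_VERSION):

    static inline long psci_version(void)
    {
        register unsigned long x0 asm("x0") = 0x84000000UL; /* PSCI_VERSION */

        /* Function ID goes in x0; the result (or -1 for unknown IDs) comes back in x0. */
        asm volatile("hvc #0" : "+r"(x0) : : "x1", "x2", "x3", "memory");
        return (long)x0;
    }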
11
compatible = "arm,pl330", "arm,primecell";
11
12
reg = <0x12680000 0x1000>;
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
13
interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
14
clocks = <&clock CLK_PDMA0>;
15
clock-names = "apb_pclk";
16
#dma-cells = <1>;
17
#dma-channels = <8>;
18
#dma-requests = <32>;
19
};
20
pdma1: pdma@12690000 {
21
compatible = "arm,pl330", "arm,primecell";
22
reg = <0x12690000 0x1000>;
23
interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
24
clocks = <&clock CLK_PDMA1>;
25
clock-names = "apb_pclk";
26
#dma-cells = <1>;
27
#dma-channels = <8>;
28
#dma-requests = <32>;
29
};
30
mdma1: mdma@12850000 {
31
compatible = "arm,pl330", "arm,primecell";
32
reg = <0x12850000 0x1000>;
33
interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
34
clocks = <&clock CLK_MDMA>;
35
clock-names = "apb_pclk";
36
#dma-cells = <1>;
37
#dma-channels = <8>;
38
#dma-requests = <1>;
39
};
40
};
41
};
42
};
43
44
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
45
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
46
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
47
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
48
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Message-id: 20210916155404.86958-8-agraf@csgraf.de
49
Message-id: 20190520214342.13709-4-philmd@redhat.com
50
[PMD: Do not set default qdev properties, create the controllers in the SoC
51
rather than the board (Peter Maydell), add dtsi in commit message]
52
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
53
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
54
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
55
---
17
---
56
hw/arm/exynos4210.c | 26 ++++++++++++++++++++++++++
18
target/arm/cpu.c | 4 +-
57
1 file changed, 26 insertions(+)
19
target/arm/hvf/hvf.c | 141 ++++++++++++++++++++++++++++++++++--
58
20
target/arm/hvf/trace-events | 1 +
59
diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
21
3 files changed, 139 insertions(+), 7 deletions(-)
22
23
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/arm/exynos4210.c
25
--- a/target/arm/cpu.c
62
+++ b/hw/arm/exynos4210.c
26
+++ b/target/arm/cpu.c
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
28
cpu->psci_version = 1; /* By default assume PSCI v0.1 */
29
cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
30
31
- if (tcg_enabled()) {
32
- cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
33
+ if (tcg_enabled() || hvf_enabled()) {
34
+ cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
35
}
36
}
37
38
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/hvf/hvf.c
41
+++ b/target/arm/hvf/hvf.c
63
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@
64
/* EHCI */
43
#include "hw/irq.h"
65
#define EXYNOS4210_EHCI_BASE_ADDR 0x12580000
44
#include "qemu/main-loop.h"
66
45
#include "sysemu/cpus.h"
67
+/* DMA */
46
+#include "arm-powerctl.h"
68
+#define EXYNOS4210_PL330_BASE0_ADDR 0x12680000
47
#include "target/arm/cpu.h"
69
+#define EXYNOS4210_PL330_BASE1_ADDR 0x12690000
48
#include "target/arm/internals.h"
70
+#define EXYNOS4210_PL330_BASE2_ADDR 0x12850000
49
#include "trace/trace-target_arm_hvf.h"
71
+
50
@@ -XXX,XX +XXX,XX @@
72
static uint8_t chipid_and_omr[] = { 0x11, 0x02, 0x21, 0x43,
51
#define TMR_CTL_IMASK (1 << 1)
73
0x09, 0x00, 0x00, 0x00 };
52
#define TMR_CTL_ISTATUS (1 << 2)
74
53
75
@@ -XXX,XX +XXX,XX @@ static uint64_t exynos4210_calc_affinity(int cpu)
54
+static void hvf_wfi(CPUState *cpu);
76
return (0x9 << ARM_AFF1_SHIFT) | cpu;
55
+
56
typedef struct HVFVTimer {
57
/* Vtimer value during migration and paused state */
58
uint64_t vtimer_val;
59
@@ -XXX,XX +XXX,XX @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
60
arm_cpu_do_interrupt(cpu);
77
}
61
}
78
62
79
+static void pl330_create(uint32_t base, qemu_irq irq, int nreq)
63
+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
80
+{
64
+{
81
+ SysBusDevice *busdev;
65
+ int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
82
+ DeviceState *dev;
66
+ assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
83
+
84
+ dev = qdev_create(NULL, "pl330");
85
+ qdev_prop_set_uint8(dev, "num_periph_req", nreq);
86
+ qdev_init_nofail(dev);
87
+ busdev = SYS_BUS_DEVICE(dev);
88
+ sysbus_mmio_map(busdev, 0, base);
89
+ sysbus_connect_irq(busdev, 0, irq);
90
+}
67
+}
91
+
68
+
92
Exynos4210State *exynos4210_init(MemoryRegion *system_mem)
69
+/*
70
+ * Handle a PSCI call.
71
+ *
72
+ * Returns 0 on success
73
+ * -1 when the PSCI call is unknown,
74
+ */
75
+static bool hvf_handle_psci_call(CPUState *cpu)
76
+{
77
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
78
+ CPUARMState *env = &arm_cpu->env;
79
+ uint64_t param[4] = {
80
+ env->xregs[0],
81
+ env->xregs[1],
82
+ env->xregs[2],
83
+ env->xregs[3]
84
+ };
85
+ uint64_t context_id, mpidr;
86
+ bool target_aarch64 = true;
87
+ CPUState *target_cpu_state;
88
+ ARMCPU *target_cpu;
89
+ target_ulong entry;
90
+ int target_el = 1;
91
+ int32_t ret = 0;
92
+
93
+ trace_hvf_psci_call(param[0], param[1], param[2], param[3],
94
+ arm_cpu->mp_affinity);
95
+
96
+ switch (param[0]) {
97
+ case QEMU_PSCI_0_2_FN_PSCI_VERSION:
98
+ ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
99
+ break;
100
+ case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
101
+ ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
102
+ break;
103
+ case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
104
+ case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
105
+ mpidr = param[1];
106
+
107
+ switch (param[2]) {
108
+ case 0:
109
+ target_cpu_state = arm_get_cpu_by_id(mpidr);
110
+ if (!target_cpu_state) {
111
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
112
+ break;
113
+ }
114
+ target_cpu = ARM_CPU(target_cpu_state);
115
+
116
+ ret = target_cpu->power_state;
117
+ break;
118
+ default:
119
+ /* Everything above affinity level 0 is always on. */
120
+ ret = 0;
121
+ }
122
+ break;
123
+ case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
124
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
125
+ /*
126
+ * QEMU reset and shutdown are async requests, but PSCI
127
+ * mandates that we never return from the reset/shutdown
128
+ * call, so power the CPU off now so it doesn't execute
129
+ * anything further.
130
+ */
131
+ hvf_psci_cpu_off(arm_cpu);
132
+ break;
133
+ case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
134
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
135
+ hvf_psci_cpu_off(arm_cpu);
136
+ break;
137
+ case QEMU_PSCI_0_1_FN_CPU_ON:
138
+ case QEMU_PSCI_0_2_FN_CPU_ON:
139
+ case QEMU_PSCI_0_2_FN64_CPU_ON:
140
+ mpidr = param[1];
141
+ entry = param[2];
142
+ context_id = param[3];
143
+ ret = arm_set_cpu_on(mpidr, entry, context_id,
144
+ target_el, target_aarch64);
145
+ break;
146
+ case QEMU_PSCI_0_1_FN_CPU_OFF:
147
+ case QEMU_PSCI_0_2_FN_CPU_OFF:
148
+ hvf_psci_cpu_off(arm_cpu);
149
+ break;
150
+ case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
151
+ case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
152
+ case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
153
+ /* Affinity levels are not supported in QEMU */
154
+ if (param[1] & 0xfffe0000) {
155
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
156
+ break;
157
+ }
158
+ /* Powerdown is not supported, we always go into WFI */
159
+ env->xregs[0] = 0;
160
+ hvf_wfi(cpu);
161
+ break;
162
+ case QEMU_PSCI_0_1_FN_MIGRATE:
163
+ case QEMU_PSCI_0_2_FN_MIGRATE:
164
+ ret = QEMU_PSCI_RET_NOT_SUPPORTED;
165
+ break;
166
+ default:
167
+ return false;
168
+ }
169
+
170
+ env->xregs[0] = ret;
171
+ return true;
172
+}
173
+
174
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
93
{
175
{
94
Exynos4210State *s = g_new0(Exynos4210State, 1);
176
ARMCPU *arm_cpu = ARM_CPU(cpu);
95
@@ -XXX,XX +XXX,XX @@ Exynos4210State *exynos4210_init(MemoryRegion *system_mem)
177
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
96
sysbus_create_simple(TYPE_EXYNOS4210_EHCI, EXYNOS4210_EHCI_BASE_ADDR,
178
break;
97
s->irq_table[exynos4210_get_irq(28, 3)]);
179
case EC_AA64_HVC:
98
180
cpu_synchronize_state(cpu);
99
+ /*** DMA controllers ***/
181
- trace_hvf_unknown_hvc(env->xregs[0]);
100
+ pl330_create(EXYNOS4210_PL330_BASE0_ADDR,
182
- /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
101
+ qemu_irq_invert(s->irq_table[exynos4210_get_irq(35, 1)]), 32);
183
- env->xregs[0] = -1;
102
+ pl330_create(EXYNOS4210_PL330_BASE1_ADDR,
184
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_HVC) {
103
+ qemu_irq_invert(s->irq_table[exynos4210_get_irq(36, 1)]), 32);
185
+ if (!hvf_handle_psci_call(cpu)) {
104
+ pl330_create(EXYNOS4210_PL330_BASE2_ADDR,
186
+ trace_hvf_unknown_hvc(env->xregs[0]);
105
+ qemu_irq_invert(s->irq_table[exynos4210_get_irq(34, 1)]), 1);
187
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
106
+
188
+ env->xregs[0] = -1;
107
return s;
189
+ }
108
}
190
+ } else {
191
+ trace_hvf_unknown_hvc(env->xregs[0]);
192
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
193
+ }
194
break;
195
case EC_AA64_SMC:
196
cpu_synchronize_state(cpu);
197
- trace_hvf_unknown_smc(env->xregs[0]);
198
- hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
199
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_SMC) {
200
+ advance_pc = true;
201
+
202
+ if (!hvf_handle_psci_call(cpu)) {
203
+ trace_hvf_unknown_smc(env->xregs[0]);
204
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
205
+ env->xregs[0] = -1;
206
+ }
207
+ } else {
208
+ trace_hvf_unknown_smc(env->xregs[0]);
209
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
210
+ }
211
break;
212
default:
213
cpu_synchronize_state(cpu);
214
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
215
index XXXXXXX..XXXXXXX 100644
216
--- a/target/arm/hvf/trace-events
217
+++ b/target/arm/hvf/trace-events
218
@@ -XXX,XX +XXX,XX @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
219
hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
220
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
221
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
222
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
109
--
223
--
110
2.20.1
224
2.20.1
111
225
112
226
New patch
1
From: Alexander Graf <agraf@csgraf.de>
1
2
3
Now that all the logic we need to handle Hypervisor.framework on Apple
4
Silicon systems is in place, let's add CONFIG_HVF for aarch64 as well so that we
5
can build it.
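
With that, an Apple Silicon build can be configured roughly as (assuming a
macOS/arm64 host):

    ./configure --target-list=aarch64-softmmu --enable-hvf && make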
6
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210916155404.86958-9-agraf@csgraf.de
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
meson.build | 7 +++++++
16
target/arm/hvf/meson.build | 3 +++
17
target/arm/meson.build | 2 ++
18
3 files changed, 12 insertions(+)
19
create mode 100644 target/arm/hvf/meson.build
20
21
diff --git a/meson.build b/meson.build
22
index XXXXXXX..XXXXXXX 100644
23
--- a/meson.build
24
+++ b/meson.build
25
@@ -XXX,XX +XXX,XX @@ else
26
endif
27
28
accelerator_targets = { 'CONFIG_KVM': kvm_targets }
29
+
30
+if cpu in ['aarch64']
31
+ accelerator_targets += {
32
+ 'CONFIG_HVF': ['aarch64-softmmu']
33
+ }
34
+endif
35
+
36
if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
37
# i386 emulator provides xenpv machine type for multiple architectures
38
accelerator_targets += {
39
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
40
new file mode 100644
41
index XXXXXXX..XXXXXXX
42
--- /dev/null
43
+++ b/target/arm/hvf/meson.build
44
@@ -XXX,XX +XXX,XX @@
45
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
46
+ 'hvf.c',
47
+))
48
diff --git a/target/arm/meson.build b/target/arm/meson.build
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/meson.build
51
+++ b/target/arm/meson.build
52
@@ -XXX,XX +XXX,XX @@ arm_softmmu_ss.add(files(
53
'psci.c',
54
))
55
56
+subdir('hvf')
57
+
58
target_arch += {'arm': arm_ss}
59
target_softmmu_arch += {'arm': arm_softmmu_ss}
60
--
61
2.20.1
62
63
New patch
1
1
From: Alexander Graf <agraf@csgraf.de>
2
3
We can expose cycle counters on the PMU easily. To be as compatible as
4
possible, let's do so, but make sure we don't expose any other architectural
5
counters that we can not model yet.
6
7
This allows OSs to work that require PMU support.
8
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-10-agraf@csgraf.de
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
15
1 file changed, 179 insertions(+)
16
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/hvf/hvf.c
20
+++ b/target/arm/hvf/hvf.c
21
@@ -XXX,XX +XXX,XX @@
22
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
23
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
24
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
25
+#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
26
+#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
27
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
28
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
29
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
30
+#define SYSREG_PMOVSCLR_EL0 SYSREG(3, 3, 9, 12, 3)
31
+#define SYSREG_PMSWINC_EL0 SYSREG(3, 3, 9, 12, 4)
32
+#define SYSREG_PMSELR_EL0 SYSREG(3, 3, 9, 12, 5)
33
+#define SYSREG_PMCEID0_EL0 SYSREG(3, 3, 9, 12, 6)
34
+#define SYSREG_PMCEID1_EL0 SYSREG(3, 3, 9, 12, 7)
35
+#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
36
+#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
37
38
#define WFX_IS_WFE (1 << 0)
39
40
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
41
val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
42
gt_cntfrq_period_ns(arm_cpu);
43
break;
44
+ case SYSREG_PMCR_EL0:
45
+ val = env->cp15.c9_pmcr;
46
+ break;
47
+ case SYSREG_PMCCNTR_EL0:
48
+ pmu_op_start(env);
49
+ val = env->cp15.c15_ccnt;
50
+ pmu_op_finish(env);
51
+ break;
52
+ case SYSREG_PMCNTENCLR_EL0:
53
+ val = env->cp15.c9_pmcnten;
54
+ break;
55
+ case SYSREG_PMOVSCLR_EL0:
56
+ val = env->cp15.c9_pmovsr;
57
+ break;
58
+ case SYSREG_PMSELR_EL0:
59
+ val = env->cp15.c9_pmselr;
60
+ break;
61
+ case SYSREG_PMINTENCLR_EL1:
62
+ val = env->cp15.c9_pminten;
63
+ break;
64
+ case SYSREG_PMCCFILTR_EL0:
65
+ val = env->cp15.pmccfiltr_el0;
66
+ break;
67
+ case SYSREG_PMCNTENSET_EL0:
68
+ val = env->cp15.c9_pmcnten;
69
+ break;
70
+ case SYSREG_PMUSERENR_EL0:
71
+ val = env->cp15.c9_pmuserenr;
72
+ break;
73
+ case SYSREG_PMCEID0_EL0:
74
+ case SYSREG_PMCEID1_EL0:
75
+ /* We can't really count anything yet, declare all events invalid */
76
+ val = 0;
77
+ break;
78
case SYSREG_OSLSR_EL1:
79
val = env->cp15.oslsr_el1;
80
break;
81
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
82
return 0;
83
}
84
85
+static void pmu_update_irq(CPUARMState *env)
86
+{
87
+ ARMCPU *cpu = env_archcpu(env);
88
+ qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
89
+ (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
90
+}
91
+
92
+static bool pmu_event_supported(uint16_t number)
93
+{
94
+ return false;
95
+}
96
+
97
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
98
+ * the current EL, security state, and register configuration.
99
+ */
100
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
101
+{
102
+ uint64_t filter;
103
+ bool enabled, filtered = true;
104
+ int el = arm_current_el(env);
105
+
106
+ enabled = (env->cp15.c9_pmcr & PMCRE) &&
107
+ (env->cp15.c9_pmcnten & (1 << counter));
108
+
109
+ if (counter == 31) {
110
+ filter = env->cp15.pmccfiltr_el0;
111
+ } else {
112
+ filter = env->cp15.c14_pmevtyper[counter];
113
+ }
114
+
115
+ if (el == 0) {
116
+ filtered = filter & PMXEVTYPER_U;
117
+ } else if (el == 1) {
118
+ filtered = filter & PMXEVTYPER_P;
119
+ }
120
+
121
+ if (counter != 31) {
122
+ /*
123
+ * If not checking PMCCNTR, ensure the counter is setup to an event we
124
+ * support
125
+ */
126
+ uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
127
+ if (!pmu_event_supported(event)) {
128
+ return false;
129
+ }
130
+ }
131
+
132
+ return enabled && !filtered;
133
+}
134
+
135
+static void pmswinc_write(CPUARMState *env, uint64_t value)
136
+{
137
+ unsigned int i;
138
+ for (i = 0; i < pmu_num_counters(env); i++) {
139
+ /* Increment a counter's count iff: */
140
+ if ((value & (1 << i)) && /* counter's bit is set */
141
+ /* counter is enabled and not filtered */
142
+ pmu_counter_enabled(env, i) &&
143
+ /* counter is SW_INCR */
144
+ (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
145
+ /*
146
+ * Detect if this write causes an overflow since we can't predict
147
+ * PMSWINC overflows like we can for other events
148
+ */
149
+ uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
150
+
151
+ if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
152
+ env->cp15.c9_pmovsr |= (1 << i);
153
+ pmu_update_irq(env);
154
+ }
155
+
156
+ env->cp15.c14_pmevcntr[i] = new_pmswinc;
157
+ }
158
+ }
159
+}
160
+
161
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
162
{
163
ARMCPU *arm_cpu = ARM_CPU(cpu);
164
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
165
val);
166
167
switch (reg) {
168
+ case SYSREG_PMCCNTR_EL0:
169
+ pmu_op_start(env);
170
+ env->cp15.c15_ccnt = val;
171
+ pmu_op_finish(env);
172
+ break;
173
+ case SYSREG_PMCR_EL0:
174
+ pmu_op_start(env);
175
+
176
+ if (val & PMCRC) {
177
+ /* The counter has been reset */
178
+ env->cp15.c15_ccnt = 0;
179
+ }
180
+
181
+ if (val & PMCRP) {
182
+ unsigned int i;
183
+ for (i = 0; i < pmu_num_counters(env); i++) {
184
+ env->cp15.c14_pmevcntr[i] = 0;
185
+ }
186
+ }
187
+
188
+ env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
189
+ env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
190
+
191
+ pmu_op_finish(env);
192
+ break;
193
+ case SYSREG_PMUSERENR_EL0:
194
+ env->cp15.c9_pmuserenr = val & 0xf;
195
+ break;
196
+ case SYSREG_PMCNTENSET_EL0:
197
+ env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
198
+ break;
199
+ case SYSREG_PMCNTENCLR_EL0:
200
+ env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
201
+ break;
202
+ case SYSREG_PMINTENCLR_EL1:
203
+ pmu_op_start(env);
204
+ env->cp15.c9_pminten |= val;
205
+ pmu_op_finish(env);
206
+ break;
207
+ case SYSREG_PMOVSCLR_EL0:
208
+ pmu_op_start(env);
209
+ env->cp15.c9_pmovsr &= ~val;
210
+ pmu_op_finish(env);
211
+ break;
212
+ case SYSREG_PMSWINC_EL0:
213
+ pmu_op_start(env);
214
+ pmswinc_write(env, val);
215
+ pmu_op_finish(env);
216
+ break;
217
+ case SYSREG_PMSELR_EL0:
218
+ env->cp15.c9_pmselr = val & 0x1f;
219
+ break;
220
+ case SYSREG_PMCCFILTR_EL0:
221
+ pmu_op_start(env);
222
+ env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
223
+ pmu_op_finish(env);
224
+ break;
225
case SYSREG_OSLAR_EL1:
226
env->cp15.oslsr_el1 = val & 1;
227
break;
228
--
229
2.20.1
230
231
1
From: Alistair Francis <alistair.francis@wdc.com>
1
Currently gen_jmp_tb() assumes that if it is called then the jump it
2
is handling is the only reason that we might be trying to end the TB,
3
so it will use goto_tb if it can. This is usually the case: mostly
4
"we did something that means we must end the TB" happens on a
5
non-branch instruction. However, there are cases where we decide
6
early in handling an instruction that we need to end the TB and
7
return to the main loop, and then the insn is a complex one that
8
involves gen_jmp_tb(). For instance, for M-profile FP instructions,
9
in gen_preserve_fp_state() which is called from vfp_access_check() we
10
want to force an exit to the main loop if lazy state preservation is
11
active and we are in icount mode.
2
12
3
Commit 89e68b575 "target/arm: Use vector operations for saturation"
13
Make gen_jmp_tb() look at the current value of is_jmp, and only use
4
causes this abort() when booting QEMU ARM with a Cortex-A15:
14
goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
5
15
6
0 0x00007ffff4c2382f in raise () at /usr/lib/libc.so.6
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
1 0x00007ffff4c0e672 in abort () at /usr/lib/libc.so.6
8
2 0x00005555559c1839 in disas_neon_data_insn (insn=<optimized out>, s=<optimized out>) at ./target/arm/translate.c:6673
9
3 0x00005555559c1839 in disas_neon_data_insn (s=<optimized out>, insn=<optimized out>) at ./target/arm/translate.c:6386
10
4 0x00005555559cd8a4 in disas_arm_insn (insn=4081107068, s=0x7fffe59a9510) at ./target/arm/translate.c:9289
11
5 0x00005555559cd8a4 in arm_tr_translate_insn (dcbase=0x7fffe59a9510, cpu=<optimized out>) at ./target/arm/translate.c:13612
12
6 0x00005555558d1d39 in translator_loop (ops=0x5555561cc580 <arm_translator_ops>, db=0x7fffe59a9510, cpu=0x55555686a2f0, tb=<optimized out>, max_insns=<optimized out>) at ./accel/tcg/translator.c:96
13
7 0x00005555559d10d4 in gen_intermediate_code (cpu=cpu@entry=0x55555686a2f0, tb=tb@entry=0x7fffd7840080 <code_gen_buffer+126091347>, max_insns=max_insns@entry=512) at ./target/arm/translate.c:13901
14
8 0x00005555558d06b9 in tb_gen_code (cpu=cpu@entry=0x55555686a2f0, pc=3067096216, cs_base=0, flags=192, cflags=-16252928, cflags@entry=524288) at ./accel/tcg/translate-all.c:1736
15
9 0x00005555558ce467 in tb_find (cf_mask=524288, tb_exit=1, last_tb=0x7fffd783e640 <code_gen_buffer+126084627>, cpu=0x1) at ./accel/tcg/cpu-exec.c:407
16
10 0x00005555558ce467 in cpu_exec (cpu=cpu@entry=0x55555686a2f0) at ./accel/tcg/cpu-exec.c:728
17
11 0x000055555588b0cf in tcg_cpu_exec (cpu=0x55555686a2f0) at ./cpus.c:1431
18
12 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=0x55555686a2f0) at ./cpus.c:1735
19
13 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x55555686a2f0) at ./cpus.c:1709
20
14 0x0000555555d2629a in qemu_thread_start (args=<optimized out>) at ./util/qemu-thread-posix.c:502
21
15 0x00007ffff4db8a92 in start_thread () at /usr/lib/libpthread.
22
23
This patch ensures that we don't hit the abort() in the second switch
24
case in disas_neon_data_insn() as we will return from the first case.
25
26
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
27
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
28
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
18
Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
29
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
30
Tested-by: Alex Bennée <alex.bennee@linaro.org>
31
Message-id: ad91b397f360b2fc7f4087e476f7df5b04d42ddb.1558021877.git.alistair.francis@wdc.com
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
33
---
19
---
34
target/arm/translate.c | 4 ++--
20
target/arm/translate.c | 34 +++++++++++++++++++++++++++++++++-
35
1 file changed, 2 insertions(+), 2 deletions(-)
21
1 file changed, 33 insertions(+), 1 deletion(-)
36
22
37
diff --git a/target/arm/translate.c b/target/arm/translate.c
23
diff --git a/target/arm/translate.c b/target/arm/translate.c
38
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate.c
25
--- a/target/arm/translate.c
40
+++ b/target/arm/translate.c
26
+++ b/target/arm/translate.c
41
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
27
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
42
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
28
/* An indirect jump so that we still trigger the debug exception. */
43
rn_ofs, rm_ofs, vec_size, vec_size,
29
gen_set_pc_im(s, dest);
44
(u ? uqadd_op : sqadd_op) + size);
30
s->base.is_jmp = DISAS_JUMP;
45
- break;
31
- } else {
46
+ return 0;
32
+ return;
47
33
+ }
48
case NEON_3R_VQSUB:
34
+ switch (s->base.is_jmp) {
49
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
35
+ case DISAS_NEXT:
50
rn_ofs, rm_ofs, vec_size, vec_size,
36
+ case DISAS_TOO_MANY:
51
(u ? uqsub_op : sqsub_op) + size);
37
+ case DISAS_NORETURN:
52
- break;
38
+ /*
53
+ return 0;
39
+ * The normal case: just go to the destination TB.
54
40
+ * NB: NORETURN happens if we generate code like
55
case NEON_3R_VMUL: /* VMUL */
41
+ * gen_brcondi(l);
56
if (u) {
42
+ * gen_jmp();
43
+ * gen_set_label(l);
44
+ * gen_jmp();
45
+ * on the second call to gen_jmp().
46
+ */
47
gen_goto_tb(s, tbno, dest);
48
+ break;
49
+ case DISAS_UPDATE_NOCHAIN:
50
+ case DISAS_UPDATE_EXIT:
51
+ /*
52
+ * We already decided we're leaving the TB for some other reason.
53
+ * Avoid using goto_tb so we really do exit back to the main loop
54
+ * and don't chain to another TB.
55
+ */
56
+ gen_set_pc_im(s, dest);
57
+ gen_goto_ptr();
58
+ s->base.is_jmp = DISAS_NORETURN;
59
+ break;
60
+ default:
61
+ /*
62
+ * We shouldn't be emitting code for a jump and also have
63
+ * is_jmp set to one of the special cases like DISAS_SWI.
64
+ */
65
+ g_assert_not_reached();
66
}
67
}
68
57
--
69
--
58
2.20.1
70
2.20.1
59
71
60
72
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Architecturally, for an M-profile CPU with the LOB feature the
2
LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation
3
enforces this everywhere, except that we don't check that it is true
4
in incoming migration data.
2
5
3
This is, after all, how we implement extract2 in tcg/aarch64.
6
We're going to add come in gen_update_fp_context() which relies on
7
the "always 4" property. Since this is TCG-only, we don't actually
8
need to be robust to bogus incoming migration data, and the effect of
9
it being wrong would be wrong code generation rather than a QEMU
10
crash; but if it did ever happen somehow it would be very difficult
11
to track down the cause. Add a check so that we fail the inbound
12
migration if the FPDSCR.LTPSIZE value is incorrect.
4
13
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190514011129.11330-2-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
9
---
17
---
10
target/arm/translate-a64.c | 38 ++++++++++++++++++++------------------
18
target/arm/machine.c | 13 +++++++++++++
11
1 file changed, 20 insertions(+), 18 deletions(-)
19
1 file changed, 13 insertions(+)
12
20
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
21
diff --git a/target/arm/machine.c b/target/arm/machine.c
14
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-a64.c
23
--- a/target/arm/machine.c
16
+++ b/target/arm/translate-a64.c
24
+++ b/target/arm/machine.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
25
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
18
} else {
26
hw_breakpoint_update_all(cpu);
19
tcg_gen_ext32u_i64(tcg_rd, cpu_reg(s, rm));
27
hw_watchpoint_update_all(cpu);
20
}
28
21
- } else if (rm == rn) { /* ROR */
29
+ /*
22
- tcg_rm = cpu_reg(s, rm);
30
+ * TCG gen_update_fp_context() relies on the invariant that
23
- if (sf) {
31
+ * FPDSCR.LTPSIZE is constant 4 for M-profile with the LOB extension;
24
- tcg_gen_rotri_i64(tcg_rd, tcg_rm, imm);
32
+ * forbid bogus incoming data with some other value.
25
- } else {
33
+ */
26
- TCGv_i32 tmp = tcg_temp_new_i32();
34
+ if (arm_feature(env, ARM_FEATURE_M) && cpu_isar_feature(aa32_lob, cpu)) {
27
- tcg_gen_extrl_i64_i32(tmp, tcg_rm);
35
+ if (extract32(env->v7m.fpdscr[M_REG_NS],
28
- tcg_gen_rotri_i32(tmp, tmp, imm);
36
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4 ||
29
- tcg_gen_extu_i32_i64(tcg_rd, tmp);
37
+ extract32(env->v7m.fpdscr[M_REG_S],
30
- tcg_temp_free_i32(tmp);
38
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
31
- }
39
+ return -1;
32
} else {
40
+ }
33
- tcg_rm = read_cpu_reg(s, rm, sf);
41
+ }
34
- tcg_rn = read_cpu_reg(s, rn, sf);
42
if (!kvm_enabled()) {
35
- tcg_gen_shri_i64(tcg_rm, tcg_rm, imm);
43
pmu_op_finish(&cpu->env);
36
- tcg_gen_shli_i64(tcg_rn, tcg_rn, bitsize - imm);
37
- tcg_gen_or_i64(tcg_rd, tcg_rm, tcg_rn);
38
- if (!sf) {
39
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
40
+ tcg_rm = cpu_reg(s, rm);
41
+ tcg_rn = cpu_reg(s, rn);
42
+
43
+ if (sf) {
44
+ /* Specialization to ROR happens in EXTRACT2. */
45
+ tcg_gen_extract2_i64(tcg_rd, tcg_rm, tcg_rn, imm);
46
+ } else {
47
+ TCGv_i32 t0 = tcg_temp_new_i32();
48
+
49
+ tcg_gen_extrl_i64_i32(t0, tcg_rm);
50
+ if (rm == rn) {
51
+ tcg_gen_rotri_i32(t0, t0, imm);
52
+ } else {
53
+ TCGv_i32 t1 = tcg_temp_new_i32();
54
+ tcg_gen_extrl_i64_i32(t1, tcg_rn);
55
+ tcg_gen_extract2_i32(t0, t0, t1, imm);
56
+ tcg_temp_free_i32(t1);
57
+ }
58
+ tcg_gen_extu_i32_i64(tcg_rd, t0);
59
+ tcg_temp_free_i32(t0);
60
}
61
}
62
}
44
}
63
--
45
--
64
2.20.1
46
2.20.1
65
47
66
48
New patch
1
1
Our current codegen for MVE always calls out to helper functions,
because some byte lanes might be predicated. The common case is that
in fact there is no predication active and all lanes should be
updated together, so we can produce better code by detecting that and
using the TCG generic vector infrastructure.

Add a TB flag that is set when we can guarantee that there is no
active MVE predication, and a bool in the DisasContext. Subsequent
patches will use this flag to generate improved code for some
instructions.

In most cases when the predication state changes we simply end the TB
after that instruction. For the code called from vfp_access_check()
that handles lazy state preservation and creating a new FP context,
we can usually avoid having to try to end the TB because luckily the
new value of the flag following the register changes in those
sequences doesn't depend on any runtime decisions. We do have to end
the TB if the guest has enabled lazy FP state preservation but not
automatic state preservation, but this is an odd corner case that is
not going to be common in real-world code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
---
target/arm/cpu.h | 4 +++-
target/arm/translate.h | 2 ++
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
target/arm/translate-m-nocp.c | 8 +++++++-
target/arm/translate-mve.c | 13 ++++++++++++-
target/arm/translate-vfp.c | 33 +++++++++++++++++++++++++++------
target/arm/translate.c | 8 ++++++++
7 files changed, 92 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/cpu.h
38
+++ b/target/arm/cpu.h
39
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
40
* | TBFLAG_AM32 | +-----+----------+
41
* | | |TBFLAG_M32|
42
* +-------------+----------------+----------+
43
- * 31 23 5 4 0
44
+ * 31 23 6 5 0
45
*
46
* Unless otherwise noted, these bits are cached in env->hflags.
47
*/
48
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, LSPACT, 2, 1) /* Not cached. */
49
FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
50
/* Set if FPCCR.S does not match current security state */
51
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
52
+/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
53
+FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
54
55
/*
56
* Bit usage when in AArch64 state
57
diff --git a/target/arm/translate.h b/target/arm/translate.h
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate.h
60
+++ b/target/arm/translate.h
61
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
62
bool align_mem;
63
/* True if PSTATE.IL is set */
64
bool pstate_il;
65
+ /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
66
+ bool mve_no_pred;
67
/*
68
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
69
* < 0, set by the current instruction.
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/helper.c
73
+++ b/target/arm/helper.c
74
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
75
#endif
76
}
77
78
+static bool mve_no_pred(CPUARMState *env)
79
+{
80
+ /*
81
+ * Return true if there is definitely no predication of MVE
82
+ * instructions by VPR or LTPSIZE. (Returning false even if there
83
+ * isn't any predication is OK; generated code will just be
84
+ * a little worse.)
85
+ * If the CPU does not implement MVE then this TB flag is always 0.
86
+ *
87
+ * NOTE: if you change this logic, the "recalculate s->mve_no_pred"
88
+ * logic in gen_update_fp_context() needs to be updated to match.
89
+ *
90
+ * We do not include the effect of the ECI bits here -- they are
91
+ * tracked in other TB flags. This simplifies the logic for
92
+ * "when did we emit code that changes the MVE_NO_PRED TB flag
93
+ * and thus need to end the TB?".
94
+ */
95
+ if (!cpu_isar_feature(aa32_mve, env_archcpu(env))) {
96
+ return false;
97
+ }
98
+ if (env->v7m.vpr) {
99
+ return false;
100
+ }
101
+ if (env->v7m.ltpsize < 4) {
102
+ return false;
103
+ }
104
+ return true;
105
+}
106
+
107
void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
108
target_ulong *cs_base, uint32_t *pflags)
109
{
110
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
111
if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
112
DP_TBFLAG_M32(flags, LSPACT, 1);
113
}
114
+
115
+ if (mve_no_pred(env)) {
116
+ DP_TBFLAG_M32(flags, MVE_NO_PRED, 1);
117
+ }
118
} else {
119
/*
120
* Note that XSCALE_CPAR shares bits with VECSTRIDE.
121
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/translate-m-nocp.c
124
+++ b/target/arm/translate-m-nocp.c
125
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
126
127
clear_eci_state(s);
128
129
- /* End the TB, because we have updated FP control bits */
130
+ /*
131
+ * End the TB, because we have updated FP control bits,
132
+ * and possibly VPR or LTPSIZE.
133
+ */
134
s->base.is_jmp = DISAS_UPDATE_EXIT;
135
return true;
136
}
137
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
138
store_cpu_field(control, v7m.control[M_REG_S]);
139
tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
140
gen_helper_vfp_set_fpscr(cpu_env, tmp);
141
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
142
tcg_temp_free_i32(tmp);
143
tcg_temp_free_i32(sfpa);
144
break;
145
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
146
}
147
tmp = loadfn(s, opaque, true);
148
store_cpu_field(tmp, v7m.vpr);
149
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
150
break;
151
case ARM_VFP_P0:
152
{
153
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
154
tcg_gen_deposit_i32(vpr, vpr, tmp,
155
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
156
store_cpu_field(vpr, v7m.vpr);
157
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
158
tcg_temp_free_i32(tmp);
159
break;
160
}
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
166
DO_LOGIC(VORN, gen_helper_mve_vorn)
167
DO_LOGIC(VEOR, gen_helper_mve_veor)
168
169
-DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
170
+static bool trans_VPSEL(DisasContext *s, arg_2op *a)
171
+{
172
+ /* This insn updates predication bits */
173
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
174
+ return do_2op(s, a, gen_helper_mve_vpsel);
175
+}
176
177
#define DO_2OP(INSN, FN) \
178
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
179
@@ -XXX,XX +XXX,XX @@ static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
180
}
181
182
gen_helper_mve_vpnot(cpu_env);
183
+ /* This insn updates predication bits */
184
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
185
mve_update_eci(s);
186
return true;
187
}
188
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
189
/* VPT */
190
gen_vpst(s, a->mask);
191
}
192
+ /* This insn updates predication bits */
193
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
194
mve_update_eci(s);
195
return true;
196
}
197
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
198
/* VPT */
199
gen_vpst(s, a->mask);
200
}
201
+ /* This insn updates predication bits */
202
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
203
mve_update_eci(s);
204
return true;
205
}
206
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/translate-vfp.c
209
+++ b/target/arm/translate-vfp.c
210
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
211
* Generate code for M-profile lazy FP state preservation if needed;
212
* this corresponds to the pseudocode PreserveFPState() function.
213
*/
214
-static void gen_preserve_fp_state(DisasContext *s)
215
+static void gen_preserve_fp_state(DisasContext *s, bool skip_context_update)
216
{
217
if (s->v7m_lspact) {
218
/*
219
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
220
* any further FP insns in this TB.
221
*/
222
s->v7m_lspact = false;
223
+ /*
224
+ * The helper might have zeroed VPR, so we do not know the
225
+ * correct value for the MVE_NO_PRED TB flag any more.
226
+ * If we're about to create a new fp context then that
227
+ * will precisely determine the MVE_NO_PRED value (see
228
+ * gen_update_fp_context()). Otherwise, we must:
229
+ * - set s->mve_no_pred to false, so this instruction
230
+ * is generated to use helper functions
231
+ * - end the TB now, without chaining to the next TB
232
+ */
233
+ if (skip_context_update || !s->v7m_new_fp_ctxt_needed) {
234
+ s->mve_no_pred = false;
235
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
236
+ }
237
}
238
}
239
240
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
241
TCGv_i32 z32 = tcg_const_i32(0);
242
store_cpu_field(z32, v7m.vpr);
243
}
244
-
245
/*
246
- * We don't need to arrange to end the TB, because the only
247
- * parts of FPSCR which we cache in the TB flags are the VECLEN
248
- * and VECSTRIDE, and those don't exist for M-profile.
249
+ * We just updated the FPSCR and VPR. Some of this state is cached
250
+ * in the MVE_NO_PRED TB flag. We want to avoid having to end the
251
+ * TB here, which means we need the new value of the MVE_NO_PRED
252
+ * flag to be exactly known here and the same for all executions.
253
+ * Luckily FPDSCR.LTPSIZE is always constant 4 and the VPR is
254
+ * always set to 0, so the new MVE_NO_PRED flag is always 1
255
+ * if and only if we have MVE.
256
+ *
257
+ * (The other FPSCR state cached in TB flags is VECLEN and VECSTRIDE,
258
+ * but those do not exist for M-profile, so are not relevant here.)
259
*/
260
+ s->mve_no_pred = dc_isar_feature(aa32_mve, s);
261
262
if (s->v8m_secure) {
263
bits |= R_V7M_CONTROL_SFPA_MASK;
264
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
265
/* Handle M-profile lazy FP state mechanics */
266
267
/* Trigger lazy-state preservation if necessary */
268
- gen_preserve_fp_state(s);
269
+ gen_preserve_fp_state(s, skip_context_update);
270
271
if (!skip_context_update) {
272
/* Update ownership of FP context and create new FP context if needed */
273
diff --git a/target/arm/translate.c b/target/arm/translate.c
274
index XXXXXXX..XXXXXXX 100644
275
--- a/target/arm/translate.c
276
+++ b/target/arm/translate.c
277
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
278
/* DLSTP: set FPSCR.LTPSIZE */
279
tmp = tcg_const_i32(a->size);
280
store_cpu_field(tmp, v7m.ltpsize);
281
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
282
}
283
return true;
284
}
285
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
286
assert(ok);
287
tmp = tcg_const_i32(a->size);
288
store_cpu_field(tmp, v7m.ltpsize);
289
+ /*
290
+ * LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
291
+ * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
292
+ */
293
}
294
gen_jmp_tb(s, s->base.pc_next, 1);
295
296
@@ -XXX,XX +XXX,XX @@ static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
297
gen_helper_mve_vctp(cpu_env, masklen);
298
tcg_temp_free_i32(masklen);
299
tcg_temp_free_i32(rn_shifted);
300
+ /* This insn updates predication bits */
301
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
302
mve_update_eci(s);
303
return true;
304
}
305
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
306
dc->v7m_new_fp_ctxt_needed =
307
EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
308
dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
309
+ dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);
310
} else {
311
dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
312
dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
313
--
314
2.20.1
315
316
New patch
1
When not predicating, implement the MVE bitwise logical insns
directly using TCG vector operations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 51 +++++++++++++++++++++++++++-----------
1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
+++ b/target/arm/translate-mve.c
16
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr mve_qreg_ptr(unsigned reg)
17
return ret;
18
}
19
20
+static bool mve_no_predication(DisasContext *s)
21
+{
22
+ /*
23
+ * Return true if we are executing the entire MVE instruction
24
+ * with no predication or partial-execution, and so we can safely
25
+ * use an inline TCG vector implementation.
26
+ */
27
+ return s->eci == 0 && s->mve_no_pred;
28
+}
29
+
30
static bool mve_check_qreg_bank(DisasContext *s, int qmask)
31
{
32
/*
33
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
34
return do_1op(s, a, fns[a->size]);
35
}
36
37
-static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
38
+static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
39
+ GVecGen3Fn *vecfn)
40
{
41
TCGv_ptr qd, qn, qm;
42
43
@@ -XXX,XX +XXX,XX @@ static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
44
return true;
45
}
46
47
- qd = mve_qreg_ptr(a->qd);
48
- qn = mve_qreg_ptr(a->qn);
49
- qm = mve_qreg_ptr(a->qm);
50
- fn(cpu_env, qd, qn, qm);
51
- tcg_temp_free_ptr(qd);
52
- tcg_temp_free_ptr(qn);
53
- tcg_temp_free_ptr(qm);
54
+ if (vecfn && mve_no_predication(s)) {
55
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
56
+ mve_qreg_offset(a->qm), 16, 16);
57
+ } else {
58
+ qd = mve_qreg_ptr(a->qd);
59
+ qn = mve_qreg_ptr(a->qn);
60
+ qm = mve_qreg_ptr(a->qm);
61
+ fn(cpu_env, qd, qn, qm);
62
+ tcg_temp_free_ptr(qd);
63
+ tcg_temp_free_ptr(qn);
64
+ tcg_temp_free_ptr(qm);
65
+ }
66
mve_update_eci(s);
67
return true;
68
}
69
70
-#define DO_LOGIC(INSN, HELPER) \
71
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn *fn)
72
+{
73
+ return do_2op_vec(s, a, fn, NULL);
74
+}
75
+
76
+#define DO_LOGIC(INSN, HELPER, VECFN) \
77
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
78
{ \
79
- return do_2op(s, a, HELPER); \
80
+ return do_2op_vec(s, a, HELPER, VECFN); \
81
}
82
83
-DO_LOGIC(VAND, gen_helper_mve_vand)
84
-DO_LOGIC(VBIC, gen_helper_mve_vbic)
85
-DO_LOGIC(VORR, gen_helper_mve_vorr)
86
-DO_LOGIC(VORN, gen_helper_mve_vorn)
87
-DO_LOGIC(VEOR, gen_helper_mve_veor)
88
+DO_LOGIC(VAND, gen_helper_mve_vand, tcg_gen_gvec_and)
89
+DO_LOGIC(VBIC, gen_helper_mve_vbic, tcg_gen_gvec_andc)
90
+DO_LOGIC(VORR, gen_helper_mve_vorr, tcg_gen_gvec_or)
91
+DO_LOGIC(VORN, gen_helper_mve_vorn, tcg_gen_gvec_orc)
92
+DO_LOGIC(VEOR, gen_helper_mve_veor, tcg_gen_gvec_xor)
93
94
static bool trans_VPSEL(DisasContext *s, arg_2op *a)
95
{
96
--
97
2.20.1
98
99
New patch
1
Optimize MVE arithmetic ops when we have a TCG
vector operation we can use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
+++ b/target/arm/translate-mve.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
17
return do_2op(s, a, gen_helper_mve_vpsel);
18
}
19
20
-#define DO_2OP(INSN, FN) \
21
+#define DO_2OP_VEC(INSN, FN, VECFN) \
22
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
23
{ \
24
static MVEGenTwoOpFn * const fns[] = { \
25
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
26
gen_helper_mve_##FN##w, \
27
NULL, \
28
}; \
29
- return do_2op(s, a, fns[a->size]); \
30
+ return do_2op_vec(s, a, fns[a->size], VECFN); \
31
}
32
33
-DO_2OP(VADD, vadd)
34
-DO_2OP(VSUB, vsub)
35
-DO_2OP(VMUL, vmul)
36
+#define DO_2OP(INSN, FN) DO_2OP_VEC(INSN, FN, NULL)
37
+
38
+DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
39
+DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
40
+DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
41
DO_2OP(VMULH_S, vmulhs)
42
DO_2OP(VMULH_U, vmulhu)
43
DO_2OP(VRMULH_S, vrmulhs)
44
DO_2OP(VRMULH_U, vrmulhu)
45
-DO_2OP(VMAX_S, vmaxs)
46
-DO_2OP(VMAX_U, vmaxu)
47
-DO_2OP(VMIN_S, vmins)
48
-DO_2OP(VMIN_U, vminu)
49
+DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
50
+DO_2OP_VEC(VMAX_U, vmaxu, tcg_gen_gvec_umax)
51
+DO_2OP_VEC(VMIN_S, vmins, tcg_gen_gvec_smin)
52
+DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
53
DO_2OP(VABD_S, vabds)
54
DO_2OP(VABD_U, vabdu)
55
DO_2OP(VHADD_S, vhadds)
56
--
57
2.20.1
58
59
New patch
1
Optimize the MVE VNEG and VABS insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
+++ b/target/arm/translate-mve.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
17
return true;
18
}
19
20
-static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
21
+static bool do_1op_vec(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn,
22
+ GVecGen2Fn vecfn)
23
{
24
TCGv_ptr qd, qm;
25
26
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
27
return true;
28
}
29
30
- qd = mve_qreg_ptr(a->qd);
31
- qm = mve_qreg_ptr(a->qm);
32
- fn(cpu_env, qd, qm);
33
- tcg_temp_free_ptr(qd);
34
- tcg_temp_free_ptr(qm);
35
+ if (vecfn && mve_no_predication(s)) {
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm), 16, 16);
37
+ } else {
38
+ qd = mve_qreg_ptr(a->qd);
39
+ qm = mve_qreg_ptr(a->qm);
40
+ fn(cpu_env, qd, qm);
41
+ tcg_temp_free_ptr(qd);
42
+ tcg_temp_free_ptr(qm);
43
+ }
44
mve_update_eci(s);
45
return true;
46
}
47
48
-#define DO_1OP(INSN, FN) \
49
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
50
+{
51
+ return do_1op_vec(s, a, fn, NULL);
52
+}
53
+
54
+#define DO_1OP_VEC(INSN, FN, VECFN) \
55
static bool trans_##INSN(DisasContext *s, arg_1op *a) \
56
{ \
57
static MVEGenOneOpFn * const fns[] = { \
58
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
59
gen_helper_mve_##FN##w, \
60
NULL, \
61
}; \
62
- return do_1op(s, a, fns[a->size]); \
63
+ return do_1op_vec(s, a, fns[a->size], VECFN); \
64
}
65
66
+#define DO_1OP(INSN, FN) DO_1OP_VEC(INSN, FN, NULL)
67
+
68
DO_1OP(VCLZ, vclz)
69
DO_1OP(VCLS, vcls)
70
-DO_1OP(VABS, vabs)
71
-DO_1OP(VNEG, vneg)
72
+DO_1OP_VEC(VABS, vabs, tcg_gen_gvec_abs)
73
+DO_1OP_VEC(VNEG, vneg, tcg_gen_gvec_neg)
74
DO_1OP(VQABS, vqabs)
75
DO_1OP(VQNEG, vqneg)
76
DO_1OP(VMAXA, vmaxa)
77
--
78
2.20.1
79
80
New patch
1
Optimize the MVE VDUP insns by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-mve.c
13
+++ b/target/arm/translate-mve.c
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
15
return true;
16
}
17
18
- qd = mve_qreg_ptr(a->qd);
19
rt = load_reg(s, a->rt);
20
- tcg_gen_dup_i32(a->size, rt, rt);
21
- gen_helper_mve_vdup(cpu_env, qd, rt);
22
- tcg_temp_free_ptr(qd);
23
+ if (mve_no_predication(s)) {
24
+ tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
25
+ } else {
26
+ qd = mve_qreg_ptr(a->qd);
27
+ tcg_gen_dup_i32(a->size, rt, rt);
28
+ gen_helper_mve_vdup(cpu_env, qd, rt);
29
+ tcg_temp_free_ptr(qd);
30
+ }
31
tcg_temp_free_i32(rt);
32
mve_update_eci(s);
33
return true;
34
--
35
2.20.1
36
37
1
In ich_vmcr_write() we enforce "writes of BPR fields to less than
1
Optimize the MVE VMVN insn by using TCG vector ops when possible.
2
their minimum sets them to the minimum" by doing a "read vbpr and
3
write it back" operation. A typo here meant that we weren't handling
4
writes to these fields correctly, because we were reading from VBPR0
5
but writing to VBPR1.
6
2
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190520162809.2677-4-peter.maydell@linaro.org
5
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
10
---
6
---
11
hw/intc/arm_gicv3_cpuif.c | 2 +-
7
target/arm/translate-mve.c | 2 +-
12
1 file changed, 1 insertion(+), 1 deletion(-)
8
1 file changed, 1 insertion(+), 1 deletion(-)
13
9
14
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
15
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/arm_gicv3_cpuif.c
12
--- a/target/arm/translate-mve.c
17
+++ b/hw/intc/arm_gicv3_cpuif.c
13
+++ b/target/arm/translate-mve.c
18
@@ -XXX,XX +XXX,XX @@ static void ich_vmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
19
/* Enforce "writing BPRs to less than minimum sets them to the minimum"
15
20
* by reading and writing back the fields.
16
static bool trans_VMVN(DisasContext *s, arg_1op *a)
21
*/
17
{
22
- write_vbpr(cs, GICV3_G1, read_vbpr(cs, GICV3_G0));
18
- return do_1op(s, a, gen_helper_mve_vmvn);
23
+ write_vbpr(cs, GICV3_G0, read_vbpr(cs, GICV3_G0));
19
+ return do_1op_vec(s, a, gen_helper_mve_vmvn, tcg_gen_gvec_not);
24
write_vbpr(cs, GICV3_G1, read_vbpr(cs, GICV3_G1));
20
}
25
21
26
gicv3_cpuif_virt_update(cs);
22
static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
27
--
23
--
28
2.20.1
24
2.20.1
29
25
30
26
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
2
ops when possible.
2
3
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
5
Message-id: 20190520214342.13709-5-philmd@redhat.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
7
---
7
---
8
include/hw/arm/exynos4210.h | 9 +++++++--
8
target/arm/translate-mve.c | 83 +++++++++++++++++++++++++++++---------
9
hw/arm/exynos4210.c | 28 ++++++++++++++++++++++++----
9
1 file changed, 63 insertions(+), 20 deletions(-)
10
hw/arm/exynos4_boards.c | 9 ++++++---
11
3 files changed, 37 insertions(+), 9 deletions(-)
12
10
13
diff --git a/include/hw/arm/exynos4210.h b/include/hw/arm/exynos4210.h
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/arm/exynos4210.h
13
--- a/target/arm/translate-mve.c
16
+++ b/include/hw/arm/exynos4210.h
14
+++ b/target/arm/translate-mve.c
17
@@ -XXX,XX +XXX,XX @@ typedef struct Exynos4210Irq {
15
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
18
} Exynos4210Irq;
16
return do_1imm(s, a, fn);
19
20
typedef struct Exynos4210State {
21
+ /*< private >*/
22
+ SysBusDevice parent_obj;
23
+ /*< public >*/
24
ARMCPU *cpu[EXYNOS4210_NCPUS];
25
Exynos4210Irq irqs;
26
qemu_irq *irq_table;
27
@@ -XXX,XX +XXX,XX @@ typedef struct Exynos4210State {
28
I2CBus *i2c_if[EXYNOS4210_I2C_NUMBER];
29
} Exynos4210State;
30
31
+#define TYPE_EXYNOS4210_SOC "exynos4210"
32
+#define EXYNOS4210_SOC(obj) \
33
+ OBJECT_CHECK(Exynos4210State, obj, TYPE_EXYNOS4210_SOC)
34
+
35
void exynos4210_write_secondary(ARMCPU *cpu,
36
const struct arm_boot_info *info);
37
38
-Exynos4210State *exynos4210_init(MemoryRegion *system_mem);
39
-
40
/* Initialize exynos4210 IRQ subsystem stub */
41
qemu_irq *exynos4210_init_irq(Exynos4210Irq *env);
42
43
diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/arm/exynos4210.c
46
+++ b/hw/arm/exynos4210.c
47
@@ -XXX,XX +XXX,XX @@ static void pl330_create(uint32_t base, qemu_irq irq, int nreq)
48
sysbus_connect_irq(busdev, 0, irq);
49
}
17
}
50
18
51
-Exynos4210State *exynos4210_init(MemoryRegion *system_mem)
19
-static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
52
+static void exynos4210_realize(DeviceState *socdev, Error **errp)
20
- bool negateshift)
21
+static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
22
+ bool negateshift, GVecGen2iFn vecfn)
53
{
23
{
54
- Exynos4210State *s = g_new0(Exynos4210State, 1);
24
TCGv_ptr qd, qm;
55
+ Exynos4210State *s = EXYNOS4210_SOC(socdev);
25
int shift = a->shift;
56
+ MemoryRegion *system_mem = get_system_memory();
26
@@ -XXX,XX +XXX,XX @@ static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
57
qemu_irq gate_irq[EXYNOS4210_NCPUS][EXYNOS4210_IRQ_GATE_NINPUTS];
27
shift = -shift;
58
SysBusDevice *busdev;
28
}
59
DeviceState *dev;
29
60
@@ -XXX,XX +XXX,XX @@ Exynos4210State *exynos4210_init(MemoryRegion *system_mem)
30
- qd = mve_qreg_ptr(a->qd);
61
qemu_irq_invert(s->irq_table[exynos4210_get_irq(36, 1)]), 32);
31
- qm = mve_qreg_ptr(a->qm);
62
pl330_create(EXYNOS4210_PL330_BASE2_ADDR,
32
- fn(cpu_env, qd, qm, tcg_constant_i32(shift));
63
qemu_irq_invert(s->irq_table[exynos4210_get_irq(34, 1)]), 1);
33
- tcg_temp_free_ptr(qd);
64
-
34
- tcg_temp_free_ptr(qm);
65
- return s;
35
+ if (vecfn && mve_no_predication(s)) {
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm),
37
+ shift, 16, 16);
38
+ } else {
39
+ qd = mve_qreg_ptr(a->qd);
40
+ qm = mve_qreg_ptr(a->qm);
41
+ fn(cpu_env, qd, qm, tcg_constant_i32(shift));
42
+ tcg_temp_free_ptr(qd);
43
+ tcg_temp_free_ptr(qm);
44
+ }
45
mve_update_eci(s);
46
return true;
66
}
47
}
67
+
48
68
+static void exynos4210_class_init(ObjectClass *klass, void *data)
49
-#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
50
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
51
- { \
52
- static MVEGenTwoOpShiftFn * const fns[] = { \
53
- gen_helper_mve_##FN##b, \
54
- gen_helper_mve_##FN##h, \
55
- gen_helper_mve_##FN##w, \
56
- NULL, \
57
- }; \
58
- return do_2shift(s, a, fns[a->size], NEGATESHIFT); \
59
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
60
+ bool negateshift)
69
+{
61
+{
70
+ DeviceClass *dc = DEVICE_CLASS(klass);
62
+ return do_2shift_vec(s, a, fn, negateshift, NULL);
71
+
72
+ dc->realize = exynos4210_realize;
73
+}
63
+}
74
+
64
+
75
+static const TypeInfo exynos4210_info = {
65
+#define DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, VECFN) \
76
+ .name = TYPE_EXYNOS4210_SOC,
66
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
77
+ .parent = TYPE_SYS_BUS_DEVICE,
67
+ { \
78
+ .instance_size = sizeof(Exynos4210State),
68
+ static MVEGenTwoOpShiftFn * const fns[] = { \
79
+ .class_init = exynos4210_class_init,
69
+ gen_helper_mve_##FN##b, \
80
+};
70
+ gen_helper_mve_##FN##h, \
71
+ gen_helper_mve_##FN##w, \
72
+ NULL, \
73
+ }; \
74
+ return do_2shift_vec(s, a, fns[a->size], NEGATESHIFT, VECFN); \
75
}
76
77
-DO_2SHIFT(VSHLI, vshli_u, false)
78
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
79
+ DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, NULL)
81
+
80
+
82
+static void exynos4210_register_types(void)
81
+static void do_gvec_shri_s(unsigned vece, uint32_t dofs, uint32_t aofs,
82
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
83
+{
83
+{
84
+ type_register_static(&exynos4210_info);
84
+ /*
85
+ * We get here with a negated shift count, and we must handle
86
+ * shifts by the element size, which tcg_gen_gvec_sari() does not do.
87
+ */
88
+ shift = -shift;
89
+ if (shift == (8 << vece)) {
90
+ shift--;
91
+ }
92
+ tcg_gen_gvec_sari(vece, dofs, aofs, shift, oprsz, maxsz);
85
+}
93
+}
86
+
94
+
87
+type_init(exynos4210_register_types)
95
+static void do_gvec_shri_u(unsigned vece, uint32_t dofs, uint32_t aofs,
88
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
96
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
89
index XXXXXXX..XXXXXXX 100644
97
+{
90
--- a/hw/arm/exynos4_boards.c
98
+ /*
91
+++ b/hw/arm/exynos4_boards.c
99
+ * We get here with a negated shift count, and we must handle
92
@@ -XXX,XX +XXX,XX @@ typedef enum Exynos4BoardType {
100
+ * shifts by the element size, which tcg_gen_gvec_shri() does not do.
93
} Exynos4BoardType;
101
+ */
94
102
+ shift = -shift;
95
typedef struct Exynos4BoardState {
103
+ if (shift == (8 << vece)) {
96
- Exynos4210State *soc;
104
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, 0);
97
+ Exynos4210State soc;
105
+ } else {
98
MemoryRegion dram0_mem;
106
+ tcg_gen_gvec_shri(vece, dofs, aofs, shift, oprsz, maxsz);
99
MemoryRegion dram1_mem;
107
+ }
100
} Exynos4BoardState;
108
+}
101
@@ -XXX,XX +XXX,XX @@ exynos4_boards_init_common(MachineState *machine,
109
+
102
exynos4_boards_init_ram(s, get_system_memory(),
110
+DO_2SHIFT_VEC(VSHLI, vshli_u, false, tcg_gen_gvec_shli)
103
exynos4_board_ram_size[board_type]);
111
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
104
112
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
105
- s->soc = exynos4210_init(get_system_memory());
113
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
106
+ object_initialize(&s->soc, sizeof(s->soc), TYPE_EXYNOS4210_SOC);
114
/* These right shifts use a left-shift helper with negated shift count */
107
+ qdev_set_parent_bus(DEVICE(&s->soc), sysbus_get_default());
115
-DO_2SHIFT(VSHRI_S, vshli_s, true)
108
+ object_property_set_bool(OBJECT(&s->soc), true, "realized",
116
-DO_2SHIFT(VSHRI_U, vshli_u, true)
109
+ &error_fatal);
117
+DO_2SHIFT_VEC(VSHRI_S, vshli_s, true, do_gvec_shri_s)
110
118
+DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
111
return s;
119
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
112
}
120
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
113
@@ -XXX,XX +XXX,XX @@ static void smdkc210_init(MachineState *machine)
114
EXYNOS4_BOARD_SMDKC210);
115
116
lan9215_init(SMDK_LAN9118_BASE_ADDR,
117
- qemu_irq_invert(s->soc->irq_table[exynos4210_get_irq(37, 1)]));
118
+ qemu_irq_invert(s->soc.irq_table[exynos4210_get_irq(37, 1)]));
119
arm_load_kernel(ARM_CPU(first_cpu), &exynos4_board_binfo);
120
}
121
121
122
--
122
--
123
2.20.1
123
2.20.1
124
124
125
125
diff view generated by jsdifflib
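A rough standalone sketch (not part of the patch; sshr8() and ushr8()
are invented names, and the signed case leans on the usual
arithmetic-shift behaviour of ">>" on negative values in C) of why the
do_gvec_shri_s() and do_gvec_shri_u() helpers above can treat a shift
count equal to the element size the way they do: a signed right shift
by the full lane width fills the lane with copies of the sign bit,
which is exactly what a shift by (width - 1) already produces, while an
unsigned right shift by the full width always leaves zero, hence the
tcg_gen_gvec_dup_imm(..., 0):

    #include <stdint.h>
    #include <assert.h>

    /* One 8-bit lane; shift is 1..8 as for VSHRI on byte elements. */
    static int8_t sshr8(int8_t x, int shift)
    {
        return x >> (shift == 8 ? 7 : shift);   /* clamp like do_gvec_shri_s() */
    }

    static uint8_t ushr8(uint8_t x, int shift)
    {
        return shift == 8 ? 0 : x >> shift;     /* zero like do_gvec_shri_u() */
    }

    int main(void)
    {
        assert(sshr8(-0x40, 8) == -1);  /* lane becomes all sign bits */
        assert(sshr8(0x40, 8) == 0);
        assert(ushr8(0xc0, 8) == 0);    /* every bit shifted out */
        return 0;
    }
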
1
The system_clock_scale global is used only by the armv7m systick
1
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
2
device; move the extern declaration to the armv7m_systick.h header,
2
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
3
and expand the comment to explain what it is and that it should
3
with zero shift count".
4
ideally be replaced with a different approach.
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
9
Message-id: 20190516163857.6430-2-peter.maydell@linaro.org
10
---
8
---
11
include/hw/arm/arm.h | 4 ----
9
target/arm/translate-mve.c | 67 +++++++++++++++++++++++++++++++++-----
12
include/hw/timer/armv7m_systick.h | 22 ++++++++++++++++++++++
10
1 file changed, 59 insertions(+), 8 deletions(-)
13
2 files changed, 22 insertions(+), 4 deletions(-)
14
11
15
diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/arm.h
14
--- a/target/arm/translate-mve.c
18
+++ b/include/hw/arm/arm.h
15
+++ b/target/arm/translate-mve.c
19
@@ -XXX,XX +XXX,XX @@ void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
16
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
20
const struct arm_boot_info *info,
17
DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
21
hwaddr mvbar_addr);
18
DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
22
19
23
-/* Multiplication factor to convert from system clock ticks to qemu timer
20
-#define DO_VSHLL(INSN, FN) \
24
- ticks. */
21
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
25
-extern int system_clock_scale;
22
- { \
26
-
23
- static MVEGenTwoOpShiftFn * const fns[] = { \
27
#endif /* HW_ARM_H */
24
- gen_helper_mve_##FN##b, \
28
diff --git a/include/hw/timer/armv7m_systick.h b/include/hw/timer/armv7m_systick.h
25
- gen_helper_mve_##FN##h, \
29
index XXXXXXX..XXXXXXX 100644
26
- }; \
30
--- a/include/hw/timer/armv7m_systick.h
27
- return do_2shift(s, a, fns[a->size], false); \
31
+++ b/include/hw/timer/armv7m_systick.h
28
+#define DO_VSHLL(INSN, FN) \
32
@@ -XXX,XX +XXX,XX @@ typedef struct SysTickState {
29
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
33
qemu_irq irq;
30
+ { \
34
} SysTickState;
31
+ static MVEGenTwoOpShiftFn * const fns[] = { \
32
+ gen_helper_mve_##FN##b, \
33
+ gen_helper_mve_##FN##h, \
34
+ }; \
35
+ return do_2shift_vec(s, a, fns[a->size], false, do_gvec_##FN); \
36
}
35
37
36
+/*
38
+/*
37
+ * Multiplication factor to convert from system clock ticks to qemu timer
39
+ * For the VSHLL vector helpers, the vece is the size of the input
38
+ * ticks. This should be set (by board code, usually) to a value
40
+ * (ie MO_8 or MO_16); the helpers want to work in the output size.
39
+ * equal to NANOSECONDS_PER_SECOND / frq, where frq is the clock frequency
41
+ * The shift count can be 0..<input size>, inclusive. (0 is VMOVL.)
40
+ * in Hz of the CPU.
41
+ *
42
+ * This value is used by the systick device when it is running in
43
+ * its "use the CPU clock" mode (ie when SYST_CSR.CLKSOURCE == 1) to
44
+ * set how fast the timer should tick.
45
+ *
46
+ * TODO: we should refactor this so that rather than using a global
47
+ * we use a device property or something similar. This is complicated
48
+ * because (a) the property would need to be plumbed through from the
49
+ * board code down through various layers to the systick device
50
+ * and (b) the property needs to be modifiable after realize, because
51
+ * the stellaris board uses this to implement the behaviour where the
52
+ * guest can reprogram the PLL registers to downclock the CPU, and the
53
+ * systick device needs to react accordingly. Possibly this should
54
+ * be deferred until we have a good API for modelling clock trees.
55
+ */
42
+ */
56
+extern int system_clock_scale;
43
+static void do_gvec_vshllbs(unsigned vece, uint32_t dofs, uint32_t aofs,
44
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
45
+{
46
+ unsigned ovece = vece + 1;
47
+ unsigned ibits = vece == MO_8 ? 8 : 16;
48
+ tcg_gen_gvec_shli(ovece, dofs, aofs, ibits, oprsz, maxsz);
49
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
50
+}
57
+
51
+
58
#endif
52
+static void do_gvec_vshllbu(unsigned vece, uint32_t dofs, uint32_t aofs,
53
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
54
+{
55
+ unsigned ovece = vece + 1;
56
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
57
+ ovece == MO_16 ? 0xff : 0xffff, oprsz, maxsz);
58
+ tcg_gen_gvec_shli(ovece, dofs, dofs, shift, oprsz, maxsz);
59
+}
60
+
61
+static void do_gvec_vshllts(unsigned vece, uint32_t dofs, uint32_t aofs,
62
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
63
+{
64
+ unsigned ovece = vece + 1;
65
+ unsigned ibits = vece == MO_8 ? 8 : 16;
66
+ if (shift == 0) {
67
+ tcg_gen_gvec_sari(ovece, dofs, aofs, ibits, oprsz, maxsz);
68
+ } else {
69
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
70
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
71
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
72
+ }
73
+}
74
+
75
+static void do_gvec_vshlltu(unsigned vece, uint32_t dofs, uint32_t aofs,
76
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
77
+{
78
+ unsigned ovece = vece + 1;
79
+ unsigned ibits = vece == MO_8 ? 8 : 16;
80
+ if (shift == 0) {
81
+ tcg_gen_gvec_shri(ovece, dofs, aofs, ibits, oprsz, maxsz);
82
+ } else {
83
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
84
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
85
+ tcg_gen_gvec_shri(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
86
+ }
87
+}
88
+
89
DO_VSHLL(VSHLL_BS, vshllbs)
90
DO_VSHLL(VSHLL_BU, vshllbu)
91
DO_VSHLL(VSHLL_TS, vshllts)
59
--
92
--
60
2.20.1
93
2.20.1
61
94
62
95
diff view generated by jsdifflib
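A rough standalone sketch of why the do_gvec_vshllbs() lowering above
is equivalent to sign-extending the low byte of each 16-bit output lane
and then shifting it left (vshllbs_lane() is an invented name, and the
check leans on the usual two's-complement behaviour of signed shifts
and narrowing conversions in C):

    #include <stdint.h>
    #include <assert.h>

    /* One 16-bit output lane; the 8-bit input sits in the low half. */
    static int16_t vshllbs_lane(uint16_t lane, int shift)
    {
        /* shli by 8, then sari by (8 - shift), as in do_gvec_vshllbs() */
        return (int16_t)(lane << 8) >> (8 - shift);
    }

    int main(void)
    {
        uint16_t lane = 0x7f9c;              /* low byte 0x9c, i.e. -100 */
        for (int shift = 0; shift <= 8; shift++) {
            /* reference: sign-extend the low byte, then scale by 2^shift */
            int16_t ref = (int8_t)(lane & 0xff) * (1 << shift);
            assert(vshllbs_lane(lane, shift) == ref);
        }
        return 0;
    }
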
New patch
1
Optimize the MVE shift-and-insert insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
14
+++ b/target/arm/translate-mve.c
15
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
16
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
17
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
18
19
-DO_2SHIFT(VSRI, vsri, false)
20
-DO_2SHIFT(VSLI, vsli, false)
21
+DO_2SHIFT_VEC(VSRI, vsri, false, gen_gvec_sri)
22
+DO_2SHIFT_VEC(VSLI, vsli, false, gen_gvec_sli)
23
24
#define DO_2SHIFT_FP(INSN, FN) \
25
static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
26
--
27
2.20.1
28
29
1
The header file hw/arm/arm.h now includes only declarations
1
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
2
relating to hw/arm/boot.c functionality. Rename it accordingly,
2
use TCG vector ops when possible.
3
and adjust its header comment.
4
5
The bulk of this commit was created via
6
perl -pi -e 's|hw/arm/arm.h|hw/arm/boot.h|' hw/arm/*.c include/hw/arm/*.h
7
8
In a few cases we can just delete the #include:
9
hw/arm/msf2-soc.c, include/hw/arm/aspeed_soc.h and
10
include/hw/arm/bcm2836.h did not require it.
11
3
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
15
Message-id: 20190516163857.6430-4-peter.maydell@linaro.org
16
---
7
---
17
include/hw/arm/allwinner-a10.h | 2 +-
8
target/arm/translate-mve.c | 26 +++++++++++++++++++++-----
18
include/hw/arm/aspeed_soc.h | 1 -
9
1 file changed, 21 insertions(+), 5 deletions(-)
19
include/hw/arm/bcm2836.h | 1 -
20
include/hw/arm/{arm.h => boot.h} | 8 ++++----
21
include/hw/arm/fsl-imx25.h | 2 +-
22
include/hw/arm/fsl-imx31.h | 2 +-
23
include/hw/arm/fsl-imx6.h | 2 +-
24
include/hw/arm/fsl-imx6ul.h | 2 +-
25
include/hw/arm/fsl-imx7.h | 2 +-
26
include/hw/arm/virt.h | 2 +-
27
include/hw/arm/xlnx-versal.h | 2 +-
28
include/hw/arm/xlnx-zynqmp.h | 2 +-
29
hw/arm/armsse.c | 2 +-
30
hw/arm/armv7m.c | 2 +-
31
hw/arm/aspeed.c | 2 +-
32
hw/arm/boot.c | 2 +-
33
hw/arm/collie.c | 2 +-
34
hw/arm/exynos4210.c | 2 +-
35
hw/arm/exynos4_boards.c | 2 +-
36
hw/arm/highbank.c | 2 +-
37
hw/arm/integratorcp.c | 2 +-
38
hw/arm/mainstone.c | 2 +-
39
hw/arm/microbit.c | 2 +-
40
hw/arm/mps2-tz.c | 2 +-
41
hw/arm/mps2.c | 2 +-
42
hw/arm/msf2-soc.c | 1 -
43
hw/arm/msf2-som.c | 2 +-
44
hw/arm/musca.c | 2 +-
45
hw/arm/musicpal.c | 2 +-
46
hw/arm/netduino2.c | 2 +-
47
hw/arm/nrf51_soc.c | 2 +-
48
hw/arm/nseries.c | 2 +-
49
hw/arm/omap1.c | 2 +-
50
hw/arm/omap2.c | 2 +-
51
hw/arm/omap_sx1.c | 2 +-
52
hw/arm/palm.c | 2 +-
53
hw/arm/raspi.c | 2 +-
54
hw/arm/realview.c | 2 +-
55
hw/arm/spitz.c | 2 +-
56
hw/arm/stellaris.c | 2 +-
57
hw/arm/stm32f205_soc.c | 2 +-
58
hw/arm/strongarm.c | 2 +-
59
hw/arm/tosa.c | 2 +-
60
hw/arm/versatilepb.c | 2 +-
61
hw/arm/vexpress.c | 2 +-
62
hw/arm/virt.c | 2 +-
63
hw/arm/xilinx_zynq.c | 2 +-
64
hw/arm/xlnx-versal.c | 2 +-
65
hw/arm/z2.c | 2 +-
66
49 files changed, 49 insertions(+), 52 deletions(-)
67
rename include/hw/arm/{arm.h => boot.h} (98%)
68
10
69
diff --git a/include/hw/arm/allwinner-a10.h b/include/hw/arm/allwinner-a10.h
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
70
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
71
--- a/include/hw/arm/allwinner-a10.h
13
--- a/target/arm/translate-mve.c
72
+++ b/include/hw/arm/allwinner-a10.h
14
+++ b/target/arm/translate-mve.c
73
@@ -XXX,XX +XXX,XX @@
15
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
74
#include "qemu-common.h"
16
return true;
75
#include "qemu/error-report.h"
17
}
76
#include "hw/char/serial.h"
18
77
-#include "hw/arm/arm.h"
19
-static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
78
+#include "hw/arm/boot.h"
20
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn,
79
#include "hw/timer/allwinner-a10-pit.h"
21
+ GVecGen2iFn *vecfn)
80
#include "hw/intc/allwinner-a10-pic.h"
81
#include "hw/net/allwinner_emac.h"
82
diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
83
index XXXXXXX..XXXXXXX 100644
84
--- a/include/hw/arm/aspeed_soc.h
85
+++ b/include/hw/arm/aspeed_soc.h
86
@@ -XXX,XX +XXX,XX @@
87
#ifndef ASPEED_SOC_H
88
#define ASPEED_SOC_H
89
90
-#include "hw/arm/arm.h"
91
#include "hw/intc/aspeed_vic.h"
92
#include "hw/misc/aspeed_scu.h"
93
#include "hw/misc/aspeed_sdmc.h"
94
diff --git a/include/hw/arm/bcm2836.h b/include/hw/arm/bcm2836.h
95
index XXXXXXX..XXXXXXX 100644
96
--- a/include/hw/arm/bcm2836.h
97
+++ b/include/hw/arm/bcm2836.h
98
@@ -XXX,XX +XXX,XX @@
99
#ifndef BCM2836_H
100
#define BCM2836_H
101
102
-#include "hw/arm/arm.h"
103
#include "hw/arm/bcm2835_peripherals.h"
104
#include "hw/intc/bcm2836_control.h"
105
106
diff --git a/include/hw/arm/arm.h b/include/hw/arm/boot.h
107
similarity index 98%
108
rename from include/hw/arm/arm.h
109
rename to include/hw/arm/boot.h
110
index XXXXXXX..XXXXXXX 100644
111
--- a/include/hw/arm/arm.h
112
+++ b/include/hw/arm/boot.h
113
@@ -XXX,XX +XXX,XX @@
114
/*
115
- * Misc ARM declarations
116
+ * ARM kernel loader.
117
*
118
* Copyright (c) 2006 CodeSourcery.
119
* Written by Paul Brook
120
@@ -XXX,XX +XXX,XX @@
121
*
122
*/
123
124
-#ifndef HW_ARM_H
125
-#define HW_ARM_H
126
+#ifndef HW_ARM_BOOT_H
127
+#define HW_ARM_BOOT_H
128
129
#include "exec/memory.h"
130
#include "target/arm/cpu-qom.h"
131
@@ -XXX,XX +XXX,XX @@ void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
132
const struct arm_boot_info *info,
133
hwaddr mvbar_addr);
134
135
-#endif /* HW_ARM_H */
136
+#endif /* HW_ARM_BOOT_H */
137
diff --git a/include/hw/arm/fsl-imx25.h b/include/hw/arm/fsl-imx25.h
138
index XXXXXXX..XXXXXXX 100644
139
--- a/include/hw/arm/fsl-imx25.h
140
+++ b/include/hw/arm/fsl-imx25.h
141
@@ -XXX,XX +XXX,XX @@
142
#ifndef FSL_IMX25_H
143
#define FSL_IMX25_H
144
145
-#include "hw/arm/arm.h"
146
+#include "hw/arm/boot.h"
147
#include "hw/intc/imx_avic.h"
148
#include "hw/misc/imx25_ccm.h"
149
#include "hw/char/imx_serial.h"
150
diff --git a/include/hw/arm/fsl-imx31.h b/include/hw/arm/fsl-imx31.h
151
index XXXXXXX..XXXXXXX 100644
152
--- a/include/hw/arm/fsl-imx31.h
153
+++ b/include/hw/arm/fsl-imx31.h
154
@@ -XXX,XX +XXX,XX @@
155
#ifndef FSL_IMX31_H
156
#define FSL_IMX31_H
157
158
-#include "hw/arm/arm.h"
159
+#include "hw/arm/boot.h"
160
#include "hw/intc/imx_avic.h"
161
#include "hw/misc/imx31_ccm.h"
162
#include "hw/char/imx_serial.h"
163
diff --git a/include/hw/arm/fsl-imx6.h b/include/hw/arm/fsl-imx6.h
164
index XXXXXXX..XXXXXXX 100644
165
--- a/include/hw/arm/fsl-imx6.h
166
+++ b/include/hw/arm/fsl-imx6.h
167
@@ -XXX,XX +XXX,XX @@
168
#ifndef FSL_IMX6_H
169
#define FSL_IMX6_H
170
171
-#include "hw/arm/arm.h"
172
+#include "hw/arm/boot.h"
173
#include "hw/cpu/a9mpcore.h"
174
#include "hw/misc/imx6_ccm.h"
175
#include "hw/misc/imx6_src.h"
176
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
177
index XXXXXXX..XXXXXXX 100644
178
--- a/include/hw/arm/fsl-imx6ul.h
179
+++ b/include/hw/arm/fsl-imx6ul.h
180
@@ -XXX,XX +XXX,XX @@
181
#ifndef FSL_IMX6UL_H
182
#define FSL_IMX6UL_H
183
184
-#include "hw/arm/arm.h"
185
+#include "hw/arm/boot.h"
186
#include "hw/cpu/a15mpcore.h"
187
#include "hw/misc/imx6ul_ccm.h"
188
#include "hw/misc/imx6_src.h"
189
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
190
index XXXXXXX..XXXXXXX 100644
191
--- a/include/hw/arm/fsl-imx7.h
192
+++ b/include/hw/arm/fsl-imx7.h
193
@@ -XXX,XX +XXX,XX @@
194
#ifndef FSL_IMX7_H
195
#define FSL_IMX7_H
196
197
-#include "hw/arm/arm.h"
198
+#include "hw/arm/boot.h"
199
#include "hw/cpu/a15mpcore.h"
200
#include "hw/intc/imx_gpcv2.h"
201
#include "hw/misc/imx7_ccm.h"
202
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
203
index XXXXXXX..XXXXXXX 100644
204
--- a/include/hw/arm/virt.h
205
+++ b/include/hw/arm/virt.h
206
@@ -XXX,XX +XXX,XX @@
207
#include "exec/hwaddr.h"
208
#include "qemu/notify.h"
209
#include "hw/boards.h"
210
-#include "hw/arm/arm.h"
211
+#include "hw/arm/boot.h"
212
#include "hw/block/flash.h"
213
#include "sysemu/kvm.h"
214
#include "hw/intc/arm_gicv3_common.h"
215
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
216
index XXXXXXX..XXXXXXX 100644
217
--- a/include/hw/arm/xlnx-versal.h
218
+++ b/include/hw/arm/xlnx-versal.h
219
@@ -XXX,XX +XXX,XX @@
220
#define XLNX_VERSAL_H
221
222
#include "hw/sysbus.h"
223
-#include "hw/arm/arm.h"
224
+#include "hw/arm/boot.h"
225
#include "hw/intc/arm_gicv3.h"
226
227
#define TYPE_XLNX_VERSAL "xlnx-versal"
228
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
229
index XXXXXXX..XXXXXXX 100644
230
--- a/include/hw/arm/xlnx-zynqmp.h
231
+++ b/include/hw/arm/xlnx-zynqmp.h
232
@@ -XXX,XX +XXX,XX @@
233
#ifndef XLNX_ZYNQMP_H
234
235
#include "qemu-common.h"
236
-#include "hw/arm/arm.h"
237
+#include "hw/arm/boot.h"
238
#include "hw/intc/arm_gic.h"
239
#include "hw/net/cadence_gem.h"
240
#include "hw/char/cadence_uart.h"
241
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
242
index XXXXXXX..XXXXXXX 100644
243
--- a/hw/arm/armsse.c
244
+++ b/hw/arm/armsse.c
245
@@ -XXX,XX +XXX,XX @@
246
#include "hw/sysbus.h"
247
#include "hw/registerfields.h"
248
#include "hw/arm/armsse.h"
249
-#include "hw/arm/arm.h"
250
+#include "hw/arm/boot.h"
251
252
/* Format of the System Information block SYS_CONFIG register */
253
typedef enum SysConfigFormat {
254
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
255
index XXXXXXX..XXXXXXX 100644
256
--- a/hw/arm/armv7m.c
257
+++ b/hw/arm/armv7m.c
258
@@ -XXX,XX +XXX,XX @@
259
#include "qemu-common.h"
260
#include "cpu.h"
261
#include "hw/sysbus.h"
262
-#include "hw/arm/arm.h"
263
+#include "hw/arm/boot.h"
264
#include "hw/loader.h"
265
#include "elf.h"
266
#include "sysemu/qtest.h"
267
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
268
index XXXXXXX..XXXXXXX 100644
269
--- a/hw/arm/aspeed.c
270
+++ b/hw/arm/aspeed.c
271
@@ -XXX,XX +XXX,XX @@
272
#include "qemu-common.h"
273
#include "cpu.h"
274
#include "exec/address-spaces.h"
275
-#include "hw/arm/arm.h"
276
+#include "hw/arm/boot.h"
277
#include "hw/arm/aspeed.h"
278
#include "hw/arm/aspeed_soc.h"
279
#include "hw/boards.h"
280
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
281
index XXXXXXX..XXXXXXX 100644
282
--- a/hw/arm/boot.c
283
+++ b/hw/arm/boot.c
284
@@ -XXX,XX +XXX,XX @@
285
#include "qapi/error.h"
286
#include <libfdt.h>
287
#include "hw/hw.h"
288
-#include "hw/arm/arm.h"
289
+#include "hw/arm/boot.h"
290
#include "hw/arm/linux-boot-if.h"
291
#include "sysemu/kvm.h"
292
#include "sysemu/sysemu.h"
293
diff --git a/hw/arm/collie.c b/hw/arm/collie.c
294
index XXXXXXX..XXXXXXX 100644
295
--- a/hw/arm/collie.c
296
+++ b/hw/arm/collie.c
297
@@ -XXX,XX +XXX,XX @@
298
#include "hw/sysbus.h"
299
#include "hw/boards.h"
300
#include "strongarm.h"
301
-#include "hw/arm/arm.h"
302
+#include "hw/arm/boot.h"
303
#include "hw/block/flash.h"
304
#include "exec/address-spaces.h"
305
#include "cpu.h"
306
diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
307
index XXXXXXX..XXXXXXX 100644
308
--- a/hw/arm/exynos4210.c
309
+++ b/hw/arm/exynos4210.c
310
@@ -XXX,XX +XXX,XX @@
311
#include "hw/boards.h"
312
#include "sysemu/sysemu.h"
313
#include "hw/sysbus.h"
314
-#include "hw/arm/arm.h"
315
+#include "hw/arm/boot.h"
316
#include "hw/loader.h"
317
#include "hw/arm/exynos4210.h"
318
#include "hw/sd/sdhci.h"
319
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
320
index XXXXXXX..XXXXXXX 100644
321
--- a/hw/arm/exynos4_boards.c
322
+++ b/hw/arm/exynos4_boards.c
323
@@ -XXX,XX +XXX,XX @@
324
#include "sysemu/sysemu.h"
325
#include "hw/sysbus.h"
326
#include "net/net.h"
327
-#include "hw/arm/arm.h"
328
+#include "hw/arm/boot.h"
329
#include "exec/address-spaces.h"
330
#include "hw/arm/exynos4210.h"
331
#include "hw/net/lan9118.h"
332
diff --git a/hw/arm/highbank.c b/hw/arm/highbank.c
333
index XXXXXXX..XXXXXXX 100644
334
--- a/hw/arm/highbank.c
335
+++ b/hw/arm/highbank.c
336
@@ -XXX,XX +XXX,XX @@
337
#include "qemu/osdep.h"
338
#include "qapi/error.h"
339
#include "hw/sysbus.h"
340
-#include "hw/arm/arm.h"
341
+#include "hw/arm/boot.h"
342
#include "hw/loader.h"
343
#include "net/net.h"
344
#include "sysemu/kvm.h"
345
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
346
index XXXXXXX..XXXXXXX 100644
347
--- a/hw/arm/integratorcp.c
348
+++ b/hw/arm/integratorcp.c
349
@@ -XXX,XX +XXX,XX @@
350
#include "cpu.h"
351
#include "hw/sysbus.h"
352
#include "hw/boards.h"
353
-#include "hw/arm/arm.h"
354
+#include "hw/arm/boot.h"
355
#include "hw/misc/arm_integrator_debug.h"
356
#include "hw/net/smc91c111.h"
357
#include "net/net.h"
358
diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
359
index XXXXXXX..XXXXXXX 100644
360
--- a/hw/arm/mainstone.c
361
+++ b/hw/arm/mainstone.c
362
@@ -XXX,XX +XXX,XX @@
363
#include "qapi/error.h"
364
#include "hw/hw.h"
365
#include "hw/arm/pxa.h"
366
-#include "hw/arm/arm.h"
367
+#include "hw/arm/boot.h"
368
#include "net/net.h"
369
#include "hw/net/smc91c111.h"
370
#include "hw/boards.h"
371
diff --git a/hw/arm/microbit.c b/hw/arm/microbit.c
372
index XXXXXXX..XXXXXXX 100644
373
--- a/hw/arm/microbit.c
374
+++ b/hw/arm/microbit.c
375
@@ -XXX,XX +XXX,XX @@
376
#include "qemu/osdep.h"
377
#include "qapi/error.h"
378
#include "hw/boards.h"
379
-#include "hw/arm/arm.h"
380
+#include "hw/arm/boot.h"
381
#include "sysemu/sysemu.h"
382
#include "exec/address-spaces.h"
383
384
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
385
index XXXXXXX..XXXXXXX 100644
386
--- a/hw/arm/mps2-tz.c
387
+++ b/hw/arm/mps2-tz.c
388
@@ -XXX,XX +XXX,XX @@
389
#include "qemu/osdep.h"
390
#include "qapi/error.h"
391
#include "qemu/error-report.h"
392
-#include "hw/arm/arm.h"
393
+#include "hw/arm/boot.h"
394
#include "hw/arm/armv7m.h"
395
#include "hw/or-irq.h"
396
#include "hw/boards.h"
397
diff --git a/hw/arm/mps2.c b/hw/arm/mps2.c
398
index XXXXXXX..XXXXXXX 100644
399
--- a/hw/arm/mps2.c
400
+++ b/hw/arm/mps2.c
401
@@ -XXX,XX +XXX,XX @@
402
#include "qemu/osdep.h"
403
#include "qapi/error.h"
404
#include "qemu/error-report.h"
405
-#include "hw/arm/arm.h"
406
+#include "hw/arm/boot.h"
407
#include "hw/arm/armv7m.h"
408
#include "hw/or-irq.h"
409
#include "hw/boards.h"
410
diff --git a/hw/arm/msf2-soc.c b/hw/arm/msf2-soc.c
411
index XXXXXXX..XXXXXXX 100644
412
--- a/hw/arm/msf2-soc.c
413
+++ b/hw/arm/msf2-soc.c
414
@@ -XXX,XX +XXX,XX @@
415
#include "qemu/units.h"
416
#include "qapi/error.h"
417
#include "qemu-common.h"
418
-#include "hw/arm/arm.h"
419
#include "exec/address-spaces.h"
420
#include "hw/char/serial.h"
421
#include "hw/boards.h"
422
diff --git a/hw/arm/msf2-som.c b/hw/arm/msf2-som.c
423
index XXXXXXX..XXXXXXX 100644
424
--- a/hw/arm/msf2-som.c
425
+++ b/hw/arm/msf2-som.c
426
@@ -XXX,XX +XXX,XX @@
427
#include "qapi/error.h"
428
#include "qemu/error-report.h"
429
#include "hw/boards.h"
430
-#include "hw/arm/arm.h"
431
+#include "hw/arm/boot.h"
432
#include "exec/address-spaces.h"
433
#include "hw/arm/msf2-soc.h"
434
#include "cpu.h"
435
diff --git a/hw/arm/musca.c b/hw/arm/musca.c
436
index XXXXXXX..XXXXXXX 100644
437
--- a/hw/arm/musca.c
438
+++ b/hw/arm/musca.c
439
@@ -XXX,XX +XXX,XX @@
440
#include "qapi/error.h"
441
#include "exec/address-spaces.h"
442
#include "sysemu/sysemu.h"
443
-#include "hw/arm/arm.h"
444
+#include "hw/arm/boot.h"
445
#include "hw/arm/armsse.h"
446
#include "hw/boards.h"
447
#include "hw/char/pl011.h"
448
diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c
449
index XXXXXXX..XXXXXXX 100644
450
--- a/hw/arm/musicpal.c
451
+++ b/hw/arm/musicpal.c
452
@@ -XXX,XX +XXX,XX @@
453
#include "qemu-common.h"
454
#include "cpu.h"
455
#include "hw/sysbus.h"
456
-#include "hw/arm/arm.h"
457
+#include "hw/arm/boot.h"
458
#include "net/net.h"
459
#include "sysemu/sysemu.h"
460
#include "hw/boards.h"
461
diff --git a/hw/arm/netduino2.c b/hw/arm/netduino2.c
462
index XXXXXXX..XXXXXXX 100644
463
--- a/hw/arm/netduino2.c
464
+++ b/hw/arm/netduino2.c
465
@@ -XXX,XX +XXX,XX @@
466
#include "hw/boards.h"
467
#include "qemu/error-report.h"
468
#include "hw/arm/stm32f205_soc.h"
469
-#include "hw/arm/arm.h"
470
+#include "hw/arm/boot.h"
471
472
static void netduino2_init(MachineState *machine)
473
{
diff --git a/hw/arm/nrf51_soc.c b/hw/arm/nrf51_soc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nrf51_soc.c
+++ b/hw/arm/nrf51_soc.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/sysbus.h"
#include "hw/boards.h"
#include "hw/misc/unimp.h"
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/bswap.h"
#include "sysemu/sysemu.h"
#include "hw/arm/omap.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/irq.h"
#include "ui/console.h"
#include "hw/boards.h"
diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "hw/boards.h"
#include "hw/hw.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/omap.h"
#include "sysemu/sysemu.h"
#include "hw/arm/soc_dma.h"
diff --git a/hw/arm/omap2.c b/hw/arm/omap2.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap2.c
+++ b/hw/arm/omap2.c
@@ -XXX,XX +XXX,XX @@
#include "sysemu/qtest.h"
#include "hw/boards.h"
#include "hw/hw.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/omap.h"
#include "sysemu/sysemu.h"
#include "qemu/timer.h"
diff --git a/hw/arm/omap_sx1.c b/hw/arm/omap_sx1.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap_sx1.c
+++ b/hw/arm/omap_sx1.c
@@ -XXX,XX +XXX,XX @@
#include "ui/console.h"
#include "hw/arm/omap.h"
#include "hw/boards.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/block/flash.h"
#include "sysemu/qtest.h"
#include "exec/address-spaces.h"
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/palm.c
+++ b/hw/arm/palm.c
@@ -XXX,XX +XXX,XX @@
#include "ui/console.h"
#include "hw/arm/omap.h"
#include "hw/boards.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/input/tsc2xxx.h"
#include "hw/loader.h"
#include "exec/address-spaces.h"
diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/error-report.h"
#include "hw/boards.h"
#include "hw/loader.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "sysemu/sysemu.h"

#define SMPBOOT_ADDR 0x300 /* this should leave enough space for ATAGS */
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/realview.c
+++ b/hw/arm/realview.c
@@ -XXX,XX +XXX,XX @@
#include "qemu-common.h"
#include "cpu.h"
#include "hw/sysbus.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/primecell.h"
#include "hw/net/lan9118.h"
#include "hw/net/smc91c111.h"
diff --git a/hw/arm/spitz.c b/hw/arm/spitz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/spitz.c
+++ b/hw/arm/spitz.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "hw/hw.h"
#include "hw/arm/pxa.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "sysemu/sysemu.h"
#include "hw/pcmcia.h"
#include "hw/i2c/i2c.h"
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "hw/sysbus.h"
#include "hw/ssi/ssi.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "qemu/timer.h"
#include "hw/i2c/i2c.h"
#include "net/net.h"
diff --git a/hw/arm/stm32f205_soc.c b/hw/arm/stm32f205_soc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stm32f205_soc.c
+++ b/hw/arm/stm32f205_soc.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "exec/address-spaces.h"
#include "hw/arm/stm32f205_soc.h"

diff --git a/hw/arm/strongarm.c b/hw/arm/strongarm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/strongarm.c
+++ b/hw/arm/strongarm.c
@@ -XXX,XX +XXX,XX @@
#include "hw/sysbus.h"
#include "strongarm.h"
#include "qemu/error-report.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "chardev/char-fe.h"
#include "chardev/char-serial.h"
#include "sysemu/sysemu.h"
diff --git a/hw/arm/tosa.c b/hw/arm/tosa.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/tosa.c
+++ b/hw/arm/tosa.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "hw/hw.h"
#include "hw/arm/pxa.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/sharpsl.h"
#include "hw/pcmcia.h"
#include "hw/boards.h"
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/versatilepb.c
+++ b/hw/arm/versatilepb.c
@@ -XXX,XX +XXX,XX @@
#include "qemu-common.h"
#include "cpu.h"
#include "hw/sysbus.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/net/smc91c111.h"
#include "net/net.h"
#include "sysemu/sysemu.h"
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/vexpress.c
+++ b/hw/arm/vexpress.c
@@ -XXX,XX +XXX,XX @@
#include "qemu-common.h"
#include "cpu.h"
#include "hw/sysbus.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/primecell.h"
#include "hw/net/lan9118.h"
#include "hw/i2c/i2c.h"
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/option.h"
#include "qapi/error.h"
#include "hw/sysbus.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/arm/primecell.h"
#include "hw/arm/virt.h"
#include "hw/block/flash.h"
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -XXX,XX +XXX,XX @@
#include "qemu-common.h"
#include "cpu.h"
#include "hw/sysbus.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "net/net.h"
#include "exec/address-spaces.h"
#include "sysemu/sysemu.h"
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@
#include "net/net.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "kvm_arm.h"
#include "hw/misc/unimp.h"
#include "hw/intc/arm_gicv3_common.h"
diff --git a/hw/arm/z2.c b/hw/arm/z2.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/z2.c
+++ b/hw/arm/z2.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "hw/arm/pxa.h"
-#include "hw/arm/arm.h"
+#include "hw/arm/boot.h"
#include "hw/i2c/i2c.h"
#include "hw/ssi/ssi.h"
#include "hw/boards.h"
--
2.20.1

{
TCGv_ptr qd;
uint64_t imm;
@@ -XXX,XX +XXX,XX @@ static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)

imm = asimd_imm_const(a->imm, a->cmode, a->op);

- qd = mve_qreg_ptr(a->qd);
- fn(cpu_env, qd, tcg_constant_i64(imm));
- tcg_temp_free_ptr(qd);
+ if (vecfn && mve_no_predication(s)) {
+ vecfn(MO_64, mve_qreg_offset(a->qd), mve_qreg_offset(a->qd),
+ imm, 16, 16);
+ } else {
+ qd = mve_qreg_ptr(a->qd);
+ fn(cpu_env, qd, tcg_constant_i64(imm));
+ tcg_temp_free_ptr(qd);
+ }
mve_update_eci(s);
return true;
}

+static void gen_gvec_vmovi(unsigned vece, uint32_t dofs, uint32_t aofs,
+ int64_t c, uint32_t oprsz, uint32_t maxsz)
+{
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, c);
+}
+
static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
{
/* Handle decode of cmode/op here between VORR/VBIC/VMOV */
MVEGenOneOpImmFn *fn;
+ GVecGen2iFn *vecfn;

if ((a->cmode & 1) && a->cmode < 12) {
if (a->op) {
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
* so the VBIC becomes a logical AND operation.
*/
fn = gen_helper_mve_vandi;
+ vecfn = tcg_gen_gvec_andi;
} else {
fn = gen_helper_mve_vorri;
+ vecfn = tcg_gen_gvec_ori;
}
} else {
/* There is one unallocated cmode/op combination in this space */
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
}
/* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
fn = gen_helper_mve_vmovi;
+ vecfn = gen_gvec_vmovi;
}
- return do_1imm(s, a, fn);
+ return do_1imm(s, a, fn, vecfn);
}

static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
--
2.20.1
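For anyone reading the translate-mve.c hunk above out of context: the optimisation is a two-way dispatch at translation time. When the translator can prove that no MVE predication is in effect, it emits the plain full-width vector expansion inline; otherwise it still calls the out-of-line helper, which has to honour the per-beat predicate. The following is a minimal standalone C sketch of that idea, not QEMU code; the names orri_vector, orri_predicated and do_orri are made up for illustration.

/*
 * Standalone sketch of the "no predication -> full-vector fast path"
 * dispatch used in the do_1imm() hunk above. Illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MVE_VLEN_BYTES 16

/* Slow path: OR an immediate byte into each enabled element only. */
static void orri_predicated(uint8_t q[MVE_VLEN_BYTES], uint8_t imm,
                            const uint8_t pred[MVE_VLEN_BYTES])
{
    for (int i = 0; i < MVE_VLEN_BYTES; i++) {
        if (pred[i]) {
            q[i] |= imm;
        }
    }
}

/* Fast path: no predication active, so update the whole register. */
static void orri_vector(uint8_t q[MVE_VLEN_BYTES], uint8_t imm)
{
    for (int i = 0; i < MVE_VLEN_BYTES; i++) {
        q[i] |= imm;
    }
}

/* Dispatch mirroring the "if (vecfn && mve_no_predication(s))" test. */
static void do_orri(uint8_t q[MVE_VLEN_BYTES], uint8_t imm,
                    const uint8_t pred[MVE_VLEN_BYTES], int no_predication)
{
    if (no_predication) {
        orri_vector(q, imm);           /* analogous to the inline gvec expansion */
    } else {
        orri_predicated(q, imm, pred); /* analogous to calling the helper */
    }
}

int main(void)
{
    uint8_t q[MVE_VLEN_BYTES] = { 0 };
    uint8_t pred[MVE_VLEN_BYTES];

    memset(pred, 1, sizeof(pred));  /* all beats enabled */
    do_orri(q, 0x0f, pred, 1);      /* fast path */
    printf("q[0] = 0x%02x\n", q[0]);

    pred[0] = 0;                    /* first byte predicated off */
    do_orri(q, 0xf0, pred, 0);      /* slow path honours the mask */
    printf("q[0] = 0x%02x q[1] = 0x%02x\n", q[0], q[1]);
    return 0;
}

Compiled with any C compiler, this prints 0x0f for every byte after the unpredicated call and leaves the masked-off byte untouched after the predicated one, which is exactly the behavioural distinction the translator has to preserve when it picks the fast path.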