As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 784c2e4f232adf5ef47a84a262ec72a07d068d6a:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h            |  227 ++++++-
 target/arm/internals.h      |   45 +-
 target/arm/kvm_arm.h        |   24 +
 target/arm/translate.h      |   21 +
 hw/arm/boot.c               |   18 +
 hw/intc/armv7m_nvic.c       |   12 +-
 hw/net/cadence_gem.c        |    9 +-
 hw/sd/ssi-sd.c              |    2 +
 linux-user/aarch64/signal.c |    4 +-
 linux-user/elfload.c        |   60 +-
 linux-user/syscall.c        |   10 +-
 target/arm/cpu.c            |  242 ++++----
 target/arm/cpu64.c          |  148 +++--
 target/arm/helper.c         |  397 ++++++++----
 target/arm/kvm.c            |   60 ++
 target/arm/kvm32.c          |   13 +
 target/arm/kvm64.c          |   15 +-
 target/arm/machine.c        |   28 +-
 target/arm/op_helper.c      |    2 +-
 target/arm/translate-a64.c  |  715 ++++-----------------
 target/arm/translate.c      | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)

arm queue: big stuff here is my MVE codegen optimisation,
and Alex's Apple Silicon hvf support.

-- PMM

The following changes since commit 7adb961995a3744f51396502b33ad04a56a317c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20210916' into staging (2021-09-19 18:53:29 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210920

for you to fetch changes up to 1dc5a60bfe406bc1122d68cbdefda38d23134b27:

  target/arm: Optimize MVE 1op-immediate insns (2021-09-20 14:18:01 +0100)

----------------------------------------------------------------
target-arm queue:
 * Optimize codegen for MVE when predication not active
 * hvf: Add Apple Silicon support
 * hw/intc: Set GIC maintenance interrupt level to only 0 or 1
 * Fix mishandling of MVE FPSCR.LTPSIZE reset for usermode emulator
 * elf2dmp: Fix coverity nits

----------------------------------------------------------------
Alexander Graf (7):
      arm: Move PMC register definitions to internals.h
      hvf: Add execute to dirty log permission bitmap
      hvf: Introduce hvf_arch_init() callback
      hvf: Add Apple Silicon support
      hvf: arm: Implement PSCI handling
      arm: Add Hypervisor.framework build target
      hvf: arm: Add rudimentary PMC support

Peter Collingbourne (1):
      arm/hvf: Add a WFI handler

Peter Maydell (18):
      elf2dmp: Check curl_easy_setopt() return value
      elf2dmp: Fail cleanly if PDB file specifies zero block_size
      target/arm: Don't skip M-profile reset entirely in user mode
      target/arm: Always clear exclusive monitor on reset
      target/arm: Consolidate ifdef blocks in reset
      hvf: arm: Implement -cpu host
      target/arm: Avoid goto_tb if we're trying to exit to the main loop
      target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
      target/arm: Add TB flag for "MVE insns not predicated"
      target/arm: Optimize MVE logic ops
      target/arm: Optimize MVE arithmetic ops
      target/arm: Optimize MVE VNEG, VABS
      target/arm: Optimize MVE VDUP
      target/arm: Optimize MVE VMVN
      target/arm: Optimize MVE VSHL, VSHR immediate forms
      target/arm: Optimize MVE VSHLL and VMOVL
      target/arm: Optimize MVE VSLI and VSRI
      target/arm: Optimize MVE 1op-immediate insns

Shashi Mallela (1):
      hw/intc: Set GIC maintenance interrupt level to only 0 or 1

 meson.build                   |    8 +
 include/sysemu/hvf_int.h      |   12 +-
 target/arm/cpu.h              |    6 +-
 target/arm/hvf_arm.h          |   18 +
 target/arm/internals.h        |   44 ++
 target/arm/kvm_arm.h          |    2 -
 target/arm/translate.h        |    2 +
 accel/hvf/hvf-accel-ops.c     |   21 +-
 contrib/elf2dmp/download.c    |   22 +-
 contrib/elf2dmp/pdb.c         |    4 +
 hw/intc/arm_gicv3_cpuif.c     |    5 +-
 target/arm/cpu.c              |   56 +-
 target/arm/helper.c           |   77 ++-
 target/arm/hvf/hvf.c          | 1278 +++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c          |   13 +
 target/arm/translate-m-nocp.c |    8 +-
 target/arm/translate-mve.c    |  310 +++++---
 target/arm/translate-vfp.c    |   33 +-
 target/arm/translate.c        |   42 +-
 target/i386/hvf/hvf.c         |   10 +
 MAINTAINERS                   |    5 +
 target/arm/hvf/meson.build    |    3 +
 target/arm/hvf/trace-events   |   11 +
 24 files changed, 1824 insertions(+), 168 deletions(-)
 create mode 100644 target/arm/hvf_arm.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events
From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends.  They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead.  This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface.  Since the SSI bus doesn't support hotplug, only -device
can be affected.  Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo ssi_sd_info = {
-- 
2.19.1
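
The mechanism the patch above relies on is a class-level veto that the
generic -device/device_add path checks before instantiating; board code
is unaffected.  A self-contained model of that check, in plain C (this
models the shape of the mechanism, it is not QEMU's actual QOM code):

#include <stdio.h>
#include <stdbool.h>

/* Model of a device class: the flag corresponds to
 * DeviceClass::user_creatable in QEMU. */
typedef struct {
    const char *name;
    bool user_creatable;
} DevClass;

/* The user-driven creation path honours the flag; a board-code
 * path would simply not perform this check. */
static int create_from_command_line(const DevClass *dc)
{
    if (!dc->user_creatable) {
        fprintf(stderr, "device '%s' can't be created with -device\n",
                dc->name);
        return -1;
    }
    printf("created %s\n", dc->name);
    return 0;
}

int main(void)
{
    DevClass ssi_sd = { .name = "ssi-sd", .user_creatable = false };
    return create_from_command_line(&ssi_sd) ? 1 : 0;
}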
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>

"The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there."

For the virt board, we write our startup bootloader at the very
bottom of RAM, so that bit can't be used for the image.  To avoid
overlap in case the image requests to be loaded at an offset
smaller than our bootloader, we increment the load offset to the
next 2MB.

This fixes a boot failure for Xen AArch64.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
[PMM: Rephrased a comment a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/config-file.h"
 #include "qemu/option.h"
 #include "exec/address-spaces.h"
+#include "qemu/units.h"
 
 /* Kernel boot protocol is specified in the kernel docs
  * Documentation/arm/Booting and Documentation/arm64/booting.txt
@@ -XXX,XX +XXX,XX @@
 #define ARM64_TEXT_OFFSET_OFFSET 8
 #define ARM64_MAGIC_OFFSET 56
 
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
+
 AddressSpace *arm_boot_address_space(ARMCPU *cpu,
                                      const struct arm_boot_info *info)
 {
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
         code[i] = tswap32(insn);
     }
 
+    assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
+
     rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
 
     g_free(code);
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
     memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
     if (hdrvals[1] != 0) {
         kernel_load_offset = le64_to_cpu(hdrvals[0]);
+
+        /*
+         * We write our startup "bootloader" at the very bottom of RAM,
+         * so that bit can't be used for the image. Luckily the Image
+         * format specification is that the image requests only an offset
+         * from a 2MB boundary, not an absolute load address. So if the
+         * image requests an offset that might mean it overlaps with the
+         * bootloader, we can just load it starting at 2MB+offset rather
+         * than 0MB + offset.
+         */
+        if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
+            kernel_load_offset += 2 * MiB;
+        }
     }
 }
-- 
2.19.1

Coverity points out that we aren't checking the return value
from curl_easy_setopt().

Fixes: Coverity CID 1458895
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 contrib/elf2dmp/download.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/contrib/elf2dmp/download.c b/contrib/elf2dmp/download.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/download.c
+++ b/contrib/elf2dmp/download.c
@@ -XXX,XX +XXX,XX @@ int download_url(const char *name, const char *url)
         goto out_curl;
     }
 
-    curl_easy_setopt(curl, CURLOPT_URL, url);
-    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL);
-    curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
-    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
-    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
-
-    if (curl_easy_perform(curl) != CURLE_OK) {
-        err = 1;
-        fclose(file);
+    if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
+            || curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL) != CURLE_OK
+            || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
+            || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
+            || curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0) != CURLE_OK
+            || curl_easy_perform(curl) != CURLE_OK) {
         unlink(name);
-        goto out_curl;
+        fclose(file);
+        err = 1;
+    } else {
+        err = fclose(file);
     }
 
-    err = fclose(file);
-
 out_curl:
     curl_easy_cleanup(curl);
-- 
2.20.1
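
The checked-setopt pattern above can be seen in isolation against the
public libcurl API.  A minimal standalone sketch of the same error
handling shape (this is illustrative only, not the elf2dmp code; build
with "cc demo.c -lcurl"):

#include <stdio.h>
#include <curl/curl.h>

static int fetch(const char *path, const char *url)
{
    CURL *curl = curl_easy_init();
    FILE *file = fopen(path, "wb");
    int err = 0;

    if (!curl || !file) {
        err = 1;
        goto out;
    }

    /* Any failed setopt, or the transfer itself, takes the error path. */
    if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L) != CURLE_OK
            || curl_easy_perform(curl) != CURLE_OK) {
        err = 1;
    }

out:
    if (file && fclose(file) != 0) {
        err = 1;   /* fclose can fail too; don't lose that error */
    }
    if (curl) {
        curl_easy_cleanup(curl);
    }
    return err;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <output-file> <url>\n", argv[0]);
        return 1;
    }
    curl_global_init(CURL_GLOBAL_DEFAULT);
    return fetch(argv[1], argv[2]);
}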
From: Richard Henderson <richard.henderson@linaro.org>

For a sequence of loads or stores from a single register,
little-endian operations can be promoted to an 8-byte op.
This can reduce the number of operations by a factor of 8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
 1 file changed, 40 insertions(+), 26 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
 
 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size)
+                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
 {
-    TCGMemOp memop = s->be_data + size;
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
     read_vec_element(s, tcg_tmp, srcidx, element, size);
-    tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
+    tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
 
     tcg_temp_free_i64(tcg_tmp);
 }
 
 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size)
+                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
 {
-    TCGMemOp memop = s->be_data + size;
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
-    tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
+    tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
     write_vec_element(s, tcg_tmp, destidx, element, size);
 
     tcg_temp_free_i64(tcg_tmp);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
+    TCGMemOp endian = s->be_data;
 
-    int ebytes = 1 << size;
-    int elements = (is_q ? 128 : 64) / (8 << size);
+    int ebytes;     /* bytes per element */
+    int elements;   /* elements per vector */
     int rpt;    /* num iterations */
     int selem;  /* structure elements */
     int r;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
         gen_check_sp_alignment(s);
     }
 
+    /* For our purposes, bytes are always little-endian.  */
+    if (size == 0) {
+        endian = MO_LE;
+    }
+
+    /* Consecutive little-endian elements from a single register
+     * can be promoted to a larger little-endian operation.
+     */
+    if (selem == 1 && endian == MO_LE) {
+        size = 3;
+    }
+    ebytes = 1 << size;
+    elements = (is_q ? 16 : 8) / ebytes;
+
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     for (r = 0; r < rpt; r++) {
         int e;
         for (e = 0; e < elements; e++) {
-            int tt = (rt + r) % 32;
             int xs;
             for (xs = 0; xs < selem; xs++) {
+                int tt = (rt + r + xs) % 32;
                 if (is_store) {
-                    do_vec_st(s, tt, e, tcg_addr, size);
+                    do_vec_st(s, tt, e, tcg_addr, size, endian);
                 } else {
-                    do_vec_ld(s, tt, e, tcg_addr, size);
-
-                    /* For non-quad operations, setting a slice of the low
-                     * 64 bits of the register clears the high 64 bits (in
-                     * the ARM ARM pseudocode this is implicit in the fact
-                     * that 'rval' is a 64 bit wide variable).
-                     * For quad operations, we might still need to zero the
-                     * high bits of SVE.  We optimize by noticing that we only
-                     * need to do this the first time we touch a register.
-                     */
-                    if (e == 0 && (r == 0 || xs == selem - 1)) {
-                        clear_vec_high(s, is_q, tt);
-                    }
+                    do_vec_ld(s, tt, e, tcg_addr, size, endian);
                 }
                 tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
-                tt = (tt + 1) % 32;
             }
         }
     }
 
+    if (!is_store) {
+        /* For non-quad operations, setting a slice of the low
+         * 64 bits of the register clears the high 64 bits (in
+         * the ARM ARM pseudocode this is implicit in the fact
+         * that 'rval' is a 64 bit wide variable).
+         * For quad operations, we might still need to zero the
+         * high bits of SVE.
+         */
+        for (r = 0; r < rpt * selem; r++) {
+            int tt = (rt + r) % 32;
+            clear_vec_high(s, is_q, tt);
+        }
+    }
+
     if (is_postidx) {
         int rm = extract32(insn, 16, 5);
         if (rm == 31) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     } else {
         /* Load/store one element per register */
         if (is_load) {
-            do_vec_ld(s, rt, index, tcg_addr, scale);
+            do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
         } else {
-            do_vec_st(s, rt, index, tcg_addr, scale);
+            do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
         }
     }
     tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
-- 
2.19.1

Coverity points out that if the PDB file we're trying to read
has a header specifying a block_size of zero then we will
end up trying to divide by zero in pdb_ds_read_file().
Check for this and fail cleanly instead.

Fixes: Coverity CID 1458869
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-3-philmd@redhat.com
Message-Id: <20210901143910.17112-3-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 contrib/elf2dmp/pdb.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/pdb.c
+++ b/contrib/elf2dmp/pdb.c
@@ -XXX,XX +XXX,XX @@ out_symbols:
 
 static int pdb_reader_ds_init(struct pdb_reader *r, PDB_DS_HEADER *hdr)
 {
+    if (hdr->block_size == 0) {
+        return 1;
+    }
+
     memset(r->file_used, 0, sizeof(r->file_used));
     r->ds.header = hdr;
     r->ds.toc = pdb_ds_read(hdr, (uint32_t *)((uint8_t *)hdr +
-- 
2.20.1
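
The factor-of-8 claim in the first patch above is just the ratio of
element size to vector size: with selem == 1 and little-endian data,
size is forced to 3 (8-byte ops), so a 128-bit register that would
otherwise take 16 one-byte accesses needs only 16/8 == 2.  A standalone
illustration in plain C (this models the idea, not the TCG
implementation):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t mem[16];
    uint64_t vreg[2];            /* one 128-bit (is_q) vector register */
    int ops;

    for (int i = 0; i < 16; i++) {
        mem[i] = (uint8_t)i;
    }

    /* unpromoted: one operation per byte element (size == 0) */
    ops = 0;
    for (int e = 0; e < 16; e++) {
        ((uint8_t *)vreg)[e] = mem[e];
        ops++;
    }
    printf("byte-at-a-time: %d ops\n", ops);

    /* promoted: size forced to 3, elements = 16 / 8, i.e. 2 ops;
     * on a little-endian host this byte-wise copy matches the
     * guest's little-endian element order */
    ops = 0;
    for (int e = 0; e < 2; e++) {
        memcpy(&vreg[e], &mem[8 * e], 8);
        ops++;
    }
    printf("promoted:       %d ops\n", ops);
    return 0;
}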
From: Richard Henderson <richard.henderson@linaro.org>

Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case.  Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 12 ++++++++++--
 linux-user/elfload.c   |  4 ++--
 target/arm/cpu.c       | 10 +---------
 target/arm/translate.c |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_VFP3,
     ARM_FEATURE_VFP_FP16,
     ARM_FEATURE_NEON,
-    ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
     ARM_FEATURE_THUMB2EE,
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_V5,
     ARM_FEATURE_STRONGARM,
     ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
-    ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
     ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
     ARM_FEATURE_GENERIC_TIMER,
     ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
 /*
  * 32-bit feature tests via id registers.
  */
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
+}
+
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
     GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
-    GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
-    GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
+    GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
+    GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
     /* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
      * Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
      * ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
          * Security Extensions is ARM_FEATURE_EL3.
          */
-        set_feature(env, ARM_FEATURE_ARM_DIV);
+        assert(cpu_isar_feature(arm_div, cpu));
         set_feature(env, ARM_FEATURE_LPAE);
         set_feature(env, ARM_FEATURE_V7);
     }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     if (arm_feature(env, ARM_FEATURE_V5)) {
         set_feature(env, ARM_FEATURE_V4T);
     }
-    if (arm_feature(env, ARM_FEATURE_M)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
-    if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
     if (arm_feature(env, ARM_FEATURE_VFP4)) {
         set_feature(env, ARM_FEATURE_VFP3);
         set_feature(env, ARM_FEATURE_VFP_FP16);
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     ARMCPU *cpu = ARM_CPU(obj);
 
     set_feature(&cpu->env, ARM_FEATURE_V7);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
-    set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
     set_feature(&cpu->env, ARM_FEATURE_V7MP);
     set_feature(&cpu->env, ARM_FEATURE_PMSA);
     cpu->midr = 0x411fc153; /* r1p3 */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                 case 1:
                 case 3:
                     /* SDIV, UDIV */
-                    if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
+                    if (!dc_isar_feature(arm_div, s)) {
                         goto illegal_op;
                     }
                     if (((insn >> 5) & 7) || (rd != 15)) {
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
                 tmp2 = load_reg(s, rm);
                 if ((op & 0x50) == 0x10) {
                     /* sdiv, udiv */
-                    if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
+                    if (!dc_isar_feature(thumb_div, s)) {
                         goto illegal_op;
                     }
                     if (op & 0x20)
-- 
2.19.1

Currently all of the M-profile specific code in arm_cpu_reset() is
inside a !defined(CONFIG_USER_ONLY) ifdef block.  This is
unintentional: it happened because originally the only
M-profile-specific handling was the setup of the initial SP and PC
from the vector table, which is system-emulation only.  But then we
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
code block without noticing that it was all inside a not-user-mode
ifdef.  This has generally been harmless, but with the addition of
v8.1M low-overhead-loop support we ran into a problem: the reset of
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
if a user-mode guest tried to execute the LE instruction it would
incorrectly take a UsageFault.

Adjust the ifdefs so only the really system-emulation specific parts
are covered.  Because this means we now run some reset code that sets
up initial values in the FPCCR and similar FPU related registers,
explicitly set up the registers controlling FPU context handling in
user-emulation mode so that the FPU works by design and not by
chance.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
---
 target/arm/cpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         env->uncached_cpsr = ARM_CPU_MODE_SVC;
     }
     env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
+#endif
 
     if (arm_feature(env, ARM_FEATURE_M)) {
+#ifndef CONFIG_USER_ONLY
         uint32_t initial_msp; /* Loaded from 0x0 */
         uint32_t initial_pc; /* Loaded from 0x4 */
         uint8_t *rom;
         uint32_t vecbase;
+#endif
 
         if (cpu_isar_feature(aa32_lob, cpu)) {
             /*
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
             env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
                 R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
         }
+
+#ifndef CONFIG_USER_ONLY
         /* Unlike A/R profile, M profile defines the reset LR value */
         env->regs[14] = 0xffffffff;
 
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
             env->regs[13] = initial_msp & 0xFFFFFFFC;
             env->regs[15] = initial_pc & ~1;
             env->thumb = initial_pc & 1;
+#else
+            /*
+             * For user mode we run non-secure and with access to the FPU.
+             * The FPU context is active (ie does not need further setup)
+             * and is owned by non-secure.
+             */
+            env->v7m.secure = false;
+            env->v7m.nsacr = 0xcff;
+            env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
+            env->v7m.fpccr[M_REG_S] &=
+                ~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
+            env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
+#endif
     }
 
+#ifndef CONFIG_USER_ONLY
     /* AArch32 has a hard highvec setting of 0xFFFF0000.  If we are currently
      * executing as AArch32 then check if highvecs are enabled and
      * adjust the PC accordingly.
-- 
2.20.1
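
Both new division tests above read the same four-bit ID_ISAR0.Divide
field (0: no divide, 1: SDIV/UDIV in the Thumb encoding only, 2: in the
Arm encoding too).  A self-contained sketch of that decoding, modelled
on (but not identical to) QEMU's FIELD_EX32-based helpers, with
illustrative register values that populate only the Divide field:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* ID_ISAR0.Divide is bits [27:24] per the Arm ARM. */
static inline uint32_t id_isar0_divide(uint32_t id_isar0)
{
    return (id_isar0 >> 24) & 0xf;
}

static inline bool thumb_div_present(uint32_t id_isar0)
{
    return id_isar0_divide(id_isar0) != 0;  /* Thumb SDIV/UDIV */
}

static inline bool arm_div_present(uint32_t id_isar0)
{
    return id_isar0_divide(id_isar0) > 1;   /* Arm SDIV/UDIV as well */
}

int main(void)
{
    uint32_t m0_like  = 0u << 24;   /* v6-M style: no divide at all */
    uint32_t m3_like  = 1u << 24;   /* v7-M style: Thumb divide only */
    uint32_t a15_like = 2u << 24;   /* v7VE style: Arm and Thumb */

    printf("m0-like:  thumb=%d arm=%d\n",
           thumb_div_present(m0_like), arm_div_present(m0_like));
    printf("m3-like:  thumb=%d arm=%d\n",
           thumb_div_present(m3_like), arm_div_present(m3_like));
    printf("a15-like: thumb=%d arm=%d\n",
           thumb_div_present(a15_like), arm_div_present(a15_like));
    return 0;
}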
From: Richard Henderson <richard.henderson@linaro.org>

Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division.  It is
also wrong to include ARM_FEATURE_LPAE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
     /* Some features automatically imply others: */
     if (arm_feature(env, ARM_FEATURE_V8)) {
-        set_feature(env, ARM_FEATURE_V7VE);
+        if (arm_feature(env, ARM_FEATURE_M)) {
+            set_feature(env, ARM_FEATURE_V7);
+        } else {
+            set_feature(env, ARM_FEATURE_V7VE);
+        }
     }
     if (arm_feature(env, ARM_FEATURE_V7VE)) {
         /* v7 Virtualization Extensions. In real hardware this implies
-- 
2.19.1

There's no particular reason why the exclusive monitor should
be only cleared on reset in system emulation mode.  It doesn't
hurt if it isn't cleared in user mode, but we might as well
reduce the amount of code we have that's inside an ifdef.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
---
 target/arm/cpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         env->regs[15] = 0xFFFF0000;
     }
 
+    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
+#endif
+
     /* M profile requires that reset clears the exclusive monitor;
      * A profile does not, but clearing it makes more sense than having it
      * set with an exclusive access on address zero.
      */
     arm_clear_exclusive(env);
 
-    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
-#endif
-
     if (arm_feature(env, ARM_FEATURE_PMSA)) {
         if (cpu->pmsav7_dregion > 0) {
             if (arm_feature(env, ARM_FEATURE_V8)) {
-- 
2.20.1
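
The V8M fix above is about the order and conditions of transitive
feature implications: v8 may imply V7VE only for A/R-profile cores,
because V7VE in turn drags in LPAE and Arm-encoding division, which
v8-M parts like cortex-m33 do not have.  A toy model of the corrected
chain (illustrative only, not arm_cpu_realizefn()):

#include <stdbool.h>
#include <stdio.h>

enum { F_V8, F_M, F_V7, F_V7VE, F_LPAE, F_MAX };

static void apply_implications(bool f[F_MAX])
{
    if (f[F_V8]) {
        if (f[F_M]) {
            f[F_V7] = true;        /* v8-M: plain v7, no VE */
        } else {
            f[F_V7VE] = true;
        }
    }
    if (f[F_V7VE]) {
        f[F_LPAE] = true;          /* v7VE brings LPAE with it */
        f[F_V7] = true;
    }
}

int main(void)
{
    bool m33[F_MAX] = { [F_V8] = true, [F_M] = true };
    apply_implications(m33);
    printf("cortex-m33 model: v7=%d v7ve=%d lpae=%d\n",
           m33[F_V7], m33[F_V7VE], m33[F_LPAE]);
    return 0;
}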
Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so
it can be merged with another earlier one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
---
 target/arm/cpu.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         env->uncached_cpsr = ARM_CPU_MODE_SVC;
     }
     env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
+
+    /* AArch32 has a hard highvec setting of 0xFFFF0000.  If we are currently
+     * executing as AArch32 then check if highvecs are enabled and
+     * adjust the PC accordingly.
+     */
+    if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
+        env->regs[15] = 0xFFFF0000;
+    }
+
+    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
 #endif
 
     if (arm_feature(env, ARM_FEATURE_M)) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
 #endif
     }
 
-#ifndef CONFIG_USER_ONLY
-    /* AArch32 has a hard highvec setting of 0xFFFF0000.  If we are currently
-     * executing as AArch32 then check if highvecs are enabled and
-     * adjust the PC accordingly.
-     */
-    if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
-        env->regs[15] = 0xFFFF0000;
-    }
-
-    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
-#endif
-
     /* M profile requires that reset clears the exclusive monitor;
      * A profile does not, but clearing it makes more sense than having it
      * set with an exclusive access on address zero.
-- 
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 131 +++++++++++++++++++++++++++++++++----
 target/arm/translate.h     |   7 ++
 linux-user/elfload.c       |  46 ++++++++-----
 target/arm/cpu.c           |  27 +++++---
 target/arm/cpu64.c         |  57 +++++++++-------
 target/arm/translate-a64.c | 101 ++++++++++++++--------------
 target/arm/translate.c     |  36 +++++-----
 7 files changed, 273 insertions(+), 132 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
     PSCI_ON_PENDING = 2
 } ARMPSCIState;
 
+typedef struct ARMISARegisters ARMISARegisters;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
     ARM_FEATURE_V8,
     ARM_FEATURE_AARCH64, /* supports 64 bit mode */
-    ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
     ARM_FEATURE_CBAR, /* has cp15 CBAR */
     ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
     ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
     ARM_FEATURE_EL2, /* has EL2 Virtualization support */
     ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
-    ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
     ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
     ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
-    ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
-    ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
-    ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
-    ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions.  */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c.  */
 extern const uint64_t pred_esz_masks[4];
 
+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+/*
+ * 64-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
+}
+
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
+}
+
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
+}
+
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
+}
+
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
+}
+
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
+}
+
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
+}
+
+/*
+ * Forward to the above feature tests given an ARMCPU pointer.
+ */
+#define cpu_isar_feature(name, cpu) \
+    ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
+
 #endif
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@
 /* internal defines */
 typedef struct DisasContext {
     DisasContextBase base;
+    const ARMISARegisters *isar;
 
     target_ulong pc;
     target_ulong page_start;
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }
 
+/*
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
+ */
+#define dc_isar_feature(name, ctx) \
+    ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
+
 #endif /* TARGET_ARM_TRANSLATE_H */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
+
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
     /* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
     ARMCPU *cpu = ARM_CPU(thread_cpu);
     uint32_t hwcaps = 0;
 
-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
+    GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
+    GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
+    GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
+    GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
+    GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
     return hwcaps;
 }
 
 #undef GET_FEATURE
+#undef GET_FEATURE_ID
 
 #else
 /* 64 bit ARM definitions */
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
-    GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
-    GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
-    GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
+    GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
+    GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
+    GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
+    GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
+    GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
+    GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
+    GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
+    GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
+    GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
     GET_FEATURE(ARM_FEATURE_V8_FP16,
                 ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
-    GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
-    GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
-    GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
-    GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
+    GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
+    GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
+    GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
+    GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
+
 #undef GET_FEATURE
+#undef GET_FEATURE_ID
 
     return hwcaps;
 }
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         cortex_a15_initfn(obj);
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
+         * since we don't correctly set (all of) the ID registers to
+         * advertise them.
          */
         set_feature(&cpu->env, ARM_FEATURE_V8);
-        set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-        set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-        set_feature(&cpu->env, ARM_FEATURE_CRC);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
+        {
+            uint32_t t;
+
+            t = cpu->isar.id_isar5;
+            t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+            t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+            t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+            t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+            t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+            t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+            cpu->isar.id_isar5 = t;
+
+            t = cpu->isar.id_isar6;
+            t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+            cpu->isar.id_isar6 = t;
+        }
 #endif
     }
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     if (kvm_enabled()) {
        kvm_arm_set_cpu_features_from_host(cpu);
     } else {
+        uint64_t t;
+        uint32_t u;
         aarch64_a57_initfn(obj);
+
+        t = cpu->isar.id_aa64isar0;
+        t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+        t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
+        t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
+        cpu->isar.id_aa64isar0 = t;
+
+        t = cpu->isar.id_aa64isar1;
+        t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
+        cpu->isar.id_aa64isar1 = t;
+
+        /* Replicate the same data to the 32-bit id registers.  */
+        u = cpu->isar.id_isar5;
+        u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
+        u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
+        u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
+        u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
+        u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
+        u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
+        cpu->isar.id_isar5 = u;
+
+        u = cpu->isar.id_isar6;
+        u = FIELD_DP32(u, ID_ISAR6, DP, 1);
+        cpu->isar.id_isar6 = u;
+
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
          * since we don't correctly set the ID registers to advertise them,
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * whereas the architecture requires them to be present in both if
          * present in either.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
-        set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
         set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
         set_feature(&cpu->env, ARM_FEATURE_SVE);
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASP / CASPL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASPA / CASPAL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         case 0xb: /* CASL */
         case 0xe: /* CASA */
         case 0xf: /* CASAL */
-            if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
                 gen_compare_and_swap(s, rs, rt, rn, size);
                 return;
             }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     int rs = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int o3_opc = extract32(insn, 12, 4);
-    int feature = ARM_FEATURE_V8_ATOMICS;
     TCGv_i64 tcg_rn, tcg_rs;
     AtomicThreeOpFn *fn;
 
-    if (is_vector) {
+    if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
-        unallocated_encoding(s);
-        return;
-    }
 
     if (rn == 31) {
         gen_check_sp_alignment(s);
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
     TCGv_i64 tcg_acc, tcg_val;
     TCGv_i32 tcg_bytes;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_CRC)
+    if (!dc_isar_feature(aa64_crc32, s)
         || (sf == 1 && sz != 3)
         || (sf == 0 && sz == 3)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
     bool u = extract32(insn, 29, 1);
     TCGv_i32 ele1, ele2, ele3;
     TCGv_i64 res;
-    int feature;
+    bool feature;
 
     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
             return;
         }
         if (size == 3) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
+            if (!dc_isar_feature(aa64_pmull, s)) {
                 unallocated_encoding(s);
                 return;
             }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     int size = extract32(insn, 22, 2);
     bool u = extract32(insn, 29, 1);
     bool is_q = extract32(insn, 30, 1);
-    int feature, rot;
+    bool feature;
+    int rot;
 
     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_DOTPROD;
+        feature = dc_isar_feature(aa64_dp, s);
         break;
     case 0x18: /* FCMLA, #0 */
     case 0x19: /* FCMLA, #90 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_FCMA;
+        feature = dc_isar_feature(aa64_fcma, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x1d: /* SQRDMLAH */
     case 0x1f: /* SQRDMLSH */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+        if (!dc_isar_feature(aa64_rdm, s)) {
             unallocated_encoding(s);
             return;
         }
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+        if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x13: /* FCMLA #90 */
     case 0x15: /* FCMLA #180 */
     case 0x17: /* FCMLA #270 */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
+        if (!dc_isar_feature(aa64_fcma, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
     TCGv_i32 tcg_decrypt;
     CryptoThreeOpIntFn *genfn;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
-        || size != 0) {
+    if (!dc_isar_feature(aa64_aes, s) || size != 0) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     int rd = extract32(insn, 0, 5);
     CryptoThreeOpFn *genfn;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
-    int feature = ARM_FEATURE_V8_SHA256;
+    bool feature;
 
     if (size != 0) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     case 2: /* SHA1M */
     case 3: /* SHA1SU0 */
         genfn = NULL;
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         break;
     case 4: /* SHA256H */
         genfn = gen_helper_crypto_sha256h;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 5: /* SHA256H2 */
         genfn = gen_helper_crypto_sha256h2;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 6: /* SHA256SU1 */
         genfn = gen_helper_crypto_sha256su1;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     CryptoTwoOpFn *genfn;
-    int feature;
+    bool feature;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
 
     if (size != 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
 
     switch (opcode) {
     case 0: /* SHA1H */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1h;
         break;
     case 1: /* SHA1SU1 */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1su1;
         break;
     case 2: /* SHA256SU0 */
-        feature = ARM_FEATURE_V8_SHA256;
+        feature = dc_isar_feature(aa64_sha256, s);
         genfn = gen_helper_crypto_sha256su0;
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     int rm = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    int feature;
+    bool feature;
     CryptoThreeOpFn *genfn;
 
     if (o == 0) {
         switch (opcode) {
         case 0: /* SHA512H */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h;
             break;
         case 1: /* SHA512H2 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h2;
             break;
         case 2: /* SHA512SU1 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512su1;
             break;
         case 3: /* RAX1 */
-            feature = ARM_FEATURE_V8_SHA3;
+            feature = dc_isar_feature(aa64_sha3, s);
             genfn = NULL;
             break;
         }
     } else {
         switch (opcode) {
         case 0: /* SM3PARTW1 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw1;
             break;
         case 1: /* SM3PARTW2 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw2;
             break;
         case 2: /* SM4EKEY */
-            feature = ARM_FEATURE_V8_SM4;
+            feature = dc_isar_feature(aa64_sm4, s);
             genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
822
index XXXXXXX..XXXXXXX 100644
823
--- a/target/arm/translate.c
824
+++ b/target/arm/translate.c
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
827
int q, int rd, int rn, int rm)
828
{
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
830
+ if (dc_isar_feature(aa32_rdm, s)) {
831
int opr_sz = (1 + q) * 8;
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
833
vfp_reg_offset(1, rn),
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
835
return 1;
836
}
837
if (!u) { /* SHA-1 */
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
840
return 1;
841
}
842
ptr1 = vfp_reg_ptr(true, rd);
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
845
tcg_temp_free_i32(tmp4);
846
} else { /* SHA-256 */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
849
return 1;
850
}
851
ptr1 = vfp_reg_ptr(true, rd);
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
975
--
51
--
976
2.19.1
52
2.20.1
977
53
978
54
diff view generated by jsdifflib
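An editorial aside on the refactoring above: after this change the decoder asks the ID registers directly instead of consulting a parallel ARM_FEATURE_* flag, so the ID registers become the single source of truth. The following standalone C sketch is illustrative only, not QEMU code; the struct name and the assumption that the RDM field sits in ID_AA64ISAR0 bits [31:28] are mine:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the per-CPU copy of the ID registers. */
typedef struct {
    uint64_t id_aa64isar0;
} IsarRegs;

/* Pull a narrow field out of a 64-bit ID register. */
static inline uint64_t extract_field(uint64_t reg, int shift, int len)
{
    return (reg >> shift) & ((1ull << len) - 1);
}

/* A feature is present iff its ID register field is nonzero. */
static inline bool isar_feature_aa64_rdm(const IsarRegs *id)
{
    return extract_field(id->id_aa64isar0, 28, 4) != 0; /* assumed position */
}

int main(void)
{
    IsarRegs id = { .id_aa64isar0 = 1ull << 28 }; /* RDM field = 1 */
    return isar_feature_aa64_rdm(&id) ? 0 : 1;
}

This is also why `feature` becomes a bool in the hunks above: each switch case evaluates the predicate immediately, so the common tail check collapses from arm_dc_feature(s, feature) to a plain if (!feature).
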
From: Richard Henderson <rth@twiddle.net>

This can reduce the number of opcodes required for certain
complex forms of load-multiple (e.g. ld4.16b).

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_store = !extract32(insn, 22, 1);
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;

     int ebytes = 1 << size;
     int elements = (is_q ? 128 : 64) / (8 << size);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);

     for (r = 0; r < rpt; r++) {
         int e;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
                     clear_vec_high(s, is_q, tt);
                 }
             }
-            tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+            tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
             tt = (tt + 1) % 32;
         }
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
             tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
         }
     }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     bool replicate = false;
     int index = is_q << 3 | S << 2 | size;
     int ebytes, xs;
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;

     switch (scale) {
     case 3:
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);

     for (xs = 0; xs < selem; xs++) {
         if (replicate) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
                 do_vec_st(s, rt, index, tcg_addr, scale);
             }
         }
-        tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
         rt = (rt + 1) % 32;
     }

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
         tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
     }
 }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }

--
2.19.1

From: Shashi Mallela <shashi.mallela@linaro.org>

During SBSA ACS level 3 testing, it is seen that the GIC maintenance
interrupts are not triggered and the related test cases fail. This
is because we were incorrectly passing the value of the MISR register
(from maintenance_interrupt_state()) to qemu_set_irq() as the level
argument, whereas the device on the other end of this irq line
expects a 0/1 value.

Fix the logic to pass a 0/1 level indication, rather than a
0/not-0 value.

Fixes: c5fc89b36c0 ("hw/intc/arm_gicv3: Implement gicv3_cpuif_virt_update()")
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210915205809.59068-1-shashi.mallela@linaro.org
[PMM: tweaked commit message; collapsed nested if()s into one]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_cpuif.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
         }
     }

-    if (cs->ich_hcr_el2 & ICH_HCR_EL2_EN) {
-        maintlevel = maintenance_interrupt_state(cs);
+    if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) &&
+        maintenance_interrupt_state(cs) != 0) {
+        maintlevel = 1;
     }

     trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel,
--
2.20.1

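The behavioural point of Shashi's fix above is that qemu_set_irq() expects a boolean line level while MISR is a multi-bit status word. A minimal C model of the corrected logic (a sketch, not the QEMU code; I assume ICH_HCR_EL2.En is bit 0, as in the GICv3 spec):

#include <assert.h>
#include <stdint.h>

/* Normalise "any maintenance-interrupt status bit set, and the
 * interface enabled" into a 0/1 irq line level. */
static int maint_irq_level(uint64_t ich_hcr_el2, uint32_t misr)
{
    const uint64_t ICH_HCR_EL2_EN = 1u << 0;
    return (ich_hcr_el2 & ICH_HCR_EL2_EN) && misr != 0 ? 1 : 0;
}

int main(void)
{
    assert(maint_irq_level(1, 0x0) == 0); /* no status bits -> level 0 */
    assert(maint_irq_level(1, 0x4) == 1); /* any MISR bit -> level 1, not 4 */
    assert(maint_irq_level(0, 0x4) == 0); /* interface disabled -> masked */
    return 0;
}

The original code passed the raw MISR value straight through, which happened to work for bit 0 but gave the consumer a value like 4 for other status bits.
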
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c    | 10 ++++++++++
 target/arm/translate.c |  7 +------
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
     }
 }

+/**
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
+ * @psr: Program Status Register indicating CPU mode
+ *
+ * Returns, for debug logging purposes, a printable representation
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
+ * the low bits of the specified PSR.
+ */
+static inline const char *aarch32_mode_name(uint32_t psr)
+{
+    static const char cpu_mode_names[16][4] = {
+        "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
+        "???", "???", "hyp", "und", "???", "???", "???", "sys"
+    };
+
+    return cpu_mode_names[psr & 0xf];
+}
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
             mask |= CPSR_IL;
             val |= CPSR_IL;
         }
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "Illegal AArch32 mode switch attempt from %s to %s\n",
+                      aarch32_mode_name(env->uncached_cpsr),
+                      aarch32_mode_name(val));
     } else {
+        qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
+                      write_type == CPSRWriteExceptionReturn ?
+                          "Exception return from AArch32" :
+                          "AArch32 mode switch from",
+                      aarch32_mode_name(env->uncached_cpsr),
+                      aarch32_mode_name(val), env->regs[15]);
         switch_mode(env, val & CPSR_M);
     }
 }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
     translator_loop(ops, &dc.base, cpu, tb);
 }

-static const char *cpu_mode_names[16] = {
-  "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
-  "???", "???", "hyp", "und", "???", "???", "???", "sys"
-};
-
 void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                         int flags)
 {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                 psr & CPSR_V ? 'V' : '-',
                 psr & CPSR_T ? 'T' : 'A',
                 ns_status,
-                cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
+                aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
     }

     if (flags & CPU_DUMP_FPU) {
--
2.19.1

From: Alexander Graf <agraf@csgraf.de>

We will need PMC register definitions in accel specific code later.
Move all constant definitions to common arm headers so we can reuse
them.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-2-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 44 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c    | 44 ------------------------------------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
     /* All other values reserved */
 };

+/* Definitions for the PMU registers */
+#define PMCRN_MASK  0xf800
+#define PMCRN_SHIFT 11
+#define PMCRLC  0x40
+#define PMCRDP  0x20
+#define PMCRX   0x10
+#define PMCRD   0x8
+#define PMCRC   0x4
+#define PMCRP   0x2
+#define PMCRE   0x1
+/*
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
+ */
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
+
+#define PMXEVTYPER_P          0x80000000
+#define PMXEVTYPER_U          0x40000000
+#define PMXEVTYPER_NSK        0x20000000
+#define PMXEVTYPER_NSU        0x10000000
+#define PMXEVTYPER_NSH        0x08000000
+#define PMXEVTYPER_M          0x04000000
+#define PMXEVTYPER_MT         0x02000000
+#define PMXEVTYPER_EVTCOUNT   0x0000ffff
+#define PMXEVTYPER_MASK       (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
+                               PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
+                               PMXEVTYPER_M | PMXEVTYPER_MT | \
+                               PMXEVTYPER_EVTCOUNT)
+
+#define PMCCFILTR             0xf8000000
+#define PMCCFILTR_M           PMXEVTYPER_M
+#define PMCCFILTR_EL0         (PMCCFILTR | PMCCFILTR_M)
+
+static inline uint32_t pmu_num_counters(CPUARMState *env)
+{
+    return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
+}
+
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
+{
+    return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
+}
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
     REGINFO_SENTINEL
 };

-/* Definitions for the PMU registers */
-#define PMCRN_MASK  0xf800
-#define PMCRN_SHIFT 11
-#define PMCRLC  0x40
-#define PMCRDP  0x20
-#define PMCRX   0x10
-#define PMCRD   0x8
-#define PMCRC   0x4
-#define PMCRP   0x2
-#define PMCRE   0x1
-/*
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
- * which can be written as 1 to trigger behaviour but which stay RAZ).
- */
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
-
-#define PMXEVTYPER_P          0x80000000
-#define PMXEVTYPER_U          0x40000000
-#define PMXEVTYPER_NSK        0x20000000
-#define PMXEVTYPER_NSU        0x10000000
-#define PMXEVTYPER_NSH        0x08000000
-#define PMXEVTYPER_M          0x04000000
-#define PMXEVTYPER_MT         0x02000000
-#define PMXEVTYPER_EVTCOUNT   0x0000ffff
-#define PMXEVTYPER_MASK       (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
-                               PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
-                               PMXEVTYPER_M | PMXEVTYPER_MT | \
-                               PMXEVTYPER_EVTCOUNT)
-
-#define PMCCFILTR             0xf8000000
-#define PMCCFILTR_M           PMXEVTYPER_M
-#define PMCCFILTR_EL0         (PMCCFILTR | PMCCFILTR_M)
-
-static inline uint32_t pmu_num_counters(CPUARMState *env)
-{
-    return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
-}
-
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
-{
-    return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
-}
-
 typedef struct pm_event {
     uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
     /* If the event is supported on this CPU (used to generate PMCEID[01]) */
--
2.20.1

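As a worked example of the two inline helpers being moved above, here is a self-contained model with the PMCR value passed in directly (illustrative; it mirrors the definitions in the hunk rather than quoting target/arm/internals.h):

#include <assert.h>
#include <stdint.h>

/* PMCR.N (bits [15:11]) gives the number of event counters;
 * bit 31 of the counter mask stands for the cycle counter. */
#define PMCRN_MASK  0xf800
#define PMCRN_SHIFT 11

static uint32_t pmu_num_counters(uint32_t pmcr)
{
    return (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
}

static uint64_t pmu_counter_mask(uint32_t pmcr)
{
    return (1u << 31) | ((1u << pmu_num_counters(pmcr)) - 1);
}

int main(void)
{
    uint32_t pmcr = 4 << PMCRN_SHIFT;   /* N = 4 event counters */
    assert(pmu_num_counters(pmcr) == 4);
    assert(pmu_counter_mask(pmcr) == 0x8000000full);
    return 0;
}

So a PMCR.N of 4 yields the mask 0x8000000f: bit 31 selects the cycle counter and bits [3:0] the four event counters, which is exactly the set of bits a guest may set or clear in PMCNTEN* and PMINTEN*.
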
From: Richard Henderson <richard.henderson@linaro.org>

Move cmtst_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |  2 +
 target/arm/translate-a64.c | 38 ------------
 target/arm/translate.c     | 81 +++++++++++++++++++++++-----------
 3 files changed, 60 insertions(+), 61 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
 extern const GVecGen3 mla_op[4];
 extern const GVecGen3 mls_op[4];
+extern const GVecGen3 cmtst_op[4];
 extern const GVecGen2i ssra_op[4];
 extern const GVecGen2i usra_op[4];
 extern const GVecGen2i sri_op[4];
 extern const GVecGen2i sli_op[4];
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);

 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
         }
     }
 }

-/* CMTST : test is "if (X & Y != 0)". */
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    tcg_gen_and_i32(d, a, b);
-    tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
-    tcg_gen_neg_i32(d, d);
-}
-
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    tcg_gen_and_i64(d, a, b);
-    tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
-    tcg_gen_neg_i64(d, d);
-}
-
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    tcg_gen_and_vec(vece, d, a, b);
-    tcg_gen_dupi_vec(vece, a, 0);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
-}
-
 static void handle_3same_64(DisasContext *s, int opcode, bool u,
                             TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
 {
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
 /* Integer op subgroup of C3.6.16. */
 static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
 {
-    static const GVecGen3 cmtst_op[4] = {
-        { .fni4 = gen_helper_neon_tst_u8,
-          .fniv = gen_cmtst_vec,
-          .vece = MO_8 },
-        { .fni4 = gen_helper_neon_tst_u16,
-          .fniv = gen_cmtst_vec,
-          .vece = MO_16 },
-        { .fni4 = gen_cmtst_i32,
-          .fniv = gen_cmtst_vec,
-          .vece = MO_32 },
-        { .fni8 = gen_cmtst_i64,
-          .fniv = gen_cmtst_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
-    };
-
     int is_q = extract32(insn, 30, 1);
     int u = extract32(insn, 29, 1);
     int size = extract32(insn, 22, 2);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
       .vece = MO_64 },
 };

+/* CMTST : test is "if (X & Y != 0)". */
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+    tcg_gen_and_i32(d, a, b);
+    tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
+    tcg_gen_neg_i32(d, d);
+}
+
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+    tcg_gen_and_i64(d, a, b);
+    tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
+    tcg_gen_neg_i64(d, d);
+}
+
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+    tcg_gen_and_vec(vece, d, a, b);
+    tcg_gen_dupi_vec(vece, a, 0);
+    tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
+}
+
+const GVecGen3 cmtst_op[4] = {
+    { .fni4 = gen_helper_neon_tst_u8,
+      .fniv = gen_cmtst_vec,
+      .vece = MO_8 },
+    { .fni4 = gen_helper_neon_tst_u16,
+      .fniv = gen_cmtst_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_cmtst_i32,
+      .fniv = gen_cmtst_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_cmtst_i64,
+      .fniv = gen_cmtst_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .vece = MO_64 },
+};
+
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
    We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
                            u ? &mls_op[size] : &mla_op[size]);
             return 0;
+
+        case NEON_3R_VTST_VCEQ:
+            if (u) { /* VCEQ */
+                tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
+                                 vec_size, vec_size);
+            } else { /* VTST */
+                tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                               vec_size, vec_size, &cmtst_op[size]);
+            }
+            return 0;
+
+        case NEON_3R_VCGT:
+            tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
+                             rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
+            return 0;
+
+        case NEON_3R_VCGE:
+            tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
+                             rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
+            return 0;
         }

         if (size == 3) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             case NEON_3R_VQSUB:
                 GEN_NEON_INTEGER_OP_ENV(qsub);
                 break;
-            case NEON_3R_VCGT:
-                GEN_NEON_INTEGER_OP(cgt);
-                break;
-            case NEON_3R_VCGE:
-                GEN_NEON_INTEGER_OP(cge);
-                break;
             case NEON_3R_VSHL:
                 GEN_NEON_INTEGER_OP(shl);
                 break;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tmp2 = neon_load_reg(rd, pass);
             gen_neon_add(size, tmp, tmp2);
             break;
-        case NEON_3R_VTST_VCEQ:
-            if (!u) { /* VTST */
-                switch (size) {
-                case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
-                case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            } else { /* VCEQ */
-                switch (size) {
-                case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
-                case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
-            break;
         case NEON_3R_VMUL:
             /* VMUL.P8; other cases already eliminated. */
             gen_helper_neon_mul_p8(tmp, tmp, tmp2);
--
2.19.1

From: Alexander Graf <agraf@csgraf.de>

Hvf's permission bitmap during and after dirty logging does not include
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
instruction faults once dirty logging is enabled.

Add the bit to make it work properly.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-3-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/hvf/hvf-accel-ops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
     if (on) {
         slot->flags |= HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ);
+                      HV_MEMORY_READ | HV_MEMORY_EXEC);
    /* stop tracking region*/
     } else {
         slot->flags &= ~HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ | HV_MEMORY_WRITE);
+                      HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
     }
 }

--
2.20.1

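The CMTST expander above encodes "d = (a & b) != 0 ? ~0 : 0" as a setcond followed by a negate, which is cheap on every host. A scalar C model of one 64-bit lane (a sketch, not TCG code):

#include <assert.h>
#include <stdint.h>

/* Per element: all-ones if the operands share any set bit, else zero. */
static uint64_t cmtst64(uint64_t a, uint64_t b)
{
    uint64_t d = (a & b) != 0;   /* setcond: 0 or 1 */
    return -d;                   /* neg: 0 or 0xffff...ff */
}

int main(void)
{
    assert(cmtst64(0x0f, 0xf0) == 0);           /* disjoint bits */
    assert(cmtst64(0x18, 0x10) == UINT64_MAX);  /* overlapping bit */
    return 0;
}

Moving the expanders and the cmtst_op table into translate.c is what lets the AArch32 VTST path in this patch reuse them through tcg_gen_gvec_3() instead of calling per-pass helpers.
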
From: Richard Henderson <richard.henderson@linaro.org>

Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h    |   6 ++
 target/arm/translate-a64.c |  61 --------------
 target/arm/translate.c    | 162 +++++++++++++++++++++++++++----------
 3 files changed, 124 insertions(+), 105 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }

+
+/* Vector operations shared between ARM and AArch64.  */
+extern const GVecGen3 bsl_op;
+extern const GVecGen3 bit_op;
+extern const GVecGen3 bif_op;
+
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
  */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
     }
 }

-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rm);
-    tcg_gen_and_i64(rn, rn, rd);
-    tcg_gen_xor_i64(rd, rm, rn);
-}
-
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_and_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_andc_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rm);
-    tcg_gen_and_vec(vece, rn, rn, rd);
-    tcg_gen_xor_vec(vece, rd, rm, rn);
-}
-
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_and_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_andc_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
 /* Logic op (opcode == 3) subgroup of C3.6.16. */
 static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 {
-    static const GVecGen3 bsl_op = {
-        .fni8 = gen_bsl_i64,
-        .fniv = gen_bsl_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bit_op = {
-        .fni8 = gen_bit_i64,
-        .fniv = gen_bit_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bif_op = {
-        .fni8 = gen_bif_i64,
-        .fniv = gen_bif_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-
     int rd = extract32(insn, 0, 5);
     int rn = extract32(insn, 5, 5);
     int rm = extract32(insn, 16, 5);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     return 0;
 }

-/* Bitwise select.  dest = c ? t : f.  Clobbers T and F.  */
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
-{
-    tcg_gen_and_i32(t, t, c);
-    tcg_gen_andc_i32(f, f, c);
-    tcg_gen_or_i32(dest, t, f);
-}
-
 static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
 {
     switch (size) {
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
     return 1;
 }

+/*
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
+ */
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rm);
+    tcg_gen_and_i64(rn, rn, rd);
+    tcg_gen_xor_i64(rd, rm, rn);
+}
+
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_and_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_andc_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rm);
+    tcg_gen_and_vec(vece, rn, rn, rd);
+    tcg_gen_xor_vec(vece, rd, rm, rn);
+}
+
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_and_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_andc_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+const GVecGen3 bsl_op = {
+    .fni8 = gen_bsl_i64,
+    .fniv = gen_bsl_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bit_op = {
+    .fni8 = gen_bit_i64,
+    .fniv = gen_bit_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bif_op = {
+    .fni8 = gen_bif_i64,
+    .fniv = gen_bif_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
    We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 {
     int op;
     int q;
-    int rd, rn, rm;
+    int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
     int size;
     int shift;
     int pass;
     int count;
     int pairwise;
     int u;
+    int vec_size;
     uint32_t imm, mask;
     TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
     TCGv_ptr ptr1, ptr2, ptr3;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     VFP_DREG_N(rn, insn);
     VFP_DREG_M(rm, insn);
     size = (insn >> 20) & 3;
+    vec_size = q ? 16 : 8;
+    rd_ofs = neon_reg_offset(rd, 0);
+    rn_ofs = neon_reg_offset(rn, 0);
+    rm_ofs = neon_reg_offset(rm, 0);
+
     if ((insn & (1 << 23)) == 0) {
         /* Three register same length.  */
         op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                   q, rd, rn, rm);
             }
             return 1;
+
+        case NEON_3R_LOGIC: /* Logic ops.  */
+            switch ((u << 2) | size) {
+            case 0: /* VAND */
+                tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
+                                 vec_size, vec_size);
+                break;
+            case 1: /* VBIC */
+                tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
+                                  vec_size, vec_size);
+                break;
+            case 2:
+                if (rn == rm) {
+                    /* VMOV */
+                    tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
+                } else {
+                    /* VORR */
+                    tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
+                                    vec_size, vec_size);
+                }
+                break;
+            case 3: /* VORN */
+                tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
+                                 vec_size, vec_size);
+                break;
+            case 4: /* VEOR */
+                tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
+                                 vec_size, vec_size);
+                break;
+            case 5: /* VBSL */
+                tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                               vec_size, vec_size, &bsl_op);
+                break;
+            case 6: /* VBIT */
+                tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                               vec_size, vec_size, &bit_op);
+                break;
+            case 7: /* VBIF */
+                tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                               vec_size, vec_size, &bif_op);
+                break;
+            }
+            return 0;
         }
-        if (size == 3 && op != NEON_3R_LOGIC) {
+        if (size == 3) {
             /* 64-bit element instructions. */
             for (pass = 0; pass < (q ? 2 : 1); pass++) {
                 neon_load_reg64(cpu_V0, rn + pass);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             case NEON_3R_VRHADD:
                 GEN_NEON_INTEGER_OP(rhadd);
                 break;
-            case NEON_3R_LOGIC: /* Logic ops.  */
-                switch ((u << 2) | size) {
-                case 0: /* VAND */
-                    tcg_gen_and_i32(tmp, tmp, tmp2);
-                    break;
-                case 1: /* BIC */
-                    tcg_gen_andc_i32(tmp, tmp, tmp2);
-                    break;
-                case 2: /* VORR */
-                    tcg_gen_or_i32(tmp, tmp, tmp2);
-                    break;
-                case 3: /* VORN */
-                    tcg_gen_orc_i32(tmp, tmp, tmp2);
-                    break;
-                case 4: /* VEOR */
-                    tcg_gen_xor_i32(tmp, tmp, tmp2);
-                    break;
-                case 5: /* VBSL */
-                    tmp3 = neon_load_reg(rd, pass);
-                    gen_neon_bsl(tmp, tmp, tmp2, tmp3);
-                    tcg_temp_free_i32(tmp3);
-                    break;
-                case 6: /* VBIT */
-                    tmp3 = neon_load_reg(rd, pass);
-                    gen_neon_bsl(tmp, tmp, tmp3, tmp2);
-                    tcg_temp_free_i32(tmp3);
-                    break;
-                case 7: /* VBIF */
-                    tmp3 = neon_load_reg(rd, pass);
-                    gen_neon_bsl(tmp, tmp3, tmp, tmp2);
-                    tcg_temp_free_i32(tmp3);
-                    break;
-                }
-                break;
             case NEON_3R_VHSUB:
                 GEN_NEON_INTEGER_OP(hsub);
                 break;
--
2.19.1

From: Alexander Graf <agraf@csgraf.de>

We will need to install a migration helper for the ARM hvf backend.
Let's introduce an arch callback for the overall hvf init chain to
do so.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-4-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf_int.h  | 1 +
 accel/hvf/hvf-accel-ops.c | 3 ++-
 target/i386/hvf/hvf.c     | 5 +++++
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
 };

 void assert_hvf_ok(hv_return_t ret);
+int hvf_arch_init(void);
 int hvf_arch_init_vcpu(CPUState *cpu);
 void hvf_arch_vcpu_destroy(CPUState *cpu);
 int hvf_vcpu_exec(CPUState *);
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)

     hvf_state = s;
     memory_listener_register(&hvf_memory_listener, &address_space_memory);
-    return 0;
+
+    return hvf_arch_init();
 }

 static void hvf_accel_class_init(ObjectClass *oc, void *data)
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }

+int hvf_arch_init(void)
+{
+    return 0;
+}
+
 int hvf_arch_init_vcpu(CPUState *cpu)
 {
     X86CPU *x86cpu = X86_CPU(cpu);
--
2.20.1

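The bsl expander above relies on the classic two-XOR bit-select identity, rd = rm ^ ((rn ^ rm) & rd), which computes "take rn where the selector bit is set, rm where it is clear" without needing a temporary. A standalone C check of that identity (illustrative only, not TCG code):

#include <assert.h>
#include <stdint.h>

/* Mirror of the gen_bsl_i64 op sequence: rd holds the select mask. */
static uint64_t bsl64(uint64_t rd, uint64_t rn, uint64_t rm)
{
    rn ^= rm;       /* rn = rn ^ rm */
    rn &= rd;       /* rn = (rn ^ rm) & rd */
    return rm ^ rn; /* rd = rm ^ ((rn ^ rm) & rd) */
}

int main(void)
{
    uint64_t sel = 0x00ff00ff00ff00ffull;
    uint64_t t = 0x1111111111111111ull, f = 0x2222222222222222ull;
    /* Selector-set bits come from t, selector-clear bits from f. */
    assert(bsl64(sel, t, f) == ((t & sel) | (f & ~sel)));
    return 0;
}

VBIT and VBIF are the same operation with the roles of the operands permuted, which is why all three ops can share one pair of i64/vec expanders once they are hoisted out of translate-a64.c.
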
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
This patch extends the qemu-kvm state sync logic with support for
3
With Apple Silicon available to the masses, it's a good time to add support
4
KVM_GET/SET_VCPU_EVENTS, giving access to yet missing SError exception.
4
for driving its virtualization extensions from QEMU.
5
And also it can support the exception state migration.
6
5
7
The SError exception states include SError pending state and ESR value,
6
This patch adds all necessary architecture specific code to get basic VMs
8
the kvm_put/get_vcpu_events() will be called when set or get system
7
working, including save/restore.
9
registers. When do migration, if source machine has SError pending,
10
QEMU will do this migration regardless whether the target machine supports
11
to specify guest ESR value, because if target machine does not support that,
12
it can also inject the SError with zero ESR value.
13
8
14
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
9
Known limitations:
15
Reviewed-by: Andrew Jones <drjones@redhat.com>
10
11
- WFI handling is missing (follows in later patch)
12
- No watchpoint/breakpoint support
13
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
15
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
18
Message-id: 20210916155404.86958-5-agraf@csgraf.de
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
---
20
target/arm/cpu.h | 7 ++++++
21
meson.build | 1 +
21
target/arm/kvm_arm.h | 24 ++++++++++++++++++
22
include/sysemu/hvf_int.h | 10 +-
22
target/arm/kvm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++
23
accel/hvf/hvf-accel-ops.c | 9 +
23
target/arm/kvm32.c | 13 ++++++++++
24
target/arm/hvf/hvf.c | 794 ++++++++++++++++++++++++++++++++++++
24
target/arm/kvm64.c | 13 ++++++++++
25
target/i386/hvf/hvf.c | 5 +
25
target/arm/machine.c | 22 ++++++++++++++++
26
MAINTAINERS | 5 +
26
6 files changed, 139 insertions(+)
27
target/arm/hvf/trace-events | 10 +
28
7 files changed, 833 insertions(+), 1 deletion(-)
29
create mode 100644 target/arm/hvf/hvf.c
30
create mode 100644 target/arm/hvf/trace-events
27
31
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
32
diff --git a/meson.build b/meson.build
29
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
34
--- a/meson.build
31
+++ b/target/arm/cpu.h
35
+++ b/meson.build
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
36
@@ -XXX,XX +XXX,XX @@ if have_system or have_user
33
*/
37
'accel/tcg',
34
} exception;
38
'hw/core',
35
39
'target/arm',
36
+ /* Information associated with an SError */
40
+ 'target/arm/hvf',
37
+ struct {
41
'target/hppa',
38
+ uint8_t pending;
42
'target/i386',
39
+ uint8_t has_esr;
43
'target/i386/kvm',
40
+ uint64_t esr;
44
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
41
+ } serror;
42
+
43
/* Thumb-2 EE state. */
44
uint32_t teecr;
45
uint32_t teehbr;
46
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
47
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/kvm_arm.h
46
--- a/include/sysemu/hvf_int.h
49
+++ b/target/arm/kvm_arm.h
47
+++ b/include/sysemu/hvf_int.h
50
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
48
@@ -XXX,XX +XXX,XX @@
51
*/
49
#ifndef HVF_INT_H
52
void kvm_arm_reset_vcpu(ARMCPU *cpu);
50
#define HVF_INT_H
53
51
54
+/**
52
+#ifdef __aarch64__
55
+ * kvm_arm_init_serror_injection:
53
+#include <Hypervisor/Hypervisor.h>
56
+ * @cs: CPUState
54
+#else
55
#include <Hypervisor/hv.h>
56
+#endif
57
58
/* hvf_slot flags */
59
#define HVF_SLOT_LOG (1 << 0)
60
@@ -XXX,XX +XXX,XX @@ struct HVFState {
61
int num_slots;
62
63
hvf_vcpu_caps *hvf_caps;
64
+ uint64_t vtimer_offset;
65
};
66
extern HVFState *hvf_state;
67
68
struct hvf_vcpu_state {
69
- int fd;
70
+ uint64_t fd;
71
+ void *exit;
72
+ bool vtimer_masked;
73
};
74
75
void assert_hvf_ok(hv_return_t ret);
76
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *);
77
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
78
int hvf_put_registers(CPUState *);
79
int hvf_get_registers(CPUState *);
80
+void hvf_kick_vcpu_thread(CPUState *cpu);
81
82
#endif
83
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/accel/hvf/hvf-accel-ops.c
86
+++ b/accel/hvf/hvf-accel-ops.c
87
@@ -XXX,XX +XXX,XX @@
88
89
HVFState *hvf_state;
90
91
+#ifdef __aarch64__
92
+#define HV_VM_DEFAULT NULL
93
+#endif
94
+
95
/* Memory slots */
96
97
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
98
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
99
pthread_sigmask(SIG_BLOCK, NULL, &set);
100
sigdelset(&set, SIG_IPI);
101
102
+#ifdef __aarch64__
103
+ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
104
+#else
105
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
106
+#endif
107
cpu->vcpu_dirty = 1;
108
assert_hvf_ok(r);
109
110
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
111
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
112
113
ops->create_vcpu_thread = hvf_start_vcpu_thread;
114
+ ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
115
116
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
117
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
118
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
119
new file mode 100644
120
index XXXXXXX..XXXXXXX
121
--- /dev/null
122
+++ b/target/arm/hvf/hvf.c
123
@@ -XXX,XX +XXX,XX @@
124
+/*
125
+ * QEMU Hypervisor.framework support for Apple Silicon
126
+
127
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
57
+ *
128
+ *
58
+ * Check whether KVM can set guest SError syndrome.
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
130
+ * See the COPYING file in the top-level directory.
131
+ *
59
+ */
132
+ */
60
+void kvm_arm_init_serror_injection(CPUState *cs);
133
+
61
+
134
+#include "qemu/osdep.h"
62
+/**
135
+#include "qemu-common.h"
63
+ * kvm_get_vcpu_events:
136
+#include "qemu/error-report.h"
64
+ * @cpu: ARMCPU
137
+
65
+ *
138
+#include "sysemu/runstate.h"
66
+ * Get VCPU related state from kvm.
139
+#include "sysemu/hvf.h"
67
+ */
140
+#include "sysemu/hvf_int.h"
68
+int kvm_get_vcpu_events(ARMCPU *cpu);
141
+#include "sysemu/hw_accel.h"
69
+
142
+
70
+/**
143
+#include <mach/mach_time.h>
71
+ * kvm_put_vcpu_events:
144
+
72
+ * @cpu: ARMCPU
145
+#include "exec/address-spaces.h"
73
+ *
146
+#include "hw/irq.h"
74
+ * Put VCPU related state to kvm.
147
+#include "qemu/main-loop.h"
75
+ */
148
+#include "sysemu/cpus.h"
76
+int kvm_put_vcpu_events(ARMCPU *cpu);
149
+#include "target/arm/cpu.h"
77
+
150
+#include "target/arm/internals.h"
78
#ifdef CONFIG_KVM
151
+#include "trace/trace-target_arm_hvf.h"
79
/**
152
+#include "migration/vmstate.h"
80
* kvm_arm_create_scratch_host_vcpu:
153
+
81
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
154
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
82
index XXXXXXX..XXXXXXX 100644
155
+ ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
83
--- a/target/arm/kvm.c
156
+#define PL1_WRITE_MASK 0x4
84
+++ b/target/arm/kvm.c
157
+
85
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
158
+#define SYSREG(op0, op1, crn, crm, op2) \
86
};
159
+ ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
87
160
+#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
88
static bool cap_has_mp_state;
161
+#define SYSREG_OSLAR_EL1 SYSREG(2, 0, 1, 0, 4)
89
+static bool cap_has_inject_serror_esr;
162
+#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
90
163
+#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
91
static ARMHostCPUFeatures arm_host_cpu_features;
164
+#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
92
165
+
93
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
166
+#define WFX_IS_WFE (1 << 0)
94
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
167
+
95
}
168
+#define TMR_CTL_ENABLE (1 << 0)
96
169
+#define TMR_CTL_IMASK (1 << 1)
97
+void kvm_arm_init_serror_injection(CPUState *cs)
170
+#define TMR_CTL_ISTATUS (1 << 2)
98
+{
171
+
99
+ cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
172
+typedef struct HVFVTimer {
100
+ KVM_CAP_ARM_INJECT_SERROR_ESR);
173
+ /* Vtimer value during migration and paused state */
101
+}
174
+ uint64_t vtimer_val;
102
+
175
+} HVFVTimer;
103
bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
176
+
104
int *fdarray,
177
+static HVFVTimer vtimer;
105
struct kvm_vcpu_init *init)
178
+
106
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
179
+struct hvf_reg_match {
107
     return 0;
 }

+int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (cap_has_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 {
 }
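The two helpers above are deliberately symmetric around the KVM_SET_VCPU_EVENTS and
KVM_GET_VCPU_EVENTS ioctls: state is always staged through a zeroed struct so that
fields an older kernel does not know about stay zero on both paths. As a stripped-down,
QEMU-independent sketch of the same pattern (the DEMO_* ioctl numbers and the struct
are illustrative stand-ins, not a real kernel API):

    #include <string.h>
    #include <sys/ioctl.h>

    struct demo_events { unsigned char pending, has_esr; unsigned long long esr; };

    #define DEMO_SET_EVENTS _IOW('d', 0x01, struct demo_events) /* stand-in numbers */
    #define DEMO_GET_EVENTS _IOR('d', 0x02, struct demo_events)

    /* Push staged event state into the kernel; unset fields stay zero. */
    static int put_events(int vcpu_fd, const struct demo_events *ev)
    {
        return ioctl(vcpu_fd, DEMO_SET_EVENTS, ev);
    }

    /* Pull current event state back out into a freshly zeroed struct. */
    static int get_events(int vcpu_fd, struct demo_events *ev)
    {
        memset(ev, 0, sizeof(*ev));
        return ioctl(vcpu_fd, DEMO_GET_EVENTS, ev);
    }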
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
     cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;

+    /* Check whether userspace can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }

@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }

+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     /* Note that we do not call write_cpustate_to_list()
      * here, so we are only writing the tuple list back to
      * KVM. This is safe because nothing can change the
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpscr(env, fpscr);

+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)

     kvm_arm_init_debug(cs);

+    /* Check whether user space can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }

@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }

+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);

+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
 };
 #endif /* AARCH64 */

+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
         &vmstate_sve,
 #endif
+        &vmstate_serror,
         NULL
     }
 };
--
2.19.1
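One detail worth noting in the machine.c hunk above: vmstate_serror is an optional
subsection, so serror_needed() keeps it off the wire entirely while no SError is
pending, and migration streams from QEMU versions that never send it still load
cleanly. The gating idiom, reduced to a self-contained sketch with invented names:

    #include <stdbool.h>
    #include <stddef.h>

    struct section { const char *name; bool (*needed)(void *opaque); };

    static bool serror_pending_flag;
    static bool demo_needed(void *opaque) { (void)opaque; return serror_pending_flag; }

    static const struct section demo_subsections[] = {
        { "cpu/serror", demo_needed },  /* skipped on the wire when not needed */
        { NULL, NULL },                 /* NULL-terminated, as in vmstate_arm_cpu */
    };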
+    int reg;
+    uint64_t offset;
+};
+
+static const struct hvf_reg_match hvf_reg_match[] = {
+    { HV_REG_X0, offsetof(CPUARMState, xregs[0]) },
+    { HV_REG_X1, offsetof(CPUARMState, xregs[1]) },
+    { HV_REG_X2, offsetof(CPUARMState, xregs[2]) },
+    { HV_REG_X3, offsetof(CPUARMState, xregs[3]) },
+    { HV_REG_X4, offsetof(CPUARMState, xregs[4]) },
+    { HV_REG_X5, offsetof(CPUARMState, xregs[5]) },
+    { HV_REG_X6, offsetof(CPUARMState, xregs[6]) },
+    { HV_REG_X7, offsetof(CPUARMState, xregs[7]) },
+    { HV_REG_X8, offsetof(CPUARMState, xregs[8]) },
+    { HV_REG_X9, offsetof(CPUARMState, xregs[9]) },
+    { HV_REG_X10, offsetof(CPUARMState, xregs[10]) },
+    { HV_REG_X11, offsetof(CPUARMState, xregs[11]) },
+    { HV_REG_X12, offsetof(CPUARMState, xregs[12]) },
+    { HV_REG_X13, offsetof(CPUARMState, xregs[13]) },
+    { HV_REG_X14, offsetof(CPUARMState, xregs[14]) },
+    { HV_REG_X15, offsetof(CPUARMState, xregs[15]) },
+    { HV_REG_X16, offsetof(CPUARMState, xregs[16]) },
+    { HV_REG_X17, offsetof(CPUARMState, xregs[17]) },
+    { HV_REG_X18, offsetof(CPUARMState, xregs[18]) },
+    { HV_REG_X19, offsetof(CPUARMState, xregs[19]) },
+    { HV_REG_X20, offsetof(CPUARMState, xregs[20]) },
+    { HV_REG_X21, offsetof(CPUARMState, xregs[21]) },
+    { HV_REG_X22, offsetof(CPUARMState, xregs[22]) },
+    { HV_REG_X23, offsetof(CPUARMState, xregs[23]) },
+    { HV_REG_X24, offsetof(CPUARMState, xregs[24]) },
+    { HV_REG_X25, offsetof(CPUARMState, xregs[25]) },
+    { HV_REG_X26, offsetof(CPUARMState, xregs[26]) },
+    { HV_REG_X27, offsetof(CPUARMState, xregs[27]) },
+    { HV_REG_X28, offsetof(CPUARMState, xregs[28]) },
+    { HV_REG_X29, offsetof(CPUARMState, xregs[29]) },
+    { HV_REG_X30, offsetof(CPUARMState, xregs[30]) },
+    { HV_REG_PC, offsetof(CPUARMState, pc) },
+};
+
+static const struct hvf_reg_match hvf_fpreg_match[] = {
+    { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) },
+    { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) },
+    { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) },
+    { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) },
+    { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) },
+    { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) },
+    { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) },
+    { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) },
+    { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) },
+    { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) },
+    { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
+    { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
+    { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
+    { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
+    { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
+    { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
+    { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
+    { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
+    { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
+    { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
+    { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
+    { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
+    { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
+    { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
+    { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
+    { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
+    { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
+    { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
+    { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
+    { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
+    { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
+    { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
+};
+
+struct hvf_sreg_match {
+    int reg;
+    uint32_t key;
+    uint32_t cp_idx;
+};
+
+static struct hvf_sreg_match hvf_sreg_match[] = {
+    { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
+
+#ifdef SYNC_NO_RAW_REGS
+    /*
+     * The registers below are manually synced on init because they are
+     * marked as NO_RAW. We still list them to make number space sync easier.
+     */
+    { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
+    { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
+    { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
+    { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
+    { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
+#ifdef SYNC_NO_MMFR0
+    /* We keep the hardware MMFR0 around. HW limits are there anyway */
+    { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
+
+    { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
+    { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
+    { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
+    { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
+    { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
+    { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
+
+    { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
+    { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
+    { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
+    { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
+    { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
+    { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
+    { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
+    { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
+    { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
+    { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
+
+    { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
+    { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
+    { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
+    { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
+    { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
+    { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
+    { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
+    { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
+    { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
+    { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
+    { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
+    { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
+    { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
+    { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
+    { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
+    { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
+    { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
+    { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
+};
+
+int hvf_get_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
+        *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      &fpval);
+        memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
+        assert_hvf_ok(ret);
+    }
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpcr(env, val);
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpsr(env, val);
+
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
+    assert_hvf_ok(ret);
+    pstate_write(env, val);
+
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        if (hvf_sreg_match[i].cp_idx == -1) {
+            continue;
+        }
+
+        ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
+        assert_hvf_ok(ret);
+
+        arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
+    }
+    assert(write_list_to_cpustate(arm_cpu));
+
+    aarch64_restore_sp(env, arm_current_el(env));
+
+    return 0;
+}
+
+int hvf_put_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
+        ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
+        ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      fpval);
+        assert_hvf_ok(ret);
+    }
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
+    assert_hvf_ok(ret);
+
+    aarch64_save_sp(env, arm_current_el(env));
+
+    assert(write_cpustate_to_list(arm_cpu, false));
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        if (hvf_sreg_match[i].cp_idx == -1) {
+            continue;
+        }
+
+        val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
+        ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
+    assert_hvf_ok(ret);
+
+    return 0;
+}
+
+static void flush_cpu_state(CPUState *cpu)
+{
+    if (cpu->vcpu_dirty) {
+        hvf_put_registers(cpu);
+        cpu->vcpu_dirty = false;
+    }
+}
+
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
+{
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
+    if (rt < 31) {
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
+        assert_hvf_ok(r);
+    }
+}
+
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
+{
+    uint64_t val = 0;
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
+    if (rt < 31) {
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
+        assert_hvf_ok(r);
+    }
+
+    return val;
+}
+
+void hvf_arch_vcpu_destroy(CPUState *cpu)
+{
+}
+
+int hvf_arch_init_vcpu(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
+    uint32_t sregs_cnt = 0;
+    uint64_t pfr;
+    hv_return_t ret;
+    int i;
+
+    env->aarch64 = 1;
+    asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
+
+    /* Allocate enough space for our sysreg sync */
+    arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
+                                     sregs_match_len);
+    arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
+                                    sregs_match_len);
+    arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
+                                             arm_cpu->cpreg_vmstate_indexes,
+                                             sregs_match_len);
+    arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
+                                            arm_cpu->cpreg_vmstate_values,
+                                            sregs_match_len);
+
+    memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
+
+    /* Populate cp list for all known sysregs */
+    for (i = 0; i < sregs_match_len; i++) {
+        const ARMCPRegInfo *ri;
+        uint32_t key = hvf_sreg_match[i].key;
+
+        ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
+        if (ri) {
+            assert(!(ri->type & ARM_CP_NO_RAW));
+            hvf_sreg_match[i].cp_idx = sregs_cnt;
+            arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
+        } else {
+            hvf_sreg_match[i].cp_idx = -1;
+        }
+    }
+    arm_cpu->cpreg_array_len = sregs_cnt;
+    arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
+
+    assert(write_cpustate_to_list(arm_cpu, false));
+
+    /* Set CP_NO_RAW system registers on init */
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
+                              arm_cpu->midr);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
+                              arm_cpu->mp_affinity);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
+    assert_hvf_ok(ret);
+    pfr |= env->gicv3state ? (1 << 24) : 0;
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
+    assert_hvf_ok(ret);
+
+    /* We're limited to underlying hardware caps, override internal versions */
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
+                              &arm_cpu->isar.id_aa64mmfr0);
+    assert_hvf_ok(ret);
+
+    return 0;
+}
+
+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    hv_vcpus_exit(&cpu->hvf->fd, 1);
+}
+
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
+                                uint32_t syndrome)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+
+    cpu->exception_index = excp;
+    env->exception.target_el = 1;
+    env->exception.syndrome = syndrome;
+
+    arm_cpu_do_interrupt(cpu);
+}
+
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t val = 0;
+
+    switch (reg) {
+    case SYSREG_CNTPCT_EL0:
+        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
+              gt_cntfrq_period_ns(arm_cpu);
+        break;
+    case SYSREG_OSLSR_EL1:
+        val = env->cp15.oslsr_el1;
+        break;
+    case SYSREG_OSDLR_EL1:
+        /* Dummy register */
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unhandled_sysreg_read(env->pc, reg,
+                                        (reg >> 20) & 0x3,
+                                        (reg >> 14) & 0x7,
+                                        (reg >> 10) & 0xf,
+                                        (reg >> 1) & 0xf,
+                                        (reg >> 17) & 0x7);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        return 1;
+    }
+
+    trace_hvf_sysreg_read(reg,
+                          (reg >> 20) & 0x3,
+                          (reg >> 14) & 0x7,
+                          (reg >> 10) & 0xf,
+                          (reg >> 1) & 0xf,
+                          (reg >> 17) & 0x7,
+                          val);
+    hvf_set_reg(cpu, rt, val);
+
+    return 0;
+}
+
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+
+    trace_hvf_sysreg_write(reg,
+                           (reg >> 20) & 0x3,
+                           (reg >> 14) & 0x7,
+                           (reg >> 10) & 0xf,
+                           (reg >> 1) & 0xf,
+                           (reg >> 17) & 0x7,
+                           val);
+
+    switch (reg) {
+    case SYSREG_OSLAR_EL1:
+        env->cp15.oslsr_el1 = val & 1;
+        break;
+    case SYSREG_OSDLR_EL1:
+        /* Dummy register */
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unhandled_sysreg_write(env->pc, reg,
+                                         (reg >> 20) & 0x3,
+                                         (reg >> 14) & 0x7,
+                                         (reg >> 10) & 0xf,
+                                         (reg >> 1) & 0xf,
+                                         (reg >> 17) & 0x7);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        return 1;
+    }
+
+    return 0;
+}
+
+static int hvf_inject_interrupts(CPUState *cpu)
+{
+    if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
+        trace_hvf_inject_fiq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
+                                      true);
+    }
+
+    if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        trace_hvf_inject_irq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
+                                      true);
+    }
+
+    return 0;
+}
+
+static uint64_t hvf_vtimer_val_raw(void)
+{
+    /*
+     * mach_absolute_time() returns the vtimer value without the VM
+     * offset that we define. Add our own offset on top.
+     */
+    return mach_absolute_time() - hvf_state->vtimer_offset;
+}
+
+static void hvf_sync_vtimer(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    hv_return_t r;
+    uint64_t ctl;
+    bool irq_state;
+
+    if (!cpu->hvf->vtimer_masked) {
+        /* We will get notified on vtimer changes by hvf, nothing to do */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    assert_hvf_ok(r);
+
+    irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
+                (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
+    qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
+
+    if (!irq_state) {
+        /* Timer no longer asserting, we can unmask it */
+        hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
+        cpu->hvf->vtimer_masked = false;
+    }
+}
+
+int hvf_vcpu_exec(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
+    hv_return_t r;
+    bool advance_pc = false;
+
+    if (hvf_inject_interrupts(cpu)) {
+        return EXCP_INTERRUPT;
+    }
+
+    if (cpu->halted) {
+        return EXCP_HLT;
+    }
+
+    flush_cpu_state(cpu);
+
+    qemu_mutex_unlock_iothread();
+    assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
+
+    /* handle VMEXIT */
+    uint64_t exit_reason = hvf_exit->reason;
+    uint64_t syndrome = hvf_exit->exception.syndrome;
+    uint32_t ec = syn_get_ec(syndrome);
+
+    qemu_mutex_lock_iothread();
+    switch (exit_reason) {
+    case HV_EXIT_REASON_EXCEPTION:
+        /* This is the main one, handle below. */
+        break;
+    case HV_EXIT_REASON_VTIMER_ACTIVATED:
+        qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
+        cpu->hvf->vtimer_masked = true;
+        return 0;
+    case HV_EXIT_REASON_CANCELED:
+        /* we got kicked, no exit to process */
+        return 0;
+    default:
+        assert(0);
+    }
+
+    hvf_sync_vtimer(cpu);
+
+    switch (ec) {
+    case EC_DATAABORT: {
+        bool isv = syndrome & ARM_EL_ISV;
+        bool iswrite = (syndrome >> 6) & 1;
+        bool s1ptw = (syndrome >> 7) & 1;
+        uint32_t sas = (syndrome >> 22) & 3;
+        uint32_t len = 1 << sas;
+        uint32_t srt = (syndrome >> 16) & 0x1f;
+        uint64_t val = 0;
+
+        trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
+                             hvf_exit->exception.physical_address, isv,
+                             iswrite, s1ptw, len, srt);
+
+        assert(isv);
+
+        if (iswrite) {
+            val = hvf_get_reg(cpu, srt);
+            address_space_write(&address_space_memory,
+                                hvf_exit->exception.physical_address,
+                                MEMTXATTRS_UNSPECIFIED, &val, len);
+        } else {
+            address_space_read(&address_space_memory,
+                               hvf_exit->exception.physical_address,
+                               MEMTXATTRS_UNSPECIFIED, &val, len);
+            hvf_set_reg(cpu, srt, val);
+        }
+
+        advance_pc = true;
+        break;
+    }
+    case EC_SYSTEMREGISTERTRAP: {
+        bool isread = (syndrome >> 0) & 1;
+        uint32_t rt = (syndrome >> 5) & 0x1f;
+        uint32_t reg = syndrome & SYSREG_MASK;
+        uint64_t val;
+        int ret = 0;
+
+        if (isread) {
+            ret = hvf_sysreg_read(cpu, reg, rt);
+        } else {
+            val = hvf_get_reg(cpu, rt);
+            ret = hvf_sysreg_write(cpu, reg, val);
+        }
+
+        advance_pc = !ret;
+        break;
+    }
+    case EC_WFX_TRAP:
+        advance_pc = true;
+        break;
+    case EC_AA64_HVC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_hvc(env->xregs[0]);
+        /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
+        env->xregs[0] = -1;
+        break;
+    case EC_AA64_SMC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_smc(env->xregs[0]);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_exit(syndrome, ec, env->pc);
+        error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
+    }
+
+    if (advance_pc) {
+        uint64_t pc;
+
+        flush_cpu_state(cpu);
+
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+        assert_hvf_ok(r);
+        pc += 4;
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+        assert_hvf_ok(r);
+    }
+
+    return 0;
+}
+
+static const VMStateDescription vmstate_hvf_vtimer = {
+    .name = "hvf-vtimer",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(vtimer_val, HVFVTimer),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
+{
+    HVFVTimer *s = opaque;
+
+    if (running) {
+        /* Update vtimer offset on all CPUs */
+        hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
+        cpu_synchronize_all_states();
+    } else {
+        /* Remember vtimer value on every pause */
+        s->vtimer_val = hvf_vtimer_val_raw();
+    }
+}
+
+int hvf_arch_init(void)
+{
+    hvf_state->vtimer_offset = mach_absolute_time();
+    vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
+    qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
+    return 0;
+}
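For reference, the ISS fields that hvf_vcpu_exec() picks out of a data-abort
syndrome above can be exercised standalone; a minimal sketch using the same
shifts and masks (struct and function names are illustrative only):

    #include <stdbool.h>
    #include <stdint.h>

    struct dabt { bool isv, iswrite, s1ptw; uint32_t len, srt; };

    static struct dabt decode_dabt(uint64_t syndrome)
    {
        struct dabt d;
        d.isv     = (syndrome >> 24) & 1;          /* ISV, same bit as ARM_EL_ISV */
        d.iswrite = (syndrome >> 6) & 1;           /* WnR: write, not read */
        d.s1ptw   = (syndrome >> 7) & 1;           /* fault on stage-1 table walk */
        d.len     = 1u << ((syndrome >> 22) & 3);  /* SAS: access size 1/2/4/8 */
        d.srt     = (syndrome >> 16) & 0x1f;       /* transfer register number */
        return d;
    }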
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }

+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    cpus_kick_thread(cpu);
+}
+
 int hvf_arch_init(void)
 {
     return 0;
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: accel/accel-*.c
 F: accel/Makefile.objs
 F: accel/stubs/Makefile.objs

+Apple Silicon HVF CPUs
+M: Alexander Graf <agraf@csgraf.de>
+S: Maintained
+F: target/arm/hvf/
+
 X86 HVF CPUs
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <r.bolshakov@yadro.com>
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hvf/trace-events
@@ -XXX,XX +XXX,XX @@
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_inject_fiq(void) "injecting FIQ"
+hvf_inject_irq(void) "injecting IRQ"
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
+hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
--
2.20.1
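The vtimer handling in hvf_vm_state_change() above reduces to one offset
adjustment per stop/start transition, so the guest-visible counter appears
frozen across a pause. The arithmetic in isolation (now() standing in for
mach_absolute_time(); names are illustrative):

    #include <stdint.h>

    static uint64_t vtimer_offset, paused_val;

    static uint64_t guest_vtimer(uint64_t now) { return now - vtimer_offset; }

    /* On pause: remember what the guest currently sees. */
    static void on_pause(uint64_t now)  { paused_val = guest_vtimer(now); }

    /* On resume: recompute the offset so the guest sees the same value. */
    static void on_resume(uint64_t now) { vtimer_offset = now - paused_val; }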
The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only" to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
 AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
 ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
 AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
 TLBI VALE1, TLBI VAALE1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
 1 file changed, 116 insertions(+), 75 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     raw_write(env, ri, value);
 }

-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate all (TLBIALL) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate by ASID (TLBIASID) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
 /* IS variants of TLB operations must affect all cores */
 static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
 }

+/*
+ * Non-IS variants of TLB operations are upgraded to
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
+ * force broadcast of these operations.
+ */
+static bool tlb_force_broadcast(CPUARMState *env)
+{
+    return (env->cp15.hcr_el2 & HCR_FB) &&
+        arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
+}
+
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate all (TLBIALL) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiall_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimva_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate by ASID (TLBIASID) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiasid_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimvaa_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
  * Page D4-1736 (DDI0487A.b)
  */

-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
-{
-    CPUState *cs = ENV_GET_CPU(env);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S12NSE1 |
-                            ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vmalle1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S1SE1 |
+                            ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S12NSE1 |
+                            ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
 }

-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 uint64_t value)
-{
-    /* Invalidate by VA, EL1&0 (AArch64 version).
-     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
-     * since we don't support flush-for-specific-ASID-only or
-     * flush-last-level-only.
-     */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-    CPUState *cs = CPU(cpu);
-    uint64_t pageaddr = sextract64(value << 12, 0, 56);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S1SE1 |
-                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S12NSE1 |
-                                 ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL1&0 (AArch64 version).
+     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vae1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S1SE1 |
+                                 ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S12NSE1 |
+                                 ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
--
2.19.1
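The by-VA flush handlers above all recover the page address the same way: the
TLBI payload carries VA[55:12] in its low bits, so QEMU shifts it up and
sign-extends from bit 55. Standalone, with a local copy of the sextract64()
bitops helper so it builds on its own:

    #include <stdint.h>

    static int64_t sextract64(uint64_t value, int start, int length)
    {
        return ((int64_t)(value << (64 - length - start))) >> (64 - length);
    }

    /* Same expression as the handlers above: pageaddr for a TLBI-by-VA. */
    static uint64_t tlbi_pageaddr(uint64_t value)
    {
        return (uint64_t)sextract64(value << 12, 0, 56);
    }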
From: Peter Collingbourne <pcc@google.com>

Sleep on WFI until the VTIMER is due but allow ourselves to be woken
up on IPI.

In this implementation IPI is blocked on the CPU thread at startup and
pselect() is used to atomically unblock the signal and begin sleeping.
The signal is sent unconditionally so there's no need to worry about
races between actually sleeping and the "we think we're sleeping"
state. It may lead to an extra wakeup but that's better than missing
it entirely.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210916155404.86958-6-agraf@csgraf.de
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
 support vm stop / continue operations and cntv offsets]
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf_int.h  |  1 +
 accel/hvf/hvf-accel-ops.c |  5 +--
 target/arm/hvf/hvf.c      | 79 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
     uint64_t fd;
     void *exit;
     bool vtimer_masked;
+    sigset_t unblock_ipi_mask;
 };

 void assert_hvf_ok(hv_return_t ret);
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
     cpu->hvf = g_malloc0(sizeof(*cpu->hvf));

     /* init cpu signals */
-    sigset_t set;
     struct sigaction sigact;

     memset(&sigact, 0, sizeof(sigact));
     sigact.sa_handler = dummy_signal;
     sigaction(SIG_IPI, &sigact, NULL);

-    pthread_sigmask(SIG_BLOCK, NULL, &set);
-    sigdelset(&set, SIG_IPI);
+    pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
+    sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);

 #ifdef __aarch64__
     r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
  * QEMU Hypervisor.framework support for Apple Silicon

  * Copyright 2020 Alexander Graf <agraf@csgraf.de>
+ * Copyright 2020 Google LLC
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)

 void hvf_kick_vcpu_thread(CPUState *cpu)
 {
+    cpus_kick_thread(cpu);
     hv_vcpus_exit(&cpu->hvf->fd, 1);
 }

@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_vtimer_val_raw(void)
     return mach_absolute_time() - hvf_state->vtimer_offset;
 }

+static uint64_t hvf_vtimer_val(void)
+{
+    if (!runstate_is_running()) {
+        /* VM is paused, the vtimer value is in vtimer.vtimer_val */
+        return vtimer.vtimer_val;
+    }
+
+    return hvf_vtimer_val_raw();
+}
+
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
+{
+    /*
+     * Use pselect to sleep so that other threads can IPI us while we're
+     * sleeping.
+     */
+    qatomic_mb_set(&cpu->thread_kicked, false);
+    qemu_mutex_unlock_iothread();
+    pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
+    qemu_mutex_lock_iothread();
+}
+
+static void hvf_wfi(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    struct timespec ts;
+    hv_return_t r;
+    uint64_t ctl;
+    uint64_t cval;
+    int64_t ticks_to_sleep;
+    uint64_t seconds;
+    uint64_t nanos;
+    uint32_t cntfrq;
+
+    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
+        /* Interrupt pending, no need to wait */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    assert_hvf_ok(r);
+
+    if (!(ctl & 1) || (ctl & 2)) {
+        /* Timer disabled or masked, just wait for an IPI. */
+        hvf_wait_for_ipi(cpu, NULL);
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
+    assert_hvf_ok(r);
+
+    ticks_to_sleep = cval - hvf_vtimer_val();
+    if (ticks_to_sleep < 0) {
+        return;
+    }
+
+    cntfrq = gt_cntfrq_period_ns(arm_cpu);
+    seconds = muldiv64(ticks_to_sleep, cntfrq, NANOSECONDS_PER_SECOND);
+    ticks_to_sleep -= muldiv64(seconds, NANOSECONDS_PER_SECOND, cntfrq);
+    nanos = ticks_to_sleep * cntfrq;
+
+    /*
+     * Don't sleep for less than the time a context switch would take,
+     * so that we can satisfy fast timer requests on the same CPU.
+     * Measurements on M1 show the sweet spot to be ~2ms.
+     */
+    if (!seconds && nanos < (2 * SCALE_MS)) {
+        return;
+    }
+
+    ts = (struct timespec) { seconds, nanos };
+    hvf_wait_for_ipi(cpu, &ts);
+}
+
 static void hvf_sync_vtimer(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
     }
     case EC_WFX_TRAP:
         advance_pc = true;
+        if (!(syndrome & WFX_IS_WFE)) {
+            hvf_wfi(cpu);
+        }
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
--
2.20.1
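The race-free sleep in hvf_wait_for_ipi() above is plain POSIX and can be tried
standalone. The key point is that the kick signal stays blocked everywhere except
inside the pselect() call itself, which swaps the mask and sleeps atomically.
SIGUSR1 stands in for QEMU's SIG_IPI here; the 2-second timeout plays the role
of the vtimer deadline:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <time.h>

    static void on_kick(int sig) { (void)sig; }

    int main(void)
    {
        sigset_t blocked, unblock_ipi_mask;

        signal(SIGUSR1, on_kick);
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        sigprocmask(SIG_BLOCK, &blocked, NULL);       /* normally blocked */

        sigprocmask(SIG_BLOCK, NULL, &unblock_ipi_mask);
        sigdelset(&unblock_ipi_mask, SIGUSR1);        /* ...except inside pselect */

        struct timespec ts = { 2, 0 };                /* stand-in for "timer due" */
        pselect(0, NULL, NULL, NULL, &ts, &unblock_ipi_mask); /* atomic unblock+sleep */
        puts("woke up: timeout or kick");
        return 0;
    }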
From: Richard Henderson <richard.henderson@linaro.org>

Having V6 alone imply jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  6 +++++-
 target/arm/cpu.c       | 17 ++++++++++++++---
 target/arm/translate.c |  2 +-
 3 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
 }

+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
     if (arm_feature(env, ARM_FEATURE_V6)) {
         set_feature(env, ARM_FEATURE_V5);
-        set_feature(env, ARM_FEATURE_JAZELLE);
         if (!arm_feature(env, ARM_FEATURE_M)) {
+            assert(cpu_isar_feature(jazelle, cpu));
             set_feature(env, ARM_FEATURE_AUXCR);
         }
     }
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x41069265;
     cpu->reset_fpsid = 0x41011090;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
 }

 static void arm946_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_AUXCR);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x4106a262;
     cpu->reset_fpsid = 0x410110a0;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
     cpu->reset_auxcr = 1;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+
     {
         /* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
         ARMCPRegInfo ifar = {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
 #define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
 /* currently all emulated v5 cores are also v5TE, so don't bother */
 #define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
 #define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
 #define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
 #define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
--
2.19.1

Now that we have working system register sync, we push more target CPU
properties into the virtual machine. That might be useful in some
situations, but is not the typical case that users want.

So let's add a -cpu host option that allows them to explicitly pass all
CPU capabilities of their host CPU into the guest.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-7-agraf@csgraf.de
[PMM: drop unnecessary #include line from .h file]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  2 +
 target/arm/hvf_arm.h | 18 +++++++++
 target/arm/kvm_arm.h |  2 -
 target/arm/cpu.c     | 13 ++++--
 target/arm/hvf/hvf.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 124 insertions(+), 6 deletions(-)
 create mode 100644 target/arm/hvf_arm.h

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_ARM_CPU

+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
+
 #define cpu_signal_handler cpu_arm_signal_handler
 #define cpu_list arm_cpu_list

diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hvf_arm.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
+ *
+ * Copyright (c) 2021 Alexander Graf
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef QEMU_HVF_ARM_H
+#define QEMU_HVF_ARM_H
+
+#include "cpu.h"
+
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
+
+#endif
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
  */
 void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);

-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
-
 /**
  * ARMHostCPUFeatures: information about the host CPU (identified
  * by asking the host kernel)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/tcg.h"
 #include "sysemu/hw_accel.h"
 #include "kvm_arm.h"
+#include "hvf_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"

@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
      * this is the first point where we can report it.
      */
     if (cpu->host_cpu_probe_failed) {
-        if (!kvm_enabled()) {
-            error_setg(errp, "The 'host' CPU type can only be used with KVM");
+        if (!kvm_enabled() && !hvf_enabled()) {
+            error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
         } else {
             error_setg(errp, "Failed to retrieve host CPU features");
         }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #endif /* CONFIG_TCG */
 }

-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
 static void arm_host_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);

+#ifdef CONFIG_KVM
     kvm_arm_set_cpu_features_from_host(cpu);
     if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
         aarch64_add_sve_properties(obj);
     }
+#else
+    hvf_arm_set_cpu_features_from_host(cpu);
+#endif
     arm_cpu_post_init(obj);
 }

@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
 {
     type_register_static(&arm_cpu_type_info);

-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     type_register_static(&host_arm_cpu_type_info);
 #endif
 }
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
 #include "sysemu/hw_accel.h"
+#include "hvf_arm.h"

 #include <mach/mach_time.h>

@@ -XXX,XX +XXX,XX @@ typedef struct HVFVTimer {

 static HVFVTimer vtimer;

+typedef struct ARMHostCPUFeatures {
+    ARMISARegisters isar;
+    uint64_t features;
+    uint64_t midr;
+    uint32_t reset_sctlr;
+    const char *dtb_compatible;
+} ARMHostCPUFeatures;
+
+static ARMHostCPUFeatures arm_host_cpu_features;
+
 struct hvf_reg_match {
     int reg;
     uint64_t offset;
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
     return val;
 }

+static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
+{
+    ARMISARegisters host_isar = {};
+    const struct isar_regs {
+        int reg;
+        uint64_t *val;
+    } regs[] = {
+        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
+        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
+        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
+        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
+        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
+        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
+        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
+        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
+        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
+    };
+    hv_vcpu_t fd;
+    hv_return_t r = HV_SUCCESS;
+    hv_vcpu_exit_t *exit;
+    int i;
+
+    ahcf->dtb_compatible = "arm,arm-v8";
+    ahcf->features = (1ULL << ARM_FEATURE_V8) |
+                     (1ULL << ARM_FEATURE_NEON) |
+                     (1ULL << ARM_FEATURE_AARCH64) |
+                     (1ULL << ARM_FEATURE_PMU) |
+                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
+
+    /* We set up a small vcpu to extract host registers */
+
+    if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
+        return false;
+    }
+
+    for (i = 0; i < ARRAY_SIZE(regs); i++) {
+        r |= hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val);
+    }
+    r |= hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr);
+    r |= hv_vcpu_destroy(fd);
+
+    ahcf->isar = host_isar;
+
+    /*
+     * A scratch vCPU returns SCTLR 0, so let's fill our default with the M1
+     * boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97
+     */
+    ahcf->reset_sctlr = 0x30100180;
+    /*
+     * SPAN is disabled by default when SCTLR.SPAN=1. To improve compatibility,
+     * let's disable it on boot and then allow guest software to turn it on by
+     * setting it to 0.
+     */
+    ahcf->reset_sctlr |= 0x00800000;
+
+    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
+    if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
+        return false;
+    }
+
+    return r == HV_SUCCESS;
+}
+
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
+{
+    if (!arm_host_cpu_features.dtb_compatible) {
+        if (!hvf_enabled() ||
+            !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
+            /*
+             * We can't report this error yet, so flag that we need to
+             * in arm_cpu_realizefn().
+             */
+            cpu->host_cpu_probe_failed = true;
+            return;
+        }
+    }
+
+    cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
+    cpu->isar = arm_host_cpu_features.isar;
+    cpu->env.features = arm_host_cpu_features.features;
+    cpu->midr = arm_host_cpu_features.midr;
+    cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
+}
+
 void hvf_arch_vcpu_destroy(CPUState *cpu)
 {
 }
--
2.20.1
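A note on the (host_isar.id_aa64pfr0 & 0xff) != 0x11 test in the -cpu host patch
above: it encodes "EL0 and EL1 are AArch64-only". Per the architected field
layout, ID_AA64PFR0_EL1.EL0 is bits [3:0] and .EL1 is bits [7:4], where 1 means
AArch64 only and 2 means AArch32 is also supported. As a named helper (sketch,
not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    static bool aa64_only_el0_el1(uint64_t id_aa64pfr0)
    {
        return (id_aa64pfr0 & 0xf) == 1 &&        /* EL0: AArch64 only */
               ((id_aa64pfr0 >> 4) & 0xf) == 1;   /* EL1: AArch64 only */
    }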
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
      * is used for reset values of non-constant registers; no reset_
      * prefix means a constant register.
+     * Some of these registers are split out into a substructure that
+     * is shared with the translators to control the ISA.
      */
+    struct ARMISARegisters {
+        uint32_t id_isar0;
+        uint32_t id_isar1;
+        uint32_t id_isar2;
+        uint32_t id_isar3;
+        uint32_t id_isar4;
+        uint32_t id_isar5;
+        uint32_t id_isar6;
+        uint32_t mvfr0;
+        uint32_t mvfr1;
+        uint32_t mvfr2;
+        uint64_t id_aa64isar0;
+        uint64_t id_aa64isar1;
+        uint64_t id_aa64pfr0;
+        uint64_t id_aa64pfr1;
+    } isar;
     uint32_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
-    uint32_t mvfr0;
-    uint32_t mvfr1;
-    uint32_t mvfr2;
     uint32_t ctr;
     uint32_t reset_sctlr;
     uint32_t id_pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_mmfr2;
     uint32_t id_mmfr3;
     uint32_t id_mmfr4;
-    uint32_t id_isar0;
-    uint32_t id_isar1;
-    uint32_t id_isar2;
-    uint32_t id_isar3;
-    uint32_t id_isar4;
-    uint32_t id_isar5;
-    uint32_t id_isar6;
-    uint64_t id_aa64pfr0;
-    uint64_t id_aa64pfr1;
     uint64_t id_aa64dfr0;
     uint64_t id_aa64dfr1;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
-    uint64_t id_aa64isar0;
-    uint64_t id_aa64isar1;
     uint64_t id_aa64mmfr0;
     uint64_t id_aa64mmfr1;
     uint32_t dbgdidr;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd5c: /* MMFR3. */
         return cpu->id_mmfr3;
     case 0xd60: /* ISAR0. */
-        return cpu->id_isar0;
+        return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1. */
-        return cpu->id_isar1;
+        return cpu->isar.id_isar1;
     case 0xd68: /* ISAR2. */
-        return cpu->id_isar2;
+        return cpu->isar.id_isar2;
     case 0xd6c: /* ISAR3. */
-        return cpu->id_isar3;
+        return cpu->isar.id_isar3;
     case 0xd70: /* ISAR4. */
-        return cpu->id_isar4;
+        return cpu->isar.id_isar4;
     case 0xd74: /* ISAR5. */
-        return cpu->id_isar5;
+        return cpu->isar.id_isar5;
     case 0xd78: /* CLIDR */
         return cpu->clidr;
     case 0xd7c: /* CTR */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
     g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);

     env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
-    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
-    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
-    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
+    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
+    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
+    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;

     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
     s->halted = cpu->start_powered_off;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
          */
         cpu->id_pfr1 &= ~0xf0;
-        cpu->id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 &= ~0xf000;
     }

     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers if we don't have EL2. These are id_pfr1[15:12] and
          * id_aa64pfr0_el1[11:8].
          */
-        cpu->id_aa64pfr0 &= ~0xf00;
+        cpu->isar.id_aa64pfr0 &= ~0xf00;
         cpu->id_pfr1 &= ~0xf000;
     }

@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4107b362;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4117b363;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fb767;
     cpu->reset_fpsid = 0x410120b5;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222100;
-    cpu->id_isar0 = 0x0140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231121;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x01141;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     cpu->midr = 0x410fb022;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
     cpu->id_pfr0 = 0x111;
     cpu->id_pfr1 = 0x1;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01100103;
     cpu->id_mmfr1 = 0x10020302;
     cpu->id_mmfr2 = 0x01222000;
-    cpu->id_isar0 = 0x00100011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11221011;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 1;
 }

@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }

 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }

 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01101110;
-    cpu->id_isar1 = 0x02212000;
-    cpu->id_isar2 = 0x20232232;
-    cpu->id_isar3 = 0x01111131;
-    cpu->id_isar4 = 0x01310132;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
     cpu->clidr = 0x00000000;
     cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01200000;
     cpu->id_mmfr3 = 0x0211;
-    cpu->id_isar0 = 0x02101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232141;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x0010142;
-    cpu->id_isar5 = 0x0;
-    cpu->id_isar6 = 0x0;
+    cpu->isar.id_isar0 = 0x02101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232141;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x0010142;
+    cpu->isar.id_isar5 = 0x0;
+    cpu->isar.id_isar6 = 0x0;
     cpu->mp_is_up = true;
     cpu->pmsav7_dregion = 16;
     define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
     cpu->reset_fpsid = 0x410330c0;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x00011111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x00011111;
     cpu->ctr = 0x82048004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01202000;
     cpu->id_mmfr3 = 0x11;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x12112111;
-    cpu->id_isar2 = 0x21232031;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x12112111;
+    cpu->isar.id_isar2 = 0x21232031;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x15141000;
     cpu->clidr = (1 << 27) | (2 << 24) | 3;
     cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CBAR);
     cpu->midr = 0x410fc090;
     cpu->reset_fpsid = 0x41033090;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x01111111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x01111111;
     cpu->ctr = 0x80038003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01230000;
     cpu->id_mmfr3 = 0x00002111;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x35141000;
     cpu->clidr = (1 << 27) | (1 << 24) | 3;
     cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
     cpu->midr = 0x410fc075;
     cpu->reset_fpsid = 0x41023075;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x84448003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f005;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
     cpu->midr = 0x412fc0f1;
     cpu->reset_fpsid = 0x410430f0;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01240000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f021;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->midr = 0x411fd070;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->midr = 0x410fd034;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x84448004; /* L1Ip = VIPT */
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->midr = 0x410fd083;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034080;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
 static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
-    uint64_t pfr0 = cpu->id_aa64pfr0;
+    uint64_t pfr0 = cpu->isar.id_aa64pfr0;

     if (env->gicv3state) {
         pfr0 |= 1 << 24;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar0 },
+              .resetvalue = cpu->isar.id_isar0 },
             { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar1 },
+              .resetvalue = cpu->isar.id_isar1 },
             { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar2 },
+              .resetvalue = cpu->isar.id_isar2 },
             { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar3 },
+              .resetvalue = cpu->isar.id_isar3 },
             { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar4 },
+              .resetvalue = cpu->isar.id_isar4 },
             { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar5 },
+              .resetvalue = cpu->isar.id_isar5 },
             { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar6 },
+              .resetvalue = cpu->isar.id_isar6 },
             REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v6_idregs);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64pfr1},
+              .resetvalue = cpu->isar.id_aa64pfr1},
             { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar0 },
+              .resetvalue = cpu->isar.id_aa64isar0 },
             { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar1 },
+              .resetvalue = cpu->isar.id_aa64isar1 },
             { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr0 },
+              .resetvalue = cpu->isar.mvfr0 },
             { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr1 },
+              .resetvalue = cpu->isar.mvfr1 },
             { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr2 },
+              .resetvalue = cpu->isar.mvfr2 },
             { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
--
2.19.1
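
[Aside, illustration only — not part of the patch above. The point of the
refactor is that translators can test an ID-register field out of the shared
substructure instead of consulting a global feature bit. The function below
mirrors the isar_feature_jazelle() test that appears later in this queue;
only the "sketch_" name is invented here.]

    /* Sketch: a feature test reading the shared ARMISARegisters struct. */
    static inline bool sketch_isar_feature_jazelle(const ARMISARegisters *id)
    {
        /* ID_ISAR1.Jazelle is a 4-bit field; nonzero means implemented. */
        return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
    }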
From: Alexander Graf <agraf@csgraf.de>

We need to handle PSCI calls. Most of the TCG code works for us,
but we can simplify it to only handle aa64 mode and we need to
handle SUSPEND differently.

This patch takes the TCG code as template and duplicates it in HVF.

To tell the guest that we support PSCI 0.2 now, update the check in
arm_cpu_initfn() as well.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-8-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c            |   4 +-
 target/arm/hvf/hvf.c        | 141 ++++++++++++++++++++++++++++++++++--
 target/arm/hvf/trace-events |   1 +
 3 files changed, 139 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
     cpu->psci_version = 1; /* By default assume PSCI v0.1 */
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;

-    if (tcg_enabled()) {
-        cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
+    if (tcg_enabled() || hvf_enabled()) {
+        cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
     }
 }

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "qemu/main-loop.h"
 #include "sysemu/cpus.h"
+#include "arm-powerctl.h"
 #include "target/arm/cpu.h"
 #include "target/arm/internals.h"
 #include "trace/trace-target_arm_hvf.h"
@@ -XXX,XX +XXX,XX @@
 #define TMR_CTL_IMASK   (1 << 1)
 #define TMR_CTL_ISTATUS (1 << 2)

+static void hvf_wfi(CPUState *cpu);
+
 typedef struct HVFVTimer {
     /* Vtimer value during migration and paused state */
     uint64_t vtimer_val;
@@ -XXX,XX +XXX,XX @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
     arm_cpu_do_interrupt(cpu);
 }

+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
+{
+    int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
+    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
+}
+
+/*
+ * Handle a PSCI call.
+ *
+ * Returns 0 on success
+ *         -1 when the PSCI call is unknown,
+ */
+static bool hvf_handle_psci_call(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t param[4] = {
+        env->xregs[0],
+        env->xregs[1],
+        env->xregs[2],
+        env->xregs[3]
+    };
+    uint64_t context_id, mpidr;
+    bool target_aarch64 = true;
+    CPUState *target_cpu_state;
+    ARMCPU *target_cpu;
+    target_ulong entry;
+    int target_el = 1;
+    int32_t ret = 0;
+
+    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
+                        arm_cpu->mp_affinity);
+
+    switch (param[0]) {
+    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
+        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
+        break;
+    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
+        break;
+    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
+    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
+        mpidr = param[1];
+
+        switch (param[2]) {
+        case 0:
+            target_cpu_state = arm_get_cpu_by_id(mpidr);
+            if (!target_cpu_state) {
+                ret = QEMU_PSCI_RET_INVALID_PARAMS;
+                break;
+            }
+            target_cpu = ARM_CPU(target_cpu_state);
+
+            ret = target_cpu->power_state;
+            break;
+        default:
+            /* Everything above affinity level 0 is always on. */
+            ret = 0;
+        }
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
+        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+        /*
+         * QEMU reset and shutdown are async requests, but PSCI
+         * mandates that we never return from the reset/shutdown
+         * call, so power the CPU off now so it doesn't execute
+         * anything further.
+         */
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN64_CPU_ON:
+        mpidr = param[1];
+        entry = param[2];
+        context_id = param[3];
+        ret = arm_set_cpu_on(mpidr, entry, context_id,
+                             target_el, target_aarch64);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_OFF:
+    case QEMU_PSCI_0_2_FN_CPU_OFF:
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
+        /* Affinity levels are not supported in QEMU */
+        if (param[1] & 0xfffe0000) {
+            ret = QEMU_PSCI_RET_INVALID_PARAMS;
+            break;
+        }
+        /* Powerdown is not supported, we always go into WFI */
+        env->xregs[0] = 0;
+        hvf_wfi(cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_MIGRATE:
+    case QEMU_PSCI_0_2_FN_MIGRATE:
+        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
+        break;
+    default:
+        return false;
+    }
+
+    env->xregs[0] = ret;
+    return true;
+}
+
 static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_hvc(env->xregs[0]);
-        /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
-        env->xregs[0] = -1;
+        if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_HVC) {
+            if (!hvf_handle_psci_call(cpu)) {
+                trace_hvf_unknown_hvc(env->xregs[0]);
+                /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
+                env->xregs[0] = -1;
+            }
+        } else {
+            trace_hvf_unknown_hvc(env->xregs[0]);
+            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     case EC_AA64_SMC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_smc(env->xregs[0]);
-        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_SMC) {
+            advance_pc = true;
+
+            if (!hvf_handle_psci_call(cpu)) {
+                trace_hvf_unknown_smc(env->xregs[0]);
+                /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
+                env->xregs[0] = -1;
+            }
+        } else {
+            trace_hvf_unknown_smc(env->xregs[0]);
+            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     default:
         cpu_synchronize_state(cpu);
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/trace-events
+++ b/target/arm/hvf/trace-events
@@ -XXX,XX +XXX,XX @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
--
2.20.1
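
[Aside: what reaches hvf_handle_psci_call() from the guest side is a standard
SMCCC call — function ID in x0, arguments in x1-x3, result returned in x0.
A minimal guest-side sketch follows; the PSCI_VERSION function ID is from the
PSCI 0.2 specification, and the helper name is illustrative only.]

    #define PSCI_0_2_FN_PSCI_VERSION 0x84000000u

    /* Issue a PSCI call over the HVC conduit from AArch64 guest code. */
    static inline uint64_t psci_hvc(uint64_t fn, uint64_t a1,
                                    uint64_t a2, uint64_t a3)
    {
        register uint64_t x0 __asm__("x0") = fn;
        register uint64_t x1 __asm__("x1") = a1;
        register uint64_t x2 __asm__("x2") = a2;
        register uint64_t x3 __asm__("x3") = a3;
        __asm__ volatile("hvc #0"
                         : "+r"(x0)
                         : "r"(x1), "r"(x2), "r"(x3)
                         : "memory");
        return x0; /* hvf_handle_psci_call() leaves the return code here */
    }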
From: Richard Henderson <richard.henderson@linaro.org>

Since QEMU does not implement ASIDs, changes to the ASID must flush the
tlb. However, if the ASID does not change there is no reason to flush.

In testing a boot of the Ubuntu installer to the first menu, this reduces
the number of flushes by 30%, or nearly 600k instances.

Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
-    /* 64 bit accesses to the TTBRs can change the ASID and so we
-     * must flush the TLB.
-     */
-    if (cpreg_field_is_64bit(ri)) {
+    /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
+    if (cpreg_field_is_64bit(ri) &&
+        extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
         ARMCPU *cpu = arm_env_get_cpu(env);
-
         tlb_flush(CPU(cpu));
     }
     raw_write(env, ri, value);
--
2.19.1
2.20.1
41
62
42
63
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
3
We can expose cycle counters on the PMU easily. To be as compatible as
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
possible, let's do so, but make sure we don't expose any other architectural
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
5
counters that we can not model yet.
6
7
This allows OSs to work that require PMU support.
8
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-10-agraf@csgraf.de
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
13
---
9
target/arm/cpu.h | 16 +++++++++++++++-
14
target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
10
linux-user/aarch64/signal.c | 4 ++--
15
1 file changed, 179 insertions(+)
11
linux-user/elfload.c | 2 +-
16
12
linux-user/syscall.c | 10 ++++++----
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
13
target/arm/cpu64.c | 5 ++++-
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
19
--- a/target/arm/hvf/hvf.c
22
+++ b/target/arm/cpu.h
20
+++ b/target/arm/hvf/hvf.c
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
21
@@ -XXX,XX +XXX,XX @@
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
22
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
23
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
26
24
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
25
+#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
26
+#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
27
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
28
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
29
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
30
+#define SYSREG_PMOVSCLR_EL0 SYSREG(3, 3, 9, 12, 3)
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
31
+#define SYSREG_PMSWINC_EL0 SYSREG(3, 3, 9, 12, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
32
+#define SYSREG_PMSELR_EL0 SYSREG(3, 3, 9, 12, 5)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
33
+#define SYSREG_PMCEID0_EL0 SYSREG(3, 3, 9, 12, 6)
36
+
34
+#define SYSREG_PMCEID1_EL0 SYSREG(3, 3, 9, 12, 7)
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
35
+#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
38
36
+#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
39
/* If adding a feature bit which corresponds to a Linux ELF
37
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
38
#define WFX_IS_WFE (1 << 0)
41
ARM_FEATURE_PMU, /* has PMU support */
39
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
40
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
41
val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
42
gt_cntfrq_period_ns(arm_cpu);
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
43
break;
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
44
+ case SYSREG_PMCR_EL0:
47
};
45
+ val = env->cp15.c9_pmcr;
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
46
+ break;
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
47
+ case SYSREG_PMCCNTR_EL0:
48
+ pmu_op_start(env);
49
+ val = env->cp15.c15_ccnt;
50
+ pmu_op_finish(env);
51
+ break;
52
+ case SYSREG_PMCNTENCLR_EL0:
53
+ val = env->cp15.c9_pmcnten;
54
+ break;
55
+ case SYSREG_PMOVSCLR_EL0:
56
+ val = env->cp15.c9_pmovsr;
57
+ break;
58
+ case SYSREG_PMSELR_EL0:
59
+ val = env->cp15.c9_pmselr;
60
+ break;
61
+ case SYSREG_PMINTENCLR_EL1:
62
+ val = env->cp15.c9_pminten;
63
+ break;
64
+ case SYSREG_PMCCFILTR_EL0:
65
+ val = env->cp15.pmccfiltr_el0;
66
+ break;
67
+ case SYSREG_PMCNTENSET_EL0:
68
+ val = env->cp15.c9_pmcnten;
69
+ break;
70
+ case SYSREG_PMUSERENR_EL0:
71
+ val = env->cp15.c9_pmuserenr;
72
+ break;
73
+ case SYSREG_PMCEID0_EL0:
74
+ case SYSREG_PMCEID1_EL0:
75
+ /* We can't really count anything yet, declare all events invalid */
76
+ val = 0;
77
+ break;
78
case SYSREG_OSLSR_EL1:
79
val = env->cp15.oslsr_el1;
80
break;
81
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
82
return 0;
50
}
83
}
51
84
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
85
+static void pmu_update_irq(CPUARMState *env)
53
+{
86
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
87
+ ARMCPU *cpu = env_archcpu(env);
55
+}
88
+ qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
56
+
89
+ (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
57
/*
90
+}
58
* Forward to the above feature tests given an ARMCPU pointer.
91
+
59
*/
92
+static bool pmu_event_supported(uint16_t number)
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
93
+{
61
index XXXXXXX..XXXXXXX 100644
94
+ return false;
62
--- a/linux-user/aarch64/signal.c
95
+}
63
+++ b/linux-user/aarch64/signal.c
96
+
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
97
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
65
break;
98
+ * the current EL, security state, and register configuration.
66
99
+ */
67
case TARGET_SVE_MAGIC:
100
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
101
+{
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
102
+ uint64_t filter;
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
103
+ bool enabled, filtered = true;
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
104
+ int el = arm_current_el(env);
72
if (!sve && size == sve_size) {
105
+
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
106
+ enabled = (env->cp15.c9_pmcr & PMCRE) &&
74
&layout);
107
+ (env->cp15.c9_pmcnten & (1 << counter));
75
108
+
76
/* SVE state needs saving only if it exists. */
109
+ if (counter == 31) {
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
110
+ filter = env->cp15.pmccfiltr_el0;
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
111
+ } else {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
112
+ filter = env->cp15.c14_pmevtyper[counter];
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
113
+ }
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
114
+
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
115
+ if (el == 0) {
83
index XXXXXXX..XXXXXXX 100644
116
+ filtered = filter & PMXEVTYPER_U;
84
--- a/linux-user/elfload.c
117
+ } else if (el == 1) {
85
+++ b/linux-user/elfload.c
118
+ filtered = filter & PMXEVTYPER_P;
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
119
+ }
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
120
+
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
121
+ if (counter != 31) {
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
122
+ /*
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
123
+ * If not checking PMCCNTR, ensure the counter is setup to an event we
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
124
+ * support
92
125
+ */
93
#undef GET_FEATURE
126
+ uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
94
#undef GET_FEATURE_ID
127
+ if (!pmu_event_supported(event)) {
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
128
+ return false;
96
index XXXXXXX..XXXXXXX 100644
129
+ }
97
--- a/linux-user/syscall.c
130
+ }
98
+++ b/linux-user/syscall.c
131
+
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
132
+ return enabled && !filtered;
100
* even though the current architectural maximum is VQ=16.
133
+}
101
*/
134
+
102
ret = -TARGET_EINVAL;
135
+static void pmswinc_write(CPUARMState *env, uint64_t value)
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
136
+{
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
137
+ unsigned int i;
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
138
+ for (i = 0; i < pmu_num_counters(env); i++) {
106
CPUARMState *env = cpu_env;
139
+ /* Increment a counter's count iff: */
107
ARMCPU *cpu = arm_env_get_cpu(env);
140
+ if ((value & (1 << i)) && /* counter's bit is set */
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
141
+ /* counter is enabled and not filtered */
109
return ret;
142
+ pmu_counter_enabled(env, i) &&
110
case TARGET_PR_SVE_GET_VL:
143
+ /* counter is SW_INCR */
111
ret = -TARGET_EINVAL;
144
+ (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
145
+ /*
113
- CPUARMState *env = cpu_env;
146
+ * Detect if this write causes an overflow since we can't predict
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
147
+ * PMSWINC overflows like we can for other events
115
+ {
148
+ */
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
149
+ uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
150
+
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
151
+ if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
119
+ }
152
+ env->cp15.c9_pmovsr |= (1 << i);
120
}
153
+ pmu_update_irq(env);
121
return ret;
154
+ }
122
#endif /* AARCH64 */
155
+
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
156
+ env->cp15.c14_pmevcntr[i] = new_pmswinc;
124
index XXXXXXX..XXXXXXX 100644
157
+ }
125
--- a/target/arm/cpu64.c
158
+ }
126
+++ b/target/arm/cpu64.c
159
+}
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
160
+
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
161
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
129
cpu->isar.id_aa64isar1 = t;
130
131
+ t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_one_arm_cp_reg(cpu, &sctlr);
152
}
153
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
160
uint32_t flags;
161
162
if (is_a64(env)) {
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
164
+
165
*pc = env->pc;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
167
/* Get control bits for tagged addresses */
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
170
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
173
int sve_el = sve_exception_el(env, current_el);
174
uint32_t zcr_len;
175
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
178
int new_el, bool el0_a64)
179
{
162
{
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
163
ARMCPU *arm_cpu = ARM_CPU(cpu);
181
int old_len, new_len;
164
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
182
bool old_a64, new_a64;
165
val);
183
166
184
/* Nothing to do if no SVE. */
167
switch (reg) {
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
168
+ case SYSREG_PMCCNTR_EL0:
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
169
+ pmu_op_start(env);
187
return;
170
+ env->cp15.c15_ccnt = val;
188
}
171
+ pmu_op_finish(env);
189
172
+ break;
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
173
+ case SYSREG_PMCR_EL0:
191
index XXXXXXX..XXXXXXX 100644
174
+ pmu_op_start(env);
192
--- a/target/arm/machine.c
175
+
193
+++ b/target/arm/machine.c
176
+ if (val & PMCRC) {
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
177
+ /* The counter has been reset */
195
static bool sve_needed(void *opaque)
178
+ env->cp15.c15_ccnt = 0;
196
{
179
+ }
197
ARMCPU *cpu = opaque;
180
+
198
- CPUARMState *env = &cpu->env;
181
+ if (val & PMCRP) {
199
182
+ unsigned int i;
200
- return arm_feature(env, ARM_FEATURE_SVE);
183
+ for (i = 0; i < pmu_num_counters(env); i++) {
201
+ return cpu_isar_feature(aa64_sve, cpu);
184
+ env->cp15.c14_pmevcntr[i] = 0;
202
}
185
+ }
203
186
+ }
204
/* The first two words of each Zreg is stored in VFP state. */
187
+
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
188
+ env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
206
index XXXXXXX..XXXXXXX 100644
189
+ env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
207
--- a/target/arm/translate-a64.c
190
+
208
+++ b/target/arm/translate-a64.c
191
+ pmu_op_finish(env);
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
192
+ break;
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
193
+ case SYSREG_PMUSERENR_EL0:
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
194
+ env->cp15.c9_pmuserenr = val & 0xf;
212
195
+ break;
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
196
+ case SYSREG_PMCNTENSET_EL0:
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
197
+ env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
198
+ break;
216
199
+ case SYSREG_PMCNTENCLR_EL0:
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
200
+ env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
201
+ break;
219
unallocated_encoding(s);
202
+ case SYSREG_PMINTENCLR_EL1:
220
break;
203
+ pmu_op_start(env);
221
case 0x2:
204
+ env->cp15.c9_pminten |= val;
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
205
+ pmu_op_finish(env);
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
206
+ break;
224
unallocated_encoding(s);
207
+ case SYSREG_PMOVSCLR_EL0:
225
}
208
+ pmu_op_start(env);
209
+ env->cp15.c9_pmovsr &= ~val;
210
+ pmu_op_finish(env);
211
+ break;
212
+ case SYSREG_PMSWINC_EL0:
213
+ pmu_op_start(env);
214
+ pmswinc_write(env, val);
215
+ pmu_op_finish(env);
216
+ break;
217
+ case SYSREG_PMSELR_EL0:
218
+ env->cp15.c9_pmselr = val & 0x1f;
219
+ break;
220
+ case SYSREG_PMCCFILTR_EL0:
221
+ pmu_op_start(env);
222
+ env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
223
+ pmu_op_finish(env);
224
+ break;
225
case SYSREG_OSLAR_EL1:
226
env->cp15.oslsr_el1 = val & 1;
226
break;
227
break;
227
--
228
--
228
2.19.1
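
[Aside: what the new isar_feature_aa64_sve() test boils down to —
ID_AA64PFR0 bits [35:32] hold the SVE field, with zero meaning "not
implemented". The same check written with plain shifts instead of QEMU's
FIELD_EX64 macro, purely for illustration:]

    #include <stdbool.h>
    #include <stdint.h>

    static bool aa64pfr0_has_sve(uint64_t id_aa64pfr0)
    {
        /* SVE is a standard 4-bit ID field at bits [35:32]. */
        return ((id_aa64pfr0 >> 32) & 0xf) != 0;
    }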
From: Alexander Graf <agraf@csgraf.de>

We can expose cycle counters on the PMU easily. To be as compatible as
possible, let's do so, but make sure we don't expose any other architectural
counters that we can not model yet.

This allows OSs to work that require PMU support.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-10-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #define SYSREG_OSLSR_EL1      SYSREG(2, 0, 1, 1, 4)
 #define SYSREG_OSDLR_EL1      SYSREG(2, 0, 1, 3, 4)
 #define SYSREG_CNTPCT_EL0     SYSREG(3, 3, 14, 0, 1)
+#define SYSREG_PMCR_EL0       SYSREG(3, 3, 9, 12, 0)
+#define SYSREG_PMUSERENR_EL0  SYSREG(3, 3, 9, 14, 0)
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
+#define SYSREG_PMOVSCLR_EL0   SYSREG(3, 3, 9, 12, 3)
+#define SYSREG_PMSWINC_EL0    SYSREG(3, 3, 9, 12, 4)
+#define SYSREG_PMSELR_EL0     SYSREG(3, 3, 9, 12, 5)
+#define SYSREG_PMCEID0_EL0    SYSREG(3, 3, 9, 12, 6)
+#define SYSREG_PMCEID1_EL0    SYSREG(3, 3, 9, 12, 7)
+#define SYSREG_PMCCNTR_EL0    SYSREG(3, 3, 9, 13, 0)
+#define SYSREG_PMCCFILTR_EL0  SYSREG(3, 3, 14, 15, 7)

 #define WFX_IS_WFE (1 << 0)

@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
         val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
               gt_cntfrq_period_ns(arm_cpu);
         break;
+    case SYSREG_PMCR_EL0:
+        val = env->cp15.c9_pmcr;
+        break;
+    case SYSREG_PMCCNTR_EL0:
+        pmu_op_start(env);
+        val = env->cp15.c15_ccnt;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMCNTENCLR_EL0:
+        val = env->cp15.c9_pmcnten;
+        break;
+    case SYSREG_PMOVSCLR_EL0:
+        val = env->cp15.c9_pmovsr;
+        break;
+    case SYSREG_PMSELR_EL0:
+        val = env->cp15.c9_pmselr;
+        break;
+    case SYSREG_PMINTENCLR_EL1:
+        val = env->cp15.c9_pminten;
+        break;
+    case SYSREG_PMCCFILTR_EL0:
+        val = env->cp15.pmccfiltr_el0;
+        break;
+    case SYSREG_PMCNTENSET_EL0:
+        val = env->cp15.c9_pmcnten;
+        break;
+    case SYSREG_PMUSERENR_EL0:
+        val = env->cp15.c9_pmuserenr;
+        break;
+    case SYSREG_PMCEID0_EL0:
+    case SYSREG_PMCEID1_EL0:
+        /* We can't really count anything yet, declare all events invalid */
+        val = 0;
+        break;
     case SYSREG_OSLSR_EL1:
         val = env->cp15.oslsr_el1;
         break;
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
     return 0;
 }

+static void pmu_update_irq(CPUARMState *env)
+{
+    ARMCPU *cpu = env_archcpu(env);
+    qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
+            (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
+}
+
+static bool pmu_event_supported(uint16_t number)
+{
+    return false;
+}
+
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
+ * the current EL, security state, and register configuration.
+ */
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
+{
+    uint64_t filter;
+    bool enabled, filtered = true;
+    int el = arm_current_el(env);
+
+    enabled = (env->cp15.c9_pmcr & PMCRE) &&
+              (env->cp15.c9_pmcnten & (1 << counter));
+
+    if (counter == 31) {
+        filter = env->cp15.pmccfiltr_el0;
+    } else {
+        filter = env->cp15.c14_pmevtyper[counter];
+    }
+
+    if (el == 0) {
+        filtered = filter & PMXEVTYPER_U;
+    } else if (el == 1) {
+        filtered = filter & PMXEVTYPER_P;
+    }
+
+    if (counter != 31) {
+        /*
+         * If not checking PMCCNTR, ensure the counter is setup to an event we
+         * support
+         */
+        uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
+        if (!pmu_event_supported(event)) {
+            return false;
+        }
+    }
+
+    return enabled && !filtered;
+}
+
+static void pmswinc_write(CPUARMState *env, uint64_t value)
+{
+    unsigned int i;
+    for (i = 0; i < pmu_num_counters(env); i++) {
+        /* Increment a counter's count iff: */
+        if ((value & (1 << i)) && /* counter's bit is set */
+                /* counter is enabled and not filtered */
+                pmu_counter_enabled(env, i) &&
+                /* counter is SW_INCR */
+                (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
+            /*
+             * Detect if this write causes an overflow since we can't predict
+             * PMSWINC overflows like we can for other events
+             */
+            uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
+
+            if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
+                env->cp15.c9_pmovsr |= (1 << i);
+                pmu_update_irq(env);
+            }
+
+            env->cp15.c14_pmevcntr[i] = new_pmswinc;
+        }
+    }
+}
+
 static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
                            val);

     switch (reg) {
+    case SYSREG_PMCCNTR_EL0:
+        pmu_op_start(env);
+        env->cp15.c15_ccnt = val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMCR_EL0:
+        pmu_op_start(env);
+
+        if (val & PMCRC) {
+            /* The counter has been reset */
+            env->cp15.c15_ccnt = 0;
+        }
+
+        if (val & PMCRP) {
+            unsigned int i;
+            for (i = 0; i < pmu_num_counters(env); i++) {
+                env->cp15.c14_pmevcntr[i] = 0;
+            }
+        }
+
+        env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
+        env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
+
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMUSERENR_EL0:
+        env->cp15.c9_pmuserenr = val & 0xf;
+        break;
+    case SYSREG_PMCNTENSET_EL0:
+        env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
+        break;
+    case SYSREG_PMCNTENCLR_EL0:
+        env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
+        break;
+    case SYSREG_PMINTENCLR_EL1:
+        pmu_op_start(env);
+        env->cp15.c9_pminten |= val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMOVSCLR_EL0:
+        pmu_op_start(env);
+        env->cp15.c9_pmovsr &= ~val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMSWINC_EL0:
+        pmu_op_start(env);
+        pmswinc_write(env, val);
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMSELR_EL0:
+        env->cp15.c9_pmselr = val & 0x1f;
+        break;
+    case SYSREG_PMCCFILTR_EL0:
+        pmu_op_start(env);
+        env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
+        pmu_op_finish(env);
+        break;
     case SYSREG_OSLAR_EL1:
         env->cp15.oslsr_el1 = val & 1;
         break;
--
2.20.1
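
[Aside: a guest-side view of the handlers just added — enabling and reading
the cycle counter. The register names and bit positions are architectural
(PMCR_EL0.E is bit 0; PMCNTENSET_EL0 bit 31 covers PMCCNTR); the snippet is
illustrative only and is not taken from the patch.]

    #include <stdint.h>

    static inline uint64_t read_cycle_counter(void)
    {
        uint64_t pmcr, ccnt;

        __asm__ volatile("mrs %0, pmcr_el0" : "=r"(pmcr));
        pmcr |= 1u;                                   /* PMCR_EL0.E */
        __asm__ volatile("msr pmcr_el0, %0" :: "r"(pmcr));
        __asm__ volatile("msr pmcntenset_el0, %0" :: "r"(1ull << 31));
        __asm__ volatile("isb");
        __asm__ volatile("mrs %0, pmccntr_el0" : "=r"(ccnt));
        return ccnt;
    }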
From: Richard Henderson <richard.henderson@linaro.org>

Instead of shifts and masks, use direct loads and stores from
the neon register file.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
 1 file changed, 50 insertions(+), 42 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+{
+    long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
+
+    switch (mop) {
+    case MO_UB:
+        tcg_gen_ld8u_i32(var, cpu_env, offset);
+        break;
+    case MO_UW:
+        tcg_gen_ld16u_i32(var, cpu_env, offset);
+        break;
+    case MO_UL:
+        tcg_gen_ld_i32(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+{
+    long offset = neon_element_offset(reg, ele, size);
+
+    switch (size) {
+    case MO_8:
+        tcg_gen_st8_i32(var, cpu_env, offset);
+        break;
+    case MO_16:
+        tcg_gen_st16_i32(var, cpu_env, offset);
+        break;
+    case MO_32:
+        tcg_gen_st_i32(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int stride;
     int size;
     int reg;
-    int pass;
     int load;
-    int shift;
     int n;
     int vec_size;
     int mmu_idx;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     } else {
         /* Single element.  */
         int idx = (insn >> 4) & 0xf;
-        pass = (insn >> 7) & 1;
+        int reg_idx;
         switch (size) {
         case 0:
-            shift = ((insn >> 5) & 3) * 8;
+            reg_idx = (insn >> 5) & 7;
             stride = 1;
             break;
         case 1:
-            shift = ((insn >> 6) & 1) * 16;
+            reg_idx = (insn >> 6) & 3;
             stride = (insn & (1 << 5)) ? 2 : 1;
             break;
         case 2:
-            shift = 0;
+            reg_idx = (insn >> 7) & 1;
             stride = (insn & (1 << 6)) ? 2 : 1;
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
              */
             return 1;
         }
+        tmp = tcg_temp_new_i32();
         addr = tcg_temp_new_i32();
         load_reg_var(s, addr, rn);
         for (reg = 0; reg < nregs; reg++) {
             if (load) {
-                tmp = tcg_temp_new_i32();
-                switch (size) {
-                case 0:
-                    gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 1:
-                    gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 2:
-                    gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
-                    break;
-                default: /* Avoid compiler warnings.  */
-                    abort();
-                }
-                if (size != 2) {
-                    tmp2 = neon_load_reg(rd, pass);
-                    tcg_gen_deposit_i32(tmp, tmp2, tmp,
-                                        shift, size ? 16 : 8);
-                    tcg_temp_free_i32(tmp2);
-                }
-                neon_store_reg(rd, pass, tmp);
+                gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+                                s->be_data | size);
+                neon_store_element(rd, reg_idx, size, tmp);
             } else { /* Store */
-                tmp = neon_load_reg(rd, pass);
-                if (shift)
-                    tcg_gen_shri_i32(tmp, tmp, shift);
-                switch (size) {
-                case 0:
-                    gen_aa32_st8(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 1:
-                    gen_aa32_st16(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 2:
-                    gen_aa32_st32(s, tmp, addr, get_mem_index(s));
-                    break;
-                }
-                tcg_temp_free_i32(tmp);
+                neon_load_element(tmp, rd, reg_idx, size);
+                gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
+                                s->be_data | size);
             }
             rd += stride;
             tcg_gen_addi_i32(addr, addr, 1 << size);
         }
         tcg_temp_free_i32(addr);
+        tcg_temp_free_i32(tmp);
         stride = nregs * (1 << size);
     }
 }
--
2.19.1
70
2.20.1
166
71
167
72
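The payoff of that single-element conversion is easier to see in scalar form. The following is an illustrative model only, not QEMU code; the function names and the flat register-file representation are invented for the example:

    #include <stdint.h>
    #include <string.h>

    /* Old style: read the whole 32-bit "pass", deposit the lane into it
     * at a bit offset (len is 8 or 16, as in the deleted code), and
     * write the whole pass back. */
    static void store_lane_by_deposit(uint32_t *passes, int pass,
                                      unsigned shift, unsigned len,
                                      uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1u) << shift;
        passes[pass] = (passes[pass] & ~mask) | ((val << shift) & mask);
    }

    /* New style: write the element directly at its byte offset in the
     * register file, which is what neon_store_element() does via
     * tcg_gen_st8_i32() and friends. */
    static void store_lane_direct(uint8_t *regfile, size_t byte_offset,
                                  uint32_t val, size_t esize)
    {
        /* little-endian host assumed for this sketch */
        memcpy(regfile + byte_offset, &val, esize);
    }

The direct form avoids the read-modify-write of the containing word entirely, which is why the shift/deposit machinery can be deleted.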
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     | 6 +++++-
 linux-user/elfload.c | 2 +-
 target/arm/cpu.c     | 4 ----
 target/arm/helper.c  | 2 +-
 target/arm/machine.c | 3 +--
 5 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_NEON,
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
-    ARM_FEATURE_THUMB2EE,
     ARM_FEATURE_V7MP,    /* v7 Multiprocessing Extensions */
     ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
     ARM_FEATURE_V4T,
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
 }
 
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
     GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
-    GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
+    GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
     GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7);
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     /* Note that A9 supports the MP extensions even for
      * A9UP and single-core A9MP (which are both different
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
         define_arm_cp_regs(cpu, vmsa_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
+    if (cpu_isar_feature(t32ee, cpu)) {
         define_arm_cp_regs(cpu, t2ee_cp_reginfo);
     }
     if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
 static bool thumb2ee_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_THUMB2EE);
+    return cpu_isar_feature(t32ee, cpu);
 }
 
 static const VMStateDescription vmstate_thumb2ee = {
--
2.19.1


Architecturally, for an M-profile CPU with the LOB feature the
LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation
enforces this everywhere, except that we don't check that it is true
in incoming migration data.

We're going to add code in gen_update_fp_context() which relies on
the "always 4" property. Since this is TCG-only, we don't actually
need to be robust to bogus incoming migration data, and the effect of
it being wrong would be wrong code generation rather than a QEMU
crash; but if it did ever happen somehow it would be very difficult
to track down the cause. Add a check so that we fail the inbound
migration if the FPDSCR.LTPSIZE value is incorrect.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
---
 target/arm/machine.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
     hw_breakpoint_update_all(cpu);
     hw_watchpoint_update_all(cpu);
 
+    /*
+     * TCG gen_update_fp_context() relies on the invariant that
+     * FPDSCR.LTPSIZE is constant 4 for M-profile with the LOB extension;
+     * forbid bogus incoming data with some other value.
+     */
+    if (arm_feature(env, ARM_FEATURE_M) && cpu_isar_feature(aa32_lob, cpu)) {
+        if (extract32(env->v7m.fpdscr[M_REG_NS],
+                      FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4 ||
+            extract32(env->v7m.fpdscr[M_REG_S],
+                      FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
+            return -1;
+        }
+    }
     if (!kvm_enabled()) {
         pmu_op_finish(&cpu->env);
     }
--
2.20.1
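extract32() above is QEMU's bitops helper: extract32(value, start, length) returns the length-bit field of value starting at bit start. A standalone sketch of the same validation follows; the shift and width parameters stand in for the real FPCR_LTPSIZE_SHIFT/FPCR_LTPSIZE_LENGTH constants, and the function names are invented for the example:

    #include <stdint.h>
    #include <stdbool.h>

    static inline uint32_t extract32_like(uint32_t value, int start, int length)
    {
        /* Same contract as extract32() in include/qemu/bitops.h */
        return (value >> start) & (~0U >> (32 - length));
    }

    /* Reject (as cpu_post_load() does by returning -1) any saved FPDSCR
     * image whose LTPSIZE field is not the architecturally constant 4. */
    static bool fpdscr_ltpsize_ok(uint32_t fpdscr, int shift, int width)
    {
        return extract32_like(fpdscr, shift, width) == 4;
    }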
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 17 +++++++++++++++-
 linux-user/elfload.c       |  6 +-----
 target/arm/cpu64.c         | 16 ++++++++-------
 target/arm/helper.c        |  2 +-
 target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
 target/arm/translate.c     |  6 +++---
 6 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
 }
 
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
+{
+    /*
+     * This is a placeholder for use by VCMA until the rest of
+     * the ARMv8.2-FP16 extension is implemented for aa32 mode.
+     * At which point we can properly set and check MVFR1.FPHP.
+     */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
 }
 
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
+{
+    /* We always set the AdvSIMD and FP fields identically wrt FP16.  */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     hwcaps |= ARM_HWCAP_A64_ASIMD;
 
     /* probe for the extra features */
-#define GET_FEATURE(feat, hwcap) \
-    do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
 #define GET_FEATURE_ID(feat, hwcap) \
     do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
 
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
     GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
     GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_FP16,
-                ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
+    GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
     GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
     GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
     GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
 
-#undef GET_FEATURE
 #undef GET_FEATURE_ID
 
     return hwcaps;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 
         t = cpu->isar.id_aa64pfr0;
         t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
         cpu->isar.id_aa64pfr0 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, DP, 1);
         cpu->isar.id_isar6 = u;
 
-#ifdef CONFIG_USER_ONLY
-        /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
-         * and in some cases they're only available in AArch64 and not AArch32,
-         * whereas the architecture requires them to be present in both if
-         * present in either.
+        /*
+         * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
+         * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
+         * but it is also not legal to enable SVE without support for FP16,
+         * and enabling SVE in system mode is more useful in the short term.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
+
+#ifdef CONFIG_USER_ONLY
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
          */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
     uint32_t changed;
 
     /* When ARMv8.2-FP16 is not supported, FZ16 is RES0.  */
-    if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
+    if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
         val &= ~FPCR_FZ16;
     }
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
         handle_fp_1src_double(s, opcode, rd, rn);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
         handle_fp_2src_double(s, opcode, rd, rn, rm);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
         handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
         break;
     case 0x6: /* 16-bit float, 32-bit int */
     case 0xe: /* 16-bit float, 64-bit int */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          */
         is_min = extract32(size, 1, 1);
         is_fp = true;
-        if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!is_u && dc_isar_feature(aa64_fp16, s)) {
             size = 1;
         } else if (!is_u || !is_q || extract32(size, 0, 1)) {
             unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
 
     if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
         /* Check for FMOV (vector, immediate) - half-precision */
-        if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
+        if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
     case 0x2f: /* FMINP */
         /* FP op, size[0] is 32 or 64 bit*/
         if (!u) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+            if (!dc_isar_feature(aa64_fp16, s)) {
                 unallocated_encoding(s);
                 return;
             } else {
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 0x2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
         return;
     }
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
     TCGv_ptr fpst;
     bool pairwise = false;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     case 0x1c: /* FCADD, #90 */
     case 0x1e: /* FCADD, #270 */
         if (size == 0
-            || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
+            || (size == 1 && !dc_isar_feature(aa64_fp16, s))
             || (size == 3 && !is_q)) {
             unallocated_encoding(s);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     bool need_fpst = true;
     int rmode;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         }
         break;
     }
-    if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 23, 2); /* rot */
         if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+            || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
             return 1;
         }
         fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 24, 1); /* rot */
         if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+            || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
             return 1;
         }
         fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
             return 1;
         }
         if (size == 0) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+            if (!dc_isar_feature(aa32_fp16_arith, s)) {
                 return 1;
             }
             /* For fp16, rm is just Vm, and index is M.  */
--
2.19.1


Our current codegen for MVE always calls out to helper functions,
because some byte lanes might be predicated. The common case is that
in fact there is no predication active and all lanes should be
updated together, so we can produce better code by detecting that and
using the TCG generic vector infrastructure.

Add a TB flag that is set when we can guarantee that there is no
active MVE predication, and a bool in the DisasContext. Subsequent
patches will use this flag to generate improved code for some
instructions.

In most cases when the predication state changes we simply end the TB
after that instruction. For the code called from vfp_access_check()
that handles lazy state preservation and creating a new FP context,
we can usually avoid having to try to end the TB because luckily the
new value of the flag following the register changes in those
sequences doesn't depend on any runtime decisions. We do have to end
the TB if the guest has enabled lazy FP state preservation but not
automatic state preservation, but this is an odd corner case that is
not going to be common in real-world code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
---
 target/arm/cpu.h              |  4 +++-
 target/arm/translate.h        |  2 ++
 target/arm/helper.c           | 33 +++++++++++++++++++++++++++++++++
 target/arm/translate-m-nocp.c |  8 +++++++-
 target/arm/translate-mve.c    | 13 ++++++++++++-
 target/arm/translate-vfp.c    | 33 +++++++++++++++++++++++++------
 target/arm/translate.c        |  8 ++++++++
 7 files changed, 92 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
  *     | TBFLAG_AM32 |          +-----+----------+
  *     |             |                |TBFLAG_M32|
  *     +-------------+----------------+----------+
- *      31         23                5 4        0
+ *      31         23                6 5        0
  *
  * Unless otherwise noted, these bits are cached in env->hflags.
  */
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, LSPACT, 2, 1)                 /* Not cached. */
 FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1)     /* Not cached. */
 /* Set if FPCCR.S does not match current security state */
 FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1)          /* Not cached. */
+/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
+FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1)            /* Not cached. */
 
 /*
  * Bit usage when in AArch64 state
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool align_mem;
     /* True if PSTATE.IL is set */
     bool pstate_il;
+    /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
+    bool mve_no_pred;
     /*
      * >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
      *  < 0, set by the current instruction.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
 #endif
 }
 
+static bool mve_no_pred(CPUARMState *env)
+{
+    /*
+     * Return true if there is definitely no predication of MVE
+     * instructions by VPR or LTPSIZE. (Returning false even if there
+     * isn't any predication is OK; generated code will just be
+     * a little worse.)
+     * If the CPU does not implement MVE then this TB flag is always 0.
+     *
+     * NOTE: if you change this logic, the "recalculate s->mve_no_pred"
+     * logic in gen_update_fp_context() needs to be updated to match.
+     *
+     * We do not include the effect of the ECI bits here -- they are
+     * tracked in other TB flags. This simplifies the logic for
+     * "when did we emit code that changes the MVE_NO_PRED TB flag
+     * and thus need to end the TB?".
+     */
+    if (!cpu_isar_feature(aa32_mve, env_archcpu(env))) {
+        return false;
+    }
+    if (env->v7m.vpr) {
+        return false;
+    }
+    if (env->v7m.ltpsize < 4) {
+        return false;
+    }
+    return true;
+}
+
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *pflags)
 {
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
         if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
             DP_TBFLAG_M32(flags, LSPACT, 1);
         }
+
+        if (mve_no_pred(env)) {
+            DP_TBFLAG_M32(flags, MVE_NO_PRED, 1);
+        }
     } else {
         /*
          * Note that XSCALE_CPAR shares bits with VECSTRIDE.
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
 
     clear_eci_state(s);
 
-    /* End the TB, because we have updated FP control bits */
+    /*
+     * End the TB, because we have updated FP control bits,
+     * and possibly VPR or LTPSIZE.
+     */
     s->base.is_jmp = DISAS_UPDATE_EXIT;
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         store_cpu_field(control, v7m.control[M_REG_S]);
         tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
         gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
         tcg_temp_free_i32(tmp);
         tcg_temp_free_i32(sfpa);
         break;
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         }
         tmp = loadfn(s, opaque, true);
         store_cpu_field(tmp, v7m.vpr);
+        s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
         break;
     case ARM_VFP_P0:
     {
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         tcg_gen_deposit_i32(vpr, vpr, tmp,
                             R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
         store_cpu_field(vpr, v7m.vpr);
+        s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
         tcg_temp_free_i32(tmp);
         break;
     }
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
 DO_LOGIC(VORN, gen_helper_mve_vorn)
 DO_LOGIC(VEOR, gen_helper_mve_veor)
 
-DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
+static bool trans_VPSEL(DisasContext *s, arg_2op *a)
+{
+    /* This insn updates predication bits */
+    s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
+    return do_2op(s, a, gen_helper_mve_vpsel);
+}
 
 #define DO_2OP(INSN, FN) \
     static bool trans_##INSN(DisasContext *s, arg_2op *a) \
@@ -XXX,XX +XXX,XX @@ static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
     }
 
     gen_helper_mve_vpnot(cpu_env);
+    /* This insn updates predication bits */
+    s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
     mve_update_eci(s);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
         /* VPT */
         gen_vpst(s, a->mask);
     }
+    /* This insn updates predication bits */
+    s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
     mve_update_eci(s);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
         /* VPT */
         gen_vpst(s, a->mask);
     }
+    /* This insn updates predication bits */
+    s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
     mve_update_eci(s);
     return true;
 }
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
  * Generate code for M-profile lazy FP state preservation if needed;
  * this corresponds to the pseudocode PreserveFPState() function.
  */
-static void gen_preserve_fp_state(DisasContext *s)
+static void gen_preserve_fp_state(DisasContext *s, bool skip_context_update)
 {
     if (s->v7m_lspact) {
         /*
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
          * any further FP insns in this TB.
          */
         s->v7m_lspact = false;
+        /*
+         * The helper might have zeroed VPR, so we do not know the
+         * correct value for the MVE_NO_PRED TB flag any more.
+         * If we're about to create a new fp context then that
+         * will precisely determine the MVE_NO_PRED value (see
+         * gen_update_fp_context()). Otherwise, we must:
+         *  - set s->mve_no_pred to false, so this instruction
+         *    is generated to use helper functions
+         *  - end the TB now, without chaining to the next TB
+         */
+        if (skip_context_update || !s->v7m_new_fp_ctxt_needed) {
+            s->mve_no_pred = false;
+            s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
+        }
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
             TCGv_i32 z32 = tcg_const_i32(0);
             store_cpu_field(z32, v7m.vpr);
         }
-
         /*
-         * We don't need to arrange to end the TB, because the only
-         * parts of FPSCR which we cache in the TB flags are the VECLEN
-         * and VECSTRIDE, and those don't exist for M-profile.
+         * We just updated the FPSCR and VPR. Some of this state is cached
+         * in the MVE_NO_PRED TB flag. We want to avoid having to end the
+         * TB here, which means we need the new value of the MVE_NO_PRED
+         * flag to be exactly known here and the same for all executions.
+         * Luckily FPDSCR.LTPSIZE is always constant 4 and the VPR is
+         * always set to 0, so the new MVE_NO_PRED flag is always 1
+         * if and only if we have MVE.
+         *
+         * (The other FPSCR state cached in TB flags is VECLEN and VECSTRIDE,
+         * but those do not exist for M-profile, so are not relevant here.)
          */
+        s->mve_no_pred = dc_isar_feature(aa32_mve, s);
 
         if (s->v8m_secure) {
             bits |= R_V7M_CONTROL_SFPA_MASK;
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
     /* Handle M-profile lazy FP state mechanics */
 
     /* Trigger lazy-state preservation if necessary */
-    gen_preserve_fp_state(s);
+    gen_preserve_fp_state(s, skip_context_update);
 
     if (!skip_context_update) {
         /* Update ownership of FP context and create new FP context if needed */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
         /* DLSTP: set FPSCR.LTPSIZE */
         tmp = tcg_const_i32(a->size);
         store_cpu_field(tmp, v7m.ltpsize);
+        s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
     }
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
         assert(ok);
         tmp = tcg_const_i32(a->size);
         store_cpu_field(tmp, v7m.ltpsize);
+        /*
+         * LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
+         * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
+         */
     }
     gen_jmp_tb(s, s->base.pc_next, 1);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
     gen_helper_mve_vctp(cpu_env, masklen);
     tcg_temp_free_i32(masklen);
     tcg_temp_free_i32(rn_shifted);
+    /* This insn updates predication bits */
+    s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
     mve_update_eci(s);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
             dc->v7m_new_fp_ctxt_needed =
                 EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
             dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
+            dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);
         } else {
             dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
             dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
--
2.20.1
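The translator hunks above all repeat one pattern. Pulled out as a sketch (the wrapper function name is invented; store_cpu_field() and DISAS_UPDATE_NOCHAIN are the real QEMU interfaces used in the patch):

    /* After emitting a write that can change an input to mve_no_pred()
     * (VPR or LTPSIZE), end the TB without chaining, so that the next TB
     * is looked up with freshly computed hflags, MVE_NO_PRED included. */
    static void gen_write_vpr_then_end_tb(DisasContext *s, TCGv_i32 newval)
    {
        store_cpu_field(newval, v7m.vpr);
        s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
    }

Ending without chaining matters here: a directly chained TB would keep executing with the stale flag value baked in at translation time.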
From: Richard Henderson <richard.henderson@linaro.org>

Instead of shifts and masks, use direct loads and stores from the neon
register file. Mirror the iteration structure of the ARM pseudocode
more closely. Correct the parameters of the VLD2 A2 insn.

Note that this includes a bugfix for handling of the insn
"VLD2 (multiple 2-element structures)" -- we were using an
incorrect stride value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 170 ++++++++++++++++++-----------------------
 1 file changed, 74 insertions(+), 96 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }
 
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+{
+    long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
+
+    switch (mop) {
+    case MO_UB:
+        tcg_gen_ld8u_i64(var, cpu_env, offset);
+        break;
+    case MO_UW:
+        tcg_gen_ld16u_i64(var, cpu_env, offset);
+        break;
+    case MO_UL:
+        tcg_gen_ld32u_i64(var, cpu_env, offset);
+        break;
+    case MO_Q:
+        tcg_gen_ld_i64(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static void neon_store_reg(int reg, int pass, TCGv_i32 var)
 {
     tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
     tcg_temp_free_i32(var);
 }
 
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+{
+    long offset = neon_element_offset(reg, ele, size);
+
+    switch (size) {
+    case MO_8:
+        tcg_gen_st8_i64(var, cpu_env, offset);
+        break;
+    case MO_16:
+        tcg_gen_st16_i64(var, cpu_env, offset);
+        break;
+    case MO_32:
+        tcg_gen_st32_i64(var, cpu_env, offset);
+        break;
+    case MO_64:
+        tcg_gen_st_i64(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static inline void neon_load_reg64(TCGv_i64 var, int reg)
 {
     tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
@@ -XXX,XX +XXX,XX @@ static struct {
     int interleave;
     int spacing;
 } const neon_ls_element_type[11] = {
-    {4, 4, 1},
-    {4, 4, 2},
+    {1, 4, 1},
+    {1, 4, 2},
     {4, 1, 1},
-    {4, 2, 1},
-    {3, 3, 1},
-    {3, 3, 2},
+    {2, 2, 2},
+    {1, 3, 1},
+    {1, 3, 2},
     {3, 1, 1},
     {1, 1, 1},
-    {2, 2, 1},
-    {2, 2, 2},
+    {1, 2, 1},
+    {1, 2, 2},
     {2, 1, 1}
 };
 
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int shift;
     int n;
     int vec_size;
+    int mmu_idx;
+    TCGMemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     rn = (insn >> 16) & 0xf;
     rm = insn & 0xf;
     load = (insn & (1 << 21)) != 0;
+    endian = s->be_data;
+    mmu_idx = get_mem_index(s);
     if ((insn & (1 << 23)) == 0) {
         /* Load store all elements.  */
         op = (insn >> 8) & 0xf;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
         nregs = neon_ls_element_type[op].nregs;
         interleave = neon_ls_element_type[op].interleave;
         spacing = neon_ls_element_type[op].spacing;
-        if (size == 3 && (interleave | spacing) != 1)
+        if (size == 3 && (interleave | spacing) != 1) {
             return 1;
+        }
+        tmp64 = tcg_temp_new_i64();
         addr = tcg_temp_new_i32();
+        tmp2 = tcg_const_i32(1 << size);
         load_reg_var(s, addr, rn);
-        stride = (1 << size) * interleave;
         for (reg = 0; reg < nregs; reg++) {
-            if (interleave > 2 || (interleave == 2 && nregs == 2)) {
-                load_reg_var(s, addr, rn);
-                tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
-            } else if (interleave == 2 && nregs == 4 && reg == 2) {
-                load_reg_var(s, addr, rn);
-                tcg_gen_addi_i32(addr, addr, 1 << size);
-            }
-            if (size == 3) {
-                tmp64 = tcg_temp_new_i64();
-                if (load) {
-                    gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
-                    neon_store_reg64(tmp64, rd);
-                } else {
-                    neon_load_reg64(tmp64, rd);
-                    gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
-                }
-                tcg_temp_free_i64(tmp64);
-                tcg_gen_addi_i32(addr, addr, stride);
-            } else {
-                for (pass = 0; pass < 2; pass++) {
-                    if (size == 2) {
-                        if (load) {
-                            tmp = tcg_temp_new_i32();
-                            gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
-                            neon_store_reg(rd, pass, tmp);
-                        } else {
-                            tmp = neon_load_reg(rd, pass);
-                            gen_aa32_st32(s, tmp, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp);
-                        }
-                        tcg_gen_addi_i32(addr, addr, stride);
-                    } else if (size == 1) {
-                        if (load) {
-                            tmp = tcg_temp_new_i32();
-                            gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            tmp2 = tcg_temp_new_i32();
-                            gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            tcg_gen_shli_i32(tmp2, tmp2, 16);
-                            tcg_gen_or_i32(tmp, tmp, tmp2);
-                            tcg_temp_free_i32(tmp2);
-                            neon_store_reg(rd, pass, tmp);
-                        } else {
-                            tmp = neon_load_reg(rd, pass);
-                            tmp2 = tcg_temp_new_i32();
-                            tcg_gen_shri_i32(tmp2, tmp, 16);
-                            gen_aa32_st16(s, tmp, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp);
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp2);
-                            tcg_gen_addi_i32(addr, addr, stride);
-                        }
-                    } else /* size == 0 */ {
-                        if (load) {
-                            tmp2 = NULL;
-                            for (n = 0; n < 4; n++) {
-                                tmp = tcg_temp_new_i32();
-                                gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
-                                tcg_gen_addi_i32(addr, addr, stride);
-                                if (n == 0) {
-                                    tmp2 = tmp;
-                                } else {
-                                    tcg_gen_shli_i32(tmp, tmp, n * 8);
-                                    tcg_gen_or_i32(tmp2, tmp2, tmp);
-                                    tcg_temp_free_i32(tmp);
-                                }
-                            }
-                            neon_store_reg(rd, pass, tmp2);
-                        } else {
-                            tmp2 = neon_load_reg(rd, pass);
-                            for (n = 0; n < 4; n++) {
-                                tmp = tcg_temp_new_i32();
-                                if (n == 0) {
-                                    tcg_gen_mov_i32(tmp, tmp2);
-                                } else {
-                                    tcg_gen_shri_i32(tmp, tmp2, n * 8);
-                                }
-                                gen_aa32_st8(s, tmp, addr, get_mem_index(s));
-                                tcg_temp_free_i32(tmp);
-                                tcg_gen_addi_i32(addr, addr, stride);
-                            }
-                            tcg_temp_free_i32(tmp2);
-                        }
+            for (n = 0; n < 8 >> size; n++) {
+                int xs;
+                for (xs = 0; xs < interleave; xs++) {
+                    int tt = rd + reg + spacing * xs;
+
+                    if (load) {
+                        gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
+                        neon_store_element64(tt, n, size, tmp64);
+                    } else {
+                        neon_load_element64(tmp64, tt, n, size);
+                        gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
                     }
+                    tcg_gen_add_i32(addr, addr, tmp2);
                 }
             }
-            rd += spacing;
         }
         tcg_temp_free_i32(addr);
-        stride = nregs * 8;
+        tcg_temp_free_i32(tmp2);
+        tcg_temp_free_i64(tmp64);
+        stride = nregs * interleave * 8;
     } else {
         size = (insn >> 10) & 3;
         if (size == 3) {
--
2.19.1


When not predicating, implement the MVE bitwise logical insns
directly using TCG vector operations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 51 +++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr mve_qreg_ptr(unsigned reg)
     return ret;
 }
 
+static bool mve_no_predication(DisasContext *s)
+{
+    /*
+     * Return true if we are executing the entire MVE instruction
+     * with no predication or partial-execution, and so we can safely
+     * use an inline TCG vector implementation.
+     */
+    return s->eci == 0 && s->mve_no_pred;
+}
+
 static bool mve_check_qreg_bank(DisasContext *s, int qmask)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
     return do_1op(s, a, fns[a->size]);
 }
 
-static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
+static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
+                       GVecGen3Fn *vecfn)
 {
     TCGv_ptr qd, qn, qm;
 
@@ -XXX,XX +XXX,XX @@ static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
-    qn = mve_qreg_ptr(a->qn);
-    qm = mve_qreg_ptr(a->qm);
-    fn(cpu_env, qd, qn, qm);
-    tcg_temp_free_ptr(qd);
-    tcg_temp_free_ptr(qn);
-    tcg_temp_free_ptr(qm);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
+              mve_qreg_offset(a->qm), 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        qn = mve_qreg_ptr(a->qn);
+        qm = mve_qreg_ptr(a->qm);
+        fn(cpu_env, qd, qn, qm);
+        tcg_temp_free_ptr(qd);
+        tcg_temp_free_ptr(qn);
+        tcg_temp_free_ptr(qm);
+    }
     mve_update_eci(s);
     return true;
 }
 
-#define DO_LOGIC(INSN, HELPER) \
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn *fn)
+{
+    return do_2op_vec(s, a, fn, NULL);
+}
+
+#define DO_LOGIC(INSN, HELPER, VECFN) \
     static bool trans_##INSN(DisasContext *s, arg_2op *a) \
     { \
-        return do_2op(s, a, HELPER); \
+        return do_2op_vec(s, a, HELPER, VECFN); \
     }
 
-DO_LOGIC(VAND, gen_helper_mve_vand)
-DO_LOGIC(VBIC, gen_helper_mve_vbic)
-DO_LOGIC(VORR, gen_helper_mve_vorr)
-DO_LOGIC(VORN, gen_helper_mve_vorn)
-DO_LOGIC(VEOR, gen_helper_mve_veor)
+DO_LOGIC(VAND, gen_helper_mve_vand, tcg_gen_gvec_and)
+DO_LOGIC(VBIC, gen_helper_mve_vbic, tcg_gen_gvec_andc)
+DO_LOGIC(VORR, gen_helper_mve_vorr, tcg_gen_gvec_or)
+DO_LOGIC(VORN, gen_helper_mve_vorn, tcg_gen_gvec_orc)
+DO_LOGIC(VEOR, gen_helper_mve_veor, tcg_gen_gvec_xor)
 
 static bool trans_VPSEL(DisasContext *s, arg_2op *a)
 {
--
2.20.1
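The helper-to-gvec pairings in DO_LOGIC() work because these insns are pure bitwise operations and therefore element-size agnostic: one host-word operation per chunk is correct for every lane width. In scalar form (an illustrative model, not QEMU code):

    #include <stdint.h>

    static uint64_t vand(uint64_t n, uint64_t m) { return n & m;  } /* tcg_gen_gvec_and  */
    static uint64_t vbic(uint64_t n, uint64_t m) { return n & ~m; } /* tcg_gen_gvec_andc */
    static uint64_t vorr(uint64_t n, uint64_t m) { return n | m;  } /* tcg_gen_gvec_or   */
    static uint64_t vorn(uint64_t n, uint64_t m) { return n | ~m; } /* tcg_gen_gvec_orc  */
    static uint64_t veor(uint64_t n, uint64_t m) { return n ^ m;  } /* tcg_gen_gvec_xor  */

This size-independence is also why DO_LOGIC() takes a single helper rather than the per-size helper table that DO_2OP() needs.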
The switch_mode() function is defined in target/arm/helper.c and used
only in that file and nowhere else, so we can make it file-local
rather than global.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
---
 target/arm/internals.h | 1 -
 target/arm/helper.c    | 6 ++++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
     g_assert_not_reached();
 }
 
-void switch_mode(CPUARMState *, int);
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
 void arm_translate_init(void);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
                                 V8M_SAttributes *sattrs);
 #endif
 
+static void switch_mode(CPUARMState *env, int mode);
+
 static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
 {
     int nregs;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return 0;
 }
 
-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
 
 #else
 
-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     int old_mode;
     int i;
--
2.19.1


Optimize MVE arithmetic ops when we have a TCG
vector operation we can use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
     return do_2op(s, a, gen_helper_mve_vpsel);
 }
 
-#define DO_2OP(INSN, FN) \
+#define DO_2OP_VEC(INSN, FN, VECFN) \
     static bool trans_##INSN(DisasContext *s, arg_2op *a) \
     { \
         static MVEGenTwoOpFn * const fns[] = { \
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
             gen_helper_mve_##FN##w, \
             NULL, \
         }; \
-        return do_2op(s, a, fns[a->size]); \
+        return do_2op_vec(s, a, fns[a->size], VECFN); \
     }
 
-DO_2OP(VADD, vadd)
-DO_2OP(VSUB, vsub)
-DO_2OP(VMUL, vmul)
+#define DO_2OP(INSN, FN) DO_2OP_VEC(INSN, FN, NULL)
+
+DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
+DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
+DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
 DO_2OP(VMULH_S, vmulhs)
 DO_2OP(VMULH_U, vmulhu)
 DO_2OP(VRMULH_S, vrmulhs)
 DO_2OP(VRMULH_U, vrmulhu)
-DO_2OP(VMAX_S, vmaxs)
-DO_2OP(VMAX_U, vmaxu)
-DO_2OP(VMIN_S, vmins)
-DO_2OP(VMIN_U, vminu)
+DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
+DO_2OP_VEC(VMAX_U, vmaxu, tcg_gen_gvec_umax)
+DO_2OP_VEC(VMIN_S, vmins, tcg_gen_gvec_smin)
+DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
 DO_2OP(VABD_S, vabds)
 DO_2OP(VABD_U, vabdu)
 DO_2OP(VHADD_S, vhadds)
--
2.20.1
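Written out by hand, the DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add) line above expands to roughly the following (a sketch; the b/h/w helper spellings follow the ##FN## token pasting in the macro):

    static bool trans_VADD(DisasContext *s, arg_2op *a)
    {
        /* One out-of-line helper per element size; size 3 is unallocated. */
        static MVEGenTwoOpFn * const fns[] = {
            gen_helper_mve_vaddb,
            gen_helper_mve_vaddh,
            gen_helper_mve_vaddw,
            NULL,
        };
        /* do_2op_vec() uses tcg_gen_gvec_add when predication is known to
         * be inactive, and falls back to fns[a->size] otherwise. */
        return do_2op_vec(s, a, fns[a->size], tcg_gen_gvec_add);
    }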
Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
---
 target/arm/internals.h | 5 +++++
 target/arm/helper.c    | 4 ++--
 target/arm/kvm64.c     | 2 +-
 target/arm/op_helper.c | 2 +-
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
 #define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
 #define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
 
+static inline uint32_t syn_get_ec(uint32_t syn)
+{
+    return syn >> ARM_EL_EC_SHIFT;
+}
+
 /* Utility functions for constructing various kinds of syndrome value.
  * Note that in general we follow the AArch64 syndrome values; in a
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
     uint32_t moe;
 
     /* If this is a debug exception we must update the DBGDSCR.MOE bits */
-    switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
+    switch (syn_get_ec(env->exception.syndrome)) {
     case EC_BREAKPOINT:
     case EC_BREAKPOINT_SAME_EL:
         moe = 1;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
     if (qemu_loglevel_mask(CPU_LOG_INT)
         && !excp_is_internal(cs->exception_index)) {
         qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
-                      env->exception.syndrome >> ARM_EL_EC_SHIFT,
+                      syn_get_ec(env->exception.syndrome),
                       env->exception.syndrome);
     }
 
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
 
 bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
 {
-    int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
+    int hsr_ec = syn_get_ec(debug_exit->hsr);
     ARMCPU *cpu = ARM_CPU(cs);
     CPUClass *cc = CPU_GET_CLASS(cs);
     CPUARMState *env = &cpu->env;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
          * (see DDI0478C.a D1.10.4)
          */
        target_el = 2;
-        if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+        if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
             syndrome = syn_uncategorized();
         }
     }
--
2.19.1


Optimize the MVE VNEG and VABS insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
     return true;
 }
 
-static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+static bool do_1op_vec(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn,
+                       GVecGen2Fn vecfn)
 {
     TCGv_ptr qd, qm;
 
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
-    qm = mve_qreg_ptr(a->qm);
-    fn(cpu_env, qd, qm);
-    tcg_temp_free_ptr(qd);
-    tcg_temp_free_ptr(qm);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm), 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        qm = mve_qreg_ptr(a->qm);
+        fn(cpu_env, qd, qm);
+        tcg_temp_free_ptr(qd);
+        tcg_temp_free_ptr(qm);
+    }
     mve_update_eci(s);
     return true;
 }
 
-#define DO_1OP(INSN, FN) \
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+{
+    return do_1op_vec(s, a, fn, NULL);
+}
+
+#define DO_1OP_VEC(INSN, FN, VECFN) \
     static bool trans_##INSN(DisasContext *s, arg_1op *a) \
     { \
         static MVEGenOneOpFn * const fns[] = { \
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
             gen_helper_mve_##FN##w, \
             NULL, \
         }; \
-        return do_1op(s, a, fns[a->size]); \
+        return do_1op_vec(s, a, fns[a->size], VECFN); \
    }
 
+#define DO_1OP(INSN, FN) DO_1OP_VEC(INSN, FN, NULL)
+
 DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
-DO_1OP(VABS, vabs)
-DO_1OP(VNEG, vneg)
+DO_1OP_VEC(VABS, vabs, tcg_gen_gvec_abs)
+DO_1OP_VEC(VNEG, vneg, tcg_gen_gvec_neg)
 DO_1OP(VQABS, vqabs)
 DO_1OP(VQNEG, vqneg)
 DO_1OP(VMAXA, vmaxa)
--
2.20.1
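For the unpredicated fast path, the interesting part is the vecfn() invocation inside do_1op_vec(). For VABS it comes out as below (a sketch; the wrapper function name is invented for illustration). The two 16s are the gvec oprsz and maxsz arguments in bytes, both fixed because MVE vectors are always 128 bits:

    /* What trans_VABS effectively emits when mve_no_predication(s) is
     * true: vece = a->size selects the element width, and the offsets
     * locate the Q registers inside CPUARMState. */
    static void emit_unpredicated_vabs(DisasContext *s, arg_1op *a)
    {
        tcg_gen_gvec_abs(a->size, mve_qreg_offset(a->qd),
                         mve_qreg_offset(a->qm), 16, 16);
    }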
For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
 * EC == EC_UNCATEGORIZED
 * prefetch aborts
 * data aborts where ISV is 0

(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)

This deals with a corner case noted in a comment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
---
 target/arm/internals.h |  7 ++-----
 target/arm/helper.c    | 13 +++++++++++++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
 /* Utility functions for constructing various kinds of syndrome value.
  * Note that in general we follow the AArch64 syndrome values; in a
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
- * mode differs slightly, so if we ever implemented Hyp mode then the
- * syndrome value would need some massaging on exception entry.
- * (One example of this is that AArch64 defaults to IL bit set for
- * exceptions which don't specifically indicate information about the
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
+ * mode differs slightly, and we fix this up when populating HSR in
+ * arm_cpu_do_interrupt_aarch32_hyp().
  */
 static inline uint32_t syn_uncategorized(void)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
     }
 
     if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            /*
+             * QEMU syndrome values are v8-style. v7 has the IL bit
+             * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
+             * If this is a v7 CPU, squash the IL bit in those cases.
+             */
+            if (cs->exception_index == EXCP_PREFETCH_ABORT ||
+                (cs->exception_index == EXCP_DATA_ABORT &&
+                 !(env->exception.syndrome & ARM_EL_ISV)) ||
+                syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
+                env->exception.syndrome &= ~ARM_EL_IL;
+            }
+        }
         env->cp15.esr_el[2] = env->exception.syndrome;
     }
 
--
2.19.1


Optimize the MVE VDUP insns by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
     rt = load_reg(s, a->rt);
-    tcg_gen_dup_i32(a->size, rt, rt);
-    gen_helper_mve_vdup(cpu_env, qd, rt);
-    tcg_temp_free_ptr(qd);
+    if (mve_no_predication(s)) {
+        tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        tcg_gen_dup_i32(a->size, rt, rt);
+        gen_helper_mve_vdup(cpu_env, qd, rt);
+        tcg_temp_free_ptr(qd);
+    }
     tcg_temp_free_i32(rt);
     mve_update_eci(s);
     return true;
--
2.20.1
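As a standalone illustration of the IL-bit squash in the v7 syndrome patch
above, the sketch below applies the same test to a raw syndrome word. It is
not part of the patch: the constants mirror QEMU's ARM_EL_* definitions and
the bit positions (EC in [31:26], IL bit 25, ISV bit 24) come from the Arm
ARM, so treat it as an approximation of the logic rather than the real code.

    #include <stdint.h>
    #include <stdio.h>

    #define ARM_EL_IL        (1u << 25)   /* instruction-length bit */
    #define ARM_EL_ISV       (1u << 24)   /* data aborts: syndrome valid */
    #define EC_UNCATEGORIZED 0x0u

    static uint32_t syn_get_ec(uint32_t syn)
    {
        return (syn >> 26) & 0x3f;
    }

    /* Mirrors the condition added to arm_cpu_do_interrupt_aarch32_hyp() */
    static uint32_t squash_il_for_v7(uint32_t syn, int is_pabt, int is_dabt)
    {
        if (is_pabt ||
            (is_dabt && !(syn & ARM_EL_ISV)) ||
            syn_get_ec(syn) == EC_UNCATEGORIZED) {
            syn &= ~ARM_EL_IL;   /* v7: IL is UNK/SBZP here, v8: RES1 */
        }
        return syn;
    }

    int main(void)
    {
        uint32_t syn = ARM_EL_IL;  /* uncategorized exception, v8-style IL=1 */
        printf("%08x -> %08x\n", syn, squash_il_for_v7(syn, 0, 0));
        return 0;
    }
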
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
---
 target/arm/internals.h | 14 +++++++++++++-
 target/arm/helper.c    | 9 +++++++++
 target/arm/translate.c | 8 ++++----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
  * mode differs slightly, and we fix this up when populating HSR in
  * arm_cpu_do_interrupt_aarch32_hyp().
+ * The exception is FP/SIMD access traps -- these report extra information
+ * when taking an exception to AArch32. For those we include the extra coproc
+ * and TA fields, and mask them out when taking the exception to AArch64.
  */
 static inline uint32_t syn_uncategorized(void)
 {
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
 
 static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
 {
+    /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
     return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
         | (is_16bit ? 0 : ARM_EL_IL)
-        | (cv << 24) | (cond << 20);
+        | (cv << 24) | (cond << 20) | 0xa;
+}
+
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
+{
+    /* AArch32 SIMD trap: TA == 1 coproc == 0 */
+    return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
+        | (is_16bit ? 0 : ARM_EL_IL)
+        | (cv << 24) | (cond << 20) | (1 << 5);
 }
 
 static inline uint32_t syn_sve_access_trap(void)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_HVC:
     case EXCP_HYP_TRAP:
     case EXCP_SMC:
+        if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
+            /*
+             * QEMU internal FP/SIMD syndromes from AArch32 include the
+             * TA and coproc fields which are only exposed if the exception
+             * is taken to AArch32 Hyp mode. Mask them out to get a valid
+             * AArch64 format syndrome.
+             */
+            env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
+        }
         env->cp15.esr_el[new_el] = env->exception.syndrome;
         break;
     case EXCP_IRQ:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
      */
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
 
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
      */
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
 
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
     if (!s->vfp_enabled) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
 
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
     if (!s->vfp_enabled) {
--
2.19.1


Optimize the MVE VMVN insn by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
 
 static bool trans_VMVN(DisasContext *s, arg_1op *a)
 {
-    return do_1op(s, a, gen_helper_mve_vmvn);
+    return do_1op_vec(s, a, gen_helper_mve_vmvn, tcg_gen_gvec_not);
 }
 
 static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
--
2.20.1

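The two syndrome layouts distinguished by the FP/SIMD trap patch above can be
shown in a standalone sketch. This is not QEMU code: the EC value (0x07) and
the field positions (CV bit 24, COND bits 23:20, TA bit 5, coproc bits 3:0)
follow the Arm ARM HSR encoding, and the mask matches the patch's
MAKE_64BIT_MASK(0, 20).

    #include <stdint.h>
    #include <stdio.h>

    #define EC_ADVSIMDFPACCESSTRAP 0x07u

    static uint32_t syn_fp_access(int cv, int cond)
    {
        /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0, coproc == 0xa */
        return (EC_ADVSIMDFPACCESSTRAP << 26) | (1u << 25)
            | ((uint32_t)cv << 24) | ((uint32_t)cond << 20) | 0xa;
    }

    static uint32_t syn_simd_access(int cv, int cond)
    {
        /* AArch32 SIMD trap: TA == 1, coproc == 0 */
        return (EC_ADVSIMDFPACCESSTRAP << 26) | (1u << 25)
            | ((uint32_t)cv << 24) | ((uint32_t)cond << 20) | (1u << 5);
    }

    int main(void)
    {
        uint32_t aa32 = syn_simd_access(1, 0xe);
        /* for AArch64, the low 20 bits (which hold TA and coproc) are masked */
        printf("%08x -> %08x\n", aa32, aa32 & ~((1u << 20) - 1));
        return (void)syn_fp_access, 0;
    }
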
The HCR.DC virtualization configuration register bit has the
following effects:
 * SCTLR.M behaves as if it is 0 for all purposes except
   direct reads of the bit
 * HCR.VM behaves as if it is 1 for all purposes except
   direct reads of the bit
 * the memory type produced by the first stage of the EL1&EL0
   translation regime is Normal Non-Shareable,
   Inner Write-Back Read-Allocate Write-Allocate,
   Outer Write-Back Read-Allocate Write-Allocate.

Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
      * * The Non-secure TTBCR.EAE bit is set to 1
      * * The implementation includes EL2, and the value of HCR.VM is 1
      *
+     * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
+     *
      * ATS1Hx always uses the 64bit format (not supported yet).
      */
     format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
     if (arm_feature(env, ARM_FEATURE_EL2)) {
         if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
-            format64 |= env->cp15.hcr_el2 & HCR_VM;
+            format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
         } else {
             format64 |= arm_current_el(env) == 2;
         }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
     }
 
     if (mmu_idx == ARMMMUIdx_S2NS) {
-        return (env->cp15.hcr_el2 & HCR_VM) == 0;
+        /* HCR.DC means HCR.VM behaves as 1 */
+        return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
     }
 
     if (env->cp15.hcr_el2 & HCR_TGE) {
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }
 
+    if ((env->cp15.hcr_el2 & HCR_DC) &&
+        (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
+        /* HCR.DC means SCTLR_EL1.M behaves as 0 */
+        return true;
+    }
+
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
 
     /* Combine the S1 and S2 cache attributes, if needed */
     if (!ret && cacheattrs != NULL) {
+        if (env->cp15.hcr_el2 & HCR_DC) {
+            /*
+             * HCR.DC forces the first stage attributes to
+             *  Normal Non-Shareable,
+             *  Inner Write-Back Read-Allocate Write-Allocate,
+             *  Outer Write-Back Read-Allocate Write-Allocate.
+             */
+            cacheattrs->attrs = 0xff;
+            cacheattrs->shareability = 0;
+        }
         *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
     }
--
2.19.1


Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 83 +++++++++++++++++++++++++++++---------
 1 file changed, 63 insertions(+), 20 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
     return do_1imm(s, a, fn);
 }
 
-static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
-                      bool negateshift)
+static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
+                          bool negateshift, GVecGen2iFn vecfn)
 {
     TCGv_ptr qd, qm;
     int shift = a->shift;
@@ -XXX,XX +XXX,XX @@ static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
         shift = -shift;
     }
 
-    qd = mve_qreg_ptr(a->qd);
-    qm = mve_qreg_ptr(a->qm);
-    fn(cpu_env, qd, qm, tcg_constant_i32(shift));
-    tcg_temp_free_ptr(qd);
-    tcg_temp_free_ptr(qm);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm),
+              shift, 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        qm = mve_qreg_ptr(a->qm);
+        fn(cpu_env, qd, qm, tcg_constant_i32(shift));
+        tcg_temp_free_ptr(qd);
+        tcg_temp_free_ptr(qm);
+    }
     mve_update_eci(s);
     return true;
 }
 
-#define DO_2SHIFT(INSN, FN, NEGATESHIFT)                        \
-    static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
-    {                                                           \
-        static MVEGenTwoOpShiftFn * const fns[] = {             \
-            gen_helper_mve_##FN##b,                             \
-            gen_helper_mve_##FN##h,                             \
-            gen_helper_mve_##FN##w,                             \
-            NULL,                                               \
-        };                                                      \
-        return do_2shift(s, a, fns[a->size], NEGATESHIFT);      \
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
+                      bool negateshift)
+{
+    return do_2shift_vec(s, a, fn, negateshift, NULL);
+}
+
+#define DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, VECFN)             \
+    static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
+    {                                                           \
+        static MVEGenTwoOpShiftFn * const fns[] = {             \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_2shift_vec(s, a, fns[a->size], NEGATESHIFT, VECFN); \
     }
 
-DO_2SHIFT(VSHLI, vshli_u, false)
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
+    DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, NULL)
+
+static void do_gvec_shri_s(unsigned vece, uint32_t dofs, uint32_t aofs,
+                           int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    /*
+     * We get here with a negated shift count, and we must handle
+     * shifts by the element size, which tcg_gen_gvec_sari() does not do.
+     */
+    shift = -shift;
+    if (shift == (8 << vece)) {
+        shift--;
+    }
+    tcg_gen_gvec_sari(vece, dofs, aofs, shift, oprsz, maxsz);
+}
+
+static void do_gvec_shri_u(unsigned vece, uint32_t dofs, uint32_t aofs,
+                           int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    /*
+     * We get here with a negated shift count, and we must handle
+     * shifts by the element size, which tcg_gen_gvec_shri() does not do.
+     */
+    shift = -shift;
+    if (shift == (8 << vece)) {
+        tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, 0);
+    } else {
+        tcg_gen_gvec_shri(vece, dofs, aofs, shift, oprsz, maxsz);
+    }
+}
+
+DO_2SHIFT_VEC(VSHLI, vshli_u, false, tcg_gen_gvec_shli)
 DO_2SHIFT(VQSHLI_S, vqshli_s, false)
 DO_2SHIFT(VQSHLI_U, vqshli_u, false)
 DO_2SHIFT(VQSHLUI, vqshlui_s, false)
 /* These right shifts use a left-shift helper with negated shift count */
-DO_2SHIFT(VSHRI_S, vshli_s, true)
-DO_2SHIFT(VSHRI_U, vshli_u, true)
+DO_2SHIFT_VEC(VSHRI_S, vshli_s, true, do_gvec_shri_s)
+DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
 DO_2SHIFT(VRSHRI_S, vrshli_s, true)
 DO_2SHIFT(VRSHRI_U, vrshli_u, true)
--
2.20.1

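The edge case handled by do_gvec_shri_s/do_gvec_shri_u above -- a right shift
by the full element size, which the gvec ops (and C's >> operator) do not
accept -- can be modelled per lane in plain C. This standalone sketch is an
approximation of the semantics, not QEMU code:

    #include <stdint.h>
    #include <stdio.h>

    static int8_t sshr_lane(int8_t x, int shift)   /* shift in 1..8 */
    {
        if (shift == 8) {
            shift = 7;       /* arithmetic shift saturates at the sign bit */
        }
        return (int8_t)(x >> shift);
    }

    static uint8_t ushr_lane(uint8_t x, int shift) /* shift in 1..8 */
    {
        return shift == 8 ? 0 : (uint8_t)(x >> shift);  /* logical: all zero */
    }

    int main(void)
    {
        /* signed -128 >> 8 is -1 (all sign bits); unsigned 0x80 >> 8 is 0 */
        printf("%d %u\n", sshr_lane(-128, 8), ushr_lane(0x80, 8));
        return 0;
    }
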
From: Richard Henderson <richard.henderson@linaro.org>

Move shi_op and sli_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |   2 +
 target/arm/translate-a64.c | 152 +----------------------
 target/arm/translate.c     | 244 ++++++++++++++++++++++++++-----------
 3 files changed, 179 insertions(+), 219 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
 extern const GVecGen2i ssra_op[4];
 extern const GVecGen2i usra_op[4];
+extern const GVecGen2i sri_op[4];
+extern const GVecGen2i sli_op[4];
 
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_shri_i32(a, a, shift);
-    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
-}
-
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_shri_i64(a, a, shift);
-    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
-}
-
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
-    tcg_gen_shri_vec(vece, t, a, sh);
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-
-    tcg_temp_free_vec(t);
-    tcg_temp_free_vec(m);
-}
-
 /* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i sri_op[4] = {
-        { .fni8 = gen_shr8_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_8 },
-        { .fni8 = gen_shr16_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_16 },
-        { .fni4 = gen_shr32_ins_i32,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_32 },
-        { .fni8 = gen_shr64_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_64 },
-    };
-
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = 2 * (8 << size) - immhb;
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     clear_vec_high(s, is_q, rd);
 }
 
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
-}
-
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
-}
-
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    uint64_t mask = (1ull << sh) - 1;
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_dupi_vec(vece, m, mask);
-    tcg_gen_shli_vec(vece, t, a, sh);
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-
-    tcg_temp_free_vec(t);
-    tcg_temp_free_vec(m);
-}
-
 /* SHL/SLI - Vector shift left */
 static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i shi_op[4] = {
-        { .fni8 = gen_shl8_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_8 },
-        { .fni8 = gen_shl16_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_16 },
-        { .fni4 = gen_shl32_ins_i32,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_32 },
-        { .fni8 = gen_shl64_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_64 },
-    };
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = immhb - (8 << size);
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
     }
 
     if (insert) {
-        gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
+        gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
     } else {
         gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
       .vece = MO_64, },
 };
 
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_8, 0xff >> shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
+}
+
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
+}
+
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    if (sh == 0) {
+        tcg_gen_mov_vec(d, a);
+    } else {
+        TCGv_vec t = tcg_temp_new_vec_matching(d);
+        TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+        tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
+        tcg_gen_shri_vec(vece, t, a, sh);
+        tcg_gen_and_vec(vece, d, d, m);
+        tcg_gen_or_vec(vece, d, d, t);
+
+        tcg_temp_free_vec(t);
+        tcg_temp_free_vec(m);
+    }
+}
+
+const GVecGen2i sri_op[4] = {
+    { .fni8 = gen_shr8_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_8 },
+    { .fni8 = gen_shr16_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_shr32_ins_i32,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_shr64_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_64 },
+};
+
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_8, 0xff << shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
+}
+
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
+}
+
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    if (sh == 0) {
+        tcg_gen_mov_vec(d, a);
+    } else {
+        TCGv_vec t = tcg_temp_new_vec_matching(d);
+        TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+        tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
+        tcg_gen_shli_vec(vece, t, a, sh);
+        tcg_gen_and_vec(vece, d, d, m);
+        tcg_gen_or_vec(vece, d, d, t);
+
+        tcg_temp_free_vec(t);
+        tcg_temp_free_vec(m);
+    }
+}
+
+const GVecGen2i sli_op[4] = {
+    { .fni8 = gen_shl8_ins_i64,
+      .fniv = gen_shl_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shli_vec,
+      .vece = MO_8 },
+    { .fni8 = gen_shl16_ins_i64,
+      .fniv = gen_shl_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shli_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_shl32_ins_i32,
+      .fniv = gen_shl_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shli_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_shl64_ins_i64,
+      .fniv = gen_shl_ins_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_shli_vec,
+      .vece = MO_64 },
+};
+
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
    We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     int pairwise;
     int u;
     int vec_size;
-    uint32_t imm, mask;
+    uint32_t imm;
     TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
     TCGv_ptr ptr1, ptr2, ptr3;
     TCGv_i64 tmp64;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             return 0;
 
+        case 4: /* VSRI */
+            if (!u) {
+                return 1;
+            }
+            /* Right shift comes here negative.  */
+            shift = -shift;
+            /* Shift out of range leaves destination unchanged.  */
+            if (shift < 8 << size) {
+                tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+                                shift, &sri_op[size]);
+            }
+            return 0;
+
         case 5: /* VSHL, VSLI */
-            if (!u) { /* VSHL */
+            if (u) { /* VSLI */
+                /* Shift out of range leaves destination unchanged.  */
+                if (shift < 8 << size) {
+                    tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
+                                    vec_size, shift, &sli_op[size]);
+                }
+            } else { /* VSHL */
                 /* Shifts larger than the element size are
                  * architecturally valid and results in zero.
                  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
                                       vec_size, vec_size);
                 }
-                return 0;
             }
-            break;
+            return 0;
         }
 
         if (size == 3) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     else
                         gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
                     break;
-                case 4: /* VSRI */
-                case 5: /* VSHL, VSLI */
-                    gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
-                    break;
                 case 6: /* VQSHLU */
                     gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
                                               cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     /* Accumulate.  */
                     neon_load_reg64(cpu_V1, rd + pass);
                     tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
-                } else if (op == 4 || (op == 5 && u)) {
-                    /* Insert */
-                    neon_load_reg64(cpu_V1, rd + pass);
-                    uint64_t mask;
-                    if (shift < -63 || shift > 63) {
-                        mask = 0;
-                    } else {
-                        if (op == 4) {
-                            mask = 0xffffffffffffffffull >> -shift;
-                        } else {
-                            mask = 0xffffffffffffffffull << shift;
-                        }
-                    }
-                    tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
-                    tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
                 }
                 neon_store_reg64(cpu_V0, rd + pass);
             } else { /* size < 3 */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 case 3: /* VRSRA */
                     GEN_NEON_INTEGER_OP(rshl);
                     break;
-                case 4: /* VSRI */
-                case 5: /* VSHL, VSLI */
-                    switch (size) {
-                    case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
-                    case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
-                    case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
-                    default: abort();
-                    }
-                    break;
                 case 6: /* VQSHLU */
                     switch (size) {
                     case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                         tmp2 = neon_load_reg(rd, pass);
                         gen_neon_add(size, tmp, tmp2);
                         tcg_temp_free_i32(tmp2);
-                    } else if (op == 4 || (op == 5 && u)) {
-                        /* Insert */
-                        switch (size) {
-                        case 0:
-                            if (op == 4)
-                                mask = 0xff >> -shift;
-                            else
-                                mask = (uint8_t)(0xff << shift);
-                            mask |= mask << 8;
-                            mask |= mask << 16;
-                            break;
-                        case 1:
-                            if (op == 4)
-                                mask = 0xffff >> -shift;
-                            else
-                                mask = (uint16_t)(0xffff << shift);
-                            mask |= mask << 16;
-                            break;
-                        case 2:
-                            if (shift < -31 || shift > 31) {
-                                mask = 0;
-                            } else {
-                                if (op == 4)
-                                    mask = 0xffffffffu >> -shift;
-                                else
-                                    mask = 0xffffffffu << shift;
-                            }
-                            break;
-                        default:
-                            abort();
-                        }
-                        tmp2 = neon_load_reg(rd, pass);
-                        tcg_gen_andi_i32(tmp, tmp, mask);
-                        tcg_gen_andi_i32(tmp2, tmp2, ~mask);
-                        tcg_gen_or_i32(tmp, tmp, tmp2);
-                        tcg_temp_free_i32(tmp2);
                     }
                     neon_store_reg(rd, pass, tmp);
                 }
--
2.19.1


Optimize the MVE VSHLL insns by using TCG vector ops when possible.
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
with zero shift count".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 67 +++++++++++++++++++++++++++++-----
 1 file changed, 59 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
 DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
 DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
 
-#define DO_VSHLL(INSN, FN)                                      \
-    static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
-    {                                                           \
-        static MVEGenTwoOpShiftFn * const fns[] = {             \
-            gen_helper_mve_##FN##b,                             \
-            gen_helper_mve_##FN##h,                             \
-        };                                                      \
-        return do_2shift(s, a, fns[a->size], false);            \
+#define DO_VSHLL(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
+    {                                                           \
+        static MVEGenTwoOpShiftFn * const fns[] = {             \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+        };                                                      \
+        return do_2shift_vec(s, a, fns[a->size], false, do_gvec_##FN); \
     }
 
+/*
+ * For the VSHLL vector helpers, the vece is the size of the input
+ * (ie MO_8 or MO_16); the helpers want to work in the output size.
+ * The shift count can be 0..<input size>, inclusive. (0 is VMOVL.)
+ */
+static void do_gvec_vshllbs(unsigned vece, uint32_t dofs, uint32_t aofs,
+                            int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    unsigned ovece = vece + 1;
+    unsigned ibits = vece == MO_8 ? 8 : 16;
+    tcg_gen_gvec_shli(ovece, dofs, aofs, ibits, oprsz, maxsz);
+    tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
+}
+
+static void do_gvec_vshllbu(unsigned vece, uint32_t dofs, uint32_t aofs,
+                            int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    unsigned ovece = vece + 1;
+    tcg_gen_gvec_andi(ovece, dofs, aofs,
+                      ovece == MO_16 ? 0xff : 0xffff, oprsz, maxsz);
+    tcg_gen_gvec_shli(ovece, dofs, dofs, shift, oprsz, maxsz);
+}
+
+static void do_gvec_vshllts(unsigned vece, uint32_t dofs, uint32_t aofs,
+                            int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    unsigned ovece = vece + 1;
+    unsigned ibits = vece == MO_8 ? 8 : 16;
+    if (shift == 0) {
+        tcg_gen_gvec_sari(ovece, dofs, aofs, ibits, oprsz, maxsz);
+    } else {
+        tcg_gen_gvec_andi(ovece, dofs, aofs,
+                          ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
+        tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
+    }
+}
+
+static void do_gvec_vshlltu(unsigned vece, uint32_t dofs, uint32_t aofs,
+                            int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    unsigned ovece = vece + 1;
+    unsigned ibits = vece == MO_8 ? 8 : 16;
+    if (shift == 0) {
+        tcg_gen_gvec_shri(ovece, dofs, aofs, ibits, oprsz, maxsz);
+    } else {
+        tcg_gen_gvec_andi(ovece, dofs, aofs,
+                          ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
+        tcg_gen_gvec_shri(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
+    }
+}
+
 DO_VSHLL(VSHLL_BS, vshllbs)
 DO_VSHLL(VSHLL_BU, vshllbu)
 DO_VSHLL(VSHLL_TS, vshllts)
--
2.20.1

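The widening-shift trick in do_gvec_vshllbs above (shift the input byte to
the top of the doubled-width lane, then use a single arithmetic right shift
to sign-extend and apply the shift count) can be checked per lane in plain
C. A minimal sketch, not QEMU code; it assumes the usual arithmetic >> on
signed integers:

    #include <stdint.h>
    #include <stdio.h>

    /* Scalar model of one 16-bit output lane of VSHLL.BS, shift 0..8 */
    static int16_t vshllbs_lane(uint16_t lane, int shift)
    {
        int16_t t = (int16_t)(lane << 8);   /* input byte now in bits 15:8 */
        return (int16_t)(t >> (8 - shift)); /* sign-extend + shift at once */
    }

    int main(void)
    {
        /* bottom byte 0x80 is -128 as int8; VSHLL.BS by 1 gives -256 */
        printf("%d\n", vshllbs_lane(0x0080, 1));
        return 0;
    }
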
The A/I/F bits in ISR_EL1 should track the virtual interrupt
status, not the physical interrupt status, if the associated
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
always showing the physical interrupt status.

We don't currently implement anything to do with external
aborts, so this applies only to the I and F bits (though it
ought to be possible for the outer guest to present a virtual
external abort to the inner guest, even if QEMU doesn't
emulate physical external aborts, so there is missing
functionality in this area).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t ret = 0;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
-        ret |= CPSR_I;
+    if (arm_hcr_el2_imo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+            ret |= CPSR_I;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+            ret |= CPSR_I;
+        }
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
-        ret |= CPSR_F;
+
+    if (arm_hcr_el2_fmo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+            ret |= CPSR_F;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+            ret |= CPSR_F;
+        }
     }
+
     /* External aborts are not possible in QEMU so A bit is always clear */
     return ret;
 }
--
2.19.1


Optimize the MVE shift-and-insert insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
 DO_2SHIFT(VRSHRI_S, vrshli_s, true)
 DO_2SHIFT(VRSHRI_U, vrshli_u, true)
 
-DO_2SHIFT(VSRI, vsri, false)
-DO_2SHIFT(VSLI, vsli, false)
+DO_2SHIFT_VEC(VSRI, vsri, false, gen_gvec_sri)
+DO_2SHIFT_VEC(VSLI, vsli, false, gen_gvec_sli)
 
 #define DO_2SHIFT_FP(INSN, FN)                                  \
     static bool trans_##INSN(DisasContext *s, arg_2shift *a)   \
--
2.20.1

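The shift-and-insert semantics that gen_gvec_sri/gen_gvec_sli implement --
shifted bits from the source replace the corresponding destination bits, the
rest of the destination is preserved -- have a simple scalar model. A
standalone sketch per 8-bit lane (an illustration, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t vsri_lane(uint8_t d, uint8_t m, int shift) /* 1..8 */
    {
        uint8_t mask = (uint8_t)(0xffu >> shift);  /* bits the insert writes */
        return (uint8_t)((d & ~mask) | ((m >> shift) & mask));
    }

    static uint8_t vsli_lane(uint8_t d, uint8_t m, int shift) /* 0..7 */
    {
        uint8_t mask = (uint8_t)(0xffu << shift);
        return (uint8_t)((d & ~mask) | ((m << shift) & mask));
    }

    int main(void)
    {
        /* d = 0xff, m = 0x0f, shift 4: VSRI keeps the top, VSLI the bottom */
        printf("%02x %02x\n", vsri_lane(0xff, 0x0f, 4), vsli_lane(0xff, 0x0f, 4));
        return 0;
    }
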
The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
 * if the register is read we must get these bit values from
   cs->interrupt_request
 * if the register is written then we must write the bit
   values back into cs->interrupt_request

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 43 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = ENV_GET_CPU(env);
     uint64_t valid_mask = HCR_MASK;
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     /* Clear RES0 bits.  */
     value &= valid_mask;
 
+    /*
+     * VI and VF are kept in cs->interrupt_request. Modifying that
+     * requires that we have the iothread lock, which is done by
+     * marking the reginfo structs as ARM_CP_IO.
+     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+     * possible for it to be taken immediately, because VIRQ and
+     * VFIQ are masked unless running at EL0 or EL1, and HCR
+     * can only be written at EL2.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+    if (value & HCR_VI) {
+        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    }
+    if (value & HCR_VF) {
+        cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
+    }
+    value &= ~(HCR_VI | HCR_VF);
+
     /* These bits change the MMU setup:
      * HCR_VM enables stage 2 translation
      * HCR_PTW forbids certain page-table setups
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
     hcr_write(env, NULL, value);
 }
 
+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* The VI and VF bits live in cs->interrupt_request */
+    uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+        ret |= HCR_VI;
+    }
+    if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+        ret |= HCR_VF;
+    }
+    return ret;
+}
+
 static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+      .type = ARM_CP_IO,
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_write },
+      .writefn = hcr_write, .readfn = hcr_read },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_writelow },
+      .writefn = hcr_writelow, .readfn = hcr_read },
     { .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
       .type = ARM_CP_ALIAS,
       .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
 
 static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
     { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
       .access = PL2_RW,
       .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
--
2.19.1


Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
use TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
     return true;
 }
 
-static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn,
+                    GVecGen2iFn *vecfn)
 {
     TCGv_ptr qd;
     uint64_t imm;
@@ -XXX,XX +XXX,XX @@ static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
 
     imm = asimd_imm_const(a->imm, a->cmode, a->op);
 
-    qd = mve_qreg_ptr(a->qd);
-    fn(cpu_env, qd, tcg_constant_i64(imm));
-    tcg_temp_free_ptr(qd);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(MO_64, mve_qreg_offset(a->qd), mve_qreg_offset(a->qd),
+              imm, 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        fn(cpu_env, qd, tcg_constant_i64(imm));
+        tcg_temp_free_ptr(qd);
+    }
     mve_update_eci(s);
     return true;
 }
 
+static void gen_gvec_vmovi(unsigned vece, uint32_t dofs, uint32_t aofs,
+                           int64_t c, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, c);
+}
+
 static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
 {
     /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
     MVEGenOneOpImmFn *fn;
+    GVecGen2iFn *vecfn;
 
     if ((a->cmode & 1) && a->cmode < 12) {
         if (a->op) {
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
              * so the VBIC becomes a logical AND operation.
              */
             fn = gen_helper_mve_vandi;
+            vecfn = tcg_gen_gvec_andi;
         } else {
             fn = gen_helper_mve_vorri;
+            vecfn = tcg_gen_gvec_ori;
         }
     } else {
         /* There is one unallocated cmode/op combination in this space */
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
         }
         /* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
         fn = gen_helper_mve_vmovi;
+        vecfn = gen_gvec_vmovi;
     }
-    return do_1imm(s, a, fn);
+    return do_1imm(s, a, fn, vecfn);
 }
 
 static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
--
2.20.1

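The VI/VF round-trip described in the HCR_EL2 patch above -- the
architectural bits live in the CPU's pending-interrupt word, not in the
stored register value, so reads must merge them back in -- can be sketched
standalone. HCR_VI/HCR_VF bit positions (7 and 6) follow the Arm ARM; the
CPU_INTERRUPT_* values here are purely illustrative, not QEMU's:

    #include <stdint.h>
    #include <stdio.h>

    #define HCR_VI (1ull << 7)
    #define HCR_VF (1ull << 6)
    #define PEND_VIRQ (1u << 0)   /* illustrative bit positions */
    #define PEND_VFIQ (1u << 1)

    static unsigned irq_pending;
    static uint64_t hcr_el2;

    static void hcr_write(uint64_t value)
    {
        irq_pending &= ~(PEND_VIRQ | PEND_VFIQ);
        if (value & HCR_VI) irq_pending |= PEND_VIRQ;
        if (value & HCR_VF) irq_pending |= PEND_VFIQ;
        hcr_el2 = value & ~(HCR_VI | HCR_VF);  /* VI/VF not stored here */
    }

    static uint64_t hcr_read(void)
    {
        uint64_t ret = hcr_el2;
        if (irq_pending & PEND_VIRQ) ret |= HCR_VI;
        if (irq_pending & PEND_VFIQ) ret |= HCR_VF;
        return ret;
    }

    int main(void)
    {
        hcr_write(HCR_VI);   /* write pends a virtual IRQ ... */
        printf("%llx\n", (unsigned long long)hcr_read());  /* ... read sees VI */
        return 0;
    }
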
Deleted patch

If the HCR_EL2 PTW virtualization configuration register bit
is set, then this means that a stage 2 Permission fault must
be generated if a stage 1 translation table access is made
to an address that is mapped as Device memory in stage 2.
Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
---
 target/arm/helper.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
+        ARMCacheAttrs cacheattrs = {};
+        ARMCacheAttrs *pcacheattrs = NULL;
+
+        if (env->cp15.hcr_el2 & HCR_PTW) {
+            /*
+             * PTW means we must fault if this S1 walk touches S2 Device
+             * memory; otherwise we don't care about the attributes and can
+             * save the S2 translation the effort of computing them.
+             */
+            pcacheattrs = &cacheattrs;
+        }
 
         ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
-                                 &txattrs, &s2prot, &s2size, fi, NULL);
+                                 &txattrs, &s2prot, &s2size, fi, pcacheattrs);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->s1ptw = true;
             return ~0;
         }
+        if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
+            /* Access was to Device memory: generate Permission fault */
+            fi->type = ARMFault_Permission;
+            fi->s2addr = addr;
+            fi->stage2 = true;
+            fi->s1ptw = true;
+            return ~0;
+        }
         addr = s2pa;
     }
     return addr;
--
2.19.1

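The Device-memory test used in the patch above relies on the memory
attribute encoding: an attribute byte with a zero top nibble denotes Device
memory, a non-zero top nibble Normal memory. A one-line standalone check
(illustration only):

    #include <stdio.h>

    /* top nibble of the combined attribute byte: 0 means Device memory */
    static int is_device(unsigned attrs) { return (attrs & 0xf0) == 0; }

    int main(void)
    {
        /* 0x04: Device; 0xff: Normal WB RA WA (what HCR.DC forces) */
        printf("%d %d\n", is_device(0x04), is_device(0xff));
        return 0;
    }
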
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 -
 target/arm/translate.c     | 1 -
 2 files changed, 2 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 
 static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
 {
-    tcg_clear_temp_count();
 }
 
 static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
         tcg_gen_movi_i32(tmp, 0);
         store_cpu_field(tmp, condexec_bits);
     }
-    tcg_clear_temp_count();
 }
 
 static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
--
2.19.1

Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 28 +++-------------------------
 1 file changed, 3 insertions(+), 25 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     for (xs = 0; xs < selem; xs++) {
         if (replicate) {
             /* Load and replicate to all elements */
-            uint64_t mulconst;
             TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
             tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
                                 get_mem_index(s), s->be_data + scale);
-            switch (scale) {
-            case 0:
-                mulconst = 0x0101010101010101ULL;
-                break;
-            case 1:
-                mulconst = 0x0001000100010001ULL;
-                break;
-            case 2:
-                mulconst = 0x0000000100000001ULL;
-                break;
-            case 3:
-                mulconst = 0;
-                break;
-            default:
-                g_assert_not_reached();
-            }
-            if (mulconst) {
-                tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
-            }
-            write_vec_element(s, tcg_tmp, rt, 0, MO_64);
-            if (is_q) {
-                write_vec_element(s, tcg_tmp, rt, 1, MO_64);
-            }
+            tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
+                                 (is_q + 1) * 8, vec_full_reg_size(s),
+                                 tcg_tmp);
             tcg_temp_free_i64(tcg_tmp);
-            clear_vec_high(s, is_q, rt);
         } else {
             /* Load/store one element per register */
             if (is_load) {
--
2.19.1

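The multiply-by-comb-constant trick that the removed code used (and which
tcg_gen_gvec_dup_i64 now replaces with a native broadcast) is easy to verify
standalone: multiplying a zero-extended element by a constant with a 1 in
each element position replicates it across the 64-bit lane. A minimal check:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t b = 0xab;   /* one byte, zero-extended */
        /* the scale==0 mulconst from the old code broadcasts it 8 times */
        printf("%016llx\n", (unsigned long long)(b * 0x0101010101010101ull));
        return 0;
    }
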
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
 
 #include "exec/gen-icount.h"
 
-static const char *regnames[] =
+static const char * const regnames[] =
     { "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
       "r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
 
@@ -XXX,XX +XXX,XX @@ static struct {
     int nregs;
     int interleave;
    int spacing;
-} neon_ls_element_type[11] = {
+} const neon_ls_element_type[11] = {
     {4, 4, 1},
     {4, 4, 2},
     {4, 1, 1},
--
2.19.1

Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
     return vfp_reg_offset(0, sreg);
 }
 
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static inline long
+neon_element_offset(int reg, int element, TCGMemOp size)
+{
+    int element_size = 1 << size;
+    int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+    /* Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     */
+    if (element_size < 8) {
+        ofs ^= 8 - element_size;
+    }
+#endif
+    return neon_reg_offset(reg, 0) + ofs;
+}
+
 static TCGv_i32 neon_load_reg(int reg, int pass)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
                 tmp = load_reg(s, rd);
                 if (insn & (1 << 23)) {
                     /* VDUP */
-                    if (size == 0) {
-                        gen_neon_dup_u8(tmp, 0);
-                    } else if (size == 1) {
-                        gen_neon_dup_low16(tmp);
-                    }
-                    for (n = 0; n <= pass * 2; n++) {
-                        tmp2 = tcg_temp_new_i32();
-                        tcg_gen_mov_i32(tmp2, tmp);
-                        neon_store_reg(rn, n, tmp2);
-                    }
-                    neon_store_reg(rn, n, tmp);
+                    int vec_size = pass ? 16 : 8;
+                    tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
+                                         vec_size, vec_size, tmp);
+                    tcg_temp_free_i32(tmp);
                 } else {
                     /* VMOV */
                     switch (size) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_temp_free_i32(tmp);
         } else if ((insn & 0x380) == 0) {
             /* VDUP */
+            int element;
+            TCGMemOp size;
+
             if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                 return 1;
             }
-            if (insn & (1 << 19)) {
-                tmp = neon_load_reg(rm, 1);
-            } else {
-                tmp = neon_load_reg(rm, 0);
-            }
             if (insn & (1 << 16)) {
-                gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
+                size = MO_8;
+                element = (insn >> 17) & 7;
             } else if (insn & (1 << 17)) {
-                if ((insn >> 18) & 1)
-                    gen_neon_dup_high16(tmp);
-                else
-                    gen_neon_dup_low16(tmp);
+                size = MO_16;
+                element = (insn >> 18) & 3;
+            } else {
+                size = MO_32;
+                element = (insn >> 19) & 1;
             }
-            for (pass = 0; pass < (q ? 4 : 2); pass++) {
-                tmp2 = tcg_temp_new_i32();
-                tcg_gen_mov_i32(tmp2, tmp);
-                neon_store_reg(rd, pass, tmp2);
-            }
-            tcg_temp_free_i32(tmp);
+            tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
+                                 neon_element_offset(rm, element, size),
+                                 q ? 16 : 8, q ? 16 : 8);
         } else {
             return 1;
         }
--
2.19.1

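The host-endian fixup inside neon_element_offset above -- compute the offset
as if the register were fully little-endian, then XOR within each 8-byte
unit on a big-endian host so element 0 still names the least significant
end -- can be demonstrated without any QEMU machinery. A standalone sketch
(the hypothetical host_bigendian flag stands in for HOST_WORDS_BIGENDIAN):

    #include <stdio.h>

    static long element_offset(int element, int size_log2, int host_bigendian)
    {
        int element_size = 1 << size_log2;
        long ofs = (long)element * element_size;
        if (host_bigendian && element_size < 8) {
            ofs ^= 8 - element_size;   /* flip within each 8-byte unit */
        }
        return ofs;
    }

    int main(void)
    {
        /* byte element 0: offset 0 on LE hosts, offset 7 on BE hosts */
        printf("%ld %ld\n", element_offset(0, 0, 0), element_offset(0, 0, 1));
        return 0;
    }
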
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 return 1;
             }
         } else { /* (insn & 0x00380080) == 0 */
-            int invert;
+            int invert, reg_ofs, vec_size;
+
             if (q && (rd & 1)) {
                 return 1;
             }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 break;
             case 14:
                 imm |= (imm << 8) | (imm << 16) | (imm << 24);
-                if (invert)
+                if (invert) {
                     imm = ~imm;
+                }
                 break;
             case 15:
                 if (invert) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                           | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
                 break;
             }
-            if (invert)
+            if (invert) {
                 imm = ~imm;
+            }
 
-            for (pass = 0; pass < (q ? 4 : 2); pass++) {
-                if (op & 1 && op < 12) {
-                    tmp = neon_load_reg(rd, pass);
-                    if (invert) {
-                        /* The immediate value has already been inverted, so
-                           BIC becomes AND.  */
-                        tcg_gen_andi_i32(tmp, tmp, imm);
-                    } else {
-                        tcg_gen_ori_i32(tmp, tmp, imm);
-                    }
+            reg_ofs = neon_reg_offset(rd, 0);
+            vec_size = q ? 16 : 8;
+
+            if (op & 1 && op < 12) {
+                if (invert) {
+                    /* The immediate value has already been inverted,
+                     * so BIC becomes AND.
+                     */
+                    tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+                                      vec_size, vec_size);
                 } else {
-                    /* VMOV, VMVN.  */
-                    tmp = tcg_temp_new_i32();
-                    if (op == 14 && invert) {
-                        int n;
-                        uint32_t val;
-                        val = 0;
-                        for (n = 0; n < 4; n++) {
-                            if (imm & (1 << (n + (pass & 1) * 4)))
-                                val |= 0xff << (n * 8);
-                        }
-                        tcg_gen_movi_i32(tmp, val);
-                    } else {
-                        tcg_gen_movi_i32(tmp, imm);
-                    }
+                    tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+                                     vec_size, vec_size);
+                }
+            } else {
+                /* VMOV, VMVN.  */
+                if (op == 14 && invert) {
+                    TCGv_i64 t64 = tcg_temp_new_i64();
+
+                    for (pass = 0; pass <= q; ++pass) {
+                        uint64_t val = 0;
+                        int n;
+
+                        for (n = 0; n < 8; n++) {
+                            if (imm & (1 << (n + pass * 8))) {
+                                val |= 0xffull << (n * 8);
+                            }
+                        }
+                        tcg_gen_movi_i64(t64, val);
+                        neon_store_reg64(t64, rd + pass);
+                    }
+                    tcg_temp_free_i64(t64);
+                } else {
+                    tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
                 }
-                neon_store_reg(rd, pass, tmp);
             }
         }
     } else { /* (insn & 0x00800010 == 0x00800000) */
--
2.19.1

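The op==14 inverted-immediate expansion rewritten in the patch above maps
each of 8 immediate bits to an all-ones byte of a 64-bit value. The loop can
be lifted out and checked standalone (illustration only):

    #include <stdint.h>
    #include <stdio.h>

    /* each set bit n of imm selects byte n of the result as 0xff */
    static uint64_t expand_imm8(unsigned imm)
    {
        uint64_t val = 0;
        for (int n = 0; n < 8; n++) {
            if (imm & (1u << n)) {
                val |= 0xffull << (n * 8);
            }
        }
        return val;
    }

    int main(void)
    {
        /* 0xa5 -> bytes 0, 2, 5 and 7 set */
        printf("%016llx\n", (unsigned long long)expand_imm8(0xa5));
        return 0;
    }
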
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         }
         return 0;
+
+    case NEON_3R_VADD_VSUB:
+        if (u) {
+            tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        } else {
+            tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        }
+        return 0;
     }
     if (size == 3) {
         /* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                  cpu_V1, cpu_V0);
             }
             break;
-        case NEON_3R_VADD_VSUB:
-            if (u) {
-                tcg_gen_sub_i64(CPU_V001);
-            } else {
-                tcg_gen_add_i64(CPU_V001);
-            }
-            break;
         default:
             abort();
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tmp2 = neon_load_reg(rd, pass);
             gen_neon_add(size, tmp, tmp2);
             break;
-        case NEON_3R_VADD_VSUB:
-            if (!u) { /* VADD */
-                gen_neon_add(size, tmp, tmp2);
-            } else { /* VSUB */
-                switch (size) {
-                case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
-                case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
-            break;
         case NEON_3R_VTST_VCEQ:
             if (!u) { /* VTST */
                 switch (size) {
--
2.19.1

diff view generated by jsdifflib
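Semantically, tcg_gen_gvec_add/sub perform a lane-wise modular add or subtract over the whole Neon register, which the backend can then emit as one host vector op. A rough scalar model of the byte case (our own illustration, not the TCG expander itself):

    #include <stdint.h>
    #include <stddef.h>

    /* Lane-wise wrap-around add over a vector of bytes: the semantics
     * tcg_gen_gvec_add() implements for vece == MO_8 over oprsz bytes.
     * Subtraction is identical with '-' in place of '+'. */
    static void gvec_add8_model(uint8_t *d, const uint8_t *a,
                                const uint8_t *b, size_t oprsz)
    {
        for (size_t i = 0; i < oprsz; i++) {
            d[i] = (uint8_t)(a[i] + b[i]);
        }
    }
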
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tcg_temp_free_ptr(ptr1);
tcg_temp_free_ptr(ptr2);
break;
+
+ case NEON_2RM_VMVN:
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+ case NEON_2RM_VNEG:
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+
default:
elementwise:
for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
case NEON_2RM_VCNT:
gen_helper_neon_cnt_u8(tmp, tmp);
break;
- case NEON_2RM_VMVN:
- tcg_gen_not_i32(tmp, tmp);
- break;
case NEON_2RM_VQABS:
switch (size) {
case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
default: abort();
}
break;
- case NEON_2RM_VNEG:
- tmp2 = tcg_const_i32(0);
- gen_neon_rsb(size, tmp, tmp2);
- tcg_temp_free_i32(tmp2);
- break;
case NEON_2RM_VCGT0_F:
{
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.19.1

Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
vec_size, vec_size);
}
return 0;
+
+ case NEON_3R_VMUL: /* VMUL */
+ if (u) {
+ /* Polynomial case allows only P8 and is handled below. */
+ if (size != 0) {
+ return 1;
+ }
+ } else {
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ return 0;
+ }
+ break;
}
if (size == 3) {
/* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 1;
}
break;
- case NEON_3R_VMUL:
- if (u && (size != 0)) {
- /* UNDEF on invalid size for polynomial subcase */
- return 1;
- }
- break;
case NEON_3R_VFM_VQRDMLSH:
if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
break;
case NEON_3R_VMUL:
- if (u) { /* polynomial */
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
- } else { /* Integer */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
+ /* VMUL.P8; other cases already eliminated. */
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
break;
case NEON_3R_VPMAX:
GEN_NEON_INTEGER_OP(pmax);
--
2.19.1

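VMUL.P8, the one case still routed through the helper after this patch, is a carry-less (GF(2)) multiply of each byte lane. A standalone sketch of the per-lane operation (our own model of the polynomial multiply, not QEMU's helper; Neon keeps only the low 8 bits):

    #include <stdint.h>

    /* Carry-less multiply of two bytes; XOR replaces addition of
     * partial products. VMUL.P8 truncates to the lane size. */
    static uint8_t pmul8(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;
        for (int i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                result ^= (uint16_t)a << i;
            }
        }
        return (uint8_t)result;
    }

For example, pmul8(0x03, 0x03) is 0x05, since (x + 1)^2 = x^2 + 1 over GF(2).
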
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
size--;
}
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
- /* To avoid excessive duplication of ops we implement shift
- by immediate using the variable shift operations. */
if (op < 8) {
/* Shift by immediate:
VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
/* Right shifts are encoded as N - shift, where N is the
element size in bits. */
- if (op <= 4)
+ if (op <= 4) {
shift = shift - (1 << (size + 3));
+ }
+
+ switch (op) {
+ case 0: /* VSHR */
+ /* Right shift comes here negative. */
+ shift = -shift;
+ /* Shifts larger than the element size are architecturally
+ * valid. Unsigned results in all zeros; signed results
+ * in all sign bits.
+ */
+ if (!u) {
+ tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
+ MIN(shift, (8 << size) - 1),
+ vec_size, vec_size);
+ } else if (shift >= 8 << size) {
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+ } else {
+ tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
+ vec_size, vec_size);
+ }
+ return 0;
+
+ case 5: /* VSHL, VSLI */
+ if (!u) { /* VSHL */
+ /* Shifts larger than the element size are
+ * architecturally valid and results in zero.
+ */
+ if (shift >= 8 << size) {
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+ } else {
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
+ vec_size, vec_size);
+ }
+ return 0;
+ }
+ break;
+ }
+
if (size == 3) {
count = q + 1;
} else {
count = q ? 4: 2;
}
- switch (size) {
- case 0:
- imm = (uint8_t) shift;
- imm |= imm << 8;
- imm |= imm << 16;
- break;
- case 1:
- imm = (uint16_t) shift;
- imm |= imm << 16;
- break;
- case 2:
- case 3:
- imm = shift;
- break;
- default:
- abort();
- }
+
+ /* To avoid excessive duplication of ops we implement shift
+ * by immediate using the variable shift operations.
+ */
+ imm = dup_const(size, shift);

for (pass = 0; pass < count; pass++) {
if (size == 3) {
neon_load_reg64(cpu_V0, rm + pass);
tcg_gen_movi_i64(cpu_V1, imm);
switch (op) {
- case 0: /* VSHR */
case 1: /* VSRA */
if (u)
gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
cpu_V0, cpu_V1);
}
break;
+ default:
+ g_assert_not_reached();
}
if (op == 1 || op == 3) {
/* Accumulate. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tmp2 = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp2, imm);
switch (op) {
- case 0: /* VSHR */
case 1: /* VSRA */
GEN_NEON_INTEGER_OP(shl);
break;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
case 7: /* VQSHL */
GEN_NEON_INTEGER_OP_ENV(qshl);
break;
+ default:
+ g_assert_not_reached();
}
tcg_temp_free_i32(tmp2);

--
2.19.1

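The dup_const(size, shift) call replaces the open-coded switch that this patch deletes: it replicates the shift count into every element of a 64-bit constant so the variable-shift helpers see the same count in each lane. A sketch of its behaviour (our own plain function modelling the MO_8..MO_64 cases of tcg's dup_const):

    #include <stdint.h>

    /* Replicate the low 1 << vece bytes of x across a 64-bit value,
     * e.g. vece == 0 turns 0x3f into 0x3f3f3f3f3f3f3f3f. */
    static uint64_t dup_const_model(unsigned vece, uint64_t x)
    {
        switch (vece) {
        case 0: /* MO_8 */
            return 0x0101010101010101ull * (uint8_t)x;
        case 1: /* MO_16 */
            return 0x0001000100010001ull * (uint16_t)x;
        case 2: /* MO_32 */
            return 0x0000000100000001ull * (uint32_t)x;
        default: /* MO_64 */
            return x;
        }
    }
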
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Move ssra_op and usra_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.h | 2 +
target/arm/translate-a64.c | 106 ----------------------------
target/arm/translate.c | 139 ++++++++++++++++++++++++++++++++++---
3 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
extern const GVecGen3 bsl_op;
extern const GVecGen3 bit_op;
extern const GVecGen3 bif_op;
+extern const GVecGen2i ssra_op[4];
+extern const GVecGen2i usra_op[4];

/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
}
}

-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_sari_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_sari_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_sari_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_shri_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_shri_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_shri_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
{
uint64_t mask = dup_const(MO_8, 0xff >> shift);
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
int immh, int immb, int opcode, int rn, int rd)
{
- static const GVecGen2i ssra_op[4] = {
- { .fni8 = gen_ssra8_i64,
- .fniv = gen_ssra_vec,
- .load_dest = true,
- .opc = INDEX_op_sari_vec,
- .vece = MO_8 },
- { .fni8 = gen_ssra16_i64,
- .fniv = gen_ssra_vec,
- .load_dest = true,
- .opc = INDEX_op_sari_vec,
- .vece = MO_16 },
- { .fni4 = gen_ssra32_i32,
- .fniv = gen_ssra_vec,
- .load_dest = true,
- .opc = INDEX_op_sari_vec,
- .vece = MO_32 },
- { .fni8 = gen_ssra64_i64,
- .fniv = gen_ssra_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opc = INDEX_op_sari_vec,
- .vece = MO_64 },
- };
- static const GVecGen2i usra_op[4] = {
- { .fni8 = gen_usra8_i64,
- .fniv = gen_usra_vec,
- .load_dest = true,
- .opc = INDEX_op_shri_vec,
- .vece = MO_8, },
- { .fni8 = gen_usra16_i64,
- .fniv = gen_usra_vec,
- .load_dest = true,
- .opc = INDEX_op_shri_vec,
- .vece = MO_16, },
- { .fni4 = gen_usra32_i32,
- .fniv = gen_usra_vec,
- .load_dest = true,
- .opc = INDEX_op_shri_vec,
- .vece = MO_32, },
- { .fni8 = gen_usra64_i64,
- .fniv = gen_usra_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opc = INDEX_op_shri_vec,
- .vece = MO_64, },
- };
static const GVecGen2i sri_op[4] = {
{ .fni8 = gen_shr8_ins_i64,
.fniv = gen_shr_ins_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
.load_dest = true
};

+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_sari_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_sari_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_sari_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i ssra_op[4] = {
+ { .fni8 = gen_ssra8_i64,
+ .fniv = gen_ssra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_sari_vec,
+ .vece = MO_8 },
+ { .fni8 = gen_ssra16_i64,
+ .fniv = gen_ssra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_sari_vec,
+ .vece = MO_16 },
+ { .fni4 = gen_ssra32_i32,
+ .fniv = gen_ssra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_sari_vec,
+ .vece = MO_32 },
+ { .fni8 = gen_ssra64_i64,
+ .fniv = gen_ssra_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opc = INDEX_op_sari_vec,
+ .vece = MO_64 },
+};
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_shri_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_shri_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_shri_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i usra_op[4] = {
+ { .fni8 = gen_usra8_i64,
+ .fniv = gen_usra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_shri_vec,
+ .vece = MO_8, },
+ { .fni8 = gen_usra16_i64,
+ .fniv = gen_usra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_shri_vec,
+ .vece = MO_16, },
+ { .fni4 = gen_usra32_i32,
+ .fniv = gen_usra_vec,
+ .load_dest = true,
+ .opc = INDEX_op_shri_vec,
+ .vece = MO_32, },
+ { .fni8 = gen_usra64_i64,
+ .fniv = gen_usra_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opc = INDEX_op_shri_vec,
+ .vece = MO_64, },
+};

/* Translate a NEON data processing instruction. Return nonzero if the
instruction is invalid.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
return 0;

+ case 1: /* VSRA */
+ /* Right shift comes here negative. */
+ shift = -shift;
+ /* Shifts larger than the element size are architecturally
+ * valid. Unsigned results in all zeros; signed results
+ * in all sign bits.
+ */
+ if (!u) {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+ MIN(shift, (8 << size) - 1),
+ &ssra_op[size]);
+ } else if (shift >= 8 << size) {
+ /* rd += 0 */
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+ shift, &usra_op[size]);
+ }
+ return 0;
+
case 5: /* VSHL, VSLI */
if (!u) { /* VSHL */
/* Shifts larger than the element size are
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
neon_load_reg64(cpu_V0, rm + pass);
tcg_gen_movi_i64(cpu_V1, imm);
switch (op) {
- case 1: /* VSRA */
- if (u)
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
- else
- gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
- break;
case 2: /* VRSHR */
case 3: /* VRSRA */
if (u)
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
default:
g_assert_not_reached();
}
- if (op == 1 || op == 3) {
+ if (op == 3) {
/* Accumulate. */
neon_load_reg64(cpu_V1, rd + pass);
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tmp2 = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp2, imm);
switch (op) {
- case 1: /* VSRA */
- GEN_NEON_INTEGER_OP(shl);
- break;
case 2: /* VRSHR */
case 3: /* VRSRA */
GEN_NEON_INTEGER_OP(rshl);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
tcg_temp_free_i32(tmp2);

- if (op == 1 || op == 3) {
+ if (op == 3) {
/* Accumulate. */
tmp2 = neon_load_reg(rd, pass);
gen_neon_add(size, tmp, tmp2);
--
2.19.1

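The ssra_op/usra_op tables being moved implement shift-right-and-accumulate: shift each source element right (arithmetic for SSRA, logical for USRA), then add it into the destination. A scalar model of one 32-bit lane (our own illustration, mirroring gen_ssra32_i32/gen_usra32_i32 above; the signed case assumes the host compiler's arithmetic right shift, which the TCG ops guarantee):

    #include <stdint.h>

    /* SSRA: signed shift right, then accumulate into d. */
    static int32_t ssra32(int32_t d, int32_t a, unsigned shift)
    {
        return d + (a >> shift);   /* arithmetic shift */
    }

    /* USRA: unsigned shift right, then accumulate into d. */
    static uint32_t usra32(uint32_t d, uint32_t a, unsigned shift)
    {
        return d + (a >> shift);   /* logical shift */
    }

On the gvec path the same operation is applied lane-wise across the whole register; the MIN(shift, (8 << size) - 1) clamp above keeps the oversized signed shift within the architectural all-sign-bits result.
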
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Move mla_op and mls_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.h | 2 +
target/arm/translate-a64.c | 106 -----------------------------
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
3 files changed, 120 insertions(+), 122 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
extern const GVecGen3 bsl_op;
extern const GVecGen3 bit_op;
extern const GVecGen3 bif_op;
+extern const GVecGen3 mla_op[4];
+extern const GVecGen3 mls_op[4];
extern const GVecGen2i ssra_op[4];
extern const GVecGen2i usra_op[4];
extern const GVecGen2i sri_op[4];
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
}
}

-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_add_u8(d, d, a);
-}
-
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_add_u16(d, d, a);
-}
-
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_sub_u8(d, d, a);
-}
-
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_sub_u16(d, d, a);
-}
-
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_sub_i32(d, d, a);
-}
-
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_sub_i64(d, d, a);
-}
-
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_sub_vec(vece, d, d, a);
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
.vece = MO_64 },
};
- static const GVecGen3 mla_op[4] = {
- { .fni4 = gen_mla8_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_8 },
- { .fni4 = gen_mla16_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_mla32_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_mla64_i64,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .vece = MO_64 },
- };
- static const GVecGen3 mls_op[4] = {
- { .fni4 = gen_mls8_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_8 },
- { .fni4 = gen_mls16_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_mls32_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_mls64_i64,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .vece = MO_64 },
- };

int is_q = extract32(insn, 30, 1);
int u = extract32(insn, 29, 1);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
#define NEON_3R_VABA 15
#define NEON_3R_VADD_VSUB 16
#define NEON_3R_VTST_VCEQ 17
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
+#define NEON_3R_VML 18 /* VMLA, VMLS */
#define NEON_3R_VMUL 19
#define NEON_3R_VPMAX 20
#define NEON_3R_VPMIN 21
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
.vece = MO_64 },
};

+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_add_u8(d, d, a);
+}
+
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_sub_u8(d, d, a);
+}
+
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_add_u16(d, d, a);
+}
+
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_sub_u16(d, d, a);
+}
+
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_sub_i32(d, d, a);
+}
+
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_sub_i64(d, d, a);
+}
+
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_sub_vec(vece, d, d, a);
+}
+
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
+ * these tables are shared with AArch64 which does support them.
+ */
+const GVecGen3 mla_op[4] = {
+ { .fni4 = gen_mla8_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mla16_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mla32_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mla64_i64,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
+const GVecGen3 mls_op[4] = {
+ { .fni4 = gen_mls8_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mls16_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mls32_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mls64_i64,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
/* Translate a NEON data processing instruction. Return nonzero if the
instruction is invalid.
We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 0;
}
break;
+
+ case NEON_3R_VML: /* VMLA, VMLS */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
+ u ? &mls_op[size] : &mla_op[size]);
+ return 0;
}
+
if (size == 3) {
/* 64-bit element instructions. */
for (pass = 0; pass < (q ? 2 : 1); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
}
break;
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- tcg_temp_free_i32(tmp2);
- tmp2 = neon_load_reg(rd, pass);
- if (u) { /* VMLS */
- gen_neon_rsb(size, tmp, tmp2);
- } else { /* VMLA */
- gen_neon_add(size, tmp, tmp2);
- }
- break;
case NEON_3R_VMUL:
/* VMUL.P8; other cases already eliminated. */
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
--
2.19.1

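The per-lane semantics behind the mla_op/mls_op tables are simply multiply-accumulate and multiply-subtract on the destination operand. A model of one 32-bit lane (our own illustration, mirroring gen_mla32_i32/gen_mls32_i32 above; all arithmetic is modulo 2^32, so unsigned types capture both signed and unsigned encodings):

    #include <stdint.h>

    /* VMLA: d += a * b, per element. */
    static uint32_t mla32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d + a * b;
    }

    /* VMLS: d -= a * b, per element. */
    static uint32_t mls32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d - a * b;
    }
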
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
[PMM: added parens in ?: expression]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 81 ++++++++++++++----------------------------
1 file changed, 26 insertions(+), 55 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
tcg_temp_free_i32(tmp);
}

-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
-{
- TCGv_i32 tmp = tcg_temp_new_i32();
- if (shift)
- tcg_gen_shri_i32(var, var, shift);
- tcg_gen_ext8u_i32(var, var);
- tcg_gen_shli_i32(tmp, var, 8);
- tcg_gen_or_i32(var, var, tmp);
- tcg_gen_shli_i32(tmp, var, 16);
- tcg_gen_or_i32(var, var, tmp);
- tcg_temp_free_i32(tmp);
-}
-
static void gen_neon_dup_low16(TCGv_i32 var)
{
TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
tcg_temp_free_i32(tmp);
}

-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
-{
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
- TCGv_i32 tmp = tcg_temp_new_i32();
- switch (size) {
- case 0:
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_u8(tmp, 0);
- break;
- case 1:
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_low16(tmp);
- break;
- case 2:
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- break;
- default: /* Avoid compiler warnings. */
- abort();
- }
- return tmp;
-}
-
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
uint32_t dp)
{
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
int load;
int shift;
int n;
+ int vec_size;
TCGv_i32 addr;
TCGv_i32 tmp;
TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
}
addr = tcg_temp_new_i32();
load_reg_var(s, addr, rn);
- if (nregs == 1) {
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- if (insn & (1 << 5)) {
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
- }
- tcg_temp_free_i32(tmp);
- } else {
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
- stride = (insn & (1 << 5)) ? 2 : 1;
- for (reg = 0; reg < nregs; reg++) {
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- tcg_temp_free_i32(tmp);
- tcg_gen_addi_i32(addr, addr, 1 << size);
- rd += stride;
+
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
+ */
+ stride = (insn & (1 << 5)) ? 2 : 1;
+ vec_size = nregs == 1 ? stride * 8 : 8;
+
+ tmp = tcg_temp_new_i32();
+ for (reg = 0; reg < nregs; reg++) {
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+ s->be_data | size);
+ if ((rd & 1) && vec_size == 16) {
+ /* We cannot write 16 bytes at once because the
+ * destination is unaligned.
+ */
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ 8, 8, tmp);
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
+ neon_reg_offset(rd, 0), 8, 8);
+ } else {
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ vec_size, vec_size, tmp);
}
+ tcg_gen_addi_i32(addr, addr, 1 << size);
+ rd += stride;
}
+ tcg_temp_free_i32(tmp);
tcg_temp_free_i32(addr);
stride = (1 << size) * nregs;
} else {
--
2.19.1

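The rewrite above replaces the per-lane stores of the old gen_load_and_replicate() with a single broadcast of the loaded element across the destination. A scalar model of that broadcast (our own illustration of the effect of the dup used by VLD1..4 "to all lanes"; lanes are little-endian as in the Neon register file):

    #include <stdint.h>
    #include <stddef.h>

    /* Broadcast one element of 1 << vece bytes across vec_size bytes. */
    static void dup_elem_model(unsigned vece, uint8_t *d,
                               uint64_t elem, size_t vec_size)
    {
        size_t esize = (size_t)1 << vece;
        for (size_t i = 0; i < vec_size; i += esize) {
            for (size_t j = 0; j < esize; j++) {
                d[i + j] = (uint8_t)(elem >> (8 * j));
            }
        }
    }
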
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

For a sequence of loads or stores from a single register,
little-endian operations can be promoted to an 8-byte op.
This can reduce the number of operations by a factor of 8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
if (size == 3 && (interleave | spacing) != 1) {
return 1;
}
+ /* For our purposes, bytes are always little-endian. */
+ if (size == 0) {
+ endian = MO_LE;
+ }
+ /* Consecutive little-endian elements from a single register
+ * can be promoted to a larger little-endian operation.
+ */
+ if (interleave == 1 && endian == MO_LE) {
+ size = 3;
+ }
tmp64 = tcg_temp_new_i64();
addr = tcg_temp_new_i32();
tmp2 = tcg_const_i32(1 << size);
--
2.19.1

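The promotion works because, for interleave == 1, the bytes of consecutive little-endian elements sit in memory exactly as the bytes of one larger little-endian element: eight MO_8 loads into ascending lanes equal a single MO_64 little-endian load. A small standalone C check of that equivalence (our own illustration; the memcpy path assumes a little-endian host):

    #include <stdint.h>
    #include <string.h>
    #include <assert.h>

    int main(void)
    {
        uint8_t mem[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        uint64_t wide, narrow = 0;

        /* One 8-byte little-endian load... */
        memcpy(&wide, mem, 8);   /* assumes a little-endian host */

        /* ...equals eight 1-byte loads placed in ascending lanes. */
        for (int i = 0; i < 8; i++) {
            narrow |= (uint64_t)mem[i] << (8 * i);
        }
        assert(wide == narrow);
        return 0;
    }
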
Deleted patch
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Announce the availability of the various priority queues.
This fixes an issue where guest kernels would fail to
configure secondary queues due to improper feature bits.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
int i;
CadenceGEMState *s = CADENCE_GEM(d);
const uint8_t *a;
+ uint32_t queues_mask = 0;

DB_PRINT("\n");

@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
s->regs[GEM_DESCONF] = 0x02500111;
s->regs[GEM_DESCONF2] = 0x2ab13fff;
s->regs[GEM_DESCONF5] = 0x002f2045;
- s->regs[GEM_DESCONF6] = 0x00000200;
+ s->regs[GEM_DESCONF6] = 0x0;
+
+ if (s->num_priority_queues > 1) {
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
+ s->regs[GEM_DESCONF6] |= queues_mask;
+ }

/* Set MAC address */
a = &s->conf.macaddr.a[0];
--
2.19.1

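MAKE_64BIT_MASK(shift, length), from QEMU's bitops helpers, builds a mask of `length` set bits starting at bit `shift`; here it sets one DESCONF6 bit per implemented queue above queue 0. A sketch of the computation (our own function, not the QEMU macro itself):

    #include <stdint.h>

    /* length set bits starting at bit 'shift'. */
    static uint64_t make_64bit_mask(unsigned shift, unsigned length)
    {
        uint64_t ones = (length < 64) ? (1ull << length) - 1 : ~0ull;
        return ones << shift;
    }

With num_priority_queues == 4 this yields make_64bit_mask(1, 3) == 0xe, i.e. bits 1..3, matching the queues_mask in the diff.
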
Deleted patch
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Announce 64bit addressing support.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@
#define GEM_DESCONF4 (0x0000028C/4)
#define GEM_DESCONF5 (0x00000290/4)
#define GEM_DESCONF6 (0x00000294/4)
+#define GEM_DESCONF6_64B_MASK (1U << 23)
#define GEM_DESCONF7 (0x00000298/4)

#define GEM_INT_Q1_STATUS (0x00000400 / 4)
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
s->regs[GEM_DESCONF] = 0x02500111;
s->regs[GEM_DESCONF2] = 0x2ab13fff;
s->regs[GEM_DESCONF5] = 0x002f2045;
- s->regs[GEM_DESCONF6] = 0x0;
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;

if (s->num_priority_queues > 1) {
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
--
2.19.1

Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

The EL3 version of this register does not include an ASID,
and so the tlb_flush performed by vmsa_ttbr_write is not needed.

Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
+ .access = PL3_RW, .resetvalue = 0,
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
--
2.19.1