target-arm queue: the big stuff here is the final part of
rth's patches for Cortex-A76 and Neoverse-N1 support;
also present are Gavin's NUMA series and a few other things.

thanks
-- PMM

The following changes since commit 554623226f800acf48a2ed568900c1c968ec9a8b:

  Merge tag 'qemu-sparc-20220508' of https://github.com/mcayland/qemu into staging (2022-05-08 17:03:26 -0500)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220509

for you to fetch changes up to ae9141d4a3265553503bf07d3574b40f84615a34:

  hw/acpi/aml-build: Use existing CPU topology to build PPTT table (2022-05-09 11:47:55 +0100)

----------------------------------------------------------------
target-arm queue:
 * MAINTAINERS/.mailmap: update email for Leif Lindholm
 * hw/arm: add version information to sbsa-ref machine DT
 * Enable new features for -cpu max:
   FEAT_Debugv8p2, FEAT_Debugv8p4, FEAT_RAS (minimal version only),
   FEAT_IESB, FEAT_CSV2, FEAT_CSV2_2, FEAT_CSV3, FEAT_DGH
 * Emulate Cortex-A76
 * Emulate Neoverse-N1
 * Fix the virt board default NUMA topology

----------------------------------------------------------------
Gavin Shan (6):
      qapi/machine.json: Add cluster-id
      qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
      hw/arm/virt: Consider SMP configuration in CPU topology
      qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
      hw/arm/virt: Fix CPU's default NUMA node ID
      hw/acpi/aml-build: Use existing CPU topology to build PPTT table

Leif Lindholm (2):
      MAINTAINERS/.mailmap: update email for Leif Lindholm
      hw/arm: add versioning to sbsa-ref machine DT

Richard Henderson (24):
      target/arm: Handle cpreg registration for missing EL
      target/arm: Drop EL3 no EL2 fallbacks
      target/arm: Merge zcr reginfo
      target/arm: Adjust definition of CONTEXTIDR_EL2
      target/arm: Move cortex impdef sysregs to cpu_tcg.c
      target/arm: Update qemu-system-arm -cpu max to cortex-a57
      target/arm: Set ID_DFR0.PerfMon for qemu-system-arm -cpu max
      target/arm: Split out aa32_max_features
      target/arm: Annotate arm_max_initfn with FEAT identifiers
      target/arm: Use field names for manipulating EL2 and EL3 modes
      target/arm: Enable FEAT_Debugv8p2 for -cpu max
      target/arm: Enable FEAT_Debugv8p4 for -cpu max
      target/arm: Add minimal RAS registers
      target/arm: Enable SCR and HCR bits for RAS
      target/arm: Implement virtual SError exceptions
      target/arm: Implement ESB instruction
      target/arm: Enable FEAT_RAS for -cpu max
      target/arm: Enable FEAT_IESB for -cpu max
      target/arm: Enable FEAT_CSV2 for -cpu max
      target/arm: Enable FEAT_CSV2_2 for -cpu max
      target/arm: Enable FEAT_CSV3 for -cpu max
      target/arm: Enable FEAT_DGH for -cpu max
      target/arm: Define cortex-a76
      target/arm: Define neoverse-n1

 docs/system/arm/emulation.rst |  10 +
 docs/system/arm/virt.rst      |   2 +
 qapi/machine.json             |   6 +-
 target/arm/cpregs.h           |  11 +
 target/arm/cpu.h              |  23 ++
 target/arm/helper.h           |   1 +
 target/arm/internals.h        |  16 ++
 target/arm/syndrome.h         |   5 +
 target/arm/a32.decode         |  16 +-
 target/arm/t32.decode         |  18 +-
 hw/acpi/aml-build.c           | 111 ++++----
 hw/arm/sbsa-ref.c             |  16 ++
 hw/arm/virt.c                 |  21 +-
 hw/core/machine-hmp-cmds.c    |   4 +
 hw/core/machine.c             |  16 ++
 target/arm/cpu.c              |  66 ++++-
 target/arm/cpu64.c            | 353 ++++++++++++++-----------
 target/arm/cpu_tcg.c          | 227 +++++++++++-----
 target/arm/helper.c           | 600 +++++++++++++++++++++++++-----------------
 target/arm/op_helper.c        |  43 +++
 target/arm/translate-a64.c    |  18 ++
 target/arm/translate.c        |  23 ++
 tests/qtest/numa-test.c       |  19 +-
 .mailmap                      |   3 +-
 MAINTAINERS                   |   2 +-
 25 files changed, 1068 insertions(+), 562 deletions(-)
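With this queue applied, the new CPU models can be tried on the virt
board (illustrative command lines; any further options are up to the
user):

  qemu-system-aarch64 -machine virt -cpu cortex-a76 ...
  qemu-system-aarch64 -machine virt -cpu neoverse-n1 ...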
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Use assert() instead of error_setg(&error_abort),
as suggested by the "qapi/error.h" documentation:

  Please don't error_setg(&error_fatal, ...), use error_report() and
  exit(), because that's more obvious.
  Likewise, don't error_setg(&error_abort, ...), use assert().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 20180625165749.3910-2-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
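As a side note on the replacement idiom used below: assert(cond && "message")
is a standard C trick for self-describing assertions. The string literal is
always non-null, so it leaves the truth value unchanged, but it appears in
the diagnostic when the assertion fires:

    assert(type_match != -1 && "misconfigured fd_format");
    /* on failure, abort() reports the whole expression, including
     * the "misconfigured fd_format" text, not just type_match != -1 */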
---
 hw/block/fdc.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -XXX,XX +XXX,XX @@ static int pick_geometry(FDrive *drv)
                           nb_sectors,
                           FloppyDriveType_str(parse->drive));
         }
+        assert(type_match != -1 && "misconfigured fd_format");
         match = type_match;
     }
-
-    /* No match of any kind found -- fd_format is misconfigured, abort. */
-    if (match == -1) {
-        error_setg(&error_abort, "No candidate geometries present in table "
-                   " for floppy drive type '%s'",
-                   FloppyDriveType_str(drv->drive));
-    }
-
     parse = &(fd_formats[match]);

 out:
--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Use error_report() + exit() instead of error_setg(&error_fatal),
as suggested by the "qapi/error.h" documentation:

  Please don't error_setg(&error_fatal, ...), use error_report() and
  exit(), because that's more obvious.

This fixes CID 1352173:
"Passing null pointer dt_name to qemu_fdt_node_path, which dereferences it."

And this also fixes:

  hw/arm/sysbus-fdt.c:322:9: warning: Array access (from variable 'node_path') results in a null pointer dereference
    if (node_path[1]) {
        ^~~~~~~~~~~~

Fixes: Coverity CID 1352173 (Dereference after null check)
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180625165749.3910-3-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
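For reference, the pattern being applied throughout is this (a minimal
sketch with a hypothetical check, not code taken from the patch):

    if (!node_path) {
        /* report the message ourselves, then terminate explicitly */
        error_report("no matching node for %s", name);
        exit(1);
    }

rather than the discouraged form, which routes an unconditionally fatal
message through the Error object machinery:

    error_setg(&error_fatal, "no matching node for %s", name);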
---
 hw/arm/sysbus-fdt.c | 53 +++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 23 deletions(-)

diff --git a/hw/arm/sysbus-fdt.c b/hw/arm/sysbus-fdt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sysbus-fdt.c
+++ b/hw/arm/sysbus-fdt.c
@@ -XXX,XX +XXX,XX @@ static void copy_properties_from_host(HostProperty *props, int nb_props,
         r = qemu_fdt_getprop(host_fdt, node_path,
                              props[i].name,
                              &prop_len,
-                             props[i].optional ? &err : &error_fatal);
+                             &err);
         if (r) {
             qemu_fdt_setprop(guest_fdt, nodename,
                              props[i].name, r, prop_len);
         } else {
-            if (prop_len != -FDT_ERR_NOTFOUND) {
-                /* optional property not returned although property exists */
-                error_report_err(err);
-            } else {
+            if (props[i].optional && prop_len == -FDT_ERR_NOTFOUND) {
+                /* optional property does not exist */
                 error_free(err);
+            } else {
+                error_report_err(err);
+            }
+            if (!props[i].optional) {
+                /* mandatory property not found: bail out */
+                exit(1);
             }
         }
     }
@@ -XXX,XX +XXX,XX @@ static void fdt_build_clock_node(void *host_fdt, void *guest_fdt,

     node_offset = fdt_node_offset_by_phandle(host_fdt, host_phandle);
     if (node_offset <= 0) {
-        error_setg(&error_fatal,
-                   "not able to locate clock handle %d in host device tree",
-                   host_phandle);
+        error_report("not able to locate clock handle %d in host device tree",
+                     host_phandle);
+        exit(1);
     }
     node_path = g_malloc(path_len);
     while ((ret = fdt_get_path(host_fdt, node_offset, node_path, path_len))
@@ -XXX,XX +XXX,XX @@ static void fdt_build_clock_node(void *host_fdt, void *guest_fdt,
         node_path = g_realloc(node_path, path_len);
     }
     if (ret < 0) {
-        error_setg(&error_fatal,
-                   "not able to retrieve node path for clock handle %d",
-                   host_phandle);
+        error_report("not able to retrieve node path for clock handle %d",
+                     host_phandle);
+        exit(1);
     }

     r = qemu_fdt_getprop(host_fdt, node_path, "compatible", &prop_len,
                          &error_fatal);
     if (strcmp(r, "fixed-clock")) {
-        error_setg(&error_fatal,
-                   "clock handle %d is not a fixed clock", host_phandle);
+        error_report("clock handle %d is not a fixed clock", host_phandle);
+        exit(1);
     }

     nodename = strrchr(node_path, '/');
@@ -XXX,XX +XXX,XX @@ static int add_amd_xgbe_fdt_node(SysBusDevice *sbdev, void *opaque)

     dt_name = sysfs_to_dt_name(vbasedev->name);
     if (!dt_name) {
-        error_setg(&error_fatal, "%s incorrect sysfs device name %s",
-                   __func__, vbasedev->name);
+        error_report("%s incorrect sysfs device name %s",
+                     __func__, vbasedev->name);
+        exit(1);
     }
     node_path = qemu_fdt_node_path(host_fdt, dt_name, vdev->compat,
                                    &error_fatal);
     if (!node_path || !node_path[0]) {
-        error_setg(&error_fatal, "%s unable to retrieve node path for %s/%s",
-                   __func__, dt_name, vdev->compat);
+        error_report("%s unable to retrieve node path for %s/%s",
+                     __func__, dt_name, vdev->compat);
+        exit(1);
     }

     if (node_path[1]) {
-        error_setg(&error_fatal, "%s more than one node matching %s/%s!",
-                   __func__, dt_name, vdev->compat);
+        error_report("%s more than one node matching %s/%s!",
+                     __func__, dt_name, vdev->compat);
+        exit(1);
     }

     g_free(dt_name);

     if (vbasedev->num_regions != 5) {
-        error_setg(&error_fatal, "%s Does the host dt node combine XGBE/PHY?",
-                   __func__);
+        error_report("%s Does the host dt node combine XGBE/PHY?", __func__);
+        exit(1);
     }

     /* generate nodes for DMA_CLK and PTP_CLK */
     r = qemu_fdt_getprop(host_fdt, node_path[0], "clocks",
                          &prop_len, &error_fatal);
     if (prop_len != 8) {
-        error_setg(&error_fatal, "%s clocks property should contain 2 handles",
-                   __func__);
+        error_report("%s clocks property should contain 2 handles", __func__);
+        exit(1);
     }
     host_clock_phandles = (uint32_t *)r;
     guest_clock_phandles[0] = qemu_fdt_alloc_phandle(guest_fdt);
--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Use error_report() + exit() instead of error_setg(&error_fatal),
as suggested by the "qapi/error.h" documentation:

  Please don't error_setg(&error_fatal, ...), use error_report() and
  exit(), because that's more obvious.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 20180625165749.3910-4-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 device_tree.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/device_tree.c b/device_tree.c
index XXXXXXX..XXXXXXX 100644
--- a/device_tree.c
+++ b/device_tree.c
@@ -XXX,XX +XXX,XX @@ static void read_fstree(void *fdt, const char *dirname)
     const char *parent_node;

     if (strstr(dirname, root_dir) != dirname) {
-        error_setg(&error_fatal, "%s: %s must be searched within %s",
-                   __func__, dirname, root_dir);
+        error_report("%s: %s must be searched within %s",
+                     __func__, dirname, root_dir);
+        exit(1);
     }
     parent_node = &dirname[strlen(SYSFS_DT_BASEDIR)];

     d = opendir(dirname);
     if (!d) {
-        error_setg(&error_fatal, "%s cannot open %s", __func__, dirname);
-        return;
+        error_report("%s cannot open %s", __func__, dirname);
+        exit(1);
     }

     while ((de = readdir(d)) != NULL) {
@@ -XXX,XX +XXX,XX @@ static void read_fstree(void *fdt, const char *dirname)
         tmpnam = g_strdup_printf("%s/%s", dirname, de->d_name);

         if (lstat(tmpnam, &st) < 0) {
-            error_setg(&error_fatal, "%s cannot lstat %s", __func__, tmpnam);
+            error_report("%s cannot lstat %s", __func__, tmpnam);
+            exit(1);
         }

         if (S_ISREG(st.st_mode)) {
@@ -XXX,XX +XXX,XX @@ static void read_fstree(void *fdt, const char *dirname)
             gsize len;

             if (!g_file_get_contents(tmpnam, &val, &len, NULL)) {
-                error_setg(&error_fatal, "%s not able to extract info from %s",
-                           __func__, tmpnam);
+                error_report("%s not able to extract info from %s",
+                             __func__, tmpnam);
+                exit(1);
             }

             if (strlen(parent_node) > 0) {
@@ -XXX,XX +XXX,XX @@ void *load_device_tree_from_sysfs(void)
     host_fdt = create_device_tree(&host_fdt_size);
     read_fstree(host_fdt, SYSFS_DT_BASEDIR);
     if (fdt_check_header(host_fdt)) {
-        error_setg(&error_fatal,
-                   "%s host device tree extracted into memory is invalid",
-                   __func__);
+        error_report("%s host device tree extracted into memory is invalid",
+                     __func__);
+        exit(1);
     }
     return host_fdt;
 }
--
2.17.1
Deleted patch
From: Eric Auger <eric.auger@redhat.com>

This helper retrieves the paths of all nodes whose name matches a
given node-name, i.e. either node-name or node-name@unit-address.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1530044492-24921-2-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
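A minimal usage sketch of the new helper (the "ethernet" node name and
the use_node() consumer are hypothetical, for illustration only):

    Error *err = NULL;
    char **paths = qemu_fdt_node_unit_path(fdt, "ethernet", &err);

    if (!paths) {
        /* an error occurred while parsing the blob */
        error_report_err(err);
        exit(1);
    }
    for (int i = 0; paths[i]; i++) {
        /* matches both "ethernet" and "ethernet@<unit-address>" */
        use_node(paths[i]);
    }
    g_strfreev(paths);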
---
 include/sysemu/device_tree.h | 16 +++++++++++
 device_tree.c                | 55 ++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/device_tree.h
+++ b/include/sysemu/device_tree.h
@@ -XXX,XX +XXX,XX @@ void *load_device_tree_from_sysfs(void);
 char **qemu_fdt_node_path(void *fdt, const char *name, char *compat,
                           Error **errp);

+/**
+ * qemu_fdt_node_unit_path: return the paths of nodes matching a given
+ * node-name, ie. node-name and node-name@unit-address
+ * @fdt: pointer to the dt blob
+ * @name: node name
+ * @errp: handle to an error object
+ *
+ * returns a newly allocated NULL-terminated array of node paths.
+ * Use g_strfreev() to free it. If one or more nodes were found, the
+ * array contains the path of each node and the last element equals to
+ * NULL. If there is no error but no matching node was found, the
+ * returned array contains a single element equal to NULL. If an error
+ * was encountered when parsing the blob, the function returns NULL
+ */
+char **qemu_fdt_node_unit_path(void *fdt, const char *name, Error **errp);
+
 int qemu_fdt_setprop(void *fdt, const char *node_path,
                      const char *property, const void *val, int size);
 int qemu_fdt_setprop_cell(void *fdt, const char *node_path,
diff --git a/device_tree.c b/device_tree.c
index XXXXXXX..XXXXXXX 100644
--- a/device_tree.c
+++ b/device_tree.c
@@ -XXX,XX +XXX,XX @@ static int findnode_nofail(void *fdt, const char *node_path)
     return offset;
 }

+char **qemu_fdt_node_unit_path(void *fdt, const char *name, Error **errp)
+{
+    char *prefix = g_strdup_printf("%s@", name);
+    unsigned int path_len = 16, n = 0;
+    GSList *path_list = NULL, *iter;
+    const char *iter_name;
+    int offset, len, ret;
+    char **path_array;
+
+    offset = fdt_next_node(fdt, -1, NULL);
+
+    while (offset >= 0) {
+        iter_name = fdt_get_name(fdt, offset, &len);
+        if (!iter_name) {
+            offset = len;
+            break;
+        }
+        if (!strcmp(iter_name, name) || g_str_has_prefix(iter_name, prefix)) {
+            char *path;
+
+            path = g_malloc(path_len);
+            while ((ret = fdt_get_path(fdt, offset, path, path_len))
+                   == -FDT_ERR_NOSPACE) {
+                path_len += 16;
+                path = g_realloc(path, path_len);
+            }
+            path_list = g_slist_prepend(path_list, path);
+            n++;
+        }
+        offset = fdt_next_node(fdt, offset, NULL);
+    }
+    g_free(prefix);
+
+    if (offset < 0 && offset != -FDT_ERR_NOTFOUND) {
+        error_setg(errp, "%s: abort parsing dt for %s node units: %s",
+                   __func__, name, fdt_strerror(offset));
+        for (iter = path_list; iter; iter = iter->next) {
+            g_free(iter->data);
+        }
+        g_slist_free(path_list);
+        return NULL;
+    }
+
+    path_array = g_new(char *, n + 1);
+    path_array[n--] = NULL;
+
+    for (iter = path_list; iter; iter = iter->next) {
+        path_array[n--] = iter->data;
+    }
+
+    g_slist_free(path_list);
+
+    return path_array;
+}
+
 char **qemu_fdt_node_path(void *fdt, const char *name, char *compat,
                           Error **errp)
 {
--
2.17.1
From: Leif Lindholm <quic_llindhol@quicinc.com>

NUVIA was acquired by Qualcomm in March 2021, but kept functioning on
separate infrastructure for a transitional period. We've now switched
over to contributing as Qualcomm Innovation Center (quicinc), so update
my email address to reflect this.

Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20220505113740.75565-1-quic_llindhol@quicinc.com
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[Fixed commit message typo]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
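For context, each .mailmap entry maps an address as it appears in
commits onto the canonical name and address, using git's
"Proper Name <canonical@email> <commit@email>" form; tools such as
git shortlog consult it when attributing commits. The two entries
added below therefore fold both the Linaro-era and the NUVIA-era
addresses into the new quicinc one.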
---
 .mailmap    | 3 ++-
 MAINTAINERS | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
-Leif Lindholm <leif@nuviainc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
 Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
 Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
 Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
 SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <leif@nuviainc.com>
+R: Leif Lindholm <quic_llindhol@quicinc.com>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/sbsa-ref.c
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

More gracefully handle cpregs when EL2 and/or EL3 are missing.
If the reg is entirely inaccessible, do not register it at all.
If the reg is for EL2, and EL3 is present but EL2 is not,
either discard, squash to res0, const, or keep unchanged.

Per rule RJFFP, mark the 4 aarch32 hypervisor access registers
with ARM_CP_EL3_NO_EL2_KEEP, and mark all of the EL2 address
translation and tlb invalidation "regs" ARM_CP_EL3_NO_EL2_UNDEF.
Mark the 2 virtualization processor id regs ARM_CP_EL3_NO_EL2_C_NZ.

This will simplify cpreg registration for conditional arm features.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
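To illustrate the intended use of the new flags (a hypothetical reginfo
entry, not one taken from this patch): an EL2 register that must remain
accessible from EL3 when EL2 is absent is marked KEEP,

    { .name = "EXAMPLE_EL2", .state = ARM_CP_STATE_AA64,
      /* opc0/opc1/crn/crm/opc2 encoding of the register goes here */
      .access = PL2_RW, .resetvalue = 0,
      /* rule RJFFP: retain as-is when EL3 is present without EL2 */
      .type = ARM_CP_EL3_NO_EL2_KEEP },

whereas an entry with none of the three flags set is squashed to an
ARM_CP_CONST RES0 register in that configuration.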
---
 target/arm/cpregs.h |  11 +++
 target/arm/helper.c | 178 ++++++++++++++++++++++++++++++--------------
 2 files changed, 133 insertions(+), 56 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_SVE                   = 1 << 14,
     /* Flag: Do not expose in gdb sysreg xml. */
     ARM_CP_NO_GDB                = 1 << 15,
+    /*
+     * Flags: If EL3 but not EL2...
+     * - UNDEF: discard the cpreg,
+     * - KEEP: retain the cpreg as is,
+     * - C_NZ: set const on the cpreg, but retain resetvalue,
+     * - else: set const on the cpreg, zero resetvalue, aka RES0.
+     * See rule RJFFP in section D1.1.3 of DDI0487H.a.
+     */
+    ARM_CP_EL3_NO_EL2_UNDEF      = 1 << 16,
+    ARM_CP_EL3_NO_EL2_KEEP       = 1 << 17,
+    ARM_CP_EL3_NO_EL2_C_NZ       = 1 << 18,
 };

 /*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write },
     { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_ALIAS | ARM_CP_FPU,
+      .access = PL2_RW,
+      .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
       .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
     { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .resetvalue = 0,
+      .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
       .writefn = dacr_write, .raw_writefn = raw_write,
       .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
     { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
-      .access = PL2_RW, .resetvalue = 0,
+      .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
       .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
     { .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64,
       .type = ARM_CP_ALIAS,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .writefn = tlbimva_hyp_is_write },
     { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_alle2_write },
     { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2_write },
     { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2_write },
     { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_alle2is_write },
     { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
     { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
 #ifndef CONFIG_USER_ONLY
     /* Unlike the other EL2-related AT operations, these must
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
       .access = PL2_W, .accessfn = at_s1e2_access,
-      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+      .writefn = ats_write64 },
     { .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
       .access = PL2_W, .accessfn = at_s1e2_access,
-      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+      .writefn = ats_write64 },
     /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
      * if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
      * with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
     { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
       .access = PL2_RW, .accessfn = access_tda,
-      .type = ARM_CP_NOP },
+      .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
     /* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
      * Channel but Linux may try to access this register. The 32-bit
      * alias is DBGDCCINT.
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
       .access = PL2_W, .type = ARM_CP_NOP },
    { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
      .access = PL2_W, .type = ARM_CP_NOP },
    { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2_write },
    { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_rvae2_write },
    { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
      .writefn = tlbi_aa64_vae1is_write },
    { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_alle2is_write },
    { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_vae2is_write },
    { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
      .writefn = tlbi_aa64_alle1is_write },
    { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
-     .access = PL2_W, .type = ARM_CP_NO_RAW,
+     .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
      .writefn = tlbi_aa64_vae2is_write },
    { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "VPIDR", .state = ARM_CP_STATE_AA32,
               .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
               .access = PL2_RW, .accessfn = access_el3_aa32ns,
-              .resetvalue = cpu->midr, .type = ARM_CP_ALIAS,
+              .resetvalue = cpu->midr,
+              .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) },
             { .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
               .access = PL2_RW, .resetvalue = cpu->midr,
+              .type = ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
             { .name = "VMPIDR", .state = ARM_CP_STATE_AA32,
               .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
               .access = PL2_RW, .accessfn = access_el3_aa32ns,
-              .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS,
+              .resetvalue = vmpidr_def,
+              .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) },
             { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
-              .access = PL2_RW,
-              .resetvalue = vmpidr_def,
+              .access = PL2_RW, .resetvalue = vmpidr_def,
+              .type = ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
         };
         define_arm_cp_regs(cpu, vpidr_regs);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
                                    int crm, int opc1, int opc2,
                                    const char *name)
 {
+    CPUARMState *env = &cpu->env;
     uint32_t key;
     ARMCPRegInfo *r2;
     bool is64 = r->type & ARM_CP_64BIT;
     bool ns = secstate & ARM_CP_SECSTATE_NS;
     int cp = r->cp;
-    bool isbanked;
     size_t name_len;
+    bool make_const;

     switch (state) {
     case ARM_CP_STATE_AA32:
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         }
     }

+    /*
+     * Eliminate registers that are not present because the EL is missing.
+     * Doing this here makes it easier to put all registers for a given
+     * feature into the same ARMCPRegInfo array and define them all at once.
+     */
+    make_const = false;
+    if (arm_feature(env, ARM_FEATURE_EL3)) {
+        /*
+         * An EL2 register without EL2 but with EL3 is (usually) RES0.
+         * See rule RJFFP in section D1.1.3 of DDI0487H.a.
+         */
+        int min_el = ctz32(r->access) / 2;
+        if (min_el == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
+            if (r->type & ARM_CP_EL3_NO_EL2_UNDEF) {
+                return;
+            }
+            make_const = !(r->type & ARM_CP_EL3_NO_EL2_KEEP);
+        }
+    } else {
+        CPAccessRights max_el = (arm_feature(env, ARM_FEATURE_EL2)
+                                 ? PL2_RW : PL1_RW);
+        if ((r->access & max_el) == 0) {
+            return;
+        }
+    }
+
     /* Combine cpreg and name into one allocation. */
     name_len = strlen(name) + 1;
     r2 = g_malloc(sizeof(*r2) + name_len);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         r2->opaque = opaque;
     }

-    isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
-    if (isbanked) {
+    if (make_const) {
+        /* This should not have been a very special register to begin. */
+        int old_special = r2->type & ARM_CP_SPECIAL_MASK;
+        assert(old_special == 0 || old_special == ARM_CP_NOP);
         /*
-         * Register is banked (using both entries in array).
-         * Overwriting fieldoffset as the array is only used to define
-         * banked registers but later only fieldoffset is used.
+         * Set the special function to CONST, retaining the other flags.
+         * This is important for e.g. ARM_CP_SVE so that we still
+         * take the SVE trap if CPTR_EL3.EZ == 0.
          */
-        r2->fieldoffset = r->bank_fieldoffsets[ns];
-    }
+        r2->type = (r2->type & ~ARM_CP_SPECIAL_MASK) | ARM_CP_CONST;
+        /*
+         * Usually, these registers become RES0, but there are a few
+         * special cases like VPIDR_EL2 which have a constant non-zero
+         * value with writes ignored.
+         */
+        if (!(r->type & ARM_CP_EL3_NO_EL2_C_NZ)) {
+            r2->resetvalue = 0;
+        }
+        /*
+         * ARM_CP_CONST has precedence, so removing the callbacks and
+         * offsets are not strictly necessary, but it is potentially
+         * less confusing to debug later.
+         */
+        r2->readfn = NULL;
+        r2->writefn = NULL;
+        r2->raw_readfn = NULL;
+        r2->raw_writefn = NULL;
+        r2->resetfn = NULL;
+        r2->fieldoffset = 0;
+        r2->bank_fieldoffsets[0] = 0;
+        r2->bank_fieldoffsets[1] = 0;
+    } else {
+        bool isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];

-    if (state == ARM_CP_STATE_AA32) {
         if (isbanked) {
             /*
-             * If the register is banked then we don't need to migrate or
-             * reset the 32-bit instance in certain cases:
-             *
-             * 1) If the register has both 32-bit and 64-bit instances then we
-             *    can count on the 64-bit instance taking care of the
-             *    non-secure bank.
-             * 2) If ARMv8 is enabled then we can count on a 64-bit version
-             *    taking care of the secure bank. This requires that separate
-             *    32 and 64-bit definitions are provided.
+             * Register is banked (using both entries in array).
+             * Overwriting fieldoffset as the array is only used to define
+             * banked registers but later only fieldoffset is used.
              */
-            if ((r->state == ARM_CP_STATE_BOTH && ns) ||
-                (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) {
+            r2->fieldoffset = r->bank_fieldoffsets[ns];
+        }
+        if (state == ARM_CP_STATE_AA32) {
+            if (isbanked) {
+                /*
+                 * If the register is banked then we don't need to migrate or
+                 * reset the 32-bit instance in certain cases:
+                 *
+                 * 1) If the register has both 32-bit and 64-bit instances
+                 *    then we can count on the 64-bit instance taking care
+                 *    of the non-secure bank.
+                 * 2) If ARMv8 is enabled then we can count on a 64-bit
+                 *    version taking care of the secure bank. This requires
+                 *    that separate 32 and 64-bit definitions are provided.
+                 */
+                if ((r->state == ARM_CP_STATE_BOTH && ns) ||
+                    (arm_feature(env, ARM_FEATURE_V8) && !ns)) {
+                    r2->type |= ARM_CP_ALIAS;
+                }
+            } else if ((secstate != r->secure) && !ns) {
+                /*
+                 * The register is not banked so we only want to allow
+                 * migration of the non-secure instance.
+                 */
                 r2->type |= ARM_CP_ALIAS;
             }
-        } else if ((secstate != r->secure) && !ns) {
-            /*
-             * The register is not banked so we only want to allow migration
-             * of the non-secure instance.
-             */
-            r2->type |= ARM_CP_ALIAS;
-        }

-        if (HOST_BIG_ENDIAN &&
-            r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
-            r2->fieldoffset += sizeof(uint32_t);
+            if (HOST_BIG_ENDIAN &&
+                r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
+                r2->fieldoffset += sizeof(uint32_t);
+            }
         }
     }

@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      * multiple times. Special registers (ie NOP/WFI) are
      * never migratable and not even raw-accessible.
      */
-    if (r->type & ARM_CP_SPECIAL_MASK) {
+    if (r2->type & ARM_CP_SPECIAL_MASK) {
         r2->type |= ARM_CP_NO_RAW;
     }
     if (((r->crm == CP_ANY) && crm != 0) ||
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Drop el3_no_el2_cp_reginfo, el3_no_el2_v8_cp_reginfo, and the local
vpidr_regs definition, and rely on the squashing to ARM_CP_CONST
while registering for v8.

This is a behavior change for v7 cpus with Security Extensions and
without Virtualization Extensions, in that the virtualization cpregs
are now correctly not present. This would be a migration compatibility
break, except that we have an existing bug in which migration of 32-bit
cpus with Security Extensions enabled does not work.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 158 ++++----------------------------------------
 1 file changed, 13 insertions(+), 145 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
 };

-/* Used to describe the behaviour of EL2 regs when EL2 does not exist.  */
-static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
-    { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0,
-      .access = PL2_RW,
-      .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
-    { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTTBR", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 6, .crm = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .accessfn = access_tda,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HIFAR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_CONST,
-      .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .resetvalue = 0 },
-};
-
-/* Ditto, but for registers which exist in ARMv8 but not v7 */
-static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
-    { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-};
-
 static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
 {
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, v8_idregs);
         define_arm_cp_regs(cpu, v8_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_EL2)) {
+
+    /*
+     * Register the base EL2 cpregs.
+     * Pre v8, these registers are implemented only as part of the
+     * Virtualization Extensions (EL2 present).  Beginning with v8,
+     * if EL2 is missing but EL3 is enabled, mostly these become
+     * RES0 from EL3, with some specific exceptions.
+     */
+    if (arm_feature(env, ARM_FEATURE_EL2)
+        || (arm_feature(env, ARM_FEATURE_EL3)
+            && arm_feature(env, ARM_FEATURE_V8))) {
         uint64_t vmpidr_def = mpidr_read_val(env);
         ARMCPRegInfo vpidr_regs[] = {
             { .name = "VPIDR", .state = ARM_CP_STATE_AA32,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         };
         define_one_arm_cp_reg(cpu, &rvbar);
     }
-    } else {
-        /* If EL2 is missing but higher ELs are enabled, we need to
-         * register the no_el2 reginfos.
-         */
-        if (arm_feature(env, ARM_FEATURE_EL3)) {
-            /* When EL3 exists but not EL2, VPIDR and VMPIDR take the value
-             * of MIDR_EL1 and MPIDR_EL1.
-             */
-            ARMCPRegInfo vpidr_regs[] = {
-                { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_CONST, .resetvalue = cpu->midr,
-                  .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
-                { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_NO_RAW,
-                  .writefn = arm_cp_write_ignore, .readfn = mpidr_read },
-            };
-            define_arm_cp_regs(cpu, vpidr_regs);
-            define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
-            if (arm_feature(env, ARM_FEATURE_V8)) {
-                define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo);
-            }
-        }
     }
+
+    /* Register the base EL3 cpregs. */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         define_arm_cp_regs(cpu, el3_cp_reginfo);
         ARMCPRegInfo el3_regs[] = {
--
2.25.1
diff view generated by jsdifflib
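For readers skimming the hunk above: the net effect is that the base EL2 registers are now defined in two situations rather than one. A minimal restatement of the new condition as a standalone predicate (an illustration only, not code from the patch; arm_feature() and the ARM_FEATURE_* flags are the existing QEMU APIs):

    /* Sketch: when does register_cp_regs_for_features() now define
     * the base EL2 cpregs? Either EL2 exists, or the CPU is v8 with
     * EL3 but no EL2, in which case most of them are RES0 from EL3. */
    static bool wants_base_el2_cpregs(CPUARMState *env)
    {
        return arm_feature(env, ARM_FEATURE_EL2)
               || (arm_feature(env, ARM_FEATURE_EL3)
                   && arm_feature(env, ARM_FEATURE_V8));
    }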
From: Richard Henderson <richard.henderson@linaro.org>

Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180629001538.11415-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 8 ++++----
target/arm/translate-a64.c | 5 ++---
2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
static const ARMCPRegInfo zcr_el1_reginfo = {
.name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
+ .access = PL1_RW, .type = ARM_CP_SVE,
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
.writefn = zcr_write, .raw_writefn = raw_write
};
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el1_reginfo = {
static const ARMCPRegInfo zcr_el2_reginfo = {
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
+ .access = PL2_RW, .type = ARM_CP_SVE,
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
.writefn = zcr_write, .raw_writefn = raw_write
};
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el2_reginfo = {
static const ARMCPRegInfo zcr_no_el2_reginfo = {
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
+ .access = PL2_RW, .type = ARM_CP_SVE,
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
};

static const ARMCPRegInfo zcr_el3_reginfo = {
.name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL3_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
+ .access = PL3_RW, .type = ARM_CP_SVE,
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
.writefn = zcr_write, .raw_writefn = raw_write
};
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
default:
break;
}
- if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
- return;
- }
if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
return;
+ } else if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
+ return;
}

if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Drop zcr_no_el2_reginfo and merge the 3 registers into one array,
now that ZCR_EL2 can be squashed to RES0 and ZCR_EL3 dropped
while registering.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 55 ++++++++++++++-------------------
1 file changed, 17 insertions(+), 38 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
}
}

-static const ARMCPRegInfo zcr_el1_reginfo = {
- .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
- .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_el2_reginfo = {
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
- .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_no_el2_reginfo = {
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL2_RW, .type = ARM_CP_SVE,
- .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
-};
-
-static const ARMCPRegInfo zcr_el3_reginfo = {
- .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
- .access = PL3_RW, .type = ARM_CP_SVE,
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
- .writefn = zcr_write, .raw_writefn = raw_write
+static const ARMCPRegInfo zcr_reginfo[] = {
+ { .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
+ { .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL2_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
+ { .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
+ .access = PL3_RW, .type = ARM_CP_SVE,
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
+ .writefn = zcr_write, .raw_writefn = raw_write },
};

void hw_watchpoint_update(ARMCPU *cpu, int n)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
}

if (cpu_isar_feature(aa64_sve, cpu)) {
- define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
- if (arm_feature(env, ARM_FEATURE_EL2)) {
- define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
- } else {
- define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
- }
- if (arm_feature(env, ARM_FEATURE_EL3)) {
- define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
- }
+ define_arm_cp_regs(cpu, zcr_reginfo);
}

#ifdef TARGET_AARCH64
--
2.25.1
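The registration-time simplification in the second patch is easiest to see side by side. Before, registration picked one of three singleton reginfos; afterwards one unconditional call suffices, because registration itself squashes the ZCR_EL2 entry to RES0 when EL2 is absent and drops ZCR_EL3 when EL3 is absent. A condensed sketch of the two shapes, taken from the hunk above (define_arm_cp_regs() and define_one_arm_cp_reg() are the existing APIs):

    /* Before: pick reginfos according to which ELs exist. */
    define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
    if (arm_feature(env, ARM_FEATURE_EL2)) {
        define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
    } else {
        define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
    }
    if (arm_feature(env, ARM_FEATURE_EL3)) {
        define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
    }

    /* After: one call; missing-EL entries are handled at registration. */
    define_arm_cp_regs(cpu, zcr_reginfo);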
From: Alex Bennée <alex.bennee@linaro.org>

Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180625160009.17437-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
};

#else
-/* In user-mode none of the generic timer registers are accessible,
- * and their implementation depends on QEMU_CLOCK_VIRTUAL and qdev gpio outputs,
- * so instead just don't register any of them.
+
+/* In user-mode most of the generic timer registers are inaccessible
+ * however modern kernels (4.12+) allow access to cntvct_el0
 */
+
+static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ /* Currently we have no support for QEMUTimer in linux-user so we
+ * can't call gt_get_countervalue(env), instead we directly
+ * call the lower level functions.
+ */
+ return cpu_get_clock() / GTIMER_SCALE;
+}
+
static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
+ { .name = "CNTFRQ_EL0", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0,
+ .type = ARM_CP_CONST, .access = PL0_R /* no PL1_RW in linux-user */,
+ .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq),
+ .resetvalue = NANOSECONDS_PER_SECOND / GTIMER_SCALE,
+ },
+ { .name = "CNTVCT_EL0", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 2,
+ .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+ .readfn = gt_virt_cnt_read,
+ },
REGINFO_SENTINEL
};
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This register is present for either VHE or Debugv8p2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
};

+static const ARMCPRegInfo contextidr_el2 = {
+ .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
+ .access = PL2_RW,
+ .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2])
+};
+
static const ARMCPRegInfo vhe_reginfo[] = {
- { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
- .access = PL2_RW,
- .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
{ .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
.access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_one_arm_cp_reg(cpu, &ssbs_reginfo);
}

+ if (cpu_isar_feature(aa64_vh, cpu) ||
+ cpu_isar_feature(aa64_debugv8p2, cpu)) {
+ define_one_arm_cp_reg(cpu, &contextidr_el2);
+ }
if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
define_arm_cp_regs(cpu, vhe_reginfo);
}
--
2.25.1
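Worth spelling out the arithmetic behind the CNTFRQ_EL0 reset value above: with GTIMER_SCALE being 16 (nanoseconds per emulated tick), NANOSECONDS_PER_SECOND / GTIMER_SCALE is 1000000000 / 16 = 62500000, so the counter advertises 62.5 MHz. With the patch applied, an AArch64 linux-user binary can read the registers directly; a hypothetical test program (not part of the patch) might look like this:

    /* Build with an aarch64 compiler; needs a 4.12+ kernel, or this
     * QEMU change, for CNTVCT_EL0/CNTFRQ_EL0 reads to be permitted. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t freq, ticks;

        asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
        asm volatile("mrs %0, cntvct_el0" : "=r"(ticks));
        printf("frequency %" PRIu64 " Hz, counter %" PRIu64 "\n",
               freq, ticks);
        return 0;
    }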
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 13 +++++++++
target/arm/sve_helper.c | 55 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 30 +++++++++++++++++++++
target/arm/sve.decode | 8 ++++++
4 files changed, 106 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_fmins_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(sve_fmins_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i64, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_fcvt_sh, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_dh, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_hs, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_ds, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
} while (i != 0); \
}

+/* SVE fp16 conversions always use IEEE mode. Like AdvSIMD, they ignore
+ * FZ16. When converting from fp16, this affects flushing input denormals;
+ * when converting to fp16, this affects flushing output denormals.
+ */
+static inline float32 sve_f16_to_f32(float16 f, float_status *fpst)
+{
+ flag save = get_flush_inputs_to_zero(fpst);
+ float32 ret;
+
+ set_flush_inputs_to_zero(false, fpst);
+ ret = float16_to_float32(f, true, fpst);
+ set_flush_inputs_to_zero(save, fpst);
+ return ret;
+}
+
+static inline float64 sve_f16_to_f64(float16 f, float_status *fpst)
+{
+ flag save = get_flush_inputs_to_zero(fpst);
+ float64 ret;
+
+ set_flush_inputs_to_zero(false, fpst);
+ ret = float16_to_float64(f, true, fpst);
+ set_flush_inputs_to_zero(save, fpst);
+ return ret;
+}
+
+static inline float16 sve_f32_to_f16(float32 f, float_status *fpst)
+{
+ flag save = get_flush_to_zero(fpst);
+ float16 ret;
+
+ set_flush_to_zero(false, fpst);
+ ret = float32_to_float16(f, true, fpst);
+ set_flush_to_zero(save, fpst);
+ return ret;
+}
+
+static inline float16 sve_f64_to_f16(float64 f, float_status *fpst)
+{
+ flag save = get_flush_to_zero(fpst);
+ float16 ret;
+
+ set_flush_to_zero(false, fpst);
+ ret = float64_to_float16(f, true, fpst);
+ set_flush_to_zero(save, fpst);
+ return ret;
+}
+
+DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
+DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
+DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
+DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
+DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
+DO_ZPZ_FP(sve_fcvt_sd, uint64_t, , float32_to_float64)
+
DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_zpz_ptr(DisasContext *s, int rd, int rn, int pg,
return true;
}

+static bool trans_FCVT_sh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvt_sh);
+}
+
+static bool trans_FCVT_hs(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hs);
+}
+
+static bool trans_FCVT_dh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvt_dh);
+}
+
+static bool trans_FCVT_hd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hd);
+}
+
+static bool trans_FCVT_ds(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_ds);
+}
+
+static bool trans_FCVT_sd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_sd);
+}
+
static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
{
return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FNMLS_zpzzz 01100101 .. 1 ..... 111 ... ..... ..... @rdn_pg_rm_ra

### SVE FP Unary Operations Predicated Group

+# SVE floating-point convert precision
+FCVT_sh 01100101 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
+FCVT_hs 01100101 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
+FCVT_dh 01100101 11 0010 00 101 ... ..... ..... @rd_pg_rn_e0
+FCVT_hd 01100101 11 0010 01 101 ... ..... ..... @rd_pg_rn_e0
+FCVT_ds 01100101 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
+FCVT_sd 01100101 11 0010 11 101 ... ..... ..... @rd_pg_rn_e0
+
# SVE integer convert to floating-point
SCVTF_hh 01100101 01 010 01 0 101 ... ..... ..... @rd_pg_rn_e0
SCVTF_sh 01100101 01 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Previously we were defining some of these in user-only mode,
but none of them are accessible from user-only, therefore
define them only in system mode.

This will shortly be used from cpu_tcg.c also.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 6 ++++
target/arm/cpu64.c | 64 +++---------------------------------------
target/arm/cpu_tcg.c | 59 ++++++++++++++++++++++++++++++++++++++
3 files changed, 69 insertions(+), 60 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
#endif

+#ifdef CONFIG_USER_ONLY
+static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
+#else
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
+#endif
+
#endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
#include "hvf_arm.h"
#include "qapi/visitor.h"
#include "hw/qdev-properties.h"
-#include "cpregs.h"
+#include "internals.h"


-#ifndef CONFIG_USER_ONLY
-static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-{
- ARMCPU *cpu = env_archcpu(env);
-
- /* Number of cores is in [25:24]; otherwise we RAZ */
- return (cpu->core_count - 1) << 24;
-}
-#endif
-
-static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
-#ifndef CONFIG_USER_ONLY
- { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
- .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
- .writefn = arm_cp_write_ignore },
- { .name = "L2CTLR",
- .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
- .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
- .writefn = arm_cp_write_ignore },
-#endif
- { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2ECTLR",
- .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUACTLR",
- .cp = 15, .opc1 = 0, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUECTLR",
- .cp = 15, .opc1 = 1, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUMERRSR",
- .cp = 15, .opc1 = 2, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2MERRSR",
- .cp = 15, .opc1 = 3, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-};
-
static void aarch64_a57_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

static void aarch64_a53_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

static void aarch64_a72_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
#endif
#include "cpregs.h"

+#ifndef CONFIG_USER_ONLY
+static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ ARMCPU *cpu = env_archcpu(env);
+
+ /* Number of cores is in [25:24]; otherwise we RAZ */
+ return (cpu->core_count - 1) << 24;
+}
+
+static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
+ { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
+ .access = PL1_RW, .readfn = l2ctlr_read,
+ .writefn = arm_cp_write_ignore },
+ { .name = "L2CTLR",
+ .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
+ .access = PL1_RW, .readfn = l2ctlr_read,
+ .writefn = arm_cp_write_ignore },
+ { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2ECTLR",
+ .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUACTLR",
+ .cp = 15, .opc1 = 0, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUECTLR",
+ .cp = 15, .opc1 = 1, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUMERRSR",
+ .cp = 15, .opc1 = 2, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2MERRSR",
+ .cp = 15, .opc1 = 3, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+};
+
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu)
+{
+ define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+}
+#endif /* !CONFIG_USER_ONLY */
+
/* CPU models. These are not needed for the AArch64 linux-user build. */
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)

--
2.25.1
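A quick worked example of the L2CTLR read function that just moved: the register reports "number of cores minus one" in bits [25:24] and reads as zero elsewhere, so for a hypothetical 4-core cluster l2ctlr_read() returns (4 - 1) << 24 = 0x03000000. A self-contained check of the same computation outside QEMU (values made up for illustration):

    #include <assert.h>
    #include <stdint.h>

    /* Mirror of the moved helper's computation. */
    static uint64_t l2ctlr_value(unsigned core_count)
    {
        return (uint64_t)(core_count - 1) << 24;   /* cores-1 in [25:24] */
    }

    int main(void)
    {
        assert(l2ctlr_value(1) == 0x00000000);
        assert(l2ctlr_value(4) == 0x03000000);
        return 0;
    }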
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 14 +++++++
target/arm/sve_helper.c | 8 ++++
target/arm/translate-sve.c | 77 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 9 +++++
4 files changed, 108 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_fcvtzu_sd, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(sve_fcvtzu_dd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_frint_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frint_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frint_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_frintx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frintx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frintx_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZ_FP(sve_fcvtzu_sd, uint64_t, , vfp_float32_to_uint64_rtz)
DO_ZPZ_FP(sve_fcvtzu_ds, uint64_t, , helper_vfp_touizd)
DO_ZPZ_FP(sve_fcvtzu_dd, uint64_t, , vfp_float64_to_uint64_rtz)

+DO_ZPZ_FP(sve_frint_h, uint16_t, H1_2, helper_advsimd_rinth)
+DO_ZPZ_FP(sve_frint_s, uint32_t, H1_4, helper_rints)
+DO_ZPZ_FP(sve_frint_d, uint64_t, , helper_rintd)
+
+DO_ZPZ_FP(sve_frintx_h, uint16_t, H1_2, float16_round_to_int)
+DO_ZPZ_FP(sve_frintx_s, uint32_t, H1_4, float32_round_to_int)
+DO_ZPZ_FP(sve_frintx_d, uint64_t, , float64_round_to_int)
+
DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTZU_dd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvtzu_dd);
}

+static gen_helper_gvec_3_ptr * const frint_fns[3] = {
+ gen_helper_sve_frint_h,
+ gen_helper_sve_frint_s,
+ gen_helper_sve_frint_d
+};
+
+static bool trans_FRINTI(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ if (a->esz == 0) {
+ return false;
+ }
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+ frint_fns[a->esz - 1]);
+}
+
+static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3_ptr * const fns[3] = {
+ gen_helper_sve_frintx_h,
+ gen_helper_sve_frintx_s,
+ gen_helper_sve_frintx_d
+ };
+ if (a->esz == 0) {
+ return false;
+ }
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+
+static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
+{
+ if (a->esz == 0) {
+ return false;
+ }
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_i32 tmode = tcg_const_i32(mode);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+
+ gen_helper_set_rmode(tmode, tmode, status);
+
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ pred_full_reg_offset(s, a->pg),
+ status, vsz, vsz, 0, frint_fns[a->esz - 1]);
+
+ gen_helper_set_rmode(tmode, tmode, status);
+ tcg_temp_free_i32(tmode);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
+static bool trans_FRINTN(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_frint_mode(s, a, float_round_nearest_even);
+}
+
+static bool trans_FRINTP(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_frint_mode(s, a, float_round_up);
+}
+
+static bool trans_FRINTM(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_frint_mode(s, a, float_round_down);
+}
+
+static bool trans_FRINTZ(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_frint_mode(s, a, float_round_to_zero);
+}
+
+static bool trans_FRINTA(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_frint_mode(s, a, float_round_ties_away);
+}
+
static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
{
return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FCVTZU_sd 01100101 11 011 10 1 101 ... ..... ..... @rd_pg_rn_e0
FCVTZS_dd 01100101 11 011 11 0 101 ... ..... ..... @rd_pg_rn_e0
FCVTZU_dd 01100101 11 011 11 1 101 ... ..... ..... @rd_pg_rn_e0

+# SVE floating-point round to integral value
+FRINTN 01100101 .. 000 000 101 ... ..... ..... @rd_pg_rn
+FRINTP 01100101 .. 000 001 101 ... ..... ..... @rd_pg_rn
+FRINTM 01100101 .. 000 010 101 ... ..... ..... @rd_pg_rn
+FRINTZ 01100101 .. 000 011 101 ... ..... ..... @rd_pg_rn
+FRINTA 01100101 .. 000 100 101 ... ..... ..... @rd_pg_rn
+FRINTX 01100101 .. 000 110 101 ... ..... ..... @rd_pg_rn
+FRINTI 01100101 .. 000 111 101 ... ..... ..... @rd_pg_rn
+
# SVE integer convert to floating-point
SCVTF_hh 01100101 01 010 01 0 101 ... ..... ..... @rd_pg_rn_e0
SCVTF_sh 01100101 01 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Instead of starting with cortex-a15 and adding v8 features to
a v7 cpu, begin with a v8 cpu stripped of its aarch64 features.
This fixes the long-standing to-do where we only enabled v8
features for user-only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu_tcg.c | 151 ++++++++++++++++++++++++++-----------------
1 file changed, 92 insertions(+), 59 deletions(-)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
static void arm_max_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
+ uint32_t t;

- cortex_a15_initfn(obj);
+ /* aarch64_a57_initfn, advertising none of the aarch64 features */
+ cpu->dtb_compatible = "arm,cortex-a57";
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
+ cpu->midr = 0x411fd070;
+ cpu->revidr = 0x00000000;
+ cpu->reset_fpsid = 0x41034070;
+ cpu->isar.mvfr0 = 0x10110222;
+ cpu->isar.mvfr1 = 0x12111111;
+ cpu->isar.mvfr2 = 0x00000043;
+ cpu->ctr = 0x8444c004;
+ cpu->reset_sctlr = 0x00c50838;
+ cpu->isar.id_pfr0 = 0x00000131;
+ cpu->isar.id_pfr1 = 0x00011011;
+ cpu->isar.id_dfr0 = 0x03010066;
+ cpu->id_afr0 = 0x00000000;
+ cpu->isar.id_mmfr0 = 0x10101105;
+ cpu->isar.id_mmfr1 = 0x40000000;
+ cpu->isar.id_mmfr2 = 0x01260000;
+ cpu->isar.id_mmfr3 = 0x02102211;
+ cpu->isar.id_isar0 = 0x02101110;
+ cpu->isar.id_isar1 = 0x13112111;
+ cpu->isar.id_isar2 = 0x21232042;
+ cpu->isar.id_isar3 = 0x01112131;
+ cpu->isar.id_isar4 = 0x00011142;
+ cpu->isar.id_isar5 = 0x00011121;
+ cpu->isar.id_isar6 = 0;
+ cpu->isar.dbgdidr = 0x3516d000;
+ cpu->clidr = 0x0a200023;
+ cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
+ cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
+ cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);

- /* old-style VFP short-vector support */
- cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+ /* Add additional features supported by QEMU */
+ t = cpu->isar.id_isar5;
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+ cpu->isar.id_isar5 = t;
+
+ t = cpu->isar.id_isar6;
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+ cpu->isar.id_isar6 = t;
+
+ t = cpu->isar.mvfr1;
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+ cpu->isar.mvfr1 = t;
+
+ t = cpu->isar.mvfr2;
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+ cpu->isar.mvfr2 = t;
+
+ t = cpu->isar.id_mmfr3;
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+ cpu->isar.id_mmfr3 = t;
+
+ t = cpu->isar.id_mmfr4;
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+ cpu->isar.id_mmfr4 = t;
+
+ t = cpu->isar.id_pfr0;
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = t;
+
+ t = cpu->isar.id_pfr2;
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+ cpu->isar.id_pfr2 = t;

#ifdef CONFIG_USER_ONLY
/*
- * We don't set these in system emulation mode for the moment,
- * since we don't correctly set (all of) the ID registers to
- * advertise them.
+ * Break with true ARMv8 and add back old-style VFP short-vector support.
+ * Only do this for user-mode, where -cpu max is the default, so that
+ * older v6 and v7 programs are more likely to work without adjustment.
 */
- set_feature(&cpu->env, ARM_FEATURE_V8);
- {
- uint32_t t;
-
- t = cpu->isar.id_isar5;
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
- t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
- cpu->isar.id_isar5 = t;
-
- t = cpu->isar.id_isar6;
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
- cpu->isar.id_isar6 = t;
-
- t = cpu->isar.mvfr1;
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
- cpu->isar.mvfr1 = t;
-
- t = cpu->isar.mvfr2;
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
- cpu->isar.mvfr2 = t;
-
- t = cpu->isar.id_mmfr3;
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
- cpu->isar.id_mmfr3 = t;
-
- t = cpu->isar.id_mmfr4;
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
- cpu->isar.id_mmfr4 = t;
-
- t = cpu->isar.id_pfr0;
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
- cpu->isar.id_pfr0 = t;
-
- t = cpu->isar.id_pfr2;
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
- cpu->isar.id_pfr2 = t;
- }
-#endif /* CONFIG_USER_ONLY */
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+#endif
}
#endif /* !TARGET_AARCH64 */

--
2.25.1
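The FIELD_DP32() calls that dominate the patch above deposit a value into a named field of an ID register. As a sketch of the operation (this is not QEMU's actual macro, which derives the shift and length from the corresponding FIELD() declaration; the bit positions here are just for illustration):

    #include <stdint.h>

    /* Sketch of the deposit done by FIELD_DP32(reg, REG, FIELD, val):
     * clear the field, then OR in the new value; e.g. a 4-bit field
     * at bit 4, such as ID_ISAR5.AES. */
    static uint32_t field_dp32(uint32_t reg, unsigned shift, unsigned len,
                               uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1) << shift;
        return (reg & ~mask) | ((val << shift) & mask);
    }

So, for example, t = FIELD_DP32(t, ID_ISAR5, AES, 2) sets ID_ISAR5.AES to 2, which advertises AES plus PMULL.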
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-33-richard.henderson@linaro.org
[PMM: moved 'ra=%reg_movprfx' here from following patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 5 +++
target/arm/translate-sve.c | 17 ++++++++++
target/arm/vec_helper.c | 67 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 3 ++
4 files changed, 92 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_ZZI(UMIN, umin)

#undef DO_ZZI

+static bool trans_DOT_zzz(DisasContext *s, arg_DOT_zzz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3 * const fns[2][2] = {
+ { gen_helper_gvec_sdot_b, gen_helper_gvec_sdot_h },
+ { gen_helper_gvec_udot_b, gen_helper_gvec_udot_h }
+ };
+
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ vsz, vsz, 0, fns[a->u][a->sz]);
+ }
+ return true;
+}
+
/*
*** SVE Floating Point Multiply-Add Indexed Group
*/
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
clear_tail(d, opr_sz, simd_maxsz(desc));
}

+/* Integer 8 and 16-bit dot-product.
+ *
+ * Note that for the loops herein, host endianness does not matter
+ * with respect to the ordering of data within the 64-bit lanes.
+ * All elements are treated equally, no matter where they are.
+ */
+
+void HELPER(gvec_sdot_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint32_t *d = vd;
+ int8_t *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 4; ++i) {
+ d[i] += n[i * 4 + 0] * m[i * 4 + 0]
+ + n[i * 4 + 1] * m[i * 4 + 1]
+ + n[i * 4 + 2] * m[i * 4 + 2]
+ + n[i * 4 + 3] * m[i * 4 + 3];
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint32_t *d = vd;
+ uint8_t *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 4; ++i) {
+ d[i] += n[i * 4 + 0] * m[i * 4 + 0]
+ + n[i * 4 + 1] * m[i * 4 + 1]
+ + n[i * 4 + 2] * m[i * 4 + 2]
+ + n[i * 4 + 3] * m[i * 4 + 3];
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint64_t *d = vd;
+ int16_t *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 8; ++i) {
+ d[i] += (int64_t)n[i * 4 + 0] * m[i * 4 + 0]
+ + (int64_t)n[i * 4 + 1] * m[i * 4 + 1]
+ + (int64_t)n[i * 4 + 2] * m[i * 4 + 2]
+ + (int64_t)n[i * 4 + 3] * m[i * 4 + 3];
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_udot_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint64_t *d = vd;
+ uint16_t *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 8; ++i) {
+ d[i] += (uint64_t)n[i * 4 + 0] * m[i * 4 + 0]
+ + (uint64_t)n[i * 4 + 1] * m[i * 4 + 1]
+ + (uint64_t)n[i * 4 + 2] * m[i * 4 + 2]
+ + (uint64_t)n[i * 4 + 3] * m[i * 4 + 3];
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
void *vfpst, uint32_t desc)
{
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
# SVE integer multiply immediate (unpredicated)
MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s

+# SVE integer dot product (unpredicated)
+DOT_zzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 ra=%reg_movprfx
+
# SVE floating-point complex add (predicated)
FCADD 01100100 esz:2 00000 rot:1 100 pg:3 rm:5 rd:5 \
rn=%reg_movprfx
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

We set this for qemu-system-aarch64, but failed to do so
for the strictly 32-bit emulation.

Fixes: 3bec78447a9 ("target/arm: Provide ARMv8.4-PMU in '-cpu max'")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu_tcg.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
cpu->isar.id_pfr2 = t;

+ t = cpu->isar.id_dfr0;
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+ cpu->isar.id_dfr0 = t;
+
#ifdef CONFIG_USER_ONLY
/*
* Break with true ARMv8 and add back old-style VFP short-vector support.
--
2.25.1
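To make the dot-product semantics in the 2018 patch concrete: each 32-bit (or 64-bit) destination lane accumulates the products of four consecutive 8-bit (or 16-bit) elements. A scalar check of one signed-byte lane, with made-up values (illustration only, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int8_t n[4] = { 1, -2, 3, -4 }, m[4] = { 5, 6, 7, 8 };
        int32_t d = 10;                   /* existing accumulator value */

        for (int i = 0; i < 4; i++) {
            d += n[i] * m[i];             /* 5 - 12 + 21 - 32 = -18 */
        }
        assert(d == -8);
        return 0;
    }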
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 42 +++++++++++++++++++++++++++++++++++++
target/arm/sve_helper.c | 43 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 43 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 10 +++++++++
4 files changed, 138 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(sve_fadda_d, TCG_CALL_NO_RWG,
i64, i64, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_fcmge0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmge0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmge0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmgt0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmgt0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmgt0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmlt0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmlt0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmlt0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmle0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmle0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmle0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmeq0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmeq0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmeq0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmne0_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmne0_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmne0_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \

#define DO_FCMGE(TYPE, X, Y, ST) TYPE##_compare(Y, X, ST) <= 0
#define DO_FCMGT(TYPE, X, Y, ST) TYPE##_compare(Y, X, ST) < 0
+#define DO_FCMLE(TYPE, X, Y, ST) TYPE##_compare(X, Y, ST) <= 0
+#define DO_FCMLT(TYPE, X, Y, ST) TYPE##_compare(X, Y, ST) < 0
#define DO_FCMEQ(TYPE, X, Y, ST) TYPE##_compare_quiet(X, Y, ST) == 0
#define DO_FCMNE(TYPE, X, Y, ST) TYPE##_compare_quiet(X, Y, ST) != 0
#define DO_FCMUO(TYPE, X, Y, ST) \
@@ -XXX,XX +XXX,XX @@ DO_FPCMP_PPZZ_ALL(sve_facgt, DO_FACGT)
#undef DO_FPCMP_PPZZ_H
#undef DO_FPCMP_PPZZ

+/* One operand floating-point comparison against zero, controlled
+ * by a predicate.
+ */
+#define DO_FPCMP_PPZ0(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, \
+ void *status, uint32_t desc) \
+{ \
+ intptr_t i = simd_oprsz(desc), j = (i - 1) >> 6; \
+ uint64_t *d = vd, *g = vg; \
+ do { \
+ uint64_t out = 0, pg = g[j]; \
+ do { \
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
+ if ((pg >> (i & 63)) & 1) { \
+ TYPE nn = *(TYPE *)(vn + H(i)); \
+ out |= OP(TYPE, nn, 0, status); \
+ } \
+ } while (i & 63); \
+ d[j--] = out; \
+ } while (i > 0); \
+}
+
+#define DO_FPCMP_PPZ0_H(NAME, OP) \
+ DO_FPCMP_PPZ0(NAME##_h, float16, H1_2, OP)
+#define DO_FPCMP_PPZ0_S(NAME, OP) \
+ DO_FPCMP_PPZ0(NAME##_s, float32, H1_4, OP)
+#define DO_FPCMP_PPZ0_D(NAME, OP) \
+ DO_FPCMP_PPZ0(NAME##_d, float64, , OP)
+
+#define DO_FPCMP_PPZ0_ALL(NAME, OP) \
+ DO_FPCMP_PPZ0_H(NAME, OP) \
+ DO_FPCMP_PPZ0_S(NAME, OP) \
+ DO_FPCMP_PPZ0_D(NAME, OP)
+
+DO_FPCMP_PPZ0_ALL(sve_fcmge0, DO_FCMGE)
+DO_FPCMP_PPZ0_ALL(sve_fcmgt0, DO_FCMGT)
+DO_FPCMP_PPZ0_ALL(sve_fcmle0, DO_FCMLE)
+DO_FPCMP_PPZ0_ALL(sve_fcmlt0, DO_FCMLT)
+DO_FPCMP_PPZ0_ALL(sve_fcmeq0, DO_FCMEQ)
+DO_FPCMP_PPZ0_ALL(sve_fcmne0, DO_FCMNE)
+
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Share the code to set AArch32 max features so that we no
longer have code drift between qemu{-system,}-{arm,aarch64}.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 2 +
target/arm/cpu64.c | 50 +-----------------
target/arm/cpu_tcg.c | 114 ++++++++++++++++++++++-------------------
3 files changed, 65 insertions(+), 101 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
#endif

+void aa32_max_features(ARMCPU *cpu);
+
#endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
uint64_t t;
- uint32_t u;

if (kvm_enabled() || hvf_enabled()) {
/* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
cpu->isar.id_aa64zfr0 = t;

- /* Replicate the same data to the 32-bit id registers. */
- u = cpu->isar.id_isar5;
- u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
- u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
- u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
- u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
- u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
- u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
- cpu->isar.id_isar5 = u;
-
- u = cpu->isar.id_isar6;
- u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
- u = FIELD_DP32(u, ID_ISAR6, DP, 1);
- u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
- u = FIELD_DP32(u, ID_ISAR6, SB, 1);
- u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
- u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
- u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
- cpu->isar.id_isar6 = u;
-
- u = cpu->isar.id_pfr0;
- u = FIELD_DP32(u, ID_PFR0, DIT, 1);
- cpu->isar.id_pfr0 = u;
-
- u = cpu->isar.id_pfr2;
- u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
- cpu->isar.id_pfr2 = u;
-
- u = cpu->isar.id_mmfr3;
- u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
- cpu->isar.id_mmfr3 = u;
-
- u = cpu->isar.id_mmfr4;
- u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
- u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
- u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
- u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
- cpu->isar.id_mmfr4 = u;
-
t = cpu->isar.id_aa64dfr0;
t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
cpu->isar.id_aa64dfr0 = t;

- u = cpu->isar.id_dfr0;
- u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
- cpu->isar.id_dfr0 = u;
-
- u = cpu->isar.mvfr1;
- u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */
- u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
- cpu->isar.mvfr1 = u;
+ /* Replicate the same data to the 32-bit id registers. */
+ aa32_max_features(cpu);

#ifdef CONFIG_USER_ONLY
/*
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
#endif
#include "cpregs.h"

+
+/* Share AArch32 -cpu max features with AArch64. */
+void aa32_max_features(ARMCPU *cpu)
+{
+ uint32_t t;
+
+ /* Add additional features supported by QEMU */
+ t = cpu->isar.id_isar5;
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+ cpu->isar.id_isar5 = t;
+
+ t = cpu->isar.id_isar6;
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+ cpu->isar.id_isar6 = t;
+
+ t = cpu->isar.mvfr1;
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+ cpu->isar.mvfr1 = t;
+
+ t = cpu->isar.mvfr2;
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+ cpu->isar.mvfr2 = t;
+
+ t = cpu->isar.id_mmfr3;
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+ cpu->isar.id_mmfr3 = t;
+
+ t = cpu->isar.id_mmfr4;
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+ cpu->isar.id_mmfr4 = t;
+
+ t = cpu->isar.id_pfr0;
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = t;
+
+ t = cpu->isar.id_pfr2;
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+ cpu->isar.id_pfr2 = t;
+
+ t = cpu->isar.id_dfr0;
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+ cpu->isar.id_dfr0 = t;
+}
+
#ifndef CONFIG_USER_ONLY
static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
static void arm_max_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
- uint32_t t;

/* aarch64_a57_initfn, advertising none of the aarch64 features */
cpu->dtb_compatible = "arm,cortex-a57";
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
define_cortex_a72_a57_a53_cp_reginfo(cpu);

- /* Add additional features supported by QEMU */
- t = cpu->isar.id_isar5;
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+ t = cpu->isar.id_mmfr3;
144
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
145
+ cpu->isar.id_mmfr3 = t;
146
+
147
+ t = cpu->isar.id_mmfr4;
148
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
149
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
150
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
151
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
152
+ cpu->isar.id_mmfr4 = t;
153
+
154
+ t = cpu->isar.id_pfr0;
155
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
156
+ cpu->isar.id_pfr0 = t;
157
+
158
+ t = cpu->isar.id_pfr2;
159
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
160
+ cpu->isar.id_pfr2 = t;
161
+
162
+ t = cpu->isar.id_dfr0;
163
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
164
+ cpu->isar.id_dfr0 = t;
104
+}
165
+}
105
+
166
+
106
+#define DO_FPCMP_PPZ0_H(NAME, OP) \
167
#ifndef CONFIG_USER_ONLY
107
+ DO_FPCMP_PPZ0(NAME##_h, float16, H1_2, OP)
168
static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
108
+#define DO_FPCMP_PPZ0_S(NAME, OP) \
169
{
109
+ DO_FPCMP_PPZ0(NAME##_s, float32, H1_4, OP)
170
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
110
+#define DO_FPCMP_PPZ0_D(NAME, OP) \
171
static void arm_max_initfn(Object *obj)
111
+ DO_FPCMP_PPZ0(NAME##_d, float64, , OP)
172
{
112
+
173
ARMCPU *cpu = ARM_CPU(obj);
113
+#define DO_FPCMP_PPZ0_ALL(NAME, OP) \
174
- uint32_t t;
114
+ DO_FPCMP_PPZ0_H(NAME, OP) \
175
115
+ DO_FPCMP_PPZ0_S(NAME, OP) \
176
/* aarch64_a57_initfn, advertising none of the aarch64 features */
116
+ DO_FPCMP_PPZ0_D(NAME, OP)
177
cpu->dtb_compatible = "arm,cortex-a57";
117
+
178
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
118
+DO_FPCMP_PPZ0_ALL(sve_fcmge0, DO_FCMGE)
179
cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
119
+DO_FPCMP_PPZ0_ALL(sve_fcmgt0, DO_FCMGT)
180
define_cortex_a72_a57_a53_cp_reginfo(cpu);
120
+DO_FPCMP_PPZ0_ALL(sve_fcmle0, DO_FCMLE)
181
121
+DO_FPCMP_PPZ0_ALL(sve_fcmlt0, DO_FCMLT)
182
- /* Add additional features supported by QEMU */
122
+DO_FPCMP_PPZ0_ALL(sve_fcmeq0, DO_FCMEQ)
183
- t = cpu->isar.id_isar5;
123
+DO_FPCMP_PPZ0_ALL(sve_fcmne0, DO_FCMNE)
184
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
124
+
185
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
125
/*
186
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
126
* Load contiguous data, protected by a governing predicate.
187
- t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
127
*/
188
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
128
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
189
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
129
index XXXXXXX..XXXXXXX 100644
190
- cpu->isar.id_isar5 = t;
130
--- a/target/arm/translate-sve.c
191
-
131
+++ b/target/arm/translate-sve.c
192
- t = cpu->isar.id_isar6;
132
@@ -XXX,XX +XXX,XX @@ static bool trans_FRSQRTE(DisasContext *s, arg_rr_esz *a, uint32_t insn)
193
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
133
return true;
194
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
134
}
195
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
135
196
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
136
+/*
197
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
137
+ *** SVE Floating Point Compare with Zero Group
198
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
138
+ */
199
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
139
+
200
- cpu->isar.id_isar6 = t;
140
+static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
201
-
141
+ gen_helper_gvec_3_ptr *fn)
202
- t = cpu->isar.mvfr1;
142
+{
203
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
143
+ unsigned vsz = vec_full_reg_size(s);
204
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
144
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
205
- cpu->isar.mvfr1 = t;
145
+
206
-
146
+ tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
207
- t = cpu->isar.mvfr2;
147
+ vec_full_reg_offset(s, a->rn),
208
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
148
+ pred_full_reg_offset(s, a->pg),
209
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
149
+ status, vsz, vsz, 0, fn);
210
- cpu->isar.mvfr2 = t;
150
+ tcg_temp_free_ptr(status);
211
-
151
+}
212
- t = cpu->isar.id_mmfr3;
152
+
213
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
153
+#define DO_PPZ(NAME, name) \
214
- cpu->isar.id_mmfr3 = t;
154
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
215
-
155
+{ \
216
- t = cpu->isar.id_mmfr4;
156
+ static gen_helper_gvec_3_ptr * const fns[3] = { \
217
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
157
+ gen_helper_sve_##name##_h, \
218
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
158
+ gen_helper_sve_##name##_s, \
219
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
159
+ gen_helper_sve_##name##_d, \
220
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
160
+ }; \
221
- cpu->isar.id_mmfr4 = t;
161
+ if (a->esz == 0) { \
222
-
162
+ return false; \
223
- t = cpu->isar.id_pfr0;
163
+ } \
224
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
164
+ if (sve_access_check(s)) { \
225
- cpu->isar.id_pfr0 = t;
165
+ do_ppz_fp(s, a, fns[a->esz - 1]); \
226
-
166
+ } \
227
- t = cpu->isar.id_pfr2;
167
+ return true; \
228
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
168
+}
229
- cpu->isar.id_pfr2 = t;
169
+
230
-
170
+DO_PPZ(FCMGE_ppz0, fcmge0)
231
- t = cpu->isar.id_dfr0;
171
+DO_PPZ(FCMGT_ppz0, fcmgt0)
232
- t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
172
+DO_PPZ(FCMLE_ppz0, fcmle0)
233
- cpu->isar.id_dfr0 = t;
173
+DO_PPZ(FCMLT_ppz0, fcmlt0)
234
+ aa32_max_features(cpu);
174
+DO_PPZ(FCMEQ_ppz0, fcmeq0)
235
175
+DO_PPZ(FCMNE_ppz0, fcmne0)
236
#ifdef CONFIG_USER_ONLY
176
+
237
/*
177
+#undef DO_PPZ
178
+
179
/*
180
*** SVE Floating Point Accumulating Reduction Group
181
*/
182
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
183
index XXXXXXX..XXXXXXX 100644
184
--- a/target/arm/sve.decode
185
+++ b/target/arm/sve.decode
186
@@ -XXX,XX +XXX,XX @@
187
# One register operand, with governing predicate, vector element size
188
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
189
@rd_pg4_pn ........ esz:2 ... ... .. pg:4 . rn:4 rd:5 &rpr_esz
190
+@pd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 . rd:4 &rpr_esz
191
192
# One register operand, with governing predicate, no vector element size
193
@rd_pg_rn_e0 ........ .. ... ... ... pg:3 rn:5 rd:5 &rpr_esz esz=0
194
@@ -XXX,XX +XXX,XX @@ FMINV 01100101 .. 000 111 001 ... ..... ..... @rd_pg_rn
195
FRECPE 01100101 .. 001 110 001100 ..... ..... @rd_rn
196
FRSQRTE 01100101 .. 001 111 001100 ..... ..... @rd_rn
197
198
+### SVE FP Compare with Zero Group
199
+
200
+FCMGE_ppz0 01100101 .. 0100 00 001 ... ..... 0 .... @pd_pg_rn
201
+FCMGT_ppz0 01100101 .. 0100 00 001 ... ..... 1 .... @pd_pg_rn
202
+FCMLT_ppz0 01100101 .. 0100 01 001 ... ..... 0 .... @pd_pg_rn
203
+FCMLE_ppz0 01100101 .. 0100 01 001 ... ..... 1 .... @pd_pg_rn
204
+FCMEQ_ppz0 01100101 .. 0100 10 001 ... ..... 0 .... @pd_pg_rn
205
+FCMNE_ppz0 01100101 .. 0100 11 001 ... ..... 0 .... @pd_pg_rn
206
+
207
### SVE FP Accumulating Reduction Group
208
209
# SVE floating-point serial reduction (predicated)
210
--
238
--
211
2.17.1
239
2.25.1
212
213
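
A note on the helper used throughout this series: FIELD_DP32()/FIELD_DP64()
deposit a value into a named register field declared with FIELD() in
target/arm/cpu.h.  As a rough sketch of the semantics, assuming a declaration
like FIELD(ID_ISAR5, AES, 4, 4), i.e. shift 4 and length 4 (the function name
below is illustrative only, not QEMU code):

    static inline uint32_t field_dp32_sketch(uint32_t reg, unsigned shift,
                                             unsigned len, uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1) << shift;     /* mask for the field */
        return (reg & ~mask) | ((val << shift) & mask); /* clear, then deposit */
    }

so aa32_max_features() never disturbs neighbouring fields when it raises one
feature's value.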
From: Richard Henderson <richard.henderson@linaro.org>

Update the legacy feature names to the current names.
Provide feature names for id changes that were not marked.
Sort the field updates into increasing bitfield order.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c   | 100 +++++++++++++++++++++----------------------
 target/arm/cpu_tcg.c |  48 ++++++++++-----------
 2 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->midr = t;
 
     t = cpu->isar.id_aa64isar0;
-    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
     t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
-    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
-    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
+    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); /* FEAT_SM3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); /* FEAT_SM4 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); /* FEAT_DotProd */
+    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* FEAT_FlagM2 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
+    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); /* FEAT_RNG */
     cpu->isar.id_aa64isar0 = t;
 
     t = cpu->isar.id_aa64isar1;
-    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
-    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); /* FEAT_DPB2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); /* FEAT_FCMA */
+    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* FEAT_LRCPC2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); /* FEAT_FRINTTS */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
     cpu->isar.id_aa64isar1 = t;
 
     t = cpu->isar.id_aa64pfr0;
+    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
+    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
-    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
-    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
+    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); /* FEAT_BTI */
+    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); /* FEAT_SSBS2 */
     /*
      * Begin with full support for MTE. This will be downgraded to MTE=0
      * during realize if the board provides no tag memory, much like
      * we do for EL2 with the virtualization=on property.
      */
-    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
+    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
     cpu->isar.id_aa64pfr1 = t;
 
     t = cpu->isar.id_aa64mmfr0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64mmfr0 = t;
 
     t = cpu->isar.id_aa64mmfr1;
-    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
-    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
+    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
+    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
+    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
     cpu->isar.id_aa64mmfr1 = t;
 
     t = cpu->isar.id_aa64mmfr2;
-    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
-    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
-    t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
-    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
-    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
+    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
+    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
+    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
     cpu->isar.id_aa64mmfr2 = t;
 
     t = cpu->isar.id_aa64zfr0;
     t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
-    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* FEAT_SVE_PMULL128 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); /* FEAT_SVE_BitPerm */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); /* FEAT_SVE_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); /* FEAT_SVE_SM4 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); /* FEAT_I8MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); /* FEAT_F32MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); /* FEAT_F64MM */
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
     /* Replicate the same data to the 32-bit id registers.  */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 
     /* Add additional features supported by QEMU */
     t = cpu->isar.id_isar5;
-    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); /* FEAT_SHA256 */
     t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); /* FEAT_FCMA */
     cpu->isar.id_isar5 = t;
 
     t = cpu->isar.id_isar6;
-    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1); /* FEAT_DotProd */
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1); /* FEAT_SB */
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1); /* FEAT_AA32BF16 */
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); /* FEAT_AA32I8MM */
     cpu->isar.id_isar6 = t;
 
     t = cpu->isar.mvfr1;
-    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* FEAT_FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* FEAT_FP16 */
     cpu->isar.mvfr1 = t;
 
     t = cpu->isar.mvfr2;
-    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
     cpu->isar.mvfr2 = t;
 
     t = cpu->isar.id_mmfr3;
-    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* FEAT_PAN2 */
     cpu->isar.id_mmfr3 = t;
 
     t = cpu->isar.id_mmfr4;
-    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
-    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX */
     cpu->isar.id_mmfr4 = t;
 
     t = cpu->isar.id_pfr0;
-    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
     cpu->isar.id_pfr0 = t;
 
     t = cpu->isar.id_pfr2;
-    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
 
--
2.25.1
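
Each /* FEAT_* */ comment above ties one field value to one architectural
feature; the isar_feature predicates that consume these registers extract the
same fields.  A sketch of the FEAT_SHA512 test, assuming ID_AA64ISAR0_EL1.SHA2
occupies bits [15:12] and that value 2 adds SHA512 on top of SHA256 (function
name illustrative, not the literal QEMU code):

    static inline bool isar_feature_aa64_sha512_sketch(uint64_t id_aa64isar0)
    {
        /* SHA2, bits [15:12]: 1 = SHA256 only, 2 = SHA256 + SHA512 */
        return ((id_aa64isar0 >> 12) & 0xf) >= 2;
    }

This is why sorting the updates into bitfield order is purely cosmetic: the
consumers address fields by position, not by the order they were deposited.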
From: Richard Henderson <richard.henderson@linaro.org>

Use FIELD_DP{32,64} to manipulate id_pfr1 and id_aa64pfr0
during arm_cpu_realizefn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220506180242.216785-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          */
         unset_feature(env, ARM_FEATURE_EL3);
 
-        /* Disable the security extension feature bits in the processor feature
-         * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
+        /*
+         * Disable the security extension feature bits in the processor
+         * feature registers as well.
          */
-        cpu->isar.id_pfr1 &= ~0xf0;
-        cpu->isar.id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL3, 0);
     }
 
     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
-        /* Disable the hypervisor feature bits in the processor feature
-         * registers if we don't have EL2. These are id_pfr1[15:12] and
-         * id_aa64pfr0_el1[11:8].
+        /*
+         * Disable the hypervisor feature bits in the processor feature
+         * registers if we don't have EL2.
          */
-        cpu->isar.id_aa64pfr0 &= ~0xf00;
-        cpu->isar.id_pfr1 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL2, 0);
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
+                                       ID_PFR1, VIRTUALIZATION, 0);
     }
 
 #ifndef CONFIG_USER_ONLY
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

The only portion of FEAT_Debugv8p2 that is relevant to QEMU
is CONTEXTIDR_EL2, which is also conditionally implemented
with FEAT_VHE.  The rest of the debug extension concerns the
External debug interface, which is outside the scope of QEMU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu.c              | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 2 ++
 4 files changed, 5 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BTI (Branch Target Identification)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
+- FEAT_Debugv8p2 (Debug changes for v8.2)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * feature registers as well.
          */
         cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
         cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                            ID_AA64PFR0, EL3, 0);
     }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1
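
The FIELD_DP rewrite in the first patch above is intended to be
value-preserving: with ID_PFR1.Security at bits [7:4], depositing 0 clears
exactly the bits the old mask cleared.  A standalone worked check (example
value only):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t pfr1 = 0x00001131;          /* example: Security field = 3 */
        uint32_t old_style = pfr1 & ~0xf0u;  /* removed code path */
        uint32_t mask = 0xfu << 4;           /* ID_PFR1.Security, bits [7:4] */
        uint32_t new_style = pfr1 & ~mask;   /* FIELD_DP32(..., SECURITY, 0) */
        assert(old_style == new_style && new_style == 0x00001101);
        return 0;
    }

The named-field form also removes the risk of hard-coded masks drifting out of
sync with the architectural field definitions.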
From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns changes to the External Debug interface,
with Secure and Non-secure access to the debug registers, and all
of it is outside the scope of QEMU.  Indicating support for this
is mandatory with FEAT_SEL2, which we do implement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 2 +-
 target/arm/cpu_tcg.c          | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
+- FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 9); /* FEAT_Debugv8p4 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
-    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 9); /* FEAT_Debugv8p4 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 9); /* FEAT_Debugv8p4 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1
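
For context on the values 8 and 9 used above: DEBUGVER (and the AArch32
CopDbg/CopSDbg equivalents) is an enumeration, not a bitmask.  The encodings
touched by these two patches, per the Arm ARM (a sketch; other defined values
are omitted and the enumerator names are illustrative):

    /* ID_AA64DFR0_EL1.DebugVer / ID_DFR0.CopDbg encodings (partial) */
    enum {
        DEBUGVER_ARMV8_0 = 6,   /* baseline Armv8 debug architecture */
        DEBUGVER_ARMV8_2 = 8,   /* FEAT_Debugv8p2 (previous patch) */
        DEBUGVER_ARMV8_4 = 9,   /* FEAT_Debugv8p4 (this patch) */
    };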
From: Richard Henderson <richard.henderson@linaro.org>

Add only the system registers required to implement zero error
records.  This means that all values for ERRSELR are out of range,
which means that it and all of the indexed error record registers
need not be implemented.

Add the EL2 registers required for injecting virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  5 +++
 target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         uint64_t tfsr_el[4]; /* tfsre0_el1 is index 0. */
         uint64_t gcr_el1;
         uint64_t rgsr_el1;
+
+        /* Minimal RAS registers */
+        uint64_t disr_el1;
+        uint64_t vdisr_el2;
+        uint64_t vsesr_el2;
     } cp15;
 
     struct {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
       .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
 };
 
+/*
+ * Check for traps to RAS registers, which are controlled
+ * by HCR_EL2.TERR and SCR_EL3.TERR.
+ */
+static CPAccessResult access_terr(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  bool isread)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TERR)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_TERR)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static uint64_t disr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        return env->cp15.vdisr_el2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return 0; /* RAZ/WI */
+    }
+    return env->cp15.disr_el1;
+}
+
+static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        env->cp15.vdisr_el2 = val;
+        return;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return; /* RAZ/WI */
+    }
+    env->cp15.disr_el1 = val;
+}
+
+/*
+ * Minimal RAS implementation with no Error Records.
+ * Which means that all of the Error Record registers:
+ *   ERXADDR_EL1
+ *   ERXCTLR_EL1
+ *   ERXFR_EL1
+ *   ERXMISC0_EL1
+ *   ERXMISC1_EL1
+ *   ERXMISC2_EL1
+ *   ERXMISC3_EL1
+ *   ERXPFGCDN_EL1  (RASv1p1)
+ *   ERXPFGCTL_EL1  (RASv1p1)
+ *   ERXPFGF_EL1    (RASv1p1)
+ *   ERXSTATUS_EL1
+ * and
+ *   ERRSELR_EL1
+ * may generate UNDEFINED, which is the effect we get by not
+ * listing them at all.
+ */
+static const ARMCPRegInfo minimal_ras_reginfo[] = {
+    { .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.disr_el1),
+      .readfn = disr_read, .writefn = disr_write, .raw_writefn = raw_write },
+    { .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
+      .access = PL1_R, .accessfn = access_terr,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vdisr_el2) },
+    { .name = "VSESR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 3,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
+};
+
 /* Return the exception level to which exceptions should be taken
  * via SVEAccessTrap.  If an exception should be routed through
  * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_ssbs, cpu)) {
         define_one_arm_cp_reg(cpu, &ssbs_reginfo);
     }
+    if (cpu_isar_feature(any_ras, cpu)) {
+        define_arm_cp_regs(cpu, minimal_ras_reginfo);
+    }
 
     if (cpu_isar_feature(aa64_vh, cpu) ||
         cpu_isar_feature(aa64_debugv8p2, cpu)) {
--
2.25.1
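
One way to see why omitting ERRSELR_EL1 and the ERX* registers is clean: a RAS
driver is expected to read the record count before selecting a record, and
ERRIDR_EL1 above is a constant 0.  A guest-side sketch of that probe, written
with the explicit system-register encoding (S3_0_C5_C3_0 matches the
.opc0/.opc1/.crn/.crm/.opc2 values in the reginfo above) so it assembles even
without RAS support in the toolchain:

    static inline unsigned long ras_num_error_records(void)
    {
        unsigned long erridr;
        /* ERRIDR_EL1 == S3_0_C5_C3_0: opc0=3, opc1=0, crn=5, crm=3, opc2=0 */
        asm volatile("mrs %0, s3_0_c5_c3_0" : "=r"(erridr));
        return erridr & 0xffff;   /* ERRIDR_EL1.NUM, bits [15:0] */
    }

With NUM == 0 there is never a valid ERRSELR_EL1 index, so leaving the indexed
record registers unregistered (hence UNDEF on access) is a legal
implementation.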
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Enable writes to the TERR and TEA bits when RAS is enabled.
4
These bits are otherwise RES0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
         valid_mask &= ~SCR_NET;
 
+        if (cpu_isar_feature(aa64_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
         if (cpu_isar_feature(aa64_lor, cpu)) {
             valid_mask |= SCR_TLOR;
         }
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
+        if (cpu_isar_feature(aa32_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
         if (cpu_isar_feature(aa64_vh, cpu)) {
             valid_mask |= HCR_E2H;
         }
+        if (cpu_isar_feature(aa64_ras, cpu)) {
+            valid_mask |= HCR_TERR | HCR_TEA;
+        }
         if (cpu_isar_feature(aa64_lor, cpu)) {
             valid_mask |= HCR_TLOR;
         }
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Virtual SError exceptions are raised by setting HCR_EL2.VSE,
and are routed to EL1 just like other virtual exceptions.
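A condensed sketch of the injection flow (not the literal diff; the
function name sketch_inject_vserror is invented here for illustration):

    /* A hypervisor-side write that sets HCR_EL2.VSE ends up raising
     * the new CPU_INTERRUPT_VSERR line; taking the exception later
     * clears VSE again.  arm_cpu_update_vserr() is the helper added
     * by the diff below. */
    static void sketch_inject_vserror(ARMCPU *cpu)
    {
        CPUARMState *env = &cpu->env;

        env->cp15.hcr_el2 |= HCR_VSE;
        arm_cpu_update_vserr(cpu);   /* sets CPU_INTERRUPT_VSERR */
    }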

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  2 ++
 target/arm/internals.h |  8 ++++++++
 target/arm/syndrome.h  |  5 +++++
 target/arm/cpu.c       | 38 +++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c    | 40 +++++++++++++++++++++++++++++++++++++++-
 5 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_LSERR 21 /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
 #define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
+#define EXCP_VSERR 24
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET 1
@@ -XXX,XX +XXX,XX @@ enum {
 #define CPU_INTERRUPT_FIQ CPU_INTERRUPT_TGT_EXT_1
 #define CPU_INTERRUPT_VIRQ CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+/**
+ * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
+ *
+ * Update the CPU_INTERRUPT_VSERR bit in cs->interrupt_request,
+ * following a change to the HCR_EL2.VSE bit.
+ */
+void arm_cpu_update_vserr(ARMCPU *cpu);
+
 /**
  * arm_mmu_idx_el:
  * @env: The cpu environment
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_pcalignment(void)
     return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
 }
 
+static inline uint32_t syn_serror(uint32_t extra)
+{
+    return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
+}
+
 #endif /* TARGET_ARM_SYNDROME_H */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
-         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
+         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
          | CPU_INTERRUPT_EXITTB);
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             return false;
         }
         return !(env->daif & PSTATE_I);
+    case EXCP_VSERR:
+        if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+            /* VIRQs are only taken when hypervized.  */
+            return false;
+        }
+        return !(env->daif & PSTATE_A);
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+    if (interrupt_request & CPU_INTERRUPT_VSERR) {
+        excp_idx = EXCP_VSERR;
+        target_el = 1;
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            /* Taking a virtual abort clears HCR_EL2.VSE */
+            env->cp15.hcr_el2 &= ~HCR_VSE;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+            goto found;
+        }
+    }
     return false;
 
 found:
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+        }
+    }
+}
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }
 
-    /* External aborts are not possible in QEMU so A bit is always clear */
+    if (hcr_el2 & HCR_AMO) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VSERR) {
+            ret |= CPSR_A;
+        }
+    }
+
     return ret;
 }
 
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     g_assert(qemu_mutex_iothread_locked());
     arm_cpu_update_virq(cpu);
     arm_cpu_update_vfiq(cpu);
+    arm_cpu_update_vserr(cpu);
 }
 
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
         [EXCP_LSERR] = "v8M LSERR UsageFault",
         [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
         [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
+        [EXCP_VSERR] = "Virtual SERR",
     };
 
     if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
         mask = CPSR_A | CPSR_I | CPSR_F;
         offset = 4;
         break;
+    case EXCP_VSERR:
+        {
+            /*
+             * Note that this is reported as a data abort, but the DFAR
+             * has an UNKNOWN value.  Construct the SError syndrome from
+             * AET and ExT fields.
+             */
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal, };
+
+            if (extended_addresses_enabled(env)) {
+                env->exception.fsr = arm_fi_to_lfsc(&fi);
+            } else {
+                env->exception.fsr = arm_fi_to_sfsc(&fi);
+            }
+            env->exception.fsr |= env->cp15.vsesr_el2 & 0xd000;
+            A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+            qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x\n",
+                          env->exception.fsr);
+
+            new_mode = ARM_CPU_MODE_ABT;
+            addr = 0x10;
+            mask = CPSR_A | CPSR_I;
+            offset = 8;
+        }
+        break;
     case EXCP_SMC:
         new_mode = ARM_CPU_MODE_MON;
         addr = 0x08;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_VFIQ:
         addr += 0x100;
         break;
+    case EXCP_VSERR:
+        addr += 0x180;
+        /* Construct the SError syndrome from IDS and ISS fields. */
+        env->exception.syndrome = syn_serror(env->cp15.vsesr_el2 & 0x1ffffff);
+        env->cp15.esr_el[new_el] = env->exception.syndrome;
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
     }
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Check for and defer any pending virtual SError.
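For illustration, a guest could pair ESB with a read of DISR_EL1 to pick
up a deferred error; this bare-metal sketch is not part of the patch
(ESB is HINT #16 and DISR_EL1 is S3_0_C12_C1_1, per the encodings in this
series, so it assembles even with toolchains that lack the RAS names):

    static inline uint64_t esb_then_read_disr(void)
    {
        uint64_t disr;

        __asm__ volatile("hint #16" ::: "memory");  /* ESB */
        /* DISR_EL1 reports any SError deferred by the ESB above. */
        __asm__ volatile("mrs %0, S3_0_C12_C1_1" : "=r"(disr));
        return disr;
    }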

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h        |  1 +
 target/arm/a32.decode      | 16 ++++++++------
 target/arm/t32.decode      | 18 ++++++++--------
 target/arm/op_helper.c     | 43 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-a64.c | 17 +++++++++++++++
 target/arm/translate.c     | 23 ++++++++++++++++++++
 6 files changed, 103 insertions(+), 15 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(wfe, void, env)
 DEF_HELPER_1(yield, void, env)
 DEF_HELPER_1(pre_hvc, void, env)
 DEF_HELPER_2(pre_smc, void, env, i32)
+DEF_HELPER_1(vesb, void, env)
 
 DEF_HELPER_3(cpsr_write, void, env, i32, i32)
 DEF_HELPER_2(cpsr_write_eret, void, env, i32)
diff --git a/target/arm/a32.decode b/target/arm/a32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/a32.decode
+++ b/target/arm/a32.decode
@@ -XXX,XX +XXX,XX @@ SMULTT .... 0001 0110 .... 0000 .... 1110 .... @rd0mn
 
 {
   {
-    YIELD ---- 0011 0010 0000 1111 ---- 0000 0001
-    WFE   ---- 0011 0010 0000 1111 ---- 0000 0010
-    WFI   ---- 0011 0010 0000 1111 ---- 0000 0011
+    [
+      YIELD ---- 0011 0010 0000 1111 ---- 0000 0001
+      WFE   ---- 0011 0010 0000 1111 ---- 0000 0010
+      WFI   ---- 0011 0010 0000 1111 ---- 0000 0011
 
-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV  ---- 0011 0010 0000 1111 ---- 0000 0100
-    # SEVL ---- 0011 0010 0000 1111 ---- 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV  ---- 0011 0010 0000 1111 ---- 0000 0100
+      # SEVL ---- 0011 0010 0000 1111 ---- 0000 0101
+
+      ESB   ---- 0011 0010 0000 1111 ---- 0001 0000
+    ]
 
     # The canonical nop ends in 00000000, but the whole of the
     # rest of the space executes as nop if otherwise unsupported.
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
 [
   # Hints, and CPS
   {
-    YIELD 1111 0011 1010 1111 1000 0000 0000 0001
-    WFE   1111 0011 1010 1111 1000 0000 0000 0010
-    WFI   1111 0011 1010 1111 1000 0000 0000 0011
+    [
+      YIELD 1111 0011 1010 1111 1000 0000 0000 0001
+      WFE   1111 0011 1010 1111 1000 0000 0000 0010
+      WFI   1111 0011 1010 1111 1000 0000 0000 0011
 
-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV  1111 0011 1010 1111 1000 0000 0000 0100
-    # SEVL 1111 0011 1010 1111 1000 0000 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV  1111 0011 1010 1111 1000 0000 0000 0100
+      # SEVL 1111 0011 1010 1111 1000 0000 0000 0101
 
-    # For M-profile minimal-RAS ESB can be a NOP, which is the
-    # default behaviour since it is in the hint space.
-    # ESB  1111 0011 1010 1111 1000 0000 0001 0000
+      ESB   1111 0011 1010 1111 1000 0000 0001 0000
+    ]
 
     # The canonical nop ends in 0000 0000, but the whole rest
     # of the space is "reserved hint, behaves as nop".
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(probe_access)(CPUARMState *env, target_ulong ptr,
                         access_type, mmu_idx, ra);
     }
 }
+
+/*
+ * This function corresponds to AArch64.vESBOperation().
+ * Note that the AArch32 version is not functionally different.
+ */
+void HELPER(vesb)(CPUARMState *env)
+{
+    /*
+     * The EL2Enabled() check is done inside arm_hcr_el2_eff,
+     * and will return HCR_EL2.VSE == 0, so nothing happens.
+     */
+    uint64_t hcr = arm_hcr_el2_eff(env);
+    bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
+    bool pending = enabled && (hcr & HCR_VSE);
+    bool masked = (env->daif & PSTATE_A);
+
+    /* If VSE pending and masked, defer the exception.  */
+    if (pending && masked) {
+        uint32_t syndrome;
+
+        if (arm_el_is_aa64(env, 1)) {
+            /* Copy across IDS and ISS from VSESR. */
+            syndrome = env->cp15.vsesr_el2 & 0x1ffffff;
+        } else {
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal };
+
+            if (extended_addresses_enabled(env)) {
+                syndrome = arm_fi_to_lfsc(&fi);
+            } else {
+                syndrome = arm_fi_to_sfsc(&fi);
+            }
+            /* Copy across AET and ExT from VSESR. */
+            syndrome |= env->cp15.vsesr_el2 & 0xd000;
+        }
+
+        /* Set VDISR_EL2.A along with the syndrome. */
+        env->cp15.vdisr_el2 = syndrome | (1u << 31);
+
+        /* Clear pending virtual SError */
+        env->cp15.hcr_el2 &= ~HCR_VSE;
+        cpu_reset_interrupt(env_cpu(env), CPU_INTERRUPT_VSERR);
+    }
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
             gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
         }
         break;
+    case 0b10000: /* ESB */
+        /* Without RAS, we must implement this as NOP. */
+        if (dc_isar_feature(aa64_ras, s)) {
+            /*
+             * QEMU does not have a source of physical SErrors,
+             * so we are only concerned with virtual SErrors.
+             * The pseudocode in the ARM for this case is
+             *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+             *      AArch64.vESBOperation();
+             * Most of the condition can be evaluated at translation time.
+             * Test for EL2 present, and defer test for SEL2 to runtime.
+             */
+            if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+                gen_helper_vesb(cpu_env);
+            }
+        }
+        break;
     case 0b11000: /* PACIAZ */
         if (s->pauth_active) {
             gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WFI(DisasContext *s, arg_WFI *a)
     return true;
 }
 
+static bool trans_ESB(DisasContext *s, arg_ESB *a)
+{
+    /*
+     * For M-profile, minimal-RAS ESB can be a NOP.
+     * Without RAS, we must implement this as NOP.
+     */
+    if (!arm_dc_feature(s, ARM_FEATURE_M) && dc_isar_feature(aa32_ras, s)) {
+        /*
+         * QEMU does not have a source of physical SErrors,
+         * so we are only concerned with virtual SErrors.
+         * The pseudocode in the ARM for this case is
+         *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+         *      AArch32.vESBOperation();
+         * Most of the condition can be evaluated at translation time.
+         * Test for EL2 present, and defer test for SEL2 to runtime.
+         */
+        if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+            gen_helper_vesb(cpu_env);
+        }
+    }
+    return true;
+}
+
 static bool trans_NOP(DisasContext *s, arg_NOP *a)
 {
     return true;
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_PMULL (PMULL, PMULL2 instructions)
 - FEAT_PMUv3p1 (PMU Extensions v3.1)
 - FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
 - FEAT_SB (Speculation Barrier)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64pfr0;
     t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1); /* FEAT_RAS */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 
     t = cpu->isar.id_pfr0;
     t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
+    t = FIELD_DP32(t, ID_PFR0, RAS, 1); /* FEAT_RAS */
     cpu->isar.id_pfr0 = t;
 
     t = cpu->isar.id_pfr2;
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

This feature is AArch64 only, and applies to physical SErrors,
which QEMU does not implement, thus the feature is a nop.
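For illustration only (not from this patch), a guest could confirm the
feature via the ID registers; the field position below is recalled from
the Arm ARM and worth double-checking:

    /* FEAT_IESB advertises itself in ID_AA64MMFR2_EL1.IESB,
     * bits [15:12] (assumed field position). */
    static inline bool cpu_has_iesb(void)
    {
        uint64_t mmfr2;

        __asm__("mrs %0, ID_AA64MMFR2_EL1" : "=r"(mmfr2));
        return ((mmfr2 >> 12) & 0xf) != 0;
    }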

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_IESB (Implicit error synchronization event)
 - FEAT_JSCVT (JavaScript conversion instructions)
 - FEAT_LOR (Limited ordering regions)
 - FEAT_LPA (Large Physical Address space)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64mmfr2;
     t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
     t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns branch speculation, which TCG does
not implement. Thus we can trivially enable this feature.
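Again purely illustrative (not from this patch), guest code can read the
advertised level; the field position below is recalled from the Arm ARM
and worth double-checking:

    /* FEAT_CSV2 advertises itself in ID_AA64PFR0_EL1.CSV2,
     * bits [59:56] (assumed field position). */
    static inline unsigned cpu_csv2_level(void)
    {
        uint64_t pfr0;

        __asm__("mrs %0, ID_AA64PFR0_EL1" : "=r"(pfr0));
        return (pfr0 >> 56) & 0xf;
    }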
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20220506180242.216785-20-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180629001538.11415-4-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/cpu64.c | 9 ---------
11
docs/system/arm/emulation.rst | 1 +
13
1 file changed, 9 deletions(-)
12
target/arm/cpu64.c | 1 +
13
target/arm/cpu_tcg.c | 1 +
14
3 files changed, 3 insertions(+)
14
15
16
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
index XXXXXXX..XXXXXXX 100644
18
--- a/docs/system/arm/emulation.rst
19
+++ b/docs/system/arm/emulation.rst
20
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
- FEAT_BBM at level 2 (Translation table break-before-make levels)
22
- FEAT_BF16 (AArch64 BFloat16 instructions)
23
- FEAT_BTI (Branch Target Identification)
24
+- FEAT_CSV2 (Cache speculation variant 2)
25
- FEAT_DIT (Data Independent Timing instructions)
26
- FEAT_DPB (DC CVAP instruction)
27
- FEAT_Debugv8p2 (Debug changes for v8.2)
15
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
28
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
16
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu64.c
30
--- a/target/arm/cpu64.c
18
+++ b/target/arm/cpu64.c
31
+++ b/target/arm/cpu64.c
19
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
32
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
20
* whereas the architecture requires them to be present in both if
33
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
21
* present in either.
34
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
22
*/
35
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
23
- set_feature(&cpu->env, ARM_FEATURE_V8);
36
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1); /* FEAT_CSV2 */
24
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
37
cpu->isar.id_aa64pfr0 = t;
25
- set_feature(&cpu->env, ARM_FEATURE_NEON);
38
26
- set_feature(&cpu->env, ARM_FEATURE_AARCH64);
39
t = cpu->isar.id_aa64pfr1;
27
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
40
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
28
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
41
index XXXXXXX..XXXXXXX 100644
29
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
42
--- a/target/arm/cpu_tcg.c
30
set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
43
+++ b/target/arm/cpu_tcg.c
31
set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
44
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
32
set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
45
cpu->isar.id_mmfr4 = t;
33
set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
46
34
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
47
t = cpu->isar.id_pfr0;
35
- set_feature(&cpu->env, ARM_FEATURE_CRC);
48
+ t = FIELD_DP32(t, ID_PFR0, CSV2, 2); /* FEAT_CVS2 */
36
set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
49
t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
37
set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
50
t = FIELD_DP32(t, ID_PFR0, RAS, 1); /* FEAT_RAS */
38
set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
51
cpu->isar.id_pfr0 = t;
39
--
52
--
40
2.17.1
53
2.25.1
41
42
diff view generated by jsdifflib
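As a standalone sketch (not QEMU code), the field-deposit operation performed by the FIELD_DP64() calls in the patch above can be modelled as follows; QEMU's real macro lives in hw/registerfields.h, and the CSV2 bit position is from the ARM ARM (ID_AA64PFR0_EL1.CSV2 occupies bits [59:56]):

    #include <stdint.h>

    /* Deposit val into a len-bit field of reg starting at bit shift. */
    static uint64_t field_dp64(uint64_t reg, unsigned shift, unsigned len,
                               uint64_t val)
    {
        uint64_t mask = ((1ULL << len) - 1) << shift;
        return (reg & ~mask) | ((val << shift) & mask);
    }

    /* Advertising FEAT_CSV2 amounts to setting that ID register field: */
    /* pfr0 = field_dp64(pfr0, 56, 4, 1); */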
From: Richard Henderson <richard.henderson@linaro.org>

This register was added to aa32 state by ARMv8.2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 +
 target/arm/cpu.c | 4 ++++
 target/arm/cpu64.c | 2 ++
 target/arm/helper.c | 5 ++---
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
 uint32_t id_isar3;
 uint32_t id_isar4;
 uint32_t id_isar5;
+ uint32_t id_isar6;
 uint64_t id_aa64pfr0;
 uint64_t id_aa64pfr1;
 uint64_t id_aa64dfr0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
 cpu->id_isar3 = 0x01111110;
 cpu->id_isar4 = 0x01310102;
 cpu->id_isar5 = 0x00000000;
+ cpu->id_isar6 = 0x00000000;
 }

 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
 cpu->id_isar3 = 0x01111110;
 cpu->id_isar4 = 0x01310102;
 cpu->id_isar5 = 0x00000000;
+ cpu->id_isar6 = 0x00000000;
 }

 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
 cpu->id_isar3 = 0x01111131;
 cpu->id_isar4 = 0x01310132;
 cpu->id_isar5 = 0x00000000;
+ cpu->id_isar6 = 0x00000000;
 cpu->clidr = 0x00000000;
 cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
 cpu->id_isar3 = 0x01112131;
 cpu->id_isar4 = 0x0010142;
 cpu->id_isar5 = 0x0;
+ cpu->id_isar6 = 0x0;
 cpu->mp_is_up = true;
 cpu->pmsav7_dregion = 16;
 define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
 cpu->id_isar3 = 0x01112131;
 cpu->id_isar4 = 0x00011142;
 cpu->id_isar5 = 0x00011121;
+ cpu->id_isar6 = 0;
 cpu->id_aa64pfr0 = 0x00002222;
 cpu->id_aa64dfr0 = 0x10305106;
 cpu->pmceid0 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
 cpu->id_isar3 = 0x01112131;
 cpu->id_isar4 = 0x00011142;
 cpu->id_isar5 = 0x00011121;
+ cpu->id_isar6 = 0;
 cpu->id_aa64pfr0 = 0x00002222;
 cpu->id_aa64dfr0 = 0x10305106;
 cpu->id_aa64isar0 = 0x00011120;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
 .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
 .access = PL1_R, .type = ARM_CP_CONST,
 .resetvalue = cpu->id_mmfr4 },
- /* 7 is as yet unallocated and must RAZ */
- { .name = "ID_ISAR7_RESERVED", .state = ARM_CP_STATE_BOTH,
+ { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
 .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
 .access = PL1_R, .type = ARM_CP_CONST,
- .resetvalue = 0 },
+ .resetvalue = cpu->id_isar6 },
 REGINFO_SENTINEL
 };
 define_arm_cp_regs(cpu, v6_idregs);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

There is no branch prediction in TCG, therefore there is no
need to actually include the context number into the predictor.
Therefore all we need to do is add the state for SCXTNUM_ELx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 3 ++
 target/arm/cpu.h | 16 +++++++++
 target/arm/cpu.c | 5 +++
 target/arm/cpu64.c | 3 +-
 target/arm/helper.c | 61 ++++++++++++++++++++++++++++++++++-
 5 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BF16 (AArch64 BFloat16 instructions)
 - FEAT_BTI (Branch Target Identification)
 - FEAT_CSV2 (Cache speculation variant 2)
+- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
+- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
+- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
 ARMPACKey apdb;
 ARMPACKey apga;
 } keys;
+
+ uint64_t scxtnum_el[4];
 #endif

 #if defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_WXN (1U << 19)
 #define SCTLR_ST (1U << 20) /* up to ??, RAZ in v6 */
 #define SCTLR_UWXN (1U << 20) /* v7 onward, AArch32 only */
+#define SCTLR_TSCXT (1U << 20) /* FEAT_CSV2_1p2, AArch64 only */
 #define SCTLR_FI (1U << 21) /* up to v7, v8 RES0 */
 #define SCTLR_IESB (1U << 21) /* v8.2-IESB, AArch64 only */
 #define SCTLR_U (1U << 22) /* up to v6, RAO in v7 */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
 }

+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
+{
+ int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
+ if (key >= 2) {
+ return true; /* FEAT_CSV2_2 */
+ }
+ if (key == 1) {
+ key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
+ return key >= 2; /* FEAT_CSV2_1p2 */
+ }
+ return false;
+}
+
 static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
 */
 env->cp15.gcr_el1 = 0x1ffff;
 }
+ /*
+ * Disable access to SCXTNUM_EL0 from CSV2_1p2.
+ * This is not yet exposed from the Linux kernel in any way.
+ */
+ env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
 #else
 /* Reset into the highest available EL */
 if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
 t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
 t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
- t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1); /* FEAT_CSV2 */
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
 cpu->isar.id_aa64pfr0 = t;

 t = cpu->isar.id_aa64pfr1;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 * we do for EL2 with the virtualization=on property.
 */
 t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
+ t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
 cpu->isar.id_aa64pfr1 = t;

 t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 if (cpu_isar_feature(aa64_mte, cpu)) {
 valid_mask |= SCR_ATA;
 }
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ valid_mask |= SCR_ENSCXT;
+ }
 } else {
 valid_mask &= ~(SCR_RW | SCR_ST);
 if (cpu_isar_feature(aa32_ras, cpu)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
 if (cpu_isar_feature(aa64_mte, cpu)) {
 valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5;
 }
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ valid_mask |= HCR_ENSCXT;
+ }
 }

 /* Clear RES0 bits. */
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
 { K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0),
 "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },

+ { K(3, 0, 13, 0, 7), K(3, 4, 13, 0, 7), K(3, 5, 13, 0, 7),
+ "SCXTNUM_EL1", "SCXTNUM_EL2", "SCXTNUM_EL12",
+ isar_feature_aa64_scxtnum },
+
 /* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
 /* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
 };
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
 },
 };

-#endif
+static CPAccessResult access_scxtnum(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ uint64_t hcr = arm_hcr_el2_eff(env);
+ int el = arm_current_el(env);
+
+ if (el == 0 && !((hcr & HCR_E2H) && (hcr & HCR_TGE))) {
+ if (env->cp15.sctlr_el[1] & SCTLR_TSCXT) {
+ if (hcr & HCR_TGE) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ return CP_ACCESS_TRAP;
+ }
+ } else if (el < 2 && (env->cp15.sctlr_el[2] & SCTLR_TSCXT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 2 && arm_is_el2_enabled(env) && !(hcr & HCR_ENSCXT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 3
+ && arm_feature(env, ARM_FEATURE_EL3)
+ && !(env->cp15.scr_el3 & SCR_ENSCXT)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo scxtnum_reginfo[] = {
+ { .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL0_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
+ { .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL1_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
+ { .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL2_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[2]) },
+ { .name = "SCXTNUM_EL3", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL3_RW,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
+};
+#endif /* TARGET_AARCH64 */

 static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
 bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
 define_arm_cp_regs(cpu, mte_tco_ro_reginfo);
 define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
 }
+
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ define_arm_cp_regs(cpu, scxtnum_reginfo);
+ }
 #endif

 if (cpu_isar_feature(any_predinv, cpu)) {
--
2.25.1
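A quick standalone check of the CSV2/CSV2_FRAC decoding that isar_feature_aa64_scxtnum() above implements (a sketch mirroring the patch, not QEMU code): SCXTNUM state is present for FEAT_CSV2_2 (CSV2 >= 2), or for FEAT_CSV2_1p2 (CSV2 == 1 and CSV2_FRAC >= 2), and absent otherwise.

    #include <assert.h>
    #include <stdbool.h>

    static bool have_scxtnum(int csv2, int csv2_frac)
    {
        if (csv2 >= 2) {
            return true;           /* FEAT_CSV2_2 */
        }
        if (csv2 == 1) {
            return csv2_frac >= 2; /* FEAT_CSV2_1p2 */
        }
        return false;
    }

    int main(void)
    {
        assert(have_scxtnum(2, 0));  /* -cpu max after this patch */
        assert(have_scxtnum(1, 2));  /* CSV2_1p2 */
        assert(!have_scxtnum(1, 1)); /* CSV2_1p1: no SCXTNUM */
        assert(!have_scxtnum(0, 0));
        return 0;
    }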
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 14 ++++++++++++++
 target/arm/sve_helper.c | 8 ++++++++
 target/arm/translate-sve.c | 26 ++++++++++++++++++++++++++
 target/arm/sve.decode | 4 ++++
 4 files changed, 52 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_frintx_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_frintx_d, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_frecpx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frecpx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frecpx_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fsqrt_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fsqrt_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fsqrt_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZ_FP(sve_frintx_h, uint16_t, H1_2, float16_round_to_int)
 DO_ZPZ_FP(sve_frintx_s, uint32_t, H1_4, float32_round_to_int)
 DO_ZPZ_FP(sve_frintx_d, uint64_t, , float64_round_to_int)

+DO_ZPZ_FP(sve_frecpx_h, uint16_t, H1_2, helper_frecpx_f16)
+DO_ZPZ_FP(sve_frecpx_s, uint32_t, H1_4, helper_frecpx_f32)
+DO_ZPZ_FP(sve_frecpx_d, uint64_t, , helper_frecpx_f64)
+
+DO_ZPZ_FP(sve_fsqrt_h, uint16_t, H1_2, float16_sqrt)
+DO_ZPZ_FP(sve_fsqrt_s, uint32_t, H1_4, float32_sqrt)
+DO_ZPZ_FP(sve_fsqrt_d, uint64_t, , float64_sqrt)
+
 DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
 DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
 DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FRINTA(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 return do_frint_mode(s, a, float_round_ties_away);
 }

+static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3_ptr * const fns[3] = {
+ gen_helper_sve_frecpx_h,
+ gen_helper_sve_frecpx_s,
+ gen_helper_sve_frecpx_d
+ };
+ if (a->esz == 0) {
+ return false;
+ }
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+
+static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3_ptr * const fns[3] = {
+ gen_helper_sve_fsqrt_h,
+ gen_helper_sve_fsqrt_s,
+ gen_helper_sve_fsqrt_d
+ };
+ if (a->esz == 0) {
+ return false;
+ }
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+
 static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 {
 return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FRINTA 01100101 .. 000 100 101 ... ..... ..... @rd_pg_rn
 FRINTX 01100101 .. 000 110 101 ... ..... ..... @rd_pg_rn
 FRINTI 01100101 .. 000 111 101 ... ..... ..... @rd_pg_rn

+# SVE floating-point unary operations
+FRECPX 01100101 .. 001 100 101 ... ..... ..... @rd_pg_rn
+FSQRT 01100101 .. 001 101 101 ... ..... ..... @rd_pg_rn
+
 # SVE integer convert to floating-point
 SCVTF_hh 01100101 01 010 01 0 101 ... ..... ..... @rd_pg_rn_e0
 SCVTF_sh 01100101 01 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns cache speculation, which TCG does
not implement. Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/cpu_tcg.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
 - FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
 - FEAT_CSV2_2 (Cache speculation variant 2, version 2)
+- FEAT_CSV3 (Cache speculation variant 3)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
 t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
 t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1); /* FEAT_CSV3 */
 cpu->isar.id_aa64pfr0 = t;

 t = cpu->isar.id_aa64pfr1;
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 cpu->isar.id_pfr0 = t;

 t = cpu->isar.id_pfr2;
+ t = FIELD_DP32(t, ID_PFR2, CSV3, 1); /* FEAT_CSV3 */
 t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
 cpu->isar.id_pfr2 = t;
--
2.25.1
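For readers unfamiliar with the DO_ZPZ_FP pattern used by the FRECPX/FSQRT patch above, a rough scalar model of a predicated unary SVE operation is simply the following (a sketch of the semantics only, not QEMU code):

    #include <math.h>
    #include <stdbool.h>

    /* Apply the op to elements whose governing predicate bit is set;
     * inactive elements of zd keep their previous contents, which is
     * the merging (Pg/M) behaviour the sve_fsqrt_* helpers implement. */
    static void fsqrt_z(double *zd, const double *zn, const bool *pg, int elts)
    {
        for (int i = 0; i < elts; i++) {
            if (pg[i]) {
                zd[i] = sqrt(zn[i]);
            }
        }
    }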
From: Richard Henderson <richard.henderson@linaro.org>

We've already added the helpers with an SVE patch, all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 +
 linux-user/elfload.c | 1 +
 target/arm/cpu.c | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/translate-a64.c | 36 +++++++++++++++++++
 target/arm/translate.c | 74 +++++++++++++++++++++++++++-----------
 6 files changed, 93 insertions(+), 21 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
 ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
 ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
 ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
+ ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
 ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
 ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
 ARM_FEATURE_M_MAIN, /* M profile Main Extension */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
 ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
 GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
 GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
+ GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
 GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
 GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
 #undef GET_FEATURE
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
 set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
 set_feature(&cpu->env, ARM_FEATURE_CRC);
 set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
+ set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
 set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
 #endif
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 set_feature(&cpu->env, ARM_FEATURE_CRC);
 set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
 set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
+ set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
 set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
 set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
 set_feature(&cpu->env, ARM_FEATURE_SVE);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3(DisasContext *s, bool is_q, int rd,
 vec_full_reg_size(s), gvec_op);
 }

+/* Expand a 3-operand operation using an out-of-line helper. */
+static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
+ int rn, int rm, int data, gen_helper_gvec_3 *fn)
+{
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn),
+ vec_full_reg_offset(s, rm),
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
+}
+
 /* Expand a 3-operand + env pointer operation using
 * an out-of-line helper.
 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = ARM_FEATURE_V8_RDM;
 break;
+ case 0x02: /* SDOT (vector) */
+ case 0x12: /* UDOT (vector) */
+ if (size != MO_32) {
+ unallocated_encoding(s);
+ return;
+ }
+ feature = ARM_FEATURE_V8_DOTPROD;
+ break;
 case 0x8: /* FCMLA, #0 */
 case 0x9: /* FCMLA, #90 */
 case 0xa: /* FCMLA, #180 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 return;

+ case 0x2: /* SDOT / UDOT */
+ gen_gvec_op3_ool(s, is_q, rd, rn, rm, 0,
+ u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b);
+ return;
+
 case 0x8: /* FCMLA, #0 */
 case 0x9: /* FCMLA, #90 */
 case 0xa: /* FCMLA, #180 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 return;
 }
 break;
+ case 0x0e: /* SDOT */
+ case 0x1e: /* UDOT */
+ if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+ unallocated_encoding(s);
+ return;
+ }
+ break;
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
 case 0x15: /* FCMLA #180 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 }

 switch (16 * u + opcode) {
+ case 0x0e: /* SDOT */
+ case 0x1e: /* UDOT */
+ gen_gvec_op3_ool(s, is_q, rd, rn, rm, index,
+ u ? gen_helper_gvec_udot_idx_b
+ : gen_helper_gvec_sdot_idx_b);
+ return;
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
 case 0x15: /* FCMLA #180 */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 */
 static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 {
- gen_helper_gvec_3_ptr *fn_gvec_ptr;
- int rd, rn, rm, rot, size, opr_sz;
- TCGv_ptr fpst;
+ gen_helper_gvec_3 *fn_gvec = NULL;
+ gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
+ int rd, rn, rm, opr_sz;
+ int data = 0;
 bool q;

 q = extract32(insn, 6, 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)

 if ((insn & 0xfe200f10) == 0xfc200800) {
 /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
- size = extract32(insn, 20, 1);
- rot = extract32(insn, 23, 2);
+ int size = extract32(insn, 20, 1);
+ data = extract32(insn, 23, 2); /* rot */
 if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
 || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
 return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
 } else if ((insn & 0xfea00f10) == 0xfc800800) {
 /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
- size = extract32(insn, 20, 1);
- rot = extract32(insn, 24, 1);
+ int size = extract32(insn, 20, 1);
+ data = extract32(insn, 24, 1); /* rot */
 if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
 || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
 return 1;
 }
 fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
+ } else if ((insn & 0xfeb00f00) == 0xfc200d00) {
+ /* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
+ bool u = extract32(insn, 4, 1);
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+ return 1;
+ }
+ fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
 } else {
 return 1;
 }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 }

 opr_sz = (1 + q) * 8;
- fpst = get_fpstatus_ptr(1);
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
- vfp_reg_offset(1, rn),
- vfp_reg_offset(1, rm), fpst,
- opr_sz, opr_sz, rot, fn_gvec_ptr);
- tcg_temp_free_ptr(fpst);
+ if (fn_gvec_ptr) {
+ TCGv_ptr fpst = get_fpstatus_ptr(1);
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
+ vfp_reg_offset(1, rn),
+ vfp_reg_offset(1, rm), fpst,
+ opr_sz, opr_sz, data, fn_gvec_ptr);
+ tcg_temp_free_ptr(fpst);
+ } else {
+ tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd),
+ vfp_reg_offset(1, rn),
+ vfp_reg_offset(1, rm),
+ opr_sz, opr_sz, data, fn_gvec);
+ }
 return 0;
 }

@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)

 static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
 {
- gen_helper_gvec_3_ptr *fn_gvec_ptr;
+ gen_helper_gvec_3 *fn_gvec = NULL;
+ gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
 int rd, rn, rm, opr_sz, data;
- TCGv_ptr fpst;
 bool q;

 q = extract32(insn, 6, 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
 data = (index << 2) | rot;
 fn_gvec_ptr = (size ? gen_helper_gvec_fcmlas_idx
 : gen_helper_gvec_fcmlah_idx);
+ } else if ((insn & 0xffb00f00) == 0xfe200d00) {
+ /* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
+ int u = extract32(insn, 4, 1);
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+ return 1;
+ }
+ fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
+ /* rm is just Vm, and index is M. */
+ data = extract32(insn, 5, 1); /* index */
+ rm = extract32(insn, 0, 4);
 } else {
 return 1;
 }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
 }

 opr_sz = (1 + q) * 8;
- fpst = get_fpstatus_ptr(1);
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
- vfp_reg_offset(1, rn),
- vfp_reg_offset(1, rm), fpst,
- opr_sz, opr_sz, data, fn_gvec_ptr);
- tcg_temp_free_ptr(fpst);
+ if (fn_gvec_ptr) {
+ TCGv_ptr fpst = get_fpstatus_ptr(1);
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
+ vfp_reg_offset(1, rn),
+ vfp_reg_offset(1, rm), fpst,
+ opr_sz, opr_sz, data, fn_gvec_ptr);
+ tcg_temp_free_ptr(fpst);
+ } else {
+ tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd),
+ vfp_reg_offset(1, rn),
+ vfp_reg_offset(1, rm),
+ opr_sz, opr_sz, data, fn_gvec);
+ }
 return 0;
 }

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns not merging memory access, which TCG does
not implement. Thus we can trivially enable this feature.
Add a comment to handle_hint for the DGH instruction, but no code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/translate-a64.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
 - FEAT_CSV2_2 (Cache speculation variant 2, version 2)
 - FEAT_CSV3 (Cache speculation variant 3)
+- FEAT_DGH (Data gathering hint)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
 t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
 t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+ t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1); /* FEAT_DGH */
 t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
 cpu->isar.id_aa64isar1 = t;

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
 break;
 case 0b00100: /* SEV */
 case 0b00101: /* SEVL */
+ case 0b00110: /* DGH */
 /* we treat all as NOP at least for now */
 break;
 case 0b00111: /* XPACLRI */
--
2.25.1
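The per-lane arithmetic wired up by the dot-product patch above, as a standalone sketch (the real work happens in the gvec_sdot/udot helpers added by an earlier SVE patch; this model is editorial, not QEMU code): each 32-bit lane accumulates a 4-way product of signed or unsigned bytes.

    #include <stdint.h>

    /* One 32-bit lane of SDOT: acc += sum of four signed 8x8->32 products. */
    static int32_t sdot_lane(int32_t acc, const int8_t n[4], const int8_t m[4])
    {
        for (int i = 0; i < 4; i++) {
            acc += (int32_t)n[i] * (int32_t)m[i];
        }
        return acc;
    }

UDOT is the same loop with uint8_t operands.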
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 103 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 6 +++
 2 files changed, 109 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, uint32_t len,
 tcg_temp_free_i64(t0);
 }

+/* Similarly for stores. */
+static void do_str(DisasContext *s, uint32_t vofs, uint32_t len,
+ int rn, int imm)
+{
+ uint32_t len_align = QEMU_ALIGN_DOWN(len, 8);
+ uint32_t len_remain = len % 8;
+ uint32_t nparts = len / 8 + ctpop8(len_remain);
+ int midx = get_mem_index(s);
+ TCGv_i64 addr, t0;
+
+ addr = tcg_temp_new_i64();
+ t0 = tcg_temp_new_i64();
+
+ /* Note that unpredicated load/store of vector/predicate registers
+ * are defined as a stream of bytes, which equates to little-endian
+ * operations on larger quantities. There is no nice way to force
+ * a little-endian store for aarch64_be-linux-user out of line.
+ *
+ * Attempt to keep code expansion to a minimum by limiting the
+ * amount of unrolling done.
+ */
+ if (nparts <= 4) {
+ int i;
+
+ for (i = 0; i < len_align; i += 8) {
+ tcg_gen_ld_i64(t0, cpu_env, vofs + i);
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + i);
+ tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEQ);
+ }
+ } else {
+ TCGLabel *loop = gen_new_label();
+ TCGv_ptr t2, i = tcg_const_local_ptr(0);
+
+ gen_set_label(loop);
+
+ t2 = tcg_temp_new_ptr();
+ tcg_gen_add_ptr(t2, cpu_env, i);
+ tcg_gen_ld_i64(t0, t2, vofs);
+
+ /* Minimize the number of local temps that must be re-read from
+ * the stack each iteration. Instead, re-compute values other
+ * than the loop counter.
+ */
+ tcg_gen_addi_ptr(t2, i, imm);
+ tcg_gen_extu_ptr_i64(addr, t2);
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, rn));
+ tcg_temp_free_ptr(t2);
+
+ tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEQ);
+
+ tcg_gen_addi_ptr(i, i, 8);
+
+ tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
+ tcg_temp_free_ptr(i);
+ }
+
+ /* Predicate register stores can be any multiple of 2. */
+ if (len_remain) {
+ tcg_gen_ld_i64(t0, cpu_env, vofs + len_align);
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + len_align);
+
+ switch (len_remain) {
+ case 2:
+ case 4:
+ case 8:
+ tcg_gen_qemu_st_i64(t0, addr, midx, MO_LE | ctz32(len_remain));
+ break;
+
+ case 6:
+ tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEUL);
+ tcg_gen_addi_i64(addr, addr, 4);
+ tcg_gen_shri_i64(t0, t0, 32);
+ tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEUW);
+ break;
+
+ default:
+ g_assert_not_reached();
+ }
+ }
+ tcg_temp_free_i64(addr);
+ tcg_temp_free_i64(t0);
+}
+
 static bool trans_LDR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
 {
 if (sve_access_check(s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
 return true;
 }

+static bool trans_STR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
+{
+ if (sve_access_check(s)) {
+ int size = vec_full_reg_size(s);
+ int off = vec_full_reg_offset(s, a->rd);
+ do_str(s, off, size, a->rn, a->imm * size);
+ }
+ return true;
+}
+
+static bool trans_STR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
+{
+ if (sve_access_check(s)) {
+ int size = pred_full_reg_size(s);
+ int off = pred_full_reg_offset(s, a->rd);
+ do_str(s, off, size, a->rn, a->imm * size);
+ }
+ return true;
+}
+
 /*
 *** SVE Memory - Contiguous Load Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LD1RQ_zpri 1010010 .. 00 0.... 001 ... ..... ..... \

 ### SVE Memory Store Group

+# SVE store predicate register
+STR_pri 1110010 11 0. ..... 000 ... ..... 0 .... @pd_rn_i9
+
+# SVE store vector register
+STR_zri 1110010 11 0. ..... 010 ... ..... ..... @rd_rn_i9
+
 # SVE contiguous store (scalar plus immediate)
 # ST1B, ST1H, ST1W, ST1D; require msz <= esz
 ST_zpri 1110010 .. esz:2 0.... 111 ... ..... ..... \
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the a76 for virt and sbsa board use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst | 1 +
 hw/arm/sbsa-ref.c | 1 +
 hw/arm/virt.c | 1 +
 target/arm/cpu64.c | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a53`` (64-bit)
 - ``cortex-a57`` (64-bit)
 - ``cortex-a72`` (64-bit)
+- ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
 static const char * const valid_cpus[] = {
 ARM_CPU_TYPE_NAME("cortex-a57"),
 ARM_CPU_TYPE_NAME("cortex-a72"),
+ ARM_CPU_TYPE_NAME("cortex-a76"),
 ARM_CPU_TYPE_NAME("max"),
 };

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
 ARM_CPU_TYPE_NAME("cortex-a53"),
 ARM_CPU_TYPE_NAME("cortex-a57"),
 ARM_CPU_TYPE_NAME("cortex-a72"),
+ ARM_CPU_TYPE_NAME("cortex-a76"),
 ARM_CPU_TYPE_NAME("a64fx"),
 ARM_CPU_TYPE_NAME("host"),
 ARM_CPU_TYPE_NAME("max"),
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
 define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }

+static void aarch64_a76_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ cpu->dtb_compatible = "arm,cortex-a76";
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+ /* Ordered by B2.4 AArch64 registers by functional group */
+ cpu->clidr = 0x82000023;
+ cpu->ctr = 0x8444C004;
+ cpu->dcz_blocksize = 4;
+ cpu->isar.id_aa64dfr0 = 0x0000000010305408ull;
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+ cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+ cpu->isar.id_aa64pfr1 = 0x0000000000000010ull;
+ cpu->id_afr0 = 0x00000000;
+ cpu->isar.id_dfr0 = 0x04010088;
+ cpu->isar.id_isar0 = 0x02101110;
+ cpu->isar.id_isar1 = 0x13112111;
+ cpu->isar.id_isar2 = 0x21232042;
+ cpu->isar.id_isar3 = 0x01112131;
+ cpu->isar.id_isar4 = 0x00010142;
+ cpu->isar.id_isar5 = 0x01011121;
+ cpu->isar.id_isar6 = 0x00000010;
+ cpu->isar.id_mmfr0 = 0x10201105;
+ cpu->isar.id_mmfr1 = 0x40000000;
+ cpu->isar.id_mmfr2 = 0x01260000;
+ cpu->isar.id_mmfr3 = 0x02122211;
+ cpu->isar.id_mmfr4 = 0x00021110;
+ cpu->isar.id_pfr0 = 0x10010131;
+ cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+ cpu->isar.id_pfr2 = 0x00000011;
+ cpu->midr = 0x414fd0b1; /* r4p1 */
+ cpu->revidr = 0;
+
+ /* From B2.18 CCSIDR_EL1 */
+ cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+ cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+ cpu->ccsidr[2] = 0x707fe03a; /* 512KB L2 cache */
+
+ /* From B2.93 SCTLR_EL3 */
+ cpu->reset_sctlr = 0x30c50838;
+
+ /* From B4.23 ICH_VTR_EL2 */
+ cpu->gic_num_lrs = 4;
+ cpu->gic_vpribits = 5;
+ cpu->gic_vprebits = 5;
+
+ /* From B5.1 AdvSIMD AArch64 register summary */
+ cpu->isar.mvfr0 = 0x10110222;
+ cpu->isar.mvfr1 = 0x13211111;
+ cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
 /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a57", .initfn = aarch64_a57_initfn },
 { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
 { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
+ { .name = "cortex-a76", .initfn = aarch64_a76_initfn },
 { .name = "a64fx", .initfn = aarch64_a64fx_initfn },
 { .name = "max", .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
--
2.25.1
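The len_remain == 6 case in do_str() above stores a little-endian byte stream as one 4-byte store followed by one 2-byte store of bits [47:32]. A host-side sketch of the same byte placement (editorial model, deliberately endian-neutral):

    #include <stdint.h>

    /* Store the low 6 bytes of t0 as a little-endian byte stream,
     * mirroring the MO_LEUL + MO_LEUW store pair in do_str(). */
    static void store6_le(uint8_t *dst, uint64_t t0)
    {
        uint32_t lo = (uint32_t)t0;         /* the MO_LEUL store */
        uint16_t hi = (uint16_t)(t0 >> 32); /* the MO_LEUW store */

        for (int i = 0; i < 4; i++) {
            dst[i] = (uint8_t)(lo >> (8 * i));
        }
        for (int i = 0; i < 2; i++) {
            dst[4 + i] = (uint8_t)(hi >> (8 * i));
        }
    }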
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 52 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 9 +++++++
 2 files changed, 61 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LDNF1_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
 return true;
 }

+static void do_ldrq(DisasContext *s, int zt, int pg, TCGv_i64 addr, int msz)
+{
+ static gen_helper_gvec_mem * const fns[4] = {
+ gen_helper_sve_ld1bb_r, gen_helper_sve_ld1hh_r,
+ gen_helper_sve_ld1ss_r, gen_helper_sve_ld1dd_r,
+ };
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr t_pg;
+ TCGv_i32 desc;
+
+ /* Load the first quadword using the normal predicated load helpers. */
+ desc = tcg_const_i32(simd_desc(16, 16, zt));
+ t_pg = tcg_temp_new_ptr();
+
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+ fns[msz](cpu_env, t_pg, addr, desc);
+
+ tcg_temp_free_ptr(t_pg);
+ tcg_temp_free_i32(desc);
+
+ /* Replicate that first quadword. */
+ if (vsz > 16) {
+ unsigned dofs = vec_full_reg_offset(s, zt);
+ tcg_gen_gvec_dup_mem(4, dofs + 16, dofs, vsz - 16, vsz - 16);
+ }
+}
+
+static bool trans_LD1RQ_zprr(DisasContext *s, arg_rprr_load *a, uint32_t insn)
+{
+ if (a->rm == 31) {
+ return false;
+ }
+ if (sve_access_check(s)) {
+ int msz = dtype_msz(a->dtype);
+ TCGv_i64 addr = new_tmp_a64(s);
+ tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), msz);
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
+ do_ldrq(s, a->rd, a->pg, addr, msz);
+ }
+ return true;
+}
+
+static bool trans_LD1RQ_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
+{
+ if (sve_access_check(s)) {
+ TCGv_i64 addr = new_tmp_a64(s);
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn), a->imm * 16);
+ do_ldrq(s, a->rd, a->pg, addr, dtype_msz(a->dtype));
+ }
+ return true;
+}
+
 static void do_st_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
 int msz, int esz, int nreg)
 {
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LD_zprr 1010010 .. nreg:2 ..... 110 ... ..... ..... @rprr_load_msz
 # LD2B, LD2H, LD2W, LD2D; etc.
 LD_zpri 1010010 .. nreg:2 0.... 111 ... ..... ..... @rpri_load_msz

+# SVE load and broadcast quadword (scalar plus scalar)
+LD1RQ_zprr 1010010 .. 00 ..... 000 ... ..... ..... \
+ @rprr_load_msz nreg=0
+
+# SVE load and broadcast quadword (scalar plus immediate)
+# LD1RQB, LD1RQH, LD1RQS, LD1RQD
+LD1RQ_zpri 1010010 .. 00 0.... 001 ... ..... ..... \
+ @rpri_load_msz nreg=0
+
 ### SVE Memory Store Group

 # SVE contiguous store (scalar plus immediate)
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the n1 for virt and sbsa board use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst | 1 +
 hw/arm/sbsa-ref.c | 1 +
 hw/arm/virt.c | 1 +
 target/arm/cpu64.c | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
+- ``neoverse-n1`` (64-bit)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)

 Note that the default is ``cortex-a15``, so for an AArch64 guest you must
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const char * const valid_cpus[] = {
 ARM_CPU_TYPE_NAME("cortex-a57"),
 ARM_CPU_TYPE_NAME("cortex-a72"),
 ARM_CPU_TYPE_NAME("cortex-a76"),
+ ARM_CPU_TYPE_NAME("neoverse-n1"),
 ARM_CPU_TYPE_NAME("max"),
 };

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
 ARM_CPU_TYPE_NAME("cortex-a72"),
 ARM_CPU_TYPE_NAME("cortex-a76"),
 ARM_CPU_TYPE_NAME("a64fx"),
+ ARM_CPU_TYPE_NAME("neoverse-n1"),
 ARM_CPU_TYPE_NAME("host"),
 ARM_CPU_TYPE_NAME("max"),
 };
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
 cpu->isar.mvfr2 = 0x00000043;
 }

+static void aarch64_neoverse_n1_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ cpu->dtb_compatible = "arm,neoverse-n1";
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+ /* Ordered by B2.4 AArch64 registers by functional group */
+ cpu->clidr = 0x82000023;
+ cpu->ctr = 0x8444c004;
+ cpu->dcz_blocksize = 4;
+ cpu->isar.id_aa64dfr0 = 0x0000000110305408ull;
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+ cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+ cpu->isar.id_aa64pfr1 = 0x0000000000000020ull;
+ cpu->id_afr0 = 0x00000000;
+ cpu->isar.id_dfr0 = 0x04010088;
+ cpu->isar.id_isar0 = 0x02101110;
+ cpu->isar.id_isar1 = 0x13112111;
+ cpu->isar.id_isar2 = 0x21232042;
+ cpu->isar.id_isar3 = 0x01112131;
+ cpu->isar.id_isar4 = 0x00010142;
+ cpu->isar.id_isar5 = 0x01011121;
+ cpu->isar.id_isar6 = 0x00000010;
+ cpu->isar.id_mmfr0 = 0x10201105;
+ cpu->isar.id_mmfr1 = 0x40000000;
+ cpu->isar.id_mmfr2 = 0x01260000;
+ cpu->isar.id_mmfr3 = 0x02122211;
+ cpu->isar.id_mmfr4 = 0x00021110;
+ cpu->isar.id_pfr0 = 0x10010131;
+ cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+ cpu->isar.id_pfr2 = 0x00000011;
+ cpu->midr = 0x414fd0c1; /* r4p1 */
+ cpu->revidr = 0;
+
+ /* From B2.23 CCSIDR_EL1 */
+ cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+ cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+ cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
+
+ /* From B2.98 SCTLR_EL3 */
+ cpu->reset_sctlr = 0x30c50838;
+
+ /* From B4.23 ICH_VTR_EL2 */
+ cpu->gic_num_lrs = 4;
+ cpu->gic_vpribits = 5;
+ cpu->gic_vprebits = 5;
+
+ /* From B5.1 AdvSIMD AArch64 register summary */
+ cpu->isar.mvfr0 = 0x10110222;
+ cpu->isar.mvfr1 = 0x13211111;
+ cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
 /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
 { .name = "cortex-a76", .initfn = aarch64_a76_initfn },
 { .name = "a64fx", .initfn = aarch64_a64fx_initfn },
+ { .name = "neoverse-n1", .initfn = aarch64_neoverse_n1_initfn },
 { .name = "max", .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
 { .name = "host", .initfn = aarch64_host_initfn },
--
2.25.1
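A scalar model of the "load and broadcast quadword" semantics handled by do_ldrq() above (editorial sketch, not QEMU code): the first 16 bytes are loaded under predicate control, then replicated to fill the rest of the vector, which is what the tcg_gen_gvec_dup_mem() call expands to.

    #include <stdint.h>
    #include <string.h>

    /* zd[0..15] already holds the (predicated) quadword load result;
     * copy it into every remaining 16-byte slot of the vector. */
    static void replicate_quadword(uint8_t *zd, unsigned vsz /* >= 16 */)
    {
        for (unsigned off = 16; off < vsz; off += 16) {
            memcpy(zd + off, zd, 16);
        }
    }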
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Leif Lindholm <quic_llindhol@quicinc.com>
2
2
3
The load/store API will ease further code movement.
3
The sbsa-ref machine is continuously evolving. Some of the changes we
4
want to make in the near future, to align with real components (e.g.
5
the GIC-700), will break compatibility for existing firmware.
4
6
5
Per the Physical Layer Simplified Spec. "3.6 Bus Protocol":
7
Introduce two new properties to the DT generated on machine generation:
8
- machine-version-major
9
To be incremented when a platform change makes the machine
10
incompatible with existing firmware.
11
- machine-version-minor
12
To be incremented when functionality is added to the machine
13
without causing incompatibility with existing firmware.
14
to be reset to 0 when machine-version-major is incremented.
6
15
7
"In the CMD line the Most Significant Bit (MSB) is transmitted
16
This versioning scheme is *neither*:
8
first, the Least Significant Bit (LSB) is the last."
17
- A QEMU versioned machine type; a given version of QEMU will emulate
18
a given version of the platform.
19
- A reflection of level of SBSA (now SystemReady SR) support provided.
9
20
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
The version will increment on guest-visible functional changes only,
22
akin to a revision ID register found on a physical platform.
23
24
These properties are both introduced with the value 0.
25
(Hence, a machine where the DT is lacking these nodes is equivalent
26
to version 0.0.)
27
28
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
29
Message-id: 20220505113947.75714-1-quic_llindhol@quicinc.com
30
Cc: Peter Maydell <peter.maydell@linaro.org>
31
Cc: Radoslaw Biernacki <rad@semihalf.com>
32
Cc: Cédric Le Goater <clg@kaod.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
33
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
35
---
14
hw/sd/bcm2835_sdhost.c | 13 +++++--------
36
hw/arm/sbsa-ref.c | 14 ++++++++++++++
15
hw/sd/milkymist-memcard.c | 3 +--
37
1 file changed, 14 insertions(+)
16
hw/sd/omap_mmc.c | 6 ++----
17
hw/sd/pl181.c | 11 ++++-------
18
hw/sd/sdhci.c | 15 +++++----------
19
hw/sd/ssi-sd.c | 6 ++----
20
6 files changed, 19 insertions(+), 35 deletions(-)
21
38
22
diff --git a/hw/sd/bcm2835_sdhost.c b/hw/sd/bcm2835_sdhost.c
39
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
23
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/sd/bcm2835_sdhost.c
41
--- a/hw/arm/sbsa-ref.c
25
+++ b/hw/sd/bcm2835_sdhost.c
42
+++ b/hw/arm/sbsa-ref.c
26
@@ -XXX,XX +XXX,XX @@ static void bcm2835_sdhost_send_command(BCM2835SDHostState *s)
43
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)
27
goto error;
44
qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
28
}
45
qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
29
if (!(s->cmd & SDCMD_NO_RESPONSE)) {
46
30
-#define RWORD(n) (((uint32_t)rsp[n] << 24) | (rsp[n + 1] << 16) \
47
+ /*
31
- | (rsp[n + 2] << 8) | rsp[n + 3])
48
+ * This versioning scheme is for informing platform fw only. It is neither:
32
if (rlen == 0 || (rlen == 4 && (s->cmd & SDCMD_LONG_RESPONSE))) {
49
+ * - A QEMU versioned machine type; a given version of QEMU will emulate
33
goto error;
50
+ * a given version of the platform.
34
}
51
+ * - A reflection of level of SBSA (now SystemReady SR) support provided.
35
@@ -XXX,XX +XXX,XX @@ static void bcm2835_sdhost_send_command(BCM2835SDHostState *s)
52
+ *
36
goto error;
53
+ * machine-version-major: updated when changes breaking fw compatibility
37
}
54
+ * are introduced.
38
if (rlen == 4) {
55
+ * machine-version-minor: updated when features are added that don't break
39
- s->rsp[0] = RWORD(0);
56
+ * fw compatibility.
40
+ s->rsp[0] = ldl_be_p(&rsp[0]);
57
+ */
41
s->rsp[1] = s->rsp[2] = s->rsp[3] = 0;
58
+ qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
42
} else {
59
+ qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
43
- s->rsp[0] = RWORD(12);
60
+
44
- s->rsp[1] = RWORD(8);
61
if (ms->numa_state->have_numa_distance) {
45
- s->rsp[2] = RWORD(4);
62
int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
46
- s->rsp[3] = RWORD(0);
63
uint32_t *matrix = g_malloc0(size);
47
+ s->rsp[0] = ldl_be_p(&rsp[12]);
48
+ s->rsp[1] = ldl_be_p(&rsp[8]);
49
+ s->rsp[2] = ldl_be_p(&rsp[4]);
50
+ s->rsp[3] = ldl_be_p(&rsp[0]);
51
}
52
-#undef RWORD
53
}
54
/* We never really delay commands, so if this was a 'busywait' command
55
* then we've completed it now and can raise the interrupt.
56
diff --git a/hw/sd/milkymist-memcard.c b/hw/sd/milkymist-memcard.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/sd/milkymist-memcard.c
59
+++ b/hw/sd/milkymist-memcard.c
60
@@ -XXX,XX +XXX,XX @@ static void memcard_sd_command(MilkymistMemcardState *s)
61
SDRequest req;
62
63
req.cmd = s->command[0] & 0x3f;
64
- req.arg = (s->command[1] << 24) | (s->command[2] << 16)
65
- | (s->command[3] << 8) | s->command[4];
66
+ req.arg = ldl_be_p(s->command + 1);
67
req.crc = s->command[5];
68
69
s->response[0] = req.cmd;
70
diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/hw/sd/omap_mmc.c
73
+++ b/hw/sd/omap_mmc.c
74
@@ -XXX,XX +XXX,XX @@ static void omap_mmc_command(struct omap_mmc_s *host, int cmd, int dir,
75
CID_CSD_OVERWRITE;
76
if (host->sdio & (1 << 13))
77
mask |= AKE_SEQ_ERROR;
78
- rspstatus = (response[0] << 24) | (response[1] << 16) |
79
- (response[2] << 8) | (response[3] << 0);
80
+ rspstatus = ldl_be_p(response);
81
break;
82
83
case sd_r2:
84
@@ -XXX,XX +XXX,XX @@ static void omap_mmc_command(struct omap_mmc_s *host, int cmd, int dir,
85
}
86
rsplen = 4;
87
88
- rspstatus = (response[0] << 24) | (response[1] << 16) |
89
- (response[2] << 8) | (response[3] << 0);
90
+ rspstatus = ldl_be_p(response);
91
if (rspstatus & 0x80000000)
92
host->status &= 0xe000;
93
else
94
diff --git a/hw/sd/pl181.c b/hw/sd/pl181.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/hw/sd/pl181.c
97
+++ b/hw/sd/pl181.c
98
@@ -XXX,XX +XXX,XX @@ static void pl181_send_command(PL181State *s)
99
if (rlen < 0)
100
goto error;
101
if (s->cmd & PL181_CMD_RESPONSE) {
102
-#define RWORD(n) (((uint32_t)response[n] << 24) | (response[n + 1] << 16) \
103
- | (response[n + 2] << 8) | response[n + 3])
104
if (rlen == 0 || (rlen == 4 && (s->cmd & PL181_CMD_LONGRESP)))
105
goto error;
106
if (rlen != 4 && rlen != 16)
107
goto error;
108
- s->response[0] = RWORD(0);
109
+ s->response[0] = ldl_be_p(&response[0]);
110
if (rlen == 4) {
111
s->response[1] = s->response[2] = s->response[3] = 0;
112
} else {
113
- s->response[1] = RWORD(4);
114
- s->response[2] = RWORD(8);
115
- s->response[3] = RWORD(12) & ~1;
116
+ s->response[1] = ldl_be_p(&response[4]);
117
+ s->response[2] = ldl_be_p(&response[8]);
118
+ s->response[3] = ldl_be_p(&response[12]) & ~1;
119
}
120
DPRINTF("Response received\n");
121
s->status |= PL181_STATUS_CMDRESPEND;
122
-#undef RWORD
123
} else {
124
DPRINTF("Command sent\n");
125
s->status |= PL181_STATUS_CMDSENT;
126
diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/hw/sd/sdhci.c
129
+++ b/hw/sd/sdhci.c
130
@@ -XXX,XX +XXX,XX @@ static void sdhci_send_command(SDHCIState *s)
131
132
if (s->cmdreg & SDHC_CMD_RESPONSE) {
133
if (rlen == 4) {
134
- s->rspreg[0] = (response[0] << 24) | (response[1] << 16) |
135
- (response[2] << 8) | response[3];
136
+ s->rspreg[0] = ldl_be_p(response);
137
s->rspreg[1] = s->rspreg[2] = s->rspreg[3] = 0;
138
trace_sdhci_response4(s->rspreg[0]);
139
} else if (rlen == 16) {
140
- s->rspreg[0] = (response[11] << 24) | (response[12] << 16) |
141
- (response[13] << 8) | response[14];
142
- s->rspreg[1] = (response[7] << 24) | (response[8] << 16) |
143
- (response[9] << 8) | response[10];
144
- s->rspreg[2] = (response[3] << 24) | (response[4] << 16) |
145
- (response[5] << 8) | response[6];
146
+ s->rspreg[0] = ldl_be_p(&response[11]);
147
+ s->rspreg[1] = ldl_be_p(&response[7]);
148
+ s->rspreg[2] = ldl_be_p(&response[3]);
149
s->rspreg[3] = (response[0] << 16) | (response[1] << 8) |
150
response[2];
151
trace_sdhci_response16(s->rspreg[3], s->rspreg[2],
152
@@ -XXX,XX +XXX,XX @@ static void sdhci_end_transfer(SDHCIState *s)
153
trace_sdhci_end_transfer(request.cmd, request.arg);
154
sdbus_do_command(&s->sdbus, &request, response);
155
/* Auto CMD12 response goes to the upper Response register */
156
- s->rspreg[3] = (response[0] << 24) | (response[1] << 16) |
157
- (response[2] << 8) | response[3];
158
+ s->rspreg[3] = ldl_be_p(response);
159
}
160
161
s->prnsts &= ~(SDHC_DOING_READ | SDHC_DOING_WRITE |
162
diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
163
index XXXXXXX..XXXXXXX 100644
164
--- a/hw/sd/ssi-sd.c
165
+++ b/hw/sd/ssi-sd.c
166
@@ -XXX,XX +XXX,XX @@ static uint32_t ssi_sd_transfer(SSISlave *dev, uint32_t val)
167
uint8_t longresp[16];
168
/* FIXME: Check CRC. */
169
request.cmd = s->cmd;
170
- request.arg = (s->cmdarg[0] << 24) | (s->cmdarg[1] << 16)
171
- | (s->cmdarg[2] << 8) | s->cmdarg[3];
172
+ request.arg = ldl_be_p(s->cmdarg);
173
DPRINTF("CMD%d arg 0x%08x\n", s->cmd, request.arg);
174
s->arglen = sdbus_do_command(&s->sdbus, &request, longresp);
175
if (s->arglen <= 0) {
176
@@ -XXX,XX +XXX,XX @@ static uint32_t ssi_sd_transfer(SSISlave *dev, uint32_t val)
177
/* CMD13 returns a 2-byte status word. Other commands
178
only return the first byte. */
179
s->arglen = (s->cmd == 13) ? 2 : 1;
180
- cardstatus = (longresp[0] << 24) | (longresp[1] << 16)
181
- | (longresp[2] << 8) | longresp[3];
182
+ cardstatus = ldl_be_p(longresp);
183
status = 0;
184
if (((cardstatus >> 9) & 0xf) < 4)
185
status |= SSI_SDR_IDLE;
186
--
64
--
187
2.17.1
65
2.25.1
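A note on consuming the two sbsa-ref properties added above: platform firmware would read them back from the root node, for instance with libfdt. The following is only a sketch under that assumption; the helper name is hypothetical, error handling is elided, and the cells are big-endian on the wire, hence fdt32_to_cpu():

    #include <libfdt.h>

    /* Hypothetical helper: fetch machine-version-major from the root node. */
    static int sbsa_machine_version_major(const void *fdt)
    {
        int root = fdt_path_offset(fdt, "/");
        const fdt32_t *prop = fdt_getprop(fdt, root, "machine-version-major", NULL);

        return prop ? (int)fdt32_to_cpu(*prop) : -1;   /* -1 if absent */
    }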
188
66
189
67
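On the RWORD removals in the SD patches above: ldl_be_p() performs exactly the 32-bit big-endian load the deleted macros open-coded. A self-contained sketch of the equivalence (in QEMU the real helper comes from "qemu/bswap.h"; the byte values below are arbitrary):

    #include <assert.h>
    #include <stdint.h>

    /* Open-coded big-endian 32-bit load, as the removed RWORD macros did it. */
    static uint32_t ldl_be(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
    }

    int main(void)
    {
        const uint8_t rsp[4] = { 0x12, 0x34, 0x56, 0x78 };
        assert(ldl_be(rsp) == 0x12345678);   /* same value RWORD(0) produced */
        return 0;
    }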
1
From: Aaron Lindsay <alindsay@codeaurora.org>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
KVM implies V7VE, which implies ARM_DIV and THUMB_DIV. The conditional
3
This adds cluster-id to the CPU instance properties, which will be used
4
detection here is therefore unnecessary. Because V7VE is already
4
by the arm/virt machine. Besides, the cluster-id is also verified or
5
unconditionally specified for all KVM hosts, ARM_DIV and THUMB_DIV are
5
dumped in various spots:
6
already indirectly specified and do not need to be included here at all.
7
6
8
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
7
* hw/core/machine.c::machine_set_cpu_numa_node() to associate
9
Message-id: 1529699547-17044-6-git-send-email-alindsay@codeaurora.org
8
CPU with its NUMA node.
9
10
* hw/core/machine.c::machine_numa_finish_cpu_init() to record
11
CPU slots with no NUMA mapping set.
12
13
* hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
14
cluster-id.
15
16
Signed-off-by: Gavin Shan <gshan@redhat.com>
17
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
18
Acked-by: Igor Mammedov <imammedo@redhat.com>
19
Message-id: 20220503140304.855514-2-gshan@redhat.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
21
---
12
target/arm/kvm32.c | 19 +------------------
22
qapi/machine.json | 6 ++++--
13
1 file changed, 1 insertion(+), 18 deletions(-)
23
hw/core/machine-hmp-cmds.c | 4 ++++
24
hw/core/machine.c | 16 ++++++++++++++++
25
3 files changed, 24 insertions(+), 2 deletions(-)
14
26
15
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
27
diff --git a/qapi/machine.json b/qapi/machine.json
16
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/kvm32.c
29
--- a/qapi/machine.json
18
+++ b/target/arm/kvm32.c
30
+++ b/qapi/machine.json
19
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
31
@@ -XXX,XX +XXX,XX @@
20
* and then query that CPU for the relevant ID registers.
32
# @node-id: NUMA node ID the CPU belongs to
21
*/
33
# @socket-id: socket number within node/board the CPU belongs to
22
int i, ret, fdarray[3];
34
# @die-id: die number within socket the CPU belongs to (since 4.1)
23
- uint32_t midr, id_pfr0, id_isar0, mvfr1;
35
-# @core-id: core number within die the CPU belongs to
24
+ uint32_t midr, id_pfr0, mvfr1;
36
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
25
uint64_t features = 0;
37
+# @core-id: core number within cluster the CPU belongs to
26
/* Old kernels may not know about the PREFERRED_TARGET ioctl: however
38
# @thread-id: thread number within core the CPU belongs to
27
* we know these will only support creating one kind of guest CPU,
39
#
28
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
40
-# Note: currently there are 5 properties that could be present
29
| ENCODE_CP_REG(15, 0, 0, 0, 1, 0, 0),
41
+# Note: currently there are 6 properties that could be present
30
.addr = (uintptr_t)&id_pfr0,
42
# but management should be prepared to pass through other
31
},
43
# properties with device_add command to allow for future
32
- {
44
# interface extension. This also requires the field names to be kept in
33
- .id = KVM_REG_ARM | KVM_REG_SIZE_U32
45
@@ -XXX,XX +XXX,XX @@
34
- | ENCODE_CP_REG(15, 0, 0, 0, 2, 0, 0),
46
'data': { '*node-id': 'int',
35
- .addr = (uintptr_t)&id_isar0,
47
'*socket-id': 'int',
36
- },
48
'*die-id': 'int',
37
{
49
+ '*cluster-id': 'int',
38
.id = KVM_REG_ARM | KVM_REG_SIZE_U32
50
'*core-id': 'int',
39
| KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR1,
51
'*thread-id': 'int'
40
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
52
}
41
set_feature(&features, ARM_FEATURE_VFP3);
53
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
42
set_feature(&features, ARM_FEATURE_GENERIC_TIMER);
54
index XXXXXXX..XXXXXXX 100644
43
55
--- a/hw/core/machine-hmp-cmds.c
44
- switch (extract32(id_isar0, 24, 4)) {
56
+++ b/hw/core/machine-hmp-cmds.c
45
- case 1:
57
@@ -XXX,XX +XXX,XX @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
46
- set_feature(&features, ARM_FEATURE_THUMB_DIV);
58
if (c->has_die_id) {
47
- break;
59
monitor_printf(mon, " die-id: \"%" PRIu64 "\"\n", c->die_id);
48
- case 2:
60
}
49
- set_feature(&features, ARM_FEATURE_ARM_DIV);
61
+ if (c->has_cluster_id) {
50
- set_feature(&features, ARM_FEATURE_THUMB_DIV);
62
+ monitor_printf(mon, " cluster-id: \"%" PRIu64 "\"\n",
51
- break;
63
+ c->cluster_id);
52
- default:
64
+ }
53
- break;
65
if (c->has_core_id) {
54
- }
66
monitor_printf(mon, " core-id: \"%" PRIu64 "\"\n", c->core_id);
55
-
67
}
56
if (extract32(id_pfr0, 12, 4) == 1) {
68
diff --git a/hw/core/machine.c b/hw/core/machine.c
57
set_feature(&features, ARM_FEATURE_THUMB2EE);
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/core/machine.c
71
+++ b/hw/core/machine.c
72
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
73
return;
74
}
75
76
+ if (props->has_cluster_id && !slot->props.has_cluster_id) {
77
+ error_setg(errp, "cluster-id is not supported");
78
+ return;
79
+ }
80
+
81
if (props->has_socket_id && !slot->props.has_socket_id) {
82
error_setg(errp, "socket-id is not supported");
83
return;
84
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
85
continue;
86
}
87
88
+ if (props->has_cluster_id &&
89
+ props->cluster_id != slot->props.cluster_id) {
90
+ continue;
91
+ }
92
+
93
if (props->has_die_id && props->die_id != slot->props.die_id) {
94
continue;
95
}
96
@@ -XXX,XX +XXX,XX @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
97
}
98
g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
58
}
99
}
100
+ if (cpu->props.has_cluster_id) {
101
+ if (s->len) {
102
+ g_string_append_printf(s, ", ");
103
+ }
104
+ g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
105
+ }
106
if (cpu->props.has_core_id) {
107
if (s->len) {
108
g_string_append_printf(s, ", ");
59
--
109
--
60
2.17.1
110
2.25.1
61
62
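For readers unfamiliar with the detection removed in the kvm32 patch above: extract32(id_isar0, 24, 4) pulled out the four-bit ID_ISAR0.Divide field. A standalone sketch (the helper is reimplemented here for illustration; QEMU's lives in "qemu/bitops.h", and the register value below is made up):

    #include <assert.h>
    #include <stdint.h>

    /* Return LENGTH bits of VALUE starting at bit START. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t id_isar0 = 0x02101110;            /* hypothetical value */
        /* Divide == 2 meant SDIV/UDIV in both ARM and Thumb, hence the
         * two set_feature() calls in the deleted switch. */
        assert(extract32(id_isar0, 24, 4) == 2);
        return 0;
    }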
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
The qdev_get_gpio_in() function accepts an int as its second parameter.
3
The CPU topology isn't enabled on arm/virt machine yet, but we're
4
going to do it in the next patch. After the CPU topology is enabled by
5
the next patch, "thread-id=1" becomes invalid because the CPU core is
6
preferred on arm/virt machine. It means these two CPUs have 0/1
7
as their core IDs, but their thread IDs are all 0. It will trigger
8
a test failure, as the following message indicates:
4
9
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
10
[14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test ERROR
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
1.48s killed by signal 6 SIGABRT
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
>>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
13
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
14
QTEST_QEMU_BINARY=./qemu-system-aarch64 \
15
QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83 \
16
/home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
17
――――――――――――――――――――――――――――――――――――――――――――――
18
stderr:
19
qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found
20
21
This fixes the issue by providing comprehensive SMP configurations
22
in aarch64_numa_cpu(). The SMP configurations aren't used before
23
the CPU topology is enabled in the next patch.
24
25
Signed-off-by: Gavin Shan <gshan@redhat.com>
26
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
27
Message-id: 20220503140304.855514-3-gshan@redhat.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
29
---
10
hw/arm/fsl-imx7.c | 6 +++---
30
tests/qtest/numa-test.c | 3 ++-
11
1 file changed, 3 insertions(+), 3 deletions(-)
31
1 file changed, 2 insertions(+), 1 deletion(-)
12
32
13
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
33
diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
14
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/fsl-imx7.c
35
--- a/tests/qtest/numa-test.c
16
+++ b/hw/arm/fsl-imx7.c
36
+++ b/tests/qtest/numa-test.c
17
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
37
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
18
FSL_IMX7_ECSPI4_ADDR,
38
QTestState *qts;
19
};
39
g_autofree char *cli = NULL;
20
40
21
- static const hwaddr FSL_IMX7_SPIn_IRQ[FSL_IMX7_NUM_ECSPIS] = {
41
- cli = make_cli(data, "-machine smp.cpus=2 "
22
+ static const int FSL_IMX7_SPIn_IRQ[FSL_IMX7_NUM_ECSPIS] = {
42
+ cli = make_cli(data, "-machine "
23
FSL_IMX7_ECSPI1_IRQ,
43
+ "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
24
FSL_IMX7_ECSPI2_IRQ,
44
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
25
FSL_IMX7_ECSPI3_IRQ,
45
"-numa cpu,node-id=1,thread-id=0 "
26
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
46
"-numa cpu,node-id=0,thread-id=1");
27
FSL_IMX7_I2C4_ADDR,
28
};
29
30
- static const hwaddr FSL_IMX7_I2Cn_IRQ[FSL_IMX7_NUM_I2CS] = {
31
+ static const int FSL_IMX7_I2Cn_IRQ[FSL_IMX7_NUM_I2CS] = {
32
FSL_IMX7_I2C1_IRQ,
33
FSL_IMX7_I2C2_IRQ,
34
FSL_IMX7_I2C3_IRQ,
35
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
36
FSL_IMX7_USB3_ADDR,
37
};
38
39
- static const hwaddr FSL_IMX7_USBn_IRQ[FSL_IMX7_NUM_USBS] = {
40
+ static const int FSL_IMX7_USBn_IRQ[FSL_IMX7_NUM_USBS] = {
41
FSL_IMX7_USB1_IRQ,
42
FSL_IMX7_USB2_IRQ,
43
FSL_IMX7_USB3_IRQ,
44
--
47
--
45
2.17.1
48
2.25.1
46
49
47
50
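The SMP configuration chosen in the numa-test patch above satisfies the usual invariant that the topology product accounts for every CPU. A minimal sketch of just that arithmetic (QEMU's actual validation happens elsewhere in the machine code):

    #include <assert.h>

    int main(void)
    {
        /* smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 */
        unsigned sockets = 1, clusters = 1, cores = 1, threads = 2;

        assert(sockets * clusters * cores * threads == 2 /* smp.cpus */);
        return 0;
    }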
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
When running dtc on the guest /proc/device-tree we get the
3
Currently, the SMP configuration isn't considered when the CPU
4
following warning: Warning (unit_address_vs_reg): Node /memory
4
topology is populated. In this case, it's impossible to provide
5
has a reg or ranges property, but no unit name".
5
the default CPU-to-NUMA mapping or association based on the socket
6
ID of the given CPU.
6
7
7
Let's fix that by adding the unit address to the node name. We also
8
This takes account of SMP configuration when the CPU topology
8
don't create the /memory node anymore in create_fdt(). We directly
9
is populated. The die ID for the given CPU isn't assigned since
9
create it in load_dtb. /chosen still needs to be created in create_fdt
10
it's not supported on arm/virt machine. Besides, the used SMP
10
as the uart needs it. In case the user provided his own dtb, we nop
11
configuration in qtest/numa-test/aarch64_numa_cpu() is corrcted
11
all memory nodes found in root and create new one(s).
12
to avoid testing failure
12
13
13
Signed-off-by: Eric Auger <eric.auger@redhat.com>
14
Signed-off-by: Gavin Shan <gshan@redhat.com>
14
Message-id: 1530044492-24921-4-git-send-email-eric.auger@redhat.com
15
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Acked-by: Igor Mammedov <imammedo@redhat.com>
17
Message-id: 20220503140304.855514-4-gshan@redhat.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
19
---
18
hw/arm/boot.c | 41 +++++++++++++++++++++++------------------
20
hw/arm/virt.c | 15 ++++++++++++++-
19
hw/arm/virt.c | 7 +------
21
1 file changed, 14 insertions(+), 1 deletion(-)
20
2 files changed, 24 insertions(+), 24 deletions(-)
21
22
22
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/arm/boot.c
25
+++ b/hw/arm/boot.c
26
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
27
hwaddr addr_limit, AddressSpace *as)
28
{
29
void *fdt = NULL;
30
- int size, rc;
31
+ int size, rc, n = 0;
32
uint32_t acells, scells;
33
char *nodename;
34
unsigned int i;
35
hwaddr mem_base, mem_len;
36
+ char **node_path;
37
+ Error *err = NULL;
38
39
if (binfo->dtb_filename) {
40
char *filename;
41
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
42
goto fail;
43
}
44
45
+ /* nop all root nodes matching /memory or /memory@unit-address */
46
+ node_path = qemu_fdt_node_unit_path(fdt, "memory", &err);
47
+ if (err) {
48
+ error_report_err(err);
49
+ goto fail;
50
+ }
51
+ while (node_path[n]) {
52
+ if (g_str_has_prefix(node_path[n], "/memory")) {
53
+ qemu_fdt_nop_node(fdt, node_path[n]);
54
+ }
55
+ n++;
56
+ }
57
+ g_strfreev(node_path);
58
+
59
if (nb_numa_nodes > 0) {
60
- /*
61
- * Turn the /memory node created before into a NOP node, then create
62
- * /memory@addr nodes for all numa nodes respectively.
63
- */
64
- qemu_fdt_nop_node(fdt, "/memory");
65
mem_base = binfo->loader_start;
66
for (i = 0; i < nb_numa_nodes; i++) {
67
mem_len = numa_info[i].node_mem;
68
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
69
g_free(nodename);
70
}
71
} else {
72
- Error *err = NULL;
73
+ nodename = g_strdup_printf("/memory@%" PRIx64, binfo->loader_start);
74
+ qemu_fdt_add_subnode(fdt, nodename);
75
+ qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
76
77
- rc = fdt_path_offset(fdt, "/memory");
78
- if (rc < 0) {
79
- qemu_fdt_add_subnode(fdt, "/memory");
80
- }
81
-
82
- if (!qemu_fdt_getprop(fdt, "/memory", "device_type", NULL, &err)) {
83
- qemu_fdt_setprop_string(fdt, "/memory", "device_type", "memory");
84
- }
85
-
86
- rc = qemu_fdt_setprop_sized_cells(fdt, "/memory", "reg",
87
+ rc = qemu_fdt_setprop_sized_cells(fdt, nodename, "reg",
88
acells, binfo->loader_start,
89
scells, binfo->ram_size);
90
if (rc < 0) {
91
- fprintf(stderr, "couldn't set /memory/reg\n");
92
+ fprintf(stderr, "couldn't set %s reg\n", nodename);
93
goto fail;
94
}
95
+ g_free(nodename);
96
}
97
98
rc = fdt_path_offset(fdt, "/chosen");
99
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
23
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
100
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
101
--- a/hw/arm/virt.c
25
--- a/hw/arm/virt.c
102
+++ b/hw/arm/virt.c
26
+++ b/hw/arm/virt.c
103
@@ -XXX,XX +XXX,XX @@ static void create_fdt(VirtMachineState *vms)
27
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
104
qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
28
int n;
105
qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
29
unsigned int max_cpus = ms->smp.max_cpus;
106
30
VirtMachineState *vms = VIRT_MACHINE(ms);
107
- /*
31
+ MachineClass *mc = MACHINE_GET_CLASS(vms);
108
- * /chosen and /memory nodes must exist for load_dtb
32
109
- * to fill in necessary properties later
33
if (ms->possible_cpus) {
110
- */
34
assert(ms->possible_cpus->len == max_cpus);
111
+ /* /chosen must exist for load_dtb to fill in necessary properties later */
35
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
112
qemu_fdt_add_subnode(fdt, "/chosen");
36
ms->possible_cpus->cpus[n].type = ms->cpu_type;
113
- qemu_fdt_add_subnode(fdt, "/memory");
37
ms->possible_cpus->cpus[n].arch_id =
114
- qemu_fdt_setprop_string(fdt, "/memory", "device_type", "memory");
38
virt_cpu_mp_affinity(vms, n);
115
39
+
116
/* Clock node, for the benefit of the UART. The kernel device tree
40
+ assert(!mc->smp_props.dies_supported);
117
* binding documentation claims the PL011 node clock properties are
41
+ ms->possible_cpus->cpus[n].props.has_socket_id = true;
42
+ ms->possible_cpus->cpus[n].props.socket_id =
43
+ n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
44
+ ms->possible_cpus->cpus[n].props.has_cluster_id = true;
45
+ ms->possible_cpus->cpus[n].props.cluster_id =
46
+ (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
47
+ ms->possible_cpus->cpus[n].props.has_core_id = true;
48
+ ms->possible_cpus->cpus[n].props.core_id =
49
+ (n / ms->smp.threads) % ms->smp.cores;
50
ms->possible_cpus->cpus[n].props.has_thread_id = true;
51
- ms->possible_cpus->cpus[n].props.thread_id = n;
52
+ ms->possible_cpus->cpus[n].props.thread_id =
53
+ n % ms->smp.threads;
54
}
55
return ms->possible_cpus;
56
}
118
--
57
--
119
2.17.1
58
2.25.1
120
121
diff view generated by jsdifflib
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
3
In aarch64_numa_cpu(), the CPU and NUMA association is something
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
like below. Two threads in the same core/cluster/socket are
5
associated with two individual NUMA nodes, which is unreal as
6
Igor Mammedov mentioned. We don't expect the association to break
7
NUMA-to-socket boundary, which matches with the real world.
8
9
NUMA-node socket cluster core thread
10
------------------------------------------
11
0 0 0 0 0
12
1 0 0 0 1
13
14
This corrects the topology for CPUs and their association with
15
NUMA nodes. After this patch is applied, the CPU and NUMA
16
association becomes something like below, which looks real.
17
Besides, socket/cluster/core/thread IDs are all checked when
18
the NUMA node IDs are verified. It helps to check if the CPU
19
topology is properly populated or not.
20
21
NUMA-node socket cluster core thread
22
------------------------------------------
23
0 1 0 0 0
24
1 0 0 0 0
25
26
Suggested-by: Igor Mammedov <imammedo@redhat.com>
27
Signed-off-by: Gavin Shan <gshan@redhat.com>
28
Acked-by: Igor Mammedov <imammedo@redhat.com>
29
Message-id: 20220503140304.855514-5-gshan@redhat.com
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
31
---
7
hw/arm/fsl-imx7.c | 2 +-
32
tests/qtest/numa-test.c | 18 ++++++++++++------
8
1 file changed, 1 insertion(+), 1 deletion(-)
33
1 file changed, 12 insertions(+), 6 deletions(-)
9
34
10
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
35
diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
11
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/arm/fsl-imx7.c
37
--- a/tests/qtest/numa-test.c
13
+++ b/hw/arm/fsl-imx7.c
38
+++ b/tests/qtest/numa-test.c
14
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
39
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
15
/*
40
g_autofree char *cli = NULL;
16
* SRC
41
17
*/
42
cli = make_cli(data, "-machine "
18
- create_unimplemented_device("sdma", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
43
- "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
19
+ create_unimplemented_device("src", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
44
+ "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
20
45
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
21
/*
46
- "-numa cpu,node-id=1,thread-id=0 "
22
* Watchdog
47
- "-numa cpu,node-id=0,thread-id=1");
48
+ "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
49
+ "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
50
qts = qtest_init(cli);
51
cpus = get_cpus(qts, &resp);
52
g_assert(cpus);
53
54
while ((e = qlist_pop(cpus))) {
55
QDict *cpu, *props;
56
- int64_t thread, node;
57
+ int64_t socket, cluster, core, thread, node;
58
59
cpu = qobject_to(QDict, e);
60
g_assert(qdict_haskey(cpu, "props"));
61
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
62
63
g_assert(qdict_haskey(props, "node-id"));
64
node = qdict_get_int(props, "node-id");
65
+ g_assert(qdict_haskey(props, "socket-id"));
66
+ socket = qdict_get_int(props, "socket-id");
67
+ g_assert(qdict_haskey(props, "cluster-id"));
68
+ cluster = qdict_get_int(props, "cluster-id");
69
+ g_assert(qdict_haskey(props, "core-id"));
70
+ core = qdict_get_int(props, "core-id");
71
g_assert(qdict_haskey(props, "thread-id"));
72
thread = qdict_get_int(props, "thread-id");
73
74
- if (thread == 0) {
75
+ if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
76
g_assert_cmpint(node, ==, 1);
77
- } else if (thread == 1) {
78
+ } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
79
g_assert_cmpint(node, ==, 0);
80
} else {
81
g_assert(false);
23
--
82
--
24
2.17.1
83
2.25.1
25
26
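On the node renaming in the patches above: the devicetree convention is <name>[@<unit-address>], with the unit address taken from the node's first reg entry. A sketch of how such a name is formed (the base address here is only an example):

    #include <glib.h>
    #include <inttypes.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t base = 0x40000000;   /* example base address */
        char *nodename = g_strdup_printf("/memory@%" PRIx64, base);

        g_assert_cmpstr(nodename, ==, "/memory@40000000");
        g_free(nodename);
        return 0;
    }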
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
When running dtc on the guest /proc/device-tree we get the
3
When CPU-to-NUMA association isn't explicitly provided by users,
4
following warnings: "Warning (unit_address_vs_reg): Node <name>
4
the default one is given by mc->get_default_cpu_node_id(). However,
5
has a reg or ranges property, but no unit name", with name:
5
the CPU topology isn't fully considered in the default association
6
/intc, /intc/its, /intc/v2m.
6
and this causes broken CPU topology warnings when booting a Linux guest.
7
7
8
Nodes should have a name in the form <name>[@<unit-address>] where
8
For example, the following warning messages are observed when the
9
unit-address is the primary address used to access the device, listed
9
Linux guest is booted with the following command line:
10
in the node's reg property. This fix seems to make dtc happy.
11
10
12
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
/home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
-accel kvm -machine virt,gic-version=host \
14
Message-id: 1530044492-24921-3-git-send-email-eric.auger@redhat.com
13
-cpu host \
14
-smp 6,sockets=2,cores=3,threads=1 \
15
-m 1024M,slots=16,maxmem=64G \
16
-object memory-backend-ram,id=mem0,size=128M \
17
-object memory-backend-ram,id=mem1,size=128M \
18
-object memory-backend-ram,id=mem2,size=128M \
19
-object memory-backend-ram,id=mem3,size=128M \
20
-object memory-backend-ram,id=mem4,size=128M \
21
-object memory-backend-ram,id=mem5,size=384M \
22
-numa node,nodeid=0,memdev=mem0 \
23
-numa node,nodeid=1,memdev=mem1 \
24
-numa node,nodeid=2,memdev=mem2 \
25
-numa node,nodeid=3,memdev=mem3 \
26
-numa node,nodeid=4,memdev=mem4 \
27
-numa node,nodeid=5,memdev=mem5
28
:
29
alternatives: patching kernel code
30
BUG: arch topology borken
31
the CLS domain not a subset of the MC domain
32
<the above error log repeats>
33
BUG: arch topology borken
34
the DIE domain not a subset of the NODE domain
35
36
With current implementation of mc->get_default_cpu_node_id(),
37
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 separately.
38
That's incorrect because CPU#0/1/2 should be associated with same
39
NUMA node because they're seated in same socket.
40
41
This fixes the issue by considering the socket ID when the default
42
CPU-to-NUMA association is provided in virt_possible_cpu_arch_ids().
43
With this applied, no more CPU topology broken warnings are seen
44
from the Linux guest. The 6 CPUs are associated with NODE#0/1, but
45
there are no CPUs associated with NODE#2/3/4/5.
46
47
Signed-off-by: Gavin Shan <gshan@redhat.com>
48
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
49
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
50
Message-id: 20220503140304.855514-6-gshan@redhat.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
51
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
52
---
17
hw/arm/virt.c | 63 +++++++++++++++++++++++++++++++--------------------
53
hw/arm/virt.c | 4 +++-
18
1 file changed, 39 insertions(+), 24 deletions(-)
54
1 file changed, 3 insertions(+), 1 deletion(-)
19
55
20
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
56
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
21
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt.c
58
--- a/hw/arm/virt.c
23
+++ b/hw/arm/virt.c
59
+++ b/hw/arm/virt.c
24
@@ -XXX,XX +XXX,XX @@ static void fdt_add_cpu_nodes(const VirtMachineState *vms)
60
@@ -XXX,XX +XXX,XX @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
25
61
26
static void fdt_add_its_gic_node(VirtMachineState *vms)
62
static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
27
{
63
{
28
+ char *nodename;
64
- return idx % ms->numa_state->num_nodes;
65
+ int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
29
+
66
+
30
vms->msi_phandle = qemu_fdt_alloc_phandle(vms->fdt);
67
+ return socket_id % ms->numa_state->num_nodes;
31
- qemu_fdt_add_subnode(vms->fdt, "/intc/its");
32
- qemu_fdt_setprop_string(vms->fdt, "/intc/its", "compatible",
33
+ nodename = g_strdup_printf("/intc/its@%" PRIx64,
34
+ vms->memmap[VIRT_GIC_ITS].base);
35
+ qemu_fdt_add_subnode(vms->fdt, nodename);
36
+ qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
37
"arm,gic-v3-its");
38
- qemu_fdt_setprop(vms->fdt, "/intc/its", "msi-controller", NULL, 0);
39
- qemu_fdt_setprop_sized_cells(vms->fdt, "/intc/its", "reg",
40
+ qemu_fdt_setprop(vms->fdt, nodename, "msi-controller", NULL, 0);
41
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
42
2, vms->memmap[VIRT_GIC_ITS].base,
43
2, vms->memmap[VIRT_GIC_ITS].size);
44
- qemu_fdt_setprop_cell(vms->fdt, "/intc/its", "phandle", vms->msi_phandle);
45
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->msi_phandle);
46
+ g_free(nodename);
47
}
68
}
48
69
49
static void fdt_add_v2m_gic_node(VirtMachineState *vms)
70
static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
50
{
51
+ char *nodename;
52
+
53
+ nodename = g_strdup_printf("/intc/v2m@%" PRIx64,
54
+ vms->memmap[VIRT_GIC_V2M].base);
55
vms->msi_phandle = qemu_fdt_alloc_phandle(vms->fdt);
56
- qemu_fdt_add_subnode(vms->fdt, "/intc/v2m");
57
- qemu_fdt_setprop_string(vms->fdt, "/intc/v2m", "compatible",
58
+ qemu_fdt_add_subnode(vms->fdt, nodename);
59
+ qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
60
"arm,gic-v2m-frame");
61
- qemu_fdt_setprop(vms->fdt, "/intc/v2m", "msi-controller", NULL, 0);
62
- qemu_fdt_setprop_sized_cells(vms->fdt, "/intc/v2m", "reg",
63
+ qemu_fdt_setprop(vms->fdt, nodename, "msi-controller", NULL, 0);
64
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
65
2, vms->memmap[VIRT_GIC_V2M].base,
66
2, vms->memmap[VIRT_GIC_V2M].size);
67
- qemu_fdt_setprop_cell(vms->fdt, "/intc/v2m", "phandle", vms->msi_phandle);
68
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->msi_phandle);
69
+ g_free(nodename);
70
}
71
72
static void fdt_add_gic_node(VirtMachineState *vms)
73
{
74
+ char *nodename;
75
+
76
vms->gic_phandle = qemu_fdt_alloc_phandle(vms->fdt);
77
qemu_fdt_setprop_cell(vms->fdt, "/", "interrupt-parent", vms->gic_phandle);
78
79
- qemu_fdt_add_subnode(vms->fdt, "/intc");
80
- qemu_fdt_setprop_cell(vms->fdt, "/intc", "#interrupt-cells", 3);
81
- qemu_fdt_setprop(vms->fdt, "/intc", "interrupt-controller", NULL, 0);
82
- qemu_fdt_setprop_cell(vms->fdt, "/intc", "#address-cells", 0x2);
83
- qemu_fdt_setprop_cell(vms->fdt, "/intc", "#size-cells", 0x2);
84
- qemu_fdt_setprop(vms->fdt, "/intc", "ranges", NULL, 0);
85
+ nodename = g_strdup_printf("/intc@%" PRIx64,
86
+ vms->memmap[VIRT_GIC_DIST].base);
87
+ qemu_fdt_add_subnode(vms->fdt, nodename);
88
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "#interrupt-cells", 3);
89
+ qemu_fdt_setprop(vms->fdt, nodename, "interrupt-controller", NULL, 0);
90
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "#address-cells", 0x2);
91
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "#size-cells", 0x2);
92
+ qemu_fdt_setprop(vms->fdt, nodename, "ranges", NULL, 0);
93
if (vms->gic_version == 3) {
94
int nb_redist_regions = virt_gicv3_redist_region_count(vms);
95
96
- qemu_fdt_setprop_string(vms->fdt, "/intc", "compatible",
97
+ qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
98
"arm,gic-v3");
99
100
- qemu_fdt_setprop_cell(vms->fdt, "/intc",
101
+ qemu_fdt_setprop_cell(vms->fdt, nodename,
102
"#redistributor-regions", nb_redist_regions);
103
104
if (nb_redist_regions == 1) {
105
- qemu_fdt_setprop_sized_cells(vms->fdt, "/intc", "reg",
106
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
107
2, vms->memmap[VIRT_GIC_DIST].base,
108
2, vms->memmap[VIRT_GIC_DIST].size,
109
2, vms->memmap[VIRT_GIC_REDIST].base,
110
2, vms->memmap[VIRT_GIC_REDIST].size);
111
} else {
112
- qemu_fdt_setprop_sized_cells(vms->fdt, "/intc", "reg",
113
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
114
2, vms->memmap[VIRT_GIC_DIST].base,
115
2, vms->memmap[VIRT_GIC_DIST].size,
116
2, vms->memmap[VIRT_GIC_REDIST].base,
117
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)
118
}
119
120
if (vms->virt) {
121
- qemu_fdt_setprop_cells(vms->fdt, "/intc", "interrupts",
122
+ qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
123
GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
124
GIC_FDT_IRQ_FLAGS_LEVEL_HI);
125
}
126
} else {
127
/* 'cortex-a15-gic' means 'GIC v2' */
128
- qemu_fdt_setprop_string(vms->fdt, "/intc", "compatible",
129
+ qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
130
"arm,cortex-a15-gic");
131
- qemu_fdt_setprop_sized_cells(vms->fdt, "/intc", "reg",
132
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
133
2, vms->memmap[VIRT_GIC_DIST].base,
134
2, vms->memmap[VIRT_GIC_DIST].size,
135
2, vms->memmap[VIRT_GIC_CPU].base,
136
2, vms->memmap[VIRT_GIC_CPU].size);
137
}
138
139
- qemu_fdt_setprop_cell(vms->fdt, "/intc", "phandle", vms->gic_phandle);
140
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
141
+ g_free(nodename);
142
}
143
144
static void fdt_add_pmu_nodes(const VirtMachineState *vms)
145
--
71
--
146
2.17.1
72
2.25.1
147
148
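A worked example of the fixed default mapping above, for the quoted -smp 6,sockets=2,cores=3,threads=1 case (a sketch of the arithmetic only):

    #include <stdio.h>

    int main(void)
    {
        int num_nodes = 6, cores_per_socket = 3;

        /* socket_id = idx / 3, node = socket_id % num_nodes:
         * CPUs 0..2 -> node 0, CPUs 3..5 -> node 1, nodes 2..5 stay empty. */
        for (int idx = 0; idx < 6; idx++) {
            int socket_id = idx / cores_per_socket;
            printf("CPU#%d -> socket %d -> node %d\n",
                   idx, socket_id, socket_id % num_nodes);
        }
        return 0;
    }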
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180627043328.11531-2-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/helper-sve.h | 35 +++++++++
10
target/arm/sve_helper.c | 153 +++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 121 +++++++++++++++++++++++++++++
12
target/arm/sve.decode | 34 +++++++++
13
4 files changed, 343 insertions(+)
14
15
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-sve.h
18
+++ b/target/arm/helper-sve.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
22
void, ptr, ptr, ptr, ptr, i32)
23
+
24
+DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
25
+DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
26
+DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
27
+DEF_HELPER_FLAGS_4(sve_ld4bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
28
+
29
+DEF_HELPER_FLAGS_4(sve_ld1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
30
+DEF_HELPER_FLAGS_4(sve_ld2hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
31
+DEF_HELPER_FLAGS_4(sve_ld3hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
32
+DEF_HELPER_FLAGS_4(sve_ld4hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
33
+
34
+DEF_HELPER_FLAGS_4(sve_ld1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
35
+DEF_HELPER_FLAGS_4(sve_ld2ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
36
+DEF_HELPER_FLAGS_4(sve_ld3ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
37
+DEF_HELPER_FLAGS_4(sve_ld4ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
38
+
39
+DEF_HELPER_FLAGS_4(sve_ld1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
40
+DEF_HELPER_FLAGS_4(sve_ld2dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
41
+DEF_HELPER_FLAGS_4(sve_ld3dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
42
+DEF_HELPER_FLAGS_4(sve_ld4dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
43
+
44
+DEF_HELPER_FLAGS_4(sve_ld1bhu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
45
+DEF_HELPER_FLAGS_4(sve_ld1bsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
46
+DEF_HELPER_FLAGS_4(sve_ld1bdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
47
+DEF_HELPER_FLAGS_4(sve_ld1bhs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
48
+DEF_HELPER_FLAGS_4(sve_ld1bss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
49
+DEF_HELPER_FLAGS_4(sve_ld1bds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
50
+
51
+DEF_HELPER_FLAGS_4(sve_ld1hsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
52
+DEF_HELPER_FLAGS_4(sve_ld1hdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
53
+DEF_HELPER_FLAGS_4(sve_ld1hss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
54
+DEF_HELPER_FLAGS_4(sve_ld1hds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
55
+
56
+DEF_HELPER_FLAGS_4(sve_ld1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
57
+DEF_HELPER_FLAGS_4(sve_ld1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
58
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/sve_helper.c
61
+++ b/target/arm/sve_helper.c
62
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
63
64
return predtest_ones(d, oprsz, esz_mask);
65
}
66
+
67
+/*
68
+ * Load contiguous data, protected by a governing predicate.
69
+ */
70
+#define DO_LD1(NAME, FN, TYPEE, TYPEM, H) \
71
+static void do_##NAME(CPUARMState *env, void *vd, void *vg, \
72
+ target_ulong addr, intptr_t oprsz, \
73
+ uintptr_t ra) \
74
+{ \
75
+ intptr_t i = 0; \
76
+ do { \
77
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
78
+ do { \
79
+ TYPEM m = 0; \
80
+ if (pg & 1) { \
81
+ m = FN(env, addr, ra); \
82
+ } \
83
+ *(TYPEE *)(vd + H(i)) = m; \
84
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
85
+ addr += sizeof(TYPEM); \
86
+ } while (i & 15); \
87
+ } while (i < oprsz); \
88
+} \
89
+void HELPER(NAME)(CPUARMState *env, void *vg, \
90
+ target_ulong addr, uint32_t desc) \
91
+{ \
92
+ do_##NAME(env, &env->vfp.zregs[simd_data(desc)], vg, \
93
+ addr, simd_oprsz(desc), GETPC()); \
94
+}
95
+
96
+#define DO_LD2(NAME, FN, TYPEE, TYPEM, H) \
97
+void HELPER(NAME)(CPUARMState *env, void *vg, \
98
+ target_ulong addr, uint32_t desc) \
99
+{ \
100
+ intptr_t i, oprsz = simd_oprsz(desc); \
101
+ intptr_t ra = GETPC(); \
102
+ unsigned rd = simd_data(desc); \
103
+ void *d1 = &env->vfp.zregs[rd]; \
104
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
105
+ for (i = 0; i < oprsz; ) { \
106
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
107
+ do { \
108
+ TYPEM m1 = 0, m2 = 0; \
109
+ if (pg & 1) { \
110
+ m1 = FN(env, addr, ra); \
111
+ m2 = FN(env, addr + sizeof(TYPEM), ra); \
112
+ } \
113
+ *(TYPEE *)(d1 + H(i)) = m1; \
114
+ *(TYPEE *)(d2 + H(i)) = m2; \
115
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
116
+ addr += 2 * sizeof(TYPEM); \
117
+ } while (i & 15); \
118
+ } \
119
+}
120
+
121
+#define DO_LD3(NAME, FN, TYPEE, TYPEM, H) \
122
+void HELPER(NAME)(CPUARMState *env, void *vg, \
123
+ target_ulong addr, uint32_t desc) \
124
+{ \
125
+ intptr_t i, oprsz = simd_oprsz(desc); \
126
+ intptr_t ra = GETPC(); \
127
+ unsigned rd = simd_data(desc); \
128
+ void *d1 = &env->vfp.zregs[rd]; \
129
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
130
+ void *d3 = &env->vfp.zregs[(rd + 2) & 31]; \
131
+ for (i = 0; i < oprsz; ) { \
132
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
133
+ do { \
134
+ TYPEM m1 = 0, m2 = 0, m3 = 0; \
135
+ if (pg & 1) { \
136
+ m1 = FN(env, addr, ra); \
137
+ m2 = FN(env, addr + sizeof(TYPEM), ra); \
138
+ m3 = FN(env, addr + 2 * sizeof(TYPEM), ra); \
139
+ } \
140
+ *(TYPEE *)(d1 + H(i)) = m1; \
141
+ *(TYPEE *)(d2 + H(i)) = m2; \
142
+ *(TYPEE *)(d3 + H(i)) = m3; \
143
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
144
+ addr += 3 * sizeof(TYPEM); \
145
+ } while (i & 15); \
146
+ } \
147
+}
148
+
149
+#define DO_LD4(NAME, FN, TYPEE, TYPEM, H) \
150
+void HELPER(NAME)(CPUARMState *env, void *vg, \
151
+ target_ulong addr, uint32_t desc) \
152
+{ \
153
+ intptr_t i, oprsz = simd_oprsz(desc); \
154
+ intptr_t ra = GETPC(); \
155
+ unsigned rd = simd_data(desc); \
156
+ void *d1 = &env->vfp.zregs[rd]; \
157
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
158
+ void *d3 = &env->vfp.zregs[(rd + 2) & 31]; \
159
+ void *d4 = &env->vfp.zregs[(rd + 3) & 31]; \
160
+ for (i = 0; i < oprsz; ) { \
161
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
162
+ do { \
163
+ TYPEM m1 = 0, m2 = 0, m3 = 0, m4 = 0; \
164
+ if (pg & 1) { \
165
+ m1 = FN(env, addr, ra); \
166
+ m2 = FN(env, addr + sizeof(TYPEM), ra); \
167
+ m3 = FN(env, addr + 2 * sizeof(TYPEM), ra); \
168
+ m4 = FN(env, addr + 3 * sizeof(TYPEM), ra); \
169
+ } \
170
+ *(TYPEE *)(d1 + H(i)) = m1; \
171
+ *(TYPEE *)(d2 + H(i)) = m2; \
172
+ *(TYPEE *)(d3 + H(i)) = m3; \
173
+ *(TYPEE *)(d4 + H(i)) = m4; \
174
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
175
+ addr += 4 * sizeof(TYPEM); \
176
+ } while (i & 15); \
177
+ } \
178
+}
179
+
180
+DO_LD1(sve_ld1bhu_r, cpu_ldub_data_ra, uint16_t, uint8_t, H1_2)
181
+DO_LD1(sve_ld1bhs_r, cpu_ldsb_data_ra, uint16_t, int8_t, H1_2)
182
+DO_LD1(sve_ld1bsu_r, cpu_ldub_data_ra, uint32_t, uint8_t, H1_4)
183
+DO_LD1(sve_ld1bss_r, cpu_ldsb_data_ra, uint32_t, int8_t, H1_4)
184
+DO_LD1(sve_ld1bdu_r, cpu_ldub_data_ra, uint64_t, uint8_t, )
185
+DO_LD1(sve_ld1bds_r, cpu_ldsb_data_ra, uint64_t, int8_t, )
186
+
187
+DO_LD1(sve_ld1hsu_r, cpu_lduw_data_ra, uint32_t, uint16_t, H1_4)
188
+DO_LD1(sve_ld1hss_r, cpu_ldsw_data_ra, uint32_t, int8_t, H1_4)
189
+DO_LD1(sve_ld1hdu_r, cpu_lduw_data_ra, uint64_t, uint16_t, )
190
+DO_LD1(sve_ld1hds_r, cpu_ldsw_data_ra, uint64_t, int16_t, )
191
+
192
+DO_LD1(sve_ld1sdu_r, cpu_ldl_data_ra, uint64_t, uint32_t, )
193
+DO_LD1(sve_ld1sds_r, cpu_ldl_data_ra, uint64_t, int32_t, )
194
+
195
+DO_LD1(sve_ld1bb_r, cpu_ldub_data_ra, uint8_t, uint8_t, H1)
196
+DO_LD2(sve_ld2bb_r, cpu_ldub_data_ra, uint8_t, uint8_t, H1)
197
+DO_LD3(sve_ld3bb_r, cpu_ldub_data_ra, uint8_t, uint8_t, H1)
198
+DO_LD4(sve_ld4bb_r, cpu_ldub_data_ra, uint8_t, uint8_t, H1)
199
+
200
+DO_LD1(sve_ld1hh_r, cpu_lduw_data_ra, uint16_t, uint16_t, H1_2)
201
+DO_LD2(sve_ld2hh_r, cpu_lduw_data_ra, uint16_t, uint16_t, H1_2)
202
+DO_LD3(sve_ld3hh_r, cpu_lduw_data_ra, uint16_t, uint16_t, H1_2)
203
+DO_LD4(sve_ld4hh_r, cpu_lduw_data_ra, uint16_t, uint16_t, H1_2)
204
+
205
+DO_LD1(sve_ld1ss_r, cpu_ldl_data_ra, uint32_t, uint32_t, H1_4)
206
+DO_LD2(sve_ld2ss_r, cpu_ldl_data_ra, uint32_t, uint32_t, H1_4)
207
+DO_LD3(sve_ld3ss_r, cpu_ldl_data_ra, uint32_t, uint32_t, H1_4)
208
+DO_LD4(sve_ld4ss_r, cpu_ldl_data_ra, uint32_t, uint32_t, H1_4)
209
+
210
+DO_LD1(sve_ld1dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
211
+DO_LD2(sve_ld2dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
212
+DO_LD3(sve_ld3dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
213
+DO_LD4(sve_ld4dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
214
+
215
+#undef DO_LD1
216
+#undef DO_LD2
217
+#undef DO_LD3
218
+#undef DO_LD4
219
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
220
index XXXXXXX..XXXXXXX 100644
221
--- a/target/arm/translate-sve.c
222
+++ b/target/arm/translate-sve.c
223
@@ -XXX,XX +XXX,XX @@ typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
224
typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
225
TCGv_ptr, TCGv_ptr, TCGv_i32);
226
227
+typedef void gen_helper_gvec_mem(TCGv_env, TCGv_ptr, TCGv_i64, TCGv_i32);
228
+
229
/*
230
* Helpers for extracting complex instruction fields.
231
*/
232
@@ -XXX,XX +XXX,XX @@ static inline int expand_imm_sh8u(int x)
233
return (uint8_t)x << (x & 0x100 ? 8 : 0);
234
}
235
236
+/* Convert a 2-bit memory size (msz) to a 4-bit data type (dtype)
237
+ * with unsigned data. C.f. SVE Memory Contiguous Load Group.
238
+ */
239
+static inline int msz_dtype(int msz)
240
+{
241
+ static const uint8_t dtype[4] = { 0, 5, 10, 15 };
242
+ return dtype[msz];
243
+}
244
+
245
/*
246
* Include the generated decoder.
247
*/
248
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
249
}
250
return true;
251
}
252
+
253
+/*
254
+ *** SVE Memory - Contiguous Load Group
255
+ */
256
+
257
+/* The memory mode of the dtype. */
258
+static const TCGMemOp dtype_mop[16] = {
259
+ MO_UB, MO_UB, MO_UB, MO_UB,
260
+ MO_SL, MO_UW, MO_UW, MO_UW,
261
+ MO_SW, MO_SW, MO_UL, MO_UL,
262
+ MO_SB, MO_SB, MO_SB, MO_Q
263
+};
264
+
265
+#define dtype_msz(x) (dtype_mop[x] & MO_SIZE)
266
+
267
+/* The vector element size of dtype. */
268
+static const uint8_t dtype_esz[16] = {
269
+ 0, 1, 2, 3,
270
+ 3, 1, 2, 3,
271
+ 3, 2, 2, 3,
272
+ 3, 2, 1, 3
273
+};
274
+
275
+static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
276
+ gen_helper_gvec_mem *fn)
277
+{
278
+ unsigned vsz = vec_full_reg_size(s);
279
+ TCGv_ptr t_pg;
280
+ TCGv_i32 desc;
281
+
282
+ /* For e.g. LD4, there are not enough arguments to pass all 4
283
+ * registers as pointers, so encode the regno into the data field.
284
+ * For consistency, do this even for LD1.
285
+ */
286
+ desc = tcg_const_i32(simd_desc(vsz, vsz, zt));
287
+ t_pg = tcg_temp_new_ptr();
288
+
289
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
290
+ fn(cpu_env, t_pg, addr, desc);
291
+
292
+ tcg_temp_free_ptr(t_pg);
293
+ tcg_temp_free_i32(desc);
294
+}
295
+
296
+static void do_ld_zpa(DisasContext *s, int zt, int pg,
297
+ TCGv_i64 addr, int dtype, int nreg)
298
+{
299
+ static gen_helper_gvec_mem * const fns[16][4] = {
300
+ { gen_helper_sve_ld1bb_r, gen_helper_sve_ld2bb_r,
301
+ gen_helper_sve_ld3bb_r, gen_helper_sve_ld4bb_r },
302
+ { gen_helper_sve_ld1bhu_r, NULL, NULL, NULL },
303
+ { gen_helper_sve_ld1bsu_r, NULL, NULL, NULL },
304
+ { gen_helper_sve_ld1bdu_r, NULL, NULL, NULL },
305
+
306
+ { gen_helper_sve_ld1sds_r, NULL, NULL, NULL },
307
+ { gen_helper_sve_ld1hh_r, gen_helper_sve_ld2hh_r,
308
+ gen_helper_sve_ld3hh_r, gen_helper_sve_ld4hh_r },
309
+ { gen_helper_sve_ld1hsu_r, NULL, NULL, NULL },
310
+ { gen_helper_sve_ld1hdu_r, NULL, NULL, NULL },
311
+
312
+ { gen_helper_sve_ld1hds_r, NULL, NULL, NULL },
313
+ { gen_helper_sve_ld1hss_r, NULL, NULL, NULL },
314
+ { gen_helper_sve_ld1ss_r, gen_helper_sve_ld2ss_r,
315
+ gen_helper_sve_ld3ss_r, gen_helper_sve_ld4ss_r },
316
+ { gen_helper_sve_ld1sdu_r, NULL, NULL, NULL },
317
+
318
+ { gen_helper_sve_ld1bds_r, NULL, NULL, NULL },
319
+ { gen_helper_sve_ld1bss_r, NULL, NULL, NULL },
320
+ { gen_helper_sve_ld1bhs_r, NULL, NULL, NULL },
321
+ { gen_helper_sve_ld1dd_r, gen_helper_sve_ld2dd_r,
322
+ gen_helper_sve_ld3dd_r, gen_helper_sve_ld4dd_r },
323
+ };
324
+ gen_helper_gvec_mem *fn = fns[dtype][nreg];
325
+
326
+ /* While there are holes in the table, they are not
327
+ * accessible via the instruction encoding.
328
+ */
329
+ assert(fn != NULL);
330
+ do_mem_zpa(s, zt, pg, addr, fn);
331
+}
332
+
333
+static bool trans_LD_zprr(DisasContext *s, arg_rprr_load *a, uint32_t insn)
334
+{
335
+ if (a->rm == 31) {
336
+ return false;
337
+ }
338
+ if (sve_access_check(s)) {
339
+ TCGv_i64 addr = new_tmp_a64(s);
340
+ tcg_gen_muli_i64(addr, cpu_reg(s, a->rm),
341
+ (a->nreg + 1) << dtype_msz(a->dtype));
342
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
343
+ do_ld_zpa(s, a->rd, a->pg, addr, a->dtype, a->nreg);
344
+ }
345
+ return true;
346
+}
347
+
348
+static bool trans_LD_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
349
+{
350
+ if (sve_access_check(s)) {
351
+ int vsz = vec_full_reg_size(s);
352
+ int elements = vsz >> dtype_esz[a->dtype];
353
+ TCGv_i64 addr = new_tmp_a64(s);
354
+
355
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn),
356
+ (a->imm * elements * (a->nreg + 1))
357
+ << dtype_msz(a->dtype));
358
+ do_ld_zpa(s, a->rd, a->pg, addr, a->dtype, a->nreg);
359
+ }
360
+ return true;
361
+}
362
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
363
index XXXXXXX..XXXXXXX 100644
364
--- a/target/arm/sve.decode
365
+++ b/target/arm/sve.decode
366
@@ -XXX,XX +XXX,XX @@
367
# Unsigned 8-bit immediate, optionally shifted left by 8.
368
%sh8_i8u 5:9 !function=expand_imm_sh8u
369
370
+# Unsigned load of msz into esz=2, represented as a dtype.
371
+%msz_dtype 23:2 !function=msz_dtype
372
+
373
# Either a copy of rd (at bit 0), or a different source
374
# as propagated via the MOVPRFX instruction.
375
%reg_movprfx 0:5
376
@@ -XXX,XX +XXX,XX @@
377
&incdec2_cnt rd rn pat esz imm d u
378
&incdec_pred rd pg esz d u
379
&incdec2_pred rd rn pg esz d u
380
+&rprr_load rd pg rn rm dtype nreg
381
+&rpri_load rd pg rn imm dtype nreg
382
383
###########################################################################
384
# Named instruction formats. These are generally used to
385
@@ -XXX,XX +XXX,XX @@
386
@incdec2_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 \
387
&incdec2_pred rn=%reg_movprfx
388
389
+# Loads; user must fill in NREG.
390
+@rprr_load_dt ....... dtype:4 rm:5 ... pg:3 rn:5 rd:5 &rprr_load
391
+@rpri_load_dt ....... dtype:4 . imm:s4 ... pg:3 rn:5 rd:5 &rpri_load
392
+
393
+@rprr_load_msz ....... .... rm:5 ... pg:3 rn:5 rd:5 \
394
+ &rprr_load dtype=%msz_dtype
395
+@rpri_load_msz ....... .... . imm:s4 ... pg:3 rn:5 rd:5 \
396
+ &rpri_load dtype=%msz_dtype
397
+
398
###########################################################################
399
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
400
401
@@ -XXX,XX +XXX,XX @@ LDR_pri 10000101 10 ...... 000 ... ..... 0 .... @pd_rn_i9
402
403
# SVE load vector register
404
LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9
405
+
406
+### SVE Memory Contiguous Load Group
407
+
408
+# SVE contiguous load (scalar plus scalar)
409
+LD_zprr 1010010 .... ..... 010 ... ..... ..... @rprr_load_dt nreg=0
410
+
411
+# SVE contiguous load (scalar plus immediate)
412
+LD_zpri 1010010 .... 0.... 101 ... ..... ..... @rpri_load_dt nreg=0
413
+
414
+# SVE contiguous non-temporal load (scalar plus scalar)
415
+# LDNT1B, LDNT1H, LDNT1W, LDNT1D
416
+# SVE load multiple structures (scalar plus scalar)
417
+# LD2B, LD2H, LD2W, LD2D; etc.
418
+LD_zprr 1010010 .. nreg:2 ..... 110 ... ..... ..... @rprr_load_msz
419
+
420
+# SVE contiguous non-temporal load (scalar plus immediate)
421
+# LDNT1B, LDNT1H, LDNT1W, LDNT1D
422
+# SVE load multiple structures (scalar plus immediate)
423
+# LD2B, LD2H, LD2W, LD2D; etc.
424
+LD_zpri 1010010 .. nreg:2 0.... 111 ... ..... ..... @rpri_load_msz
425
--
426
2.17.1
427
428
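One detail of the dropped SVE load patch above worth spelling out: in trans_LD_zprr() the index register is scaled by (nreg + 1) << msz, i.e. the number of structure registers times the memory element size. A sketch of that arithmetic for an LD4H-style access:

    #include <assert.h>

    int main(void)
    {
        int nreg = 3;   /* three registers beyond the first, as for LD4 */
        int msz = 1;    /* log2 of the element size: 2-byte elements */

        assert(((nreg + 1) << msz) == 8);   /* 4 structures * 2 bytes per step */
        return 0;
    }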
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Tested-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20180627043328.11531-3-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/helper-sve.h | 40 ++++++++++
10
target/arm/sve_helper.c | 157 +++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 69 ++++++++++++++++
12
target/arm/sve.decode | 6 ++
13
4 files changed, 272 insertions(+)
14
15
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-sve.h
18
+++ b/target/arm/helper-sve.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_ld1hds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
20
21
DEF_HELPER_FLAGS_4(sve_ld1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
22
DEF_HELPER_FLAGS_4(sve_ld1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
23
+
24
+DEF_HELPER_FLAGS_4(sve_ldff1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
25
+DEF_HELPER_FLAGS_4(sve_ldff1bhu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
26
+DEF_HELPER_FLAGS_4(sve_ldff1bsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
27
+DEF_HELPER_FLAGS_4(sve_ldff1bdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
28
+DEF_HELPER_FLAGS_4(sve_ldff1bhs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
29
+DEF_HELPER_FLAGS_4(sve_ldff1bss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
30
+DEF_HELPER_FLAGS_4(sve_ldff1bds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve_ldff1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
33
+DEF_HELPER_FLAGS_4(sve_ldff1hsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
34
+DEF_HELPER_FLAGS_4(sve_ldff1hdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
35
+DEF_HELPER_FLAGS_4(sve_ldff1hss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
36
+DEF_HELPER_FLAGS_4(sve_ldff1hds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
37
+
38
+DEF_HELPER_FLAGS_4(sve_ldff1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
39
+DEF_HELPER_FLAGS_4(sve_ldff1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
40
+DEF_HELPER_FLAGS_4(sve_ldff1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
41
+
42
+DEF_HELPER_FLAGS_4(sve_ldff1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
43
+
44
+DEF_HELPER_FLAGS_4(sve_ldnf1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
45
+DEF_HELPER_FLAGS_4(sve_ldnf1bhu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
46
+DEF_HELPER_FLAGS_4(sve_ldnf1bsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
47
+DEF_HELPER_FLAGS_4(sve_ldnf1bdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
48
+DEF_HELPER_FLAGS_4(sve_ldnf1bhs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
49
+DEF_HELPER_FLAGS_4(sve_ldnf1bss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
50
+DEF_HELPER_FLAGS_4(sve_ldnf1bds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
51
+
52
+DEF_HELPER_FLAGS_4(sve_ldnf1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
53
+DEF_HELPER_FLAGS_4(sve_ldnf1hsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
54
+DEF_HELPER_FLAGS_4(sve_ldnf1hdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
55
+DEF_HELPER_FLAGS_4(sve_ldnf1hss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
56
+DEF_HELPER_FLAGS_4(sve_ldnf1hds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
57
+
58
+DEF_HELPER_FLAGS_4(sve_ldnf1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
59
+DEF_HELPER_FLAGS_4(sve_ldnf1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
60
+DEF_HELPER_FLAGS_4(sve_ldnf1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
61
+
62
+DEF_HELPER_FLAGS_4(sve_ldnf1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
63
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/sve_helper.c
66
+++ b/target/arm/sve_helper.c
67
@@ -XXX,XX +XXX,XX @@ DO_LD4(sve_ld4dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
68
#undef DO_LD2
69
#undef DO_LD3
70
#undef DO_LD4
71
+
72
+/*
73
+ * Load contiguous data, first-fault and no-fault.
74
+ */
75
+
76
+#ifdef CONFIG_USER_ONLY
77
+
78
+/* Fault on byte I. All bits in FFR from I are cleared. The vector
79
+ * result from I is CONSTRAINED UNPREDICTABLE; we choose the MERGE
80
+ * option, which leaves subsequent data unchanged.
81
+ */
82
+static void record_fault(CPUARMState *env, uintptr_t i, uintptr_t oprsz)
83
+{
84
+ uint64_t *ffr = env->vfp.pregs[FFR_PRED_NUM].p;
85
+
86
+ if (i & 63) {
87
+ ffr[i / 64] &= MAKE_64BIT_MASK(0, i & 63);
88
+ i = ROUND_UP(i, 64);
89
+ }
90
+ for (; i < oprsz; i += 64) {
91
+ ffr[i / 64] = 0;
92
+ }
93
+}
94
+
95
+/* Hold the mmap lock during the operation so that there is no race
96
+ * between page_check_range and the load operation. We expect the
97
+ * usual case to have no faults at all, so we check the whole range
98
+ * first and if successful defer to the normal load operation.
99
+ *
100
+ * TODO: Change mmap_lock to a rwlock so that multiple readers
101
+ * can run simultaneously. This will probably help other uses
102
+ * within QEMU as well.
103
+ */
104
+#define DO_LDFF1(PART, FN, TYPEE, TYPEM, H) \
105
+static void do_sve_ldff1##PART(CPUARMState *env, void *vd, void *vg, \
106
+ target_ulong addr, intptr_t oprsz, \
107
+ bool first, uintptr_t ra) \
108
+{ \
109
+ intptr_t i = 0; \
110
+ do { \
111
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
112
+ do { \
113
+ TYPEM m = 0; \
114
+ if (pg & 1) { \
115
+ if (!first && \
116
+ unlikely(page_check_range(addr, sizeof(TYPEM), \
117
+ PAGE_READ))) { \
118
+ record_fault(env, i, oprsz); \
119
+ return; \
120
+ } \
121
+ m = FN(env, addr, ra); \
122
+ first = false; \
123
+ } \
124
+ *(TYPEE *)(vd + H(i)) = m; \
125
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
126
+ addr += sizeof(TYPEM); \
127
+ } while (i & 15); \
128
+ } while (i < oprsz); \
129
+} \
130
+void HELPER(sve_ldff1##PART)(CPUARMState *env, void *vg, \
131
+ target_ulong addr, uint32_t desc) \
132
+{ \
133
+ intptr_t oprsz = simd_oprsz(desc); \
134
+ unsigned rd = simd_data(desc); \
135
+ void *vd = &env->vfp.zregs[rd]; \
136
+ mmap_lock(); \
137
+ if (likely(page_check_range(addr, oprsz, PAGE_READ) == 0)) { \
138
+ do_sve_ld1##PART(env, vd, vg, addr, oprsz, GETPC()); \
139
+ } else { \
140
+ do_sve_ldff1##PART(env, vd, vg, addr, oprsz, true, GETPC()); \
141
+ } \
142
+ mmap_unlock(); \
143
+}
144
+
145
+/* No-fault loads are like first-fault loads without the
146
+ * first faulting special case.
147
+ */
148
+#define DO_LDNF1(PART) \
149
+void HELPER(sve_ldnf1##PART)(CPUARMState *env, void *vg, \
150
+ target_ulong addr, uint32_t desc) \
151
+{ \
152
+ intptr_t oprsz = simd_oprsz(desc); \
153
+ unsigned rd = simd_data(desc); \
154
+ void *vd = &env->vfp.zregs[rd]; \
155
+ mmap_lock(); \
156
+ if (likely(page_check_range(addr, oprsz, PAGE_READ) == 0)) { \
157
+ do_sve_ld1##PART(env, vd, vg, addr, oprsz, GETPC()); \
158
+ } else { \
159
+ do_sve_ldff1##PART(env, vd, vg, addr, oprsz, false, GETPC()); \
160
+ } \
161
+ mmap_unlock(); \
162
+}
163
+
164
+#else
165
+
166
+/* TODO: System mode is not yet supported.
167
+ * This would probably use tlb_vaddr_to_host.
168
+ */
169
+#define DO_LDFF1(PART, FN, TYPEE, TYPEM, H) \
170
+void HELPER(sve_ldff1##PART)(CPUARMState *env, void *vg, \
171
+ target_ulong addr, uint32_t desc) \
172
+{ \
173
+ g_assert_not_reached(); \
174
+}
175
+
176
+#define DO_LDNF1(PART) \
177
+void HELPER(sve_ldnf1##PART)(CPUARMState *env, void *vg, \
178
+ target_ulong addr, uint32_t desc) \
179
+{ \
180
+ g_assert_not_reached(); \
181
+}
182
+
183
+#endif
184
+
185
+DO_LDFF1(bb_r, cpu_ldub_data_ra, uint8_t, uint8_t, H1)
186
+DO_LDFF1(bhu_r, cpu_ldub_data_ra, uint16_t, uint8_t, H1_2)
187
+DO_LDFF1(bhs_r, cpu_ldsb_data_ra, uint16_t, int8_t, H1_2)
188
+DO_LDFF1(bsu_r, cpu_ldub_data_ra, uint32_t, uint8_t, H1_4)
189
+DO_LDFF1(bss_r, cpu_ldsb_data_ra, uint32_t, int8_t, H1_4)
190
+DO_LDFF1(bdu_r, cpu_ldub_data_ra, uint64_t, uint8_t, )
191
+DO_LDFF1(bds_r, cpu_ldsb_data_ra, uint64_t, int8_t, )
192
+
193
+DO_LDFF1(hh_r, cpu_lduw_data_ra, uint16_t, uint16_t, H1_2)
194
+DO_LDFF1(hsu_r, cpu_lduw_data_ra, uint32_t, uint16_t, H1_4)
195
+DO_LDFF1(hss_r, cpu_ldsw_data_ra, uint32_t, int16_t, H1_4)
196
+DO_LDFF1(hdu_r, cpu_lduw_data_ra, uint64_t, uint16_t, )
197
+DO_LDFF1(hds_r, cpu_ldsw_data_ra, uint64_t, int16_t, )
198
+
199
+DO_LDFF1(ss_r, cpu_ldl_data_ra, uint32_t, uint32_t, H1_4)
200
+DO_LDFF1(sdu_r, cpu_ldl_data_ra, uint64_t, uint32_t, )
201
+DO_LDFF1(sds_r, cpu_ldl_data_ra, uint64_t, int32_t, )
202
+
203
+DO_LDFF1(dd_r, cpu_ldq_data_ra, uint64_t, uint64_t, )
204
+
205
+#undef DO_LDFF1
206
+
207
+DO_LDNF1(bb_r)
208
+DO_LDNF1(bhu_r)
209
+DO_LDNF1(bhs_r)
210
+DO_LDNF1(bsu_r)
211
+DO_LDNF1(bss_r)
212
+DO_LDNF1(bdu_r)
213
+DO_LDNF1(bds_r)
214
+
215
+DO_LDNF1(hh_r)
216
+DO_LDNF1(hsu_r)
217
+DO_LDNF1(hss_r)
218
+DO_LDNF1(hdu_r)
219
+DO_LDNF1(hds_r)
220
+
221
+DO_LDNF1(ss_r)
222
+DO_LDNF1(sdu_r)
223
+DO_LDNF1(sds_r)
224
+
225
+DO_LDNF1(dd_r)
226
+
227
+#undef DO_LDNF1
228
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
229
index XXXXXXX..XXXXXXX 100644
230
--- a/target/arm/translate-sve.c
231
+++ b/target/arm/translate-sve.c
232
@@ -XXX,XX +XXX,XX @@ static bool trans_LD_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
233
}
234
return true;
235
}
236
+
237
+static bool trans_LDFF1_zprr(DisasContext *s, arg_rprr_load *a, uint32_t insn)
238
+{
239
+ static gen_helper_gvec_mem * const fns[16] = {
240
+ gen_helper_sve_ldff1bb_r,
241
+ gen_helper_sve_ldff1bhu_r,
242
+ gen_helper_sve_ldff1bsu_r,
243
+ gen_helper_sve_ldff1bdu_r,
244
+
245
+ gen_helper_sve_ldff1sds_r,
246
+ gen_helper_sve_ldff1hh_r,
247
+ gen_helper_sve_ldff1hsu_r,
248
+ gen_helper_sve_ldff1hdu_r,
249
+
250
+ gen_helper_sve_ldff1hds_r,
251
+ gen_helper_sve_ldff1hss_r,
252
+ gen_helper_sve_ldff1ss_r,
253
+ gen_helper_sve_ldff1sdu_r,
254
+
255
+ gen_helper_sve_ldff1bds_r,
256
+ gen_helper_sve_ldff1bss_r,
257
+ gen_helper_sve_ldff1bhs_r,
258
+ gen_helper_sve_ldff1dd_r,
259
+ };
260
+
261
+ if (sve_access_check(s)) {
262
+ TCGv_i64 addr = new_tmp_a64(s);
263
+ tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), dtype_msz(a->dtype));
264
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
265
+ do_mem_zpa(s, a->rd, a->pg, addr, fns[a->dtype]);
266
+ }
267
+ return true;
268
+}
269
+
270
+static bool trans_LDNF1_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
271
+{
272
+ static gen_helper_gvec_mem * const fns[16] = {
273
+ gen_helper_sve_ldnf1bb_r,
274
+ gen_helper_sve_ldnf1bhu_r,
275
+ gen_helper_sve_ldnf1bsu_r,
276
+ gen_helper_sve_ldnf1bdu_r,
277
+
278
+ gen_helper_sve_ldnf1sds_r,
279
+ gen_helper_sve_ldnf1hh_r,
280
+ gen_helper_sve_ldnf1hsu_r,
281
+ gen_helper_sve_ldnf1hdu_r,
282
+
283
+ gen_helper_sve_ldnf1hds_r,
284
+ gen_helper_sve_ldnf1hss_r,
285
+ gen_helper_sve_ldnf1ss_r,
286
+ gen_helper_sve_ldnf1sdu_r,
287
+
288
+ gen_helper_sve_ldnf1bds_r,
289
+ gen_helper_sve_ldnf1bss_r,
290
+ gen_helper_sve_ldnf1bhs_r,
291
+ gen_helper_sve_ldnf1dd_r,
292
+ };
293
+
294
+ if (sve_access_check(s)) {
295
+ int vsz = vec_full_reg_size(s);
296
+ int elements = vsz >> dtype_esz[a->dtype];
297
+ int off = (a->imm * elements) << dtype_msz(a->dtype);
298
+ TCGv_i64 addr = new_tmp_a64(s);
299
+
300
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn), off);
301
+ do_mem_zpa(s, a->rd, a->pg, addr, fns[a->dtype]);
302
+ }
303
+ return true;
304
+}
305
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
306
index XXXXXXX..XXXXXXX 100644
307
--- a/target/arm/sve.decode
308
+++ b/target/arm/sve.decode
309
@@ -XXX,XX +XXX,XX @@ LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9
310
# SVE contiguous load (scalar plus scalar)
311
LD_zprr 1010010 .... ..... 010 ... ..... ..... @rprr_load_dt nreg=0
312
313
+# SVE contiguous first-fault load (scalar plus scalar)
314
+LDFF1_zprr 1010010 .... ..... 011 ... ..... ..... @rprr_load_dt nreg=0
315
+
316
# SVE contiguous load (scalar plus immediate)
317
LD_zpri 1010010 .... 0.... 101 ... ..... ..... @rpri_load_dt nreg=0
318
319
+# SVE contiguous non-fault load (scalar plus immediate)
320
+LDNF1_zpri 1010010 .... 1.... 101 ... ..... ..... @rpri_load_dt nreg=0
321
+
322
# SVE contiguous non-temporal load (scalar plus scalar)
323
# LDNT1B, LDNT1H, LDNT1W, LDNT1D
324
# SVE load multiple structures (scalar plus scalar)
325
--
326
2.17.1
327
328
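A note on the patch above: once element I faults, record_fault() must zero every FFR bit from I upwards. A minimal standalone sketch of that bit manipulation (the function and names here are illustrative, not part of the patch):

    #include <stdint.h>

    /* Clear all predicate bits at and above bit 'i' in an FFR stored
     * as 64-bit words covering 'oprsz' bits: mask off the word that
     * contains bit i, then zero every whole word above it.
     */
    static void clear_ffr_from(uint64_t *ffr, uintptr_t i, uintptr_t oprsz)
    {
        if (i % 64) {
            ffr[i / 64] &= ((uint64_t)1 << (i % 64)) - 1;  /* keep bits below i */
            i = (i + 63) & ~(uintptr_t)63;                 /* next word boundary */
        }
        for (; i < oprsz; i += 64) {
            ffr[i / 64] = 0;
        }
    }

This is exactly the MAKE_64BIT_MASK/ROUND_UP sequence in record_fault(), written out with plain arithmetic.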
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180627043328.11531-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 29 +++++
9
target/arm/sve_helper.c | 211 +++++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 65 ++++++++++++
11
target/arm/sve.decode | 38 +++++++
12
4 files changed, 343 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_ldnf1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
19
DEF_HELPER_FLAGS_4(sve_ldnf1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
20
21
DEF_HELPER_FLAGS_4(sve_ldnf1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
22
+
23
+DEF_HELPER_FLAGS_4(sve_st1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
24
+DEF_HELPER_FLAGS_4(sve_st2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
25
+DEF_HELPER_FLAGS_4(sve_st3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
26
+DEF_HELPER_FLAGS_4(sve_st4bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
27
+
28
+DEF_HELPER_FLAGS_4(sve_st1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
29
+DEF_HELPER_FLAGS_4(sve_st2hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
30
+DEF_HELPER_FLAGS_4(sve_st3hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
31
+DEF_HELPER_FLAGS_4(sve_st4hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
32
+
33
+DEF_HELPER_FLAGS_4(sve_st1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
34
+DEF_HELPER_FLAGS_4(sve_st2ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
35
+DEF_HELPER_FLAGS_4(sve_st3ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
36
+DEF_HELPER_FLAGS_4(sve_st4ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
37
+
38
+DEF_HELPER_FLAGS_4(sve_st1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
39
+DEF_HELPER_FLAGS_4(sve_st2dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
40
+DEF_HELPER_FLAGS_4(sve_st3dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
41
+DEF_HELPER_FLAGS_4(sve_st4dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
42
+
43
+DEF_HELPER_FLAGS_4(sve_st1bh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
44
+DEF_HELPER_FLAGS_4(sve_st1bs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
45
+DEF_HELPER_FLAGS_4(sve_st1bd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
46
+
47
+DEF_HELPER_FLAGS_4(sve_st1hs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
48
+DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
49
+
50
+DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
51
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/sve_helper.c
54
+++ b/target/arm/sve_helper.c
55
@@ -XXX,XX +XXX,XX @@ DO_LDNF1(sds_r)
56
DO_LDNF1(dd_r)
57
58
#undef DO_LDNF1
59
+
60
+/*
61
+ * Store contiguous data, protected by a governing predicate.
62
+ */
63
+#define DO_ST1(NAME, FN, TYPEE, TYPEM, H) \
64
+void HELPER(NAME)(CPUARMState *env, void *vg, \
65
+ target_ulong addr, uint32_t desc) \
66
+{ \
67
+ intptr_t i, oprsz = simd_oprsz(desc); \
68
+ intptr_t ra = GETPC(); \
69
+ unsigned rd = simd_data(desc); \
70
+ void *vd = &env->vfp.zregs[rd]; \
71
+ for (i = 0; i < oprsz; ) { \
72
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
73
+ do { \
74
+ if (pg & 1) { \
75
+ TYPEM m = *(TYPEE *)(vd + H(i)); \
76
+ FN(env, addr, m, ra); \
77
+ } \
78
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
79
+ addr += sizeof(TYPEM); \
80
+ } while (i & 15); \
81
+ } \
82
+}
83
+
84
+#define DO_ST1_D(NAME, FN, TYPEM) \
85
+void HELPER(NAME)(CPUARMState *env, void *vg, \
86
+ target_ulong addr, uint32_t desc) \
87
+{ \
88
+ intptr_t i, oprsz = simd_oprsz(desc) / 8; \
89
+ intptr_t ra = GETPC(); \
90
+ unsigned rd = simd_data(desc); \
91
+ uint64_t *d = &env->vfp.zregs[rd].d[0]; \
92
+ uint8_t *pg = vg; \
93
+ for (i = 0; i < oprsz; i += 1) { \
94
+ if (pg[H1(i)] & 1) { \
95
+ FN(env, addr, d[i], ra); \
96
+ } \
97
+ addr += sizeof(TYPEM); \
98
+ } \
99
+}
100
+
101
+#define DO_ST2(NAME, FN, TYPEE, TYPEM, H) \
102
+void HELPER(NAME)(CPUARMState *env, void *vg, \
103
+ target_ulong addr, uint32_t desc) \
104
+{ \
105
+ intptr_t i, oprsz = simd_oprsz(desc); \
106
+ intptr_t ra = GETPC(); \
107
+ unsigned rd = simd_data(desc); \
108
+ void *d1 = &env->vfp.zregs[rd]; \
109
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
110
+ for (i = 0; i < oprsz; ) { \
111
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
112
+ do { \
113
+ if (pg & 1) { \
114
+ TYPEM m1 = *(TYPEE *)(d1 + H(i)); \
115
+ TYPEM m2 = *(TYPEE *)(d2 + H(i)); \
116
+ FN(env, addr, m1, ra); \
117
+ FN(env, addr + sizeof(TYPEM), m2, ra); \
118
+ } \
119
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
120
+ addr += 2 * sizeof(TYPEM); \
121
+ } while (i & 15); \
122
+ } \
123
+}
124
+
125
+#define DO_ST3(NAME, FN, TYPEE, TYPEM, H) \
126
+void HELPER(NAME)(CPUARMState *env, void *vg, \
127
+ target_ulong addr, uint32_t desc) \
128
+{ \
129
+ intptr_t i, oprsz = simd_oprsz(desc); \
130
+ intptr_t ra = GETPC(); \
131
+ unsigned rd = simd_data(desc); \
132
+ void *d1 = &env->vfp.zregs[rd]; \
133
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
134
+ void *d3 = &env->vfp.zregs[(rd + 2) & 31]; \
135
+ for (i = 0; i < oprsz; ) { \
136
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
137
+ do { \
138
+ if (pg & 1) { \
139
+ TYPEM m1 = *(TYPEE *)(d1 + H(i)); \
140
+ TYPEM m2 = *(TYPEE *)(d2 + H(i)); \
141
+ TYPEM m3 = *(TYPEE *)(d3 + H(i)); \
142
+ FN(env, addr, m1, ra); \
143
+ FN(env, addr + sizeof(TYPEM), m2, ra); \
144
+ FN(env, addr + 2 * sizeof(TYPEM), m3, ra); \
145
+ } \
146
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
147
+ addr += 3 * sizeof(TYPEM); \
148
+ } while (i & 15); \
149
+ } \
150
+}
151
+
152
+#define DO_ST4(NAME, FN, TYPEE, TYPEM, H) \
153
+void HELPER(NAME)(CPUARMState *env, void *vg, \
154
+ target_ulong addr, uint32_t desc) \
155
+{ \
156
+ intptr_t i, oprsz = simd_oprsz(desc); \
157
+ intptr_t ra = GETPC(); \
158
+ unsigned rd = simd_data(desc); \
159
+ void *d1 = &env->vfp.zregs[rd]; \
160
+ void *d2 = &env->vfp.zregs[(rd + 1) & 31]; \
161
+ void *d3 = &env->vfp.zregs[(rd + 2) & 31]; \
162
+ void *d4 = &env->vfp.zregs[(rd + 3) & 31]; \
163
+ for (i = 0; i < oprsz; ) { \
164
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
165
+ do { \
166
+ if (pg & 1) { \
167
+ TYPEM m1 = *(TYPEE *)(d1 + H(i)); \
168
+ TYPEM m2 = *(TYPEE *)(d2 + H(i)); \
169
+ TYPEM m3 = *(TYPEE *)(d3 + H(i)); \
170
+ TYPEM m4 = *(TYPEE *)(d4 + H(i)); \
171
+ FN(env, addr, m1, ra); \
172
+ FN(env, addr + sizeof(TYPEM), m2, ra); \
173
+ FN(env, addr + 2 * sizeof(TYPEM), m3, ra); \
174
+ FN(env, addr + 3 * sizeof(TYPEM), m4, ra); \
175
+ } \
176
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
177
+ addr += 4 * sizeof(TYPEM); \
178
+ } while (i & 15); \
179
+ } \
180
+}
181
+
182
+DO_ST1(sve_st1bh_r, cpu_stb_data_ra, uint16_t, uint8_t, H1_2)
183
+DO_ST1(sve_st1bs_r, cpu_stb_data_ra, uint32_t, uint8_t, H1_4)
184
+DO_ST1_D(sve_st1bd_r, cpu_stb_data_ra, uint8_t)
185
+
186
+DO_ST1(sve_st1hs_r, cpu_stw_data_ra, uint32_t, uint16_t, H1_4)
187
+DO_ST1_D(sve_st1hd_r, cpu_stw_data_ra, uint16_t)
188
+
189
+DO_ST1_D(sve_st1sd_r, cpu_stl_data_ra, uint32_t)
190
+
191
+DO_ST1(sve_st1bb_r, cpu_stb_data_ra, uint8_t, uint8_t, H1)
192
+DO_ST2(sve_st2bb_r, cpu_stb_data_ra, uint8_t, uint8_t, H1)
193
+DO_ST3(sve_st3bb_r, cpu_stb_data_ra, uint8_t, uint8_t, H1)
194
+DO_ST4(sve_st4bb_r, cpu_stb_data_ra, uint8_t, uint8_t, H1)
195
+
196
+DO_ST1(sve_st1hh_r, cpu_stw_data_ra, uint16_t, uint16_t, H1_2)
197
+DO_ST2(sve_st2hh_r, cpu_stw_data_ra, uint16_t, uint16_t, H1_2)
198
+DO_ST3(sve_st3hh_r, cpu_stw_data_ra, uint16_t, uint16_t, H1_2)
199
+DO_ST4(sve_st4hh_r, cpu_stw_data_ra, uint16_t, uint16_t, H1_2)
200
+
201
+DO_ST1(sve_st1ss_r, cpu_stl_data_ra, uint32_t, uint32_t, H1_4)
202
+DO_ST2(sve_st2ss_r, cpu_stl_data_ra, uint32_t, uint32_t, H1_4)
203
+DO_ST3(sve_st3ss_r, cpu_stl_data_ra, uint32_t, uint32_t, H1_4)
204
+DO_ST4(sve_st4ss_r, cpu_stl_data_ra, uint32_t, uint32_t, H1_4)
205
+
206
+DO_ST1_D(sve_st1dd_r, cpu_stq_data_ra, uint64_t)
207
+
208
+void HELPER(sve_st2dd_r)(CPUARMState *env, void *vg,
209
+ target_ulong addr, uint32_t desc)
210
+{
211
+ intptr_t i, oprsz = simd_oprsz(desc) / 8;
212
+ intptr_t ra = GETPC();
213
+ unsigned rd = simd_data(desc);
214
+ uint64_t *d1 = &env->vfp.zregs[rd].d[0];
215
+ uint64_t *d2 = &env->vfp.zregs[(rd + 1) & 31].d[0];
216
+ uint8_t *pg = vg;
217
+
218
+ for (i = 0; i < oprsz; i += 1) {
219
+ if (pg[H1(i)] & 1) {
220
+ cpu_stq_data_ra(env, addr, d1[i], ra);
221
+ cpu_stq_data_ra(env, addr + 8, d2[i], ra);
222
+ }
223
+ addr += 2 * 8;
224
+ }
225
+}
226
+
227
+void HELPER(sve_st3dd_r)(CPUARMState *env, void *vg,
228
+ target_ulong addr, uint32_t desc)
229
+{
230
+ intptr_t i, oprsz = simd_oprsz(desc) / 8;
231
+ intptr_t ra = GETPC();
232
+ unsigned rd = simd_data(desc);
233
+ uint64_t *d1 = &env->vfp.zregs[rd].d[0];
234
+ uint64_t *d2 = &env->vfp.zregs[(rd + 1) & 31].d[0];
235
+ uint64_t *d3 = &env->vfp.zregs[(rd + 2) & 31].d[0];
236
+ uint8_t *pg = vg;
237
+
238
+ for (i = 0; i < oprsz; i += 1) {
239
+ if (pg[H1(i)] & 1) {
240
+ cpu_stq_data_ra(env, addr, d1[i], ra);
241
+ cpu_stq_data_ra(env, addr + 8, d2[i], ra);
242
+ cpu_stq_data_ra(env, addr + 16, d3[i], ra);
243
+ }
244
+ addr += 3 * 8;
245
+ }
246
+}
247
+
248
+void HELPER(sve_st4dd_r)(CPUARMState *env, void *vg,
249
+ target_ulong addr, uint32_t desc)
250
+{
251
+ intptr_t i, oprsz = simd_oprsz(desc) / 8;
252
+ intptr_t ra = GETPC();
253
+ unsigned rd = simd_data(desc);
254
+ uint64_t *d1 = &env->vfp.zregs[rd].d[0];
255
+ uint64_t *d2 = &env->vfp.zregs[(rd + 1) & 31].d[0];
256
+ uint64_t *d3 = &env->vfp.zregs[(rd + 2) & 31].d[0];
257
+ uint64_t *d4 = &env->vfp.zregs[(rd + 3) & 31].d[0];
258
+ uint8_t *pg = vg;
259
+
260
+ for (i = 0; i < oprsz; i += 1) {
261
+ if (pg[H1(i)] & 1) {
262
+ cpu_stq_data_ra(env, addr, d1[i], ra);
263
+ cpu_stq_data_ra(env, addr + 8, d2[i], ra);
264
+ cpu_stq_data_ra(env, addr + 16, d3[i], ra);
265
+ cpu_stq_data_ra(env, addr + 24, d4[i], ra);
266
+ }
267
+ addr += 4 * 8;
268
+ }
269
+}
270
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
271
index XXXXXXX..XXXXXXX 100644
272
--- a/target/arm/translate-sve.c
273
+++ b/target/arm/translate-sve.c
274
@@ -XXX,XX +XXX,XX @@ static bool trans_LDNF1_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
275
}
276
return true;
277
}
278
+
279
+static void do_st_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
280
+ int msz, int esz, int nreg)
281
+{
282
+ static gen_helper_gvec_mem * const fn_single[4][4] = {
283
+ { gen_helper_sve_st1bb_r, gen_helper_sve_st1bh_r,
284
+ gen_helper_sve_st1bs_r, gen_helper_sve_st1bd_r },
285
+ { NULL, gen_helper_sve_st1hh_r,
286
+ gen_helper_sve_st1hs_r, gen_helper_sve_st1hd_r },
287
+ { NULL, NULL,
288
+ gen_helper_sve_st1ss_r, gen_helper_sve_st1sd_r },
289
+ { NULL, NULL, NULL, gen_helper_sve_st1dd_r },
290
+ };
291
+ static gen_helper_gvec_mem * const fn_multiple[3][4] = {
292
+ { gen_helper_sve_st2bb_r, gen_helper_sve_st2hh_r,
293
+ gen_helper_sve_st2ss_r, gen_helper_sve_st2dd_r },
294
+ { gen_helper_sve_st3bb_r, gen_helper_sve_st3hh_r,
295
+ gen_helper_sve_st3ss_r, gen_helper_sve_st3dd_r },
296
+ { gen_helper_sve_st4bb_r, gen_helper_sve_st4hh_r,
297
+ gen_helper_sve_st4ss_r, gen_helper_sve_st4dd_r },
298
+ };
299
+ gen_helper_gvec_mem *fn;
300
+
301
+ if (nreg == 0) {
302
+ /* ST1 */
303
+ fn = fn_single[msz][esz];
304
+ } else {
305
+ /* ST2, ST3, ST4 -- msz == esz, enforced by encoding */
306
+ assert(msz == esz);
307
+ fn = fn_multiple[nreg - 1][msz];
308
+ }
309
+ assert(fn != NULL);
310
+ do_mem_zpa(s, zt, pg, addr, fn);
311
+}
312
+
313
+static bool trans_ST_zprr(DisasContext *s, arg_rprr_store *a, uint32_t insn)
314
+{
315
+ if (a->rm == 31 || a->msz > a->esz) {
316
+ return false;
317
+ }
318
+ if (sve_access_check(s)) {
319
+ TCGv_i64 addr = new_tmp_a64(s);
320
+ tcg_gen_muli_i64(addr, cpu_reg(s, a->rm), (a->nreg + 1) << a->msz);
321
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
322
+ do_st_zpa(s, a->rd, a->pg, addr, a->msz, a->esz, a->nreg);
323
+ }
324
+ return true;
325
+}
326
+
327
+static bool trans_ST_zpri(DisasContext *s, arg_rpri_store *a, uint32_t insn)
328
+{
329
+ if (a->msz > a->esz) {
330
+ return false;
331
+ }
332
+ if (sve_access_check(s)) {
333
+ int vsz = vec_full_reg_size(s);
334
+ int elements = vsz >> a->esz;
335
+ TCGv_i64 addr = new_tmp_a64(s);
336
+
337
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn),
338
+ (a->imm * elements * (a->nreg + 1)) << a->msz);
339
+ do_st_zpa(s, a->rd, a->pg, addr, a->msz, a->esz, a->nreg);
340
+ }
341
+ return true;
342
+}
343
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
344
index XXXXXXX..XXXXXXX 100644
345
--- a/target/arm/sve.decode
346
+++ b/target/arm/sve.decode
347
@@ -XXX,XX +XXX,XX @@
348
%imm7_22_16 22:2 16:5
349
%imm8_16_10 16:5 10:3
350
%imm9_16_10 16:s6 10:3
351
+%size_23 23:2
352
353
# A combination of tsz:imm3 -- extract esize.
354
%tszimm_esz 22:2 5:5 !function=tszimm_esz
355
@@ -XXX,XX +XXX,XX @@
356
&incdec2_pred rd rn pg esz d u
357
&rprr_load rd pg rn rm dtype nreg
358
&rpri_load rd pg rn imm dtype nreg
359
+&rprr_store rd pg rn rm msz esz nreg
360
+&rpri_store rd pg rn imm msz esz nreg
361
362
###########################################################################
363
# Named instruction formats. These are generally used to
364
@@ -XXX,XX +XXX,XX @@
365
@rpri_load_msz ....... .... . imm:s4 ... pg:3 rn:5 rd:5 \
366
&rpri_load dtype=%msz_dtype
367
368
+# Stores; user must fill in ESZ, MSZ, NREG as needed.
369
+@rprr_store ....... .. .. rm:5 ... pg:3 rn:5 rd:5 &rprr_store
370
+@rpri_store_msz ....... msz:2 .. . imm:s4 ... pg:3 rn:5 rd:5 &rpri_store
371
+@rprr_store_esz_n0 ....... .. esz:2 rm:5 ... pg:3 rn:5 rd:5 \
372
+ &rprr_store nreg=0
373
+
374
###########################################################################
375
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
376
377
@@ -XXX,XX +XXX,XX @@ LD_zprr 1010010 .. nreg:2 ..... 110 ... ..... ..... @rprr_load_msz
378
# SVE load multiple structures (scalar plus immediate)
379
# LD2B, LD2H, LD2W, LD2D; etc.
380
LD_zpri 1010010 .. nreg:2 0.... 111 ... ..... ..... @rpri_load_msz
381
+
382
+### SVE Memory Store Group
383
+
384
+# SVE contiguous store (scalar plus immediate)
385
+# ST1B, ST1H, ST1W, ST1D; require msz <= esz
386
+ST_zpri 1110010 .. esz:2 0.... 111 ... ..... ..... \
387
+ @rpri_store_msz nreg=0
388
+
389
+# SVE contiguous store (scalar plus scalar)
390
+# ST1B, ST1H, ST1W, ST1D; require msz <= esz
391
+# Enumerate msz lest we conflict with STR_zri.
392
+ST_zprr 1110010 00 .. ..... 010 ... ..... ..... \
393
+ @rprr_store_esz_n0 msz=0
394
+ST_zprr 1110010 01 .. ..... 010 ... ..... ..... \
395
+ @rprr_store_esz_n0 msz=1
396
+ST_zprr 1110010 10 .. ..... 010 ... ..... ..... \
397
+ @rprr_store_esz_n0 msz=2
398
+ST_zprr 1110010 11 11 ..... 010 ... ..... ..... \
399
+ @rprr_store msz=3 esz=3 nreg=0
400
+
401
+# SVE contiguous non-temporal store (scalar plus immediate) (nreg == 0)
402
+# SVE store multiple structures (scalar plus immediate) (nreg != 0)
403
+ST_zpri 1110010 .. nreg:2 1.... 111 ... ..... ..... \
404
+ @rpri_store_msz esz=%size_23
405
+
406
+# SVE contiguous non-temporal store (scalar plus scalar) (nreg == 0)
407
+# SVE store multiple structures (scalar plus scalar) (nreg != 0)
408
+ST_zprr 1110010 msz:2 nreg:2 ..... 011 ... ..... ..... \
409
+ @rprr_store esz=%size_23
410
--
411
2.17.1
412
413
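To make the DO_ST1 macro above easier to follow, here is roughly what it expands to for 16-bit elements on a little-endian host, with the QEMU store function and the H-macro byte swizzling replaced by plain host accesses (a sketch under those assumptions, not the generated code):

    #include <stdint.h>

    static void st1hh_sketch(const uint16_t *zd, const uint8_t *vg,
                             uint16_t *mem, intptr_t oprsz)
    {
        intptr_t i = 0;
        while (i < oprsz) {
            /* One 16-bit predicate word governs 16 vector bytes
             * (one predicate bit per vector byte). */
            uint16_t pg = (uint16_t)(vg[i >> 3] | (vg[(i >> 3) + 1] << 8));
            do {
                if (pg & 1) {
                    mem[i / 2] = zd[i / 2];  /* active element: store it */
                }
                i += 2;      /* advance one uint16_t element */
                pg >>= 2;    /* consume its two predicate bits */
            } while (i & 15);
        }
    }

Note the address advances for every element, active or not, which is why the macro bumps addr unconditionally.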
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
[PMM: fixed typo]
6
Message-id: 20180627043328.11531-6-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/helper-sve.h | 30 +++++++++++++
10
target/arm/sve_helper.c | 38 ++++++++++++++++
11
target/arm/translate-sve.c | 90 ++++++++++++++++++++++++++++++++++++++
12
target/arm/sve.decode | 22 ++++++++++
13
4 files changed, 180 insertions(+)
14
15
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-sve.h
18
+++ b/target/arm/helper-sve.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
20
DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
23
+DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve_scvt_dh, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(sve_scvt_ss, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_5(sve_scvt_sd, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve_scvt_ds, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve_scvt_dd, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+
38
+DEF_HELPER_FLAGS_5(sve_ucvt_hh, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_5(sve_ucvt_sh, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve_ucvt_dh, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve_ucvt_ss, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve_ucvt_sd, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
49
+ void, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
51
+ void, ptr, ptr, ptr, ptr, i32)
52
+
53
DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
54
DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
55
DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
56
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/sve_helper.c
59
+++ b/target/arm/sve_helper.c
60
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
61
return predtest_ones(d, oprsz, esz_mask);
62
}
63
64
+/* Fully general two-operand expander, controlled by a predicate,
65
+ * with the extra float_status parameter.
66
+ */
67
+#define DO_ZPZ_FP(NAME, TYPE, H, OP) \
68
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
69
+{ \
70
+ intptr_t i = simd_oprsz(desc); \
71
+ uint64_t *g = vg; \
72
+ do { \
73
+ uint64_t pg = g[(i - 1) >> 6]; \
74
+ do { \
75
+ i -= sizeof(TYPE); \
76
+ if (likely((pg >> (i & 63)) & 1)) { \
77
+ TYPE nn = *(TYPE *)(vn + H(i)); \
78
+ *(TYPE *)(vd + H(i)) = OP(nn, status); \
79
+ } \
80
+ } while (i & 63); \
81
+ } while (i != 0); \
82
+}
83
+
84
+DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
85
+DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
86
+DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
87
+DO_ZPZ_FP(sve_scvt_sd, uint64_t, , int32_to_float64)
88
+DO_ZPZ_FP(sve_scvt_dh, uint64_t, , int64_to_float16)
89
+DO_ZPZ_FP(sve_scvt_ds, uint64_t, , int64_to_float32)
90
+DO_ZPZ_FP(sve_scvt_dd, uint64_t, , int64_to_float64)
91
+
92
+DO_ZPZ_FP(sve_ucvt_hh, uint16_t, H1_2, uint16_to_float16)
93
+DO_ZPZ_FP(sve_ucvt_sh, uint32_t, H1_4, uint32_to_float16)
94
+DO_ZPZ_FP(sve_ucvt_ss, uint32_t, H1_4, uint32_to_float32)
95
+DO_ZPZ_FP(sve_ucvt_sd, uint64_t, , uint32_to_float64)
96
+DO_ZPZ_FP(sve_ucvt_dh, uint64_t, , uint64_to_float16)
97
+DO_ZPZ_FP(sve_ucvt_ds, uint64_t, , uint64_to_float32)
98
+DO_ZPZ_FP(sve_ucvt_dd, uint64_t, , uint64_to_float64)
99
+
100
+#undef DO_ZPZ_FP
101
+
102
/*
103
* Load contiguous data, protected by a governing predicate.
104
*/
105
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/translate-sve.c
108
+++ b/target/arm/translate-sve.c
109
@@ -XXX,XX +XXX,XX @@ DO_FP3(FRSQRTS, rsqrts)
110
111
#undef DO_FP3
112
113
+
114
+/*
115
+ *** SVE Floating Point Unary Operations Predicated Group
116
+ */
117
+
118
+static bool do_zpz_ptr(DisasContext *s, int rd, int rn, int pg,
119
+ bool is_fp16, gen_helper_gvec_3_ptr *fn)
120
+{
121
+ if (sve_access_check(s)) {
122
+ unsigned vsz = vec_full_reg_size(s);
123
+ TCGv_ptr status = get_fpstatus_ptr(is_fp16);
124
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
125
+ vec_full_reg_offset(s, rn),
126
+ pred_full_reg_offset(s, pg),
127
+ status, vsz, vsz, 0, fn);
128
+ tcg_temp_free_ptr(status);
129
+ }
130
+ return true;
131
+}
132
+
133
+static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
134
+{
135
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
136
+}
137
+
138
+static bool trans_SCVTF_sh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
139
+{
140
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_sh);
141
+}
142
+
143
+static bool trans_SCVTF_dh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
144
+{
145
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_dh);
146
+}
147
+
148
+static bool trans_SCVTF_ss(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
149
+{
150
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_scvt_ss);
151
+}
152
+
153
+static bool trans_SCVTF_ds(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
154
+{
155
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_scvt_ds);
156
+}
157
+
158
+static bool trans_SCVTF_sd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
159
+{
160
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_scvt_sd);
161
+}
162
+
163
+static bool trans_SCVTF_dd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
164
+{
165
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_scvt_dd);
166
+}
167
+
168
+static bool trans_UCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
169
+{
170
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_ucvt_hh);
171
+}
172
+
173
+static bool trans_UCVTF_sh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
174
+{
175
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_ucvt_sh);
176
+}
177
+
178
+static bool trans_UCVTF_dh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
179
+{
180
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_ucvt_dh);
181
+}
182
+
183
+static bool trans_UCVTF_ss(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
184
+{
185
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_ucvt_ss);
186
+}
187
+
188
+static bool trans_UCVTF_ds(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
189
+{
190
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_ucvt_ds);
191
+}
192
+
193
+static bool trans_UCVTF_sd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
194
+{
195
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_ucvt_sd);
196
+}
197
+
198
+static bool trans_UCVTF_dd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
199
+{
200
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_ucvt_dd);
201
+}
202
+
203
/*
204
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
205
*/
206
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/sve.decode
209
+++ b/target/arm/sve.decode
210
@@ -XXX,XX +XXX,XX @@
211
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
212
@rd_pg4_pn ........ esz:2 ... ... .. pg:4 . rn:4 rd:5 &rpr_esz
213
214
+# One register operand, with governing predicate, no vector element size
215
+@rd_pg_rn_e0 ........ .. ... ... ... pg:3 rn:5 rd:5 &rpr_esz esz=0
216
+
217
# Two register operands with a 6-bit signed immediate.
218
@rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
219
220
@@ -XXX,XX +XXX,XX @@ FTSMUL 01100101 .. 0 ..... 000 011 ..... ..... @rd_rn_rm
221
FRECPS 01100101 .. 0 ..... 000 110 ..... ..... @rd_rn_rm
222
FRSQRTS 01100101 .. 0 ..... 000 111 ..... ..... @rd_rn_rm
223
224
+### SVE FP Unary Operations Predicated Group
225
+
226
+# SVE integer convert to floating-point
227
+SCVTF_hh 01100101 01 010 01 0 101 ... ..... ..... @rd_pg_rn_e0
228
+SCVTF_sh 01100101 01 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
229
+SCVTF_dh 01100101 01 010 11 0 101 ... ..... ..... @rd_pg_rn_e0
230
+SCVTF_ss 01100101 10 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
231
+SCVTF_sd 01100101 11 010 00 0 101 ... ..... ..... @rd_pg_rn_e0
232
+SCVTF_ds 01100101 11 010 10 0 101 ... ..... ..... @rd_pg_rn_e0
233
+SCVTF_dd 01100101 11 010 11 0 101 ... ..... ..... @rd_pg_rn_e0
234
+
235
+UCVTF_hh 01100101 01 010 01 1 101 ... ..... ..... @rd_pg_rn_e0
236
+UCVTF_sh 01100101 01 010 10 1 101 ... ..... ..... @rd_pg_rn_e0
237
+UCVTF_dh 01100101 01 010 11 1 101 ... ..... ..... @rd_pg_rn_e0
238
+UCVTF_ss 01100101 10 010 10 1 101 ... ..... ..... @rd_pg_rn_e0
239
+UCVTF_sd 01100101 11 010 00 1 101 ... ..... ..... @rd_pg_rn_e0
240
+UCVTF_ds 01100101 11 010 10 1 101 ... ..... ..... @rd_pg_rn_e0
241
+UCVTF_dd 01100101 11 010 11 1 101 ... ..... ..... @rd_pg_rn_e0
242
+
243
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
244
245
# SVE load predicate register
246
--
247
2.17.1
248
249
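The DO_ZPZ_FP expander in this patch walks the vector backwards so that each 64-bit predicate word is fetched only once per 64 vector bytes. A non-macro rendering of the same loop for the 32-bit case, assuming a little-endian host and an arbitrary conversion callback (illustrative only):

    #include <stdint.h>

    typedef uint32_t (*cvt32_fn)(int32_t val, void *fpst);

    static void zpz_fp_sketch(uint32_t *vd, const uint32_t *vn,
                              const uint64_t *g, intptr_t oprsz,
                              cvt32_fn cvt, void *fpst)
    {
        intptr_t i = oprsz;               /* byte index, counts down */
        do {
            uint64_t pg = g[(i - 1) >> 6];  /* predicate word for this chunk */
            do {
                i -= 4;                     /* one uint32_t element */
                if ((pg >> (i & 63)) & 1) {
                    vd[i / 4] = cvt((int32_t)vn[i / 4], fpst);
                }
            } while (i & 63);
        } while (i != 0);
    }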
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180627043328.11531-7-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 77 +++++++++++++++++++++++++++++++++
9
target/arm/sve_helper.c | 89 ++++++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 46 ++++++++++++++++++++
11
target/arm/sve.decode | 17 ++++++++
12
4 files changed, 229 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_6(sve_fadd_d, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_6(sve_fsub_h, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_6(sve_fsub_s, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_6(sve_fsub_d, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, ptr, i32)
35
+
36
+DEF_HELPER_FLAGS_6(sve_fmul_h, TCG_CALL_NO_RWG,
37
+ void, ptr, ptr, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_6(sve_fmul_s, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_6(sve_fmul_d, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, ptr, i32)
42
+
43
+DEF_HELPER_FLAGS_6(sve_fdiv_h, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_6(sve_fdiv_s, TCG_CALL_NO_RWG,
46
+ void, ptr, ptr, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_6(sve_fdiv_d, TCG_CALL_NO_RWG,
48
+ void, ptr, ptr, ptr, ptr, ptr, i32)
49
+
50
+DEF_HELPER_FLAGS_6(sve_fmin_h, TCG_CALL_NO_RWG,
51
+ void, ptr, ptr, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_6(sve_fmin_s, TCG_CALL_NO_RWG,
53
+ void, ptr, ptr, ptr, ptr, ptr, i32)
54
+DEF_HELPER_FLAGS_6(sve_fmin_d, TCG_CALL_NO_RWG,
55
+ void, ptr, ptr, ptr, ptr, ptr, i32)
56
+
57
+DEF_HELPER_FLAGS_6(sve_fmax_h, TCG_CALL_NO_RWG,
58
+ void, ptr, ptr, ptr, ptr, ptr, i32)
59
+DEF_HELPER_FLAGS_6(sve_fmax_s, TCG_CALL_NO_RWG,
60
+ void, ptr, ptr, ptr, ptr, ptr, i32)
61
+DEF_HELPER_FLAGS_6(sve_fmax_d, TCG_CALL_NO_RWG,
62
+ void, ptr, ptr, ptr, ptr, ptr, i32)
63
+
64
+DEF_HELPER_FLAGS_6(sve_fminnum_h, TCG_CALL_NO_RWG,
65
+ void, ptr, ptr, ptr, ptr, ptr, i32)
66
+DEF_HELPER_FLAGS_6(sve_fminnum_s, TCG_CALL_NO_RWG,
67
+ void, ptr, ptr, ptr, ptr, ptr, i32)
68
+DEF_HELPER_FLAGS_6(sve_fminnum_d, TCG_CALL_NO_RWG,
69
+ void, ptr, ptr, ptr, ptr, ptr, i32)
70
+
71
+DEF_HELPER_FLAGS_6(sve_fmaxnum_h, TCG_CALL_NO_RWG,
72
+ void, ptr, ptr, ptr, ptr, ptr, i32)
73
+DEF_HELPER_FLAGS_6(sve_fmaxnum_s, TCG_CALL_NO_RWG,
74
+ void, ptr, ptr, ptr, ptr, ptr, i32)
75
+DEF_HELPER_FLAGS_6(sve_fmaxnum_d, TCG_CALL_NO_RWG,
76
+ void, ptr, ptr, ptr, ptr, ptr, i32)
77
+
78
+DEF_HELPER_FLAGS_6(sve_fabd_h, TCG_CALL_NO_RWG,
79
+ void, ptr, ptr, ptr, ptr, ptr, i32)
80
+DEF_HELPER_FLAGS_6(sve_fabd_s, TCG_CALL_NO_RWG,
81
+ void, ptr, ptr, ptr, ptr, ptr, i32)
82
+DEF_HELPER_FLAGS_6(sve_fabd_d, TCG_CALL_NO_RWG,
83
+ void, ptr, ptr, ptr, ptr, ptr, i32)
84
+
85
+DEF_HELPER_FLAGS_6(sve_fscalbn_h, TCG_CALL_NO_RWG,
86
+ void, ptr, ptr, ptr, ptr, ptr, i32)
87
+DEF_HELPER_FLAGS_6(sve_fscalbn_s, TCG_CALL_NO_RWG,
88
+ void, ptr, ptr, ptr, ptr, ptr, i32)
89
+DEF_HELPER_FLAGS_6(sve_fscalbn_d, TCG_CALL_NO_RWG,
90
+ void, ptr, ptr, ptr, ptr, ptr, i32)
91
+
92
+DEF_HELPER_FLAGS_6(sve_fmulx_h, TCG_CALL_NO_RWG,
93
+ void, ptr, ptr, ptr, ptr, ptr, i32)
94
+DEF_HELPER_FLAGS_6(sve_fmulx_s, TCG_CALL_NO_RWG,
95
+ void, ptr, ptr, ptr, ptr, ptr, i32)
96
+DEF_HELPER_FLAGS_6(sve_fmulx_d, TCG_CALL_NO_RWG,
97
+ void, ptr, ptr, ptr, ptr, ptr, i32)
98
+
99
DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
100
void, ptr, ptr, ptr, ptr, i32)
101
DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
102
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/sve_helper.c
105
+++ b/target/arm/sve_helper.c
106
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
107
return predtest_ones(d, oprsz, esz_mask);
108
}
109
110
+/* Fully general three-operand expander, controlled by a predicate,
111
+ * with the extra float_status parameter.
112
+ */
113
+#define DO_ZPZZ_FP(NAME, TYPE, H, OP) \
114
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
115
+ void *status, uint32_t desc) \
116
+{ \
117
+ intptr_t i = simd_oprsz(desc); \
118
+ uint64_t *g = vg; \
119
+ do { \
120
+ uint64_t pg = g[(i - 1) >> 6]; \
121
+ do { \
122
+ i -= sizeof(TYPE); \
123
+ if (likely((pg >> (i & 63)) & 1)) { \
124
+ TYPE nn = *(TYPE *)(vn + H(i)); \
125
+ TYPE mm = *(TYPE *)(vm + H(i)); \
126
+ *(TYPE *)(vd + H(i)) = OP(nn, mm, status); \
127
+ } \
128
+ } while (i & 63); \
129
+ } while (i != 0); \
130
+}
131
+
132
+DO_ZPZZ_FP(sve_fadd_h, uint16_t, H1_2, float16_add)
133
+DO_ZPZZ_FP(sve_fadd_s, uint32_t, H1_4, float32_add)
134
+DO_ZPZZ_FP(sve_fadd_d, uint64_t, , float64_add)
135
+
136
+DO_ZPZZ_FP(sve_fsub_h, uint16_t, H1_2, float16_sub)
137
+DO_ZPZZ_FP(sve_fsub_s, uint32_t, H1_4, float32_sub)
138
+DO_ZPZZ_FP(sve_fsub_d, uint64_t, , float64_sub)
139
+
140
+DO_ZPZZ_FP(sve_fmul_h, uint16_t, H1_2, float16_mul)
141
+DO_ZPZZ_FP(sve_fmul_s, uint32_t, H1_4, float32_mul)
142
+DO_ZPZZ_FP(sve_fmul_d, uint64_t, , float64_mul)
143
+
144
+DO_ZPZZ_FP(sve_fdiv_h, uint16_t, H1_2, float16_div)
145
+DO_ZPZZ_FP(sve_fdiv_s, uint32_t, H1_4, float32_div)
146
+DO_ZPZZ_FP(sve_fdiv_d, uint64_t, , float64_div)
147
+
148
+DO_ZPZZ_FP(sve_fmin_h, uint16_t, H1_2, float16_min)
149
+DO_ZPZZ_FP(sve_fmin_s, uint32_t, H1_4, float32_min)
150
+DO_ZPZZ_FP(sve_fmin_d, uint64_t, , float64_min)
151
+
152
+DO_ZPZZ_FP(sve_fmax_h, uint16_t, H1_2, float16_max)
153
+DO_ZPZZ_FP(sve_fmax_s, uint32_t, H1_4, float32_max)
154
+DO_ZPZZ_FP(sve_fmax_d, uint64_t, , float64_max)
155
+
156
+DO_ZPZZ_FP(sve_fminnum_h, uint16_t, H1_2, float16_minnum)
157
+DO_ZPZZ_FP(sve_fminnum_s, uint32_t, H1_4, float32_minnum)
158
+DO_ZPZZ_FP(sve_fminnum_d, uint64_t, , float64_minnum)
159
+
160
+DO_ZPZZ_FP(sve_fmaxnum_h, uint16_t, H1_2, float16_maxnum)
161
+DO_ZPZZ_FP(sve_fmaxnum_s, uint32_t, H1_4, float32_maxnum)
162
+DO_ZPZZ_FP(sve_fmaxnum_d, uint64_t, , float64_maxnum)
163
+
164
+static inline float16 abd_h(float16 a, float16 b, float_status *s)
165
+{
166
+ return float16_abs(float16_sub(a, b, s));
167
+}
168
+
169
+static inline float32 abd_s(float32 a, float32 b, float_status *s)
170
+{
171
+ return float32_abs(float32_sub(a, b, s));
172
+}
173
+
174
+static inline float64 abd_d(float64 a, float64 b, float_status *s)
175
+{
176
+ return float64_abs(float64_sub(a, b, s));
177
+}
178
+
179
+DO_ZPZZ_FP(sve_fabd_h, uint16_t, H1_2, abd_h)
180
+DO_ZPZZ_FP(sve_fabd_s, uint32_t, H1_4, abd_s)
181
+DO_ZPZZ_FP(sve_fabd_d, uint64_t, , abd_d)
182
+
183
+static inline float64 scalbn_d(float64 a, int64_t b, float_status *s)
184
+{
185
+ int b_int = MIN(MAX(b, INT_MIN), INT_MAX);
186
+ return float64_scalbn(a, b_int, s);
187
+}
188
+
189
+DO_ZPZZ_FP(sve_fscalbn_h, int16_t, H1_2, float16_scalbn)
190
+DO_ZPZZ_FP(sve_fscalbn_s, int32_t, H1_4, float32_scalbn)
191
+DO_ZPZZ_FP(sve_fscalbn_d, int64_t, , scalbn_d)
192
+
193
+DO_ZPZZ_FP(sve_fmulx_h, uint16_t, H1_2, helper_advsimd_mulxh)
194
+DO_ZPZZ_FP(sve_fmulx_s, uint32_t, H1_4, helper_vfp_mulxs)
195
+DO_ZPZZ_FP(sve_fmulx_d, uint64_t, , helper_vfp_mulxd)
196
+
197
+#undef DO_ZPZZ_FP
198
+
199
/* Fully general two-operand expander, controlled by a predicate,
200
* with the extra float_status parameter.
201
*/
202
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
203
index XXXXXXX..XXXXXXX 100644
204
--- a/target/arm/translate-sve.c
205
+++ b/target/arm/translate-sve.c
206
@@ -XXX,XX +XXX,XX @@ DO_FP3(FRSQRTS, rsqrts)
207
208
#undef DO_FP3
209
210
+/*
211
+ *** SVE Floating Point Arithmetic - Predicated Group
212
+ */
213
+
214
+static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
215
+ gen_helper_gvec_4_ptr *fn)
216
+{
217
+ if (fn == NULL) {
218
+ return false;
219
+ }
220
+ if (sve_access_check(s)) {
221
+ unsigned vsz = vec_full_reg_size(s);
222
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
223
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
224
+ vec_full_reg_offset(s, a->rn),
225
+ vec_full_reg_offset(s, a->rm),
226
+ pred_full_reg_offset(s, a->pg),
227
+ status, vsz, vsz, 0, fn);
228
+ tcg_temp_free_ptr(status);
229
+ }
230
+ return true;
231
+}
232
+
233
+#define DO_FP3(NAME, name) \
234
+static bool trans_##NAME(DisasContext *s, arg_rprr_esz *a, uint32_t insn) \
235
+{ \
236
+ static gen_helper_gvec_4_ptr * const fns[4] = { \
237
+ NULL, gen_helper_sve_##name##_h, \
238
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d \
239
+ }; \
240
+ return do_zpzz_fp(s, a, fns[a->esz]); \
241
+}
242
+
243
+DO_FP3(FADD_zpzz, fadd)
244
+DO_FP3(FSUB_zpzz, fsub)
245
+DO_FP3(FMUL_zpzz, fmul)
246
+DO_FP3(FMIN_zpzz, fmin)
247
+DO_FP3(FMAX_zpzz, fmax)
248
+DO_FP3(FMINNM_zpzz, fminnum)
249
+DO_FP3(FMAXNM_zpzz, fmaxnum)
250
+DO_FP3(FABD, fabd)
251
+DO_FP3(FSCALE, fscalbn)
252
+DO_FP3(FDIV, fdiv)
253
+DO_FP3(FMULX, fmulx)
254
+
255
+#undef DO_FP3
256
257
/*
258
*** SVE Floating Point Unary Operations Predicated Group
259
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
260
index XXXXXXX..XXXXXXX 100644
261
--- a/target/arm/sve.decode
262
+++ b/target/arm/sve.decode
263
@@ -XXX,XX +XXX,XX @@ FTSMUL 01100101 .. 0 ..... 000 011 ..... ..... @rd_rn_rm
264
FRECPS 01100101 .. 0 ..... 000 110 ..... ..... @rd_rn_rm
265
FRSQRTS 01100101 .. 0 ..... 000 111 ..... ..... @rd_rn_rm
266
267
+### SVE FP Arithmetic Predicated Group
268
+
269
+# SVE floating-point arithmetic (predicated)
270
+FADD_zpzz 01100101 .. 00 0000 100 ... ..... ..... @rdn_pg_rm
271
+FSUB_zpzz 01100101 .. 00 0001 100 ... ..... ..... @rdn_pg_rm
272
+FMUL_zpzz 01100101 .. 00 0010 100 ... ..... ..... @rdn_pg_rm
273
+FSUB_zpzz 01100101 .. 00 0011 100 ... ..... ..... @rdm_pg_rn # FSUBR
274
+FMAXNM_zpzz 01100101 .. 00 0100 100 ... ..... ..... @rdn_pg_rm
275
+FMINNM_zpzz 01100101 .. 00 0101 100 ... ..... ..... @rdn_pg_rm
276
+FMAX_zpzz 01100101 .. 00 0110 100 ... ..... ..... @rdn_pg_rm
277
+FMIN_zpzz 01100101 .. 00 0111 100 ... ..... ..... @rdn_pg_rm
278
+FABD 01100101 .. 00 1000 100 ... ..... ..... @rdn_pg_rm
279
+FSCALE 01100101 .. 00 1001 100 ... ..... ..... @rdn_pg_rm
280
+FMULX 01100101 .. 00 1010 100 ... ..... ..... @rdn_pg_rm
281
+FDIV 01100101 .. 00 1100 100 ... ..... ..... @rdm_pg_rn # FDIVR
282
+FDIV 01100101 .. 00 1101 100 ... ..... ..... @rdn_pg_rm
283
+
284
### SVE FP Unary Operations Predicated Group
285
286
# SVE integer convert to floating-point
287
--
288
2.17.1
289
290
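One detail worth calling out from this patch is scalbn_d(): float64_scalbn() takes a host 'int' exponent while the SVE element supplies a full int64_t, and the MIN/MAX clamp is safe because any exponent outside the int range already saturates the result to infinity or zero. An equivalent standalone form (illustrative):

    #include <limits.h>
    #include <stdint.h>

    static int clamp_scalbn_exponent(int64_t b)
    {
        /* Exponents beyond [INT_MIN, INT_MAX] overflow or underflow
         * anyway, so clamping cannot change the rounded result. */
        if (b < INT_MIN) {
            return INT_MIN;
        }
        if (b > INT_MAX) {
            return INT_MAX;
        }
        return (int)b;
    }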
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20180627043328.11531-8-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/helper-sve.h | 16 ++++
10
target/arm/sve_helper.c | 158 +++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 49 ++++++++++++
12
target/arm/sve.decode | 18 +++++
13
4 files changed, 241 insertions(+)
14
15
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-sve.h
18
+++ b/target/arm/helper-sve.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
20
DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
23
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
33
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
37
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
38
+
39
DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
40
DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
41
DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
42
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/sve_helper.c
45
+++ b/target/arm/sve_helper.c
46
@@ -XXX,XX +XXX,XX @@ DO_ZPZ_FP(sve_ucvt_dd, uint64_t, , uint64_to_float64)
47
48
#undef DO_ZPZ_FP
49
50
+/* 4-operand predicated multiply-add. This requires 7 operands to pass
51
+ * "properly", so we need to encode some of the registers into DESC.
52
+ */
53
+QEMU_BUILD_BUG_ON(SIMD_DATA_SHIFT + 20 > 32);
54
+
55
+static void do_fmla_zpzzz_h(CPUARMState *env, void *vg, uint32_t desc,
56
+ uint16_t neg1, uint16_t neg3)
57
+{
58
+ intptr_t i = simd_oprsz(desc);
59
+ unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
60
+ unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
61
+ unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
62
+ unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
63
+ void *vd = &env->vfp.zregs[rd];
64
+ void *vn = &env->vfp.zregs[rn];
65
+ void *vm = &env->vfp.zregs[rm];
66
+ void *va = &env->vfp.zregs[ra];
67
+ uint64_t *g = vg;
68
+
69
+ do {
70
+ uint64_t pg = g[(i - 1) >> 6];
71
+ do {
72
+ i -= 2;
73
+ if (likely((pg >> (i & 63)) & 1)) {
74
+ float16 e1, e2, e3, r;
75
+
76
+ e1 = *(uint16_t *)(vn + H1_2(i)) ^ neg1;
77
+ e2 = *(uint16_t *)(vm + H1_2(i));
78
+ e3 = *(uint16_t *)(va + H1_2(i)) ^ neg3;
79
+ r = float16_muladd(e1, e2, e3, 0, &env->vfp.fp_status);
80
+ *(uint16_t *)(vd + H1_2(i)) = r;
81
+ }
82
+ } while (i & 63);
83
+ } while (i != 0);
84
+}
85
+
86
+void HELPER(sve_fmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
87
+{
88
+ do_fmla_zpzzz_h(env, vg, desc, 0, 0);
89
+}
90
+
91
+void HELPER(sve_fmls_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
92
+{
93
+ do_fmla_zpzzz_h(env, vg, desc, 0x8000, 0);
94
+}
95
+
96
+void HELPER(sve_fnmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
97
+{
98
+ do_fmla_zpzzz_h(env, vg, desc, 0x8000, 0x8000);
99
+}
100
+
101
+void HELPER(sve_fnmls_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
102
+{
103
+ do_fmla_zpzzz_h(env, vg, desc, 0, 0x8000);
104
+}
105
+
106
+static void do_fmla_zpzzz_s(CPUARMState *env, void *vg, uint32_t desc,
107
+ uint32_t neg1, uint32_t neg3)
108
+{
109
+ intptr_t i = simd_oprsz(desc);
110
+ unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
111
+ unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
112
+ unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
113
+ unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
114
+ void *vd = &env->vfp.zregs[rd];
115
+ void *vn = &env->vfp.zregs[rn];
116
+ void *vm = &env->vfp.zregs[rm];
117
+ void *va = &env->vfp.zregs[ra];
118
+ uint64_t *g = vg;
119
+
120
+ do {
121
+ uint64_t pg = g[(i - 1) >> 6];
122
+ do {
123
+ i -= 4;
124
+ if (likely((pg >> (i & 63)) & 1)) {
125
+ float32 e1, e2, e3, r;
126
+
127
+ e1 = *(uint32_t *)(vn + H1_4(i)) ^ neg1;
128
+ e2 = *(uint32_t *)(vm + H1_4(i));
129
+ e3 = *(uint32_t *)(va + H1_4(i)) ^ neg3;
130
+ r = float32_muladd(e1, e2, e3, 0, &env->vfp.fp_status);
131
+ *(uint32_t *)(vd + H1_4(i)) = r;
132
+ }
133
+ } while (i & 63);
134
+ } while (i != 0);
135
+}
136
+
137
+void HELPER(sve_fmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
138
+{
139
+ do_fmla_zpzzz_s(env, vg, desc, 0, 0);
140
+}
141
+
142
+void HELPER(sve_fmls_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
143
+{
144
+ do_fmla_zpzzz_s(env, vg, desc, 0x80000000, 0);
145
+}
146
+
147
+void HELPER(sve_fnmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
148
+{
149
+ do_fmla_zpzzz_s(env, vg, desc, 0x80000000, 0x80000000);
150
+}
151
+
152
+void HELPER(sve_fnmls_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
153
+{
154
+ do_fmla_zpzzz_s(env, vg, desc, 0, 0x80000000);
155
+}
156
+
157
+static void do_fmla_zpzzz_d(CPUARMState *env, void *vg, uint32_t desc,
158
+ uint64_t neg1, uint64_t neg3)
159
+{
160
+ intptr_t i = simd_oprsz(desc);
161
+ unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
162
+ unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
163
+ unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
164
+ unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
165
+ void *vd = &env->vfp.zregs[rd];
166
+ void *vn = &env->vfp.zregs[rn];
167
+ void *vm = &env->vfp.zregs[rm];
168
+ void *va = &env->vfp.zregs[ra];
169
+ uint64_t *g = vg;
170
+
171
+ do {
172
+ uint64_t pg = g[(i - 1) >> 6];
173
+ do {
174
+ i -= 8;
175
+ if (likely((pg >> (i & 63)) & 1)) {
176
+ float64 e1, e2, e3, r;
177
+
178
+ e1 = *(uint64_t *)(vn + i) ^ neg1;
179
+ e2 = *(uint64_t *)(vm + i);
180
+ e3 = *(uint64_t *)(va + i) ^ neg3;
181
+ r = float64_muladd(e1, e2, e3, 0, &env->vfp.fp_status);
182
+ *(uint64_t *)(vd + i) = r;
183
+ }
184
+ } while (i & 63);
185
+ } while (i != 0);
186
+}
187
+
188
+void HELPER(sve_fmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
189
+{
190
+ do_fmla_zpzzz_d(env, vg, desc, 0, 0);
191
+}
192
+
193
+void HELPER(sve_fmls_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
194
+{
195
+ do_fmla_zpzzz_d(env, vg, desc, INT64_MIN, 0);
196
+}
197
+
198
+void HELPER(sve_fnmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
199
+{
200
+ do_fmla_zpzzz_d(env, vg, desc, INT64_MIN, INT64_MIN);
201
+}
202
+
203
+void HELPER(sve_fnmls_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
204
+{
205
+ do_fmla_zpzzz_d(env, vg, desc, 0, INT64_MIN);
206
+}
207
+
208
/*
209
* Load contiguous data, protected by a governing predicate.
210
*/
211
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/target/arm/translate-sve.c
214
+++ b/target/arm/translate-sve.c
215
@@ -XXX,XX +XXX,XX @@ DO_FP3(FMULX, fmulx)
216
217
#undef DO_FP3
218
219
+typedef void gen_helper_sve_fmla(TCGv_env, TCGv_ptr, TCGv_i32);
220
+
221
+static bool do_fmla(DisasContext *s, arg_rprrr_esz *a, gen_helper_sve_fmla *fn)
222
+{
223
+ if (fn == NULL) {
224
+ return false;
225
+ }
226
+ if (!sve_access_check(s)) {
227
+ return true;
228
+ }
229
+
230
+ unsigned vsz = vec_full_reg_size(s);
231
+ unsigned desc;
232
+ TCGv_i32 t_desc;
233
+ TCGv_ptr pg = tcg_temp_new_ptr();
234
+
235
+ /* We would need 7 operands to pass these arguments "properly".
236
+ * So we encode all the register numbers into the descriptor.
237
+ */
238
+ desc = deposit32(a->rd, 5, 5, a->rn);
239
+ desc = deposit32(desc, 10, 5, a->rm);
240
+ desc = deposit32(desc, 15, 5, a->ra);
241
+ desc = simd_desc(vsz, vsz, desc);
242
+
243
+ t_desc = tcg_const_i32(desc);
244
+ tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
245
+ fn(cpu_env, pg, t_desc);
246
+ tcg_temp_free_i32(t_desc);
247
+ tcg_temp_free_ptr(pg);
248
+ return true;
249
+}
250
+
251
+#define DO_FMLA(NAME, name) \
252
+static bool trans_##NAME(DisasContext *s, arg_rprrr_esz *a, uint32_t insn) \
253
+{ \
254
+ static gen_helper_sve_fmla * const fns[4] = { \
255
+ NULL, gen_helper_sve_##name##_h, \
256
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d \
257
+ }; \
258
+ return do_fmla(s, a, fns[a->esz]); \
259
+}
260
+
261
+DO_FMLA(FMLA_zpzzz, fmla_zpzzz)
262
+DO_FMLA(FMLS_zpzzz, fmls_zpzzz)
263
+DO_FMLA(FNMLA_zpzzz, fnmla_zpzzz)
264
+DO_FMLA(FNMLS_zpzzz, fnmls_zpzzz)
265
+
266
+#undef DO_FMLA
267
+
268
/*
269
*** SVE Floating Point Unary Operations Predicated Group
270
*/
271
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
272
index XXXXXXX..XXXXXXX 100644
273
--- a/target/arm/sve.decode
274
+++ b/target/arm/sve.decode
275
@@ -XXX,XX +XXX,XX @@
276
&rprrr_esz ra=%reg_movprfx
277
@rdn_pg_ra_rm ........ esz:2 . rm:5 ... pg:3 ra:5 rd:5 \
278
&rprrr_esz rn=%reg_movprfx
279
+@rdn_pg_rm_ra ........ esz:2 . ra:5 ... pg:3 rm:5 rd:5 \
280
+ &rprrr_esz rn=%reg_movprfx
281
282
# One register operand, with governing predicate, vector element size
283
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
284
@@ -XXX,XX +XXX,XX @@ FMULX 01100101 .. 00 1010 100 ... ..... ..... @rdn_pg_rm
285
FDIV 01100101 .. 00 1100 100 ... ..... ..... @rdm_pg_rn # FDIVR
286
FDIV 01100101 .. 00 1101 100 ... ..... ..... @rdn_pg_rm
287
288
+### SVE FP Multiply-Add Group
289
+
290
+# SVE floating-point multiply-accumulate writing addend
291
+FMLA_zpzzz 01100101 .. 1 ..... 000 ... ..... ..... @rda_pg_rn_rm
292
+FMLS_zpzzz 01100101 .. 1 ..... 001 ... ..... ..... @rda_pg_rn_rm
293
+FNMLA_zpzzz 01100101 .. 1 ..... 010 ... ..... ..... @rda_pg_rn_rm
294
+FNMLS_zpzzz 01100101 .. 1 ..... 011 ... ..... ..... @rda_pg_rn_rm
295
+
296
+# SVE floating-point multiply-accumulate writing multiplicand
297
+# Alter the operand extraction order and reuse the helpers from above.
298
+# FMAD, FMSB, FNMAD, FNMSB
299
+FMLA_zpzzz 01100101 .. 1 ..... 100 ... ..... ..... @rdn_pg_rm_ra
300
+FMLS_zpzzz 01100101 .. 1 ..... 101 ... ..... ..... @rdn_pg_rm_ra
301
+FNMLA_zpzzz 01100101 .. 1 ..... 110 ... ..... ..... @rdn_pg_rm_ra
302
+FNMLS_zpzzz 01100101 .. 1 ..... 111 ... ..... ..... @rdn_pg_rm_ra
303
+
304
### SVE FP Unary Operations Predicated Group
305
306
# SVE integer convert to floating-point
307
--
308
2.17.1
309
310
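Two tricks in this patch deserve a note. First, all four flavours share one loop: FMLS flips the sign of e1, FNMLA flips e1 and e3, FNMLS flips e3, and XORing the IEEE sign bit negates an operand (NaNs included) without an FP operation. Second, the register numbers travel in the descriptor; a sketch of the packing, matching the deposit32()/extract32() offsets above (function names illustrative):

    #include <stdint.h>

    /* Pack rd/rn/rm/ra as four 5-bit fields, as do_fmla() deposits them. */
    static uint32_t pack_fmla_regs(unsigned rd, unsigned rn,
                                   unsigned rm, unsigned ra)
    {
        return (rd & 31) | (rn & 31) << 5 | (rm & 31) << 10 | (ra & 31) << 15;
    }

    /* Recover them as the helper's extract32() calls do, after
     * simd_data() has stripped the SIMD_DATA_SHIFT offset. */
    static void unpack_fmla_regs(uint32_t data, unsigned r[4])
    {
        for (int k = 0; k < 4; k++) {
            r[k] = (data >> (5 * k)) & 31;
        }
    }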
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180627043328.11531-10-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 5 +++
9
target/arm/sve_helper.c | 41 +++++++++++++++++++++++++
10
target/arm/translate-sve.c | 62 ++++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 5 +++
12
4 files changed, 113 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_clr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve_clr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve_clr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_movz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_movz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_movz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_movz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
27
DEF_HELPER_FLAGS_4(sve_asr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
DEF_HELPER_FLAGS_4(sve_asr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_4(sve_asr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/sve_helper.c
33
+++ b/target/arm/sve_helper.c
34
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_clr_d)(void *vd, void *vg, uint32_t desc)
35
}
36
}
37
38
+/* Copy Zn into Zd, and store zero into inactive elements. */
39
+void HELPER(sve_movz_b)(void *vd, void *vn, void *vg, uint32_t desc)
40
+{
41
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
42
+ uint64_t *d = vd, *n = vn;
43
+ uint8_t *pg = vg;
44
+ for (i = 0; i < opr_sz; i += 1) {
45
+ d[i] = n[i] & expand_pred_b(pg[H1(i)]);
46
+ }
47
+}
48
+
49
+void HELPER(sve_movz_h)(void *vd, void *vn, void *vg, uint32_t desc)
50
+{
51
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
52
+ uint64_t *d = vd, *n = vn;
53
+ uint8_t *pg = vg;
54
+ for (i = 0; i < opr_sz; i += 1) {
55
+ d[i] = n[i] & expand_pred_h(pg[H1(i)]);
56
+ }
57
+}
58
+
59
+void HELPER(sve_movz_s)(void *vd, void *vn, void *vg, uint32_t desc)
60
+{
61
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
62
+ uint64_t *d = vd, *n = vn;
63
+ uint8_t *pg = vg;
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ d[i] = n[i] & expand_pred_s(pg[H1(i)]);
66
+ }
67
+}
68
+
69
+void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
70
+{
71
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
72
+ uint64_t *d = vd, *n = vn;
73
+ uint8_t *pg = vg;
74
+ for (i = 0; i < opr_sz; i += 1) {
75
+ d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
+ }
+}
+
/* Three-operand expander, immediate operand, controlled by a predicate.
*/
#define DO_ZPZI(NAME, TYPE, H, OP) \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_clr_zp(DisasContext *s, int rd, int pg, int esz)
return true;
}

+/* Copy Zn into Zd, storing zeros into inactive elements. */
+static void do_movz_zpz(DisasContext *s, int rd, int rn, int pg, int esz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_sve_movz_b, gen_helper_sve_movz_h,
+ gen_helper_sve_movz_s, gen_helper_sve_movz_d,
+ };
+ unsigned vsz = vec_full_reg_size(s);
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn),
+ pred_full_reg_offset(s, pg),
+ vsz, vsz, 0, fns[esz]);
+}
+
static bool do_zpzi_ool(DisasContext *s, arg_rpri_esz *a,
gen_helper_gvec_3 *fn)
{
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1RQ_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
return true;
}

+/* Load and broadcast element. */
+static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
+{
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ unsigned vsz = vec_full_reg_size(s);
+ unsigned psz = pred_full_reg_size(s);
+ unsigned esz = dtype_esz[a->dtype];
+ TCGLabel *over = gen_new_label();
+ TCGv_i64 temp;
+
+ /* If the guarding predicate has no bits set, no load occurs. */
+ if (psz <= 8) {
+ /* Reduce the pred_esz_masks value simply to reduce the
+ * size of the code generated here.
+ */
+ uint64_t psz_mask = MAKE_64BIT_MASK(0, psz * 8);
+ temp = tcg_temp_new_i64();
+ tcg_gen_ld_i64(temp, cpu_env, pred_full_reg_offset(s, a->pg));
+ tcg_gen_andi_i64(temp, temp, pred_esz_masks[esz] & psz_mask);
+ tcg_gen_brcondi_i64(TCG_COND_EQ, temp, 0, over);
+ tcg_temp_free_i64(temp);
+ } else {
+ TCGv_i32 t32 = tcg_temp_new_i32();
+ find_last_active(s, t32, esz, a->pg);
+ tcg_gen_brcondi_i32(TCG_COND_LT, t32, 0, over);
+ tcg_temp_free_i32(t32);
+ }
+
+ /* Load the data. */
+ temp = tcg_temp_new_i64();
+ tcg_gen_addi_i64(temp, cpu_reg_sp(s, a->rn), a->imm << esz);
+ tcg_gen_qemu_ld_i64(temp, temp, get_mem_index(s),
+ s->be_data | dtype_mop[a->dtype]);
+
+ /* Broadcast to *all* elements. */
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
+ vsz, vsz, temp);
+ tcg_temp_free_i64(temp);
+
+ /* Zero the inactive elements. */
+ gen_set_label(over);
+ do_movz_zpz(s, a->rd, a->rd, a->pg, esz);
+ return true;
+}
+
static void do_st_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
int msz, int esz, int nreg)
{
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
%imm8_16_10 16:5 10:3
%imm9_16_10 16:s6 10:3
%size_23 23:2
+%dtype_23_13 23:2 13:2

# A combination of tsz:imm3 -- extract esize.
%tszimm_esz 22:2 5:5 !function=tszimm_esz
@@ -XXX,XX +XXX,XX @@ LDR_pri 10000101 10 ...... 000 ... ..... 0 .... @pd_rn_i9
# SVE load vector register
LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9

+# SVE load and broadcast element
+LD1R_zpri 1000010 .. 1 imm:6 1.. pg:3 rn:5 rd:5 \
+ &rpri_load dtype=%dtype_23_13 nreg=0
+
### SVE Memory Contiguous Load Group

# SVE contiguous load (scalar plus scalar)
--
2.17.1
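
An aside on the predicate expansion the sve_movz_* helpers above depend on: expand_pred_b() turns each predicate bit into a full byte of mask, so a single AND zeroes the inactive elements. A minimal standalone sketch of that idea (not QEMU code; expand_pred_b_sketch is an illustrative stand-in):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Expand bit n of an 8-bit predicate into byte n of a 64-bit mask. */
    static uint64_t expand_pred_b_sketch(uint8_t pg)
    {
        uint64_t mask = 0;
        for (int n = 0; n < 8; n++) {
            if (pg & (1u << n)) {
                mask |= 0xffull << (n * 8);
            }
        }
        return mask;
    }

    int main(void)
    {
        uint64_t zn = 0x8877665544332211ull;
        uint8_t pg = 0x55;  /* byte elements 0, 2, 4, 6 active */
        /* Active bytes copied, inactive bytes zeroed -- the movz operation.
         * Prints 0077005500330011. */
        printf("%016" PRIx64 "\n", zn & expand_pred_b_sketch(pg));
        return 0;
    }
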
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 41 +++++++++++++++++++++
target/arm/sve_helper.c | 61 +++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 39 ++++++++++++++++++++
4 files changed, 216 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_st1hs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)

DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbs_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sths_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stss_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbs_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sths_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stss_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_st4dd_r)(CPUARMState *env, void *vg,
addr += 4 * 8;
}
}
+
+/* Stores with a vector index. */
+
+#define DO_ST1_ZPZ_S(NAME, TYPEI, FN) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc); \
+ unsigned scale = simd_data(desc); \
+ uintptr_t ra = GETPC(); \
+ for (i = 0; i < oprsz; ) { \
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+ do { \
+ if (likely(pg & 1)) { \
+ target_ulong off = *(TYPEI *)(vm + H1_4(i)); \
+ uint32_t d = *(uint32_t *)(vd + H1_4(i)); \
+ FN(env, base + (off << scale), d, ra); \
+ } \
+ i += sizeof(uint32_t), pg >>= sizeof(uint32_t); \
+ } while (i & 15); \
+ } \
+}
+
+#define DO_ST1_ZPZ_D(NAME, TYPEI, FN) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc) / 8; \
+ unsigned scale = simd_data(desc); \
+ uintptr_t ra = GETPC(); \
+ uint64_t *d = vd, *m = vm; uint8_t *pg = vg; \
+ for (i = 0; i < oprsz; i++) { \
+ if (likely(pg[H1(i)] & 1)) { \
+ target_ulong off = (target_ulong)(TYPEI)m[i] << scale; \
+ FN(env, base + off, d[i], ra); \
+ } \
+ } \
+}
+
+DO_ST1_ZPZ_S(sve_stbs_zsu, uint32_t, cpu_stb_data_ra)
+DO_ST1_ZPZ_S(sve_sths_zsu, uint32_t, cpu_stw_data_ra)
+DO_ST1_ZPZ_S(sve_stss_zsu, uint32_t, cpu_stl_data_ra)
+
+DO_ST1_ZPZ_S(sve_stbs_zss, int32_t, cpu_stb_data_ra)
+DO_ST1_ZPZ_S(sve_sths_zss, int32_t, cpu_stw_data_ra)
+DO_ST1_ZPZ_S(sve_stss_zss, int32_t, cpu_stl_data_ra)
+
+DO_ST1_ZPZ_D(sve_stbd_zsu, uint32_t, cpu_stb_data_ra)
+DO_ST1_ZPZ_D(sve_sthd_zsu, uint32_t, cpu_stw_data_ra)
+DO_ST1_ZPZ_D(sve_stsd_zsu, uint32_t, cpu_stl_data_ra)
+DO_ST1_ZPZ_D(sve_stdd_zsu, uint32_t, cpu_stq_data_ra)
+
+DO_ST1_ZPZ_D(sve_stbd_zss, int32_t, cpu_stb_data_ra)
+DO_ST1_ZPZ_D(sve_sthd_zss, int32_t, cpu_stw_data_ra)
+DO_ST1_ZPZ_D(sve_stsd_zss, int32_t, cpu_stl_data_ra)
+DO_ST1_ZPZ_D(sve_stdd_zss, int32_t, cpu_stq_data_ra)
+
+DO_ST1_ZPZ_D(sve_stbd_zd, uint64_t, cpu_stb_data_ra)
+DO_ST1_ZPZ_D(sve_sthd_zd, uint64_t, cpu_stw_data_ra)
+DO_ST1_ZPZ_D(sve_stsd_zd, uint64_t, cpu_stl_data_ra)
+DO_ST1_ZPZ_D(sve_stdd_zd, uint64_t, cpu_stq_data_ra)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
TCGv_ptr, TCGv_ptr, TCGv_i32);

typedef void gen_helper_gvec_mem(TCGv_env, TCGv_ptr, TCGv_i64, TCGv_i32);
+typedef void gen_helper_gvec_mem_scatter(TCGv_env, TCGv_ptr, TCGv_ptr,
+ TCGv_ptr, TCGv_i64, TCGv_i32);

/*
* Helpers for extracting complex instruction fields.
@@ -XXX,XX +XXX,XX @@ static bool trans_ST_zpri(DisasContext *s, arg_rpri_store *a, uint32_t insn)
}
return true;
}
+
+/*
+ *** SVE gather loads / scatter stores
+ */
+
+static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm, int scale,
+ TCGv_i64 scalar, gen_helper_gvec_mem_scatter *fn)
+{
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, scale));
+ TCGv_ptr t_zm = tcg_temp_new_ptr();
+ TCGv_ptr t_pg = tcg_temp_new_ptr();
+ TCGv_ptr t_zt = tcg_temp_new_ptr();
+
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+ tcg_gen_addi_ptr(t_zm, cpu_env, vec_full_reg_offset(s, zm));
+ tcg_gen_addi_ptr(t_zt, cpu_env, vec_full_reg_offset(s, zt));
+ fn(cpu_env, t_zt, t_pg, t_zm, scalar, desc);
+
+ tcg_temp_free_ptr(t_zt);
+ tcg_temp_free_ptr(t_zm);
+ tcg_temp_free_ptr(t_pg);
+ tcg_temp_free_i32(desc);
+}
+
+static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
+{
+ /* Indexed by [xs][msz]. */
+ static gen_helper_gvec_mem_scatter * const fn32[2][3] = {
+ { gen_helper_sve_stbs_zsu,
+ gen_helper_sve_sths_zsu,
+ gen_helper_sve_stss_zsu, },
+ { gen_helper_sve_stbs_zss,
+ gen_helper_sve_sths_zss,
+ gen_helper_sve_stss_zss, },
+ };
+ /* Note that we overload xs=2 to indicate 64-bit offset. */
+ static gen_helper_gvec_mem_scatter * const fn64[3][4] = {
+ { gen_helper_sve_stbd_zsu,
+ gen_helper_sve_sthd_zsu,
+ gen_helper_sve_stsd_zsu,
+ gen_helper_sve_stdd_zsu, },
+ { gen_helper_sve_stbd_zss,
+ gen_helper_sve_sthd_zss,
+ gen_helper_sve_stsd_zss,
+ gen_helper_sve_stdd_zss, },
+ { gen_helper_sve_stbd_zd,
+ gen_helper_sve_sthd_zd,
+ gen_helper_sve_stsd_zd,
+ gen_helper_sve_stdd_zd, },
+ };
+ gen_helper_gvec_mem_scatter *fn;
+
+ if (a->esz < a->msz || (a->msz == 0 && a->scale)) {
+ return false;
+ }
+ if (!sve_access_check(s)) {
+ return true;
+ }
+ switch (a->esz) {
+ case MO_32:
+ fn = fn32[a->xs][a->msz];
+ break;
+ case MO_64:
+ fn = fn64[a->xs][a->msz];
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ do_mem_zpz(s, a->rd, a->pg, a->rm, a->scale * a->msz,
+ cpu_reg_sp(s, a->rn), fn);
+ return true;
+}
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
&rpri_load rd pg rn imm dtype nreg
&rprr_store rd pg rn rm msz esz nreg
&rpri_store rd pg rn imm msz esz nreg
+&rprr_scatter_store rd pg rn rm esz msz xs scale

###########################################################################
# Named instruction formats. These are generally used to
@@ -XXX,XX +XXX,XX @@
@rpri_store_msz ....... msz:2 .. . imm:s4 ... pg:3 rn:5 rd:5 &rpri_store
@rprr_store_esz_n0 ....... .. esz:2 rm:5 ... pg:3 rn:5 rd:5 \
&rprr_store nreg=0
+@rprr_scatter_store ....... msz:2 .. rm:5 ... pg:3 rn:5 rd:5 \
+ &rprr_scatter_store

###########################################################################
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
@@ -XXX,XX +XXX,XX @@ ST_zpri 1110010 .. nreg:2 1.... 111 ... ..... ..... \
# SVE store multiple structures (scalar plus scalar) (nreg != 0)
ST_zprr 1110010 msz:2 nreg:2 ..... 011 ... ..... ..... \
@rprr_store esz=%size_23
+
+# SVE 32-bit scatter store (scalar plus 32-bit scaled offsets)
+# Require msz > 0 && msz <= esz.
+ST1_zprz 1110010 .. 11 ..... 100 ... ..... ..... \
+ @rprr_scatter_store xs=0 esz=2 scale=1
+ST1_zprz 1110010 .. 11 ..... 110 ... ..... ..... \
+ @rprr_scatter_store xs=1 esz=2 scale=1
+
+# SVE 32-bit scatter store (scalar plus 32-bit unscaled offsets)
+# Require msz <= esz.
+ST1_zprz 1110010 .. 10 ..... 100 ... ..... ..... \
+ @rprr_scatter_store xs=0 esz=2 scale=0
+ST1_zprz 1110010 .. 10 ..... 110 ... ..... ..... \
+ @rprr_scatter_store xs=1 esz=2 scale=0
+
+# SVE 64-bit scatter store (scalar plus 64-bit scaled offset)
+# Require msz > 0
+ST1_zprz 1110010 .. 01 ..... 101 ... ..... ..... \
+ @rprr_scatter_store xs=2 esz=3 scale=1
+
+# SVE 64-bit scatter store (scalar plus 64-bit unscaled offset)
+ST1_zprz 1110010 .. 00 ..... 101 ... ..... ..... \
+ @rprr_scatter_store xs=2 esz=3 scale=0
+
+# SVE 64-bit scatter store (scalar plus unpacked 32-bit scaled offset)
+# Require msz > 0
+ST1_zprz 1110010 .. 01 ..... 100 ... ..... ..... \
+ @rprr_scatter_store xs=0 esz=3 scale=1
+ST1_zprz 1110010 .. 01 ..... 110 ... ..... ..... \
+ @rprr_scatter_store xs=1 esz=3 scale=1
+
+# SVE 64-bit scatter store (scalar plus unpacked 32-bit unscaled offset)
+ST1_zprz 1110010 .. 00 ..... 100 ... ..... ..... \
+ @rprr_scatter_store xs=0 esz=3 scale=0
+ST1_zprz 1110010 .. 00 ..... 110 ... ..... ..... \
+ @rprr_scatter_store xs=1 esz=3 scale=0
--
2.17.1
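
For reference, the access pattern the DO_ST1_ZPZ_S expansion above performs, stripped of the CPUARMState plumbing, is just a predicated scatter: each active element is written to base plus its own scaled offset. A standalone sketch under that reading (function and parameter names are illustrative, not QEMU's):

    #include <stdint.h>
    #include <string.h>

    /* Predicated 32-bit scatter store: element i goes to
     * base + (offsets[i] << scale) when its predicate bit is set. */
    static void scatter_store_sketch(uint8_t *base, const uint32_t *data,
                                     const uint32_t *offsets,
                                     const uint8_t *pred, unsigned nelem,
                                     unsigned scale)
    {
        for (unsigned i = 0; i < nelem; i++) {
            if (pred[i] & 1) {      /* only active elements touch memory */
                memcpy(base + ((uint64_t)offsets[i] << scale),
                       &data[i], sizeof(uint32_t));
            }
        }
    }
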
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 21 +++++++++++++++++++++
target/arm/sve.decode | 23 +++++++++++++++++++++++
2 files changed, 44 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
cpu_reg_sp(s, a->rn), fn);
return true;
}
+
+/*
+ * Prefetches
+ */
+
+static bool trans_PRF(DisasContext *s, arg_PRF *a, uint32_t insn)
+{
+ /* Prefetch is a nop within QEMU. */
+ sve_access_check(s);
+ return true;
+}
+
+static bool trans_PRF_rr(DisasContext *s, arg_PRF_rr *a, uint32_t insn)
+{
+ if (a->rm == 31) {
+ return false;
+ }
+ /* Prefetch is a nop within QEMU. */
+ sve_access_check(s);
+ return true;
+}
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LD1RQ_zprr 1010010 .. 00 ..... 000 ... ..... ..... \
LD1RQ_zpri 1010010 .. 00 0.... 001 ... ..... ..... \
@rpri_load_msz nreg=0

+# SVE 32-bit gather prefetch (scalar plus 32-bit scaled offsets)
+PRF 1000010 00 -1 ----- 0-- --- ----- 0 ----
+
+# SVE 32-bit gather prefetch (vector plus immediate)
+PRF 1000010 -- 00 ----- 111 --- ----- 0 ----
+
+# SVE contiguous prefetch (scalar plus immediate)
+PRF 1000010 11 1- ----- 0-- --- ----- 0 ----
+
+# SVE contiguous prefetch (scalar plus scalar)
+PRF_rr 1000010 -- 00 rm:5 110 --- ----- 0 ----
+
+### SVE Memory 64-bit Gather Group
+
+# SVE 64-bit gather prefetch (scalar plus 64-bit scaled offsets)
+PRF 1100010 00 11 ----- 1-- --- ----- 0 ----
+
+# SVE 64-bit gather prefetch (scalar plus unpacked 32-bit scaled offsets)
+PRF 1100010 00 -1 ----- 0-- --- ----- 0 ----
+
+# SVE 64-bit gather prefetch (vector plus immediate)
+PRF 1100010 -- 00 ----- 111 --- ----- 0 ----
+
### SVE Memory Store Group

# SVE store predicate register
--
2.17.1
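
The PRF lines above also show the decodetree notation in its simplest form: fixed '0'/'1' bits must match and '-' bits are don't-care, so each pattern reduces to a mask/value test over the 32-bit instruction word. A hand-derived sketch of the equivalent check (not generator output):

    #include <stdbool.h>
    #include <stdint.h>

    /* A pattern matches when all of its fixed bits agree with the insn;
     * '-' positions are simply absent from the mask. */
    static bool insn_matches(uint32_t insn, uint32_t mask, uint32_t value)
    {
        return (insn & mask) == value;
    }
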
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 67 +++++++++++++++++++++++++
target/arm/sve_helper.c | 77 ++++++++++++++++++++++++++++
target/arm/translate-sve.c | 100 +++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 57 +++++++++++++++++++++
4 files changed, 301 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)

DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)

+DEF_HELPER_FLAGS_6(sve_ldbsu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhsu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldssu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbss_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhss_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbsu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhsu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldssu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbss_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhss_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
DEF_HELPER_FLAGS_6(sve_stbs_zsu, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
DEF_HELPER_FLAGS_6(sve_sths_zsu, TCG_CALL_NO_WG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_st4dd_r)(CPUARMState *env, void *vg,
}
}

+/* Loads with a vector index. */
+
+#define DO_LD1_ZPZ_S(NAME, TYPEI, TYPEM, FN) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc); \
+ unsigned scale = simd_data(desc); \
+ uintptr_t ra = GETPC(); \
+ for (i = 0; i < oprsz; ) { \
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+ do { \
+ TYPEM m = 0; \
+ if (pg & 1) { \
+ target_ulong off = *(TYPEI *)(vm + H1_4(i)); \
+ m = FN(env, base + (off << scale), ra); \
+ } \
+ *(uint32_t *)(vd + H1_4(i)) = m; \
+ i += 4, pg >>= 4; \
+ } while (i & 15); \
+ } \
+}
+
+#define DO_LD1_ZPZ_D(NAME, TYPEI, TYPEM, FN) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc) / 8; \
+ unsigned scale = simd_data(desc); \
+ uintptr_t ra = GETPC(); \
+ uint64_t *d = vd, *m = vm; uint8_t *pg = vg; \
+ for (i = 0; i < oprsz; i++) { \
+ TYPEM mm = 0; \
+ if (pg[H1(i)] & 1) { \
+ target_ulong off = (TYPEI)m[i]; \
+ mm = FN(env, base + (off << scale), ra); \
+ } \
+ d[i] = mm; \
+ } \
+}
+
+DO_LD1_ZPZ_S(sve_ldbsu_zsu, uint32_t, uint8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_S(sve_ldhsu_zsu, uint32_t, uint16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_S(sve_ldssu_zsu, uint32_t, uint32_t, cpu_ldl_data_ra)
+DO_LD1_ZPZ_S(sve_ldbss_zsu, uint32_t, int8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_S(sve_ldhss_zsu, uint32_t, int16_t, cpu_lduw_data_ra)
+
+DO_LD1_ZPZ_S(sve_ldbsu_zss, int32_t, uint8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_S(sve_ldhsu_zss, int32_t, uint16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_S(sve_ldssu_zss, int32_t, uint32_t, cpu_ldl_data_ra)
+DO_LD1_ZPZ_S(sve_ldbss_zss, int32_t, int8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_S(sve_ldhss_zss, int32_t, int16_t, cpu_lduw_data_ra)
+
+DO_LD1_ZPZ_D(sve_ldbdu_zsu, uint32_t, uint8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhdu_zsu, uint32_t, uint16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsdu_zsu, uint32_t, uint32_t, cpu_ldl_data_ra)
+DO_LD1_ZPZ_D(sve_ldddu_zsu, uint32_t, uint64_t, cpu_ldq_data_ra)
+DO_LD1_ZPZ_D(sve_ldbds_zsu, uint32_t, int8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhds_zsu, uint32_t, int16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsds_zsu, uint32_t, int32_t, cpu_ldl_data_ra)
+
+DO_LD1_ZPZ_D(sve_ldbdu_zss, int32_t, uint8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhdu_zss, int32_t, uint16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsdu_zss, int32_t, uint32_t, cpu_ldl_data_ra)
+DO_LD1_ZPZ_D(sve_ldddu_zss, int32_t, uint64_t, cpu_ldq_data_ra)
+DO_LD1_ZPZ_D(sve_ldbds_zss, int32_t, int8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhds_zss, int32_t, int16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsds_zss, int32_t, int32_t, cpu_ldl_data_ra)
+
+DO_LD1_ZPZ_D(sve_ldbdu_zd, uint64_t, uint8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhdu_zd, uint64_t, uint16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsdu_zd, uint64_t, uint32_t, cpu_ldl_data_ra)
+DO_LD1_ZPZ_D(sve_ldddu_zd, uint64_t, uint64_t, cpu_ldq_data_ra)
+DO_LD1_ZPZ_D(sve_ldbds_zd, uint64_t, int8_t, cpu_ldub_data_ra)
+DO_LD1_ZPZ_D(sve_ldhds_zd, uint64_t, int16_t, cpu_lduw_data_ra)
+DO_LD1_ZPZ_D(sve_ldsds_zd, uint64_t, int32_t, cpu_ldl_data_ra)
+
/* Stores with a vector index. */

#define DO_ST1_ZPZ_S(NAME, TYPEI, FN) \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm, int scale,
tcg_temp_free_i32(desc);
}

+/* Indexed by [ff][xs][u][msz]. */
+static gen_helper_gvec_mem_scatter * const gather_load_fn32[2][2][2][3] = {
+ { { { gen_helper_sve_ldbss_zsu,
+ gen_helper_sve_ldhss_zsu,
+ NULL, },
+ { gen_helper_sve_ldbsu_zsu,
+ gen_helper_sve_ldhsu_zsu,
+ gen_helper_sve_ldssu_zsu, } },
+ { { gen_helper_sve_ldbss_zss,
+ gen_helper_sve_ldhss_zss,
+ NULL, },
+ { gen_helper_sve_ldbsu_zss,
+ gen_helper_sve_ldhsu_zss,
+ gen_helper_sve_ldssu_zss, } } },
+ /* TODO fill in first-fault handlers */
+};
+
+/* Note that we overload xs=2 to indicate 64-bit offset. */
+static gen_helper_gvec_mem_scatter * const gather_load_fn64[2][3][2][4] = {
+ { { { gen_helper_sve_ldbds_zsu,
+ gen_helper_sve_ldhds_zsu,
+ gen_helper_sve_ldsds_zsu,
+ NULL, },
+ { gen_helper_sve_ldbdu_zsu,
+ gen_helper_sve_ldhdu_zsu,
+ gen_helper_sve_ldsdu_zsu,
+ gen_helper_sve_ldddu_zsu, } },
+ { { gen_helper_sve_ldbds_zss,
+ gen_helper_sve_ldhds_zss,
+ gen_helper_sve_ldsds_zss,
+ NULL, },
+ { gen_helper_sve_ldbdu_zss,
+ gen_helper_sve_ldhdu_zss,
+ gen_helper_sve_ldsdu_zss,
+ gen_helper_sve_ldddu_zss, } },
+ { { gen_helper_sve_ldbds_zd,
+ gen_helper_sve_ldhds_zd,
+ gen_helper_sve_ldsds_zd,
+ NULL, },
+ { gen_helper_sve_ldbdu_zd,
+ gen_helper_sve_ldhdu_zd,
+ gen_helper_sve_ldsdu_zd,
+ gen_helper_sve_ldddu_zd, } } },
+ /* TODO fill in first-fault handlers */
+};
+
+static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a, uint32_t insn)
+{
+ gen_helper_gvec_mem_scatter *fn = NULL;
+
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ switch (a->esz) {
+ case MO_32:
+ fn = gather_load_fn32[a->ff][a->xs][a->u][a->msz];
+ break;
+ case MO_64:
+ fn = gather_load_fn64[a->ff][a->xs][a->u][a->msz];
+ break;
+ }
+ assert(fn != NULL);
+
+ do_mem_zpz(s, a->rd, a->pg, a->rm, a->scale * a->msz,
+ cpu_reg_sp(s, a->rn), fn);
+ return true;
+}
+
+static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a, uint32_t insn)
+{
+ gen_helper_gvec_mem_scatter *fn = NULL;
+ TCGv_i64 imm;
+
+ if (a->esz < a->msz || (a->esz == a->msz && !a->u)) {
+ return false;
+ }
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ switch (a->esz) {
+ case MO_32:
+ fn = gather_load_fn32[a->ff][0][a->u][a->msz];
+ break;
+ case MO_64:
+ fn = gather_load_fn64[a->ff][2][a->u][a->msz];
+ break;
+ }
+ assert(fn != NULL);
+
+ /* Treat LD1_zpiz (zn[x] + imm) the same way as LD1_zprz (rn + zm[x])
+ * by loading the immediate into the scalar parameter.
+ */
+ imm = tcg_const_i64(a->imm << a->msz);
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0, imm, fn);
+ tcg_temp_free_i64(imm);
+ return true;
+}
+
static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
{
/* Indexed by [xs][msz]. */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
&rpri_load rd pg rn imm dtype nreg
&rprr_store rd pg rn rm msz esz nreg
&rpri_store rd pg rn imm msz esz nreg
+&rprr_gather_load rd pg rn rm esz msz u ff xs scale
+&rpri_gather_load rd pg rn imm esz msz u ff
&rprr_scatter_store rd pg rn rm esz msz xs scale

###########################################################################
# Named instruction formats. These are generally used to
@@ -XXX,XX +XXX,XX @@
@rpri_load_msz ....... .... . imm:s4 ... pg:3 rn:5 rd:5 \
&rpri_load dtype=%msz_dtype

+# Gather Loads.
+@rprr_g_load_u ....... .. . . rm:5 . u:1 ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load xs=2
+@rprr_g_load_xs_u ....... .. xs:1 . rm:5 . u:1 ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load
+@rprr_g_load_xs_u_sc ....... .. xs:1 scale:1 rm:5 . u:1 ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load
+@rprr_g_load_xs_sc ....... .. xs:1 scale:1 rm:5 . . ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load
+@rprr_g_load_u_sc ....... .. . scale:1 rm:5 . u:1 ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load xs=2
+@rprr_g_load_sc ....... .. . scale:1 rm:5 . . ff:1 pg:3 rn:5 rd:5 \
+ &rprr_gather_load xs=2
+@rpri_g_load ....... msz:2 .. imm:5 . u:1 ff:1 pg:3 rn:5 rd:5 \
+ &rpri_gather_load
+
# Stores; user must fill in ESZ, MSZ, NREG as needed.
@rprr_store ....... .. .. rm:5 ... pg:3 rn:5 rd:5 &rprr_store
@rpri_store_msz ....... msz:2 .. . imm:s4 ... pg:3 rn:5 rd:5 &rpri_store
@@ -XXX,XX +XXX,XX @@ LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9
LD1R_zpri 1000010 .. 1 imm:6 1.. pg:3 rn:5 rd:5 \
&rpri_load dtype=%dtype_23_13 nreg=0

+# SVE 32-bit gather load (scalar plus 32-bit unscaled offsets)
+# SVE 32-bit gather load (scalar plus 32-bit scaled offsets)
+LD1_zprz 1000010 00 .0 ..... 0.. ... ..... ..... \
+ @rprr_g_load_xs_u esz=2 msz=0 scale=0
+LD1_zprz 1000010 01 .. ..... 0.. ... ..... ..... \
+ @rprr_g_load_xs_u_sc esz=2 msz=1
+LD1_zprz 1000010 10 .. ..... 01. ... ..... ..... \
+ @rprr_g_load_xs_sc esz=2 msz=2 u=1
+
+# SVE 32-bit gather load (vector plus immediate)
+LD1_zpiz 1000010 .. 01 ..... 1.. ... ..... ..... \
+ @rpri_g_load esz=2
+
### SVE Memory Contiguous Load Group

# SVE contiguous load (scalar plus scalar)
@@ -XXX,XX +XXX,XX @@ PRF_rr 1000010 -- 00 rm:5 110 --- ----- 0 ----

### SVE Memory 64-bit Gather Group

+# SVE 64-bit gather load (scalar plus 32-bit unpacked unscaled offsets)
+# SVE 64-bit gather load (scalar plus 32-bit unpacked scaled offsets)
+LD1_zprz 1100010 00 .0 ..... 0.. ... ..... ..... \
+ @rprr_g_load_xs_u esz=3 msz=0 scale=0
+LD1_zprz 1100010 01 .. ..... 0.. ... ..... ..... \
+ @rprr_g_load_xs_u_sc esz=3 msz=1
+LD1_zprz 1100010 10 .. ..... 0.. ... ..... ..... \
+ @rprr_g_load_xs_u_sc esz=3 msz=2
+LD1_zprz 1100010 11 .. ..... 01. ... ..... ..... \
+ @rprr_g_load_xs_sc esz=3 msz=3 u=1
+
+# SVE 64-bit gather load (scalar plus 64-bit unscaled offsets)
+# SVE 64-bit gather load (scalar plus 64-bit scaled offsets)
+LD1_zprz 1100010 00 10 ..... 1.. ... ..... ..... \
+ @rprr_g_load_u esz=3 msz=0 scale=0
+LD1_zprz 1100010 01 1. ..... 1.. ... ..... ..... \
+ @rprr_g_load_u_sc esz=3 msz=1
+LD1_zprz 1100010 10 1. ..... 1.. ... ..... ..... \
+ @rprr_g_load_u_sc esz=3 msz=2
+LD1_zprz 1100010 11 1. ..... 11. ... ..... ..... \
+ @rprr_g_load_sc esz=3 msz=3 u=1
+
+# SVE 64-bit gather load (vector plus immediate)
+LD1_zpiz 1100010 .. 01 ..... 1.. ... ..... ..... \
+ @rpri_g_load esz=3
+
# SVE 64-bit gather prefetch (scalar plus 64-bit scaled offsets)
PRF 1100010 00 11 ----- 1-- --- ----- 0 ----

--
2.17.1
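
The _zsu/_zss/_zd suffixes above encode how each offset element is widened before scaling, which is visible in the TYPEI casts of the macros. A standalone sketch of the three address computations (names illustrative, not QEMU's):

    #include <stdint.h>

    /* 32-bit offset, zero-extended (the _zsu helpers). */
    static uint64_t addr_zsu(uint64_t base, uint32_t off, unsigned scale)
    {
        return base + ((uint64_t)off << scale);
    }

    /* 32-bit offset, sign-extended (the _zss helpers). */
    static uint64_t addr_zss(uint64_t base, uint32_t off, unsigned scale)
    {
        return base + ((uint64_t)(int64_t)(int32_t)off << scale);
    }

    /* 64-bit offset used as-is (the _zd helpers). */
    static uint64_t addr_zd(uint64_t base, uint64_t off, unsigned scale)
    {
        return base + (off << scale);
    }
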
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 67 +++++++++++++++++++++++++++++
target/arm/sve_helper.c | 88 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 40 ++++++++++++++++-
3 files changed, 193 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_ldhds_zd, TCG_CALL_NO_WG,
DEF_HELPER_FLAGS_6(sve_ldsds_zd, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)

+DEF_HELPER_FLAGS_6(sve_ldffbsu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhsu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffssu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffbss_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhss_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldffbsu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhsu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffssu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffbss_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhss_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldffbdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsdu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffddu_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffbds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsds_zsu, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldffbdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsdu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffddu_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffbds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsds_zss, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldffbdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsdu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffddu_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffbds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffhds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldffsds_zd, TCG_CALL_NO_WG,
+ void, env, ptr, ptr, ptr, tl, i32)
+
DEF_HELPER_FLAGS_6(sve_stbs_zsu, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
DEF_HELPER_FLAGS_6(sve_sths_zsu, TCG_CALL_NO_WG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LD1_ZPZ_D(sve_ldbds_zd, uint64_t, int8_t, cpu_ldub_data_ra)
DO_LD1_ZPZ_D(sve_ldhds_zd, uint64_t, int16_t, cpu_lduw_data_ra)
DO_LD1_ZPZ_D(sve_ldsds_zd, uint64_t, int32_t, cpu_ldl_data_ra)

+/* First fault loads with a vector index. */
+
+#ifdef CONFIG_USER_ONLY
+
+#define DO_LDFF1_ZPZ(NAME, TYPEE, TYPEI, TYPEM, FN, H) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc); \
+ unsigned scale = simd_data(desc); \
+ uintptr_t ra = GETPC(); \
+ bool first = true; \
+ mmap_lock(); \
+ for (i = 0; i < oprsz; ) { \
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+ do { \
+ TYPEM m = 0; \
+ if (pg & 1) { \
+ target_ulong off = *(TYPEI *)(vm + H(i)); \
+ target_ulong addr = base + (off << scale); \
+ if (!first && \
+ page_check_range(addr, sizeof(TYPEM), PAGE_READ)) { \
+ record_fault(env, i, oprsz); \
+ goto exit; \
+ } \
+ m = FN(env, addr, ra); \
+ first = false; \
+ } \
+ *(TYPEE *)(vd + H(i)) = m; \
+ i += sizeof(TYPEE), pg >>= sizeof(TYPEE); \
+ } while (i & 15); \
+ } \
+ exit: \
+ mmap_unlock(); \
+}
+
+#else
+
+#define DO_LDFF1_ZPZ(NAME, TYPEE, TYPEI, TYPEM, FN, H) \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm, \
+ target_ulong base, uint32_t desc) \
+{ \
+ g_assert_not_reached(); \
+}
+
+#endif
+
+#define DO_LDFF1_ZPZ_S(NAME, TYPEI, TYPEM, FN) \
+ DO_LDFF1_ZPZ(NAME, uint32_t, TYPEI, TYPEM, FN, H1_4)
+#define DO_LDFF1_ZPZ_D(NAME, TYPEI, TYPEM, FN) \
+ DO_LDFF1_ZPZ(NAME, uint64_t, TYPEI, TYPEM, FN, )
+
+DO_LDFF1_ZPZ_S(sve_ldffbsu_zsu, uint32_t, uint8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffhsu_zsu, uint32_t, uint16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffssu_zsu, uint32_t, uint32_t, cpu_ldl_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffbss_zsu, uint32_t, int8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffhss_zsu, uint32_t, int16_t, cpu_lduw_data_ra)
+
+DO_LDFF1_ZPZ_S(sve_ldffbsu_zss, int32_t, uint8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffhsu_zss, int32_t, uint16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffssu_zss, int32_t, uint32_t, cpu_ldl_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffbss_zss, int32_t, int8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_S(sve_ldffhss_zss, int32_t, int16_t, cpu_lduw_data_ra)
+
+DO_LDFF1_ZPZ_D(sve_ldffbdu_zsu, uint32_t, uint8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhdu_zsu, uint32_t, uint16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsdu_zsu, uint32_t, uint32_t, cpu_ldl_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffddu_zsu, uint32_t, uint64_t, cpu_ldq_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffbds_zsu, uint32_t, int8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhds_zsu, uint32_t, int16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsds_zsu, uint32_t, int32_t, cpu_ldl_data_ra)
+
+DO_LDFF1_ZPZ_D(sve_ldffbdu_zss, int32_t, uint8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhdu_zss, int32_t, uint16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsdu_zss, int32_t, uint32_t, cpu_ldl_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffddu_zss, int32_t, uint64_t, cpu_ldq_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffbds_zss, int32_t, int8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhds_zss, int32_t, int16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsds_zss, int32_t, int32_t, cpu_ldl_data_ra)
+
+DO_LDFF1_ZPZ_D(sve_ldffbdu_zd, uint64_t, uint8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhdu_zd, uint64_t, uint16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsdu_zd, uint64_t, uint32_t, cpu_ldl_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffddu_zd, uint64_t, uint64_t, cpu_ldq_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffbds_zd, uint64_t, int8_t, cpu_ldub_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffhds_zd, uint64_t, int16_t, cpu_lduw_data_ra)
+DO_LDFF1_ZPZ_D(sve_ldffsds_zd, uint64_t, int32_t, cpu_ldl_data_ra)
+
/* Stores with a vector index. */

#define DO_ST1_ZPZ_S(NAME, TYPEI, FN) \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_mem_scatter * const gather_load_fn32[2][2][2][3] = {
{ gen_helper_sve_ldbsu_zss,
gen_helper_sve_ldhsu_zss,
gen_helper_sve_ldssu_zss, } } },
- /* TODO fill in first-fault handlers */
+
+ { { { gen_helper_sve_ldffbss_zsu,
+ gen_helper_sve_ldffhss_zsu,
+ NULL, },
+ { gen_helper_sve_ldffbsu_zsu,
+ gen_helper_sve_ldffhsu_zsu,
+ gen_helper_sve_ldffssu_zsu, } },
+ { { gen_helper_sve_ldffbss_zss,
+ gen_helper_sve_ldffhss_zss,
+ NULL, },
+ { gen_helper_sve_ldffbsu_zss,
+ gen_helper_sve_ldffhsu_zss,
+ gen_helper_sve_ldffssu_zss, } } }
};

/* Note that we overload xs=2 to indicate 64-bit offset. */
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_mem_scatter * const gather_load_fn64[2][3][2][4] = {
gen_helper_sve_ldhdu_zd,
gen_helper_sve_ldsdu_zd,
gen_helper_sve_ldddu_zd, } } },
- /* TODO fill in first-fault handlers */
+
+ { { { gen_helper_sve_ldffbds_zsu,
+ gen_helper_sve_ldffhds_zsu,
+ gen_helper_sve_ldffsds_zsu,
+ NULL, },
+ { gen_helper_sve_ldffbdu_zsu,
+ gen_helper_sve_ldffhdu_zsu,
+ gen_helper_sve_ldffsdu_zsu,
+ gen_helper_sve_ldffddu_zsu, } },
+ { { gen_helper_sve_ldffbds_zss,
+ gen_helper_sve_ldffhds_zss,
+ gen_helper_sve_ldffsds_zss,
+ NULL, },
+ { gen_helper_sve_ldffbdu_zss,
+ gen_helper_sve_ldffhdu_zss,
+ gen_helper_sve_ldffsdu_zss,
+ gen_helper_sve_ldffddu_zss, } },
+ { { gen_helper_sve_ldffbds_zd,
+ gen_helper_sve_ldffhds_zd,
+ gen_helper_sve_ldffsds_zd,
+ NULL, },
+ { gen_helper_sve_ldffbdu_zd,
+ gen_helper_sve_ldffhdu_zd,
+ gen_helper_sve_ldffsdu_zd,
+ gen_helper_sve_ldffddu_zd, } } }
};

static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a, uint32_t insn)
--
2.17.1
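
The contract DO_LDFF1_ZPZ implements above: the first active element is loaded normally and may fault, while any later unreadable element makes the load stop and record how far it got instead of trapping. A standalone sketch with a readable-element count standing in for page_check_range() (all names illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Returns the number of elements processed before a deferred fault,
     * or nelem if every active element was loaded. */
    static size_t ldff_sketch(uint32_t *dst, const uint32_t *src,
                              const uint8_t *pred, size_t nelem,
                              size_t readable_elems)
    {
        bool first = true;
        for (size_t i = 0; i < nelem; i++) {
            uint32_t m = 0;
            if (pred[i] & 1) {
                if (!first && i >= readable_elems) {
                    return i;      /* fault recorded, not taken */
                }
                m = src[i];        /* first active element may really fault */
                first = false;
            }
            dst[i] = m;            /* inactive elements are zeroed */
        }
        return nelem;
    }
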
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 85 ++++++++++++++++++++++++++------------
target/arm/sve.decode | 11 +++++
2 files changed, 70 insertions(+), 26 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a, uint32_t insn)
return true;
}

+/* Indexed by [xs][msz]. */
+static gen_helper_gvec_mem_scatter * const scatter_store_fn32[2][3] = {
+ { gen_helper_sve_stbs_zsu,
+ gen_helper_sve_sths_zsu,
+ gen_helper_sve_stss_zsu, },
+ { gen_helper_sve_stbs_zss,
+ gen_helper_sve_sths_zss,
+ gen_helper_sve_stss_zss, },
+};
+
+/* Note that we overload xs=2 to indicate 64-bit offset. */
+static gen_helper_gvec_mem_scatter * const scatter_store_fn64[3][4] = {
+ { gen_helper_sve_stbd_zsu,
+ gen_helper_sve_sthd_zsu,
+ gen_helper_sve_stsd_zsu,
+ gen_helper_sve_stdd_zsu, },
+ { gen_helper_sve_stbd_zss,
+ gen_helper_sve_sthd_zss,
+ gen_helper_sve_stsd_zss,
+ gen_helper_sve_stdd_zss, },
+ { gen_helper_sve_stbd_zd,
+ gen_helper_sve_sthd_zd,
+ gen_helper_sve_stsd_zd,
+ gen_helper_sve_stdd_zd, },
+};
+
static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
{
- /* Indexed by [xs][msz]. */
- static gen_helper_gvec_mem_scatter * const fn32[2][3] = {
- { gen_helper_sve_stbs_zsu,
- gen_helper_sve_sths_zsu,
- gen_helper_sve_stss_zsu, },
- { gen_helper_sve_stbs_zss,
- gen_helper_sve_sths_zss,
- gen_helper_sve_stss_zss, },
- };
- /* Note that we overload xs=2 to indicate 64-bit offset. */
- static gen_helper_gvec_mem_scatter * const fn64[3][4] = {
- { gen_helper_sve_stbd_zsu,
- gen_helper_sve_sthd_zsu,
- gen_helper_sve_stsd_zsu,
- gen_helper_sve_stdd_zsu, },
- { gen_helper_sve_stbd_zss,
- gen_helper_sve_sthd_zss,
- gen_helper_sve_stsd_zss,
- gen_helper_sve_stdd_zss, },
- { gen_helper_sve_stbd_zd,
- gen_helper_sve_sthd_zd,
- gen_helper_sve_stsd_zd,
- gen_helper_sve_stdd_zd, },
- };
gen_helper_gvec_mem_scatter *fn;

if (a->esz < a->msz || (a->msz == 0 && a->scale)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
}
switch (a->esz) {
case MO_32:
- fn = fn32[a->xs][a->msz];
+ fn = scatter_store_fn32[a->xs][a->msz];
break;
case MO_64:
- fn = fn64[a->xs][a->msz];
+ fn = scatter_store_fn64[a->xs][a->msz];
break;
default:
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
return true;
}

+static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a, uint32_t insn)
+{
+ gen_helper_gvec_mem_scatter *fn = NULL;
+ TCGv_i64 imm;
+
+ if (a->esz < a->msz) {
+ return false;
+ }
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ switch (a->esz) {
+ case MO_32:
+ fn = scatter_store_fn32[0][a->msz];
+ break;
+ case MO_64:
+ fn = scatter_store_fn64[2][a->msz];
+ break;
+ }
+ assert(fn != NULL);
+
+ /* Treat ST1_zpiz (zn[x] + imm) the same way as ST1_zprz (rn + zm[x])
+ * by loading the immediate into the scalar parameter.
+ */
+ imm = tcg_const_i64(a->imm << a->msz);
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0, imm, fn);
+ tcg_temp_free_i64(imm);
+ return true;
+}
+
/*
* Prefetches
*/
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
&rprr_gather_load rd pg rn rm esz msz u ff xs scale
&rpri_gather_load rd pg rn imm esz msz u ff
&rprr_scatter_store rd pg rn rm esz msz xs scale
+&rpri_scatter_store rd pg rn imm esz msz

###########################################################################
# Named instruction formats. These are generally used to
@@ -XXX,XX +XXX,XX @@
&rprr_store nreg=0
@rprr_scatter_store ....... msz:2 .. rm:5 ... pg:3 rn:5 rd:5 \
&rprr_scatter_store
+@rpri_scatter_store ....... msz:2 .. imm:5 ... pg:3 rn:5 rd:5 \
+ &rpri_scatter_store

###########################################################################
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
@@ -XXX,XX +XXX,XX @@ ST1_zprz 1110010 .. 01 ..... 101 ... ..... ..... \
ST1_zprz 1110010 .. 00 ..... 101 ... ..... ..... \
@rprr_scatter_store xs=2 esz=3 scale=0

+# SVE 64-bit scatter store (vector plus immediate)
+ST1_zpiz 1110010 .. 10 ..... 101 ... ..... ..... \
+ @rpri_scatter_store esz=3
+
+# SVE 32-bit scatter store (vector plus immediate)
+ST1_zpiz 1110010 .. 11 ..... 101 ... ..... ..... \
+ @rpri_scatter_store esz=2
+
# SVE 64-bit scatter store (scalar plus unpacked 32-bit scaled offset)
# Require msz > 0
ST1_zprz 1110010 .. 01 ..... 100 ... ..... ..... \
--
2.17.1
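
The comment in trans_ST1_zpiz above is the whole trick: "vector plus immediate" addressing, zn[i] + imm, is the same per-element sum as "scalar plus vector", rn + zm[i], so the immediate can ride in the scalar parameter and the existing do_mem_zpz path is reused unchanged. In sketch form (illustrative only):

    #include <stdint.h>

    static uint64_t elem_addr(uint64_t scalar, uint64_t vec_elem,
                              unsigned scale)
    {
        return scalar + (vec_elem << scale);
    }

    /* scalar + vector: elem_addr(rn, zm[i], scale)        (ST1_zprz) */
    /* vector + imm:    elem_addr(imm << msz, zn[i], 0)    (ST1_zpiz) */
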
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 49 ++++++++++++++++++++++++++++++
target/arm/sve_helper.c | 62 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 40 ++++++++++++++++++++++++
target/arm/sve.decode | 11 +++++++
4 files changed, 162 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_6(sve_fcmge_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmge_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmge_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmgt_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmgt_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmgt_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmeq_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmeq_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmeq_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmne_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmne_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmne_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmuo_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmuo_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmuo_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_facge_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facge_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facge_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_facgt_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facgt_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facgt_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fnmls_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
do_fmla_zpzzz_d(env, vg, desc, 0, INT64_MIN);
}

+/* Two operand floating-point comparison controlled by a predicate.
+ * Unlike the integer version, we are not allowed to optimistically
+ * compare operands, since the comparison may have side effects wrt
+ * the FPSR.
+ */
+#define DO_FPCMP_PPZZ(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
+ void *status, uint32_t desc) \
+{ \
+ intptr_t i = simd_oprsz(desc), j = (i - 1) >> 6; \
+ uint64_t *d = vd, *g = vg; \
+ do { \
+ uint64_t out = 0, pg = g[j]; \
+ do { \
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
+ if (likely((pg >> (i & 63)) & 1)) { \
+ TYPE nn = *(TYPE *)(vn + H(i)); \
+ TYPE mm = *(TYPE *)(vm + H(i)); \
+ out |= OP(TYPE, nn, mm, status); \
+ } \
+ } while (i & 63); \
+ d[j--] = out; \
+ } while (i > 0); \
+}
+
+#define DO_FPCMP_PPZZ_H(NAME, OP) \
+ DO_FPCMP_PPZZ(NAME##_h, float16, H1_2, OP)
+#define DO_FPCMP_PPZZ_S(NAME, OP) \
+ DO_FPCMP_PPZZ(NAME##_s, float32, H1_4, OP)
+#define DO_FPCMP_PPZZ_D(NAME, OP) \
+ DO_FPCMP_PPZZ(NAME##_d, float64, , OP)
+
+#define DO_FPCMP_PPZZ_ALL(NAME, OP) \
+ DO_FPCMP_PPZZ_H(NAME, OP) \
+ DO_FPCMP_PPZZ_S(NAME, OP) \
+ DO_FPCMP_PPZZ_D(NAME, OP)
+
+#define DO_FCMGE(TYPE, X, Y, ST) TYPE##_compare(Y, X, ST) <= 0
+#define DO_FCMGT(TYPE, X, Y, ST) TYPE##_compare(Y, X, ST) < 0
+#define DO_FCMEQ(TYPE, X, Y, ST) TYPE##_compare_quiet(X, Y, ST) == 0
+#define DO_FCMNE(TYPE, X, Y, ST) TYPE##_compare_quiet(X, Y, ST) != 0
+#define DO_FCMUO(TYPE, X, Y, ST) \
+ TYPE##_compare_quiet(X, Y, ST) == float_relation_unordered
+#define DO_FACGE(TYPE, X, Y, ST) \
+ TYPE##_compare(TYPE##_abs(Y), TYPE##_abs(X), ST) <= 0
+#define DO_FACGT(TYPE, X, Y, ST) \
+ TYPE##_compare(TYPE##_abs(Y), TYPE##_abs(X), ST) < 0
+
+DO_FPCMP_PPZZ_ALL(sve_fcmge, DO_FCMGE)
+DO_FPCMP_PPZZ_ALL(sve_fcmgt, DO_FCMGT)
+DO_FPCMP_PPZZ_ALL(sve_fcmeq, DO_FCMEQ)
+DO_FPCMP_PPZZ_ALL(sve_fcmne, DO_FCMNE)
+DO_FPCMP_PPZZ_ALL(sve_fcmuo, DO_FCMUO)
+DO_FPCMP_PPZZ_ALL(sve_facge, DO_FACGE)
+DO_FPCMP_PPZZ_ALL(sve_facgt, DO_FACGT)
+
+#undef DO_FPCMP_PPZZ_ALL
+#undef DO_FPCMP_PPZZ_D
+#undef DO_FPCMP_PPZZ_S
+#undef DO_FPCMP_PPZZ_H
+#undef DO_FPCMP_PPZZ
+
/*
* Load contiguous data, protected by a governing predicate.
*/
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_FP3(FMULX, fmulx)

#undef DO_FP3

+static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
+ gen_helper_gvec_4_ptr *fn)
+{
+ if (fn == NULL) {
+ return false;
+ }
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ pred_full_reg_offset(s, a->pg),
+ status, vsz, vsz, 0, fn);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
+#define DO_FPCMP(NAME, name) \
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a, \
+ uint32_t insn) \
+{ \
+ static gen_helper_gvec_4_ptr * const fns[4] = { \
+ NULL, gen_helper_sve_##name##_h, \
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d \
+ }; \
+ return do_fp_cmp(s, a, fns[a->esz]); \
+}
+
+DO_FPCMP(FCMGE, fcmge)
+DO_FPCMP(FCMGT, fcmgt)
+DO_FPCMP(FCMEQ, fcmeq)
+DO_FPCMP(FCMNE, fcmne)
+DO_FPCMP(FCMUO, fcmuo)
+DO_FPCMP(FACGE, facge)
+DO_FPCMP(FACGT, facgt)
+
+#undef DO_FPCMP
+
typedef void gen_helper_sve_fmla(TCGv_env, TCGv_ptr, TCGv_i32);

static bool do_fmla(DisasContext *s, arg_rprrr_esz *a, gen_helper_sve_fmla *fn)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UXTH 00000100 .. 010 011 101 ... ..... ..... @rd_pg_rn
SXTW 00000100 .. 010 100 101 ... ..... ..... @rd_pg_rn
UXTW 00000100 .. 010 101 101 ... ..... ..... @rd_pg_rn

+### SVE Floating Point Compare - Vectors Group
+
+# SVE floating-point compare vectors
+FCMGE_ppzz 01100101 .. 0 ..... 010 ... ..... 0 .... @pd_pg_rn_rm
+FCMGT_ppzz 01100101 .. 0 ..... 010 ... ..... 1 .... @pd_pg_rn_rm
+FCMEQ_ppzz 01100101 .. 0 ..... 011 ... ..... 0 .... @pd_pg_rn_rm
+FCMNE_ppzz 01100101 .. 0 ..... 011 ... ..... 1 .... @pd_pg_rn_rm
+FCMUO_ppzz 01100101 .. 0 ..... 110 ... ..... 0 .... @pd_pg_rn_rm
+FACGE_ppzz 01100101 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
+FACGT_ppzz 01100101 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
+
### SVE Integer Multiply-Add Group

# SVE integer multiply-add writing addend (predicated)
--
2.17.1
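
One more aside on the DO_FPCMP_PPZZ expansion above: each element's boolean result lands in the predicate bit at the element's byte offset, so for 32-bit elements only every fourth output bit can ever be set. A standalone sketch of that packing for one 64-bit predicate word (illustrative only; real FCMGE is a signaling compare, here only the NaN-compares-false result is modelled):

    #include <stdint.h>

    /* Pack FCMGE-style results for up to 16 float elements into one
     * predicate word: element e's result bit sits at bit 4*e. */
    static uint64_t fcmge_pack_sketch(const float *n, const float *m,
                                      uint64_t pg, int nelem)
    {
        uint64_t out = 0;
        for (int e = nelem - 1; e >= 0; e--) {
            out <<= 4;                   /* one 4-bit slot per element */
            if ((pg >> (e * 4)) & 1) {
                out |= (uint64_t)(n[e] >= m[e]);  /* NaN compares false */
            }
        }
        return out;
    }
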
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 56 ++++++++++++++++++++++++++++
target/arm/sve_helper.c | 69 +++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 14 +++++++
4 files changed, 214 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_fmulx_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(sve_fmulx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_6(sve_fadds_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadds_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadds_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fsubs_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubs_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubs_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmuls_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmuls_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmuls_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fsubrs_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubrs_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubrs_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmaxnms_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnms_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnms_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fminnms_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnms_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnms_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmaxs_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxs_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxs_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmins_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmins_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmins_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, i64, ptr, i32)
+
DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_FP(sve_fmulx_d, uint64_t, , helper_vfp_mulxd)

#undef DO_ZPZZ_FP

+/* Three-operand expander, with one scalar operand, controlled by
+ * a predicate, with the extra float_status parameter.
+ */
+#define DO_ZPZS_FP(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint64_t scalar, \
+ void *status, uint32_t desc) \
+{ \
+ intptr_t i = simd_oprsz(desc); \
+ uint64_t *g = vg; \
+ TYPE mm = scalar; \
+ do { \
+ uint64_t pg = g[(i - 1) >> 6]; \
+ do { \
+ i -= sizeof(TYPE); \
+ if (likely((pg >> (i & 63)) & 1)) { \
+ TYPE nn = *(TYPE *)(vn + H(i)); \
+ *(TYPE *)(vd + H(i)) = OP(nn, mm, status); \
+ } \
+ } while (i & 63); \
+ } while (i != 0); \
+}
+
+DO_ZPZS_FP(sve_fadds_h, float16, H1_2, float16_add)
+DO_ZPZS_FP(sve_fadds_s, float32, H1_4, float32_add)
+DO_ZPZS_FP(sve_fadds_d, float64, , float64_add)
+
+DO_ZPZS_FP(sve_fsubs_h, float16, H1_2, float16_sub)
+DO_ZPZS_FP(sve_fsubs_s, float32, H1_4, float32_sub)
+DO_ZPZS_FP(sve_fsubs_d, float64, , float64_sub)
+
+DO_ZPZS_FP(sve_fmuls_h, float16, H1_2, float16_mul)
+DO_ZPZS_FP(sve_fmuls_s, float32, H1_4, float32_mul)
+DO_ZPZS_FP(sve_fmuls_d, float64, , float64_mul)
+
+static inline float16 subr_h(float16 a, float16 b, float_status *s)
+{
+ return float16_sub(b, a, s);
+}
+
+static inline float32 subr_s(float32 a, float32 b, float_status *s)
+{
+ return float32_sub(b, a, s);
+}
+
+static inline float64 subr_d(float64 a, float64 b, float_status *s)
+{
+ return float64_sub(b, a, s);
+}
+
+DO_ZPZS_FP(sve_fsubrs_h, float16, H1_2, subr_h)
+DO_ZPZS_FP(sve_fsubrs_s, float32, H1_4, subr_s)
+DO_ZPZS_FP(sve_fsubrs_d, float64, , subr_d)
+
+DO_ZPZS_FP(sve_fmaxnms_h, float16, H1_2, float16_maxnum)
+DO_ZPZS_FP(sve_fmaxnms_s, float32, H1_4, float32_maxnum)
+DO_ZPZS_FP(sve_fmaxnms_d, float64, , float64_maxnum)
+
+DO_ZPZS_FP(sve_fminnms_h, float16, H1_2, float16_minnum)
+DO_ZPZS_FP(sve_fminnms_s, float32, H1_4, float32_minnum)
+DO_ZPZS_FP(sve_fminnms_d, float64, , float64_minnum)
+
+DO_ZPZS_FP(sve_fmaxs_h, float16, H1_2, float16_max)
+DO_ZPZS_FP(sve_fmaxs_s, float32, H1_4, float32_max)
+DO_ZPZS_FP(sve_fmaxs_d, float64, , float64_max)
+
+DO_ZPZS_FP(sve_fmins_h, float16, H1_2, float16_min)
+DO_ZPZS_FP(sve_fmins_s, float32, H1_4, float32_min)
+DO_ZPZS_FP(sve_fmins_d, float64, , float64_min)
+
/* Fully general two-operand expander, controlled by a predicate,
* With the extra float_status parameter.
*/
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
#include "exec/log.h"
#include "trace-tcg.h"
#include "translate-a64.h"
+#include "fpu/softfloat.h"


typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
@@ -XXX,XX +XXX,XX @@ DO_FP3(FMULX, fmulx)

#undef DO_FP3

+typedef void gen_helper_sve_fp2scalar(TCGv_ptr, TCGv_ptr, TCGv_ptr,
+ TCGv_i64, TCGv_ptr, TCGv_i32);
+
+static void do_fp_scalar(DisasContext *s, int zd, int zn, int pg, bool is_fp16,
+ TCGv_i64 scalar, gen_helper_sve_fp2scalar *fn)
+{
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr t_zd, t_zn, t_pg, status;
+ TCGv_i32 desc;
+
+ t_zd = tcg_temp_new_ptr();
+ t_zn = tcg_temp_new_ptr();
+ t_pg = tcg_temp_new_ptr();
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, zd));
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, zn));
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+
+ status = get_fpstatus_ptr(is_fp16);
+ desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+ fn(t_zd, t_zn, t_pg, scalar, status, desc);
+
+ tcg_temp_free_i32(desc);
+ tcg_temp_free_ptr(status);
+ tcg_temp_free_ptr(t_pg);
+ tcg_temp_free_ptr(t_zn);
+ tcg_temp_free_ptr(t_zd);
+}
+
+static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
+ gen_helper_sve_fp2scalar *fn)
+{
+ TCGv_i64 temp = tcg_const_i64(imm);
+ do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
+ tcg_temp_free_i64(temp);
+}
+
+#define DO_FP_IMM(NAME, name, const0, const1) \
+static bool trans_##NAME##_zpzi(DisasContext *s, arg_rpri_esz *a, \
+ uint32_t insn) \
+{ \
+ static gen_helper_sve_fp2scalar * const fns[3] = { \
+ gen_helper_sve_##name##_h, \
+ gen_helper_sve_##name##_s, \
+ gen_helper_sve_##name##_d \
+ }; \
+ static uint64_t const val[3][2] = { \
+ { float16_##const0, float16_##const1 }, \
+ { float32_##const0, float32_##const1 }, \
+ { float64_##const0, float64_##const1 }, \
+ }; \
+ if (a->esz == 0) { \
+ return false; \
+ } \
+ if (sve_access_check(s)) { \
+ do_fp_imm(s, a, val[a->esz - 1][a->imm], fns[a->esz - 1]); \
+ } \
+ return true; \
+}
+
+#define float16_two make_float16(0x4000)
+#define float32_two make_float32(0x40000000)
+#define float64_two make_float64(0x4000000000000000ULL)
+
+DO_FP_IMM(FADD, fadds, half, one)
+DO_FP_IMM(FSUB, fsubs, half, one)
+DO_FP_IMM(FMUL, fmuls, half, two)
+DO_FP_IMM(FSUBR, fsubrs, half, one)
+DO_FP_IMM(FMAXNM, fmaxnms, zero, one)
+DO_FP_IMM(FMINNM, fminnms, zero, one)
+DO_FP_IMM(FMAX, fmaxs, zero, one)
+DO_FP_IMM(FMIN, fmins, zero, one)
+
+#undef DO_FP_IMM
+
static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
gen_helper_gvec_4_ptr *fn)
{
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
@rdn_pg4 ........ esz:2 .. pg:4 ... ........ rd:5 \
&rpri_esz rn=%reg_movprfx

+# Two register operand, one one-bit floating-point operand.
+@rdn_i1 ........ esz:2 ......... pg:3 .... imm:1 rd:5 \
+ &rpri_esz rn=%reg_movprfx
+
# Two register operand, one encoded bitmask.
@rdn_dbm ........ .. .... dbm:13 rd:5 \
&rr_dbm rn=%reg_movprfx
@@ -XXX,XX +XXX,XX @@ FMULX 01100101 .. 00 1010 100 ... ..... ..... @rdn_pg_rm
FDIV 01100101 .. 00 1100 100 ... ..... ..... @rdm_pg_rn # FDIVR
FDIV 01100101 .. 00 1101 100 ... ..... ..... @rdn_pg_rm

+# SVE floating-point arithmetic with immediate (predicated)
+FADD_zpzi 01100101 .. 011 000 100 ... 0000 . ..... @rdn_i1
+FSUB_zpzi 01100101 .. 011 001 100 ... 0000 . ..... @rdn_i1
+FMUL_zpzi 01100101 .. 011 010 100 ... 0000 . ..... @rdn_i1
+FSUBR_zpzi 01100101 .. 011 011 100 ... 0000 . ..... @rdn_i1
+FMAXNM_zpzi 01100101 .. 011 100 100 ... 0000 . ..... @rdn_i1
+FMINNM_zpzi 01100101 .. 011 101 100 ... 0000 . ..... @rdn_i1
+FMAX_zpzi 01100101 .. 011 110 100 ... 0000 . ..... @rdn_i1
+FMIN_zpzi 01100101 .. 011 111 100 ... 0000 . ..... @rdn_i1
+
### SVE FP Multiply-Add Group

# SVE floating-point multiply-accumulate writing addend
--
2.17.1
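To see what the DO_ZPZS_FP expander above generates, here is a hand expansion for the fp32 add case; an illustrative sketch, not patch code. The predicate register is consumed one 64-bit word at a time, walking the vector from the top down:

/* Hand expansion of DO_ZPZS_FP(sve_fadds_s, float32, H1_4, float32_add). */
void HELPER(sve_fadds_s)(void *vd, void *vn, void *vg, uint64_t scalar,
                         void *status, uint32_t desc)
{
    intptr_t i = simd_oprsz(desc);
    uint64_t *g = vg;
    float32 mm = scalar;
    do {
        uint64_t pg = g[(i - 1) >> 6];   /* predicate word covering bytes i-64..i-1 */
        do {
            i -= sizeof(float32);
            if (likely((pg >> (i & 63)) & 1)) {   /* predicate bit per byte offset */
                float32 nn = *(float32 *)(vn + H1_4(i));
                *(float32 *)(vd + H1_4(i)) = float32_add(nn, mm, status);
            }
        } while (i & 63);
    } while (i != 0);
}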
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 14 +++++++++++
target/arm/translate-sve.c | 50 ++++++++++++++++++++++++++++++++++++++
target/arm/vec_helper.c | 48 ++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 19 +++++++++++++++
4 files changed, 131 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(gvec_fmul_idx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_idx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_idx_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "helper-a64.h"
#include "helper-sve.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_ZZI(UMIN, umin)

#undef DO_ZZI

+/*
+ *** SVE Floating Point Multiply-Add Indexed Group
+ */
+
+static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a, uint32_t insn)
+{
+ static gen_helper_gvec_4_ptr * const fns[3] = {
+ gen_helper_gvec_fmla_idx_h,
+ gen_helper_gvec_fmla_idx_s,
+ gen_helper_gvec_fmla_idx_d,
+ };
+
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ vec_full_reg_offset(s, a->ra),
+ status, vsz, vsz, (a->index << 1) | a->sub,
+ fns[a->esz - 1]);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
+/*
+ *** SVE Floating Point Multiply Indexed Group
+ */
+
+static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a, uint32_t insn)
+{
+ static gen_helper_gvec_3_ptr * const fns[3] = {
+ gen_helper_gvec_fmul_idx_h,
+ gen_helper_gvec_fmul_idx_s,
+ gen_helper_gvec_fmul_idx_d,
+ };
+
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ status, vsz, vsz, a->index, fns[a->esz - 1]);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
/*
*** SVE Floating Point Accumulating Reduction Group
*/
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)

#endif
#undef DO_3OP
+
+/* For the indexed ops, SVE applies the index per 128-bit vector segment.
+ * For AdvSIMD, there is of course only one such vector segment.
+ */
+
+#define DO_MUL_IDX(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{ \
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
+ intptr_t idx = simd_data(desc); \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
+ TYPE mm = m[H(i + idx)]; \
+ for (j = 0; j < segment; j++) { \
+ d[i + j] = TYPE##_mul(n[i + j], mm, stat); \
+ } \
+ } \
+}
+
+DO_MUL_IDX(gvec_fmul_idx_h, float16, H2)
+DO_MUL_IDX(gvec_fmul_idx_s, float32, H4)
+DO_MUL_IDX(gvec_fmul_idx_d, float64, )
+
+#undef DO_MUL_IDX
+
+#define DO_FMLA_IDX(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
+ void *stat, uint32_t desc) \
+{ \
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
+ TYPE op1_neg = extract32(desc, SIMD_DATA_SHIFT, 1); \
+ intptr_t idx = desc >> (SIMD_DATA_SHIFT + 1); \
+ TYPE *d = vd, *n = vn, *m = vm, *a = va; \
+ op1_neg <<= (8 * sizeof(TYPE) - 1); \
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
+ TYPE mm = m[H(i + idx)]; \
+ for (j = 0; j < segment; j++) { \
+ d[i + j] = TYPE##_muladd(n[i + j] ^ op1_neg, \
+ mm, a[i + j], 0, stat); \
+ } \
+ } \
+}
+
+DO_FMLA_IDX(gvec_fmla_idx_h, float16, H2)
+DO_FMLA_IDX(gvec_fmla_idx_s, float32, H4)
+DO_FMLA_IDX(gvec_fmla_idx_d, float64, )
+
+#undef DO_FMLA_IDX
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
%imm9_16_10 16:s6 10:3
%size_23 23:2
%dtype_23_13 23:2 13:2
+%index3_22_19 22:1 19:2

# A combination of tsz:imm3 -- extract esize.
%tszimm_esz 22:2 5:5 !function=tszimm_esz
@@ -XXX,XX +XXX,XX @@ UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
# SVE integer multiply immediate (unpredicated)
MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s

+### SVE FP Multiply-Add Indexed Group
+
+# SVE floating-point multiply-add (indexed)
+FMLA_zzxz 01100100 0.1 .. rm:3 00000 sub:1 rn:5 rd:5 \
+ ra=%reg_movprfx index=%index3_22_19 esz=1
+FMLA_zzxz 01100100 101 index:2 rm:3 00000 sub:1 rn:5 rd:5 \
+ ra=%reg_movprfx esz=2
+FMLA_zzxz 01100100 111 index:1 rm:4 00000 sub:1 rn:5 rd:5 \
+ ra=%reg_movprfx esz=3
+
+### SVE FP Multiply Indexed Group
+
+# SVE floating-point multiply (indexed)
+FMUL_zzx 01100100 0.1 .. rm:3 001000 rn:5 rd:5 \
+ index=%index3_22_19 esz=1
+FMUL_zzx 01100100 101 index:2 rm:3 001000 rn:5 rd:5 esz=2
+FMUL_zzx 01100100 111 index:1 rm:4 001000 rn:5 rd:5 esz=3
+
### SVE FP Accumulating Reduction Group

# SVE floating-point serial reduction (predicated)
--
2.17.1
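The per-segment indexing that the vec_helper.c comment above describes is easiest to see in a hand expansion of DO_MUL_IDX for fp32: each 128-bit segment holds four float32 elements, and every segment multiplies by lane idx of its own quarter of m. An illustrative sketch, not patch code:

/* Hand expansion of DO_MUL_IDX(gvec_fmul_idx_s, float32, H4). */
void HELPER(gvec_fmul_idx_s)(void *vd, void *vn, void *vm, void *stat,
                             uint32_t desc)
{
    intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(float32);
    intptr_t idx = simd_data(desc);          /* lane index, 0..3 */
    float32 *d = vd, *n = vn, *m = vm;

    for (i = 0; i < oprsz / sizeof(float32); i += segment) {
        float32 mm = m[H4(i + idx)];         /* scalar comes from this segment */
        for (j = 0; j < segment; j++) {
            d[i + j] = float32_mul(n[i + j], mm, stat);
        }
    }
}

For AdvSIMD the vector is at most one segment long, so the outer loop runs once and the behaviour is unchanged.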
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 35 ++++++++++++++++++++++
target/arm/sve_helper.c | 61 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 57 +++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 8 +++++
4 files changed, 161 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_faddv_h, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_faddv_s, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_faddv_d, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_h, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_s, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_d, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fminnmv_h, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminnmv_s, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminnmv_d, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fmaxv_h, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxv_s, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxv_d, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fminv_h, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminv_s, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminv_d, TCG_CALL_NO_RWG,
+ i64, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(sve_fadda_h, TCG_CALL_NO_RWG,
i64, i64, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
return predtest_ones(d, oprsz, esz_mask);
}

+/* Recursive reduction on a function;
+ * C.f. the ARM ARM function ReducePredicated.
+ *
+ * While it would be possible to write this without the DATA temporary,
+ * it is much simpler to process the predicate register this way.
+ * The recursion is bounded to depth 7 (128 fp16 elements), so there's
+ * little to gain with a more complex non-recursive form.
+ */
+#define DO_REDUCE(NAME, TYPE, H, FUNC, IDENT) \
+static TYPE NAME##_reduce(TYPE *data, float_status *status, uintptr_t n) \
+{ \
+ if (n == 1) { \
+ return *data; \
+ } else { \
+ uintptr_t half = n / 2; \
+ TYPE lo = NAME##_reduce(data, status, half); \
+ TYPE hi = NAME##_reduce(data + half, status, half); \
+ return TYPE##_##FUNC(lo, hi, status); \
+ } \
+} \
+uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc) \
+{ \
+ uintptr_t i, oprsz = simd_oprsz(desc), maxsz = simd_maxsz(desc); \
+ TYPE data[sizeof(ARMVectorReg) / sizeof(TYPE)]; \
+ for (i = 0; i < oprsz; ) { \
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+ do { \
+ TYPE nn = *(TYPE *)(vn + H(i)); \
+ *(TYPE *)((void *)data + i) = (pg & 1 ? nn : IDENT); \
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
+ } while (i & 15); \
+ } \
+ for (; i < maxsz; i += sizeof(TYPE)) { \
+ *(TYPE *)((void *)data + i) = IDENT; \
+ } \
+ return NAME##_reduce(data, vs, maxsz / sizeof(TYPE)); \
+}
+
+DO_REDUCE(sve_faddv_h, float16, H1_2, add, float16_zero)
+DO_REDUCE(sve_faddv_s, float32, H1_4, add, float32_zero)
+DO_REDUCE(sve_faddv_d, float64, , add, float64_zero)
+
+/* Identity is floatN_default_nan, without the function call. */
+DO_REDUCE(sve_fminnmv_h, float16, H1_2, minnum, 0x7E00)
+DO_REDUCE(sve_fminnmv_s, float32, H1_4, minnum, 0x7FC00000)
+DO_REDUCE(sve_fminnmv_d, float64, , minnum, 0x7FF8000000000000ULL)
+
+DO_REDUCE(sve_fmaxnmv_h, float16, H1_2, maxnum, 0x7E00)
+DO_REDUCE(sve_fmaxnmv_s, float32, H1_4, maxnum, 0x7FC00000)
+DO_REDUCE(sve_fmaxnmv_d, float64, , maxnum, 0x7FF8000000000000ULL)
+
+DO_REDUCE(sve_fminv_h, float16, H1_2, min, float16_infinity)
+DO_REDUCE(sve_fminv_s, float32, H1_4, min, float32_infinity)
+DO_REDUCE(sve_fminv_d, float64, , min, float64_infinity)
+
+DO_REDUCE(sve_fmaxv_h, float16, H1_2, max, float16_chs(float16_infinity))
+DO_REDUCE(sve_fmaxv_s, float32, H1_4, max, float32_chs(float32_infinity))
+DO_REDUCE(sve_fmaxv_d, float64, , max, float64_chs(float64_infinity))
+
+#undef DO_REDUCE
+
uint64_t HELPER(sve_fadda_h)(uint64_t nn, void *vm, void *vg,
void *status, uint32_t desc)
{
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a, uint32_t insn)
return true;
}

+/*
+ *** SVE Floating Point Fast Reduction Group
+ */
+
+typedef void gen_helper_fp_reduce(TCGv_i64, TCGv_ptr, TCGv_ptr,
+ TCGv_ptr, TCGv_i32);
+
+static void do_reduce(DisasContext *s, arg_rpr_esz *a,
+ gen_helper_fp_reduce *fn)
+{
+ unsigned vsz = vec_full_reg_size(s);
+ unsigned p2vsz = pow2ceil(vsz);
+ TCGv_i32 t_desc = tcg_const_i32(simd_desc(vsz, p2vsz, 0));
+ TCGv_ptr t_zn, t_pg, status;
+ TCGv_i64 temp;
+
+ temp = tcg_temp_new_i64();
+ t_zn = tcg_temp_new_ptr();
+ t_pg = tcg_temp_new_ptr();
+
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
+ status = get_fpstatus_ptr(a->esz == MO_16);
+
+ fn(temp, t_zn, t_pg, status, t_desc);
+ tcg_temp_free_ptr(t_zn);
+ tcg_temp_free_ptr(t_pg);
+ tcg_temp_free_ptr(status);
+ tcg_temp_free_i32(t_desc);
+
+ write_fp_dreg(s, a->rd, temp);
+ tcg_temp_free_i64(temp);
+}
+
+#define DO_VPZ(NAME, name) \
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
+{ \
+ static gen_helper_fp_reduce * const fns[3] = { \
+ gen_helper_sve_##name##_h, \
+ gen_helper_sve_##name##_s, \
+ gen_helper_sve_##name##_d, \
+ }; \
+ if (a->esz == 0) { \
+ return false; \
+ } \
+ if (sve_access_check(s)) { \
+ do_reduce(s, a, fns[a->esz - 1]); \
+ } \
+ return true; \
+}
+
+DO_VPZ(FADDV, faddv)
+DO_VPZ(FMINNMV, fminnmv)
+DO_VPZ(FMAXNMV, fmaxnmv)
+DO_VPZ(FMINV, fminv)
+DO_VPZ(FMAXV, fmaxv)
+
/*
*** SVE Floating Point Accumulating Reduction Group
*/
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FMUL_zzx 01100100 0.1 .. rm:3 001000 rn:5 rd:5 \
FMUL_zzx 01100100 101 index:2 rm:3 001000 rn:5 rd:5 esz=2
FMUL_zzx 01100100 111 index:1 rm:4 001000 rn:5 rd:5 esz=3

+### SVE FP Fast Reduction Group
+
+FADDV 01100101 .. 000 000 001 ... ..... ..... @rd_pg_rn
+FMAXNMV 01100101 .. 000 100 001 ... ..... ..... @rd_pg_rn
+FMINNMV 01100101 .. 000 101 001 ... ..... ..... @rd_pg_rn
+FMAXV 01100101 .. 000 110 001 ... ..... ..... @rd_pg_rn
+FMINV 01100101 .. 000 111 001 ... ..... ..... @rd_pg_rn
+
### SVE FP Accumulating Reduction Group

# SVE floating-point serial reduction (predicated)
--
2.17.1
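The reduction tree that DO_REDUCE builds is easiest to see in its hand expansion for sve_faddv_s; an illustrative sketch, not patch code. For a 256-bit vector (8 float32 elements, inactive lanes already replaced by the identity) the recursion splits 8 -> 4 -> 2 -> 1, i.e. three levels of pairwise adds rather than a serial chain:

/* Hand expansion of the recursive half of DO_REDUCE for faddv_s. */
static float32 sve_faddv_s_reduce(float32 *data, float_status *status,
                                  uintptr_t n)
{
    if (n == 1) {
        return *data;
    } else {
        uintptr_t half = n / 2;
        float32 lo = sve_faddv_s_reduce(data, status, half);
        float32 hi = sve_faddv_s_reduce(data + half, status, half);
        return float32_add(lo, hi, status);   /* pairwise tree combine */
    }
}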
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.

For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 21 ++++++++++++---------
target/arm/translate.c | 32 +++++++++++++++++++++++---------
target/arm/vec_helper.c | 10 ++++++----
3 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x13: /* FCMLA #90 */
case 0x15: /* FCMLA #180 */
case 0x17: /* FCMLA #270 */
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_reg_offset(s, rm, index, size), fpst,
- is_q ? 16 : 8, vec_full_reg_size(s),
- extract32(insn, 13, 2), /* rot */
- size == MO_64
- ? gen_helper_gvec_fcmlas_idx
- : gen_helper_gvec_fcmlah_idx);
- tcg_temp_free_ptr(fpst);
+ {
+ int rot = extract32(insn, 13, 2);
+ int data = (index << 2) | rot;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn),
+ vec_full_reg_offset(s, rm), fpst,
+ is_q ? 16 : 8, vec_full_reg_size(s), data,
+ size == MO_64
+ ? gen_helper_gvec_fcmlas_idx
+ : gen_helper_gvec_fcmlah_idx);
+ tcg_temp_free_ptr(fpst);
+ }
return;
}

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)

static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
{
- int rd, rn, rm, rot, size, opr_sz;
+ gen_helper_gvec_3_ptr *fn_gvec_ptr;
+ int rd, rn, rm, opr_sz, data;
TCGv_ptr fpst;
bool q;

q = extract32(insn, 6, 1);
VFP_DREG_D(rd, insn);
VFP_DREG_N(rn, insn);
- VFP_DREG_M(rm, insn);
if ((rd | rn) & q) {
return 1;
}

if ((insn & 0xff000f10) == 0xfe000800) {
/* VCMLA (indexed) -- 1111 1110 S.RR .... .... 1000 ...0 .... */
- rot = extract32(insn, 20, 2);
- size = extract32(insn, 23, 1);
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+ int rot = extract32(insn, 20, 2);
+ int size = extract32(insn, 23, 1);
+ int index;
+
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
return 1;
}
+ if (size == 0) {
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ return 1;
+ }
+ /* For fp16, rm is just Vm, and index is M. */
+ rm = extract32(insn, 0, 4);
+ index = extract32(insn, 5, 1);
+ } else {
+ /* For fp32, rm is the usual M:Vm, and index is 0. */
+ VFP_DREG_M(rm, insn);
+ index = 0;
+ }
+ data = (index << 2) | rot;
+ fn_gvec_ptr = (size ? gen_helper_gvec_fcmlas_idx
+ : gen_helper_gvec_fcmlah_idx);
} else {
return 1;
}
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
vfp_reg_offset(1, rn),
vfp_reg_offset(1, rm), fpst,
- opr_sz, opr_sz, rot,
- size ? gen_helper_gvec_fcmlas_idx
- : gen_helper_gvec_fcmlah_idx);
+ opr_sz, opr_sz, data, fn_gvec_ptr);
tcg_temp_free_ptr(fpst);
return 0;
}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
float_status *fpst = vfpst;
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
+ intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
uint32_t neg_real = flip ^ neg_imag;
uintptr_t i;
- float16 e1 = m[H2(flip)];
- float16 e3 = m[H2(1 - flip)];
+ float16 e1 = m[H2(2 * index + flip)];
+ float16 e3 = m[H2(2 * index + 1 - flip)];

/* Shift boolean to the sign bit so we can xor to negate. */
neg_real <<= 15;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
float_status *fpst = vfpst;
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
+ intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
uint32_t neg_real = flip ^ neg_imag;
uintptr_t i;
- float32 e1 = m[H4(flip)];
- float32 e3 = m[H4(1 - flip)];
+ float32 e1 = m[H4(2 * index + flip)];
+ float32 e3 = m[H4(2 * index + 1 - flip)];

/* Shift boolean to the sign bit so we can xor to negate. */
neg_real <<= 31;
--
2.17.1
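The translate-time and helper-side halves of this change agree on one small descriptor layout: rot occupies the low two bits of the simd_data field and the element index sits above it. A minimal standalone sketch of that packing (the helper function names here are hypothetical; only the bit layout comes from the hunks above):

#include <stdint.h>

/* Pack as the translators now do: data = (index << 2) | rot. */
static inline uint32_t fcmla_idx_pack(uint32_t index, uint32_t rot)
{
    return (index << 2) | rot;
}

/* Unpack as gvec_fcmlah_idx/gvec_fcmlas_idx do with extract32(). */
static inline void fcmla_idx_unpack(uint32_t data, uint32_t *flip,
                                    uint32_t *neg_imag, uint32_t *index)
{
    *flip = data & 1;            /* rot bit 0 */
    *neg_imag = (data >> 1) & 1; /* rot bit 1 */
    *index = (data >> 2) & 3;    /* element index within the 128-bit segment */
}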
From: Richard Henderson <richard.henderson@linaro.org>

Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment. The change has no effect
for AdvSIMD, since there is only one such segment.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 23 ++++++++++++++++++
target/arm/vec_helper.c | 50 +++++++++++++++++++++++---------------
target/arm/sve.decode | 6 +++++
3 files changed, 59 insertions(+), 20 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FCMLA_zpzzz(DisasContext *s,
return true;
}

+static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_gvec_fcmlah_idx,
+ gen_helper_gvec_fcmlas_idx,
+ };
+
+ tcg_debug_assert(a->esz == 1 || a->esz == 2);
+ tcg_debug_assert(a->rd == a->ra);
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ status, vsz, vsz,
+ a->index * 4 + a->rot,
+ fns[a->esz - 1]);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
/*
*** SVE Floating Point Unary Operations Predicated Group
*/
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
uint32_t neg_real = flip ^ neg_imag;
- uintptr_t i;
- float16 e1 = m[H2(2 * index + flip)];
- float16 e3 = m[H2(2 * index + 1 - flip)];
+ intptr_t elements = opr_sz / sizeof(float16);
+ intptr_t eltspersegment = 16 / sizeof(float16);
+ intptr_t i, j;

/* Shift boolean to the sign bit so we can xor to negate. */
neg_real <<= 15;
neg_imag <<= 15;
- e1 ^= neg_real;
- e3 ^= neg_imag;

- for (i = 0; i < opr_sz / 2; i += 2) {
- float16 e2 = n[H2(i + flip)];
- float16 e4 = e2;
+ for (i = 0; i < elements; i += eltspersegment) {
+ float16 mr = m[H2(i + 2 * index + 0)];
+ float16 mi = m[H2(i + 2 * index + 1)];
+ float16 e1 = neg_real ^ (flip ? mi : mr);
+ float16 e3 = neg_imag ^ (flip ? mr : mi);

- d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
- d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
+ for (j = i; j < i + eltspersegment; j += 2) {
+ float16 e2 = n[H2(j + flip)];
+ float16 e4 = e2;
+
+ d[H2(j)] = float16_muladd(e2, e1, d[H2(j)], 0, fpst);
+ d[H2(j + 1)] = float16_muladd(e4, e3, d[H2(j + 1)], 0, fpst);
+ }
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
uint32_t neg_real = flip ^ neg_imag;
- uintptr_t i;
- float32 e1 = m[H4(2 * index + flip)];
- float32 e3 = m[H4(2 * index + 1 - flip)];
+ intptr_t elements = opr_sz / sizeof(float32);
+ intptr_t eltspersegment = 16 / sizeof(float32);
+ intptr_t i, j;

/* Shift boolean to the sign bit so we can xor to negate. */
neg_real <<= 31;
neg_imag <<= 31;
- e1 ^= neg_real;
- e3 ^= neg_imag;

- for (i = 0; i < opr_sz / 4; i += 2) {
- float32 e2 = n[H4(i + flip)];
- float32 e4 = e2;
+ for (i = 0; i < elements; i += eltspersegment) {
+ float32 mr = m[H4(i + 2 * index + 0)];
+ float32 mi = m[H4(i + 2 * index + 1)];
+ float32 e1 = neg_real ^ (flip ? mi : mr);
+ float32 e3 = neg_imag ^ (flip ? mr : mi);

- d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
- d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
+ for (j = i; j < i + eltspersegment; j += 2) {
+ float32 e2 = n[H4(j + flip)];
+ float32 e4 = e2;
+
+ d[H4(j)] = float32_muladd(e2, e1, d[H4(j)], 0, fpst);
+ d[H4(j + 1)] = float32_muladd(e4, e3, d[H4(j + 1)], 0, fpst);
+ }
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FCADD 01100100 esz:2 00000 rot:1 100 pg:3 rm:5 rd:5 \
FCMLA_zpzzz 01100100 esz:2 0 rm:5 0 rot:2 pg:3 rn:5 rd:5 \
ra=%reg_movprfx

+# SVE floating-point complex multiply-add (indexed)
+FCMLA_zzxz 01100100 10 1 index:2 rm:3 0001 rot:2 rn:5 rd:5 \
+ ra=%reg_movprfx esz=1
+FCMLA_zzxz 01100100 11 1 index:1 rm:4 0001 rot:2 rn:5 rd:5 \
+ ra=%reg_movprfx esz=2
+
### SVE FP Multiply-Add Indexed Group

# SVE floating-point multiply-add (indexed)
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

When the PPTT table is built, the CPU topology is re-calculated, but
it's unnecessary because the CPU topology has been populated in
virt_possible_cpu_arch_ids() on the arm/virt machine.

This reworks build_pptt() to avoid the recalculation by reusing the
existing IDs in ms->possible_cpus. Currently, the only user of
build_pptt() is the arm/virt machine.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Tested-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20220503140304.855514-7-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
1 file changed, 48 insertions(+), 63 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -XXX,XX +XXX,XX @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
const char *oem_id, const char *oem_table_id)
{
MachineClass *mc = MACHINE_GET_CLASS(ms);
- GQueue *list = g_queue_new();
- guint pptt_start = table_data->len;
- guint parent_offset;
- guint length, i;
- int uid = 0;
- int socket;
+ CPUArchIdList *cpus = ms->possible_cpus;
+ int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+ uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
+ uint32_t pptt_start = table_data->len;
+ int n;
AcpiTable table = { .sig = "PPTT", .rev = 2,
.oem_id = oem_id, .oem_table_id = oem_table_id };

acpi_table_begin(&table, table_data);

- for (socket = 0; socket < ms->smp.sockets; socket++) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- /*
- * Physical package - represents the boundary
- * of a physical package
- */
- (1 << 0),
- 0, socket, NULL, 0);
- }
-
- if (mc->smp_props.clusters_supported) {
- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int cluster;
-
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- (0 << 0), /* not a physical package */
- parent_offset, cluster, NULL, 0);
- }
+ /*
+ * This works with the assumption that cpus[n].props.*_id has been
+ * sorted from top to down levels in mc->possible_cpu_arch_ids().
+ * Otherwise, the unexpected and duplicated containers will be
+ * created.
+ */
+ for (n = 0; n < cpus->len; n++) {
+ if (cpus->cpus[n].props.socket_id != socket_id) {
+ assert(cpus->cpus[n].props.socket_id > socket_id);
+ socket_id = cpus->cpus[n].props.socket_id;
+ cluster_id = -1;
+ core_id = -1;
+ socket_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (1 << 0), /* Physical package */
+ 0, socket_id, NULL, 0);
}
- }

- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int core;
-
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (core = 0; core < ms->smp.cores; core++) {
- if (ms->smp.threads > 1) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- (0 << 0), /* not a physical package */
- parent_offset, core, NULL, 0);
- } else {
- build_processor_hierarchy_node(
- table_data,
- (1 << 1) | /* ACPI Processor ID valid */
- (1 << 3), /* Node is a Leaf */
- parent_offset, uid++, NULL, 0);
+ if (mc->smp_props.clusters_supported) {
+ if (cpus->cpus[n].props.cluster_id != cluster_id) {
+ assert(cpus->cpus[n].props.cluster_id > cluster_id);
+ cluster_id = cpus->cpus[n].props.cluster_id;
+ core_id = -1;
+ cluster_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (0 << 0), /* Not a physical package */
+ socket_offset, cluster_id, NULL, 0);
}
+ } else {
+ cluster_offset = socket_offset;
}
- }

- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int thread;
+ if (ms->smp.threads == 1) {
+ build_processor_hierarchy_node(table_data,
+ (1 << 1) | /* ACPI Processor ID valid */
+ (1 << 3), /* Node is a Leaf */
+ cluster_offset, n, NULL, 0);
+ } else {
+ if (cpus->cpus[n].props.core_id != core_id) {
+ assert(cpus->cpus[n].props.core_id > core_id);
+ core_id = cpus->cpus[n].props.core_id;
+ core_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (0 << 0), /* Not a physical package */
+ cluster_offset, core_id, NULL, 0);
+ }

- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (thread = 0; thread < ms->smp.threads; thread++) {
- build_processor_hierarchy_node(
- table_data,
+ build_processor_hierarchy_node(table_data,
(1 << 1) | /* ACPI Processor ID valid */
(1 << 2) | /* Processor is a Thread */
(1 << 3), /* Node is a Leaf */
- parent_offset, uid++, NULL, 0);
+ core_offset, n, NULL, 0);
}
}

- g_queue_free(list);
acpi_table_end(linker, &table);
}

--
2.25.1
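To make the single-pass structure of the reworked build_pptt() concrete, here is a worked trace for a hypothetical "-smp 4,sockets=2,cores=2,threads=1" guest on a machine without cluster support (so cluster_offset falls back to socket_offset); the configuration is invented purely for illustration:

/*
 * cpus->cpus[] is sorted socket-major by possible_cpu_arch_ids(), so:
 *
 *   n=0: socket_id 0 != -1 -> emit package node for socket 0
 *        threads == 1      -> emit leaf for cpu 0 (ACPI processor ID 0)
 *   n=1: same socket       -> emit leaf for cpu 1 (ACPI processor ID 1)
 *   n=2: socket_id 1 != 0  -> emit package node for socket 2's worth,
 *                             i.e. socket 1
 *                          -> emit leaf for cpu 2 (ACPI processor ID 2)
 *   n=3: same socket       -> emit leaf for cpu 3 (ACPI processor ID 3)
 *
 * One pass, no GQueue bookkeeping, and the ACPI processor ID is just n.
 */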
Deleted patch
From: Jean-Christophe Dubois <jcd@tribudubois.net>

Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/mcimx7d-sabre.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mcimx7d-sabre.c
+++ b/hw/arm/mcimx7d-sabre.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/fsl-imx7.h"
#include "hw/boards.h"
#include "sysemu/sysemu.h"
-#include "sysemu/device_tree.h"
#include "qemu/error-report.h"
#include "sysemu/qtest.h"
-#include "net/net.h"

typedef struct {
FslIMX7State soc;
--
2.17.1
Deleted patch
We don't actually implement SD command CRC checking, because
for almost all of our SD controllers the CRC generation is
done in hardware, and so modelling CRC generation and checking
would be a bit pointless. (The exception is that milkymist-memcard
makes the guest software compute the CRC.)

As a result almost all of our SD controller models don't bother
to set the SDRequest crc field, and the SD card model doesn't
check it. So the tracing of it in sdbus_do_command() provokes
Coverity warnings about use of uninitialized data.

Drop the CRC field from the trace; we can always add it back
if and when we do anything useful with the CRC.

Fixes Coverity issues 1386072, 1386074, 1386076, 1390571.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180626180324.5537-1-peter.maydell@linaro.org
---
hw/sd/core.c | 2 +-
hw/sd/trace-events | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/sd/core.c b/hw/sd/core.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/core.c
+++ b/hw/sd/core.c
@@ -XXX,XX +XXX,XX @@ int sdbus_do_command(SDBus *sdbus, SDRequest *req, uint8_t *response)
{
SDState *card = get_card(sdbus);

- trace_sdbus_command(sdbus_name(sdbus), req->cmd, req->arg, req->crc);
+ trace_sdbus_command(sdbus_name(sdbus), req->cmd, req->arg);
if (card) {
SDCardClass *sc = SD_CARD_GET_CLASS(card);

diff --git a/hw/sd/trace-events b/hw/sd/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/trace-events
+++ b/hw/sd/trace-events
@@ -XXX,XX +XXX,XX @@ bcm2835_sdhost_edm_change(const char *why, uint32_t edm) "(%s) EDM now 0x%x"
bcm2835_sdhost_update_irq(uint32_t irq) "IRQ bits 0x%x\n"

# hw/sd/core.c
-sdbus_command(const char *bus_name, uint8_t cmd, uint32_t arg, uint8_t crc) "@%s CMD%02d arg 0x%08x crc 0x%02x"
+sdbus_command(const char *bus_name, uint8_t cmd, uint32_t arg) "@%s CMD%02d arg 0x%08x"
sdbus_read(const char *bus_name, uint8_t value) "@%s value 0x%02x"
sdbus_write(const char *bus_name, uint8_t value) "@%s value 0x%02x"
sdbus_set_voltage(const char *bus_name, uint16_t millivolts) "@%s %u (mV)"
--
2.17.1
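A minimal sketch (hypothetical caller, not QEMU code) of the pattern Coverity was flagging: controller models fill in cmd and arg but never crc, so the old trace call read an uninitialized byte:

static void example_read_block(SDBus *sdbus, uint32_t address,
                               uint8_t *response)
{
    SDRequest req;

    req.cmd = 17;        /* READ_SINGLE_BLOCK */
    req.arg = address;
    /* req.crc deliberately left unset: real hardware computes it,
     * so the controller models never fill it in. */
    sdbus_do_command(sdbus, &req, response);   /* old trace read req->crc */
}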