Big pullreq this week, since it's got RTH's PAN/UAO/ATS1E1
implementation in it, and also Philippe's raspi board model
cleanup patchset, as well as a scattering of smaller stuff.

thanks
-- PMM

The following changes since commit 7ce9ce89930ce260af839fb3e3e5f9101f5c69a0:

  Merge remote-tracking branch 'remotes/kraxel/tags/ui-20200212-pull-request' into staging (2020-02-13 11:06:32 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200213

for you to fetch changes up to dc7a88d0810ad272bdcd2e0869359af78fdd9114:

  target/arm: Implement ARMv8.1-VMID16 extension (2020-02-13 14:30:51 +0000)

----------------------------------------------------------------
target-arm queue:
 * i.MX: Fix inverted sense of register bits in watchdog timer
 * i.MX: Add support for WDT on i.MX6
 * arm/virt: cleanups to ACPI tables
 * Implement ARMv8.1-VMID16 extension
 * Implement ARMv8.1-PAN
 * Implement ARMv8.2-UAO
 * Implement ARMv8.2-ATS1E1
 * ast2400/2500/2600: Wire up EHCI controllers
 * hw/char/exynos4210_uart: Fix memleaks in exynos4210_uart_init
 * hw/arm/raspi: Clean up the board code

----------------------------------------------------------------
Chen Qun (1):
      hw/char/exynos4210_uart: Fix memleaks in exynos4210_uart_init

Guenter Roeck (2):
      hw/arm: ast2400/ast2500: Wire up EHCI controllers
      hw/arm: ast2600: Wire up EHCI controllers

Heyi Guo (7):
      bios-tables-test: prepare to change ARM virt ACPI DSDT
      arm/virt/acpi: remove meaningless sub device "RP0" from PCI0
      arm/virt/acpi: remove _ADR from devices identified by _HID
      arm/acpi: fix PCI _PRT definition
      arm/acpi: fix duplicated _UID of PCI interrupt link devices
      arm/acpi: simplify the description of PCI _CRS
      virt/acpi: update golden masters for DSDT update

Peter Maydell (1):
      target/arm: Implement ARMv8.1-VMID16 extension

Philippe Mathieu-Daudé (13):
      hw/arm/raspi: Use BCM2708 machine type with pre Device Tree kernels
      hw/arm/raspi: Correct the board descriptions
      hw/arm/raspi: Extract the version from the board revision
      hw/arm/raspi: Extract the RAM size from the board revision
      hw/arm/raspi: Extract the processor type from the board revision
      hw/arm/raspi: Trivial code movement
      hw/arm/raspi: Make machines children of abstract RaspiMachineClass
      hw/arm/raspi: Make board_rev a field of RaspiMachineClass
      hw/arm/raspi: Let class_init() directly call raspi_machine_init()
      hw/arm/raspi: Set default RAM size to size encoded in board revision
      hw/arm/raspi: Extract the board model from the board revision
      hw/arm/raspi: Use a unique raspi_machine_class_init() method
      hw/arm/raspi: Extract the cores count from the board revision

Richard Henderson (20):
      target/arm: Add arm_mmu_idx_is_stage1_of_2
      target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
      target/arm: Add isar_feature tests for PAN + ATS1E1
      target/arm: Move LOR regdefs to file scope
      target/arm: Split out aarch32_cpsr_valid_mask
      target/arm: Mask CPSR_J when Jazelle is not enabled
      target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
      target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
      target/arm: Remove CPSR_RESERVED
      target/arm: Introduce aarch64_pstate_valid_mask
      target/arm: Update MSR access for PAN
      target/arm: Update arm_mmu_idx_el for PAN
      target/arm: Enforce PAN semantics in get_S1prot
      target/arm: Set PAN bit as required on exception entry
      target/arm: Implement ATS1E1 system registers
      target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
      target/arm: Add ID_AA64MMFR2_EL1
      target/arm: Update MSR access to UAO
      target/arm: Implement UAO semantics
      target/arm: Enable ARMv8.2-UAO in -cpu max

Roman Kapl (2):
      i.MX: Fix inverted register bits in wdt code.
      i.MX: Add support for WDT on i.MX6

 include/hw/arm/aspeed_soc.h       |   6 +
 include/hw/arm/fsl-imx6.h         |   3 +
 target/arm/cpu-param.h            |   2 +-
 target/arm/cpu.h                  |  95 ++++++++---
 target/arm/internals.h            |  85 ++++++++++
 hw/arm/aspeed_ast2600.c           |  23 +++
 hw/arm/aspeed_soc.c               |  25 +++
 hw/arm/fsl-imx6.c                 |  21 +++
 hw/arm/raspi.c                    | 190 ++++++++++++++++------
 hw/arm/virt-acpi-build.c          |  25 +--
 hw/char/exynos4210_uart.c         |   5 +-
 hw/misc/imx2_wdt.c                |   2 +-
 target/arm/cpu.c                  |   4 +
 target/arm/cpu64.c                |  10 ++
 target/arm/helper-a64.c           |   6 +-
 target/arm/helper.c               | 327 +++++++++++++++++++++++++++++---------
 target/arm/kvm64.c                |   2 +
 target/arm/op_helper.c            |  14 +-
 target/arm/translate-a64.c        |  31 ++++
 target/arm/translate.c            |  42 +++--
 tests/data/acpi/virt/DSDT         | Bin 18462 -> 5307 bytes
 tests/data/acpi/virt/DSDT.memhp   | Bin 19799 -> 6644 bytes
 tests/data/acpi/virt/DSDT.numamem | Bin 18462 -> 5307 bytes
 23 files changed, 731 insertions(+), 187 deletions(-)
From: Roman Kapl <rka@sysgo.com>

Documentation says for WDA '0: Assert WDOG output.' and for SRS
'0: Assert system reset signal.'.

Signed-off-by: Roman Kapl <rka@sysgo.com>
Message-id: 20200207095409.11227-1-rka@sysgo.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx2_wdt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/misc/imx2_wdt.c b/hw/misc/imx2_wdt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx2_wdt.c
+++ b/hw/misc/imx2_wdt.c
@@ -XXX,XX +XXX,XX @@ static void imx2_wdt_write(void *opaque, hwaddr addr,
                            uint64_t value, unsigned int size)
 {
     if (addr == IMX2_WDT_WCR &&
-        (value & (IMX2_WDT_WCR_WDA | IMX2_WDT_WCR_SRS))) {
+        (~value & (IMX2_WDT_WCR_WDA | IMX2_WDT_WCR_SRS))) {
         watchdog_perform_action();
     }
 }
--
2.20.1
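The fix above hinges on the WDA and SRS bits being active-low: per the documentation quoted in the commit message, writing a 0 asserts the signal, so the watchdog action must fire when either bit is *clear*, which is why the test is `~value & mask` rather than `value & mask`. A minimal sketch of that predicate (the bit positions below are made up for illustration; the real definitions live in QEMU's hw/misc/imx2_wdt.h):

```c
#include <stdint.h>

/* Hypothetical bit positions, for illustration only. */
#define WCR_WDA (1u << 5)   /* 0: Assert WDOG output */
#define WCR_SRS (1u << 4)   /* 0: Assert system reset signal */

/* A WCR write triggers the watchdog action when either
 * active-low bit is written as 0. */
static inline int wcr_write_triggers_action(uint16_t value)
{
    return (~value & (WCR_WDA | WCR_SRS)) != 0;
}
```

With the pre-fix `value & mask` test, the no-op case (both bits written as 1) would have triggered the reset and the assert case (both bits 0) would not, which is exactly the inverted sense the patch corrects.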
From: Roman Kapl <rka@sysgo.com>

Uses the i.MX2 rudimentary watchdog driver.

Signed-off-by: Roman Kapl <rka@sysgo.com>
Message-id: 20200207095529.11309-1-rka@sysgo.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: removed accidental duplicate #include line]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx6.h |  3 +++
 hw/arm/fsl-imx6.c         | 21 +++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/hw/arm/fsl-imx6.h b/include/hw/arm/fsl-imx6.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx6.h
+++ b/include/hw/arm/fsl-imx6.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/cpu/a9mpcore.h"
 #include "hw/misc/imx6_ccm.h"
 #include "hw/misc/imx6_src.h"
+#include "hw/misc/imx2_wdt.h"
 #include "hw/char/imx_serial.h"
 #include "hw/timer/imx_gpt.h"
 #include "hw/timer/imx_epit.h"
@@ -XXX,XX +XXX,XX @@
 #define FSL_IMX6_NUM_GPIOS 7
 #define FSL_IMX6_NUM_ESDHCS 4
 #define FSL_IMX6_NUM_ECSPIS 5
+#define FSL_IMX6_NUM_WDTS 2

 typedef struct FslIMX6State {
     /*< private >*/
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX6State {
     IMXGPIOState gpio[FSL_IMX6_NUM_GPIOS];
     SDHCIState esdhc[FSL_IMX6_NUM_ESDHCS];
     IMXSPIState spi[FSL_IMX6_NUM_ECSPIS];
+    IMX2WdtState wdt[FSL_IMX6_NUM_WDTS];
     IMXFECState eth;
     MemoryRegion rom;
     MemoryRegion caam;
diff --git a/hw/arm/fsl-imx6.c b/hw/arm/fsl-imx6.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6.c
+++ b/hw/arm/fsl-imx6.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_init(Object *obj)
         sysbus_init_child_obj(obj, name, &s->spi[i], sizeof(s->spi[i]),
                               TYPE_IMX_SPI);
     }
+    for (i = 0; i < FSL_IMX6_NUM_WDTS; i++) {
+        snprintf(name, NAME_SIZE, "wdt%d", i);
+        sysbus_init_child_obj(obj, name, &s->wdt[i], sizeof(s->wdt[i]),
+                              TYPE_IMX2_WDT);
+    }
+

     sysbus_init_child_obj(obj, "eth", &s->eth, sizeof(s->eth), TYPE_IMX_ENET);
 }
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_realize(DeviceState *dev, Error **errp)
                         qdev_get_gpio_in(DEVICE(&s->a9mpcore),
                                          FSL_IMX6_ENET_MAC_1588_IRQ));

+    /*
+     * Watchdog
+     */
+    for (i = 0; i < FSL_IMX6_NUM_WDTS; i++) {
+        static const hwaddr FSL_IMX6_WDOGn_ADDR[FSL_IMX6_NUM_WDTS] = {
+            FSL_IMX6_WDOG1_ADDR,
+            FSL_IMX6_WDOG2_ADDR,
+        };
+
+        object_property_set_bool(OBJECT(&s->wdt[i]), true, "realized",
+                                 &error_abort);
+
+        sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt[i]), 0, FSL_IMX6_WDOGn_ADDR[i]);
+    }
+
     /* ROM memory */
     memory_region_init_rom(&s->rom, NULL, "imx6.rom",
                            FSL_IMX6_ROM_SIZE, &err);
--
2.20.1
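The realize hunk above maps each watchdog instance at a fixed base address pulled from a small per-instance lookup table, a common pattern for SoC models with several identical peripherals. The lookup itself can be sketched in plain C (the addresses below are illustrative stand-ins for the FSL_IMX6_WDOG1_ADDR/FSL_IMX6_WDOG2_ADDR constants in QEMU's fsl-imx6.h):

```c
#include <stdint.h>

enum { NUM_WDTS = 2 };

/* Illustrative base addresses only; the real values are defined
 * in include/hw/arm/fsl-imx6.h. */
static const uint32_t wdog_addr[NUM_WDTS] = { 0x020bc000u, 0x020c0000u };

/* Return the MMIO base for watchdog instance i, or 0 if out of range. */
static inline uint32_t wdog_base(unsigned i)
{
    return i < NUM_WDTS ? wdog_addr[i] : 0;
}
```

Keeping the table `static const` inside (or next to) the mapping loop, as the patch does, ties the address data to the one place that uses it.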
From: Heyi Guo <guoheyi@huawei.com>

We are going to change ARM virt ACPI DSDT table, which will cause make
check to fail, so temporarily add related golden masters to ignore
list.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-2-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/bios-tables-test-allowed-diff.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1 +1,4 @@
 /* List of comma-separated changed AML files to ignore */
+"tests/data/acpi/virt/DSDT",
+"tests/data/acpi/virt/DSDT.memhp",
+"tests/data/acpi/virt/DSDT.numamem",
--
2.20.1
From: Heyi Guo <guoheyi@huawei.com>

The sub device "RP0" under PCI0 in ACPI/DSDT does not contain any
method or property other than "_ADR", so it is safe to remove it.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Acked-by: "Michael S. Tsirkin" <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-3-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     aml_append(method, aml_return(buf));
     aml_append(dev, method);

-    Aml *dev_rp0 = aml_device("%s", "RP0");
-    aml_append(dev_rp0, aml_name_decl("_ADR", aml_int(0)));
-    aml_append(dev, dev_rp0);
-
     Aml *dev_res0 = aml_device("%s", "RES0");
     aml_append(dev_res0, aml_name_decl("_HID", aml_string("PNP0C02")));
     crs = aml_resource_template();
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 12 ++++++
 target/arm/sve_helper.c    | 30 ++++++++++++++
 target/arm/translate-sve.c | 85 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      | 26 ++++++++++++
 4 files changed, 153 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
 DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
 DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)

+DEF_HELPER_FLAGS_4(sve_asr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_asr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_asr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)

+/* Three-operand expander, unpredicated, in which the third operand is "wide".
+ */
+#define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{ \
+    intptr_t i, opr_sz = simd_oprsz(desc); \
+    for (i = 0; i < opr_sz; ) { \
+        TYPEW mm = *(TYPEW *)(vm + i); \
+        do { \
+            TYPE nn = *(TYPE *)(vn + H(i)); \
+            *(TYPE *)(vd + H(i)) = OP(nn, mm); \
+            i += sizeof(TYPE); \
+        } while (i & 7); \
+    } \
+}
+
+DO_ZZW(sve_asr_zzw_b, int8_t, uint64_t, H1, DO_ASR)
+DO_ZZW(sve_lsr_zzw_b, uint8_t, uint64_t, H1, DO_LSR)
+DO_ZZW(sve_lsl_zzw_b, uint8_t, uint64_t, H1, DO_LSL)
+
+DO_ZZW(sve_asr_zzw_h, int16_t, uint64_t, H1_2, DO_ASR)
+DO_ZZW(sve_lsr_zzw_h, uint16_t, uint64_t, H1_2, DO_LSR)
+DO_ZZW(sve_lsl_zzw_h, uint16_t, uint64_t, H1_2, DO_LSL)
+
+DO_ZZW(sve_asr_zzw_s, int32_t, uint64_t, H1_4, DO_ASR)
+DO_ZZW(sve_lsr_zzw_s, uint32_t, uint64_t, H1_4, DO_LSR)
+DO_ZZW(sve_lsl_zzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
+
+#undef DO_ZZW
+
 #undef DO_CLS_B
 #undef DO_CLS_H
 #undef DO_CLZ_B
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_mov_z(DisasContext *s, int rd, int rn)
     return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
 }

+/* Initialize a Zreg with replications of a 64-bit immediate. */
+static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
+{
+    unsigned vsz = vec_full_reg_size(s);
+    tcg_gen_gvec_dup64i(vec_full_reg_offset(s, rd), vsz, vsz, word);
+}
+
 /* Invoke a vector expander on two Pregs. */
 static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
                          int esz, int rd, int rn)
@@ -XXX,XX +XXX,XX @@ DO_ZPZW(LSL, lsl)

 #undef DO_ZPZW

+/*
+ *** SVE Bitwise Shift - Unpredicated Group
+ */
+
+static bool do_shift_imm(DisasContext *s, arg_rri_esz *a, bool asr,
+                         void (*gvec_fn)(unsigned, uint32_t, uint32_t,
+                                         int64_t, uint32_t, uint32_t))
+{
+    if (a->esz < 0) {
+        /* Invalid tsz encoding -- see tszimm_esz. */
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        /* Shift by element size is architecturally valid. For
+           arithmetic right-shift, it's the same as by one less.
+           Otherwise it is a zeroing operation. */
+        if (a->imm >= 8 << a->esz) {
+            if (asr) {
+                a->imm = (8 << a->esz) - 1;
+            } else {
+                do_dupi_z(s, a->rd, 0);
+                return true;
+            }
+        }
+        gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
+                vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_ASR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_shift_imm(s, a, true, tcg_gen_gvec_sari);
+}
+
+static bool trans_LSR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_shift_imm(s, a, false, tcg_gen_gvec_shri);
+}
+
+static bool trans_LSL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_shift_imm(s, a, false, tcg_gen_gvec_shli);
+}
+
+static bool do_zzw_ool(DisasContext *s, arg_rrr_esz *a, gen_helper_gvec_3 *fn)
+{
+    if (fn == NULL) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, 0, fn);
+    }
+    return true;
+}
+
+#define DO_ZZW(NAME, name) \
+static bool trans_##NAME##_zzw(DisasContext *s, arg_rrr_esz *a, \
+                               uint32_t insn) \
+{ \
+    static gen_helper_gvec_3 * const fns[4] = { \
+        gen_helper_sve_##name##_zzw_b, gen_helper_sve_##name##_zzw_h, \
+        gen_helper_sve_##name##_zzw_s, NULL \
+    }; \
+    return do_zzw_ool(s, a, fns[a->esz]); \
+}
+
+DO_ZZW(ASR, asr)
+DO_ZZW(LSR, lsr)
+DO_ZZW(LSL, lsl)
+
+#undef DO_ZZW
+
 /*
  *** SVE Integer Multiply-Add Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 # A combination of tsz:imm3 -- extract (tsz:imm3) - esize
 %tszimm_shl 22:2 5:5 !function=tszimm_shl

+# Similarly for the tszh/tszl pair at 22/16 for zzi
+%tszimm16_esz 22:2 16:5 !function=tszimm_esz
+%tszimm16_shr 22:2 16:5 !function=tszimm_shr
+%tszimm16_shl 22:2 16:5 !function=tszimm_shl
+
 # Either a copy of rd (at bit 0), or a different source
 # as propagated via the MOVPRFX instruction.
 %reg_movprfx 0:5
@@ -XXX,XX +XXX,XX @@

 &rr_esz rd rn esz
 &rri rd rn imm
+&rri_esz rd rn imm esz
 &rrr_esz rd rn rm esz
 &rpr_esz rd pg rn esz
 &rprr_s rd pg rn rm s
@@ -XXX,XX +XXX,XX @@
 @rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
                &rpri_esz rn=%reg_movprfx esz=%tszimm_esz

+# Similarly without predicate.
+@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
+              &rri_esz esz=%tszimm16_esz
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
           &rri imm=%imm9_16_10
@@ -XXX,XX +XXX,XX @@ ADDPL 00000100 011 ..... 01010 ...... ..... @rd_rn_i6
 # SVE stack frame size
 RDVL 00000100 101 11111 01010 imm:s6 rd:5

+### SVE Bitwise Shift - Unpredicated Group
+
+# SVE bitwise shift by immediate (unpredicated)
+ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... \
+        @rd_rn_tszimm imm=%tszimm16_shr
+LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... \
+        @rd_rn_tszimm imm=%tszimm16_shr
+LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... \
+        @rd_rn_tszimm imm=%tszimm16_shl
+
+# SVE bitwise shift by wide elements (unpredicated)
+# Note esz != 3
+ASR_zzw 00000100 .. 1 ..... 1000 00 ..... ..... @rd_rn_rm
+LSR_zzw 00000100 .. 1 ..... 1000 01 ..... ..... @rd_rn_rm
+LSL_zzw 00000100 .. 1 ..... 1000 11 ..... ..... @rd_rn_rm
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.0

From: Heyi Guo <guoheyi@huawei.com>

According to the ACPI spec, _ADR should be used for a device on a bus
that has a standard enumeration algorithm, but not for a device which
is on the system bus and must be enumerated by OSPM. And it is not
recommended to contain both _HID and _ADR in a single device.

See ACPI 6.3, section 6.1, top of page 343:

  A device object must contain either an _HID object or an _ADR object,
  but should not contain both.

(https://uefi.org/sites/default/files/resources/ACPI_6_3_May16.pdf)

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-4-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_uart(Aml *scope, const MemMapEntry *uart_memmap,
                                   AML_EXCLUSIVE, &uart_irq, 1));
     aml_append(dev, aml_name_decl("_CRS", crs));

-    /* The _ADR entry is used to link this device to the UART described
-     * in the SPCR table, i.e. SPCR.base_address.address == _ADR.
-     */
-    aml_append(dev, aml_name_decl("_ADR", aml_int(uart_memmap->base)));
-
     aml_append(scope, dev);
 }

@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     aml_append(dev, aml_name_decl("_CID", aml_string("PNP0A03")));
     aml_append(dev, aml_name_decl("_SEG", aml_int(0)));
     aml_append(dev, aml_name_decl("_BBN", aml_int(0)));
-    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
     aml_append(dev, aml_name_decl("_UID", aml_string("PCI0")));
     aml_append(dev, aml_name_decl("_STR", aml_unicode("PCIe 0 Device")));
     aml_append(dev, aml_name_decl("_CCA", aml_int(1)));
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_gpio(Aml *scope, const MemMapEntry *gpio_memmap,
 {
     Aml *dev = aml_device("GPO0");
     aml_append(dev, aml_name_decl("_HID", aml_string("ARMH0061")));
-    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
     aml_append(dev, aml_name_decl("_UID", aml_int(0)));

     Aml *crs = aml_resource_template();
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_power_button(Aml *scope)
 {
     Aml *dev = aml_device(ACPI_POWER_BUTTON_DEVICE);
     aml_append(dev, aml_name_decl("_HID", aml_string("PNP0C0C")));
-    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
     aml_append(dev, aml_name_decl("_UID", aml_int(0)));
     aml_append(scope, dev);
 }
--
2.20.1
New patch
From: Heyi Guo <guoheyi@huawei.com>

The address field in each _PRT mapping package should be constructed
with the high word for device# and the low word for function#, so it
is wrong to use bus_no as the high word. The existing code adds a
bunch of useless entries with device #s above 31. Enumerate all
possible slots (i.e. PCI_SLOT_MAX) instead.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-5-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
 {
     int ecam_id = VIRT_ECAM_ID(highmem_ecam);
     Aml *method, *crs, *ifctx, *UUID, *ifctx1, *elsectx, *buf;
-    int i, bus_no;
+    int i, slot_no;
     hwaddr base_mmio = memmap[VIRT_PCIE_MMIO].base;
     hwaddr size_mmio = memmap[VIRT_PCIE_MMIO].size;
     hwaddr base_pio = memmap[VIRT_PCIE_PIO].base;
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     aml_append(dev, aml_name_decl("_CCA", aml_int(1)));

     /* Declare the PCI Routing Table. */
-    Aml *rt_pkg = aml_varpackage(nr_pcie_buses * PCI_NUM_PINS);
-    for (bus_no = 0; bus_no < nr_pcie_buses; bus_no++) {
+    Aml *rt_pkg = aml_varpackage(PCI_SLOT_MAX * PCI_NUM_PINS);
+    for (slot_no = 0; slot_no < PCI_SLOT_MAX; slot_no++) {
         for (i = 0; i < PCI_NUM_PINS; i++) {
-            int gsi = (i + bus_no) % PCI_NUM_PINS;
+            int gsi = (i + slot_no) % PCI_NUM_PINS;
             Aml *pkg = aml_package(4);
-            aml_append(pkg, aml_int((bus_no << 16) | 0xFFFF));
+            aml_append(pkg, aml_int((slot_no << 16) | 0xFFFF));
             aml_append(pkg, aml_int(i));
             aml_append(pkg, aml_name("GSI%d", gsi));
             aml_append(pkg, aml_int(0));
--
2.20.1
New patch
From: Heyi Guo <guoheyi@huawei.com>

Using a _UID of 0 for all PCI interrupt link devices violates the
spec. Simply number them one by one instead.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-6-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
         uint32_t irqs = irq + i;
         Aml *dev_gsi = aml_device("GSI%d", i);
         aml_append(dev_gsi, aml_name_decl("_HID", aml_string("PNP0C0F")));
-        aml_append(dev_gsi, aml_name_decl("_UID", aml_int(0)));
+        aml_append(dev_gsi, aml_name_decl("_UID", aml_int(i)));
         crs = aml_resource_template();
         aml_append(crs,
                    aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
--
2.20.1
New patch
From: Heyi Guo <guoheyi@huawei.com>

The original code defines a named object for the resource template but
then returns the resource template object itself; the resulting output
looks like this:

Method (_CRS, 0, NotSerialized)  // _CRS: Current Resource Settings
{
    Name (RBUF, ResourceTemplate ()
    {
        WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
            0x0000, // Granularity
            0x0000, // Range Minimum
            0x00FF, // Range Maximum
            0x0000, // Translation Offset
            0x0100, // Length
            ,, )
        ......
    })
    Return (ResourceTemplate ()
    {
        WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
            0x0000, // Granularity
            0x0000, // Range Minimum
            0x00FF, // Range Maximum
            0x0000, // Translation Offset
            0x0100, // Length
            ,, )
        ......
    })
}

So the named object "RBUF" is actually useless. The more natural way
would be to return RBUF instead, or simply to drop the RBUF
definition.

Choose the latter to simplify the code.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-7-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
                              size_mmio_high));
     }

-    aml_append(method, aml_name_decl("RBUF", rbuf));
     aml_append(method, aml_return(rbuf));
     aml_append(dev, method);

--
2.20.1
New patch
From: Heyi Guo <guoheyi@huawei.com>

Differences between disassembled ASL files:

@@ -XXX,XX +XXX,XX @@
 *
 * Disassembling to symbolic ASL+ operators
 *
- * Disassembly of DSDT, Thu Jan 23 16:00:04 2020
+ * Disassembly of DSDT.new, Thu Jan 23 16:47:12 2020
 *
 * Original Table Header:
 * Signature "DSDT"
- * Length 0x0000481E (18462)
+ * Length 0x000014BB (5307)
 * Revision 0x02
- * Checksum 0x60
+ * Checksum 0xD1
 * OEM ID "BOCHS "
 * OEM Table ID "BXPCDSDT"
 * OEM Revision 0x00000001 (1)
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 0x00000021,
 }
 })
- Name (_ADR, 0x09000000) // _ADR: Address
 }

 Device (FLS0)
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 Name (_CID, "PNP0A03" /* PCI Bus */) // _CID: Compatible ID
 Name (_SEG, Zero) // _SEG: PCI Segment
 Name (_BBN, Zero) // _BBN: BIOS Bus Number
- Name (_ADR, Zero) // _ADR: Address
 Name (_UID, "PCI0") // _UID: Unique ID
 Name (_STR, Unicode ("PCIe 0 Device")) // _STR: Description String
 Name (_CCA, One) // _CCA: Cache Coherency Attribute
- Name (_PRT, Package (0x0400) // _PRT: PCI Routing Table
+ Name (_PRT, Package (0x80) // _PRT: PCI Routing Table
 {
 Package (0x04)
 {
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 0x03,
 GSI2,
 Zero
- },
-
- Package (0x04)
- {
- 0x0020FFFF,
- Zero,
- GSI0,
- Zero
- },
-
- *Omit the other (4 * (256 - 32) - 2) packages*
-
- Package (0x04)
- {
- 0x00FFFFFF,
- 0x03,
- GSI2,
- Zero
 }
 })
 Device (GSI0)
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 Device (GSI1)
 {
 Name (_HID, "PNP0C0F" /* PCI Interrupt Link Device */) // _HID: Hardware ID
- Name (_UID, Zero) // _UID: Unique ID
+ Name (_UID, One) // _UID: Unique ID
 Name (_PRS, ResourceTemplate () // _PRS: Possible Resource Settings
 {
 Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive, ,, )
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 Device (GSI2)
 {
 Name (_HID, "PNP0C0F" /* PCI Interrupt Link Device */) // _HID: Hardware ID
- Name (_UID, Zero) // _UID: Unique ID
+ Name (_UID, 0x02) // _UID: Unique ID
 Name (_PRS, ResourceTemplate () // _PRS: Possible Resource Settings
 {
 Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive, ,, )
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 Device (GSI3)
 {
 Name (_HID, "PNP0C0F" /* PCI Interrupt Link Device */) // _HID: Hardware ID
- Name (_UID, Zero) // _UID: Unique ID
+ Name (_UID, 0x03) // _UID: Unique ID
 Name (_PRS, ResourceTemplate () // _PRS: Possible Resource Settings
 {
 Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive, ,, )
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)

 Method (_CRS, 0, NotSerialized) // _CRS: Current Resource Settings
 {
- Name (RBUF, ResourceTemplate ()
- {
- WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
- 0x0000, // Granularity
- 0x0000, // Range Minimum
- 0x00FF, // Range Maximum
- 0x0000, // Translation Offset
- 0x0100, // Length
- ,, )
- DWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, NonCacheable, ReadWrite,
- 0x00000000, // Granularity
- 0x10000000, // Range Minimum
- 0x3EFEFFFF, // Range Maximum
- 0x00000000, // Translation Offset
- 0x2EFF0000, // Length
- ,, , AddressRangeMemory, TypeStatic)
- DWordIO (ResourceProducer, MinFixed, MaxFixed, PosDecode, EntireRange,
- 0x00000000, // Granularity
- 0x00000000, // Range Minimum
- 0x0000FFFF, // Range Maximum
- 0x3EFF0000, // Translation Offset
- 0x00010000, // Length
- ,, , TypeStatic, DenseTranslation)
- QWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, NonCacheable, ReadWrite,
- 0x0000000000000000, // Granularity
- 0x0000008000000000, // Range Minimum
- 0x000000FFFFFFFFFF, // Range Maximum
- 0x0000000000000000, // Translation Offset
- 0x0000008000000000, // Length
- ,, , AddressRangeMemory, TypeStatic)
- })
 Return (ResourceTemplate ()
 {
 WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 })
 }

- Device (RP0)
- {
- Name (_ADR, Zero) // _ADR: Address
- }
-
 Device (RES0)
 {
 Name (_HID, "PNP0C02" /* PNP Motherboard Resources */) // _HID: Hardware ID
@@ -XXX,XX +XXX,XX @@ DefinitionBlock ("", "DSDT", 2, "BOCHS ", "BXPCDSDT", 0x00000001)
 Device (PWRB)
 {
 Name (_HID, "PNP0C0C" /* Power Button Device */) // _HID: Hardware ID
- Name (_ADR, Zero) // _ADR: Address
 Name (_UID, Zero) // _UID: Unique ID
 }
 }

The differences between the two versions of DSDT.memhp are almost the
same as the above, except for total length and checksum.

The DSDT.numamem binary is identical to DSDT on the virt machine, so
we don't show the differences again.

Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20200204014325.16279-8-guoheyi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/bios-tables-test-allowed-diff.h | 3 ---
 tests/data/acpi/virt/DSDT | Bin 18462 -> 5307 bytes
 tests/data/acpi/virt/DSDT.memhp | Bin 19799 -> 6644 bytes
 tests/data/acpi/virt/DSDT.numamem | Bin 18462 -> 5307 bytes
 4 files changed, 3 deletions(-)

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1,4 +1 @@
 /* List of comma-separated changed AML files to ignore */
-"tests/data/acpi/virt/DSDT",
-"tests/data/acpi/virt/DSDT.memhp",
-"tests/data/acpi/virt/DSDT.numamem",
diff --git a/tests/data/acpi/virt/DSDT b/tests/data/acpi/virt/DSDT
index XXXXXXX..XXXXXXX 100644
GIT binary patch
delta 156
zcmbO?fpNDcmrJlq$Zin^2BwP>xulufJQ*iyC^K43^tIeLL4lLWeZ}O>oO+X=b6T<Z
z6mvCfR_C%{pDgc^#>hCi&BajKi^V<I(}*M9!_$Q~z%RhS*}#o~BR<sAg^OwOMVP!X
lHhJdBGOmtnjvVpMLBX3Jy81CrwsAkqC^^YPgaxRb0RT`TDiQzy

literal 18462
zcmc)ScXSkm8iw%+36N|;NFdS#0*VC-rifrC*(4Ap5OxEoL4yrNET~uz6^x349TdAp
z#ol{Y6npQeh`smTHTRwDuD;K8uK!-nakJ0v%s2Z>CNML{-I`=g)4(x7&}nM*`1qLQ
zpz7@!<28CLD+q${e)zR$!Q7lFEy?PZ=GK1kva+(=mNE4;-Kye^^@<TeZp*~_nxMJ0
zHYYy5A@gLSVN6+Bd3pND+?IGES==wydwyOJPRt96f?z?HAS-LIYPOcDs!0@tPc*ld
z*Nsi4r;Ht!7_TYAF{L<Gn4Y5LgPhsga=1!)>Q!--tkj18UL_~9%E-FO@w(J16KWeK
z3R0o1B%7*Y`C2Dl_1|lD%Il+5!;MwtOiE<F2dS-<*$ez@&A+j+pi>%K<|FWeGb6&y
z{$oU^;O`OT=@Hf8tEg~uW<;!0)QlXPQQ<QxBWGks&FEq?Dt*Srku!3lX5`w8jeW-O
z$QhlZX2fj9aG$YB<cuy+GYV|RCO%_C<czLSGYW0S2%j-Baz<{{j3S#c(r0WMIU_G>
zMh}~@sm&<IuhC!oM=WYaiOtx|XGHF%{3Xfk>b-2n<~}2OKP`xQ9er%Z7Cs|-KkXJZ
zqo2*#(q}~Pr-e~7`rC}Hd`9$s+C6H<K%23(&xqbni=t)>vKga%M)ZDK95rJNn^EO6
zqW9AtQ8R|vjB1||y`T1snz6Rch}(>c=>4=LYR0-YqsC`M@29Ip%~;Q7)cTC*{j^uq
zj16o?ozIBgPkTqr7-lm@`;6%Qv`^HGGMk})#ykFn3jb}Wh~7{8M$M?O8TCFRdOz(K
zHDhC&v5n7&-cMJLnz4z^*w$x6@2CBvW{k8MV|_;SemWp(#%4C7!DduO@23N!W^7?I
z#`%os{j@Y{##T0Ce0s)$RoRX4`t%EF9M@P@RW?!wE^!@@rK&PKjHw;1+v@6Zy48V|
zZgqs#EnF{rvMEtq8tdN}#Dn@^_h3*^rvGYm@8Dp1u&cfXS}1i8(wJ!KdGdwX`9V&P
z{G9yu_F!~UBU1OXbiX|4Q4l^J>!hg2M7E+b=+P~wpuIgS2-nea=?d4<f`zH)I@Z&o
znGDy|{ElUH%#>O-UE!LUMRh<FZc&SNtf>sw%xopQW6jJf$PU6aGmB=Y*3_aMYbwJv
z^@=*SqNBsqvgt}2I~LUeR9cxycXo!ebH_F_&d#YdGcR80&Mt83kXWxEv#1WZ+^KYD
zS2(-E*_BSEJ9FX8?N~GOEztp*JC*LtgHs3dsqbFLw<M2Fr8{GA#^BTeojaB8%!e}{
z&U`wR?$jBD{X7fc)B&J7mG0~YXE!*z(W!K2A)JM9>VVOmN_TdLvpbyK=~TK?CsXDu
zf>Q^W?o_(77|voii|JIlvj?0#;M4)BJC*M231?3@d(x?NX9=7oaOwcool1AE2Ip#U
zu12TQoxR}f1*Z;p-KlhEZ#a9y*_%$KJ9VmQKhHjJ>HyiDN_X~!voD-|=~TM2ADsQ*
z)B&|SmF`>}&eh>uold1Y`@`8EP91=|Q|ZnDa1MZT0G&#A4uo?coH}54r_!CJaF)VZ
zN~hADgWwzlrw;JlsdVRHI0wTym`<fT*MM^kICVncPNh57gmX<e*Q8VF&LMCPfm0_C
z?o_&SEjZVLb1gcR?pzzrwc*qWhdY(-TnEl|;9Q4Jr90P!b6q%fg5pl4JBPwK6waY^
zD&4sroa@1<6B>6a-MK!T>%+M|ol19Z0Otm9>IBH0N_TDu=Z0`@NT<@B!{8hSr%ssM
zsdVQ?aBc+WMszCOsq1p~_iY)RI>B<M(w({-r!_N2p5<_s)2Vc)F2ZTe%#mjWoH`+M
zr_!Ce^rkg4$G4SmR??|-r!Kgeb7MGl0_RSpJ9U}OoWtQ9PN&kHy0~V}P2kiCpF5TA
z)Fm}@j(~Fnol1A=LYg^8!l@HPcPibf%V*}?6wXcQRJv0a&CIzOoI0U&r_!CeRA$c2
z;oO`~r8{+j%$!@msS{9lD&48eV&>cu&MoOwx>Fa!%()euI$?FE(w({lX3nkQ+?r0M
zJ4eAe3eHh<D&1KHXBC`PbSmAci(KZchO?SZr8{+L%bamI<8&(BsS8?KGjj%917{7L
zN_Xlqme$PtJyZ*4EuBht>f)6->)@=TQ|V4!vNGprI7ic|bf+#<nR5)BW9U@6Q<taA
zSr2DDol1A=qLewefpZ%=mG0D~D06NL=eBez-Kh&u<{S&>SUQ#N)MY1gHo)0Hr_!Ce
z*ksOeaE_x>=}ui@GUs?W$J42FXCs`Aa5mDZbms&(C%`#@PNh2&a3<hP(5ZB163!%?
zNjjD8Y=W~1&L%pQ?wkncL^vnXsdVQgI48k5iB6?EC&M`z&dGEt-MJl{+rhaVol19Z
z59ju9ZcnGuojbs}1Dre1sdQ&EoXv1H)2Vdl6ga2AIfYK8J9mV0M>uz+Q|Zo~;M@t$
zo#<4$vjxr;I9upcx^pU=Q{kLSr_!A}!?`n@JJYFj=Pq#W0_QGtD&4s&oV&ugE1gPr
z?gr;>aPCH@(w(~MZGYeH4(IN4D&08^&S`K?qf_b5J>c8}&OPW<x^quB_k?p#I+gC6
z4(D_@r_-r)=U#B`1?OILD&08)&KYpdpi}A2z2V#&&b{eWx^o{m_knXCI+gC+7tVd*
z+?P(JJNJWgKREZJQ|Zo`aL$BtCY?%m?hohwaPCj1(wzsuc>tUT(5ZCifp8uO=Ye!8
z-8l=+S#ZvxQ|Zow;5-P<gXmPc^I$j+hVx)LmF_$Q&O_ilgifV94~6qkI1i;$>CV}3
z&W3X~ol18e2IpaL9!96qopa!v1Lqt%mF_$o&coq6oKB@XkAU+CIFF!H>CPkJJQB_$
z=~TLNE}V1WoJ*(Do%7(F2j@IGmF_$W&ZFQwicY0FTj6Ylvz1PzJCBC*XgH6iQ|ZoQ
z;5-J-W9U@6^H?~Kh4WZCmF}Dm=X^Nl)2Vdlac~|7=W%o@-FZBm$HRF%ol18ufO7$y
z3+Pn3b0M4y;ao_k(w!&3c><g#(5ZCiiEy3>=ZSPG-FXt6C&76Vol19}4Cl#ko=m6G
zou|Ng3Y@3VsdVS5aGnb1sdOscc^aIj!Fd{;N_Q@Ta}k`2=v2CMF`SFxTui6Zou|Wj
zI-IA|sdVQVaGn9@8FVV$xdhH7a4w-!>CQ9ZJQL0{=~TM&EI7}C^DH`*?mQdLv*A3O
zPNh4~f%6<V&!JQ4&U4{B7tV9(RJ!v#IM0LgJUW%`JRi>U;XI#Cr8_Tx^8z?8pi}A2
z3*o#F&I{>Oy0Z<=HaOepRJ!vbI4^?pB081syco`l;k=kmr8_Ty^Ab2Op;PJ3rEo5V
zb19ulcU}tTrEp$Kr_!C5!Fd^+m(i(o=jCu-4(H``D&2VnoL9hk1)WNFUJ2)wa9&BL
z(w$eqc@>;j(W!Lj)o@-7=hbv7-FXe1*T8uVol19J3+J_PUQ4Iao!7y69h}$EsdVS{
za9$7R^>ixTc>|m`z<C3mN_XA}=Z$dQNT<@BH^F%moHx;_bmz@*-VEo>bSm9>3!JyW
zc?+FNcisx;t#IB-r_!Cb!Fd~;x6!F|=k0Lb4(IK3D&2VpoOi%^2c1fH-U;WOaNbF$
z(w%p~c^8~_(W!Lj-EiIw=iPKF-FXk3_rQ4%ol1A!3+KIX-b<&_o%g|cADs8msdVT4
zaNZB+{d6kbxeU%_a4w@$>COk>d;rb|=v2D%K{y|T^Fcb5?pzM%ayXaMsdVQ<a6Sa*
zLv$+L`7oRh!}&0sN_Rd2=Ob`FLZ{N5kHYyVoR89}bmwDmJ_hGwbSmBXIGm5e`8b_Q
zcRm5<6L3C3r_!BI!uceePtvJ$=TmS#1?N+AD&6@soKM5~G@VL!J_F}7a6Ut)(w)!3
z`7E5z(y4Ukb8tQf=W}!_-T6G6&%^mVol1AU0Ot#EzCfqaoiD=qBAhSMsdVQ{aJ~fR
zOLQvT`7)d@!}&6uN_V~j=PPi&LZ{N5E8tuK=L$NN?tB%_SK)k>PNh3vgYz{wU!zm$
z&e!359nRP3RJwB|oGam6NvG1CZ@~EmoNv&nbmuBKSHZc8PNh5Fg!4@}-=tIN&bQ!v
z3(mLbRJ!wRINyfzZ90|id<V{V;CzQpr90n+^IbUKrBmt7_uzaF&iCk4y7PTF--q*k
zI+gDH0L~BK{D4lSJ3oZ;LpVRAQ|Zo+;QR>AkLXmo^J6$ahVx@OmG1lm&QIX{gifV9
zKZWyCI6tLR>CVsK{0z>|=v2D%b2vYT^K&|t?)(DIFW~%wPNh4)g!4-{zob*?&adG7
z3eK<SRJ!wPIKPJTYdV$g{07c%;QWS8r8~cc^IJH-rBmt7@8J9n&hO|{y7PNDzlZaC
zI+gDH0nQ)b{DDrTJAZ`pM>v0^Q|Zp1;QR^BpXgM&^Jh4JhVy4SmG1ln&R^jCg-)eA
ze}(f`IDe&6>CWHa{0+|E=v2D%cQ}8C^LILx?)(GJKj8d>PNh5lg!4~0|D;pt&UQH4
z;cTZ<nQ}I_*5~MdjIsBd#>?tb?<du5qdwH5FqYr(K^|)csSol9Kj?#xm2_!ICX!j{
zQR(-;hHqB=U!#UZj7mMmQR%m9|J$gwB1WYi<EZqzw*PI^+7Y8tkEKVI6t%>wtAeG4
zTCix8Zc4^?4?p)L$W2sFtScVVH8$(`Zb7F4Jre}_VFW?ealM0}AS=A9KSk~Be{Pk!
z+dfRsWEEtmN=tVv-mYh}f`#kbIvoql(`|eBC$o6^Yxwx=VCnyD%el#kjg3KWyeTm@
zD5=Y98J~>jESwR<YbKYsjp@30&*Gl3qUMH`l|PmCAGKuitg2;Ou9&uPMl44QROoB2
zzE;i*Bb*c7sSHQW32$Ph;cZ*dqQ%p*j?gpZ9ZQ$D^;)zzvs~)oqVUO?;lknLOJ`hE
zn0h?iNcqwkB^$QXBpY(t2B%)lb0Z%AAUXW7hSPd~+R%4-yrC^`@m~4{W@lxEH~R3G
z{6u3}OX^M4&8-bNiQ3FZ)ui^E@H1q>Ux3P3**|_v9lL~nNTs9FKc4iLqVQ|@!7|ld
zrwj`}WoLA4jW+T3N9>e`Z|M%-z^y0J^HaZI*;zwVtIn%U=pEnMv2ycbIn77qhZ(O;
z){Y%iGN7e)Qd8c{Fs8N@EuJ$q)=9tW^BX58s$=t-TT8<`sg0!sac$wRw~Pn>0Sn~0
A00000

diff --git a/tests/data/acpi/virt/DSDT.memhp b/tests/data/acpi/virt/DSDT.memhp
index XXXXXXX..XXXXXXX 100644
GIT binary patch
delta 173
zcmcaUi}8ywmrJlq$QMZl2ByY|T++<_a~LOTC^K43^tIeLL4lLWeZ}O>oO+X=b6T<Z
z6mvCfR_C%{pDgc^#>hCi&BajKi^V<I(}*M9!_$Q~z%RhS*}#o~BR<sAg^OwOMVP!X
pHhJdBGOmtnjvVpMLBX3Jy81D0wsHT%Dk&Kd9^{0q-Wg&Z0|3#tFC_o~

literal 19799
zcmc)ScX$+4-^TG-5+Q6T0U{ux#NIU_h}cOs!9=qVlK^VeU_n7F5wTzeR8;K7UZcj|
zd+)vX-h1!8=DE)nxW78@^WSs5T$h`<XU_S}e0EPJGv{)rrn#nNayp87jHsTFs%tK*
z-l8#8qjiZWio$aESu*d1!mZnytJ_-V4NH}mmlw6w)z|c`N;TFitP>TrO{}kpTIbak
zrY5BG8=KN~<>eI>xs63_six)u!;(Yh_l`ov-cd;u9n~{RB$iQ{tyWbvO?|?K)_E1<
z8k%!e8pbzGP?fb&Wk9lDu8P`6g|oHi(4``KRP2(-?s!p`!hDx8<0hxZWxH%%o1Q4h
zNbRM$r7BshKB=mI_UzGnsJe!oRTWNZ%D)HMy_MSmF6_Aon~Zwou;pF?2b?bvcKfdq
zJ)%V=Dsm;N!%>WMbG}5fM_i3Ut1;4RRL0gQh^x`lYE*iSQL!}&<7yOHjZt1>?bsTN
z#nnh!jkUeT=-3)P<7)J=8l%0&I<Yl+#ntF%HP-POV`6I*#nmXW8e_c1y0JBS$JH2U
zHP*En75Ft;I6jhbHA=0<dR`-TnDUn-Khy_XjrF}o{5b6scN{~l#s*#^ew_A=t1--K
zZ0I%O$7#Q~8pEx|MqVR+oc52av9#6L*lWa((~`IvWmaRX*N7jd1LA5dYc;C8M*KJ(
z7*}I?t5NMW;>YQrxEd>3jg-|G89z=-<7%vIHEO&@{5V}auEwfXqt<K0kJG_%HCDG8
z<Ge=vI2{sKV@<2EiPwl9r$ggvlv|BDuMs~^mx!w|(rVOujregoEUv~VtFfuqh##j*
z#?=^YH8%4a@#A!OT#YeSqrq#$kJF{%YOH59bZM0Ns~`T}R>qIhrQ>RBU^T{jjrego
zBCf_pR%1f0#!+t-G^Xlv2hupcvAn8$j4oVKx**G@%5WJ|yNO<_tH<kA7gl=J6<XiW
zFKldS8ZaU`yfI2e_0iO*EGpN3HCoyynjY<-pOG46FG3odS_U0=UO{nGIIy^|xVt;r
zq**1h%Ly%4L<9ST^~oLzBlDp^p)wlSx3EulcOnXX)Gn7oFE#9!-InR6rui6ps(z-e
zu9>oJb1C%9H`N7E*rS?edMbvV`MnfmdghOyAPPP6O$)L;)il#nG4#|CW%i=0!))7J
z${90Fbpe%A=A%0ogLARWKJ(7SvOV($ujtO6aO#p+N04u-3odsmy0aIYz2NLcr=mNH
z;4I4Y%)ceNKy#;}JA1>ai}~!kmw8Ki)2ZmrB%Db&bwTG&MRyj%Sqx_}or><%9ff_K
zec;pupgR@a*%!{faQ3BB(VhL^><6bV7~QGp&i-)rhqFJOitg0SlsQY_)CHzH72P=i
z&H-=^pi|ME1K}J9r!Gj{sp!r@a1MfV5S@zdEQPZaPF=vdQ_-D^!?`$|i_@v-&cSdF
zhEo^3?o@Q=5IBdxIfPC{cj{KtKF^_W>H^uFitbzj&L!Ynf=)$u4uf+ToVuWPr=mNT
zgmXzam!wnCox|ZA4yP`_-Kpr#rQlo&&ZX#7bm!7=E)AzH*xjk<&Jl2qfO7<$ita3f
zvkXpM;JZ`Noy)+v44li*sp!sS;anC@-4M7_(Vfe|xg4C!(W&Up<>6c&PTfGbQ_-C(
zz_|jPE6}Ov&K2QY5l-E3xKq)cE5W%EoGa0(=+2elTp3Q?ptw`fovXmP3Y@FZsp!sC
z;an9?-O#vG(VeToxf-0S(W&Up)!|$nPTc^xQ_-Dkz_|vTYtX6a&Nbm&6HeVQxl_@d
zYr(k|oNLjk=uXYc+4;5{PTgR+Q_-E8j#JP4m1hN<6?7`PQ<HG&nZNQJ38!wz+^OhJ
zO}(jS{`$5O&PqBJ-KhyTbB=;jH*oG$bf>1-%(*t4YtyOdPED?vb2OZ~;d7^=J2j<d
z&UN5ihfYOzY9h^?W8l;cqB|Acsp&Iwt_$b7bSk=2lV;{z4^G`sx>M1enkqBr`f#pK
zr=mMGL1xYk;M5JMI~CojX)$wd2<L`$D!NmXVdmTjPTjD&Q_-E80yF2vaBfVeqC3aJ
zITp^bbSk>D3eGAxtLRj8rzW|~Sq*13or><%)Rs9@aHi-~bf+e?)H8nvTmxqfor><%
zG?sei&qK9v*3zlyPEB5!a~z!G=u~v4rmW1l37nhIspw8kRGG65&N@03-Kps*bJoLI
zPp6_gH7RAzP2t>>PDOWWD$1Oj!MPcoitf||lsOyVY@k!ootkzsXBy5lor><%WRp3^
z!#SQ#MR#h7$($45oIt0dI~(C_gtL)OMR#rv=jL#3PN$+fo8WAMvx!bccQ(V>3}-W)
zitcQIvjxr;Iu+eH5zdKlPNY-Oos;041m`3=72P=*&dG32rc=?KTfn&moLkVT=*}(S
z+!D?$=~Q&*R&Z_w=T>woy0aC|RybSfRCMRoaBdCf)^sYma~n9ffpZ%=72UZloZG^=
zEuD()Y=g56&Ney~-8lu$DR54qQ_-E<!MPor+tI1$&h6pc9?tFQRCMPKaP9!-4s<HI
zb4NILgmXta72T;>Z#&;kg>x#MitgMA&Yj@giB3g#?hNP7aPCZ}qC2O-IStNfbSk=Y
zI-JwtoKB~rJ9mL|7dUsJQ_-C>;G6;H3_2CvxhtHz!nrG*itgME&fVbLjZQ^(?hfbf
zaPCg0qC5A1a}PN8pi|ME?Qpik*-odTJNJZhPdN9aQ_-C>;hYKQOga_axfh&!!MPWm
zitd~R=PWp9(W&Upz2V#&&b{eWbmu;B?gQsObSk=YUpV)Lb6+|Y-MJr}`@y*%or><9
z4d-k)XVa<Z&i&!sAI|;hRCMP7a2^2X0dy+5^FTNcg!4c;72SCdoCm>q5S@zdJQ&V{
z;XIg5MRy(o=OJ(&LZ_lT=fF7!&N*}{x^pg^bK#szr=mOO!8s4kd2}ke^H4Ysh4WB4
z72SCloQJ`A7@dmlJRHu$;XIs9MR(4Jb3UB&=~Q&*5pW&>=Mi)&y7NdlkA(9`Iu+e{
z6r4xFc@&+B?py%p0yr1Ysp!t5;XE47qv=$1=P_^|1LrYxD!TJnIFE(%SUMHmc^sU_
z!Fe2=itaof&g0=co=!z~o&e_waGpS?qB~E7^F%mLq*KwIC&76VoF~z#=*|u}JK*e~
zQ_-C#!+A2CC)26u&Qst#1<q6GRCMR5aGnb1sdOs3^E5b5gYz^x72SC{oTtNiI-QE{
zJOj=%;5>s)MR%SF=b3PxNvEPa&w}$TIM1R}(Vb_*c{ZG9)2ZmrbKpD&&U5HgbmzHn
zo(t!>bSk>@JUGvT^E^5g-Ps9eC!C#hD!TK0IM0Xkd^#1~c>$akz<B|kitfA+&I{qZ
zkWNK+cEQ;NXBVA{?z{-ji{QM7PDOWK4ClpgUQDN=J1>Fr5;!lRQ_-E5!g(p2m(r=|
z&dcDu49?5wRCMR%a9$4Q<#Z~#^9nexfb$AE72SCyoL9nmC7p`yyb8{%;Jk`XMR#5e
z=hbjtO{bzeuYvO#IIp2o(Vf@Ac`cmR(y8dq>)^Z&&g<w@bm#SOUJvK>bSk>@1~_kk
z^9DK<-FYLNH^O-%or><f3C^3~yopXlcis%=&2ZjKr=mM=f%6tPZ=qAsowveyE1b8|
zsp!tz;Jgja+vrqu=k0Lb4(IK3D!TIyIPZY-4muUxc_*BA!g(j1itfA%&b#2ei%vy%
z-VNv7aNbR)qC4+_^By?wp;OVF_riHEocGeH=+67#ybsR%=u~v){czq7=lygly7K`z
331
zAAs`#Iu+gdAe;}v`5>K&?tBQ&hv0mOPDOV<4CljeK1`>gJ0F4b5jY>AQ_-D|!ucqi
332
zkJ72=&d1<<49>^sRCMR#a6S&_<8&&z^9eYgfb$7D72WwHoKM2}B%O-xd<xE|;CzZs
333
zMRz_8=hJXLO{bzepMmokIG>?Y(Vfr2`7E5z(y8dq=iqz}&gbY<bm#MMJ`d;fbSk>@
334
z1vp=T^94E;-T5M%FT(jEor>;!3C@?`e2Gp)cfJhg%W%F-r=mMwf%6qOU!hacov*_A
335
zDx9y<sp!tv;Cv0v*XUGq=j(934(IE1D!TIxINyNt4LTLw`6iri!uckhitc<1&bQ!v
336
zi%vy%z76NwaK25aqC4M#^Bp+fp;OVF@51>mobS@9=+5`xd=Jj|=u~v)`*6Mw=lgUj
337
zy7L1#KY;TCIu+gdA)Ft=`5~Q(?)(VOkKp`>PDOWq4ClvieoUvLJ3oQ*6F5JiQ_-ED
338
z!uctjpVF!5&d=cd49?H!RCMR(aDEQw=X5H%^9wk?fb$DF72WwIoL|EEC7p`y{0h#m
339
z;QWeCMR$G;=htw4O{bzezk%}`IKQD&(VgGI`7NB^(y8dq@8J9n&hO|{bm#YQeh=sO
340
zbSk>@2RMI#^9MQ=-T5P&Kf?JVor>=K3C^G3{E1FQcm53L&v5=sr=mN5f%6wQf1y*+
341
zoxj5QE1bX5sp!t%;QS5F-{@3y=kIX-4(IQ5D!TIzIRAk24>}dy`6rxz!ucniithXi
342
z&cERNi%vy%{tf5faQ;oFqC30c?1r<OPQ|RVbzXg;{>K>mzG<p_T=x5<dTrE0J^Ce!
343
zGY|4uF3LX0BRuGX>q>jJH8(XUa;0+Le+^$&{l7{rA5$v3j-_&6*Zyy%R){H;UB^<n
344
zZ*Bj#QY*%k%C5_mDlKVCRaZq_{nW5ztX@hd^bgNHiHe%4CypCX*DE>e$i7jJKH3sR
345
z`Y@s>am0`)>XQhI`d8B3{r5)M#qKq=CDErKo76hfyjxon(Sp^iPo}{fy>^Fx`R2Kw
346
zVg2l=>;G-fMa>f%8>6CBOH)HsI<9xygyvM?f*Db&W^zSmU9XO50|q5aTGMY-{xV|t
347
z*i$FZs=9Z>S9V%3BUz{hBlWXLKP%fq2zA0jWhiw(cu9^3ubm|)bxcnjq%9Sh))k$D
348
zPwL3G%dRd78{0$Uu)b@?`Ter%!%ix?W|XecR@0m=>|7>$G|#T{*hkH4@1H(#$mi)L
349
z9!RA-dw1-jH?Sa)2rqj0OL0?Ud0X~N)vfc=g-x~jN7ZCUPI!h)_ywp;mjCNx$_xp8
350
zNF&DPKAzl<lJGM;Sf;*c>ovnub~dT4(JmG}Vy7Z}r8|6qTN`rqv%g>kiB+;)=hQao
351
z8{ZmOMZJ@St#gY*Ow~5mHk6f)YO9^p(z`u`DV>8m4w#aN5?ilT5cZR<YfPn^Q{y{J
352
zv^v>CXMp*QlbfoOb6V$(Uo3klYEqLul;Eo>ADugr^wiz<m^r&_+8(pocinsEvYAu&
353
z+GFpTJ51dxcX$h*x6>>C)SNkWjp?RvguQwvhqlfe+pw^HDz1J_eL7vX<J5hoXKJQv
354
z#y2GkTIa61=)h}2y@GU8bwin6>h#hOmKwCwxJ6r8>)e(9(Y7iYn@ra>w<Xf`<C_!d
355
zru3vlx^~mHMC;tLf3z$}hc_-gp>~p9np0(^gTiBhA`OLb98p))l`L*eq#IJ3MWt0e
356
z!y9PZ+M_A0%Y|*--4|?d%9PNfeM%_UIjAGM=az9>PD!+94&(g6ouwVwLkVk>1zN2G
357
zn>i&hrFErHq$^RnM!KoFC1GXT&zKfv3Kn-{JnhN;`PNfPtA?gE{CejPA>Xzr-86dC
358
z!a=$4tdbjm(chVq$D-3mE_Tn37KDz;eme4o?BYNz)@2u0Y^cqzT~&*@wS&`DTjbgf
359
z(_&qF_;^C6u+R_+X`!JmbO;L#p~%dzIxK{~A!Ig(-kqhVgmq#2%#ahAl>;&>6SEes
360
z2}=!OTSI3}ua2-f61j3@c+NrQ6uXcdsDT}b8D8bcWK!kZWYS_k_025~)&aG(hdqbQ
361
z?V)(s*dC5EY|4E?q1(d6(W6S2*Z4~({`mp4hf%rcV_I1QtEKQ?ji!e|*S<>_b=i`o
362
z%W904_xM-C%+Sp?(OIZxx-mSDDx4w8_bU&Nc+k0{Pt~)1=9Fgt{&a;w5w<Ibq1+Y5
363
zR4(gimGzp*19XmVDF{aw;<V|zsE3Xq?5{ktCbv8N5zp-|JmKqqzB~P)&+RUpVE=c!
364
zD_uFQU&J1r$&P8!{P3<$4~vPgSTVh`xMP}5ky;)(y>;G*aH?E%>PnTTbYu&kwGsUX
365
DDbP3`
366
367
diff --git a/tests/data/acpi/virt/DSDT.numamem b/tests/data/acpi/virt/DSDT.numamem
368
index XXXXXXX..XXXXXXX 100644
369
GIT binary patch
370
delta 156
371
zcmbO?fpNDcmrJlq$Zin^2BwP>xulufJQ*iyC^K43^tIeLL4lLWeZ}O>oO+X=b6T<Z
372
z6mvCfR_C%{pDgc^#>hCi&BajKi^V<I(}*M9!_$Q~z%RhS*}#o~BR<sAg^OwOMVP!X
373
lHhJdBGOmtnjvVpMLBX3Jy81CrwsAkqC^^YPgaxRb0RT`TDiQzy
374
375
literal 18462
376
zcmc)ScXSkm8iw%+36N|;NFdS#0*VC-rifrC*(4Ap5OxEoL4yrNET~uz6^x349TdAp
377
z#ol{Y6npQeh`smTHTRwDuD;K8uK!-nakJ0v%s2Z>CNML{-I`=g)4(x7&}nM*`1qLQ
378
zpz7@!<28CLD+q${e)zR$!Q7lFEy?PZ=GK1kva+(=mNE4;-Kye^^@<TeZp*~_nxMJ0
379
zHYYy5A@gLSVN6+Bd3pND+?IGES==wydwyOJPRt96f?z?HAS-LIYPOcDs!0@tPc*ld
380
z*Nsi4r;Ht!7_TYAF{L<Gn4Y5LgPhsga=1!)>Q!--tkj18UL_~9%E-FO@w(J16KWeK
381
z3R0o1B%7*Y`C2Dl_1|lD%Il+5!;MwtOiE<F2dS-<*$ez@&A+j+pi>%K<|FWeGb6&y
382
z{$oU^;O`OT=@Hf8tEg~uW<;!0)QlXPQQ<QxBWGks&FEq?Dt*Srku!3lX5`w8jeW-O
383
z$QhlZX2fj9aG$YB<cuy+GYV|RCO%_C<czLSGYW0S2%j-Baz<{{j3S#c(r0WMIU_G>
384
zMh}~@sm&<IuhC!oM=WYaiOtx|XGHF%{3Xfk>b-2n<~}2OKP`xQ9er%Z7Cs|-KkXJZ
385
zqo2*#(q}~Pr-e~7`rC}Hd`9$s+C6H<K%23(&xqbni=t)>vKga%M)ZDK95rJNn^EO6
386
zqW9AtQ8R|vjB1||y`T1snz6Rch}(>c=>4=LYR0-YqsC`M@29Ip%~;Q7)cTC*{j^uq
387
zj16o?ozIBgPkTqr7-lm@`;6%Qv`^HGGMk})#ykFn3jb}Wh~7{8M$M?O8TCFRdOz(K
388
zHDhC&v5n7&-cMJLnz4z^*w$x6@2CBvW{k8MV|_;SemWp(#%4C7!DduO@23N!W^7?I
389
z#`%os{j@Y{##T0Ce0s)$RoRX4`t%EF9M@P@RW?!wE^!@@rK&PKjHw;1+v@6Zy48V|
390
zZgqs#EnF{rvMEtq8tdN}#Dn@^_h3*^rvGYm@8Dp1u&cfXS}1i8(wJ!KdGdwX`9V&P
391
z{G9yu_F!~UBU1OXbiX|4Q4l^J>!hg2M7E+b=+P~wpuIgS2-nea=?d4<f`zH)I@Z&o
392
znGDy|{ElUH%#>O-UE!LUMRh<FZc&SNtf>sw%xopQW6jJf$PU6aGmB=Y*3_aMYbwJv
393
z^@=*SqNBsqvgt}2I~LUeR9cxycXo!ebH_F_&d#YdGcR80&Mt83kXWxEv#1WZ+^KYD
394
zS2(-E*_BSEJ9FX8?N~GOEztp*JC*LtgHs3dsqbFLw<M2Fr8{GA#^BTeojaB8%!e}{
395
z&U`wR?$jBD{X7fc)B&J7mG0~YXE!*z(W!K2A)JM9>VVOmN_TdLvpbyK=~TK?CsXDu
396
zf>Q^W?o_(77|voii|JIlvj?0#;M4)BJC*M231?3@d(x?NX9=7oaOwcool1AE2Ip#U
397
zu12TQoxR}f1*Z;p-KlhEZ#a9y*_%$KJ9VmQKhHjJ>HyiDN_X~!voD-|=~TM2ADsQ*
398
z)B&|SmF`>}&eh>uold1Y`@`8EP91=|Q|ZnDa1MZT0G&#A4uo?coH}54r_!CJaF)VZ
399
zN~hADgWwzlrw;JlsdVRHI0wTym`<fT*MM^kICVncPNh57gmX<e*Q8VF&LMCPfm0_C
400
z?o_&SEjZVLb1gcR?pzzrwc*qWhdY(-TnEl|;9Q4Jr90P!b6q%fg5pl4JBPwK6waY^
401
zD&4sroa@1<6B>6a-MK!T>%+M|ol19Z0Otm9>IBH0N_TDu=Z0`@NT<@B!{8hSr%ssM
402
zsdVQ?aBc+WMszCOsq1p~_iY)RI>B<M(w({-r!_N2p5<_s)2Vc)F2ZTe%#mjWoH`+M
403
zr_!Ce^rkg4$G4SmR??|-r!Kgeb7MGl0_RSpJ9U}OoWtQ9PN&kHy0~V}P2kiCpF5TA
404
z)Fm}@j(~Fnol1A=LYg^8!l@HPcPibf%V*}?6wXcQRJv0a&CIzOoI0U&r_!CeRA$c2
405
z;oO`~r8{+j%$!@msS{9lD&48eV&>cu&MoOwx>Fa!%()euI$?FE(w({lX3nkQ+?r0M
406
zJ4eAe3eHh<D&1KHXBC`PbSmAci(KZchO?SZr8{+L%bamI<8&(BsS8?KGjj%917{7L
407
zN_Xlqme$PtJyZ*4EuBht>f)6->)@=TQ|V4!vNGprI7ic|bf+#<nR5)BW9U@6Q<taA
408
zSr2DDol1A=qLewefpZ%=mG0D~D06NL=eBez-Kh&u<{S&>SUQ#N)MY1gHo)0Hr_!Ce
409
z*ksOeaE_x>=}ui@GUs?W$J42FXCs`Aa5mDZbms&(C%`#@PNh2&a3<hP(5ZB163!%?
410
zNjjD8Y=W~1&L%pQ?wkncL^vnXsdVQgI48k5iB6?EC&M`z&dGEt-MJl{+rhaVol19Z
411
z59ju9ZcnGuojbs}1Dre1sdQ&EoXv1H)2Vdl6ga2AIfYK8J9mV0M>uz+Q|Zo~;M@t$
412
zo#<4$vjxr;I9upcx^pU=Q{kLSr_!A}!?`n@JJYFj=Pq#W0_QGtD&4s&oV&ugE1gPr
413
z?gr;>aPCH@(w(~MZGYeH4(IN4D&08^&S`K?qf_b5J>c8}&OPW<x^quB_k?p#I+gC6
414
z4(D_@r_-r)=U#B`1?OILD&08)&KYpdpi}A2z2V#&&b{eWx^o{m_knXCI+gC+7tVd*
415
z+?P(JJNJWgKREZJQ|Zo`aL$BtCY?%m?hohwaPCj1(wzsuc>tUT(5ZCifp8uO=Ye!8
416
z-8l=+S#ZvxQ|Zow;5-P<gXmPc^I$j+hVx)LmF_$Q&O_ilgifV94~6qkI1i;$>CV}3
417
z&W3X~ol18e2IpaL9!96qopa!v1Lqt%mF_$o&coq6oKB@XkAU+CIFF!H>CPkJJQB_$
418
z=~TLNE}V1WoJ*(Do%7(F2j@IGmF_$W&ZFQwicY0FTj6Ylvz1PzJCBC*XgH6iQ|ZoQ
419
z;5-J-W9U@6^H?~Kh4WZCmF}Dm=X^Nl)2Vdlac~|7=W%o@-FZBm$HRF%ol18ufO7$y
420
z3+Pn3b0M4y;ao_k(w!&3c><g#(5ZCiiEy3>=ZSPG-FXt6C&76Vol19}4Cl#ko=m6G
421
zou|Ng3Y@3VsdVS5aGnb1sdOscc^aIj!Fd{;N_Q@Ta}k`2=v2CMF`SFxTui6Zou|Wj
422
zI-IA|sdVQVaGn9@8FVV$xdhH7a4w-!>CQ9ZJQL0{=~TM&EI7}C^DH`*?mQdLv*A3O
423
zPNh4~f%6<V&!JQ4&U4{B7tV9(RJ!v#IM0LgJUW%`JRi>U;XI#Cr8_Tx^8z?8pi}A2
424
z3*o#F&I{>Oy0Z<=HaOepRJ!vbI4^?pB081syco`l;k=kmr8_Ty^Ab2Op;PJ3rEo5V
425
zb19ulcU}tTrEp$Kr_!C5!Fd^+m(i(o=jCu-4(H``D&2VnoL9hk1)WNFUJ2)wa9&BL
426
z(w$eqc@>;j(W!Lj)o@-7=hbv7-FXe1*T8uVol19J3+J_PUQ4Iao!7y69h}$EsdVS{
427
za9$7R^>ixTc>|m`z<C3mN_XA}=Z$dQNT<@BH^F%moHx;_bmz@*-VEo>bSm9>3!JyW
428
zc?+FNcisx;t#IB-r_!Cb!Fd~;x6!F|=k0Lb4(IK3D&2VpoOi%^2c1fH-U;WOaNbF$
429
z(w%p~c^8~_(W!Lj-EiIw=iPKF-FXk3_rQ4%ol1A!3+KIX-b<&_o%g|cADs8msdVT4
430
zaNZB+{d6kbxeU%_a4w@$>COk>d;rb|=v2D%K{y|T^Fcb5?pzM%ayXaMsdVQ<a6Sa*
431
zLv$+L`7oRh!}&0sN_Rd2=Ob`FLZ{N5kHYyVoR89}bmwDmJ_hGwbSmBXIGm5e`8b_Q
432
zcRm5<6L3C3r_!BI!uceePtvJ$=TmS#1?N+AD&6@soKM5~G@VL!J_F}7a6Ut)(w)!3
433
z`7E5z(y4Ukb8tQf=W}!_-T6G6&%^mVol1AU0Ot#EzCfqaoiD=qBAhSMsdVQ{aJ~fR
434
zOLQvT`7)d@!}&6uN_V~j=PPi&LZ{N5E8tuK=L$NN?tB%_SK)k>PNh3vgYz{wU!zm$
435
z&e!359nRP3RJwB|oGam6NvG1CZ@~EmoNv&nbmuBKSHZc8PNh5Fg!4@}-=tIN&bQ!v
436
z3(mLbRJ!wRINyfzZ90|id<V{V;CzQpr90n+^IbUKrBmt7_uzaF&iCk4y7PTF--q*k
437
zI+gDH0L~BK{D4lSJ3oZ;LpVRAQ|Zo+;QR>AkLXmo^J6$ahVx@OmG1lm&QIX{gifV9
438
zKZWyCI6tLR>CVsK{0z>|=v2D%b2vYT^K&|t?)(DIFW~%wPNh4)g!4-{zob*?&adG7
439
z3eK<SRJ!wPIKPJTYdV$g{07c%;QWS8r8~cc^IJH-rBmt7@8J9n&hO|{y7PNDzlZaC
440
zI+gDH0nQ)b{DDrTJAZ`pM>v0^Q|Zp1;QR^BpXgM&^Jh4JhVy4SmG1ln&R^jCg-)eA
441
ze}(f`IDe&6>CWHa{0+|E=v2D%cQ}8C^LILx?)(GJKj8d>PNh5lg!4~0|D;pt&UQH4
442
z;cTZ<nQ}I_*5~MdjIsBd#>?tb?<du5qdwH5FqYr(K^|)csSol9Kj?#xm2_!ICX!j{
443
zQR(-;hHqB=U!#UZj7mMmQR%m9|J$gwB1WYi<EZqzw*PI^+7Y8tkEKVI6t%>wtAeG4
444
zTCix8Zc4^?4?p)L$W2sFtScVVH8$(`Zb7F4Jre}_VFW?ealM0}AS=A9KSk~Be{Pk!
445
z+dfRsWEEtmN=tVv-mYh}f`#kbIvoql(`|eBC$o6^Yxwx=VCnyD%el#kjg3KWyeTm@
446
zD5=Y98J~>jESwR<YbKYsjp@30&*Gl3qUMH`l|PmCAGKuitg2;Ou9&uPMl44QROoB2
447
zzE;i*Bb*c7sSHQW32$Ph;cZ*dqQ%p*j?gpZ9ZQ$D^;)zzvs~)oqVUO?;lknLOJ`hE
448
zn0h?iNcqwkB^$QXBpY(t2B%)lb0Z%AAUXW7hSPd~+R%4-yrC^`@m~4{W@lxEH~R3G
449
z{6u3}OX^M4&8-bNiQ3FZ)ui^E@H1q>Ux3P3**|_v9lL~nNTs9FKc4iLqVQ|@!7|ld
450
zrwj`}WoLA4jW+T3N9>e`Z|M%-z^y0J^HaZI*;zwVtIn%U=pEnMv2ycbIn77qhZ(O;
451
z){Y%iGN7e)Qd8c{Fs8N@EuJ$q)=9tW^BX58s$=t-TT8<`sg0!sac$wRw~Pn>0Sn~0
452
A00000
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Use a common predicate for querying stage1-ness.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c    |  8 +++-----
 2 files changed, 21 insertions(+), 5 deletions(-)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/internals.h
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
19
DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
20
DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
#endif
21
22
22
+DEF_HELPER_FLAGS_4(sve_ftssel_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+/**
23
+DEF_HELPER_FLAGS_4(sve_ftssel_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+ * arm_mmu_idx_is_stage1_of_2:
24
+DEF_HELPER_FLAGS_4(sve_ftssel_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+ * @mmu_idx: The ARMMMUIdx to test
25
+
26
+ *
26
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
+ * Return true if @mmu_idx is a NOTLB mmu_idx that is the
27
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
+ * first stage of a two stage regime.
28
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
+ */
29
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
30
+static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/sve_helper.c
32
+++ b/target/arm/sve_helper.c
33
@@ -XXX,XX +XXX,XX @@
34
#include "exec/cpu_ldst.h"
35
#include "exec/helper-proto.h"
36
#include "tcg/tcg-gvec-desc.h"
37
+#include "fpu/softfloat.h"
38
39
40
/* Note that vector data is stored in host-endian 64-bit chunks,
41
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
42
d[i] = coeff[idx] | (exp << 52);
43
}
44
}
45
+
46
+void HELPER(sve_ftssel_h)(void *vd, void *vn, void *vm, uint32_t desc)
47
+{
31
+{
48
+ intptr_t i, opr_sz = simd_oprsz(desc) / 2;
32
+ switch (mmu_idx) {
49
+ uint16_t *d = vd, *n = vn, *m = vm;
33
+ case ARMMMUIdx_Stage1_E0:
50
+ for (i = 0; i < opr_sz; i += 1) {
34
+ case ARMMMUIdx_Stage1_E1:
51
+ uint16_t nn = n[i];
35
+ return true;
52
+ uint16_t mm = m[i];
36
+ default:
53
+ if (mm & 1) {
37
+ return false;
54
+ nn = float16_one;
55
+ }
56
+ d[i] = nn ^ (mm & 2) << 14;
57
+ }
38
+ }
58
+}
39
+}
59
+
40
+
60
+void HELPER(sve_ftssel_s)(void *vd, void *vn, void *vm, uint32_t desc)
41
/*
61
+{
42
* Parameters of a given virtual address, as extracted from the
62
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
43
* translation control register (TCR) for a given regime.
63
+ uint32_t *d = vd, *n = vn, *m = vm;
44
diff --git a/target/arm/helper.c b/target/arm/helper.c
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ uint32_t nn = n[i];
66
+ uint32_t mm = m[i];
67
+ if (mm & 1) {
68
+ nn = float32_one;
69
+ }
70
+ d[i] = nn ^ (mm & 2) << 30;
71
+ }
72
+}
73
+
74
+void HELPER(sve_ftssel_d)(void *vd, void *vn, void *vm, uint32_t desc)
75
+{
76
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
77
+ uint64_t *d = vd, *n = vn, *m = vm;
78
+ for (i = 0; i < opr_sz; i += 1) {
79
+ uint64_t nn = n[i];
80
+ uint64_t mm = m[i];
81
+ if (mm & 1) {
82
+ nn = float64_one;
83
+ }
84
+ d[i] = nn ^ (mm & 2) << 62;
85
+ }
86
+}
87
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
88
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
89
--- a/target/arm/translate-sve.c
46
--- a/target/arm/helper.c
90
+++ b/target/arm/translate-sve.c
47
+++ b/target/arm/helper.c
91
@@ -XXX,XX +XXX,XX @@ static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a, uint32_t insn)
48
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
92
return true;
49
bool take_exc = false;
93
}
50
94
51
if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
95
+static bool trans_FTSSEL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
52
- && (mmu_idx == ARMMMUIdx_Stage1_E1 ||
96
+{
53
- mmu_idx == ARMMMUIdx_Stage1_E0)) {
97
+ static gen_helper_gvec_3 * const fns[4] = {
54
+ && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
98
+ NULL,
55
/*
99
+ gen_helper_sve_ftssel_h,
56
* Synchronous stage 2 fault on an access made as part of the
100
+ gen_helper_sve_ftssel_s,
57
* translation table walk for AT S1E0* or AT S1E1* insn
101
+ gen_helper_sve_ftssel_d,
58
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
102
+ };
59
}
103
+ if (a->esz == 0) {
60
}
104
+ return false;
61
105
+ }
62
- if ((env->cp15.hcr_el2 & HCR_DC) &&
106
+ if (sve_access_check(s)) {
63
- (mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1)) {
107
+ unsigned vsz = vec_full_reg_size(s);
64
+ if ((env->cp15.hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
108
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
65
/* HCR.DC means SCTLR_EL1.M behaves as 0 */
109
+ vec_full_reg_offset(s, a->rn),
66
return true;
110
+ vec_full_reg_offset(s, a->rm),
67
}
111
+ vsz, vsz, 0, fns[a->esz]);
68
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
112
+ }
69
hwaddr addr, MemTxAttrs txattrs,
113
+ return true;
70
ARMMMUFaultInfo *fi)
114
+}
71
{
115
+
72
- if ((mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1) &&
116
/*
73
+ if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
117
*** SVE Predicate Logical Operations Group
74
!regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
118
*/
75
target_ulong s2size;
119
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
76
hwaddr s2pa;
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/sve.decode
122
+++ b/target/arm/sve.decode
123
@@ -XXX,XX +XXX,XX @@ ADR_p64 00000100 11 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
124
# Note esz != 0
125
FEXPA 00000100 .. 1 00000 101110 ..... ..... @rd_rn
126
127
+# SVE floating-point trig select coefficient
128
+# Note esz != 0
129
+FTSSEL 00000100 .. 1 ..... 101100 ..... ..... @rd_rn_rm
130
+
131
### SVE Predicate Logical Operations Group
132
133
# SVE predicate logical operations
134
--
77
--
135
2.17.0
78
2.20.1
136
79
137
80
From: Richard Henderson <richard.henderson@linaro.org>

To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx. In addition,
we cannot do this with flushing alone, because the AT*
instructions have both PAN and PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h     |  2 +-
 target/arm/cpu.h           | 33 ++++++++++++++-------
 target/arm/internals.h     |  9 ++++++
 target/arm/helper.c        | 60 +++++++++++++++++++++++++++++++-------
 target/arm/translate-a64.c |  3 ++
 target/arm/translate.c     |  2 ++
 6 files changed, 87 insertions(+), 22 deletions(-)
19
index XXXXXXX..XXXXXXX
24
20
--- /dev/null
25
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
21
+++ b/target/arm/translate-a64.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu-param.h
28
+++ b/target/arm/cpu-param.h
22
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
23
+/*
30
# define TARGET_PAGE_BITS_MIN 10
24
+ * AArch64 translation, common definitions.
31
#endif
25
+ *
32
26
+ * This library is free software; you can redistribute it and/or
33
-#define NB_MMU_MODES 9
27
+ * modify it under the terms of the GNU Lesser General Public
34
+#define NB_MMU_MODES 12
28
+ * License as published by the Free Software Foundation; either
35
29
+ * version 2 of the License, or (at your option) any later version.
36
#endif
30
+ *
37
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
31
+ * This library is distributed in the hope that it will be useful,
38
index XXXXXXX..XXXXXXX 100644
32
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
39
--- a/target/arm/cpu.h
33
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
40
+++ b/target/arm/cpu.h
34
+ * Lesser General Public License for more details.
41
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
35
+ *
42
* 5. we want to be able to use the TLB for accesses done as part of a
36
+ * You should have received a copy of the GNU Lesser General Public
43
* stage1 page table walk, rather than having to walk the stage2 page
37
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
44
* table over and over.
38
+ */
45
+ * 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
39
+
46
+ * Never (PAN) bit within PSTATE.
40
+#ifndef TARGET_ARM_TRANSLATE_A64_H
47
*
41
+#define TARGET_ARM_TRANSLATE_A64_H
48
* This gives us the following list of cases:
42
+
49
*
43
+void unallocated_encoding(DisasContext *s);
50
* NS EL0 EL1&0 stage 1+2 (aka NS PL0)
44
+
51
* NS EL1 EL1&0 stage 1+2 (aka NS PL1)
45
+#define unsupported_encoding(s, insn) \
52
+ * NS EL1 EL1&0 stage 1+2 +PAN
46
+ do { \
53
* NS EL0 EL2&0
47
+ qemu_log_mask(LOG_UNIMP, \
54
- * NS EL2 EL2&0
48
+ "%s:%d: unsupported instruction encoding 0x%08x " \
55
+ * NS EL2 EL2&0 +PAN
49
+ "at pc=%016" PRIx64 "\n", \
56
* NS EL2 (aka NS PL2)
50
+ __FILE__, __LINE__, insn, s->pc - 4); \
57
* S EL0 EL1&0 (aka S PL0)
51
+ unallocated_encoding(s); \
58
* S EL1 EL1&0 (not used if EL3 is 32 bit)
52
+ } while (0)
59
+ * S EL1 EL1&0 +PAN
53
+
60
* S EL3 (aka S PL1)
54
+TCGv_i64 new_tmp_a64(DisasContext *s);
61
* NS EL1&0 stage 2
55
+TCGv_i64 new_tmp_a64_zero(DisasContext *s);
62
*
56
+TCGv_i64 cpu_reg(DisasContext *s, int reg);
63
- * for a total of 9 different mmu_idx.
57
+TCGv_i64 cpu_reg_sp(DisasContext *s, int reg);
64
+ * for a total of 12 different mmu_idx.
58
+TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf);
65
*
59
+TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf);
66
* R profile CPUs have an MPU, but can use the same set of MMU indexes
60
+void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
67
* as A profile. They only need to distinguish NS EL0 and NS EL1 (and
61
+TCGv_ptr get_fpstatus_ptr(bool);
68
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
62
+bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
69
/*
63
+ unsigned int imms, unsigned int immr);
70
* A-profile.
64
+uint64_t vfp_expand_imm(int size, uint8_t imm8);
71
*/
65
+bool sve_access_check(DisasContext *s);
72
- ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
66
+
73
- ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
67
+/* We should have at some point before trying to access an FP register
74
+ ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
68
+ * done the necessary access check, so assert that
75
+ ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
69
+ * (a) we did the check and
76
70
+ * (b) we didn't then just plough ahead anyway if it failed.
77
- ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
71
+ * Print the instruction pattern in the abort message so we can figure
78
+ ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
72
+ * out what we need to fix if a user encounters this problem in the wild.
79
+ ARMMMUIdx_E10_1_PAN = 3 | ARM_MMU_IDX_A,
73
+ */
80
74
+static inline void assert_fp_access_checked(DisasContext *s)
81
- ARMMMUIdx_E2 = 3 | ARM_MMU_IDX_A,
75
+{
82
- ARMMMUIdx_E20_2 = 4 | ARM_MMU_IDX_A,
76
+#ifdef CONFIG_DEBUG_TCG
83
+ ARMMMUIdx_E2 = 4 | ARM_MMU_IDX_A,
77
+ if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
84
+ ARMMMUIdx_E20_2 = 5 | ARM_MMU_IDX_A,
78
+ fprintf(stderr, "target-arm: FP access check missing for "
85
+ ARMMMUIdx_E20_2_PAN = 6 | ARM_MMU_IDX_A,
79
+ "instruction 0x%08x\n", s->insn);
86
80
+ abort();
87
- ARMMMUIdx_SE10_0 = 5 | ARM_MMU_IDX_A,
81
+ }
88
- ARMMMUIdx_SE10_1 = 6 | ARM_MMU_IDX_A,
82
+#endif
89
- ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
83
+}
90
+ ARMMMUIdx_SE10_0 = 7 | ARM_MMU_IDX_A,
84
+
91
+ ARMMMUIdx_SE10_1 = 8 | ARM_MMU_IDX_A,
85
+/* Return the offset into CPUARMState of an element of specified
92
+ ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
86
+ * size, 'element' places in from the least significant end of
93
+ ARMMMUIdx_SE3 = 10 | ARM_MMU_IDX_A,
87
+ * the FP/vector register Qn.
94
88
+ */
95
- ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
89
+static inline int vec_reg_offset(DisasContext *s, int regno,
96
+ ARMMMUIdx_Stage2 = 11 | ARM_MMU_IDX_A,
90
+ int element, TCGMemOp size)
97
91
+{
98
/*
92
+ int offs = 0;
99
* These are not allocated TLBs and are used only for AT system
93
+#ifdef HOST_WORDS_BIGENDIAN
100
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
94
+ /* This is complicated slightly because vfp.zregs[n].d[0] is
101
*/
95
+ * still the low half and vfp.zregs[n].d[1] the high half
102
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
96
+ * of the 128 bit vector, even on big endian systems.
103
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
97
+ * Calculate the offset assuming a fully bigendian 128 bits,
104
+ ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
98
+ * then XOR to account for the order of the two 64 bit halves.
105
99
+ */
106
/*
100
+ offs += (16 - ((element + 1) * (1 << size)));
107
* M-profile.
101
+ offs ^= 8;
108
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
102
+#else
109
TO_CORE_BIT(E10_0),
103
+ offs += element * (1 << size);
110
TO_CORE_BIT(E20_0),
104
+#endif
111
TO_CORE_BIT(E10_1),
105
+ offs += offsetof(CPUARMState, vfp.zregs[regno]);
112
+ TO_CORE_BIT(E10_1_PAN),
106
+ assert_fp_access_checked(s);
113
TO_CORE_BIT(E2),
107
+ return offs;
114
TO_CORE_BIT(E20_2),
108
+}
115
+ TO_CORE_BIT(E20_2_PAN),
109
+
116
TO_CORE_BIT(SE10_0),
110
+/* Return the offset info CPUARMState of the "whole" vector register Qn. */
117
TO_CORE_BIT(SE10_1),
111
+static inline int vec_full_reg_offset(DisasContext *s, int regno)
118
+ TO_CORE_BIT(SE10_1_PAN),
112
+{
119
TO_CORE_BIT(SE3),
113
+ assert_fp_access_checked(s);
120
TO_CORE_BIT(Stage2),
114
+ return offsetof(CPUARMState, vfp.zregs[regno]);
121
115
+}
122
diff --git a/target/arm/internals.h b/target/arm/internals.h
116
+
123
index XXXXXXX..XXXXXXX 100644
117
+/* Return a newly allocated pointer to the vector register. */
124
--- a/target/arm/internals.h
118
+static inline TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
125
+++ b/target/arm/internals.h
119
+{
126
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
120
+ TCGv_ptr ret = tcg_temp_new_ptr();
127
switch (mmu_idx) {
121
+ tcg_gen_addi_ptr(ret, cpu_env, vec_full_reg_offset(s, regno));
128
case ARMMMUIdx_Stage1_E0:
122
+ return ret;
129
case ARMMMUIdx_Stage1_E1:
123
+}
130
+ case ARMMMUIdx_Stage1_E1_PAN:
124
+
131
case ARMMMUIdx_E10_0:
125
+/* Return the byte size of the "whole" vector register, VL / 8. */
132
case ARMMMUIdx_E10_1:
126
+static inline int vec_full_reg_size(DisasContext *s)
133
+ case ARMMMUIdx_E10_1_PAN:
127
+{
134
case ARMMMUIdx_E20_0:
128
+ return s->sve_len;
135
case ARMMMUIdx_E20_2:
129
+}
136
+ case ARMMMUIdx_E20_2_PAN:
130
+
137
case ARMMMUIdx_SE10_0:
131
+bool disas_sve(DisasContext *, uint32_t);
138
case ARMMMUIdx_SE10_1:
132
+
139
+ case ARMMMUIdx_SE10_1_PAN:
133
+/* Note that the gvec expanders operate on offsets + sizes. */
140
return true;
134
+typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
141
default:
135
+typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
142
return false;
136
+ uint32_t, uint32_t);
143
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
137
+typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
144
switch (mmu_idx) {
138
+ uint32_t, uint32_t, uint32_t);
145
case ARMMMUIdx_E10_0:
139
+
146
case ARMMMUIdx_E10_1:
140
+#endif /* TARGET_ARM_TRANSLATE_A64_H */
147
+ case ARMMMUIdx_E10_1_PAN:
148
case ARMMMUIdx_E20_0:
149
case ARMMMUIdx_E20_2:
150
+ case ARMMMUIdx_E20_2_PAN:
151
case ARMMMUIdx_Stage1_E0:
152
case ARMMMUIdx_Stage1_E1:
153
+ case ARMMMUIdx_Stage1_E1_PAN:
154
case ARMMMUIdx_E2:
155
case ARMMMUIdx_Stage2:
156
case ARMMMUIdx_MPrivNegPri:
157
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
158
case ARMMMUIdx_SE3:
159
case ARMMMUIdx_SE10_0:
160
case ARMMMUIdx_SE10_1:
161
+ case ARMMMUIdx_SE10_1_PAN:
162
case ARMMMUIdx_MSPrivNegPri:
163
case ARMMMUIdx_MSUserNegPri:
164
case ARMMMUIdx_MSPriv:
165
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
166
switch (mmu_idx) {
167
case ARMMMUIdx_Stage1_E0:
168
case ARMMMUIdx_Stage1_E1:
169
+ case ARMMMUIdx_Stage1_E1_PAN:
170
return true;
171
default:
172
return false;
173
diff --git a/target/arm/helper.c b/target/arm/helper.c
174
index XXXXXXX..XXXXXXX 100644
175
--- a/target/arm/helper.c
176
+++ b/target/arm/helper.c
177
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
178
179
tlb_flush_by_mmuidx(cs,
180
ARMMMUIdxBit_E10_1 |
181
+ ARMMMUIdxBit_E10_1_PAN |
182
ARMMMUIdxBit_E10_0 |
183
ARMMMUIdxBit_Stage2);
184
}
185
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
186
187
tlb_flush_by_mmuidx_all_cpus_synced(cs,
188
ARMMMUIdxBit_E10_1 |
189
+ ARMMMUIdxBit_E10_1_PAN |
190
ARMMMUIdxBit_E10_0 |
191
ARMMMUIdxBit_Stage2);
192
}
193
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
194
switch (arm_mmu_idx(env)) {
195
case ARMMMUIdx_E20_0:
196
case ARMMMUIdx_E20_2:
197
+ case ARMMMUIdx_E20_2_PAN:
198
return GTIMER_HYP;
199
default:
200
return GTIMER_PHYS;
201
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
202
switch (arm_mmu_idx(env)) {
203
case ARMMMUIdx_E20_0:
204
case ARMMMUIdx_E20_2:
205
+ case ARMMMUIdx_E20_2_PAN:
206
return GTIMER_HYPVIRT;
207
default:
208
return GTIMER_VIRT;
209
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
210
format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
211
212
if (arm_feature(env, ARM_FEATURE_EL2)) {
213
- if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
214
+ if (mmu_idx == ARMMMUIdx_E10_0 ||
215
+ mmu_idx == ARMMMUIdx_E10_1 ||
216
+ mmu_idx == ARMMMUIdx_E10_1_PAN) {
217
format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
218
} else {
219
format64 |= arm_current_el(env) == 2;
220
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
221
if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
222
(arm_hcr_el2_eff(env) & HCR_E2H)) {
223
tlb_flush_by_mmuidx(env_cpu(env),
224
- ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0);
225
+ ARMMMUIdxBit_E20_2 |
226
+ ARMMMUIdxBit_E20_2_PAN |
227
+ ARMMMUIdxBit_E20_0);
228
}
229
raw_write(env, ri, value);
230
}
231
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
232
if (raw_read(env, ri) != value) {
233
tlb_flush_by_mmuidx(cs,
234
ARMMMUIdxBit_E10_1 |
235
+ ARMMMUIdxBit_E10_1_PAN |
236
ARMMMUIdxBit_E10_0 |
237
ARMMMUIdxBit_Stage2);
238
raw_write(env, ri, value);
239
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
240
{
241
/* Since we exclude secure first, we may read HCR_EL2 directly. */
242
if (arm_is_secure_below_el3(env)) {
243
- return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
244
+ return ARMMMUIdxBit_SE10_1 |
245
+ ARMMMUIdxBit_SE10_1_PAN |
246
+ ARMMMUIdxBit_SE10_0;
247
} else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
248
== (HCR_E2H | HCR_TGE)) {
249
- return ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0;
250
+ return ARMMMUIdxBit_E20_2 |
251
+ ARMMMUIdxBit_E20_2_PAN |
252
+ ARMMMUIdxBit_E20_0;
253
} else {
254
- return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
255
+ return ARMMMUIdxBit_E10_1 |
256
+ ARMMMUIdxBit_E10_1_PAN |
257
+ ARMMMUIdxBit_E10_0;
258
}
259
}
260
261
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
262
* stage 1 translations.
263
*/
264
if (arm_is_secure_below_el3(env)) {
265
- return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
266
+ return ARMMMUIdxBit_SE10_1 |
267
+ ARMMMUIdxBit_SE10_1_PAN |
268
+ ARMMMUIdxBit_SE10_0;
269
} else if (arm_feature(env, ARM_FEATURE_EL2)) {
270
- return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_Stage2;
271
+ return ARMMMUIdxBit_E10_1 |
272
+ ARMMMUIdxBit_E10_1_PAN |
273
+ ARMMMUIdxBit_E10_0 |
274
+ ARMMMUIdxBit_Stage2;
275
} else {
276
- return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
277
+ return ARMMMUIdxBit_E10_1 |
278
+ ARMMMUIdxBit_E10_1_PAN |
279
+ ARMMMUIdxBit_E10_0;
280
}
281
}
282
283
static int e2_tlbmask(CPUARMState *env)
284
{
285
/* TODO: ARMv8.4-SecEL2 */
286
- return ARMMMUIdxBit_E20_0 | ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E2;
287
+ return ARMMMUIdxBit_E20_0 |
288
+ ARMMMUIdxBit_E20_2 |
289
+ ARMMMUIdxBit_E20_2_PAN |
290
+ ARMMMUIdxBit_E2;
291
}
292
293
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
294
@@ -XXX,XX +XXX,XX @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
295
switch (mmu_idx) {
296
case ARMMMUIdx_E20_0:
297
case ARMMMUIdx_E20_2:
298
+ case ARMMMUIdx_E20_2_PAN:
299
case ARMMMUIdx_Stage2:
300
case ARMMMUIdx_E2:
301
return 2;
302
@@ -XXX,XX +XXX,XX @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
303
case ARMMMUIdx_SE10_0:
304
return arm_el_is_aa64(env, 3) ? 1 : 3;
305
case ARMMMUIdx_SE10_1:
306
+ case ARMMMUIdx_SE10_1_PAN:
307
case ARMMMUIdx_Stage1_E0:
308
case ARMMMUIdx_Stage1_E1:
309
+ case ARMMMUIdx_Stage1_E1_PAN:
310
case ARMMMUIdx_E10_0:
311
case ARMMMUIdx_E10_1:
312
+ case ARMMMUIdx_E10_1_PAN:
313
case ARMMMUIdx_MPrivNegPri:
314
case ARMMMUIdx_MUserNegPri:
315
case ARMMMUIdx_MPriv:
316
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
317
return ARMMMUIdx_Stage1_E0;
318
case ARMMMUIdx_E10_1:
319
return ARMMMUIdx_Stage1_E1;
320
+ case ARMMMUIdx_E10_1_PAN:
321
+ return ARMMMUIdx_Stage1_E1_PAN;
322
default:
323
return mmu_idx;
324
}
325
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
326
return false;
327
case ARMMMUIdx_E10_0:
328
case ARMMMUIdx_E10_1:
329
+ case ARMMMUIdx_E10_1_PAN:
330
g_assert_not_reached();
331
}
332
}
333
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
334
target_ulong *page_size,
335
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
336
{
337
- if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
338
+ if (mmu_idx == ARMMMUIdx_E10_0 ||
339
+ mmu_idx == ARMMMUIdx_E10_1 ||
340
+ mmu_idx == ARMMMUIdx_E10_1_PAN) {
341
/* Call ourselves recursively to do the stage 1 and then stage 2
342
* translations.
343
*/
344
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
345
case ARMMMUIdx_SE10_0:
346
return 0;
347
case ARMMMUIdx_E10_1:
348
+ case ARMMMUIdx_E10_1_PAN:
349
case ARMMMUIdx_SE10_1:
350
+ case ARMMMUIdx_SE10_1_PAN:
351
return 1;
352
case ARMMMUIdx_E2:
353
case ARMMMUIdx_E20_2:
354
+ case ARMMMUIdx_E20_2_PAN:
355
return 2;
356
case ARMMMUIdx_SE3:
357
return 3;
358
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
359
/* TODO: ARMv8.2-UAO */
360
switch (mmu_idx) {
361
case ARMMMUIdx_E10_1:
362
+ case ARMMMUIdx_E10_1_PAN:
363
case ARMMMUIdx_SE10_1:
364
+ case ARMMMUIdx_SE10_1_PAN:
365
/* TODO: ARMv8.3-NV */
366
flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
367
break;
368
case ARMMMUIdx_E20_2:
369
+ case ARMMMUIdx_E20_2_PAN:
370
/* TODO: ARMv8.4-SecEL2 */
371
/*
372
* Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
141
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
373
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
142
index XXXXXXX..XXXXXXX 100644
374
index XXXXXXX..XXXXXXX 100644
143
--- a/target/arm/translate-a64.c
375
--- a/target/arm/translate-a64.c
144
+++ b/target/arm/translate-a64.c
376
+++ b/target/arm/translate-a64.c
145
@@ -XXX,XX +XXX,XX @@
377
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
146
#include "exec/log.h"
378
*/
147
379
switch (useridx) {
148
#include "trace-tcg.h"
380
case ARMMMUIdx_E10_1:
149
+#include "translate-a64.h"
381
+ case ARMMMUIdx_E10_1_PAN:
150
382
useridx = ARMMMUIdx_E10_0;
151
static TCGv_i64 cpu_X[32];
383
break;
152
static TCGv_i64 cpu_pc;
384
case ARMMMUIdx_E20_2:
153
385
+ case ARMMMUIdx_E20_2_PAN:
154
/* Load/store exclusive handling */
386
useridx = ARMMMUIdx_E20_0;
155
static TCGv_i64 cpu_exclusive_high;
387
break;
156
-static TCGv_i64 cpu_reg(DisasContext *s, int reg);
388
case ARMMMUIdx_SE10_1:
157
389
+ case ARMMMUIdx_SE10_1_PAN:
158
static const char *regnames[] = {
390
useridx = ARMMMUIdx_SE10_0;
159
"x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
391
break;
160
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
392
default:
161
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
393
diff --git a/target/arm/translate.c b/target/arm/translate.c
162
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
394
index XXXXXXX..XXXXXXX 100644
163
395
--- a/target/arm/translate.c
164
-/* Note that the gvec expanders operate on offsets + sizes. */
396
+++ b/target/arm/translate.c
165
-typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
397
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
166
-typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
398
case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */
167
- uint32_t, uint32_t);
399
case ARMMMUIdx_E10_0:
168
-typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
400
case ARMMMUIdx_E10_1:
169
- uint32_t, uint32_t, uint32_t);
401
+ case ARMMMUIdx_E10_1_PAN:
170
-
402
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
171
/* initialize TCG globals. */
403
case ARMMMUIdx_SE3:
172
void a64_translate_init(void)
404
case ARMMMUIdx_SE10_0:
173
{
405
case ARMMMUIdx_SE10_1:
174
@@ -XXX,XX +XXX,XX @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
406
+ case ARMMMUIdx_SE10_1_PAN:
175
}
407
return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
176
}
408
case ARMMMUIdx_MUser:
177
409
case ARMMMUIdx_MPriv:
178
-static void unallocated_encoding(DisasContext *s)
179
+void unallocated_encoding(DisasContext *s)
180
{
181
/* Unallocated and reserved encodings are uncategorized */
182
gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
183
default_exception_el(s));
184
}
185
186
-#define unsupported_encoding(s, insn) \
187
- do { \
188
- qemu_log_mask(LOG_UNIMP, \
189
- "%s:%d: unsupported instruction encoding 0x%08x " \
190
- "at pc=%016" PRIx64 "\n", \
191
- __FILE__, __LINE__, insn, s->pc - 4); \
192
- unallocated_encoding(s); \
193
- } while (0)
194
-
195
static void init_tmp_a64_array(DisasContext *s)
196
{
197
#ifdef CONFIG_DEBUG_TCG
198
@@ -XXX,XX +XXX,XX @@ static void free_tmp_a64(DisasContext *s)
199
init_tmp_a64_array(s);
200
}
201
202
-static TCGv_i64 new_tmp_a64(DisasContext *s)
203
+TCGv_i64 new_tmp_a64(DisasContext *s)
204
{
205
assert(s->tmp_a64_count < TMP_A64_MAX);
206
return s->tmp_a64[s->tmp_a64_count++] = tcg_temp_new_i64();
207
}
208
209
-static TCGv_i64 new_tmp_a64_zero(DisasContext *s)
210
+TCGv_i64 new_tmp_a64_zero(DisasContext *s)
211
{
212
TCGv_i64 t = new_tmp_a64(s);
213
tcg_gen_movi_i64(t, 0);
214
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 new_tmp_a64_zero(DisasContext *s)
215
* to cpu_X[31] and ZR accesses to a temporary which can be discarded.
216
* This is the point of the _sp forms.
217
*/
218
-static TCGv_i64 cpu_reg(DisasContext *s, int reg)
219
+TCGv_i64 cpu_reg(DisasContext *s, int reg)
220
{
221
if (reg == 31) {
222
return new_tmp_a64_zero(s);
223
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_reg(DisasContext *s, int reg)
224
}
225
226
/* register access for when 31 == SP */
227
-static TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
228
+TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
229
{
230
return cpu_X[reg];
231
}
232
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
233
* representing the register contents. This TCGv is an auto-freed
234
* temporary so it need not be explicitly freed, and may be modified.
235
*/
236
-static TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
237
+TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
238
{
239
TCGv_i64 v = new_tmp_a64(s);
240
if (reg != 31) {
241
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
242
return v;
243
}
244
245
-static TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
246
+TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
247
{
248
TCGv_i64 v = new_tmp_a64(s);
249
if (sf) {
250
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
251
return v;
252
}
253
254
-/* We should have at some point before trying to access an FP register
255
- * done the necessary access check, so assert that
256
- * (a) we did the check and
257
- * (b) we didn't then just plough ahead anyway if it failed.
258
- * Print the instruction pattern in the abort message so we can figure
259
- * out what we need to fix if a user encounters this problem in the wild.
260
- */
261
-static inline void assert_fp_access_checked(DisasContext *s)
262
-{
263
-#ifdef CONFIG_DEBUG_TCG
264
- if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
265
- fprintf(stderr, "target-arm: FP access check missing for "
266
- "instruction 0x%08x\n", s->insn);
267
- abort();
268
- }
269
-#endif
270
-}
271
-
272
-/* Return the offset into CPUARMState of an element of specified
273
- * size, 'element' places in from the least significant end of
274
- * the FP/vector register Qn.
275
- */
276
-static inline int vec_reg_offset(DisasContext *s, int regno,
277
- int element, TCGMemOp size)
278
-{
279
- int offs = 0;
280
-#ifdef HOST_WORDS_BIGENDIAN
281
- /* This is complicated slightly because vfp.zregs[n].d[0] is
282
- * still the low half and vfp.zregs[n].d[1] the high half
283
- * of the 128 bit vector, even on big endian systems.
284
- * Calculate the offset assuming a fully bigendian 128 bits,
285
- * then XOR to account for the order of the two 64 bit halves.
286
- */
287
- offs += (16 - ((element + 1) * (1 << size)));
288
- offs ^= 8;
289
-#else
290
- offs += element * (1 << size);
291
-#endif
292
- offs += offsetof(CPUARMState, vfp.zregs[regno]);
293
- assert_fp_access_checked(s);
294
- return offs;
295
-}
296
-
297
-/* Return the offset info CPUARMState of the "whole" vector register Qn. */
298
-static inline int vec_full_reg_offset(DisasContext *s, int regno)
299
-{
300
- assert_fp_access_checked(s);
301
- return offsetof(CPUARMState, vfp.zregs[regno]);
302
-}
303
-
304
-/* Return a newly allocated pointer to the vector register. */
305
-static TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
306
-{
307
- TCGv_ptr ret = tcg_temp_new_ptr();
308
- tcg_gen_addi_ptr(ret, cpu_env, vec_full_reg_offset(s, regno));
309
- return ret;
310
-}
311
-
312
-/* Return the byte size of the "whole" vector register, VL / 8. */
313
-static inline int vec_full_reg_size(DisasContext *s)
314
-{
315
- /* FIXME SVE: We should put the composite ZCR_EL* value into tb->flags.
316
- In the meantime this is just the AdvSIMD length of 128. */
317
- return 128 / 8;
318
-}
319
-
320
/* Return the offset into CPUARMState of a slice (from
321
* the least significant end) of FP register Qn (ie
322
* Dn, Sn, Hn or Bn).
323
@@ -XXX,XX +XXX,XX @@ static void clear_vec_high(DisasContext *s, bool is_q, int rd)
324
}
325
}
326
327
-static void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
328
+void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
329
{
330
unsigned ofs = fp_reg_offset(s, reg, MO_64);
331
332
@@ -XXX,XX +XXX,XX @@ static void write_fp_sreg(DisasContext *s, int reg, TCGv_i32 v)
333
tcg_temp_free_i64(tmp);
334
}
335
336
-static TCGv_ptr get_fpstatus_ptr(bool is_f16)
337
+TCGv_ptr get_fpstatus_ptr(bool is_f16)
338
{
339
TCGv_ptr statusptr = tcg_temp_new_ptr();
340
int offset;
341
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
342
/* Check that SVE access is enabled. If it is, return true.
343
* If not, emit code to generate an appropriate exception and return false.
344
*/
345
-static inline bool sve_access_check(DisasContext *s)
346
+bool sve_access_check(DisasContext *s)
347
{
348
if (s->sve_excp_el) {
349
gen_exception_insn(s, 4, EXCP_UDEF, syn_sve_access_trap(),
350
s->sve_excp_el);
351
return false;
352
}
353
- return true;
354
+ return fp_access_check(s);
355
}
356
357
/*
358
@@ -XXX,XX +XXX,XX @@ static inline uint64_t bitmask64(unsigned int length)
359
* value (ie should cause a guest UNDEF exception), and true if they are
360
* valid, in which case the decoded bit pattern is written to result.
361
*/
362
-static bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
363
- unsigned int imms, unsigned int immr)
364
+bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
365
+ unsigned int imms, unsigned int immr)
366
{
367
uint64_t mask;
368
unsigned e, levels, s, r;
369
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
370
* the range 01....1xx to 10....0xx, and the most significant 4 bits of
371
* the mantissa; see VFPExpandImm() in the v8 ARM ARM.
372
*/
373
-static uint64_t vfp_expand_imm(int size, uint8_t imm8)
374
+uint64_t vfp_expand_imm(int size, uint8_t imm8)
375
{
376
uint64_t imm;
377
378
--
410
--
379
2.17.0
411
2.20.1
380
412
381
413
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Include definitions for all of the bits in ID_MMFR3.
4
We already have a definition for ID_AA64MMFR1.PAN.
5
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180516223007.10256-25-richard.henderson@linaro.org
9
Message-id: 20200208125816.14954-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 10 ++++
12
target/arm/cpu.h | 29 +++++++++++++++++++++++++++++
9
target/arm/sve_helper.c | 108 +++++++++++++++++++++++++++++++++++++
13
1 file changed, 29 insertions(+)
10
target/arm/translate-sve.c | 88 ++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 19 ++++++-
12
4 files changed, 224 insertions(+), 1 deletion(-)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/cpu.h
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_uqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
19
@@ -XXX,XX +XXX,XX @@ FIELD(ID_ISAR6, FHM, 8, 4)
19
DEF_HELPER_FLAGS_4(sve_uqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
20
FIELD(ID_ISAR6, SB, 12, 4)
20
DEF_HELPER_FLAGS_4(sve_uqsubi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
21
FIELD(ID_ISAR6, SPECRES, 16, 4)
21
22
22
+DEF_HELPER_FLAGS_5(sve_cpy_m_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
23
+FIELD(ID_MMFR3, CMAINTVA, 0, 4)
23
+DEF_HELPER_FLAGS_5(sve_cpy_m_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
24
+FIELD(ID_MMFR3, CMAINTSW, 4, 4)
24
+DEF_HELPER_FLAGS_5(sve_cpy_m_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
25
+FIELD(ID_MMFR3, BPMAINT, 8, 4)
25
+DEF_HELPER_FLAGS_5(sve_cpy_m_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
26
+FIELD(ID_MMFR3, MAINTBCST, 12, 4)
27
+FIELD(ID_MMFR3, PAN, 16, 4)
28
+FIELD(ID_MMFR3, COHWALK, 20, 4)
29
+FIELD(ID_MMFR3, CMEMSZ, 24, 4)
30
+FIELD(ID_MMFR3, SUPERSEC, 28, 4)
26
+
31
+
27
+DEF_HELPER_FLAGS_4(sve_cpy_z_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
32
FIELD(ID_MMFR4, SPECSEI, 0, 4)
28
+DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
33
FIELD(ID_MMFR4, AC2, 4, 4)
29
+DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
34
FIELD(ID_MMFR4, XNX, 8, 4)
30
+DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
35
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
31
+
36
return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
32
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
35
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/sve_helper.c
38
+++ b/target/arm/sve_helper.c
39
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqsubi_d)(void *d, void *a, uint64_t b, uint32_t desc)
40
*(uint64_t *)(d + i) = (ai < b ? 0 : ai - b);
41
}
42
}
37
}
43
+
38
44
+/* Two operand predicated copy immediate with merge. All valid immediates
39
+static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
45
+ * can fit within 17 signed bits in the simd_data field.
46
+ */
47
+void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
48
+ uint64_t mm, uint32_t desc)
49
+{
40
+{
50
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
41
+ return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
51
+ uint64_t *d = vd, *n = vn;
52
+ uint8_t *pg = vg;
53
+
54
+ mm = dup_const(MO_8, mm);
55
+ for (i = 0; i < opr_sz; i += 1) {
56
+ uint64_t nn = n[i];
57
+ uint64_t pp = expand_pred_b(pg[H1(i)]);
58
+ d[i] = (mm & pp) | (nn & ~pp);
59
+ }
60
+}
42
+}
61
+
43
+
62
+void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
44
+static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
63
+ uint64_t mm, uint32_t desc)
64
+{
45
+{
65
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
46
+ return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
66
+ uint64_t *d = vd, *n = vn;
67
+ uint8_t *pg = vg;
68
+
69
+ mm = dup_const(MO_16, mm);
70
+ for (i = 0; i < opr_sz; i += 1) {
71
+ uint64_t nn = n[i];
72
+ uint64_t pp = expand_pred_h(pg[H1(i)]);
73
+ d[i] = (mm & pp) | (nn & ~pp);
74
+ }
75
+}
76
+
77
+void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg,
78
+ uint64_t mm, uint32_t desc)
79
+{
80
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
81
+ uint64_t *d = vd, *n = vn;
82
+ uint8_t *pg = vg;
83
+
84
+ mm = dup_const(MO_32, mm);
85
+ for (i = 0; i < opr_sz; i += 1) {
86
+ uint64_t nn = n[i];
87
+ uint64_t pp = expand_pred_s(pg[H1(i)]);
88
+ d[i] = (mm & pp) | (nn & ~pp);
89
+ }
90
+}
91
+
92
+void HELPER(sve_cpy_m_d)(void *vd, void *vn, void *vg,
93
+ uint64_t mm, uint32_t desc)
94
+{
95
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
96
+ uint64_t *d = vd, *n = vn;
97
+ uint8_t *pg = vg;
98
+
99
+ for (i = 0; i < opr_sz; i += 1) {
100
+ uint64_t nn = n[i];
101
+ d[i] = (pg[H1(i)] & 1 ? mm : nn);
102
+ }
103
+}
104
+
105
+void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
106
+{
107
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
108
+ uint64_t *d = vd;
109
+ uint8_t *pg = vg;
110
+
111
+ val = dup_const(MO_8, val);
112
+ for (i = 0; i < opr_sz; i += 1) {
113
+ d[i] = val & expand_pred_b(pg[H1(i)]);
114
+ }
115
+}
116
+
117
+void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
118
+{
119
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
120
+ uint64_t *d = vd;
121
+ uint8_t *pg = vg;
122
+
123
+ val = dup_const(MO_16, val);
124
+ for (i = 0; i < opr_sz; i += 1) {
125
+ d[i] = val & expand_pred_h(pg[H1(i)]);
126
+ }
127
+}
128
+
129
+void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc)
130
+{
131
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
132
+ uint64_t *d = vd;
133
+ uint8_t *pg = vg;
134
+
135
+ val = dup_const(MO_32, val);
136
+ for (i = 0; i < opr_sz; i += 1) {
137
+ d[i] = val & expand_pred_s(pg[H1(i)]);
138
+ }
139
+}
140
+
141
+void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
142
+{
143
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
144
+ uint64_t *d = vd;
145
+ uint8_t *pg = vg;
146
+
147
+ for (i = 0; i < opr_sz; i += 1) {
148
+ d[i] = (pg[H1(i)] & 1 ? val : 0);
149
+ }
150
+}
151
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
152
index XXXXXXX..XXXXXXX 100644
153
--- a/target/arm/translate-sve.c
154
+++ b/target/arm/translate-sve.c
155
@@ -XXX,XX +XXX,XX @@ static inline int plus1(int x)
156
return x + 1;
157
}
158
159
+/* The SH bit is in bit 8. Extract the low 8 and shift. */
160
+static inline int expand_imm_sh8s(int x)
161
+{
162
+ return (int8_t)x << (x & 0x100 ? 8 : 0);
163
+}
47
+}
164
+
48
+
165
/*
49
/*
166
* Include the generated decoder.
50
* 64-bit feature tests via id registers.
167
*/
51
*/
168
@@ -XXX,XX +XXX,XX @@ static bool trans_DUPM(DisasContext *s, arg_DUPM *a, uint32_t insn)
52
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
169
return true;
53
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
170
}
54
}
171
55
172
+/*
56
+static inline bool isar_feature_aa64_pan(const ARMISARegisters *id)
173
+ *** SVE Integer Wide Immediate - Predicated Group
174
+ */
175
+
176
+/* Implement all merging copies. This is used for CPY (immediate),
177
+ * FCPY, CPY (scalar), CPY (SIMD&FP scalar).
178
+ */
179
+static void do_cpy_m(DisasContext *s, int esz, int rd, int rn, int pg,
180
+ TCGv_i64 val)
181
+{
57
+{
182
+ typedef void gen_cpy(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
58
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) != 0;
183
+ static gen_cpy * const fns[4] = {
184
+ gen_helper_sve_cpy_m_b, gen_helper_sve_cpy_m_h,
185
+ gen_helper_sve_cpy_m_s, gen_helper_sve_cpy_m_d,
186
+ };
187
+ unsigned vsz = vec_full_reg_size(s);
188
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
189
+ TCGv_ptr t_zd = tcg_temp_new_ptr();
190
+ TCGv_ptr t_zn = tcg_temp_new_ptr();
191
+ TCGv_ptr t_pg = tcg_temp_new_ptr();
192
+
193
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
194
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, rn));
195
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
196
+
197
+ fns[esz](t_zd, t_zn, t_pg, val, desc);
198
+
199
+ tcg_temp_free_ptr(t_zd);
200
+ tcg_temp_free_ptr(t_zn);
201
+ tcg_temp_free_ptr(t_pg);
202
+ tcg_temp_free_i32(desc);
203
+}
59
+}
204
+
60
+
205
+static bool trans_FCPY(DisasContext *s, arg_FCPY *a, uint32_t insn)
61
+static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
206
+{
62
+{
207
+ if (a->esz == 0) {
63
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
208
+ return false;
209
+ }
210
+ if (sve_access_check(s)) {
211
+ /* Decode the VFP immediate. */
212
+ uint64_t imm = vfp_expand_imm(a->esz, a->imm);
213
+ TCGv_i64 t_imm = tcg_const_i64(imm);
214
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
215
+ tcg_temp_free_i64(t_imm);
216
+ }
217
+ return true;
218
+}
64
+}
219
+
65
+
220
+static bool trans_CPY_m_i(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
66
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
221
+{
67
{
222
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
68
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
223
+ return false;
224
+ }
225
+ if (sve_access_check(s)) {
226
+ TCGv_i64 t_imm = tcg_const_i64(a->imm);
227
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
228
+ tcg_temp_free_i64(t_imm);
229
+ }
230
+ return true;
231
+}
232
+
233
+static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a, uint32_t insn)
234
+{
235
+ static gen_helper_gvec_2i * const fns[4] = {
236
+ gen_helper_sve_cpy_z_b, gen_helper_sve_cpy_z_h,
237
+ gen_helper_sve_cpy_z_s, gen_helper_sve_cpy_z_d,
238
+ };
239
+
240
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
241
+ return false;
242
+ }
243
+ if (sve_access_check(s)) {
244
+ unsigned vsz = vec_full_reg_size(s);
245
+ TCGv_i64 t_imm = tcg_const_i64(a->imm);
246
+ tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
247
+ pred_full_reg_offset(s, a->pg),
248
+ t_imm, vsz, vsz, 0, fns[a->esz]);
249
+ tcg_temp_free_i64(t_imm);
250
+ }
251
+ return true;
252
+}
253
+
254
/*
255
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
256
*/
257
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
258
index XXXXXXX..XXXXXXX 100644
259
--- a/target/arm/sve.decode
260
+++ b/target/arm/sve.decode
261
@@ -XXX,XX +XXX,XX @@
262
###########################################################################
263
# Named fields. These are primarily for disjoint fields.
264
265
-%imm4_16_p1 16:4 !function=plus1
266
+%imm4_16_p1 16:4 !function=plus1
267
%imm6_22_5 22:1 5:5
268
%imm9_16_10 16:s6 10:3
269
270
@@ -XXX,XX +XXX,XX @@
271
%tszimm16_shr 22:2 16:5 !function=tszimm_shr
272
%tszimm16_shl 22:2 16:5 !function=tszimm_shl
273
274
+# Signed 8-bit immediate, optionally shifted left by 8.
275
+%sh8_i8s 5:9 !function=expand_imm_sh8s
276
+
277
# Either a copy of rd (at bit 0), or a different source
278
# as propagated via the MOVPRFX instruction.
279
%reg_movprfx 0:5
280
@@ -XXX,XX +XXX,XX @@
281
@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
282
&rri_esz esz=%tszimm16_esz
283
284
+# Two register operand, one immediate operand, with 4-bit predicate.
285
+# User must fill in imm.
286
+@rdn_pg4 ........ esz:2 .. pg:4 ... ........ rd:5 \
287
+ &rpri_esz rn=%reg_movprfx
288
+
289
# Two register operand, one encoded bitmask.
290
@rdn_dbm ........ .. .... dbm:13 rd:5 \
291
&rr_dbm rn=%reg_movprfx
292
@@ -XXX,XX +XXX,XX @@ AND_zzi 00000101 10 0000 ............. ..... @rdn_dbm
293
# SVE broadcast bitmask immediate
294
DUPM 00000101 11 0000 dbm:13 rd:5
295
296
+### SVE Integer Wide Immediate - Predicated Group
297
+
298
+# SVE copy floating-point immediate (predicated)
299
+FCPY 00000101 .. 01 .... 110 imm:8 ..... @rdn_pg4
300
+
301
+# SVE copy integer immediate (predicated)
302
+CPY_m_i 00000101 .. 01 .... 01 . ........ ..... @rdn_pg4 imm=%sh8_i8s
303
+CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
304
+
305
### SVE Predicate Logical Operations Group
306
307
# SVE predicate logical operations
308
--
69
--
309
2.17.0
70
2.20.1
310
71
311
72
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
For static const regdefs, file scope is preferred.
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180516223007.10256-24-richard.henderson@linaro.org
7
Message-id: 20200208125816.14954-5-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate-sve.c | 49 ++++++++++++++++++++++++++++++++++++++
10
target/arm/helper.c | 57 +++++++++++++++++++++++----------------------
9
target/arm/sve.decode | 17 +++++++++++++
11
1 file changed, 29 insertions(+), 28 deletions(-)
10
2 files changed, 66 insertions(+)
11
12
12
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
15
--- a/target/arm/helper.c
15
+++ b/target/arm/translate-sve.c
16
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDEC_v(DisasContext *s, arg_incdec2_cnt *a,
17
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
17
return true;
18
return access_lor_ns(env);
18
}
19
}
19
20
20
+/*
21
+/*
21
+ *** SVE Bitwise Immediate Group
22
+ * A trivial implementation of ARMv8.1-LOR leaves all of these
23
+ * registers fixed at 0, which indicates that there are zero
24
+ * supported Limited Ordering regions.
22
+ */
25
+ */
26
+static const ARMCPRegInfo lor_reginfo[] = {
27
+ { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
28
+ .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
29
+ .access = PL1_RW, .accessfn = access_lor_other,
30
+ .type = ARM_CP_CONST, .resetvalue = 0 },
31
+ { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
32
+ .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
33
+ .access = PL1_RW, .accessfn = access_lor_other,
34
+ .type = ARM_CP_CONST, .resetvalue = 0 },
35
+ { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
36
+ .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
37
+ .access = PL1_RW, .accessfn = access_lor_other,
38
+ .type = ARM_CP_CONST, .resetvalue = 0 },
39
+ { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
40
+ .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
41
+ .access = PL1_RW, .accessfn = access_lor_other,
42
+ .type = ARM_CP_CONST, .resetvalue = 0 },
43
+ { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
44
+ .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
45
+ .access = PL1_R, .accessfn = access_lorid,
46
+ .type = ARM_CP_CONST, .resetvalue = 0 },
47
+ REGINFO_SENTINEL
48
+};
23
+
49
+
24
+static bool do_zz_dbm(DisasContext *s, arg_rr_dbm *a, GVecGen2iFn *gvec_fn)
50
#ifdef TARGET_AARCH64
25
+{
51
static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
26
+ uint64_t imm;
52
bool isread)
27
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
53
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
28
+ extract32(a->dbm, 0, 6),
54
}
29
+ extract32(a->dbm, 6, 6))) {
55
30
+ return false;
56
if (cpu_isar_feature(aa64_lor, cpu)) {
31
+ }
57
- /*
32
+ if (sve_access_check(s)) {
58
- * A trivial implementation of ARMv8.1-LOR leaves all of these
33
+ unsigned vsz = vec_full_reg_size(s);
59
- * registers fixed at 0, which indicates that there are zero
34
+ gvec_fn(MO_64, vec_full_reg_offset(s, a->rd),
60
- * supported Limited Ordering regions.
35
+ vec_full_reg_offset(s, a->rn), imm, vsz, vsz);
61
- */
36
+ }
62
- static const ARMCPRegInfo lor_reginfo[] = {
37
+ return true;
63
- { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
38
+}
64
- .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
39
+
65
- .access = PL1_RW, .accessfn = access_lor_other,
40
+static bool trans_AND_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
66
- .type = ARM_CP_CONST, .resetvalue = 0 },
41
+{
67
- { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
42
+ return do_zz_dbm(s, a, tcg_gen_gvec_andi);
68
- .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
43
+}
69
- .access = PL1_RW, .accessfn = access_lor_other,
44
+
70
- .type = ARM_CP_CONST, .resetvalue = 0 },
45
+static bool trans_ORR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
71
- { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
46
+{
72
- .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
47
+ return do_zz_dbm(s, a, tcg_gen_gvec_ori);
73
- .access = PL1_RW, .accessfn = access_lor_other,
48
+}
74
- .type = ARM_CP_CONST, .resetvalue = 0 },
49
+
75
- { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
50
+static bool trans_EOR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
76
- .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
51
+{
77
- .access = PL1_RW, .accessfn = access_lor_other,
52
+ return do_zz_dbm(s, a, tcg_gen_gvec_xori);
78
- .type = ARM_CP_CONST, .resetvalue = 0 },
53
+}
79
- { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
54
+
80
- .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
55
+static bool trans_DUPM(DisasContext *s, arg_DUPM *a, uint32_t insn)
81
- .access = PL1_R, .accessfn = access_lorid,
56
+{
82
- .type = ARM_CP_CONST, .resetvalue = 0 },
57
+ uint64_t imm;
83
- REGINFO_SENTINEL
58
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
84
- };
59
+ extract32(a->dbm, 0, 6),
85
define_arm_cp_regs(cpu, lor_reginfo);
60
+ extract32(a->dbm, 6, 6))) {
86
}
61
+ return false;
87
62
+ }
63
+ if (sve_access_check(s)) {
64
+ do_dupi_z(s, a->rd, imm);
65
+ }
66
+ return true;
67
+}
68
+
69
/*
70
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
71
*/
72
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/sve.decode
75
+++ b/target/arm/sve.decode
76
@@ -XXX,XX +XXX,XX @@
77
78
&rr_esz rd rn esz
79
&rri rd rn imm
80
+&rr_dbm rd rn dbm
81
&rrri rd rn rm imm
82
&rri_esz rd rn imm esz
83
&rrr_esz rd rn rm esz
84
@@ -XXX,XX +XXX,XX @@
85
@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
86
&rri_esz esz=%tszimm16_esz
87
88
+# Two register operand, one encoded bitmask.
89
+@rdn_dbm ........ .. .... dbm:13 rd:5 \
90
+ &rr_dbm rn=%reg_movprfx
91
+
92
# Basic Load/Store with 9-bit immediate offset
93
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
94
&rri imm=%imm9_16_10
95
@@ -XXX,XX +XXX,XX @@ INCDEC_v 00000100 .. 1 1 .... 1100 0 d:1 ..... ..... @incdec2_cnt u=1
96
# Note these require esz != 0.
97
SINCDEC_v 00000100 .. 1 0 .... 1100 d:1 u:1 ..... ..... @incdec2_cnt
98
99
+### SVE Bitwise Immediate Group
100
+
101
+# SVE bitwise logical with immediate (unpredicated)
102
+ORR_zzi 00000101 00 0000 ............. ..... @rdn_dbm
103
+EOR_zzi 00000101 01 0000 ............. ..... @rdn_dbm
104
+AND_zzi 00000101 10 0000 ............. ..... @rdn_dbm
105
+
106
+# SVE broadcast bitmask immediate
107
+DUPM 00000101 11 0000 dbm:13 rd:5
108
+
109
+### SVE Predicate Logical Operations Group
110
+
111
# SVE predicate logical operations
112
AND_pppp 00100101 0. 00 .... 01 .... 0 .... 0 .... @pd_pg_pn_pm_s
113
BIC_pppp 00100101 0. 00 .... 01 .... 0 .... 1 .... @pd_pg_pn_pm_s
--
2.17.0

--
2.20.1

diff view generated by jsdifflib
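The AND_zzi/ORR_zzi/EOR_zzi/DUPM handlers in the patch above all funnel their 13-bit dbm field through logic_imm_decode_wmask(), QEMU's implementation of the ARM DecodeBitMasks() pseudocode. As a rough standalone sketch of what that decode produces (illustrative only, not QEMU's exact code; `decode_wmask` is a made-up name and `__builtin_clz` assumes GCC/Clang):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the ARM DecodeBitMasks() pseudocode: immn is 1 bit,
 * immr and imms are 6 bits each (together the 13-bit dbm field). */
static bool decode_wmask(uint64_t *result, unsigned immn,
                         unsigned immr, unsigned imms)
{
    /* The element size comes from the highest set bit of immn:NOT(imms). */
    unsigned combined = (immn << 6) | (~imms & 0x3f);
    if (combined == 0) {
        return false;                   /* reserved encoding */
    }
    unsigned len = 31 - __builtin_clz(combined);
    unsigned esize = 1u << len;
    unsigned levels = esize - 1;
    unsigned s = imms & levels;
    unsigned r = immr & levels;

    if (s == levels) {
        return false;                   /* an all-ones element is reserved */
    }

    /* Build s+1 consecutive ones, rotated right by r within the element. */
    uint64_t welem = (1ull << (s + 1)) - 1;
    uint64_t elem = welem;
    if (r != 0) {
        elem = (welem >> r) | (welem << (esize - r));
        if (esize < 64) {
            elem &= (1ull << esize) - 1;
        }
    }

    /* Replicate the element across the 64-bit result. */
    uint64_t out = 0;
    for (unsigned i = 0; i < 64; i += esize) {
        out |= elem << i;
    }
    *result = out;
    return true;
}
```

For example, immn=1/immr=0/imms=0 selects a single set bit in a 64-bit element, and immn=0/imms=0b111100 gives the alternating pattern 0x5555555555555555; the reserved all-ones encodings are the cases where trans_DUPM and do_zz_dbm above return false.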
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 2 +
 target/arm/sve_helper.c | 81 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 34 ++++++++++++++++
 target/arm/sve.decode | 7 ++++
 4 files changed, 124 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

Split this helper out of msr_mask in translate.c. At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.

While touching msr_mask, fix up formatting.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 21 +++++++++++++++++++++
 target/arm/translate.c | 40 +++++++++++++++++-----------------------
 2 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
20
--- a/target/arm/internals.h
17
+++ b/target/arm/helper-sve.h
21
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
22
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
19
DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
20
DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+
24
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve_helper.c
30
+++ b/target/arm/sve_helper.c
31
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
32
d[i] = (pg[H1(i)] & 1 ? val : 0);
33
}
23
}
34
}
24
}
25
26
+static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
27
+ const ARMISARegisters *id)
28
+{
29
+ uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV | CPSR_J;
35
+
30
+
36
+/* Big-endian hosts need to frob the byte indices. If the copy
31
+ if ((features >> ARM_FEATURE_V4T) & 1) {
37
+ * happens to be 8-byte aligned, then no frobbing necessary.
32
+ valid |= CPSR_T;
38
+ */
39
+static void swap_memmove(void *vd, void *vs, size_t n)
40
+{
41
+ uintptr_t d = (uintptr_t)vd;
42
+ uintptr_t s = (uintptr_t)vs;
43
+ uintptr_t o = (d | s | n) & 7;
44
+ size_t i;
45
+
46
+#ifndef HOST_WORDS_BIGENDIAN
47
+ o = 0;
48
+#endif
49
+ switch (o) {
50
+ case 0:
51
+ memmove(vd, vs, n);
52
+ break;
53
+
54
+ case 4:
55
+ if (d < s || d >= s + n) {
56
+ for (i = 0; i < n; i += 4) {
57
+ *(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
58
+ }
59
+ } else {
60
+ for (i = n; i > 0; ) {
61
+ i -= 4;
62
+ *(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
63
+ }
64
+ }
65
+ break;
66
+
67
+ case 2:
68
+ case 6:
69
+ if (d < s || d >= s + n) {
70
+ for (i = 0; i < n; i += 2) {
71
+ *(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
72
+ }
73
+ } else {
74
+ for (i = n; i > 0; ) {
75
+ i -= 2;
76
+ *(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
77
+ }
78
+ }
79
+ break;
80
+
81
+ default:
82
+ if (d < s || d >= s + n) {
83
+ for (i = 0; i < n; i++) {
84
+ *(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
85
+ }
86
+ } else {
87
+ for (i = n; i > 0; ) {
88
+ i -= 1;
89
+ *(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
90
+ }
91
+ }
92
+ break;
93
+ }
33
+ }
94
+}
34
+ if ((features >> ARM_FEATURE_V5) & 1) {
95
+
35
+ valid |= CPSR_Q; /* V5TE in reality */
96
+void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
97
+{
98
+ intptr_t opr_sz = simd_oprsz(desc);
99
+ size_t n_ofs = simd_data(desc);
100
+ size_t n_siz = opr_sz - n_ofs;
101
+
102
+ if (vd != vm) {
103
+ swap_memmove(vd, vn + n_ofs, n_siz);
104
+ swap_memmove(vd + n_siz, vm, n_ofs);
105
+ } else if (vd != vn) {
106
+ swap_memmove(vd + n_siz, vd, n_ofs);
107
+ swap_memmove(vd, vn + n_ofs, n_siz);
108
+ } else {
109
+ /* vd == vn == vm. Need temp space. */
110
+ ARMVectorReg tmp;
111
+ swap_memmove(&tmp, vm, n_ofs);
112
+ swap_memmove(vd, vd + n_ofs, n_siz);
113
+ memcpy(vd + n_siz, &tmp, n_ofs);
114
+ }
36
+ }
115
+}
37
+ if ((features >> ARM_FEATURE_V6) & 1) {
116
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
38
+ valid |= CPSR_E | CPSR_GE;
117
index XXXXXXX..XXXXXXX 100644
39
+ }
118
--- a/target/arm/translate-sve.c
40
+ if ((features >> ARM_FEATURE_THUMB2) & 1) {
119
+++ b/target/arm/translate-sve.c
41
+ valid |= CPSR_IT;
120
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a, uint32_t insn)
121
return true;
122
}
123
124
+/*
125
+ *** SVE Permute Extract Group
126
+ */
127
+
128
+static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
129
+{
130
+ if (!sve_access_check(s)) {
131
+ return true;
132
+ }
42
+ }
133
+
43
+
134
+ unsigned vsz = vec_full_reg_size(s);
44
+ return valid;
135
+ unsigned n_ofs = a->imm >= vsz ? 0 : a->imm;
136
+ unsigned n_siz = vsz - n_ofs;
137
+ unsigned d = vec_full_reg_offset(s, a->rd);
138
+ unsigned n = vec_full_reg_offset(s, a->rn);
139
+ unsigned m = vec_full_reg_offset(s, a->rm);
140
+
141
+ /* Use host vector move insns if we have appropriate sizes
142
+ * and no unfortunate overlap.
143
+ */
144
+ if (m != d
145
+ && n_ofs == size_for_gvec(n_ofs)
146
+ && n_siz == size_for_gvec(n_siz)
147
+ && (d != n || n_siz <= n_ofs)) {
148
+ tcg_gen_gvec_mov(0, d, n + n_ofs, n_siz, n_siz);
149
+ if (n_ofs != 0) {
150
+ tcg_gen_gvec_mov(0, d + n_siz, m, n_ofs, n_ofs);
151
+ }
152
+ } else {
153
+ tcg_gen_gvec_3_ool(d, n, m, vsz, vsz, n_ofs, gen_helper_sve_ext);
154
+ }
155
+ return true;
156
+}
45
+}
157
+
46
+
158
/*
47
/*
159
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
48
* Parameters of a given virtual address, as extracted from the
160
*/
49
* translation control register (TCR) for a given regime.
161
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
50
diff --git a/target/arm/translate.c b/target/arm/translate.c
162
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/sve.decode
52
--- a/target/arm/translate.c
164
+++ b/target/arm/sve.decode
53
+++ b/target/arm/translate.c
165
@@ -XXX,XX +XXX,XX @@
54
@@ -XXX,XX +XXX,XX @@ static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
166
55
/* Return the mask of PSR bits set by a MSR instruction. */
167
%imm4_16_p1 16:4 !function=plus1
56
static uint32_t msr_mask(DisasContext *s, int flags, int spsr)
168
%imm6_22_5 22:1 5:5
57
{
169
+%imm8_16_10 16:5 10:3
58
- uint32_t mask;
170
%imm9_16_10 16:s6 10:3
59
+ uint32_t mask = 0;
171
60
172
# A combination of tsz:imm3 -- extract esize.
61
- mask = 0;
173
@@ -XXX,XX +XXX,XX @@ FCPY 00000101 .. 01 .... 110 imm:8 ..... @rdn_pg4
62
- if (flags & (1 << 0))
174
CPY_m_i 00000101 .. 01 .... 01 . ........ ..... @rdn_pg4 imm=%sh8_i8s
63
+ if (flags & (1 << 0)) {
175
CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
64
mask |= 0xff;
176
65
- if (flags & (1 << 1))
177
+### SVE Permute - Extract Group
66
+ }
67
+ if (flags & (1 << 1)) {
68
mask |= 0xff00;
69
- if (flags & (1 << 2))
70
+ }
71
+ if (flags & (1 << 2)) {
72
mask |= 0xff0000;
73
- if (flags & (1 << 3))
74
+ }
75
+ if (flags & (1 << 3)) {
76
mask |= 0xff000000;
77
+ }
78
79
- /* Mask out undefined bits. */
80
- mask &= ~CPSR_RESERVED;
81
- if (!arm_dc_feature(s, ARM_FEATURE_V4T)) {
82
- mask &= ~CPSR_T;
83
- }
84
- if (!arm_dc_feature(s, ARM_FEATURE_V5)) {
85
- mask &= ~CPSR_Q; /* V5TE in reality*/
86
- }
87
- if (!arm_dc_feature(s, ARM_FEATURE_V6)) {
88
- mask &= ~(CPSR_E | CPSR_GE);
89
- }
90
- if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
91
- mask &= ~CPSR_IT;
92
- }
93
- /* Mask out execution state and reserved bits. */
94
+ /* Mask out undefined and reserved bits. */
95
+ mask &= aarch32_cpsr_valid_mask(s->features, s->isar);
178
+
96
+
179
+# SVE extract vector (immediate offset)
97
+ /* Mask out execution state. */
180
+EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
98
if (!spsr) {
181
+ &rrri rn=%reg_movprfx imm=%imm8_16_10
99
- mask &= ~(CPSR_EXEC | CPSR_RESERVED);
100
+ mask &= ~CPSR_EXEC;
101
}
182
+
102
+
183
### SVE Predicate Logical Operations Group
103
/* Mask out privileged bits. */
184
104
- if (IS_USER(s))
185
# SVE predicate logical operations
105
+ if (IS_USER(s)) {
106
mask &= CPSR_USER;
107
+ }
108
return mask;
109
}
--
2.17.0

--
2.20.1
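HELPER(sve_ext) in the patch above implements SVE EXT as a byte-granular extract from the concatenation Zn:Zm; all the swap_memmove gymnastics only deal with big-endian lane indexing and overlapping operands. A little-endian scalar model of the same semantics (a sketch; `ext_model` is an illustrative name, and the fixed 256-byte scratch buffer just reflects the maximum SVE vector length):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* d = bytes [n_ofs, vl) of Zn followed by the low n_ofs bytes of Zm,
 * where vl is the vector length in bytes. As in trans_EXT above, an
 * immediate >= vl clamps to 0, so the result is simply Zn. */
static void ext_model(uint8_t *vd, const uint8_t *vn, const uint8_t *vm,
                      size_t vl, size_t imm)
{
    size_t n_ofs = imm >= vl ? 0 : imm;
    size_t n_siz = vl - n_ofs;
    uint8_t tmp[256];               /* SVE vectors are at most 256 bytes */

    /* Go through a temporary so vd may alias vn and/or vm, mirroring
     * the vd == vn == vm case handled in the helper. */
    memcpy(tmp, vn + n_ofs, n_siz);
    memcpy(tmp + n_siz, vm, n_ofs);
    memcpy(vd, tmp, vl);
}
```

With vl = 8, n = {0..7}, m = {10..17} and imm = 3 this yields {3,4,5,6,7,10,11,12}, matching the n_ofs/n_siz split in both the helper and trans_EXT.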
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 4 ++
 target/arm/sve_helper.c | 90 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 24 ++++++++++
 target/arm/sve.decode | 7 +++
 4 files changed, 125 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/target/arm/internals.h
17
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_adr_p64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
19
DEF_HELPER_FLAGS_4(sve_adr_s32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
20
DEF_HELPER_FLAGS_4(sve_adr_u32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
const ARMISARegisters *id)
21
21
{
22
+DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
- uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV | CPSR_J;
23
+DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+ uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;
24
+DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
25
+
25
if ((features >> ARM_FEATURE_V4T) & 1) {
26
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
valid |= CPSR_T;
27
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
28
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
if ((features >> ARM_FEATURE_THUMB2) & 1) {
29
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
29
valid |= CPSR_IT;
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/sve_helper.c
32
+++ b/target/arm/sve_helper.c
33
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_adr_u32)(void *vd, void *vn, void *vm, uint32_t desc)
34
d[i] = n[i] + ((uint64_t)(uint32_t)m[i] << sh);
35
}
30
}
31
+ if (isar_feature_jazelle(id)) {
32
+ valid |= CPSR_J;
33
+ }
34
35
return valid;
36
}
36
}
37
+
38
+void HELPER(sve_fexpa_h)(void *vd, void *vn, uint32_t desc)
39
+{
40
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
41
+ static const uint16_t coeff[] = {
42
+ 0x0000, 0x0016, 0x002d, 0x0045, 0x005d, 0x0075, 0x008e, 0x00a8,
43
+ 0x00c2, 0x00dc, 0x00f8, 0x0114, 0x0130, 0x014d, 0x016b, 0x0189,
44
+ 0x01a8, 0x01c8, 0x01e8, 0x0209, 0x022b, 0x024e, 0x0271, 0x0295,
45
+ 0x02ba, 0x02e0, 0x0306, 0x032e, 0x0356, 0x037f, 0x03a9, 0x03d4,
46
+ };
47
+ intptr_t i, opr_sz = simd_oprsz(desc) / 2;
48
+ uint16_t *d = vd, *n = vn;
49
+
50
+ for (i = 0; i < opr_sz; i++) {
51
+ uint16_t nn = n[i];
52
+ intptr_t idx = extract32(nn, 0, 5);
53
+ uint16_t exp = extract32(nn, 5, 5);
54
+ d[i] = coeff[idx] | (exp << 10);
55
+ }
56
+}
57
+
58
+void HELPER(sve_fexpa_s)(void *vd, void *vn, uint32_t desc)
59
+{
60
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
61
+ static const uint32_t coeff[] = {
62
+ 0x000000, 0x0164d2, 0x02cd87, 0x043a29,
63
+ 0x05aac3, 0x071f62, 0x08980f, 0x0a14d5,
64
+ 0x0b95c2, 0x0d1adf, 0x0ea43a, 0x1031dc,
65
+ 0x11c3d3, 0x135a2b, 0x14f4f0, 0x16942d,
66
+ 0x1837f0, 0x19e046, 0x1b8d3a, 0x1d3eda,
67
+ 0x1ef532, 0x20b051, 0x227043, 0x243516,
68
+ 0x25fed7, 0x27cd94, 0x29a15b, 0x2b7a3a,
69
+ 0x2d583f, 0x2f3b79, 0x3123f6, 0x3311c4,
70
+ 0x3504f3, 0x36fd92, 0x38fbaf, 0x3aff5b,
71
+ 0x3d08a4, 0x3f179a, 0x412c4d, 0x4346cd,
72
+ 0x45672a, 0x478d75, 0x49b9be, 0x4bec15,
73
+ 0x4e248c, 0x506334, 0x52a81e, 0x54f35b,
74
+ 0x5744fd, 0x599d16, 0x5bfbb8, 0x5e60f5,
75
+ 0x60ccdf, 0x633f89, 0x65b907, 0x68396a,
76
+ 0x6ac0c7, 0x6d4f30, 0x6fe4ba, 0x728177,
77
+ 0x75257d, 0x77d0df, 0x7a83b3, 0x7d3e0c,
78
+ };
79
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
80
+ uint32_t *d = vd, *n = vn;
81
+
82
+ for (i = 0; i < opr_sz; i++) {
83
+ uint32_t nn = n[i];
84
+ intptr_t idx = extract32(nn, 0, 6);
85
+ uint32_t exp = extract32(nn, 6, 8);
86
+ d[i] = coeff[idx] | (exp << 23);
87
+ }
88
+}
89
+
90
+void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
91
+{
92
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
93
+ static const uint64_t coeff[] = {
94
+ 0x0000000000000ull, 0x02C9A3E778061ull, 0x059B0D3158574ull,
95
+ 0x0874518759BC8ull, 0x0B5586CF9890Full, 0x0E3EC32D3D1A2ull,
96
+ 0x11301D0125B51ull, 0x1429AAEA92DE0ull, 0x172B83C7D517Bull,
97
+ 0x1A35BEB6FCB75ull, 0x1D4873168B9AAull, 0x2063B88628CD6ull,
98
+ 0x2387A6E756238ull, 0x26B4565E27CDDull, 0x29E9DF51FDEE1ull,
99
+ 0x2D285A6E4030Bull, 0x306FE0A31B715ull, 0x33C08B26416FFull,
100
+ 0x371A7373AA9CBull, 0x3A7DB34E59FF7ull, 0x3DEA64C123422ull,
101
+ 0x4160A21F72E2Aull, 0x44E086061892Dull, 0x486A2B5C13CD0ull,
102
+ 0x4BFDAD5362A27ull, 0x4F9B2769D2CA7ull, 0x5342B569D4F82ull,
103
+ 0x56F4736B527DAull, 0x5AB07DD485429ull, 0x5E76F15AD2148ull,
104
+ 0x6247EB03A5585ull, 0x6623882552225ull, 0x6A09E667F3BCDull,
105
+ 0x6DFB23C651A2Full, 0x71F75E8EC5F74ull, 0x75FEB564267C9ull,
106
+ 0x7A11473EB0187ull, 0x7E2F336CF4E62ull, 0x82589994CCE13ull,
107
+ 0x868D99B4492EDull, 0x8ACE5422AA0DBull, 0x8F1AE99157736ull,
108
+ 0x93737B0CDC5E5ull, 0x97D829FDE4E50ull, 0x9C49182A3F090ull,
109
+ 0xA0C667B5DE565ull, 0xA5503B23E255Dull, 0xA9E6B5579FDBFull,
110
+ 0xAE89F995AD3ADull, 0xB33A2B84F15FBull, 0xB7F76F2FB5E47ull,
111
+ 0xBCC1E904BC1D2ull, 0xC199BDD85529Cull, 0xC67F12E57D14Bull,
112
+ 0xCB720DCEF9069ull, 0xD072D4A07897Cull, 0xD5818DCFBA487ull,
113
+ 0xDA9E603DB3285ull, 0xDFC97337B9B5Full, 0xE502EE78B3FF6ull,
114
+ 0xEA4AFA2A490DAull, 0xEFA1BEE615A27ull, 0xF50765B6E4540ull,
115
+ 0xFA7C1819E90D8ull,
116
+ };
117
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
118
+ uint64_t *d = vd, *n = vn;
119
+
120
+ for (i = 0; i < opr_sz; i++) {
121
+ uint64_t nn = n[i];
122
+ intptr_t idx = extract32(nn, 0, 6);
123
+ uint64_t exp = extract32(nn, 6, 11);
124
+ d[i] = coeff[idx] | (exp << 52);
125
+ }
126
+}
127
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/translate-sve.c
130
+++ b/target/arm/translate-sve.c
131
@@ -XXX,XX +XXX,XX @@ static bool trans_ADR_u32(DisasContext *s, arg_rrri *a, uint32_t insn)
132
return do_adr(s, a, gen_helper_sve_adr_u32);
133
}
134
135
+/*
136
+ *** SVE Integer Misc - Unpredicated Group
137
+ */
138
+
139
+static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a, uint32_t insn)
140
+{
141
+ static gen_helper_gvec_2 * const fns[4] = {
142
+ NULL,
143
+ gen_helper_sve_fexpa_h,
144
+ gen_helper_sve_fexpa_s,
145
+ gen_helper_sve_fexpa_d,
146
+ };
147
+ if (a->esz == 0) {
148
+ return false;
149
+ }
150
+ if (sve_access_check(s)) {
151
+ unsigned vsz = vec_full_reg_size(s);
152
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
153
+ vec_full_reg_offset(s, a->rn),
154
+ vsz, vsz, 0, fns[a->esz]);
155
+ }
156
+ return true;
157
+}
158
+
159
/*
160
*** SVE Predicate Logical Operations Group
161
*/
162
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
163
index XXXXXXX..XXXXXXX 100644
164
--- a/target/arm/sve.decode
165
+++ b/target/arm/sve.decode
166
@@ -XXX,XX +XXX,XX @@
167
168
# Two operand
169
@pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
170
+@rd_rn ........ esz:2 ...... ...... rn:5 rd:5 &rr_esz
171
172
# Three operand with unused vector element size
173
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
174
@@ -XXX,XX +XXX,XX @@ ADR_u32 00000100 01 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
175
ADR_p32 00000100 10 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
176
ADR_p64 00000100 11 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
177
178
+### SVE Integer Misc - Unpredicated Group
179
+
180
+# SVE floating-point exponential accelerator
181
+# Note esz != 0
182
+FEXPA 00000100 .. 1 00000 101110 ..... ..... @rd_rn
183
+
184
### SVE Predicate Logical Operations Group
185
186
# SVE predicate logical operations
187
--
37
--
188
2.17.0
38
2.20.1
189
39
190
40
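Each lane of the FEXPA helpers above is assembled the same way: the low bits of the input index the coefficient table (the fraction bits of 2^(i/N)) and the next field is dropped straight into the floating-point exponent position. A single-lane model of the half-precision case, reusing the 32-entry table from the patch (sketch; `fexpa_h_lane` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* Fraction table for sve_fexpa_h, copied from the patch above. */
static const uint16_t coeff_h[32] = {
    0x0000, 0x0016, 0x002d, 0x0045, 0x005d, 0x0075, 0x008e, 0x00a8,
    0x00c2, 0x00dc, 0x00f8, 0x0114, 0x0130, 0x014d, 0x016b, 0x0189,
    0x01a8, 0x01c8, 0x01e8, 0x0209, 0x022b, 0x024e, 0x0271, 0x0295,
    0x02ba, 0x02e0, 0x0306, 0x032e, 0x0356, 0x037f, 0x03a9, 0x03d4,
};

/* One lane of HELPER(sve_fexpa_h): bits [4:0] index the fraction
 * table, bits [9:5] land in the float16 exponent field (bit 10 up). */
static uint16_t fexpa_h_lane(uint16_t nn)
{
    unsigned idx = nn & 0x1f;          /* extract32(nn, 0, 5) */
    unsigned exp = (nn >> 5) & 0x1f;   /* extract32(nn, 5, 5) */
    return coeff_h[idx] | (uint16_t)(exp << 10);
}
```

For instance an input with idx = 0 and exp = 15 (the float16 bias) produces 0x3c00, i.e. 2^0 = 1.0; the single- and double-precision helpers differ only in table size and in the width of the index and exponent fields.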
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 4 +-
 target/arm/helper-sve.h | 10 +
 target/arm/sve_helper.c | 39 ++++
 target/arm/translate-sve.c | 361 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 16 ++
 5 files changed, 429 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The function also takes into account bits that the cpu
does not support.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 --
 target/arm/op_helper.c | 5 ++++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
18
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
20
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
20
#ifdef TARGET_AARCH64
21
#define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE)
21
/* Store FFR as pregs[16] to make it easier to treat as any other. */
22
/* Execution state bits. MRS read as zero, MSR writes ignored. */
22
ARMPredicateReg pregs[17];
23
#define CPSR_EXEC (CPSR_T | CPSR_IT | CPSR_J | CPSR_IL)
23
+ /* Scratch space for aa64 sve predicate temporary. */
24
-/* Mask of bits which may be set by exception return copying them from SPSR */
24
+ ARMPredicateReg preg_tmp;
25
-#define CPSR_ERET_MASK (~CPSR_RESERVED)
25
#endif
26
26
27
/* Bit definitions for M profile XPSR. Most are the same as CPSR. */
27
uint32_t xregs[16];
28
#define XPSR_EXCP 0x1ffU
28
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
29
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
29
int vec_len;
30
int vec_stride;
31
32
- /* scratch space when Tn are not sufficient. */
33
+ /* Scratch space for aa32 neon expansion. */
34
uint32_t scratch[8];
35
36
/* There are a number of distinct float control structures:
37
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
38
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper-sve.h
31
--- a/target/arm/op_helper.c
40
+++ b/target/arm/helper-sve.h
32
+++ b/target/arm/op_helper.c
41
@@ -XXX,XX +XXX,XX @@
33
@@ -XXX,XX +XXX,XX @@ void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
42
34
/* Write the CPSR for a 32-bit exception return */
43
DEF_HELPER_FLAGS_2(sve_predtest1, TCG_CALL_NO_WG, i32, i64, i64)
35
void HELPER(cpsr_write_eret)(CPUARMState *env, uint32_t val)
44
DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
36
{
37
+ uint32_t mask;
45
+
38
+
46
+DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
39
qemu_mutex_lock_iothread();
47
+DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
40
arm_call_pre_el_change_hook(env_archcpu(env));
48
+DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
41
qemu_mutex_unlock_iothread();
49
+DEF_HELPER_FLAGS_5(sve_sel_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
42
50
+DEF_HELPER_FLAGS_5(sve_orr_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
43
- cpsr_write(env, val, CPSR_ERET_MASK, CPSRWriteExceptionReturn);
51
+DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
44
+ mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
52
+DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
45
+ cpsr_write(env, val, mask, CPSRWriteExceptionReturn);
53
+DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
46
54
+ void, ptr, ptr, ptr, ptr, i32)
47
/* Generated code has already stored the new PC value, but
55
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
48
* without masking out its low bits, because which bits need
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/sve_helper.c
58
+++ b/target/arm/sve_helper.c
59
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
60
61
return flags;
62
}
63
+
64
+#define LOGICAL_PPPP(NAME, FUNC) \
65
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
66
+{ \
67
+ uintptr_t opr_sz = simd_oprsz(desc); \
68
+ uint64_t *d = vd, *n = vn, *m = vm, *g = vg; \
69
+ uintptr_t i; \
70
+ for (i = 0; i < opr_sz / 8; ++i) { \
71
+ d[i] = FUNC(n[i], m[i], g[i]); \
72
+ } \
73
+}
74
+
75
+#define DO_AND(N, M, G) (((N) & (M)) & (G))
76
+#define DO_BIC(N, M, G) (((N) & ~(M)) & (G))
77
+#define DO_EOR(N, M, G) (((N) ^ (M)) & (G))
78
+#define DO_ORR(N, M, G) (((N) | (M)) & (G))
79
+#define DO_ORN(N, M, G) (((N) | ~(M)) & (G))
80
+#define DO_NOR(N, M, G) (~((N) | (M)) & (G))
81
+#define DO_NAND(N, M, G) (~((N) & (M)) & (G))
82
+#define DO_SEL(N, M, G) (((N) & (G)) | ((M) & ~(G)))
83
+
84
+LOGICAL_PPPP(sve_and_pppp, DO_AND)
85
+LOGICAL_PPPP(sve_bic_pppp, DO_BIC)
86
+LOGICAL_PPPP(sve_eor_pppp, DO_EOR)
87
+LOGICAL_PPPP(sve_sel_pppp, DO_SEL)
88
+LOGICAL_PPPP(sve_orr_pppp, DO_ORR)
89
+LOGICAL_PPPP(sve_orn_pppp, DO_ORN)
90
+LOGICAL_PPPP(sve_nor_pppp, DO_NOR)
91
+LOGICAL_PPPP(sve_nand_pppp, DO_NAND)
92
+
93
+#undef DO_AND
94
+#undef DO_BIC
95
+#undef DO_EOR
96
+#undef DO_ORR
97
+#undef DO_ORN
98
+#undef DO_NOR
99
+#undef DO_NAND
100
+#undef DO_SEL
101
+#undef LOGICAL_PPPP
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ static inline int pred_full_reg_size(DisasContext *s)
107
return s->sve_len >> 3;
108
}
109
110
+/* Round up the size of a register to a size allowed by
111
+ * the tcg vector infrastructure. Any operation which uses this
112
+ * size may assume that the bits above pred_full_reg_size are zero,
113
+ * and must leave them the same way.
114
+ *
115
+ * Note that this is not needed for the vector registers as they
116
+ * are always properly sized for tcg vectors.
117
+ */
118
+static int size_for_gvec(int size)
119
+{
120
+ if (size <= 8) {
121
+ return 8;
122
+ } else {
123
+ return QEMU_ALIGN_UP(size, 16);
124
+ }
125
+}
126
+
127
+static int pred_gvec_reg_size(DisasContext *s)
128
+{
129
+ return size_for_gvec(pred_full_reg_size(s));
130
+}
131
+
132
/* Invoke a vector expander on two Zregs. */
133
static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
134
int esz, int rd, int rn)
135
@@ -XXX,XX +XXX,XX @@ static bool do_mov_z(DisasContext *s, int rd, int rn)
136
return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
137
}
138
139
+/* Invoke a vector expander on two Pregs. */
140
+static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
141
+ int esz, int rd, int rn)
142
+{
143
+ if (sve_access_check(s)) {
144
+ unsigned psz = pred_gvec_reg_size(s);
145
+ gvec_fn(esz, pred_full_reg_offset(s, rd),
146
+ pred_full_reg_offset(s, rn), psz, psz);
147
+ }
148
+ return true;
149
+}
150
+
151
+/* Invoke a vector expander on three Pregs. */
152
+static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
153
+ int esz, int rd, int rn, int rm)
154
+{
155
+ if (sve_access_check(s)) {
156
+ unsigned psz = pred_gvec_reg_size(s);
157
+ gvec_fn(esz, pred_full_reg_offset(s, rd),
158
+ pred_full_reg_offset(s, rn),
159
+ pred_full_reg_offset(s, rm), psz, psz);
160
+ }
161
+ return true;
162
+}
163
+
164
+/* Invoke a vector operation on four Pregs. */
165
+static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
166
+ int rd, int rn, int rm, int rg)
167
+{
168
+ if (sve_access_check(s)) {
169
+ unsigned psz = pred_gvec_reg_size(s);
170
+ tcg_gen_gvec_4(pred_full_reg_offset(s, rd),
171
+ pred_full_reg_offset(s, rn),
172
+ pred_full_reg_offset(s, rm),
173
+ pred_full_reg_offset(s, rg),
174
+ psz, psz, gvec_op);
175
+ }
176
+ return true;
177
+}
178
+
179
+/* Invoke a vector move on two Pregs. */
180
+static bool do_mov_p(DisasContext *s, int rd, int rn)
181
+{
182
+ return do_vector2_p(s, tcg_gen_gvec_mov, 0, rd, rn);
183
+}
184
+
185
/* Set the cpu flags as per a return from an SVE helper. */
186
static void do_pred_flags(TCGv_i32 t)
187
{
188
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
189
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
190
}
191
192
+/*
193
+ *** SVE Predicate Logical Operations Group
194
+ */
195
+
196
+static bool do_pppp_flags(DisasContext *s, arg_rprr_s *a,
197
+ const GVecGen4 *gvec_op)
198
+{
199
+ if (!sve_access_check(s)) {
200
+ return true;
201
+ }
202
+
203
+ unsigned psz = pred_gvec_reg_size(s);
204
+ int dofs = pred_full_reg_offset(s, a->rd);
205
+ int nofs = pred_full_reg_offset(s, a->rn);
206
+ int mofs = pred_full_reg_offset(s, a->rm);
207
+ int gofs = pred_full_reg_offset(s, a->pg);
208
+
209
+ if (psz == 8) {
210
+ /* Do the operation and the flags generation in temps. */
211
+ TCGv_i64 pd = tcg_temp_new_i64();
212
+ TCGv_i64 pn = tcg_temp_new_i64();
213
+ TCGv_i64 pm = tcg_temp_new_i64();
214
+ TCGv_i64 pg = tcg_temp_new_i64();
215
+
216
+ tcg_gen_ld_i64(pn, cpu_env, nofs);
217
+ tcg_gen_ld_i64(pm, cpu_env, mofs);
218
+        tcg_gen_ld_i64(pg, cpu_env, gofs);
+
+        gvec_op->fni8(pd, pn, pm, pg);
+        tcg_gen_st_i64(pd, cpu_env, dofs);
+
+        do_predtest1(pd, pg);
+
+        tcg_temp_free_i64(pd);
+        tcg_temp_free_i64(pn);
+        tcg_temp_free_i64(pm);
+        tcg_temp_free_i64(pg);
+    } else {
+        /* The operation and flags generation is large.  The computation
+         * of the flags depends on the original contents of the guarding
+         * predicate.  If the destination overwrites the guarding predicate,
+         * then the easiest way to get this right is to save a copy.
+         */
+        int tofs = gofs;
+        if (a->rd == a->pg) {
+            tofs = offsetof(CPUARMState, vfp.preg_tmp);
+            tcg_gen_gvec_mov(0, tofs, gofs, psz, psz);
+        }
+
+        tcg_gen_gvec_4(dofs, nofs, mofs, gofs, psz, psz, gvec_op);
+        do_predtest(s, dofs, tofs, psz / 8);
+    }
+    return true;
+}
+
+static void gen_and_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_and_i64(pd, pn, pm);
+    tcg_gen_and_i64(pd, pd, pg);
+}
+
+static void gen_and_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_and_vec(vece, pd, pn, pm);
+    tcg_gen_and_vec(vece, pd, pd, pg);
+}
+
+static bool trans_AND_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_and_pg_i64,
+        .fniv = gen_and_pg_vec,
+        .fno = gen_helper_sve_and_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else if (a->rn == a->rm) {
+        if (a->pg == a->rn) {
+            return do_mov_p(s, a->rd, a->rn);
+        } else {
+            return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->pg);
+        }
+    } else if (a->pg == a->rn || a->pg == a->rm) {
+        return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_bic_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_andc_i64(pd, pn, pm);
+    tcg_gen_and_i64(pd, pd, pg);
+}
+
+static void gen_bic_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_andc_vec(vece, pd, pn, pm);
+    tcg_gen_and_vec(vece, pd, pd, pg);
+}
+
+static bool trans_BIC_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_bic_pg_i64,
+        .fniv = gen_bic_pg_vec,
+        .fno = gen_helper_sve_bic_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else if (a->pg == a->rn) {
+        return do_vector3_p(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_eor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_xor_i64(pd, pn, pm);
+    tcg_gen_and_i64(pd, pd, pg);
+}
+
+static void gen_eor_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_xor_vec(vece, pd, pn, pm);
+    tcg_gen_and_vec(vece, pd, pd, pg);
+}
+
+static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_eor_pg_i64,
+        .fniv = gen_eor_pg_vec,
+        .fno = gen_helper_sve_eor_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_and_i64(pn, pn, pg);
+    tcg_gen_andc_i64(pm, pm, pg);
+    tcg_gen_or_i64(pd, pn, pm);
+}
+
+static void gen_sel_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_and_vec(vece, pn, pn, pg);
+    tcg_gen_andc_vec(vece, pm, pm, pg);
+    tcg_gen_or_vec(vece, pd, pn, pm);
+}
+
+static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_sel_pg_i64,
+        .fniv = gen_sel_pg_vec,
+        .fno = gen_helper_sve_sel_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return false;
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_or_i64(pd, pn, pm);
+    tcg_gen_and_i64(pd, pd, pg);
+}
+
+static void gen_orr_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_or_vec(vece, pd, pn, pm);
+    tcg_gen_and_vec(vece, pd, pd, pg);
+}
+
+static bool trans_ORR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_orr_pg_i64,
+        .fniv = gen_orr_pg_vec,
+        .fno = gen_helper_sve_orr_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else if (a->pg == a->rn && a->rn == a->rm) {
+        return do_mov_p(s, a->rd, a->rn);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_orn_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_orc_i64(pd, pn, pm);
+    tcg_gen_and_i64(pd, pd, pg);
+}
+
+static void gen_orn_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_orc_vec(vece, pd, pn, pm);
+    tcg_gen_and_vec(vece, pd, pd, pg);
+}
+
+static bool trans_ORN_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_orn_pg_i64,
+        .fniv = gen_orn_pg_vec,
+        .fno = gen_helper_sve_orn_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_nor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_or_i64(pd, pn, pm);
+    tcg_gen_andc_i64(pd, pg, pd);
+}
+
+static void gen_nor_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                           TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_or_vec(vece, pd, pn, pm);
+    tcg_gen_andc_vec(vece, pd, pg, pd);
+}
+
+static bool trans_NOR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_nor_pg_i64,
+        .fniv = gen_nor_pg_vec,
+        .fno = gen_helper_sve_nor_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
+static void gen_nand_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
+{
+    tcg_gen_and_i64(pd, pn, pm);
+    tcg_gen_andc_i64(pd, pg, pd);
+}
+
+static void gen_nand_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
+                            TCGv_vec pm, TCGv_vec pg)
+{
+    tcg_gen_and_vec(vece, pd, pn, pm);
+    tcg_gen_andc_vec(vece, pd, pg, pd);
+}
+
+static bool trans_NAND_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_nand_pg_i64,
+        .fniv = gen_nand_pg_vec,
+        .fno = gen_helper_sve_nand_pppp,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    if (a->s) {
+        return do_pppp_flags(s, a, &op);
+    } else {
+        return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
+    }
+}
+
 /*
  *** SVE Predicate Misc Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 
 &rri        rd rn imm
 &rrr_esz    rd rn rm esz
+&rprr_s     rd pg rn rm s
 
 ###########################################################################
 # Named instruction formats.  These are generally used to
@@ -XXX,XX +XXX,XX @@
 # Three operand with unused vector element size
 @rd_rn_rm_e0    ........ ... rm:5 ... ... rn:5 rd:5    &rrr_esz esz=0
 
+# Three predicate operand, with governing predicate, flag setting
+@pd_pg_pn_pm_s  ........ . s:1 .. rm:4 .. pg:4 . rn:4 . rd:4    &rprr_s
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9       ........ ........ ...... rn:5 . rd:4    \
                 &rri imm=%imm9_16_10
@@ -XXX,XX +XXX,XX @@ ORR_zzz         00000100 01 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 EOR_zzz         00000100 10 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 BIC_zzz         00000100 11 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 
+### SVE Predicate Logical Operations Group
+
+# SVE predicate logical operations
+AND_pppp        00100101 0. 00 .... 01 .... 0 .... 0 ....    @pd_pg_pn_pm_s
+BIC_pppp        00100101 0. 00 .... 01 .... 0 .... 1 ....    @pd_pg_pn_pm_s
+EOR_pppp        00100101 0. 00 .... 01 .... 1 .... 0 ....    @pd_pg_pn_pm_s
+SEL_pppp        00100101 0. 00 .... 01 .... 1 .... 1 ....    @pd_pg_pn_pm_s
+ORR_pppp        00100101 1. 00 .... 01 .... 0 .... 0 ....    @pd_pg_pn_pm_s
+ORN_pppp        00100101 1. 00 .... 01 .... 0 .... 1 ....    @pd_pg_pn_pm_s
+NOR_pppp        00100101 1. 00 .... 01 .... 1 .... 0 ....    @pd_pg_pn_pm_s
+NAND_pppp       00100101 1. 00 .... 01 .... 1 .... 1 ....    @pd_pg_pn_pm_s
+
 ### SVE Predicate Misc Group
 
 # SVE predicate test
--
2.17.0
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  5 +++
 target/arm/sve_helper.c    | 40 +++++++++++++++++++
 target/arm/translate-sve.c | 79 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      | 14 +++++++
 4 files changed, 138 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_mls_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(sve_mls_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_index_b, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZZZ_D(sve_mls_d, uint64_t, DO_MLS)
 #undef DO_MLS
 #undef DO_ZPZZZ
 #undef DO_ZPZZZ_D
+
+void HELPER(sve_index_b)(void *vd, uint32_t start,
+                         uint32_t incr, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc);
+    uint8_t *d = vd;
+    for (i = 0; i < opr_sz; i += 1) {
+        d[H1(i)] = start + i * incr;
+    }
+}
+
+void HELPER(sve_index_h)(void *vd, uint32_t start,
+                         uint32_t incr, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 2;
+    uint16_t *d = vd;
+    for (i = 0; i < opr_sz; i += 1) {
+        d[H2(i)] = start + i * incr;
+    }
+}
+
+void HELPER(sve_index_s)(void *vd, uint32_t start,
+                         uint32_t incr, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 4;
+    uint32_t *d = vd;
+    for (i = 0; i < opr_sz; i += 1) {
+        d[H4(i)] = start + i * incr;
+    }
+}
+
+void HELPER(sve_index_d)(void *vd, uint64_t start,
+                         uint64_t incr, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd;
+    for (i = 0; i < opr_sz; i += 1) {
+        d[i] = start + i * incr;
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZZZ(MLS, mls)
 
 #undef DO_ZPZZZ
 
+/*
+ *** SVE Index Generation Group
+ */
+
+static void do_index(DisasContext *s, int esz, int rd,
+                     TCGv_i64 start, TCGv_i64 incr)
+{
+    unsigned vsz = vec_full_reg_size(s);
+    TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    TCGv_ptr t_zd = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
+    if (esz == 3) {
+        gen_helper_sve_index_d(t_zd, start, incr, desc);
+    } else {
+        typedef void index_fn(TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
+        static index_fn * const fns[3] = {
+            gen_helper_sve_index_b,
+            gen_helper_sve_index_h,
+            gen_helper_sve_index_s,
+        };
+        TCGv_i32 s32 = tcg_temp_new_i32();
+        TCGv_i32 i32 = tcg_temp_new_i32();
+
+        tcg_gen_extrl_i64_i32(s32, start);
+        tcg_gen_extrl_i64_i32(i32, incr);
+        fns[esz](t_zd, s32, i32, desc);
+
+        tcg_temp_free_i32(s32);
+        tcg_temp_free_i32(i32);
+    }
+    tcg_temp_free_ptr(t_zd);
+    tcg_temp_free_i32(desc);
+}
+
+static bool trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 start = tcg_const_i64(a->imm1);
+        TCGv_i64 incr = tcg_const_i64(a->imm2);
+        do_index(s, a->esz, a->rd, start, incr);
+        tcg_temp_free_i64(start);
+        tcg_temp_free_i64(incr);
+    }
+    return true;
+}
+
+static bool trans_INDEX_ir(DisasContext *s, arg_INDEX_ir *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 start = tcg_const_i64(a->imm);
+        TCGv_i64 incr = cpu_reg(s, a->rm);
+        do_index(s, a->esz, a->rd, start, incr);
+        tcg_temp_free_i64(start);
+    }
+    return true;
+}
+
+static bool trans_INDEX_ri(DisasContext *s, arg_INDEX_ri *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 start = cpu_reg(s, a->rn);
+        TCGv_i64 incr = tcg_const_i64(a->imm);
+        do_index(s, a->esz, a->rd, start, incr);
+        tcg_temp_free_i64(incr);
+    }
+    return true;
+}
+
+static bool trans_INDEX_rr(DisasContext *s, arg_INDEX_rr *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 start = cpu_reg(s, a->rn);
+        TCGv_i64 incr = cpu_reg(s, a->rm);
+        do_index(s, a->esz, a->rd, start, incr);
+    }
+    return true;
+}
+
 /*
  *** SVE Predicate Logical Operations Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ ORR_zzz         00000100 01 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 EOR_zzz         00000100 10 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 BIC_zzz         00000100 11 1 ..... 001 100 ..... .....    @rd_rn_rm_e0
 
+### SVE Index Generation Group
+
+# SVE index generation (immediate start, immediate increment)
+INDEX_ii        00000100 esz:2 1 imm2:s5 010000 imm1:s5 rd:5
+
+# SVE index generation (immediate start, register increment)
+INDEX_ir        00000100 esz:2 1 rm:5 010010 imm:s5 rd:5
+
+# SVE index generation (register start, immediate increment)
+INDEX_ri        00000100 esz:2 1 imm:s5 010001 rn:5 rd:5
+
+# SVE index generation (register start, register increment)
+INDEX_rr        00000100 .. 1 ..... 010011 ..... .....    @rd_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.0

From: Richard Henderson <richard.henderson@linaro.org>

Using ~0 as the mask on the aarch64->aarch32 exception return
was not even as correct as the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-a64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
     int cur_el = arm_current_el(env);
     unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
-    uint32_t spsr = env->banked_spsr[spsr_idx];
+    uint32_t mask, spsr = env->banked_spsr[spsr_idx];
     int new_el;
     bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
 
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
      * will sort the register banks out for us, and we've already
      * caught all the bad-mode cases in el_from_spsr().
      */
-    cpsr_write(env, spsr, ~0, CPSRWriteRaw);
+    mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
+    cpsr_write(env, spsr, mask, CPSRWriteRaw);
     if (!arm_singlestep_active(env)) {
         env->uncached_cpsr &= ~PSTATE_SS;
     }
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |   4 +
 target/arm/helper-sve.h    |   3 +
 target/arm/sve_helper.c    |  84 +++++++++++++++
 target/arm/translate-sve.c | 209 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  31 ++++++
 5 files changed, 331 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
 
 #ifdef TARGET_AARCH64
     /* Store FFR as pregs[16] to make it easier to treat as any other.  */
+#define FFR_PRED_NUM 16
     ARMPredicateReg pregs[17];
     /* Scratch space for aa64 sve predicate temporary.  */
     ARMPredicateReg preg_tmp;
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
     return &env->vfp.zregs[regno].d[0];
 }
 
+/* Shared between translate-sve.c and sve_helper.c.  */
+extern const uint64_t pred_esz_masks[4];
+
 #endif
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@
 DEF_HELPER_FLAGS_2(sve_predtest1, TCG_CALL_NO_WG, i32, i64, i64)
 DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve_pfirst, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_pnext, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ LOGICAL_PPPP(sve_nand_pppp, DO_NAND)
 #undef DO_NAND
 #undef DO_SEL
 #undef LOGICAL_PPPP
+
+/* Similar to the ARM LastActiveElement pseudocode function, except the
+   result is multiplied by the element size.  This includes the not found
+   indication; e.g. not found for esz=3 is -8.  */
+static intptr_t last_active_element(uint64_t *g, intptr_t words, intptr_t esz)
+{
+    uint64_t mask = pred_esz_masks[esz];
+    intptr_t i = words;
+
+    do {
+        uint64_t this_g = g[--i] & mask;
+        if (this_g) {
+            return i * 64 + (63 - clz64(this_g));
+        }
+    } while (i > 0);
+    return (intptr_t)-1 << esz;
+}
+
+uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)
+{
+    uint32_t flags = PREDTEST_INIT;
+    uint64_t *d = vd, *g = vg;
+    intptr_t i = 0;
+
+    do {
+        uint64_t this_d = d[i];
+        uint64_t this_g = g[i];
+
+        if (this_g) {
+            if (!(flags & 4)) {
+                /* Set in D the first bit of G.  */
+                this_d |= this_g & -this_g;
+                d[i] = this_d;
+            }
+            flags = iter_predtest_fwd(this_d, this_g, flags);
+        }
+    } while (++i < words);
+
+    return flags;
+}
+
+uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
+{
+    intptr_t words = extract32(pred_desc, 0, SIMD_OPRSZ_BITS);
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint32_t flags = PREDTEST_INIT;
+    uint64_t *d = vd, *g = vg, esz_mask;
+    intptr_t i, next;
+
+    next = last_active_element(vd, words, esz) + (1 << esz);
+    esz_mask = pred_esz_masks[esz];
+
+    /* Similar to the pseudocode for pnext, but scaled by ESZ
+       so that we find the correct bit.  */
+    if (next < words * 64) {
+        uint64_t mask = -1;
+
+        if (next & 63) {
+            mask = ~((1ull << (next & 63)) - 1);
+            next &= -64;
+        }
+        do {
+            uint64_t this_g = g[next / 64] & esz_mask & mask;
+            if (this_g != 0) {
+                next = (next & -64) + ctz64(this_g);
+                break;
+            }
+            next += 64;
+            mask = -1;
+        } while (next < words * 64);
+    }
+
+    i = 0;
+    do {
+        uint64_t this_d = 0;
+        if (i == next / 64) {
+            this_d = 1ull << (next & 63);
+        }
+        d[i] = this_d;
+        flags = iter_predtest_fwd(this_d, g[i] & esz_mask, flags);
+    } while (++i < words);
+
+    return flags;
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/exec-all.h"
 #include "tcg-op.h"
 #include "tcg-op-gvec.h"
+#include "tcg-gvec-desc.h"
 #include "qemu/log.h"
 #include "arm_ldst.h"
 #include "translate.h"
@@ -XXX,XX +XXX,XX @@ static void do_predtest(DisasContext *s, int dofs, int gofs, int words)
     tcg_temp_free_i32(t);
 }
 
+/* For each element size, the bits within a predicate word that are active.  */
+const uint64_t pred_esz_masks[4] = {
+    0xffffffffffffffffull, 0x5555555555555555ull,
+    0x1111111111111111ull, 0x0101010101010101ull
+};
+
 /*
  *** SVE Logical - Unpredicated Group
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_PTEST(DisasContext *s, arg_PTEST *a, uint32_t insn)
     return true;
 }
 
+/* See the ARM pseudocode DecodePredCount.  */
+static unsigned decode_pred_count(unsigned fullsz, int pattern, int esz)
+{
+    unsigned elements = fullsz >> esz;
+    unsigned bound;
+
+    switch (pattern) {
+    case 0x0: /* POW2 */
+        return pow2floor(elements);
+    case 0x1: /* VL1 */
+    case 0x2: /* VL2 */
+    case 0x3: /* VL3 */
+    case 0x4: /* VL4 */
+    case 0x5: /* VL5 */
+    case 0x6: /* VL6 */
+    case 0x7: /* VL7 */
+    case 0x8: /* VL8 */
+        bound = pattern;
+        break;
+    case 0x9: /* VL16 */
+    case 0xa: /* VL32 */
+    case 0xb: /* VL64 */
+    case 0xc: /* VL128 */
+    case 0xd: /* VL256 */
+        bound = 16 << (pattern - 9);
+        break;
+    case 0x1d: /* MUL4 */
+        return elements - elements % 4;
+    case 0x1e: /* MUL3 */
+        return elements - elements % 3;
+    case 0x1f: /* ALL */
+        return elements;
+    default:   /* #uimm5 */
+        return 0;
+    }
+    return elements >= bound ? bound : 0;
+}
+
+/* This handles all of the predicate initialization instructions,
+ * PTRUE, PFALSE, SETFFR.  For PFALSE, we will have set PAT == 32
+ * so that decode_pred_count returns 0.  For SETFFR, we will have
+ * set RD == 16 == FFR.
+ */
+static bool do_predset(DisasContext *s, int esz, int rd, int pat, bool setflag)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned fullsz = vec_full_reg_size(s);
+    unsigned ofs = pred_full_reg_offset(s, rd);
+    unsigned numelem, setsz, i;
+    uint64_t word, lastword;
+    TCGv_i64 t;
+
+    numelem = decode_pred_count(fullsz, pat, esz);
+
+    /* Determine what we must store into each bit, and how many.  */
+    if (numelem == 0) {
+        lastword = word = 0;
+        setsz = fullsz;
+    } else {
+        setsz = numelem << esz;
+        lastword = word = pred_esz_masks[esz];
+        if (setsz % 64) {
+            lastword &= ~(-1ull << (setsz % 64));
+        }
+    }
+
+    t = tcg_temp_new_i64();
+    if (fullsz <= 64) {
+        tcg_gen_movi_i64(t, lastword);
+        tcg_gen_st_i64(t, cpu_env, ofs);
+        goto done;
+    }
+
+    if (word == lastword) {
+        unsigned maxsz = size_for_gvec(fullsz / 8);
+        unsigned oprsz = size_for_gvec(setsz / 8);
+
+        if (oprsz * 8 == setsz) {
+            tcg_gen_gvec_dup64i(ofs, oprsz, maxsz, word);
+            goto done;
+        }
+        if (oprsz * 8 == setsz + 8) {
+            tcg_gen_gvec_dup64i(ofs, oprsz, maxsz, word);
+            tcg_gen_movi_i64(t, 0);
+            tcg_gen_st_i64(t, cpu_env, ofs + oprsz - 8);
+            goto done;
+        }
+    }
+
+    setsz /= 8;
+    fullsz /= 8;
+
+    tcg_gen_movi_i64(t, word);
+    for (i = 0; i < setsz; i += 8) {
+        tcg_gen_st_i64(t, cpu_env, ofs + i);
+    }
+    if (lastword != word) {
+        tcg_gen_movi_i64(t, lastword);
+        tcg_gen_st_i64(t, cpu_env, ofs + i);
+        i += 8;
+    }
+    if (i < fullsz) {
+        tcg_gen_movi_i64(t, 0);
+        for (; i < fullsz; i += 8) {
+            tcg_gen_st_i64(t, cpu_env, ofs + i);
+        }
+    }
+
+ done:
+    tcg_temp_free_i64(t);
+
+    /* PTRUES */
+    if (setflag) {
+        tcg_gen_movi_i32(cpu_NF, -(word != 0));
+        tcg_gen_movi_i32(cpu_CF, word == 0);
+        tcg_gen_movi_i32(cpu_VF, 0);
+        tcg_gen_mov_i32(cpu_ZF, cpu_NF);
+    }
+    return true;
+}
+
+static bool trans_PTRUE(DisasContext *s, arg_PTRUE *a, uint32_t insn)
+{
+    return do_predset(s, a->esz, a->rd, a->pat, a->s);
+}
+
+static bool trans_SETFFR(DisasContext *s, arg_SETFFR *a, uint32_t insn)
+{
+    /* Note pat == 31 is #all, to set all elements.  */
+    return do_predset(s, 0, FFR_PRED_NUM, 31, false);
+}
+
+static bool trans_PFALSE(DisasContext *s, arg_PFALSE *a, uint32_t insn)
+{
+    /* Note pat == 32 is #unimp, to set no elements.  */
+    return do_predset(s, 0, a->rd, 32, false);
+}
+
+static bool trans_RDFFR_p(DisasContext *s, arg_RDFFR_p *a, uint32_t insn)
+{
+    /* The path through do_pppp_flags is complicated enough to want to avoid
+     * duplication.  Frob the arguments into the form of a predicated AND.
+     */
+    arg_rprr_s alt_a = {
+        .rd = a->rd, .pg = a->pg, .s = a->s,
+        .rn = FFR_PRED_NUM, .rm = FFR_PRED_NUM,
+    };
+    return trans_AND_pppp(s, &alt_a, insn);
+}
+
+static bool trans_RDFFR(DisasContext *s, arg_RDFFR *a, uint32_t insn)
+{
+    return do_mov_p(s, a->rd, FFR_PRED_NUM);
+}
+
+static bool trans_WRFFR(DisasContext *s, arg_WRFFR *a, uint32_t insn)
+{
+    return do_mov_p(s, FFR_PRED_NUM, a->rn);
+}
+
+static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
+                            void (*gen_fn)(TCGv_i32, TCGv_ptr,
+                                           TCGv_ptr, TCGv_i32))
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGv_ptr t_pd = tcg_temp_new_ptr();
+    TCGv_ptr t_pg = tcg_temp_new_ptr();
+    TCGv_i32 t;
+    unsigned desc;
+
+    desc = DIV_ROUND_UP(pred_full_reg_size(s), 8);
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+
+    tcg_gen_addi_ptr(t_pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->rn));
+    t = tcg_const_i32(desc);
+
+    gen_fn(t, t_pd, t_pg, t);
+    tcg_temp_free_ptr(t_pd);
+    tcg_temp_free_ptr(t_pg);
+
+    do_pred_flags(t);
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+static bool trans_PFIRST(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    return do_pfirst_pnext(s, a, gen_helper_sve_pfirst);
+}
+
+static bool trans_PNEXT(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    return do_pfirst_pnext(s, a, gen_helper_sve_pnext);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 # when creating helpers common to those for the individual
 # instruction patterns.
 
+&rr_esz     rd rn esz
 &rri        rd rn imm
 &rrr_esz    rd rn rm esz
 &rprr_s     rd pg rn rm s
@@ -XXX,XX +XXX,XX @@
 # Named instruction formats.  These are generally used to
 # reduce the amount of duplication between instruction patterns.
 
+# Two operand with unused vector element size
+@pd_pn_e0       ........ ........ ....... rn:4 . rd:4    &rr_esz esz=0
+
+# Two operand
+@pd_pn          ........ esz:2 .. .... ....... rn:4 . rd:4    &rr_esz
+
 # Three operand with unused vector element size
 @rd_rn_rm_e0    ........ ... rm:5 ... ... rn:5 rd:5    &rrr_esz esz=0
 
@@ -XXX,XX +XXX,XX @@ NAND_pppp       00100101 1. 00 .... 01 .... 1 .... 1 ....    @pd_pg_pn_pm_s
 # SVE predicate test
 PTEST           00100101 01 010000 11 pg:4 0 rn:4 0 0000
 
+# SVE predicate initialize
+PTRUE           00100101 esz:2 01100 s:1 111000 pat:5 0 rd:4
+
+# SVE initialize FFR
+SETFFR          00100101 0010 1100 1001 0000 0000 0000
+
+# SVE zero predicate register
+PFALSE          00100101 0001 1000 1110 0100 0000 rd:4
+
+# SVE predicate read from FFR (predicated)
+RDFFR_p         00100101 0 s:1 0110001111000 pg:4 0 rd:4
+
+# SVE predicate read from FFR (unpredicated)
+RDFFR           00100101 0001 1001 1111 0000 0000 rd:4
+
+# SVE FFR write from predicate (WRFFR)
+WRFFR           00100101 0010 1000 1001 000 rn:4 00000
+
+# SVE predicate first active
+PFIRST          00100101 01 011 000 11000 00 .... 0 ....    @pd_pn_e0
+
+# SVE predicate next active
+PNEXT           00100101 .. 011 001 11000 10 .... 0 ....    @pd_pn
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.0

From: Richard Henderson <richard.henderson@linaro.org>

The only remaining use was in op_helper.c.  Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 6 ------
 target/arm/op_helper.c | 9 ++++++++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
-/* Note that the RESERVED bits include bit 21, which is PSTATE_SS in
- * an AArch64 SPSR but RES0 in AArch32 SPSR and CPSR.  In QEMU we use
- * env->uncached_cpsr bit 21 to store PSTATE.SS when executing in AArch32,
- * where it is live state but not accessible to the AArch32 code.
- */
-#define CPSR_RESERVED (0x7U << 21)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
 
 uint32_t HELPER(cpsr_read)(CPUARMState *env)
 {
-    return cpsr_read(env) & ~(CPSR_EXEC | CPSR_RESERVED);
+    /*
+     * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
+     * This is convenient for populating SPSR_ELx, but must be
+     * hidden from aarch32 mode, where it is not visible.
+     *
+     * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
+     */
+    return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
 }
 
 void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Use this along the exception return path, where we previously
accepted any values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h  | 12 ++++++++++++
 target/arm/helper-a64.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     return valid;
 }
 
+static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
+{
+    uint32_t valid;
+
+    valid = PSTATE_M | PSTATE_DAIF | PSTATE_IL | PSTATE_SS | PSTATE_NZCV;
+    if (isar_feature_aa64_bti(id)) {
+        valid |= PSTATE_BTYPE;
+    }
+
+    return valid;
+}
+
 /*
  * Parameters of a given virtual address, as extracted from the
  * translation control register (TCR) for a given regime.
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
                       cur_el, new_el, env->regs[15]);
     } else {
         env->aarch64 = 1;
+        spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
         pstate_write(env, spsr);
         if (!arm_singlestep_active(env)) {
             env->pstate &= ~PSTATE_SS;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  2 ++
 target/arm/internals.h     |  6 ++++++
 target/arm/helper.c        | 21 +++++++++++++++++++++
 target/arm/translate-a64.c | 14 ++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
+#define CPSR_PAN (1U << 22)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
 #define CPSR_Q (1U << 27)
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
+#define PSTATE_PAN (1U << 22)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
 #define PSTATE_Z (1U << 30)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
     if (isar_feature_jazelle(id)) {
         valid |= CPSR_J;
     }
+    if (isar_feature_aa32_pan(id)) {
+        valid |= CPSR_PAN;
+    }
 
     return valid;
 }
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_bti(id)) {
         valid |= PSTATE_BTYPE;
     }
+    if (isar_feature_aa64_pan(id)) {
+        valid |= PSTATE_PAN;
+    }
 
     return valid;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->daif = value & PSTATE_DAIF;
 }
 
+static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_PAN;
+}
+
+static void aa64_pan_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_PAN) | (value & PSTATE_PAN);
+}
+
+static const ARMCPRegInfo pan_reginfo = {
+    .name = "PAN", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 3,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_pan_read, .writefn = aa64_pan_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_lor, cpu)) {
         define_arm_cp_regs(cpu, lor_reginfo);
     }
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        define_one_arm_cp_reg(cpu, &pan_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x04: /* PAN */
+        if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_PAN);
+        } else {
+            clear_pstate_bits(PSTATE_PAN);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x05: /* SPSel */
         if (s->current_el == 0) {
             goto do_unallocated;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         return ARMMMUIdx_E10_0;
     case 1:
         if (arm_is_secure_below_el3(env)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_SE10_1_PAN;
+            }
             return ARMMMUIdx_SE10_1;
         }
+        if (env->pstate & PSTATE_PAN) {
+            return ARMMMUIdx_E10_1_PAN;
+        }
         return ARMMMUIdx_E10_1;
     case 2:
         /* TODO: ARMv8.4-SecEL2 */
         /* Note that TGE does not apply at EL2. */
         if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
+            if (env->pstate & PSTATE_PAN) {
+                return ARMMMUIdx_E20_2_PAN;
+            }
             return ARMMMUIdx_E20_2;
         }
         return ARMMMUIdx_E2;
--
2.20.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180516223007.10256-20-richard.henderson@linaro.org
8
Message-id: 20200208125816.14954-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 5 +++++
11
target/arm/internals.h | 13 +++++++++++++
9
target/arm/sve_helper.c | 40 ++++++++++++++++++++++++++++++++++++++
12
target/arm/helper.c | 3 +++
10
target/arm/translate-sve.c | 36 ++++++++++++++++++++++++++++++++++
13
2 files changed, 16 insertions(+)
11
target/arm/sve.decode | 12 ++++++++++++
12
4 files changed, 93 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/internals.h
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_lsl_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
19
DEF_HELPER_FLAGS_4(sve_lsl_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve_lsl_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_adr_p32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_adr_p64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_adr_s32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_adr_u32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
27
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/sve_helper.c
33
+++ b/target/arm/sve_helper.c
34
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_index_d)(void *vd, uint64_t start,
35
d[i] = start + i * incr;
36
}
20
}
37
}
21
}
38
+
22
39
+void HELPER(sve_adr_p32)(void *vd, void *vn, void *vm, uint32_t desc)
23
+static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
40
+{
24
+{
41
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
25
+ switch (mmu_idx) {
42
+ uint32_t sh = simd_data(desc);
26
+ case ARMMMUIdx_Stage1_E1_PAN:
43
+ uint32_t *d = vd, *n = vn, *m = vm;
27
+ case ARMMMUIdx_E10_1_PAN:
44
+ for (i = 0; i < opr_sz; i += 1) {
28
+ case ARMMMUIdx_E20_2_PAN:
45
+ d[i] = n[i] + (m[i] << sh);
29
+ case ARMMMUIdx_SE10_1_PAN:
30
+ return true;
31
+ default:
32
+ return false;
46
+ }
33
+ }
47
+}
34
+}
48
+
35
+
49
+void HELPER(sve_adr_p64)(void *vd, void *vn, void *vm, uint32_t desc)
36
/* Return the FSR value for a debug exception (watchpoint, hardware
50
+{
37
* breakpoint or BKPT insn) targeting the specified exception level.
51
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
38
*/
52
+ uint64_t sh = simd_data(desc);
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
53
+ uint64_t *d = vd, *n = vn, *m = vm;
54
+ for (i = 0; i < opr_sz; i += 1) {
55
+ d[i] = n[i] + (m[i] << sh);
56
+ }
57
+}
58
+
59
+void HELPER(sve_adr_s32)(void *vd, void *vn, void *vm, uint32_t desc)
60
+{
61
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
62
+ uint64_t sh = simd_data(desc);
63
+ uint64_t *d = vd, *n = vn, *m = vm;
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ d[i] = n[i] + ((uint64_t)(int32_t)m[i] << sh);
66
+ }
67
+}
68
+
69
+void HELPER(sve_adr_u32)(void *vd, void *vn, void *vm, uint32_t desc)
70
+{
71
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
72
+ uint64_t sh = simd_data(desc);
73
+ uint64_t *d = vd, *n = vn, *m = vm;
74
+ for (i = 0; i < opr_sz; i += 1) {
75
+ d[i] = n[i] + ((uint64_t)(uint32_t)m[i] << sh);
76
+ }
77
+}
78
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
79
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate-sve.c
41
--- a/target/arm/helper.c
81
+++ b/target/arm/translate-sve.c
42
+++ b/target/arm/helper.c
82
@@ -XXX,XX +XXX,XX @@ static bool trans_RDVL(DisasContext *s, arg_RDVL *a, uint32_t insn)
43
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
83
return true;
44
if (is_user) {
84
}
45
prot_rw = user_rw;
85
46
} else {
86
+/*
47
+ if (user_rw && regime_is_pan(env, mmu_idx)) {
87
+ *** SVE Compute Vector Address Group
48
+ return 0;
88
+ */
49
+ }
89
+
50
prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
90
+static bool do_adr(DisasContext *s, arg_rrri *a, gen_helper_gvec_3 *fn)
51
}
91
+{
52
92
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, a->imm, fn);
+    }
+    return true;
+}
+
+static bool trans_ADR_p32(DisasContext *s, arg_rrri *a, uint32_t insn)
+{
+    return do_adr(s, a, gen_helper_sve_adr_p32);
+}
+
+static bool trans_ADR_p64(DisasContext *s, arg_rrri *a, uint32_t insn)
+{
+    return do_adr(s, a, gen_helper_sve_adr_p64);
+}
+
+static bool trans_ADR_s32(DisasContext *s, arg_rrri *a, uint32_t insn)
+{
+    return do_adr(s, a, gen_helper_sve_adr_s32);
+}
+
+static bool trans_ADR_u32(DisasContext *s, arg_rrri *a, uint32_t insn)
+{
+    return do_adr(s, a, gen_helper_sve_adr_u32);
+}
+
 /*
 *** SVE Predicate Logical Operations Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 
 &rr_esz         rd rn esz
 &rri            rd rn imm
+&rrri           rd rn rm imm
 &rri_esz        rd rn imm esz
 &rrr_esz        rd rn rm esz
 &rpr_esz        rd pg rn esz
@@ -XXX,XX +XXX,XX @@
 # Three operand, vector element size
 @rd_rn_rm       ........ esz:2 . rm:5 ... ... rn:5 rd:5         &rrr_esz
 
+# Three operand with "memory" size, aka immediate left shift
+@rd_rn_msz_rm   ........ ... rm:5 .... imm:2 rn:5 rd:5          &rrri
+
 # Two register operand, with governing predicate, vector element size
 @rdn_pg_rm      ........ esz:2 ... ... ... pg:3 rm:5 rd:5 \
                 &rprr_esz rn=%reg_movprfx
@@ -XXX,XX +XXX,XX @@ ASR_zzw         00000100 .. 1 ..... 1000 00 ..... .....         @rd_rn_rm
 LSR_zzw         00000100 .. 1 ..... 1000 01 ..... .....         @rd_rn_rm
 LSL_zzw         00000100 .. 1 ..... 1000 11 ..... .....         @rd_rn_rm
 
+### SVE Compute Vector Address Group
+
+# SVE vector address generation
+ADR_s32         00000100 00 1 ..... 1010 .. ..... .....         @rd_rn_msz_rm
+ADR_u32         00000100 01 1 ..... 1010 .. ..... .....         @rd_rn_msz_rm
+ADR_p32         00000100 10 1 ..... 1010 .. ..... .....         @rd_rn_msz_rm
+ADR_p64         00000100 11 1 ..... 1010 .. ..... .....         @rd_rn_msz_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.0

--
2.20.1

diff view generated by jsdifflib
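[Editor's note: the four ADR helpers above all share one lane-wise computation, d[i] = n[i] + (index << sh), differing only in how the index lanes of Zm are extended. This is an illustrative standalone model of that arithmetic (not QEMU code; the `adr` and `sext32` names are invented for the sketch), modelling one 64-bit lane per call.]

```python
# Model of one 64-bit result lane of SVE ADR, as implemented by the
# sve_adr_* helpers above: base + (index << sh), with the index taken
# as a sign-extended s32, zero-extended u32, or full 64-bit value.

MASK64 = (1 << 64) - 1

def sext32(x):
    """Sign-extend the low 32 bits of x to 64 bits (as a Python int)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x & (1 << 31) else x

def adr(variant, n, m, sh):
    """One lane of ADR; variant is 's32', 'u32', or 'p64' (packed 64-bit)."""
    if variant == 's32':
        idx = sext32(m) & MASK64        # (uint64_t)(int32_t)m[i]
    elif variant == 'u32':
        idx = m & 0xFFFFFFFF            # (uint64_t)(uint32_t)m[i]
    else:                               # 'p64': use the whole lane
        idx = m
    return (n + ((idx << sh) & MASK64)) & MASK64
```

Note how the s32/u32 variants differ only in the extension of the low 32 bits, exactly mirroring the casts in the C helpers.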
From: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>

This is a preparation for the coming feature of dynamically creating an XML
description for the ARM sysregs.
Add a "_S" suffix to the secure version of sysregs that have both S and NS views.
Replace (S) and (NS) by _S and _NS for the registers that are manually defined,
so that all the registers follow the same convention.

Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1524153386-3550-3-git-send-email-abdallah.bouassida@lauterbach.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
plus several other conditions listed in the ARM ARM.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200208125816.14954-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 53 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 3 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
      * the secure register to be properly reset and migrated. There is also no
      * v8 EL1 version of the register so the non-secure instance stands alone.
      */
-    { .name = "FCSEIDR(NS)",
+    { .name = "FCSEIDR",
       .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0,
       .access = PL1_RW, .secure = ARM_CP_SECSTATE_NS,
       .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_ns),
       .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, },
-    { .name = "FCSEIDR(S)",
+    { .name = "FCSEIDR_S",
       .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0,
       .access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
       .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s),
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
       .access = PL1_RW, .secure = ARM_CP_SECSTATE_NS,
       .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]),
       .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
-    { .name = "CONTEXTIDR(S)", .state = ARM_CP_STATE_AA32,
+    { .name = "CONTEXTIDR_S", .state = ARM_CP_STATE_AA32,
       .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
       .access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
       .fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
                              cp15.c14_timer[GTIMER_PHYS].ctl),
       .writefn = gt_phys_ctl_write, .raw_writefn = raw_write,
     },
-    { .name = "CNTP_CTL(S)",
+    { .name = "CNTP_CTL_S",
       .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1,
       .secure = ARM_CP_SECSTATE_S,
       .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL1_RW | PL0_R,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .accessfn = gt_ptimer_access,
       .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write,
     },
-    { .name = "CNTP_TVAL(S)",
+    { .name = "CNTP_TVAL_S",
       .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0,
       .secure = ARM_CP_SECSTATE_S,
       .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL1_RW | PL0_R,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .accessfn = gt_ptimer_access,
       .writefn = gt_phys_cval_write, .raw_writefn = raw_write,
     },
-    { .name = "CNTP_CVAL(S)", .cp = 15, .crm = 14, .opc1 = 2,
+    { .name = "CNTP_CVAL_S", .cp = 15, .crm = 14, .opc1 = 2,
       .secure = ARM_CP_SECSTATE_S,
       .access = PL1_RW | PL0_R,
       .type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS,
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *arch_query_cpu_definitions(Error **errp)
 
 static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
                                    void *opaque, int state, int secstate,
-                                   int crm, int opc1, int opc2)
+                                   int crm, int opc1, int opc2,
+                                   const char *name)
 {
     /* Private utility function for define_one_arm_cp_reg_with_opaque():
      * add a single reginfo struct to the hash table.
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
     int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
 
+    r2->name = g_strdup(name);
     /* Reset the secure state to the specific incoming state. This is
      * necessary as the register may have been defined with both states.
      */
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
         /* Under AArch32 CP registers can be common
          * (same for secure and non-secure world) or banked.
          */
+        char *name;
+
         switch (r->secure) {
         case ARM_CP_SECSTATE_S:
         case ARM_CP_SECSTATE_NS:
             add_cpreg_to_hashtable(cpu, r, opaque, state,
-                                   r->secure, crm, opc1, opc2);
+                                   r->secure, crm, opc1, opc2,
+                                   r->name);
             break;
         default:
+            name = g_strdup_printf("%s_S", r->name);
             add_cpreg_to_hashtable(cpu, r, opaque, state,
                                    ARM_CP_SECSTATE_S,
-                                   crm, opc1, opc2);
+                                   crm, opc1, opc2, name);
+            g_free(name);
             add_cpreg_to_hashtable(cpu, r, opaque, state,
                                    ARM_CP_SECSTATE_NS,
-                                   crm, opc1, opc2);
+                                   crm, opc1, opc2, r->name);
             break;
         }
     } else {
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
          * of AArch32 */
         add_cpreg_to_hashtable(cpu, r, opaque, state,
                                ARM_CP_SECSTATE_NS,
-                               crm, opc1, opc2);
+                               crm, opc1, opc2, r->name);
     }
 }
--
2.17.0

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
                                    uint32_t mask, uint32_t offset,
                                    uint32_t newpc)
 {
+    int new_el;
+
     /* Change the CPU state so as to actually take the exception. */
     switch_mode(env, new_mode);
+    new_el = arm_current_el(env);
+
     /*
      * For exceptions taken to AArch32 we must clear the SS bit in both
      * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
     env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
     /* Set new mode endianness */
     env->uncached_cpsr &= ~CPSR_E;
-    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
+    if (env->cp15.sctlr_el[new_el] & SCTLR_EE) {
         env->uncached_cpsr |= CPSR_E;
     }
     /* J and IL must always be cleared for exception entry */
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
         env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
         env->elr_el[2] = env->regs[15];
     } else {
+        /* CPSR.PAN is normally preserved unless... */
+        if (cpu_isar_feature(aa64_pan, env_archcpu(env))) {
+            switch (new_el) {
+            case 3:
+                if (!arm_is_secure_below_el3(env)) {
+                    /* ... the target is EL3, from non-secure state. */
+                    env->uncached_cpsr &= ~CPSR_PAN;
+                    break;
+                }
+                /* ... the target is EL3, from secure state ... */
+                /* fall through */
+            case 1:
+                /* ... the target is EL1 and SCTLR.SPAN is 0. */
+                if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) {
+                    env->uncached_cpsr |= CPSR_PAN;
+                }
+                break;
+            }
+        }
         /*
          * this is a lie, as there was no c1_sys on V4T/V5, but who cares
          * and we should just guard the thumb mode on V4
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int new_el = env->exception.target_el;
     target_ulong addr = env->cp15.vbar_el[new_el];
     unsigned int new_mode = aarch64_pstate_mode(new_el, true);
+    unsigned int old_mode;
     unsigned int cur_el = arm_current_el(env);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     }
 
     if (is_a64(env)) {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = pstate_read(env);
+        old_mode = pstate_read(env);
         aarch64_save_sp(env, arm_current_el(env));
         env->elr_el[new_el] = env->pc;
     } else {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = cpsr_read(env);
+        old_mode = cpsr_read(env);
         env->elr_el[new_el] = env->regs[15];
 
         aarch64_sync_32_to_64(env);
 
         env->condexec_bits = 0;
     }
+    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
+
     qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n",
                   env->elr_el[new_el]);
 
+    if (cpu_isar_feature(aa64_pan, cpu)) {
+        /* The value of PSTATE.PAN is normally preserved, except when ... */
+        new_mode |= old_mode & PSTATE_PAN;
+        switch (new_el) {
+        case 2:
+            /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ... */
+            if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE))
+                != (HCR_E2H | HCR_TGE)) {
+                break;
+            }
+            /* fall through */
+        case 1:
+            /* ... the target is EL1 ... */
+            /* ... and SCTLR_ELx.SPAN == 0, then set to 1. */
+            if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) {
+                new_mode |= PSTATE_PAN;
+            }
+            break;
+        }
+    }
+
     pstate_write(env, PSTATE_DAIF | new_mode);
     env->aarch64 = 1;
     aarch64_restore_sp(env, new_el);
--
2.20.1

diff view generated by jsdifflib
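[Editor's note: the PSTATE.PAN rule added to arm_cpu_do_interrupt_aarch64 above can be summarised in a few lines. This is a standalone sketch (not QEMU code; `new_pstate_pan` is an invented name): PAN is inherited from the old state, then forced to 1 when entering EL1, or EL2 with HCR_EL2.{E2H,TGE} == '11', while the target SCTLR_ELx.SPAN bit is 0.]

```python
# Illustrative model of the AArch64 exception-entry PAN rule above.
PSTATE_PAN = 1 << 22   # PAN is PSTATE bit 22
SCTLR_SPAN = 1 << 23   # SPAN is SCTLR_ELx bit 23

def new_pstate_pan(old_pstate, new_el, sctlr, e2h_tge=False):
    """Return the PSTATE.PAN bit after taking an exception to new_el.

    sctlr is the target exception level's SCTLR_ELx value; e2h_tge
    models HCR_EL2.{E2H,TGE} == '11' for the EL2 case.
    """
    pan = old_pstate & PSTATE_PAN            # normally preserved ...
    if new_el == 1 or (new_el == 2 and e2h_tge):
        if not (sctlr & SCTLR_SPAN):         # ... unless SPAN == 0,
            pan = PSTATE_PAN                 # in which case PAN is set
    return pan
```

Entry to EL3 (not in the switch above) simply preserves the inherited value, which the sketch reproduces by falling through with `pan` unchanged.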
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  60 ++++++++++++++++++
 target/arm/sve_helper.c    | 127 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 113 +++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  23 +++++++
 4 files changed, 323 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx are used with the existing do_ats_write.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 56 ++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 6 deletions(-)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_asrd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asrd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asrd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cls_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_clz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cnot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fabs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fabs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fabs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fneg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fneg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fneg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_not_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_abs_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_neg_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZW(sve_lsl_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
 
 #undef DO_ZPZW
 
+/* Fully general two-operand expander, controlled by a predicate.
+ */
+#define DO_ZPZ(NAME, TYPE, H, OP)                               \
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)  \
+{                                                               \
+    intptr_t i, opr_sz = simd_oprsz(desc);                      \
+    for (i = 0; i < opr_sz; ) {                                 \
+        uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));         \
+        do {                                                    \
+            if (pg & 1) {                                       \
+                TYPE nn = *(TYPE *)(vn + H(i));                 \
+                *(TYPE *)(vd + H(i)) = OP(nn);                  \
+            }                                                   \
+            i += sizeof(TYPE), pg >>= sizeof(TYPE);             \
+        } while (i & 15);                                       \
+    }                                                           \
+}
+
+/* Similarly, specialized for 64-bit operands.  */
+#define DO_ZPZ_D(NAME, TYPE, OP)                                \
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)  \
+{                                                               \
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;                  \
+    TYPE *d = vd, *n = vn;                                      \
+    uint8_t *pg = vg;                                           \
+    for (i = 0; i < opr_sz; i += 1) {                           \
+        if (pg[H1(i)] & 1) {                                    \
+            TYPE nn = n[i];                                     \
+            d[i] = OP(nn);                                      \
+        }                                                       \
+    }                                                           \
+}
+
+#define DO_CLS_B(N)   (clrsb32(N) - 24)
+#define DO_CLS_H(N)   (clrsb32(N) - 16)
+
+DO_ZPZ(sve_cls_b, int8_t, H1, DO_CLS_B)
+DO_ZPZ(sve_cls_h, int16_t, H1_2, DO_CLS_H)
+DO_ZPZ(sve_cls_s, int32_t, H1_4, clrsb32)
+DO_ZPZ_D(sve_cls_d, int64_t, clrsb64)
+
+#define DO_CLZ_B(N)   (clz32(N) - 24)
+#define DO_CLZ_H(N)   (clz32(N) - 16)
+
+DO_ZPZ(sve_clz_b, uint8_t, H1, DO_CLZ_B)
+DO_ZPZ(sve_clz_h, uint16_t, H1_2, DO_CLZ_H)
+DO_ZPZ(sve_clz_s, uint32_t, H1_4, clz32)
+DO_ZPZ_D(sve_clz_d, uint64_t, clz64)
+
+DO_ZPZ(sve_cnt_zpz_b, uint8_t, H1, ctpop8)
+DO_ZPZ(sve_cnt_zpz_h, uint16_t, H1_2, ctpop16)
+DO_ZPZ(sve_cnt_zpz_s, uint32_t, H1_4, ctpop32)
+DO_ZPZ_D(sve_cnt_zpz_d, uint64_t, ctpop64)
+
+#define DO_CNOT(N)    (N == 0)
+
+DO_ZPZ(sve_cnot_b, uint8_t, H1, DO_CNOT)
+DO_ZPZ(sve_cnot_h, uint16_t, H1_2, DO_CNOT)
+DO_ZPZ(sve_cnot_s, uint32_t, H1_4, DO_CNOT)
+DO_ZPZ_D(sve_cnot_d, uint64_t, DO_CNOT)
+
+#define DO_FABS(N)    (N & ((__typeof(N))-1 >> 1))
+
+DO_ZPZ(sve_fabs_h, uint16_t, H1_2, DO_FABS)
+DO_ZPZ(sve_fabs_s, uint32_t, H1_4, DO_FABS)
+DO_ZPZ_D(sve_fabs_d, uint64_t, DO_FABS)
+
+#define DO_FNEG(N)    (N ^ ~((__typeof(N))-1 >> 1))
+
+DO_ZPZ(sve_fneg_h, uint16_t, H1_2, DO_FNEG)
+DO_ZPZ(sve_fneg_s, uint32_t, H1_4, DO_FNEG)
+DO_ZPZ_D(sve_fneg_d, uint64_t, DO_FNEG)
+
+#define DO_NOT(N)    (~N)
+
+DO_ZPZ(sve_not_zpz_b, uint8_t, H1, DO_NOT)
+DO_ZPZ(sve_not_zpz_h, uint16_t, H1_2, DO_NOT)
+DO_ZPZ(sve_not_zpz_s, uint32_t, H1_4, DO_NOT)
+DO_ZPZ_D(sve_not_zpz_d, uint64_t, DO_NOT)
+
+#define DO_SXTB(N)    ((int8_t)N)
+#define DO_SXTH(N)    ((int16_t)N)
+#define DO_SXTS(N)    ((int32_t)N)
+#define DO_UXTB(N)    ((uint8_t)N)
+#define DO_UXTH(N)    ((uint16_t)N)
+#define DO_UXTS(N)    ((uint32_t)N)
+
+DO_ZPZ(sve_sxtb_h, uint16_t, H1_2, DO_SXTB)
+DO_ZPZ(sve_sxtb_s, uint32_t, H1_4, DO_SXTB)
+DO_ZPZ(sve_sxth_s, uint32_t, H1_4, DO_SXTH)
+DO_ZPZ_D(sve_sxtb_d, uint64_t, DO_SXTB)
+DO_ZPZ_D(sve_sxth_d, uint64_t, DO_SXTH)
+DO_ZPZ_D(sve_sxtw_d, uint64_t, DO_SXTS)
+
+DO_ZPZ(sve_uxtb_h, uint16_t, H1_2, DO_UXTB)
+DO_ZPZ(sve_uxtb_s, uint32_t, H1_4, DO_UXTB)
+DO_ZPZ(sve_uxth_s, uint32_t, H1_4, DO_UXTH)
+DO_ZPZ_D(sve_uxtb_d, uint64_t, DO_UXTB)
+DO_ZPZ_D(sve_uxth_d, uint64_t, DO_UXTH)
+DO_ZPZ_D(sve_uxtw_d, uint64_t, DO_UXTS)
+
+#define DO_ABS(N)    (N < 0 ? -N : N)
+
+DO_ZPZ(sve_abs_b, int8_t, H1, DO_ABS)
+DO_ZPZ(sve_abs_h, int16_t, H1_2, DO_ABS)
+DO_ZPZ(sve_abs_s, int32_t, H1_4, DO_ABS)
+DO_ZPZ_D(sve_abs_d, int64_t, DO_ABS)
+
+#define DO_NEG(N)    (-N)
+
+DO_ZPZ(sve_neg_b, uint8_t, H1, DO_NEG)
+DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
+DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
+DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
+
+#undef DO_CLS_B
+#undef DO_CLS_H
+#undef DO_CLZ_B
+#undef DO_CLZ_H
+#undef DO_CNOT
+#undef DO_FABS
+#undef DO_FNEG
+#undef DO_ABS
+#undef DO_NEG
+#undef DO_ZPZ
+#undef DO_ZPZ_D
+
 /* Two-operand reduction expander, controlled by a predicate.
  * The difference between TYPERED and TYPERET has to do with
  * sign-extension.  E.g. for SMAX, TYPERED must be signed,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
 
 #undef DO_ZPZZ
 
+/*
+ *** SVE Integer Arithmetic - Unary Predicated Group
+ */
+
+static bool do_zpz_ool(DisasContext *s, arg_rpr_esz *a, gen_helper_gvec_3 *fn)
+{
+    if (fn == NULL) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           pred_full_reg_offset(s, a->pg),
+                           vsz, vsz, 0, fn);
+    }
+    return true;
+}
+
+#define DO_ZPZ(NAME, name)                                              \
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
+{                                                                       \
+    static gen_helper_gvec_3 * const fns[4] = {                         \
+        gen_helper_sve_##name##_b, gen_helper_sve_##name##_h,           \
+        gen_helper_sve_##name##_s, gen_helper_sve_##name##_d,           \
+    };                                                                  \
+    return do_zpz_ool(s, a, fns[a->esz]);                               \
+}
+
+DO_ZPZ(CLS, cls)
+DO_ZPZ(CLZ, clz)
+DO_ZPZ(CNT_zpz, cnt_zpz)
+DO_ZPZ(CNOT, cnot)
+DO_ZPZ(NOT_zpz, not_zpz)
+DO_ZPZ(ABS, abs)
+DO_ZPZ(NEG, neg)
+
+static bool trans_FABS(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_fabs_h,
+        gen_helper_sve_fabs_s,
+        gen_helper_sve_fabs_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_FNEG(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_fneg_h,
+        gen_helper_sve_fneg_s,
+        gen_helper_sve_fneg_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_SXTB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_sxtb_h,
+        gen_helper_sve_sxtb_s,
+        gen_helper_sve_sxtb_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_UXTB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_uxtb_h,
+        gen_helper_sve_uxtb_s,
+        gen_helper_sve_uxtb_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_SXTH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL, NULL,
+        gen_helper_sve_sxth_s,
+        gen_helper_sve_sxth_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_UXTH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL, NULL,
+        gen_helper_sve_uxth_s,
+        gen_helper_sve_uxth_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_SXTW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_sxtw_d : NULL);
+}
+
+static bool trans_UXTW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_uxtw_d : NULL);
+}
+
+#undef DO_ZPZ
+
 /*
 *** SVE Integer Reduction Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ ASR_zpzw        00000100 .. 011 000 100 ... ..... .....         @rdn_pg_rm
 LSR_zpzw        00000100 .. 011 001 100 ... ..... .....         @rdn_pg_rm
 LSL_zpzw        00000100 .. 011 011 100 ... ..... .....         @rdn_pg_rm
 
+### SVE Integer Arithmetic - Unary Predicated Group
+
+# SVE unary bit operations (predicated)
+# Note esz != 0 for FABS and FNEG.
+CLS             00000100 .. 011 000 101 ... ..... .....         @rd_pg_rn
+CLZ             00000100 .. 011 001 101 ... ..... .....         @rd_pg_rn
+CNT_zpz         00000100 .. 011 010 101 ... ..... .....         @rd_pg_rn
+CNOT            00000100 .. 011 011 101 ... ..... .....         @rd_pg_rn
+NOT_zpz         00000100 .. 011 110 101 ... ..... .....         @rd_pg_rn
+FABS            00000100 .. 011 100 101 ... ..... .....         @rd_pg_rn
+FNEG            00000100 .. 011 101 101 ... ..... .....         @rd_pg_rn
+
+# SVE integer unary operations (predicated)
+# Note esz > original size for extensions.
+ABS             00000100 .. 010 110 101 ... ..... .....         @rd_pg_rn
+NEG             00000100 .. 010 111 101 ... ..... .....         @rd_pg_rn
+SXTB            00000100 .. 010 000 101 ... ..... .....         @rd_pg_rn
+UXTB            00000100 .. 010 001 101 ... ..... .....         @rd_pg_rn
+SXTH            00000100 .. 010 010 101 ... ..... .....         @rd_pg_rn
+UXTH            00000100 .. 010 011 101 ... ..... .....         @rd_pg_rn
+SXTW            00000100 .. 010 100 101 ... ..... .....         @rd_pg_rn
+UXTW            00000100 .. 010 101 101 ... ..... .....         @rd_pg_rn
+
 ### SVE Logical - Unpredicated Group
 
 # SVE bitwise logical operations (unpredicated)
--
2.17.0

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 
     switch (ri->opc2 & 6) {
     case 0:
-        /* stage 1 current state PL1: ATS1CPR, ATS1CPW */
+        /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_SE3;
             break;
         case 2:
-            mmu_idx = ARMMMUIdx_Stage1_E1;
-            break;
+            g_assert(!secure);  /* TODO: ARMv8.4-SecEL2 */
+            /* fall through */
         case 1:
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         default:
             g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     switch (ri->opc2 & 6) {
     case 0:
         switch (ri->opc1) {
-        case 0: /* AT S1E1R, AT S1E1W */
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+        case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
+            if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
+                mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+                           : ARMMMUIdx_Stage1_E1_PAN);
+            } else {
+                mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
             mmu_idx = ARMMMUIdx_E2;
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+#ifndef CONFIG_USER_ONLY
+static const ARMCPRegInfo ats1e1_reginfo[] = {
+    { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
+      .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write64 },
+    REGINFO_SENTINEL
+};
+
+static const ARMCPRegInfo ats1cp_reginfo[] = {
+    { .name = "ATS1CPRP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    { .name = "ATS1CPWP",
+      .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
+      .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
+      .writefn = ats_write },
+    REGINFO_SENTINEL
+};
+#endif
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_pan, cpu)) {
         define_one_arm_cp_reg(cpu, &pan_reginfo);
     }
+#ifndef CONFIG_USER_ONLY
+    if (cpu_isar_feature(aa64_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1e1_reginfo);
+    }
+    if (cpu_isar_feature(aa32_ats1e1, cpu)) {
+        define_arm_cp_regs(cpu, ats1cp_reginfo);
+    }
+#endif
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
--
2.20.1

diff view generated by jsdifflib
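[Editor's note: the DO_ZPZ/DO_ZPZ_D expanders above implement merging predication: active elements are transformed, inactive destination elements are left untouched. A minimal standalone sketch of that semantics (not QEMU code; `zpz_unary` is an invented name, and the packed predicate-bit layout of the real helpers is simplified to one flag per element):]

```python
def zpz_unary(op, d, n, pg):
    """Predicated unary operation with merging semantics, as in DO_ZPZ:
    d[i] = op(n[i]) where the governing predicate is active; inactive
    elements of d keep their previous value.
    """
    for i, active in enumerate(pg):
        if active:
            d[i] = op(n[i])
    return d
```

For example, a predicated NEG with predicate 1010 negates only elements 0 and 2 and preserves the old destination values elsewhere, which is why the C expanders skip the store entirely for inactive lanes.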
From: Richard Henderson <richard.henderson@linaro.org>

Excepting MOVPRFX, which isn't a reduction.  Presumably it is
placed within the group because of its encoding.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 44 ++++++++++++++++++
 target/arm/sve_helper.c    | 91 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 68 ++++++++++++++++++++++++++++
 target/arm/sve.decode      | 22 +++++++
 4 files changed, 225 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This includes enablement of ARMv8.1-PAN.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c   | 4 ++++
 target/arm/cpu64.c | 5 +++++
 2 files changed, 9 insertions(+)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_udiv_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_udiv_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve_orv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_orv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_orv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_orv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_eorv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_eorv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_eorv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_eorv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_andv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_andv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_andv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
         cpu->isar.mvfr2 = t;
 
+        t = cpu->id_mmfr3;
+        t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+        cpu->id_mmfr3 = t;
+
         t = cpu->id_mmfr4;
         t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
         cpu->id_mmfr4 = t;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
         t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
         t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
+        t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
         cpu->isar.id_isar6 = u;
 
+        u = cpu->id_mmfr3;
+        u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
+        cpu->id_mmfr3 = u;
+
         /*
          * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
51
* so do not set MVFR1.FPHP. Strictly speaking this is not legal,
38
+DEF_HELPER_FLAGS_3(sve_andv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_3(sve_saddv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_3(sve_saddv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_3(sve_saddv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_3(sve_uaddv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_3(sve_uaddv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_3(sve_uaddv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_3(sve_uaddv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_3(sve_smaxv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_3(sve_smaxv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_3(sve_smaxv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_3(sve_smaxv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
53
+
54
+DEF_HELPER_FLAGS_3(sve_umaxv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_3(sve_umaxv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
56
+DEF_HELPER_FLAGS_3(sve_umaxv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
57
+DEF_HELPER_FLAGS_3(sve_umaxv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
58
+
59
+DEF_HELPER_FLAGS_3(sve_sminv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_3(sve_sminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
61
+DEF_HELPER_FLAGS_3(sve_sminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_3(sve_sminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
63
+
64
+DEF_HELPER_FLAGS_3(sve_uminv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
65
+DEF_HELPER_FLAGS_3(sve_uminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
66
+DEF_HELPER_FLAGS_3(sve_uminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
67
+DEF_HELPER_FLAGS_3(sve_uminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
68
+
69
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
70
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
71
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
72
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/sve_helper.c
75
+++ b/target/arm/sve_helper.c
76
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_DIV)
77
78
#undef DO_ZPZZ
79
#undef DO_ZPZZ_D
80
+
81
+/* Two-operand reduction expander, controlled by a predicate.
82
+ * The difference between TYPERED and TYPERET has to do with
83
+ * sign-extension. E.g. for SMAX, TYPERED must be signed,
84
+ * but TYPERET must be unsigned so that e.g. a 32-bit value
85
+ * is not sign-extended to the ABI uint64_t return type.
86
+ */
87
+/* ??? If we were to vectorize this by hand the reduction ordering
88
+ * would change. For integer operands, this is perfectly fine.
89
+ */
90
+#define DO_VPZ(NAME, TYPEELT, TYPERED, TYPERET, H, INIT, OP) \
91
+uint64_t HELPER(NAME)(void *vn, void *vg, uint32_t desc) \
92
+{ \
93
+ intptr_t i, opr_sz = simd_oprsz(desc); \
94
+ TYPERED ret = INIT; \
95
+ for (i = 0; i < opr_sz; ) { \
96
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
97
+ do { \
98
+ if (pg & 1) { \
99
+ TYPEELT nn = *(TYPEELT *)(vn + H(i)); \
100
+ ret = OP(ret, nn); \
101
+ } \
102
+ i += sizeof(TYPEELT), pg >>= sizeof(TYPEELT); \
103
+ } while (i & 15); \
104
+ } \
105
+ return (TYPERET)ret; \
106
+}
107
+
108
+#define DO_VPZ_D(NAME, TYPEE, TYPER, INIT, OP) \
109
+uint64_t HELPER(NAME)(void *vn, void *vg, uint32_t desc) \
110
+{ \
111
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
112
+ TYPEE *n = vn; \
113
+ uint8_t *pg = vg; \
114
+ TYPER ret = INIT; \
115
+ for (i = 0; i < opr_sz; i += 1) { \
116
+ if (pg[H1(i)] & 1) { \
117
+ TYPEE nn = n[i]; \
118
+ ret = OP(ret, nn); \
119
+ } \
120
+ } \
121
+ return ret; \
122
+}
123
+
124
+DO_VPZ(sve_orv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_ORR)
125
+DO_VPZ(sve_orv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_ORR)
126
+DO_VPZ(sve_orv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_ORR)
127
+DO_VPZ_D(sve_orv_d, uint64_t, uint64_t, 0, DO_ORR)
128
+
129
+DO_VPZ(sve_eorv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_EOR)
130
+DO_VPZ(sve_eorv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_EOR)
131
+DO_VPZ(sve_eorv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_EOR)
132
+DO_VPZ_D(sve_eorv_d, uint64_t, uint64_t, 0, DO_EOR)
133
+
134
+DO_VPZ(sve_andv_b, uint8_t, uint8_t, uint8_t, H1, -1, DO_AND)
135
+DO_VPZ(sve_andv_h, uint16_t, uint16_t, uint16_t, H1_2, -1, DO_AND)
136
+DO_VPZ(sve_andv_s, uint32_t, uint32_t, uint32_t, H1_4, -1, DO_AND)
137
+DO_VPZ_D(sve_andv_d, uint64_t, uint64_t, -1, DO_AND)
138
+
139
+DO_VPZ(sve_saddv_b, int8_t, uint64_t, uint64_t, H1, 0, DO_ADD)
140
+DO_VPZ(sve_saddv_h, int16_t, uint64_t, uint64_t, H1_2, 0, DO_ADD)
141
+DO_VPZ(sve_saddv_s, int32_t, uint64_t, uint64_t, H1_4, 0, DO_ADD)
142
+
143
+DO_VPZ(sve_uaddv_b, uint8_t, uint64_t, uint64_t, H1, 0, DO_ADD)
144
+DO_VPZ(sve_uaddv_h, uint16_t, uint64_t, uint64_t, H1_2, 0, DO_ADD)
145
+DO_VPZ(sve_uaddv_s, uint32_t, uint64_t, uint64_t, H1_4, 0, DO_ADD)
146
+DO_VPZ_D(sve_uaddv_d, uint64_t, uint64_t, 0, DO_ADD)
147
+
148
+DO_VPZ(sve_smaxv_b, int8_t, int8_t, uint8_t, H1, INT8_MIN, DO_MAX)
149
+DO_VPZ(sve_smaxv_h, int16_t, int16_t, uint16_t, H1_2, INT16_MIN, DO_MAX)
150
+DO_VPZ(sve_smaxv_s, int32_t, int32_t, uint32_t, H1_4, INT32_MIN, DO_MAX)
151
+DO_VPZ_D(sve_smaxv_d, int64_t, int64_t, INT64_MIN, DO_MAX)
152
+
153
+DO_VPZ(sve_umaxv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_MAX)
154
+DO_VPZ(sve_umaxv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_MAX)
155
+DO_VPZ(sve_umaxv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_MAX)
156
+DO_VPZ_D(sve_umaxv_d, uint64_t, uint64_t, 0, DO_MAX)
157
+
158
+DO_VPZ(sve_sminv_b, int8_t, int8_t, uint8_t, H1, INT8_MAX, DO_MIN)
159
+DO_VPZ(sve_sminv_h, int16_t, int16_t, uint16_t, H1_2, INT16_MAX, DO_MIN)
160
+DO_VPZ(sve_sminv_s, int32_t, int32_t, uint32_t, H1_4, INT32_MAX, DO_MIN)
161
+DO_VPZ_D(sve_sminv_d, int64_t, int64_t, INT64_MAX, DO_MIN)
162
+
163
+DO_VPZ(sve_uminv_b, uint8_t, uint8_t, uint8_t, H1, -1, DO_MIN)
164
+DO_VPZ(sve_uminv_h, uint16_t, uint16_t, uint16_t, H1_2, -1, DO_MIN)
165
+DO_VPZ(sve_uminv_s, uint32_t, uint32_t, uint32_t, H1_4, -1, DO_MIN)
166
+DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
167
+
168
+#undef DO_VPZ
169
+#undef DO_VPZ_D
170
+
171
#undef DO_AND
172
#undef DO_ORR
173
#undef DO_EOR
174
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/target/arm/translate-sve.c
177
+++ b/target/arm/translate-sve.c
178
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
179
180
#undef DO_ZPZZ
181
182
+/*
183
+ *** SVE Integer Reduction Group
184
+ */
185
+
186
+typedef void gen_helper_gvec_reduc(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_i32);
187
+static bool do_vpz_ool(DisasContext *s, arg_rpr_esz *a,
188
+ gen_helper_gvec_reduc *fn)
189
+{
190
+ unsigned vsz = vec_full_reg_size(s);
191
+ TCGv_ptr t_zn, t_pg;
192
+ TCGv_i32 desc;
193
+ TCGv_i64 temp;
194
+
195
+ if (fn == NULL) {
196
+ return false;
197
+ }
198
+ if (!sve_access_check(s)) {
199
+ return true;
200
+ }
201
+
202
+ desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
203
+ temp = tcg_temp_new_i64();
204
+ t_zn = tcg_temp_new_ptr();
205
+ t_pg = tcg_temp_new_ptr();
206
+
207
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
208
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
209
+ fn(temp, t_zn, t_pg, desc);
210
+ tcg_temp_free_ptr(t_zn);
211
+ tcg_temp_free_ptr(t_pg);
212
+ tcg_temp_free_i32(desc);
213
+
214
+ write_fp_dreg(s, a->rd, temp);
215
+ tcg_temp_free_i64(temp);
216
+ return true;
217
+}
218
+
219
+#define DO_VPZ(NAME, name) \
220
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
221
+{ \
222
+ static gen_helper_gvec_reduc * const fns[4] = { \
223
+ gen_helper_sve_##name##_b, gen_helper_sve_##name##_h, \
224
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d, \
225
+ }; \
226
+ return do_vpz_ool(s, a, fns[a->esz]); \
227
+}
228
+
229
+DO_VPZ(ORV, orv)
230
+DO_VPZ(ANDV, andv)
231
+DO_VPZ(EORV, eorv)
232
+
233
+DO_VPZ(UADDV, uaddv)
234
+DO_VPZ(SMAXV, smaxv)
235
+DO_VPZ(UMAXV, umaxv)
236
+DO_VPZ(SMINV, sminv)
237
+DO_VPZ(UMINV, uminv)
238
+
239
+static bool trans_SADDV(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
240
+{
241
+ static gen_helper_gvec_reduc * const fns[4] = {
242
+ gen_helper_sve_saddv_b, gen_helper_sve_saddv_h,
243
+ gen_helper_sve_saddv_s, NULL
244
+ };
245
+ return do_vpz_ool(s, a, fns[a->esz]);
246
+}
247
+
248
+#undef DO_VPZ
249
+
250
/*
251
*** SVE Predicate Logical Operations Group
252
*/
253
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
254
index XXXXXXX..XXXXXXX 100644
255
--- a/target/arm/sve.decode
256
+++ b/target/arm/sve.decode
257
@@ -XXX,XX +XXX,XX @@
258
&rr_esz rd rn esz
259
&rri rd rn imm
260
&rrr_esz rd rn rm esz
261
+&rpr_esz rd pg rn esz
262
&rprr_s rd pg rn rm s
263
&rprr_esz rd pg rn rm esz
264
265
@@ -XXX,XX +XXX,XX @@
266
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
267
&rprr_esz rm=%reg_movprfx
268
269
+# One register operand, with governing predicate, vector element size
270
+@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
271
+
272
# Basic Load/Store with 9-bit immediate offset
273
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
274
&rri imm=%imm9_16_10
275
@@ -XXX,XX +XXX,XX @@ UDIV_zpzz 00000100 .. 010 101 000 ... ..... ..... @rdn_pg_rm
276
SDIV_zpzz 00000100 .. 010 110 000 ... ..... ..... @rdm_pg_rn # SDIVR
277
UDIV_zpzz 00000100 .. 010 111 000 ... ..... ..... @rdm_pg_rn # UDIVR
278
279
+### SVE Integer Reduction Group
280
+
281
+# SVE bitwise logical reduction (predicated)
282
+ORV 00000100 .. 011 000 001 ... ..... ..... @rd_pg_rn
283
+EORV 00000100 .. 011 001 001 ... ..... ..... @rd_pg_rn
284
+ANDV 00000100 .. 011 010 001 ... ..... ..... @rd_pg_rn
285
+
286
+# SVE integer add reduction (predicated)
287
+# Note that saddv requires size != 3.
288
+UADDV 00000100 .. 000 001 001 ... ..... ..... @rd_pg_rn
289
+SADDV 00000100 .. 000 000 001 ... ..... ..... @rd_pg_rn
290
+
291
+# SVE integer min/max reduction (predicated)
292
+SMAXV 00000100 .. 001 000 001 ... ..... ..... @rd_pg_rn
293
+UMAXV 00000100 .. 001 001 001 ... ..... ..... @rd_pg_rn
294
+SMINV 00000100 .. 001 010 001 ... ..... ..... @rd_pg_rn
295
+UMINV 00000100 .. 001 011 001 ... ..... ..... @rd_pg_rn
296
+
297
### SVE Logical - Unpredicated Group
298
299
# SVE bitwise logical operations (unpredicated)
300
--
52
--
301
2.17.0
53
2.20.1
302
54
303
55
1
From: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This is a preparation for the coming feature of dynamically creating an XML
3
Add definitions for all of the fields, up to ARMv8.5.
4
description for the ARM sysregs.
4
Convert the existing RESERVED register to a full register.
5
A register with ARM_CP_NO_GDB enabled will not be shown in the dynamic XML.
5
Query KVM for the value of this register on the host.
6
This bit is enabled automatically when creating CP_ANY wildcard aliases.
7
This bit could be enabled manually for any register we want to remove from the
8
dynamic XML description.
9
6
10
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Tested-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 20200208125816.14954-18-richard.henderson@linaro.org
14
Message-id: 1524153386-3550-2-git-send-email-abdallah.bouassida@lauterbach.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
target/arm/cpu.h | 3 ++-
12
target/arm/cpu.h | 17 +++++++++++++++++
18
target/arm/helper.c | 2 +-
13
target/arm/helper.c | 4 ++--
19
2 files changed, 3 insertions(+), 2 deletions(-)
14
target/arm/kvm64.c | 2 ++
15
3 files changed, 21 insertions(+), 2 deletions(-)
20
16
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
21
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
26
#define ARM_LAST_SPECIAL ARM_CP_DC_ZVA
22
uint64_t id_aa64pfr1;
27
#define ARM_CP_FPU 0x1000
23
uint64_t id_aa64mmfr0;
28
#define ARM_CP_SVE 0x2000
24
uint64_t id_aa64mmfr1;
29
+#define ARM_CP_NO_GDB 0x4000
25
+ uint64_t id_aa64mmfr2;
30
/* Used only as a terminator for ARMCPRegInfo lists */
26
} isar;
31
#define ARM_CP_SENTINEL 0xffff
27
uint32_t midr;
32
/* Mask of only the flag bits in a type field */
28
uint32_t revidr;
33
-#define ARM_CP_FLAG_MASK 0x30ff
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, PAN, 20, 4)
34
+#define ARM_CP_FLAG_MASK 0x70ff
30
FIELD(ID_AA64MMFR1, SPECSEI, 24, 4)
35
31
FIELD(ID_AA64MMFR1, XNX, 28, 4)
36
/* Valid values for ARMCPRegInfo state field, indicating which of
32
37
* the AArch32 and AArch64 execution states this register is visible in.
33
+FIELD(ID_AA64MMFR2, CNP, 0, 4)
34
+FIELD(ID_AA64MMFR2, UAO, 4, 4)
35
+FIELD(ID_AA64MMFR2, LSM, 8, 4)
36
+FIELD(ID_AA64MMFR2, IESB, 12, 4)
37
+FIELD(ID_AA64MMFR2, VARANGE, 16, 4)
38
+FIELD(ID_AA64MMFR2, CCIDX, 20, 4)
39
+FIELD(ID_AA64MMFR2, NV, 24, 4)
40
+FIELD(ID_AA64MMFR2, ST, 28, 4)
41
+FIELD(ID_AA64MMFR2, AT, 32, 4)
42
+FIELD(ID_AA64MMFR2, IDS, 36, 4)
43
+FIELD(ID_AA64MMFR2, FWB, 40, 4)
44
+FIELD(ID_AA64MMFR2, TTL, 48, 4)
45
+FIELD(ID_AA64MMFR2, BBM, 52, 4)
46
+FIELD(ID_AA64MMFR2, EVT, 56, 4)
47
+FIELD(ID_AA64MMFR2, E0PD, 60, 4)
48
+
49
FIELD(ID_DFR0, COPDBG, 0, 4)
50
FIELD(ID_DFR0, COPSDBG, 4, 4)
51
FIELD(ID_DFR0, MMAPDBG, 8, 4)
38
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper.c
54
--- a/target/arm/helper.c
41
+++ b/target/arm/helper.c
55
+++ b/target/arm/helper.c
42
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
56
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
43
if (((r->crm == CP_ANY) && crm != 0) ||
57
.access = PL1_R, .type = ARM_CP_CONST,
44
((r->opc1 == CP_ANY) && opc1 != 0) ||
58
.accessfn = access_aa64_tid3,
45
((r->opc2 == CP_ANY) && opc2 != 0)) {
59
.resetvalue = cpu->isar.id_aa64mmfr1 },
46
- r2->type |= ARM_CP_ALIAS;
60
- { .name = "ID_AA64MMFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
47
+ r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB;
61
+ { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64,
48
}
62
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
49
63
.access = PL1_R, .type = ARM_CP_CONST,
50
/* Check that raw accesses are either forbidden or handled. Note that
64
.accessfn = access_aa64_tid3,
65
- .resetvalue = 0 },
66
+ .resetvalue = cpu->isar.id_aa64mmfr2 },
67
{ .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
68
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3,
69
.access = PL1_R, .type = ARM_CP_CONST,
70
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/kvm64.c
73
+++ b/target/arm/kvm64.c
74
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
75
ARM64_SYS_REG(3, 0, 0, 7, 0));
76
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr1,
77
ARM64_SYS_REG(3, 0, 0, 7, 1));
78
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr2,
79
+ ARM64_SYS_REG(3, 0, 0, 7, 2));
80
81
/*
82
* Note that if AArch32 support is not present in the host,
51
--
83
--
52
2.17.0
84
2.20.1
53
85
54
86
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180516223007.10256-11-richard.henderson@linaro.org
5
Message-id: 20200208125816.14954-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/helper-sve.h | 25 ++++
8
target/arm/cpu.h | 6 ++++++
9
target/arm/sve_helper.c | 264 +++++++++++++++++++++++++++++++++++++
9
target/arm/internals.h | 3 +++
10
target/arm/translate-sve.c | 130 ++++++++++++++++++
10
target/arm/helper.c | 21 +++++++++++++++++++++
11
target/arm/sve.decode | 26 ++++
11
target/arm/translate-a64.c | 14 ++++++++++++++
12
4 files changed, 445 insertions(+)
12
4 files changed, 44 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/target/arm/cpu.h
17
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
19
DEF_HELPER_FLAGS_3(sve_uminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
19
#define PSTATE_IL (1U << 20)
20
DEF_HELPER_FLAGS_3(sve_uminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
20
#define PSTATE_SS (1U << 21)
21
21
#define PSTATE_PAN (1U << 22)
22
+DEF_HELPER_FLAGS_3(sve_clr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
+#define PSTATE_UAO (1U << 23)
23
+DEF_HELPER_FLAGS_3(sve_clr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
#define PSTATE_V (1U << 28)
24
+DEF_HELPER_FLAGS_3(sve_clr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
#define PSTATE_C (1U << 29)
25
+DEF_HELPER_FLAGS_3(sve_clr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
#define PSTATE_Z (1U << 30)
26
+
26
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
27
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
28
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(sve_asrd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_4(sve_asrd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_4(sve_asrd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(sve_asrd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
+
47
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
48
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
49
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
50
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/sve_helper.c
53
+++ b/target/arm/sve_helper.c
54
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
55
return flags;
56
}
28
}
57
29
58
+/* Expand active predicate bits to bytes, for byte elements.
30
+static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
59
+ * for (i = 0; i < 256; ++i) {
60
+ * unsigned long m = 0;
61
+ * for (j = 0; j < 8; j++) {
62
+ * if ((i >> j) & 1) {
63
+ * m |= 0xfful << (j << 3);
64
+ * }
65
+ * }
66
+ * printf("0x%016lx,\n", m);
67
+ * }
68
+ */
69
+static inline uint64_t expand_pred_b(uint8_t byte)
70
+{
31
+{
71
+ static const uint64_t word[256] = {
32
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
72
+ 0x0000000000000000, 0x00000000000000ff, 0x000000000000ff00,
73
+ 0x000000000000ffff, 0x0000000000ff0000, 0x0000000000ff00ff,
74
+ 0x0000000000ffff00, 0x0000000000ffffff, 0x00000000ff000000,
75
+ 0x00000000ff0000ff, 0x00000000ff00ff00, 0x00000000ff00ffff,
76
+ 0x00000000ffff0000, 0x00000000ffff00ff, 0x00000000ffffff00,
77
+ 0x00000000ffffffff, 0x000000ff00000000, 0x000000ff000000ff,
78
+ 0x000000ff0000ff00, 0x000000ff0000ffff, 0x000000ff00ff0000,
79
+ 0x000000ff00ff00ff, 0x000000ff00ffff00, 0x000000ff00ffffff,
80
+ 0x000000ffff000000, 0x000000ffff0000ff, 0x000000ffff00ff00,
81
+ 0x000000ffff00ffff, 0x000000ffffff0000, 0x000000ffffff00ff,
82
+ 0x000000ffffffff00, 0x000000ffffffffff, 0x0000ff0000000000,
83
+ 0x0000ff00000000ff, 0x0000ff000000ff00, 0x0000ff000000ffff,
84
+ 0x0000ff0000ff0000, 0x0000ff0000ff00ff, 0x0000ff0000ffff00,
85
+ 0x0000ff0000ffffff, 0x0000ff00ff000000, 0x0000ff00ff0000ff,
86
+ 0x0000ff00ff00ff00, 0x0000ff00ff00ffff, 0x0000ff00ffff0000,
87
+ 0x0000ff00ffff00ff, 0x0000ff00ffffff00, 0x0000ff00ffffffff,
88
+ 0x0000ffff00000000, 0x0000ffff000000ff, 0x0000ffff0000ff00,
89
+ 0x0000ffff0000ffff, 0x0000ffff00ff0000, 0x0000ffff00ff00ff,
90
+ 0x0000ffff00ffff00, 0x0000ffff00ffffff, 0x0000ffffff000000,
91
+ 0x0000ffffff0000ff, 0x0000ffffff00ff00, 0x0000ffffff00ffff,
92
+ 0x0000ffffffff0000, 0x0000ffffffff00ff, 0x0000ffffffffff00,
93
+ 0x0000ffffffffffff, 0x00ff000000000000, 0x00ff0000000000ff,
94
+ 0x00ff00000000ff00, 0x00ff00000000ffff, 0x00ff000000ff0000,
95
+ 0x00ff000000ff00ff, 0x00ff000000ffff00, 0x00ff000000ffffff,
96
+ 0x00ff0000ff000000, 0x00ff0000ff0000ff, 0x00ff0000ff00ff00,
97
+ 0x00ff0000ff00ffff, 0x00ff0000ffff0000, 0x00ff0000ffff00ff,
98
+ 0x00ff0000ffffff00, 0x00ff0000ffffffff, 0x00ff00ff00000000,
99
+ 0x00ff00ff000000ff, 0x00ff00ff0000ff00, 0x00ff00ff0000ffff,
100
+ 0x00ff00ff00ff0000, 0x00ff00ff00ff00ff, 0x00ff00ff00ffff00,
101
+ 0x00ff00ff00ffffff, 0x00ff00ffff000000, 0x00ff00ffff0000ff,
102
+ 0x00ff00ffff00ff00, 0x00ff00ffff00ffff, 0x00ff00ffffff0000,
103
+ 0x00ff00ffffff00ff, 0x00ff00ffffffff00, 0x00ff00ffffffffff,
104
+ 0x00ffff0000000000, 0x00ffff00000000ff, 0x00ffff000000ff00,
105
+ 0x00ffff000000ffff, 0x00ffff0000ff0000, 0x00ffff0000ff00ff,
106
+ 0x00ffff0000ffff00, 0x00ffff0000ffffff, 0x00ffff00ff000000,
107
+ 0x00ffff00ff0000ff, 0x00ffff00ff00ff00, 0x00ffff00ff00ffff,
108
+ 0x00ffff00ffff0000, 0x00ffff00ffff00ff, 0x00ffff00ffffff00,
109
+ 0x00ffff00ffffffff, 0x00ffffff00000000, 0x00ffffff000000ff,
110
+ 0x00ffffff0000ff00, 0x00ffffff0000ffff, 0x00ffffff00ff0000,
111
+ 0x00ffffff00ff00ff, 0x00ffffff00ffff00, 0x00ffffff00ffffff,
112
+ 0x00ffffffff000000, 0x00ffffffff0000ff, 0x00ffffffff00ff00,
113
+ 0x00ffffffff00ffff, 0x00ffffffffff0000, 0x00ffffffffff00ff,
114
+ 0x00ffffffffffff00, 0x00ffffffffffffff, 0xff00000000000000,
115
+ 0xff000000000000ff, 0xff0000000000ff00, 0xff0000000000ffff,
116
+ 0xff00000000ff0000, 0xff00000000ff00ff, 0xff00000000ffff00,
117
+ 0xff00000000ffffff, 0xff000000ff000000, 0xff000000ff0000ff,
118
+ 0xff000000ff00ff00, 0xff000000ff00ffff, 0xff000000ffff0000,
119
+ 0xff000000ffff00ff, 0xff000000ffffff00, 0xff000000ffffffff,
120
+ 0xff0000ff00000000, 0xff0000ff000000ff, 0xff0000ff0000ff00,
121
+ 0xff0000ff0000ffff, 0xff0000ff00ff0000, 0xff0000ff00ff00ff,
122
+ 0xff0000ff00ffff00, 0xff0000ff00ffffff, 0xff0000ffff000000,
123
+ 0xff0000ffff0000ff, 0xff0000ffff00ff00, 0xff0000ffff00ffff,
124
+ 0xff0000ffffff0000, 0xff0000ffffff00ff, 0xff0000ffffffff00,
125
+ 0xff0000ffffffffff, 0xff00ff0000000000, 0xff00ff00000000ff,
126
+ 0xff00ff000000ff00, 0xff00ff000000ffff, 0xff00ff0000ff0000,
127
+ 0xff00ff0000ff00ff, 0xff00ff0000ffff00, 0xff00ff0000ffffff,
128
+ 0xff00ff00ff000000, 0xff00ff00ff0000ff, 0xff00ff00ff00ff00,
129
+ 0xff00ff00ff00ffff, 0xff00ff00ffff0000, 0xff00ff00ffff00ff,
130
+ 0xff00ff00ffffff00, 0xff00ff00ffffffff, 0xff00ffff00000000,
131
+ 0xff00ffff000000ff, 0xff00ffff0000ff00, 0xff00ffff0000ffff,
132
+ 0xff00ffff00ff0000, 0xff00ffff00ff00ff, 0xff00ffff00ffff00,
133
+ 0xff00ffff00ffffff, 0xff00ffffff000000, 0xff00ffffff0000ff,
134
+ 0xff00ffffff00ff00, 0xff00ffffff00ffff, 0xff00ffffffff0000,
135
+ 0xff00ffffffff00ff, 0xff00ffffffffff00, 0xff00ffffffffffff,
136
+ 0xffff000000000000, 0xffff0000000000ff, 0xffff00000000ff00,
137
+ 0xffff00000000ffff, 0xffff000000ff0000, 0xffff000000ff00ff,
138
+ 0xffff000000ffff00, 0xffff000000ffffff, 0xffff0000ff000000,
139
+ 0xffff0000ff0000ff, 0xffff0000ff00ff00, 0xffff0000ff00ffff,
140
+ 0xffff0000ffff0000, 0xffff0000ffff00ff, 0xffff0000ffffff00,
141
+ 0xffff0000ffffffff, 0xffff00ff00000000, 0xffff00ff000000ff,
142
+ 0xffff00ff0000ff00, 0xffff00ff0000ffff, 0xffff00ff00ff0000,
143
+ 0xffff00ff00ff00ff, 0xffff00ff00ffff00, 0xffff00ff00ffffff,
144
+ 0xffff00ffff000000, 0xffff00ffff0000ff, 0xffff00ffff00ff00,
145
+ 0xffff00ffff00ffff, 0xffff00ffffff0000, 0xffff00ffffff00ff,
146
+ 0xffff00ffffffff00, 0xffff00ffffffffff, 0xffffff0000000000,
147
+ 0xffffff00000000ff, 0xffffff000000ff00, 0xffffff000000ffff,
148
+ 0xffffff0000ff0000, 0xffffff0000ff00ff, 0xffffff0000ffff00,
149
+ 0xffffff0000ffffff, 0xffffff00ff000000, 0xffffff00ff0000ff,
150
+ 0xffffff00ff00ff00, 0xffffff00ff00ffff, 0xffffff00ffff0000,
151
+ 0xffffff00ffff00ff, 0xffffff00ffffff00, 0xffffff00ffffffff,
152
+ 0xffffffff00000000, 0xffffffff000000ff, 0xffffffff0000ff00,
153
+ 0xffffffff0000ffff, 0xffffffff00ff0000, 0xffffffff00ff00ff,
154
+ 0xffffffff00ffff00, 0xffffffff00ffffff, 0xffffffffff000000,
155
+ 0xffffffffff0000ff, 0xffffffffff00ff00, 0xffffffffff00ffff,
156
+ 0xffffffffffff0000, 0xffffffffffff00ff, 0xffffffffffffff00,
157
+ 0xffffffffffffffff,
158
+ };
159
+ return word[byte];
160
+}
33
+}
161
+
34
+
162
+/* Similarly for half-word elements.
35
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
163
+ * for (i = 0; i < 256; ++i) {
36
{
164
+ * unsigned long m = 0;
37
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
165
+ * if (i & 0xaa) {
38
diff --git a/target/arm/internals.h b/target/arm/internals.h
166
+ * continue;
39
index XXXXXXX..XXXXXXX 100644
167
+ * }
40
--- a/target/arm/internals.h
168
+ * for (j = 0; j < 8; j += 2) {
41
+++ b/target/arm/internals.h
169
+ * if ((i >> j) & 1) {
42
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
170
+ * m |= 0xfffful << (j << 3);
43
if (isar_feature_aa64_pan(id)) {
171
+ * }
44
valid |= PSTATE_PAN;
172
+ * }
45
}
173
+ * printf("[0x%x] = 0x%016lx,\n", i, m);
46
+ if (isar_feature_aa64_uao(id)) {
174
+ * }
47
+ valid |= PSTATE_UAO;
175
+ */
48
+ }
176
+static inline uint64_t expand_pred_h(uint8_t byte)
49
50
return valid;
51
}
52
diff --git a/target/arm/helper.c b/target/arm/helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/helper.c
55
+++ b/target/arm/helper.c
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pan_reginfo = {
57
.readfn = aa64_pan_read, .writefn = aa64_pan_write
58
};
59
60
+static uint64_t aa64_uao_read(CPUARMState *env, const ARMCPRegInfo *ri)
177
+{
61
+{
178
+ static const uint64_t word[] = {
62
+ return env->pstate & PSTATE_UAO;
179
+ [0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
180
+ [0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
181
+ [0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
182
+}
+
+static void aa64_uao_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_UAO) | (value & PSTATE_UAO);
+}
+
+static const ARMCPRegInfo uao_reginfo = {
+    .name = "UAO", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 4,
+    .type = ARM_CP_NO_RAW, .access = PL1_RW,
+    .readfn = aa64_uao_read, .writefn = aa64_uao_write
+};
+
 static CPAccessResult aa64_cacheop_access(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, ats1cp_reginfo);
     }
 #endif
+    if (cpu_isar_feature(aa64_uao, cpu)) {
+        define_one_arm_cp_reg(cpu, &uao_reginfo);
+    }
 
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         s->base.is_jmp = DISAS_NEXT;
         break;
 
+    case 0x03: /* UAO */
+        if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
+            goto do_unallocated;
+        }
+        if (crm & 1) {
+            set_pstate_bits(PSTATE_UAO);
+        } else {
+            clear_pstate_bits(PSTATE_UAO);
+        }
+        t1 = tcg_const_i32(s->current_el);
+        gen_helper_rebuild_hflags_a64(cpu_env, t1);
+        tcg_temp_free_i32(t1);
+        break;
+
     case 0x04: /* PAN */
         if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
             goto do_unallocated;
--
2.20.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 41 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     }
 
     /* Compute the condition for using AccType_UNPRIV for LDTR et al. */
-    /* TODO: ARMv8.2-UAO */
-    switch (mmu_idx) {
-    case ARMMMUIdx_E10_1:
-    case ARMMMUIdx_E10_1_PAN:
-    case ARMMMUIdx_SE10_1:
-    case ARMMMUIdx_SE10_1_PAN:
-        /* TODO: ARMv8.3-NV */
-        flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
-        break;
-    case ARMMMUIdx_E20_2:
-    case ARMMMUIdx_E20_2_PAN:
-        /* TODO: ARMv8.4-SecEL2 */
-        /*
-         * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
-         * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
-         */
-        if (env->cp15.hcr_el2 & HCR_TGE) {
+    if (!(env->pstate & PSTATE_UAO)) {
+        switch (mmu_idx) {
+        case ARMMMUIdx_E10_1:
+        case ARMMMUIdx_E10_1_PAN:
+        case ARMMMUIdx_SE10_1:
+        case ARMMMUIdx_SE10_1_PAN:
+            /* TODO: ARMv8.3-NV */
             flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            break;
+        case ARMMMUIdx_E20_2:
+        case ARMMMUIdx_E20_2_PAN:
+            /* TODO: ARMv8.4-SecEL2 */
+            /*
+             * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
+             * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
+             */
+            if (env->cp15.hcr_el2 & HCR_TGE) {
+                flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            }
+            break;
+        default:
+            break;
         }
-        break;
-    default:
-        break;
     }
 
     return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200208125816.14954-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         cpu->isar.id_aa64mmfr1 = t;
 
+        t = cpu->isar.id_aa64mmfr2;
+        t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
+        cpu->isar.id_aa64mmfr2 = t;
+
         /* Replicate the same data to the 32-bit id registers.  */
         u = cpu->isar.id_isar5;
         u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
--
2.20.1

From: Guenter Roeck <linux@roeck-us.net>

Initialize EHCI controllers on AST2400 and AST2500 using the existing
TYPE_PLATFORM_EHCI. After this change, booting ast2500-evb into Linux
successfully instantiates a USB interface.

ehci-platform 1e6a3000.usb: EHCI Host Controller
ehci-platform 1e6a3000.usb: new USB bus registered, assigned bus number 1
ehci-platform 1e6a3000.usb: irq 21, io mem 0x1e6a3000
ehci-platform 1e6a3000.usb: USB 2.0 started, EHCI 1.00
usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.05
usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb1: Product: EHCI Host Controller

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200206183437.3979-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/aspeed_soc.h |  6 ++++++
 hw/arm/aspeed_soc.c         | 25 +++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/aspeed_soc.h
+++ b/include/hw/arm/aspeed_soc.h
@@ -XXX,XX +XXX,XX @@
 #include "target/arm/cpu.h"
 #include "hw/gpio/aspeed_gpio.h"
 #include "hw/sd/aspeed_sdhci.h"
+#include "hw/usb/hcd-ehci.h"
 
 #define ASPEED_SPIS_NUM  2
+#define ASPEED_EHCIS_NUM 2
 #define ASPEED_WDTS_NUM  4
 #define ASPEED_CPUS_NUM  2
 #define ASPEED_MACS_NUM  4
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSoCState {
     AspeedXDMAState xdma;
     AspeedSMCState fmc;
     AspeedSMCState spi[ASPEED_SPIS_NUM];
+    EHCISysBusState ehci[ASPEED_EHCIS_NUM];
     AspeedSDMCState sdmc;
     AspeedWDTState wdt[ASPEED_WDTS_NUM];
     FTGMAC100State ftgmac100[ASPEED_MACS_NUM];
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSoCClass {
     uint32_t silicon_rev;
     uint64_t sram_size;
     int spis_num;
+    int ehcis_num;
     int wdts_num;
     int macs_num;
     const int *irqmap;
@@ -XXX,XX +XXX,XX @@ enum {
     ASPEED_FMC,
     ASPEED_SPI1,
     ASPEED_SPI2,
+    ASPEED_EHCI1,
+    ASPEED_EHCI2,
     ASPEED_VIC,
     ASPEED_SDMC,
     ASPEED_SCU,
diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_soc.c
+++ b/hw/arm/aspeed_soc.c
@@ -XXX,XX +XXX,XX @@ static const hwaddr aspeed_soc_ast2400_memmap[] = {
     [ASPEED_IOMEM]  = 0x1E600000,
     [ASPEED_FMC]    = 0x1E620000,
     [ASPEED_SPI1]   = 0x1E630000,
+    [ASPEED_EHCI1]  = 0x1E6A1000,
     [ASPEED_VIC]    = 0x1E6C0000,
     [ASPEED_SDMC]   = 0x1E6E0000,
     [ASPEED_SCU]    = 0x1E6E2000,
@@ -XXX,XX +XXX,XX @@ static const hwaddr aspeed_soc_ast2500_memmap[] = {
     [ASPEED_FMC]    = 0x1E620000,
     [ASPEED_SPI1]   = 0x1E630000,
     [ASPEED_SPI2]   = 0x1E631000,
+    [ASPEED_EHCI1]  = 0x1E6A1000,
+    [ASPEED_EHCI2]  = 0x1E6A3000,
     [ASPEED_VIC]    = 0x1E6C0000,
     [ASPEED_SDMC]   = 0x1E6E0000,
     [ASPEED_SCU]    = 0x1E6E2000,
@@ -XXX,XX +XXX,XX @@ static const int aspeed_soc_ast2400_irqmap[] = {
     [ASPEED_UART5]  = 10,
     [ASPEED_VUART]  = 8,
     [ASPEED_FMC]    = 19,
+    [ASPEED_EHCI1]  = 5,
+    [ASPEED_EHCI2]  = 13,
     [ASPEED_SDMC]   = 0,
     [ASPEED_SCU]    = 21,
     [ASPEED_ADC]    = 31,
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
                               sizeof(s->spi[i]), typename);
     }
 
+    for (i = 0; i < sc->ehcis_num; i++) {
+        sysbus_init_child_obj(obj, "ehci[*]", OBJECT(&s->ehci[i]),
+                              sizeof(s->ehci[i]), TYPE_PLATFORM_EHCI);
+    }
+
     snprintf(typename, sizeof(typename), "aspeed.sdmc-%s", socname);
104
+ * Note that we still store the entire 64-bit unit into cpu_env.
106
sysbus_init_child_obj(obj, "sdmc", OBJECT(&s->sdmc), sizeof(s->sdmc),
105
+ */
107
typename);
106
+ if (len_remain) {
108
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
107
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + len_align);
109
s->spi[i].ctrl->flash_window_base);
110
}
111
112
+ /* EHCI */
113
+ for (i = 0; i < sc->ehcis_num; i++) {
114
+ object_property_set_bool(OBJECT(&s->ehci[i]), true, "realized", &err);
115
+ if (err) {
116
+ error_propagate(errp, err);
117
+ return;
118
+ }
119
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ehci[i]), 0,
120
+ sc->memmap[ASPEED_EHCI1 + i]);
121
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->ehci[i]), 0,
122
+ aspeed_soc_get_irq(s, ASPEED_EHCI1 + i));
123
+ }
108
+
124
+
109
+ switch (len_remain) {
125
/* SDMC - SDRAM Memory Controller */
110
+ case 2:
126
object_property_set_bool(OBJECT(&s->sdmc), true, "realized", &err);
111
+ case 4:
127
if (err) {
112
+ case 8:
128
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2400_class_init(ObjectClass *oc, void *data)
113
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LE | ctz32(len_remain));
129
sc->silicon_rev = AST2400_A1_SILICON_REV;
114
+ break;
130
sc->sram_size = 0x8000;
115
+
131
sc->spis_num = 1;
116
+ case 6:
132
+ sc->ehcis_num = 1;
117
+ t1 = tcg_temp_new_i64();
133
sc->wdts_num = 2;
118
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LEUL);
134
sc->macs_num = 2;
119
+ tcg_gen_addi_i64(addr, addr, 4);
135
sc->irqmap = aspeed_soc_ast2400_irqmap;
120
+ tcg_gen_qemu_ld_i64(t1, addr, midx, MO_LEUW);
136
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2500_class_init(ObjectClass *oc, void *data)
121
+ tcg_gen_deposit_i64(t0, t0, t1, 32, 32);
137
sc->silicon_rev = AST2500_A1_SILICON_REV;
122
+ tcg_temp_free_i64(t1);
138
sc->sram_size = 0x9000;
123
+ break;
139
sc->spis_num = 2;
124
+
140
+ sc->ehcis_num = 2;
125
+ default:
141
sc->wdts_num = 3;
126
+ g_assert_not_reached();
142
sc->macs_num = 2;
127
+ }
143
sc->irqmap = aspeed_soc_ast2500_irqmap;
128
+ tcg_gen_st_i64(t0, cpu_env, vofs + len_align);
129
+ }
130
+ tcg_temp_free_i64(addr);
131
+ tcg_temp_free_i64(t0);
132
+}
133
+
134
+static bool trans_LDR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
135
+{
136
+ if (sve_access_check(s)) {
137
+ int size = vec_full_reg_size(s);
138
+ int off = vec_full_reg_offset(s, a->rd);
139
+ do_ldr(s, off, size, a->rn, a->imm * size);
140
+ }
141
+ return true;
142
+}
143
+
144
+static bool trans_LDR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
145
+{
146
+ if (sve_access_check(s)) {
147
+ int size = pred_full_reg_size(s);
148
+ int off = pred_full_reg_offset(s, a->rd);
149
+ do_ldr(s, off, size, a->rn, a->imm * size);
150
+ }
151
+ return true;
152
+}
153
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
154
index XXXXXXX..XXXXXXX 100644
155
--- a/target/arm/sve.decode
156
+++ b/target/arm/sve.decode
157
@@ -XXX,XX +XXX,XX @@
158
# This file is processed by scripts/decodetree.py
159
#
160
161
+###########################################################################
162
+# Named fields. These are primarily for disjoint fields.
163
+
164
+%imm9_16_10 16:s6 10:3
165
+
166
###########################################################################
167
# Named attribute sets. These are used to make nice(er) names
168
# when creating helpers common to those for the individual
169
# instruction patterns.
170
171
+&rri rd rn imm
172
&rrr_esz rd rn rm esz
173
174
###########################################################################
175
@@ -XXX,XX +XXX,XX @@
176
# Three operand with unused vector element size
177
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
178
179
+# Basic Load/Store with 9-bit immediate offset
180
+@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
181
+ &rri imm=%imm9_16_10
182
+@rd_rn_i9 ........ ........ ...... rn:5 rd:5 \
183
+ &rri imm=%imm9_16_10
184
+
185
###########################################################################
186
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
187
188
@@ -XXX,XX +XXX,XX @@ AND_zzz 00000100 00 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
189
ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
190
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
191
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
192
+
193
+### SVE Memory - 32-bit Gather and Unsized Contiguous Group
194
+
195
+# SVE load predicate register
196
+LDR_pri 10000101 10 ...... 000 ... ..... 0 .... @pd_rn_i9
197
+
198
+# SVE load vector register
199
+LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9
200
--
144
--
201
2.17.0
145
2.20.1
202
146
203
147
diff view generated by jsdifflib
From: Guenter Roeck <linux@roeck-us.net>

Initialize EHCI controllers on AST2600 using the existing
TYPE_PLATFORM_EHCI. After this change, booting ast2600-evb
into Linux successfully instantiates a USB interface after
the necessary changes are made to its devicetree files.

ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ehci-platform: EHCI generic platform driver
ehci-platform 1e6a3000.usb: EHCI Host Controller
ehci-platform 1e6a3000.usb: new USB bus registered, assigned bus number 1
ehci-platform 1e6a3000.usb: irq 25, io mem 0x1e6a3000
ehci-platform 1e6a3000.usb: USB 2.0 started, EHCI 1.00
usb usb1: Manufacturer: Linux 5.5.0-09825-ga0802f2d0ef5-dirty ehci_hcd
usb 1-1: new high-speed USB device number 2 using ehci-platform

Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200207174548.9087-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/aspeed_ast2600.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_ast2600.c
+++ b/hw/arm/aspeed_ast2600.c
@@ -XXX,XX +XXX,XX @@ static const hwaddr aspeed_soc_ast2600_memmap[] = {
     [ASPEED_FMC]       = 0x1E620000,
     [ASPEED_SPI1]      = 0x1E630000,
     [ASPEED_SPI2]      = 0x1E641000,
+    [ASPEED_EHCI1]     = 0x1E6A1000,
+    [ASPEED_EHCI2]     = 0x1E6A3000,
     [ASPEED_MII1]      = 0x1E650000,
     [ASPEED_MII2]      = 0x1E650008,
     [ASPEED_MII3]      = 0x1E650010,
@@ -XXX,XX +XXX,XX @@ static const int aspeed_soc_ast2600_irqmap[] = {
     [ASPEED_ADC]       = 78,
     [ASPEED_XDMA]      = 6,
     [ASPEED_SDHCI]     = 43,
+    [ASPEED_EHCI1]     = 5,
+    [ASPEED_EHCI2]     = 9,
     [ASPEED_EMMC]      = 15,
     [ASPEED_GPIO]      = 40,
     [ASPEED_GPIO_1_8V] = 11,
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_init(Object *obj)
                               sizeof(s->spi[i]), typename);
     }
 
+    for (i = 0; i < sc->ehcis_num; i++) {
+        sysbus_init_child_obj(obj, "ehci[*]", OBJECT(&s->ehci[i]),
+                              sizeof(s->ehci[i]), TYPE_PLATFORM_EHCI);
+    }
+
     snprintf(typename, sizeof(typename), "aspeed.sdmc-%s", socname);
     sysbus_init_child_obj(obj, "sdmc", OBJECT(&s->sdmc), sizeof(s->sdmc),
                           typename);
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_realize(DeviceState *dev, Error **errp)
                                 s->spi[i].ctrl->flash_window_base);
     }
 
+    /* EHCI */
+    for (i = 0; i < sc->ehcis_num; i++) {
+        object_property_set_bool(OBJECT(&s->ehci[i]), true, "realized", &err);
+        if (err) {
+            error_propagate(errp, err);
+            return;
+        }
+        sysbus_mmio_map(SYS_BUS_DEVICE(&s->ehci[i]), 0,
+                        sc->memmap[ASPEED_EHCI1 + i]);
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->ehci[i]), 0,
+                           aspeed_soc_get_irq(s, ASPEED_EHCI1 + i));
+    }
+
     /* SDMC - SDRAM Memory Controller */
     object_property_set_bool(OBJECT(&s->sdmc), true, "realized", &err);
     if (err) {
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_class_init(ObjectClass *oc, void *data)
     sc->silicon_rev = AST2600_A0_SILICON_REV;
     sc->sram_size = 0x10000;
     sc->spis_num = 2;
+    sc->ehcis_num = 2;
     sc->wdts_num = 4;
     sc->macs_num = 4;
     sc->irqmap = aspeed_soc_ast2600_irqmap;
-- 
2.20.1
From: Chen Qun <kuhn.chenqun@huawei.com>

It's easy to reproduce as follows:
virsh qemu-monitor-command vm1 --pretty '{"execute": "device-list-properties",
"arguments":{"typename":"exynos4210.uart"}}'

ASAN shows memory leak stack:
#1 0xfffd896d71cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
#2 0xaaad270beee3 in timer_new_full /qemu/include/qemu/timer.h:530
#3 0xaaad270beee3 in timer_new /qemu/include/qemu/timer.h:551
#4 0xaaad270beee3 in timer_new_ns /qemu/include/qemu/timer.h:569
#5 0xaaad270beee3 in exynos4210_uart_init /qemu/hw/char/exynos4210_uart.c:677
#6 0xaaad275c8f4f in object_initialize_with_type /qemu/qom/object.c:516
#7 0xaaad275c91bb in object_new_with_type /qemu/qom/object.c:684
#8 0xaaad2755df2f in qmp_device_list_properties /qemu/qom/qom-qmp-cmds.c:152

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200213025603.149432-1-kuhn.chenqun@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/exynos4210_uart.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/char/exynos4210_uart.c b/hw/char/exynos4210_uart.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/exynos4210_uart.c
+++ b/hw/char/exynos4210_uart.c
@@ -XXX,XX +XXX,XX @@ static void exynos4210_uart_init(Object *obj)
     SysBusDevice *dev = SYS_BUS_DEVICE(obj);
     Exynos4210UartState *s = EXYNOS4210_UART(dev);
 
-    s->fifo_timeout_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
-                                         exynos4210_uart_timeout_int, s);
     s->wordtime = NANOSECONDS_PER_SECOND * 10 / 9600;
 
     /* memory mapping */
@@ -XXX,XX +XXX,XX @@ static void exynos4210_uart_realize(DeviceState *dev, Error **errp)
 {
     Exynos4210UartState *s = EXYNOS4210_UART(dev);
 
+    s->fifo_timeout_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                         exynos4210_uart_timeout_int, s);
+
     qemu_chr_fe_set_handlers(&s->chr, exynos4210_uart_can_receive,
                              exynos4210_uart_receive, exynos4210_uart_event,
                              NULL, s, NULL, true);
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

When booting without device tree, the Linux kernel uses the $R1
register to determine the machine type. The list of values is
registered at [1].

There are two entries for the Raspberry Pi:

- https://www.arm.linux.org.uk/developer/machines/list.php?mid=3138
  name: MACH_TYPE_BCM2708
  value: 0xc42 (3138)
  status: Active, not mainlined
  date: 15 Oct 2010

- https://www.arm.linux.org.uk/developer/machines/list.php?mid=4828
  name: MACH_TYPE_BCM2835
  value: 4828
  status: Active, mainlined
  date: 6 Dec 2013

QEMU always used the non-mainlined type MACH_TYPE_BCM2708.
The value 0xc43 is registered to 'MX51_GGC' (processor i.MX51), and
0xc44 to 'Western Digital Sharespace NAS' (processor Marvell 88F5182).

The Raspberry Pi foundation bootloader only sets the BCM2708 machine
type, see [2] or [3]:

133 9:
134 mov r0, #0
135 ldr r1, =3138 @ BCM2708 machine id
136 ldr r2, atags @ ATAGS
137 bx r4

U-Boot only uses MACH_TYPE_BCM2708 (see [4]):

25 /*
26 * 2835 is a SKU in a series for which the 2708 is the first or primary SoC,
27 * so 2708 has historically been used rather than a dedicated 2835 ID.
28 *
29 * We don't define a machine type for bcm2709/bcm2836 since the RPi Foundation
30 * chose to use someone else's previously registered machine ID (3139, MX51_GGC)
31 * rather than obtaining a valid ID:-/
32 *
33 * For the bcm2837, hopefully a machine type is not needed, since everything
34 * is DT.
35 */

While the definition MACH_BCM2709 with value 0xc43 was introduced in
a commit described "Add 2709 platform for Raspberry Pi 2" out of the
mainline Linux kernel, it does not seem used, and the platform is
introduced with Device Tree support anyway (see [5] and [6]).

Remove the unused values (0xc43 introduced in commit 1df7d1f9303aef
"raspi: add raspberry pi 2 machine" and 0xc44 in commit bade58166f4
"raspi: Raspberry Pi 3 support"), keeping only MACH_TYPE_BCM2708.

[1] https://www.arm.linux.org.uk/developer/machines/
[2] https://github.com/raspberrypi/tools/blob/920c7ed2e/armstubs/armstub7.S#L135
[3] https://github.com/raspberrypi/tools/blob/49719d554/armstubs/armstub7.S#L64
[4] https://gitlab.denx.de/u-boot/u-boot/blob/v2015.04/include/configs/rpi-common.h#L18
[5] https://github.com/raspberrypi/linux/commit/d9fac63adac#diff-6722037d79570df5b392a49e0e006573R526
[6] http://lists.infradead.org/pipermail/linux-rpi-kernel/2015-February/001268.html

Cc: Zoltán Baldaszti <bztemail@gmail.com>
Cc: Pekka Enberg <penberg@iki.fi>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Kshitij Soni <kshitij.soni@broadcom.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Cc: Andrew Baumann <Andrew.Baumann@microsoft.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200208165645.15657-2-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@
 #define FIRMWARE_ADDR_3 0x80000 /* Pi 3 loads kernel.img here by default */
 #define SPINTABLE_ADDR  0xd8 /* Pi 3 bootloader spintable */
 
-/* Table of Linux board IDs for different Pi versions */
-static const int raspi_boardid[] = {[1] = 0xc42, [2] = 0xc43, [3] = 0xc44};
+/* Registered machine type (matches RPi Foundation bootloader and U-Boot) */
+#define MACH_TYPE_BCM2708 3138
 
 typedef struct RasPiState {
     BCM283XState soc;
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
     static struct arm_boot_info binfo;
     int r;
 
-    binfo.board_id = raspi_boardid[version];
+    binfo.board_id = MACH_TYPE_BCM2708;
     binfo.ram_size = ram_size;
     binfo.nb_cpus = machine->smp.cpus;
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We hardcode the board revision as 0xa21041 for the raspi2, and
0xa02082 for the raspi3:

166 static void raspi_init(MachineState *machine, int version)
167 {
...
194 int board_rev = version == 3 ? 0xa02082 : 0xa21041;

These revision codes are for the 2B and 3B models, see:
https://www.raspberrypi.org/documentation/hardware/raspberrypi/revision-codes/README.md

Correct the board description.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-3-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static void raspi2_init(MachineState *machine)
 
 static void raspi2_machine_init(MachineClass *mc)
 {
-    mc->desc = "Raspberry Pi 2";
+    mc->desc = "Raspberry Pi 2B";
     mc->init = raspi2_init;
     mc->block_default_type = IF_SD;
     mc->no_parallel = 1;
@@ -XXX,XX +XXX,XX @@ static void raspi3_init(MachineState *machine)
 
 static void raspi3_machine_init(MachineClass *mc)
 {
-    mc->desc = "Raspberry Pi 3";
+    mc->desc = "Raspberry Pi 3B";
     mc->init = raspi3_init;
     mc->block_default_type = IF_SD;
     mc->no_parallel = 1;
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The board revision encodes the board version. Add a helper
to extract the version, and use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-4-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/error.h"
 #include "cpu.h"
 #include "hw/arm/bcm2836.h"
+#include "hw/registerfields.h"
 #include "qemu/error-report.h"
 #include "hw/boards.h"
 #include "hw/loader.h"
@@ -XXX,XX +XXX,XX @@ typedef struct RasPiState {
     MemoryRegion ram;
 } RasPiState;
 
+/*
+ * Board revision codes:
+ * www.raspberrypi.org/documentation/hardware/raspberrypi/revision-codes/
+ */
+FIELD(REV_CODE, REVISION,     0, 4);
+FIELD(REV_CODE, TYPE,         4, 8);
+FIELD(REV_CODE, PROCESSOR,   12, 4);
+FIELD(REV_CODE, MANUFACTURER, 16, 4);
+FIELD(REV_CODE, MEMORY_SIZE,  20, 3);
+FIELD(REV_CODE, STYLE,        23, 1);
+
+static int board_processor_id(uint32_t board_rev)
+{
+    assert(FIELD_EX32(board_rev, REV_CODE, STYLE)); /* Only new style */
+    return FIELD_EX32(board_rev, REV_CODE, PROCESSOR);
+}
+
+static int board_version(uint32_t board_rev)
+{
+    return board_processor_id(board_rev) + 1;
+}
+
 static void write_smpboot(ARMCPU *cpu, const struct arm_boot_info *info)
 {
     static const uint32_t smpboot[] = {
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
     arm_load_kernel(ARM_CPU(first_cpu), machine, &binfo);
 }
 
-static void raspi_init(MachineState *machine, int version)
+static void raspi_init(MachineState *machine, uint32_t board_rev)
 {
     RasPiState *s = g_new0(RasPiState, 1);
+    int version = board_version(board_rev);
     uint32_t vcram_size;
     DriveInfo *di;
     BlockBackend *blk;
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, int version)
     /* Setup the SOC */
     object_property_add_const_link(OBJECT(&s->soc), "ram", OBJECT(&s->ram),
                                    &error_abort);
-    int board_rev = version == 3 ? 0xa02082 : 0xa21041;
     object_property_set_int(OBJECT(&s->soc), board_rev, "board-rev",
                             &error_abort);
     object_property_set_bool(OBJECT(&s->soc), true, "realized", &error_abort);
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, int version)
 
 static void raspi2_init(MachineState *machine)
 {
-    raspi_init(machine, 2);
+    raspi_init(machine, 0xa21041);
 }
 
 static void raspi2_machine_init(MachineClass *mc)
@@ -XXX,XX +XXX,XX @@ DEFINE_MACHINE("raspi2", raspi2_machine_init)
 #ifdef TARGET_AARCH64
 static void raspi3_init(MachineState *machine)
 {
-    raspi_init(machine, 3);
+    raspi_init(machine, 0xa02082);
 }
 
 static void raspi3_machine_init(MachineClass *mc)
239
240
+### SVE Predicate Misc Group
241
+
242
+# SVE predicate test
243
+PTEST 00100101 01 010000 11 pg:4 0 rn:4 0 0000
244
+
245
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
246
247
# SVE load predicate register
248
--
93
--
249
2.17.0
94
2.20.1
250
95
251
96
diff view generated by jsdifflib
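The PTEST translation above funnels everything through the ARM PredTest pseudofunction (via `do_pred_flags`). As a standalone sketch of the architectural semantics only (not the QEMU TCG code; element lists instead of packed predicate registers are an assumption for readability): N comes from the first active element, Z is set when no active element is set, C is the inverse of the last active element, and V is always 0.

```python
# Architectural sketch of ARM PredTest: reduce predicate P, under
# governing predicate G, to (N, Z, C) flags (V is always 0).
def predtest(p, g):
    n, z, c = 0, 1, 1
    first_seen = False
    for pi, gi in zip(p, g):
        if not gi:
            continue            # inactive elements are ignored
        if not first_seen:
            n = int(pi)         # N from the first active element
            first_seen = True
        if pi:
            z = 0               # any set active element clears Z
        c = int(not pi)         # C tracks NOT of the last active element
    return n, z, c

# Only the first element set: N=1, Z=0, C=1 (last active element is clear).
print(predtest([1, 0, 0, 0], [1, 1, 1, 1]))  # -> (1, 0, 1)
```

This is why the helper can return all three flags packed in one value, as `do_pred_flags` then unpacks into `cpu_NF`/`cpu_ZF`/`cpu_CF`.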
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The board revision encodes the amount of RAM. Add a helper
to extract the RAM size, and use it.
Since the amount of RAM is fixed (it is impossible to physically
modify it to have more or less RAM), do not allow sizes different
from the one announced by the manufacturer.

Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-5-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
+#include "qemu/cutils.h"
 #include "qapi/error.h"
 #include "cpu.h"
 #include "hw/arm/bcm2836.h"
@@ -XXX,XX +XXX,XX @@ FIELD(REV_CODE, MANUFACTURER, 16, 4);
 FIELD(REV_CODE, MEMORY_SIZE, 20, 3);
 FIELD(REV_CODE, STYLE, 23, 1);
 
+static uint64_t board_ram_size(uint32_t board_rev)
+{
+    assert(FIELD_EX32(board_rev, REV_CODE, STYLE)); /* Only new style */
+    return 256 * MiB << FIELD_EX32(board_rev, REV_CODE, MEMORY_SIZE);
+}
+
 static int board_processor_id(uint32_t board_rev)
 {
     assert(FIELD_EX32(board_rev, REV_CODE, STYLE)); /* Only new style */
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, uint32_t board_rev)
 {
     RasPiState *s = g_new0(RasPiState, 1);
     int version = board_version(board_rev);
+    uint64_t ram_size = board_ram_size(board_rev);
     uint32_t vcram_size;
     DriveInfo *di;
     BlockBackend *blk;
     BusState *bus;
     DeviceState *carddev;
 
-    if (machine->ram_size > 1 * GiB) {
-        error_report("Requested ram size is too large for this machine: "
-                     "maximum is 1GB");
+    if (machine->ram_size != ram_size) {
+        char *size_str = size_to_str(ram_size);
+        error_report("Invalid RAM size, should be %s", size_str);
+        g_free(size_str);
         exit(1);
     }
 
-- 
2.20.1
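The field layout used by the patch above can be checked by hand. A standalone sketch (mirroring `board_ram_size()`, with the new-style revision-code fields as given in the diff: STYLE at bit 23, MEMORY_SIZE at bits [22:20]):

```python
# Sketch of the "new style" Raspberry Pi revision-code decoding used by
# board_ram_size(): RAM size = 256 MiB << MEMORY_SIZE field.
MiB = 1024 * 1024

def board_ram_size(board_rev):
    assert (board_rev >> 23) & 1          # only new-style revision codes
    return 256 * MiB << ((board_rev >> 20) & 7)

# Both board revisions used by the series, raspi3 (0xa02082) and
# raspi2 (0xa21041), encode MEMORY_SIZE=2, i.e. 1 GiB of RAM.
print(board_ram_size(0xa02082) // MiB)  # -> 1024
```

This is why the patch can reject any `-m` value other than the single size the manufacturer announces for that board.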
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 21 +++++++++++++++++++++
 target/arm/sve_helper.c    | 35 +++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 24 ++++++++++++++++++++++++
 target/arm/sve.decode      |  6 ++++++
 4 files changed, 86 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(sve_orv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_orv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_orv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_lsl_zpzz_d, uint64_t, DO_LSL)
 #undef DO_ZPZZ
 #undef DO_ZPZZ_D
 
+/* Three-operand expander, controlled by a predicate, in which the
+ * third operand is "wide". That is, for D = N op M, the same 64-bit
+ * value of M is used with all of the narrower values of N.
+ */
+#define DO_ZPZW(NAME, TYPE, TYPEW, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{ \
+    intptr_t i, opr_sz = simd_oprsz(desc); \
+    for (i = 0; i < opr_sz; ) { \
+        uint8_t pg = *(uint8_t *)(vg + H1(i >> 3)); \
+        TYPEW mm = *(TYPEW *)(vm + i); \
+        do { \
+            if (pg & 1) { \
+                TYPE nn = *(TYPE *)(vn + H(i)); \
+                *(TYPE *)(vd + H(i)) = OP(nn, mm); \
+            } \
+            i += sizeof(TYPE), pg >>= sizeof(TYPE); \
+        } while (i & 7); \
+    } \
+}
+
+DO_ZPZW(sve_asr_zpzw_b, int8_t, uint64_t, H1, DO_ASR)
+DO_ZPZW(sve_lsr_zpzw_b, uint8_t, uint64_t, H1, DO_LSR)
+DO_ZPZW(sve_lsl_zpzw_b, uint8_t, uint64_t, H1, DO_LSL)
+
+DO_ZPZW(sve_asr_zpzw_h, int16_t, uint64_t, H1_2, DO_ASR)
+DO_ZPZW(sve_lsr_zpzw_h, uint16_t, uint64_t, H1_2, DO_LSR)
+DO_ZPZW(sve_lsl_zpzw_h, uint16_t, uint64_t, H1_2, DO_LSL)
+
+DO_ZPZW(sve_asr_zpzw_s, int32_t, uint64_t, H1_4, DO_ASR)
+DO_ZPZW(sve_lsr_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSR)
+DO_ZPZW(sve_lsl_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
+
+#undef DO_ZPZW
+
 /* Two-operand reduction expander, controlled by a predicate.
  * The difference between TYPERED and TYPERET has to do with
  * sign-extension. E.g. for SMAX, TYPERED must be signed,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_ASRD(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
     }
 }
 
+/*
+ *** SVE Bitwise Shift - Predicated Group
+ */
+
+#define DO_ZPZW(NAME, name) \
+static bool trans_##NAME##_zpzw(DisasContext *s, arg_rprr_esz *a, \
+                                uint32_t insn) \
+{ \
+    static gen_helper_gvec_4 * const fns[3] = { \
+        gen_helper_sve_##name##_zpzw_b, gen_helper_sve_##name##_zpzw_h, \
+        gen_helper_sve_##name##_zpzw_s, \
+    }; \
+    if (a->esz < 0 || a->esz >= 3) { \
+        return false; \
+    } \
+    return do_zpzz_ool(s, a, fns[a->esz]); \
+}
+
+DO_ZPZW(ASR, asr)
+DO_ZPZW(LSR, lsr)
+DO_ZPZW(LSL, lsl)
+
+#undef DO_ZPZW
+
 /*
 *** SVE Predicate Logical Operations Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ ASR_zpzz 00000100 .. 010 100 100 ... ..... ..... @rdm_pg_rn # ASRR
 LSR_zpzz 00000100 .. 010 101 100 ... ..... ..... @rdm_pg_rn # LSRR
 LSL_zpzz 00000100 .. 010 111 100 ... ..... ..... @rdm_pg_rn # LSLR
 
+# SVE bitwise shift by wide elements (predicated)
+# Note these require size != 3.
+ASR_zpzw 00000100 .. 011 000 100 ... ..... ..... @rdn_pg_rm
+LSR_zpzw 00000100 .. 011 001 100 ... ..... ..... @rdn_pg_rm
+LSL_zpzw 00000100 .. 011 011 100 ... ..... ..... @rdn_pg_rm
+
 ### SVE Logical - Unpredicated Group
 
 # SVE bitwise logical operations (unpredicated)
-- 
2.17.0


From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The board revision encodes the processor type. Add a helper
to extract the type, and use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-6-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static int board_version(uint32_t board_rev)
     return board_processor_id(board_rev) + 1;
 }
 
+static const char *board_soc_type(uint32_t board_rev)
+{
+    static const char *soc_types[] = {
+        NULL, TYPE_BCM2836, TYPE_BCM2837,
+    };
+    int proc_id = board_processor_id(board_rev);
+
+    if (proc_id >= ARRAY_SIZE(soc_types) || !soc_types[proc_id]) {
+        error_report("Unsupported processor id '%d' (board revision: 0x%x)",
+                     proc_id, board_rev);
+        exit(1);
+    }
+    return soc_types[proc_id];
+}
+
 static void write_smpboot(ARMCPU *cpu, const struct arm_boot_info *info)
 {
     static const uint32_t smpboot[] = {
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, uint32_t board_rev)
     }
 
     object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
-                            version == 3 ? TYPE_BCM2837 : TYPE_BCM2836,
-                            &error_abort, NULL);
+                            board_soc_type(board_rev), &error_abort, NULL);
 
     /* Allocate and map RAM */
     memory_region_allocate_system_memory(&s->ram, OBJECT(machine), "ram",
-- 
2.20.1
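The DO_ZPZW expander above implements "shift by wide elements": one 64-bit element of the shift-amount vector M applies to every narrower element of N that it overlaps, and inactive (predicate-false) elements are left unchanged. An element-level sketch of those assumed semantics (standalone Python, not the QEMU C expander; `per_wide` is the number of narrow elements per 64-bit group):

```python
# Sketch of a predicated SVE shift by "wide" elements (LSR flavour):
# m[i // per_wide] supplies the shift amount for narrow element n[i];
# elements whose predicate bit g[i] is clear keep their old value.
def lsr_zpzw(n, m, g, per_wide):
    out = list(n)
    for i in range(len(n)):
        if g[i]:
            out[i] = n[i] >> m[i // per_wide]
    return out

# 8-bit elements, one 64-bit shift amount per group of 8; the third
# element is inactive and passes through unchanged.
print(lsr_zpzw([128, 64, 32, 16], [2], [1, 1, 0, 1], 8))  # -> [32, 16, 32, 4]
```

This mirrors why the decode patterns above exclude size 3: a "wide" shift of 64-bit elements would be an ordinary `LSR_zpzz`.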
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

There is no point in creating the SoC object before allocating the RAM.
Move the call to keep all the SoC-related calls together.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20200208165645.15657-7-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, uint32_t board_rev)
         exit(1);
     }
 
-    object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
-                            board_soc_type(board_rev), &error_abort, NULL);
-
     /* Allocate and map RAM */
     memory_region_allocate_system_memory(&s->ram, OBJECT(machine), "ram",
                                          machine->ram_size);
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, uint32_t board_rev)
     memory_region_add_subregion_overlap(get_system_memory(), 0, &s->ram, 0);
 
     /* Setup the SOC */
+    object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
+                            board_soc_type(board_rev), &error_abort, NULL);
     object_property_add_const_link(OBJECT(&s->soc), "ram", OBJECT(&s->ram),
                                    &error_abort);
     object_property_set_int(OBJECT(&s->soc), board_rev, "board-rev",
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

QOM'ify RaspiMachineState. Now machines inherit of RaspiMachineClass.

Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20200208165645.15657-8-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 56 +++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 49 insertions(+), 7 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@
 /* Registered machine type (matches RPi Foundation bootloader and U-Boot) */
 #define MACH_TYPE_BCM2708 3138
 
-typedef struct RasPiState {
+typedef struct RaspiMachineState {
+    /*< private >*/
+    MachineState parent_obj;
+    /*< public >*/
     BCM283XState soc;
     MemoryRegion ram;
-} RasPiState;
+} RaspiMachineState;
+
+typedef struct RaspiMachineClass {
+    /*< private >*/
+    MachineClass parent_obj;
+    /*< public >*/
+} RaspiMachineClass;
+
+#define TYPE_RASPI_MACHINE MACHINE_TYPE_NAME("raspi-common")
+#define RASPI_MACHINE(obj) \
+    OBJECT_CHECK(RaspiMachineState, (obj), TYPE_RASPI_MACHINE)
+
+#define RASPI_MACHINE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(RaspiMachineClass, (klass), TYPE_RASPI_MACHINE)
+#define RASPI_MACHINE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(RaspiMachineClass, (obj), TYPE_RASPI_MACHINE)
 
 /*
  * Board revision codes:
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
 
 static void raspi_init(MachineState *machine, uint32_t board_rev)
 {
-    RasPiState *s = g_new0(RasPiState, 1);
+    RaspiMachineState *s = RASPI_MACHINE(machine);
     int version = board_version(board_rev);
     uint64_t ram_size = board_ram_size(board_rev);
     uint32_t vcram_size;
@@ -XXX,XX +XXX,XX @@ static void raspi2_init(MachineState *machine)
     raspi_init(machine, 0xa21041);
 }
 
-static void raspi2_machine_init(MachineClass *mc)
+static void raspi2_machine_class_init(ObjectClass *oc, void *data)
 {
+    MachineClass *mc = MACHINE_CLASS(oc);
+
     mc->desc = "Raspberry Pi 2B";
     mc->init = raspi2_init;
     mc->block_default_type = IF_SD;
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_init(MachineClass *mc)
     mc->default_ram_size = 1 * GiB;
     mc->ignore_memory_transaction_failures = true;
 };
-DEFINE_MACHINE("raspi2", raspi2_machine_init)
 
 #ifdef TARGET_AARCH64
 static void raspi3_init(MachineState *machine)
@@ -XXX,XX +XXX,XX @@ static void raspi3_init(MachineState *machine)
     raspi_init(machine, 0xa02082);
 }
 
-static void raspi3_machine_init(MachineClass *mc)
+static void raspi3_machine_class_init(ObjectClass *oc, void *data)
 {
+    MachineClass *mc = MACHINE_CLASS(oc);
+
     mc->desc = "Raspberry Pi 3B";
     mc->init = raspi3_init;
     mc->block_default_type = IF_SD;
@@ -XXX,XX +XXX,XX @@ static void raspi3_machine_init(MachineClass *mc)
     mc->default_cpus = BCM283X_NCPUS;
     mc->default_ram_size = 1 * GiB;
 }
-DEFINE_MACHINE("raspi3", raspi3_machine_init)
 #endif
+
+static const TypeInfo raspi_machine_types[] = {
+    {
+        .name = MACHINE_TYPE_NAME("raspi2"),
+        .parent = TYPE_RASPI_MACHINE,
+        .class_init = raspi2_machine_class_init,
+#ifdef TARGET_AARCH64
+    }, {
+        .name = MACHINE_TYPE_NAME("raspi3"),
+        .parent = TYPE_RASPI_MACHINE,
+        .class_init = raspi3_machine_class_init,
+#endif
+    }, {
+        .name = TYPE_RASPI_MACHINE,
+        .parent = TYPE_MACHINE,
+        .instance_size = sizeof(RaspiMachineState),
+        .class_size = sizeof(RaspiMachineClass),
+        .abstract = true,
+    },
+};
-- 
2.20.1


From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add a model of the generic DMA found on Xilinx ZynqMP.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180503214201.29082-2-frasse.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/Makefile.objs       |   1 +
 include/hw/dma/xlnx-zdma.h |  84 ++++
 hw/dma/xlnx-zdma.c         | 832 +++++++++++++++++++++++++++++++++++++
 3 files changed, 917 insertions(+)
 create mode 100644 include/hw/dma/xlnx-zdma.h
 create mode 100644 hw/dma/xlnx-zdma.c

diff --git a/hw/dma/Makefile.objs b/hw/dma/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/Makefile.objs
+++ b/hw/dma/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_ETRAXFS) += etraxfs_dma.o
 common-obj-$(CONFIG_STP2000) += sparc32_dma.o
 obj-$(CONFIG_XLNX_ZYNQMP) += xlnx_dpdma.o
 obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx_dpdma.o
+common-obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx-zdma.o
 
 obj-$(CONFIG_OMAP) += omap_dma.o soc_dma.o
 obj-$(CONFIG_PXA2XX) += pxa2xx_dma.o
diff --git a/include/hw/dma/xlnx-zdma.h b/include/hw/dma/xlnx-zdma.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/dma/xlnx-zdma.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the ZynqMP generic DMA
+ *
+ * Copyright (c) 2014 Xilinx Inc.
+ * Copyright (c) 2018 FEIMTECH AB
+ *
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>,
+ *            Francisco Iglesias <francisco.iglesias@feimtech.se>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef XLNX_ZDMA_H
+#define XLNX_ZDMA_H
+
+#include "hw/sysbus.h"
+#include "hw/register.h"
+#include "sysemu/dma.h"
+
+#define ZDMA_R_MAX (0x204 / 4)
+
+typedef enum {
+    DISABLED = 0,
+    ENABLED = 1,
+    PAUSED = 2,
+} XlnxZDMAState;
+
+typedef union {
+    struct {
+        uint64_t addr;
+        uint32_t size;
+        uint32_t attr;
+    };
+    uint32_t words[4];
+} XlnxZDMADescr;
+
+typedef struct XlnxZDMA {
+    SysBusDevice parent_obj;
+    MemoryRegion iomem;
+    MemTxAttrs attr;
+    MemoryRegion *dma_mr;
+    AddressSpace *dma_as;
+    qemu_irq irq_zdma_ch_imr;
+
+    struct {
+        uint32_t bus_width;
+    } cfg;
+
+    XlnxZDMAState state;
+    bool error;
+
+    XlnxZDMADescr dsc_src;
+    XlnxZDMADescr dsc_dst;
+
+    uint32_t regs[ZDMA_R_MAX];
+    RegisterInfo regs_info[ZDMA_R_MAX];
+
+    /* We don't model the common bufs. Must be at least 16 bytes
+       to model write only mode. */
+    uint8_t buf[2048];
+} XlnxZDMA;
+
+#define TYPE_XLNX_ZDMA "xlnx.zdma"
+
+#define XLNX_ZDMA(obj) \
+     OBJECT_CHECK(XlnxZDMA, (obj), TYPE_XLNX_ZDMA)
+
+#endif /* XLNX_ZDMA_H */
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/dma/xlnx-zdma.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the ZynqMP generic DMA
+ *
+ * Copyright (c) 2014 Xilinx Inc.
+ * Copyright (c) 2018 FEIMTECH AB
+ *
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>,
+ *            Francisco Iglesias <francisco.iglesias@feimtech.se>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/dma/xlnx-zdma.h"
+#include "qemu/bitops.h"
+#include "qemu/log.h"
+#include "qapi/error.h"
+
+#ifndef XLNX_ZDMA_ERR_DEBUG
+#define XLNX_ZDMA_ERR_DEBUG 0
+#endif
+
+REG32(ZDMA_ERR_CTRL, 0x0)
+    FIELD(ZDMA_ERR_CTRL, APB_ERR_RES, 0, 1)
+REG32(ZDMA_CH_ISR, 0x100)
+    FIELD(ZDMA_CH_ISR, DMA_PAUSE, 11, 1)
+    FIELD(ZDMA_CH_ISR, DMA_DONE, 10, 1)
+    FIELD(ZDMA_CH_ISR, AXI_WR_DATA, 9, 1)
+    FIELD(ZDMA_CH_ISR, AXI_RD_DATA, 8, 1)
+    FIELD(ZDMA_CH_ISR, AXI_RD_DST_DSCR, 7, 1)
+    FIELD(ZDMA_CH_ISR, AXI_RD_SRC_DSCR, 6, 1)
+    FIELD(ZDMA_CH_ISR, IRQ_DST_ACCT_ERR, 5, 1)
+    FIELD(ZDMA_CH_ISR, IRQ_SRC_ACCT_ERR, 4, 1)
+    FIELD(ZDMA_CH_ISR, BYTE_CNT_OVRFL, 3, 1)
+    FIELD(ZDMA_CH_ISR, DST_DSCR_DONE, 2, 1)
+    FIELD(ZDMA_CH_ISR, SRC_DSCR_DONE, 1, 1)
+    FIELD(ZDMA_CH_ISR, INV_APB, 0, 1)
+REG32(ZDMA_CH_IMR, 0x104)
+    FIELD(ZDMA_CH_IMR, DMA_PAUSE, 11, 1)
+    FIELD(ZDMA_CH_IMR, DMA_DONE, 10, 1)
+    FIELD(ZDMA_CH_IMR, AXI_WR_DATA, 9, 1)
+    FIELD(ZDMA_CH_IMR, AXI_RD_DATA, 8, 1)
+    FIELD(ZDMA_CH_IMR, AXI_RD_DST_DSCR, 7, 1)
+    FIELD(ZDMA_CH_IMR, AXI_RD_SRC_DSCR, 6, 1)
+    FIELD(ZDMA_CH_IMR, IRQ_DST_ACCT_ERR, 5, 1)
+    FIELD(ZDMA_CH_IMR, IRQ_SRC_ACCT_ERR, 4, 1)
+    FIELD(ZDMA_CH_IMR, BYTE_CNT_OVRFL, 3, 1)
+    FIELD(ZDMA_CH_IMR, DST_DSCR_DONE, 2, 1)
+    FIELD(ZDMA_CH_IMR, SRC_DSCR_DONE, 1, 1)
+    FIELD(ZDMA_CH_IMR, INV_APB, 0, 1)
+REG32(ZDMA_CH_IEN, 0x108)
+    FIELD(ZDMA_CH_IEN, DMA_PAUSE, 11, 1)
+    FIELD(ZDMA_CH_IEN, DMA_DONE, 10, 1)
+    FIELD(ZDMA_CH_IEN, AXI_WR_DATA, 9, 1)
+    FIELD(ZDMA_CH_IEN, AXI_RD_DATA, 8, 1)
+    FIELD(ZDMA_CH_IEN, AXI_RD_DST_DSCR, 7, 1)
+    FIELD(ZDMA_CH_IEN, AXI_RD_SRC_DSCR, 6, 1)
+    FIELD(ZDMA_CH_IEN, IRQ_DST_ACCT_ERR, 5, 1)
+    FIELD(ZDMA_CH_IEN, IRQ_SRC_ACCT_ERR, 4, 1)
+    FIELD(ZDMA_CH_IEN, BYTE_CNT_OVRFL, 3, 1)
+    FIELD(ZDMA_CH_IEN, DST_DSCR_DONE, 2, 1)
+    FIELD(ZDMA_CH_IEN, SRC_DSCR_DONE, 1, 1)
+    FIELD(ZDMA_CH_IEN, INV_APB, 0, 1)
+REG32(ZDMA_CH_IDS, 0x10c)
+    FIELD(ZDMA_CH_IDS, DMA_PAUSE, 11, 1)
+    FIELD(ZDMA_CH_IDS, DMA_DONE, 10, 1)
+    FIELD(ZDMA_CH_IDS, AXI_WR_DATA, 9, 1)
+    FIELD(ZDMA_CH_IDS, AXI_RD_DATA, 8, 1)
+    FIELD(ZDMA_CH_IDS, AXI_RD_DST_DSCR, 7, 1)
+    FIELD(ZDMA_CH_IDS, AXI_RD_SRC_DSCR, 6, 1)
+    FIELD(ZDMA_CH_IDS, IRQ_DST_ACCT_ERR, 5, 1)
+    FIELD(ZDMA_CH_IDS, IRQ_SRC_ACCT_ERR, 4, 1)
+    FIELD(ZDMA_CH_IDS, BYTE_CNT_OVRFL, 3, 1)
+    FIELD(ZDMA_CH_IDS, DST_DSCR_DONE, 2, 1)
+    FIELD(ZDMA_CH_IDS, SRC_DSCR_DONE, 1, 1)
+    FIELD(ZDMA_CH_IDS, INV_APB, 0, 1)
+REG32(ZDMA_CH_CTRL0, 0x110)
+    FIELD(ZDMA_CH_CTRL0, OVR_FETCH, 7, 1)
+    FIELD(ZDMA_CH_CTRL0, POINT_TYPE, 6, 1)
+    FIELD(ZDMA_CH_CTRL0, MODE, 4, 2)
+    FIELD(ZDMA_CH_CTRL0, RATE_CTRL, 3, 1)
+    FIELD(ZDMA_CH_CTRL0, CONT_ADDR, 2, 1)
+    FIELD(ZDMA_CH_CTRL0, CONT, 1, 1)
+REG32(ZDMA_CH_CTRL1, 0x114)
+    FIELD(ZDMA_CH_CTRL1, DST_ISSUE, 5, 5)
+    FIELD(ZDMA_CH_CTRL1, SRC_ISSUE, 0, 5)
+REG32(ZDMA_CH_FCI, 0x118)
+    FIELD(ZDMA_CH_FCI, PROG_CELL_CNT, 2, 2)
+    FIELD(ZDMA_CH_FCI, SIDE, 1, 1)
+    FIELD(ZDMA_CH_FCI, EN, 0, 1)
+REG32(ZDMA_CH_STATUS, 0x11c)
+    FIELD(ZDMA_CH_STATUS, STATE, 0, 2)
+REG32(ZDMA_CH_DATA_ATTR, 0x120)
+    FIELD(ZDMA_CH_DATA_ATTR, ARBURST, 26, 2)
+    FIELD(ZDMA_CH_DATA_ATTR, ARCACHE, 22, 4)
+    FIELD(ZDMA_CH_DATA_ATTR, ARQOS, 18, 4)
+    FIELD(ZDMA_CH_DATA_ATTR, ARLEN, 14, 4)
+    FIELD(ZDMA_CH_DATA_ATTR, AWBURST, 12, 2)
+    FIELD(ZDMA_CH_DATA_ATTR, AWCACHE, 8, 4)
+    FIELD(ZDMA_CH_DATA_ATTR, AWQOS, 4, 4)
+    FIELD(ZDMA_CH_DATA_ATTR, AWLEN, 0, 4)
+REG32(ZDMA_CH_DSCR_ATTR, 0x124)
+    FIELD(ZDMA_CH_DSCR_ATTR, AXCOHRNT, 8, 1)
+    FIELD(ZDMA_CH_DSCR_ATTR, AXCACHE, 4, 4)
+    FIELD(ZDMA_CH_DSCR_ATTR, AXQOS, 0, 4)
+REG32(ZDMA_CH_SRC_DSCR_WORD0, 0x128)
+REG32(ZDMA_CH_SRC_DSCR_WORD1, 0x12c)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD1, MSB, 0, 17)
+REG32(ZDMA_CH_SRC_DSCR_WORD2, 0x130)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD2, SIZE, 0, 30)
+REG32(ZDMA_CH_SRC_DSCR_WORD3, 0x134)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD3, CMD, 3, 2)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD3, INTR, 2, 1)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD3, TYPE, 1, 1)
+    FIELD(ZDMA_CH_SRC_DSCR_WORD3, COHRNT, 0, 1)
+REG32(ZDMA_CH_DST_DSCR_WORD0, 0x138)
+REG32(ZDMA_CH_DST_DSCR_WORD1, 0x13c)
+    FIELD(ZDMA_CH_DST_DSCR_WORD1, MSB, 0, 17)
+REG32(ZDMA_CH_DST_DSCR_WORD2, 0x140)
+    FIELD(ZDMA_CH_DST_DSCR_WORD2, SIZE, 0, 30)
+REG32(ZDMA_CH_DST_DSCR_WORD3, 0x144)
+    FIELD(ZDMA_CH_DST_DSCR_WORD3, INTR, 2, 1)
+    FIELD(ZDMA_CH_DST_DSCR_WORD3, TYPE, 1, 1)
+    FIELD(ZDMA_CH_DST_DSCR_WORD3, COHRNT, 0, 1)
+REG32(ZDMA_CH_WR_ONLY_WORD0, 0x148)
+REG32(ZDMA_CH_WR_ONLY_WORD1, 0x14c)
+REG32(ZDMA_CH_WR_ONLY_WORD2, 0x150)
+REG32(ZDMA_CH_WR_ONLY_WORD3, 0x154)
+REG32(ZDMA_CH_SRC_START_LSB, 0x158)
+REG32(ZDMA_CH_SRC_START_MSB, 0x15c)
+    FIELD(ZDMA_CH_SRC_START_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_DST_START_LSB, 0x160)
+REG32(ZDMA_CH_DST_START_MSB, 0x164)
+    FIELD(ZDMA_CH_DST_START_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_RATE_CTRL, 0x18c)
+    FIELD(ZDMA_CH_RATE_CTRL, CNT, 0, 12)
+REG32(ZDMA_CH_SRC_CUR_PYLD_LSB, 0x168)
+REG32(ZDMA_CH_SRC_CUR_PYLD_MSB, 0x16c)
+    FIELD(ZDMA_CH_SRC_CUR_PYLD_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_DST_CUR_PYLD_LSB, 0x170)
+REG32(ZDMA_CH_DST_CUR_PYLD_MSB, 0x174)
+    FIELD(ZDMA_CH_DST_CUR_PYLD_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_SRC_CUR_DSCR_LSB, 0x178)
+REG32(ZDMA_CH_SRC_CUR_DSCR_MSB, 0x17c)
+    FIELD(ZDMA_CH_SRC_CUR_DSCR_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_DST_CUR_DSCR_LSB, 0x180)
+REG32(ZDMA_CH_DST_CUR_DSCR_MSB, 0x184)
+    FIELD(ZDMA_CH_DST_CUR_DSCR_MSB, ADDR, 0, 17)
+REG32(ZDMA_CH_TOTAL_BYTE, 0x188)
+REG32(ZDMA_CH_RATE_CNTL, 0x18c)
+    FIELD(ZDMA_CH_RATE_CNTL, CNT, 0, 12)
+REG32(ZDMA_CH_IRQ_SRC_ACCT, 0x190)
+    FIELD(ZDMA_CH_IRQ_SRC_ACCT, CNT, 0, 8)
+REG32(ZDMA_CH_IRQ_DST_ACCT, 0x194)
+    FIELD(ZDMA_CH_IRQ_DST_ACCT, CNT, 0, 8)
+REG32(ZDMA_CH_DBG0, 0x198)
+    FIELD(ZDMA_CH_DBG0, CMN_BUF_FREE, 0, 9)
+REG32(ZDMA_CH_DBG1, 0x19c)
+    FIELD(ZDMA_CH_DBG1, CMN_BUF_OCC, 0, 9)
+REG32(ZDMA_CH_CTRL2, 0x200)
+    FIELD(ZDMA_CH_CTRL2, EN, 0, 1)
+
+enum {
+    PT_REG = 0,
+    PT_MEM = 1,
+};
+
+enum {
+    CMD_HALT = 1,
+    CMD_STOP = 2,
+};
+
+enum {
+    RW_MODE_RW = 0,
+    RW_MODE_WO = 1,
+    RW_MODE_RO = 2,
+};
+
+enum {
+    DTYPE_LINEAR = 0,
+    DTYPE_LINKED = 1,
+};
+
+enum {
+    AXI_BURST_FIXED = 0,
+    AXI_BURST_INCR = 1,
+};
+
+static void zdma_ch_imr_update_irq(XlnxZDMA *s)
+{
+    bool pending;
+
+    pending = s->regs[R_ZDMA_CH_ISR] & ~s->regs[R_ZDMA_CH_IMR];
+
+    qemu_set_irq(s->irq_zdma_ch_imr, pending);
+}
+
+static void zdma_ch_isr_postw(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
+    zdma_ch_imr_update_irq(s);
+}
+
+static uint64_t zdma_ch_ien_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
+    uint32_t val = val64;
+
+    s->regs[R_ZDMA_CH_IMR] &= ~val;
+    zdma_ch_imr_update_irq(s);
+    return 0;
+}
+
+static uint64_t zdma_ch_ids_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
+    uint32_t val = val64;
+
+    s->regs[R_ZDMA_CH_IMR] |= val;
+    zdma_ch_imr_update_irq(s);
+    return 0;
+}
+
+static void zdma_set_state(XlnxZDMA *s, XlnxZDMAState state)
+{
+    s->state = state;
+    ARRAY_FIELD_DP32(s->regs, ZDMA_CH_STATUS, STATE, state);
+
+    /* Signal error if we have an error condition. */
+    if (s->error) {
+        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_STATUS, STATE, 3);
+    }
+}
+
+static void zdma_src_done(XlnxZDMA *s)
+{
+    unsigned int cnt;
+    cnt = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT);
+    cnt++;
+    ARRAY_FIELD_DP32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT, cnt);
+    ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, SRC_DSCR_DONE, true);
+
+    /* Did we overflow? */
+    if (cnt != ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT)) {
+        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, IRQ_SRC_ACCT_ERR, true);
+    }
+    zdma_ch_imr_update_irq(s);
+}
+
+static void zdma_dst_done(XlnxZDMA *s)
+{
+    unsigned int cnt;
+    cnt = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT);
+    cnt++;
+    ARRAY_FIELD_DP32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT, cnt);
+    ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DST_DSCR_DONE, true);
+
+    /* Did we overflow? */
+    if (cnt != ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT)) {
+        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, IRQ_DST_ACCT_ERR, true);
+    }
+    zdma_ch_imr_update_irq(s);
+}
+
+static uint64_t zdma_get_regaddr64(XlnxZDMA *s, unsigned int basereg)
+{
+    uint64_t addr;
+
+    addr = s->regs[basereg + 1];
+    addr <<= 32;
+    addr |= s->regs[basereg];
+
+    return addr;
+}
+
+static void zdma_put_regaddr64(XlnxZDMA *s, unsigned int basereg, uint64_t addr)
+{
+    s->regs[basereg] = addr;
+    s->regs[basereg + 1] = addr >> 32;
+}
+
+static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
+{
+    /* ZDMA descriptors must be aligned to their own size. */
+    if (addr % sizeof(XlnxZDMADescr)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "zdma: unaligned descriptor at %" PRIx64,
+                      addr);
+        memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
+        s->error = true;
+        return false;
+    }
+
+    address_space_rw(s->dma_as, addr, s->attr,
+                     buf, sizeof(XlnxZDMADescr), false);
+    return true;
+}
+
+static void zdma_load_src_descriptor(XlnxZDMA *s)
+{
+    uint64_t src_addr;
+    unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
+
+    if (ptype == PT_REG) {
+        memcpy(&s->dsc_src, &s->regs[R_ZDMA_CH_SRC_DSCR_WORD0],
+               sizeof(s->dsc_src));
+        return;
+    }
+
+    src_addr = zdma_get_regaddr64(s, R_ZDMA_CH_SRC_CUR_DSCR_LSB);
+
+    if (!zdma_load_descriptor(s, src_addr, &s->dsc_src)) {
+        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, AXI_RD_SRC_DSCR, true);
+    }
+}
+
+static void zdma_load_dst_descriptor(XlnxZDMA *s)
+{
+    uint64_t dst_addr;
+    unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
+
+    if (ptype == PT_REG) {
+        memcpy(&s->dsc_dst, &s->regs[R_ZDMA_CH_DST_DSCR_WORD0],
+               sizeof(s->dsc_dst));
+        return;
+    }
+
+    dst_addr = zdma_get_regaddr64(s, R_ZDMA_CH_DST_CUR_DSCR_LSB);
+
+    if (!zdma_load_descriptor(s, dst_addr, &s->dsc_dst)) {
+        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, AXI_RD_DST_DSCR, true);
+    }
+}
+
+static uint64_t zdma_update_descr_addr(XlnxZDMA *s, bool type,
477
+ unsigned int basereg)
478
+{
479
+ uint64_t addr, next;
480
+
481
+ if (type == DTYPE_LINEAR) {
482
+ next = zdma_get_regaddr64(s, basereg);
483
+ next += sizeof(s->dsc_dst);
484
+ zdma_put_regaddr64(s, basereg, next);
485
+ } else {
486
+ addr = zdma_get_regaddr64(s, basereg);
487
+ addr += sizeof(s->dsc_dst);
488
+ address_space_rw(s->dma_as, addr, s->attr, (void *) &next, 8, false);
489
+ zdma_put_regaddr64(s, basereg, next);
490
+ }
491
+ return next;
492
+}
493
+
494
+static void zdma_write_dst(XlnxZDMA *s, uint8_t *buf, uint32_t len)
495
+{
496
+ uint32_t dst_size, dlen;
497
+ bool dst_intr, dst_type;
498
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
499
+ unsigned int rw_mode = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, MODE);
500
+ unsigned int burst_type = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_DATA_ATTR,
501
+ AWBURST);
502
+
503
+ /* FIXED burst types are only supported in simple dma mode. */
504
+ if (ptype != PT_REG) {
505
+ burst_type = AXI_BURST_INCR;
506
+ }
507
+
508
+ while (len) {
509
+ dst_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
510
+ SIZE);
511
+ dst_type = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
512
+ TYPE);
513
+ if (dst_size == 0 && ptype == PT_MEM) {
514
+ uint64_t next;
515
+ next = zdma_update_descr_addr(s, dst_type,
516
+ R_ZDMA_CH_DST_CUR_DSCR_LSB);
517
+ zdma_load_descriptor(s, next, &s->dsc_dst);
518
+ dst_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
519
+ SIZE);
520
+ dst_type = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
521
+ TYPE);
522
+ }
523
+
524
+ /* Match what hardware does by ignoring the dst_size and only using
525
+ * the src size for Simple register mode. */
526
+ if (ptype == PT_REG && rw_mode != RW_MODE_WO) {
527
+ dst_size = len;
528
+ }
529
+
530
+ dst_intr = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
531
+ INTR);
532
+
533
+ dlen = len > dst_size ? dst_size : len;
534
+ if (burst_type == AXI_BURST_FIXED) {
535
+ if (dlen > (s->cfg.bus_width / 8)) {
536
+ dlen = s->cfg.bus_width / 8;
537
+ }
538
+ }
539
+
540
+ address_space_rw(s->dma_as, s->dsc_dst.addr, s->attr, buf, dlen,
541
+ true);
542
+ if (burst_type == AXI_BURST_INCR) {
543
+ s->dsc_dst.addr += dlen;
544
+ }
545
+ dst_size -= dlen;
546
+ buf += dlen;
547
+ len -= dlen;
548
+
549
+ if (dst_size == 0 && dst_intr) {
550
+ zdma_dst_done(s);
551
+ }
552
+
553
+ /* Write back to buffered descriptor. */
554
+ s->dsc_dst.words[2] = FIELD_DP32(s->dsc_dst.words[2],
555
+ ZDMA_CH_DST_DSCR_WORD2,
556
+ SIZE,
557
+ dst_size);
558
+ }
559
+}
560
+
561
+static void zdma_process_descr(XlnxZDMA *s)
562
+{
563
+ uint64_t src_addr;
564
+ uint32_t src_size, len;
565
+ unsigned int src_cmd;
566
+ bool src_intr, src_type;
567
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
568
+ unsigned int rw_mode = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, MODE);
569
+ unsigned int burst_type = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_DATA_ATTR,
570
+ ARBURST);
571
+
572
+ src_addr = s->dsc_src.addr;
573
+ src_size = FIELD_EX32(s->dsc_src.words[2], ZDMA_CH_SRC_DSCR_WORD2, SIZE);
574
+ src_cmd = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, CMD);
575
+ src_type = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, TYPE);
576
+ src_intr = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, INTR);
577
+
578
+ /* FIXED burst types and non-rw modes are only supported in
579
+ * simple dma mode.
580
+ */
581
+ if (ptype != PT_REG) {
582
+ if (rw_mode != RW_MODE_RW) {
583
+ qemu_log_mask(LOG_GUEST_ERROR,
584
+ "zDMA: rw-mode=%d but not simple DMA mode.\n",
585
+ rw_mode);
586
+ }
587
+ if (burst_type != AXI_BURST_INCR) {
588
+ qemu_log_mask(LOG_GUEST_ERROR,
589
+ "zDMA: burst_type=%d but not simple DMA mode.\n",
590
+ burst_type);
591
+ }
592
+ burst_type = AXI_BURST_INCR;
593
+ rw_mode = RW_MODE_RW;
594
+ }
595
+
596
+ if (rw_mode == RW_MODE_WO) {
597
+ /* In Simple DMA Write-Only, we need to push DST size bytes
598
+ * regardless of what SRC size is set to. */
599
+ src_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
600
+ SIZE);
601
+ memcpy(s->buf, &s->regs[R_ZDMA_CH_WR_ONLY_WORD0], s->cfg.bus_width / 8);
602
+ }
603
+
604
+ while (src_size) {
605
+ len = src_size > ARRAY_SIZE(s->buf) ? ARRAY_SIZE(s->buf) : src_size;
606
+ if (burst_type == AXI_BURST_FIXED) {
607
+ if (len > (s->cfg.bus_width / 8)) {
608
+ len = s->cfg.bus_width / 8;
609
+ }
610
+ }
611
+
612
+ if (rw_mode == RW_MODE_WO) {
613
+ if (len > s->cfg.bus_width / 8) {
614
+ len = s->cfg.bus_width / 8;
615
+ }
616
+ } else {
617
+ address_space_rw(s->dma_as, src_addr, s->attr, s->buf, len,
618
+ false);
619
+ if (burst_type == AXI_BURST_INCR) {
620
+ src_addr += len;
621
+ }
622
+ }
623
+
624
+ if (rw_mode != RW_MODE_RO) {
625
+ zdma_write_dst(s, s->buf, len);
626
+ }
627
+
628
+ s->regs[R_ZDMA_CH_TOTAL_BYTE] += len;
629
+ src_size -= len;
630
+ }
631
+
632
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DMA_DONE, true);
633
+
634
+ if (src_intr) {
635
+ zdma_src_done(s);
636
+ }
637
+
638
+ /* Load next descriptor. */
639
+ if (ptype == PT_REG || src_cmd == CMD_STOP) {
640
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_CTRL2, EN, 0);
641
+ zdma_set_state(s, DISABLED);
642
+ return;
643
+ }
644
+
645
+ if (src_cmd == CMD_HALT) {
646
+ zdma_set_state(s, PAUSED);
647
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DMA_PAUSE, 1);
648
+ zdma_ch_imr_update_irq(s);
649
+ return;
650
+ }
651
+
652
+ zdma_update_descr_addr(s, src_type, R_ZDMA_CH_SRC_CUR_DSCR_LSB);
653
+}
654
+
655
+static void zdma_run(XlnxZDMA *s)
656
+{
657
+ while (s->state == ENABLED && !s->error) {
658
+ zdma_load_src_descriptor(s);
659
+
660
+ if (s->error) {
661
+ zdma_set_state(s, DISABLED);
662
+ } else {
663
+ zdma_process_descr(s);
664
+ }
665
+ }
666
+
667
+ zdma_ch_imr_update_irq(s);
668
+}
669
+
670
+static void zdma_update_descr_addr_from_start(XlnxZDMA *s)
671
+{
672
+ uint64_t src_addr, dst_addr;
673
+
674
+ src_addr = zdma_get_regaddr64(s, R_ZDMA_CH_SRC_START_LSB);
675
+ zdma_put_regaddr64(s, R_ZDMA_CH_SRC_CUR_DSCR_LSB, src_addr);
676
+ dst_addr = zdma_get_regaddr64(s, R_ZDMA_CH_DST_START_LSB);
677
+ zdma_put_regaddr64(s, R_ZDMA_CH_DST_CUR_DSCR_LSB, dst_addr);
678
+ zdma_load_dst_descriptor(s);
679
+}
680
+
681
+static void zdma_ch_ctrlx_postw(RegisterInfo *reg, uint64_t val64)
682
+{
683
+ XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
684
+
685
+ if (ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL2, EN)) {
686
+ s->error = false;
687
+
688
+ if (s->state == PAUSED &&
689
+ ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT)) {
690
+ if (ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT_ADDR) == 1) {
691
+ zdma_update_descr_addr_from_start(s);
692
+ } else {
693
+ bool src_type = FIELD_EX32(s->dsc_src.words[3],
694
+ ZDMA_CH_SRC_DSCR_WORD3, TYPE);
695
+ zdma_update_descr_addr(s, src_type,
696
+ R_ZDMA_CH_SRC_CUR_DSCR_LSB);
697
+ }
698
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_CTRL0, CONT, false);
699
+ zdma_set_state(s, ENABLED);
700
+ } else if (s->state == DISABLED) {
701
+ zdma_update_descr_addr_from_start(s);
702
+ zdma_set_state(s, ENABLED);
703
+ }
704
+ } else {
705
+ /* Leave Paused state? */
706
+ if (s->state == PAUSED &&
707
+ ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT)) {
708
+ zdma_set_state(s, DISABLED);
709
+ }
710
+ }
711
+
712
+ zdma_run(s);
713
+}
714
+
715
+static RegisterAccessInfo zdma_regs_info[] = {
716
+ { .name = "ZDMA_ERR_CTRL", .addr = A_ZDMA_ERR_CTRL,
717
+ .rsvd = 0xfffffffe,
718
+ },{ .name = "ZDMA_CH_ISR", .addr = A_ZDMA_CH_ISR,
719
+ .rsvd = 0xfffff000,
720
+ .w1c = 0xfff,
721
+ .post_write = zdma_ch_isr_postw,
722
+ },{ .name = "ZDMA_CH_IMR", .addr = A_ZDMA_CH_IMR,
723
+ .reset = 0xfff,
724
+ .rsvd = 0xfffff000,
725
+ .ro = 0xfff,
726
+ },{ .name = "ZDMA_CH_IEN", .addr = A_ZDMA_CH_IEN,
727
+ .rsvd = 0xfffff000,
728
+ .pre_write = zdma_ch_ien_prew,
729
+ },{ .name = "ZDMA_CH_IDS", .addr = A_ZDMA_CH_IDS,
730
+ .rsvd = 0xfffff000,
731
+ .pre_write = zdma_ch_ids_prew,
732
+ },{ .name = "ZDMA_CH_CTRL0", .addr = A_ZDMA_CH_CTRL0,
733
+ .reset = 0x80,
734
+ .rsvd = 0xffffff01,
735
+ .post_write = zdma_ch_ctrlx_postw,
736
+ },{ .name = "ZDMA_CH_CTRL1", .addr = A_ZDMA_CH_CTRL1,
737
+ .reset = 0x3ff,
738
+ .rsvd = 0xfffffc00,
739
+ },{ .name = "ZDMA_CH_FCI", .addr = A_ZDMA_CH_FCI,
740
+ .rsvd = 0xffffffc0,
741
+ },{ .name = "ZDMA_CH_STATUS", .addr = A_ZDMA_CH_STATUS,
742
+ .rsvd = 0xfffffffc,
743
+ .ro = 0x3,
744
+ },{ .name = "ZDMA_CH_DATA_ATTR", .addr = A_ZDMA_CH_DATA_ATTR,
745
+ .reset = 0x483d20f,
746
+ .rsvd = 0xf0000000,
747
+ },{ .name = "ZDMA_CH_DSCR_ATTR", .addr = A_ZDMA_CH_DSCR_ATTR,
748
+ .rsvd = 0xfffffe00,
749
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD0", .addr = A_ZDMA_CH_SRC_DSCR_WORD0,
750
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD1", .addr = A_ZDMA_CH_SRC_DSCR_WORD1,
751
+ .rsvd = 0xfffe0000,
752
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD2", .addr = A_ZDMA_CH_SRC_DSCR_WORD2,
753
+ .rsvd = 0xc0000000,
754
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD3", .addr = A_ZDMA_CH_SRC_DSCR_WORD3,
755
+ .rsvd = 0xffffffe0,
756
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD0", .addr = A_ZDMA_CH_DST_DSCR_WORD0,
757
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD1", .addr = A_ZDMA_CH_DST_DSCR_WORD1,
758
+ .rsvd = 0xfffe0000,
759
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD2", .addr = A_ZDMA_CH_DST_DSCR_WORD2,
760
+ .rsvd = 0xc0000000,
761
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD3", .addr = A_ZDMA_CH_DST_DSCR_WORD3,
762
+ .rsvd = 0xfffffffa,
763
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD0", .addr = A_ZDMA_CH_WR_ONLY_WORD0,
764
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD1", .addr = A_ZDMA_CH_WR_ONLY_WORD1,
765
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD2", .addr = A_ZDMA_CH_WR_ONLY_WORD2,
766
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD3", .addr = A_ZDMA_CH_WR_ONLY_WORD3,
767
+ },{ .name = "ZDMA_CH_SRC_START_LSB", .addr = A_ZDMA_CH_SRC_START_LSB,
768
+ },{ .name = "ZDMA_CH_SRC_START_MSB", .addr = A_ZDMA_CH_SRC_START_MSB,
769
+ .rsvd = 0xfffe0000,
770
+ },{ .name = "ZDMA_CH_DST_START_LSB", .addr = A_ZDMA_CH_DST_START_LSB,
771
+ },{ .name = "ZDMA_CH_DST_START_MSB", .addr = A_ZDMA_CH_DST_START_MSB,
772
+ .rsvd = 0xfffe0000,
773
+ },{ .name = "ZDMA_CH_SRC_CUR_PYLD_LSB", .addr = A_ZDMA_CH_SRC_CUR_PYLD_LSB,
774
+ .ro = 0xffffffff,
775
+ },{ .name = "ZDMA_CH_SRC_CUR_PYLD_MSB", .addr = A_ZDMA_CH_SRC_CUR_PYLD_MSB,
776
+ .rsvd = 0xfffe0000,
777
+ .ro = 0x1ffff,
778
+ },{ .name = "ZDMA_CH_DST_CUR_PYLD_LSB", .addr = A_ZDMA_CH_DST_CUR_PYLD_LSB,
779
+ .ro = 0xffffffff,
780
+ },{ .name = "ZDMA_CH_DST_CUR_PYLD_MSB", .addr = A_ZDMA_CH_DST_CUR_PYLD_MSB,
781
+ .rsvd = 0xfffe0000,
782
+ .ro = 0x1ffff,
783
+ },{ .name = "ZDMA_CH_SRC_CUR_DSCR_LSB", .addr = A_ZDMA_CH_SRC_CUR_DSCR_LSB,
784
+ .ro = 0xffffffff,
785
+ },{ .name = "ZDMA_CH_SRC_CUR_DSCR_MSB", .addr = A_ZDMA_CH_SRC_CUR_DSCR_MSB,
786
+ .rsvd = 0xfffe0000,
787
+ .ro = 0x1ffff,
788
+ },{ .name = "ZDMA_CH_DST_CUR_DSCR_LSB", .addr = A_ZDMA_CH_DST_CUR_DSCR_LSB,
789
+ .ro = 0xffffffff,
790
+ },{ .name = "ZDMA_CH_DST_CUR_DSCR_MSB", .addr = A_ZDMA_CH_DST_CUR_DSCR_MSB,
791
+ .rsvd = 0xfffe0000,
792
+ .ro = 0x1ffff,
793
+ },{ .name = "ZDMA_CH_TOTAL_BYTE", .addr = A_ZDMA_CH_TOTAL_BYTE,
794
+ .w1c = 0xffffffff,
795
+ },{ .name = "ZDMA_CH_RATE_CNTL", .addr = A_ZDMA_CH_RATE_CNTL,
796
+ .rsvd = 0xfffff000,
797
+ },{ .name = "ZDMA_CH_IRQ_SRC_ACCT", .addr = A_ZDMA_CH_IRQ_SRC_ACCT,
798
+ .rsvd = 0xffffff00,
799
+ .ro = 0xff,
800
+ .cor = 0xff,
801
+ },{ .name = "ZDMA_CH_IRQ_DST_ACCT", .addr = A_ZDMA_CH_IRQ_DST_ACCT,
802
+ .rsvd = 0xffffff00,
803
+ .ro = 0xff,
804
+ .cor = 0xff,
805
+ },{ .name = "ZDMA_CH_DBG0", .addr = A_ZDMA_CH_DBG0,
806
+ .rsvd = 0xfffffe00,
807
+ .ro = 0x1ff,
808
+ },{ .name = "ZDMA_CH_DBG1", .addr = A_ZDMA_CH_DBG1,
809
+ .rsvd = 0xfffffe00,
810
+ .ro = 0x1ff,
811
+ },{ .name = "ZDMA_CH_CTRL2", .addr = A_ZDMA_CH_CTRL2,
812
+ .rsvd = 0xfffffffe,
813
+ .post_write = zdma_ch_ctrlx_postw,
814
+ }
115
+ }
815
+};
116
+};
816
+
117
+
817
+static void zdma_reset(DeviceState *dev)
118
+DEFINE_TYPES(raspi_machine_types)
818
+{
819
+ XlnxZDMA *s = XLNX_ZDMA(dev);
820
+ unsigned int i;
821
+
822
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
823
+ register_reset(&s->regs_info[i]);
824
+ }
825
+
826
+ zdma_ch_imr_update_irq(s);
827
+}
828
+
829
+static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
830
+{
831
+ XlnxZDMA *s = XLNX_ZDMA(opaque);
832
+ RegisterInfo *r = &s->regs_info[addr / 4];
833
+
834
+ if (!r->data) {
835
+ qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
836
+ object_get_canonical_path(OBJECT(s)),
837
+ addr);
838
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
839
+ zdma_ch_imr_update_irq(s);
840
+ return 0;
841
+ }
842
+ return register_read(r, ~0, NULL, false);
843
+}
844
+
845
+static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
846
+ unsigned size)
847
+{
848
+ XlnxZDMA *s = XLNX_ZDMA(opaque);
849
+ RegisterInfo *r = &s->regs_info[addr / 4];
850
+
851
+ if (!r->data) {
852
+ qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
853
+ object_get_canonical_path(OBJECT(s)),
854
+ addr, value);
855
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
856
+ zdma_ch_imr_update_irq(s);
857
+ return;
858
+ }
859
+ register_write(r, value, ~0, NULL, false);
860
+}
861
+
862
+static const MemoryRegionOps zdma_ops = {
863
+ .read = zdma_read,
864
+ .write = zdma_write,
865
+ .endianness = DEVICE_LITTLE_ENDIAN,
866
+ .valid = {
867
+ .min_access_size = 4,
868
+ .max_access_size = 4,
869
+ },
870
+};
871
+
872
+static void zdma_realize(DeviceState *dev, Error **errp)
873
+{
874
+ XlnxZDMA *s = XLNX_ZDMA(dev);
875
+ unsigned int i;
876
+
877
+ for (i = 0; i < ARRAY_SIZE(zdma_regs_info); ++i) {
878
+ RegisterInfo *r = &s->regs_info[zdma_regs_info[i].addr / 4];
879
+
880
+ *r = (RegisterInfo) {
881
+ .data = (uint8_t *)&s->regs[
882
+ zdma_regs_info[i].addr / 4],
883
+ .data_size = sizeof(uint32_t),
884
+ .access = &zdma_regs_info[i],
885
+ .opaque = s,
886
+ };
887
+ }
888
+
889
+ if (s->dma_mr) {
890
+ s->dma_as = g_malloc0(sizeof(AddressSpace));
891
+ address_space_init(s->dma_as, s->dma_mr, NULL);
892
+ } else {
893
+ s->dma_as = &address_space_memory;
894
+ }
895
+ s->attr = MEMTXATTRS_UNSPECIFIED;
896
+}
897
+
898
+static void zdma_init(Object *obj)
899
+{
900
+ XlnxZDMA *s = XLNX_ZDMA(obj);
901
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
902
+
903
+ memory_region_init_io(&s->iomem, obj, &zdma_ops, s,
904
+ TYPE_XLNX_ZDMA, ZDMA_R_MAX * 4);
905
+ sysbus_init_mmio(sbd, &s->iomem);
906
+ sysbus_init_irq(sbd, &s->irq_zdma_ch_imr);
907
+
908
+ object_property_add_link(obj, "dma", TYPE_MEMORY_REGION,
909
+ (Object **)&s->dma_mr,
910
+ qdev_prop_allow_set_link_before_realize,
911
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
912
+ &error_abort);
913
+}
914
+
915
+static const VMStateDescription vmstate_zdma = {
916
+ .name = TYPE_XLNX_ZDMA,
917
+ .version_id = 1,
918
+ .minimum_version_id = 1,
919
+ .minimum_version_id_old = 1,
920
+ .fields = (VMStateField[]) {
921
+ VMSTATE_UINT32_ARRAY(regs, XlnxZDMA, ZDMA_R_MAX),
922
+ VMSTATE_UINT32(state, XlnxZDMA),
923
+ VMSTATE_UINT32_ARRAY(dsc_src.words, XlnxZDMA, 4),
924
+ VMSTATE_UINT32_ARRAY(dsc_dst.words, XlnxZDMA, 4),
925
+ VMSTATE_END_OF_LIST(),
926
+ }
927
+};
928
+
929
+static Property zdma_props[] = {
930
+ DEFINE_PROP_UINT32("bus-width", XlnxZDMA, cfg.bus_width, 64),
931
+ DEFINE_PROP_END_OF_LIST(),
932
+};
933
+
934
+static void zdma_class_init(ObjectClass *klass, void *data)
935
+{
936
+ DeviceClass *dc = DEVICE_CLASS(klass);
937
+
938
+ dc->reset = zdma_reset;
939
+ dc->realize = zdma_realize;
940
+ dc->props = zdma_props;
941
+ dc->vmsd = &vmstate_zdma;
942
+}
943
+
944
+static const TypeInfo zdma_info = {
945
+ .name = TYPE_XLNX_ZDMA,
946
+ .parent = TYPE_SYS_BUS_DEVICE,
947
+ .instance_size = sizeof(XlnxZDMA),
948
+ .class_init = zdma_class_init,
949
+ .instance_init = zdma_init,
950
+};
951
+
952
+static void zdma_register_types(void)
953
+{
954
+ type_register_static(&zdma_info);
955
+}
956
+
957
+type_init(zdma_register_types)
958
--
119
--
959
2.17.0
120
2.20.1
960
121
961
122
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

These were the instructions that were stubbed out when
introducing the decode skeleton.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 55 ++++++++++++++++++++++++++++++++------
1 file changed, 47 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
* Implement all of the translator functions referenced by the decoder.
*/

-static bool trans_AND_zzz(DisasContext *s, arg_AND_zzz *a, uint32_t insn)
+/* Invoke a vector expander on two Zregs. */
+static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
+ int esz, int rd, int rn)
{
- return false;
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn), vsz, vsz);
+ }
+ return true;
}

-static bool trans_ORR_zzz(DisasContext *s, arg_ORR_zzz *a, uint32_t insn)
+/* Invoke a vector expander on three Zregs. */
+static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
+ int esz, int rd, int rn, int rm)
{
- return false;
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn),
+ vec_full_reg_offset(s, rm), vsz, vsz);
+ }
+ return true;
}

-static bool trans_EOR_zzz(DisasContext *s, arg_EOR_zzz *a, uint32_t insn)
+/* Invoke a vector move on two Zregs. */
+static bool do_mov_z(DisasContext *s, int rd, int rn)
{
- return false;
+ return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
}

-static bool trans_BIC_zzz(DisasContext *s, arg_BIC_zzz *a, uint32_t insn)
+/*
+ *** SVE Logical - Unpredicated Group
+ */
+
+static bool trans_AND_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
{
- return false;
+ return do_vector3_z(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
+}
+
+static bool trans_ORR_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+ if (a->rn == a->rm) { /* MOV */
+ return do_mov_z(s, a->rd, a->rn);
+ } else {
+ return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
+ }
+}
+
+static bool trans_EOR_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+ return do_vector3_z(s, tcg_gen_gvec_xor, 0, a->rd, a->rn, a->rm);
+}
+
+static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+ return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
}
--
2.17.0

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We want to have a common class_init(). The only value that
matters (and changes) is the board revision.
Pass the board_rev as class_data to class_init().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-9-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/raspi.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ typedef struct RaspiMachineClass {
/*< private >*/
MachineClass parent_obj;
/*< public >*/
+ uint32_t board_rev;
} RaspiMachineClass;

#define TYPE_RASPI_MACHINE MACHINE_TYPE_NAME("raspi-common")
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
arm_load_kernel(ARM_CPU(first_cpu), machine, &binfo);
}

-static void raspi_init(MachineState *machine, uint32_t board_rev)
+static void raspi_init(MachineState *machine)
{
+ RaspiMachineClass *mc = RASPI_MACHINE_GET_CLASS(machine);
RaspiMachineState *s = RASPI_MACHINE(machine);
+ uint32_t board_rev = mc->board_rev;
int version = board_version(board_rev);
uint64_t ram_size = board_ram_size(board_rev);
uint32_t vcram_size;
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, uint32_t board_rev)

static void raspi2_init(MachineState *machine)
{
- raspi_init(machine, 0xa21041);
+ raspi_init(machine);
}

static void raspi2_machine_class_init(ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(oc);
+ RaspiMachineClass *rmc = RASPI_MACHINE_CLASS(oc);
+ uint32_t board_rev = (uint32_t)(uintptr_t)data;

+ rmc->board_rev = board_rev;
mc->desc = "Raspberry Pi 2B";
mc->init = raspi2_init;
mc->block_default_type = IF_SD;
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)
#ifdef TARGET_AARCH64
static void raspi3_init(MachineState *machine)
{
- raspi_init(machine, 0xa02082);
+ raspi_init(machine);
}

static void raspi3_machine_class_init(ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(oc);
+ RaspiMachineClass *rmc = RASPI_MACHINE_CLASS(oc);
+ uint32_t board_rev = (uint32_t)(uintptr_t)data;

+ rmc->board_rev = board_rev;
mc->desc = "Raspberry Pi 3B";
mc->init = raspi3_init;
mc->block_default_type = IF_SD;
@@ -XXX,XX +XXX,XX @@ static const TypeInfo raspi_machine_types[] = {
.name = MACHINE_TYPE_NAME("raspi2"),
.parent = TYPE_RASPI_MACHINE,
.class_init = raspi2_machine_class_init,
+ .class_data = (void *)0xa21041,
#ifdef TARGET_AARCH64
}, {
.name = MACHINE_TYPE_NAME("raspi3"),
.parent = TYPE_RASPI_MACHINE,
.class_init = raspi3_machine_class_init,
+ .class_data = (void *)0xa02082,
#endif
}, {
.name = TYPE_RASPI_MACHINE,
--
2.20.1
New patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

raspi_machine_init() access to board_rev via RaspiMachineClass.
raspi2_init() and raspi3_init() do nothing. Call raspi_machine_init
directly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20200208165645.15657-10-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/raspi.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
arm_load_kernel(ARM_CPU(first_cpu), machine, &binfo);
}

-static void raspi_init(MachineState *machine)
+static void raspi_machine_init(MachineState *machine)
{
RaspiMachineClass *mc = RASPI_MACHINE_GET_CLASS(machine);
RaspiMachineState *s = RASPI_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine)
setup_boot(machine, version, machine->ram_size - vcram_size);
}

-static void raspi2_init(MachineState *machine)
-{
- raspi_init(machine);
-}
-
static void raspi2_machine_class_init(ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(oc);
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)

rmc->board_rev = board_rev;
mc->desc = "Raspberry Pi 2B";
- mc->init = raspi2_init;
+ mc->init = raspi_machine_init;
mc->block_default_type = IF_SD;
mc->no_parallel = 1;
mc->no_floppy = 1;
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)
};

#ifdef TARGET_AARCH64
-static void raspi3_init(MachineState *machine)
-{
- raspi_init(machine);
-}
-
static void raspi3_machine_class_init(ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(oc);
@@ -XXX,XX +XXX,XX @@ static void raspi3_machine_class_init(ObjectClass *oc, void *data)

rmc->board_rev = board_rev;
mc->desc = "Raspberry Pi 3B";
- mc->init = raspi3_init;
+ mc->init = raspi_machine_init;
mc->block_default_type = IF_SD;
mc->no_parallel = 1;
mc->no_floppy = 1;
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

Coverity points out that this can overflow if n > 31,
because it's only doing 32-bit arithmetic. Let's use 1ULL instead
of 1. Also the formulae used to compute n can be replaced by
the level_shift() macro.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1526493784-25328-3-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/smmu-common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ static inline hwaddr get_table_pte_address(uint64_t pte, int granule_sz)
static inline hwaddr get_block_pte_address(uint64_t pte, int level,
int granule_sz, uint64_t *bsz)
{
- int n = (granule_sz - 3) * (4 - level) + 3;
+ int n = level_shift(level, granule_sz);

- *bsz = 1 << n;
+ *bsz = 1ULL << n;
return PTE_ADDRESS(pte, n);
}
--
2.17.0

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We added a helper to extract the RAM size from the board
revision, and made board_rev a field of RaspiMachineClass.
The class_init() can now use the helper to extract from the
board revision the board-specific amount of RAM.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-11-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/raspi.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)
mc->max_cpus = BCM283X_NCPUS;
mc->min_cpus = BCM283X_NCPUS;
mc->default_cpus = BCM283X_NCPUS;
- mc->default_ram_size = 1 * GiB;
+ mc->default_ram_size = board_ram_size(board_rev);
mc->ignore_memory_transaction_failures = true;
};

@@ -XXX,XX +XXX,XX @@ static void raspi3_machine_class_init(ObjectClass *oc, void *data)
mc->max_cpus = BCM283X_NCPUS;
mc->min_cpus = BCM283X_NCPUS;
mc->default_cpus = BCM283X_NCPUS;
- mc->default_ram_size = 1 * GiB;
+ mc->default_ram_size = board_ram_size(board_rev);
}
#endif
--
2.20.1
New patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The board revision encode the model type. Add a helper
to extract the model, and use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-12-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/raspi.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static const char *board_soc_type(uint32_t board_rev)
return soc_types[proc_id];
}

+static const char *board_type(uint32_t board_rev)
+{
+ static const char *types[] = {
+ "A", "B", "A+", "B+", "2B", "Alpha", "CM1", NULL, "3B", "Zero",
+ "CM3", NULL, "Zero W", "3B+", "3A+", NULL, "CM3+", "4B",
+ };
+ assert(FIELD_EX32(board_rev, REV_CODE, STYLE)); /* Only new style */
+ int bt = FIELD_EX32(board_rev, REV_CODE, TYPE);
+ if (bt >= ARRAY_SIZE(types) || !types[bt]) {
+ return "Unknown";
+ }
+ return types[bt];
+}
+
static void write_smpboot(ARMCPU *cpu, const struct arm_boot_info *info)
{
static const uint32_t smpboot[] = {
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)
uint32_t board_rev = (uint32_t)(uintptr_t)data;

rmc->board_rev = board_rev;
- mc->desc = "Raspberry Pi 2B";
+ mc->desc = g_strdup_printf("Raspberry Pi %s", board_type(board_rev));
mc->init = raspi_machine_init;
mc->block_default_type = IF_SD;
mc->no_parallel = 1;
@@ -XXX,XX +XXX,XX @@ static void raspi3_machine_class_init(ObjectClass *oc, void *data)
uint32_t board_rev = (uint32_t)(uintptr_t)data;

rmc->board_rev = board_rev;
- mc->desc = "Raspberry Pi 3B";
+ mc->desc = g_strdup_printf("Raspberry Pi %s", board_type(board_rev));
mc->init = raspi_machine_init;
mc->block_default_type = IF_SD;
mc->no_parallel = 1;
--
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

With the exception of the ignore_memory_transaction_failures
flag set for the raspi2, both machine_class_init() methods
are now identical. Merge them to keep a unique method.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20200208165645.15657-13-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 31 ++++++-------------------------
 1 file changed, 6 insertions(+), 25 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static void raspi_machine_init(MachineState *machine)
     setup_boot(machine, version, machine->ram_size - vcram_size);
 }
 
-static void raspi2_machine_class_init(ObjectClass *oc, void *data)
+static void raspi_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
     RaspiMachineClass *rmc = RASPI_MACHINE_CLASS(oc);
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_class_init(ObjectClass *oc, void *data)
     mc->min_cpus = BCM283X_NCPUS;
     mc->default_cpus = BCM283X_NCPUS;
     mc->default_ram_size = board_ram_size(board_rev);
-    mc->ignore_memory_transaction_failures = true;
+    if (board_version(board_rev) == 2) {
+        mc->ignore_memory_transaction_failures = true;
+    }
 };
 
-#ifdef TARGET_AARCH64
-static void raspi3_machine_class_init(ObjectClass *oc, void *data)
-{
-    MachineClass *mc = MACHINE_CLASS(oc);
-    RaspiMachineClass *rmc = RASPI_MACHINE_CLASS(oc);
-    uint32_t board_rev = (uint32_t)(uintptr_t)data;
-
-    rmc->board_rev = board_rev;
-    mc->desc = g_strdup_printf("Raspberry Pi %s", board_type(board_rev));
-    mc->init = raspi_machine_init;
-    mc->block_default_type = IF_SD;
-    mc->no_parallel = 1;
-    mc->no_floppy = 1;
-    mc->no_cdrom = 1;
-    mc->max_cpus = BCM283X_NCPUS;
-    mc->min_cpus = BCM283X_NCPUS;
-    mc->default_cpus = BCM283X_NCPUS;
-    mc->default_ram_size = board_ram_size(board_rev);
-}
-#endif
-
 static const TypeInfo raspi_machine_types[] = {
     {
         .name = MACHINE_TYPE_NAME("raspi2"),
         .parent = TYPE_RASPI_MACHINE,
-        .class_init = raspi2_machine_class_init,
+        .class_init = raspi_machine_class_init,
         .class_data = (void *)0xa21041,
 #ifdef TARGET_AARCH64
     }, {
         .name = MACHINE_TYPE_NAME("raspi3"),
         .parent = TYPE_RASPI_MACHINE,
-        .class_init = raspi3_machine_class_init,
+        .class_init = raspi_machine_class_init,
         .class_data = (void *)0xa02082,
 #endif
     }, {
-- 
2.20.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The count of ARM cores is encoded in the board revision. Add a
helper to extract the number of cores, and use it. This will be
helpful when we add the Raspi0/1 that have a single core.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200208165645.15657-14-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message as suggested by Igor]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/raspi.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/raspi.c
+++ b/hw/arm/raspi.c
@@ -XXX,XX +XXX,XX @@ static const char *board_soc_type(uint32_t board_rev)
     return soc_types[proc_id];
 }
 
+static int cores_count(uint32_t board_rev)
+{
+    static const int soc_cores_count[] = {
+        0, BCM283X_NCPUS, BCM283X_NCPUS,
+    };
+    int proc_id = board_processor_id(board_rev);
+
+    if (proc_id >= ARRAY_SIZE(soc_cores_count) || !soc_cores_count[proc_id]) {
+        error_report("Unsupported processor id '%d' (board revision: 0x%x)",
+                     proc_id, board_rev);
+        exit(1);
+    }
+    return soc_cores_count[proc_id];
+}
+
 static const char *board_type(uint32_t board_rev)
 {
     static const char *types[] = {
@@ -XXX,XX +XXX,XX @@ static void raspi_machine_class_init(ObjectClass *oc, void *data)
     mc->no_parallel = 1;
     mc->no_floppy = 1;
     mc->no_cdrom = 1;
-    mc->max_cpus = BCM283X_NCPUS;
-    mc->min_cpus = BCM283X_NCPUS;
-    mc->default_cpus = BCM283X_NCPUS;
+    mc->default_cpus = mc->min_cpus = mc->max_cpus = cores_count(board_rev);
     mc->default_ram_size = board_ram_size(board_rev);
     if (board_version(board_rev) == 2) {
         mc->ignore_memory_transaction_failures = true;
-- 
2.20.1

The ARMv8.1-VMID16 extension extends the VMID from 8 bits to 16 bits:

 * the ID_AA64MMFR1_EL1.VMIDBits field specifies whether the VMID is
   8 or 16 bits
 * the VMID field in VTTBR_EL2 is extended to 16 bits
 * VTCR_EL2.VS lets the guest specify whether to use the full 16 bits,
   or use the backwards-compatible 8 bits

For QEMU implementing this is trivial:
 * we do not track VMIDs in TLB entries, so we never use the VMID field
 * we treat any write to VTTBR_EL2, not just a change to the VMID field
   bits, as a "possible VMID change" that causes us to throw away TLB
   entries, so that code doesn't need changing
 * we allow the guest to read/write the VTCR_EL2.VS bit already

So all that's missing is the ID register part: report that we support
VMID16 in our 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200210120146.17631-1-peter.maydell@linaro.org
---
 target/arm/cpu64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
     t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
     t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
    cpu->isar.id_aa64mmfr1 = t;
 
     t = cpu->isar.id_aa64mmfr2;
-- 
2.20.1
