Hi; here's the latest round of arm patches. I have included also
my patchset for the RTC devices to avoid keeping time_t and
time_t diffs in 32-bit variables.

thanks
-- PMM

The following changes since commit 156618d9ea67f2f2e31d9dedd97f2dcccbe6808c:

  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2023-08-30 09:20:27 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230831

for you to fetch changes up to e73b8bb8a3e9a162f70e9ffbf922d4fafc96bbfb:

  hw/arm: Set number of MPU regions correctly for an505, an521, an524 (2023-08-31 11:07:02 +0100)

----------------------------------------------------------------
target-arm queue:
 * Some of the preliminary patches for Cortex-A710 support
 * i.MX7 and i.MX6UL refactoring
 * Implement SRC device for i.MX7
 * Catch illegal-exception-return from EL3 with bad NSE/NS
 * Use 64-bit offsets for holding time_t differences in RTC devices
 * Model correct number of MPU regions for an505, an521, an524 boards

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: properly document FEAT_CRC32

Jean-Christophe Dubois (6):
      Remove i.MX7 IOMUX GPR device from i.MX6UL
      Refactor i.MX6UL processor code
      Add i.MX6UL missing devices.
      Refactor i.MX7 processor code
      Add i.MX7 missing TZ devices and memory regions
      Add i.MX7 SRC device implementation

Peter Maydell (8):
      target/arm: Catch illegal-exception-return from EL3 with bad NSE/NS
      hw/rtc/m48t59: Use 64-bit arithmetic in set_alarm()
      hw/rtc/twl92230: Use int64_t for sec_offset and alm_sec
      hw/rtc/aspeed_rtc: Use 64-bit offset for holding time_t difference
      rtc: Use time_t for passing and returning time offsets
      target/arm: Do all "ARM_FEATURE_X implies Y" checks in post_init
      hw/arm/armv7m: Add mpu-ns-regions and mpu-s-regions properties
      hw/arm: Set number of MPU regions correctly for an505, an521, an524

Richard Henderson (9):
      target/arm: Reduce dcz_blocksize to uint8_t
      target/arm: Allow cpu to configure GM blocksize
      target/arm: Support more GM blocksizes
      target/arm: When tag memory is not present, set MTE=1
      target/arm: Introduce make_ccsidr64
      target/arm: Apply access checks to neoverse-n1 special registers
      target/arm: Apply access checks to neoverse-v1 special registers
      target/arm: Suppress FEAT_TRBE (Trace Buffer Extension)
      target/arm: Implement FEAT_HPDS2 as a no-op

 docs/system/arm/emulation.rst  |   2 +
 include/hw/arm/armsse.h        |   5 +
 include/hw/arm/armv7m.h        |   8 +
 include/hw/arm/fsl-imx6ul.h    | 158 ++++++++++++++++---
 include/hw/arm/fsl-imx7.h      | 338 ++++++++++++++++++++++++++++++-----------
 include/hw/misc/imx7_src.h     |  66 ++++++++
 include/hw/rtc/aspeed_rtc.h    |   2 +-
 include/sysemu/rtc.h           |   4 +-
 target/arm/cpregs.h            |   2 +
 target/arm/cpu.h               |   5 +-
 target/arm/internals.h         |   6 -
 target/arm/tcg/translate.h     |   2 +
 hw/arm/armsse.c                |  16 ++
 hw/arm/armv7m.c                |  21 +++
 hw/arm/fsl-imx6ul.c            | 174 +++++++++++++--------
 hw/arm/fsl-imx7.c              | 201 +++++++++++++++++++-----
 hw/arm/mps2-tz.c               |  29 ++++
 hw/misc/imx7_src.c             | 276 +++++++++++++++++++++++++++++++++
 hw/rtc/aspeed_rtc.c            |   5 +-
 hw/rtc/m48t59.c                |   2 +-
 hw/rtc/twl92230.c              |   4 +-
 softmmu/rtc.c                  |   4 +-
 target/arm/cpu.c               | 207 ++++++++++++++-----------
 target/arm/helper.c            |  15 +-
 target/arm/tcg/cpu32.c         |   2 +-
 target/arm/tcg/cpu64.c         | 102 +++++++++----
 target/arm/tcg/helper-a64.c    |   9 ++
 target/arm/tcg/mte_helper.c    |  90 ++++++++---
 target/arm/tcg/translate-a64.c |   5 +-
 hw/misc/meson.build            |   1 +
 hw/misc/trace-events           |   4 +
 31 files changed, 1393 insertions(+), 372 deletions(-)
 create mode 100644 include/hw/misc/imx7_src.h
 create mode 100644 hw/misc/imx7_src.c
From: Richard Henderson <richard.henderson@linaro.org>

This value is only 4 bits wide.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230811214031.171020-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
     bool prop_lpa2;
 
     /* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
-    uint32_t dcz_blocksize;
+    uint8_t dcz_blocksize;
+
     uint64_t rvbar_prop; /* Property/input signals. */
 
     /* Configurable aspects of GIC cpu interface (which is part of the CPU) */
--
2.34.1

From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add a SIGBUS signal handler. In this handler, it checks the SIGBUS type,
3
Previously we hard-coded the blocksize with GMID_EL1_BS.
4
translates the host VA delivered by host to guest PA, then fills this PA
4
But the value we choose for -cpu max does not match the
5
to guest APEI GHES memory, then notifies guest according to the SIGBUS
5
value that cortex-a710 uses.
6
type.
6
7
7
Mirror the way we handle dcz_blocksize.
8
When guest accesses the poisoned memory, it will generate a Synchronous
8
9
External Abort(SEA). Then host kernel gets an APEI notification and calls
10
memory_failure() to unmapped the affected page in stage 2, finally
11
returns to guest.
12
13
Guest continues to access the PG_hwpoison page, it will trap to KVM as
14
stage2 fault, then a SIGBUS_MCEERR_AR synchronous signal is delivered to
15
Qemu, Qemu records this error address into guest APEI GHES memory and
16
notifes guest using Synchronous-External-Abort(SEA).
17
18
In order to inject a vSEA, we introduce the kvm_inject_arm_sea() function
19
in which we can setup the type of exception and the syndrome information.
20
When switching to guest, the target vcpu will jump to the synchronous
21
external abort vector table entry.
22
23
The ESR_ELx.DFSC is set to synchronous external abort(0x10), and the
24
ESR_ELx.FnV is set to not valid(0x1), which will tell guest that FAR is
25
not valid and hold an UNKNOWN value. These values will be set to KVM
26
register structures through KVM_SET_ONE_REG IOCTL.
27
28
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
29
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
30
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
31
Acked-by: Xiang Zheng <zhengxiang9@huawei.com>
32
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
33
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
34
Message-id: 20200512030609.19593-10-gengdongjiu@huawei.com
11
Message-id: 20230811214031.171020-3-richard.henderson@linaro.org
35
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
36
---
13
---
37
include/sysemu/kvm.h | 3 +-
14
target/arm/cpu.h | 2 ++
38
target/arm/cpu.h | 4 +++
15
target/arm/internals.h | 6 -----
39
target/arm/internals.h | 5 +--
16
target/arm/tcg/translate.h | 2 ++
40
target/i386/cpu.h | 2 ++
17
target/arm/helper.c | 11 +++++---
41
target/arm/helper.c | 2 +-
18
target/arm/tcg/cpu64.c | 1 +
42
target/arm/kvm64.c | 77 +++++++++++++++++++++++++++++++++++++++++
19
target/arm/tcg/mte_helper.c | 46 ++++++++++++++++++++++------------
43
target/arm/tlb_helper.c | 2 +-
20
target/arm/tcg/translate-a64.c | 5 ++--
44
7 files changed, 89 insertions(+), 6 deletions(-)
21
7 files changed, 45 insertions(+), 28 deletions(-)
45
22
46
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/include/sysemu/kvm.h
49
+++ b/include/sysemu/kvm.h
50
@@ -XXX,XX +XXX,XX @@ bool kvm_vcpu_id_is_valid(int vcpu_id);
51
/* Returns VCPU ID to be used on KVM_CREATE_VCPU ioctl() */
52
unsigned long kvm_arch_vcpu_id(CPUState *cpu);
53
54
-#ifdef TARGET_I386
55
-#define KVM_HAVE_MCE_INJECTION 1
56
+#ifdef KVM_HAVE_MCE_INJECTION
57
void kvm_arch_on_sigbus_vcpu(CPUState *cpu, int code, void *addr);
58
#endif
59
60
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
61
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/cpu.h
25
--- a/target/arm/cpu.h
63
+++ b/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
64
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
65
/* ARM processors have a weak memory model */
28
66
#define TCG_GUEST_DEFAULT_MO (0)
29
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
67
30
uint8_t dcz_blocksize;
68
+#ifdef TARGET_AARCH64
31
+ /* GM blocksize, in log_2(words), ie low 4 bits of GMID_EL0 */
69
+#define KVM_HAVE_MCE_INJECTION 1
32
+ uint8_t gm_blocksize;
70
+#endif
33
71
+
34
uint64_t rvbar_prop; /* Property/input signals. */
72
#define EXCP_UDEF 1 /* undefined instruction */
35
73
#define EXCP_SWI 2 /* software interrupt */
74
#define EXCP_PREFETCH_ABORT 3
75
diff --git a/target/arm/internals.h b/target/arm/internals.h
36
diff --git a/target/arm/internals.h b/target/arm/internals.h
76
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/internals.h
38
--- a/target/arm/internals.h
78
+++ b/target/arm/internals.h
39
+++ b/target/arm/internals.h
79
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
40
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs);
80
| ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
41
81
}
42
#endif /* !CONFIG_USER_ONLY */
82
43
83
-static inline uint32_t syn_data_abort_no_iss(int same_el,
44
-/*
84
+static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
45
- * The log2 of the words in the tag block, for GMID_EL1.BS.
85
int ea, int cm, int s1ptw,
46
- * The is the maximum, 256 bytes, which manipulates 64-bits of tags.
86
int wnr, int fsc)
47
- */
87
{
48
-#define GMID_EL1_BS 6
88
return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
49
-
89
| ARM_EL_IL
50
/*
90
- | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
51
* SVE predicates are 1/8 the size of SVE vectors, and cannot use
91
+ | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
52
* the same simd_desc() encoding due to restrictions on size.
92
+ | (wnr << 6) | fsc;
53
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
93
}
54
index XXXXXXX..XXXXXXX 100644
94
55
--- a/target/arm/tcg/translate.h
95
static inline uint32_t syn_data_abort_with_iss(int same_el,
56
+++ b/target/arm/tcg/translate.h
96
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
57
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
97
index XXXXXXX..XXXXXXX 100644
58
int8_t btype;
98
--- a/target/i386/cpu.h
59
/* A copy of cpu->dcz_blocksize. */
99
+++ b/target/i386/cpu.h
60
uint8_t dcz_blocksize;
100
@@ -XXX,XX +XXX,XX @@
61
+ /* A copy of cpu->gm_blocksize. */
101
/* The x86 has a strong memory model with some store-after-load re-ordering */
62
+ uint8_t gm_blocksize;
102
#define TCG_GUEST_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD)
63
/* True if this page is guarded. */
103
64
bool guarded_page;
104
+#define KVM_HAVE_MCE_INJECTION 1
65
/* Bottom two bits of XScale c15_cpar coprocessor access control reg */
105
+
106
/* Maximum instruction code size */
107
#define TARGET_MAX_INSN_SIZE 16
108
109
diff --git a/target/arm/helper.c b/target/arm/helper.c
66
diff --git a/target/arm/helper.c b/target/arm/helper.c
110
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/helper.c
68
--- a/target/arm/helper.c
112
+++ b/target/arm/helper.c
69
+++ b/target/arm/helper.c
113
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
70
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
114
* Report exception with ESR indicating a fault due to a
71
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 6,
115
* translation table walk for a cache maintenance instruction.
72
.access = PL1_RW, .accessfn = access_mte,
116
*/
73
.fieldoffset = offsetof(CPUARMState, cp15.gcr_el1) },
117
- syn = syn_data_abort_no_iss(current_el == target_el,
74
- { .name = "GMID_EL1", .state = ARM_CP_STATE_AA64,
118
+ syn = syn_data_abort_no_iss(current_el == target_el, 0,
75
- .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4,
119
fi.ea, 1, fi.s1ptw, 1, fsc);
76
- .access = PL1_R, .accessfn = access_aa64_tid5,
120
env->exception.vaddress = value;
77
- .type = ARM_CP_CONST, .resetvalue = GMID_EL1_BS },
121
env->exception.fsr = fsr;
78
{ .name = "TCO", .state = ARM_CP_STATE_AA64,
122
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
79
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
123
index XXXXXXX..XXXXXXX 100644
80
.type = ARM_CP_NO_RAW,
124
--- a/target/arm/kvm64.c
81
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
125
+++ b/target/arm/kvm64.c
82
* then define only a RAZ/WI version of PSTATE.TCO.
126
@@ -XXX,XX +XXX,XX @@
83
*/
127
#include "sysemu/kvm_int.h"
84
if (cpu_isar_feature(aa64_mte, cpu)) {
128
#include "kvm_arm.h"
85
+ ARMCPRegInfo gmid_reginfo = {
129
#include "internals.h"
86
+ .name = "GMID_EL1", .state = ARM_CP_STATE_AA64,
130
+#include "hw/acpi/acpi.h"
87
+ .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4,
131
+#include "hw/acpi/ghes.h"
88
+ .access = PL1_R, .accessfn = access_aa64_tid5,
132
+#include "hw/arm/virt.h"
89
+ .type = ARM_CP_CONST, .resetvalue = cpu->gm_blocksize,
133
90
+ };
134
static bool have_guest_debug;
91
+ define_one_arm_cp_reg(cpu, &gmid_reginfo);
135
92
define_arm_cp_regs(cpu, mte_reginfo);
136
@@ -XXX,XX +XXX,XX @@ int kvm_arm_cpreg_level(uint64_t regidx)
93
define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
137
return KVM_PUT_RUNTIME_STATE;
94
} else if (cpu_isar_feature(aa64_mte_insn_reg, cpu)) {
95
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/arm/tcg/cpu64.c
98
+++ b/target/arm/tcg/cpu64.c
99
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
100
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
101
cpu->dcz_blocksize = 7; /* 512 bytes */
102
#endif
103
+ cpu->gm_blocksize = 6; /* 256 bytes */
104
105
cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
106
cpu->sme_vq.supported = SVE_VQ_POW2_MAP;
107
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/tcg/mte_helper.c
110
+++ b/target/arm/tcg/mte_helper.c
111
@@ -XXX,XX +XXX,XX @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
112
}
138
}
113
}
139
114
140
+/* Callers must hold the iothread mutex lock */
115
-#define LDGM_STGM_SIZE (4 << GMID_EL1_BS)
141
+static void kvm_inject_arm_sea(CPUState *c)
116
-
142
+{
117
uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
143
+ ARMCPU *cpu = ARM_CPU(c);
118
{
144
+ CPUARMState *env = &cpu->env;
119
int mmu_idx = cpu_mmu_index(env, false);
145
+ CPUClass *cc = CPU_GET_CLASS(c);
120
uintptr_t ra = GETPC();
146
+ uint32_t esr;
121
+ int gm_bs = env_archcpu(env)->gm_blocksize;
147
+ bool same_el;
122
+ int gm_bs_bytes = 4 << gm_bs;
148
+
123
void *tag_mem;
149
+ c->exception_index = EXCP_DATA_ABORT;
124
150
+ env->exception.target_el = 1;
125
- ptr = QEMU_ALIGN_DOWN(ptr, LDGM_STGM_SIZE);
151
+
126
+ ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
152
+ /*
127
153
+ * Set the DFSC to synchronous external abort and set FnV to not valid,
128
/* Trap if accessing an invalid page. */
154
+ * this will tell guest the FAR_ELx is UNKNOWN for this abort.
129
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD,
155
+ */
130
- LDGM_STGM_SIZE, MMU_DATA_LOAD,
156
+ same_el = arm_current_el(env) == env->exception.target_el;
131
- LDGM_STGM_SIZE / (2 * TAG_GRANULE), ra);
157
+ esr = syn_data_abort_no_iss(same_el, 1, 0, 0, 0, 0, 0x10);
132
+ gm_bs_bytes, MMU_DATA_LOAD,
158
+
133
+ gm_bs_bytes / (2 * TAG_GRANULE), ra);
159
+ env->exception.syndrome = esr;
134
160
+
135
/* The tag is squashed to zero if the page does not support tags. */
161
+ cc->do_interrupt(c);
136
if (!tag_mem) {
162
+}
137
return 0;
163
+
138
}
164
#define AARCH64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
139
165
KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
140
- QEMU_BUILD_BUG_ON(GMID_EL1_BS != 6);
166
141
/*
167
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
142
- * We are loading 64-bits worth of tags. The ordering of elements
168
return ret;
143
- * within the word corresponds to a 64-bit little-endian operation.
144
+ * The ordering of elements within the word corresponds to
145
+ * a little-endian operation.
146
*/
147
- return ldq_le_p(tag_mem);
148
+ switch (gm_bs) {
149
+ case 6:
150
+ /* 256 bytes -> 16 tags -> 64 result bits */
151
+ return ldq_le_p(tag_mem);
152
+ default:
153
+ /* cpu configured with unsupported gm blocksize. */
154
+ g_assert_not_reached();
155
+ }
169
}
156
}
170
157
171
+void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
158
void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
172
+{
159
{
173
+ ram_addr_t ram_addr;
160
int mmu_idx = cpu_mmu_index(env, false);
174
+ hwaddr paddr;
161
uintptr_t ra = GETPC();
175
+ Object *obj = qdev_get_machine();
162
+ int gm_bs = env_archcpu(env)->gm_blocksize;
176
+ VirtMachineState *vms = VIRT_MACHINE(obj);
163
+ int gm_bs_bytes = 4 << gm_bs;
177
+ bool acpi_enabled = virt_is_acpi_enabled(vms);
164
void *tag_mem;
178
+
165
179
+ assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);
166
- ptr = QEMU_ALIGN_DOWN(ptr, LDGM_STGM_SIZE);
180
+
167
+ ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
181
+ if (acpi_enabled && addr &&
168
182
+ object_property_get_bool(obj, "ras", NULL)) {
169
/* Trap if accessing an invalid page. */
183
+ ram_addr = qemu_ram_addr_from_host(addr);
170
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
184
+ if (ram_addr != RAM_ADDR_INVALID &&
171
- LDGM_STGM_SIZE, MMU_DATA_LOAD,
185
+ kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
172
- LDGM_STGM_SIZE / (2 * TAG_GRANULE), ra);
186
+ kvm_hwpoison_page_add(ram_addr);
173
+ gm_bs_bytes, MMU_DATA_LOAD,
187
+ /*
174
+ gm_bs_bytes / (2 * TAG_GRANULE), ra);
188
+ * If this is a BUS_MCEERR_AR, we know we have been called
175
189
+ * synchronously from the vCPU thread, so we can easily
176
/*
190
+ * synchronize the state and inject an error.
177
* Tag store only happens if the page support tags,
191
+ *
178
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
192
+ * TODO: we currently don't tell the guest at all about
179
return;
193
+ * BUS_MCEERR_AO. In that case we might either be being
180
}
194
+ * called synchronously from the vCPU thread, or a bit
181
195
+ * later from the main thread, so doing the injection of
182
- QEMU_BUILD_BUG_ON(GMID_EL1_BS != 6);
196
+ * the error would be more complicated.
183
/*
197
+ */
184
- * We are storing 64-bits worth of tags. The ordering of elements
198
+ if (code == BUS_MCEERR_AR) {
185
- * within the word corresponds to a 64-bit little-endian operation.
199
+ kvm_cpu_synchronize_state(c);
186
+ * The ordering of elements within the word corresponds to
200
+ if (!acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr)) {
187
+ * a little-endian operation.
201
+ kvm_inject_arm_sea(c);
188
*/
202
+ } else {
189
- stq_le_p(tag_mem, val);
203
+ error_report("failed to record the error");
190
+ switch (gm_bs) {
204
+ abort();
191
+ case 6:
205
+ }
192
+ stq_le_p(tag_mem, val);
206
+ }
193
+ break;
207
+ return;
194
+ default:
208
+ }
195
+ /* cpu configured with unsupported gm blocksize. */
209
+ if (code == BUS_MCEERR_AO) {
196
+ g_assert_not_reached();
210
+ error_report("Hardware memory error at addr %p for memory used by "
211
+ "QEMU itself instead of guest system!", addr);
212
+ }
213
+ }
197
+ }
214
+
198
}
215
+ if (code == BUS_MCEERR_AR) {
199
216
+ error_report("Hardware memory error!");
200
void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
217
+ exit(1);
201
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
218
+ }
202
index XXXXXXX..XXXXXXX 100644
219
+}
203
--- a/target/arm/tcg/translate-a64.c
220
+
204
+++ b/target/arm/tcg/translate-a64.c
221
/* C6.6.29 BRK instruction */
205
@@ -XXX,XX +XXX,XX @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
222
static const uint32_t brk_insn = 0xd4200000;
206
gen_helper_stgm(cpu_env, addr, tcg_rt);
223
224
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
225
index XXXXXXX..XXXXXXX 100644
226
--- a/target/arm/tlb_helper.c
227
+++ b/target/arm/tlb_helper.c
228
@@ -XXX,XX +XXX,XX @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
229
* ISV field.
230
*/
231
if (!(template_syn & ARM_EL_ISV) || target_el != 2 || s1ptw) {
232
- syn = syn_data_abort_no_iss(same_el,
233
+ syn = syn_data_abort_no_iss(same_el, 0,
234
ea, 0, s1ptw, is_write, fsc);
235
} else {
207
} else {
236
/*
208
MMUAccessType acc = MMU_DATA_STORE;
209
- int size = 4 << GMID_EL1_BS;
210
+ int size = 4 << s->gm_blocksize;
211
212
clean_addr = clean_data_tbi(s, addr);
213
tcg_gen_andi_i64(clean_addr, clean_addr, -size);
214
@@ -XXX,XX +XXX,XX @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
215
gen_helper_ldgm(tcg_rt, cpu_env, addr);
216
} else {
217
MMUAccessType acc = MMU_DATA_LOAD;
218
- int size = 4 << GMID_EL1_BS;
219
+ int size = 4 << s->gm_blocksize;
220
221
clean_addr = clean_data_tbi(s, addr);
222
tcg_gen_andi_i64(clean_addr, clean_addr, -size);
223
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
224
dc->cp_regs = arm_cpu->cp_regs;
225
dc->features = env->features;
226
dc->dcz_blocksize = arm_cpu->dcz_blocksize;
227
+ dc->gm_blocksize = arm_cpu->gm_blocksize;
228
229
#ifdef CONFIG_USER_ONLY
230
/* In sve_probe_page, we assume TBI is enabled. */
237
--
231
--
238
2.20.1
232
2.34.1
239
240
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Now that we've converted all cases to gvec, there is quite a bit
3
Support all of the easy GM block sizes.
4
of dead code at the end of the function. Remove it.
4
Use direct memory operations, since the pointers are aligned.
5
5
6
Sink the call to gen_gvec_fn2i to the end, loading a function
6
While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
7
pointer within the switch statement.
7
an atomic store of one nibble. This is not difficult, but there
8
is also no point in supporting it until required.
8
9
10
Note that cortex-a710 sets GM blocksize to match its cacheline
11
size of 64 bytes. I expect many implementations will also
12
match the cacheline, which makes 16 bytes very unlikely.
13
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20230811214031.171020-4-richard.henderson@linaro.org
11
Message-id: 20200513163245.17915-6-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
18
---
14
target/arm/translate-a64.c | 56 ++++++++++----------------------------
19
target/arm/cpu.c | 18 +++++++++---
15
1 file changed, 14 insertions(+), 42 deletions(-)
20
target/arm/tcg/mte_helper.c | 56 +++++++++++++++++++++++++++++++------
21
2 files changed, 62 insertions(+), 12 deletions(-)
16
22
17
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
23
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate-a64.c
25
--- a/target/arm/cpu.c
20
+++ b/target/arm/translate-a64.c
26
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
22
int size = 32 - clz32(immh) - 1;
28
ID_PFR1, VIRTUALIZATION, 0);
23
int immhb = immh << 3 | immb;
29
}
24
int shift = 2 * (8 << size) - immhb;
30
25
- bool accumulate = false;
31
+ if (cpu_isar_feature(aa64_mte, cpu)) {
26
- int dsize = is_q ? 128 : 64;
32
+ /*
27
- int esize = 8 << size;
33
+ * The architectural range of GM blocksize is 2-6, however qemu
28
- int elements = dsize/esize;
34
+ * doesn't support blocksize of 2 (see HELPER(ldgm)).
29
- MemOp memop = size | (is_u ? 0 : MO_SIGN);
35
+ */
30
- TCGv_i64 tcg_rn = new_tmp_a64(s);
36
+ if (tcg_enabled()) {
31
- TCGv_i64 tcg_rd = new_tmp_a64(s);
37
+ assert(cpu->gm_blocksize >= 3 && cpu->gm_blocksize <= 6);
32
- TCGv_i64 tcg_round;
38
+ }
33
- uint64_t round_const;
39
+
34
- int i;
40
#ifndef CONFIG_USER_ONLY
35
+ GVecGen2iFn *gvec_fn;
41
- if (cpu->tag_memory == NULL && cpu_isar_feature(aa64_mte, cpu)) {
36
42
/*
37
if (extract32(immh, 3, 1) && !is_q) {
43
* Disable the MTE feature bits if we do not have tag-memory
38
unallocated_encoding(s);
44
* provided by the machine.
39
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
45
*/
40
46
- cpu->isar.id_aa64pfr1 =
41
switch (opcode) {
47
- FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
42
case 0x02: /* SSRA / USRA (accumulate) */
48
- }
43
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
49
+ if (cpu->tag_memory == NULL) {
44
- is_u ? gen_gvec_usra : gen_gvec_ssra, size);
50
+ cpu->isar.id_aa64pfr1 =
45
- return;
51
+ FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
46
+ gvec_fn = is_u ? gen_gvec_usra : gen_gvec_ssra;
52
+ }
53
#endif
54
+ }
55
56
if (tcg_enabled()) {
57
/*
58
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/tcg/mte_helper.c
61
+++ b/target/arm/tcg/mte_helper.c
62
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
63
int gm_bs = env_archcpu(env)->gm_blocksize;
64
int gm_bs_bytes = 4 << gm_bs;
65
void *tag_mem;
66
+ uint64_t ret;
67
+ int shift;
68
69
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
70
71
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
72
73
/*
74
* The ordering of elements within the word corresponds to
75
- * a little-endian operation.
76
+ * a little-endian operation. Computation of shift comes from
77
+ *
78
+ * index = address<LOG2_TAG_GRANULE+3:LOG2_TAG_GRANULE>
79
+ * data<index*4+3:index*4> = tag
80
+ *
81
+ * Because of the alignment of ptr above, BS=6 has shift=0.
82
+ * All memory operations are aligned. Defer support for BS=2,
83
+ * requiring insertion or extraction of a nibble, until we
84
+ * support a cpu that requires it.
85
*/
86
switch (gm_bs) {
87
+ case 3:
88
+ /* 32 bytes -> 2 tags -> 8 result bits */
89
+ ret = *(uint8_t *)tag_mem;
47
+ break;
90
+ break;
48
91
+ case 4:
49
case 0x08: /* SRI */
92
+ /* 64 bytes -> 4 tags -> 16 result bits */
50
- gen_gvec_fn2i(s, is_q, rd, rn, shift, gen_gvec_sri, size);
93
+ ret = cpu_to_le16(*(uint16_t *)tag_mem);
51
- return;
52
+ gvec_fn = gen_gvec_sri;
53
+ break;
94
+ break;
54
95
+ case 5:
55
case 0x00: /* SSHR / USHR */
96
+ /* 128 bytes -> 8 tags -> 32 result bits */
56
if (is_u) {
97
+ ret = cpu_to_le32(*(uint32_t *)tag_mem);
57
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
58
/* Shift count the same size as element size produces zero. */
59
tcg_gen_gvec_dup_imm(size, vec_full_reg_offset(s, rd),
60
is_q ? 16 : 8, vec_full_reg_size(s), 0);
61
- } else {
62
- gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shri, size);
63
+ return;
64
}
65
+ gvec_fn = tcg_gen_gvec_shri;
66
} else {
67
/* Shift count the same size as element size produces all sign. */
68
if (shift == 8 << size) {
69
shift -= 1;
70
}
71
- gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_sari, size);
72
+ gvec_fn = tcg_gen_gvec_sari;
73
}
74
- return;
75
+ break;
98
+ break;
76
99
case 6:
77
case 0x04: /* SRSHR / URSHR (rounding) */
100
/* 256 bytes -> 16 tags -> 64 result bits */
78
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
101
- return ldq_le_p(tag_mem);
79
- is_u ? gen_gvec_urshr : gen_gvec_srshr, size);
102
+ return cpu_to_le64(*(uint64_t *)tag_mem);
80
- return;
81
+ gvec_fn = is_u ? gen_gvec_urshr : gen_gvec_srshr;
82
+ break;
83
84
case 0x06: /* SRSRA / URSRA (accum + rounding) */
85
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
86
- is_u ? gen_gvec_ursra : gen_gvec_srsra, size);
87
- return;
88
+ gvec_fn = is_u ? gen_gvec_ursra : gen_gvec_srsra;
89
+ break;
90
91
default:
103
default:
104
- /* cpu configured with unsupported gm blocksize. */
105
+ /*
106
+ * CPU configured with unsupported/invalid gm blocksize.
107
+ * This is detected early in arm_cpu_realizefn.
108
+ */
92
g_assert_not_reached();
109
g_assert_not_reached();
93
}
110
}
94
111
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
95
- round_const = 1ULL << (shift - 1);
112
+ return ret << shift;
96
- tcg_round = tcg_const_i64(round_const);
97
-
98
- for (i = 0; i < elements; i++) {
99
- read_vec_element(s, tcg_rn, rn, i, memop);
100
- if (accumulate) {
101
- read_vec_element(s, tcg_rd, rd, i, memop);
102
- }
103
-
104
- handle_shri_with_rndacc(tcg_rd, tcg_rn, tcg_round,
105
- accumulate, is_u, size, shift);
106
-
107
- write_vec_element(s, tcg_rd, rd, i, size);
108
- }
109
- tcg_temp_free_i64(tcg_round);
110
-
111
- clear_vec_high(s, is_q, rd);
112
+ gen_gvec_fn2i(s, is_q, rd, rn, shift, gvec_fn, size);
113
}
113
}
114
114
115
/* SHL/SLI - Vector shift left */
115
void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
116
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
117
int gm_bs = env_archcpu(env)->gm_blocksize;
118
int gm_bs_bytes = 4 << gm_bs;
119
void *tag_mem;
120
+ int shift;
121
122
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
123
124
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
125
return;
126
}
127
128
- /*
129
- * The ordering of elements within the word corresponds to
130
- * a little-endian operation.
131
- */
132
+ /* See LDGM for comments on BS and on shift. */
133
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
134
+ val >>= shift;
135
switch (gm_bs) {
136
+ case 3:
137
+ /* 32 bytes -> 2 tags -> 8 result bits */
138
+ *(uint8_t *)tag_mem = val;
139
+ break;
140
+ case 4:
141
+ /* 64 bytes -> 4 tags -> 16 result bits */
142
+ *(uint16_t *)tag_mem = cpu_to_le16(val);
143
+ break;
144
+ case 5:
145
+ /* 128 bytes -> 8 tags -> 32 result bits */
146
+ *(uint32_t *)tag_mem = cpu_to_le32(val);
147
+ break;
148
case 6:
149
- stq_le_p(tag_mem, val);
150
+ /* 256 bytes -> 16 tags -> 64 result bits */
151
+ *(uint64_t *)tag_mem = cpu_to_le64(val);
152
break;
153
default:
154
/* cpu configured with unsupported gm blocksize. */
116
--
155
--
117
2.20.1
156
2.34.1
118
119
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Rather than perform the argument swap during code generation,
3
When the cpu support MTE, but the system does not, reduce cpu
4
perform it during decode. This means it doesn't have to be
4
support to user instructions at EL0 instead of completely
5
special cased later, and we can share code with aarch64 code
5
disabling MTE. If we encounter a cpu implementation which does
6
generation. Hopefully the decode comment addresses any confusion
6
something else, we can revisit this setting.
7
that might arise in between.
8
7
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20230811214031.171020-5-richard.henderson@linaro.org
11
Message-id: 20200513163245.17915-9-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
12
---
14
target/arm/neon-dp.decode | 17 +++++++++++++++--
13
target/arm/cpu.c | 7 ++++---
15
target/arm/translate-neon.inc.c | 3 +--
14
1 file changed, 4 insertions(+), 3 deletions(-)
16
2 files changed, 16 insertions(+), 4 deletions(-)
17
15
18
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
16
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/neon-dp.decode
18
--- a/target/arm/cpu.c
21
+++ b/target/arm/neon-dp.decode
19
+++ b/target/arm/cpu.c
22
@@ -XXX,XX +XXX,XX @@ VCGT_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 0 .... @3same
20
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
23
VCGE_S_3s 1111 001 0 0 . .. .... .... 0011 . . . 1 .... @3same
21
24
VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same
22
#ifndef CONFIG_USER_ONLY
25
23
/*
26
-VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same
24
- * Disable the MTE feature bits if we do not have tag-memory
27
-VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same
25
- * provided by the machine.
28
+# The _rev suffix indicates that Vn and Vm are reversed. This is
26
+ * If we do not have tag-memory provided by the machine,
29
+# the case for shifts. In the Arm ARM these insns are documented
27
+ * reduce MTE support to instructions enabled at EL0.
30
+# with the Vm and Vn fields in their usual places, but in the
28
+ * This matches Cortex-A710 BROADCASTMTE input being LOW.
31
+# assembly the operands are listed "backwards", ie in the order
29
*/
32
+# Dd, Dm, Dn where other insns use Dd, Dn, Dm. For QEMU we choose
30
if (cpu->tag_memory == NULL) {
33
+# to consider Vm and Vn as being in different fields in the insn,
31
cpu->isar.id_aa64pfr1 =
34
+# which allows us to avoid special-casing shifts in the trans_
32
- FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
35
+# function code. We would otherwise need to manually swap the operands
33
+ FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 1);
36
+# over to call Neon helper functions that are shared with AArch64,
34
}
37
+# which does not have this odd reversed-operand situation.
35
#endif
38
+@3same_rev .... ... . . . size:2 .... .... .... . q:1 . . .... \
36
}
39
+ &3same vn=%vm_dp vm=%vn_dp vd=%vd_dp
40
+
41
+VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same_rev
42
+VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
43
44
VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
45
VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-neon.inc.c
49
+++ b/target/arm/translate-neon.inc.c
50
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
51
uint32_t rn_ofs, uint32_t rm_ofs, \
52
uint32_t oprsz, uint32_t maxsz) \
53
{ \
54
- /* Note the operation is vshl vd,vm,vn */ \
55
- tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, \
56
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, \
57
oprsz, maxsz, &OPARRAY[vece]); \
58
} \
59
DO_3SAME(INSN, gen_##INSN##_3s)
60
--
37
--
61
2.20.1
38
2.34.1
62
63
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The functions eliminate duplication of the special cases for
3
Do not hard-code the constants for Neoverse V1.
4
this operation. They match up with the GVecGen2iFn typedef.
5
4
6
Add out-of-line helpers. We got away with only having inline
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
expanders because the Neon vector size is only 16 bytes, and
8
we know that the inline expansion will always succeed.
9
When we reuse this for SVE, tcg-gvec-op may decide to use an
10
out-of-line helper due to longer vector lengths.
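
For illustration, a minimal standalone sketch (8-bit lanes only, helper names invented here) of the per-element arithmetic the new SSRA/USRA helpers and expanders perform, including the shift-equal-to-element-size edge case handled in the expanders:

#include <stdint.h>

/* SSRA/USRA per element: shift the source right, accumulate into dest. */
static inline int8_t ssra8(int8_t d, int8_t n, int shift)    /* shift 1..8 */
{
    /* Signed: a shift of 8 is valid and yields all sign bits; the
     * expander models this by clamping the count to 7. */
    if (shift > 7) {
        shift = 7;
    }
    return d + (n >> shift);
}

static inline uint8_t usra8(uint8_t d, uint8_t n, int shift) /* shift 1..8 */
{
    /* Unsigned: a shift of 8 yields zero, so the accumulate is a no-op
     * (the expander then only has to clear the vector tail). */
    return shift > 7 ? d : d + (n >> shift);
}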
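
Similarly, for the Neoverse-V1 CCSIDR change, a standalone sketch (outside QEMU, names invented here) that reproduces the two constants the patch stops hard-coding, using the same layout described in the diff ([55:32] sets - 1, [23:3] associativity - 1, [2:0] log2(linesize) - 4):

#include <assert.h>
#include <stdint.h>

static uint64_t ccsidr64(unsigned assoc, unsigned linesize, unsigned cachesize)
{
    unsigned sets = cachesize / (assoc * linesize);
    unsigned lg_linesize = __builtin_ctz(linesize);  /* linesize is a power of 2 */

    return ((uint64_t)(sets - 1) << 32)
         | ((assoc - 1) << 3)
         | (lg_linesize - 4);
}

int main(void)
{
    /* L1: 64KB, 4-way, 64-byte lines -> 256 sets */
    assert(ccsidr64(4, 64, 64 * 1024) == 0x000000ff0000001aull);
    /* L2: 1MB, 8-way, 64-byte lines -> 2048 sets */
    assert(ccsidr64(8, 64, 1024 * 1024) == 0x000007ff0000003aull);
    return 0;
}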
11
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20230811214031.171020-6-richard.henderson@linaro.org
14
Message-id: 20200513163245.17915-2-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
9
---
17
target/arm/helper.h | 10 +++
10
target/arm/tcg/cpu64.c | 48 ++++++++++++++++++++++++++++--------------
18
target/arm/translate.h | 7 +-
11
1 file changed, 32 insertions(+), 16 deletions(-)
19
target/arm/translate-a64.c | 15 +---
20
target/arm/translate.c | 161 ++++++++++++++++++++++---------------
21
target/arm/vec_helper.c | 25 ++++++
22
5 files changed, 139 insertions(+), 79 deletions(-)
23
12
24
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
25
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.h
15
--- a/target/arm/tcg/cpu64.c
27
+++ b/target/arm/helper.h
16
+++ b/target/arm/tcg/cpu64.c
28
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_pmull_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
17
@@ -XXX,XX +XXX,XX @@
29
18
#include "qemu/module.h"
30
DEF_HELPER_FLAGS_4(neon_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
#include "qapi/visitor.h"
31
20
#include "hw/qdev-properties.h"
32
+DEF_HELPER_FLAGS_3(gvec_ssra_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
+#include "qemu/units.h"
33
+DEF_HELPER_FLAGS_3(gvec_ssra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
#include "internals.h"
34
+DEF_HELPER_FLAGS_3(gvec_ssra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
#include "cpregs.h"
35
+DEF_HELPER_FLAGS_3(gvec_ssra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
36
+
25
+static uint64_t make_ccsidr64(unsigned assoc, unsigned linesize,
37
+DEF_HELPER_FLAGS_3(gvec_usra_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+ unsigned cachesize)
38
+DEF_HELPER_FLAGS_3(gvec_usra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(gvec_usra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_3(gvec_usra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
+
42
#ifdef TARGET_AARCH64
43
#include "helper-a64.h"
44
#include "helper-sve.h"
45
diff --git a/target/arm/translate.h b/target/arm/translate.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.h
48
+++ b/target/arm/translate.h
49
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 mls_op[4];
50
extern const GVecGen3 cmtst_op[4];
51
extern const GVecGen3 sshl_op[4];
52
extern const GVecGen3 ushl_op[4];
53
-extern const GVecGen2i ssra_op[4];
54
-extern const GVecGen2i usra_op[4];
55
extern const GVecGen2i sri_op[4];
56
extern const GVecGen2i sli_op[4];
57
extern const GVecGen4 uqadd_op[4];
58
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
59
void gen_ushl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
60
void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
61
62
+void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
63
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
64
+void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
65
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
66
+
67
/*
68
* Forward to the isar_feature_* tests given a DisasContext pointer.
69
*/
70
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-a64.c
73
+++ b/target/arm/translate-a64.c
74
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
75
76
switch (opcode) {
77
case 0x02: /* SSRA / USRA (accumulate) */
78
- if (is_u) {
79
- /* Shift count same as element size produces zero to add. */
80
- if (shift == 8 << size) {
81
- goto done;
82
- }
83
- gen_gvec_op2i(s, is_q, rd, rn, shift, &usra_op[size]);
84
- } else {
85
- /* Shift count same as element size produces all sign to add. */
86
- if (shift == 8 << size) {
87
- shift -= 1;
88
- }
89
- gen_gvec_op2i(s, is_q, rd, rn, shift, &ssra_op[size]);
90
- }
91
+ gen_gvec_fn2i(s, is_q, rd, rn, shift,
92
+ is_u ? gen_gvec_usra : gen_gvec_ssra, size);
93
return;
94
case 0x08: /* SRI */
95
/* Shift count same as element size is valid but does nothing. */
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
101
tcg_gen_add_vec(vece, d, d, a);
102
}
103
104
-static const TCGOpcode vecop_list_ssra[] = {
105
- INDEX_op_sari_vec, INDEX_op_add_vec, 0
106
-};
107
+void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
108
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
109
+{
27
+{
110
+ static const TCGOpcode vecop_list[] = {
28
+ unsigned lg_linesize = ctz32(linesize);
111
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
29
+ unsigned sets;
112
+ };
113
+ static const GVecGen2i ops[4] = {
114
+ { .fni8 = gen_ssra8_i64,
115
+ .fniv = gen_ssra_vec,
116
+ .fno = gen_helper_gvec_ssra_b,
117
+ .load_dest = true,
118
+ .opt_opc = vecop_list,
119
+ .vece = MO_8 },
120
+ { .fni8 = gen_ssra16_i64,
121
+ .fniv = gen_ssra_vec,
122
+ .fno = gen_helper_gvec_ssra_h,
123
+ .load_dest = true,
124
+ .opt_opc = vecop_list,
125
+ .vece = MO_16 },
126
+ { .fni4 = gen_ssra32_i32,
127
+ .fniv = gen_ssra_vec,
128
+ .fno = gen_helper_gvec_ssra_s,
129
+ .load_dest = true,
130
+ .opt_opc = vecop_list,
131
+ .vece = MO_32 },
132
+ { .fni8 = gen_ssra64_i64,
133
+ .fniv = gen_ssra_vec,
134
+ .fno = gen_helper_gvec_ssra_b,
135
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
136
+ .opt_opc = vecop_list,
137
+ .load_dest = true,
138
+ .vece = MO_64 },
139
+ };
140
141
-const GVecGen2i ssra_op[4] = {
142
- { .fni8 = gen_ssra8_i64,
143
- .fniv = gen_ssra_vec,
144
- .load_dest = true,
145
- .opt_opc = vecop_list_ssra,
146
- .vece = MO_8 },
147
- { .fni8 = gen_ssra16_i64,
148
- .fniv = gen_ssra_vec,
149
- .load_dest = true,
150
- .opt_opc = vecop_list_ssra,
151
- .vece = MO_16 },
152
- { .fni4 = gen_ssra32_i32,
153
- .fniv = gen_ssra_vec,
154
- .load_dest = true,
155
- .opt_opc = vecop_list_ssra,
156
- .vece = MO_32 },
157
- { .fni8 = gen_ssra64_i64,
158
- .fniv = gen_ssra_vec,
159
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
160
- .opt_opc = vecop_list_ssra,
161
- .load_dest = true,
162
- .vece = MO_64 },
163
-};
164
+ /* tszimm encoding produces immediates in the range [1..esize]. */
165
+ tcg_debug_assert(shift > 0);
166
+ tcg_debug_assert(shift <= (8 << vece));
167
+
30
+
168
+ /*
31
+ /*
169
+ * Shifts larger than the element size are architecturally valid.
32
+ * The 64-bit CCSIDR_EL1 format is:
170
+ * Signed results in all sign bits.
33
+ * [55:32] number of sets - 1
34
+ * [23:3] associativity - 1
35
+ * [2:0] log2(linesize) - 4
36
+ * so 0 == 16 bytes, 1 == 32 bytes, 2 == 64 bytes, etc
171
+ */
37
+ */
172
+ shift = MIN(shift, (8 << vece) - 1);
38
+ assert(assoc != 0);
173
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
39
+ assert(is_power_of_2(linesize));
174
+}
40
+ assert(lg_linesize >= 4 && lg_linesize <= 7 + 4);
175
176
static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
177
{
178
@@ -XXX,XX +XXX,XX @@ static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
179
tcg_gen_add_vec(vece, d, d, a);
180
}
181
182
-static const TCGOpcode vecop_list_usra[] = {
183
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
184
-};
185
+void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
186
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
187
+{
188
+ static const TCGOpcode vecop_list[] = {
189
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
190
+ };
191
+ static const GVecGen2i ops[4] = {
192
+ { .fni8 = gen_usra8_i64,
193
+ .fniv = gen_usra_vec,
194
+ .fno = gen_helper_gvec_usra_b,
195
+ .load_dest = true,
196
+ .opt_opc = vecop_list,
197
+ .vece = MO_8, },
198
+ { .fni8 = gen_usra16_i64,
199
+ .fniv = gen_usra_vec,
200
+ .fno = gen_helper_gvec_usra_h,
201
+ .load_dest = true,
202
+ .opt_opc = vecop_list,
203
+ .vece = MO_16, },
204
+ { .fni4 = gen_usra32_i32,
205
+ .fniv = gen_usra_vec,
206
+ .fno = gen_helper_gvec_usra_s,
207
+ .load_dest = true,
208
+ .opt_opc = vecop_list,
209
+ .vece = MO_32, },
210
+ { .fni8 = gen_usra64_i64,
211
+ .fniv = gen_usra_vec,
212
+ .fno = gen_helper_gvec_usra_d,
213
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
214
+ .load_dest = true,
215
+ .opt_opc = vecop_list,
216
+ .vece = MO_64, },
217
+ };
218
219
-const GVecGen2i usra_op[4] = {
220
- { .fni8 = gen_usra8_i64,
221
- .fniv = gen_usra_vec,
222
- .load_dest = true,
223
- .opt_opc = vecop_list_usra,
224
- .vece = MO_8, },
225
- { .fni8 = gen_usra16_i64,
226
- .fniv = gen_usra_vec,
227
- .load_dest = true,
228
- .opt_opc = vecop_list_usra,
229
- .vece = MO_16, },
230
- { .fni4 = gen_usra32_i32,
231
- .fniv = gen_usra_vec,
232
- .load_dest = true,
233
- .opt_opc = vecop_list_usra,
234
- .vece = MO_32, },
235
- { .fni8 = gen_usra64_i64,
236
- .fniv = gen_usra_vec,
237
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
238
- .load_dest = true,
239
- .opt_opc = vecop_list_usra,
240
- .vece = MO_64, },
241
-};
242
+ /* tszimm encoding produces immediates in the range [1..esize]. */
243
+ tcg_debug_assert(shift > 0);
244
+ tcg_debug_assert(shift <= (8 << vece));
245
+
41
+
246
+ /*
42
+ /* sets * associativity * linesize == cachesize. */
247
+ * Shifts larger than the element size are architecturally valid.
43
+ sets = cachesize / (assoc * linesize);
248
+ * Unsigned results in all zeros as input to accumulate: nop.
44
+ assert(cachesize % (assoc * linesize) == 0);
249
+ */
250
+ if (shift < (8 << vece)) {
251
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
252
+ } else {
253
+ /* Nop, but we do need to clear the tail. */
254
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
255
+ }
256
+}
257
258
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
259
{
260
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
261
case 1: /* VSRA */
262
/* Right shift comes here negative. */
263
shift = -shift;
264
- /* Shifts larger than the element size are architecturally
265
- * valid. Unsigned results in all zeros; signed results
266
- * in all sign bits.
267
- */
268
- if (!u) {
269
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
270
- MIN(shift, (8 << size) - 1),
271
- &ssra_op[size]);
272
- } else if (shift >= 8 << size) {
273
- /* rd += 0 */
274
+ if (u) {
275
+ gen_gvec_usra(size, rd_ofs, rm_ofs, shift,
276
+ vec_size, vec_size);
277
} else {
278
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
279
- shift, &usra_op[size]);
280
+ gen_gvec_ssra(size, rd_ofs, rm_ofs, shift,
281
+ vec_size, vec_size);
282
}
283
return 0;
284
285
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
286
index XXXXXXX..XXXXXXX 100644
287
--- a/target/arm/vec_helper.c
288
+++ b/target/arm/vec_helper.c
289
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sqsub_d)(void *vd, void *vq, void *vn,
290
clear_tail(d, oprsz, simd_maxsz(desc));
291
}
292
293
+
45
+
294
+#define DO_SRA(NAME, TYPE) \
46
+ return ((uint64_t)(sets - 1) << 32)
295
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
47
+ | ((assoc - 1) << 3)
296
+{ \
48
+ | (lg_linesize - 4);
297
+ intptr_t i, oprsz = simd_oprsz(desc); \
298
+ int shift = simd_data(desc); \
299
+ TYPE *d = vd, *n = vn; \
300
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
301
+ d[i] += n[i] >> shift; \
302
+ } \
303
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
304
+}
49
+}
305
+
50
+
306
+DO_SRA(gvec_ssra_b, int8_t)
51
static void aarch64_a35_initfn(Object *obj)
307
+DO_SRA(gvec_ssra_h, int16_t)
52
{
308
+DO_SRA(gvec_ssra_s, int32_t)
53
ARMCPU *cpu = ARM_CPU(obj);
309
+DO_SRA(gvec_ssra_d, int64_t)
54
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_v1_initfn(Object *obj)
310
+
55
* The Neoverse-V1 r1p2 TRM lists 32-bit format CCSIDR_EL1 values,
311
+DO_SRA(gvec_usra_b, uint8_t)
56
* but also says it implements CCIDX, which means they should be
312
+DO_SRA(gvec_usra_h, uint16_t)
57
* 64-bit format. So we here use values which are based on the textual
313
+DO_SRA(gvec_usra_s, uint32_t)
58
- * information in chapter 2 of the TRM (and on the fact that
314
+DO_SRA(gvec_usra_d, uint64_t)
59
- * sets * associativity * linesize == cachesize).
315
+
60
- *
316
+#undef DO_SRA
61
- * The 64-bit CCSIDR_EL1 format is:
317
+
62
- * [55:32] number of sets - 1
318
/*
63
- * [23:3] associativity - 1
319
* Convert float16 to float32, raising no exceptions and
64
- * [2:0] log2(linesize) - 4
320
* preserving exceptional values, including SNaN.
65
- * so 0 == 16 bytes, 1 == 32 bytes, 2 == 64 bytes, etc
66
- *
67
- * L1: 4-way set associative 64-byte line size, total size 64K,
68
- * so sets is 256.
69
+ * information in chapter 2 of the TRM:
70
*
71
+ * L1: 4-way set associative 64-byte line size, total size 64K.
72
* L2: 8-way set associative, 64 byte line size, either 512K or 1MB.
73
- * We pick 1MB, so this has 2048 sets.
74
- *
75
* L3: No L3 (this matches the CLIDR_EL1 value).
76
*/
77
- cpu->ccsidr[0] = 0x000000ff0000001aull; /* 64KB L1 dcache */
78
- cpu->ccsidr[1] = 0x000000ff0000001aull; /* 64KB L1 icache */
79
- cpu->ccsidr[2] = 0x000007ff0000003aull; /* 1MB L2 cache */
80
+ cpu->ccsidr[0] = make_ccsidr64(4, 64, 64 * KiB); /* L1 dcache */
81
+ cpu->ccsidr[1] = cpu->ccsidr[0]; /* L1 icache */
82
+ cpu->ccsidr[2] = make_ccsidr64(8, 64, 1 * MiB); /* L2 cache */
83
84
/* From 3.2.115 SCTLR_EL3 */
85
cpu->reset_sctlr = 0x30c50838;
321
--
86
--
322
2.20.1
87
2.34.1
323
324
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Create vectorized versions of handle_shri_with_rndacc
4
for shift+round and shift+round+accumulate. Add out-of-line
5
helpers in preparation for longer vector lengths from SVE.
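
As an aside, a standalone sketch (32-bit lanes only, name invented here) of the rounding-shift arithmetic these helpers use: shift by one less than the requested amount and add back the low bit, which is equivalent to adding 1 << (sh - 1) before shifting but cannot overflow:

#include <stdint.h>

/* Rounding shift right for one 32-bit lane, mirroring the DO_RSHR idiom. */
static inline int32_t srshr32(int32_t n, int sh)    /* sh in 1..32 */
{
    int64_t tmp = (int64_t)n >> (sh - 1);   /* keeps the rounding bit */
    return (int32_t)((tmp >> 1) + (tmp & 1));
}

/* Examples: srshr32(7, 2) == 2 and srshr32(-7, 2) == -2 (i.e. +/-7/4,
 * rounded); srshr32(x, 32) == 0 for any x, which is the "shift equal to
 * element size always yields zero" special case in the expander.
 */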
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200513163245.17915-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 20 ++
13
target/arm/translate.h | 9 +
14
target/arm/translate-a64.c | 11 +-
15
target/arm/translate.c | 463 +++++++++++++++++++++++++++++++++++--
16
target/arm/vec_helper.c | 50 ++++
17
5 files changed, 527 insertions(+), 26 deletions(-)
18
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
22
+++ b/target/arm/helper.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(gvec_usra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_3(gvec_usra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_3(gvec_usra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
27
+DEF_HELPER_FLAGS_3(gvec_srshr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(gvec_srshr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(gvec_srshr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_3(gvec_srshr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_3(gvec_urshr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_3(gvec_urshr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_3(gvec_urshr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_3(gvec_urshr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_3(gvec_srsra_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(gvec_srsra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(gvec_srsra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_3(gvec_srsra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_3(gvec_ursra_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_3(gvec_ursra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_3(gvec_ursra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_3(gvec_ursra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
46
+
47
#ifdef TARGET_AARCH64
48
#include "helper-a64.h"
49
#include "helper-sve.h"
50
diff --git a/target/arm/translate.h b/target/arm/translate.h
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate.h
53
+++ b/target/arm/translate.h
54
@@ -XXX,XX +XXX,XX @@ void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
55
void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
56
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
57
58
+void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
59
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
60
+void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
61
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
62
+void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
63
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
64
+void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
65
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
66
+
67
/*
68
* Forward to the isar_feature_* tests given a DisasContext pointer.
69
*/
70
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-a64.c
73
+++ b/target/arm/translate-a64.c
74
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
75
return;
76
77
case 0x04: /* SRSHR / URSHR (rounding) */
78
- break;
79
+ gen_gvec_fn2i(s, is_q, rd, rn, shift,
80
+ is_u ? gen_gvec_urshr : gen_gvec_srshr, size);
81
+ return;
82
+
83
case 0x06: /* SRSRA / URSRA (accum + rounding) */
84
- accumulate = true;
85
- break;
86
+ gen_gvec_fn2i(s, is_q, rd, rn, shift,
87
+ is_u ? gen_gvec_ursra : gen_gvec_srsra, size);
88
+ return;
89
+
90
default:
91
g_assert_not_reached();
92
}
93
diff --git a/target/arm/translate.c b/target/arm/translate.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/translate.c
96
+++ b/target/arm/translate.c
97
@@ -XXX,XX +XXX,XX @@ void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
98
}
99
}
100
101
+/*
102
+ * Shift one less than the requested amount, and the low bit is
103
+ * the rounding bit. For the 8 and 16-bit operations, because we
104
+ * mask the low bit, we can perform a normal integer shift instead
105
+ * of a vector shift.
106
+ */
107
+static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
108
+{
109
+ TCGv_i64 t = tcg_temp_new_i64();
110
+
111
+ tcg_gen_shri_i64(t, a, sh - 1);
112
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
113
+ tcg_gen_vec_sar8i_i64(d, a, sh);
114
+ tcg_gen_vec_add8_i64(d, d, t);
115
+ tcg_temp_free_i64(t);
116
+}
117
+
118
+static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
119
+{
120
+ TCGv_i64 t = tcg_temp_new_i64();
121
+
122
+ tcg_gen_shri_i64(t, a, sh - 1);
123
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
124
+ tcg_gen_vec_sar16i_i64(d, a, sh);
125
+ tcg_gen_vec_add16_i64(d, d, t);
126
+ tcg_temp_free_i64(t);
127
+}
128
+
129
+static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
130
+{
131
+ TCGv_i32 t = tcg_temp_new_i32();
132
+
133
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
134
+ tcg_gen_sari_i32(d, a, sh);
135
+ tcg_gen_add_i32(d, d, t);
136
+ tcg_temp_free_i32(t);
137
+}
138
+
139
+static void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
140
+{
141
+ TCGv_i64 t = tcg_temp_new_i64();
142
+
143
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
144
+ tcg_gen_sari_i64(d, a, sh);
145
+ tcg_gen_add_i64(d, d, t);
146
+ tcg_temp_free_i64(t);
147
+}
148
+
149
+static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
150
+{
151
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
152
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
153
+
154
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
155
+ tcg_gen_dupi_vec(vece, ones, 1);
156
+ tcg_gen_and_vec(vece, t, t, ones);
157
+ tcg_gen_sari_vec(vece, d, a, sh);
158
+ tcg_gen_add_vec(vece, d, d, t);
159
+
160
+ tcg_temp_free_vec(t);
161
+ tcg_temp_free_vec(ones);
162
+}
163
+
164
+void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
165
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
166
+{
167
+ static const TCGOpcode vecop_list[] = {
168
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
169
+ };
170
+ static const GVecGen2i ops[4] = {
171
+ { .fni8 = gen_srshr8_i64,
172
+ .fniv = gen_srshr_vec,
173
+ .fno = gen_helper_gvec_srshr_b,
174
+ .opt_opc = vecop_list,
175
+ .vece = MO_8 },
176
+ { .fni8 = gen_srshr16_i64,
177
+ .fniv = gen_srshr_vec,
178
+ .fno = gen_helper_gvec_srshr_h,
179
+ .opt_opc = vecop_list,
180
+ .vece = MO_16 },
181
+ { .fni4 = gen_srshr32_i32,
182
+ .fniv = gen_srshr_vec,
183
+ .fno = gen_helper_gvec_srshr_s,
184
+ .opt_opc = vecop_list,
185
+ .vece = MO_32 },
186
+ { .fni8 = gen_srshr64_i64,
187
+ .fniv = gen_srshr_vec,
188
+ .fno = gen_helper_gvec_srshr_d,
189
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
190
+ .opt_opc = vecop_list,
191
+ .vece = MO_64 },
192
+ };
193
+
194
+ /* tszimm encoding produces immediates in the range [1..esize] */
195
+ tcg_debug_assert(shift > 0);
196
+ tcg_debug_assert(shift <= (8 << vece));
197
+
198
+ if (shift == (8 << vece)) {
199
+ /*
200
+ * Shifts larger than the element size are architecturally valid.
201
+ * Signed results in all sign bits. With rounding, this produces
202
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
203
+ * I.e. always zero.
204
+ */
205
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
206
+ } else {
207
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
208
+ }
209
+}
210
+
211
+static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
212
+{
213
+ TCGv_i64 t = tcg_temp_new_i64();
214
+
215
+ gen_srshr8_i64(t, a, sh);
216
+ tcg_gen_vec_add8_i64(d, d, t);
217
+ tcg_temp_free_i64(t);
218
+}
219
+
220
+static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
221
+{
222
+ TCGv_i64 t = tcg_temp_new_i64();
223
+
224
+ gen_srshr16_i64(t, a, sh);
225
+ tcg_gen_vec_add16_i64(d, d, t);
226
+ tcg_temp_free_i64(t);
227
+}
228
+
229
+static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
230
+{
231
+ TCGv_i32 t = tcg_temp_new_i32();
232
+
233
+ gen_srshr32_i32(t, a, sh);
234
+ tcg_gen_add_i32(d, d, t);
235
+ tcg_temp_free_i32(t);
236
+}
237
+
238
+static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
239
+{
240
+ TCGv_i64 t = tcg_temp_new_i64();
241
+
242
+ gen_srshr64_i64(t, a, sh);
243
+ tcg_gen_add_i64(d, d, t);
244
+ tcg_temp_free_i64(t);
245
+}
246
+
247
+static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
248
+{
249
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
250
+
251
+ gen_srshr_vec(vece, t, a, sh);
252
+ tcg_gen_add_vec(vece, d, d, t);
253
+ tcg_temp_free_vec(t);
254
+}
255
+
256
+void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
257
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
258
+{
259
+ static const TCGOpcode vecop_list[] = {
260
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
261
+ };
262
+ static const GVecGen2i ops[4] = {
263
+ { .fni8 = gen_srsra8_i64,
264
+ .fniv = gen_srsra_vec,
265
+ .fno = gen_helper_gvec_srsra_b,
266
+ .opt_opc = vecop_list,
267
+ .load_dest = true,
268
+ .vece = MO_8 },
269
+ { .fni8 = gen_srsra16_i64,
270
+ .fniv = gen_srsra_vec,
271
+ .fno = gen_helper_gvec_srsra_h,
272
+ .opt_opc = vecop_list,
273
+ .load_dest = true,
274
+ .vece = MO_16 },
275
+ { .fni4 = gen_srsra32_i32,
276
+ .fniv = gen_srsra_vec,
277
+ .fno = gen_helper_gvec_srsra_s,
278
+ .opt_opc = vecop_list,
279
+ .load_dest = true,
280
+ .vece = MO_32 },
281
+ { .fni8 = gen_srsra64_i64,
282
+ .fniv = gen_srsra_vec,
283
+ .fno = gen_helper_gvec_srsra_d,
284
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
285
+ .opt_opc = vecop_list,
286
+ .load_dest = true,
287
+ .vece = MO_64 },
288
+ };
289
+
290
+ /* tszimm encoding produces immediates in the range [1..esize] */
291
+ tcg_debug_assert(shift > 0);
292
+ tcg_debug_assert(shift <= (8 << vece));
293
+
294
+ /*
295
+ * Shifts larger than the element size are architecturally valid.
296
+ * Signed results in all sign bits. With rounding, this produces
297
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
298
+ * I.e. always zero. With accumulation, this leaves D unchanged.
299
+ */
300
+ if (shift == (8 << vece)) {
301
+ /* Nop, but we do need to clear the tail. */
302
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
303
+ } else {
304
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
305
+ }
306
+}
307
+
308
+static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
309
+{
310
+ TCGv_i64 t = tcg_temp_new_i64();
311
+
312
+ tcg_gen_shri_i64(t, a, sh - 1);
313
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
314
+ tcg_gen_vec_shr8i_i64(d, a, sh);
315
+ tcg_gen_vec_add8_i64(d, d, t);
316
+ tcg_temp_free_i64(t);
317
+}
318
+
319
+static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
320
+{
321
+ TCGv_i64 t = tcg_temp_new_i64();
322
+
323
+ tcg_gen_shri_i64(t, a, sh - 1);
324
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
325
+ tcg_gen_vec_shr16i_i64(d, a, sh);
326
+ tcg_gen_vec_add16_i64(d, d, t);
327
+ tcg_temp_free_i64(t);
328
+}
329
+
330
+static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
331
+{
332
+ TCGv_i32 t = tcg_temp_new_i32();
333
+
334
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
335
+ tcg_gen_shri_i32(d, a, sh);
336
+ tcg_gen_add_i32(d, d, t);
337
+ tcg_temp_free_i32(t);
338
+}
339
+
340
+static void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
341
+{
342
+ TCGv_i64 t = tcg_temp_new_i64();
343
+
344
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
345
+ tcg_gen_shri_i64(d, a, sh);
346
+ tcg_gen_add_i64(d, d, t);
347
+ tcg_temp_free_i64(t);
348
+}
349
+
350
+static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
351
+{
352
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
353
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
354
+
355
+ tcg_gen_shri_vec(vece, t, a, shift - 1);
356
+ tcg_gen_dupi_vec(vece, ones, 1);
357
+ tcg_gen_and_vec(vece, t, t, ones);
358
+ tcg_gen_shri_vec(vece, d, a, shift);
359
+ tcg_gen_add_vec(vece, d, d, t);
360
+
361
+ tcg_temp_free_vec(t);
362
+ tcg_temp_free_vec(ones);
363
+}
364
+
365
+void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
366
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
367
+{
368
+ static const TCGOpcode vecop_list[] = {
369
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
370
+ };
371
+ static const GVecGen2i ops[4] = {
372
+ { .fni8 = gen_urshr8_i64,
373
+ .fniv = gen_urshr_vec,
374
+ .fno = gen_helper_gvec_urshr_b,
375
+ .opt_opc = vecop_list,
376
+ .vece = MO_8 },
377
+ { .fni8 = gen_urshr16_i64,
378
+ .fniv = gen_urshr_vec,
379
+ .fno = gen_helper_gvec_urshr_h,
380
+ .opt_opc = vecop_list,
381
+ .vece = MO_16 },
382
+ { .fni4 = gen_urshr32_i32,
383
+ .fniv = gen_urshr_vec,
384
+ .fno = gen_helper_gvec_urshr_s,
385
+ .opt_opc = vecop_list,
386
+ .vece = MO_32 },
387
+ { .fni8 = gen_urshr64_i64,
388
+ .fniv = gen_urshr_vec,
389
+ .fno = gen_helper_gvec_urshr_d,
390
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
391
+ .opt_opc = vecop_list,
392
+ .vece = MO_64 },
393
+ };
394
+
395
+ /* tszimm encoding produces immediates in the range [1..esize] */
396
+ tcg_debug_assert(shift > 0);
397
+ tcg_debug_assert(shift <= (8 << vece));
398
+
399
+ if (shift == (8 << vece)) {
400
+ /*
401
+ * Shifts larger than the element size are architecturally valid.
402
+ * Unsigned results in zero. With rounding, this produces a
403
+ * copy of the most significant bit.
404
+ */
405
+ tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
406
+ } else {
407
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
408
+ }
409
+}
410
+
411
+static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
412
+{
413
+ TCGv_i64 t = tcg_temp_new_i64();
414
+
415
+ if (sh == 8) {
416
+ tcg_gen_vec_shr8i_i64(t, a, 7);
417
+ } else {
418
+ gen_urshr8_i64(t, a, sh);
419
+ }
420
+ tcg_gen_vec_add8_i64(d, d, t);
421
+ tcg_temp_free_i64(t);
422
+}
423
+
424
+static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
425
+{
426
+ TCGv_i64 t = tcg_temp_new_i64();
427
+
428
+ if (sh == 16) {
429
+ tcg_gen_vec_shr16i_i64(t, a, 15);
430
+ } else {
431
+ gen_urshr16_i64(t, a, sh);
432
+ }
433
+ tcg_gen_vec_add16_i64(d, d, t);
434
+ tcg_temp_free_i64(t);
435
+}
436
+
437
+static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
438
+{
439
+ TCGv_i32 t = tcg_temp_new_i32();
440
+
441
+ if (sh == 32) {
442
+ tcg_gen_shri_i32(t, a, 31);
443
+ } else {
444
+ gen_urshr32_i32(t, a, sh);
445
+ }
446
+ tcg_gen_add_i32(d, d, t);
447
+ tcg_temp_free_i32(t);
448
+}
449
+
450
+static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
451
+{
452
+ TCGv_i64 t = tcg_temp_new_i64();
453
+
454
+ if (sh == 64) {
455
+ tcg_gen_shri_i64(t, a, 63);
456
+ } else {
457
+ gen_urshr64_i64(t, a, sh);
458
+ }
459
+ tcg_gen_add_i64(d, d, t);
460
+ tcg_temp_free_i64(t);
461
+}
462
+
463
+static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
464
+{
465
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
466
+
467
+ if (sh == (8 << vece)) {
468
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
469
+ } else {
470
+ gen_urshr_vec(vece, t, a, sh);
471
+ }
472
+ tcg_gen_add_vec(vece, d, d, t);
473
+ tcg_temp_free_vec(t);
474
+}
475
+
476
+void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
477
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
478
+{
479
+ static const TCGOpcode vecop_list[] = {
480
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
481
+ };
482
+ static const GVecGen2i ops[4] = {
483
+ { .fni8 = gen_ursra8_i64,
484
+ .fniv = gen_ursra_vec,
485
+ .fno = gen_helper_gvec_ursra_b,
486
+ .opt_opc = vecop_list,
487
+ .load_dest = true,
488
+ .vece = MO_8 },
489
+ { .fni8 = gen_ursra16_i64,
490
+ .fniv = gen_ursra_vec,
491
+ .fno = gen_helper_gvec_ursra_h,
492
+ .opt_opc = vecop_list,
493
+ .load_dest = true,
494
+ .vece = MO_16 },
495
+ { .fni4 = gen_ursra32_i32,
496
+ .fniv = gen_ursra_vec,
497
+ .fno = gen_helper_gvec_ursra_s,
498
+ .opt_opc = vecop_list,
499
+ .load_dest = true,
500
+ .vece = MO_32 },
501
+ { .fni8 = gen_ursra64_i64,
502
+ .fniv = gen_ursra_vec,
503
+ .fno = gen_helper_gvec_ursra_d,
504
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
505
+ .opt_opc = vecop_list,
506
+ .load_dest = true,
507
+ .vece = MO_64 },
508
+ };
509
+
510
+ /* tszimm encoding produces immediates in the range [1..esize] */
511
+ tcg_debug_assert(shift > 0);
512
+ tcg_debug_assert(shift <= (8 << vece));
513
+
514
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
515
+}
516
+
517
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
518
{
519
uint64_t mask = dup_const(MO_8, 0xff >> shift);
520
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
521
}
522
return 0;
523
524
+ case 2: /* VRSHR */
525
+ /* Right shift comes here negative. */
526
+ shift = -shift;
527
+ if (u) {
528
+ gen_gvec_urshr(size, rd_ofs, rm_ofs, shift,
529
+ vec_size, vec_size);
530
+ } else {
531
+ gen_gvec_srshr(size, rd_ofs, rm_ofs, shift,
532
+ vec_size, vec_size);
533
+ }
534
+ return 0;
535
+
536
+ case 3: /* VRSRA */
537
+ /* Right shift comes here negative. */
538
+ shift = -shift;
539
+ if (u) {
540
+ gen_gvec_ursra(size, rd_ofs, rm_ofs, shift,
541
+ vec_size, vec_size);
542
+ } else {
543
+ gen_gvec_srsra(size, rd_ofs, rm_ofs, shift,
544
+ vec_size, vec_size);
545
+ }
546
+ return 0;
547
+
548
case 4: /* VSRI */
549
if (!u) {
550
return 1;
551
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
552
neon_load_reg64(cpu_V0, rm + pass);
553
tcg_gen_movi_i64(cpu_V1, imm);
554
switch (op) {
555
- case 2: /* VRSHR */
556
- case 3: /* VRSRA */
557
- if (u)
558
- gen_helper_neon_rshl_u64(cpu_V0, cpu_V0, cpu_V1);
559
- else
560
- gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
561
- break;
562
case 6: /* VQSHLU */
563
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
564
cpu_V0, cpu_V1);
565
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
566
default:
567
g_assert_not_reached();
568
}
569
- if (op == 3) {
570
- /* Accumulate. */
571
- neon_load_reg64(cpu_V1, rd + pass);
572
- tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
573
- }
574
neon_store_reg64(cpu_V0, rd + pass);
575
} else { /* size < 3 */
576
/* Operands in T0 and T1. */
577
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
578
tmp2 = tcg_temp_new_i32();
579
tcg_gen_movi_i32(tmp2, imm);
580
switch (op) {
581
- case 2: /* VRSHR */
582
- case 3: /* VRSRA */
583
- GEN_NEON_INTEGER_OP(rshl);
584
- break;
585
case 6: /* VQSHLU */
586
switch (size) {
587
case 0:
588
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
589
g_assert_not_reached();
590
}
591
tcg_temp_free_i32(tmp2);
592
-
593
- if (op == 3) {
594
- /* Accumulate. */
595
- tmp2 = neon_load_reg(rd, pass);
596
- gen_neon_add(size, tmp, tmp2);
597
- tcg_temp_free_i32(tmp2);
598
- }
599
neon_store_reg(rd, pass, tmp);
600
}
601
} /* for pass */
602
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
603
index XXXXXXX..XXXXXXX 100644
604
--- a/target/arm/vec_helper.c
605
+++ b/target/arm/vec_helper.c
606
@@ -XXX,XX +XXX,XX @@ DO_SRA(gvec_usra_d, uint64_t)
607
608
#undef DO_SRA
609
610
+#define DO_RSHR(NAME, TYPE) \
611
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
612
+{ \
613
+ intptr_t i, oprsz = simd_oprsz(desc); \
614
+ int shift = simd_data(desc); \
615
+ TYPE *d = vd, *n = vn; \
616
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
617
+ TYPE tmp = n[i] >> (shift - 1); \
618
+ d[i] = (tmp >> 1) + (tmp & 1); \
619
+ } \
620
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
621
+}
622
+
623
+DO_RSHR(gvec_srshr_b, int8_t)
624
+DO_RSHR(gvec_srshr_h, int16_t)
625
+DO_RSHR(gvec_srshr_s, int32_t)
626
+DO_RSHR(gvec_srshr_d, int64_t)
627
+
628
+DO_RSHR(gvec_urshr_b, uint8_t)
629
+DO_RSHR(gvec_urshr_h, uint16_t)
630
+DO_RSHR(gvec_urshr_s, uint32_t)
631
+DO_RSHR(gvec_urshr_d, uint64_t)
632
+
633
+#undef DO_RSHR
634
+
635
+#define DO_RSRA(NAME, TYPE) \
636
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
637
+{ \
638
+ intptr_t i, oprsz = simd_oprsz(desc); \
639
+ int shift = simd_data(desc); \
640
+ TYPE *d = vd, *n = vn; \
641
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
642
+ TYPE tmp = n[i] >> (shift - 1); \
643
+ d[i] += (tmp >> 1) + (tmp & 1); \
644
+ } \
645
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
646
+}
647
+
648
+DO_RSRA(gvec_srsra_b, int8_t)
649
+DO_RSRA(gvec_srsra_h, int16_t)
650
+DO_RSRA(gvec_srsra_s, int32_t)
651
+DO_RSRA(gvec_srsra_d, int64_t)
652
+
653
+DO_RSRA(gvec_ursra_b, uint8_t)
654
+DO_RSRA(gvec_ursra_h, uint16_t)
655
+DO_RSRA(gvec_ursra_s, uint32_t)
656
+DO_RSRA(gvec_ursra_d, uint64_t)
657
+
658
+#undef DO_RSRA
659
+
660
/*
661
* Convert float16 to float32, raising no exceptions and
662
* preserving exceptional values, including SNaN.
663
--
664
2.20.1
665
666
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The functions eliminate duplication of the special cases for
4
this operation. They match up with the GVecGen2iFn typedef.
5
6
Add out-of-line helpers. We got away with only having inline
7
expanders because the Neon vector size is only 16 bytes, and
8
we know that the inline expansion will always succeed.
9
When we reuse this for SVE, tcg-gvec-op may decide to use an
10
out-of-line helper due to longer vector lengths.
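
For illustration, a standalone 8-bit sketch (names invented here) of the SRI/SLI insert-shift semantics the new helpers implement with deposit64(): SRI keeps the top 'shift' bits of the destination, SLI keeps its low 'shift' bits:

#include <stdint.h>

static inline uint8_t sri8(uint8_t d, uint8_t n, int shift)  /* shift 1..8 */
{
    if (shift == 8) {
        return d;                        /* nothing shifted in */
    }
    uint8_t mask = 0xffu >> shift;       /* low (8 - shift) bits */
    return (d & ~mask) | ((n >> shift) & mask);
}

static inline uint8_t sli8(uint8_t d, uint8_t n, int shift)  /* shift 0..7 */
{
    uint8_t mask = 0xffu << shift;       /* high (8 - shift) bits */
    return (d & ~mask) | ((uint8_t)(n << shift) & mask);
}

/* e.g. sri8(0xf0, 0xab, 4) == 0xfa and sli8(0x0f, 0xab, 4) == 0xbf */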
11
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20200513163245.17915-4-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
target/arm/helper.h | 10 ++
18
target/arm/translate.h | 7 +-
19
target/arm/translate-a64.c | 20 +---
20
target/arm/translate.c | 186 +++++++++++++++++++++----------------
21
target/arm/vec_helper.c | 38 ++++++++
22
5 files changed, 160 insertions(+), 101 deletions(-)
23
24
diff --git a/target/arm/helper.h b/target/arm/helper.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.h
27
+++ b/target/arm/helper.h
28
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(gvec_ursra_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_3(gvec_ursra_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_3(gvec_ursra_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
32
+DEF_HELPER_FLAGS_3(gvec_sri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_3(gvec_sri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_3(gvec_sri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_3(gvec_sri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_3(gvec_sli_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(gvec_sli_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(gvec_sli_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_3(gvec_sli_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
+
42
#ifdef TARGET_AARCH64
43
#include "helper-a64.h"
44
#include "helper-sve.h"
45
diff --git a/target/arm/translate.h b/target/arm/translate.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.h
48
+++ b/target/arm/translate.h
49
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 mls_op[4];
50
extern const GVecGen3 cmtst_op[4];
51
extern const GVecGen3 sshl_op[4];
52
extern const GVecGen3 ushl_op[4];
53
-extern const GVecGen2i sri_op[4];
54
-extern const GVecGen2i sli_op[4];
55
extern const GVecGen4 uqadd_op[4];
56
extern const GVecGen4 sqadd_op[4];
57
extern const GVecGen4 uqsub_op[4];
58
@@ -XXX,XX +XXX,XX @@ void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
59
void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
60
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
61
62
+void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
63
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
64
+void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
65
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz);
66
+
67
/*
68
* Forward to the isar_feature_* tests given a DisasContext pointer.
69
*/
70
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-a64.c
73
+++ b/target/arm/translate-a64.c
74
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op2(DisasContext *s, bool is_q, int rd,
75
is_q ? 16 : 8, vec_full_reg_size(s), gvec_op);
76
}
77
78
-/* Expand a 2-operand + immediate AdvSIMD vector operation using
79
- * an op descriptor.
80
- */
81
-static void gen_gvec_op2i(DisasContext *s, bool is_q, int rd,
82
- int rn, int64_t imm, const GVecGen2i *gvec_op)
83
-{
84
- tcg_gen_gvec_2i(vec_full_reg_offset(s, rd), vec_full_reg_offset(s, rn),
85
- is_q ? 16 : 8, vec_full_reg_size(s), imm, gvec_op);
86
-}
87
-
88
/* Expand a 3-operand AdvSIMD vector operation using an op descriptor. */
89
static void gen_gvec_op3(DisasContext *s, bool is_q, int rd,
90
int rn, int rm, const GVecGen3 *gvec_op)
91
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
92
gen_gvec_fn2i(s, is_q, rd, rn, shift,
93
is_u ? gen_gvec_usra : gen_gvec_ssra, size);
94
return;
95
+
96
case 0x08: /* SRI */
97
- /* Shift count same as element size is valid but does nothing. */
98
- if (shift == 8 << size) {
99
- goto done;
100
- }
101
- gen_gvec_op2i(s, is_q, rd, rn, shift, &sri_op[size]);
102
+ gen_gvec_fn2i(s, is_q, rd, rn, shift, gen_gvec_sri, size);
103
return;
104
105
case 0x00: /* SSHR / USHR */
106
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
107
}
108
tcg_temp_free_i64(tcg_round);
109
110
- done:
111
clear_vec_high(s, is_q, rd);
112
}
113
114
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
115
}
116
117
if (insert) {
118
- gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
119
+ gen_gvec_fn2i(s, is_q, rd, rn, shift, gen_gvec_sli, size);
120
} else {
121
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
122
}
123
diff --git a/target/arm/translate.c b/target/arm/translate.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/translate.c
126
+++ b/target/arm/translate.c
127
@@ -XXX,XX +XXX,XX @@ static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
128
129
static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
130
{
131
- if (sh == 0) {
132
- tcg_gen_mov_vec(d, a);
133
- } else {
134
- TCGv_vec t = tcg_temp_new_vec_matching(d);
135
- TCGv_vec m = tcg_temp_new_vec_matching(d);
136
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
137
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
138
139
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
140
- tcg_gen_shri_vec(vece, t, a, sh);
141
- tcg_gen_and_vec(vece, d, d, m);
142
- tcg_gen_or_vec(vece, d, d, t);
143
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
144
+ tcg_gen_shri_vec(vece, t, a, sh);
145
+ tcg_gen_and_vec(vece, d, d, m);
146
+ tcg_gen_or_vec(vece, d, d, t);
147
148
- tcg_temp_free_vec(t);
149
- tcg_temp_free_vec(m);
150
- }
151
+ tcg_temp_free_vec(t);
152
+ tcg_temp_free_vec(m);
153
}
154
155
-static const TCGOpcode vecop_list_sri[] = { INDEX_op_shri_vec, 0 };
156
+void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
157
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
158
+{
159
+ static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
160
+ const GVecGen2i ops[4] = {
161
+ { .fni8 = gen_shr8_ins_i64,
162
+ .fniv = gen_shr_ins_vec,
163
+ .fno = gen_helper_gvec_sri_b,
164
+ .load_dest = true,
165
+ .opt_opc = vecop_list,
166
+ .vece = MO_8 },
167
+ { .fni8 = gen_shr16_ins_i64,
168
+ .fniv = gen_shr_ins_vec,
169
+ .fno = gen_helper_gvec_sri_h,
170
+ .load_dest = true,
171
+ .opt_opc = vecop_list,
172
+ .vece = MO_16 },
173
+ { .fni4 = gen_shr32_ins_i32,
174
+ .fniv = gen_shr_ins_vec,
175
+ .fno = gen_helper_gvec_sri_s,
176
+ .load_dest = true,
177
+ .opt_opc = vecop_list,
178
+ .vece = MO_32 },
179
+ { .fni8 = gen_shr64_ins_i64,
180
+ .fniv = gen_shr_ins_vec,
181
+ .fno = gen_helper_gvec_sri_d,
182
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
183
+ .load_dest = true,
184
+ .opt_opc = vecop_list,
185
+ .vece = MO_64 },
186
+ };
187
188
-const GVecGen2i sri_op[4] = {
189
- { .fni8 = gen_shr8_ins_i64,
190
- .fniv = gen_shr_ins_vec,
191
- .load_dest = true,
192
- .opt_opc = vecop_list_sri,
193
- .vece = MO_8 },
194
- { .fni8 = gen_shr16_ins_i64,
195
- .fniv = gen_shr_ins_vec,
196
- .load_dest = true,
197
- .opt_opc = vecop_list_sri,
198
- .vece = MO_16 },
199
- { .fni4 = gen_shr32_ins_i32,
200
- .fniv = gen_shr_ins_vec,
201
- .load_dest = true,
202
- .opt_opc = vecop_list_sri,
203
- .vece = MO_32 },
204
- { .fni8 = gen_shr64_ins_i64,
205
- .fniv = gen_shr_ins_vec,
206
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
207
- .load_dest = true,
208
- .opt_opc = vecop_list_sri,
209
- .vece = MO_64 },
210
-};
211
+ /* tszimm encoding produces immediates in the range [1..esize]. */
212
+ tcg_debug_assert(shift > 0);
213
+ tcg_debug_assert(shift <= (8 << vece));
214
+
215
+ /* Shift of esize leaves destination unchanged. */
216
+ if (shift < (8 << vece)) {
217
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
218
+ } else {
219
+ /* Nop, but we do need to clear the tail. */
220
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
221
+ }
222
+}
223
224
static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
225
{
226
@@ -XXX,XX +XXX,XX @@ static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
227
228
static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
229
{
230
- if (sh == 0) {
231
- tcg_gen_mov_vec(d, a);
232
- } else {
233
- TCGv_vec t = tcg_temp_new_vec_matching(d);
234
- TCGv_vec m = tcg_temp_new_vec_matching(d);
235
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
236
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
237
238
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
239
- tcg_gen_shli_vec(vece, t, a, sh);
240
- tcg_gen_and_vec(vece, d, d, m);
241
- tcg_gen_or_vec(vece, d, d, t);
242
+ tcg_gen_shli_vec(vece, t, a, sh);
243
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
244
+ tcg_gen_and_vec(vece, d, d, m);
245
+ tcg_gen_or_vec(vece, d, d, t);
246
247
- tcg_temp_free_vec(t);
248
- tcg_temp_free_vec(m);
249
- }
250
+ tcg_temp_free_vec(t);
251
+ tcg_temp_free_vec(m);
252
}
253
254
-static const TCGOpcode vecop_list_sli[] = { INDEX_op_shli_vec, 0 };
255
+void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
256
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
257
+{
258
+ static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
259
+ const GVecGen2i ops[4] = {
260
+ { .fni8 = gen_shl8_ins_i64,
261
+ .fniv = gen_shl_ins_vec,
262
+ .fno = gen_helper_gvec_sli_b,
263
+ .load_dest = true,
264
+ .opt_opc = vecop_list,
265
+ .vece = MO_8 },
266
+ { .fni8 = gen_shl16_ins_i64,
267
+ .fniv = gen_shl_ins_vec,
268
+ .fno = gen_helper_gvec_sli_h,
269
+ .load_dest = true,
270
+ .opt_opc = vecop_list,
271
+ .vece = MO_16 },
272
+ { .fni4 = gen_shl32_ins_i32,
273
+ .fniv = gen_shl_ins_vec,
274
+ .fno = gen_helper_gvec_sli_s,
275
+ .load_dest = true,
276
+ .opt_opc = vecop_list,
277
+ .vece = MO_32 },
278
+ { .fni8 = gen_shl64_ins_i64,
279
+ .fniv = gen_shl_ins_vec,
280
+ .fno = gen_helper_gvec_sli_d,
281
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
282
+ .load_dest = true,
283
+ .opt_opc = vecop_list,
284
+ .vece = MO_64 },
285
+ };
286
287
-const GVecGen2i sli_op[4] = {
288
- { .fni8 = gen_shl8_ins_i64,
289
- .fniv = gen_shl_ins_vec,
290
- .load_dest = true,
291
- .opt_opc = vecop_list_sli,
292
- .vece = MO_8 },
293
- { .fni8 = gen_shl16_ins_i64,
294
- .fniv = gen_shl_ins_vec,
295
- .load_dest = true,
296
- .opt_opc = vecop_list_sli,
297
- .vece = MO_16 },
298
- { .fni4 = gen_shl32_ins_i32,
299
- .fniv = gen_shl_ins_vec,
300
- .load_dest = true,
301
- .opt_opc = vecop_list_sli,
302
- .vece = MO_32 },
303
- { .fni8 = gen_shl64_ins_i64,
304
- .fniv = gen_shl_ins_vec,
305
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
306
- .load_dest = true,
307
- .opt_opc = vecop_list_sli,
308
- .vece = MO_64 },
309
-};
310
+ /* tszimm encoding produces immediates in the range [0..esize-1]. */
311
+ tcg_debug_assert(shift >= 0);
312
+ tcg_debug_assert(shift < (8 << vece));
313
+
314
+ if (shift == 0) {
315
+ tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
316
+ } else {
317
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
318
+ }
319
+}
320
321
static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
322
{
323
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
324
}
325
/* Right shift comes here negative. */
326
shift = -shift;
327
- /* Shift out of range leaves destination unchanged. */
328
- if (shift < 8 << size) {
329
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
330
- shift, &sri_op[size]);
331
- }
332
+ gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
333
+ vec_size, vec_size);
334
return 0;
335
336
case 5: /* VSHL, VSLI */
337
if (u) { /* VSLI */
338
- /* Shift out of range leaves destination unchanged. */
339
- if (shift < 8 << size) {
340
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
341
- vec_size, shift, &sli_op[size]);
342
- }
343
+ gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
344
+ vec_size, vec_size);
345
} else { /* VSHL */
346
/* Shifts larger than the element size are
347
* architecturally valid and results in zero.
348
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
349
index XXXXXXX..XXXXXXX 100644
350
--- a/target/arm/vec_helper.c
351
+++ b/target/arm/vec_helper.c
352
@@ -XXX,XX +XXX,XX @@ DO_RSRA(gvec_ursra_d, uint64_t)
353
354
#undef DO_RSRA
355
356
+#define DO_SRI(NAME, TYPE) \
357
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
358
+{ \
359
+ intptr_t i, oprsz = simd_oprsz(desc); \
360
+ int shift = simd_data(desc); \
361
+ TYPE *d = vd, *n = vn; \
362
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
363
+ d[i] = deposit64(d[i], 0, sizeof(TYPE) * 8 - shift, n[i] >> shift); \
364
+ } \
365
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
366
+}
367
+
368
+DO_SRI(gvec_sri_b, uint8_t)
369
+DO_SRI(gvec_sri_h, uint16_t)
370
+DO_SRI(gvec_sri_s, uint32_t)
371
+DO_SRI(gvec_sri_d, uint64_t)
372
+
373
+#undef DO_SRI
374
+
375
+#define DO_SLI(NAME, TYPE) \
376
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
377
+{ \
378
+ intptr_t i, oprsz = simd_oprsz(desc); \
379
+ int shift = simd_data(desc); \
380
+ TYPE *d = vd, *n = vn; \
381
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
382
+ d[i] = deposit64(d[i], shift, sizeof(TYPE) * 8 - shift, n[i]); \
383
+ } \
384
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
385
+}
386
+
387
+DO_SLI(gvec_sli_b, uint8_t)
388
+DO_SLI(gvec_sli_h, uint16_t)
389
+DO_SLI(gvec_sli_s, uint32_t)
390
+DO_SLI(gvec_sli_d, uint64_t)
391
+
392
+#undef DO_SLI
393
+
394
/*
395
* Convert float16 to float32, raising no exceptions and
396
* preserving exceptional values, including SNaN.
397
--
398
2.20.1
399
400
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide a functional interface for the vector expansion.
3
Access to many of the special registers is enabled or disabled
4
This fits better with the existing set of helpers that
4
by ACTLR_EL[23], which we implement as constant 0; as a result,
5
we provide for other operations.
5
all writes from below EL3 should trap.
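
To make the rule concrete, a small standalone sketch (plain flags instead of CPUARMState, names invented here) of the write-trap decision the new access_actlr_w() accessfn makes; reads are never trapped:

#include <stdbool.h>

typedef enum { ACCESS_OK, TRAP_TO_EL2, TRAP_TO_EL3 } AccessResult;

static AccessResult actlr_write_access(int current_el,
                                       bool el2_enabled, bool have_el3)
{
    /* ACTLR_EL2 is constant 0, so writes below EL2 trap to EL2 ... */
    if (current_el < 2 && el2_enabled) {
        return TRAP_TO_EL2;
    }
    /* ... and ACTLR_EL3 is constant 0, so writes below EL3 trap to EL3. */
    if (current_el < 3 && have_el3) {
        return TRAP_TO_EL3;
    }
    return ACCESS_OK;
}

/* e.g. an EL1 write with EL2 enabled traps to EL2; an EL2 write with
 * EL3 implemented traps to EL3; an EL3 write is allowed.
 */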
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230811214031.171020-7-richard.henderson@linaro.org
9
Message-id: 20200513163245.17915-13-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/translate.h | 5 ++++
12
target/arm/cpregs.h | 2 ++
13
target/arm/translate-a64.c | 34 ++----------------------
13
target/arm/helper.c | 4 ++--
14
target/arm/translate.c | 54 +++++++++++++++++++-------------------
14
target/arm/tcg/cpu64.c | 46 +++++++++++++++++++++++++++++++++---------
15
3 files changed, 34 insertions(+), 59 deletions(-)
15
3 files changed, 41 insertions(+), 11 deletions(-)
16
16
17
diff --git a/target/arm/translate.h b/target/arm/translate.h
17
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate.h
19
--- a/target/arm/cpregs.h
20
+++ b/target/arm/translate.h
20
+++ b/target/arm/cpregs.h
21
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
21
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
22
void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
22
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
23
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
23
#endif
24
24
25
+void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
25
+CPAccessResult access_tvm_trvm(CPUARMState *, const ARMCPRegInfo *, bool);
26
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
27
+void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
28
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
29
+
26
+
30
/*
27
#endif /* TARGET_ARM_CPREGS_H */
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
*/
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-a64.c
30
--- a/target/arm/helper.c
36
+++ b/target/arm/translate-a64.c
31
+++ b/target/arm/helper.c
37
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
32
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
38
is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
39
}
33
}
40
34
41
-/* Expand a 3-operand + env pointer operation using
35
/* Check for traps from EL1 due to HCR_EL2.TVM and HCR_EL2.TRVM. */
42
- * an out-of-line helper.
36
-static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
43
- */
37
- bool isread)
44
-static void gen_gvec_op3_env(DisasContext *s, bool is_q, int rd,
38
+CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
45
- int rn, int rm, gen_helper_gvec_3_ptr *fn)
39
+ bool isread)
46
-{
40
{
47
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
41
if (arm_current_el(env) == 1) {
48
- vec_full_reg_offset(s, rn),
42
uint64_t trap = isread ? HCR_TRVM : HCR_TVM;
49
- vec_full_reg_offset(s, rm), cpu_env,
43
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
50
- is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
51
-}
52
-
53
/* Expand a 3-operand + fpstatus pointer + simd data value operation using
54
* an out-of-line helper.
55
*/
56
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
57
58
switch (opcode) {
59
case 0x0: /* SQRDMLAH (vector) */
60
- switch (size) {
61
- case 1:
62
- gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s16);
63
- break;
64
- case 2:
65
- gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s32);
66
- break;
67
- default:
68
- g_assert_not_reached();
69
- }
70
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqrdmlah_qc, size);
71
return;
72
73
case 0x1: /* SQRDMLSH (vector) */
74
- switch (size) {
75
- case 1:
76
- gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s16);
77
- break;
78
- case 2:
79
- gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s32);
80
- break;
81
- default:
82
- g_assert_not_reached();
83
- }
84
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqrdmlsh_qc, size);
85
return;
86
87
case 0x2: /* SDOT / UDOT */
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
89
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate.c
45
--- a/target/arm/tcg/cpu64.c
91
+++ b/target/arm/translate.c
46
+++ b/target/arm/tcg/cpu64.c
92
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
93
[NEON_2RM_VCVT_UF] = 0x4,
48
/* TODO: Add A64FX specific HPC extension registers */
94
};
49
}
95
50
96
-
51
+static CPAccessResult access_actlr_w(CPUARMState *env, const ARMCPRegInfo *r,
97
-/* Expand v8.1 simd helper. */
52
+ bool read)
98
-static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
53
+{
99
- int q, int rd, int rn, int rm)
54
+ if (!read) {
100
+void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
+ int el = arm_current_el(env);
101
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
56
+
102
{
57
+ /* Because ACTLR_EL2 is constant 0, writes below EL2 trap to EL2. */
103
- if (dc_isar_feature(aa32_rdm, s)) {
58
+ if (el < 2 && arm_is_el2_enabled(env)) {
104
- int opr_sz = (1 + q) * 8;
59
+ return CP_ACCESS_TRAP_EL2;
105
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
60
+ }
106
- vfp_reg_offset(1, rn),
61
+ /* Because ACTLR_EL3 is constant 0, writes below EL3 trap to EL3. */
107
- vfp_reg_offset(1, rm), cpu_env,
62
+ if (el < 3 && arm_feature(env, ARM_FEATURE_EL3)) {
108
- opr_sz, opr_sz, 0, fn);
63
+ return CP_ACCESS_TRAP_EL3;
109
- return 0;
64
+ }
110
- }
65
+ }
111
- return 1;
66
+ return CP_ACCESS_OK;
112
+ static gen_helper_gvec_3_ptr * const fns[2] = {
113
+ gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
114
+ };
115
+ tcg_debug_assert(vece >= 1 && vece <= 2);
116
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, cpu_env,
117
+ opr_sz, max_sz, 0, fns[vece - 1]);
118
+}
67
+}
119
+
68
+
120
+void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
69
static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
121
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
70
{ .name = "ATCR_EL1", .state = ARM_CP_STATE_AA64,
122
+{
71
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 7, .opc2 = 0,
123
+ static gen_helper_gvec_3_ptr * const fns[2] = {
72
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
124
+ gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
73
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
125
+ };
74
+ /* Traps and enables are the same as for TCR_EL1. */
126
+ tcg_debug_assert(vece >= 1 && vece <= 2);
75
+ .accessfn = access_tvm_trvm, .fgt = FGT_TCR_EL1, },
127
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, cpu_env,
76
{ .name = "ATCR_EL2", .state = ARM_CP_STATE_AA64,
128
+ opr_sz, max_sz, 0, fns[vece - 1]);
77
.opc0 = 3, .opc1 = 4, .crn = 15, .crm = 7, .opc2 = 0,
129
}
78
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
130
79
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
131
#define GEN_CMP0(NAME, COND) \
80
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
132
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
81
{ .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
133
break; /* VPADD */
82
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 0,
134
}
83
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
135
/* VQRDMLAH */
84
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
136
- switch (size) {
85
+ .accessfn = access_actlr_w },
137
- case 1:
86
{ .name = "CPUACTLR2_EL1", .state = ARM_CP_STATE_AA64,
138
- return do_v81_helper(s, gen_helper_gvec_qrdmlah_s16,
87
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 1,
139
- q, rd, rn, rm);
88
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
140
- case 2:
89
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
141
- return do_v81_helper(s, gen_helper_gvec_qrdmlah_s32,
90
+ .accessfn = access_actlr_w },
142
- q, rd, rn, rm);
91
{ .name = "CPUACTLR3_EL1", .state = ARM_CP_STATE_AA64,
143
+ if (dc_isar_feature(aa32_rdm, s) && (size == 1 || size == 2)) {
92
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 2,
144
+ gen_gvec_sqrdmlah_qc(size, rd_ofs, rn_ofs, rm_ofs,
93
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
145
+ vec_size, vec_size);
94
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
146
+ return 0;
95
+ .accessfn = access_actlr_w },
147
}
96
/*
148
return 1;
97
* Report CPUCFR_EL1.SCU as 1, as we do not implement the DSU
149
98
* (and in particular its system registers).
150
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
99
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
151
break;
100
.access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 4 },
152
}
101
{ .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
153
/* VQRDMLSH */
102
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 4,
154
- switch (size) {
103
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0x961563010 },
155
- case 1:
104
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0x961563010,
156
- return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s16,
105
+ .accessfn = access_actlr_w },
157
- q, rd, rn, rm);
106
{ .name = "CPUPCR_EL3", .state = ARM_CP_STATE_AA64,
158
- case 2:
107
.opc0 = 3, .opc1 = 6, .crn = 15, .crm = 8, .opc2 = 1,
159
- return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s32,
108
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
160
- q, rd, rn, rm);
109
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
161
+ if (dc_isar_feature(aa32_rdm, s) && (size == 1 || size == 2)) {
110
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
162
+ gen_gvec_sqrdmlsh_qc(size, rd_ofs, rn_ofs, rm_ofs,
111
{ .name = "CPUPWRCTLR_EL1", .state = ARM_CP_STATE_AA64,
163
+ vec_size, vec_size);
112
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 7,
164
+ return 0;
113
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
165
}
114
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
166
return 1;
115
+ .accessfn = access_actlr_w },
167
116
{ .name = "ERXPFGCDN_EL1", .state = ARM_CP_STATE_AA64,
117
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 2,
118
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
119
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
120
+ .accessfn = access_actlr_w },
121
{ .name = "ERXPFGCTL_EL1", .state = ARM_CP_STATE_AA64,
122
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 1,
123
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
124
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
125
+ .accessfn = access_actlr_w },
126
{ .name = "ERXPFGF_EL1", .state = ARM_CP_STATE_AA64,
127
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 0,
128
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
129
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
130
+ .accessfn = access_actlr_w },
131
};
132
133
static void define_neoverse_n1_cp_reginfo(ARMCPU *cpu)
168
--
134
--
169
2.20.1
135
2.34.1
170
171
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide a functional interface for the vector expansion.
3
There is only one additional EL1 register modeled, which
4
This fits better with the existing set of helpers that
4
also needs to use access_actlr_w.
5
we provide for other operations.
6
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230811214031.171020-8-richard.henderson@linaro.org
9
Message-id: 20200513163245.17915-8-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/translate.h | 7 +-
11
target/arm/tcg/cpu64.c | 3 ++-
13
target/arm/translate-a64.c | 4 +-
12
1 file changed, 2 insertions(+), 1 deletion(-)
14
target/arm/translate-neon.inc.c | 16 +----
15
target/arm/translate.c | 117 +++++++++++++++++---------------
16
4 files changed, 71 insertions(+), 73 deletions(-)
17
13
18
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.h
16
--- a/target/arm/tcg/cpu64.c
21
+++ b/target/arm/translate.h
17
+++ b/target/arm/tcg/cpu64.c
22
@@ -XXX,XX +XXX,XX @@ void gen_gvec_cle0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
18
@@ -XXX,XX +XXX,XX @@ static void define_neoverse_n1_cp_reginfo(ARMCPU *cpu)
23
void gen_gvec_cge0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
19
static const ARMCPRegInfo neoverse_v1_cp_reginfo[] = {
24
uint32_t opr_sz, uint32_t max_sz);
20
{ .name = "CPUECTLR2_EL1", .state = ARM_CP_STATE_AA64,
25
21
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 5,
26
-extern const GVecGen3 mla_op[4];
22
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
27
-extern const GVecGen3 mls_op[4];
23
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
28
+void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
24
+ .accessfn = access_actlr_w },
29
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
25
{ .name = "CPUPPMCR_EL3", .state = ARM_CP_STATE_AA64,
30
+void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
26
.opc0 = 3, .opc1 = 6, .crn = 15, .crm = 2, .opc2 = 0,
31
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
27
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
32
+
33
extern const GVecGen3 cmtst_op[4];
34
extern const GVecGen3 sshl_op[4];
35
extern const GVecGen3 ushl_op[4];
36
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-a64.c
39
+++ b/target/arm/translate-a64.c
40
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
41
return;
42
case 0x12: /* MLA, MLS */
43
if (u) {
44
- gen_gvec_op3(s, is_q, rd, rn, rm, &mls_op[size]);
45
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mls, size);
46
} else {
47
- gen_gvec_op3(s, is_q, rd, rn, rm, &mla_op[size]);
48
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mla, size);
49
}
50
return;
51
case 0x11:
52
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/translate-neon.inc.c
55
+++ b/target/arm/translate-neon.inc.c
56
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VMAX_U, tcg_gen_gvec_umax)
57
DO_3SAME_NO_SZ_3(VMIN_S, tcg_gen_gvec_smin)
58
DO_3SAME_NO_SZ_3(VMIN_U, tcg_gen_gvec_umin)
59
DO_3SAME_NO_SZ_3(VMUL, tcg_gen_gvec_mul)
60
+DO_3SAME_NO_SZ_3(VMLA, gen_gvec_mla)
61
+DO_3SAME_NO_SZ_3(VMLS, gen_gvec_mls)
62
63
#define DO_3SAME_CMP(INSN, COND) \
64
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
65
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
66
return do_3same(s, a, gen_VMUL_p_3s);
67
}
68
69
-#define DO_3SAME_GVEC3_NO_SZ_3(INSN, OPARRAY) \
70
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
71
- uint32_t rn_ofs, uint32_t rm_ofs, \
72
- uint32_t oprsz, uint32_t maxsz) \
73
- { \
74
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, \
75
- oprsz, maxsz, &OPARRAY[vece]); \
76
- } \
77
- DO_3SAME_NO_SZ_3(INSN, gen_##INSN##_3s)
78
-
79
-
80
-DO_3SAME_GVEC3_NO_SZ_3(VMLA, mla_op)
81
-DO_3SAME_GVEC3_NO_SZ_3(VMLS, mls_op)
82
-
83
#define DO_3SAME_GVEC3_SHIFT(INSN, OPARRAY) \
84
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
85
uint32_t rn_ofs, uint32_t rm_ofs, \
86
diff --git a/target/arm/translate.c b/target/arm/translate.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/arm/translate.c
89
+++ b/target/arm/translate.c
90
@@ -XXX,XX +XXX,XX @@ static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
92
* these tables are shared with AArch64 which does support them.
93
*/
94
+void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
95
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
96
+{
97
+ static const TCGOpcode vecop_list[] = {
98
+ INDEX_op_mul_vec, INDEX_op_add_vec, 0
99
+ };
100
+ static const GVecGen3 ops[4] = {
101
+ { .fni4 = gen_mla8_i32,
102
+ .fniv = gen_mla_vec,
103
+ .load_dest = true,
104
+ .opt_opc = vecop_list,
105
+ .vece = MO_8 },
106
+ { .fni4 = gen_mla16_i32,
107
+ .fniv = gen_mla_vec,
108
+ .load_dest = true,
109
+ .opt_opc = vecop_list,
110
+ .vece = MO_16 },
111
+ { .fni4 = gen_mla32_i32,
112
+ .fniv = gen_mla_vec,
113
+ .load_dest = true,
114
+ .opt_opc = vecop_list,
115
+ .vece = MO_32 },
116
+ { .fni8 = gen_mla64_i64,
117
+ .fniv = gen_mla_vec,
118
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
119
+ .load_dest = true,
120
+ .opt_opc = vecop_list,
121
+ .vece = MO_64 },
122
+ };
123
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
124
+}
125
126
-static const TCGOpcode vecop_list_mla[] = {
127
- INDEX_op_mul_vec, INDEX_op_add_vec, 0
128
-};
129
-
130
-static const TCGOpcode vecop_list_mls[] = {
131
- INDEX_op_mul_vec, INDEX_op_sub_vec, 0
132
-};
133
-
134
-const GVecGen3 mla_op[4] = {
135
- { .fni4 = gen_mla8_i32,
136
- .fniv = gen_mla_vec,
137
- .load_dest = true,
138
- .opt_opc = vecop_list_mla,
139
- .vece = MO_8 },
140
- { .fni4 = gen_mla16_i32,
141
- .fniv = gen_mla_vec,
142
- .load_dest = true,
143
- .opt_opc = vecop_list_mla,
144
- .vece = MO_16 },
145
- { .fni4 = gen_mla32_i32,
146
- .fniv = gen_mla_vec,
147
- .load_dest = true,
148
- .opt_opc = vecop_list_mla,
149
- .vece = MO_32 },
150
- { .fni8 = gen_mla64_i64,
151
- .fniv = gen_mla_vec,
152
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
153
- .load_dest = true,
154
- .opt_opc = vecop_list_mla,
155
- .vece = MO_64 },
156
-};
157
-
158
-const GVecGen3 mls_op[4] = {
159
- { .fni4 = gen_mls8_i32,
160
- .fniv = gen_mls_vec,
161
- .load_dest = true,
162
- .opt_opc = vecop_list_mls,
163
- .vece = MO_8 },
164
- { .fni4 = gen_mls16_i32,
165
- .fniv = gen_mls_vec,
166
- .load_dest = true,
167
- .opt_opc = vecop_list_mls,
168
- .vece = MO_16 },
169
- { .fni4 = gen_mls32_i32,
170
- .fniv = gen_mls_vec,
171
- .load_dest = true,
172
- .opt_opc = vecop_list_mls,
173
- .vece = MO_32 },
174
- { .fni8 = gen_mls64_i64,
175
- .fniv = gen_mls_vec,
176
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
177
- .load_dest = true,
178
- .opt_opc = vecop_list_mls,
179
- .vece = MO_64 },
180
-};
181
+void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
182
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
183
+{
184
+ static const TCGOpcode vecop_list[] = {
185
+ INDEX_op_mul_vec, INDEX_op_sub_vec, 0
186
+ };
187
+ static const GVecGen3 ops[4] = {
188
+ { .fni4 = gen_mls8_i32,
189
+ .fniv = gen_mls_vec,
190
+ .load_dest = true,
191
+ .opt_opc = vecop_list,
192
+ .vece = MO_8 },
193
+ { .fni4 = gen_mls16_i32,
194
+ .fniv = gen_mls_vec,
195
+ .load_dest = true,
196
+ .opt_opc = vecop_list,
197
+ .vece = MO_16 },
198
+ { .fni4 = gen_mls32_i32,
199
+ .fniv = gen_mls_vec,
200
+ .load_dest = true,
201
+ .opt_opc = vecop_list,
202
+ .vece = MO_32 },
203
+ { .fni8 = gen_mls64_i64,
204
+ .fniv = gen_mls_vec,
205
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
206
+ .load_dest = true,
207
+ .opt_opc = vecop_list,
208
+ .vece = MO_64 },
209
+ };
210
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
211
+}
212
213
/* CMTST : test is "if (X & Y != 0)". */
214
static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
215
--
28
--
216
2.20.1
29
2.34.1
217
218
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide a functional interface for the vector expansion.
3
Like FEAT_TRF (Self-hosted Trace Extension), suppress tracing
4
This fits better with the existing set of helpers that
4
external to the cpu, which is out of scope for QEMU.
5
we provide for other operations.
6
5
7
Macro-ize the 5 nearly identical comparisons.
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230811214031.171020-10-richard.henderson@linaro.org
11
Message-id: 20200513163245.17915-7-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
target/arm/translate.h | 16 ++-
11
target/arm/cpu.c | 3 +++
15
target/arm/translate-a64.c | 22 ++--
12
1 file changed, 3 insertions(+)
16
target/arm/translate.c | 254 ++++++++-----------------------------
17
3 files changed, 74 insertions(+), 218 deletions(-)
18
13
19
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.h
16
--- a/target/arm/cpu.c
22
+++ b/target/arm/translate.h
17
+++ b/target/arm/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
24
uint64_t vfp_expand_imm(int size, uint8_t imm8);
19
/* FEAT_SPE (Statistical Profiling Extension) */
25
20
cpu->isar.id_aa64dfr0 =
26
/* Vector operations shared between ARM and AArch64. */
21
FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMSVER, 0);
27
-extern const GVecGen2 ceq0_op[4];
22
+ /* FEAT_TRBE (Trace Buffer Extension) */
28
-extern const GVecGen2 clt0_op[4];
23
+ cpu->isar.id_aa64dfr0 =
29
-extern const GVecGen2 cgt0_op[4];
24
+ FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, TRACEBUFFER, 0);
30
-extern const GVecGen2 cle0_op[4];
25
/* FEAT_TRF (Self-hosted Trace Extension) */
31
-extern const GVecGen2 cge0_op[4];
26
cpu->isar.id_aa64dfr0 =
32
+void gen_gvec_ceq0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
27
FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, TRACEFILT, 0);
33
+ uint32_t opr_sz, uint32_t max_sz);
34
+void gen_gvec_clt0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
35
+ uint32_t opr_sz, uint32_t max_sz);
36
+void gen_gvec_cgt0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
37
+ uint32_t opr_sz, uint32_t max_sz);
38
+void gen_gvec_cle0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
39
+ uint32_t opr_sz, uint32_t max_sz);
40
+void gen_gvec_cge0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
41
+ uint32_t opr_sz, uint32_t max_sz);
42
+
43
extern const GVecGen3 mla_op[4];
44
extern const GVecGen3 mls_op[4];
45
extern const GVecGen3 cmtst_op[4];
46
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-a64.c
49
+++ b/target/arm/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn4(DisasContext *s, bool is_q, int rd, int rn, int rm,
51
is_q ? 16 : 8, vec_full_reg_size(s));
52
}
53
54
-/* Expand a 2-operand AdvSIMD vector operation using an op descriptor. */
55
-static void gen_gvec_op2(DisasContext *s, bool is_q, int rd,
56
- int rn, const GVecGen2 *gvec_op)
57
-{
58
- tcg_gen_gvec_2(vec_full_reg_offset(s, rd), vec_full_reg_offset(s, rn),
59
- is_q ? 16 : 8, vec_full_reg_size(s), gvec_op);
60
-}
61
-
62
/* Expand a 3-operand AdvSIMD vector operation using an op descriptor. */
63
static void gen_gvec_op3(DisasContext *s, bool is_q, int rd,
64
int rn, int rm, const GVecGen3 *gvec_op)
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
66
}
67
break;
68
case 0x8: /* CMGT, CMGE */
69
- gen_gvec_op2(s, is_q, rd, rn, u ? &cge0_op[size] : &cgt0_op[size]);
70
+ if (u) {
71
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
72
+ } else {
73
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cgt0, size);
74
+ }
75
return;
76
case 0x9: /* CMEQ, CMLE */
77
- gen_gvec_op2(s, is_q, rd, rn, u ? &cle0_op[size] : &ceq0_op[size]);
78
+ if (u) {
79
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cle0, size);
80
+ } else {
81
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_ceq0, size);
82
+ }
83
return;
84
case 0xa: /* CMLT */
85
- gen_gvec_op2(s, is_q, rd, rn, &clt0_op[size]);
86
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
87
return;
88
case 0xb:
89
if (u) { /* ABS, NEG */
90
diff --git a/target/arm/translate.c b/target/arm/translate.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/target/arm/translate.c
93
+++ b/target/arm/translate.c
94
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
95
return 1;
96
}
97
98
-static void gen_ceq0_i32(TCGv_i32 d, TCGv_i32 a)
99
-{
100
- tcg_gen_setcondi_i32(TCG_COND_EQ, d, a, 0);
101
- tcg_gen_neg_i32(d, d);
102
-}
103
-
104
-static void gen_ceq0_i64(TCGv_i64 d, TCGv_i64 a)
105
-{
106
- tcg_gen_setcondi_i64(TCG_COND_EQ, d, a, 0);
107
- tcg_gen_neg_i64(d, d);
108
-}
109
-
110
-static void gen_ceq0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
111
-{
112
- TCGv_vec zero = tcg_const_zeros_vec_matching(d);
113
- tcg_gen_cmp_vec(TCG_COND_EQ, vece, d, a, zero);
114
- tcg_temp_free_vec(zero);
115
-}
116
+#define GEN_CMP0(NAME, COND) \
117
+ static void gen_##NAME##0_i32(TCGv_i32 d, TCGv_i32 a) \
118
+ { \
119
+ tcg_gen_setcondi_i32(COND, d, a, 0); \
120
+ tcg_gen_neg_i32(d, d); \
121
+ } \
122
+ static void gen_##NAME##0_i64(TCGv_i64 d, TCGv_i64 a) \
123
+ { \
124
+ tcg_gen_setcondi_i64(COND, d, a, 0); \
125
+ tcg_gen_neg_i64(d, d); \
126
+ } \
127
+ static void gen_##NAME##0_vec(unsigned vece, TCGv_vec d, TCGv_vec a) \
128
+ { \
129
+ TCGv_vec zero = tcg_const_zeros_vec_matching(d); \
130
+ tcg_gen_cmp_vec(COND, vece, d, a, zero); \
131
+ tcg_temp_free_vec(zero); \
132
+ } \
133
+ void gen_gvec_##NAME##0(unsigned vece, uint32_t d, uint32_t m, \
134
+ uint32_t opr_sz, uint32_t max_sz) \
135
+ { \
136
+ const GVecGen2 op[4] = { \
137
+ { .fno = gen_helper_gvec_##NAME##0_b, \
138
+ .fniv = gen_##NAME##0_vec, \
139
+ .opt_opc = vecop_list_cmp, \
140
+ .vece = MO_8 }, \
141
+ { .fno = gen_helper_gvec_##NAME##0_h, \
142
+ .fniv = gen_##NAME##0_vec, \
143
+ .opt_opc = vecop_list_cmp, \
144
+ .vece = MO_16 }, \
145
+ { .fni4 = gen_##NAME##0_i32, \
146
+ .fniv = gen_##NAME##0_vec, \
147
+ .opt_opc = vecop_list_cmp, \
148
+ .vece = MO_32 }, \
149
+ { .fni8 = gen_##NAME##0_i64, \
150
+ .fniv = gen_##NAME##0_vec, \
151
+ .opt_opc = vecop_list_cmp, \
152
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64, \
153
+ .vece = MO_64 }, \
154
+ }; \
155
+ tcg_gen_gvec_2(d, m, opr_sz, max_sz, &op[vece]); \
156
+ }
157
158
static const TCGOpcode vecop_list_cmp[] = {
159
INDEX_op_cmp_vec, 0
160
};
161
162
-const GVecGen2 ceq0_op[4] = {
163
- { .fno = gen_helper_gvec_ceq0_b,
164
- .fniv = gen_ceq0_vec,
165
- .opt_opc = vecop_list_cmp,
166
- .vece = MO_8 },
167
- { .fno = gen_helper_gvec_ceq0_h,
168
- .fniv = gen_ceq0_vec,
169
- .opt_opc = vecop_list_cmp,
170
- .vece = MO_16 },
171
- { .fni4 = gen_ceq0_i32,
172
- .fniv = gen_ceq0_vec,
173
- .opt_opc = vecop_list_cmp,
174
- .vece = MO_32 },
175
- { .fni8 = gen_ceq0_i64,
176
- .fniv = gen_ceq0_vec,
177
- .opt_opc = vecop_list_cmp,
178
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
179
- .vece = MO_64 },
180
-};
181
+GEN_CMP0(ceq, TCG_COND_EQ)
182
+GEN_CMP0(cle, TCG_COND_LE)
183
+GEN_CMP0(cge, TCG_COND_GE)
184
+GEN_CMP0(clt, TCG_COND_LT)
185
+GEN_CMP0(cgt, TCG_COND_GT)
186
187
-static void gen_cle0_i32(TCGv_i32 d, TCGv_i32 a)
188
-{
189
- tcg_gen_setcondi_i32(TCG_COND_LE, d, a, 0);
190
- tcg_gen_neg_i32(d, d);
191
-}
192
-
193
-static void gen_cle0_i64(TCGv_i64 d, TCGv_i64 a)
194
-{
195
- tcg_gen_setcondi_i64(TCG_COND_LE, d, a, 0);
196
- tcg_gen_neg_i64(d, d);
197
-}
198
-
199
-static void gen_cle0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
200
-{
201
- TCGv_vec zero = tcg_const_zeros_vec_matching(d);
202
- tcg_gen_cmp_vec(TCG_COND_LE, vece, d, a, zero);
203
- tcg_temp_free_vec(zero);
204
-}
205
-
206
-const GVecGen2 cle0_op[4] = {
207
- { .fno = gen_helper_gvec_cle0_b,
208
- .fniv = gen_cle0_vec,
209
- .opt_opc = vecop_list_cmp,
210
- .vece = MO_8 },
211
- { .fno = gen_helper_gvec_cle0_h,
212
- .fniv = gen_cle0_vec,
213
- .opt_opc = vecop_list_cmp,
214
- .vece = MO_16 },
215
- { .fni4 = gen_cle0_i32,
216
- .fniv = gen_cle0_vec,
217
- .opt_opc = vecop_list_cmp,
218
- .vece = MO_32 },
219
- { .fni8 = gen_cle0_i64,
220
- .fniv = gen_cle0_vec,
221
- .opt_opc = vecop_list_cmp,
222
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
223
- .vece = MO_64 },
224
-};
225
-
226
-static void gen_cge0_i32(TCGv_i32 d, TCGv_i32 a)
227
-{
228
- tcg_gen_setcondi_i32(TCG_COND_GE, d, a, 0);
229
- tcg_gen_neg_i32(d, d);
230
-}
231
-
232
-static void gen_cge0_i64(TCGv_i64 d, TCGv_i64 a)
233
-{
234
- tcg_gen_setcondi_i64(TCG_COND_GE, d, a, 0);
235
- tcg_gen_neg_i64(d, d);
236
-}
237
-
238
-static void gen_cge0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
239
-{
240
- TCGv_vec zero = tcg_const_zeros_vec_matching(d);
241
- tcg_gen_cmp_vec(TCG_COND_GE, vece, d, a, zero);
242
- tcg_temp_free_vec(zero);
243
-}
244
-
245
-const GVecGen2 cge0_op[4] = {
246
- { .fno = gen_helper_gvec_cge0_b,
247
- .fniv = gen_cge0_vec,
248
- .opt_opc = vecop_list_cmp,
249
- .vece = MO_8 },
250
- { .fno = gen_helper_gvec_cge0_h,
251
- .fniv = gen_cge0_vec,
252
- .opt_opc = vecop_list_cmp,
253
- .vece = MO_16 },
254
- { .fni4 = gen_cge0_i32,
255
- .fniv = gen_cge0_vec,
256
- .opt_opc = vecop_list_cmp,
257
- .vece = MO_32 },
258
- { .fni8 = gen_cge0_i64,
259
- .fniv = gen_cge0_vec,
260
- .opt_opc = vecop_list_cmp,
261
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
262
- .vece = MO_64 },
263
-};
264
-
265
-static void gen_clt0_i32(TCGv_i32 d, TCGv_i32 a)
266
-{
267
- tcg_gen_setcondi_i32(TCG_COND_LT, d, a, 0);
268
- tcg_gen_neg_i32(d, d);
269
-}
270
-
271
-static void gen_clt0_i64(TCGv_i64 d, TCGv_i64 a)
272
-{
273
- tcg_gen_setcondi_i64(TCG_COND_LT, d, a, 0);
274
- tcg_gen_neg_i64(d, d);
275
-}
276
-
277
-static void gen_clt0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
278
-{
279
- TCGv_vec zero = tcg_const_zeros_vec_matching(d);
280
- tcg_gen_cmp_vec(TCG_COND_LT, vece, d, a, zero);
281
- tcg_temp_free_vec(zero);
282
-}
283
-
284
-const GVecGen2 clt0_op[4] = {
285
- { .fno = gen_helper_gvec_clt0_b,
286
- .fniv = gen_clt0_vec,
287
- .opt_opc = vecop_list_cmp,
288
- .vece = MO_8 },
289
- { .fno = gen_helper_gvec_clt0_h,
290
- .fniv = gen_clt0_vec,
291
- .opt_opc = vecop_list_cmp,
292
- .vece = MO_16 },
293
- { .fni4 = gen_clt0_i32,
294
- .fniv = gen_clt0_vec,
295
- .opt_opc = vecop_list_cmp,
296
- .vece = MO_32 },
297
- { .fni8 = gen_clt0_i64,
298
- .fniv = gen_clt0_vec,
299
- .opt_opc = vecop_list_cmp,
300
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
301
- .vece = MO_64 },
302
-};
303
-
304
-static void gen_cgt0_i32(TCGv_i32 d, TCGv_i32 a)
305
-{
306
- tcg_gen_setcondi_i32(TCG_COND_GT, d, a, 0);
307
- tcg_gen_neg_i32(d, d);
308
-}
309
-
310
-static void gen_cgt0_i64(TCGv_i64 d, TCGv_i64 a)
311
-{
312
- tcg_gen_setcondi_i64(TCG_COND_GT, d, a, 0);
313
- tcg_gen_neg_i64(d, d);
314
-}
315
-
316
-static void gen_cgt0_vec(unsigned vece, TCGv_vec d, TCGv_vec a)
317
-{
318
- TCGv_vec zero = tcg_const_zeros_vec_matching(d);
319
- tcg_gen_cmp_vec(TCG_COND_GT, vece, d, a, zero);
320
- tcg_temp_free_vec(zero);
321
-}
322
-
323
-const GVecGen2 cgt0_op[4] = {
324
- { .fno = gen_helper_gvec_cgt0_b,
325
- .fniv = gen_cgt0_vec,
326
- .opt_opc = vecop_list_cmp,
327
- .vece = MO_8 },
328
- { .fno = gen_helper_gvec_cgt0_h,
329
- .fniv = gen_cgt0_vec,
330
- .opt_opc = vecop_list_cmp,
331
- .vece = MO_16 },
332
- { .fni4 = gen_cgt0_i32,
333
- .fniv = gen_cgt0_vec,
334
- .opt_opc = vecop_list_cmp,
335
- .vece = MO_32 },
336
- { .fni8 = gen_cgt0_i64,
337
- .fniv = gen_cgt0_vec,
338
- .opt_opc = vecop_list_cmp,
339
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
340
- .vece = MO_64 },
341
-};
342
+#undef GEN_CMP0
343
344
static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
345
{
346
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
347
break;
348
349
case NEON_2RM_VCEQ0:
350
- tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
351
- vec_size, &ceq0_op[size]);
352
+ gen_gvec_ceq0(size, rd_ofs, rm_ofs, vec_size, vec_size);
353
break;
354
case NEON_2RM_VCGT0:
355
- tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
356
- vec_size, &cgt0_op[size]);
357
+ gen_gvec_cgt0(size, rd_ofs, rm_ofs, vec_size, vec_size);
358
break;
359
case NEON_2RM_VCLE0:
360
- tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
361
- vec_size, &cle0_op[size]);
362
+ gen_gvec_cle0(size, rd_ofs, rm_ofs, vec_size, vec_size);
363
break;
364
case NEON_2RM_VCGE0:
365
- tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
366
- vec_size, &cge0_op[size]);
367
+ gen_gvec_cge0(size, rd_ofs, rm_ofs, vec_size, vec_size);
368
break;
369
case NEON_2RM_VCLT0:
370
- tcg_gen_gvec_2(rd_ofs, rm_ofs, vec_size,
371
- vec_size, &clt0_op[size]);
372
+ gen_gvec_clt0(size, rd_ofs, rm_ofs, vec_size, vec_size);
373
break;
374
375
default:
376
--
28
--
377
2.20.1
29
2.34.1
378
379
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In 1dc8425e551, while converting to gvec, I added an extra range check
3
This feature allows the operating system to set TCR_ELx.HWU*
4
against the shift count. This was unnecessary because the encoding of
4
to allow the implementation to use the PBHA bits from the
5
the shift count produces 0 to the element size - 1.
5
block and page descriptors for IMPLEMENTATION DEFINED
6
purposes. Since QEMU has no need to use these bits, we may
7
simply ignore them.
6
8
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20230811214031.171020-11-richard.henderson@linaro.org
9
Message-id: 20200513163245.17915-5-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
target/arm/translate.c | 12 ++----------
14
docs/system/arm/emulation.rst | 1 +
13
1 file changed, 2 insertions(+), 10 deletions(-)
15
target/arm/tcg/cpu32.c | 2 +-
16
target/arm/tcg/cpu64.c | 2 +-
17
3 files changed, 3 insertions(+), 2 deletions(-)
14
18
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
21
--- a/docs/system/arm/emulation.rst
18
+++ b/target/arm/translate.c
22
+++ b/docs/system/arm/emulation.rst
19
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
23
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
20
gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
24
- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
21
vec_size, vec_size);
25
- FEAT_HCX (Support for the HCRX_EL2 register)
22
} else { /* VSHL */
26
- FEAT_HPDS (Hierarchical permission disables)
23
- /* Shifts larger than the element size are
27
+- FEAT_HPDS2 (Translation table page-based hardware attributes)
24
- * architecturally valid and results in zero.
28
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
25
- */
29
- FEAT_IDST (ID space trap handling)
26
- if (shift >= 8 << size) {
30
- FEAT_IESB (Implicit error synchronization event)
27
- tcg_gen_gvec_dup_imm(size, rd_ofs,
31
diff --git a/target/arm/tcg/cpu32.c b/target/arm/tcg/cpu32.c
28
- vec_size, vec_size, 0);
32
index XXXXXXX..XXXXXXX 100644
29
- } else {
33
--- a/target/arm/tcg/cpu32.c
30
- tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
34
+++ b/target/arm/tcg/cpu32.c
31
- vec_size, vec_size);
35
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
32
- }
36
cpu->isar.id_mmfr3 = t;
33
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
37
34
+ vec_size, vec_size);
38
t = cpu->isar.id_mmfr4;
35
}
39
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
36
return 0;
40
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 2); /* FEAT_HPDS2 */
37
}
41
t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
42
t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
43
t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX */
44
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/cpu64.c
47
+++ b/target/arm/tcg/cpu64.c
48
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
49
t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 2); /* FEAT_HAFDBS */
50
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
51
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
52
- t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
53
+ t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 2); /* FEAT_HPDS2 */
54
t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
55
t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 3); /* FEAT_PAN3 */
56
t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
38
--
57
--
39
2.20.1
58
2.34.1
40
41
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Provide a functional interface for the vector expansion.
4
This fits better with the existing set of helpers that
5
we provide for other operations.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200513163245.17915-11-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate.h | 13 +-
13
target/arm/translate-a64.c | 22 ++-
14
target/arm/translate-neon.inc.c | 19 +--
15
target/arm/translate.c | 228 +++++++++++++++++---------------
16
4 files changed, 147 insertions(+), 135 deletions(-)
17
18
diff --git a/target/arm/translate.h b/target/arm/translate.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.h
21
+++ b/target/arm/translate.h
22
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
23
void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
24
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
25
26
-extern const GVecGen4 uqadd_op[4];
27
-extern const GVecGen4 sqadd_op[4];
28
-extern const GVecGen4 uqsub_op[4];
29
-extern const GVecGen4 sqsub_op[4];
30
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
31
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
32
void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
33
void gen_ushl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
34
void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
35
36
+void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
37
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
38
+void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
39
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
40
+void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
41
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
42
+void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
43
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
44
+
45
void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
46
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
47
void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
48
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/translate-a64.c
51
+++ b/target/arm/translate-a64.c
52
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
53
54
switch (opcode) {
55
case 0x01: /* SQADD, UQADD */
56
- tcg_gen_gvec_4(vec_full_reg_offset(s, rd),
57
- offsetof(CPUARMState, vfp.qc),
58
- vec_full_reg_offset(s, rn),
59
- vec_full_reg_offset(s, rm),
60
- is_q ? 16 : 8, vec_full_reg_size(s),
61
- (u ? uqadd_op : sqadd_op) + size);
62
+ if (u) {
63
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uqadd_qc, size);
64
+ } else {
65
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqadd_qc, size);
66
+ }
67
return;
68
case 0x05: /* SQSUB, UQSUB */
69
- tcg_gen_gvec_4(vec_full_reg_offset(s, rd),
70
- offsetof(CPUARMState, vfp.qc),
71
- vec_full_reg_offset(s, rn),
72
- vec_full_reg_offset(s, rm),
73
- is_q ? 16 : 8, vec_full_reg_size(s),
74
- (u ? uqsub_op : sqsub_op) + size);
75
+ if (u) {
76
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uqsub_qc, size);
77
+ } else {
78
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqsub_qc, size);
79
+ }
80
return;
81
case 0x08: /* SSHL, USHL */
82
if (u) {
83
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/translate-neon.inc.c
86
+++ b/target/arm/translate-neon.inc.c
87
@@ -XXX,XX +XXX,XX @@ DO_3SAME(VORN, tcg_gen_gvec_orc)
88
DO_3SAME(VEOR, tcg_gen_gvec_xor)
89
DO_3SAME(VSHL_S, gen_gvec_sshl)
90
DO_3SAME(VSHL_U, gen_gvec_ushl)
91
+DO_3SAME(VQADD_S, gen_gvec_sqadd_qc)
92
+DO_3SAME(VQADD_U, gen_gvec_uqadd_qc)
93
+DO_3SAME(VQSUB_S, gen_gvec_sqsub_qc)
94
+DO_3SAME(VQSUB_U, gen_gvec_uqsub_qc)
95
96
/* These insns are all gvec_bitsel but with the inputs in various orders. */
97
#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
98
@@ -XXX,XX +XXX,XX @@ DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
99
DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
100
DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
101
102
-#define DO_3SAME_GVEC4(INSN, OPARRAY) \
103
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
104
- uint32_t rn_ofs, uint32_t rm_ofs, \
105
- uint32_t oprsz, uint32_t maxsz) \
106
- { \
107
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc), \
108
- rn_ofs, rm_ofs, oprsz, maxsz, &OPARRAY[vece]); \
109
- } \
110
- DO_3SAME(INSN, gen_##INSN##_3s)
111
-
112
-DO_3SAME_GVEC4(VQADD_S, sqadd_op)
113
-DO_3SAME_GVEC4(VQADD_U, uqadd_op)
114
-DO_3SAME_GVEC4(VQSUB_S, sqsub_op)
115
-DO_3SAME_GVEC4(VQSUB_U, uqsub_op)
116
-
117
static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
118
uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
119
{
120
diff --git a/target/arm/translate.c b/target/arm/translate.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/target/arm/translate.c
123
+++ b/target/arm/translate.c
124
@@ -XXX,XX +XXX,XX @@ static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
125
tcg_temp_free_vec(x);
126
}
127
128
-static const TCGOpcode vecop_list_uqadd[] = {
129
- INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
130
-};
131
-
132
-const GVecGen4 uqadd_op[4] = {
133
- { .fniv = gen_uqadd_vec,
134
- .fno = gen_helper_gvec_uqadd_b,
135
- .write_aofs = true,
136
- .opt_opc = vecop_list_uqadd,
137
- .vece = MO_8 },
138
- { .fniv = gen_uqadd_vec,
139
- .fno = gen_helper_gvec_uqadd_h,
140
- .write_aofs = true,
141
- .opt_opc = vecop_list_uqadd,
142
- .vece = MO_16 },
143
- { .fniv = gen_uqadd_vec,
144
- .fno = gen_helper_gvec_uqadd_s,
145
- .write_aofs = true,
146
- .opt_opc = vecop_list_uqadd,
147
- .vece = MO_32 },
148
- { .fniv = gen_uqadd_vec,
149
- .fno = gen_helper_gvec_uqadd_d,
150
- .write_aofs = true,
151
- .opt_opc = vecop_list_uqadd,
152
- .vece = MO_64 },
153
-};
154
+void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
155
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
156
+{
157
+ static const TCGOpcode vecop_list[] = {
158
+ INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
159
+ };
160
+ static const GVecGen4 ops[4] = {
161
+ { .fniv = gen_uqadd_vec,
162
+ .fno = gen_helper_gvec_uqadd_b,
163
+ .write_aofs = true,
164
+ .opt_opc = vecop_list,
165
+ .vece = MO_8 },
166
+ { .fniv = gen_uqadd_vec,
167
+ .fno = gen_helper_gvec_uqadd_h,
168
+ .write_aofs = true,
169
+ .opt_opc = vecop_list,
170
+ .vece = MO_16 },
171
+ { .fniv = gen_uqadd_vec,
172
+ .fno = gen_helper_gvec_uqadd_s,
173
+ .write_aofs = true,
174
+ .opt_opc = vecop_list,
175
+ .vece = MO_32 },
176
+ { .fniv = gen_uqadd_vec,
177
+ .fno = gen_helper_gvec_uqadd_d,
178
+ .write_aofs = true,
179
+ .opt_opc = vecop_list,
180
+ .vece = MO_64 },
181
+ };
182
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
183
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
184
+}
185
186
static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
187
TCGv_vec a, TCGv_vec b)
188
@@ -XXX,XX +XXX,XX @@ static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
189
tcg_temp_free_vec(x);
190
}
191
192
-static const TCGOpcode vecop_list_sqadd[] = {
193
- INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
194
-};
195
-
196
-const GVecGen4 sqadd_op[4] = {
197
- { .fniv = gen_sqadd_vec,
198
- .fno = gen_helper_gvec_sqadd_b,
199
- .opt_opc = vecop_list_sqadd,
200
- .write_aofs = true,
201
- .vece = MO_8 },
202
- { .fniv = gen_sqadd_vec,
203
- .fno = gen_helper_gvec_sqadd_h,
204
- .opt_opc = vecop_list_sqadd,
205
- .write_aofs = true,
206
- .vece = MO_16 },
207
- { .fniv = gen_sqadd_vec,
208
- .fno = gen_helper_gvec_sqadd_s,
209
- .opt_opc = vecop_list_sqadd,
210
- .write_aofs = true,
211
- .vece = MO_32 },
212
- { .fniv = gen_sqadd_vec,
213
- .fno = gen_helper_gvec_sqadd_d,
214
- .opt_opc = vecop_list_sqadd,
215
- .write_aofs = true,
216
- .vece = MO_64 },
217
-};
218
+void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
219
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
220
+{
221
+ static const TCGOpcode vecop_list[] = {
222
+ INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
223
+ };
224
+ static const GVecGen4 ops[4] = {
225
+ { .fniv = gen_sqadd_vec,
226
+ .fno = gen_helper_gvec_sqadd_b,
227
+ .opt_opc = vecop_list,
228
+ .write_aofs = true,
229
+ .vece = MO_8 },
230
+ { .fniv = gen_sqadd_vec,
231
+ .fno = gen_helper_gvec_sqadd_h,
232
+ .opt_opc = vecop_list,
233
+ .write_aofs = true,
234
+ .vece = MO_16 },
235
+ { .fniv = gen_sqadd_vec,
236
+ .fno = gen_helper_gvec_sqadd_s,
237
+ .opt_opc = vecop_list,
238
+ .write_aofs = true,
239
+ .vece = MO_32 },
240
+ { .fniv = gen_sqadd_vec,
241
+ .fno = gen_helper_gvec_sqadd_d,
242
+ .opt_opc = vecop_list,
243
+ .write_aofs = true,
244
+ .vece = MO_64 },
245
+ };
246
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
247
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
248
+}
249
250
static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
251
TCGv_vec a, TCGv_vec b)
252
@@ -XXX,XX +XXX,XX @@ static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
253
tcg_temp_free_vec(x);
254
}
255
256
-static const TCGOpcode vecop_list_uqsub[] = {
257
- INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
258
-};
259
-
260
-const GVecGen4 uqsub_op[4] = {
261
- { .fniv = gen_uqsub_vec,
262
- .fno = gen_helper_gvec_uqsub_b,
263
- .opt_opc = vecop_list_uqsub,
264
- .write_aofs = true,
265
- .vece = MO_8 },
266
- { .fniv = gen_uqsub_vec,
267
- .fno = gen_helper_gvec_uqsub_h,
268
- .opt_opc = vecop_list_uqsub,
269
- .write_aofs = true,
270
- .vece = MO_16 },
271
- { .fniv = gen_uqsub_vec,
272
- .fno = gen_helper_gvec_uqsub_s,
273
- .opt_opc = vecop_list_uqsub,
274
- .write_aofs = true,
275
- .vece = MO_32 },
276
- { .fniv = gen_uqsub_vec,
277
- .fno = gen_helper_gvec_uqsub_d,
278
- .opt_opc = vecop_list_uqsub,
279
- .write_aofs = true,
280
- .vece = MO_64 },
281
-};
282
+void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
283
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
284
+{
285
+ static const TCGOpcode vecop_list[] = {
286
+ INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
287
+ };
288
+ static const GVecGen4 ops[4] = {
289
+ { .fniv = gen_uqsub_vec,
290
+ .fno = gen_helper_gvec_uqsub_b,
291
+ .opt_opc = vecop_list,
292
+ .write_aofs = true,
293
+ .vece = MO_8 },
294
+ { .fniv = gen_uqsub_vec,
295
+ .fno = gen_helper_gvec_uqsub_h,
296
+ .opt_opc = vecop_list,
297
+ .write_aofs = true,
298
+ .vece = MO_16 },
299
+ { .fniv = gen_uqsub_vec,
300
+ .fno = gen_helper_gvec_uqsub_s,
301
+ .opt_opc = vecop_list,
302
+ .write_aofs = true,
303
+ .vece = MO_32 },
304
+ { .fniv = gen_uqsub_vec,
305
+ .fno = gen_helper_gvec_uqsub_d,
306
+ .opt_opc = vecop_list,
307
+ .write_aofs = true,
308
+ .vece = MO_64 },
309
+ };
310
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
311
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
312
+}
313
314
static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
315
TCGv_vec a, TCGv_vec b)
316
@@ -XXX,XX +XXX,XX @@ static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
317
tcg_temp_free_vec(x);
318
}
319
320
-static const TCGOpcode vecop_list_sqsub[] = {
321
- INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
322
-};
323
-
324
-const GVecGen4 sqsub_op[4] = {
325
- { .fniv = gen_sqsub_vec,
326
- .fno = gen_helper_gvec_sqsub_b,
327
- .opt_opc = vecop_list_sqsub,
328
- .write_aofs = true,
329
- .vece = MO_8 },
330
- { .fniv = gen_sqsub_vec,
331
- .fno = gen_helper_gvec_sqsub_h,
332
- .opt_opc = vecop_list_sqsub,
333
- .write_aofs = true,
334
- .vece = MO_16 },
335
- { .fniv = gen_sqsub_vec,
336
- .fno = gen_helper_gvec_sqsub_s,
337
- .opt_opc = vecop_list_sqsub,
338
- .write_aofs = true,
339
- .vece = MO_32 },
340
- { .fniv = gen_sqsub_vec,
341
- .fno = gen_helper_gvec_sqsub_d,
342
- .opt_opc = vecop_list_sqsub,
343
- .write_aofs = true,
344
- .vece = MO_64 },
345
-};
346
+void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
347
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
348
+{
349
+ static const TCGOpcode vecop_list[] = {
350
+ INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
351
+ };
352
+ static const GVecGen4 ops[4] = {
353
+ { .fniv = gen_sqsub_vec,
354
+ .fno = gen_helper_gvec_sqsub_b,
355
+ .opt_opc = vecop_list,
356
+ .write_aofs = true,
357
+ .vece = MO_8 },
358
+ { .fniv = gen_sqsub_vec,
359
+ .fno = gen_helper_gvec_sqsub_h,
360
+ .opt_opc = vecop_list,
361
+ .write_aofs = true,
362
+ .vece = MO_16 },
363
+ { .fniv = gen_sqsub_vec,
364
+ .fno = gen_helper_gvec_sqsub_s,
365
+ .opt_opc = vecop_list,
366
+ .write_aofs = true,
367
+ .vece = MO_32 },
368
+ { .fniv = gen_sqsub_vec,
369
+ .fno = gen_helper_gvec_sqsub_d,
370
+ .opt_opc = vecop_list,
371
+ .write_aofs = true,
372
+ .vece = MO_64 },
373
+ };
374
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
375
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
376
+}
377
378
/* Translate a NEON data processing instruction. Return nonzero if the
379
instruction is invalid.
380
--
381
2.20.1
382
383
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
These operations do not touch fp_status.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200513163245.17915-12-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 4 ++--
11
target/arm/translate-a64.c | 5 ++---
12
target/arm/translate.c | 12 ++----------
13
target/arm/vfp_helper.c | 5 ++---
14
4 files changed, 8 insertions(+), 18 deletions(-)
15
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.h
19
+++ b/target/arm/helper.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(recpe_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
21
DEF_HELPER_FLAGS_2(rsqrte_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
22
DEF_HELPER_FLAGS_2(rsqrte_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
23
DEF_HELPER_FLAGS_2(rsqrte_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
24
-DEF_HELPER_2(recpe_u32, i32, i32, ptr)
25
-DEF_HELPER_FLAGS_2(rsqrte_u32, TCG_CALL_NO_RWG, i32, i32, ptr)
26
+DEF_HELPER_FLAGS_1(recpe_u32, TCG_CALL_NO_RWG, i32, i32)
27
+DEF_HELPER_FLAGS_1(rsqrte_u32, TCG_CALL_NO_RWG, i32, i32)
28
DEF_HELPER_FLAGS_4(neon_tbl, TCG_CALL_NO_RWG, i32, i32, i32, ptr, i32)
29
30
DEF_HELPER_3(shl_cc, i32, env, i32, i32)
31
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-a64.c
34
+++ b/target/arm/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
36
37
switch (opcode) {
38
case 0x3c: /* URECPE */
39
- gen_helper_recpe_u32(tcg_res, tcg_op, fpst);
40
+ gen_helper_recpe_u32(tcg_res, tcg_op);
41
break;
42
case 0x3d: /* FRECPE */
43
gen_helper_recpe_f32(tcg_res, tcg_op, fpst);
44
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
45
unallocated_encoding(s);
46
return;
47
}
48
- need_fpstatus = true;
49
break;
50
case 0x1e: /* FRINT32Z */
51
case 0x1f: /* FRINT64Z */
52
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
53
gen_helper_rints_exact(tcg_res, tcg_op, tcg_fpstatus);
54
break;
55
case 0x7c: /* URSQRTE */
56
- gen_helper_rsqrte_u32(tcg_res, tcg_op, tcg_fpstatus);
57
+ gen_helper_rsqrte_u32(tcg_res, tcg_op);
58
break;
59
case 0x1e: /* FRINT32Z */
60
case 0x5e: /* FRINT32X */
61
diff --git a/target/arm/translate.c b/target/arm/translate.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/translate.c
64
+++ b/target/arm/translate.c
65
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
66
break;
67
}
68
case NEON_2RM_VRECPE:
69
- {
70
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
71
- gen_helper_recpe_u32(tmp, tmp, fpstatus);
72
- tcg_temp_free_ptr(fpstatus);
73
+ gen_helper_recpe_u32(tmp, tmp);
74
break;
75
- }
76
case NEON_2RM_VRSQRTE:
77
- {
78
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
79
- gen_helper_rsqrte_u32(tmp, tmp, fpstatus);
80
- tcg_temp_free_ptr(fpstatus);
81
+ gen_helper_rsqrte_u32(tmp, tmp);
82
break;
83
- }
84
case NEON_2RM_VRECPE_F:
85
{
86
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
87
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/target/arm/vfp_helper.c
90
+++ b/target/arm/vfp_helper.c
91
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrte_f64)(float64 input, void *fpstp)
92
return make_float64(val);
93
}
94
95
-uint32_t HELPER(recpe_u32)(uint32_t a, void *fpstp)
96
+uint32_t HELPER(recpe_u32)(uint32_t a)
97
{
98
- /* float_status *s = fpstp; */
99
int input, estimate;
100
101
if ((a & 0x80000000) == 0) {
102
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(recpe_u32)(uint32_t a, void *fpstp)
103
return deposit32(0, (32 - 9), 9, estimate);
104
}
105
106
-uint32_t HELPER(rsqrte_u32)(uint32_t a, void *fpstp)
107
+uint32_t HELPER(rsqrte_u32)(uint32_t a)
108
{
109
int estimate;
110
111
--
112
2.20.1
113
114
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Alex Bennée <alex.bennee@linaro.org>
2
2
3
Xiang and I are willing to review the APEI-related patches and
3
This is a mandatory feature for Armv8.1 architectures, but we don't
4
volunteer as the reviewers for the HEST/GHES part.
4
state the feature clearly in our emulation list. Also include
5
a FEAT_CRC32 comment in aarch64_max_tcg_initfn for ease of grepping.
5
6
6
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
8
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20230824075406.1515566-1-alex.bennee@linaro.org
9
Acked-by: Michael S. Tsirkin <mst@redhat.com>
10
Cc: qemu-stable@nongnu.org
10
Message-id: 20200512030609.19593-11-gengdongjiu@huawei.com
11
Message-Id: <20230222110104.3996971-1-alex.bennee@linaro.org>
12
[PMM: pluralize 'instructions' in docs]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
14
---
13
MAINTAINERS | 9 +++++++++
15
docs/system/arm/emulation.rst | 1 +
14
1 file changed, 9 insertions(+)
16
target/arm/tcg/cpu64.c | 2 +-
17
2 files changed, 2 insertions(+), 1 deletion(-)
15
18
16
diff --git a/MAINTAINERS b/MAINTAINERS
19
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
18
--- a/MAINTAINERS
21
--- a/docs/system/arm/emulation.rst
19
+++ b/MAINTAINERS
22
+++ b/docs/system/arm/emulation.rst
20
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/bios-tables-test.c
23
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
F: tests/qtest/acpi-utils.[hc]
24
- FEAT_BBM at level 2 (Translation table break-before-make levels)
22
F: tests/data/acpi/
25
- FEAT_BF16 (AArch64 BFloat16 instructions)
23
26
- FEAT_BTI (Branch Target Identification)
24
+ACPI/HEST/GHES
27
+- FEAT_CRC32 (CRC32 instructions)
25
+R: Dongjiu Geng <gengdongjiu@huawei.com>
28
- FEAT_CSV2 (Cache speculation variant 2)
26
+R: Xiang Zheng <zhengxiang9@huawei.com>
29
- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
27
+L: qemu-arm@nongnu.org
30
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
28
+S: Maintained
31
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
29
+F: hw/acpi/ghes.c
32
index XXXXXXX..XXXXXXX 100644
30
+F: include/hw/acpi/ghes.h
33
--- a/target/arm/tcg/cpu64.c
31
+F: docs/specs/acpi_hest_ghes.rst
34
+++ b/target/arm/tcg/cpu64.c
32
+
35
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
33
ppc4xx
36
t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
34
M: David Gibson <david@gibson.dropbear.id.au>
37
t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
35
L: qemu-ppc@nongnu.org
38
t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
39
- t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
40
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1); /* FEAT_CRC32 */
41
t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
42
t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
43
t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
36
--
44
--
37
2.20.1
45
2.34.1
38
46
39
47
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
kvm_hwpoison_page_add() and kvm_unpoison_all() will both
3
The i.MX7 IOMUX GPR device is not equivalent to the i.MX6UL IOMUXC GPR device.
4
be used by X86 and ARM platforms, so move them into
4
In particular, register 22 is not present on i.MX6UL and this is actually
5
"accel/kvm/kvm-all.c" to avoid duplicate code.
5
the only register that is really emulated in the i.MX7 IOMUX GPR device.
6
6
7
For architectures that don't use the poison-list functionality,
7
Note: The i.MX6UL code is actually also implementing the IOMUX GPR device
8
the reset handler will harmlessly do nothing, so let's register
8
as an unimplemented device at the same bus address, and the two instantiations
9
the kvm_unpoison_all() function in the generic kvm_init() function.
9
were actually colliding. So we go back to the unimplemented device for now.
10
10
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
12
Message-id: 48681bf51ee97646479bb261bee19abebbc8074e.1692964892.git.jcd@tribudubois.net
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
13
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
14
Acked-by: Xiang Zheng <zhengxiang9@huawei.com>
15
Message-id: 20200512030609.19593-8-gengdongjiu@huawei.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
15
---
18
include/sysemu/kvm_int.h | 12 ++++++++++++
16
include/hw/arm/fsl-imx6ul.h | 2 --
19
accel/kvm/kvm-all.c | 36 ++++++++++++++++++++++++++++++++++++
17
hw/arm/fsl-imx6ul.c | 11 -----------
20
target/i386/kvm.c | 36 ------------------------------------
18
2 files changed, 13 deletions(-)
21
3 files changed, 48 insertions(+), 36 deletions(-)
22
19
23
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
20
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
24
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
25
--- a/include/sysemu/kvm_int.h
22
--- a/include/hw/arm/fsl-imx6ul.h
26
+++ b/include/sysemu/kvm_int.h
23
+++ b/include/hw/arm/fsl-imx6ul.h
27
@@ -XXX,XX +XXX,XX @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
24
@@ -XXX,XX +XXX,XX @@
28
AddressSpace *as, int as_id);
25
#include "hw/misc/imx6ul_ccm.h"
29
26
#include "hw/misc/imx6_src.h"
30
void kvm_set_max_memslot_size(hwaddr max_slot_size);
27
#include "hw/misc/imx7_snvs.h"
31
+
28
-#include "hw/misc/imx7_gpr.h"
32
+/**
29
#include "hw/intc/imx_gpcv2.h"
33
+ * kvm_hwpoison_page_add:
30
#include "hw/watchdog/wdt_imx2.h"
34
+ *
31
#include "hw/gpio/imx_gpio.h"
35
+ * Parameters:
32
@@ -XXX,XX +XXX,XX @@ struct FslIMX6ULState {
36
+ * @ram_addr: the address in the RAM for the poisoned page
33
IMX6SRCState src;
37
+ *
34
IMX7SNVSState snvs;
38
+ * Add a poisoned page to the list
35
IMXGPCv2State gpcv2;
39
+ *
36
- IMX7GPRState gpr;
40
+ * Return: None.
37
IMXSPIState spi[FSL_IMX6UL_NUM_ECSPIS];
41
+ */
38
IMXI2CState i2c[FSL_IMX6UL_NUM_I2CS];
42
+void kvm_hwpoison_page_add(ram_addr_t ram_addr);
39
IMXSerialState uart[FSL_IMX6UL_NUM_UARTS];
43
#endif
40
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
44
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
45
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
46
--- a/accel/kvm/kvm-all.c
42
--- a/hw/arm/fsl-imx6ul.c
47
+++ b/accel/kvm/kvm-all.c
43
+++ b/hw/arm/fsl-imx6ul.c
48
@@ -XXX,XX +XXX,XX @@
44
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
49
#include "qapi/visitor.h"
45
*/
50
#include "qapi/qapi-types-common.h"
46
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
51
#include "qapi/qapi-visit-common.h"
47
52
+#include "sysemu/reset.h"
48
- /*
53
49
- * GPR
54
#include "hw/boards.h"
50
- */
55
51
- object_initialize_child(obj, "gpr", &s->gpr, TYPE_IMX7_GPR);
56
@@ -XXX,XX +XXX,XX @@ int kvm_vm_check_extension(KVMState *s, unsigned int extension)
52
-
57
return ret;
53
/*
58
}
54
* GPIOs 1 to 5
59
55
*/
60
+typedef struct HWPoisonPage {
56
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
61
+ ram_addr_t ram_addr;
57
FSL_IMX6UL_WDOGn_IRQ[i]));
62
+ QLIST_ENTRY(HWPoisonPage) list;
63
+} HWPoisonPage;
64
+
65
+static QLIST_HEAD(, HWPoisonPage) hwpoison_page_list =
66
+ QLIST_HEAD_INITIALIZER(hwpoison_page_list);
67
+
68
+static void kvm_unpoison_all(void *param)
69
+{
70
+ HWPoisonPage *page, *next_page;
71
+
72
+ QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
73
+ QLIST_REMOVE(page, list);
74
+ qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
75
+ g_free(page);
76
+ }
77
+}
78
+
79
+void kvm_hwpoison_page_add(ram_addr_t ram_addr)
80
+{
81
+ HWPoisonPage *page;
82
+
83
+ QLIST_FOREACH(page, &hwpoison_page_list, list) {
84
+ if (page->ram_addr == ram_addr) {
85
+ return;
86
+ }
87
+ }
88
+ page = g_new(HWPoisonPage, 1);
89
+ page->ram_addr = ram_addr;
90
+ QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
91
+}
92
+
93
static uint32_t adjust_ioeventfd_endianness(uint32_t val, uint32_t size)
94
{
95
#if defined(HOST_WORDS_BIGENDIAN) != defined(TARGET_WORDS_BIGENDIAN)
96
@@ -XXX,XX +XXX,XX @@ static int kvm_init(MachineState *ms)
97
s->kernel_irqchip_split = mc->default_kernel_irqchip_split ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
98
}
58
}
99
59
100
+ qemu_register_reset(kvm_unpoison_all, NULL);
60
- /*
101
+
61
- * GPR
102
if (s->kernel_irqchip_allowed) {
62
- */
103
kvm_irqchip_create(s);
63
- sysbus_realize(SYS_BUS_DEVICE(&s->gpr), &error_abort);
104
}
64
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX6UL_IOMUXC_GPR_ADDR);
105
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/i386/kvm.c
108
+++ b/target/i386/kvm.c
109
@@ -XXX,XX +XXX,XX @@
110
#include "sysemu/sysemu.h"
111
#include "sysemu/hw_accel.h"
112
#include "sysemu/kvm_int.h"
113
-#include "sysemu/reset.h"
114
#include "sysemu/runstate.h"
115
#include "kvm_i386.h"
116
#include "hyperv.h"
117
@@ -XXX,XX +XXX,XX @@ uint64_t kvm_arch_get_supported_msr_feature(KVMState *s, uint32_t index)
118
}
119
}
120
121
-
65
-
122
-typedef struct HWPoisonPage {
66
/*
123
- ram_addr_t ram_addr;
67
* SDMA
124
- QLIST_ENTRY(HWPoisonPage) list;
68
*/
125
-} HWPoisonPage;
126
-
127
-static QLIST_HEAD(, HWPoisonPage) hwpoison_page_list =
128
- QLIST_HEAD_INITIALIZER(hwpoison_page_list);
129
-
130
-static void kvm_unpoison_all(void *param)
131
-{
132
- HWPoisonPage *page, *next_page;
133
-
134
- QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
135
- QLIST_REMOVE(page, list);
136
- qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
137
- g_free(page);
138
- }
139
-}
140
-
141
-static void kvm_hwpoison_page_add(ram_addr_t ram_addr)
142
-{
143
- HWPoisonPage *page;
144
-
145
- QLIST_FOREACH(page, &hwpoison_page_list, list) {
146
- if (page->ram_addr == ram_addr) {
147
- return;
148
- }
149
- }
150
- page = g_new(HWPoisonPage, 1);
151
- page->ram_addr = ram_addr;
152
- QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
153
-}
154
-
155
static int kvm_get_mce_cap_supported(KVMState *s, uint64_t *mce_cap,
156
int *max_banks)
157
{
158
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
159
fprintf(stderr, "e820_add_entry() table is full\n");
160
return ret;
161
}
162
- qemu_register_reset(kvm_unpoison_all, NULL);
163
164
shadow_mem = object_property_get_int(OBJECT(s), "kvm-shadow-mem", &error_abort);
165
if (shadow_mem != -1) {
166
--
69
--
167
2.20.1
70
2.34.1
168
169
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
The little-endian UUID layout is used in many places, so make
3
* Add Addr and size definitions for most i.MX6UL devices in the i.MX6UL header file.
4
NVDIMM_UUID_LE a common macro that converts the UUID
4
* Use those newly defined named constants whenever possible.
5
to a little-endian byte array.
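
As an illustration (not part of the patch itself), the SPA range structure
UUID that hw/acpi/nvdimm.c builds with this macro expands as follows:

    static const uint8_t nvdimm_nfit_spa_uuid[] =
        UUID_LE(0x66f0d379, 0xb4f3, 0x4074, 0xac, 0x43, 0x0d, 0x33,
                0x18, 0xb7, 0x8c, 0xdb);
    /* i.e. { 0x79, 0xd3, 0xf0, 0x66, 0xf3, 0xb4, 0x74, 0x40,
     *        0xac, 0x43, 0x0d, 0x33, 0x18, 0xb7, 0x8c, 0xdb } */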
5
* Standardize the way we init a family of unimplemented devices
6
- SAI
7
- PWM
8
- CAN
9
* Add/rework a few comments
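
For reference, the standardized init pattern mentioned above ends up looking
like this in the realize code (SAI shown here; PWM and CAN follow the same
shape):

    for (i = 0; i < FSL_IMX6UL_NUM_SAIS; i++) {
        static const hwaddr FSL_IMX6UL_SAIn_ADDR[FSL_IMX6UL_NUM_SAIS] = {
            FSL_IMX6UL_SAI1_ADDR,
            FSL_IMX6UL_SAI2_ADDR,
            FSL_IMX6UL_SAI3_ADDR,
        };

        snprintf(name, NAME_SIZE, "sai%d", i);
        create_unimplemented_device(name, FSL_IMX6UL_SAIn_ADDR[i],
                                    FSL_IMX6UL_SAIn_SIZE);
    }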
6
10
7
Reviewed-by: Xiang Zheng <zhengxiang9@huawei.com>
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
8
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
12
Message-id: d579043fbd4e4b490370783fda43fc02c8e9be75.1692964892.git.jcd@tribudubois.net
9
Message-id: 20200512030609.19593-2-gengdongjiu@huawei.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
15
---
13
include/qemu/uuid.h | 27 +++++++++++++++++++++++++++
16
include/hw/arm/fsl-imx6ul.h | 156 +++++++++++++++++++++++++++++++-----
14
hw/acpi/nvdimm.c | 10 +++-------
17
hw/arm/fsl-imx6ul.c | 147 ++++++++++++++++++++++-----------
15
2 files changed, 30 insertions(+), 7 deletions(-)
18
2 files changed, 232 insertions(+), 71 deletions(-)
16
19
17
diff --git a/include/qemu/uuid.h b/include/qemu/uuid.h
20
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/include/qemu/uuid.h
22
--- a/include/hw/arm/fsl-imx6ul.h
20
+++ b/include/qemu/uuid.h
23
+++ b/include/hw/arm/fsl-imx6ul.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct {
24
@@ -XXX,XX +XXX,XX @@
22
};
25
#include "exec/memory.h"
23
} QemuUUID;
26
#include "cpu.h"
24
27
#include "qom/object.h"
25
+/**
28
+#include "qemu/units.h"
26
+ * UUID_LE - converts the fields of UUID to little-endian array,
29
27
+ * each of the parameters is a field of the UUID.
30
#define TYPE_FSL_IMX6UL "fsl-imx6ul"
28
+ *
31
OBJECT_DECLARE_SIMPLE_TYPE(FslIMX6ULState, FSL_IMX6UL)
29
+ * @time_low: The low field of the timestamp
32
@@ -XXX,XX +XXX,XX @@ enum FslIMX6ULConfiguration {
30
+ * @time_mid: The middle field of the timestamp
33
FSL_IMX6UL_NUM_ADCS = 2,
31
+ * @time_hi_and_version: The high field of the timestamp
34
FSL_IMX6UL_NUM_USB_PHYS = 2,
32
+ * multiplexed with the version number
35
FSL_IMX6UL_NUM_USBS = 2,
33
+ * @clock_seq_hi_and_reserved: The high field of the clock
36
+ FSL_IMX6UL_NUM_SAIS = 3,
34
+ * sequence multiplexed with the variant
37
+ FSL_IMX6UL_NUM_CANS = 2,
35
+ * @clock_seq_low: The low field of the clock sequence
38
+ FSL_IMX6UL_NUM_PWMS = 4,
36
+ * @node0: The spatially unique node0 identifier
39
};
37
+ * @node1: The spatially unique node1 identifier
40
38
+ * @node2: The spatially unique node2 identifier
41
struct FslIMX6ULState {
39
+ * @node3: The spatially unique node3 identifier
42
@@ -XXX,XX +XXX,XX @@ struct FslIMX6ULState {
40
+ * @node4: The spatially unique node4 identifier
43
41
+ * @node5: The spatially unique node5 identifier
44
enum FslIMX6ULMemoryMap {
42
+ */
45
FSL_IMX6UL_MMDC_ADDR = 0x80000000,
43
+#define UUID_LE(time_low, time_mid, time_hi_and_version, \
46
- FSL_IMX6UL_MMDC_SIZE = 2 * 1024 * 1024 * 1024UL,
44
+ clock_seq_hi_and_reserved, clock_seq_low, node0, node1, node2, \
47
+ FSL_IMX6UL_MMDC_SIZE = (2 * GiB),
45
+ node3, node4, node5) \
48
46
+ { (time_low) & 0xff, ((time_low) >> 8) & 0xff, ((time_low) >> 16) & 0xff, \
49
FSL_IMX6UL_QSPI1_MEM_ADDR = 0x60000000,
47
+ ((time_low) >> 24) & 0xff, (time_mid) & 0xff, ((time_mid) >> 8) & 0xff, \
50
- FSL_IMX6UL_EIM_ALIAS_ADDR = 0x58000000,
48
+ (time_hi_and_version) & 0xff, ((time_hi_and_version) >> 8) & 0xff, \
51
- FSL_IMX6UL_EIM_CS_ADDR = 0x50000000,
49
+ (clock_seq_hi_and_reserved), (clock_seq_low), (node0), (node1), (node2),\
52
- FSL_IMX6UL_AES_ENCRYPT_ADDR = 0x10000000,
50
+ (node3), (node4), (node5) }
53
- FSL_IMX6UL_QSPI1_RX_ADDR = 0x0C000000,
51
+
54
+ FSL_IMX6UL_QSPI1_MEM_SIZE = (256 * MiB),
52
#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-" \
55
53
"%02hhx%02hhx-%02hhx%02hhx-" \
56
- /* AIPS-2 */
54
"%02hhx%02hhx-" \
57
+ FSL_IMX6UL_EIM_ALIAS_ADDR = 0x58000000,
55
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
58
+ FSL_IMX6UL_EIM_ALIAS_SIZE = (128 * MiB),
59
+
60
+ FSL_IMX6UL_EIM_CS_ADDR = 0x50000000,
61
+ FSL_IMX6UL_EIM_CS_SIZE = (128 * MiB),
62
+
63
+ FSL_IMX6UL_AES_ENCRYPT_ADDR = 0x10000000,
64
+ FSL_IMX6UL_AES_ENCRYPT_SIZE = (1 * MiB),
65
+
66
+ FSL_IMX6UL_QSPI1_RX_ADDR = 0x0C000000,
67
+ FSL_IMX6UL_QSPI1_RX_SIZE = (32 * MiB),
68
+
69
+ /* AIPS-2 Begin */
70
FSL_IMX6UL_UART6_ADDR = 0x021FC000,
71
+
72
FSL_IMX6UL_I2C4_ADDR = 0x021F8000,
73
+
74
FSL_IMX6UL_UART5_ADDR = 0x021F4000,
75
FSL_IMX6UL_UART4_ADDR = 0x021F0000,
76
FSL_IMX6UL_UART3_ADDR = 0x021EC000,
77
FSL_IMX6UL_UART2_ADDR = 0x021E8000,
78
+
79
FSL_IMX6UL_WDOG3_ADDR = 0x021E4000,
80
+
81
FSL_IMX6UL_QSPI_ADDR = 0x021E0000,
82
+ FSL_IMX6UL_QSPI_SIZE = 0x500,
83
+
84
FSL_IMX6UL_SYS_CNT_CTRL_ADDR = 0x021DC000,
85
+ FSL_IMX6UL_SYS_CNT_CTRL_SIZE = (16 * KiB),
86
+
87
FSL_IMX6UL_SYS_CNT_CMP_ADDR = 0x021D8000,
88
+ FSL_IMX6UL_SYS_CNT_CMP_SIZE = (16 * KiB),
89
+
90
FSL_IMX6UL_SYS_CNT_RD_ADDR = 0x021D4000,
91
+ FSL_IMX6UL_SYS_CNT_RD_SIZE = (16 * KiB),
92
+
93
FSL_IMX6UL_TZASC_ADDR = 0x021D0000,
94
+ FSL_IMX6UL_TZASC_SIZE = (16 * KiB),
95
+
96
FSL_IMX6UL_PXP_ADDR = 0x021CC000,
97
+ FSL_IMX6UL_PXP_SIZE = (16 * KiB),
98
+
99
FSL_IMX6UL_LCDIF_ADDR = 0x021C8000,
100
+ FSL_IMX6UL_LCDIF_SIZE = 0x100,
101
+
102
FSL_IMX6UL_CSI_ADDR = 0x021C4000,
103
+ FSL_IMX6UL_CSI_SIZE = 0x100,
104
+
105
FSL_IMX6UL_CSU_ADDR = 0x021C0000,
106
+ FSL_IMX6UL_CSU_SIZE = (16 * KiB),
107
+
108
FSL_IMX6UL_OCOTP_CTRL_ADDR = 0x021BC000,
109
+ FSL_IMX6UL_OCOTP_CTRL_SIZE = (4 * KiB),
110
+
111
FSL_IMX6UL_EIM_ADDR = 0x021B8000,
112
+ FSL_IMX6UL_EIM_SIZE = 0x100,
113
+
114
FSL_IMX6UL_SIM2_ADDR = 0x021B4000,
115
+
116
FSL_IMX6UL_MMDC_CFG_ADDR = 0x021B0000,
117
+ FSL_IMX6UL_MMDC_CFG_SIZE = (4 * KiB),
118
+
119
FSL_IMX6UL_ROMCP_ADDR = 0x021AC000,
120
+ FSL_IMX6UL_ROMCP_SIZE = 0x300,
121
+
122
FSL_IMX6UL_I2C3_ADDR = 0x021A8000,
123
FSL_IMX6UL_I2C2_ADDR = 0x021A4000,
124
FSL_IMX6UL_I2C1_ADDR = 0x021A0000,
125
+
126
FSL_IMX6UL_ADC2_ADDR = 0x0219C000,
127
FSL_IMX6UL_ADC1_ADDR = 0x02198000,
128
+ FSL_IMX6UL_ADCn_SIZE = 0x100,
129
+
130
FSL_IMX6UL_USDHC2_ADDR = 0x02194000,
131
FSL_IMX6UL_USDHC1_ADDR = 0x02190000,
132
- FSL_IMX6UL_SIM1_ADDR = 0x0218C000,
133
- FSL_IMX6UL_ENET1_ADDR = 0x02188000,
134
- FSL_IMX6UL_USBO2_USBMISC_ADDR = 0x02184800,
135
- FSL_IMX6UL_USBO2_USB_ADDR = 0x02184000,
136
- FSL_IMX6UL_USBO2_PL301_ADDR = 0x02180000,
137
- FSL_IMX6UL_AIPS2_CFG_ADDR = 0x0217C000,
138
- FSL_IMX6UL_CAAM_ADDR = 0x02140000,
139
- FSL_IMX6UL_A7MPCORE_DAP_ADDR = 0x02100000,
140
141
- /* AIPS-1 */
142
+ FSL_IMX6UL_SIM1_ADDR = 0x0218C000,
143
+ FSL_IMX6UL_SIMn_SIZE = (16 * KiB),
144
+
145
+ FSL_IMX6UL_ENET1_ADDR = 0x02188000,
146
+
147
+ FSL_IMX6UL_USBO2_USBMISC_ADDR = 0x02184800,
148
+ FSL_IMX6UL_USBO2_USB1_ADDR = 0x02184000,
149
+ FSL_IMX6UL_USBO2_USB2_ADDR = 0x02184200,
150
+
151
+ FSL_IMX6UL_USBO2_PL301_ADDR = 0x02180000,
152
+ FSL_IMX6UL_USBO2_PL301_SIZE = (16 * KiB),
153
+
154
+ FSL_IMX6UL_AIPS2_CFG_ADDR = 0x0217C000,
155
+ FSL_IMX6UL_AIPS2_CFG_SIZE = 0x100,
156
+
157
+ FSL_IMX6UL_CAAM_ADDR = 0x02140000,
158
+ FSL_IMX6UL_CAAM_SIZE = (16 * KiB),
159
+
160
+ FSL_IMX6UL_A7MPCORE_DAP_ADDR = 0x02100000,
161
+ FSL_IMX6UL_A7MPCORE_DAP_SIZE = (4 * KiB),
162
+ /* AIPS-2 End */
163
+
164
+ /* AIPS-1 Begin */
165
FSL_IMX6UL_PWM8_ADDR = 0x020FC000,
166
FSL_IMX6UL_PWM7_ADDR = 0x020F8000,
167
FSL_IMX6UL_PWM6_ADDR = 0x020F4000,
168
FSL_IMX6UL_PWM5_ADDR = 0x020F0000,
169
+
170
FSL_IMX6UL_SDMA_ADDR = 0x020EC000,
171
+ FSL_IMX6UL_SDMA_SIZE = 0x300,
172
+
173
FSL_IMX6UL_GPT2_ADDR = 0x020E8000,
174
+
175
FSL_IMX6UL_IOMUXC_GPR_ADDR = 0x020E4000,
176
+ FSL_IMX6UL_IOMUXC_GPR_SIZE = 0x40,
177
+
178
FSL_IMX6UL_IOMUXC_ADDR = 0x020E0000,
179
+ FSL_IMX6UL_IOMUXC_SIZE = 0x700,
180
+
181
FSL_IMX6UL_GPC_ADDR = 0x020DC000,
182
+
183
FSL_IMX6UL_SRC_ADDR = 0x020D8000,
184
+
185
FSL_IMX6UL_EPIT2_ADDR = 0x020D4000,
186
FSL_IMX6UL_EPIT1_ADDR = 0x020D0000,
187
+
188
FSL_IMX6UL_SNVS_HP_ADDR = 0x020CC000,
189
+
190
FSL_IMX6UL_USBPHY2_ADDR = 0x020CA000,
191
- FSL_IMX6UL_USBPHY2_SIZE = (4 * 1024),
192
FSL_IMX6UL_USBPHY1_ADDR = 0x020C9000,
193
- FSL_IMX6UL_USBPHY1_SIZE = (4 * 1024),
194
+
195
FSL_IMX6UL_ANALOG_ADDR = 0x020C8000,
196
+ FSL_IMX6UL_ANALOG_SIZE = 0x300,
197
+
198
FSL_IMX6UL_CCM_ADDR = 0x020C4000,
199
+
200
FSL_IMX6UL_WDOG2_ADDR = 0x020C0000,
201
FSL_IMX6UL_WDOG1_ADDR = 0x020BC000,
202
+
203
FSL_IMX6UL_KPP_ADDR = 0x020B8000,
204
+ FSL_IMX6UL_KPP_SIZE = 0x10,
205
+
206
FSL_IMX6UL_ENET2_ADDR = 0x020B4000,
207
+
208
FSL_IMX6UL_SNVS_LP_ADDR = 0x020B0000,
209
+ FSL_IMX6UL_SNVS_LP_SIZE = (16 * KiB),
210
+
211
FSL_IMX6UL_GPIO5_ADDR = 0x020AC000,
212
FSL_IMX6UL_GPIO4_ADDR = 0x020A8000,
213
FSL_IMX6UL_GPIO3_ADDR = 0x020A4000,
214
FSL_IMX6UL_GPIO2_ADDR = 0x020A0000,
215
FSL_IMX6UL_GPIO1_ADDR = 0x0209C000,
216
+
217
FSL_IMX6UL_GPT1_ADDR = 0x02098000,
218
+
219
FSL_IMX6UL_CAN2_ADDR = 0x02094000,
220
FSL_IMX6UL_CAN1_ADDR = 0x02090000,
221
+ FSL_IMX6UL_CANn_SIZE = (4 * KiB),
222
+
223
FSL_IMX6UL_PWM4_ADDR = 0x0208C000,
224
FSL_IMX6UL_PWM3_ADDR = 0x02088000,
225
FSL_IMX6UL_PWM2_ADDR = 0x02084000,
226
FSL_IMX6UL_PWM1_ADDR = 0x02080000,
227
+ FSL_IMX6UL_PWMn_SIZE = 0x20,
228
+
229
FSL_IMX6UL_AIPS1_CFG_ADDR = 0x0207C000,
230
+ FSL_IMX6UL_AIPS1_CFG_SIZE = (16 * KiB),
231
+
232
FSL_IMX6UL_BEE_ADDR = 0x02044000,
233
+ FSL_IMX6UL_BEE_SIZE = (16 * KiB),
234
+
235
FSL_IMX6UL_TOUCH_CTRL_ADDR = 0x02040000,
236
+ FSL_IMX6UL_TOUCH_CTRL_SIZE = 0x100,
237
+
238
FSL_IMX6UL_SPBA_ADDR = 0x0203C000,
239
+ FSL_IMX6UL_SPBA_SIZE = 0x100,
240
+
241
FSL_IMX6UL_ASRC_ADDR = 0x02034000,
242
+ FSL_IMX6UL_ASRC_SIZE = 0x100,
243
+
244
FSL_IMX6UL_SAI3_ADDR = 0x02030000,
245
FSL_IMX6UL_SAI2_ADDR = 0x0202C000,
246
FSL_IMX6UL_SAI1_ADDR = 0x02028000,
247
+ FSL_IMX6UL_SAIn_SIZE = 0x200,
248
+
249
FSL_IMX6UL_UART8_ADDR = 0x02024000,
250
FSL_IMX6UL_UART1_ADDR = 0x02020000,
251
FSL_IMX6UL_UART7_ADDR = 0x02018000,
252
+
253
FSL_IMX6UL_ECSPI4_ADDR = 0x02014000,
254
FSL_IMX6UL_ECSPI3_ADDR = 0x02010000,
255
FSL_IMX6UL_ECSPI2_ADDR = 0x0200C000,
256
FSL_IMX6UL_ECSPI1_ADDR = 0x02008000,
257
+
258
FSL_IMX6UL_SPDIF_ADDR = 0x02004000,
259
+ FSL_IMX6UL_SPDIF_SIZE = 0x100,
260
+ /* AIPS-1 End */
261
+
262
+ FSL_IMX6UL_BCH_ADDR = 0x01808000,
263
+ FSL_IMX6UL_BCH_SIZE = 0x200,
264
+
265
+ FSL_IMX6UL_GPMI_ADDR = 0x01806000,
266
+ FSL_IMX6UL_GPMI_SIZE = 0x200,
267
268
FSL_IMX6UL_APBH_DMA_ADDR = 0x01804000,
269
- FSL_IMX6UL_APBH_DMA_SIZE = (32 * 1024),
270
+ FSL_IMX6UL_APBH_DMA_SIZE = (4 * KiB),
271
272
FSL_IMX6UL_A7MPCORE_ADDR = 0x00A00000,
273
274
FSL_IMX6UL_OCRAM_ALIAS_ADDR = 0x00920000,
275
- FSL_IMX6UL_OCRAM_ALIAS_SIZE = 0x00060000,
276
+ FSL_IMX6UL_OCRAM_ALIAS_SIZE = (384 * KiB),
277
+
278
FSL_IMX6UL_OCRAM_MEM_ADDR = 0x00900000,
279
- FSL_IMX6UL_OCRAM_MEM_SIZE = 0x00020000,
280
+ FSL_IMX6UL_OCRAM_MEM_SIZE = (128 * KiB),
281
+
282
FSL_IMX6UL_CAAM_MEM_ADDR = 0x00100000,
283
- FSL_IMX6UL_CAAM_MEM_SIZE = 0x00008000,
284
+ FSL_IMX6UL_CAAM_MEM_SIZE = (32 * KiB),
285
+
286
FSL_IMX6UL_ROM_ADDR = 0x00000000,
287
- FSL_IMX6UL_ROM_SIZE = 0x00018000,
288
+ FSL_IMX6UL_ROM_SIZE = (96 * KiB),
289
};
290
291
enum FslIMX6ULIRQs {
292
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
56
index XXXXXXX..XXXXXXX 100644
293
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/acpi/nvdimm.c
294
--- a/hw/arm/fsl-imx6ul.c
58
+++ b/hw/acpi/nvdimm.c
295
+++ b/hw/arm/fsl-imx6ul.c
59
@@ -XXX,XX +XXX,XX @@
296
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
60
*/
297
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
61
298
62
#include "qemu/osdep.h"
299
/*
63
+#include "qemu/uuid.h"
300
- * GPIOs 1 to 5
64
#include "hw/acpi/acpi.h"
301
+ * GPIOs
65
#include "hw/acpi/aml-build.h"
302
*/
66
#include "hw/acpi/bios-linker-loader.h"
303
for (i = 0; i < FSL_IMX6UL_NUM_GPIOS; i++) {
67
@@ -XXX,XX +XXX,XX @@
304
snprintf(name, NAME_SIZE, "gpio%d", i);
68
#include "hw/mem/nvdimm.h"
305
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
69
#include "qemu/nvdimm-utils.h"
306
}
70
307
71
-#define NVDIMM_UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7) \
308
/*
72
- { (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
309
- * GPT 1, 2
73
- (b) & 0xff, ((b) >> 8) & 0xff, (c) & 0xff, ((c) >> 8) & 0xff, \
310
+ * GPTs
74
- (d0), (d1), (d2), (d3), (d4), (d5), (d6), (d7) }
311
*/
312
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
313
snprintf(name, NAME_SIZE, "gpt%d", i);
314
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
315
}
316
317
/*
318
- * EPIT 1, 2
319
+ * EPITs
320
*/
321
for (i = 0; i < FSL_IMX6UL_NUM_EPITS; i++) {
322
snprintf(name, NAME_SIZE, "epit%d", i + 1);
323
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
324
}
325
326
/*
327
- * eCSPI
328
+ * eCSPIs
329
*/
330
for (i = 0; i < FSL_IMX6UL_NUM_ECSPIS; i++) {
331
snprintf(name, NAME_SIZE, "spi%d", i + 1);
332
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
333
}
334
335
/*
336
- * I2C
337
+ * I2Cs
338
*/
339
for (i = 0; i < FSL_IMX6UL_NUM_I2CS; i++) {
340
snprintf(name, NAME_SIZE, "i2c%d", i + 1);
341
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
342
}
343
344
/*
345
- * UART
346
+ * UARTs
347
*/
348
for (i = 0; i < FSL_IMX6UL_NUM_UARTS; i++) {
349
snprintf(name, NAME_SIZE, "uart%d", i);
350
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
351
}
352
353
/*
354
- * Ethernet
355
+ * Ethernets
356
*/
357
for (i = 0; i < FSL_IMX6UL_NUM_ETHS; i++) {
358
snprintf(name, NAME_SIZE, "eth%d", i);
359
object_initialize_child(obj, name, &s->eth[i], TYPE_IMX_ENET);
360
}
361
362
- /* USB */
363
+ /*
364
+ * USB PHYs
365
+ */
366
for (i = 0; i < FSL_IMX6UL_NUM_USB_PHYS; i++) {
367
snprintf(name, NAME_SIZE, "usbphy%d", i);
368
object_initialize_child(obj, name, &s->usbphy[i], TYPE_IMX_USBPHY);
369
}
370
+
371
+ /*
372
+ * USBs
373
+ */
374
for (i = 0; i < FSL_IMX6UL_NUM_USBS; i++) {
375
snprintf(name, NAME_SIZE, "usb%d", i);
376
object_initialize_child(obj, name, &s->usb[i], TYPE_CHIPIDEA);
377
}
378
379
/*
380
- * SDHCI
381
+ * SDHCIs
382
*/
383
for (i = 0; i < FSL_IMX6UL_NUM_USDHCS; i++) {
384
snprintf(name, NAME_SIZE, "usdhc%d", i);
385
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
386
}
387
388
/*
389
- * Watchdog
390
+ * Watchdogs
391
*/
392
for (i = 0; i < FSL_IMX6UL_NUM_WDTS; i++) {
393
snprintf(name, NAME_SIZE, "wdt%d", i);
394
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
395
* A7MPCORE DAP
396
*/
397
create_unimplemented_device("a7mpcore-dap", FSL_IMX6UL_A7MPCORE_DAP_ADDR,
398
- 0x100000);
399
+ FSL_IMX6UL_A7MPCORE_DAP_SIZE);
400
401
/*
402
- * GPT 1, 2
403
+ * GPTs
404
*/
405
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
406
static const hwaddr FSL_IMX6UL_GPTn_ADDR[FSL_IMX6UL_NUM_GPTS] = {
407
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
408
}
409
410
/*
411
- * EPIT 1, 2
412
+ * EPITs
413
*/
414
for (i = 0; i < FSL_IMX6UL_NUM_EPITS; i++) {
415
static const hwaddr FSL_IMX6UL_EPITn_ADDR[FSL_IMX6UL_NUM_EPITS] = {
416
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
417
}
418
419
/*
420
- * GPIO
421
+ * GPIOs
422
*/
423
for (i = 0; i < FSL_IMX6UL_NUM_GPIOS; i++) {
424
static const hwaddr FSL_IMX6UL_GPIOn_ADDR[FSL_IMX6UL_NUM_GPIOS] = {
425
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
426
}
427
428
/*
429
- * IOMUXC and IOMUXC_GPR
430
+ * IOMUXC
431
*/
432
- for (i = 0; i < 1; i++) {
433
- static const hwaddr FSL_IMX6UL_IOMUXCn_ADDR[FSL_IMX6UL_NUM_IOMUXCS] = {
434
- FSL_IMX6UL_IOMUXC_ADDR,
435
- FSL_IMX6UL_IOMUXC_GPR_ADDR,
436
- };
75
-
437
-
76
/*
438
- snprintf(name, NAME_SIZE, "iomuxc%d", i);
77
* define Byte Addressable Persistent Memory (PM) Region according to
439
- create_unimplemented_device(name, FSL_IMX6UL_IOMUXCn_ADDR[i], 0x4000);
78
* ACPI 6.0: 5.2.25.1 System Physical Address Range Structure.
440
- }
79
*/
441
+ create_unimplemented_device("iomuxc", FSL_IMX6UL_IOMUXC_ADDR,
80
static const uint8_t nvdimm_nfit_spa_uuid[] =
442
+ FSL_IMX6UL_IOMUXC_SIZE);
81
- NVDIMM_UUID_LE(0x66f0d379, 0xb4f3, 0x4074, 0xac, 0x43, 0x0d, 0x33,
443
+ create_unimplemented_device("iomuxc_gpr", FSL_IMX6UL_IOMUXC_GPR_ADDR,
82
- 0x18, 0xb7, 0x8c, 0xdb);
444
+ FSL_IMX6UL_IOMUXC_GPR_SIZE);
83
+ UUID_LE(0x66f0d379, 0xb4f3, 0x4074, 0xac, 0x43, 0x0d, 0x33,
445
84
+ 0x18, 0xb7, 0x8c, 0xdb);
446
/*
85
447
* CCM
86
/*
448
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
87
* NVDIMM Firmware Interface Table
449
sysbus_realize(SYS_BUS_DEVICE(&s->gpcv2), &error_abort);
450
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpcv2), 0, FSL_IMX6UL_GPC_ADDR);
451
452
- /* Initialize all ECSPI */
453
+ /*
454
+ * ECSPIs
455
+ */
456
for (i = 0; i < FSL_IMX6UL_NUM_ECSPIS; i++) {
457
static const hwaddr FSL_IMX6UL_SPIn_ADDR[FSL_IMX6UL_NUM_ECSPIS] = {
458
FSL_IMX6UL_ECSPI1_ADDR,
459
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
460
}
461
462
/*
463
- * I2C
464
+ * I2Cs
465
*/
466
for (i = 0; i < FSL_IMX6UL_NUM_I2CS; i++) {
467
static const hwaddr FSL_IMX6UL_I2Cn_ADDR[FSL_IMX6UL_NUM_I2CS] = {
468
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
469
}
470
471
/*
472
- * UART
473
+ * UARTs
474
*/
475
for (i = 0; i < FSL_IMX6UL_NUM_UARTS; i++) {
476
static const hwaddr FSL_IMX6UL_UARTn_ADDR[FSL_IMX6UL_NUM_UARTS] = {
477
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
478
}
479
480
/*
481
- * Ethernet
482
+ * Ethernets
483
*
484
* We must use two loops since phy_connected affects the other interface
485
* and we have to set all properties before calling sysbus_realize().
486
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
487
FSL_IMX6UL_ENETn_TIMER_IRQ[i]));
488
}
489
490
- /* USB */
491
+ /*
492
+ * USB PHYs
493
+ */
494
for (i = 0; i < FSL_IMX6UL_NUM_USB_PHYS; i++) {
495
+ static const hwaddr
496
+ FSL_IMX6UL_USB_PHYn_ADDR[FSL_IMX6UL_NUM_USB_PHYS] = {
497
+ FSL_IMX6UL_USBPHY1_ADDR,
498
+ FSL_IMX6UL_USBPHY2_ADDR,
499
+ };
500
+
501
sysbus_realize(SYS_BUS_DEVICE(&s->usbphy[i]), &error_abort);
502
sysbus_mmio_map(SYS_BUS_DEVICE(&s->usbphy[i]), 0,
503
- FSL_IMX6UL_USBPHY1_ADDR + i * 0x1000);
504
+ FSL_IMX6UL_USB_PHYn_ADDR[i]);
505
}
506
507
+ /*
508
+ * USBs
509
+ */
510
for (i = 0; i < FSL_IMX6UL_NUM_USBS; i++) {
511
+ static const hwaddr FSL_IMX6UL_USB02_USBn_ADDR[FSL_IMX6UL_NUM_USBS] = {
512
+ FSL_IMX6UL_USBO2_USB1_ADDR,
513
+ FSL_IMX6UL_USBO2_USB2_ADDR,
514
+ };
515
+
516
static const int FSL_IMX6UL_USBn_IRQ[] = {
517
FSL_IMX6UL_USB1_IRQ,
518
FSL_IMX6UL_USB2_IRQ,
519
};
520
+
521
sysbus_realize(SYS_BUS_DEVICE(&s->usb[i]), &error_abort);
522
sysbus_mmio_map(SYS_BUS_DEVICE(&s->usb[i]), 0,
523
- FSL_IMX6UL_USBO2_USB_ADDR + i * 0x200);
524
+ FSL_IMX6UL_USB02_USBn_ADDR[i]);
525
sysbus_connect_irq(SYS_BUS_DEVICE(&s->usb[i]), 0,
526
qdev_get_gpio_in(DEVICE(&s->a7mpcore),
527
FSL_IMX6UL_USBn_IRQ[i]));
528
}
529
530
/*
531
- * USDHC
532
+ * USDHCs
533
*/
534
for (i = 0; i < FSL_IMX6UL_NUM_USDHCS; i++) {
535
static const hwaddr FSL_IMX6UL_USDHCn_ADDR[FSL_IMX6UL_NUM_USDHCS] = {
536
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
537
sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX6UL_SNVS_HP_ADDR);
538
539
/*
540
- * Watchdog
541
+ * Watchdogs
542
*/
543
for (i = 0; i < FSL_IMX6UL_NUM_WDTS; i++) {
544
static const hwaddr FSL_IMX6UL_WDOGn_ADDR[FSL_IMX6UL_NUM_WDTS] = {
545
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
546
FSL_IMX6UL_WDOG2_ADDR,
547
FSL_IMX6UL_WDOG3_ADDR,
548
};
549
+
550
static const int FSL_IMX6UL_WDOGn_IRQ[FSL_IMX6UL_NUM_WDTS] = {
551
FSL_IMX6UL_WDOG1_IRQ,
552
FSL_IMX6UL_WDOG2_IRQ,
553
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
554
/*
555
* SDMA
556
*/
557
- create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000);
558
+ create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR,
559
+ FSL_IMX6UL_SDMA_SIZE);
560
561
/*
562
- * SAI (Audio SSI (Synchronous Serial Interface))
563
+ * SAIs (Audio SSI (Synchronous Serial Interface))
564
*/
565
- create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
566
- create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
567
- create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
568
+ for (i = 0; i < FSL_IMX6UL_NUM_SAIS; i++) {
569
+ static const hwaddr FSL_IMX6UL_SAIn_ADDR[FSL_IMX6UL_NUM_SAIS] = {
570
+ FSL_IMX6UL_SAI1_ADDR,
571
+ FSL_IMX6UL_SAI2_ADDR,
572
+ FSL_IMX6UL_SAI3_ADDR,
573
+ };
574
+
575
+ snprintf(name, NAME_SIZE, "sai%d", i);
576
+ create_unimplemented_device(name, FSL_IMX6UL_SAIn_ADDR[i],
577
+ FSL_IMX6UL_SAIn_SIZE);
578
+ }
579
580
/*
581
- * PWM
582
+ * PWMs
583
*/
584
- create_unimplemented_device("pwm1", FSL_IMX6UL_PWM1_ADDR, 0x4000);
585
- create_unimplemented_device("pwm2", FSL_IMX6UL_PWM2_ADDR, 0x4000);
586
- create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000);
587
- create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000);
588
+ for (i = 0; i < FSL_IMX6UL_NUM_PWMS; i++) {
589
+ static const hwaddr FSL_IMX6UL_PWMn_ADDR[FSL_IMX6UL_NUM_PWMS] = {
590
+ FSL_IMX6UL_PWM1_ADDR,
591
+ FSL_IMX6UL_PWM2_ADDR,
592
+ FSL_IMX6UL_PWM3_ADDR,
593
+ FSL_IMX6UL_PWM4_ADDR,
594
+ };
595
+
596
+ snprintf(name, NAME_SIZE, "pwm%d", i);
597
+ create_unimplemented_device(name, FSL_IMX6UL_PWMn_ADDR[i],
598
+ FSL_IMX6UL_PWMn_SIZE);
599
+ }
600
601
/*
602
* Audio ASRC (asynchronous sample rate converter)
603
*/
604
- create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
605
+ create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR,
606
+ FSL_IMX6UL_ASRC_SIZE);
607
608
/*
609
- * CAN
610
+ * CANs
611
*/
612
- create_unimplemented_device("can1", FSL_IMX6UL_CAN1_ADDR, 0x4000);
613
- create_unimplemented_device("can2", FSL_IMX6UL_CAN2_ADDR, 0x4000);
614
+ for (i = 0; i < FSL_IMX6UL_NUM_CANS; i++) {
615
+ static const hwaddr FSL_IMX6UL_CANn_ADDR[FSL_IMX6UL_NUM_CANS] = {
616
+ FSL_IMX6UL_CAN1_ADDR,
617
+ FSL_IMX6UL_CAN2_ADDR,
618
+ };
619
+
620
+ snprintf(name, NAME_SIZE, "can%d", i);
621
+ create_unimplemented_device(name, FSL_IMX6UL_CANn_ADDR[i],
622
+ FSL_IMX6UL_CANn_SIZE);
623
+ }
624
625
/*
626
* APHB_DMA
627
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
628
};
629
630
snprintf(name, NAME_SIZE, "adc%d", i);
631
- create_unimplemented_device(name, FSL_IMX6UL_ADCn_ADDR[i], 0x4000);
632
+ create_unimplemented_device(name, FSL_IMX6UL_ADCn_ADDR[i],
633
+ FSL_IMX6UL_ADCn_SIZE);
634
}
635
636
/*
637
* LCD
638
*/
639
- create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR, 0x4000);
640
+ create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR,
641
+ FSL_IMX6UL_LCDIF_SIZE);
642
643
/*
644
* ROM memory
88
--
645
--
89
2.20.1
646
2.34.1
90
91
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
kvm_arch_on_sigbus_vcpu() error injection uses source_id as an
3
* Add TZASC as an unimplemented device.
4
index into etc/hardware_errors to find the Error Status Data
4
- Allow bare-metal applications to access this (unimplemented) device
5
Block entry corresponding to the error source. So supported source_id
5
* Add CSU as an unimplemented device.
6
values should be assigned here and not changed afterwards, to
6
- Allow bare-metal applications to access this (unimplemented) device
7
make sure that the guest writes the error into the expected Error Status
7
* Add 4 missing PWM devices
8
Data Block.
9
8
10
Before QEMU writes a new error to the ACPI table, it checks whether the
9
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
11
previous error has been acknowledged. If it has not been, the new
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
error is ignored and not recorded. For the error section
11
Message-id: 59e4dc56e14eccfefd379275ec19048dff9c10b3.1692964892.git.jcd@tribudubois.net
13
type, QEMU reports it as a memory section error.
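
As a rough sketch (assuming the etc/hardware_errors layout built by
build_ghes_error_table earlier in the series: one 64-bit error status block
address per source, followed by one 64-bit Read Ack Register per source;
"ghes_addr" below stands for the start address read from ags->ghes_addr_le):

    /* Illustration only: offsets consulted by acpi_ghes_record_errors() */
    uint64_t block_addr_entry   = ghes_addr + source_id * sizeof(uint64_t);
    uint64_t read_ack_reg_entry = block_addr_entry +
                                  ACPI_GHES_ERROR_SOURCE_COUNT * sizeof(uint64_t);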
14
15
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
16
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
17
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
18
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
19
Message-id: 20200512030609.19593-9-gengdongjiu@huawei.com
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
---
13
---
22
include/hw/acpi/ghes.h | 1 +
14
include/hw/arm/fsl-imx6ul.h | 2 +-
23
hw/acpi/ghes.c | 219 +++++++++++++++++++++++++++++++++++++++++
15
hw/arm/fsl-imx6ul.c | 16 ++++++++++++++++
24
2 files changed, 220 insertions(+)
16
2 files changed, 17 insertions(+), 1 deletion(-)
25
17
26
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
18
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
27
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/acpi/ghes.h
20
--- a/include/hw/arm/fsl-imx6ul.h
29
+++ b/include/hw/acpi/ghes.h
21
+++ b/include/hw/arm/fsl-imx6ul.h
30
@@ -XXX,XX +XXX,XX @@ void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
22
@@ -XXX,XX +XXX,XX @@ enum FslIMX6ULConfiguration {
31
void acpi_build_hest(GArray *table_data, BIOSLinker *linker);
23
FSL_IMX6UL_NUM_USBS = 2,
32
void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
24
FSL_IMX6UL_NUM_SAIS = 3,
33
GArray *hardware_errors);
25
FSL_IMX6UL_NUM_CANS = 2,
34
+int acpi_ghes_record_errors(uint8_t notify, uint64_t error_physical_addr);
26
- FSL_IMX6UL_NUM_PWMS = 4,
35
#endif
27
+ FSL_IMX6UL_NUM_PWMS = 8,
36
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
28
};
29
30
struct FslIMX6ULState {
31
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
37
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/acpi/ghes.c
33
--- a/hw/arm/fsl-imx6ul.c
39
+++ b/hw/acpi/ghes.c
34
+++ b/hw/arm/fsl-imx6ul.c
40
@@ -XXX,XX +XXX,XX @@
35
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
41
#include "qemu/error-report.h"
36
FSL_IMX6UL_PWM2_ADDR,
42
#include "hw/acpi/generic_event_device.h"
37
FSL_IMX6UL_PWM3_ADDR,
43
#include "hw/nvram/fw_cfg.h"
38
FSL_IMX6UL_PWM4_ADDR,
44
+#include "qemu/uuid.h"
39
+ FSL_IMX6UL_PWM5_ADDR,
45
40
+ FSL_IMX6UL_PWM6_ADDR,
46
#define ACPI_GHES_ERRORS_FW_CFG_FILE "etc/hardware_errors"
41
+ FSL_IMX6UL_PWM7_ADDR,
47
#define ACPI_GHES_DATA_ADDR_FW_CFG_FILE "etc/hardware_errors_addr"
42
+ FSL_IMX6UL_PWM8_ADDR,
48
@@ -XXX,XX +XXX,XX @@
43
};
49
/* Address offset in Generic Address Structure(GAS) */
44
50
#define GAS_ADDR_OFFSET 4
45
snprintf(name, NAME_SIZE, "pwm%d", i);
51
46
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
52
+/*
47
create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR,
53
+ * The total size of Generic Error Data Entry
48
FSL_IMX6UL_LCDIF_SIZE);
54
+ * ACPI 6.1/6.2: 18.3.2.7.1 Generic Error Data,
49
55
+ * Table 18-343 Generic Error Data Entry
56
+ */
57
+#define ACPI_GHES_DATA_LENGTH 72
58
+
59
+/* The memory section CPER size, UEFI 2.6: N.2.5 Memory Error Section */
60
+#define ACPI_GHES_MEM_CPER_LENGTH 80
61
+
62
+/* Masks for block_status flags */
63
+#define ACPI_GEBS_UNCORRECTABLE 1
64
+
65
+/*
66
+ * Total size for Generic Error Status Block except Generic Error Data Entries
67
+ * ACPI 6.2: 18.3.2.7.1 Generic Error Data,
68
+ * Table 18-380 Generic Error Status Block
69
+ */
70
+#define ACPI_GHES_GESB_SIZE 20
71
+
72
+/*
73
+ * Values for error_severity field
74
+ */
75
+enum AcpiGenericErrorSeverity {
76
+ ACPI_CPER_SEV_RECOVERABLE = 0,
77
+ ACPI_CPER_SEV_FATAL = 1,
78
+ ACPI_CPER_SEV_CORRECTED = 2,
79
+ ACPI_CPER_SEV_NONE = 3,
80
+};
81
+
82
/*
83
* Hardware Error Notification
84
* ACPI 4.0: 17.3.2.7 Hardware Error Notification
85
@@ -XXX,XX +XXX,XX @@ static void build_ghes_hw_error_notification(GArray *table, const uint8_t type)
86
build_append_int_noprefix(table, 0, 4);
87
}
88
89
+/*
90
+ * Generic Error Data Entry
91
+ * ACPI 6.1: 18.3.2.7.1 Generic Error Data
92
+ */
93
+static void acpi_ghes_generic_error_data(GArray *table,
94
+ const uint8_t *section_type, uint32_t error_severity,
95
+ uint8_t validation_bits, uint8_t flags,
96
+ uint32_t error_data_length, QemuUUID fru_id,
97
+ uint64_t time_stamp)
98
+{
99
+ const uint8_t fru_text[20] = {0};
100
+
101
+ /* Section Type */
102
+ g_array_append_vals(table, section_type, 16);
103
+
104
+ /* Error Severity */
105
+ build_append_int_noprefix(table, error_severity, 4);
106
+ /* Revision */
107
+ build_append_int_noprefix(table, 0x300, 2);
108
+ /* Validation Bits */
109
+ build_append_int_noprefix(table, validation_bits, 1);
110
+ /* Flags */
111
+ build_append_int_noprefix(table, flags, 1);
112
+ /* Error Data Length */
113
+ build_append_int_noprefix(table, error_data_length, 4);
114
+
115
+ /* FRU Id */
116
+ g_array_append_vals(table, fru_id.data, ARRAY_SIZE(fru_id.data));
117
+
118
+ /* FRU Text */
119
+ g_array_append_vals(table, fru_text, sizeof(fru_text));
120
+
121
+ /* Timestamp */
122
+ build_append_int_noprefix(table, time_stamp, 8);
123
+}
124
+
125
+/*
126
+ * Generic Error Status Block
127
+ * ACPI 6.1: 18.3.2.7.1 Generic Error Data
128
+ */
129
+static void acpi_ghes_generic_error_status(GArray *table, uint32_t block_status,
130
+ uint32_t raw_data_offset, uint32_t raw_data_length,
131
+ uint32_t data_length, uint32_t error_severity)
132
+{
133
+ /* Block Status */
134
+ build_append_int_noprefix(table, block_status, 4);
135
+ /* Raw Data Offset */
136
+ build_append_int_noprefix(table, raw_data_offset, 4);
137
+ /* Raw Data Length */
138
+ build_append_int_noprefix(table, raw_data_length, 4);
139
+ /* Data Length */
140
+ build_append_int_noprefix(table, data_length, 4);
141
+ /* Error Severity */
142
+ build_append_int_noprefix(table, error_severity, 4);
143
+}
144
+
145
+/* UEFI 2.6: N.2.5 Memory Error Section */
146
+static void acpi_ghes_build_append_mem_cper(GArray *table,
147
+ uint64_t error_physical_addr)
148
+{
149
+ /*
50
+ /*
150
+ * Memory Error Record
51
+ * CSU
151
+ */
52
+ */
152
+
53
+ create_unimplemented_device("csu", FSL_IMX6UL_CSU_ADDR,
153
+ /* Validation Bits */
54
+ FSL_IMX6UL_CSU_SIZE);
154
+ build_append_int_noprefix(table,
155
+ (1ULL << 14) | /* Type Valid */
156
+ (1ULL << 1) /* Physical Address Valid */,
157
+ 8);
158
+ /* Error Status */
159
+ build_append_int_noprefix(table, 0, 8);
160
+ /* Physical Address */
161
+ build_append_int_noprefix(table, error_physical_addr, 8);
162
+ /* Skip all the detailed information normally found in such a record */
163
+ build_append_int_noprefix(table, 0, 48);
164
+ /* Memory Error Type */
165
+ build_append_int_noprefix(table, 0 /* Unknown error */, 1);
166
+ /* Skip all the detailed information normally found in such a record */
167
+ build_append_int_noprefix(table, 0, 7);
168
+}
169
+
170
+static int acpi_ghes_record_mem_error(uint64_t error_block_address,
171
+ uint64_t error_physical_addr)
172
+{
173
+ GArray *block;
174
+
175
+ /* Memory Error Section Type */
176
+ const uint8_t uefi_cper_mem_sec[] =
177
+ UUID_LE(0xA5BC1114, 0x6F64, 0x4EDE, 0xB8, 0x63, 0x3E, 0x83, \
178
+ 0xED, 0x7C, 0x83, 0xB1);
179
+
180
+ /* invalid fru id: ACPI 4.0: 17.3.2.6.1 Generic Error Data,
181
+ * Table 17-13 Generic Error Data Entry
182
+ */
183
+ QemuUUID fru_id = {};
184
+ uint32_t data_length;
185
+
186
+ block = g_array_new(false, true /* clear */, 1);
187
+
188
+ /* This is the length when adding a new generic error data entry */
189
+ data_length = ACPI_GHES_DATA_LENGTH + ACPI_GHES_MEM_CPER_LENGTH;
190
+
55
+
191
+ /*
56
+ /*
192
+ * Check whether it will run out of the preallocated memory if adding a new
57
+ * TZASC
193
+ * generic error data entry
194
+ */
58
+ */
195
+ if ((data_length + ACPI_GHES_GESB_SIZE) > ACPI_GHES_MAX_RAW_DATA_LENGTH) {
59
+ create_unimplemented_device("tzasc", FSL_IMX6UL_TZASC_ADDR,
196
+ error_report("Not enough memory to record new CPER!!!");
60
+ FSL_IMX6UL_TZASC_SIZE);
197
+ g_array_free(block, true);
198
+ return -1;
199
+ }
200
+
61
+
201
+ /* Build the new generic error status block header */
62
/*
202
+ acpi_ghes_generic_error_status(block, ACPI_GEBS_UNCORRECTABLE,
63
* ROM memory
203
+ 0, 0, data_length, ACPI_CPER_SEV_RECOVERABLE);
64
*/
204
+
205
+ /* Build this new generic error data entry header */
206
+ acpi_ghes_generic_error_data(block, uefi_cper_mem_sec,
207
+ ACPI_CPER_SEV_RECOVERABLE, 0, 0,
208
+ ACPI_GHES_MEM_CPER_LENGTH, fru_id, 0);
209
+
210
+ /* Build the memory section CPER for above new generic error data entry */
211
+ acpi_ghes_build_append_mem_cper(block, error_physical_addr);
212
+
213
+ /* Write the generic error data entry into guest memory */
214
+ cpu_physical_memory_write(error_block_address, block->data, block->len);
215
+
216
+ g_array_free(block, true);
217
+
218
+ return 0;
219
+}
220
+
221
/*
222
* Build table for the hardware error fw_cfg blob.
223
* Initialize "etc/hardware_errors" and "etc/hardware_errors_addr" fw_cfg blobs.
224
@@ -XXX,XX +XXX,XX @@ void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
225
fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
226
NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
227
}
228
+
229
+int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
230
+{
231
+ uint64_t error_block_addr, read_ack_register_addr, read_ack_register = 0;
232
+ uint64_t start_addr;
233
+ bool ret = -1;
234
+ AcpiGedState *acpi_ged_state;
235
+ AcpiGhesState *ags;
236
+
237
+ assert(source_id < ACPI_HEST_SRC_ID_RESERVED);
238
+
239
+ acpi_ged_state = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED,
240
+ NULL));
241
+ g_assert(acpi_ged_state);
242
+ ags = &acpi_ged_state->ghes_state;
243
+
244
+ start_addr = le64_to_cpu(ags->ghes_addr_le);
245
+
246
+ if (physical_address) {
247
+
248
+ if (source_id < ACPI_HEST_SRC_ID_RESERVED) {
249
+ start_addr += source_id * sizeof(uint64_t);
250
+ }
251
+
252
+ cpu_physical_memory_read(start_addr, &error_block_addr,
253
+ sizeof(error_block_addr));
254
+
255
+ error_block_addr = le64_to_cpu(error_block_addr);
256
+
257
+ read_ack_register_addr = start_addr +
258
+ ACPI_GHES_ERROR_SOURCE_COUNT * sizeof(uint64_t);
259
+
260
+ cpu_physical_memory_read(read_ack_register_addr,
261
+ &read_ack_register, sizeof(read_ack_register));
262
+
263
+ /* zero means OSPM does not acknowledge the error */
264
+ if (!read_ack_register) {
265
+ error_report("OSPM does not acknowledge previous error,"
266
+ " so can not record CPER for current error anymore");
267
+ } else if (error_block_addr) {
268
+ read_ack_register = cpu_to_le64(0);
269
+ /*
270
+ * Clear the Read Ack Register, OSPM will write it to 1 when
271
+ * it acknowledges this error.
272
+ */
273
+ cpu_physical_memory_write(read_ack_register_addr,
274
+ &read_ack_register, sizeof(uint64_t));
275
+
276
+ ret = acpi_ghes_record_mem_error(error_block_addr,
277
+ physical_address);
278
+ } else
279
+ error_report("can not find Generic Error Status Block");
280
+ }
281
+
282
+ return ret;
283
+}
284
--
65
--
285
2.20.1
66
2.34.1
286
67
287
68
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Pass a pointer directly to env->vfp.qc[0], rather than env.
3
* Add Addr and size definitions for all i.MX7 devices in the i.MX7 header file.
4
This will allow SVE2, which does not modify QC, to pass a
4
* Use those newly defined named constants whenever possible.
5
pointer to dummy storage.
5
* Standardize the way we init a family of unimplemented devices
6
- SAI
7
- PWM
8
- CAN
9
* Add/rework a few comments
6
10
7
Change the return type of inl_qrdml.h_s16 to match the
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
8
sense of the operation: signed.
12
Message-id: 59e195d33e4d486a8d131392acd46633c8c10ed7.1692964892.git.jcd@tribudubois.net
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200513163245.17915-14-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
---
15
target/arm/translate.c | 18 ++++++++---
16
include/hw/arm/fsl-imx7.h | 330 ++++++++++++++++++++++++++++----------
16
target/arm/vec_helper.c | 70 +++++++++++++++++++++++------------------
17
hw/arm/fsl-imx7.c | 130 ++++++++++-----
17
2 files changed, 54 insertions(+), 34 deletions(-)
18
2 files changed, 335 insertions(+), 125 deletions(-)
18
19
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
20
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
--- a/include/hw/arm/fsl-imx7.h
22
+++ b/target/arm/translate.c
23
+++ b/include/hw/arm/fsl-imx7.h
23
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
24
@@ -XXX,XX +XXX,XX @@
24
[NEON_2RM_VCVT_UF] = 0x4,
25
#include "hw/misc/imx7_ccm.h"
26
#include "hw/misc/imx7_snvs.h"
27
#include "hw/misc/imx7_gpr.h"
28
-#include "hw/misc/imx6_src.h"
29
#include "hw/watchdog/wdt_imx2.h"
30
#include "hw/gpio/imx_gpio.h"
31
#include "hw/char/imx_serial.h"
32
@@ -XXX,XX +XXX,XX @@
33
#include "hw/usb/chipidea.h"
34
#include "cpu.h"
35
#include "qom/object.h"
36
+#include "qemu/units.h"
37
38
#define TYPE_FSL_IMX7 "fsl-imx7"
39
OBJECT_DECLARE_SIMPLE_TYPE(FslIMX7State, FSL_IMX7)
40
@@ -XXX,XX +XXX,XX @@ enum FslIMX7Configuration {
41
FSL_IMX7_NUM_ECSPIS = 4,
42
FSL_IMX7_NUM_USBS = 3,
43
FSL_IMX7_NUM_ADCS = 2,
44
+ FSL_IMX7_NUM_SAIS = 3,
45
+ FSL_IMX7_NUM_CANS = 2,
46
+ FSL_IMX7_NUM_PWMS = 4,
25
};
47
};
26
48
27
+static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
49
struct FslIMX7State {
28
+ uint32_t opr_sz, uint32_t max_sz,
50
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
29
+ gen_helper_gvec_3_ptr *fn)
51
30
+{
52
enum FslIMX7MemoryMap {
31
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
53
FSL_IMX7_MMDC_ADDR = 0x80000000,
32
+
54
- FSL_IMX7_MMDC_SIZE = 2 * 1024 * 1024 * 1024UL,
33
+ tcg_gen_addi_ptr(qc_ptr, cpu_env, offsetof(CPUARMState, vfp.qc));
55
+ FSL_IMX7_MMDC_SIZE = (2 * GiB),
34
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
56
35
+ opr_sz, max_sz, 0, fn);
57
- FSL_IMX7_GPIO1_ADDR = 0x30200000,
36
+ tcg_temp_free_ptr(qc_ptr);
58
- FSL_IMX7_GPIO2_ADDR = 0x30210000,
37
+}
59
- FSL_IMX7_GPIO3_ADDR = 0x30220000,
38
+
60
- FSL_IMX7_GPIO4_ADDR = 0x30230000,
39
void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
61
- FSL_IMX7_GPIO5_ADDR = 0x30240000,
40
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
62
- FSL_IMX7_GPIO6_ADDR = 0x30250000,
41
{
63
- FSL_IMX7_GPIO7_ADDR = 0x30260000,
42
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
64
+ FSL_IMX7_QSPI1_MEM_ADDR = 0x60000000,
43
gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
65
+ FSL_IMX7_QSPI1_MEM_SIZE = (256 * MiB),
44
};
66
45
tcg_debug_assert(vece >= 1 && vece <= 2);
67
- FSL_IMX7_IOMUXC_LPSR_GPR_ADDR = 0x30270000,
46
- tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, cpu_env,
68
+ FSL_IMX7_PCIE1_MEM_ADDR = 0x40000000,
47
- opr_sz, max_sz, 0, fns[vece - 1]);
69
+ FSL_IMX7_PCIE1_MEM_SIZE = (256 * MiB),
48
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
70
71
- FSL_IMX7_WDOG1_ADDR = 0x30280000,
72
- FSL_IMX7_WDOG2_ADDR = 0x30290000,
73
- FSL_IMX7_WDOG3_ADDR = 0x302A0000,
74
- FSL_IMX7_WDOG4_ADDR = 0x302B0000,
75
+ FSL_IMX7_QSPI1_RX_BUF_ADDR = 0x34000000,
76
+ FSL_IMX7_QSPI1_RX_BUF_SIZE = (32 * MiB),
77
78
- FSL_IMX7_IOMUXC_LPSR_ADDR = 0x302C0000,
79
+ /* PCIe Peripherals */
80
+ FSL_IMX7_PCIE_REG_ADDR = 0x33800000,
81
82
- FSL_IMX7_GPT1_ADDR = 0x302D0000,
83
- FSL_IMX7_GPT2_ADDR = 0x302E0000,
84
- FSL_IMX7_GPT3_ADDR = 0x302F0000,
85
- FSL_IMX7_GPT4_ADDR = 0x30300000,
86
+ /* MMAP Peripherals */
87
+ FSL_IMX7_DMA_APBH_ADDR = 0x33000000,
88
+ FSL_IMX7_DMA_APBH_SIZE = 0x8000,
89
90
- FSL_IMX7_IOMUXC_ADDR = 0x30330000,
91
- FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
92
- FSL_IMX7_IOMUXCn_SIZE = 0x1000,
93
+ /* GPV configuration */
94
+ FSL_IMX7_GPV6_ADDR = 0x32600000,
95
+ FSL_IMX7_GPV5_ADDR = 0x32500000,
96
+ FSL_IMX7_GPV4_ADDR = 0x32400000,
97
+ FSL_IMX7_GPV3_ADDR = 0x32300000,
98
+ FSL_IMX7_GPV2_ADDR = 0x32200000,
99
+ FSL_IMX7_GPV1_ADDR = 0x32100000,
100
+ FSL_IMX7_GPV0_ADDR = 0x32000000,
101
+ FSL_IMX7_GPVn_SIZE = (1 * MiB),
102
103
- FSL_IMX7_OCOTP_ADDR = 0x30350000,
104
- FSL_IMX7_OCOTP_SIZE = 0x10000,
105
+ /* Arm Peripherals */
106
+ FSL_IMX7_A7MPCORE_ADDR = 0x31000000,
107
108
- FSL_IMX7_ANALOG_ADDR = 0x30360000,
109
- FSL_IMX7_SNVS_ADDR = 0x30370000,
110
- FSL_IMX7_CCM_ADDR = 0x30380000,
111
+ /* AIPS-3 Begin */
112
113
- FSL_IMX7_SRC_ADDR = 0x30390000,
114
- FSL_IMX7_SRC_SIZE = 0x1000,
115
+ FSL_IMX7_ENET2_ADDR = 0x30BF0000,
116
+ FSL_IMX7_ENET1_ADDR = 0x30BE0000,
117
118
- FSL_IMX7_ADC1_ADDR = 0x30610000,
119
- FSL_IMX7_ADC2_ADDR = 0x30620000,
120
- FSL_IMX7_ADCn_SIZE = 0x1000,
121
+ FSL_IMX7_SDMA_ADDR = 0x30BD0000,
122
+ FSL_IMX7_SDMA_SIZE = (4 * KiB),
123
124
- FSL_IMX7_PWM1_ADDR = 0x30660000,
125
- FSL_IMX7_PWM2_ADDR = 0x30670000,
126
- FSL_IMX7_PWM3_ADDR = 0x30680000,
127
- FSL_IMX7_PWM4_ADDR = 0x30690000,
128
- FSL_IMX7_PWMn_SIZE = 0x10000,
129
+ FSL_IMX7_EIM_ADDR = 0x30BC0000,
130
+ FSL_IMX7_EIM_SIZE = (4 * KiB),
131
132
- FSL_IMX7_PCIE_PHY_ADDR = 0x306D0000,
133
- FSL_IMX7_PCIE_PHY_SIZE = 0x10000,
134
+ FSL_IMX7_QSPI_ADDR = 0x30BB0000,
135
+ FSL_IMX7_QSPI_SIZE = 0x8000,
136
137
- FSL_IMX7_GPC_ADDR = 0x303A0000,
138
+ FSL_IMX7_SIM2_ADDR = 0x30BA0000,
139
+ FSL_IMX7_SIM1_ADDR = 0x30B90000,
140
+ FSL_IMX7_SIMn_SIZE = (4 * KiB),
141
+
142
+ FSL_IMX7_USDHC3_ADDR = 0x30B60000,
143
+ FSL_IMX7_USDHC2_ADDR = 0x30B50000,
144
+ FSL_IMX7_USDHC1_ADDR = 0x30B40000,
145
+
146
+ FSL_IMX7_USB3_ADDR = 0x30B30000,
147
+ FSL_IMX7_USBMISC3_ADDR = 0x30B30200,
148
+ FSL_IMX7_USB2_ADDR = 0x30B20000,
149
+ FSL_IMX7_USBMISC2_ADDR = 0x30B20200,
150
+ FSL_IMX7_USB1_ADDR = 0x30B10000,
151
+ FSL_IMX7_USBMISC1_ADDR = 0x30B10200,
152
+ FSL_IMX7_USBMISCn_SIZE = 0x200,
153
+
154
+ FSL_IMX7_USB_PL301_ADDR = 0x30AD0000,
155
+ FSL_IMX7_USB_PL301_SIZE = (64 * KiB),
156
+
157
+ FSL_IMX7_SEMAPHORE_HS_ADDR = 0x30AC0000,
158
+ FSL_IMX7_SEMAPHORE_HS_SIZE = (64 * KiB),
159
+
160
+ FSL_IMX7_MUB_ADDR = 0x30AB0000,
161
+ FSL_IMX7_MUA_ADDR = 0x30AA0000,
162
+ FSL_IMX7_MUn_SIZE = (KiB),
163
+
164
+ FSL_IMX7_UART7_ADDR = 0x30A90000,
165
+ FSL_IMX7_UART6_ADDR = 0x30A80000,
166
+ FSL_IMX7_UART5_ADDR = 0x30A70000,
167
+ FSL_IMX7_UART4_ADDR = 0x30A60000,
168
+
169
+ FSL_IMX7_I2C4_ADDR = 0x30A50000,
170
+ FSL_IMX7_I2C3_ADDR = 0x30A40000,
171
+ FSL_IMX7_I2C2_ADDR = 0x30A30000,
172
+ FSL_IMX7_I2C1_ADDR = 0x30A20000,
173
+
174
+ FSL_IMX7_CAN2_ADDR = 0x30A10000,
175
+ FSL_IMX7_CAN1_ADDR = 0x30A00000,
176
+ FSL_IMX7_CANn_SIZE = (4 * KiB),
177
+
178
+ FSL_IMX7_AIPS3_CONF_ADDR = 0x309F0000,
179
+ FSL_IMX7_AIPS3_CONF_SIZE = (64 * KiB),
180
181
FSL_IMX7_CAAM_ADDR = 0x30900000,
182
- FSL_IMX7_CAAM_SIZE = 0x40000,
183
+ FSL_IMX7_CAAM_SIZE = (256 * KiB),
184
185
- FSL_IMX7_CAN1_ADDR = 0x30A00000,
186
- FSL_IMX7_CAN2_ADDR = 0x30A10000,
187
- FSL_IMX7_CANn_SIZE = 0x10000,
188
+ FSL_IMX7_SPBA_ADDR = 0x308F0000,
189
+ FSL_IMX7_SPBA_SIZE = (4 * KiB),
190
191
- FSL_IMX7_I2C1_ADDR = 0x30A20000,
192
- FSL_IMX7_I2C2_ADDR = 0x30A30000,
193
- FSL_IMX7_I2C3_ADDR = 0x30A40000,
194
- FSL_IMX7_I2C4_ADDR = 0x30A50000,
195
+ FSL_IMX7_SAI3_ADDR = 0x308C0000,
196
+ FSL_IMX7_SAI2_ADDR = 0x308B0000,
197
+ FSL_IMX7_SAI1_ADDR = 0x308A0000,
198
+ FSL_IMX7_SAIn_SIZE = (4 * KiB),
199
200
- FSL_IMX7_ECSPI1_ADDR = 0x30820000,
201
- FSL_IMX7_ECSPI2_ADDR = 0x30830000,
202
- FSL_IMX7_ECSPI3_ADDR = 0x30840000,
203
- FSL_IMX7_ECSPI4_ADDR = 0x30630000,
204
-
205
- FSL_IMX7_LCDIF_ADDR = 0x30730000,
206
- FSL_IMX7_LCDIF_SIZE = 0x1000,
207
-
208
- FSL_IMX7_UART1_ADDR = 0x30860000,
209
+ FSL_IMX7_UART3_ADDR = 0x30880000,
210
/*
211
* Some versions of the reference manual claim that UART2 is @
212
* 0x30870000, but experiments with HW + DT files in upstream
213
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
214
* actually located @ 0x30890000
215
*/
216
FSL_IMX7_UART2_ADDR = 0x30890000,
217
- FSL_IMX7_UART3_ADDR = 0x30880000,
218
- FSL_IMX7_UART4_ADDR = 0x30A60000,
219
- FSL_IMX7_UART5_ADDR = 0x30A70000,
220
- FSL_IMX7_UART6_ADDR = 0x30A80000,
221
- FSL_IMX7_UART7_ADDR = 0x30A90000,
222
+ FSL_IMX7_UART1_ADDR = 0x30860000,
223
224
- FSL_IMX7_SAI1_ADDR = 0x308A0000,
225
- FSL_IMX7_SAI2_ADDR = 0x308B0000,
226
- FSL_IMX7_SAI3_ADDR = 0x308C0000,
227
- FSL_IMX7_SAIn_SIZE = 0x10000,
228
+ FSL_IMX7_ECSPI3_ADDR = 0x30840000,
229
+ FSL_IMX7_ECSPI2_ADDR = 0x30830000,
230
+ FSL_IMX7_ECSPI1_ADDR = 0x30820000,
231
+ FSL_IMX7_ECSPIn_SIZE = (4 * KiB),
232
233
- FSL_IMX7_ENET1_ADDR = 0x30BE0000,
234
- FSL_IMX7_ENET2_ADDR = 0x30BF0000,
235
+ /* AIPS-3 End */
236
237
- FSL_IMX7_USB1_ADDR = 0x30B10000,
238
- FSL_IMX7_USBMISC1_ADDR = 0x30B10200,
239
- FSL_IMX7_USB2_ADDR = 0x30B20000,
240
- FSL_IMX7_USBMISC2_ADDR = 0x30B20200,
241
- FSL_IMX7_USB3_ADDR = 0x30B30000,
242
- FSL_IMX7_USBMISC3_ADDR = 0x30B30200,
243
- FSL_IMX7_USBMISCn_SIZE = 0x200,
244
+ /* AIPS-2 Begin */
245
246
- FSL_IMX7_USDHC1_ADDR = 0x30B40000,
247
- FSL_IMX7_USDHC2_ADDR = 0x30B50000,
248
- FSL_IMX7_USDHC3_ADDR = 0x30B60000,
249
+ FSL_IMX7_AXI_DEBUG_MON_ADDR = 0x307E0000,
250
+ FSL_IMX7_AXI_DEBUG_MON_SIZE = (64 * KiB),
251
252
- FSL_IMX7_SDMA_ADDR = 0x30BD0000,
253
- FSL_IMX7_SDMA_SIZE = 0x1000,
254
+ FSL_IMX7_PERFMON2_ADDR = 0x307D0000,
255
+ FSL_IMX7_PERFMON1_ADDR = 0x307C0000,
256
+ FSL_IMX7_PERFMONn_SIZE = (64 * KiB),
257
+
258
+ FSL_IMX7_DDRC_ADDR = 0x307A0000,
259
+ FSL_IMX7_DDRC_SIZE = (4 * KiB),
260
+
261
+ FSL_IMX7_DDRC_PHY_ADDR = 0x30790000,
262
+ FSL_IMX7_DDRC_PHY_SIZE = (4 * KiB),
263
+
264
+ FSL_IMX7_TZASC_ADDR = 0x30780000,
265
+ FSL_IMX7_TZASC_SIZE = (64 * KiB),
266
+
267
+ FSL_IMX7_MIPI_DSI_ADDR = 0x30760000,
268
+ FSL_IMX7_MIPI_DSI_SIZE = (4 * KiB),
269
+
270
+ FSL_IMX7_MIPI_CSI_ADDR = 0x30750000,
271
+ FSL_IMX7_MIPI_CSI_SIZE = 0x4000,
272
+
273
+ FSL_IMX7_LCDIF_ADDR = 0x30730000,
274
+ FSL_IMX7_LCDIF_SIZE = 0x8000,
275
+
276
+ FSL_IMX7_CSI_ADDR = 0x30710000,
277
+ FSL_IMX7_CSI_SIZE = (4 * KiB),
278
+
279
+ FSL_IMX7_PXP_ADDR = 0x30700000,
280
+ FSL_IMX7_PXP_SIZE = 0x4000,
281
+
282
+ FSL_IMX7_EPDC_ADDR = 0x306F0000,
283
+ FSL_IMX7_EPDC_SIZE = (4 * KiB),
284
+
285
+ FSL_IMX7_PCIE_PHY_ADDR = 0x306D0000,
286
+ FSL_IMX7_PCIE_PHY_SIZE = (4 * KiB),
287
+
288
+ FSL_IMX7_SYSCNT_CTRL_ADDR = 0x306C0000,
289
+ FSL_IMX7_SYSCNT_CMP_ADDR = 0x306B0000,
290
+ FSL_IMX7_SYSCNT_RD_ADDR = 0x306A0000,
291
+
292
+ FSL_IMX7_PWM4_ADDR = 0x30690000,
293
+ FSL_IMX7_PWM3_ADDR = 0x30680000,
294
+ FSL_IMX7_PWM2_ADDR = 0x30670000,
295
+ FSL_IMX7_PWM1_ADDR = 0x30660000,
296
+ FSL_IMX7_PWMn_SIZE = (4 * KiB),
297
+
298
+ FSL_IMX7_FlEXTIMER2_ADDR = 0x30650000,
299
+ FSL_IMX7_FlEXTIMER1_ADDR = 0x30640000,
300
+ FSL_IMX7_FLEXTIMERn_SIZE = (4 * KiB),
301
+
302
+ FSL_IMX7_ECSPI4_ADDR = 0x30630000,
303
+
304
+ FSL_IMX7_ADC2_ADDR = 0x30620000,
305
+ FSL_IMX7_ADC1_ADDR = 0x30610000,
306
+ FSL_IMX7_ADCn_SIZE = (4 * KiB),
307
+
308
+ FSL_IMX7_AIPS2_CONF_ADDR = 0x305F0000,
309
+ FSL_IMX7_AIPS2_CONF_SIZE = (64 * KiB),
310
+
311
+ /* AIPS-2 End */
312
+
313
+ /* AIPS-1 Begin */
314
+
315
+ FSL_IMX7_CSU_ADDR = 0x303E0000,
316
+ FSL_IMX7_CSU_SIZE = (64 * KiB),
317
+
318
+ FSL_IMX7_RDC_ADDR = 0x303D0000,
319
+ FSL_IMX7_RDC_SIZE = (4 * KiB),
320
+
321
+ FSL_IMX7_SEMAPHORE2_ADDR = 0x303C0000,
322
+ FSL_IMX7_SEMAPHORE1_ADDR = 0x303B0000,
323
+ FSL_IMX7_SEMAPHOREn_SIZE = (4 * KiB),
324
+
325
+ FSL_IMX7_GPC_ADDR = 0x303A0000,
326
+
327
+ FSL_IMX7_SRC_ADDR = 0x30390000,
328
+ FSL_IMX7_SRC_SIZE = (4 * KiB),
329
+
330
+ FSL_IMX7_CCM_ADDR = 0x30380000,
331
+
332
+ FSL_IMX7_SNVS_HP_ADDR = 0x30370000,
333
+
334
+ FSL_IMX7_ANALOG_ADDR = 0x30360000,
335
+
336
+ FSL_IMX7_OCOTP_ADDR = 0x30350000,
337
+ FSL_IMX7_OCOTP_SIZE = 0x10000,
338
+
339
+ FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
340
+ FSL_IMX7_IOMUXC_GPR_SIZE = (4 * KiB),
341
+
342
+ FSL_IMX7_IOMUXC_ADDR = 0x30330000,
343
+ FSL_IMX7_IOMUXC_SIZE = (4 * KiB),
344
+
345
+ FSL_IMX7_KPP_ADDR = 0x30320000,
346
+ FSL_IMX7_KPP_SIZE = (4 * KiB),
347
+
348
+ FSL_IMX7_ROMCP_ADDR = 0x30310000,
349
+ FSL_IMX7_ROMCP_SIZE = (4 * KiB),
350
+
351
+ FSL_IMX7_GPT4_ADDR = 0x30300000,
352
+ FSL_IMX7_GPT3_ADDR = 0x302F0000,
353
+ FSL_IMX7_GPT2_ADDR = 0x302E0000,
354
+ FSL_IMX7_GPT1_ADDR = 0x302D0000,
355
+
356
+ FSL_IMX7_IOMUXC_LPSR_ADDR = 0x302C0000,
357
+ FSL_IMX7_IOMUXC_LPSR_SIZE = (4 * KiB),
358
+
359
+ FSL_IMX7_WDOG4_ADDR = 0x302B0000,
360
+ FSL_IMX7_WDOG3_ADDR = 0x302A0000,
361
+ FSL_IMX7_WDOG2_ADDR = 0x30290000,
362
+ FSL_IMX7_WDOG1_ADDR = 0x30280000,
363
+
364
+ FSL_IMX7_IOMUXC_LPSR_GPR_ADDR = 0x30270000,
365
+
366
+ FSL_IMX7_GPIO7_ADDR = 0x30260000,
367
+ FSL_IMX7_GPIO6_ADDR = 0x30250000,
368
+ FSL_IMX7_GPIO5_ADDR = 0x30240000,
369
+ FSL_IMX7_GPIO4_ADDR = 0x30230000,
370
+ FSL_IMX7_GPIO3_ADDR = 0x30220000,
371
+ FSL_IMX7_GPIO2_ADDR = 0x30210000,
372
+ FSL_IMX7_GPIO1_ADDR = 0x30200000,
373
+
374
+ FSL_IMX7_AIPS1_CONF_ADDR = 0x301F0000,
375
+ FSL_IMX7_AIPS1_CONF_SIZE = (64 * KiB),
376
377
- FSL_IMX7_A7MPCORE_ADDR = 0x31000000,
378
FSL_IMX7_A7MPCORE_DAP_ADDR = 0x30000000,
379
+ FSL_IMX7_A7MPCORE_DAP_SIZE = (1 * MiB),
380
381
- FSL_IMX7_PCIE_REG_ADDR = 0x33800000,
382
- FSL_IMX7_PCIE_REG_SIZE = 16 * 1024,
383
+ /* AIPS-1 End */
384
385
- FSL_IMX7_GPR_ADDR = 0x30340000,
386
+ FSL_IMX7_EIM_CS0_ADDR = 0x28000000,
387
+ FSL_IMX7_EIM_CS0_SIZE = (128 * MiB),
388
389
- FSL_IMX7_DMA_APBH_ADDR = 0x33000000,
390
- FSL_IMX7_DMA_APBH_SIZE = 0x2000,
391
+ FSL_IMX7_OCRAM_PXP_ADDR = 0x00940000,
392
+ FSL_IMX7_OCRAM_PXP_SIZE = (32 * KiB),
393
+
394
+ FSL_IMX7_OCRAM_EPDC_ADDR = 0x00920000,
395
+ FSL_IMX7_OCRAM_EPDC_SIZE = (128 * KiB),
396
+
397
+ FSL_IMX7_OCRAM_MEM_ADDR = 0x00900000,
398
+ FSL_IMX7_OCRAM_MEM_SIZE = (128 * KiB),
399
+
400
+ FSL_IMX7_TCMU_ADDR = 0x00800000,
401
+ FSL_IMX7_TCMU_SIZE = (32 * KiB),
402
+
403
+ FSL_IMX7_TCML_ADDR = 0x007F8000,
404
+ FSL_IMX7_TCML_SIZE = (32 * KiB),
405
+
406
+ FSL_IMX7_OCRAM_S_ADDR = 0x00180000,
407
+ FSL_IMX7_OCRAM_S_SIZE = (32 * KiB),
408
+
409
+ FSL_IMX7_CAAM_MEM_ADDR = 0x00100000,
410
+ FSL_IMX7_CAAM_MEM_SIZE = (32 * KiB),
411
+
412
+ FSL_IMX7_ROM_ADDR = 0x00000000,
413
+ FSL_IMX7_ROM_SIZE = (96 * KiB),
414
};
415
416
enum FslIMX7IRQs {
417
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
418
index XXXXXXX..XXXXXXX 100644
419
--- a/hw/arm/fsl-imx7.c
420
+++ b/hw/arm/fsl-imx7.c
421
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
422
char name[NAME_SIZE];
423
int i;
424
425
+ /*
426
+ * CPUs
427
+ */
428
for (i = 0; i < MIN(ms->smp.cpus, FSL_IMX7_NUM_CPUS); i++) {
429
snprintf(name, NAME_SIZE, "cpu%d", i);
430
object_initialize_child(obj, name, &s->cpu[i],
431
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
432
TYPE_A15MPCORE_PRIV);
433
434
/*
435
- * GPIOs 1 to 7
436
+ * GPIOs
437
*/
438
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
439
snprintf(name, NAME_SIZE, "gpio%d", i);
440
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
441
}
442
443
/*
444
- * GPT1, 2, 3, 4
445
+ * GPTs
446
*/
447
for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
448
snprintf(name, NAME_SIZE, "gpt%d", i);
449
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
450
*/
451
object_initialize_child(obj, "gpcv2", &s->gpcv2, TYPE_IMX_GPCV2);
452
453
+ /*
454
+ * ECSPIs
455
+ */
456
for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
457
snprintf(name, NAME_SIZE, "spi%d", i + 1);
458
object_initialize_child(obj, name, &s->spi[i], TYPE_IMX_SPI);
459
}
460
461
-
462
+ /*
463
+ * I2Cs
464
+ */
465
for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
466
snprintf(name, NAME_SIZE, "i2c%d", i + 1);
467
object_initialize_child(obj, name, &s->i2c[i], TYPE_IMX_I2C);
468
}
469
470
/*
471
- * UART
472
+ * UARTs
473
*/
474
for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
475
snprintf(name, NAME_SIZE, "uart%d", i);
476
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
477
}
478
479
/*
480
- * Ethernet
481
+ * Ethernets
482
*/
483
for (i = 0; i < FSL_IMX7_NUM_ETHS; i++) {
484
snprintf(name, NAME_SIZE, "eth%d", i);
485
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
486
}
487
488
/*
489
- * SDHCI
490
+ * SDHCIs
491
*/
492
for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
493
snprintf(name, NAME_SIZE, "usdhc%d", i);
494
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
495
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
496
497
/*
498
- * Watchdog
499
+ * Watchdogs
500
*/
501
for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
502
snprintf(name, NAME_SIZE, "wdt%d", i);
503
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
504
*/
505
object_initialize_child(obj, "gpr", &s->gpr, TYPE_IMX7_GPR);
506
507
+ /*
508
+ * PCIE
509
+ */
510
object_initialize_child(obj, "pcie", &s->pcie, TYPE_DESIGNWARE_PCIE_HOST);
511
512
+ /*
513
+ * USBs
514
+ */
515
for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
516
snprintf(name, NAME_SIZE, "usb%d", i);
517
object_initialize_child(obj, name, &s->usb[i], TYPE_CHIPIDEA);
518
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
519
return;
520
}
521
522
+ /*
523
+ * CPUs
524
+ */
525
for (i = 0; i < smp_cpus; i++) {
526
o = OBJECT(&s->cpu[i]);
527
528
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
529
* A7MPCORE DAP
530
*/
531
create_unimplemented_device("a7mpcore-dap", FSL_IMX7_A7MPCORE_DAP_ADDR,
532
- 0x100000);
533
+ FSL_IMX7_A7MPCORE_DAP_SIZE);
534
535
/*
536
- * GPT1, 2, 3, 4
537
+ * GPTs
538
*/
539
for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
540
static const hwaddr FSL_IMX7_GPTn_ADDR[FSL_IMX7_NUM_GPTS] = {
541
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
542
FSL_IMX7_GPTn_IRQ[i]));
543
}
544
545
+ /*
546
+ * GPIOs
547
+ */
548
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
549
static const hwaddr FSL_IMX7_GPIOn_ADDR[FSL_IMX7_NUM_GPIOS] = {
550
FSL_IMX7_GPIO1_ADDR,
551
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
552
/*
553
* IOMUXC and IOMUXC_LPSR
554
*/
555
- for (i = 0; i < FSL_IMX7_NUM_IOMUXCS; i++) {
556
- static const hwaddr FSL_IMX7_IOMUXCn_ADDR[FSL_IMX7_NUM_IOMUXCS] = {
557
- FSL_IMX7_IOMUXC_ADDR,
558
- FSL_IMX7_IOMUXC_LPSR_ADDR,
559
- };
560
-
561
- snprintf(name, NAME_SIZE, "iomuxc%d", i);
562
- create_unimplemented_device(name, FSL_IMX7_IOMUXCn_ADDR[i],
563
- FSL_IMX7_IOMUXCn_SIZE);
564
- }
565
+ create_unimplemented_device("iomuxc", FSL_IMX7_IOMUXC_ADDR,
566
+ FSL_IMX7_IOMUXC_SIZE);
567
+ create_unimplemented_device("iomuxc_lspr", FSL_IMX7_IOMUXC_LPSR_ADDR,
568
+ FSL_IMX7_IOMUXC_LPSR_SIZE);
569
570
/*
571
* CCM
572
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
573
sysbus_realize(SYS_BUS_DEVICE(&s->gpcv2), &error_abort);
574
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpcv2), 0, FSL_IMX7_GPC_ADDR);
575
576
- /* Initialize all ECSPI */
577
+ /*
578
+ * ECSPIs
579
+ */
580
for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
581
static const hwaddr FSL_IMX7_SPIn_ADDR[FSL_IMX7_NUM_ECSPIS] = {
582
FSL_IMX7_ECSPI1_ADDR,
583
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
584
FSL_IMX7_SPIn_IRQ[i]));
585
}
586
587
+ /*
588
+ * I2Cs
589
+ */
590
for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
591
static const hwaddr FSL_IMX7_I2Cn_ADDR[FSL_IMX7_NUM_I2CS] = {
592
FSL_IMX7_I2C1_ADDR,
593
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
594
}
595
596
/*
597
- * UART
598
+ * UARTs
599
*/
600
for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
601
static const hwaddr FSL_IMX7_UARTn_ADDR[FSL_IMX7_NUM_UARTS] = {
602
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
603
}
604
605
/*
606
- * Ethernet
607
+ * Ethernets
608
*
609
* We must use two loops since phy_connected affects the other interface
610
* and we have to set all properties before calling sysbus_realize().
611
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
612
}
613
614
/*
615
- * USDHC
616
+ * USDHCs
617
*/
618
for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
619
static const hwaddr FSL_IMX7_USDHCn_ADDR[FSL_IMX7_NUM_USDHCS] = {
620
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
621
* SNVS
622
*/
623
sysbus_realize(SYS_BUS_DEVICE(&s->snvs), &error_abort);
624
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX7_SNVS_ADDR);
625
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX7_SNVS_HP_ADDR);
626
627
/*
628
* SRC
629
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
630
create_unimplemented_device("src", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
631
632
/*
633
- * Watchdog
634
+ * Watchdogs
635
*/
636
for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
637
static const hwaddr FSL_IMX7_WDOGn_ADDR[FSL_IMX7_NUM_WDTS] = {
638
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
639
create_unimplemented_device("caam", FSL_IMX7_CAAM_ADDR, FSL_IMX7_CAAM_SIZE);
640
641
/*
642
- * PWM
643
+ * PWMs
644
*/
645
- create_unimplemented_device("pwm1", FSL_IMX7_PWM1_ADDR, FSL_IMX7_PWMn_SIZE);
646
- create_unimplemented_device("pwm2", FSL_IMX7_PWM2_ADDR, FSL_IMX7_PWMn_SIZE);
647
- create_unimplemented_device("pwm3", FSL_IMX7_PWM3_ADDR, FSL_IMX7_PWMn_SIZE);
648
- create_unimplemented_device("pwm4", FSL_IMX7_PWM4_ADDR, FSL_IMX7_PWMn_SIZE);
649
+ for (i = 0; i < FSL_IMX7_NUM_PWMS; i++) {
650
+ static const hwaddr FSL_IMX7_PWMn_ADDR[FSL_IMX7_NUM_PWMS] = {
651
+ FSL_IMX7_PWM1_ADDR,
652
+ FSL_IMX7_PWM2_ADDR,
653
+ FSL_IMX7_PWM3_ADDR,
654
+ FSL_IMX7_PWM4_ADDR,
655
+ };
656
+
657
+ snprintf(name, NAME_SIZE, "pwm%d", i);
658
+ create_unimplemented_device(name, FSL_IMX7_PWMn_ADDR[i],
659
+ FSL_IMX7_PWMn_SIZE);
660
+ }
661
662
/*
663
- * CAN
664
+ * CANs
665
*/
666
- create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
667
- create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);
668
+ for (i = 0; i < FSL_IMX7_NUM_CANS; i++) {
669
+ static const hwaddr FSL_IMX7_CANn_ADDR[FSL_IMX7_NUM_CANS] = {
670
+ FSL_IMX7_CAN1_ADDR,
671
+ FSL_IMX7_CAN2_ADDR,
672
+ };
673
+
674
+ snprintf(name, NAME_SIZE, "can%d", i);
675
+ create_unimplemented_device(name, FSL_IMX7_CANn_ADDR[i],
676
+ FSL_IMX7_CANn_SIZE);
677
+ }
678
679
/*
680
- * SAI (Audio SSI (Synchronous Serial Interface))
681
+ * SAIs (Audio SSI (Synchronous Serial Interface))
682
*/
683
- create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE);
684
- create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE);
685
- create_unimplemented_device("sai2", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE);
686
+ for (i = 0; i < FSL_IMX7_NUM_SAIS; i++) {
687
+ static const hwaddr FSL_IMX7_SAIn_ADDR[FSL_IMX7_NUM_SAIS] = {
688
+ FSL_IMX7_SAI1_ADDR,
689
+ FSL_IMX7_SAI2_ADDR,
690
+ FSL_IMX7_SAI3_ADDR,
691
+ };
692
+
693
+ snprintf(name, NAME_SIZE, "sai%d", i);
694
+ create_unimplemented_device(name, FSL_IMX7_SAIn_ADDR[i],
695
+ FSL_IMX7_SAIn_SIZE);
696
+ }
697
698
/*
699
* OCOTP
700
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
701
create_unimplemented_device("ocotp", FSL_IMX7_OCOTP_ADDR,
702
FSL_IMX7_OCOTP_SIZE);
703
704
+ /*
705
+ * GPR
706
+ */
707
sysbus_realize(SYS_BUS_DEVICE(&s->gpr), &error_abort);
708
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX7_GPR_ADDR);
709
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX7_IOMUXC_GPR_ADDR);
710
711
+ /*
712
+ * PCIE
713
+ */
714
sysbus_realize(SYS_BUS_DEVICE(&s->pcie), &error_abort);
715
sysbus_mmio_map(SYS_BUS_DEVICE(&s->pcie), 0, FSL_IMX7_PCIE_REG_ADDR);
716
717
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
718
irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTD_IRQ);
719
sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 3, irq);
720
721
-
722
+ /*
723
+ * USBs
724
+ */
725
for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
726
static const hwaddr FSL_IMX7_USBMISCn_ADDR[FSL_IMX7_NUM_USBS] = {
727
FSL_IMX7_USBMISC1_ADDR,
728
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
729
*/
730
create_unimplemented_device("pcie-phy", FSL_IMX7_PCIE_PHY_ADDR,
731
FSL_IMX7_PCIE_PHY_SIZE);
732
+
49
}
733
}
50
734
51
void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
735
static Property fsl_imx7_properties[] = {
52
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
53
gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
54
};
55
tcg_debug_assert(vece >= 1 && vece <= 2);
56
- tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, cpu_env,
57
- opr_sz, max_sz, 0, fns[vece - 1]);
58
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
59
}
60
61
#define GEN_CMP0(NAME, COND) \
62
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/vec_helper.c
65
+++ b/target/arm/vec_helper.c
66
@@ -XXX,XX +XXX,XX @@
67
#define H4(x) (x)
68
#endif
69
70
-#define SET_QC() env->vfp.qc[0] = 1
71
-
72
static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
73
{
74
uint64_t *d = vd + opr_sz;
75
@@ -XXX,XX +XXX,XX @@ static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
76
}
77
78
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
79
-static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
80
- int16_t src2, int16_t src3)
81
+static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
82
+ int16_t src3, uint32_t *sat)
83
{
84
/* Simplify:
85
* = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
86
@@ -XXX,XX +XXX,XX @@ static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
87
ret = ((int32_t)src3 << 15) + ret + (1 << 14);
88
ret >>= 15;
89
if (ret != (int16_t)ret) {
90
- SET_QC();
91
+ *sat = 1;
92
ret = (ret < 0 ? -0x8000 : 0x7fff);
93
}
94
return ret;
95
@@ -XXX,XX +XXX,XX @@ static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
96
uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
97
uint32_t src2, uint32_t src3)
98
{
99
- uint16_t e1 = inl_qrdmlah_s16(env, src1, src2, src3);
100
- uint16_t e2 = inl_qrdmlah_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
101
+ uint32_t *sat = &env->vfp.qc[0];
102
+ uint16_t e1 = inl_qrdmlah_s16(src1, src2, src3, sat);
103
+ uint16_t e2 = inl_qrdmlah_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
104
return deposit32(e1, 16, 16, e2);
105
}
106
107
void HELPER(gvec_qrdmlah_s16)(void *vd, void *vn, void *vm,
108
- void *ve, uint32_t desc)
109
+ void *vq, uint32_t desc)
110
{
111
uintptr_t opr_sz = simd_oprsz(desc);
112
int16_t *d = vd;
113
int16_t *n = vn;
114
int16_t *m = vm;
115
- CPUARMState *env = ve;
116
uintptr_t i;
117
118
for (i = 0; i < opr_sz / 2; ++i) {
119
- d[i] = inl_qrdmlah_s16(env, n[i], m[i], d[i]);
120
+ d[i] = inl_qrdmlah_s16(n[i], m[i], d[i], vq);
121
}
122
clear_tail(d, opr_sz, simd_maxsz(desc));
123
}
124
125
/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
126
-static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
127
- int16_t src2, int16_t src3)
128
+static int16_t inl_qrdmlsh_s16(int16_t src1, int16_t src2,
129
+ int16_t src3, uint32_t *sat)
130
{
131
/* Similarly, using subtraction:
132
* = ((a3 << 16) - ((e1 * e2) << 1) + (1 << 15)) >> 16
133
@@ -XXX,XX +XXX,XX @@ static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
134
ret = ((int32_t)src3 << 15) - ret + (1 << 14);
135
ret >>= 15;
136
if (ret != (int16_t)ret) {
137
- SET_QC();
138
+ *sat = 1;
139
ret = (ret < 0 ? -0x8000 : 0x7fff);
140
}
141
return ret;
142
@@ -XXX,XX +XXX,XX @@ static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
143
uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
144
uint32_t src2, uint32_t src3)
145
{
146
- uint16_t e1 = inl_qrdmlsh_s16(env, src1, src2, src3);
147
- uint16_t e2 = inl_qrdmlsh_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
148
+ uint32_t *sat = &env->vfp.qc[0];
149
+ uint16_t e1 = inl_qrdmlsh_s16(src1, src2, src3, sat);
150
+ uint16_t e2 = inl_qrdmlsh_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
151
return deposit32(e1, 16, 16, e2);
152
}
153
154
void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
155
- void *ve, uint32_t desc)
156
+ void *vq, uint32_t desc)
157
{
158
uintptr_t opr_sz = simd_oprsz(desc);
159
int16_t *d = vd;
160
int16_t *n = vn;
161
int16_t *m = vm;
162
- CPUARMState *env = ve;
163
uintptr_t i;
164
165
for (i = 0; i < opr_sz / 2; ++i) {
166
- d[i] = inl_qrdmlsh_s16(env, n[i], m[i], d[i]);
167
+ d[i] = inl_qrdmlsh_s16(n[i], m[i], d[i], vq);
168
}
169
clear_tail(d, opr_sz, simd_maxsz(desc));
170
}
171
172
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
173
-uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
174
- int32_t src2, int32_t src3)
175
+static int32_t inl_qrdmlah_s32(int32_t src1, int32_t src2,
176
+ int32_t src3, uint32_t *sat)
177
{
178
/* Simplify similarly to int_qrdmlah_s16 above. */
179
int64_t ret = (int64_t)src1 * src2;
180
ret = ((int64_t)src3 << 31) + ret + (1 << 30);
181
ret >>= 31;
182
if (ret != (int32_t)ret) {
183
- SET_QC();
184
+ *sat = 1;
185
ret = (ret < 0 ? INT32_MIN : INT32_MAX);
186
}
187
return ret;
188
}
189
190
+uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
191
+ int32_t src2, int32_t src3)
192
+{
193
+ uint32_t *sat = &env->vfp.qc[0];
194
+ return inl_qrdmlah_s32(src1, src2, src3, sat);
195
+}
196
+
197
void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
198
- void *ve, uint32_t desc)
199
+ void *vq, uint32_t desc)
200
{
201
uintptr_t opr_sz = simd_oprsz(desc);
202
int32_t *d = vd;
203
int32_t *n = vn;
204
int32_t *m = vm;
205
- CPUARMState *env = ve;
206
uintptr_t i;
207
208
for (i = 0; i < opr_sz / 4; ++i) {
209
- d[i] = helper_neon_qrdmlah_s32(env, n[i], m[i], d[i]);
210
+ d[i] = inl_qrdmlah_s32(n[i], m[i], d[i], vq);
211
}
212
clear_tail(d, opr_sz, simd_maxsz(desc));
213
}
214
215
/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
216
-uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
217
- int32_t src2, int32_t src3)
218
+static int32_t inl_qrdmlsh_s32(int32_t src1, int32_t src2,
219
+ int32_t src3, uint32_t *sat)
220
{
221
/* Simplify similarly to int_qrdmlsh_s16 above. */
222
int64_t ret = (int64_t)src1 * src2;
223
ret = ((int64_t)src3 << 31) - ret + (1 << 30);
224
ret >>= 31;
225
if (ret != (int32_t)ret) {
226
- SET_QC();
227
+ *sat = 1;
228
ret = (ret < 0 ? INT32_MIN : INT32_MAX);
229
}
230
return ret;
231
}
232
233
+uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
234
+ int32_t src2, int32_t src3)
235
+{
236
+ uint32_t *sat = &env->vfp.qc[0];
237
+ return inl_qrdmlsh_s32(src1, src2, src3, sat);
238
+}
239
+
240
void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
241
- void *ve, uint32_t desc)
242
+ void *vq, uint32_t desc)
243
{
244
uintptr_t opr_sz = simd_oprsz(desc);
245
int32_t *d = vd;
246
int32_t *n = vn;
247
int32_t *m = vm;
248
- CPUARMState *env = ve;
249
uintptr_t i;
250
251
for (i = 0; i < opr_sz / 4; ++i) {
252
- d[i] = helper_neon_qrdmlsh_s32(env, n[i], m[i], d[i]);
253
+ d[i] = inl_qrdmlsh_s32(n[i], m[i], d[i], vq);
254
}
255
clear_tail(d, opr_sz, simd_maxsz(desc));
256
}
257
--
2.20.1
--
2.34.1
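
[Illustrative sketch, not part of the series: the 16-bit SQRDMLAH lane that
inl_qrdmlah_s16() above implements, written as a standalone C function. The
name qrdmlah16 is invented here; the saturation flag comes back through a
plain pointer, which is what lets the gvec helpers drop their CPUARMState
argument and take &env->vfp.qc[0] (FPSCR.QC) instead.]

    #include <stdint.h>

    /* One 16-bit lane: d = sat(((a << 15) + n*m + (1 << 14)) >> 15) */
    static int16_t qrdmlah16(int16_t n, int16_t m, int16_t a, uint32_t *sat)
    {
        int32_t ret = (int32_t)n * m;                /* widen, then multiply */
        ret = ((int32_t)a << 15) + ret + (1 << 14);  /* accumulate and round */
        ret >>= 15;
        if (ret != (int16_t)ret) {                   /* result does not fit */
            *sat = 1;                                /* QC bit would be set */
            ret = (ret < 0 ? -0x8000 : 0x7fff);      /* saturate */
        }
        return ret;
    }

For example, qrdmlah16(INT16_MIN, INT16_MIN, 0, &qc) produces 32768 before
the range check, so it saturates to 0x7fff and sets qc.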
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Must clear the tail for AdvSIMD when SVE is enabled.
4
5
Fixes: ca40a6e6e39
6
Cc: qemu-stable@nongnu.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200513163245.17915-15-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/vec_helper.c | 2 ++
13
1 file changed, 2 insertions(+)
14
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_helper.c
18
+++ b/target/arm/vec_helper.c
19
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
20
d[i + j] = TYPE##_mul(n[i + j], mm, stat); \
21
} \
22
} \
23
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
24
}
25
26
DO_MUL_IDX(gvec_fmul_idx_h, float16, H2)
27
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
28
mm, a[i + j], 0, stat); \
29
} \
30
} \
31
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
32
}
33
34
DO_FMLA_IDX(gvec_fmla_idx_h, float16, H2)
35
--
2.20.1
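
[For context, not something this patch adds: an illustrative restatement of
the existing clear_tail() helper in vec_helper.c that the indexed multiply
and multiply-accumulate macros above now call. When SVE is enabled the
vector registers are wider than the 16-byte AdvSIMD result, so the bytes
between opr_sz and max_sz must be zeroed rather than left holding stale
data.]

    /* Zero the destination from opr_sz up to max_sz (both multiples of 8). */
    static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
    {
        uint64_t *d = vd + opr_sz;
        uintptr_t i;

        for (i = opr_sz; i < max_sz; i += 8) {
            *d++ = 0;
        }
    }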
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Include 64-bit element size in preparation for SVE2.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200513163245.17915-16-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 10 +++
11
target/arm/translate.h | 5 ++
12
target/arm/translate-a64.c | 8 ++-
13
target/arm/translate.c | 133 ++++++++++++++++++++++++++++++++++++-
14
target/arm/vec_helper.c | 24 +++++++
15
5 files changed, 176 insertions(+), 4 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(gvec_sli_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_3(gvec_sli_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_3(gvec_sli_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(gvec_sabd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(gvec_sabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(gvec_sabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(gvec_sabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_4(gvec_uabd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(gvec_uabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_uabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(gvec_uabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
35
#ifdef TARGET_AARCH64
36
#include "helper-a64.h"
37
#include "helper-sve.h"
38
diff --git a/target/arm/translate.h b/target/arm/translate.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate.h
41
+++ b/target/arm/translate.h
42
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
43
void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
44
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
45
46
+void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
47
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
48
+void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
49
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
50
+
51
/*
52
* Forward to the isar_feature_* tests given a DisasContext pointer.
53
*/
54
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/translate-a64.c
57
+++ b/target/arm/translate-a64.c
58
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
59
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_smin, size);
60
}
61
return;
62
+ case 0xe: /* SABD, UABD */
63
+ if (u) {
64
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uabd, size);
65
+ } else {
66
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sabd, size);
67
+ }
68
+ return;
69
case 0x10: /* ADD, SUB */
70
if (u) {
71
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_sub, size);
72
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
73
genenvfn = fns[size][u];
74
break;
75
}
76
- case 0xe: /* SABD, UABD */
77
case 0xf: /* SABA, UABA */
78
{
79
static NeonGenTwoOpFn * const fns[3][2] = {
80
diff --git a/target/arm/translate.c b/target/arm/translate.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/translate.c
83
+++ b/target/arm/translate.c
84
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
85
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
86
}
87
88
+static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
89
+{
90
+ TCGv_i32 t = tcg_temp_new_i32();
91
+
92
+ tcg_gen_sub_i32(t, a, b);
93
+ tcg_gen_sub_i32(d, b, a);
94
+ tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
95
+ tcg_temp_free_i32(t);
96
+}
97
+
98
+static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
99
+{
100
+ TCGv_i64 t = tcg_temp_new_i64();
101
+
102
+ tcg_gen_sub_i64(t, a, b);
103
+ tcg_gen_sub_i64(d, b, a);
104
+ tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
105
+ tcg_temp_free_i64(t);
106
+}
107
+
108
+static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
109
+{
110
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
111
+
112
+ tcg_gen_smin_vec(vece, t, a, b);
113
+ tcg_gen_smax_vec(vece, d, a, b);
114
+ tcg_gen_sub_vec(vece, d, d, t);
115
+ tcg_temp_free_vec(t);
116
+}
117
+
118
+void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
119
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
120
+{
121
+ static const TCGOpcode vecop_list[] = {
122
+ INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
123
+ };
124
+ static const GVecGen3 ops[4] = {
125
+ { .fniv = gen_sabd_vec,
126
+ .fno = gen_helper_gvec_sabd_b,
127
+ .opt_opc = vecop_list,
128
+ .vece = MO_8 },
129
+ { .fniv = gen_sabd_vec,
130
+ .fno = gen_helper_gvec_sabd_h,
131
+ .opt_opc = vecop_list,
132
+ .vece = MO_16 },
133
+ { .fni4 = gen_sabd_i32,
134
+ .fniv = gen_sabd_vec,
135
+ .fno = gen_helper_gvec_sabd_s,
136
+ .opt_opc = vecop_list,
137
+ .vece = MO_32 },
138
+ { .fni8 = gen_sabd_i64,
139
+ .fniv = gen_sabd_vec,
140
+ .fno = gen_helper_gvec_sabd_d,
141
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
142
+ .opt_opc = vecop_list,
143
+ .vece = MO_64 },
144
+ };
145
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
146
+}
147
+
148
+static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
149
+{
150
+ TCGv_i32 t = tcg_temp_new_i32();
151
+
152
+ tcg_gen_sub_i32(t, a, b);
153
+ tcg_gen_sub_i32(d, b, a);
154
+ tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
155
+ tcg_temp_free_i32(t);
156
+}
157
+
158
+static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
159
+{
160
+ TCGv_i64 t = tcg_temp_new_i64();
161
+
162
+ tcg_gen_sub_i64(t, a, b);
163
+ tcg_gen_sub_i64(d, b, a);
164
+ tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
165
+ tcg_temp_free_i64(t);
166
+}
167
+
168
+static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
169
+{
170
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
171
+
172
+ tcg_gen_umin_vec(vece, t, a, b);
173
+ tcg_gen_umax_vec(vece, d, a, b);
174
+ tcg_gen_sub_vec(vece, d, d, t);
175
+ tcg_temp_free_vec(t);
176
+}
177
+
178
+void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
179
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
180
+{
181
+ static const TCGOpcode vecop_list[] = {
182
+ INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
183
+ };
184
+ static const GVecGen3 ops[4] = {
185
+ { .fniv = gen_uabd_vec,
186
+ .fno = gen_helper_gvec_uabd_b,
187
+ .opt_opc = vecop_list,
188
+ .vece = MO_8 },
189
+ { .fniv = gen_uabd_vec,
190
+ .fno = gen_helper_gvec_uabd_h,
191
+ .opt_opc = vecop_list,
192
+ .vece = MO_16 },
193
+ { .fni4 = gen_uabd_i32,
194
+ .fniv = gen_uabd_vec,
195
+ .fno = gen_helper_gvec_uabd_s,
196
+ .opt_opc = vecop_list,
197
+ .vece = MO_32 },
198
+ { .fni8 = gen_uabd_i64,
199
+ .fniv = gen_uabd_vec,
200
+ .fno = gen_helper_gvec_uabd_d,
201
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
202
+ .opt_opc = vecop_list,
203
+ .vece = MO_64 },
204
+ };
205
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
206
+}
207
+
208
/* Translate a NEON data processing instruction. Return nonzero if the
209
instruction is invalid.
210
We process data in a mixture of 32-bit and 64-bit chunks.
211
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
212
}
213
return 1;
214
215
+ case NEON_3R_VABD:
216
+ if (u) {
217
+ gen_gvec_uabd(size, rd_ofs, rn_ofs, rm_ofs,
218
+ vec_size, vec_size);
219
+ } else {
220
+ gen_gvec_sabd(size, rd_ofs, rn_ofs, rm_ofs,
221
+ vec_size, vec_size);
222
+ }
223
+ return 0;
224
+
225
case NEON_3R_VADD_VSUB:
226
case NEON_3R_LOGIC:
227
case NEON_3R_VMAX:
228
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
229
case NEON_3R_VQRSHL:
230
GEN_NEON_INTEGER_OP_ENV(qrshl);
231
break;
232
- case NEON_3R_VABD:
233
- GEN_NEON_INTEGER_OP(abd);
234
- break;
235
case NEON_3R_VABA:
236
GEN_NEON_INTEGER_OP(abd);
237
tcg_temp_free_i32(tmp2);
238
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
239
index XXXXXXX..XXXXXXX 100644
240
--- a/target/arm/vec_helper.c
241
+++ b/target/arm/vec_helper.c
242
@@ -XXX,XX +XXX,XX @@ DO_CMP0(gvec_cgt0_h, int16_t, >)
243
DO_CMP0(gvec_cge0_h, int16_t, >=)
244
245
#undef DO_CMP0
246
+
247
+#define DO_ABD(NAME, TYPE) \
248
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
249
+{ \
250
+ intptr_t i, opr_sz = simd_oprsz(desc); \
251
+ TYPE *d = vd, *n = vn, *m = vm; \
252
+ \
253
+ for (i = 0; i < opr_sz / sizeof(TYPE); ++i) { \
254
+ d[i] = n[i] < m[i] ? m[i] - n[i] : n[i] - m[i]; \
255
+ } \
256
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
257
+}
258
+
259
+DO_ABD(gvec_sabd_b, int8_t)
260
+DO_ABD(gvec_sabd_h, int16_t)
261
+DO_ABD(gvec_sabd_s, int32_t)
262
+DO_ABD(gvec_sabd_d, int64_t)
263
+
264
+DO_ABD(gvec_uabd_b, uint8_t)
265
+DO_ABD(gvec_uabd_h, uint16_t)
266
+DO_ABD(gvec_uabd_s, uint32_t)
267
+DO_ABD(gvec_uabd_d, uint64_t)
268
+
269
+#undef DO_ABD
270
--
2.20.1
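
[Standalone sketch, not from the patch: what one 32-bit lane of SABD
computes. The fni4 expansion above uses two subtractions plus a branch-free
select (movcond); the vector expansion gets the same result from the
identity abd(a, b) = max(a, b) - min(a, b). The function name sabd32 is
invented here for illustration.]

    static int32_t sabd32(int32_t a, int32_t b)
    {
        int32_t t = a - b;      /* used when a >= b */
        int32_t d = b - a;      /* used when a <  b */
        return a < b ? d : t;   /* movcond(TCG_COND_LT, d, a, b, d, t) */
    }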
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Include 64-bit element size in preparation for SVE2.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200513163245.17915-17-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 17 +++--
11
target/arm/translate.h | 5 ++
12
target/arm/neon_helper.c | 10 ---
13
target/arm/translate-a64.c | 17 ++---
14
target/arm/translate.c | 134 +++++++++++++++++++++++++++++++++++--
15
target/arm/vec_helper.c | 24 +++++++
16
6 files changed, 174 insertions(+), 33 deletions(-)
17
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_pmax_s8, i32, i32, i32)
23
DEF_HELPER_2(neon_pmax_u16, i32, i32, i32)
24
DEF_HELPER_2(neon_pmax_s16, i32, i32, i32)
25
26
-DEF_HELPER_2(neon_abd_u8, i32, i32, i32)
27
-DEF_HELPER_2(neon_abd_s8, i32, i32, i32)
28
-DEF_HELPER_2(neon_abd_u16, i32, i32, i32)
29
-DEF_HELPER_2(neon_abd_s16, i32, i32, i32)
30
-DEF_HELPER_2(neon_abd_u32, i32, i32, i32)
31
-DEF_HELPER_2(neon_abd_s32, i32, i32, i32)
32
-
33
DEF_HELPER_2(neon_shl_u16, i32, i32, i32)
34
DEF_HELPER_2(neon_shl_s16, i32, i32, i32)
35
DEF_HELPER_2(neon_rshl_u8, i32, i32, i32)
36
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_uabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
DEF_HELPER_FLAGS_4(gvec_uabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
DEF_HELPER_FLAGS_4(gvec_uabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
40
+DEF_HELPER_FLAGS_4(gvec_saba_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_4(gvec_saba_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_4(gvec_saba_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_4(gvec_saba_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
+
45
+DEF_HELPER_FLAGS_4(gvec_uaba_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_4(gvec_uaba_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(gvec_uaba_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_4(gvec_uaba_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
+
50
#ifdef TARGET_AARCH64
51
#include "helper-a64.h"
52
#include "helper-sve.h"
53
diff --git a/target/arm/translate.h b/target/arm/translate.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/translate.h
56
+++ b/target/arm/translate.h
57
@@ -XXX,XX +XXX,XX @@ void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
58
void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
59
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
60
61
+void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
62
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
63
+void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
64
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
65
+
66
/*
67
* Forward to the isar_feature_* tests given a DisasContext pointer.
68
*/
69
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/neon_helper.c
72
+++ b/target/arm/neon_helper.c
73
@@ -XXX,XX +XXX,XX @@ NEON_POP(pmax_s16, neon_s16, 2)
74
NEON_POP(pmax_u16, neon_u16, 2)
75
#undef NEON_FN
76
77
-#define NEON_FN(dest, src1, src2) \
78
- dest = (src1 > src2) ? (src1 - src2) : (src2 - src1)
79
-NEON_VOP(abd_s8, neon_s8, 4)
80
-NEON_VOP(abd_u8, neon_u8, 4)
81
-NEON_VOP(abd_s16, neon_s16, 2)
82
-NEON_VOP(abd_u16, neon_u16, 2)
83
-NEON_VOP(abd_s32, neon_s32, 1)
84
-NEON_VOP(abd_u32, neon_u32, 1)
85
-#undef NEON_FN
86
-
87
#define NEON_FN(dest, src1, src2) do { \
88
int8_t tmp; \
89
tmp = (int8_t)src2; \
90
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/target/arm/translate-a64.c
93
+++ b/target/arm/translate-a64.c
94
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
95
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sabd, size);
96
}
97
return;
98
+ case 0xf: /* SABA, UABA */
99
+ if (u) {
100
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uaba, size);
101
+ } else {
102
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_saba, size);
103
+ }
104
+ return;
105
case 0x10: /* ADD, SUB */
106
if (u) {
107
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_sub, size);
108
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
109
genenvfn = fns[size][u];
110
break;
111
}
112
- case 0xf: /* SABA, UABA */
113
- {
114
- static NeonGenTwoOpFn * const fns[3][2] = {
115
- { gen_helper_neon_abd_s8, gen_helper_neon_abd_u8 },
116
- { gen_helper_neon_abd_s16, gen_helper_neon_abd_u16 },
117
- { gen_helper_neon_abd_s32, gen_helper_neon_abd_u32 },
118
- };
119
- genfn = fns[size][u];
120
- break;
121
- }
122
case 0x16: /* SQDMULH, SQRDMULH */
123
{
124
static NeonGenTwoOpEnvFn * const fns[2][2] = {
125
diff --git a/target/arm/translate.c b/target/arm/translate.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/translate.c
128
+++ b/target/arm/translate.c
129
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
130
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
131
}
132
133
+static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
134
+{
135
+ TCGv_i32 t = tcg_temp_new_i32();
136
+ gen_sabd_i32(t, a, b);
137
+ tcg_gen_add_i32(d, d, t);
138
+ tcg_temp_free_i32(t);
139
+}
140
+
141
+static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
142
+{
143
+ TCGv_i64 t = tcg_temp_new_i64();
144
+ gen_sabd_i64(t, a, b);
145
+ tcg_gen_add_i64(d, d, t);
146
+ tcg_temp_free_i64(t);
147
+}
148
+
149
+static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
150
+{
151
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
152
+ gen_sabd_vec(vece, t, a, b);
153
+ tcg_gen_add_vec(vece, d, d, t);
154
+ tcg_temp_free_vec(t);
155
+}
156
+
157
+void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
158
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
159
+{
160
+ static const TCGOpcode vecop_list[] = {
161
+ INDEX_op_sub_vec, INDEX_op_add_vec,
162
+ INDEX_op_smin_vec, INDEX_op_smax_vec, 0
163
+ };
164
+ static const GVecGen3 ops[4] = {
165
+ { .fniv = gen_saba_vec,
166
+ .fno = gen_helper_gvec_saba_b,
167
+ .opt_opc = vecop_list,
168
+ .load_dest = true,
169
+ .vece = MO_8 },
170
+ { .fniv = gen_saba_vec,
171
+ .fno = gen_helper_gvec_saba_h,
172
+ .opt_opc = vecop_list,
173
+ .load_dest = true,
174
+ .vece = MO_16 },
175
+ { .fni4 = gen_saba_i32,
176
+ .fniv = gen_saba_vec,
177
+ .fno = gen_helper_gvec_saba_s,
178
+ .opt_opc = vecop_list,
179
+ .load_dest = true,
180
+ .vece = MO_32 },
181
+ { .fni8 = gen_saba_i64,
182
+ .fniv = gen_saba_vec,
183
+ .fno = gen_helper_gvec_saba_d,
184
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
185
+ .opt_opc = vecop_list,
186
+ .load_dest = true,
187
+ .vece = MO_64 },
188
+ };
189
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
190
+}
191
+
192
+static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
193
+{
194
+ TCGv_i32 t = tcg_temp_new_i32();
195
+ gen_uabd_i32(t, a, b);
196
+ tcg_gen_add_i32(d, d, t);
197
+ tcg_temp_free_i32(t);
198
+}
199
+
200
+static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
201
+{
202
+ TCGv_i64 t = tcg_temp_new_i64();
203
+ gen_uabd_i64(t, a, b);
204
+ tcg_gen_add_i64(d, d, t);
205
+ tcg_temp_free_i64(t);
206
+}
207
+
208
+static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
209
+{
210
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
211
+ gen_uabd_vec(vece, t, a, b);
212
+ tcg_gen_add_vec(vece, d, d, t);
213
+ tcg_temp_free_vec(t);
214
+}
215
+
216
+void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
217
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
218
+{
219
+ static const TCGOpcode vecop_list[] = {
220
+ INDEX_op_sub_vec, INDEX_op_add_vec,
221
+ INDEX_op_umin_vec, INDEX_op_umax_vec, 0
222
+ };
223
+ static const GVecGen3 ops[4] = {
224
+ { .fniv = gen_uaba_vec,
225
+ .fno = gen_helper_gvec_uaba_b,
226
+ .opt_opc = vecop_list,
227
+ .load_dest = true,
228
+ .vece = MO_8 },
229
+ { .fniv = gen_uaba_vec,
230
+ .fno = gen_helper_gvec_uaba_h,
231
+ .opt_opc = vecop_list,
232
+ .load_dest = true,
233
+ .vece = MO_16 },
234
+ { .fni4 = gen_uaba_i32,
235
+ .fniv = gen_uaba_vec,
236
+ .fno = gen_helper_gvec_uaba_s,
237
+ .opt_opc = vecop_list,
238
+ .load_dest = true,
239
+ .vece = MO_32 },
240
+ { .fni8 = gen_uaba_i64,
241
+ .fniv = gen_uaba_vec,
242
+ .fno = gen_helper_gvec_uaba_d,
243
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
244
+ .opt_opc = vecop_list,
245
+ .load_dest = true,
246
+ .vece = MO_64 },
247
+ };
248
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
249
+}
250
+
251
/* Translate a NEON data processing instruction. Return nonzero if the
252
instruction is invalid.
253
We process data in a mixture of 32-bit and 64-bit chunks.
254
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
255
}
256
return 0;
257
258
+ case NEON_3R_VABA:
259
+ if (u) {
260
+ gen_gvec_uaba(size, rd_ofs, rn_ofs, rm_ofs,
261
+ vec_size, vec_size);
262
+ } else {
263
+ gen_gvec_saba(size, rd_ofs, rn_ofs, rm_ofs,
264
+ vec_size, vec_size);
265
+ }
266
+ return 0;
267
+
268
case NEON_3R_VADD_VSUB:
269
case NEON_3R_LOGIC:
270
case NEON_3R_VMAX:
271
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
272
case NEON_3R_VQRSHL:
273
GEN_NEON_INTEGER_OP_ENV(qrshl);
274
break;
275
- case NEON_3R_VABA:
276
- GEN_NEON_INTEGER_OP(abd);
277
- tcg_temp_free_i32(tmp2);
278
- tmp2 = neon_load_reg(rd, pass);
279
- gen_neon_add(size, tmp, tmp2);
280
- break;
281
case NEON_3R_VPMAX:
282
GEN_NEON_INTEGER_OP(pmax);
283
break;
284
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
285
index XXXXXXX..XXXXXXX 100644
286
--- a/target/arm/vec_helper.c
287
+++ b/target/arm/vec_helper.c
288
@@ -XXX,XX +XXX,XX @@ DO_ABD(gvec_uabd_s, uint32_t)
289
DO_ABD(gvec_uabd_d, uint64_t)
290
291
#undef DO_ABD
292
+
293
+#define DO_ABA(NAME, TYPE) \
294
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
295
+{ \
296
+ intptr_t i, opr_sz = simd_oprsz(desc); \
297
+ TYPE *d = vd, *n = vn, *m = vm; \
298
+ \
299
+ for (i = 0; i < opr_sz / sizeof(TYPE); ++i) { \
300
+ d[i] += n[i] < m[i] ? m[i] - n[i] : n[i] - m[i]; \
301
+ } \
302
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
303
+}
304
+
305
+DO_ABA(gvec_saba_b, int8_t)
306
+DO_ABA(gvec_saba_h, int16_t)
307
+DO_ABA(gvec_saba_s, int32_t)
308
+DO_ABA(gvec_saba_d, int64_t)
309
+
310
+DO_ABA(gvec_uaba_b, uint8_t)
311
+DO_ABA(gvec_uaba_h, uint16_t)
312
+DO_ABA(gvec_uaba_s, uint32_t)
313
+DO_ABA(gvec_uaba_d, uint64_t)
314
+
315
+#undef DO_ABA
316
--
2.20.1
Deleted patch
1
From: Patrick Williams <patrick@stwcx.xyz>
2
1
3
Sonora Pass is a 2 socket x86 motherboard designed by Facebook
4
and supported by OpenBMC. Strapping configuration was obtained
5
from hardware and i2c configuration is based on dts found at:
6
7
https://github.com/facebook/openbmc-linux/blob/1633c87b8ba7c162095787c988979b748ba65dc8/arch/arm/boot/dts/aspeed-bmc-facebook-sonorapass.dts
8
9
Booted a test image of http://github.com/facebook/openbmc to login
10
prompt.
11
12
Signed-off-by: Patrick Williams <patrick@stwcx.xyz>
13
Reviewed-by: Amithash Prasad <amithash@fb.com>
14
Reviewed-by: Cédric Le Goater <clg@kaod.org>
15
[PMM: fixed block comment style nit]
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
hw/arm/aspeed.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++
19
1 file changed, 78 insertions(+)
20
21
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/aspeed.c
24
+++ b/hw/arm/aspeed.c
25
@@ -XXX,XX +XXX,XX @@ struct AspeedBoardState {
26
SCU_AST2500_HW_STRAP_ACPI_ENABLE | \
27
SCU_HW_STRAP_SPI_MODE(SCU_HW_STRAP_SPI_MASTER))
28
29
+/* Sonorapass hardware value: 0xF100D216 */
30
+#define SONORAPASS_BMC_HW_STRAP1 ( \
31
+ SCU_AST2500_HW_STRAP_SPI_AUTOFETCH_ENABLE | \
32
+ SCU_AST2500_HW_STRAP_GPIO_STRAP_ENABLE | \
33
+ SCU_AST2500_HW_STRAP_UART_DEBUG | \
34
+ SCU_AST2500_HW_STRAP_RESERVED28 | \
35
+ SCU_AST2500_HW_STRAP_DDR4_ENABLE | \
36
+ SCU_HW_STRAP_VGA_CLASS_CODE | \
37
+ SCU_HW_STRAP_LPC_RESET_PIN | \
38
+ SCU_HW_STRAP_SPI_MODE(SCU_HW_STRAP_SPI_MASTER) | \
39
+ SCU_AST2500_HW_STRAP_SET_AXI_AHB_RATIO(AXI_AHB_RATIO_2_1) | \
40
+ SCU_HW_STRAP_VGA_BIOS_ROM | \
41
+ SCU_HW_STRAP_VGA_SIZE_SET(VGA_16M_DRAM) | \
42
+ SCU_AST2500_HW_STRAP_RESERVED1)
43
+
44
/* Swift hardware value: 0xF11AD206 */
45
#define SWIFT_BMC_HW_STRAP1 ( \
46
AST2500_HW_STRAP1_DEFAULTS | \
47
@@ -XXX,XX +XXX,XX @@ static void swift_bmc_i2c_init(AspeedBoardState *bmc)
48
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 12), "tmp105", 0x4a);
49
}
50
51
+static void sonorapass_bmc_i2c_init(AspeedBoardState *bmc)
52
+{
53
+ AspeedSoCState *soc = &bmc->soc;
54
+
55
+ /* bus 2 : */
56
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 2), "tmp105", 0x48);
57
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 2), "tmp105", 0x49);
58
+ /* bus 2 : pca9546 @ 0x73 */
59
+
60
+ /* bus 3 : pca9548 @ 0x70 */
61
+
62
+ /* bus 4 : */
63
+ uint8_t *eeprom4_54 = g_malloc0(8 * 1024);
64
+ smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), 0x54,
65
+ eeprom4_54);
66
+ /* PCA9539 @ 0x76, but PCA9552 is compatible */
67
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "pca9552", 0x76);
68
+ /* PCA9539 @ 0x77, but PCA9552 is compatible */
69
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "pca9552", 0x77);
70
+
71
+ /* bus 6 : */
72
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 6), "tmp105", 0x48);
73
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 6), "tmp105", 0x49);
74
+ /* bus 6 : pca9546 @ 0x73 */
75
+
76
+ /* bus 8 : */
77
+ uint8_t *eeprom8_56 = g_malloc0(8 * 1024);
78
+ smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 8), 0x56,
79
+ eeprom8_56);
80
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 8), "pca9552", 0x60);
81
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 8), "pca9552", 0x61);
82
+ /* bus 8 : adc128d818 @ 0x1d */
83
+ /* bus 8 : adc128d818 @ 0x1f */
84
+
85
+ /*
86
+ * bus 13 : pca9548 @ 0x71
87
+ * - channel 3:
88
+ * - tmm421 @ 0x4c
89
+ * - tmp421 @ 0x4e
90
+ * - tmp421 @ 0x4f
91
+ */
92
+
93
+}
94
+
95
static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
96
{
97
AspeedSoCState *soc = &bmc->soc;
98
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_romulus_class_init(ObjectClass *oc, void *data)
99
mc->default_ram_size = 512 * MiB;
100
};
101
102
+static void aspeed_machine_sonorapass_class_init(ObjectClass *oc, void *data)
103
+{
104
+ MachineClass *mc = MACHINE_CLASS(oc);
105
+ AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
106
+
107
+ mc->desc = "OCP SonoraPass BMC (ARM1176)";
108
+ amc->soc_name = "ast2500-a1";
109
+ amc->hw_strap1 = SONORAPASS_BMC_HW_STRAP1;
110
+ amc->fmc_model = "mx66l1g45g";
111
+ amc->spi_model = "mx66l1g45g";
112
+ amc->num_cs = 2;
113
+ amc->i2c_init = sonorapass_bmc_i2c_init;
114
+ mc->default_ram_size = 512 * MiB;
115
+};
116
+
117
static void aspeed_machine_swift_class_init(ObjectClass *oc, void *data)
118
{
119
MachineClass *mc = MACHINE_CLASS(oc);
120
@@ -XXX,XX +XXX,XX @@ static const TypeInfo aspeed_machine_types[] = {
121
.name = MACHINE_TYPE_NAME("swift-bmc"),
122
.parent = TYPE_ASPEED_MACHINE,
123
.class_init = aspeed_machine_swift_class_init,
124
+ }, {
125
+ .name = MACHINE_TYPE_NAME("sonorapass-bmc"),
126
+ .parent = TYPE_ASPEED_MACHINE,
127
+ .class_init = aspeed_machine_sonorapass_class_init,
128
}, {
129
.name = MACHINE_TYPE_NAME("witherspoon-bmc"),
130
.parent = TYPE_ASPEED_MACHINE,
131
--
2.20.1
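
[Usage note, not part of the patch: like the other AST2500-based machines,
the new board should be selectable with "-M sonorapass-bmc", with the BMC
firmware image attached as an MTD flash drive, e.g.
"-drive file=obmc.mtd,format=raw,if=mtd"; the file name here is only an
example.]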
Deleted patch
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
2
1
3
RAS Virtualization feature is not supported now, so
4
add a RAS machine option and disable it by default.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
8
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
9
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
10
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
11
Message-id: 20200512030609.19593-3-gengdongjiu@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
include/hw/arm/virt.h | 1 +
15
hw/arm/virt.c | 23 +++++++++++++++++++++++
16
2 files changed, 24 insertions(+)
17
18
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/virt.h
21
+++ b/include/hw/arm/virt.h
22
@@ -XXX,XX +XXX,XX @@ typedef struct {
23
bool highmem_ecam;
24
bool its;
25
bool virt;
26
+ bool ras;
27
OnOffAuto acpi;
28
VirtGICType gic_version;
29
VirtIOMMUType iommu;
30
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/arm/virt.c
33
+++ b/hw/arm/virt.c
34
@@ -XXX,XX +XXX,XX @@ static void virt_set_acpi(Object *obj, Visitor *v, const char *name,
35
visit_type_OnOffAuto(v, name, &vms->acpi, errp);
36
}
37
38
+static bool virt_get_ras(Object *obj, Error **errp)
39
+{
40
+ VirtMachineState *vms = VIRT_MACHINE(obj);
41
+
42
+ return vms->ras;
43
+}
44
+
45
+static void virt_set_ras(Object *obj, bool value, Error **errp)
46
+{
47
+ VirtMachineState *vms = VIRT_MACHINE(obj);
48
+
49
+ vms->ras = value;
50
+}
51
+
52
static char *virt_get_gic_version(Object *obj, Error **errp)
53
{
54
VirtMachineState *vms = VIRT_MACHINE(obj);
55
@@ -XXX,XX +XXX,XX @@ static void virt_instance_init(Object *obj)
56
"Valid values are none and smmuv3",
57
NULL);
58
59
+ /* Default disallows RAS instantiation */
60
+ vms->ras = false;
61
+ object_property_add_bool(obj, "ras", virt_get_ras,
62
+ virt_set_ras, NULL);
63
+ object_property_set_description(obj, "ras",
64
+ "Set on/off to enable/disable reporting host memory errors "
65
+ "to a KVM guest using ACPI and guest external abort exceptions",
66
+ NULL);
67
+
68
vms->irqmap = a15irqmap;
69
70
virt_flash_create(vms);
71
--
2.20.1
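
[Usage note, not part of the patch: with this property in place the feature
is requested on the command line as "-machine virt,ras=on". It stays
disabled unless explicitly enabled, matching the vms->ras = false default
above, and the rest of the series uses it to forward host memory errors to
KVM guests.]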
Deleted patch
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
2
1
3
Add APEI/GHES detailed design document
4
5
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
6
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
7
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
8
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
9
Message-id: 20200512030609.19593-4-gengdongjiu@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
docs/specs/acpi_hest_ghes.rst | 110 ++++++++++++++++++++++++++++++++++
13
docs/specs/index.rst | 1 +
14
2 files changed, 111 insertions(+)
15
create mode 100644 docs/specs/acpi_hest_ghes.rst
16
17
diff --git a/docs/specs/acpi_hest_ghes.rst b/docs/specs/acpi_hest_ghes.rst
18
new file mode 100644
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/docs/specs/acpi_hest_ghes.rst
22
@@ -XXX,XX +XXX,XX @@
23
+APEI tables generating and CPER record
24
+======================================
25
+
26
+..
27
+ Copyright (c) 2020 HUAWEI TECHNOLOGIES CO., LTD.
28
+
29
+ This work is licensed under the terms of the GNU GPL, version 2 or later.
30
+ See the COPYING file in the top-level directory.
31
+
32
+Design Details
33
+--------------
34
+
35
+::
36
+
37
+ etc/acpi/tables etc/hardware_errors
38
+ ==================== ===============================
39
+ + +--------------------------+ +----------------------------+
40
+ | | HEST | +--------->| error_block_address1 |------+
41
+ | +--------------------------+ | +----------------------------+ |
42
+ | | GHES1 | | +------->| error_block_address2 |------+-+
43
+ | +--------------------------+ | | +----------------------------+ | |
44
+ | | ................. | | | | .............. | | |
45
+ | | error_status_address-----+-+ | -----------------------------+ | |
46
+ | | ................. | | +--->| error_block_addressN |------+-+---+
47
+ | | read_ack_register--------+-+ | | +----------------------------+ | | |
48
+ | | read_ack_preserve | +-+---+--->| read_ack_register1 | | | |
49
+ | | read_ack_write | | | +----------------------------+ | | |
50
+ + +--------------------------+ | +-+--->| read_ack_register2 | | | |
51
+ | | GHES2 | | | | +----------------------------+ | | |
52
+ + +--------------------------+ | | | | ............. | | | |
53
+ | | ................. | | | | +----------------------------+ | | |
54
+ | | error_status_address-----+---+ | | +->| read_ack_registerN | | | |
55
+ | | ................. | | | | +----------------------------+ | | |
56
+ | | read_ack_register--------+-----+ | | |Generic Error Status Block 1|<-----+ | |
57
+ | | read_ack_preserve | | | |-+------------------------+-+ | |
58
+ | | read_ack_write | | | | | CPER | | | |
59
+ + +--------------------------| | | | | CPER | | | |
60
+ | | ............... | | | | | .... | | | |
61
+ + +--------------------------+ | | | | CPER | | | |
62
+ | | GHESN | | | |-+------------------------+-| | |
63
+ + +--------------------------+ | | |Generic Error Status Block 2|<-------+ |
64
+ | | ................. | | | |-+------------------------+-+ |
65
+ | | error_status_address-----+-------+ | | | CPER | | |
66
+ | | ................. | | | | CPER | | |
67
+ | | read_ack_register--------+---------+ | | .... | | |
68
+ | | read_ack_preserve | | | CPER | | |
69
+ | | read_ack_write | +-+------------------------+-+ |
70
+ + +--------------------------+ | .......... | |
71
+ |----------------------------+ |
72
+ |Generic Error Status Block N |<----------+
73
+ |-+-------------------------+-+
74
+ | | CPER | |
75
+ | | CPER | |
76
+ | | .... | |
77
+ | | CPER | |
78
+ +-+-------------------------+-+
79
+
80
+
81
+(1) QEMU generates the ACPI HEST table. This table goes in the current
82
+ "etc/acpi/tables" fw_cfg blob. Each error source has different
83
+ notification types.
84
+
85
+(2) A new fw_cfg blob called "etc/hardware_errors" is introduced. QEMU
86
+ also needs to populate this blob. The "etc/hardware_errors" fw_cfg blob
87
+ contains an address registers table and an Error Status Data Block table.
88
+
89
+(3) The address registers table contains N Error Block Address entries
90
+ and N Read Ack Register entries. The size for each entry is 8-byte.
91
+ The Error Status Data Block table contains N Error Status Data Block
92
+ entries. The size for each entry is 4096(0x1000) bytes. The total size
93
+ for the "etc/hardware_errors" fw_cfg blob is (N * 8 * 2 + N * 4096) bytes.
94
+ N is the number of the kinds of hardware error sources.
95
+
96
+(4) QEMU generates the ACPI linker/loader script for the firmware. The
97
+ firmware pre-allocates memory for "etc/acpi/tables", "etc/hardware_errors"
98
+ and copies blob contents there.
99
+
100
+(5) QEMU generates N ADD_POINTER commands, which patch addresses in the
101
+ "error_status_address" fields of the HEST table with a pointer to the
102
+ corresponding "address registers" in the "etc/hardware_errors" blob.
103
+
104
+(6) QEMU generates N ADD_POINTER commands, which patch addresses in the
105
+ "read_ack_register" fields of the HEST table with a pointer to the
106
+ corresponding "read_ack_register" within the "etc/hardware_errors" blob.
107
+
108
+(7) QEMU generates N ADD_POINTER commands for the firmware, which patch
109
+ addresses in the "error_block_address" fields with a pointer to the
110
+ respective "Error Status Data Block" in the "etc/hardware_errors" blob.
111
+
112
+(8) QEMU defines a third and write-only fw_cfg blob which is called
113
+ "etc/hardware_errors_addr". Through that blob, the firmware can send back
114
+ the guest-side allocation addresses to QEMU. The "etc/hardware_errors_addr"
115
+ blob contains a 8-byte entry. QEMU generates a single WRITE_POINTER command
116
+ for the firmware. The firmware will write back the start address of
117
+ "etc/hardware_errors" blob to the fw_cfg file "etc/hardware_errors_addr".
118
+
119
+(9) When QEMU gets a SIGBUS from the kernel, QEMU writes CPER into corresponding
120
+ "Error Status Data Block", guest memory, and then injects platform specific
121
+ interrupt (in case of arm/virt machine it's Synchronous External Abort) as a
122
+ notification which is necessary for notifying the guest.
123
+
124
+(10) This notification (in virtual hardware) will be handled by the guest
125
+ kernel, on receiving notification, guest APEI driver could read the CPER error
126
+ and take appropriate action.
127
+
128
+(11) kvm_arch_on_sigbus_vcpu() uses source_id as index in "etc/hardware_errors" to
129
+ find out "Error Status Data Block" entry corresponding to error source. So supported
130
+ source_id values should be assigned here and not be changed afterwards to make sure
131
+ that guest will write error into expected "Error Status Data Block" even if guest was
132
+ migrated to a newer QEMU.
133
diff --git a/docs/specs/index.rst b/docs/specs/index.rst
134
index XXXXXXX..XXXXXXX 100644
135
--- a/docs/specs/index.rst
136
+++ b/docs/specs/index.rst
137
@@ -XXX,XX +XXX,XX @@ Contents:
138
ppc-spapr-xive
139
acpi_hw_reduced_hotplug
140
tpm
141
+ acpi_hest_ghes
142
--
2.20.1
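
[Worked example, not in the document itself: with the single SEA error
source this series defines, N = 1, so the "etc/hardware_errors" blob is
N * 8 * 2 + N * 4096 = 4112 bytes: one error block address, one read ack
register, and one 4 KiB Error Status Data Block.]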
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
This patch builds Hardware Error Source Table(HEST) via fw_cfg blobs.
3
* Add TZASC as unimplemented device.
4
Now it only supports ARMv8 SEA, a type of Generic Hardware Error
4
- Allow bare metal application to access this (unimplemented) device
5
Source version 2(GHESv2) error source. Afterwards, we can extend
5
* Add CSU as unimplemented device.
6
the supported types if needed. For the CPER section, currently it
6
- Allow bare metal application to access this (unimplemented) device
7
is memory section because kernel mainly wants userspace to handle
7
* Add various memory segments
8
the memory errors.
8
- OCRAM
9
- OCRAM EPDC
10
- OCRAM PXP
11
- OCRAM S
12
- ROM
13
- CAAM
9
14
10
This patch follows the spec ACPI 6.2 to build the Hardware Error
15
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
11
Source table. For more detailed information, please refer to
16
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
document: docs/specs/acpi_hest_ghes.rst
17
Message-id: f887a3483996ba06d40bd62ffdfb0ecf68621987.1692964892.git.jcd@tribudubois.net
13
14
build_ghes_hw_error_notification() helper will help to add Hardware
15
Error Notification to ACPI tables without using packed C structures
16
and avoid endianness issues as API doesn't need explicit conversion.
17
18
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
19
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
20
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
21
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
22
Message-id: 20200512030609.19593-6-gengdongjiu@huawei.com
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
19
---
25
include/hw/acpi/ghes.h | 39 ++++++++++++
20
include/hw/arm/fsl-imx7.h | 7 +++++
26
hw/acpi/ghes.c | 126 +++++++++++++++++++++++++++++++++++++++
21
hw/arm/fsl-imx7.c | 63 +++++++++++++++++++++++++++++++++++++++
27
hw/arm/virt-acpi-build.c | 2 +
22
2 files changed, 70 insertions(+)
28
3 files changed, 167 insertions(+)
29
23
30
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
24
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
31
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
32
--- a/include/hw/acpi/ghes.h
26
--- a/include/hw/arm/fsl-imx7.h
33
+++ b/include/hw/acpi/ghes.h
27
+++ b/include/hw/arm/fsl-imx7.h
34
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
35
29
IMX7GPRState gpr;
36
#include "hw/acpi/bios-linker-loader.h"
30
ChipideaState usb[FSL_IMX7_NUM_USBS];
37
31
DesignwarePCIEHost pcie;
38
+/*
32
+ MemoryRegion rom;
39
+ * Values for Hardware Error Notification Type field
33
+ MemoryRegion caam;
40
+ */
34
+ MemoryRegion ocram;
41
+enum AcpiGhesNotifyType {
35
+ MemoryRegion ocram_epdc;
42
+ /* Polled */
36
+ MemoryRegion ocram_pxp;
43
+ ACPI_GHES_NOTIFY_POLLED = 0,
37
+ MemoryRegion ocram_s;
44
+ /* External Interrupt */
45
+ ACPI_GHES_NOTIFY_EXTERNAL = 1,
46
+ /* Local Interrupt */
47
+ ACPI_GHES_NOTIFY_LOCAL = 2,
48
+ /* SCI */
49
+ ACPI_GHES_NOTIFY_SCI = 3,
50
+ /* NMI */
51
+ ACPI_GHES_NOTIFY_NMI = 4,
52
+ /* CMCI, ACPI 5.0: 18.3.2.7, Table 18-290 */
53
+ ACPI_GHES_NOTIFY_CMCI = 5,
54
+ /* MCE, ACPI 5.0: 18.3.2.7, Table 18-290 */
55
+ ACPI_GHES_NOTIFY_MCE = 6,
56
+ /* GPIO-Signal, ACPI 6.0: 18.3.2.7, Table 18-332 */
57
+ ACPI_GHES_NOTIFY_GPIO = 7,
58
+ /* ARMv8 SEA, ACPI 6.1: 18.3.2.9, Table 18-345 */
59
+ ACPI_GHES_NOTIFY_SEA = 8,
60
+ /* ARMv8 SEI, ACPI 6.1: 18.3.2.9, Table 18-345 */
61
+ ACPI_GHES_NOTIFY_SEI = 9,
62
+ /* External Interrupt - GSIV, ACPI 6.1: 18.3.2.9, Table 18-345 */
63
+ ACPI_GHES_NOTIFY_GSIV = 10,
64
+ /* Software Delegated Exception, ACPI 6.2: 18.3.2.9, Table 18-383 */
65
+ ACPI_GHES_NOTIFY_SDEI = 11,
66
+ /* 12 and greater are reserved */
67
+ ACPI_GHES_NOTIFY_RESERVED = 12
68
+};
69
+
38
+
70
+enum {
39
uint32_t phy_num[FSL_IMX7_NUM_ETHS];
71
+ ACPI_HEST_SRC_ID_SEA = 0,
40
bool phy_connected[FSL_IMX7_NUM_ETHS];
72
+ /* future ids go here */
41
};
73
+ ACPI_HEST_SRC_ID_RESERVED,
42
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
74
+};
75
+
76
void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
77
+void acpi_build_hest(GArray *table_data, BIOSLinker *linker);
78
#endif
79
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
80
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/acpi/ghes.c
44
--- a/hw/arm/fsl-imx7.c
82
+++ b/hw/acpi/ghes.c
45
+++ b/hw/arm/fsl-imx7.c
83
@@ -XXX,XX +XXX,XX @@
46
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
84
#include "qemu/units.h"
47
create_unimplemented_device("pcie-phy", FSL_IMX7_PCIE_PHY_ADDR,
85
#include "hw/acpi/ghes.h"
48
FSL_IMX7_PCIE_PHY_SIZE);
86
#include "hw/acpi/aml-build.h"
49
87
+#include "qemu/error-report.h"
88
89
#define ACPI_GHES_ERRORS_FW_CFG_FILE "etc/hardware_errors"
90
#define ACPI_GHES_DATA_ADDR_FW_CFG_FILE "etc/hardware_errors_addr"
91
@@ -XXX,XX +XXX,XX @@
92
/* Now only support ARMv8 SEA notification type error source */
93
#define ACPI_GHES_ERROR_SOURCE_COUNT 1
94
95
+/* Generic Hardware Error Source version 2 */
96
+#define ACPI_GHES_SOURCE_GENERIC_ERROR_V2 10
97
+
98
+/* Address offset in Generic Address Structure(GAS) */
99
+#define GAS_ADDR_OFFSET 4
100
+
101
+/*
102
+ * Hardware Error Notification
103
+ * ACPI 4.0: 17.3.2.7 Hardware Error Notification
104
+ * Composes dummy Hardware Error Notification descriptor of specified type
105
+ */
106
+static void build_ghes_hw_error_notification(GArray *table, const uint8_t type)
107
+{
108
+ /* Type */
109
+ build_append_int_noprefix(table, type, 1);
110
+ /*
50
+ /*
111
+ * Length:
51
+ * CSU
112
+ * Total length of the structure in bytes
113
+ */
52
+ */
114
+ build_append_int_noprefix(table, 28, 1);
53
+ create_unimplemented_device("csu", FSL_IMX7_CSU_ADDR,
115
+ /* Configuration Write Enable */
54
+ FSL_IMX7_CSU_SIZE);
116
+ build_append_int_noprefix(table, 0, 2);
117
+ /* Poll Interval */
118
+ build_append_int_noprefix(table, 0, 4);
119
+ /* Vector */
120
+ build_append_int_noprefix(table, 0, 4);
121
+ /* Switch To Polling Threshold Value */
122
+ build_append_int_noprefix(table, 0, 4);
123
+ /* Switch To Polling Threshold Window */
124
+ build_append_int_noprefix(table, 0, 4);
125
+ /* Error Threshold Value */
126
+ build_append_int_noprefix(table, 0, 4);
127
+ /* Error Threshold Window */
128
+ build_append_int_noprefix(table, 0, 4);
129
+}
130
+
131
/*
132
* Build table for the hardware error fw_cfg blob.
133
* Initialize "etc/hardware_errors" and "etc/hardware_errors_addr" fw_cfg blobs.
134
@@ -XXX,XX +XXX,XX @@ void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker)
135
bios_linker_loader_write_pointer(linker, ACPI_GHES_DATA_ADDR_FW_CFG_FILE,
136
0, sizeof(uint64_t), ACPI_GHES_ERRORS_FW_CFG_FILE, 0);
137
}
138
+
139
+/* Build Generic Hardware Error Source version 2 (GHESv2) */
140
+static void build_ghes_v2(GArray *table_data, int source_id, BIOSLinker *linker)
141
+{
142
+ uint64_t address_offset;
143
+ /*
144
+ * Type:
145
+ * Generic Hardware Error Source version 2(GHESv2 - Type 10)
146
+ */
147
+ build_append_int_noprefix(table_data, ACPI_GHES_SOURCE_GENERIC_ERROR_V2, 2);
148
+ /* Source Id */
149
+ build_append_int_noprefix(table_data, source_id, 2);
150
+ /* Related Source Id */
151
+ build_append_int_noprefix(table_data, 0xffff, 2);
152
+ /* Flags */
153
+ build_append_int_noprefix(table_data, 0, 1);
154
+ /* Enabled */
155
+ build_append_int_noprefix(table_data, 1, 1);
156
+
157
+ /* Number of Records To Pre-allocate */
158
+ build_append_int_noprefix(table_data, 1, 4);
159
+ /* Max Sections Per Record */
160
+ build_append_int_noprefix(table_data, 1, 4);
161
+ /* Max Raw Data Length */
162
+ build_append_int_noprefix(table_data, ACPI_GHES_MAX_RAW_DATA_LENGTH, 4);
163
+
164
+ address_offset = table_data->len;
165
+ /* Error Status Address */
166
+ build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 0x40, 0,
167
+ 4 /* QWord access */, 0);
168
+ bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
169
+ address_offset + GAS_ADDR_OFFSET, sizeof(uint64_t),
170
+ ACPI_GHES_ERRORS_FW_CFG_FILE, source_id * sizeof(uint64_t));
171
+
172
+ switch (source_id) {
173
+ case ACPI_HEST_SRC_ID_SEA:
174
+ /*
175
+ * Notification Structure
176
+ * Now only enable ARMv8 SEA notification type
177
+ */
178
+ build_ghes_hw_error_notification(table_data, ACPI_GHES_NOTIFY_SEA);
179
+ break;
180
+ default:
181
+ error_report("Not support this error source");
182
+ abort();
183
+ }
184
+
185
+ /* Error Status Block Length */
186
+ build_append_int_noprefix(table_data, ACPI_GHES_MAX_RAW_DATA_LENGTH, 4);
187
+
55
+
188
+ /*
56
+ /*
189
+ * Read Ack Register
57
+ * TZASC
190
+ * ACPI 6.1: 18.3.2.8 Generic Hardware Error Source
191
+ * version 2 (GHESv2 - Type 10)
192
+ */
58
+ */
193
+ address_offset = table_data->len;
59
+ create_unimplemented_device("tzasc", FSL_IMX7_TZASC_ADDR,
194
+ build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 0x40, 0,
60
+ FSL_IMX7_TZASC_SIZE);
195
+ 4 /* QWord access */, 0);
196
+ bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
197
+ address_offset + GAS_ADDR_OFFSET,
198
+ sizeof(uint64_t), ACPI_GHES_ERRORS_FW_CFG_FILE,
199
+ (ACPI_GHES_ERROR_SOURCE_COUNT + source_id) * sizeof(uint64_t));
200
+
61
+
201
+ /*
62
+ /*
202
+ * Read Ack Preserve field
63
+ * OCRAM memory
203
+ * We only provide the first bit in Read Ack Register to OSPM to write
204
+ * while the other bits are preserved.
205
+ */
64
+ */
206
+ build_append_int_noprefix(table_data, ~0x1ULL, 8);
65
+ memory_region_init_ram(&s->ocram, NULL, "imx7.ocram",
207
+ /* Read Ack Write */
66
+ FSL_IMX7_OCRAM_MEM_SIZE,
208
+ build_append_int_noprefix(table_data, 0x1, 8);
67
+ &error_abort);
209
+}
68
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_MEM_ADDR,
69
+ &s->ocram);
210
+
70
+
211
+/* Build Hardware Error Source Table */
71
+ /*
212
+void acpi_build_hest(GArray *table_data, BIOSLinker *linker)
72
+ * OCRAM EPDC memory
213
+{
73
+ */
214
+ uint64_t hest_start = table_data->len;
74
+ memory_region_init_ram(&s->ocram_epdc, NULL, "imx7.ocram_epdc",
75
+ FSL_IMX7_OCRAM_EPDC_SIZE,
76
+ &error_abort);
77
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_EPDC_ADDR,
78
+ &s->ocram_epdc);
215
+
79
+
216
+ /* Hardware Error Source Table header*/
80
+ /*
217
+ acpi_data_push(table_data, sizeof(AcpiTableHeader));
81
+ * OCRAM PXP memory
82
+ */
83
+ memory_region_init_ram(&s->ocram_pxp, NULL, "imx7.ocram_pxp",
84
+ FSL_IMX7_OCRAM_PXP_SIZE,
85
+ &error_abort);
86
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_PXP_ADDR,
87
+ &s->ocram_pxp);
218
+
88
+
219
+ /* Error Source Count */
89
+ /*
220
+ build_append_int_noprefix(table_data, ACPI_GHES_ERROR_SOURCE_COUNT, 4);
90
+ * OCRAM_S memory
91
+ */
92
+ memory_region_init_ram(&s->ocram_s, NULL, "imx7.ocram_s",
93
+ FSL_IMX7_OCRAM_S_SIZE,
94
+ &error_abort);
95
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_S_ADDR,
96
+ &s->ocram_s);
221
+
97
+
222
+ build_ghes_v2(table_data, ACPI_HEST_SRC_ID_SEA, linker);
98
+ /*
99
+ * ROM memory
100
+ */
101
+ memory_region_init_rom(&s->rom, OBJECT(dev), "imx7.rom",
102
+ FSL_IMX7_ROM_SIZE, &error_abort);
103
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_ROM_ADDR,
104
+ &s->rom);
223
+
105
+
224
+ build_header(linker, table_data, (void *)(table_data->data + hest_start),
106
+ /*
225
+ "HEST", table_data->len - hest_start, 1, NULL, NULL);
107
+ * CAAM memory
226
+}
108
+ */
227
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
109
+ memory_region_init_rom(&s->caam, OBJECT(dev), "imx7.caam",
228
index XXXXXXX..XXXXXXX 100644
110
+ FSL_IMX7_CAAM_MEM_SIZE, &error_abort);
229
--- a/hw/arm/virt-acpi-build.c
111
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_CAAM_MEM_ADDR,
230
+++ b/hw/arm/virt-acpi-build.c
112
+ &s->caam);
231
@@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
113
}
232
114
233
if (vms->ras) {
115
static Property fsl_imx7_properties[] = {
234
build_ghes_error_table(tables->hardware_errors, tables->linker);
235
+ acpi_add_table(table_offsets, tables_blob);
236
+ acpi_build_hest(tables_blob, tables->linker);
237
}
238
239
if (ms->numa_state->num_nodes > 0) {
240
--
116
--
241
2.20.1
117
2.34.1
242
118
243
119
diff view generated by jsdifflib
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
This patch builds error_block_address and read_ack_register fields
3
The SRC device is normally used to start the secondary CPU.
4
in the hardware errors table; the error_block_address points to the Generic
4
5
Error Status Block (GESB) via bios_linker. The max size for one GESB
5
When running Linux directly, QEMU emulates a PSCI interface that U-Boot
6
is 1 KiB. For more detailed information, please refer to
6
installs at boot time, and therefore the fact that the SRC device is
7
document: docs/specs/acpi_hest_ghes.rst
7
unimplemented is hidden, as QEMU responds directly to PSCI requests without
8
8
using the SRC device.
9
For now we only support one error source; if necessary, we can extend to
9
10
support more.
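
(Not part of this patch, but for context on how the tables built here get
consumed: the guest OS locates the per-source error status block through the
Error Status Address GAS and acknowledges it through the Read Ack Register.
A minimal sketch of that consumer-side sequence follows; how the two
registers get mapped is left out as an assumption. With the values built
above, ack_preserve is ~0x1ULL and ack_write is 0x1.)

    #include <stdint.h>

    /*
     * Hedged sketch of the OSPM side of a GHESv2 source (ACPI 6.1, 18.3.2.8).
     * 'block_status' points at the mapped Generic Error Status Block header,
     * 'read_ack' at the mapped Read Ack Register for the same source.
     */
    static void ghes_consume_and_ack(volatile uint32_t *block_status,
                                     volatile uint64_t *read_ack,
                                     uint64_t ack_preserve, uint64_t ack_write)
    {
        if (*block_status) {
            /* ...decode the CPER records in the status block here... */
            *block_status = 0;
        }
        /* Keep the preserved bits, then OR in the write value and write back */
        *read_ack = (*read_ack & ack_preserve) | ack_write;
    }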
10
But if you try to run a more bare-metal application (maybe U-Boot itself),
11
11
then it is not possible to start the secondary CPU as the SRC is an
12
Suggested-by: Laszlo Ersek <lersek@redhat.com>
12
unimplemented device.
13
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
13
14
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
14
This patch adds the ability to start the secondary CPU through the SRC
15
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
15
device so that you can use this feature in bare metal applications.
16
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
16
17
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
17
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
18
Message-id: 20200512030609.19593-5-gengdongjiu@huawei.com
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Message-id: ce9a0162defd2acee5dc7f8a674743de0cded569.1692964892.git.jcd@tribudubois.net
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
---
21
default-configs/arm-softmmu.mak | 1 +
22
include/hw/arm/fsl-imx7.h | 3 +-
22
include/hw/acpi/aml-build.h | 1 +
23
include/hw/misc/imx7_src.h | 66 +++++++++
23
include/hw/acpi/ghes.h | 28 +++++++++++
24
hw/arm/fsl-imx7.c | 8 +-
24
hw/acpi/aml-build.c | 2 +
25
hw/misc/imx7_src.c | 276 +++++++++++++++++++++++++++++++++++++
25
hw/acpi/ghes.c | 89 +++++++++++++++++++++++++++++++++
26
hw/misc/meson.build | 1 +
26
hw/arm/virt-acpi-build.c | 5 ++
27
hw/misc/trace-events | 4 +
27
hw/acpi/Kconfig | 4 ++
28
6 files changed, 356 insertions(+), 2 deletions(-)
28
hw/acpi/Makefile.objs | 1 +
29
create mode 100644 include/hw/misc/imx7_src.h
29
8 files changed, 131 insertions(+)
30
create mode 100644 hw/misc/imx7_src.c
30
create mode 100644 include/hw/acpi/ghes.h
31
31
create mode 100644 hw/acpi/ghes.c
32
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
32
33
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
34
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
35
--- a/default-configs/arm-softmmu.mak
34
--- a/include/hw/arm/fsl-imx7.h
36
+++ b/default-configs/arm-softmmu.mak
35
+++ b/include/hw/arm/fsl-imx7.h
37
@@ -XXX,XX +XXX,XX @@ CONFIG_FSL_IMX7=y
36
@@ -XXX,XX +XXX,XX @@
38
CONFIG_FSL_IMX6UL=y
37
#include "hw/misc/imx7_ccm.h"
39
CONFIG_SEMIHOSTING=y
38
#include "hw/misc/imx7_snvs.h"
40
CONFIG_ALLWINNER_H3=y
39
#include "hw/misc/imx7_gpr.h"
41
+CONFIG_ACPI_APEI=y
40
+#include "hw/misc/imx7_src.h"
42
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
41
#include "hw/watchdog/wdt_imx2.h"
43
index XXXXXXX..XXXXXXX 100644
42
#include "hw/gpio/imx_gpio.h"
44
--- a/include/hw/acpi/aml-build.h
43
#include "hw/char/imx_serial.h"
45
+++ b/include/hw/acpi/aml-build.h
44
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
46
@@ -XXX,XX +XXX,XX @@ struct AcpiBuildTables {
45
IMX7CCMState ccm;
47
GArray *rsdp;
46
IMX7AnalogState analog;
48
GArray *tcpalog;
47
IMX7SNVSState snvs;
49
GArray *vmgenid;
48
+ IMX7SRCState src;
50
+ GArray *hardware_errors;
49
IMXGPCv2State gpcv2;
51
BIOSLinker *linker;
50
IMXSPIState spi[FSL_IMX7_NUM_ECSPIS];
52
} AcpiBuildTables;
51
IMXI2CState i2c[FSL_IMX7_NUM_I2CS];
53
52
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
54
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
53
FSL_IMX7_GPC_ADDR = 0x303A0000,
54
55
FSL_IMX7_SRC_ADDR = 0x30390000,
56
- FSL_IMX7_SRC_SIZE = (4 * KiB),
57
58
FSL_IMX7_CCM_ADDR = 0x30380000,
59
60
diff --git a/include/hw/misc/imx7_src.h b/include/hw/misc/imx7_src.h
55
new file mode 100644
61
new file mode 100644
56
index XXXXXXX..XXXXXXX
62
index XXXXXXX..XXXXXXX
57
--- /dev/null
63
--- /dev/null
58
+++ b/include/hw/acpi/ghes.h
64
+++ b/include/hw/misc/imx7_src.h
59
@@ -XXX,XX +XXX,XX @@
65
@@ -XXX,XX +XXX,XX @@
60
+/*
66
+/*
61
+ * Support for generating APEI tables and recording CPER for Guests
67
+ * IMX7 System Reset Controller
62
+ *
68
+ *
63
+ * Copyright (c) 2020 HUAWEI TECHNOLOGIES CO., LTD.
69
+ * Copyright (C) 2023 Jean-Christophe Dubois <jcd@tribudubois.net>
64
+ *
70
+ *
65
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
71
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
66
+ *
72
+ * See the COPYING file in the top-level directory.
67
+ * This program is free software; you can redistribute it and/or modify
68
+ * it under the terms of the GNU General Public License as published by
69
+ * the Free Software Foundation; either version 2 of the License, or
70
+ * (at your option) any later version.
71
+
72
+ * This program is distributed in the hope that it will be useful,
73
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
74
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
75
+ * GNU General Public License for more details.
76
+
77
+ * You should have received a copy of the GNU General Public License along
78
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
79
+ */
73
+ */
80
+
74
+
81
+#ifndef ACPI_GHES_H
75
+#ifndef IMX7_SRC_H
82
+#define ACPI_GHES_H
76
+#define IMX7_SRC_H
83
+
77
+
84
+#include "hw/acpi/bios-linker-loader.h"
78
+#include "hw/sysbus.h"
85
+
79
+#include "qemu/bitops.h"
86
+void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
80
+#include "qom/object.h"
87
+#endif
81
+
88
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
82
+#define SRC_SCR 0
83
+#define SRC_A7RCR0 1
84
+#define SRC_A7RCR1 2
85
+#define SRC_M4RCR 3
86
+#define SRC_ERCR 5
87
+#define SRC_HSICPHY_RCR 7
88
+#define SRC_USBOPHY1_RCR 8
89
+#define SRC_USBOPHY2_RCR 9
90
+#define SRC_MPIPHY_RCR 10
91
+#define SRC_PCIEPHY_RCR 11
92
+#define SRC_SBMR1 22
93
+#define SRC_SRSR 23
94
+#define SRC_SISR 26
95
+#define SRC_SIMR 27
96
+#define SRC_SBMR2 28
97
+#define SRC_GPR1 29
98
+#define SRC_GPR2 30
99
+#define SRC_GPR3 31
100
+#define SRC_GPR4 32
101
+#define SRC_GPR5 33
102
+#define SRC_GPR6 34
103
+#define SRC_GPR7 35
104
+#define SRC_GPR8 36
105
+#define SRC_GPR9 37
106
+#define SRC_GPR10 38
107
+#define SRC_MAX 39
108
+
109
+/* SRC_A7SCR1 */
110
+#define R_CORE1_ENABLE_SHIFT 1
111
+#define R_CORE1_ENABLE_LENGTH 1
112
+/* SRC_A7SCR0 */
113
+#define R_CORE1_RST_SHIFT 5
114
+#define R_CORE1_RST_LENGTH 1
115
+#define R_CORE0_RST_SHIFT 4
116
+#define R_CORE0_RST_LENGTH 1
117
+
118
+#define TYPE_IMX7_SRC "imx7.src"
119
+OBJECT_DECLARE_SIMPLE_TYPE(IMX7SRCState, IMX7_SRC)
120
+
121
+struct IMX7SRCState {
122
+ /* <private> */
123
+ SysBusDevice parent_obj;
124
+
125
+ /* <public> */
126
+ MemoryRegion iomem;
127
+
128
+ uint32_t regs[SRC_MAX];
129
+};
130
+
131
+#endif /* IMX7_SRC_H */
132
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
89
index XXXXXXX..XXXXXXX 100644
133
index XXXXXXX..XXXXXXX 100644
90
--- a/hw/acpi/aml-build.c
134
--- a/hw/arm/fsl-imx7.c
91
+++ b/hw/acpi/aml-build.c
135
+++ b/hw/arm/fsl-imx7.c
92
@@ -XXX,XX +XXX,XX @@ void acpi_build_tables_init(AcpiBuildTables *tables)
136
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
93
tables->table_data = g_array_new(false, true /* clear */, 1);
137
*/
94
tables->tcpalog = g_array_new(false, true /* clear */, 1);
138
object_initialize_child(obj, "gpcv2", &s->gpcv2, TYPE_IMX_GPCV2);
95
tables->vmgenid = g_array_new(false, true /* clear */, 1);
139
96
+ tables->hardware_errors = g_array_new(false, true /* clear */, 1);
140
+ /*
97
tables->linker = bios_linker_loader_init();
141
+ * SRC
98
}
142
+ */
99
143
+ object_initialize_child(obj, "src", &s->src, TYPE_IMX7_SRC);
100
@@ -XXX,XX +XXX,XX @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
144
+
101
g_array_free(tables->table_data, true);
145
/*
102
g_array_free(tables->tcpalog, mfre);
146
* ECSPIs
103
g_array_free(tables->vmgenid, mfre);
147
*/
104
+ g_array_free(tables->hardware_errors, mfre);
148
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
105
}
149
/*
106
150
* SRC
107
/*
151
*/
108
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
152
- create_unimplemented_device("src", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
153
+ sysbus_realize(SYS_BUS_DEVICE(&s->src), &error_abort);
154
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->src), 0, FSL_IMX7_SRC_ADDR);
155
156
/*
157
* Watchdogs
158
diff --git a/hw/misc/imx7_src.c b/hw/misc/imx7_src.c
109
new file mode 100644
159
new file mode 100644
110
index XXXXXXX..XXXXXXX
160
index XXXXXXX..XXXXXXX
111
--- /dev/null
161
--- /dev/null
112
+++ b/hw/acpi/ghes.c
162
+++ b/hw/misc/imx7_src.c
113
@@ -XXX,XX +XXX,XX @@
163
@@ -XXX,XX +XXX,XX @@
114
+/*
164
+/*
115
+ * Support for generating APEI tables and recording CPER for Guests
165
+ * IMX7 System Reset Controller
116
+ *
166
+ *
117
+ * Copyright (c) 2020 HUAWEI TECHNOLOGIES CO., LTD.
167
+ * Copyright (c) 2023 Jean-Christophe Dubois <jcd@tribudubois.net>
118
+ *
168
+ *
119
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
169
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
170
+ * See the COPYING file in the top-level directory.
120
+ *
171
+ *
121
+ * This program is free software; you can redistribute it and/or modify
122
+ * it under the terms of the GNU General Public License as published by
123
+ * the Free Software Foundation; either version 2 of the License, or
124
+ * (at your option) any later version.
125
+
126
+ * This program is distributed in the hope that it will be useful,
127
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
128
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
129
+ * GNU General Public License for more details.
130
+
131
+ * You should have received a copy of the GNU General Public License along
132
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
133
+ */
172
+ */
134
+
173
+
135
+#include "qemu/osdep.h"
174
+#include "qemu/osdep.h"
136
+#include "qemu/units.h"
175
+#include "hw/misc/imx7_src.h"
137
+#include "hw/acpi/ghes.h"
176
+#include "migration/vmstate.h"
138
+#include "hw/acpi/aml-build.h"
177
+#include "qemu/bitops.h"
139
+
178
+#include "qemu/log.h"
140
+#define ACPI_GHES_ERRORS_FW_CFG_FILE "etc/hardware_errors"
179
+#include "qemu/main-loop.h"
141
+#define ACPI_GHES_DATA_ADDR_FW_CFG_FILE "etc/hardware_errors_addr"
180
+#include "qemu/module.h"
142
+
181
+#include "target/arm/arm-powerctl.h"
143
+/* The max size in bytes for one error block */
182
+#include "hw/core/cpu.h"
144
+#define ACPI_GHES_MAX_RAW_DATA_LENGTH (1 * KiB)
183
+#include "hw/registerfields.h"
145
+
184
+
146
+/* Now only support ARMv8 SEA notification type error source */
185
+#include "trace.h"
147
+#define ACPI_GHES_ERROR_SOURCE_COUNT 1
186
+
187
+static const char *imx7_src_reg_name(uint32_t reg)
188
+{
189
+ static char unknown[20];
190
+
191
+ switch (reg) {
192
+ case SRC_SCR:
193
+ return "SRC_SCR";
194
+ case SRC_A7RCR0:
195
+ return "SRC_A7RCR0";
196
+ case SRC_A7RCR1:
197
+ return "SRC_A7RCR1";
198
+ case SRC_M4RCR:
199
+ return "SRC_M4RCR";
200
+ case SRC_ERCR:
201
+ return "SRC_ERCR";
202
+ case SRC_HSICPHY_RCR:
203
+ return "SRC_HSICPHY_RCR";
204
+ case SRC_USBOPHY1_RCR:
205
+ return "SRC_USBOPHY1_RCR";
206
+ case SRC_USBOPHY2_RCR:
207
+ return "SRC_USBOPHY2_RCR";
208
+ case SRC_PCIEPHY_RCR:
209
+ return "SRC_PCIEPHY_RCR";
210
+ case SRC_SBMR1:
211
+ return "SRC_SBMR1";
212
+ case SRC_SRSR:
213
+ return "SRC_SRSR";
214
+ case SRC_SISR:
215
+ return "SRC_SISR";
216
+ case SRC_SIMR:
217
+ return "SRC_SIMR";
218
+ case SRC_SBMR2:
219
+ return "SRC_SBMR2";
220
+ case SRC_GPR1:
221
+ return "SRC_GPR1";
222
+ case SRC_GPR2:
223
+ return "SRC_GPR2";
224
+ case SRC_GPR3:
225
+ return "SRC_GPR3";
226
+ case SRC_GPR4:
227
+ return "SRC_GPR4";
228
+ case SRC_GPR5:
229
+ return "SRC_GPR5";
230
+ case SRC_GPR6:
231
+ return "SRC_GPR6";
232
+ case SRC_GPR7:
233
+ return "SRC_GPR7";
234
+ case SRC_GPR8:
235
+ return "SRC_GPR8";
236
+ case SRC_GPR9:
237
+ return "SRC_GPR9";
238
+ case SRC_GPR10:
239
+ return "SRC_GPR10";
240
+ default:
241
+ sprintf(unknown, "%u ?", reg);
242
+ return unknown;
243
+ }
244
+}
245
+
246
+static const VMStateDescription vmstate_imx7_src = {
247
+ .name = TYPE_IMX7_SRC,
248
+ .version_id = 1,
249
+ .minimum_version_id = 1,
250
+ .fields = (VMStateField[]) {
251
+ VMSTATE_UINT32_ARRAY(regs, IMX7SRCState, SRC_MAX),
252
+ VMSTATE_END_OF_LIST()
253
+ },
254
+};
255
+
256
+static void imx7_src_reset(DeviceState *dev)
257
+{
258
+ IMX7SRCState *s = IMX7_SRC(dev);
259
+
260
+ memset(s->regs, 0, sizeof(s->regs));
261
+
262
+ /* Set reset values */
263
+ s->regs[SRC_SCR] = 0xA0;
264
+ s->regs[SRC_SRSR] = 0x1;
265
+ s->regs[SRC_SIMR] = 0x1F;
266
+}
267
+
268
+static uint64_t imx7_src_read(void *opaque, hwaddr offset, unsigned size)
269
+{
270
+ uint32_t value = 0;
271
+ IMX7SRCState *s = (IMX7SRCState *)opaque;
272
+ uint32_t index = offset >> 2;
273
+
274
+ if (index < SRC_MAX) {
275
+ value = s->regs[index];
276
+ } else {
277
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
278
+ HWADDR_PRIx "\n", TYPE_IMX7_SRC, __func__, offset);
279
+ }
280
+
281
+ trace_imx7_src_read(imx7_src_reg_name(index), value);
282
+
283
+ return value;
284
+}
285
+
148
+
286
+
149
+/*
287
+/*
150
+ * Build table for the hardware error fw_cfg blob.
288
+ * The reset is asynchronous so we need to defer clearing the reset
151
+ * Initialize "etc/hardware_errors" and "etc/hardware_errors_addr" fw_cfg blobs.
289
+ * bit until the work is completed.
152
+ * See docs/specs/acpi_hest_ghes.rst for blobs format.
153
+ */
290
+ */
154
+void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker)
291
+
155
+{
292
+struct SRCSCRResetInfo {
156
+ int i, error_status_block_offset;
293
+ IMX7SRCState *s;
157
+
294
+ uint32_t reset_bit;
158
+ /* Build error_block_address */
295
+};
159
+ for (i = 0; i < ACPI_GHES_ERROR_SOURCE_COUNT; i++) {
296
+
160
+ build_append_int_noprefix(hardware_errors, 0, sizeof(uint64_t));
297
+static void imx7_clear_reset_bit(CPUState *cpu, run_on_cpu_data data)
298
+{
299
+ struct SRCSCRResetInfo *ri = data.host_ptr;
300
+ IMX7SRCState *s = ri->s;
301
+
302
+ assert(qemu_mutex_iothread_locked());
303
+
304
+ s->regs[SRC_A7RCR0] = deposit32(s->regs[SRC_A7RCR0], ri->reset_bit, 1, 0);
305
+
306
+ trace_imx7_src_write(imx7_src_reg_name(SRC_A7RCR0), s->regs[SRC_A7RCR0]);
307
+
308
+ g_free(ri);
309
+}
310
+
311
+static void imx7_defer_clear_reset_bit(uint32_t cpuid,
312
+ IMX7SRCState *s,
313
+ uint32_t reset_shift)
314
+{
315
+ struct SRCSCRResetInfo *ri;
316
+ CPUState *cpu = arm_get_cpu_by_id(cpuid);
317
+
318
+ if (!cpu) {
319
+ return;
161
+ }
320
+ }
162
+
321
+
163
+ /* Build read_ack_register */
322
+ ri = g_new(struct SRCSCRResetInfo, 1);
164
+ for (i = 0; i < ACPI_GHES_ERROR_SOURCE_COUNT; i++) {
323
+ ri->s = s;
324
+ ri->reset_bit = reset_shift;
325
+
326
+ async_run_on_cpu(cpu, imx7_clear_reset_bit, RUN_ON_CPU_HOST_PTR(ri));
327
+}
328
+
329
+
330
+static void imx7_src_write(void *opaque, hwaddr offset, uint64_t value,
331
+ unsigned size)
332
+{
333
+ IMX7SRCState *s = (IMX7SRCState *)opaque;
334
+ uint32_t index = offset >> 2;
335
+ long unsigned int change_mask;
336
+ uint32_t current_value = value;
337
+
338
+ if (index >= SRC_MAX) {
339
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
340
+ HWADDR_PRIx "\n", TYPE_IMX7_SRC, __func__, offset);
341
+ return;
342
+ }
343
+
344
+ trace_imx7_src_write(imx7_src_reg_name(SRC_A7RCR0), s->regs[SRC_A7RCR0]);
345
+
346
+ change_mask = s->regs[index] ^ (uint32_t)current_value;
347
+
348
+ switch (index) {
349
+ case SRC_A7RCR0:
350
+ if (FIELD_EX32(change_mask, CORE0, RST)) {
351
+ arm_reset_cpu(0);
352
+ imx7_defer_clear_reset_bit(0, s, R_CORE0_RST_SHIFT);
353
+ }
354
+ if (FIELD_EX32(change_mask, CORE1, RST)) {
355
+ arm_reset_cpu(1);
356
+ imx7_defer_clear_reset_bit(1, s, R_CORE1_RST_SHIFT);
357
+ }
358
+ s->regs[index] = current_value;
359
+ break;
360
+ case SRC_A7RCR1:
165
+ /*
361
+ /*
166
+ * Initialize the value of read_ack_register to 1, so GHES can be
362
+ * On real hardware when the system reset controller starts a
167
+ * writeable after (re)boot.
363
+ * secondary CPU it runs through some boot ROM code which reads
168
+ * ACPI 6.2: 18.3.2.8 Generic Hardware Error Source version 2
364
+ * the SRC_GPRX registers controlling the start address and branches
169
+ * (GHESv2 - Type 10)
365
+ * to it.
366
+ * Here we are taking a short cut and branching directly to the
367
+ * requested address (we don't want to run the boot ROM code inside
368
+ * QEMU)
170
+ */
369
+ */
171
+ build_append_int_noprefix(hardware_errors, 1, sizeof(uint64_t));
370
+ if (FIELD_EX32(change_mask, CORE1, ENABLE)) {
371
+ if (FIELD_EX32(current_value, CORE1, ENABLE)) {
372
+ /* CORE 1 is brought up */
373
+ arm_set_cpu_on(1, s->regs[SRC_GPR3], s->regs[SRC_GPR4],
374
+ 3, false);
375
+ } else {
376
+ /* CORE 1 is shut down */
377
+ arm_set_cpu_off(1);
378
+ }
379
+ /* We clear the reset bits as the processor changed state */
380
+ imx7_defer_clear_reset_bit(1, s, R_CORE1_RST_SHIFT);
381
+ clear_bit(R_CORE1_RST_SHIFT, &change_mask);
382
+ }
383
+ s->regs[index] = current_value;
384
+ break;
385
+ default:
386
+ s->regs[index] = current_value;
387
+ break;
172
+ }
388
+ }
173
+
389
+}
174
+ /* Generic Error Status Block offset in the hardware error fw_cfg blob */
390
+
175
+ error_status_block_offset = hardware_errors->len;
391
+static const struct MemoryRegionOps imx7_src_ops = {
176
+
392
+ .read = imx7_src_read,
177
+ /* Reserve space for Error Status Data Block */
393
+ .write = imx7_src_write,
178
+ acpi_data_push(hardware_errors,
394
+ .endianness = DEVICE_NATIVE_ENDIAN,
179
+ ACPI_GHES_MAX_RAW_DATA_LENGTH * ACPI_GHES_ERROR_SOURCE_COUNT);
395
+ .valid = {
180
+
181
+ /* Tell guest firmware to place hardware_errors blob into RAM */
182
+ bios_linker_loader_alloc(linker, ACPI_GHES_ERRORS_FW_CFG_FILE,
183
+ hardware_errors, sizeof(uint64_t), false);
184
+
185
+ for (i = 0; i < ACPI_GHES_ERROR_SOURCE_COUNT; i++) {
186
+ /*
396
+ /*
187
+ * Tell firmware to patch error_block_address entries to point to
397
+ * Our device would not work correctly if the guest was doing
188
+ * corresponding "Generic Error Status Block"
398
+ * unaligned access. This might not be a limitation on the real
399
+ * device but in practice there is no reason for a guest to access
400
+ * this device unaligned.
189
+ */
401
+ */
190
+ bios_linker_loader_add_pointer(linker,
402
+ .min_access_size = 4,
191
+ ACPI_GHES_ERRORS_FW_CFG_FILE, sizeof(uint64_t) * i,
403
+ .max_access_size = 4,
192
+ sizeof(uint64_t), ACPI_GHES_ERRORS_FW_CFG_FILE,
404
+ .unaligned = false,
193
+ error_status_block_offset + i * ACPI_GHES_MAX_RAW_DATA_LENGTH);
405
+ },
194
+ }
406
+};
195
+
407
+
196
+ /*
408
+static void imx7_src_realize(DeviceState *dev, Error **errp)
197
+ * tell firmware to write hardware_errors GPA into
409
+{
198
+ * hardware_errors_addr fw_cfg, once the former has been initialized.
410
+ IMX7SRCState *s = IMX7_SRC(dev);
199
+ */
411
+
200
+ bios_linker_loader_write_pointer(linker, ACPI_GHES_DATA_ADDR_FW_CFG_FILE,
412
+ memory_region_init_io(&s->iomem, OBJECT(dev), &imx7_src_ops, s,
201
+ 0, sizeof(uint64_t), ACPI_GHES_ERRORS_FW_CFG_FILE, 0);
413
+ TYPE_IMX7_SRC, 0x1000);
202
+}
414
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
203
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
415
+}
416
+
417
+static void imx7_src_class_init(ObjectClass *klass, void *data)
418
+{
419
+ DeviceClass *dc = DEVICE_CLASS(klass);
420
+
421
+ dc->realize = imx7_src_realize;
422
+ dc->reset = imx7_src_reset;
423
+ dc->vmsd = &vmstate_imx7_src;
424
+ dc->desc = "i.MX6 System Reset Controller";
425
+}
426
+
427
+static const TypeInfo imx7_src_info = {
428
+ .name = TYPE_IMX7_SRC,
429
+ .parent = TYPE_SYS_BUS_DEVICE,
430
+ .instance_size = sizeof(IMX7SRCState),
431
+ .class_init = imx7_src_class_init,
432
+};
433
+
434
+static void imx7_src_register_types(void)
435
+{
436
+ type_register_static(&imx7_src_info);
437
+}
438
+
439
+type_init(imx7_src_register_types)
440
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
204
index XXXXXXX..XXXXXXX 100644
441
index XXXXXXX..XXXXXXX 100644
205
--- a/hw/arm/virt-acpi-build.c
442
--- a/hw/misc/meson.build
206
+++ b/hw/arm/virt-acpi-build.c
443
+++ b/hw/misc/meson.build
207
@@ -XXX,XX +XXX,XX @@
444
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_IMX', if_true: files(
208
#include "sysemu/reset.h"
445
'imx6_src.c',
209
#include "kvm_arm.h"
446
'imx6ul_ccm.c',
210
#include "migration/vmstate.h"
447
'imx7_ccm.c',
211
+#include "hw/acpi/ghes.h"
448
+ 'imx7_src.c',
212
449
'imx7_gpr.c',
213
#define ARM_SPI_BASE 32
450
'imx7_snvs.c',
214
451
'imx_ccm.c',
215
@@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
452
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
216
acpi_add_table(table_offsets, tables_blob);
217
build_spcr(tables_blob, tables->linker, vms);
218
219
+ if (vms->ras) {
220
+ build_ghes_error_table(tables->hardware_errors, tables->linker);
221
+ }
222
+
223
if (ms->numa_state->num_nodes > 0) {
224
acpi_add_table(table_offsets, tables_blob);
225
build_srat(tables_blob, tables->linker, vms);
226
diff --git a/hw/acpi/Kconfig b/hw/acpi/Kconfig
227
index XXXXXXX..XXXXXXX 100644
453
index XXXXXXX..XXXXXXX 100644
228
--- a/hw/acpi/Kconfig
454
--- a/hw/misc/trace-events
229
+++ b/hw/acpi/Kconfig
455
+++ b/hw/misc/trace-events
230
@@ -XXX,XX +XXX,XX @@ config ACPI_HMAT
456
@@ -XXX,XX +XXX,XX @@ ccm_clock_freq(uint32_t clock, uint32_t freq) "(Clock = %d) = %d"
231
bool
457
ccm_read_reg(const char *reg_name, uint32_t value) "reg[%s] <= 0x%" PRIx32
232
depends on ACPI
458
ccm_write_reg(const char *reg_name, uint32_t value) "reg[%s] => 0x%" PRIx32
233
459
234
+config ACPI_APEI
460
+# imx7_src.c
235
+ bool
461
+imx7_src_read(const char *reg_name, uint32_t value) "reg[%s] => 0x%" PRIx32
236
+ depends on ACPI
462
+imx7_src_write(const char *reg_name, uint32_t value) "reg[%s] <= 0x%" PRIx32
237
+
463
+
238
config ACPI_PCI
464
# iotkit-sysinfo.c
239
bool
465
iotkit_sysinfo_read(uint64_t offset, uint64_t data, unsigned size) "IoTKit SysInfo read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
240
depends on ACPI && PCI
466
iotkit_sysinfo_write(uint64_t offset, uint64_t data, unsigned size) "IoTKit SysInfo write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
241
diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
242
index XXXXXXX..XXXXXXX 100644
243
--- a/hw/acpi/Makefile.objs
244
+++ b/hw/acpi/Makefile.objs
245
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
246
common-obj-$(CONFIG_ACPI_VMGENID) += vmgenid.o
247
common-obj-$(CONFIG_ACPI_HW_REDUCED) += generic_event_device.o
248
common-obj-$(CONFIG_ACPI_HMAT) += hmat.o
249
+common-obj-$(CONFIG_ACPI_APEI) += ghes.o
250
common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
251
common-obj-$(call lnot,$(CONFIG_PC)) += acpi-x86-stub.o
252
253
--
467
--
254
2.20.1
468
2.34.1
255
256
diff view generated by jsdifflib
Deleted patch
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
2
1
3
Record the GHEB address via a fw_cfg file; when recording
an error to CPER, this address is used to find the
Generic Error Data Entries and write the error.
6
7
In order to avoid migration failure, make the hardware
error table address part of the GED device instead of
a global variable; this address will then be migrated
to the target QEMU.
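
(To illustrate the claim above, not code from this series: once the firmware
has written the blob address back, a CPER-recording path can locate the
per-source Generic Error Status Block from ghes_addr_le. The layout assumed
here mirrors build_ghes_error_table(): one address slot and one read-ack
slot per source, then the 1 KiB blocks; the helper name is hypothetical.)

    /*
     * Hypothetical QEMU-side helper: guest physical address of the GESB for
     * 'source_id'. Equivalently, the firmware-patched error_block_address
     * slot at base + source_id * 8 could be read back from guest memory.
     */
    static uint64_t ghes_gesb_addr(AcpiGhesState *ags, int source_id, int n_sources)
    {
        uint64_t base = le64_to_cpu(ags->ghes_addr_le);
        uint64_t slots = 2 * n_sources * sizeof(uint64_t);

        return base + slots + source_id * ACPI_GHES_MAX_RAW_DATA_LENGTH;
    }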
11
12
Acked-by: Xiang Zheng <zhengxiang9@huawei.com>
13
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
14
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
15
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
16
Message-id: 20200512030609.19593-7-gengdongjiu@huawei.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
include/hw/acpi/generic_event_device.h | 2 ++
20
include/hw/acpi/ghes.h | 6 ++++++
21
hw/acpi/generic_event_device.c | 19 +++++++++++++++++++
22
hw/acpi/ghes.c | 14 ++++++++++++++
23
hw/arm/virt-acpi-build.c | 8 ++++++++
24
5 files changed, 49 insertions(+)
25
26
diff --git a/include/hw/acpi/generic_event_device.h b/include/hw/acpi/generic_event_device.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/acpi/generic_event_device.h
29
+++ b/include/hw/acpi/generic_event_device.h
30
@@ -XXX,XX +XXX,XX @@
31
32
#include "hw/sysbus.h"
33
#include "hw/acpi/memory_hotplug.h"
34
+#include "hw/acpi/ghes.h"
35
36
#define ACPI_POWER_BUTTON_DEVICE "PWRB"
37
38
@@ -XXX,XX +XXX,XX @@ typedef struct AcpiGedState {
39
GEDState ged_state;
40
uint32_t ged_event_bitmap;
41
qemu_irq irq;
42
+ AcpiGhesState ghes_state;
43
} AcpiGedState;
44
45
void build_ged_aml(Aml *table, const char* name, HotplugHandler *hotplug_dev,
46
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/include/hw/acpi/ghes.h
49
+++ b/include/hw/acpi/ghes.h
50
@@ -XXX,XX +XXX,XX @@ enum {
51
ACPI_HEST_SRC_ID_RESERVED,
52
};
53
54
+typedef struct AcpiGhesState {
55
+ uint64_t ghes_addr_le;
56
+} AcpiGhesState;
57
+
58
void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
59
void acpi_build_hest(GArray *table_data, BIOSLinker *linker);
60
+void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
61
+ GArray *hardware_errors);
62
#endif
63
diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/acpi/generic_event_device.c
66
+++ b/hw/acpi/generic_event_device.c
67
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_ged_state = {
68
}
69
};
70
71
+static bool ghes_needed(void *opaque)
72
+{
73
+ AcpiGedState *s = opaque;
74
+ return s->ghes_state.ghes_addr_le;
75
+}
76
+
77
+static const VMStateDescription vmstate_ghes_state = {
78
+ .name = "acpi-ged/ghes",
79
+ .version_id = 1,
80
+ .minimum_version_id = 1,
81
+ .needed = ghes_needed,
82
+ .fields = (VMStateField[]) {
83
+ VMSTATE_STRUCT(ghes_state, AcpiGedState, 1,
84
+ vmstate_ghes_state, AcpiGhesState),
85
+ VMSTATE_END_OF_LIST()
86
+ }
87
+};
88
+
89
static const VMStateDescription vmstate_acpi_ged = {
90
.name = "acpi-ged",
91
.version_id = 1,
92
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_acpi_ged = {
93
},
94
.subsections = (const VMStateDescription * []) {
95
&vmstate_memhp_state,
96
+ &vmstate_ghes_state,
97
NULL
98
}
99
};
100
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/acpi/ghes.c
103
+++ b/hw/acpi/ghes.c
104
@@ -XXX,XX +XXX,XX @@
105
#include "hw/acpi/ghes.h"
106
#include "hw/acpi/aml-build.h"
107
#include "qemu/error-report.h"
108
+#include "hw/acpi/generic_event_device.h"
109
+#include "hw/nvram/fw_cfg.h"
110
111
#define ACPI_GHES_ERRORS_FW_CFG_FILE "etc/hardware_errors"
112
#define ACPI_GHES_DATA_ADDR_FW_CFG_FILE "etc/hardware_errors_addr"
113
@@ -XXX,XX +XXX,XX @@ void acpi_build_hest(GArray *table_data, BIOSLinker *linker)
114
build_header(linker, table_data, (void *)(table_data->data + hest_start),
115
"HEST", table_data->len - hest_start, 1, NULL, NULL);
116
}
117
+
118
+void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
119
+ GArray *hardware_error)
120
+{
121
+ /* Create a read-only fw_cfg file for GHES */
122
+ fw_cfg_add_file(s, ACPI_GHES_ERRORS_FW_CFG_FILE, hardware_error->data,
123
+ hardware_error->len);
124
+
125
+ /* Create a read-write fw_cfg file for Address */
126
+ fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
127
+ NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
128
+}
129
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/arm/virt-acpi-build.c
132
+++ b/hw/arm/virt-acpi-build.c
133
@@ -XXX,XX +XXX,XX @@ void virt_acpi_setup(VirtMachineState *vms)
134
{
135
AcpiBuildTables tables;
136
AcpiBuildState *build_state;
137
+ AcpiGedState *acpi_ged_state;
138
139
if (!vms->fw_cfg) {
140
trace_virt_acpi_setup();
141
@@ -XXX,XX +XXX,XX @@ void virt_acpi_setup(VirtMachineState *vms)
142
fw_cfg_add_file(vms->fw_cfg, ACPI_BUILD_TPMLOG_FILE, tables.tcpalog->data,
143
acpi_data_len(tables.tcpalog));
144
145
+ if (vms->ras) {
146
+ assert(vms->acpi_dev);
147
+ acpi_ged_state = ACPI_GED(vms->acpi_dev);
148
+ acpi_ghes_add_fw_cfg(&acpi_ged_state->ghes_state,
149
+ vms->fw_cfg, tables.hardware_errors);
150
+ }
151
+
152
build_state->rsdp_mr = acpi_add_rom_blob(virt_acpi_build_update,
153
build_state, tables.rsdp,
154
ACPI_BUILD_RSDP_FILE, 0);
155
--
156
2.20.1
157
158
diff view generated by jsdifflib
Deleted patch
1
Convert the Neon VQRDMLAH and VQRDMLSH insns in the 3-reg-same group
2
to decodetree. These don't use do_3same() because they want to
3
operate on VFP double registers, whose offsets are different from the
4
neon_reg_offset() calculations do_3same does.
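
(For readers new to decodetree, the new pattern lines pack the A32 Neon
3-reg-same encoding fields roughly as follows; this is an informal reading
of the pattern, not normative.)

    VQRDMLAH_3s  1111 001 1 0 . .. .... .... 1011 ... 1 ....   @3same

      1111 001    fixed Neon data-processing prefix
      1           U bit (1 selects this encoding)
      0 .         fixed 0, then D (high bit of Vd)
      ..          size[1:0]
      .... ....   Vn[3:0], then Vd[3:0]
      1011        opcode group for VQRDMLAH
      ... 1       N, Q, M bits, then the fixed op bit
      ....        Vm[3:0]

The @3same format folds D:Vd, N:Vn and M:Vm into double-register numbers
(%vd_dp, %vn_dp, %vm_dp) and hands vd, vn, vm, size and q to the trans_
function as an arg_3same structure.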
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200512163904.10918-2-peter.maydell@linaro.org
9
---
10
target/arm/neon-dp.decode | 3 +++
11
target/arm/translate-neon.inc.c | 15 +++++++++++++++
12
target/arm/translate.c | 14 ++------------
13
3 files changed, 20 insertions(+), 12 deletions(-)
14
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
18
+++ b/target/arm/neon-dp.decode
19
@@ -XXX,XX +XXX,XX @@ VMLS_3s 1111 001 1 0 . .. .... .... 1001 . . . 0 .... @3same
20
21
VMUL_3s 1111 001 0 0 . .. .... .... 1001 . . . 1 .... @3same
22
VMUL_p_3s 1111 001 1 0 . .. .... .... 1001 . . . 1 .... @3same
23
+
24
+VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
25
+VQRDMLSH_3s 1111 001 1 0 . .. .... .... 1100 ... 1 .... @3same
26
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-neon.inc.c
29
+++ b/target/arm/translate-neon.inc.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
31
}
32
return do_3same(s, a, gen_VMUL_p_3s);
33
}
34
+
35
+#define DO_VQRDMLAH(INSN, FUNC) \
36
+ static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
37
+ { \
38
+ if (!dc_isar_feature(aa32_rdm, s)) { \
39
+ return false; \
40
+ } \
41
+ if (a->size != 1 && a->size != 2) { \
42
+ return false; \
43
+ } \
44
+ return do_3same(s, a, FUNC); \
45
+ }
46
+
47
+DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
48
+DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
49
diff --git a/target/arm/translate.c b/target/arm/translate.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/translate.c
52
+++ b/target/arm/translate.c
53
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
54
if (!u) {
55
break; /* VPADD */
56
}
57
- /* VQRDMLAH */
58
- if (dc_isar_feature(aa32_rdm, s) && (size == 1 || size == 2)) {
59
- gen_gvec_sqrdmlah_qc(size, rd_ofs, rn_ofs, rm_ofs,
60
- vec_size, vec_size);
61
- return 0;
62
- }
63
+ /* VQRDMLAH : handled by decodetree */
64
return 1;
65
66
case NEON_3R_VFM_VQRDMLSH:
67
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
68
}
69
break;
70
}
71
- /* VQRDMLSH */
72
- if (dc_isar_feature(aa32_rdm, s) && (size == 1 || size == 2)) {
73
- gen_gvec_sqrdmlsh_qc(size, rd_ofs, rn_ofs, rm_ofs,
74
- vec_size, vec_size);
75
- return 0;
76
- }
77
+ /* VQRDMLSH : handled by decodetree */
78
return 1;
79
80
case NEON_3R_VABD:
81
--
82
2.20.1
83
84
diff view generated by jsdifflib
Deleted patch
1
Convert the Neon SHA instructions in the 3-reg-same group
2
to decodetree.
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200512163904.10918-3-peter.maydell@linaro.org
7
---
8
target/arm/neon-dp.decode | 10 +++
9
target/arm/translate-neon.inc.c | 139 ++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 46 +----------
11
3 files changed, 151 insertions(+), 44 deletions(-)
12
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@ VMUL_3s 1111 001 0 0 . .. .... .... 1001 . . . 1 .... @3same
18
VMUL_p_3s 1111 001 1 0 . .. .... .... 1001 . . . 1 .... @3same
19
20
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
21
+
22
+SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
23
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
24
+SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... \
25
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
26
+SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
27
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
28
+SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
29
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
30
+
31
VQRDMLSH_3s 1111 001 1 0 . .. .... .... 1100 ... 1 .... @3same
32
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-neon.inc.c
35
+++ b/target/arm/translate-neon.inc.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
37
38
DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
39
DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
40
+
41
+static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
42
+{
43
+ TCGv_ptr ptr1, ptr2, ptr3;
44
+ TCGv_i32 tmp;
45
+
46
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
47
+ !dc_isar_feature(aa32_sha1, s)) {
48
+ return false;
49
+ }
50
+
51
+ /* UNDEF accesses to D16-D31 if they don't exist. */
52
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
53
+ ((a->vd | a->vn | a->vm) & 0x10)) {
54
+ return false;
55
+ }
56
+
57
+ if ((a->vn | a->vm | a->vd) & 1) {
58
+ return false;
59
+ }
60
+
61
+ if (!vfp_access_check(s)) {
62
+ return true;
63
+ }
64
+
65
+ ptr1 = vfp_reg_ptr(true, a->vd);
66
+ ptr2 = vfp_reg_ptr(true, a->vn);
67
+ ptr3 = vfp_reg_ptr(true, a->vm);
68
+ tmp = tcg_const_i32(a->optype);
69
+ gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp);
70
+ tcg_temp_free_i32(tmp);
71
+ tcg_temp_free_ptr(ptr1);
72
+ tcg_temp_free_ptr(ptr2);
73
+ tcg_temp_free_ptr(ptr3);
74
+
75
+ return true;
76
+}
77
+
78
+static bool trans_SHA256H_3s(DisasContext *s, arg_SHA256H_3s *a)
79
+{
80
+ TCGv_ptr ptr1, ptr2, ptr3;
81
+
82
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
83
+ !dc_isar_feature(aa32_sha2, s)) {
84
+ return false;
85
+ }
86
+
87
+ /* UNDEF accesses to D16-D31 if they don't exist. */
88
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
89
+ ((a->vd | a->vn | a->vm) & 0x10)) {
90
+ return false;
91
+ }
92
+
93
+ if ((a->vn | a->vm | a->vd) & 1) {
94
+ return false;
95
+ }
96
+
97
+ if (!vfp_access_check(s)) {
98
+ return true;
99
+ }
100
+
101
+ ptr1 = vfp_reg_ptr(true, a->vd);
102
+ ptr2 = vfp_reg_ptr(true, a->vn);
103
+ ptr3 = vfp_reg_ptr(true, a->vm);
104
+ gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
105
+ tcg_temp_free_ptr(ptr1);
106
+ tcg_temp_free_ptr(ptr2);
107
+ tcg_temp_free_ptr(ptr3);
108
+
109
+ return true;
110
+}
111
+
112
+static bool trans_SHA256H2_3s(DisasContext *s, arg_SHA256H2_3s *a)
113
+{
114
+ TCGv_ptr ptr1, ptr2, ptr3;
115
+
116
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
117
+ !dc_isar_feature(aa32_sha2, s)) {
118
+ return false;
119
+ }
120
+
121
+ /* UNDEF accesses to D16-D31 if they don't exist. */
122
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
123
+ ((a->vd | a->vn | a->vm) & 0x10)) {
124
+ return false;
125
+ }
126
+
127
+ if ((a->vn | a->vm | a->vd) & 1) {
128
+ return false;
129
+ }
130
+
131
+ if (!vfp_access_check(s)) {
132
+ return true;
133
+ }
134
+
135
+ ptr1 = vfp_reg_ptr(true, a->vd);
136
+ ptr2 = vfp_reg_ptr(true, a->vn);
137
+ ptr3 = vfp_reg_ptr(true, a->vm);
138
+ gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
139
+ tcg_temp_free_ptr(ptr1);
140
+ tcg_temp_free_ptr(ptr2);
141
+ tcg_temp_free_ptr(ptr3);
142
+
143
+ return true;
144
+}
145
+
146
+static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
147
+{
148
+ TCGv_ptr ptr1, ptr2, ptr3;
149
+
150
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
151
+ !dc_isar_feature(aa32_sha2, s)) {
152
+ return false;
153
+ }
154
+
155
+ /* UNDEF accesses to D16-D31 if they don't exist. */
156
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
157
+ ((a->vd | a->vn | a->vm) & 0x10)) {
158
+ return false;
159
+ }
160
+
161
+ if ((a->vn | a->vm | a->vd) & 1) {
162
+ return false;
163
+ }
164
+
165
+ if (!vfp_access_check(s)) {
166
+ return true;
167
+ }
168
+
169
+ ptr1 = vfp_reg_ptr(true, a->vd);
170
+ ptr2 = vfp_reg_ptr(true, a->vn);
171
+ ptr3 = vfp_reg_ptr(true, a->vm);
172
+ gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
173
+ tcg_temp_free_ptr(ptr1);
174
+ tcg_temp_free_ptr(ptr2);
175
+ tcg_temp_free_ptr(ptr3);
176
+
177
+ return true;
178
+}
179
diff --git a/target/arm/translate.c b/target/arm/translate.c
180
index XXXXXXX..XXXXXXX 100644
181
--- a/target/arm/translate.c
182
+++ b/target/arm/translate.c
183
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
184
int vec_size;
185
uint32_t imm;
186
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
187
- TCGv_ptr ptr1, ptr2, ptr3;
188
+ TCGv_ptr ptr1, ptr2;
189
TCGv_i64 tmp64;
190
191
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
192
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
193
return 1;
194
}
195
switch (op) {
196
- case NEON_3R_SHA:
197
- /* The SHA-1/SHA-256 3-register instructions require special
198
- * treatment here, as their size field is overloaded as an
199
- * op type selector, and they all consume their input in a
200
- * single pass.
201
- */
202
- if (!q) {
203
- return 1;
204
- }
205
- if (!u) { /* SHA-1 */
206
- if (!dc_isar_feature(aa32_sha1, s)) {
207
- return 1;
208
- }
209
- ptr1 = vfp_reg_ptr(true, rd);
210
- ptr2 = vfp_reg_ptr(true, rn);
211
- ptr3 = vfp_reg_ptr(true, rm);
212
- tmp4 = tcg_const_i32(size);
213
- gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
214
- tcg_temp_free_i32(tmp4);
215
- } else { /* SHA-256 */
216
- if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
217
- return 1;
218
- }
219
- ptr1 = vfp_reg_ptr(true, rd);
220
- ptr2 = vfp_reg_ptr(true, rn);
221
- ptr3 = vfp_reg_ptr(true, rm);
222
- switch (size) {
223
- case 0:
224
- gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
225
- break;
226
- case 1:
227
- gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
228
- break;
229
- case 2:
230
- gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
231
- break;
232
- }
233
- }
234
- tcg_temp_free_ptr(ptr1);
235
- tcg_temp_free_ptr(ptr2);
236
- tcg_temp_free_ptr(ptr3);
237
- return 0;
238
-
239
case NEON_3R_VPADD_VQRDMLAH:
240
if (!u) {
241
break; /* VPADD */
242
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
243
case NEON_3R_VMUL:
244
case NEON_3R_VML:
245
case NEON_3R_VSHL:
246
+ case NEON_3R_SHA:
247
/* Already handled by decodetree */
248
return 1;
249
}
250
--
251
2.20.1
252
253
diff view generated by jsdifflib
Deleted patch
1
Convert the 64-bit element insns in the 3-reg-same group
2
to decodetree. This covers VQSHL, VRSHL and VQRSHL where
3
size==0b11.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200512163904.10918-4-peter.maydell@linaro.org
8
---
9
target/arm/neon-dp.decode | 13 +++++++++++
10
target/arm/translate-neon.inc.c | 24 +++++++++++++++++++++
11
target/arm/translate.c | 38 ++-------------------------------
12
3 files changed, 39 insertions(+), 36 deletions(-)
13
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same
19
VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same_rev
20
VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
21
22
+# Insns operating on 64-bit elements (size!=0b11 handled elsewhere)
23
+# The _rev suffix indicates that Vn and Vm are reversed (as explained
24
+# by the comment for the @3same_rev format).
25
+@3same_64_rev .... ... . . . 11 .... .... .... . q:1 . . .... \
26
+ &3same vm=%vn_dp vn=%vm_dp vd=%vd_dp size=3
27
+
28
+VQSHL_S64_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
29
+VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
30
+VRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
31
+VRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
32
+VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
33
+VQRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
34
+
35
VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
36
VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
37
VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
38
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-neon.inc.c
41
+++ b/target/arm/translate-neon.inc.c
42
@@ -XXX,XX +XXX,XX @@ static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
43
44
return true;
45
}
46
+
47
+#define DO_3SAME_64(INSN, FUNC) \
48
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
49
+ uint32_t rn_ofs, uint32_t rm_ofs, \
50
+ uint32_t oprsz, uint32_t maxsz) \
51
+ { \
52
+ static const GVecGen3 op = { .fni8 = FUNC }; \
53
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &op); \
54
+ } \
55
+ DO_3SAME(INSN, gen_##INSN##_3s)
56
+
57
+#define DO_3SAME_64_ENV(INSN, FUNC) \
58
+ static void gen_##INSN##_elt(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m) \
59
+ { \
60
+ FUNC(d, cpu_env, n, m); \
61
+ } \
62
+ DO_3SAME_64(INSN, gen_##INSN##_elt)
63
+
64
+DO_3SAME_64(VRSHL_S64, gen_helper_neon_rshl_s64)
65
+DO_3SAME_64(VRSHL_U64, gen_helper_neon_rshl_u64)
66
+DO_3SAME_64_ENV(VQSHL_S64, gen_helper_neon_qshl_s64)
67
+DO_3SAME_64_ENV(VQSHL_U64, gen_helper_neon_qshl_u64)
68
+DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
69
+DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
70
diff --git a/target/arm/translate.c b/target/arm/translate.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate.c
73
+++ b/target/arm/translate.c
74
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
75
}
76
77
if (size == 3) {
78
- /* 64-bit element instructions. */
79
- for (pass = 0; pass < (q ? 2 : 1); pass++) {
80
- neon_load_reg64(cpu_V0, rn + pass);
81
- neon_load_reg64(cpu_V1, rm + pass);
82
- switch (op) {
83
- case NEON_3R_VQSHL:
84
- if (u) {
85
- gen_helper_neon_qshl_u64(cpu_V0, cpu_env,
86
- cpu_V1, cpu_V0);
87
- } else {
88
- gen_helper_neon_qshl_s64(cpu_V0, cpu_env,
89
- cpu_V1, cpu_V0);
90
- }
91
- break;
92
- case NEON_3R_VRSHL:
93
- if (u) {
94
- gen_helper_neon_rshl_u64(cpu_V0, cpu_V1, cpu_V0);
95
- } else {
96
- gen_helper_neon_rshl_s64(cpu_V0, cpu_V1, cpu_V0);
97
- }
98
- break;
99
- case NEON_3R_VQRSHL:
100
- if (u) {
101
- gen_helper_neon_qrshl_u64(cpu_V0, cpu_env,
102
- cpu_V1, cpu_V0);
103
- } else {
104
- gen_helper_neon_qrshl_s64(cpu_V0, cpu_env,
105
- cpu_V1, cpu_V0);
106
- }
107
- break;
108
- default:
109
- abort();
110
- }
111
- neon_store_reg64(cpu_V0, rd + pass);
112
- }
113
- return 0;
114
+ /* 64-bit element instructions: handled by decodetree */
115
+ return 1;
116
}
117
pairwise = 0;
118
switch (op) {
119
--
120
2.20.1
121
122
diff view generated by jsdifflib
Deleted patch
1
Convert the Neon VHADD insns in the 3-reg-same group to decodetree.
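
(As a reminder of the semantics, which are unchanged by this conversion:
VHADD is a halving add, each element becomes (a + b) >> 1 computed with a
widened intermediate, so for example VHADD.U8 of 200 and 100 yields 150 even
though 200 + 100 does not fit in eight bits.)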
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20200512163904.10918-5-peter.maydell@linaro.org
6
---
7
target/arm/neon-dp.decode | 2 ++
8
target/arm/translate-neon.inc.c | 24 ++++++++++++++++++++++++
9
target/arm/translate.c | 4 +---
10
3 files changed, 27 insertions(+), 3 deletions(-)
11
12
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-dp.decode
15
+++ b/target/arm/neon-dp.decode
16
@@ -XXX,XX +XXX,XX @@
17
@3same .... ... . . . size:2 .... .... .... . q:1 . . .... \
18
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
19
20
+VHADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 0 .... @3same
21
+VHADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 0 .... @3same
22
VQADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 1 .... @3same
23
VQADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 1 .... @3same
24
25
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/translate-neon.inc.c
28
+++ b/target/arm/translate-neon.inc.c
29
@@ -XXX,XX +XXX,XX @@ DO_3SAME_64_ENV(VQSHL_S64, gen_helper_neon_qshl_s64)
30
DO_3SAME_64_ENV(VQSHL_U64, gen_helper_neon_qshl_u64)
31
DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
32
DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
33
+
34
+#define DO_3SAME_32(INSN, FUNC) \
35
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
36
+ uint32_t rn_ofs, uint32_t rm_ofs, \
37
+ uint32_t oprsz, uint32_t maxsz) \
38
+ { \
39
+ static const GVecGen3 ops[4] = { \
40
+ { .fni4 = gen_helper_neon_##FUNC##8 }, \
41
+ { .fni4 = gen_helper_neon_##FUNC##16 }, \
42
+ { .fni4 = gen_helper_neon_##FUNC##32 }, \
43
+ { 0 }, \
44
+ }; \
45
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
46
+ } \
47
+ static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
48
+ { \
49
+ if (a->size > 2) { \
50
+ return false; \
51
+ } \
52
+ return do_3same(s, a, gen_##INSN##_3s); \
53
+ }
54
+
55
+DO_3SAME_32(VHADD_S, hadd_s)
56
+DO_3SAME_32(VHADD_U, hadd_u)
57
diff --git a/target/arm/translate.c b/target/arm/translate.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate.c
60
+++ b/target/arm/translate.c
61
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
62
case NEON_3R_VML:
63
case NEON_3R_VSHL:
64
case NEON_3R_SHA:
65
+ case NEON_3R_VHADD:
66
/* Already handled by decodetree */
67
return 1;
68
}
69
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
70
tmp2 = neon_load_reg(rm, pass);
71
}
72
switch (op) {
73
- case NEON_3R_VHADD:
74
- GEN_NEON_INTEGER_OP(hadd);
75
- break;
76
case NEON_3R_VRHADD:
77
GEN_NEON_INTEGER_OP(rhadd);
78
break;
79
--
80
2.20.1
81
82
diff view generated by jsdifflib
Deleted patch
1
Convert the Neon VABA and VABD insns in the 3-reg-same group to
2
decodetree.
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200512163904.10918-6-peter.maydell@linaro.org
7
---
8
target/arm/neon-dp.decode | 6 ++++++
9
target/arm/translate-neon.inc.c | 4 ++++
10
target/arm/translate.c | 22 ++--------------------
11
3 files changed, 12 insertions(+), 20 deletions(-)
12
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@ VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
18
VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
19
VMIN_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 1 .... @3same
20
21
+VABD_S_3s 1111 001 0 0 . .. .... .... 0111 . . . 0 .... @3same
22
+VABD_U_3s 1111 001 1 0 . .. .... .... 0111 . . . 0 .... @3same
23
+
24
+VABA_S_3s 1111 001 0 0 . .. .... .... 0111 . . . 1 .... @3same
25
+VABA_U_3s 1111 001 1 0 . .. .... .... 0111 . . . 1 .... @3same
26
+
27
VADD_3s 1111 001 0 0 . .. .... .... 1000 . . . 0 .... @3same
28
VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same
29
30
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/translate-neon.inc.c
33
+++ b/target/arm/translate-neon.inc.c
34
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VMUL, tcg_gen_gvec_mul)
35
DO_3SAME_NO_SZ_3(VMLA, gen_gvec_mla)
36
DO_3SAME_NO_SZ_3(VMLS, gen_gvec_mls)
37
DO_3SAME_NO_SZ_3(VTST, gen_gvec_cmtst)
38
+DO_3SAME_NO_SZ_3(VABD_S, gen_gvec_sabd)
39
+DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
40
+DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
41
+DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
42
43
#define DO_3SAME_CMP(INSN, COND) \
44
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
45
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.c
48
+++ b/target/arm/translate.c
49
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
50
/* VQRDMLSH : handled by decodetree */
51
return 1;
52
53
- case NEON_3R_VABD:
54
- if (u) {
55
- gen_gvec_uabd(size, rd_ofs, rn_ofs, rm_ofs,
56
- vec_size, vec_size);
57
- } else {
58
- gen_gvec_sabd(size, rd_ofs, rn_ofs, rm_ofs,
59
- vec_size, vec_size);
60
- }
61
- return 0;
62
-
63
- case NEON_3R_VABA:
64
- if (u) {
65
- gen_gvec_uaba(size, rd_ofs, rn_ofs, rm_ofs,
66
- vec_size, vec_size);
67
- } else {
68
- gen_gvec_saba(size, rd_ofs, rn_ofs, rm_ofs,
69
- vec_size, vec_size);
70
- }
71
- return 0;
72
-
73
case NEON_3R_VADD_VSUB:
74
case NEON_3R_LOGIC:
75
case NEON_3R_VMAX:
76
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
77
case NEON_3R_VSHL:
78
case NEON_3R_SHA:
79
case NEON_3R_VHADD:
80
+ case NEON_3R_VABD:
81
+ case NEON_3R_VABA:
82
/* Already handled by decodetree */
83
return 1;
84
}
85
--
86
2.20.1
87
88
diff view generated by jsdifflib
Deleted patch
1
Convert the Neon VRHADD and VHSUB 3-reg-same insns to decodetree.
2
(These are all the other insns in 3-reg-same which were using
3
GEN_NEON_INTEGER_OP() and which are not pairwise or
4
reversed-operands.)
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200512163904.10918-7-peter.maydell@linaro.org
9
---
10
target/arm/neon-dp.decode | 6 ++++++
11
target/arm/translate-neon.inc.c | 4 ++++
12
target/arm/translate.c | 8 ++------
13
3 files changed, 12 insertions(+), 6 deletions(-)
14
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
18
+++ b/target/arm/neon-dp.decode
19
@@ -XXX,XX +XXX,XX @@ VHADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 0 .... @3same
20
VQADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 1 .... @3same
21
VQADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 1 .... @3same
22
23
+VRHADD_S_3s 1111 001 0 0 . .. .... .... 0001 . . . 0 .... @3same
24
+VRHADD_U_3s 1111 001 1 0 . .. .... .... 0001 . . . 0 .... @3same
25
+
26
@3same_logic .... ... . . . .. .... .... .... . q:1 .. .... \
27
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0
28
29
@@ -XXX,XX +XXX,XX @@ VBSL_3s 1111 001 1 0 . 01 .... .... 0001 ... 1 .... @3same_logic
30
VBIT_3s 1111 001 1 0 . 10 .... .... 0001 ... 1 .... @3same_logic
31
VBIF_3s 1111 001 1 0 . 11 .... .... 0001 ... 1 .... @3same_logic
32
33
+VHSUB_S_3s 1111 001 0 0 . .. .... .... 0010 . . . 0 .... @3same
34
+VHSUB_U_3s 1111 001 1 0 . .. .... .... 0010 . . . 0 .... @3same
35
+
36
VQSUB_S_3s 1111 001 0 0 . .. .... .... 0010 . . . 1 .... @3same
37
VQSUB_U_3s 1111 001 1 0 . .. .... .... 0010 . . . 1 .... @3same
38
39
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/translate-neon.inc.c
42
+++ b/target/arm/translate-neon.inc.c
43
@@ -XXX,XX +XXX,XX @@ DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
44
45
DO_3SAME_32(VHADD_S, hadd_s)
46
DO_3SAME_32(VHADD_U, hadd_u)
47
+DO_3SAME_32(VHSUB_S, hsub_s)
48
+DO_3SAME_32(VHSUB_U, hsub_u)
49
+DO_3SAME_32(VRHADD_S, rhadd_s)
50
+DO_3SAME_32(VRHADD_U, rhadd_u)
51
diff --git a/target/arm/translate.c b/target/arm/translate.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/translate.c
54
+++ b/target/arm/translate.c
55
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
56
case NEON_3R_VSHL:
57
case NEON_3R_SHA:
58
case NEON_3R_VHADD:
59
+ case NEON_3R_VRHADD:
60
+ case NEON_3R_VHSUB:
61
case NEON_3R_VABD:
62
case NEON_3R_VABA:
63
/* Already handled by decodetree */
64
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
65
tmp2 = neon_load_reg(rm, pass);
66
}
67
switch (op) {
68
- case NEON_3R_VRHADD:
69
- GEN_NEON_INTEGER_OP(rhadd);
70
- break;
71
- case NEON_3R_VHSUB:
72
- GEN_NEON_INTEGER_OP(hsub);
73
- break;
74
case NEON_3R_VQSHL:
75
GEN_NEON_INTEGER_OP_ENV(qshl);
76
break;
77
--
78
2.20.1
79
80
Convert the Neon fp VMAX/VMIN/VMAXNM/VMINNM/VRECPS/VRSQRTS 3-reg-same
insns to decodetree. (These are all the remaining non-accumulation
instructions in this group.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-17-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 6 +++
 target/arm/translate-neon.inc.c | 70 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 42 +-------------------------------
 3 files changed, 78 insertions(+), 40 deletions(-)

The architecture requires (R_TYTWB) that an attempt to return from EL3
when SCR_EL3.{NSE,NS} are {1,0} is an illegal exception return. (This
enforces that the CPU can't ever be executing below EL3 with the
NSE,NS bits indicating an invalid security state.)

We were missing this check; add it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807150618.101357-1-peter.maydell@linaro.org
---
 target/arm/tcg/helper-a64.c | 9 +++++++++
 1 file changed, 9 insertions(+)
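For the Neon patch above: the two Newton-Raphson step insns (VRECPS,
VRSQRTS) compute the following per 32-bit lane (sketch only; the real
helpers in vfp_helper.c also handle the infinity/zero special cases via
softfloat):

    /* VRECPS:  2.0 - (a * b)
     * VRSQRTS: (3.0 - (a * b)) / 2.0 */
    static float vrecps_lane(float a, float b)  { return 2.0f - a * b; }
    static float vrsqrts_lane(float a, float b) { return (3.0f - a * b) / 2.0f; }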
13
14
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
--- a/target/arm/tcg/helper-a64.c
17
+++ b/target/arm/neon-dp.decode
18
+++ b/target/arm/tcg/helper-a64.c
18
@@ -XXX,XX +XXX,XX @@ VCGE_fp_3s 1111 001 1 0 . 0 . .... .... 1110 ... 0 .... @3same_fp
19
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
19
VACGE_fp_3s 1111 001 1 0 . 0 . .... .... 1110 ... 1 .... @3same_fp
20
spsr &= ~PSTATE_SS;
20
VCGT_fp_3s 1111 001 1 0 . 1 . .... .... 1110 ... 0 .... @3same_fp
21
}
21
VACGT_fp_3s 1111 001 1 0 . 1 . .... .... 1110 ... 1 .... @3same_fp
22
22
+VMAX_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 0 .... @3same_fp
23
+ /*
23
+VMIN_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 0 .... @3same_fp
24
+ * FEAT_RME forbids return from EL3 with an invalid security state.
24
VPMAX_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 0 .... @3same_fp_q0
25
+ * We don't need an explicit check for FEAT_RME here because we enforce
25
VPMIN_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 0 .... @3same_fp_q0
26
+ * in scr_write() that you can't set the NSE bit without it.
26
+VRECPS_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
27
+ */
27
+VRSQRTS_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
28
+ if (cur_el == 3 && (env->cp15.scr_el3 & (SCR_NS | SCR_NSE)) == SCR_NSE) {
28
+VMAXNM_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
29
+ goto illegal_return;
29
+VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
30
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/translate-neon.inc.c
33
+++ b/target/arm/translate-neon.inc.c
34
@@ -XXX,XX +XXX,XX @@ DO_3S_FP(VCGE, gen_helper_neon_cge_f32, false)
35
DO_3S_FP(VCGT, gen_helper_neon_cgt_f32, false)
36
DO_3S_FP(VACGE, gen_helper_neon_acge_f32, false)
37
DO_3S_FP(VACGT, gen_helper_neon_acgt_f32, false)
38
+DO_3S_FP(VMAX, gen_helper_vfp_maxs, false)
39
+DO_3S_FP(VMIN, gen_helper_vfp_mins, false)
40
41
static void gen_VMLA_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
42
TCGv_ptr fpstatus)
43
@@ -XXX,XX +XXX,XX @@ static void gen_VMLS_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
44
DO_3S_FP(VMLA, gen_VMLA_fp_3s, true)
45
DO_3S_FP(VMLS, gen_VMLS_fp_3s, true)
46
47
+static bool trans_VMAXNM_fp_3s(DisasContext *s, arg_3same *a)
48
+{
49
+ if (!arm_dc_feature(s, ARM_FEATURE_V8)) {
50
+ return false;
51
+ }
30
+ }
52
+
31
+
53
+ if (a->size != 0) {
32
new_el = el_from_spsr(spsr);
54
+ /* TODO fp16 support */
33
if (new_el == -1) {
55
+ return false;
34
goto illegal_return;
56
+ }
57
+
58
+ return do_3same_fp(s, a, gen_helper_vfp_maxnums, false);
59
+}
60
+
61
+static bool trans_VMINNM_fp_3s(DisasContext *s, arg_3same *a)
62
+{
63
+ if (!arm_dc_feature(s, ARM_FEATURE_V8)) {
64
+ return false;
65
+ }
66
+
67
+ if (a->size != 0) {
68
+ /* TODO fp16 support */
69
+ return false;
70
+ }
71
+
72
+ return do_3same_fp(s, a, gen_helper_vfp_minnums, false);
73
+}
74
+
75
+WRAP_ENV_FN(gen_VRECPS_tramp, gen_helper_recps_f32)
76
+
77
+static void gen_VRECPS_fp_3s(unsigned vece, uint32_t rd_ofs,
78
+ uint32_t rn_ofs, uint32_t rm_ofs,
79
+ uint32_t oprsz, uint32_t maxsz)
80
+{
81
+ static const GVecGen3 ops = { .fni4 = gen_VRECPS_tramp };
82
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops);
83
+}
84
+
85
+static bool trans_VRECPS_fp_3s(DisasContext *s, arg_3same *a)
86
+{
87
+ if (a->size != 0) {
88
+ /* TODO fp16 support */
89
+ return false;
90
+ }
91
+
92
+ return do_3same(s, a, gen_VRECPS_fp_3s);
93
+}
94
+
95
+WRAP_ENV_FN(gen_VRSQRTS_tramp, gen_helper_rsqrts_f32)
96
+
97
+static void gen_VRSQRTS_fp_3s(unsigned vece, uint32_t rd_ofs,
98
+ uint32_t rn_ofs, uint32_t rm_ofs,
99
+ uint32_t oprsz, uint32_t maxsz)
100
+{
101
+ static const GVecGen3 ops = { .fni4 = gen_VRSQRTS_tramp };
102
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops);
103
+}
104
+
105
+static bool trans_VRSQRTS_fp_3s(DisasContext *s, arg_3same *a)
106
+{
107
+ if (a->size != 0) {
108
+ /* TODO fp16 support */
109
+ return false;
110
+ }
111
+
112
+ return do_3same(s, a, gen_VRSQRTS_fp_3s);
113
+}
114
+
115
static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
116
{
117
/* FP operations handled pairwise 32 bits at a time */
118
diff --git a/target/arm/translate.c b/target/arm/translate.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/arm/translate.c
121
+++ b/target/arm/translate.c
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
123
case NEON_3R_FLOAT_MULTIPLY:
124
case NEON_3R_FLOAT_CMP:
125
case NEON_3R_FLOAT_ACMP:
126
+ case NEON_3R_FLOAT_MINMAX:
127
+ case NEON_3R_FLOAT_MISC:
128
/* Already handled by decodetree */
129
return 1;
130
}
131
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
132
return 1;
133
}
134
switch (op) {
135
- case NEON_3R_FLOAT_MINMAX:
136
- if (u) {
137
- return 1; /* VPMIN/VPMAX handled by decodetree */
138
- }
139
- break;
140
- case NEON_3R_FLOAT_MISC:
141
- /* VMAXNM/VMINNM in ARMv8 */
142
- if (u && !arm_dc_feature(s, ARM_FEATURE_V8)) {
143
- return 1;
144
- }
145
- break;
146
case NEON_3R_VFM_VQRDMLSH:
147
if (!dc_isar_feature(aa32_simdfmac, s)) {
148
return 1;
149
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
150
tmp = neon_load_reg(rn, pass);
151
tmp2 = neon_load_reg(rm, pass);
152
switch (op) {
153
- case NEON_3R_FLOAT_MINMAX:
154
- {
155
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
156
- if (size == 0) {
157
- gen_helper_vfp_maxs(tmp, tmp, tmp2, fpstatus);
158
- } else {
159
- gen_helper_vfp_mins(tmp, tmp, tmp2, fpstatus);
160
- }
161
- tcg_temp_free_ptr(fpstatus);
162
- break;
163
- }
164
- case NEON_3R_FLOAT_MISC:
165
- if (u) {
166
- /* VMAXNM/VMINNM */
167
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
168
- if (size == 0) {
169
- gen_helper_vfp_maxnums(tmp, tmp, tmp2, fpstatus);
170
- } else {
171
- gen_helper_vfp_minnums(tmp, tmp, tmp2, fpstatus);
172
- }
173
- tcg_temp_free_ptr(fpstatus);
174
- } else {
175
- if (size == 0) {
176
- gen_helper_recps_f32(tmp, cpu_env, tmp, tmp2);
177
- } else {
178
- gen_helper_rsqrts_f32(tmp, cpu_env, tmp, tmp2);
179
- }
180
- }
181
- break;
182
case NEON_3R_VFM_VQRDMLSH:
183
{
184
/* VFMA, VFMS: fused multiply-add */
185
--
35
--
186
2.20.1
36
2.34.1
187
188
The usual location for the env argument in the argument list of a TCG helper
is immediately after the return-value argument. recps_f32 and rsqrts_f32
differ in that they put it at the end.

Move the env argument to its usual place; this will allow us to
more easily use these helper functions with the gvec APIs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-16-peter.maydell@linaro.org
---
 target/arm/helper.h | 4 ++--
 target/arm/translate.c | 4 ++--
 target/arm/vfp_helper.c | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

In the m48t59 device we almost always use 64-bit arithmetic when
dealing with time_t deltas. The one exception is in set_alarm(),
which currently uses a plain 'int' to hold the difference between two
time_t values. Switch to int64_t instead to avoid any possible
overflow issues.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 hw/rtc/m48t59.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
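As a rough illustration of the overflow the wider type avoids in the RTC
patch (hypothetical values, assuming a 64-bit time_t; not taken from the
patch itself):

    #include <stdint.h>
    #include <time.h>

    static void alarm_diff_sketch(void)
    {
        time_t now   = 1693440000;    /* some point in 2023 (hypothetical) */
        time_t alarm = 4102444800;    /* 2100-01-01 UTC (hypothetical)     */
        int     d32  = alarm - now;   /* ~2.4e9: does not fit in a 32-bit int */
        int64_t d64  = alarm - now;   /* holds the full difference            */
        (void)d32; (void)d64;
    }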
16
12
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
diff --git a/hw/rtc/m48t59.c b/hw/rtc/m48t59.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
15
--- a/hw/rtc/m48t59.c
20
+++ b/target/arm/helper.h
16
+++ b/hw/rtc/m48t59.c
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(vfp_fcvt_f64_to_f16, TCG_CALL_NO_RWG, f16, f64, ptr, i32)
17
@@ -XXX,XX +XXX,XX @@ static void alarm_cb (void *opaque)
22
DEF_HELPER_4(vfp_muladdd, f64, f64, f64, f64, ptr)
18
23
DEF_HELPER_4(vfp_muladds, f32, f32, f32, f32, ptr)
19
static void set_alarm(M48t59State *NVRAM)
24
25
-DEF_HELPER_3(recps_f32, f32, f32, f32, env)
26
-DEF_HELPER_3(rsqrts_f32, f32, f32, f32, env)
27
+DEF_HELPER_3(recps_f32, f32, env, f32, f32)
28
+DEF_HELPER_3(rsqrts_f32, f32, env, f32, f32)
29
DEF_HELPER_FLAGS_2(recpe_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
30
DEF_HELPER_FLAGS_2(recpe_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
31
DEF_HELPER_FLAGS_2(recpe_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
35
+++ b/target/arm/translate.c
36
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
37
tcg_temp_free_ptr(fpstatus);
38
} else {
39
if (size == 0) {
40
- gen_helper_recps_f32(tmp, tmp, tmp2, cpu_env);
41
+ gen_helper_recps_f32(tmp, cpu_env, tmp, tmp2);
42
} else {
43
- gen_helper_rsqrts_f32(tmp, tmp, tmp2, cpu_env);
44
+ gen_helper_rsqrts_f32(tmp, cpu_env, tmp, tmp2);
45
}
46
}
47
break;
48
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/vfp_helper.c
51
+++ b/target/arm/vfp_helper.c
52
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
53
#define float32_three make_float32(0x40400000)
54
#define float32_one_point_five make_float32(0x3fc00000)
55
56
-float32 HELPER(recps_f32)(float32 a, float32 b, CPUARMState *env)
57
+float32 HELPER(recps_f32)(CPUARMState *env, float32 a, float32 b)
58
{
20
{
59
float_status *s = &env->vfp.standard_fp_status;
21
- int diff;
60
if ((float32_is_infinity(a) && float32_is_zero_or_denormal(b)) ||
22
+ int64_t diff;
61
@@ -XXX,XX +XXX,XX @@ float32 HELPER(recps_f32)(float32 a, float32 b, CPUARMState *env)
23
if (NVRAM->alrm_timer != NULL) {
62
return float32_sub(float32_two, float32_mul(a, b, s), s);
24
timer_del(NVRAM->alrm_timer);
63
}
25
diff = qemu_timedate_diff(&NVRAM->alarm) - NVRAM->time_offset;
64
65
-float32 HELPER(rsqrts_f32)(float32 a, float32 b, CPUARMState *env)
66
+float32 HELPER(rsqrts_f32)(CPUARMState *env, float32 a, float32 b)
67
{
68
float_status *s = &env->vfp.standard_fp_status;
69
float32 product;
70
--
26
--
71
2.20.1
27
2.34.1
72
28
73
29
Convert the Neon floating-point 3-reg-same compare insns VCGE, VCGT,
VCEQ, VACGE and VACGT to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-15-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 5 +++++
 target/arm/translate-neon.inc.c | 6 +++++
 target/arm/translate.c | 39 ++-------------------------------
 3 files changed, 13 insertions(+), 37 deletions(-)

In the twl92230 device, use int64_t for the two state fields
sec_offset and alm_sec, because we set these to values that
are either time_t or differences between two time_t values.

These fields aren't saved in vmstate anywhere, so we can
safely widen them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 hw/rtc/twl92230.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
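For the Neon patch above: the 'absolute' compares operate on magnitudes.
An illustrative scalar model only, ignoring the NaN handling that the real
softfloat helpers perform:

    #include <math.h>
    #include <stdint.h>

    /* VACGE: |a| >= |b|, VACGT: |a| > |b|; each result lane is all-ones or zero */
    static uint32_t vacge_lane(float a, float b) { return fabsf(a) >= fabsf(b) ? ~0u : 0u; }
    static uint32_t vacgt_lane(float a, float b) { return fabsf(a) >  fabsf(b) ? ~0u : 0u; }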
12
13
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
diff --git a/hw/rtc/twl92230.c b/hw/rtc/twl92230.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
--- a/hw/rtc/twl92230.c
16
+++ b/target/arm/neon-dp.decode
17
+++ b/hw/rtc/twl92230.c
17
@@ -XXX,XX +XXX,XX @@ VABD_fp_3s 1111 001 1 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
18
@@ -XXX,XX +XXX,XX @@ struct MenelausState {
18
VMLA_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 1 .... @3same_fp
19
struct tm tm;
19
VMLS_fp_3s 1111 001 0 0 . 1 . .... .... 1101 ... 1 .... @3same_fp
20
struct tm new;
20
VMUL_fp_3s 1111 001 1 0 . 0 . .... .... 1101 ... 1 .... @3same_fp
21
struct tm alm;
21
+VCEQ_fp_3s 1111 001 0 0 . 0 . .... .... 1110 ... 0 .... @3same_fp
22
- int sec_offset;
22
+VCGE_fp_3s 1111 001 1 0 . 0 . .... .... 1110 ... 0 .... @3same_fp
23
- int alm_sec;
23
+VACGE_fp_3s 1111 001 1 0 . 0 . .... .... 1110 ... 1 .... @3same_fp
24
+ int64_t sec_offset;
24
+VCGT_fp_3s 1111 001 1 0 . 1 . .... .... 1110 ... 0 .... @3same_fp
25
+ int64_t alm_sec;
25
+VACGT_fp_3s 1111 001 1 0 . 1 . .... .... 1110 ... 1 .... @3same_fp
26
int next_comp;
26
VPMAX_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 0 .... @3same_fp_q0
27
} rtc;
27
VPMIN_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 0 .... @3same_fp_q0
28
uint16_t rtc_next_vmstate;
28
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-neon.inc.c
31
+++ b/target/arm/translate-neon.inc.c
32
@@ -XXX,XX +XXX,XX @@ DO_3S_FP_GVEC(VMUL, gen_helper_gvec_fmul_s)
33
return do_3same_fp(s, a, FUNC, READS_VD); \
34
}
35
36
+DO_3S_FP(VCEQ, gen_helper_neon_ceq_f32, false)
37
+DO_3S_FP(VCGE, gen_helper_neon_cge_f32, false)
38
+DO_3S_FP(VCGT, gen_helper_neon_cgt_f32, false)
39
+DO_3S_FP(VACGE, gen_helper_neon_acge_f32, false)
40
+DO_3S_FP(VACGT, gen_helper_neon_acgt_f32, false)
41
+
42
static void gen_VMLA_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
43
TCGv_ptr fpstatus)
44
{
45
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.c
48
+++ b/target/arm/translate.c
49
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
50
case NEON_3R_VQDMULH_VQRDMULH:
51
case NEON_3R_FLOAT_ARITH:
52
case NEON_3R_FLOAT_MULTIPLY:
53
+ case NEON_3R_FLOAT_CMP:
54
+ case NEON_3R_FLOAT_ACMP:
55
/* Already handled by decodetree */
56
return 1;
57
}
58
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
59
return 1; /* VPMIN/VPMAX handled by decodetree */
60
}
61
break;
62
- case NEON_3R_FLOAT_CMP:
63
- if (!u && size) {
64
- /* no encoding for U=0 C=1x */
65
- return 1;
66
- }
67
- break;
68
- case NEON_3R_FLOAT_ACMP:
69
- if (!u) {
70
- return 1;
71
- }
72
- break;
73
case NEON_3R_FLOAT_MISC:
74
/* VMAXNM/VMINNM in ARMv8 */
75
if (u && !arm_dc_feature(s, ARM_FEATURE_V8)) {
76
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
77
tmp = neon_load_reg(rn, pass);
78
tmp2 = neon_load_reg(rm, pass);
79
switch (op) {
80
- case NEON_3R_FLOAT_CMP:
81
- {
82
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
83
- if (!u) {
84
- gen_helper_neon_ceq_f32(tmp, tmp, tmp2, fpstatus);
85
- } else {
86
- if (size == 0) {
87
- gen_helper_neon_cge_f32(tmp, tmp, tmp2, fpstatus);
88
- } else {
89
- gen_helper_neon_cgt_f32(tmp, tmp, tmp2, fpstatus);
90
- }
91
- }
92
- tcg_temp_free_ptr(fpstatus);
93
- break;
94
- }
95
- case NEON_3R_FLOAT_ACMP:
96
- {
97
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
98
- if (size == 0) {
99
- gen_helper_neon_acge_f32(tmp, tmp, tmp2, fpstatus);
100
- } else {
101
- gen_helper_neon_acgt_f32(tmp, tmp, tmp2, fpstatus);
102
- }
103
- tcg_temp_free_ptr(fpstatus);
104
- break;
105
- }
106
case NEON_3R_FLOAT_MINMAX:
107
{
108
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
109
--
29
--
110
2.20.1
30
2.34.1
111
31
112
32
Convert the Neon floating-point VMUL, VMLA and VMLS 3-reg-same insns to
decodetree.

We don't have a gvec helper for multiply-accumulate, so VMLA and VMLS
need a loop function do_3same_fp(). This takes a reads_vd parameter
which tells it to load the old value into vd before
calling the callback function, in the same way that the do_vfp_3op_sp()
and do_vfp_3op_dp() functions in translate-vfp.inc.c work. (The
only uses in this patch pass reads_vd == true, but later commits
will use reads_vd == false.)

This conversion fixes in passing an underdecoding for VMUL
(originally reported by Fredrik Strupe <fredrik@strupe.net>): bit 1
of the 'size' field must be 0. The old decoder didn't enforce this,
but the decodetree pattern does.

The gen_VMLA_fp_3s() function performs the addition operation
with the operands in the opposite order to the old decoder:
since Neon sets 'default NaN mode', float32_add operations are
commutative so there is no behaviour difference, but putting
them this way around matches the Arm ARM pseudocode and the
required operation order for the subtraction in gen_VMLS_fp_3s().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-14-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 3 ++
 target/arm/translate-neon.inc.c | 81 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 17 +------
 3 files changed, 85 insertions(+), 16 deletions(-)

In the aspeed_rtc device we store a difference between two time_t
values in an 'int'. This is not really correct when time_t could
be 64 bits. Enlarge the field to 'int64_t'.

This is a migration compatibility break for the aspeed boards.
While we are changing the vmstate, remove the accidental
duplicate of the offset field.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
---
 include/hw/rtc/aspeed_rtc.h | 2 +-
 hw/rtc/aspeed_rtc.c | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)
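For the Neon patch above, a rough per-lane model of why the accumulating
forms need reads_vd (illustrative names only; the real callbacks are the
gen_VMLA_fp_3s/gen_VMLS_fp_3s functions in the diff below):

    /* VMUL: vd = vn * vm        (reads_vd == false)
     * VMLA: vd = vd + (vn * vm) (reads_vd == true)
     * VMLS: vd = vd - (vn * vm) (reads_vd == true)  -- per 32-bit lane */
    static float vmla_lane(float vd, float vn, float vm) { return vd + vn * vm; }
    static float vmls_lane(float vd, float vn, float vm) { return vd - vn * vm; }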
32
15
33
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
16
diff --git a/include/hw/rtc/aspeed_rtc.h b/include/hw/rtc/aspeed_rtc.h
34
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/neon-dp.decode
18
--- a/include/hw/rtc/aspeed_rtc.h
36
+++ b/target/arm/neon-dp.decode
19
+++ b/include/hw/rtc/aspeed_rtc.h
37
@@ -XXX,XX +XXX,XX @@ VADD_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 0 .... @3same_fp
20
@@ -XXX,XX +XXX,XX @@ struct AspeedRtcState {
38
VSUB_fp_3s 1111 001 0 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
21
qemu_irq irq;
39
VPADD_fp_3s 1111 001 1 0 . 0 . .... .... 1101 ... 0 .... @3same_fp_q0
22
40
VABD_fp_3s 1111 001 1 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
23
uint32_t reg[0x18];
41
+VMLA_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 1 .... @3same_fp
24
- int offset;
42
+VMLS_fp_3s 1111 001 0 0 . 1 . .... .... 1101 ... 1 .... @3same_fp
25
+ int64_t offset;
43
+VMUL_fp_3s 1111 001 1 0 . 0 . .... .... 1101 ... 1 .... @3same_fp
26
44
VPMAX_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 0 .... @3same_fp_q0
27
};
45
VPMIN_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 0 .... @3same_fp_q0
28
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
29
diff --git a/hw/rtc/aspeed_rtc.c b/hw/rtc/aspeed_rtc.c
47
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-neon.inc.c
31
--- a/hw/rtc/aspeed_rtc.c
49
+++ b/target/arm/translate-neon.inc.c
32
+++ b/hw/rtc/aspeed_rtc.c
50
@@ -XXX,XX +XXX,XX @@ DO_3SAME_PAIR(VPADD, padd_u)
33
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps aspeed_rtc_ops = {
51
DO_3SAME_VQDMULH(VQDMULH, qdmulh)
34
52
DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
35
static const VMStateDescription vmstate_aspeed_rtc = {
53
36
.name = TYPE_ASPEED_RTC,
54
+static bool do_3same_fp(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn,
37
- .version_id = 1,
55
+ bool reads_vd)
38
+ .version_id = 2,
56
+{
39
.fields = (VMStateField[]) {
57
+ /*
40
VMSTATE_UINT32_ARRAY(reg, AspeedRtcState, 0x18),
58
+ * FP operations handled elementwise 32 bits at a time.
41
- VMSTATE_INT32(offset, AspeedRtcState),
59
+ * If reads_vd is true then the old value of Vd will be
42
- VMSTATE_INT32(offset, AspeedRtcState),
60
+ * loaded before calling the callback function. This is
43
+ VMSTATE_INT64(offset, AspeedRtcState),
61
+ * used for multiply-accumulate type operations.
44
VMSTATE_END_OF_LIST()
62
+ */
45
}
63
+ TCGv_i32 tmp, tmp2;
46
};
64
+ int pass;
65
+
66
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
67
+ return false;
68
+ }
69
+
70
+ /* UNDEF accesses to D16-D31 if they don't exist. */
71
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
72
+ ((a->vd | a->vn | a->vm) & 0x10)) {
73
+ return false;
74
+ }
75
+
76
+ if ((a->vn | a->vm | a->vd) & a->q) {
77
+ return false;
78
+ }
79
+
80
+ if (!vfp_access_check(s)) {
81
+ return true;
82
+ }
83
+
84
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1);
85
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
86
+ tmp = neon_load_reg(a->vn, pass);
87
+ tmp2 = neon_load_reg(a->vm, pass);
88
+ if (reads_vd) {
89
+ TCGv_i32 tmp_rd = neon_load_reg(a->vd, pass);
90
+ fn(tmp_rd, tmp, tmp2, fpstatus);
91
+ neon_store_reg(a->vd, pass, tmp_rd);
92
+ tcg_temp_free_i32(tmp);
93
+ } else {
94
+ fn(tmp, tmp, tmp2, fpstatus);
95
+ neon_store_reg(a->vd, pass, tmp);
96
+ }
97
+ tcg_temp_free_i32(tmp2);
98
+ }
99
+ tcg_temp_free_ptr(fpstatus);
100
+ return true;
101
+}
102
+
103
/*
104
* For all the functions using this macro, size == 1 means fp16,
105
* which is an architecture extension we don't implement yet.
106
@@ -XXX,XX +XXX,XX @@ DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
107
DO_3S_FP_GVEC(VADD, gen_helper_gvec_fadd_s)
108
DO_3S_FP_GVEC(VSUB, gen_helper_gvec_fsub_s)
109
DO_3S_FP_GVEC(VABD, gen_helper_gvec_fabd_s)
110
+DO_3S_FP_GVEC(VMUL, gen_helper_gvec_fmul_s)
111
+
112
+/*
113
+ * For all the functions using this macro, size == 1 means fp16,
114
+ * which is an architecture extension we don't implement yet.
115
+ */
116
+#define DO_3S_FP(INSN,FUNC,READS_VD) \
117
+ static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
118
+ { \
119
+ if (a->size != 0) { \
120
+ /* TODO fp16 support */ \
121
+ return false; \
122
+ } \
123
+ return do_3same_fp(s, a, FUNC, READS_VD); \
124
+ }
125
+
126
+static void gen_VMLA_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
127
+ TCGv_ptr fpstatus)
128
+{
129
+ gen_helper_vfp_muls(vn, vn, vm, fpstatus);
130
+ gen_helper_vfp_adds(vd, vd, vn, fpstatus);
131
+}
132
+
133
+static void gen_VMLS_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
134
+ TCGv_ptr fpstatus)
135
+{
136
+ gen_helper_vfp_muls(vn, vn, vm, fpstatus);
137
+ gen_helper_vfp_subs(vd, vd, vn, fpstatus);
138
+}
139
+
140
+DO_3S_FP(VMLA, gen_VMLA_fp_3s, true)
141
+DO_3S_FP(VMLS, gen_VMLS_fp_3s, true)
142
143
static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
144
{
145
diff --git a/target/arm/translate.c b/target/arm/translate.c
146
index XXXXXXX..XXXXXXX 100644
147
--- a/target/arm/translate.c
148
+++ b/target/arm/translate.c
149
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
150
case NEON_3R_VPADD_VQRDMLAH:
151
case NEON_3R_VQDMULH_VQRDMULH:
152
case NEON_3R_FLOAT_ARITH:
153
+ case NEON_3R_FLOAT_MULTIPLY:
154
/* Already handled by decodetree */
155
return 1;
156
}
157
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
158
tmp = neon_load_reg(rn, pass);
159
tmp2 = neon_load_reg(rm, pass);
160
switch (op) {
161
- case NEON_3R_FLOAT_MULTIPLY:
162
- {
163
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
164
- gen_helper_vfp_muls(tmp, tmp, tmp2, fpstatus);
165
- if (!u) {
166
- tcg_temp_free_i32(tmp2);
167
- tmp2 = neon_load_reg(rd, pass);
168
- if (size == 0) {
169
- gen_helper_vfp_adds(tmp, tmp, tmp2, fpstatus);
170
- } else {
171
- gen_helper_vfp_subs(tmp, tmp2, tmp, fpstatus);
172
- }
173
- }
174
- tcg_temp_free_ptr(fpstatus);
175
- break;
176
- }
177
case NEON_3R_FLOAT_CMP:
178
{
179
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
180
--
47
--
181
2.20.1
48
2.34.1
182
49
183
50
Convert the Neon floating point VFMA and VFMS insns to decodetree.
These are the last insns in the 3-reg-same group so we can
remove all the support/loop code from the old decoder.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-18-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 3 +
 target/arm/translate-neon.inc.c | 41 ++++++++
 target/arm/translate.c | 176 +-------------------------------
 3 files changed, 46 insertions(+), 174 deletions(-)

The functions qemu_get_timedate() and qemu_timedate_diff() take
and return a time offset as an integer. Coverity points out that this
means that when an RTC device implementation holds an offset
as a time_t, as the m48t59 does, the time_t will get truncated
(CID 1507157, 1517772).

The functions work with time_t internally, so make them use that type
in their APIs.

Note that this won't help any Y2038 issues where either the device
model itself is keeping the offset in a 32-bit integer, or where the
hardware under emulation has Y2038 or other rollover problems. If we
missed any cases of the former then hopefully Coverity will warn us
about them, since after this patch we'd be truncating a time_t in
assignments from qemu_timedate_diff().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/sysemu/rtc.h | 4 ++--
 softmmu/rtc.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
3 files changed, 46 insertions(+), 174 deletions(-)
13
23
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
24
diff --git a/include/sysemu/rtc.h b/include/sysemu/rtc.h
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
26
--- a/include/sysemu/rtc.h
17
+++ b/target/arm/neon-dp.decode
27
+++ b/include/sysemu/rtc.h
18
@@ -XXX,XX +XXX,XX @@ SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
28
@@ -XXX,XX +XXX,XX @@
19
SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
29
* The behaviour of the clock whose value this function returns will
20
vm=%vm_dp vn=%vn_dp vd=%vd_dp
30
* depend on the -rtc command line option passed by the user.
21
31
*/
22
+VFMA_fp_3s 1111 001 0 0 . 0 . .... .... 1100 ... 1 .... @3same_fp
32
-void qemu_get_timedate(struct tm *tm, int offset);
23
+VFMS_fp_3s 1111 001 0 0 . 1 . .... .... 1100 ... 1 .... @3same_fp
33
+void qemu_get_timedate(struct tm *tm, time_t offset);
24
+
34
25
VQRDMLSH_3s 1111 001 1 0 . .. .... .... 1100 ... 1 .... @3same
35
/**
26
36
* qemu_timedate_diff: Return difference between a struct tm and the RTC
27
VADD_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 0 .... @3same_fp
37
@@ -XXX,XX +XXX,XX @@ void qemu_get_timedate(struct tm *tm, int offset);
28
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
38
* a timestamp one hour further ahead than the current RTC time
39
* then this function will return 3600.
40
*/
41
-int qemu_timedate_diff(struct tm *tm);
42
+time_t qemu_timedate_diff(struct tm *tm);
43
44
#endif
45
diff --git a/softmmu/rtc.c b/softmmu/rtc.c
29
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-neon.inc.c
47
--- a/softmmu/rtc.c
31
+++ b/target/arm/translate-neon.inc.c
48
+++ b/softmmu/rtc.c
32
@@ -XXX,XX +XXX,XX @@ static bool trans_VRSQRTS_fp_3s(DisasContext *s, arg_3same *a)
49
@@ -XXX,XX +XXX,XX @@ static time_t qemu_ref_timedate(QEMUClockType clock)
33
return do_3same(s, a, gen_VRSQRTS_fp_3s);
50
return value;
34
}
51
}
35
52
36
+static void gen_VFMA_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
53
-void qemu_get_timedate(struct tm *tm, int offset)
37
+ TCGv_ptr fpstatus)
54
+void qemu_get_timedate(struct tm *tm, time_t offset)
38
+{
39
+ gen_helper_vfp_muladds(vd, vn, vm, vd, fpstatus);
40
+}
41
+
42
+static bool trans_VFMA_fp_3s(DisasContext *s, arg_3same *a)
43
+{
44
+ if (!dc_isar_feature(aa32_simdfmac, s)) {
45
+ return false;
46
+ }
47
+
48
+ if (a->size != 0) {
49
+ /* TODO fp16 support */
50
+ return false;
51
+ }
52
+
53
+ return do_3same_fp(s, a, gen_VFMA_fp_3s, true);
54
+}
55
+
56
+static void gen_VFMS_fp_3s(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm,
57
+ TCGv_ptr fpstatus)
58
+{
59
+ gen_helper_vfp_negs(vn, vn);
60
+ gen_helper_vfp_muladds(vd, vn, vm, vd, fpstatus);
61
+}
62
+
63
+static bool trans_VFMS_fp_3s(DisasContext *s, arg_3same *a)
64
+{
65
+ if (!dc_isar_feature(aa32_simdfmac, s)) {
66
+ return false;
67
+ }
68
+
69
+ if (a->size != 0) {
70
+ /* TODO fp16 support */
71
+ return false;
72
+ }
73
+
74
+ return do_3same_fp(s, a, gen_VFMS_fp_3s, true);
75
+}
76
+
77
static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
78
{
55
{
79
/* FP operations handled pairwise 32 bits at a time */
56
time_t ti = qemu_ref_timedate(rtc_clock);
80
diff --git a/target/arm/translate.c b/target/arm/translate.c
57
81
index XXXXXXX..XXXXXXX 100644
58
@@ -XXX,XX +XXX,XX @@ void qemu_get_timedate(struct tm *tm, int offset)
82
--- a/target/arm/translate.c
83
+++ b/target/arm/translate.c
84
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
85
}
59
}
86
}
60
}
87
61
88
-/* Symbolic constants for op fields for Neon 3-register same-length.
62
-int qemu_timedate_diff(struct tm *tm)
89
- * The values correspond to bits [11:8,4]; see the ARM ARM DDI0406B
63
+time_t qemu_timedate_diff(struct tm *tm)
90
- * table A7-9.
64
{
91
- */
65
time_t seconds;
92
-#define NEON_3R_VHADD 0
66
93
-#define NEON_3R_VQADD 1
94
-#define NEON_3R_VRHADD 2
95
-#define NEON_3R_LOGIC 3 /* VAND,VBIC,VORR,VMOV,VORN,VEOR,VBIF,VBIT,VBSL */
96
-#define NEON_3R_VHSUB 4
97
-#define NEON_3R_VQSUB 5
98
-#define NEON_3R_VCGT 6
99
-#define NEON_3R_VCGE 7
100
-#define NEON_3R_VSHL 8
101
-#define NEON_3R_VQSHL 9
102
-#define NEON_3R_VRSHL 10
103
-#define NEON_3R_VQRSHL 11
104
-#define NEON_3R_VMAX 12
105
-#define NEON_3R_VMIN 13
106
-#define NEON_3R_VABD 14
107
-#define NEON_3R_VABA 15
108
-#define NEON_3R_VADD_VSUB 16
109
-#define NEON_3R_VTST_VCEQ 17
110
-#define NEON_3R_VML 18 /* VMLA, VMLS */
111
-#define NEON_3R_VMUL 19
112
-#define NEON_3R_VPMAX 20
113
-#define NEON_3R_VPMIN 21
114
-#define NEON_3R_VQDMULH_VQRDMULH 22
115
-#define NEON_3R_VPADD_VQRDMLAH 23
116
-#define NEON_3R_SHA 24 /* SHA1C,SHA1P,SHA1M,SHA1SU0,SHA256H{2},SHA256SU1 */
117
-#define NEON_3R_VFM_VQRDMLSH 25 /* VFMA, VFMS, VQRDMLSH */
118
-#define NEON_3R_FLOAT_ARITH 26 /* float VADD, VSUB, VPADD, VABD */
119
-#define NEON_3R_FLOAT_MULTIPLY 27 /* float VMLA, VMLS, VMUL */
120
-#define NEON_3R_FLOAT_CMP 28 /* float VCEQ, VCGE, VCGT */
121
-#define NEON_3R_FLOAT_ACMP 29 /* float VACGE, VACGT, VACLE, VACLT */
122
-#define NEON_3R_FLOAT_MINMAX 30 /* float VMIN, VMAX */
123
-#define NEON_3R_FLOAT_MISC 31 /* float VRECPS, VRSQRTS, VMAXNM/MINNM */
124
-
125
-static const uint8_t neon_3r_sizes[] = {
126
- [NEON_3R_VHADD] = 0x7,
127
- [NEON_3R_VQADD] = 0xf,
128
- [NEON_3R_VRHADD] = 0x7,
129
- [NEON_3R_LOGIC] = 0xf, /* size field encodes op type */
130
- [NEON_3R_VHSUB] = 0x7,
131
- [NEON_3R_VQSUB] = 0xf,
132
- [NEON_3R_VCGT] = 0x7,
133
- [NEON_3R_VCGE] = 0x7,
134
- [NEON_3R_VSHL] = 0xf,
135
- [NEON_3R_VQSHL] = 0xf,
136
- [NEON_3R_VRSHL] = 0xf,
137
- [NEON_3R_VQRSHL] = 0xf,
138
- [NEON_3R_VMAX] = 0x7,
139
- [NEON_3R_VMIN] = 0x7,
140
- [NEON_3R_VABD] = 0x7,
141
- [NEON_3R_VABA] = 0x7,
142
- [NEON_3R_VADD_VSUB] = 0xf,
143
- [NEON_3R_VTST_VCEQ] = 0x7,
144
- [NEON_3R_VML] = 0x7,
145
- [NEON_3R_VMUL] = 0x7,
146
- [NEON_3R_VPMAX] = 0x7,
147
- [NEON_3R_VPMIN] = 0x7,
148
- [NEON_3R_VQDMULH_VQRDMULH] = 0x6,
149
- [NEON_3R_VPADD_VQRDMLAH] = 0x7,
150
- [NEON_3R_SHA] = 0xf, /* size field encodes op type */
151
- [NEON_3R_VFM_VQRDMLSH] = 0x7, /* For VFM, size bit 1 encodes op */
152
- [NEON_3R_FLOAT_ARITH] = 0x5, /* size bit 1 encodes op */
153
- [NEON_3R_FLOAT_MULTIPLY] = 0x5, /* size bit 1 encodes op */
154
- [NEON_3R_FLOAT_CMP] = 0x5, /* size bit 1 encodes op */
155
- [NEON_3R_FLOAT_ACMP] = 0x5, /* size bit 1 encodes op */
156
- [NEON_3R_FLOAT_MINMAX] = 0x5, /* size bit 1 encodes op */
157
- [NEON_3R_FLOAT_MISC] = 0x5, /* size bit 1 encodes op */
158
-};
159
-
160
/* Symbolic constants for op fields for Neon 2-register miscellaneous.
161
* The values correspond to bits [17:16,10:7]; see the ARM ARM DDI0406B
162
* table A7-13.
163
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
164
rm_ofs = neon_reg_offset(rm, 0);
165
166
if ((insn & (1 << 23)) == 0) {
167
- /* Three register same length. */
168
- op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
169
- /* Catch invalid op and bad size combinations: UNDEF */
170
- if ((neon_3r_sizes[op] & (1 << size)) == 0) {
171
- return 1;
172
- }
173
- /* All insns of this form UNDEF for either this condition or the
174
- * superset of cases "Q==1"; we catch the latter later.
175
- */
176
- if (q && ((rd | rn | rm) & 1)) {
177
- return 1;
178
- }
179
- switch (op) {
180
- case NEON_3R_VFM_VQRDMLSH:
181
- if (!u) {
182
- /* VFM, VFMS */
183
- if (size == 1) {
184
- return 1;
185
- }
186
- break;
187
- }
188
- /* VQRDMLSH : handled by decodetree */
189
- return 1;
190
-
191
- case NEON_3R_VADD_VSUB:
192
- case NEON_3R_LOGIC:
193
- case NEON_3R_VMAX:
194
- case NEON_3R_VMIN:
195
- case NEON_3R_VTST_VCEQ:
196
- case NEON_3R_VCGT:
197
- case NEON_3R_VCGE:
198
- case NEON_3R_VQADD:
199
- case NEON_3R_VQSUB:
200
- case NEON_3R_VMUL:
201
- case NEON_3R_VML:
202
- case NEON_3R_VSHL:
203
- case NEON_3R_SHA:
204
- case NEON_3R_VHADD:
205
- case NEON_3R_VRHADD:
206
- case NEON_3R_VHSUB:
207
- case NEON_3R_VABD:
208
- case NEON_3R_VABA:
209
- case NEON_3R_VQSHL:
210
- case NEON_3R_VRSHL:
211
- case NEON_3R_VQRSHL:
212
- case NEON_3R_VPMAX:
213
- case NEON_3R_VPMIN:
214
- case NEON_3R_VPADD_VQRDMLAH:
215
- case NEON_3R_VQDMULH_VQRDMULH:
216
- case NEON_3R_FLOAT_ARITH:
217
- case NEON_3R_FLOAT_MULTIPLY:
218
- case NEON_3R_FLOAT_CMP:
219
- case NEON_3R_FLOAT_ACMP:
220
- case NEON_3R_FLOAT_MINMAX:
221
- case NEON_3R_FLOAT_MISC:
222
- /* Already handled by decodetree */
223
- return 1;
224
- }
225
-
226
- if (size == 3) {
227
- /* 64-bit element instructions: handled by decodetree */
228
- return 1;
229
- }
230
- switch (op) {
231
- case NEON_3R_VFM_VQRDMLSH:
232
- if (!dc_isar_feature(aa32_simdfmac, s)) {
233
- return 1;
234
- }
235
- break;
236
- default:
237
- break;
238
- }
239
-
240
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
241
-
242
- /* Elementwise. */
243
- tmp = neon_load_reg(rn, pass);
244
- tmp2 = neon_load_reg(rm, pass);
245
- switch (op) {
246
- case NEON_3R_VFM_VQRDMLSH:
247
- {
248
- /* VFMA, VFMS: fused multiply-add */
249
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
250
- TCGv_i32 tmp3 = neon_load_reg(rd, pass);
251
- if (size) {
252
- /* VFMS */
253
- gen_helper_vfp_negs(tmp, tmp);
254
- }
255
- gen_helper_vfp_muladds(tmp, tmp, tmp2, tmp3, fpstatus);
256
- tcg_temp_free_i32(tmp3);
257
- tcg_temp_free_ptr(fpstatus);
258
- break;
259
- }
260
- default:
261
- abort();
262
- }
263
- tcg_temp_free_i32(tmp2);
264
-
265
- neon_store_reg(rd, pass, tmp);
266
-
267
- } /* for pass */
268
- /* End of 3 register same size operations. */
269
+ /* Three register same length: handled by decodetree */
270
+ return 1;
271
} else if (insn & (1 << 4)) {
272
if ((insn & 0x00380080) != 0) {
273
/* Two registers and shift. */
274
--
67
--
275
2.20.1
68
2.34.1
276
69
277
70
Convert the Neon float VPMIN, VPMAX and VPADD 3-reg-same insns to
decodetree. These are the only remaining 'pairwise' operations,
so we can delete the pairwise-specific bits of the old decoder's
for-each-element loop now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200512163904.10918-13-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 5 +++
 target/arm/translate-neon.inc.c | 63 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 63 +++++----------------------
 3 files changed, 76 insertions(+), 55 deletions(-)

Where architecturally one ARM_FEATURE_X flag implies another
ARM_FEATURE_Y, we allow the CPU init function to only set X, and then
set Y for it. Currently we do this in two places -- we set a few
flags in arm_cpu_post_init() because we need them to decide which
properties to create on the CPU object, and then we do the rest in
arm_cpu_realizefn(). However, this is fragile, because it's easy to
add a new property and not notice that this means that an X-implies-Y
check now has to move from realize to post-init.

As a specific example, the pmsav7-dregion property is conditional
on ARM_FEATURE_PMSA && ARM_FEATURE_V7, which means it won't appear
on the Cortex-M33 and -M55, because they set ARM_FEATURE_V8 and
rely on V8-implies-V7, which doesn't happen until the realizefn.

Move all of these X-implies-Y checks into a new function, which
we call at the top of arm_cpu_post_init(), so the feature bits
are available at that point.

This does now give us the reverse issue, that if there's a feature
bit which is enabled or disabled by the setting of a property then
X-implies-Y features that are dependent on that property need to
be in realize, not in this new function. But the only one of those
is the "EL3 implies VBAR" one, which is already in the right place, so
putting things this way round seems better to me.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230724174335.2150499-2-peter.maydell@linaro.org
---
 target/arm/cpu.c | 179 +++++++++++++++++++++++++----------------
 1 file changed, 97 insertions(+), 82 deletions(-)
3 files changed, 76 insertions(+), 55 deletions(-)
33
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
16
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
35
--- a/target/arm/cpu.c
18
+++ b/target/arm/neon-dp.decode
36
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@ unsigned int gt_cntfrq_period_ns(ARMCPU *cpu)
20
# For FP insns the high bit of 'size' is used as part of opcode decode
38
NANOSECONDS_PER_SECOND / cpu->gt_cntfrq_hz : 1;
21
@3same_fp .... ... . . . . size:1 .... .... .... . q:1 . . .... \
39
}
22
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
40
23
+@3same_fp_q0 .... ... . . . . size:1 .... .... .... . 0 . . .... \
41
+static void arm_cpu_propagate_feature_implications(ARMCPU *cpu)
24
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp q=0
25
26
VHADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 0 .... @3same
27
VHADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 0 .... @3same
28
@@ -XXX,XX +XXX,XX @@ VQRDMLSH_3s 1111 001 1 0 . .. .... .... 1100 ... 1 .... @3same
29
30
VADD_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 0 .... @3same_fp
31
VSUB_fp_3s 1111 001 0 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
32
+VPADD_fp_3s 1111 001 1 0 . 0 . .... .... 1101 ... 0 .... @3same_fp_q0
33
VABD_fp_3s 1111 001 1 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
34
+VPMAX_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 0 .... @3same_fp_q0
35
+VPMIN_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 0 .... @3same_fp_q0
36
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-neon.inc.c
39
+++ b/target/arm/translate-neon.inc.c
40
@@ -XXX,XX +XXX,XX @@ DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
41
DO_3S_FP_GVEC(VADD, gen_helper_gvec_fadd_s)
42
DO_3S_FP_GVEC(VSUB, gen_helper_gvec_fsub_s)
43
DO_3S_FP_GVEC(VABD, gen_helper_gvec_fabd_s)
44
+
45
+static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
46
+{
42
+{
47
+ /* FP operations handled pairwise 32 bits at a time */
43
+ CPUARMState *env = &cpu->env;
48
+ TCGv_i32 tmp, tmp2, tmp3;
44
+ bool no_aa32 = false;
49
+ TCGv_ptr fpstatus;
50
+
51
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
52
+ return false;
53
+ }
54
+
55
+ /* UNDEF accesses to D16-D31 if they don't exist. */
56
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
57
+ ((a->vd | a->vn | a->vm) & 0x10)) {
58
+ return false;
59
+ }
60
+
61
+ if (!vfp_access_check(s)) {
62
+ return true;
63
+ }
64
+
65
+ assert(a->q == 0); /* enforced by decode patterns */
66
+
45
+
67
+ /*
46
+ /*
68
+ * Note that we have to be careful not to clobber the source operands
47
+ * Some features automatically imply others: set the feature
69
+ * in the "vm == vd" case by storing the result of the first pass too
48
+ * bits explicitly for these cases.
70
+ * early. Since Q is 0 there are always just two passes, so instead
71
+ * of a complicated loop over each pass we just unroll.
72
+ */
49
+ */
73
+ fpstatus = get_fpstatus_ptr(1);
50
+
74
+ tmp = neon_load_reg(a->vn, 0);
51
+ if (arm_feature(env, ARM_FEATURE_M)) {
75
+ tmp2 = neon_load_reg(a->vn, 1);
52
+ set_feature(env, ARM_FEATURE_PMSA);
76
+ fn(tmp, tmp, tmp2, fpstatus);
53
+ }
77
+ tcg_temp_free_i32(tmp2);
54
+
78
+
55
+ if (arm_feature(env, ARM_FEATURE_V8)) {
79
+ tmp3 = neon_load_reg(a->vm, 0);
56
+ if (arm_feature(env, ARM_FEATURE_M)) {
80
+ tmp2 = neon_load_reg(a->vm, 1);
57
+ set_feature(env, ARM_FEATURE_V7);
81
+ fn(tmp3, tmp3, tmp2, fpstatus);
58
+ } else {
82
+ tcg_temp_free_i32(tmp2);
59
+ set_feature(env, ARM_FEATURE_V7VE);
83
+ tcg_temp_free_ptr(fpstatus);
60
+ }
84
+
61
+ }
85
+ neon_store_reg(a->vd, 0, tmp);
62
+
86
+ neon_store_reg(a->vd, 1, tmp3);
63
+ /*
87
+ return true;
64
+ * There exist AArch64 cpus without AArch32 support. When KVM
65
+ * queries ID_ISAR0_EL1 on such a host, the value is UNKNOWN.
66
+ * Similarly, we cannot check ID_AA64PFR0 without AArch64 support.
67
+ * As a general principle, we also do not make ID register
68
+ * consistency checks anywhere unless using TCG, because only
69
+ * for TCG would a consistency-check failure be a QEMU bug.
70
+ */
71
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
72
+ no_aa32 = !cpu_isar_feature(aa64_aa32, cpu);
73
+ }
74
+
75
+ if (arm_feature(env, ARM_FEATURE_V7VE)) {
76
+ /*
77
+ * v7 Virtualization Extensions. In real hardware this implies
78
+ * EL2 and also the presence of the Security Extensions.
79
+ * For QEMU, for backwards-compatibility we implement some
80
+ * CPUs or CPU configs which have no actual EL2 or EL3 but do
81
+ * include the various other features that V7VE implies.
82
+ * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
83
+ * Security Extensions is ARM_FEATURE_EL3.
84
+ */
85
+ assert(!tcg_enabled() || no_aa32 ||
86
+ cpu_isar_feature(aa32_arm_div, cpu));
87
+ set_feature(env, ARM_FEATURE_LPAE);
88
+ set_feature(env, ARM_FEATURE_V7);
89
+ }
90
+ if (arm_feature(env, ARM_FEATURE_V7)) {
91
+ set_feature(env, ARM_FEATURE_VAPA);
92
+ set_feature(env, ARM_FEATURE_THUMB2);
93
+ set_feature(env, ARM_FEATURE_MPIDR);
94
+ if (!arm_feature(env, ARM_FEATURE_M)) {
95
+ set_feature(env, ARM_FEATURE_V6K);
96
+ } else {
97
+ set_feature(env, ARM_FEATURE_V6);
98
+ }
99
+
100
+ /*
101
+ * Always define VBAR for V7 CPUs even if it doesn't exist in
102
+ * non-EL3 configs. This is needed by some legacy boards.
103
+ */
104
+ set_feature(env, ARM_FEATURE_VBAR);
105
+ }
106
+ if (arm_feature(env, ARM_FEATURE_V6K)) {
107
+ set_feature(env, ARM_FEATURE_V6);
108
+ set_feature(env, ARM_FEATURE_MVFR);
109
+ }
110
+ if (arm_feature(env, ARM_FEATURE_V6)) {
111
+ set_feature(env, ARM_FEATURE_V5);
112
+ if (!arm_feature(env, ARM_FEATURE_M)) {
113
+ assert(!tcg_enabled() || no_aa32 ||
114
+ cpu_isar_feature(aa32_jazelle, cpu));
115
+ set_feature(env, ARM_FEATURE_AUXCR);
116
+ }
117
+ }
118
+ if (arm_feature(env, ARM_FEATURE_V5)) {
119
+ set_feature(env, ARM_FEATURE_V4T);
120
+ }
121
+ if (arm_feature(env, ARM_FEATURE_LPAE)) {
122
+ set_feature(env, ARM_FEATURE_V7MP);
123
+ }
124
+ if (arm_feature(env, ARM_FEATURE_CBAR_RO)) {
125
+ set_feature(env, ARM_FEATURE_CBAR);
126
+ }
127
+ if (arm_feature(env, ARM_FEATURE_THUMB2) &&
128
+ !arm_feature(env, ARM_FEATURE_M)) {
129
+ set_feature(env, ARM_FEATURE_THUMB_DSP);
130
+ }
88
+}
131
+}
89
+
132
+
90
+/*
133
void arm_cpu_post_init(Object *obj)
91
+ * For all the functions using this macro, size == 1 means fp16,
134
{
92
+ * which is an architecture extension we don't implement yet.
135
ARMCPU *cpu = ARM_CPU(obj);
93
+ */
136
94
+#define DO_3S_FP_PAIR(INSN,FUNC) \
137
- /* M profile implies PMSA. We have to do this here rather than
95
+ static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
138
- * in realize with the other feature-implication checks because
96
+ { \
139
- * we look at the PMSA bit to see if we should add some properties.
97
+ if (a->size != 0) { \
140
+ /*
98
+ /* TODO fp16 support */ \
141
+ * Some features imply others. Figure this out now, because we
99
+ return false; \
142
+ * are going to look at the feature bits in deciding which
100
+ } \
143
+ * properties to add.
101
+ return do_3same_fp_pair(s, a, FUNC); \
144
*/
102
+ }
145
- if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
103
+
146
- set_feature(&cpu->env, ARM_FEATURE_PMSA);
104
+DO_3S_FP_PAIR(VPADD, gen_helper_vfp_adds)
147
- }
105
+DO_3S_FP_PAIR(VPMAX, gen_helper_vfp_maxs)
148
+ arm_cpu_propagate_feature_implications(cpu);
106
+DO_3S_FP_PAIR(VPMIN, gen_helper_vfp_mins)
149
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
150
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
108
index XXXXXXX..XXXXXXX 100644
151
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
109
--- a/target/arm/translate.c
152
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
110
+++ b/target/arm/translate.c
153
CPUARMState *env = &cpu->env;
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
154
int pagebits;
112
int shift;
155
Error *local_err = NULL;
113
int pass;
156
- bool no_aa32 = false;
114
int count;
157
115
- int pairwise;
158
/* Use pc-relative instructions in system-mode */
116
int u;
159
#ifndef CONFIG_USER_ONLY
117
int vec_size;
160
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
118
uint32_t imm;
161
cpu->isar.id_isar3 = u;
119
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
162
}
120
case NEON_3R_VPMIN:
163
121
case NEON_3R_VPADD_VQRDMLAH:
164
- /* Some features automatically imply others: */
122
case NEON_3R_VQDMULH_VQRDMULH:
165
- if (arm_feature(env, ARM_FEATURE_V8)) {
123
+ case NEON_3R_FLOAT_ARITH:
166
- if (arm_feature(env, ARM_FEATURE_M)) {
124
/* Already handled by decodetree */
167
- set_feature(env, ARM_FEATURE_V7);
125
return 1;
168
- } else {
126
}
169
- set_feature(env, ARM_FEATURE_V7VE);
127
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
170
- }
128
/* 64-bit element instructions: handled by decodetree */
171
- }
129
return 1;
172
-
130
}
173
- /*
131
- pairwise = 0;
174
- * There exist AArch64 cpus without AArch32 support. When KVM
132
switch (op) {
175
- * queries ID_ISAR0_EL1 on such a host, the value is UNKNOWN.
133
- case NEON_3R_FLOAT_ARITH:
176
- * Similarly, we cannot check ID_AA64PFR0 without AArch64 support.
134
- pairwise = (u && size < 2); /* if VPADD (float) */
177
- * As a general principle, we also do not make ID register
135
- if (!pairwise) {
178
- * consistency checks anywhere unless using TCG, because only
136
- return 1; /* handled by decodetree */
179
- * for TCG would a consistency-check failure be a QEMU bug.
137
- }
180
- */
138
- break;
181
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
139
case NEON_3R_FLOAT_MINMAX:
182
- no_aa32 = !cpu_isar_feature(aa64_aa32, cpu);
140
- pairwise = u; /* if VPMIN/VPMAX (float) */
183
- }
141
+ if (u) {
184
-
142
+ return 1; /* VPMIN/VPMAX handled by decodetree */
185
- if (arm_feature(env, ARM_FEATURE_V7VE)) {
143
+ }
186
- /* v7 Virtualization Extensions. In real hardware this implies
144
break;
187
- * EL2 and also the presence of the Security Extensions.
145
case NEON_3R_FLOAT_CMP:
188
- * For QEMU, for backwards-compatibility we implement some
146
if (!u && size) {
189
- * CPUs or CPU configs which have no actual EL2 or EL3 but do
147
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
190
- * include the various other features that V7VE implies.
148
break;
191
- * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
149
}
192
- * Security Extensions is ARM_FEATURE_EL3.
150
193
- */
151
- if (pairwise && q) {
194
- assert(!tcg_enabled() || no_aa32 ||
152
- /* All the pairwise insns UNDEF if Q is set */
195
- cpu_isar_feature(aa32_arm_div, cpu));
153
- return 1;
196
- set_feature(env, ARM_FEATURE_LPAE);
197
- set_feature(env, ARM_FEATURE_V7);
198
- }
199
- if (arm_feature(env, ARM_FEATURE_V7)) {
200
- set_feature(env, ARM_FEATURE_VAPA);
201
- set_feature(env, ARM_FEATURE_THUMB2);
202
- set_feature(env, ARM_FEATURE_MPIDR);
203
- if (!arm_feature(env, ARM_FEATURE_M)) {
204
- set_feature(env, ARM_FEATURE_V6K);
205
- } else {
206
- set_feature(env, ARM_FEATURE_V6);
154
- }
207
- }
155
-
208
-
156
for (pass = 0; pass < (q ? 4 : 2); pass++) {
209
- /* Always define VBAR for V7 CPUs even if it doesn't exist in
157
210
- * non-EL3 configs. This is needed by some legacy boards.
158
- if (pairwise) {
211
- */
159
- /* Pairwise. */
212
- set_feature(env, ARM_FEATURE_VBAR);
160
- if (pass < 1) {
213
- }
161
- tmp = neon_load_reg(rn, 0);
214
- if (arm_feature(env, ARM_FEATURE_V6K)) {
162
- tmp2 = neon_load_reg(rn, 1);
215
- set_feature(env, ARM_FEATURE_V6);
163
- } else {
216
- set_feature(env, ARM_FEATURE_MVFR);
164
- tmp = neon_load_reg(rm, 0);
217
- }
165
- tmp2 = neon_load_reg(rm, 1);
218
- if (arm_feature(env, ARM_FEATURE_V6)) {
166
- }
219
- set_feature(env, ARM_FEATURE_V5);
167
- } else {
220
- if (!arm_feature(env, ARM_FEATURE_M)) {
168
- /* Elementwise. */
221
- assert(!tcg_enabled() || no_aa32 ||
169
- tmp = neon_load_reg(rn, pass);
222
- cpu_isar_feature(aa32_jazelle, cpu));
170
- tmp2 = neon_load_reg(rm, pass);
223
- set_feature(env, ARM_FEATURE_AUXCR);
171
- }
224
- }
172
+ /* Elementwise. */
225
- }
173
+ tmp = neon_load_reg(rn, pass);
226
- if (arm_feature(env, ARM_FEATURE_V5)) {
174
+ tmp2 = neon_load_reg(rm, pass);
227
- set_feature(env, ARM_FEATURE_V4T);
175
switch (op) {
228
- }
176
- case NEON_3R_FLOAT_ARITH: /* Floating point arithmetic. */
229
- if (arm_feature(env, ARM_FEATURE_LPAE)) {
177
- {
230
- set_feature(env, ARM_FEATURE_V7MP);
178
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
231
- }
179
- switch ((u << 2) | size) {
232
- if (arm_feature(env, ARM_FEATURE_CBAR_RO)) {
180
- case 4: /* VPADD */
233
- set_feature(env, ARM_FEATURE_CBAR);
181
- gen_helper_vfp_adds(tmp, tmp, tmp2, fpstatus);
234
- }
182
- break;
235
- if (arm_feature(env, ARM_FEATURE_THUMB2) &&
183
- default:
236
- !arm_feature(env, ARM_FEATURE_M)) {
184
- abort();
237
- set_feature(env, ARM_FEATURE_THUMB_DSP);
185
- }
238
- }
186
- tcg_temp_free_ptr(fpstatus);
239
187
- break;
240
/*
188
- }
241
* We rely on no XScale CPU having VFP so we can use the same bits in the
189
case NEON_3R_FLOAT_MULTIPLY:
190
{
191
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
192
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
193
}
194
tcg_temp_free_i32(tmp2);
195
196
- /* Save the result. For elementwise operations we can put it
197
- straight into the destination register. For pairwise operations
198
- we have to be careful to avoid clobbering the source operands. */
199
- if (pairwise && rd == rm) {
200
- neon_store_scratch(pass, tmp);
201
- } else {
202
- neon_store_reg(rd, pass, tmp);
203
- }
204
+ neon_store_reg(rd, pass, tmp);
205
206
} /* for pass */
207
- if (pairwise && rd == rm) {
208
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
209
- tmp = neon_load_scratch(pass);
210
- neon_store_reg(rd, pass, tmp);
211
- }
212
- }
213
/* End of 3 register same size operations. */
214
} else if (insn & (1 << 4)) {
215
if ((insn & 0x00380080) != 0) {
216
--
242
--
217
2.20.1
243
2.34.1
218
219
Convert the VQSHL, VRSHL and VQRSHL insns in the 3-reg-same
group to decodetree. We have already implemented the size==0b11
case of these insns; this commit handles the remaining sizes.

M-profile CPUs generally allow configuration of the number of MPU
regions that they have. We don't currently model this, so our
implementations of some of the board models provide CPUs with the
wrong number of regions. RTOSes like Zephyr that hardcode the
expected number of regions may therefore not run on the model if they
are set up to run on real hardware.

Add properties mpu-ns-regions and mpu-s-regions to the ARMV7M object,
matching the ability of hardware to configure the number of Secure
and NonSecure regions separately. Our actual CPU implementation
doesn't currently support that, and it happens that none of the MPS
boards we model set the number of regions differently for Secure vs
NonSecure, so we provide an interface to the boards and SoCs that
won't need to change if we ever do add that functionality in future,
but make it an error to configure the two properties to different
values.

(The property name on the CPU is the somewhat misnamed-for-M-profile
"pmsav7-dregion", so we don't follow that naming convention for
the properties here. The TRM doesn't say what the CPU configuration
variable names are, so we pick something, and follow the lowercase
convention we already have for properties here.)
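A sketch of how a board or SoC model might use the new properties (the call
site, the 's->armv7m' field name and the region counts are hypothetical;
only the property names come from this patch):

    /* e.g. in an SoC realize function, before realizing the armv7m object */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "mpu-ns-regions", 16);
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "mpu-s-regions", 16);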
4
23
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Message-id: 20200512163904.10918-8-peter.maydell@linaro.org
26
Message-id: 20230724174335.2150499-3-peter.maydell@linaro.org
8
---
27
---
9
target/arm/neon-dp.decode | 30 ++++++++++++++++++-----
28
include/hw/arm/armv7m.h | 8 ++++++++
10
target/arm/translate-neon.inc.c | 43 +++++++++++++++++++++++++++++++++
29
hw/arm/armv7m.c | 21 +++++++++++++++++++++
11
target/arm/translate.c | 22 +++--------------
30
2 files changed, 29 insertions(+)
12
3 files changed, 70 insertions(+), 25 deletions(-)
13
31
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
32
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
15
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
34
--- a/include/hw/arm/armv7m.h
17
+++ b/target/arm/neon-dp.decode
35
+++ b/include/hw/arm/armv7m.h
18
@@ -XXX,XX +XXX,XX @@ VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(ARMv7MState, ARMV7M)
19
@3same_64_rev .... ... . . . 11 .... .... .... . q:1 . . .... \
37
* + Property "vfp": enable VFP (forwarded to CPU object)
20
&3same vm=%vn_dp vn=%vm_dp vd=%vd_dp size=3
38
* + Property "dsp": enable DSP (forwarded to CPU object)
21
39
* + Property "enable-bitband": expose bitbanded IO
22
-VQSHL_S64_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
40
+ * + Property "mpu-ns-regions": number of Non-Secure MPU regions (forwarded
23
-VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
41
+ * to CPU object pmsav7-dregion property; default is whatever the default
24
-VRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
42
+ * for the CPU is)
25
-VRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
43
+ * + Property "mpu-s-regions": number of Secure MPU regions (default is
26
-VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
44
+ * whatever the default for the CPU is; must currently be set to the same
27
-VQRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
45
+ * value as mpu-ns-regions if the CPU implements the Security Extension)
28
+{
46
* + Clock input "refclk" is the external reference clock for the systick timers
29
+ VQSHL_S64_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
47
* + Clock input "cpuclk" is the main CPU clock
30
+ VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
48
*/
31
+}
49
@@ -XXX,XX +XXX,XX @@ struct ARMv7MState {
32
+{
50
Object *idau;
33
+ VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
51
uint32_t init_svtor;
34
+ VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
52
uint32_t init_nsvtor;
35
+}
53
+ uint32_t mpu_ns_regions;
36
+{
54
+ uint32_t mpu_s_regions;
37
+ VRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
55
bool enable_bitband;
38
+ VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
56
bool start_powered_off;
39
+}
57
bool vfp;
40
+{
58
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
41
+ VRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
42
+ VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
43
+}
44
+{
45
+ VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
46
+ VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
47
+}
48
+{
49
+ VQRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
50
+ VQRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_rev
51
+}
52
53
VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
54
VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
55
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
56
index XXXXXXX..XXXXXXX 100644
59
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/translate-neon.inc.c
60
--- a/hw/arm/armv7m.c
58
+++ b/target/arm/translate-neon.inc.c
61
+++ b/hw/arm/armv7m.c
59
@@ -XXX,XX +XXX,XX @@ DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
62
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
60
return do_3same(s, a, gen_##INSN##_3s); \
63
}
61
}
64
}
62
65
63
+/*
66
+ /*
64
+ * Some helper functions need to be passed the cpu_env. In order
67
+ * Real M-profile hardware can be configured with a different number of
65
+ * to use those with the gvec APIs like tcg_gen_gvec_3() we need
68
+ * MPU regions for Secure vs NonSecure. QEMU's CPU implementation doesn't
66
+ * to create wrapper functions whose prototype is a NeonGenTwoOpFn()
69
+ * support that yet, so catch attempts to select that.
67
+ * and which call a NeonGenTwoOpEnvFn().
70
+ */
68
+ */
71
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
69
+#define WRAP_ENV_FN(WRAPNAME, FUNC) \
72
+ s->mpu_ns_regions != s->mpu_s_regions) {
70
+ static void WRAPNAME(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m) \
73
+ error_setg(errp,
71
+ { \
74
+ "mpu-ns-regions and mpu-s-regions properties must have the same value");
72
+ FUNC(d, cpu_env, n, m); \
75
+ return;
76
+ }
77
+ if (s->mpu_ns_regions != UINT_MAX &&
78
+ object_property_find(OBJECT(s->cpu), "pmsav7-dregion")) {
79
+ if (!object_property_set_uint(OBJECT(s->cpu), "pmsav7-dregion",
80
+ s->mpu_ns_regions, errp)) {
81
+ return;
82
+ }
73
+ }
83
+ }
74
+
84
+
75
+#define DO_3SAME_32_ENV(INSN, FUNC) \
85
/*
76
+ WRAP_ENV_FN(gen_##INSN##_tramp8, gen_helper_neon_##FUNC##8); \
86
* Tell the CPU where the NVIC is; it will fail realize if it doesn't
77
+ WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##16); \
87
* have one. Similarly, tell the NVIC where its CPU is.
78
+ WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##32); \
88
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
79
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
89
false),
80
+ uint32_t rn_ofs, uint32_t rm_ofs, \
90
DEFINE_PROP_BOOL("vfp", ARMv7MState, vfp, true),
81
+ uint32_t oprsz, uint32_t maxsz) \
91
DEFINE_PROP_BOOL("dsp", ARMv7MState, dsp, true),
82
+ { \
92
+ DEFINE_PROP_UINT32("mpu-ns-regions", ARMv7MState, mpu_ns_regions, UINT_MAX),
83
+ static const GVecGen3 ops[4] = { \
93
+ DEFINE_PROP_UINT32("mpu-s-regions", ARMv7MState, mpu_s_regions, UINT_MAX),
84
+ { .fni4 = gen_##INSN##_tramp8 }, \
94
DEFINE_PROP_END_OF_LIST(),
85
+ { .fni4 = gen_##INSN##_tramp16 }, \
95
};
86
+ { .fni4 = gen_##INSN##_tramp32 }, \
96
87
+ { 0 }, \
88
+ }; \
89
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
90
+ } \
91
+ static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
92
+ { \
93
+ if (a->size > 2) { \
94
+ return false; \
95
+ } \
96
+ return do_3same(s, a, gen_##INSN##_3s); \
97
+ }
98
+
99
DO_3SAME_32(VHADD_S, hadd_s)
100
DO_3SAME_32(VHADD_U, hadd_u)
101
DO_3SAME_32(VHSUB_S, hsub_s)
102
DO_3SAME_32(VHSUB_U, hsub_u)
103
DO_3SAME_32(VRHADD_S, rhadd_s)
104
DO_3SAME_32(VRHADD_U, rhadd_u)
105
+DO_3SAME_32(VRSHL_S, rshl_s)
106
+DO_3SAME_32(VRSHL_U, rshl_u)
107
+
108
+DO_3SAME_32_ENV(VQSHL_S, qshl_s)
109
+DO_3SAME_32_ENV(VQSHL_U, qshl_u)
110
+DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
111
+DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
112
diff --git a/target/arm/translate.c b/target/arm/translate.c
113
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/translate.c
115
+++ b/target/arm/translate.c
116
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
117
case NEON_3R_VHSUB:
118
case NEON_3R_VABD:
119
case NEON_3R_VABA:
120
+ case NEON_3R_VQSHL:
121
+ case NEON_3R_VRSHL:
122
+ case NEON_3R_VQRSHL:
123
/* Already handled by decodetree */
124
return 1;
125
}
126
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
127
}
128
pairwise = 0;
129
switch (op) {
130
- case NEON_3R_VQSHL:
131
- case NEON_3R_VRSHL:
132
- case NEON_3R_VQRSHL:
133
- {
134
- int rtmp;
135
- /* Shift instruction operands are reversed. */
136
- rtmp = rn;
137
- rn = rm;
138
- rm = rtmp;
139
- }
140
- break;
141
case NEON_3R_VPADD_VQRDMLAH:
142
case NEON_3R_VPMAX:
143
case NEON_3R_VPMIN:
144
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
145
tmp2 = neon_load_reg(rm, pass);
146
}
147
switch (op) {
148
- case NEON_3R_VQSHL:
149
- GEN_NEON_INTEGER_OP_ENV(qshl);
150
- break;
151
- case NEON_3R_VRSHL:
152
- GEN_NEON_INTEGER_OP(rshl);
153
- break;
154
- case NEON_3R_VQRSHL:
155
- GEN_NEON_INTEGER_OP_ENV(qrshl);
156
break;
157
case NEON_3R_VPMAX:
158
GEN_NEON_INTEGER_OP(pmax);
159
--
97
--
160
2.20.1
98
2.34.1
161
99
162
100
diff view generated by jsdifflib
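For illustration only (not part of the patch): the saturating shift helpers need the CPU env pointer so they can set the QC saturation flag, which is why the WRAP_ENV_FN/DO_3SAME_32_ENV macros in the translate-neon.inc.c hunk above exist. Hand-expanding DO_3SAME_32_ENV(VQSHL_S, qshl_s) gives roughly:

    static void gen_VQSHL_S_tramp8(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m)
    {
        gen_helper_neon_qshl_s8(d, cpu_env, n, m);
    }
    static void gen_VQSHL_S_tramp16(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m)
    {
        gen_helper_neon_qshl_s16(d, cpu_env, n, m);
    }
    static void gen_VQSHL_S_tramp32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m)
    {
        gen_helper_neon_qshl_s32(d, cpu_env, n, m);
    }

    static void gen_VQSHL_S_3s(unsigned vece, uint32_t rd_ofs,
                               uint32_t rn_ofs, uint32_t rm_ofs,
                               uint32_t oprsz, uint32_t maxsz)
    {
        static const GVecGen3 ops[4] = {
            { .fni4 = gen_VQSHL_S_tramp8 },
            { .fni4 = gen_VQSHL_S_tramp16 },
            { .fni4 = gen_VQSHL_S_tramp32 },
            { 0 },
        };
        tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]);
    }

    static bool trans_VQSHL_S_3s(DisasContext *s, arg_3same *a)
    {
        if (a->size > 2) {
            return false; /* size 0b11 is the 64-bit pattern in the same decode group */
        }
        return do_3same(s, a, gen_VQSHL_S_3s);
    }

The per-size trampolines are what let these env-taking helpers be driven through tcg_gen_gvec_3()'s .fni4 hooks.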
Deleted patch
1
Convert the Neon integer VPMAX and VPMIN 3-reg-same insns to
2
decodetree. These are 'pairwise' operations.
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200512163904.10918-9-peter.maydell@linaro.org
7
---
8
target/arm/neon-dp.decode | 9 +++++
9
target/arm/translate-neon.inc.c | 71 +++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 17 +-------
11
3 files changed, 82 insertions(+), 15 deletions(-)
12
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@
18
@3same .... ... . . . size:2 .... .... .... . q:1 . . .... \
19
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
20
21
+@3same_q0 .... ... . . . size:2 .... .... .... . 0 . . .... \
22
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp q=0
23
+
24
VHADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 0 .... @3same
25
VHADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 0 .... @3same
26
VQADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 1 .... @3same
27
@@ -XXX,XX +XXX,XX @@ VMLS_3s 1111 001 1 0 . .. .... .... 1001 . . . 0 .... @3same
28
VMUL_3s 1111 001 0 0 . .. .... .... 1001 . . . 1 .... @3same
29
VMUL_p_3s 1111 001 1 0 . .. .... .... 1001 . . . 1 .... @3same
30
31
+VPMAX_S_3s 1111 001 0 0 . .. .... .... 1010 . . . 0 .... @3same_q0
32
+VPMAX_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 0 .... @3same_q0
33
+
34
+VPMIN_S_3s 1111 001 0 0 . .. .... .... 1010 . . . 1 .... @3same_q0
35
+VPMIN_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 1 .... @3same_q0
36
+
37
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
38
39
SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
40
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/translate-neon.inc.c
43
+++ b/target/arm/translate-neon.inc.c
44
@@ -XXX,XX +XXX,XX @@ DO_3SAME_32_ENV(VQSHL_S, qshl_s)
45
DO_3SAME_32_ENV(VQSHL_U, qshl_u)
46
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
47
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
48
+
49
+static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
50
+{
51
+ /* Operations handled pairwise 32 bits at a time */
52
+ TCGv_i32 tmp, tmp2, tmp3;
53
+
54
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
55
+ return false;
56
+ }
57
+
58
+ /* UNDEF accesses to D16-D31 if they don't exist. */
59
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
60
+ ((a->vd | a->vn | a->vm) & 0x10)) {
61
+ return false;
62
+ }
63
+
64
+ if (a->size == 3) {
65
+ return false;
66
+ }
67
+
68
+ if (!vfp_access_check(s)) {
69
+ return true;
70
+ }
71
+
72
+ assert(a->q == 0); /* enforced by decode patterns */
73
+
74
+ /*
75
+ * Note that we have to be careful not to clobber the source operands
76
+ * in the "vm == vd" case by storing the result of the first pass too
77
+ * early. Since Q is 0 there are always just two passes, so instead
78
+ * of a complicated loop over each pass we just unroll.
79
+ */
80
+ tmp = neon_load_reg(a->vn, 0);
81
+ tmp2 = neon_load_reg(a->vn, 1);
82
+ fn(tmp, tmp, tmp2);
83
+ tcg_temp_free_i32(tmp2);
84
+
85
+ tmp3 = neon_load_reg(a->vm, 0);
86
+ tmp2 = neon_load_reg(a->vm, 1);
87
+ fn(tmp3, tmp3, tmp2);
88
+ tcg_temp_free_i32(tmp2);
89
+
90
+ neon_store_reg(a->vd, 0, tmp);
91
+ neon_store_reg(a->vd, 1, tmp3);
92
+ return true;
93
+}
94
+
95
+#define DO_3SAME_PAIR(INSN, func) \
96
+ static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
97
+ { \
98
+ static NeonGenTwoOpFn * const fns[] = { \
99
+ gen_helper_neon_##func##8, \
100
+ gen_helper_neon_##func##16, \
101
+ gen_helper_neon_##func##32, \
102
+ }; \
103
+ if (a->size > 2) { \
104
+ return false; \
105
+ } \
106
+ return do_3same_pair(s, a, fns[a->size]); \
107
+ }
108
+
109
+/* 32-bit pairwise ops end up the same as the elementwise versions. */
110
+#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
111
+#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
112
+#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
113
+#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
114
+
115
+DO_3SAME_PAIR(VPMAX_S, pmax_s)
116
+DO_3SAME_PAIR(VPMIN_S, pmin_s)
117
+DO_3SAME_PAIR(VPMAX_U, pmax_u)
118
+DO_3SAME_PAIR(VPMIN_U, pmin_u)
119
diff --git a/target/arm/translate.c b/target/arm/translate.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate.c
122
+++ b/target/arm/translate.c
123
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
124
}
125
}
126
127
-/* 32-bit pairwise ops end up the same as the elementwise versions. */
128
-#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
129
-#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
130
-#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
131
-#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
132
-
133
#define GEN_NEON_INTEGER_OP_ENV(name) do { \
134
switch ((size << 1) | u) { \
135
case 0: \
136
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
137
case NEON_3R_VQSHL:
138
case NEON_3R_VRSHL:
139
case NEON_3R_VQRSHL:
140
+ case NEON_3R_VPMAX:
141
+ case NEON_3R_VPMIN:
142
/* Already handled by decodetree */
143
return 1;
144
}
145
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
146
pairwise = 0;
147
switch (op) {
148
case NEON_3R_VPADD_VQRDMLAH:
149
- case NEON_3R_VPMAX:
150
- case NEON_3R_VPMIN:
151
pairwise = 1;
152
break;
153
case NEON_3R_FLOAT_ARITH:
154
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
155
tmp2 = neon_load_reg(rm, pass);
156
}
157
switch (op) {
158
- break;
159
- case NEON_3R_VPMAX:
160
- GEN_NEON_INTEGER_OP(pmax);
161
- break;
162
- case NEON_3R_VPMIN:
163
- GEN_NEON_INTEGER_OP(pmin);
164
- break;
165
case NEON_3R_VQDMULH_VQRDMULH: /* Multiply high. */
166
if (!u) { /* VQDMULH */
167
switch (size) {
168
--
169
2.20.1
170
171
diff view generated by jsdifflib
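For illustration only (not part of the patch): a minimal scalar sketch of the pairwise semantics that do_3same_pair() above implements for 32-bit lanes. Each result lane is produced from a single source register, which is why both inputs are loaded before anything is written back when vd aliases vm.

    #include <stdint.h>

    /* VPMAX.S32 Dd, Dn, Dm as a scalar sketch:
     *   Dd[0] = max(Dn[0], Dn[1]);
     *   Dd[1] = max(Dm[0], Dm[1]);
     * e.g. Dn = {3, 7}, Dm = {-1, 5}  ->  Dd = {7, 5}
     */
    static void vpmax_s32(int32_t dd[2], const int32_t dn[2], const int32_t dm[2])
    {
        int32_t lo = dn[0] > dn[1] ? dn[0] : dn[1];
        int32_t hi = dm[0] > dm[1] ? dm[0] : dm[1];
        dd[0] = lo;   /* compute both lanes before storing, so dd may alias dm */
        dd[1] = hi;
    }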
Deleted patch
1
Convert the Neon integer VPADD 3-reg-same insns to decodetree. These
2
are 'pairwise' operations. (Note that VQRDMLAH, which shares the
3
same primary opcode but has U=1, has already been converted.)
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200512163904.10918-10-peter.maydell@linaro.org
8
---
9
target/arm/neon-dp.decode | 2 ++
10
target/arm/translate-neon.inc.c | 2 ++
11
target/arm/translate.c | 19 +------------------
12
3 files changed, 5 insertions(+), 18 deletions(-)
13
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VPMAX_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 0 .... @3same_q0
19
VPMIN_S_3s 1111 001 0 0 . .. .... .... 1010 . . . 1 .... @3same_q0
20
VPMIN_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 1 .... @3same_q0
21
22
+VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0
23
+
24
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
25
26
SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
27
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/translate-neon.inc.c
30
+++ b/target/arm/translate-neon.inc.c
31
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
32
#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
33
#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
34
#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
35
+#define gen_helper_neon_padd_u32 tcg_gen_add_i32
36
37
DO_3SAME_PAIR(VPMAX_S, pmax_s)
38
DO_3SAME_PAIR(VPMIN_S, pmin_s)
39
DO_3SAME_PAIR(VPMAX_U, pmax_u)
40
DO_3SAME_PAIR(VPMIN_U, pmin_u)
41
+DO_3SAME_PAIR(VPADD, padd_u)
42
diff --git a/target/arm/translate.c b/target/arm/translate.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate.c
45
+++ b/target/arm/translate.c
46
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
47
return 1;
48
}
49
switch (op) {
50
- case NEON_3R_VPADD_VQRDMLAH:
51
- if (!u) {
52
- break; /* VPADD */
53
- }
54
- /* VQRDMLAH : handled by decodetree */
55
- return 1;
56
-
57
case NEON_3R_VFM_VQRDMLSH:
58
if (!u) {
59
/* VFM, VFMS */
60
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
61
case NEON_3R_VQRSHL:
62
case NEON_3R_VPMAX:
63
case NEON_3R_VPMIN:
64
+ case NEON_3R_VPADD_VQRDMLAH:
65
/* Already handled by decodetree */
66
return 1;
67
}
68
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
69
}
70
pairwise = 0;
71
switch (op) {
72
- case NEON_3R_VPADD_VQRDMLAH:
73
- pairwise = 1;
74
- break;
75
case NEON_3R_FLOAT_ARITH:
76
pairwise = (u && size < 2); /* if VPADD (float) */
77
break;
78
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
79
}
80
}
81
break;
82
- case NEON_3R_VPADD_VQRDMLAH:
83
- switch (size) {
84
- case 0: gen_helper_neon_padd_u8(tmp, tmp, tmp2); break;
85
- case 1: gen_helper_neon_padd_u16(tmp, tmp, tmp2); break;
86
- case 2: tcg_gen_add_i32(tmp, tmp, tmp2); break;
87
- default: abort();
88
- }
89
- break;
90
case NEON_3R_FLOAT_ARITH: /* Floating point arithmetic. */
91
{
92
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
93
--
94
2.20.1
95
96
diff view generated by jsdifflib
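For illustration only (not part of the patch): DO_3SAME_PAIR(VPADD, padd_u) expands to roughly the trans function below; the 32-bit entry resolves to plain tcg_gen_add_i32 via the #define added above, since 32-bit pairwise add is just ordinary addition on each lane pair.

    static bool trans_VPADD_3s(DisasContext *s, arg_3same *a)
    {
        static NeonGenTwoOpFn * const fns[] = {
            gen_helper_neon_padd_u8,
            gen_helper_neon_padd_u16,
            gen_helper_neon_padd_u32,   /* #defined to tcg_gen_add_i32 above */
        };
        if (a->size > 2) {
            return false;
        }
        return do_3same_pair(s, a, fns[a->size]);
    }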
Deleted patch
1
Convert the Neon VQDMULH and VQRDMULH 3-reg-same insns to
2
decodetree. These are the last integer operations in the
3
3-reg-same group.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200512163904.10918-11-peter.maydell@linaro.org
8
---
9
target/arm/neon-dp.decode | 3 +++
10
target/arm/translate-neon.inc.c | 24 ++++++++++++++++++++++++
11
target/arm/translate.c | 24 +-----------------------
12
3 files changed, 28 insertions(+), 23 deletions(-)
13
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VPMAX_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 0 .... @3same_q0
19
VPMIN_S_3s 1111 001 0 0 . .. .... .... 1010 . . . 1 .... @3same_q0
20
VPMIN_U_3s 1111 001 1 0 . .. .... .... 1010 . . . 1 .... @3same_q0
21
22
+VQDMULH_3s 1111 001 0 0 . .. .... .... 1011 . . . 0 .... @3same
23
+VQRDMULH_3s 1111 001 1 0 . .. .... .... 1011 . . . 0 .... @3same
24
+
25
VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0
26
27
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
28
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-neon.inc.c
31
+++ b/target/arm/translate-neon.inc.c
32
@@ -XXX,XX +XXX,XX @@ DO_3SAME_PAIR(VPMIN_S, pmin_s)
33
DO_3SAME_PAIR(VPMAX_U, pmax_u)
34
DO_3SAME_PAIR(VPMIN_U, pmin_u)
35
DO_3SAME_PAIR(VPADD, padd_u)
36
+
37
+#define DO_3SAME_VQDMULH(INSN, FUNC) \
38
+ WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
39
+ WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
40
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
41
+ uint32_t rn_ofs, uint32_t rm_ofs, \
42
+ uint32_t oprsz, uint32_t maxsz) \
43
+ { \
44
+ static const GVecGen3 ops[2] = { \
45
+ { .fni4 = gen_##INSN##_tramp16 }, \
46
+ { .fni4 = gen_##INSN##_tramp32 }, \
47
+ }; \
48
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece - 1]); \
49
+ } \
50
+ static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
51
+ { \
52
+ if (a->size != 1 && a->size != 2) { \
53
+ return false; \
54
+ } \
55
+ return do_3same(s, a, gen_##INSN##_3s); \
56
+ }
57
+
58
+DO_3SAME_VQDMULH(VQDMULH, qdmulh)
59
+DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
60
diff --git a/target/arm/translate.c b/target/arm/translate.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate.c
63
+++ b/target/arm/translate.c
64
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
65
case NEON_3R_VPMAX:
66
case NEON_3R_VPMIN:
67
case NEON_3R_VPADD_VQRDMLAH:
68
+ case NEON_3R_VQDMULH_VQRDMULH:
69
/* Already handled by decodetree */
70
return 1;
71
}
72
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
73
tmp2 = neon_load_reg(rm, pass);
74
}
75
switch (op) {
76
- case NEON_3R_VQDMULH_VQRDMULH: /* Multiply high. */
77
- if (!u) { /* VQDMULH */
78
- switch (size) {
79
- case 1:
80
- gen_helper_neon_qdmulh_s16(tmp, cpu_env, tmp, tmp2);
81
- break;
82
- case 2:
83
- gen_helper_neon_qdmulh_s32(tmp, cpu_env, tmp, tmp2);
84
- break;
85
- default: abort();
86
- }
87
- } else { /* VQRDMULH */
88
- switch (size) {
89
- case 1:
90
- gen_helper_neon_qrdmulh_s16(tmp, cpu_env, tmp, tmp2);
91
- break;
92
- case 2:
93
- gen_helper_neon_qrdmulh_s32(tmp, cpu_env, tmp, tmp2);
94
- break;
95
- default: abort();
96
- }
97
- }
98
- break;
99
case NEON_3R_FLOAT_ARITH: /* Floating point arithmetic. */
100
{
101
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
102
--
103
2.20.1
104
105
diff view generated by jsdifflib
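For illustration only (not part of the patch): the qdmulh/qrdmulh helpers wrapped here take cpu_env because saturation must set the QC flag. A rough scalar sketch of what one 16-bit lane computes (my reading of the VQDMULH/VQRDMULH definition, ignoring the QC side effect), which is also why only sizes 1 and 2 are valid and the macro indexes ops[vece - 1]:

    #include <stdbool.h>
    #include <stdint.h>

    static int16_t qdmulh_s16(int16_t a, int16_t b, bool round)
    {
        int64_t p = 2 * (int64_t)a * b;      /* doubling product */
        if (round) {
            p += 1 << 15;                    /* VQRDMULH adds a rounding constant */
        }
        p >>= 16;                            /* keep the high half */
        if (p > INT16_MAX) {
            p = INT16_MAX;                   /* saturates only for -32768 * -32768 */
        } else if (p < INT16_MIN) {
            p = INT16_MIN;
        }
        return (int16_t)p;
    }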
1
Convert the Neon VADD, VSUB, VABD 3-reg-same insns to decodetree.
1
The IoTKit, SSE200 and SSE300 all default to 8 MPU regions. The
2
We already have gvec helpers for addition and subtraction, but must
2
MPS2/MPS3 FPGA images don't override these except in the case of
3
add one for fabd.
3
AN547, which uses 16 MPU regions.
4
4
5
Define properties on the ARMSSE object for the MPU regions (using the
6
same names as the documented RTL configuration settings, and
7
following the pattern we already have for this device of using
8
all-caps names as the RTL does), and set them in the board code.
9
10
We don't actually need to override the default except on AN547,
11
but it's simpler code to have the board code set them always
12
rather than tracking which board subtypes want to set them to
13
a non-default value separately from what that value is.
14
15
The overall effect is that for mps2-an505, mps2-an521 and mps3-an524
16
we now correctly use 8 MPU regions, while mps3-an547 stays at its
17
current 16 regions.
18
19
It's possible some guest code wrongly depended on the previous
20
incorrectly modeled number of memory regions. (Such guest code
21
should ideally check the number of regions via the MPU_TYPE
22
register.) The old behaviour can be obtained with additional
23
-global arguments to QEMU:
24
25
For mps2-an521 and mps3-an524:
26
-global sse-200.CPU0_MPU_NS=16 -global sse-200.CPU0_MPU_S=16 -global sse-200.CPU1_MPU_NS=16 -global sse-200.CPU1_MPU_S=16
27
28
For mps2-an505:
29
-global sse-200.CPU0_MPU_NS=16 -global sse-200.CPU0_MPU_S=16
30
31
NB that the way the implementation allows this use of -global
32
is slightly fragile: if the board code explicitly sets the
33
properties on the sse-200 object, this overrides the -global
34
command line option. So we rely on:
35
- the boards that need fixing all happen to use the SSE defaults
36
- we can write the board code to only set the property if it
37
is different from the default, rather than having all boards
38
explicitly set the property
39
- the board that does need to use a non-default value happens
40
to need to set it to the same value (16) we previously used
41
This works, but there are some kinds of refactoring of the
42
mps2-tz.c code that would break the support for -global here.
43
44
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1772
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
46
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200512163904.10918-12-peter.maydell@linaro.org
47
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
48
Message-id: 20230724174335.2150499-4-peter.maydell@linaro.org
8
---
49
---
9
target/arm/helper.h | 3 ++-
50
include/hw/arm/armsse.h | 5 +++++
10
target/arm/neon-dp.decode | 8 ++++++++
51
hw/arm/armsse.c | 16 ++++++++++++++++
11
target/arm/neon_helper.c | 7 -------
52
hw/arm/mps2-tz.c | 29 +++++++++++++++++++++++++++++
12
target/arm/translate-neon.inc.c | 28 ++++++++++++++++++++++++++++
53
3 files changed, 50 insertions(+)
13
target/arm/translate.c | 10 +++-------
54
14
target/arm/vec_helper.c | 7 +++++++
55
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
15
6 files changed, 48 insertions(+), 15 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
57
--- a/include/hw/arm/armsse.h
20
+++ b/target/arm/helper.h
58
+++ b/include/hw/arm/armsse.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qneg_s16, TCG_CALL_NO_RWG, i32, env, i32)
59
@@ -XXX,XX +XXX,XX @@
22
DEF_HELPER_FLAGS_2(neon_qneg_s32, TCG_CALL_NO_RWG, i32, env, i32)
60
* (matching the hardware) is that for CPU0 in an IoTKit and CPU1 in an
23
DEF_HELPER_FLAGS_2(neon_qneg_s64, TCG_CALL_NO_RWG, i64, env, i64)
61
* SSE-200 both are present; CPU0 in an SSE-200 has neither.
24
62
* Since the IoTKit has only one CPU, it does not have the CPU1_* properties.
25
-DEF_HELPER_3(neon_abd_f32, i32, i32, i32, ptr)
63
+ * + QOM properties "CPU0_MPU_NS", "CPU0_MPU_S", "CPU1_MPU_NS" and "CPU1_MPU_S"
26
DEF_HELPER_3(neon_ceq_f32, i32, i32, i32, ptr)
64
+ * which set the number of MPU regions on the CPUs. If there is only one
27
DEF_HELPER_3(neon_cge_f32, i32, i32, i32, ptr)
65
+ * CPU the CPU1 properties are not present.
28
DEF_HELPER_3(neon_cgt_f32, i32, i32, i32, ptr)
66
* + Named GPIO inputs "EXP_IRQ" 0..n are the expansion interrupts for CPU 0,
29
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
67
* which are wired to its NVIC lines 32 .. n+32
30
DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
68
* + Named GPIO inputs "EXP_CPU1_IRQ" 0..n are the expansion interrupts for
31
DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
69
@@ -XXX,XX +XXX,XX @@ struct ARMSSE {
32
70
uint32_t exp_numirq;
33
+DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
71
uint32_t sram_addr_width;
72
uint32_t init_svtor;
73
+ uint32_t cpu_mpu_ns[SSE_MAX_CPUS];
74
+ uint32_t cpu_mpu_s[SSE_MAX_CPUS];
75
bool cpu_fpu[SSE_MAX_CPUS];
76
bool cpu_dsp[SSE_MAX_CPUS];
77
};
78
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/arm/armsse.c
81
+++ b/hw/arm/armsse.c
82
@@ -XXX,XX +XXX,XX @@ static Property iotkit_properties[] = {
83
DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
84
DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
85
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
86
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
87
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
88
DEFINE_PROP_END_OF_LIST()
89
};
90
91
@@ -XXX,XX +XXX,XX @@ static Property sse200_properties[] = {
92
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], false),
93
DEFINE_PROP_BOOL("CPU1_FPU", ARMSSE, cpu_fpu[1], true),
94
DEFINE_PROP_BOOL("CPU1_DSP", ARMSSE, cpu_dsp[1], true),
95
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
96
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
97
+ DEFINE_PROP_UINT32("CPU1_MPU_NS", ARMSSE, cpu_mpu_ns[1], 8),
98
+ DEFINE_PROP_UINT32("CPU1_MPU_S", ARMSSE, cpu_mpu_s[1], 8),
99
DEFINE_PROP_END_OF_LIST()
100
};
101
102
@@ -XXX,XX +XXX,XX @@ static Property sse300_properties[] = {
103
DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
104
DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
105
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
106
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
107
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
108
DEFINE_PROP_END_OF_LIST()
109
};
110
111
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
112
return;
113
}
114
}
115
+ if (!object_property_set_uint(cpuobj, "mpu-ns-regions",
116
+ s->cpu_mpu_ns[i], errp)) {
117
+ return;
118
+ }
119
+ if (!object_property_set_uint(cpuobj, "mpu-s-regions",
120
+ s->cpu_mpu_s[i], errp)) {
121
+ return;
122
+ }
123
124
if (i > 0) {
125
memory_region_add_subregion_overlap(&s->cpu_container[i], 0,
126
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/hw/arm/mps2-tz.c
129
+++ b/hw/arm/mps2-tz.c
130
@@ -XXX,XX +XXX,XX @@ struct MPS2TZMachineClass {
131
int uart_overflow_irq; /* number of the combined UART overflow IRQ */
132
uint32_t init_svtor; /* init-svtor setting for SSE */
133
uint32_t sram_addr_width; /* SRAM_ADDR_WIDTH setting for SSE */
134
+ uint32_t cpu0_mpu_ns; /* CPU0_MPU_NS setting for SSE */
135
+ uint32_t cpu0_mpu_s; /* CPU0_MPU_S setting for SSE */
136
+ uint32_t cpu1_mpu_ns; /* CPU1_MPU_NS setting for SSE */
137
+ uint32_t cpu1_mpu_s; /* CPU1_MPU_S setting for SSE */
138
const RAMInfo *raminfo;
139
const char *armsse_type;
140
uint32_t boot_ram_size; /* size of ram at address 0; 0 == find in raminfo */
141
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_TYPE(MPS2TZMachineState, MPS2TZMachineClass, MPS2TZ_MACHINE)
142
#define MPS3_DDR_SIZE (2 * GiB)
143
#endif
144
145
+/* For cpu{0,1}_mpu_{ns,s}, means "leave at SSE's default value" */
146
+#define MPU_REGION_DEFAULT UINT32_MAX
34
+
147
+
35
DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
148
static const uint32_t an505_oscclk[] = {
36
void, ptr, ptr, ptr, ptr, i32)
149
40000000,
37
DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
150
24580000,
38
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
151
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
39
index XXXXXXX..XXXXXXX 100644
152
OBJECT(system_memory), &error_abort);
40
--- a/target/arm/neon-dp.decode
153
qdev_prop_set_uint32(iotkitdev, "EXP_NUMIRQ", mmc->numirq);
41
+++ b/target/arm/neon-dp.decode
154
qdev_prop_set_uint32(iotkitdev, "init-svtor", mmc->init_svtor);
42
@@ -XXX,XX +XXX,XX @@
155
+ if (mmc->cpu0_mpu_ns != MPU_REGION_DEFAULT) {
43
@3same_q0 .... ... . . . size:2 .... .... .... . 0 . . .... \
156
+ qdev_prop_set_uint32(iotkitdev, "CPU0_MPU_NS", mmc->cpu0_mpu_ns);
44
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp q=0
157
+ }
45
158
+ if (mmc->cpu0_mpu_s != MPU_REGION_DEFAULT) {
46
+# For FP insns the high bit of 'size' is used as part of opcode decode
159
+ qdev_prop_set_uint32(iotkitdev, "CPU0_MPU_S", mmc->cpu0_mpu_s);
47
+@3same_fp .... ... . . . . size:1 .... .... .... . q:1 . . .... \
160
+ }
48
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
161
+ if (object_property_find(OBJECT(iotkitdev), "CPU1_MPU_NS")) {
162
+ if (mmc->cpu1_mpu_ns != MPU_REGION_DEFAULT) {
163
+ qdev_prop_set_uint32(iotkitdev, "CPU1_MPU_NS", mmc->cpu1_mpu_ns);
164
+ }
165
+ if (mmc->cpu1_mpu_s != MPU_REGION_DEFAULT) {
166
+ qdev_prop_set_uint32(iotkitdev, "CPU1_MPU_S", mmc->cpu1_mpu_s);
167
+ }
168
+ }
169
qdev_prop_set_uint32(iotkitdev, "SRAM_ADDR_WIDTH", mmc->sram_addr_width);
170
qdev_connect_clock_in(iotkitdev, "MAINCLK", mms->sysclk);
171
qdev_connect_clock_in(iotkitdev, "S32KCLK", mms->s32kclk);
172
@@ -XXX,XX +XXX,XX @@ static void mps2tz_class_init(ObjectClass *oc, void *data)
173
{
174
MachineClass *mc = MACHINE_CLASS(oc);
175
IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(oc);
176
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_CLASS(oc);
177
178
mc->init = mps2tz_common_init;
179
mc->reset = mps2_machine_reset;
180
iic->check = mps2_tz_idau_check;
49
+
181
+
50
VHADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 0 .... @3same
182
+ /* Most machines leave these at the SSE defaults */
51
VHADD_U_3s 1111 001 1 0 . .. .... .... 0000 . . . 0 .... @3same
183
+ mmc->cpu0_mpu_ns = MPU_REGION_DEFAULT;
52
VQADD_S_3s 1111 001 0 0 . .. .... .... 0000 . . . 1 .... @3same
184
+ mmc->cpu0_mpu_s = MPU_REGION_DEFAULT;
53
@@ -XXX,XX +XXX,XX @@ SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
185
+ mmc->cpu1_mpu_ns = MPU_REGION_DEFAULT;
54
vm=%vm_dp vn=%vn_dp vd=%vd_dp
186
+ mmc->cpu1_mpu_s = MPU_REGION_DEFAULT;
55
56
VQRDMLSH_3s 1111 001 1 0 . .. .... .... 1100 ... 1 .... @3same
57
+
58
+VADD_fp_3s 1111 001 0 0 . 0 . .... .... 1101 ... 0 .... @3same_fp
59
+VSUB_fp_3s 1111 001 0 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
60
+VABD_fp_3s 1111 001 1 0 . 1 . .... .... 1101 ... 0 .... @3same_fp
61
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/neon_helper.c
64
+++ b/target/arm/neon_helper.c
65
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_qneg_s64)(CPUARMState *env, uint64_t x)
66
}
187
}
67
188
68
/* NEON Float helpers. */
189
static void mps2tz_set_default_ram_info(MPS2TZMachineClass *mmc)
69
-uint32_t HELPER(neon_abd_f32)(uint32_t a, uint32_t b, void *fpstp)
190
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an547_class_init(ObjectClass *oc, void *data)
70
-{
191
mmc->numirq = 96;
71
- float_status *fpst = fpstp;
192
mmc->uart_overflow_irq = 48;
72
- float32 f0 = make_float32(a);
193
mmc->init_svtor = 0x00000000;
73
- float32 f1 = make_float32(b);
194
+ mmc->cpu0_mpu_s = mmc->cpu0_mpu_ns = 16;
74
- return float32_val(float32_abs(float32_sub(f0, f1, fpst)));
195
mmc->sram_addr_width = 21;
75
-}
196
mmc->raminfo = an547_raminfo;
76
197
mmc->armsse_type = TYPE_SSE300;
77
/* Floating point comparisons produce an integer result.
78
* Note that EQ doesn't signal InvalidOp for QNaNs but GE and GT do.
79
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-neon.inc.c
82
+++ b/target/arm/translate-neon.inc.c
83
@@ -XXX,XX +XXX,XX @@ DO_3SAME_PAIR(VPADD, padd_u)
84
85
DO_3SAME_VQDMULH(VQDMULH, qdmulh)
86
DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
87
+
88
+/*
89
+ * For all the functions using this macro, size == 1 means fp16,
90
+ * which is an architecture extension we don't implement yet.
91
+ */
92
+#define DO_3S_FP_GVEC(INSN,FUNC) \
93
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
94
+ uint32_t rn_ofs, uint32_t rm_ofs, \
95
+ uint32_t oprsz, uint32_t maxsz) \
96
+ { \
97
+ TCGv_ptr fpst = get_fpstatus_ptr(1); \
98
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpst, \
99
+ oprsz, maxsz, 0, FUNC); \
100
+ tcg_temp_free_ptr(fpst); \
101
+ } \
102
+ static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
103
+ { \
104
+ if (a->size != 0) { \
105
+ /* TODO fp16 support */ \
106
+ return false; \
107
+ } \
108
+ return do_3same(s, a, gen_##INSN##_3s); \
109
+ }
110
+
111
+
112
+DO_3S_FP_GVEC(VADD, gen_helper_gvec_fadd_s)
113
+DO_3S_FP_GVEC(VSUB, gen_helper_gvec_fsub_s)
114
+DO_3S_FP_GVEC(VABD, gen_helper_gvec_fabd_s)
115
diff --git a/target/arm/translate.c b/target/arm/translate.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/translate.c
118
+++ b/target/arm/translate.c
119
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
120
switch (op) {
121
case NEON_3R_FLOAT_ARITH:
122
pairwise = (u && size < 2); /* if VPADD (float) */
123
+ if (!pairwise) {
124
+ return 1; /* handled by decodetree */
125
+ }
126
break;
127
case NEON_3R_FLOAT_MINMAX:
128
pairwise = u; /* if VPMIN/VPMAX (float) */
129
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
130
{
131
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
132
switch ((u << 2) | size) {
133
- case 0: /* VADD */
134
case 4: /* VPADD */
135
gen_helper_vfp_adds(tmp, tmp, tmp2, fpstatus);
136
break;
137
- case 2: /* VSUB */
138
- gen_helper_vfp_subs(tmp, tmp, tmp2, fpstatus);
139
- break;
140
- case 6: /* VABD */
141
- gen_helper_neon_abd_f32(tmp, tmp, tmp2, fpstatus);
142
- break;
143
default:
144
abort();
145
}
146
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/vec_helper.c
149
+++ b/target/arm/vec_helper.c
150
@@ -XXX,XX +XXX,XX @@ static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
151
return result;
152
}
153
154
+static float32 float32_abd(float32 op1, float32 op2, float_status *stat)
155
+{
156
+ return float32_abs(float32_sub(op1, op2, stat));
157
+}
158
+
159
#define DO_3OP(NAME, FUNC, TYPE) \
160
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
161
{ \
162
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
163
DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
164
DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
165
166
+DO_3OP(gvec_fabd_s, float32_abd, float32)
167
+
168
#ifdef TARGET_AARCH64
169
170
DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
171
--
198
--
172
2.20.1
199
2.34.1
173
200
174
201
diff view generated by jsdifflib
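For illustration only (not part of the patch): a minimal bare-metal sketch of the MPU_TYPE check the commit message recommends, so guest code discovers the region count instead of hardcoding it. This assumes the architectural v7-M/v8-M register layout and is guest-side code, not QEMU code.

    #include <stdint.h>

    #define MPU_TYPE (*(volatile uint32_t *)0xE000ED90)

    static inline unsigned mpu_num_regions(void)
    {
        return (MPU_TYPE >> 8) & 0xff;   /* DREGION field: number of MPU regions */
    }

On the corrected an505/an521/an524 models this now reads back 8, while mps3-an547 still reports 16.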