target-arm queue: the big stuff here is the final part of
rth's patches for Cortex-A76 and Neoverse-N1 support;
also present are Gavin's NUMA series and a few other things.

thanks
-- PMM

The following changes since commit 2702c2d3eb74e3908c0c5dbf3a71c8987595a86e:

  Merge tag 'qemu-sparc-20220508' of https://github.com/mcayland/qemu into staging (2022-05-08 17:03:26 -0500)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220509

for you to fetch changes up to ae9141d4a3265553503bf07d3574b40f84615a34:

  hw/acpi/aml-build: Use existing CPU topology to build PPTT table (2022-05-09 11:47:55 +0100)

----------------------------------------------------------------
target-arm queue:
 * MAINTAINERS/.mailmap: update email for Leif Lindholm
 * hw/arm: add version information to sbsa-ref machine DT
 * Enable new features for -cpu max:
   FEAT_Debugv8p2, FEAT_Debugv8p4, FEAT_RAS (minimal version only),
   FEAT_IESB, FEAT_CSV2, FEAT_CSV2_2, FEAT_CSV3, FEAT_DGH
 * Emulate Cortex-A76
 * Emulate Neoverse-N1
 * Fix the virt board default NUMA topology

----------------------------------------------------------------
Gavin Shan (6):
      qapi/machine.json: Add cluster-id
      qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
      hw/arm/virt: Consider SMP configuration in CPU topology
      qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
      hw/arm/virt: Fix CPU's default NUMA node ID
      hw/acpi/aml-build: Use existing CPU topology to build PPTT table

Leif Lindholm (2):
      MAINTAINERS/.mailmap: update email for Leif Lindholm
      hw/arm: add versioning to sbsa-ref machine DT

Richard Henderson (24):
      target/arm: Handle cpreg registration for missing EL
      target/arm: Drop EL3 no EL2 fallbacks
      target/arm: Merge zcr reginfo
      target/arm: Adjust definition of CONTEXTIDR_EL2
      target/arm: Move cortex impdef sysregs to cpu_tcg.c
      target/arm: Update qemu-system-arm -cpu max to cortex-a57
      target/arm: Set ID_DFR0.PerfMon for qemu-system-arm -cpu max
      target/arm: Split out aa32_max_features
      target/arm: Annotate arm_max_initfn with FEAT identifiers
      target/arm: Use field names for manipulating EL2 and EL3 modes
      target/arm: Enable FEAT_Debugv8p2 for -cpu max
      target/arm: Enable FEAT_Debugv8p4 for -cpu max
      target/arm: Add minimal RAS registers
      target/arm: Enable SCR and HCR bits for RAS
      target/arm: Implement virtual SError exceptions
      target/arm: Implement ESB instruction
      target/arm: Enable FEAT_RAS for -cpu max
      target/arm: Enable FEAT_IESB for -cpu max
      target/arm: Enable FEAT_CSV2 for -cpu max
      target/arm: Enable FEAT_CSV2_2 for -cpu max
      target/arm: Enable FEAT_CSV3 for -cpu max
      target/arm: Enable FEAT_DGH for -cpu max
      target/arm: Define cortex-a76
      target/arm: Define neoverse-n1

 docs/system/arm/emulation.rst |  10 +
 docs/system/arm/virt.rst      |   2 +
 qapi/machine.json             |   6 +-
 target/arm/cpregs.h           |  11 +
 target/arm/cpu.h              |  23 ++
 target/arm/helper.h           |   1 +
 target/arm/internals.h        |  16 ++
 target/arm/syndrome.h         |   5 +
 target/arm/a32.decode         |  16 +-
 target/arm/t32.decode         |  18 +-
 hw/acpi/aml-build.c           | 111 ++++----
 hw/arm/sbsa-ref.c             |  16 ++
 hw/arm/virt.c                 |  21 +-
 hw/core/machine-hmp-cmds.c    |   4 +
 hw/core/machine.c             |  16 ++
 target/arm/cpu.c              |  66 ++++-
 target/arm/cpu64.c            | 353 ++++++++++++++-----------
 target/arm/cpu_tcg.c          | 227 +++++++++++-----
 target/arm/helper.c           | 600 +++++++++++++++++++++++++-----------------
 target/arm/op_helper.c        |  43 +++
 target/arm/translate-a64.c    |  18 ++
 target/arm/translate.c        |  23 ++
 tests/qtest/numa-test.c       |  19 +-
 .mailmap                      |   3 +-
 MAINTAINERS                   |   2 +-
 25 files changed, 1068 insertions(+), 562 deletions(-)

From: Leif Lindholm <quic_llindhol@quicinc.com>

NUVIA was acquired by Qualcomm in March 2021, but kept functioning on
separate infrastructure for a transitional period. We've now switched
over to contributing as Qualcomm Innovation Center (quicinc), so update
my email address to reflect this.

Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20220505113740.75565-1-quic_llindhol@quicinc.com
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[Fixed commit message typo]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 .mailmap    | 3 ++-
 MAINTAINERS | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
-Leif Lindholm <leif@nuviainc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
+Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
 Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
 Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
 Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
 SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <leif@nuviainc.com>
+R: Leif Lindholm <quic_llindhol@quicinc.com>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/sbsa-ref.c
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

More gracefully handle cpregs when EL2 and/or EL3 are missing.
If the reg is entirely inaccessible, do not register it at all.
If the reg is for EL2, and EL3 is present but EL2 is not,
either discard, squash to res0, const, or keep unchanged.

Per rule RJFFP, mark the 4 aarch32 hypervisor access registers
with ARM_CP_EL3_NO_EL2_KEEP, and mark all of the EL2 address
translation and tlb invalidation "regs" ARM_CP_EL3_NO_EL2_UNDEF.
Mark the 2 virtualization processor id regs ARM_CP_EL3_NO_EL2_C_NZ.

This will simplify cpreg registration for conditional arm features.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h |  11 +++
 target/arm/helper.c | 178 ++++++++++++++++++++++++++++++--------------
 2 files changed, 133 insertions(+), 56 deletions(-)
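[As an illustration of how the new flags are meant to be used: the
register below is hypothetical, invented for this note; only the flag
names and their semantics come from the patch that follows.]

    /*
     * Hypothetical EL2-only register, for illustration only.
     * When EL3 is implemented without EL2:
     *   - ARM_CP_EL3_NO_EL2_KEEP registers it unchanged;
     *   - ARM_CP_EL3_NO_EL2_UNDEF would instead drop it entirely;
     *   - ARM_CP_EL3_NO_EL2_C_NZ would make it constant but keep
     *     its resetvalue;
     *   - with none of the flags it is squashed to constant 0 (RES0).
     */
    static const ARMCPRegInfo example_el2_reg = {
        .name = "EXAMPLE_EL2", .state = ARM_CP_STATE_AA64,
        .opc0 = 3, .opc1 = 4, .crn = 11, .crm = 0, .opc2 = 0,
        .access = PL2_RW,
        .type = ARM_CP_CONST | ARM_CP_EL3_NO_EL2_KEEP,
        .resetvalue = 0,
    };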
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_SVE                   = 1 << 14,
     /* Flag: Do not expose in gdb sysreg xml. */
     ARM_CP_NO_GDB                = 1 << 15,
+    /*
+     * Flags: If EL3 but not EL2...
+     * - UNDEF: discard the cpreg,
+     * - KEEP: retain the cpreg as is,
+     * - C_NZ: set const on the cpreg, but retain resetvalue,
+     * - else: set const on the cpreg, zero resetvalue, aka RES0.
+     * See rule RJFFP in section D1.1.3 of DDI0487H.a.
+     */
+    ARM_CP_EL3_NO_EL2_UNDEF      = 1 << 16,
+    ARM_CP_EL3_NO_EL2_KEEP       = 1 << 17,
+    ARM_CP_EL3_NO_EL2_C_NZ       = 1 << 18,
 };
 
 /*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write },
     { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_ALIAS | ARM_CP_FPU,
+      .access = PL2_RW,
+      .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
       .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
     { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .resetvalue = 0,
+      .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
       .writefn = dacr_write, .raw_writefn = raw_write,
       .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
     { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
-      .access = PL2_RW, .resetvalue = 0,
+      .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
       .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
     { .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64,
       .type = ARM_CP_ALIAS,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .writefn = tlbimva_hyp_is_write },
     { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_alle2_write },
     { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2_write },
     { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2_write },
     { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_alle2is_write },
     { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
-      .type = ARM_CP_NO_RAW, .access = PL2_W,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
     { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
 #ifndef CONFIG_USER_ONLY
     /* Unlike the other EL2-related AT operations, these must
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
       .access = PL2_W, .accessfn = at_s1e2_access,
-      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+      .writefn = ats_write64 },
     { .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
       .access = PL2_W, .accessfn = at_s1e2_access,
-      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
+      .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
+      .writefn = ats_write64 },
     /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
      * if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
      * with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
     { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
       .access = PL2_RW, .accessfn = access_tda,
-      .type = ARM_CP_NOP },
+      .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
     /* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
      * Channel but Linux may try to access this register. The 32-bit
      * alias is DBGDCCINT.
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
       .access = PL2_W, .type = ARM_CP_NOP },
    { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
       .access = PL2_W, .type = ARM_CP_NOP },
    { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2is_write },
    { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2_write },
    { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_rvae2_write },
    { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
       .writefn = tlbi_aa64_vae1is_write },
    { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_alle2is_write },
    { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
    { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
       .writefn = tlbi_aa64_alle1is_write },
    { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
-      .access = PL2_W, .type = ARM_CP_NO_RAW,
+      .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
       .writefn = tlbi_aa64_vae2is_write },
    { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "VPIDR", .state = ARM_CP_STATE_AA32,
               .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
               .access = PL2_RW, .accessfn = access_el3_aa32ns,
-              .resetvalue = cpu->midr, .type = ARM_CP_ALIAS,
+              .resetvalue = cpu->midr,
+              .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) },
             { .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
               .access = PL2_RW, .resetvalue = cpu->midr,
+              .type = ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
             { .name = "VMPIDR", .state = ARM_CP_STATE_AA32,
               .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
               .access = PL2_RW, .accessfn = access_el3_aa32ns,
-              .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS,
+              .resetvalue = vmpidr_def,
+              .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) },
             { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
-              .access = PL2_RW,
-              .resetvalue = vmpidr_def,
+              .access = PL2_RW, .resetvalue = vmpidr_def,
+              .type = ARM_CP_EL3_NO_EL2_C_NZ,
               .fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
         };
         define_arm_cp_regs(cpu, vpidr_regs);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
                                    int crm, int opc1, int opc2,
                                    const char *name)
 {
+    CPUARMState *env = &cpu->env;
     uint32_t key;
     ARMCPRegInfo *r2;
     bool is64 = r->type & ARM_CP_64BIT;
     bool ns = secstate & ARM_CP_SECSTATE_NS;
     int cp = r->cp;
-    bool isbanked;
     size_t name_len;
+    bool make_const;
 
     switch (state) {
     case ARM_CP_STATE_AA32:
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         }
     }
 
+    /*
+     * Eliminate registers that are not present because the EL is missing.
+     * Doing this here makes it easier to put all registers for a given
+     * feature into the same ARMCPRegInfo array and define them all at once.
+     */
+    make_const = false;
+    if (arm_feature(env, ARM_FEATURE_EL3)) {
+        /*
+         * An EL2 register without EL2 but with EL3 is (usually) RES0.
+         * See rule RJFFP in section D1.1.3 of DDI0487H.a.
+         */
+        int min_el = ctz32(r->access) / 2;
+        if (min_el == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
+            if (r->type & ARM_CP_EL3_NO_EL2_UNDEF) {
+                return;
+            }
+            make_const = !(r->type & ARM_CP_EL3_NO_EL2_KEEP);
+        }
+    } else {
+        CPAccessRights max_el = (arm_feature(env, ARM_FEATURE_EL2)
+                                 ? PL2_RW : PL1_RW);
+        if ((r->access & max_el) == 0) {
+            return;
+        }
+    }
+
     /* Combine cpreg and name into one allocation. */
     name_len = strlen(name) + 1;
     r2 = g_malloc(sizeof(*r2) + name_len);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         r2->opaque = opaque;
     }
 
-    isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
-    if (isbanked) {
+    if (make_const) {
+        /* This should not have been a very special register to begin. */
+        int old_special = r2->type & ARM_CP_SPECIAL_MASK;
+        assert(old_special == 0 || old_special == ARM_CP_NOP);
         /*
-         * Register is banked (using both entries in array).
-         * Overwriting fieldoffset as the array is only used to define
-         * banked registers but later only fieldoffset is used.
+         * Set the special function to CONST, retaining the other flags.
+         * This is important for e.g. ARM_CP_SVE so that we still
+         * take the SVE trap if CPTR_EL3.EZ == 0.
          */
-        r2->fieldoffset = r->bank_fieldoffsets[ns];
-    }
+        r2->type = (r2->type & ~ARM_CP_SPECIAL_MASK) | ARM_CP_CONST;
+        /*
+         * Usually, these registers become RES0, but there are a few
+         * special cases like VPIDR_EL2 which have a constant non-zero
+         * value with writes ignored.
+         */
+        if (!(r->type & ARM_CP_EL3_NO_EL2_C_NZ)) {
+            r2->resetvalue = 0;
+        }
+        /*
+         * ARM_CP_CONST has precedence, so removing the callbacks and
+         * offsets are not strictly necessary, but it is potentially
+         * less confusing to debug later.
+         */
+        r2->readfn = NULL;
+        r2->writefn = NULL;
+        r2->raw_readfn = NULL;
+        r2->raw_writefn = NULL;
+        r2->resetfn = NULL;
+        r2->fieldoffset = 0;
+        r2->bank_fieldoffsets[0] = 0;
+        r2->bank_fieldoffsets[1] = 0;
+    } else {
+        bool isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
 
-    if (state == ARM_CP_STATE_AA32) {
         if (isbanked) {
             /*
-             * If the register is banked then we don't need to migrate or
-             * reset the 32-bit instance in certain cases:
-             *
-             * 1) If the register has both 32-bit and 64-bit instances then we
-             *    can count on the 64-bit instance taking care of the
-             *    non-secure bank.
-             * 2) If ARMv8 is enabled then we can count on a 64-bit version
-             *    taking care of the secure bank. This requires that separate
-             *    32 and 64-bit definitions are provided.
+             * Register is banked (using both entries in array).
+             * Overwriting fieldoffset as the array is only used to define
+             * banked registers but later only fieldoffset is used.
              */
-            if ((r->state == ARM_CP_STATE_BOTH && ns) ||
-                (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) {
+            r2->fieldoffset = r->bank_fieldoffsets[ns];
+        }
+        if (state == ARM_CP_STATE_AA32) {
+            if (isbanked) {
+                /*
+                 * If the register is banked then we don't need to migrate or
+                 * reset the 32-bit instance in certain cases:
+                 *
+                 * 1) If the register has both 32-bit and 64-bit instances
+                 *    then we can count on the 64-bit instance taking care
+                 *    of the non-secure bank.
+                 * 2) If ARMv8 is enabled then we can count on a 64-bit
+                 *    version taking care of the secure bank. This requires
+                 *    that separate 32 and 64-bit definitions are provided.
+                 */
+                if ((r->state == ARM_CP_STATE_BOTH && ns) ||
+                    (arm_feature(env, ARM_FEATURE_V8) && !ns)) {
+                    r2->type |= ARM_CP_ALIAS;
+                }
+            } else if ((secstate != r->secure) && !ns) {
+                /*
+                 * The register is not banked so we only want to allow
+                 * migration of the non-secure instance.
+                 */
                 r2->type |= ARM_CP_ALIAS;
             }
-        } else if ((secstate != r->secure) && !ns) {
-            /*
-             * The register is not banked so we only want to allow migration
-             * of the non-secure instance.
-             */
-            r2->type |= ARM_CP_ALIAS;
-        }
 
-        if (HOST_BIG_ENDIAN &&
-            r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
-            r2->fieldoffset += sizeof(uint32_t);
+            if (HOST_BIG_ENDIAN &&
+                r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
+                r2->fieldoffset += sizeof(uint32_t);
+            }
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      * multiple times. Special registers (ie NOP/WFI) are
      * never migratable and not even raw-accessible.
      */
-    if (r->type & ARM_CP_SPECIAL_MASK) {
+    if (r2->type & ARM_CP_SPECIAL_MASK) {
         r2->type |= ARM_CP_NO_RAW;
     }
     if (((r->crm == CP_ANY) && crm != 0) ||
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Drop el3_no_el2_cp_reginfo, el3_no_el2_v8_cp_reginfo, and the local
vpidr_regs definition, and rely on the squashing to ARM_CP_CONST
while registering for v8.

This is a behavior change for v7 cpus with Security Extensions and
without Virtualization Extensions, in that the virtualization cpregs
are now correctly not present. This would be a migration compatibility
break, except that we have an existing bug in which migration of 32-bit
cpus with Security Extensions enabled does not work.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 158 ++++----------------------------------------
 1 file changed, 13 insertions(+), 145 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
 };
 
-/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
-static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
-    { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0,
-      .access = PL2_RW,
-      .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
-    { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTTBR", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 6, .crm = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .accessfn = access_tda,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HIFAR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_CONST,
-      .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .resetvalue = 0 },
-};
-
-/* Ditto, but for registers which exist in ARMv8 but not v7 */
-static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
-    { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-};
-
 static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
 {
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, v8_idregs);
         define_arm_cp_regs(cpu, v8_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_EL2)) {
+
+    /*
+     * Register the base EL2 cpregs.
+     * Pre v8, these registers are implemented only as part of the
+     * Virtualization Extensions (EL2 present). Beginning with v8,
+     * if EL2 is missing but EL3 is enabled, mostly these become
+     * RES0 from EL3, with some specific exceptions.
+     */
+    if (arm_feature(env, ARM_FEATURE_EL2)
+        || (arm_feature(env, ARM_FEATURE_EL3)
+            && arm_feature(env, ARM_FEATURE_V8))) {
         uint64_t vmpidr_def = mpidr_read_val(env);
         ARMCPRegInfo vpidr_regs[] = {
             { .name = "VPIDR", .state = ARM_CP_STATE_AA32,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         };
         define_one_arm_cp_reg(cpu, &rvbar);
     }
-    } else {
-        /* If EL2 is missing but higher ELs are enabled, we need to
-         * register the no_el2 reginfos.
-         */
-        if (arm_feature(env, ARM_FEATURE_EL3)) {
-            /* When EL3 exists but not EL2, VPIDR and VMPIDR take the value
-             * of MIDR_EL1 and MPIDR_EL1.
-             */
-            ARMCPRegInfo vpidr_regs[] = {
-                { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_CONST, .resetvalue = cpu->midr,
-                  .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
-                { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_NO_RAW,
-                  .writefn = arm_cp_write_ignore, .readfn = mpidr_read },
-            };
-            define_arm_cp_regs(cpu, vpidr_regs);
-            define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
-            if (arm_feature(env, ARM_FEATURE_V8)) {
-                define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo);
-            }
-        }
-    }
+
+    /* Register the base EL3 cpregs. */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         define_arm_cp_regs(cpu, el3_cp_reginfo);
         ARMCPRegInfo el3_regs[] = {
--
2.25.1

1
Convert the sh7750 device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps. This device is used by the sh4 r2d board.
3
2
3
Drop zcr_no_el2_reginfo and merge the 3 registers into one array,
4
now that ZCR_EL2 can be squashed to RES0 and ZCR_EL3 dropped
while registering.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 55 ++++++++++++++-------------------------
 1 file changed, 17 insertions(+), 38 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }
 
-static const ARMCPRegInfo zcr_el1_reginfo = {
-    .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL1_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
-    .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_el2_reginfo = {
-    .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL2_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
-    .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_no_el2_reginfo = {
-    .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL2_RW, .type = ARM_CP_SVE,
-    .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
-};
-
-static const ARMCPRegInfo zcr_el3_reginfo = {
-    .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL3_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
-    .writefn = zcr_write, .raw_writefn = raw_write
+static const ARMCPRegInfo zcr_reginfo[] = {
+    { .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
+    { .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL2_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
+    { .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL3_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
 };
 
 void hw_watchpoint_update(ARMCPU *cpu, int n)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
 
     if (cpu_isar_feature(aa64_sve, cpu)) {
-        define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
-        if (arm_feature(env, ARM_FEATURE_EL2)) {
-            define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
-        } else {
-            define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
-        }
-        if (arm_feature(env, ARM_FEATURE_EL3)) {
-            define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
-        }
+        define_arm_cp_regs(cpu, zcr_reginfo);
     }
 
 #ifdef TARGET_AARCH64
-- 
2.25.1

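As an aside for readers new to QEMU's coprocessor-register machinery: the
consolidation above follows a common table-driven pattern, where each
register is described by one entry in a static array and the whole array is
registered in one call. A minimal standalone C sketch of that pattern
follows; RegInfo, define_one_reg() and define_regs() are hypothetical
stand-ins for QEMU's ARMCPRegInfo, define_one_arm_cp_reg() and
define_arm_cp_regs(), not real QEMU APIs.

    /*
     * Standalone sketch: register a table of descriptors with one call
     * instead of a chain of per-register calls.
     */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        int opc1;
    } RegInfo;

    static void define_one_reg(const RegInfo *ri)
    {
        printf("registered %s (opc1=%d)\n", ri->name, ri->opc1);
    }

    static void define_regs(const RegInfo *regs, size_t n)
    {
        /* One loop replaces N separate registration calls. */
        for (size_t i = 0; i < n; i++) {
            define_one_reg(&regs[i]);
        }
    }

    int main(void)
    {
        static const RegInfo zcr_like[] = {
            { "ZCR_EL1", 0 },
            { "ZCR_EL2", 4 },
            { "ZCR_EL3", 6 },
        };
        define_regs(zcr_like, sizeof(zcr_like) / sizeof(zcr_like[0]));
        return 0;
    }

The EL2/EL3 special cases that previously needed separate structs are
handled elsewhere: per the commit message, ZCR_EL2 can be squashed to RES0
and ZCR_EL3 dropped at registration time.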
From: Richard Henderson <richard.henderson@linaro.org>

This register is present for either VHE or Debugv8p2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
 };
 
+static const ARMCPRegInfo contextidr_el2 = {
+    .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
+    .access = PL2_RW,
+    .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2])
+};
+
 static const ARMCPRegInfo vhe_reginfo[] = {
-    { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
-      .access = PL2_RW,
-      .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
     { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
       .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_one_arm_cp_reg(cpu, &ssbs_reginfo);
     }
 
+    if (cpu_isar_feature(aa64_vh, cpu) ||
+        cpu_isar_feature(aa64_debugv8p2, cpu)) {
+        define_one_arm_cp_reg(cpu, &contextidr_el2);
+    }
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
     }
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Previously we were defining some of these in user-only mode,
but none of them are accessible from user-only, therefore
define them only in system mode.

This will shortly be used from cpu_tcg.c also.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |  6 ++++
 target/arm/cpu64.c     | 64 +++---------------------------------------
 target/arm/cpu_tcg.c   | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+), 60 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
 int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
 #endif
 
+#ifdef CONFIG_USER_ONLY
+static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
+#else
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
+#endif
+
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
 #include "hvf_arm.h"
 #include "qapi/visitor.h"
 #include "hw/qdev-properties.h"
-#include "cpregs.h"
+#include "internals.h"
 
 
-#ifndef CONFIG_USER_ONLY
-static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    /* Number of cores is in [25:24]; otherwise we RAZ */
-    return (cpu->core_count - 1) << 24;
-}
-#endif
-
-static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
-#ifndef CONFIG_USER_ONLY
-    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-    { .name = "L2CTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-#endif
-    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ECTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR",
-      .cp = 15, .opc1 = 0, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUECTLR",
-      .cp = 15, .opc1 = 1, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUMERRSR",
-      .cp = 15, .opc1 = 2, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2MERRSR",
-      .cp = 15, .opc1 = 3, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-};
-
 static void aarch64_a57_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a53_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a72_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #endif
 #include "cpregs.h"
 
+#ifndef CONFIG_USER_ONLY
+static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    /* Number of cores is in [25:24]; otherwise we RAZ */
+    return (cpu->core_count - 1) << 24;
+}
+
+static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
+    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2CTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ECTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR",
+      .cp = 15, .opc1 = 0, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUECTLR",
+      .cp = 15, .opc1 = 1, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUMERRSR",
+      .cp = 15, .opc1 = 2, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2MERRSR",
+      .cp = 15, .opc1 = 3, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+};
+
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu)
+{
+    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+}
+#endif /* !CONFIG_USER_ONLY */
+
 /* CPU models. These are not needed for the AArch64 linux-user build. */
 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
-- 
2.25.1

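A note on the L2CTLR encoding that l2ctlr_read() above implements: the core
count, minus one, is reported in bits [25:24], and every other bit reads as
zero. A standalone illustration of the same encode/decode (plain C, not
QEMU code; the helper names are made up for the demo):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Encode as in l2ctlr_read(): cores-1 in bits [25:24], all else RAZ. */
    static uint64_t l2ctlr_encode(unsigned core_count)
    {
        return (uint64_t)(core_count - 1) << 24;
    }

    static unsigned l2ctlr_cores(uint64_t l2ctlr)
    {
        return (unsigned)((l2ctlr >> 24) & 3) + 1;
    }

    int main(void)
    {
        for (unsigned n = 1; n <= 4; n++) {
            uint64_t v = l2ctlr_encode(n);
            printf("cores=%u -> L2CTLR=0x%08" PRIx64 " -> decoded=%u\n",
                   n, v, l2ctlr_cores(v));
        }
        return 0;
    }

Because the field is two bits wide, only one to four cores can be reported
this way, which is why the guest-visible value saturates at the hardware
cluster sizes these CPUs actually ship with.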
From: Richard Henderson <richard.henderson@linaro.org>

Instead of starting with cortex-a15 and adding v8 features to
a v7 cpu, begin with a v8 cpu stripped of its aarch64 features.
This fixes the long-standing to-do where we only enabled v8
features for user-only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu_tcg.c | 151 ++++++++++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 59 deletions(-)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
 static void arm_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
+    uint32_t t;
 
-    cortex_a15_initfn(obj);
+    /* aarch64_a57_initfn, advertising none of the aarch64 features */
+    cpu->dtb_compatible = "arm,cortex-a57";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+    cpu->midr = 0x411fd070;
+    cpu->revidr = 0x00000000;
+    cpu->reset_fpsid = 0x41034070;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
+    cpu->ctr = 0x8444c004;
+    cpu->reset_sctlr = 0x00c50838;
+    cpu->isar.id_pfr0 = 0x00000131;
+    cpu->isar.id_pfr1 = 0x00011011;
+    cpu->isar.id_dfr0 = 0x03010066;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x10101105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02102211;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.dbgdidr = 0x3516d000;
+    cpu->clidr = 0x0a200023;
+    cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 
-    /* old-style VFP short-vector support */
-    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    /* Add additional features supported by QEMU */
+    t = cpu->isar.id_isar5;
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    cpu->isar.id_isar5 = t;
+
+    t = cpu->isar.id_isar6;
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    cpu->isar.id_isar6 = t;
+
+    t = cpu->isar.mvfr1;
+    t = FIELD_DP32(t, MVFR1, FPHP, 3);     /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2);   /* v8.2-FP16 */
+    cpu->isar.mvfr1 = t;
+
+    t = cpu->isar.mvfr2;
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
+    cpu->isar.mvfr2 = t;
+
+    t = cpu->isar.id_mmfr3;
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2);   /* ATS1E1 */
+    cpu->isar.id_mmfr3 = t;
+
+    t = cpu->isar.id_mmfr4;
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1);  /* AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1);   /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1);   /* TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1);   /* TTS2UXN */
+    cpu->isar.id_mmfr4 = t;
+
+    t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    cpu->isar.id_pfr0 = t;
+
+    t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    cpu->isar.id_pfr2 = t;
 
 #ifdef CONFIG_USER_ONLY
     /*
-     * We don't set these in system emulation mode for the moment,
-     * since we don't correctly set (all of) the ID registers to
-     * advertise them.
+     * Break with true ARMv8 and add back old-style VFP short-vector support.
+     * Only do this for user-mode, where -cpu max is the default, so that
+     * older v6 and v7 programs are more likely to work without adjustment.
      */
-    set_feature(&cpu->env, ARM_FEATURE_V8);
-    {
-        uint32_t t;
-
-        t = cpu->isar.id_isar5;
-        t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-        t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-        t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
-        t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-        t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-        t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
-        cpu->isar.id_isar5 = t;
-
-        t = cpu->isar.id_isar6;
-        t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-        t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-        t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-        t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-        t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
-        cpu->isar.id_isar6 = t;
-
-        t = cpu->isar.mvfr1;
-        t = FIELD_DP32(t, MVFR1, FPHP, 3);     /* v8.2-FP16 */
-        t = FIELD_DP32(t, MVFR1, SIMDHP, 2);   /* v8.2-FP16 */
-        cpu->isar.mvfr1 = t;
-
-        t = cpu->isar.mvfr2;
-        t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-        t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
-        cpu->isar.mvfr2 = t;
-
-        t = cpu->isar.id_mmfr3;
-        t = FIELD_DP32(t, ID_MMFR3, PAN, 2);   /* ATS1E1 */
-        cpu->isar.id_mmfr3 = t;
-
-        t = cpu->isar.id_mmfr4;
-        t = FIELD_DP32(t, ID_MMFR4, HPDS, 1);  /* AA32HPD */
-        t = FIELD_DP32(t, ID_MMFR4, AC2, 1);   /* ACTLR2, HACTLR2 */
-        t = FIELD_DP32(t, ID_MMFR4, CNP, 1);   /* TTCNP */
-        t = FIELD_DP32(t, ID_MMFR4, XNX, 1);   /* TTS2UXN */
-        cpu->isar.id_mmfr4 = t;
-
-        t = cpu->isar.id_pfr0;
-        t = FIELD_DP32(t, ID_PFR0, DIT, 1);
-        cpu->isar.id_pfr0 = t;
-
-        t = cpu->isar.id_pfr2;
-        t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
-        cpu->isar.id_pfr2 = t;
-    }
-#endif /* CONFIG_USER_ONLY */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+#endif
 }
 
 #endif /* !TARGET_AARCH64 */
-- 
2.25.1

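The '-cpu max' rework above leans heavily on FIELD_DP32(), which deposits a
value into a named bitfield of a 32-bit ID register while leaving the other
fields untouched. A standalone approximation of that operation follows; the
ID_ISAR5 field positions (AES in bits [7:4], SHA1 in [11:8]) match the
architecture, and the local deposit32() mirrors the shape of QEMU's helper
of the same name but is reimplemented here so the sketch compiles on its
own:

    #include <stdint.h>
    #include <stdio.h>

    /* Local reimplementation of the deposit operation behind FIELD_DP32(). */
    static uint32_t deposit32(uint32_t value, int start, int length,
                              uint32_t fieldval)
    {
        uint32_t mask = (length == 32 ? ~0u : ((1u << length) - 1)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

    int main(void)
    {
        uint32_t id_isar5 = 0;                    /* blank register for demo */
        id_isar5 = deposit32(id_isar5, 4, 4, 2);  /* ID_ISAR5.AES = 2 */
        id_isar5 = deposit32(id_isar5, 8, 4, 1);  /* ID_ISAR5.SHA1 = 1 */
        printf("ID_ISAR5 = 0x%08x\n", id_isar5);  /* prints 0x00000120 */
        return 0;
    }

Encoding feature support this way, rather than via ad-hoc feature bits, is
what lets guests discover the emulated capabilities through the normal
architectural ID registers.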
From: Richard Henderson <richard.henderson@linaro.org>

We set this for qemu-system-aarch64, but failed to do so
for the strictly 32-bit emulation.

Fixes: 3bec78447a9 ("target/arm: Provide ARMv8.4-PMU in '-cpu max'")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu_tcg.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
     t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
     cpu->isar.id_pfr2 = t;
 
+    t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    cpu->isar.id_dfr0 = t;
+
 #ifdef CONFIG_USER_ONLY
     /*
      * Break with true ARMv8 and add back old-style VFP short-vector support.
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
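One encoding note for the hunks below: the 9-bit immediate field packs an
optional left-shift-by-8 into its top bit, and the new expand_imm_sh8u()
decodes it. Worked examples (a sketch of the arithmetic only):

/* expand_imm_sh8u(x) == (uint8_t)x << (x & 0x100 ? 8 : 0)
 *
 *   x = 0x0ab -> 0xab     (bit 8 clear: no shift)
 *   x = 0x1ab -> 0xab00   (bit 8 set: low byte 0xab shifted left by 8)
 */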
 target/arm/helper-sve.h | 25 +++++++
 target/arm/sve_helper.c | 41 +++++++++++
 target/arm/translate-sve.c | 144 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 26 +++++++
 4 files changed, 236 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)

 DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_VPZ
 #undef DO_VPZ_D

+/* Two vector operand, one scalar operand, unpredicated. */
+#define DO_ZZI(NAME, TYPE, OP) \
+void HELPER(NAME)(void *vd, void *vn, uint64_t s64, uint32_t desc) \
+{ \
+    intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE); \
+    TYPE s = s64, *d = vd, *n = vn; \
+    for (i = 0; i < opr_sz; ++i) { \
+        d[i] = OP(n[i], s); \
+    } \
+}
+
+#define DO_SUBR(X, Y) (Y - X)
+
+DO_ZZI(sve_subri_b, uint8_t, DO_SUBR)
+DO_ZZI(sve_subri_h, uint16_t, DO_SUBR)
+DO_ZZI(sve_subri_s, uint32_t, DO_SUBR)
+DO_ZZI(sve_subri_d, uint64_t, DO_SUBR)
+
+DO_ZZI(sve_smaxi_b, int8_t, DO_MAX)
+DO_ZZI(sve_smaxi_h, int16_t, DO_MAX)
+DO_ZZI(sve_smaxi_s, int32_t, DO_MAX)
+DO_ZZI(sve_smaxi_d, int64_t, DO_MAX)
+
+DO_ZZI(sve_smini_b, int8_t, DO_MIN)
+DO_ZZI(sve_smini_h, int16_t, DO_MIN)
+DO_ZZI(sve_smini_s, int32_t, DO_MIN)
+DO_ZZI(sve_smini_d, int64_t, DO_MIN)
+
+DO_ZZI(sve_umaxi_b, uint8_t, DO_MAX)
+DO_ZZI(sve_umaxi_h, uint16_t, DO_MAX)
+DO_ZZI(sve_umaxi_s, uint32_t, DO_MAX)
+DO_ZZI(sve_umaxi_d, uint64_t, DO_MAX)
+
+DO_ZZI(sve_umini_b, uint8_t, DO_MIN)
+DO_ZZI(sve_umini_h, uint16_t, DO_MIN)
+DO_ZZI(sve_umini_s, uint32_t, DO_MIN)
+DO_ZZI(sve_umini_d, uint64_t, DO_MIN)
+
+#undef DO_ZZI
+
 #undef DO_AND
 #undef DO_ORR
 #undef DO_EOR
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_ASR
 #undef DO_LSR
 #undef DO_LSL
+#undef DO_SUBR

 /* Similar to the ARM LastActiveElement pseudocode function, except the
    result is multiplied by the element size. This includes the not found
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static inline int expand_imm_sh8s(int x)
     return (int8_t)x << (x & 0x100 ? 8 : 0);
 }

+static inline int expand_imm_sh8u(int x)
+{
+    return (uint8_t)x << (x & 0x100 ? 8 : 0);
+}
+
 /*
  * Include the generated decoder.
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
     return true;
 }

+static bool trans_ADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_addi(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    a->imm = -a->imm;
+    return trans_ADD_zzi(s, a, insn);
+}
+
+static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    static const GVecGen2s op[4] = {
+        { .fni8 = tcg_gen_vec_sub8_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_b,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_8,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_vec_sub16_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_h,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_16,
+          .scalar_first = true },
+        { .fni4 = tcg_gen_sub_i32,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_s,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_32,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_sub_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_d,
+          .opc = INDEX_op_sub_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .vece = MO_64,
+          .scalar_first = true }
+    };
+
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+        tcg_gen_gvec_2s(vec_full_reg_offset(s, a->rd),
+                        vec_full_reg_offset(s, a->rn),
+                        vsz, vsz, c, &op[a->esz]);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+static bool trans_MUL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_muli(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool do_zzi_sat(DisasContext *s, arg_rri_esz *a, uint32_t insn,
+                       bool u, bool d)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_const_i64(a->imm);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, u, d);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_SQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, false);
+}
+
+static bool trans_UQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, false);
+}
+
+static bool trans_SQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, true);
+}
+
+static bool trans_UQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, true);
+}
+
+static bool do_zzi_ool(DisasContext *s, arg_rri_esz *a, gen_helper_gvec_2i *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+
+        tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
+                            vec_full_reg_offset(s, a->rn),
+                            c, vsz, vsz, 0, fn);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+#define DO_ZZI(NAME, name) \
+static bool trans_##NAME##_zzi(DisasContext *s, arg_rri_esz *a, \
+                               uint32_t insn) \
+{ \
+    static gen_helper_gvec_2i * const fns[4] = { \
+        gen_helper_sve_##name##i_b, gen_helper_sve_##name##i_h, \
+        gen_helper_sve_##name##i_s, gen_helper_sve_##name##i_d, \
+    }; \
+    return do_zzi_ool(s, a, fns[a->esz]); \
+}
+
+DO_ZZI(SMAX, smax)
+DO_ZZI(UMAX, umax)
+DO_ZZI(SMIN, smin)
+DO_ZZI(UMIN, umin)
+
+#undef DO_ZZI
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@

 # Signed 8-bit immediate, optionally shifted left by 8.
 %sh8_i8s 5:9 !function=expand_imm_sh8s
+# Unsigned 8-bit immediate, optionally shifted left by 8.
+%sh8_i8u 5:9 !function=expand_imm_sh8u

 # Either a copy of rd (at bit 0), or a different source
 # as propagated via the MOVPRFX instruction.
@@ -XXX,XX +XXX,XX @@
 @pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
 @rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
         &rrr_esz rn=%reg_movprfx
+@rdn_sh_i8u ........ esz:2 ...... ...... ..... rd:5 \
+            &rri_esz rn=%reg_movprfx imm=%sh8_i8u
+@rdn_i8u ........ esz:2 ...... ... imm:8 rd:5 \
+         &rri_esz rn=%reg_movprfx
+@rdn_i8s ........ esz:2 ...... ... imm:s8 rd:5 \
+         &rri_esz rn=%reg_movprfx

 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
@@ -XXX,XX +XXX,XX @@ FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
 # SVE broadcast integer immediate (unpredicated)
 DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s

+# SVE integer add/subtract immediate (unpredicated)
+ADD_zzi 00100101 .. 100 000 11 . ........ ..... @rdn_sh_i8u
+SUB_zzi 00100101 .. 100 001 11 . ........ ..... @rdn_sh_i8u
+SUBR_zzi 00100101 .. 100 011 11 . ........ ..... @rdn_sh_i8u
+SQADD_zzi 00100101 .. 100 100 11 . ........ ..... @rdn_sh_i8u
+UQADD_zzi 00100101 .. 100 101 11 . ........ ..... @rdn_sh_i8u
+SQSUB_zzi 00100101 .. 100 110 11 . ........ ..... @rdn_sh_i8u
+UQSUB_zzi 00100101 .. 100 111 11 . ........ ..... @rdn_sh_i8u
+
+# SVE integer min/max immediate (unpredicated)
+SMAX_zzi 00100101 .. 101 000 110 ........ ..... @rdn_i8s
+UMAX_zzi 00100101 .. 101 001 110 ........ ..... @rdn_i8u
+SMIN_zzi 00100101 .. 101 010 110 ........ ..... @rdn_i8s
+UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
+
+# SVE integer multiply immediate (unpredicated)
+MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Share the code to set AArch32 max features so that we no
longer have code drift between qemu{-system,}-{arm,aarch64}.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 2 +
 target/arm/cpu64.c | 50 +-----------------
 target/arm/cpu_tcg.c | 114 ++++++++++++++++++++++-------------------
 3 files changed, 65 insertions(+), 101 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
 void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
 #endif

+void aa32_max_features(ARMCPU *cpu);
+
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
     uint64_t t;
-    uint32_t u;

     if (kvm_enabled() || hvf_enabled()) {
         /* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
     cpu->isar.id_aa64zfr0 = t;

-    /* Replicate the same data to the 32-bit id registers. */
-    u = cpu->isar.id_isar5;
-    u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
-    u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
-    u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
-    u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
-    u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
-    u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
-    cpu->isar.id_isar5 = u;
-
-    u = cpu->isar.id_isar6;
-    u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
-    u = FIELD_DP32(u, ID_ISAR6, DP, 1);
-    u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SB, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
-    u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
-    u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
-    cpu->isar.id_isar6 = u;
-
-    u = cpu->isar.id_pfr0;
-    u = FIELD_DP32(u, ID_PFR0, DIT, 1);
-    cpu->isar.id_pfr0 = u;
-
-    u = cpu->isar.id_pfr2;
-    u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
-    cpu->isar.id_pfr2 = u;
-
-    u = cpu->isar.id_mmfr3;
-    u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
-    cpu->isar.id_mmfr3 = u;
-
-    u = cpu->isar.id_mmfr4;
-    u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
-    u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
-    cpu->isar.id_mmfr4 = u;
-
     t = cpu->isar.id_aa64dfr0;
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
     cpu->isar.id_aa64dfr0 = t;

-    u = cpu->isar.id_dfr0;
-    u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
-    cpu->isar.id_dfr0 = u;
-
-    u = cpu->isar.mvfr1;
-    u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-    cpu->isar.mvfr1 = u;
+    /* Replicate the same data to the 32-bit id registers. */
+    aa32_max_features(cpu);

 #ifdef CONFIG_USER_ONLY
     /*
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #endif
 #include "cpregs.h"

+
+/* Share AArch32 -cpu max features with AArch64. */
+void aa32_max_features(ARMCPU *cpu)
+{
+    uint32_t t;
+
+    /* Add additional features supported by QEMU */
+    t = cpu->isar.id_isar5;
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    cpu->isar.id_isar5 = t;
+
+    t = cpu->isar.id_isar6;
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    cpu->isar.id_isar6 = t;
+
+    t = cpu->isar.mvfr1;
+    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+    cpu->isar.mvfr1 = t;
+
+    t = cpu->isar.mvfr2;
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+    cpu->isar.mvfr2 = t;
+
+    t = cpu->isar.id_mmfr3;
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+    cpu->isar.id_mmfr3 = t;
+
+    t = cpu->isar.id_mmfr4;
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+    cpu->isar.id_mmfr4 = t;
+
+    t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    cpu->isar.id_pfr0 = t;
+
+    t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    cpu->isar.id_pfr2 = t;
+
+    t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    cpu->isar.id_dfr0 = t;
+}
+
 #ifndef CONFIG_USER_ONLY
 static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
 static void arm_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
-    uint32_t t;

     /* aarch64_a57_initfn, advertising none of the aarch64 features */
     cpu->dtb_compatible = "arm,cortex-a57";
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
     cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
     define_cortex_a72_a57_a53_cp_reginfo(cpu);

-    /* Add additional features supported by QEMU */
-    t = cpu->isar.id_isar5;
-    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
-    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
-    cpu->isar.id_isar5 = t;
-
-    t = cpu->isar.id_isar6;
-    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
-    cpu->isar.id_isar6 = t;
-
-    t = cpu->isar.mvfr1;
-    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-    cpu->isar.mvfr1 = t;
-
-    t = cpu->isar.mvfr2;
-    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
-    cpu->isar.mvfr2 = t;
-
-    t = cpu->isar.id_mmfr3;
-    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
-    cpu->isar.id_mmfr3 = t;
-
-    t = cpu->isar.id_mmfr4;
-    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
-    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
-    cpu->isar.id_mmfr4 = t;
-
-    t = cpu->isar.id_pfr0;
-    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
-    cpu->isar.id_pfr0 = t;
-
-    t = cpu->isar.id_pfr2;
-    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
-    cpu->isar.id_pfr2 = t;
-
-    t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
-    cpu->isar.id_dfr0 = t;
+    aa32_max_features(cpu);

 #ifdef CONFIG_USER_ONLY
     /*
--
2.25.1
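Both patches above, and several below, lean on the FIELD_DP32()/FIELD_DP64()
macros from hw/registerfields.h. Their effect is a plain bitfield deposit; a
standalone sketch of the semantics, using ID_ISAR5.AES at bits [7:4] as the
example field (the helper name here is hypothetical, for illustration only):

#include <stdint.h>

/* Sketch: FIELD_DP32(reg, REG, FIELD, val) returns reg with the named
 * field replaced by val, everything else untouched. */
static uint32_t field_dp32_sketch(uint32_t reg, int start, int len, uint32_t val)
{
    uint32_t mask = ((1u << len) - 1) << start;
    return (reg & ~mask) | ((val << start) & mask);
}

/* t = FIELD_DP32(t, ID_ISAR5, AES, 2) then behaves like:   */
/*     t = field_dp32_sketch(t, 4, 4, 2);   -- AES + PMULL  */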
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
7
---
11
---
8
target/arm/helper-sve.h | 115 +++++++++++++++++++++++
12
target/arm/cpu64.c | 100 +++++++++++++++++++++----------------------
9
target/arm/sve_helper.c | 187 +++++++++++++++++++++++++++++++++++++
13
target/arm/cpu_tcg.c | 48 ++++++++++-----------
10
target/arm/translate-sve.c | 91 ++++++++++++++++++
14
2 files changed, 74 insertions(+), 74 deletions(-)
11
target/arm/sve.decode | 24 +++++
12
4 files changed, 417 insertions(+)
13
15
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
18
--- a/target/arm/cpu64.c
17
+++ b/target/arm/helper-sve.h
19
+++ b/target/arm/cpu64.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
19
21
cpu->midr = t;
20
DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
22
21
23
t = cpu->isar.id_aa64isar0;
22
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
24
- t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
23
+ i32, ptr, ptr, ptr, ptr, i32)
25
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
24
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_b, TCG_CALL_NO_RWG,
26
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
25
+ i32, ptr, ptr, ptr, ptr, i32)
27
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
26
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_b, TCG_CALL_NO_RWG,
28
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
27
+ i32, ptr, ptr, ptr, ptr, i32)
29
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
28
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_b, TCG_CALL_NO_RWG,
30
t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
29
+ i32, ptr, ptr, ptr, ptr, i32)
31
- t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
30
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_b, TCG_CALL_NO_RWG,
32
- t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
31
+ i32, ptr, ptr, ptr, ptr, i32)
33
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
32
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_b, TCG_CALL_NO_RWG,
34
- t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
33
+ i32, ptr, ptr, ptr, ptr, i32)
35
- t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
34
+
36
- t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
35
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_h, TCG_CALL_NO_RWG,
37
- t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
36
+ i32, ptr, ptr, ptr, ptr, i32)
38
- t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
37
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_h, TCG_CALL_NO_RWG,
39
- t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
38
+ i32, ptr, ptr, ptr, ptr, i32)
40
- t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
39
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_h, TCG_CALL_NO_RWG,
41
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
40
+ i32, ptr, ptr, ptr, ptr, i32)
42
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
41
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_h, TCG_CALL_NO_RWG,
43
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
42
+ i32, ptr, ptr, ptr, ptr, i32)
44
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); /* FEAT_SM3 */
43
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_h, TCG_CALL_NO_RWG,
45
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); /* FEAT_SM4 */
44
+ i32, ptr, ptr, ptr, ptr, i32)
46
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); /* FEAT_DotProd */
45
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_h, TCG_CALL_NO_RWG,
47
+ t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); /* FEAT_FHM */
46
+ i32, ptr, ptr, ptr, ptr, i32)
48
+ t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* FEAT_FlagM2 */
47
+
49
+ t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
48
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_s, TCG_CALL_NO_RWG,
50
+ t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); /* FEAT_RNG */
49
+ i32, ptr, ptr, ptr, ptr, i32)
51
cpu->isar.id_aa64isar0 = t;
50
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_s, TCG_CALL_NO_RWG,
52
51
+ i32, ptr, ptr, ptr, ptr, i32)
53
t = cpu->isar.id_aa64isar1;
52
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_s, TCG_CALL_NO_RWG,
54
- t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
53
+ i32, ptr, ptr, ptr, ptr, i32)
55
- t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
54
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_s, TCG_CALL_NO_RWG,
56
- t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
55
+ i32, ptr, ptr, ptr, ptr, i32)
57
- t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
56
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_s, TCG_CALL_NO_RWG,
58
- t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
57
+ i32, ptr, ptr, ptr, ptr, i32)
59
- t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
58
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_s, TCG_CALL_NO_RWG,
60
- t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
59
+ i32, ptr, ptr, ptr, ptr, i32)
61
- t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
60
+
62
- t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
61
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_d, TCG_CALL_NO_RWG,
63
+ t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); /* FEAT_DPB2 */
62
+ i32, ptr, ptr, ptr, ptr, i32)
64
+ t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); /* FEAT_JSCVT */
63
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_d, TCG_CALL_NO_RWG,
65
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); /* FEAT_FCMA */
64
+ i32, ptr, ptr, ptr, ptr, i32)
66
+ t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* FEAT_LRCPC2 */
65
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_d, TCG_CALL_NO_RWG,
67
+ t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); /* FEAT_FRINTTS */
66
+ i32, ptr, ptr, ptr, ptr, i32)
68
+ t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
67
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_d, TCG_CALL_NO_RWG,
69
+ t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
68
+ i32, ptr, ptr, ptr, ptr, i32)
70
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
69
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_d, TCG_CALL_NO_RWG,
71
+ t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
70
+ i32, ptr, ptr, ptr, ptr, i32)
72
cpu->isar.id_aa64isar1 = t;
71
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_d, TCG_CALL_NO_RWG,
73
72
+ i32, ptr, ptr, ptr, ptr, i32)
74
t = cpu->isar.id_aa64pfr0;
73
+
75
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
74
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_b, TCG_CALL_NO_RWG,
76
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
75
+ i32, ptr, ptr, ptr, ptr, i32)
77
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
76
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_b, TCG_CALL_NO_RWG,
78
- t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
77
+ i32, ptr, ptr, ptr, ptr, i32)
79
- t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
78
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_b, TCG_CALL_NO_RWG,
80
- t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
79
+ i32, ptr, ptr, ptr, ptr, i32)
81
- t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
80
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_b, TCG_CALL_NO_RWG,
82
+ t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
81
+ i32, ptr, ptr, ptr, ptr, i32)
83
+ t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
82
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_b, TCG_CALL_NO_RWG,
84
cpu->isar.id_aa64pfr0 = t;
83
+ i32, ptr, ptr, ptr, ptr, i32)
85
84
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_b, TCG_CALL_NO_RWG,
86
t = cpu->isar.id_aa64pfr1;
85
+ i32, ptr, ptr, ptr, ptr, i32)
87
- t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
86
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_b, TCG_CALL_NO_RWG,
88
- t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
87
+ i32, ptr, ptr, ptr, ptr, i32)
89
+ t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); /* FEAT_BTI */
88
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_b, TCG_CALL_NO_RWG,
90
+ t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); /* FEAT_SSBS2 */
89
+ i32, ptr, ptr, ptr, ptr, i32)
91
/*
90
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_b, TCG_CALL_NO_RWG,
92
* Begin with full support for MTE. This will be downgraded to MTE=0
91
+ i32, ptr, ptr, ptr, ptr, i32)
93
* during realize if the board provides no tag memory, much like
92
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_b, TCG_CALL_NO_RWG,
94
* we do for EL2 with the virtualization=on property.
93
+ i32, ptr, ptr, ptr, ptr, i32)
95
*/
94
+
96
- t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
95
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_h, TCG_CALL_NO_RWG,
97
+ t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
96
+ i32, ptr, ptr, ptr, ptr, i32)
98
cpu->isar.id_aa64pfr1 = t;
97
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_h, TCG_CALL_NO_RWG,
99
98
+ i32, ptr, ptr, ptr, ptr, i32)
100
t = cpu->isar.id_aa64mmfr0;
99
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_h, TCG_CALL_NO_RWG,
101
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
100
+ i32, ptr, ptr, ptr, ptr, i32)
102
cpu->isar.id_aa64mmfr0 = t;
101
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_h, TCG_CALL_NO_RWG,
103
102
+ i32, ptr, ptr, ptr, ptr, i32)
104
t = cpu->isar.id_aa64mmfr1;
103
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_h, TCG_CALL_NO_RWG,
105
- t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
104
+ i32, ptr, ptr, ptr, ptr, i32)
106
- t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
105
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_h, TCG_CALL_NO_RWG,
107
- t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
106
+ i32, ptr, ptr, ptr, ptr, i32)
108
- t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
107
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_h, TCG_CALL_NO_RWG,
109
- t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
108
+ i32, ptr, ptr, ptr, ptr, i32)
110
- t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
109
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_h, TCG_CALL_NO_RWG,
111
+ t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
110
+ i32, ptr, ptr, ptr, ptr, i32)
112
+ t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
111
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_h, TCG_CALL_NO_RWG,
113
+ t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
112
+ i32, ptr, ptr, ptr, ptr, i32)
114
+ t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
113
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_h, TCG_CALL_NO_RWG,
115
+ t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
114
+ i32, ptr, ptr, ptr, ptr, i32)
116
+ t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
115
+
117
cpu->isar.id_aa64mmfr1 = t;
116
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_s, TCG_CALL_NO_RWG,
118
117
+ i32, ptr, ptr, ptr, ptr, i32)
119
t = cpu->isar.id_aa64mmfr2;
118
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_s, TCG_CALL_NO_RWG,
120
- t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
119
+ i32, ptr, ptr, ptr, ptr, i32)
121
- t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
120
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_s, TCG_CALL_NO_RWG,
122
- t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
121
+ i32, ptr, ptr, ptr, ptr, i32)
123
- t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
122
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_s, TCG_CALL_NO_RWG,
124
- t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
123
+ i32, ptr, ptr, ptr, ptr, i32)
125
- t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
124
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_s, TCG_CALL_NO_RWG,
126
+ t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
125
+ i32, ptr, ptr, ptr, ptr, i32)
127
+ t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
126
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_s, TCG_CALL_NO_RWG,
128
+ t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
127
+ i32, ptr, ptr, ptr, ptr, i32)
129
+ t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
128
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_s, TCG_CALL_NO_RWG,
130
+ t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
129
+ i32, ptr, ptr, ptr, ptr, i32)
131
+ t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
130
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_s, TCG_CALL_NO_RWG,
132
cpu->isar.id_aa64mmfr2 = t;
131
+ i32, ptr, ptr, ptr, ptr, i32)
133
132
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
134
t = cpu->isar.id_aa64zfr0;
133
+ i32, ptr, ptr, ptr, ptr, i32)
135
t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
134
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
136
- t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
135
+ i32, ptr, ptr, ptr, ptr, i32)
137
- t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
136
+
138
- t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
137
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
139
- t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
138
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
140
- t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
139
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
141
- t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
140
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
142
- t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
143
- t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
144
+ t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* FEAT_SVE_PMULL128 */
145
+ t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); /* FEAT_SVE_BitPerm */
146
+ t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1); /* FEAT_BF16 */
147
+ t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); /* FEAT_SVE_SHA3 */
148
+ t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); /* FEAT_SVE_SM4 */
149
+ t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); /* FEAT_I8MM */
150
+ t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); /* FEAT_F32MM */
151
+ t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); /* FEAT_F64MM */
152
cpu->isar.id_aa64zfr0 = t;
153
154
t = cpu->isar.id_aa64dfr0;
155
- t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
156
+ t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
157
cpu->isar.id_aa64dfr0 = t;
158
159
/* Replicate the same data to the 32-bit id registers. */
160
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
141
index XXXXXXX..XXXXXXX 100644
161
index XXXXXXX..XXXXXXX 100644
142
--- a/target/arm/sve_helper.c
162
--- a/target/arm/cpu_tcg.c
143
+++ b/target/arm/sve_helper.c
163
+++ b/target/arm/cpu_tcg.c
144
@@ -XXX,XX +XXX,XX @@ static uint32_t iter_predtest_fwd(uint64_t d, uint64_t g, uint32_t flags)
164
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
145
return flags;
165
166
/* Add additional features supported by QEMU */
167
t = cpu->isar.id_isar5;
168
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
169
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
170
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
171
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2); /* FEAT_PMULL */
172
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); /* FEAT_SHA1 */
173
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); /* FEAT_SHA256 */
174
t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
175
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
176
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
177
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1); /* FEAT_RDM */
178
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); /* FEAT_FCMA */
179
cpu->isar.id_isar5 = t;
180
181
t = cpu->isar.id_isar6;
182
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
183
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
184
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
185
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
186
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
187
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
188
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
189
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); /* FEAT_JSCVT */
190
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1); /* Feat_DotProd */
191
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1); /* FEAT_FHM */
192
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1); /* FEAT_SB */
193
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); /* FEAT_SPECRES */
194
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1); /* FEAT_AA32BF16 */
195
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); /* FEAT_AA32I8MM */
196
cpu->isar.id_isar6 = t;
197
198
t = cpu->isar.mvfr1;
199
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
200
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
201
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* FEAT_FP16 */
202
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* FEAT_FP16 */
203
cpu->isar.mvfr1 = t;
204
205
t = cpu->isar.mvfr2;
206
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
207
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
208
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
209
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
210
cpu->isar.mvfr2 = t;
211
212
t = cpu->isar.id_mmfr3;
213
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
214
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* FEAT_PAN2 */
215
cpu->isar.id_mmfr3 = t;
216
217
t = cpu->isar.id_mmfr4;
218
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
219
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
220
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
221
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
222
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
223
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
224
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
225
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX*/
226
cpu->isar.id_mmfr4 = t;
227
228
t = cpu->isar.id_pfr0;
229
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
230
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
231
cpu->isar.id_pfr0 = t;
232
233
t = cpu->isar.id_pfr2;
234
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
235
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
236
cpu->isar.id_pfr2 = t;
237
238
t = cpu->isar.id_dfr0;
239
- t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
240
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
241
cpu->isar.id_dfr0 = t;
146
}
242
}
147
243
148
+/* This is an iterative function, called for each Pd and Pg word
149
+ * moving backward.
150
+ */
151
+static uint32_t iter_predtest_bwd(uint64_t d, uint64_t g, uint32_t flags)
152
+{
153
+ if (likely(g)) {
154
+ /* Compute C from first (i.e last) !(D & G).
155
+ Use bit 2 to signal first G bit seen. */
156
+ if (!(flags & 4)) {
157
+ flags += 4 - 1; /* add bit 2, subtract C from PREDTEST_INIT */
158
+ flags |= (d & pow2floor(g)) == 0;
159
+ }
160
+
161
+ /* Accumulate Z from each D & G. */
162
+ flags |= ((d & g) != 0) << 1;
163
+
164
+ /* Compute N from last (i.e first) D & G. Replace previous. */
165
+ flags = deposit32(flags, 31, 1, (d & (g & -g)) != 0);
166
+ }
167
+ return flags;
168
+}
169
+
170
/* The same for a single word predicate. */
171
uint32_t HELPER(sve_predtest1)(uint64_t d, uint64_t g)
172
{
173
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
174
d[i] = (pg[H1(i)] & 1 ? nn : mm);
175
}
176
}
177
+
178
+/* Two operand comparison controlled by a predicate.
179
+ * ??? It is very tempting to want to be able to expand this inline
180
+ * with x86 instructions, e.g.
181
+ *
182
+ * vcmpeqw zm, zn, %ymm0
183
+ * vpmovmskb %ymm0, %eax
184
+ * and $0x5555, %eax
185
+ * and pg, %eax
186
+ *
187
+ * or even aarch64, e.g.
188
+ *
189
+ * // mask = 4000 1000 0400 0100 0040 0010 0004 0001
190
+ * cmeq v0.8h, zn, zm
191
+ * and v0.8h, v0.8h, mask
192
+ * addv h0, v0.8h
193
+ * and v0.8b, pg
194
+ *
195
+ * However, coming up with an abstraction that allows vector inputs and
196
+ * a scalar output, and also handles the byte-ordering of sub-uint64_t
197
+ * scalar outputs, is tricky.
198
+ */
199
+#define DO_CMP_PPZZ(NAME, TYPE, OP, H, MASK) \
200
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
201
+{ \
202
+ intptr_t opr_sz = simd_oprsz(desc); \
203
+ uint32_t flags = PREDTEST_INIT; \
204
+ intptr_t i = opr_sz; \
205
+ do { \
206
+ uint64_t out = 0, pg; \
207
+ do { \
208
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
209
+ TYPE nn = *(TYPE *)(vn + H(i)); \
210
+ TYPE mm = *(TYPE *)(vm + H(i)); \
211
+ out |= nn OP mm; \
212
+ } while (i & 63); \
213
+ pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
214
+ out &= pg; \
215
+ *(uint64_t *)(vd + (i >> 3)) = out; \
216
+ flags = iter_predtest_bwd(out, pg, flags); \
217
+ } while (i > 0); \
218
+ return flags; \
219
+}
220
+
221
+#define DO_CMP_PPZZ_B(NAME, TYPE, OP) \
222
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1, 0xffffffffffffffffull)
223
+#define DO_CMP_PPZZ_H(NAME, TYPE, OP) \
224
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
225
+#define DO_CMP_PPZZ_S(NAME, TYPE, OP) \
226
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
227
+#define DO_CMP_PPZZ_D(NAME, TYPE, OP) \
228
+ DO_CMP_PPZZ(NAME, TYPE, OP, , 0x0101010101010101ull)
229
+
230
+DO_CMP_PPZZ_B(sve_cmpeq_ppzz_b, uint8_t, ==)
231
+DO_CMP_PPZZ_H(sve_cmpeq_ppzz_h, uint16_t, ==)
232
+DO_CMP_PPZZ_S(sve_cmpeq_ppzz_s, uint32_t, ==)
233
+DO_CMP_PPZZ_D(sve_cmpeq_ppzz_d, uint64_t, ==)
234
+
235
+DO_CMP_PPZZ_B(sve_cmpne_ppzz_b, uint8_t, !=)
236
+DO_CMP_PPZZ_H(sve_cmpne_ppzz_h, uint16_t, !=)
237
+DO_CMP_PPZZ_S(sve_cmpne_ppzz_s, uint32_t, !=)
238
+DO_CMP_PPZZ_D(sve_cmpne_ppzz_d, uint64_t, !=)
239
+
240
+DO_CMP_PPZZ_B(sve_cmpgt_ppzz_b, int8_t, >)
241
+DO_CMP_PPZZ_H(sve_cmpgt_ppzz_h, int16_t, >)
242
+DO_CMP_PPZZ_S(sve_cmpgt_ppzz_s, int32_t, >)
243
+DO_CMP_PPZZ_D(sve_cmpgt_ppzz_d, int64_t, >)
244
+
245
+DO_CMP_PPZZ_B(sve_cmpge_ppzz_b, int8_t, >=)
246
+DO_CMP_PPZZ_H(sve_cmpge_ppzz_h, int16_t, >=)
247
+DO_CMP_PPZZ_S(sve_cmpge_ppzz_s, int32_t, >=)
248
+DO_CMP_PPZZ_D(sve_cmpge_ppzz_d, int64_t, >=)
249
+
250
+DO_CMP_PPZZ_B(sve_cmphi_ppzz_b, uint8_t, >)
251
+DO_CMP_PPZZ_H(sve_cmphi_ppzz_h, uint16_t, >)
252
+DO_CMP_PPZZ_S(sve_cmphi_ppzz_s, uint32_t, >)
253
+DO_CMP_PPZZ_D(sve_cmphi_ppzz_d, uint64_t, >)
254
+
255
+DO_CMP_PPZZ_B(sve_cmphs_ppzz_b, uint8_t, >=)
256
+DO_CMP_PPZZ_H(sve_cmphs_ppzz_h, uint16_t, >=)
257
+DO_CMP_PPZZ_S(sve_cmphs_ppzz_s, uint32_t, >=)
258
+DO_CMP_PPZZ_D(sve_cmphs_ppzz_d, uint64_t, >=)
259
+
260
+#undef DO_CMP_PPZZ_B
261
+#undef DO_CMP_PPZZ_H
262
+#undef DO_CMP_PPZZ_S
263
+#undef DO_CMP_PPZZ_D
264
+#undef DO_CMP_PPZZ
265
+
266
+/* Similar, but the second source is "wide". */
267
+#define DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H, MASK) \
268
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
269
+{ \
270
+ intptr_t opr_sz = simd_oprsz(desc); \
271
+ uint32_t flags = PREDTEST_INIT; \
272
+ intptr_t i = opr_sz; \
273
+ do { \
274
+ uint64_t out = 0, pg; \
275
+ do { \
276
+ TYPEW mm = *(TYPEW *)(vm + i - 8); \
277
+ do { \
278
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
279
+ TYPE nn = *(TYPE *)(vn + H(i)); \
280
+ out |= nn OP mm; \
281
+ } while (i & 7); \
282
+ } while (i & 63); \
283
+ pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
284
+ out &= pg; \
285
+ *(uint64_t *)(vd + (i >> 3)) = out; \
286
+ flags = iter_predtest_bwd(out, pg, flags); \
287
+ } while (i > 0); \
288
+ return flags; \
289
+}
290
+
291
+#define DO_CMP_PPZW_B(NAME, TYPE, TYPEW, OP) \
292
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1, 0xffffffffffffffffull)
293
+#define DO_CMP_PPZW_H(NAME, TYPE, TYPEW, OP) \
294
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_2, 0x5555555555555555ull)
295
+#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
296
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
297
+
298
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
299
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
300
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
301
+
302
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
303
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
304
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
305
+
306
+DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
307
+DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
308
+DO_CMP_PPZW_S(sve_cmpgt_ppzw_s, int32_t, int64_t, >)
309
+
310
+DO_CMP_PPZW_B(sve_cmpge_ppzw_b, int8_t, int64_t, >=)
311
+DO_CMP_PPZW_H(sve_cmpge_ppzw_h, int16_t, int64_t, >=)
312
+DO_CMP_PPZW_S(sve_cmpge_ppzw_s, int32_t, int64_t, >=)
313
+
314
+DO_CMP_PPZW_B(sve_cmphi_ppzw_b, uint8_t, uint64_t, >)
315
+DO_CMP_PPZW_H(sve_cmphi_ppzw_h, uint16_t, uint64_t, >)
316
+DO_CMP_PPZW_S(sve_cmphi_ppzw_s, uint32_t, uint64_t, >)
317
+
318
+DO_CMP_PPZW_B(sve_cmphs_ppzw_b, uint8_t, uint64_t, >=)
319
+DO_CMP_PPZW_H(sve_cmphs_ppzw_h, uint16_t, uint64_t, >=)
320
+DO_CMP_PPZW_S(sve_cmphs_ppzw_s, uint32_t, uint64_t, >=)
321
+
322
+DO_CMP_PPZW_B(sve_cmplt_ppzw_b, int8_t, int64_t, <)
323
+DO_CMP_PPZW_H(sve_cmplt_ppzw_h, int16_t, int64_t, <)
324
+DO_CMP_PPZW_S(sve_cmplt_ppzw_s, int32_t, int64_t, <)
325
+
326
+DO_CMP_PPZW_B(sve_cmple_ppzw_b, int8_t, int64_t, <=)
327
+DO_CMP_PPZW_H(sve_cmple_ppzw_h, int16_t, int64_t, <=)
328
+DO_CMP_PPZW_S(sve_cmple_ppzw_s, int32_t, int64_t, <=)
329
+
330
+DO_CMP_PPZW_B(sve_cmplo_ppzw_b, uint8_t, uint64_t, <)
331
+DO_CMP_PPZW_H(sve_cmplo_ppzw_h, uint16_t, uint64_t, <)
332
+DO_CMP_PPZW_S(sve_cmplo_ppzw_s, uint32_t, uint64_t, <)
333
+
334
+DO_CMP_PPZW_B(sve_cmpls_ppzw_b, uint8_t, uint64_t, <=)
335
+DO_CMP_PPZW_H(sve_cmpls_ppzw_h, uint16_t, uint64_t, <=)
336
+DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
337
+
338
+#undef DO_CMP_PPZW_B
339
+#undef DO_CMP_PPZW_H
340
+#undef DO_CMP_PPZW_S
341
+#undef DO_CMP_PPZW
342
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
343
index XXXXXXX..XXXXXXX 100644
344
--- a/target/arm/translate-sve.c
345
+++ b/target/arm/translate-sve.c
346
@@ -XXX,XX +XXX,XX @@
347
#include "trace-tcg.h"
348
#include "translate-a64.h"
349
350
+
351
+typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
352
+ TCGv_ptr, TCGv_ptr, TCGv_i32);
353
+
354
/*
355
* Helpers for extracting complex instruction fields.
356
*/
357
@@ -XXX,XX +XXX,XX @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
358
return true;
359
}
360
361
+/*
362
+ *** SVE Integer Compare - Vectors Group
363
+ */
364
+
365
+static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
366
+ gen_helper_gvec_flags_4 *gen_fn)
367
+{
368
+ TCGv_ptr pd, zn, zm, pg;
369
+ unsigned vsz;
370
+ TCGv_i32 t;
371
+
372
+ if (gen_fn == NULL) {
373
+ return false;
374
+ }
375
+ if (!sve_access_check(s)) {
376
+ return true;
377
+ }
378
+
379
+ vsz = vec_full_reg_size(s);
380
+ t = tcg_const_i32(simd_desc(vsz, vsz, 0));
381
+ pd = tcg_temp_new_ptr();
382
+ zn = tcg_temp_new_ptr();
383
+ zm = tcg_temp_new_ptr();
384
+ pg = tcg_temp_new_ptr();
385
+
386
+ tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
387
+ tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
388
+ tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
389
+ tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
390
+
391
+ gen_fn(t, pd, zn, zm, pg, t);
392
+
393
+ tcg_temp_free_ptr(pd);
394
+ tcg_temp_free_ptr(zn);
395
+ tcg_temp_free_ptr(zm);
396
+ tcg_temp_free_ptr(pg);
397
+
398
+ do_pred_flags(t);
399
+
400
+ tcg_temp_free_i32(t);
401
+ return true;
402
+}
403
+
404
+#define DO_PPZZ(NAME, name) \
405
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a, \
406
+ uint32_t insn) \
407
+{ \
408
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
409
+ gen_helper_sve_##name##_ppzz_b, gen_helper_sve_##name##_ppzz_h, \
410
+ gen_helper_sve_##name##_ppzz_s, gen_helper_sve_##name##_ppzz_d, \
411
+ }; \
412
+ return do_ppzz_flags(s, a, fns[a->esz]); \
413
+}
414
+
415
+DO_PPZZ(CMPEQ, cmpeq)
416
+DO_PPZZ(CMPNE, cmpne)
417
+DO_PPZZ(CMPGT, cmpgt)
418
+DO_PPZZ(CMPGE, cmpge)
419
+DO_PPZZ(CMPHI, cmphi)
420
+DO_PPZZ(CMPHS, cmphs)
421
+
422
+#undef DO_PPZZ
423
+
424
+#define DO_PPZW(NAME, name) \
425
+static bool trans_##NAME##_ppzw(DisasContext *s, arg_rprr_esz *a, \
426
+ uint32_t insn) \
427
+{ \
428
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
429
+ gen_helper_sve_##name##_ppzw_b, gen_helper_sve_##name##_ppzw_h, \
430
+ gen_helper_sve_##name##_ppzw_s, NULL \
431
+ }; \
432
+ return do_ppzz_flags(s, a, fns[a->esz]); \
433
+}
434
+
435
+DO_PPZW(CMPEQ, cmpeq)
436
+DO_PPZW(CMPNE, cmpne)
437
+DO_PPZW(CMPGT, cmpgt)
438
+DO_PPZW(CMPGE, cmpge)
439
+DO_PPZW(CMPHI, cmphi)
440
+DO_PPZW(CMPHS, cmphs)
441
+DO_PPZW(CMPLT, cmplt)
442
+DO_PPZW(CMPLE, cmple)
443
+DO_PPZW(CMPLO, cmplo)
444
+DO_PPZW(CMPLS, cmpls)
445
+
446
+#undef DO_PPZW
447
+
448
/*
449
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
450
*/
451
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
452
index XXXXXXX..XXXXXXX 100644
453
--- a/target/arm/sve.decode
454
+++ b/target/arm/sve.decode
455
@@ -XXX,XX +XXX,XX @@
456
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
457
&rprr_esz rm=%reg_movprfx
458
@rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
459
+@pd_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 . rd:4 &rprr_esz
460
461
# Three register operand, with governing predicate, vector element size
462
@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
463
@@ -XXX,XX +XXX,XX @@ SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
464
# SVE select vector elements (predicated)
465
SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
466
467
+### SVE Integer Compare - Vectors Group
468
+
469
+# SVE integer compare_vectors
470
+CMPHS_ppzz 00100100 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_rm
471
+CMPHI_ppzz 00100100 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_rm
472
+CMPGE_ppzz 00100100 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
473
+CMPGT_ppzz 00100100 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
474
+CMPEQ_ppzz 00100100 .. 0 ..... 101 ... ..... 0 .... @pd_pg_rn_rm
475
+CMPNE_ppzz 00100100 .. 0 ..... 101 ... ..... 1 .... @pd_pg_rn_rm
476
+
477
+# SVE integer compare with wide elements
478
+# Note these require esz != 3.
479
+CMPEQ_ppzw 00100100 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_rm
480
+CMPNE_ppzw 00100100 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_rm
481
+CMPGE_ppzw 00100100 .. 0 ..... 010 ... ..... 0 .... @pd_pg_rn_rm
482
+CMPGT_ppzw 00100100 .. 0 ..... 010 ... ..... 1 .... @pd_pg_rn_rm
483
+CMPLT_ppzw 00100100 .. 0 ..... 011 ... ..... 0 .... @pd_pg_rn_rm
484
+CMPLE_ppzw 00100100 .. 0 ..... 011 ... ..... 1 .... @pd_pg_rn_rm
485
+CMPHS_ppzw 00100100 .. 0 ..... 110 ... ..... 0 .... @pd_pg_rn_rm
486
+CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
487
+CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
488
+CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
489
+
490
### SVE Predicate Logical Operations Group
491
492
# SVE predicate logical operations
493
--
244
--
494
2.17.1
245
2.25.1
495
496
The Cortex-M CPU and its NVIC are two intimately intertwined parts of
the same hardware; it is not possible to use one without the other.
Unfortunately a lot of our board models don't do any sanity checking
on the CPU type the user asks for, so a command line like
    qemu-system-arm -M versatilepb -cpu cortex-m3
will create an M3 without an NVIC, and coredump immediately.
In the other direction, trying a non-M-profile CPU in an M-profile
board won't blow up, but doesn't do anything useful either:
    qemu-system-arm -M lm3s6965evb -cpu arm926

Add some checking in the NVIC and CPU realize functions that the
user isn't trying to use an NVIC without an M-profile CPU or
an M-profile CPU without an NVIC, so we can produce a helpful
error message rather than a core dump.

Fixes: https://bugs.launchpad.net/qemu/+bug/1766896
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601160355.15393-1-peter.maydell@linaro.org
---
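In outline, the CPU-side validation added below reduces to two mirrored
realize-time tests (a condensed sketch; the armv7m_nvic.c hunk adds the
matching check on the NVIC side):

/* Sketch: mutual sanity check between CPU profile and NVIC presence. */
if (arm_feature(env, ARM_FEATURE_M)) {
    if (!env->nvic) {               /* M-profile CPU, board has no NVIC */
        error_setg(errp, "This board cannot be used with Cortex-M CPUs");
        return;
    }
} else if (env->nvic) {             /* NVIC wired to a non-M-profile CPU */
    error_setg(errp, "This board can only be used with Cortex-M CPUs");
    return;
}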
 hw/arm/armv7m.c | 7 ++++++-
 hw/intc/armv7m_nvic.c | 6 +++++-
 target/arm/cpu.c | 18 ++++++++++++++++++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
             return;
         }
     }
+
+    /* Tell the CPU where the NVIC is; it will fail realize if it doesn't
+     * have one.
+     */
+    s->cpu->env.nvic = &s->nvic;
+
     object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
     if (err != NULL) {
        error_propagate(errp, err);
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
     sbd = SYS_BUS_DEVICE(&s->nvic);
     sysbus_connect_irq(sbd, 0,
                        qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
-    s->cpu->env.nvic = &s->nvic;

     memory_region_add_subregion(&s->container, 0xe000e000,
                                 sysbus_mmio_get_region(sbd, 0));
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     int regionlen;

     s->cpu = ARM_CPU(qemu_get_cpu(0));
-    assert(s->cpu);
+
+    if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
+        error_setg(errp, "The NVIC can only be used with a Cortex-M CPU");
+        return;
+    }

     if (s->num_irq > NVIC_MAX_IRQ) {
         error_setg(errp, "num-irq %d exceeds NVIC maximum", s->num_irq);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         return;
     }

+#ifndef CONFIG_USER_ONLY
+    /* The NVIC and M-profile CPU are two halves of a single piece of
+     * hardware; trying to use one without the other is a command line
+     * error and will result in segfaults if not caught here.
+     */
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        if (!env->nvic) {
+            error_setg(errp, "This board cannot be used with Cortex-M CPUs");
+            return;
+        }
+    } else {
+        if (env->nvic) {
+            error_setg(errp, "This board can only be used with Cortex-M CPUs");
+            return;
+        }
+    }
+#endif
+
     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Use FIELD_DP{32,64} to manipulate id_pfr1 and id_aa64pfr0
during arm_cpu_realizefn.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         */
        unset_feature(env, ARM_FEATURE_EL3);

-        /* Disable the security extension feature bits in the processor feature
-         * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
+        /*
+         * Disable the security extension feature bits in the processor
+         * feature registers as well.
          */
-        cpu->isar.id_pfr1 &= ~0xf0;
-        cpu->isar.id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL3, 0);
     }

     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }

     if (!arm_feature(env, ARM_FEATURE_EL2)) {
-        /* Disable the hypervisor feature bits in the processor feature
-         * registers if we don't have EL2. These are id_pfr1[15:12] and
-         * id_aa64pfr0_el1[11:8].
+        /*
+         * Disable the hypervisor feature bits in the processor feature
+         * registers if we don't have EL2.
          */
-        cpu->isar.id_aa64pfr0 &= ~0xf00;
-        cpu->isar.id_pfr1 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL2, 0);
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
+                                       ID_PFR1, VIRTUALIZATION, 0);
     }

 #ifndef CONFIG_USER_ONLY
--
2.25.1
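The equivalence in the patch above is easy to check by hand: the old comment
records that ID_PFR1.Security occupies bits [7:4], so the literal mask and
the named-field write clear exactly the same bits (a sketch):

/* Old style: magic mask; the reader must know the field layout. */
cpu->isar.id_pfr1 &= ~0xf0;                      /* clears ID_PFR1[7:4] */

/* New style: same result, field named at the call site. */
cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);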
From: Richard Henderson <richard.henderson@linaro.org>

The only portion of FEAT_Debugv8p2 that is relevant to QEMU
is CONTEXTIDR_EL2, which is also conditionally implemented
with FEAT_VHE. The rest of the debug extension concerns the
External debug interface, which is outside the scope of QEMU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
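For context: ID_AA64DFR0.DEBUGVER is an enumerated version field, so
advertising the v8.2 debug architecture is a single field write in the
cpu64.c hunk below (the value encodings here are quoted from the Arm ARM,
not from this patch; 6 is baseline Armv8 debug, 8 is Armv8.2 debug):

t = cpu->isar.id_aa64dfr0;
t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8);  /* 8 => Armv8.2 debug */
cpu->isar.id_aa64dfr0 = t;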
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu.c | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/cpu_tcg.c | 2 ++
 4 files changed, 5 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BTI (Branch Target Identification)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
+- FEAT_Debugv8p2 (Debug changes for v8.2)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * feature registers as well.
          */
         cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
         cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                            ID_AA64PFR0, EL3, 0);
     }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;

     t = cpu->isar.id_aa64dfr0;
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;

     t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1
32
+
33
+uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
34
+{
35
+ uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
36
+ intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
37
+ uint64_t esz_mask = pred_esz_masks[esz];
38
+ ARMPredicateReg *d = vd;
39
+ uint32_t flags;
40
+ intptr_t i;
41
+
42
+ /* Begin with a zero predicate register. */
43
+ flags = do_zero(d, oprsz);
44
+ if (count == 0) {
45
+ return flags;
46
+ }
47
+
48
+ /* Scale from predicate element count to bits. */
49
+ count <<= esz;
50
+ /* Bound to the bits in the predicate. */
51
+ count = MIN(count, oprsz * 8);
52
+
53
+ /* Set all of the requested bits. */
54
+ for (i = 0; i < count / 64; ++i) {
55
+ d->p[i] = esz_mask;
56
+ }
57
+ if (count & 63) {
58
+ d->p[i] = MAKE_64BIT_MASK(0, count & 63) & esz_mask;
59
+ }
60
+
61
+ return predtest_ones(d, oprsz, esz_mask);
62
+}
63
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/translate-sve.c
66
+++ b/target/arm/translate-sve.c
67
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
68
return true;
69
}
70
71
+/*
72
+ *** SVE Integer Compare Scalars Group
73
+ */
74
+
75
+static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
76
+{
77
+ if (!sve_access_check(s)) {
78
+ return true;
79
+ }
80
+
81
+ TCGCond cond = (a->ne ? TCG_COND_NE : TCG_COND_EQ);
82
+ TCGv_i64 rn = read_cpu_reg(s, a->rn, a->sf);
83
+ TCGv_i64 rm = read_cpu_reg(s, a->rm, a->sf);
84
+ TCGv_i64 cmp = tcg_temp_new_i64();
85
+
86
+ tcg_gen_setcond_i64(cond, cmp, rn, rm);
87
+ tcg_gen_extrl_i64_i32(cpu_NF, cmp);
88
+ tcg_temp_free_i64(cmp);
89
+
90
+ /* VF = !NF & !CF. */
91
+ tcg_gen_xori_i32(cpu_VF, cpu_NF, 1);
92
+ tcg_gen_andc_i32(cpu_VF, cpu_VF, cpu_CF);
93
+
94
+ /* Both NF and VF actually look at bit 31. */
95
+ tcg_gen_neg_i32(cpu_NF, cpu_NF);
96
+ tcg_gen_neg_i32(cpu_VF, cpu_VF);
97
+ return true;
98
+}
99
+
100
+static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
101
+{
102
+ if (!sve_access_check(s)) {
103
+ return true;
104
+ }
105
+
106
+ TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
107
+ TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
108
+ TCGv_i64 t0 = tcg_temp_new_i64();
109
+ TCGv_i64 t1 = tcg_temp_new_i64();
110
+ TCGv_i32 t2, t3;
111
+ TCGv_ptr ptr;
112
+ unsigned desc, vsz = vec_full_reg_size(s);
113
+ TCGCond cond;
114
+
115
+ if (!a->sf) {
116
+ if (a->u) {
117
+ tcg_gen_ext32u_i64(op0, op0);
118
+ tcg_gen_ext32u_i64(op1, op1);
119
+ } else {
120
+ tcg_gen_ext32s_i64(op0, op0);
121
+ tcg_gen_ext32s_i64(op1, op1);
122
+ }
123
+ }
124
+
125
+ /* For the helper, compress the different conditions into a computation
126
+ * of how many iterations for which the condition is true.
127
+ *
128
+ * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
129
+ * 2**64 iterations, overflowing to 0. Of course, predicate registers
130
+ * aren't that large, so any value >= predicate size is sufficient.
131
+ */
132
+ tcg_gen_sub_i64(t0, op1, op0);
133
+
134
+ /* t0 = MIN(op1 - op0, vsz). */
135
+ tcg_gen_movi_i64(t1, vsz);
136
+ tcg_gen_umin_i64(t0, t0, t1);
137
+ if (a->eq) {
138
+ /* Equality means one more iteration. */
139
+ tcg_gen_addi_i64(t0, t0, 1);
140
+ }
141
+
142
+ /* t0 = (condition true ? t0 : 0). */
143
+ cond = (a->u
144
+ ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
145
+ : (a->eq ? TCG_COND_LE : TCG_COND_LT));
146
+ tcg_gen_movi_i64(t1, 0);
147
+ tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
148
+
149
+ t2 = tcg_temp_new_i32();
150
+ tcg_gen_extrl_i64_i32(t2, t0);
151
+ tcg_temp_free_i64(t0);
152
+ tcg_temp_free_i64(t1);
153
+
154
+ desc = (vsz / 8) - 2;
155
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
156
+ t3 = tcg_const_i32(desc);
157
+
158
+ ptr = tcg_temp_new_ptr();
159
+ tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
160
+
161
+ gen_helper_sve_while(t2, ptr, t2, t3);
162
+ do_pred_flags(t2);
163
+
164
+ tcg_temp_free_ptr(ptr);
165
+ tcg_temp_free_i32(t2);
166
+ tcg_temp_free_i32(t3);
167
+ return true;
168
+}
169
+
170
/*
171
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
172
*/
173
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
174
index XXXXXXX..XXXXXXX 100644
175
--- a/target/arm/sve.decode
176
+++ b/target/arm/sve.decode
177
@@ -XXX,XX +XXX,XX @@ SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
178
# SVE saturating inc/dec vector by predicate count
179
SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
180
181
+### SVE Integer Compare - Scalars Group
182
+
183
+# SVE conditionally terminate scalars
184
+CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
185
+
186
+# SVE integer compare scalar count and limit
187
+WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
188
+
189
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
190
191
# SVE load predicate register
192
--
68
--
193
2.17.1
69
2.25.1
194
195
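The reason a DEBUGVER bump is all that is needed on the CPU side: QEMU derives feature presence from the ID registers via the isar_feature_* predicates in target/arm/cpu.h. A simplified sketch of that pattern (DEBUGVER is ID_AA64DFR0_EL1 bits [3:0]; the value 8 encodes Debugv8.2):

#include <stdbool.h>
#include <stdint.h>

/* Extract an unsigned bitfield [shift, shift+len) from a 64-bit register. */
static inline uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((UINT64_C(1) << len) - 1);
}

/* Sketch of an isar_feature_*-style test: Debugv8.2 is DEBUGVER >= 8. */
static inline bool feat_debugv8p2(uint64_t id_aa64dfr0)
{
    return field_ex64(id_aa64dfr0, 0, 4) >= 8;
}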
From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns changes to the External Debug interface,
with Secure and Non-secure access to the debug registers, and all
of it is outside the scope of QEMU. Indicating support for this
is mandatory with FEAT_SEL2, which we do implement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 2 +-
 target/arm/cpu_tcg.c          | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
+- FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8);  /* FEAT_Debugv8p2 */
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 9);  /* FEAT_Debugv8p4 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5);    /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8);        /* FEAT_Debugv8p2 */
-    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8);       /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 9);        /* FEAT_Debugv8p4 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 9);       /* FEAT_Debugv8p4 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5);       /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1
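Correspondingly, a Debugv8.4 test only raises the version threshold; nothing else about the CPU-side model changes. Same sketch conventions as the snippet above, with 9 as the Debugv8.4 encoding of DEBUGVER:

#include <stdbool.h>
#include <stdint.h>

static inline uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((UINT64_C(1) << len) - 1);
}

/* Debugv8.4 is DEBUGVER >= 9; Debugv8.2 remains implied (>= 8). */
static inline bool feat_debugv8p4(uint64_t id_aa64dfr0)
{
    return field_ex64(id_aa64dfr0, 0, 4) >= 9;
}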
From: Richard Henderson <richard.henderson@linaro.org>

Add only the system registers required to implement zero error
records. This means that all values for ERRSELR are out of range,
which means that it and all of the indexed error record registers
need not be implemented.

Add the EL2 registers required for injecting virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  5 +++
 target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         uint64_t tfsr_el[4]; /* tfsre0_el1 is index 0. */
         uint64_t gcr_el1;
         uint64_t rgsr_el1;
+
+        /* Minimal RAS registers */
+        uint64_t disr_el1;
+        uint64_t vdisr_el2;
+        uint64_t vsesr_el2;
     } cp15;
 
     struct {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
       .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
 };
 
+/*
+ * Check for traps to RAS registers, which are controlled
+ * by HCR_EL2.TERR and SCR_EL3.TERR.
+ */
+static CPAccessResult access_terr(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  bool isread)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TERR)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_TERR)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static uint64_t disr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        return env->cp15.vdisr_el2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return 0; /* RAZ/WI */
+    }
+    return env->cp15.disr_el1;
+}
+
+static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        env->cp15.vdisr_el2 = val;
+        return;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return; /* RAZ/WI */
+    }
+    env->cp15.disr_el1 = val;
+}
+
+/*
+ * Minimal RAS implementation with no Error Records.
+ * Which means that all of the Error Record registers:
+ *   ERXADDR_EL1
+ *   ERXCTLR_EL1
+ *   ERXFR_EL1
+ *   ERXMISC0_EL1
+ *   ERXMISC1_EL1
+ *   ERXMISC2_EL1
+ *   ERXMISC3_EL1
+ *   ERXPFGCDN_EL1  (RASv1p1)
+ *   ERXPFGCTL_EL1  (RASv1p1)
+ *   ERXPFGF_EL1    (RASv1p1)
+ *   ERXSTATUS_EL1
+ * and
+ *   ERRSELR_EL1
+ * may generate UNDEFINED, which is the effect we get by not
+ * listing them at all.
+ */
+static const ARMCPRegInfo minimal_ras_reginfo[] = {
+    { .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.disr_el1),
+      .readfn = disr_read, .writefn = disr_write, .raw_writefn = raw_write },
+    { .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
+      .access = PL1_R, .accessfn = access_terr,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vdisr_el2) },
+    { .name = "VSESR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 3,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
+};
+
 /* Return the exception level to which exceptions should be taken
  * via SVEAccessTrap. If an exception should be routed through
  * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_ssbs, cpu)) {
         define_one_arm_cp_reg(cpu, &ssbs_reginfo);
     }
+    if (cpu_isar_feature(any_ras, cpu)) {
+        define_arm_cp_regs(cpu, minimal_ras_reginfo);
+    }
 
     if (cpu_isar_feature(aa64_vh, cpu) ||
         cpu_isar_feature(aa64_debugv8p2, cpu)) {
--
2.25.1
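To make the DISR_EL1 routing above concrete: with HCR_EL2.AMO set, an EL1 access is redirected to VDISR_EL2, and with SCR_EL3.EA set (and no EL2 redirect) it is RAZ/WI. A behavioural model of just that routing, with a hand-rolled struct standing in for CPUARMState:

/*
 * Toy model of disr_read() above: virtualised, RAZ/WI, or direct.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpu_model {
    int el;                 /* current exception level */
    bool hcr_amo;           /* HCR_EL2.AMO */
    bool scr_ea;            /* SCR_EL3.EA */
    uint64_t disr_el1, vdisr_el2;
};

static uint64_t model_disr_read(const struct cpu_model *c)
{
    if (c->el < 2 && c->hcr_amo) {
        return c->vdisr_el2;        /* redirected to the virtual copy */
    }
    if (c->el < 3 && c->scr_ea) {
        return 0;                   /* RAZ/WI: the register is EL3's */
    }
    return c->disr_el1;
}

int main(void)
{
    struct cpu_model c = { .el = 1, .hcr_amo = true,
                           .vdisr_el2 = UINT64_C(1) << 31 };

    printf("DISR_EL1 reads as 0x%" PRIx64 "\n", model_disr_read(&c));
    return 0;
}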
From: Richard Henderson <richard.henderson@linaro.org>

Enable writes to the TERR and TEA bits when RAS is enabled.
These bits are otherwise RES0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
         valid_mask &= ~SCR_NET;
 
+        if (cpu_isar_feature(aa64_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
         if (cpu_isar_feature(aa64_lor, cpu)) {
             valid_mask |= SCR_TLOR;
         }
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
+        if (cpu_isar_feature(aa32_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
         if (cpu_isar_feature(aa64_vh, cpu)) {
             valid_mask |= HCR_E2H;
         }
+        if (cpu_isar_feature(aa64_ras, cpu)) {
+            valid_mask |= HCR_TERR | HCR_TEA;
+        }
         if (cpu_isar_feature(aa64_lor, cpu)) {
             valid_mask |= HCR_TLOR;
         }
--
2.25.1
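The valid_mask idiom in scr_write()/do_hcr_write() is what makes these bits RES0 without RAS: a write only keeps bits present in the mask. A minimal standalone sketch (the TERR/TEA bit positions, 36 and 37, match the architectural HCR_EL2 layout, but treat the exact constants here as illustrative stand-ins):

#include <stdbool.h>
#include <stdint.h>

#define HCR_TERR (1ULL << 36)
#define HCR_TEA  (1ULL << 37)

/* Sketch of the masked-write pattern: RES0 bits are dropped on write. */
static uint64_t hcr_write_model(uint64_t value, uint64_t base_valid_mask,
                                bool have_ras)
{
    uint64_t valid_mask = base_valid_mask;

    if (have_ras) {
        valid_mask |= HCR_TERR | HCR_TEA;  /* writable only with FEAT_RAS */
    }
    return value & valid_mask;             /* everything else stays zero */
}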
From: Richard Henderson <richard.henderson@linaro.org>

Virtual SError exceptions are raised by setting HCR_EL2.VSE,
and are routed to EL1 just like other virtual exceptions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  2 ++
 target/arm/internals.h |  8 ++++++++
 target/arm/syndrome.h  |  5 +++++
 target/arm/cpu.c       | 38 +++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c    | 40 +++++++++++++++++++++++++++++++++++++++-
 5 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_LSERR 21 /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
 #define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
+#define EXCP_VSERR 24
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET 1
@@ -XXX,XX +XXX,XX @@ enum {
 #define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
 #define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+/**
+ * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
+ *
+ * Update the CPU_INTERRUPT_VSERR bit in cs->interrupt_request,
+ * following a change to the HCR_EL2.VSE bit.
+ */
+void arm_cpu_update_vserr(ARMCPU *cpu);
+
 /**
  * arm_mmu_idx_el:
  * @env: The cpu environment
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_pcalignment(void)
     return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
 }
 
+static inline uint32_t syn_serror(uint32_t extra)
+{
+    return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
+}
+
 #endif /* TARGET_ARM_SYNDROME_H */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
-         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
+         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
         | CPU_INTERRUPT_EXITTB);
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             return false;
         }
         return !(env->daif & PSTATE_I);
+    case EXCP_VSERR:
+        if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+            /* VIRQs are only taken when hypervized. */
+            return false;
+        }
+        return !(env->daif & PSTATE_A);
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+    if (interrupt_request & CPU_INTERRUPT_VSERR) {
+        excp_idx = EXCP_VSERR;
+        target_el = 1;
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            /* Taking a virtual abort clears HCR_EL2.VSE */
+            env->cp15.hcr_el2 &= ~HCR_VSE;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+            goto found;
+        }
+    }
     return false;
 
 found:
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+        }
+    }
+}
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }
 
-    /* External aborts are not possible in QEMU so A bit is always clear */
+    if (hcr_el2 & HCR_AMO) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VSERR) {
+            ret |= CPSR_A;
+        }
+    }
+
     return ret;
 }
 
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     g_assert(qemu_mutex_iothread_locked());
     arm_cpu_update_virq(cpu);
     arm_cpu_update_vfiq(cpu);
+    arm_cpu_update_vserr(cpu);
 }
 
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
         [EXCP_LSERR] = "v8M LSERR UsageFault",
         [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
         [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
+        [EXCP_VSERR] = "Virtual SERR",
     };
 
     if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
         mask = CPSR_A | CPSR_I | CPSR_F;
         offset = 4;
         break;
+    case EXCP_VSERR:
+        {
+            /*
+             * Note that this is reported as a data abort, but the DFAR
+             * has an UNKNOWN value. Construct the SError syndrome from
+             * AET and ExT fields.
+             */
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal, };
+
+            if (extended_addresses_enabled(env)) {
+                env->exception.fsr = arm_fi_to_lfsc(&fi);
+            } else {
+                env->exception.fsr = arm_fi_to_sfsc(&fi);
+            }
+            env->exception.fsr |= env->cp15.vsesr_el2 & 0xd000;
+            A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+            qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x\n",
+                          env->exception.fsr);
+
+            new_mode = ARM_CPU_MODE_ABT;
+            addr = 0x10;
+            mask = CPSR_A | CPSR_I;
+            offset = 8;
+        }
+        break;
     case EXCP_SMC:
         new_mode = ARM_CPU_MODE_MON;
         addr = 0x08;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_VFIQ:
         addr += 0x100;
         break;
+    case EXCP_VSERR:
+        addr += 0x180;
+        /* Construct the SError syndrome from IDS and ISS fields. */
+        env->exception.syndrome = syn_serror(env->cp15.vsesr_el2 & 0x1ffffff);
+        env->cp15.esr_el[new_el] = env->exception.syndrome;
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
     }
--
2.25.1
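The unmasking rule in the patch above is compact and easy to misread: a virtual
SError is pending only while HCR_EL2.VSE is set, routable only while HCR_EL2.AMO
is set and HCR_EL2.TGE is clear, and deliverable only while PSTATE.A is clear.
The standalone C sketch below is not part of the patch; it just models that
predicate, with bit positions taken from the v8-A definitions of HCR_EL2 and
DAIF.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HCR_AMO   (1ULL << 5)   /* route asynchronous aborts to EL2 */
#define HCR_VSE   (1ULL << 8)   /* virtual SError pending */
#define HCR_TGE   (1ULL << 27)  /* trap general exceptions to EL2 */
#define PSTATE_A  (1U << 8)     /* SError mask bit within DAIF */

/* Mirrors the EXCP_VSERR arm of arm_excp_unmasked() quoted above. */
static bool vserr_deliverable(uint64_t hcr_el2, uint32_t daif)
{
    if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
        return false;
    }
    return !(daif & PSTATE_A);
}

int main(void)
{
    uint64_t hcr = HCR_AMO | HCR_VSE;

    printf("pending=%d deliverable(unmasked)=%d deliverable(masked)=%d\n",
           !!(hcr & HCR_VSE),
           vserr_deliverable(hcr, 0),
           vserr_deliverable(hcr, PSTATE_A));
    return 0;
}

Running it prints pending=1 deliverable(unmasked)=1 deliverable(masked)=0; that
last, masked case is exactly the window the ESB instruction in the next patch
exists to drain.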
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  23 +++++++
 target/arm/sve_helper.c    | 114 +++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++++
 4 files changed, 297 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 
 DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_insr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_3(sve_rev_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
         memcpy(vd + n_siz, &tmp, n_ofs);
     }
 }
+
+#define DO_INSR(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc) \
+{ \
+    intptr_t opr_sz = simd_oprsz(desc); \
+    swap_memmove(vd + sizeof(TYPE), vn, opr_sz - sizeof(TYPE)); \
+    *(TYPE *)(vd + H(0)) = val; \
+}
+
+DO_INSR(sve_insr_b, uint8_t, H1)
+DO_INSR(sve_insr_h, uint16_t, H1_2)
+DO_INSR(sve_insr_s, uint32_t, H1_4)
+DO_INSR(sve_insr_d, uint64_t, )
+
+#undef DO_INSR
+
+void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = bswap64(b);
+        *(uint64_t *)(vd + j) = bswap64(f);
+    }
+}
+
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = hswap64(b);
+        *(uint64_t *)(vd + j) = hswap64(f);
+    }
+}
+
+void HELPER(sve_rev_s)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = rol64(b, 32);
+        *(uint64_t *)(vd + j) = rol64(f, 32);
+    }
+}
+
+void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = b;
+        *(uint64_t *)(vd + j) = f;
+    }
+}
+
+#define DO_TBL(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{ \
+    intptr_t i, opr_sz = simd_oprsz(desc); \
+    uintptr_t elem = opr_sz / sizeof(TYPE); \
+    TYPE *d = vd, *n = vn, *m = vm; \
+    ARMVectorReg tmp; \
+    if (unlikely(vd == vn)) { \
+        n = memcpy(&tmp, vn, opr_sz); \
+    } \
+    for (i = 0; i < elem; i++) { \
+        TYPE j = m[H(i)]; \
+        d[H(i)] = j < elem ? n[H(j)] : 0; \
+    } \
+}
+
+DO_TBL(sve_tbl_b, uint8_t, H1)
+DO_TBL(sve_tbl_h, uint16_t, H2)
+DO_TBL(sve_tbl_s, uint32_t, H4)
+DO_TBL(sve_tbl_d, uint64_t, )
+
+#undef TBL
+
+#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
+{ \
+    intptr_t i, opr_sz = simd_oprsz(desc); \
+    TYPED *d = vd; \
+    TYPES *n = vn; \
+    ARMVectorReg tmp; \
+    if (unlikely(vn - vd < opr_sz)) { \
+        n = memcpy(&tmp, n, opr_sz / 2); \
+    } \
+    for (i = 0; i < opr_sz / sizeof(TYPED); i++) { \
+        d[HD(i)] = n[HS(i)]; \
+    } \
+}
+
+DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
+DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
+
+DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
+DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
+
+#undef DO_UNPK
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
     return true;
 }
 
+/*
+ *** SVE Permute - Unpredicated Group
+ */
+
+static bool trans_DUP_s(DisasContext *s, arg_DUP_s *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_dup_i64(a->esz, vec_full_reg_offset(s, a->rd),
+                             vsz, vsz, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a, uint32_t insn)
+{
+    if ((a->imm & 0x1f) == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        unsigned dofs = vec_full_reg_offset(s, a->rd);
+        unsigned esz, index;
+
+        esz = ctz32(a->imm);
+        index = a->imm >> (esz + 1);
+
+        if ((index << esz) < vsz) {
+            unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
+            tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
+        } else {
+            tcg_gen_gvec_dup64i(dofs, vsz, vsz, 0);
+        }
+    }
+    return true;
+}
+
+static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
+{
+    typedef void gen_insr(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
+    static gen_insr * const fns[4] = {
+        gen_helper_sve_insr_b, gen_helper_sve_insr_h,
+        gen_helper_sve_insr_s, gen_helper_sve_insr_d,
+    };
+    unsigned vsz = vec_full_reg_size(s);
+    TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    TCGv_ptr t_zd = tcg_temp_new_ptr();
+    TCGv_ptr t_zn = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
+
+    fns[a->esz](t_zd, t_zn, val, desc);
+
+    tcg_temp_free_ptr(t_zd);
+    tcg_temp_free_ptr(t_zn);
+    tcg_temp_free_i32(desc);
+}
+
+static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 t = tcg_temp_new_i64();
+        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
+        do_insr_i64(s, a, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
+static bool trans_INSR_r(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_insr_i64(s, a, cpu_reg(s, a->rm));
+    }
+    return true;
+}
+
+static bool trans_REV_v(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4] = {
+        gen_helper_sve_rev_b, gen_helper_sve_rev_h,
+        gen_helper_sve_rev_s, gen_helper_sve_rev_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_TBL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_tbl_b, gen_helper_sve_tbl_h,
+        gen_helper_sve_tbl_s, gen_helper_sve_tbl_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_sve_sunpk_h, gen_helper_sve_uunpk_h },
+        { gen_helper_sve_sunpk_s, gen_helper_sve_uunpk_s },
+        { gen_helper_sve_sunpk_d, gen_helper_sve_uunpk_d },
+    };
+
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn)
+                           + (a->h ? vsz / 2 : 0),
+                           vsz, vsz, 0, fns[a->esz][a->u]);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 
 %imm4_16_p1     16:4 !function=plus1
 %imm6_22_5      22:1 5:5
+%imm7_22_16     22:2 16:5
 %imm8_16_10     16:5 10:3
 %imm9_16_10     16:s6 10:3
 
@@ -XXX,XX +XXX,XX @@
 
 # Three operand, vector element size
 @rd_rn_rm       ........ esz:2 . rm:5 ... ... rn:5 rd:5         &rrr_esz
+@rdn_rm         ........ esz:2 ...... ...... rm:5 rd:5 \
+                &rrr_esz rn=%reg_movprfx
 
 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm   ........ ... rm:5 .... imm:2 rn:5 rd:5          &rrri
@@ -XXX,XX +XXX,XX @@ CPY_z_i         00000101 .. 01 .... 00 . ........ .....   @rdn_pg4 imm=%sh8_i8s
 EXT             00000101 001 ..... 000 ... rm:5 rd:5 \
                 &rrri rn=%reg_movprfx imm=%imm8_16_10
 
+### SVE Permute - Unpredicated Group
+
+# SVE broadcast general register
+DUP_s           00000101 .. 1 00000 001110 ..... .....          @rd_rn
+
+# SVE broadcast indexed element
+DUP_x           00000101 .. 1 ..... 001000 rn:5 rd:5 \
+                &rri imm=%imm7_22_16
+
+# SVE insert SIMD&FP scalar register
+INSR_f          00000101 .. 1 10100 001110 ..... .....          @rdn_rm
+
+# SVE insert general register
+INSR_r          00000101 .. 1 00100 001110 ..... .....          @rdn_rm
+
+# SVE reverse vector elements
+REV_v           00000101 .. 1 11000 001110 ..... .....          @rd_rn
+
+# SVE vector table lookup
+TBL             00000101 .. 1 ..... 001100 ..... .....          @rd_rn_rm
+
+# SVE unpack vector elements
+UNPK            00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Check for and defer any pending virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h        |  1 +
 target/arm/a32.decode      | 16 ++++++++------
 target/arm/t32.decode      | 18 ++++++++--------
 target/arm/op_helper.c     | 43 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-a64.c | 17 +++++++++++++++
 target/arm/translate.c     | 23 ++++++++++++++++++++
 6 files changed, 103 insertions(+), 15 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(wfe, void, env)
 DEF_HELPER_1(yield, void, env)
 DEF_HELPER_1(pre_hvc, void, env)
 DEF_HELPER_2(pre_smc, void, env, i32)
+DEF_HELPER_1(vesb, void, env)
 
 DEF_HELPER_3(cpsr_write, void, env, i32, i32)
 DEF_HELPER_2(cpsr_write_eret, void, env, i32)
diff --git a/target/arm/a32.decode b/target/arm/a32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/a32.decode
+++ b/target/arm/a32.decode
@@ -XXX,XX +XXX,XX @@ SMULTT           .... 0001 0110 .... 0000 .... 1110 ....      @rd0mn
 
 {
   {
-    YIELD        ---- 0011 0010 0000 1111 ---- 0000 0001
-    WFE          ---- 0011 0010 0000 1111 ---- 0000 0010
-    WFI          ---- 0011 0010 0000 1111 ---- 0000 0011
+    [
+      YIELD      ---- 0011 0010 0000 1111 ---- 0000 0001
+      WFE        ---- 0011 0010 0000 1111 ---- 0000 0010
+      WFI        ---- 0011 0010 0000 1111 ---- 0000 0011
 
-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV      ---- 0011 0010 0000 1111 ---- 0000 0100
-    # SEVL     ---- 0011 0010 0000 1111 ---- 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV    ---- 0011 0010 0000 1111 ---- 0000 0100
+      # SEVL   ---- 0011 0010 0000 1111 ---- 0000 0101
+
+      ESB      ---- 0011 0010 0000 1111 ---- 0001 0000
+    ]
 
     # The canonical nop ends in 00000000, but the whole of the
     # rest of the space executes as nop if otherwise unsupported.
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ              1111 1010 1011 ---- 1111 .... 1000 ....      @rdm
 [
   # Hints, and CPS
   {
-    YIELD        1111 0011 1010 1111 1000 0000 0000 0001
-    WFE          1111 0011 1010 1111 1000 0000 0000 0010
-    WFI          1111 0011 1010 1111 1000 0000 0000 0011
+    [
+      YIELD      1111 0011 1010 1111 1000 0000 0000 0001
+      WFE        1111 0011 1010 1111 1000 0000 0000 0010
+      WFI        1111 0011 1010 1111 1000 0000 0000 0011
 
-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV      1111 0011 1010 1111 1000 0000 0000 0100
-    # SEVL     1111 0011 1010 1111 1000 0000 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV    1111 0011 1010 1111 1000 0000 0000 0100
+      # SEVL   1111 0011 1010 1111 1000 0000 0000 0101
 
-    # For M-profile minimal-RAS ESB can be a NOP, which is the
-    # default behaviour since it is in the hint space.
-    # ESB    1111 0011 1010 1111 1000 0000 0001 0000
+      ESB      1111 0011 1010 1111 1000 0000 0001 0000
+    ]
 
   # The canonical nop ends in 0000 0000, but the whole rest
   # of the space is "reserved hint, behaves as nop".
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(probe_access)(CPUARMState *env, target_ulong ptr,
                      access_type, mmu_idx, ra);
     }
 }
+
+/*
+ * This function corresponds to AArch64.vESBOperation().
+ * Note that the AArch32 version is not functionally different.
+ */
+void HELPER(vesb)(CPUARMState *env)
+{
+    /*
+     * The EL2Enabled() check is done inside arm_hcr_el2_eff,
+     * and will return HCR_EL2.VSE == 0, so nothing happens.
+     */
+    uint64_t hcr = arm_hcr_el2_eff(env);
+    bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
+    bool pending = enabled && (hcr & HCR_VSE);
+    bool masked = (env->daif & PSTATE_A);
+
+    /* If VSE pending and masked, defer the exception.  */
+    if (pending && masked) {
+        uint32_t syndrome;
+
+        if (arm_el_is_aa64(env, 1)) {
+            /* Copy across IDS and ISS from VSESR. */
+            syndrome = env->cp15.vsesr_el2 & 0x1ffffff;
+        } else {
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal };
+
+            if (extended_addresses_enabled(env)) {
+                syndrome = arm_fi_to_lfsc(&fi);
+            } else {
+                syndrome = arm_fi_to_sfsc(&fi);
+            }
+            /* Copy across AET and ExT from VSESR. */
+            syndrome |= env->cp15.vsesr_el2 & 0xd000;
+        }
+
+        /* Set VDISR_EL2.A along with the syndrome. */
+        env->cp15.vdisr_el2 = syndrome | (1u << 31);
+
+        /* Clear pending virtual SError */
+        env->cp15.hcr_el2 &= ~HCR_VSE;
+        cpu_reset_interrupt(env_cpu(env), CPU_INTERRUPT_VSERR);
+    }
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
             gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
         }
         break;
+    case 0b10000: /* ESB */
+        /* Without RAS, we must implement this as NOP. */
+        if (dc_isar_feature(aa64_ras, s)) {
+            /*
+             * QEMU does not have a source of physical SErrors,
+             * so we are only concerned with virtual SErrors.
+             * The pseudocode in the ARM for this case is
+             *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+             *      AArch64.vESBOperation();
+             * Most of the condition can be evaluated at translation time.
+             * Test for EL2 present, and defer test for SEL2 to runtime.
+             */
+            if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+                gen_helper_vesb(cpu_env);
+            }
+        }
+        break;
     case 0b11000: /* PACIAZ */
         if (s->pauth_active) {
             gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WFI(DisasContext *s, arg_WFI *a)
     return true;
 }
 
+static bool trans_ESB(DisasContext *s, arg_ESB *a)
+{
+    /*
+     * For M-profile, minimal-RAS ESB can be a NOP.
+     * Without RAS, we must implement this as NOP.
+     */
+    if (!arm_dc_feature(s, ARM_FEATURE_M) && dc_isar_feature(aa32_ras, s)) {
+        /*
+         * QEMU does not have a source of physical SErrors,
+         * so we are only concerned with virtual SErrors.
+         * The pseudocode in the ARM for this case is
+         *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+         *      AArch32.vESBOperation();
+         * Most of the condition can be evaluated at translation time.
+         * Test for EL2 present, and defer test for SEL2 to runtime.
+         */
+        if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+            gen_helper_vesb(cpu_env);
+        }
+    }
+    return true;
+}
+
 static bool trans_NOP(DisasContext *s, arg_NOP *a)
 {
     return true;
--
2.25.1

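HELPER(vesb) above reduces the architectural vESBOperation() to three booleans
plus a syndrome write. The following is a standalone toy model, not QEMU code;
it keeps only the AArch64-EL1 syndrome path and only the register bits the
patch actually touches, so the state machine is easy to step through by hand.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HCR_AMO   (1ULL << 5)
#define HCR_VSE   (1ULL << 8)
#define HCR_TGE   (1ULL << 27)
#define PSTATE_A  (1U << 8)
#define VDISR_A   (1UL << 31)   /* VDISR_EL2.A: a deferred SError is recorded */

struct toy_env {
    uint64_t hcr_el2;
    uint64_t vsesr_el2;
    uint32_t daif;
    uint64_t vdisr_el2;
};

/* Returns true when the ESB deferred a pending, masked virtual SError. */
static bool toy_vesb(struct toy_env *env)
{
    bool enabled = !(env->hcr_el2 & HCR_TGE) && (env->hcr_el2 & HCR_AMO);
    bool pending = enabled && (env->hcr_el2 & HCR_VSE);
    bool masked = env->daif & PSTATE_A;

    if (pending && masked) {
        /* AArch64 EL1 case: IDS and ISS come straight from VSESR_EL2. */
        uint64_t syndrome = env->vsesr_el2 & 0x1ffffff;

        env->vdisr_el2 = syndrome | VDISR_A;
        env->hcr_el2 &= ~HCR_VSE;   /* the SError is no longer pending */
        return true;
    }
    return false;
}

int main(void)
{
    struct toy_env env = { .hcr_el2 = HCR_AMO | HCR_VSE, .daif = PSTATE_A };

    printf("deferred=%d vdisr=0x%llx\n", toy_vesb(&env),
           (unsigned long long)env.vdisr_el2);
    return 0;
}

With VSE pending and PSTATE.A set, the model reports deferred=1 and leaves
VDISR_EL2.A set, mirroring the "pending && masked" branch in the real helper;
with PSTATE.A clear nothing is deferred, because the SError would simply be
taken.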
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 44 +++++++++++++++++++
 target/arm/sve_helper.c    | 88 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 66 ++++++++++++++++++++++++++++
 target/arm/sve.decode      | 23 ++++++++++
 4 files changed, 221 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
                    i32, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
 #undef DO_CMP_PPZW_H
 #undef DO_CMP_PPZW_S
 #undef DO_CMP_PPZW
+
+/* Similar, but the second source is immediate. */
+#define DO_CMP_PPZI(NAME, TYPE, OP, H, MASK) \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
+{ \
+    intptr_t opr_sz = simd_oprsz(desc); \
+    uint32_t flags = PREDTEST_INIT; \
+    TYPE mm = simd_data(desc); \
+    intptr_t i = opr_sz; \
+    do { \
+        uint64_t out = 0, pg; \
+        do { \
+            i -= sizeof(TYPE), out <<= sizeof(TYPE); \
+            TYPE nn = *(TYPE *)(vn + H(i)); \
+            out |= nn OP mm; \
+        } while (i & 63); \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
+        out &= pg; \
+        *(uint64_t *)(vd + (i >> 3)) = out; \
+        flags = iter_predtest_bwd(out, pg, flags); \
+    } while (i > 0); \
+    return flags; \
+}
+
+#define DO_CMP_PPZI_B(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1, 0xffffffffffffffffull)
+#define DO_CMP_PPZI_H(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZI_S(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
+#define DO_CMP_PPZI_D(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP,     , 0x0101010101010101ull)
+
+DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t, ==)
+DO_CMP_PPZI_H(sve_cmpeq_ppzi_h, uint16_t, ==)
+DO_CMP_PPZI_S(sve_cmpeq_ppzi_s, uint32_t, ==)
+DO_CMP_PPZI_D(sve_cmpeq_ppzi_d, uint64_t, ==)
+
+DO_CMP_PPZI_B(sve_cmpne_ppzi_b, uint8_t, !=)
+DO_CMP_PPZI_H(sve_cmpne_ppzi_h, uint16_t, !=)
+DO_CMP_PPZI_S(sve_cmpne_ppzi_s, uint32_t, !=)
+DO_CMP_PPZI_D(sve_cmpne_ppzi_d, uint64_t, !=)
+
+DO_CMP_PPZI_B(sve_cmpgt_ppzi_b, int8_t, >)
+DO_CMP_PPZI_H(sve_cmpgt_ppzi_h, int16_t, >)
+DO_CMP_PPZI_S(sve_cmpgt_ppzi_s, int32_t, >)
+DO_CMP_PPZI_D(sve_cmpgt_ppzi_d, int64_t, >)
+
+DO_CMP_PPZI_B(sve_cmpge_ppzi_b, int8_t, >=)
+DO_CMP_PPZI_H(sve_cmpge_ppzi_h, int16_t, >=)
+DO_CMP_PPZI_S(sve_cmpge_ppzi_s, int32_t, >=)
+DO_CMP_PPZI_D(sve_cmpge_ppzi_d, int64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmphi_ppzi_b, uint8_t, >)
+DO_CMP_PPZI_H(sve_cmphi_ppzi_h, uint16_t, >)
+DO_CMP_PPZI_S(sve_cmphi_ppzi_s, uint32_t, >)
+DO_CMP_PPZI_D(sve_cmphi_ppzi_d, uint64_t, >)
+
+DO_CMP_PPZI_B(sve_cmphs_ppzi_b, uint8_t, >=)
+DO_CMP_PPZI_H(sve_cmphs_ppzi_h, uint16_t, >=)
+DO_CMP_PPZI_S(sve_cmphs_ppzi_s, uint32_t, >=)
+DO_CMP_PPZI_D(sve_cmphs_ppzi_d, uint64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmplt_ppzi_b, int8_t, <)
+DO_CMP_PPZI_H(sve_cmplt_ppzi_h, int16_t, <)
+DO_CMP_PPZI_S(sve_cmplt_ppzi_s, int32_t, <)
+DO_CMP_PPZI_D(sve_cmplt_ppzi_d, int64_t, <)
+
+DO_CMP_PPZI_B(sve_cmple_ppzi_b, int8_t, <=)
+DO_CMP_PPZI_H(sve_cmple_ppzi_h, int16_t, <=)
+DO_CMP_PPZI_S(sve_cmple_ppzi_s, int32_t, <=)
+DO_CMP_PPZI_D(sve_cmple_ppzi_d, int64_t, <=)
+
+DO_CMP_PPZI_B(sve_cmplo_ppzi_b, uint8_t, <)
+DO_CMP_PPZI_H(sve_cmplo_ppzi_h, uint16_t, <)
+DO_CMP_PPZI_S(sve_cmplo_ppzi_s, uint32_t, <)
+DO_CMP_PPZI_D(sve_cmplo_ppzi_d, uint64_t, <)
+
+DO_CMP_PPZI_B(sve_cmpls_ppzi_b, uint8_t, <=)
+DO_CMP_PPZI_H(sve_cmpls_ppzi_h, uint16_t, <=)
+DO_CMP_PPZI_S(sve_cmpls_ppzi_s, uint32_t, <=)
+DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
+
+#undef DO_CMP_PPZI_B
+#undef DO_CMP_PPZI_H
+#undef DO_CMP_PPZI_S
+#undef DO_CMP_PPZI_D
+#undef DO_CMP_PPZI
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"
 
 
+typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
+                                     TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_ptr, TCGv_i32);
 
@@ -XXX,XX +XXX,XX @@ DO_PPZW(CMPLS, cmpls)
 
 #undef DO_PPZW
 
+/*
+ *** SVE Integer Compare - Immediate Groups
+ */
+
+static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
+                          gen_helper_gvec_flags_3 *gen_fn)
+{
+    TCGv_ptr pd, zn, pg;
+    unsigned vsz;
+    TCGv_i32 t;
+
+    if (gen_fn == NULL) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    vsz = vec_full_reg_size(s);
+    t = tcg_const_i32(simd_desc(vsz, vsz, a->imm));
+    pd = tcg_temp_new_ptr();
+    zn = tcg_temp_new_ptr();
+    pg = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    gen_fn(t, pd, zn, pg, t);
+
+    tcg_temp_free_ptr(pd);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pg);
+
+    do_pred_flags(t);
+
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+#define DO_PPZI(NAME, name) \
+static bool trans_##NAME##_ppzi(DisasContext *s, arg_rpri_esz *a, \
+                                uint32_t insn) \
+{ \
+    static gen_helper_gvec_flags_3 * const fns[4] = { \
+        gen_helper_sve_##name##_ppzi_b, gen_helper_sve_##name##_ppzi_h, \
+        gen_helper_sve_##name##_ppzi_s, gen_helper_sve_##name##_ppzi_d, \
+    }; \
+    return do_ppzi_flags(s, a, fns[a->esz]); \
+}
+
+DO_PPZI(CMPEQ, cmpeq)
+DO_PPZI(CMPNE, cmpne)
+DO_PPZI(CMPGT, cmpgt)
+DO_PPZI(CMPGE, cmpge)
+DO_PPZI(CMPHI, cmphi)
+DO_PPZI(CMPHS, cmphs)
+DO_PPZI(CMPLT, cmplt)
+DO_PPZI(CMPLE, cmple)
+DO_PPZI(CMPLO, cmplo)
+DO_PPZI(CMPLS, cmpls)
+
+#undef DO_PPZI
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 @rdn_dbm        ........ .. .... dbm:13 rd:5 \
                 &rr_dbm rn=%reg_movprfx
 
+# Predicate output, vector and immediate input,
+# controlling predicate, element size.
+@pd_pg_rn_i7    ........ esz:2 . imm:7 . pg:3 rn:5 . rd:4       &rpri_esz
+@pd_pg_rn_i5    ........ esz:2 . imm:s5 ... pg:3 rn:5 . rd:4    &rpri_esz
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9       ........ ........ ...... rn:5 . rd:4 \
                 &rri imm=%imm9_16_10
@@ -XXX,XX +XXX,XX @@ CMPHI_ppzw      00100100 .. 0 ..... 110 ... ..... 1 ....   @pd_pg_rn_rm
 CMPLO_ppzw      00100100 .. 0 ..... 111 ... ..... 0 ....        @pd_pg_rn_rm
 CMPLS_ppzw      00100100 .. 0 ..... 111 ... ..... 1 ....        @pd_pg_rn_rm
 
+### SVE Integer Compare - Unsigned Immediate Group
+
+# SVE integer compare with unsigned immediate
+CMPHS_ppzi      00100100 .. 1 ....... 0 ... ..... 0 ....        @pd_pg_rn_i7
+CMPHI_ppzi      00100100 .. 1 ....... 0 ... ..... 1 ....        @pd_pg_rn_i7
+CMPLO_ppzi      00100100 .. 1 ....... 1 ... ..... 0 ....        @pd_pg_rn_i7
+CMPLS_ppzi      00100100 .. 1 ....... 1 ... ..... 1 ....        @pd_pg_rn_i7
+
+### SVE Integer Compare - Signed Immediate Group
+
+# SVE integer compare with signed immediate
+CMPGE_ppzi      00100101 .. 0 ..... 000 ... ..... 0 ....        @pd_pg_rn_i5
+CMPGT_ppzi      00100101 .. 0 ..... 000 ... ..... 1 ....        @pd_pg_rn_i5
+CMPLT_ppzi      00100101 .. 0 ..... 001 ... ..... 0 ....        @pd_pg_rn_i5
+CMPLE_ppzi      00100101 .. 0 ..... 001 ... ..... 1 ....        @pd_pg_rn_i5
+CMPEQ_ppzi      00100101 .. 0 ..... 100 ... ..... 0 ....        @pd_pg_rn_i5
+CMPNE_ppzi      00100101 .. 0 ..... 100 ... ..... 1 ....        @pd_pg_rn_i5
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_PMULL (PMULL, PMULL2 instructions)
 - FEAT_PMUv3p1 (PMU Extensions v3.1)
 - FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
 - FEAT_SB (Speculation Barrier)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64pfr0;
     t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);      /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1);     /* FEAT_RAS */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);    /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);     /* FEAT_DIT */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 
     t = cpu->isar.id_pfr0;
     t = FIELD_DP32(t, ID_PFR0, DIT, 1);         /* FEAT_DIT */
+    t = FIELD_DP32(t, ID_PFR0, RAS, 1);         /* FEAT_RAS */
     cpu->isar.id_pfr0 = t;
 
     t = cpu->isar.id_pfr2;
--
2.25.1

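The FIELD_DP64()/FIELD_DP32() calls in the RAS patch are plain bitfield
deposits into the CPU's ID registers. The sketch below is not QEMU code; it
reimplements the deposit primitive so the idiom is self-contained, and assumes
the ID_AA64PFR0_EL1.RAS field position of bits [31:28] from the Arm ARM.

#include <stdint.h>
#include <stdio.h>

/* Reimplementation of the usual bitfield-deposit primitive. */
static uint64_t deposit64(uint64_t value, int start, int length,
                          uint64_t fieldval)
{
    uint64_t mask = (~0ULL >> (64 - length)) << start;
    return (value & ~mask) | ((fieldval << start) & mask);
}

int main(void)
{
    uint64_t pfr0 = 0;                 /* stand-in for cpu->isar.id_aa64pfr0 */

    /* Equivalent of: t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1); */
    pfr0 = deposit64(pfr0, 28, 4, 1);  /* RAS occupies ID_AA64PFR0[31:28] */

    printf("ID_AA64PFR0 = 0x%016llx, RAS = %llu\n",
           (unsigned long long)pfr0,
           (unsigned long long)((pfr0 >> 28) & 0xf));
    return 0;
}

Advertising the field value 1 is all "enable the feature" means here: guests
probe the ID register, see a nonzero RAS field, and may then execute the ESB
and error-record accesses the earlier patches implement.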
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  9 +++++++
 target/arm/sve_helper.c    | 55 ++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c |  2 ++
 target/arm/sve.decode      |  6 +++++
 4 files changed, 72 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
     }
     swap_memmove(vd + len, vm, opr_sz * 8 - len);
 }
+
+void HELPER(sve_sel_zpzz_b)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_b(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_h)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_h(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_s)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_s(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        d[i] = (pg[H1(i)] & 1 ? nn : mm);
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
     return do_zpzz_ool(s, a, fns[a->esz]);
 }
 
+DO_ZPZZ(SEL, sel)
+
 #undef DO_ZPZZ
 
 /*
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
                 &rprr_esz rn=%reg_movprfx
 @rdm_pg_rn      ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
                 &rprr_esz rm=%reg_movprfx
+@rd_pg4_rn_rm   ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5         &rprr_esz
 
 # Three register operand, with governing predicate, vector element size
 @rda_pg_rn_rm   ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
@@ -XXX,XX +XXX,XX @@ RBIT            00000101 .. 1001 11 100 ... ..... .....   @rd_pg_rn
 # SVE vector splice (predicated)
 SPLICE          00000101 .. 101 100 100 ... ..... .....         @rdn_pg_rm
 
+### SVE Select Vectors Group
+
+# SVE select vector elements (predicated)
+SEL_zpzz        00000101 .. 1 ..... 11 .... ..... .....         @rd_pg4_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This feature is AArch64 only, and applies to physical SErrors,
which QEMU does not implement, thus the feature is a nop.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_IESB (Implicit error synchronization event)
 - FEAT_JSCVT (JavaScript conversion instructions)
 - FEAT_LOR (Limited ordering regions)
 - FEAT_LPA (Large Physical Address space)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64mmfr2;
     t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1);     /* FEAT_TTCNP */
     t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);     /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1);    /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1);      /* FEAT_TTST */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);     /* FEAT_TTL */
--
2.25.1

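The b/h/s SEL helpers above lean on expand_pred_b() and friends, which widen
each governing predicate bit into an all-ones or all-zeros lane mask so the
select becomes a pure bitwise mux with no per-lane branches. A standalone
sketch follows; it is not QEMU code, and where QEMU's expand_pred_b is
table-driven, the loop below is an equivalent stand-in.

#include <stdint.h>
#include <stdio.h>

/* Widen each of 8 predicate bits into a 0xff/0x00 byte lane mask. */
static uint64_t toy_expand_pred_b(uint8_t byte)
{
    uint64_t r = 0;

    for (int i = 0; i < 8; i++) {
        if (byte & (1u << i)) {
            r |= 0xffull << (i * 8);
        }
    }
    return r;
}

int main(void)
{
    uint8_t pg = 0x05;                     /* byte elements 0 and 2 active */
    uint64_t pp = toy_expand_pred_b(pg);
    uint64_t nn = 0x1111111111111111ull;
    uint64_t mm = 0x2222222222222222ull;
    uint64_t d = (nn & pp) | (mm & ~pp);   /* the SEL computation */

    printf("pp = 0x%016llx, d = 0x%016llx\n",
           (unsigned long long)pp, (unsigned long long)d);
    return 0;
}

Active lanes take their bytes from nn and inactive lanes from mm, which is why
the d-size helper can skip the expansion entirely and test a single predicate
bit per 64-bit lane.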
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   2 +
 target/arm/sve_helper.c    |  12 ++
 target/arm/translate-sve.c | 328 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  20 +++
 4 files changed, 362 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
         d[j] = 0;
     }
 }
+
+/* Similar to the ARM LastActiveElement pseudocode function, except the
+ * result is multiplied by the element size.  This includes the not found
+ * indication; e.g. not found for esz=3 is -8.
+ */
+int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+
+    return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_zpz_ool(s, a, fns[a->esz]);
 }
 
+/* Call the helper that computes the ARM LastActiveElement pseudocode
+ * function, scaled by the element size.  This includes the not found
+ * indication; e.g. not found for esz=3 is -8.
+ */
+static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
+{
+    /* Predicate sizes may be smaller and cannot use simd_desc.  We cannot
+     * round up, as we do elsewhere, because we need the exact size.
+     */
+    TCGv_ptr t_p = tcg_temp_new_ptr();
+    TCGv_i32 t_desc;
+    unsigned vsz = pred_full_reg_size(s);
+    unsigned desc;
+
+    desc = vsz - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+    tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
+    t_desc = tcg_const_i32(desc);
+
+    gen_helper_sve_last_active_element(ret, t_p, t_desc);
+
+    tcg_temp_free_i32(t_desc);
+    tcg_temp_free_ptr(t_p);
+}
+
+/* Increment LAST to the offset of the next element in the vector,
+ * wrapping around to 0.
+ */
+static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+    unsigned vsz = vec_full_reg_size(s);
+
+    tcg_gen_addi_i32(last, last, 1 << esz);
+    if (is_power_of_2(vsz)) {
+        tcg_gen_andi_i32(last, last, vsz - 1);
+    } else {
+        TCGv_i32 max = tcg_const_i32(vsz);
+        TCGv_i32 zero = tcg_const_i32(0);
+        tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
+        tcg_temp_free_i32(max);
+        tcg_temp_free_i32(zero);
+    }
+}
+
+/* If LAST < 0, set LAST to the offset of the last element in the vector. */
+static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+    unsigned vsz = vec_full_reg_size(s);
+
+    if (is_power_of_2(vsz)) {
+        tcg_gen_andi_i32(last, last, vsz - 1);
+    } else {
+        TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
+        TCGv_i32 zero = tcg_const_i32(0);
+        tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
+        tcg_temp_free_i32(max);
+        tcg_temp_free_i32(zero);
+    }
+}
+
+/* Load an unsigned element of ESZ from BASE+OFS. */
+static TCGv_i64 load_esz(TCGv_ptr base, int ofs, int esz)
+{
+    TCGv_i64 r = tcg_temp_new_i64();
+
+    switch (esz) {
+    case 0:
+        tcg_gen_ld8u_i64(r, base, ofs);
+        break;
+    case 1:
+        tcg_gen_ld16u_i64(r, base, ofs);
+        break;
+    case 2:
+        tcg_gen_ld32u_i64(r, base, ofs);
+        break;
+    case 3:
+        tcg_gen_ld_i64(r, base, ofs);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    return r;
+}
+
+/* Load an unsigned element of ESZ from RM[LAST]. */
+static TCGv_i64 load_last_active(DisasContext *s, TCGv_i32 last,
+                                 int rm, int esz)
+{
+    TCGv_ptr p = tcg_temp_new_ptr();
+    TCGv_i64 r;
+
+    /* Convert offset into vector into offset into ENV.
+     * The final adjustment for the vector register base
+     * is added via constant offset to the load.
+     */
+#ifdef HOST_WORDS_BIGENDIAN
+    /* Adjust for element ordering.  See vec_reg_offset. */
+    if (esz < 3) {
+        tcg_gen_xori_i32(last, last, 8 - (1 << esz));
+    }
+#endif
+    tcg_gen_ext_i32_ptr(p, last);
+    tcg_gen_add_ptr(p, p, cpu_env);
+
+    r = load_esz(p, vec_full_reg_offset(s, rm), esz);
+    tcg_temp_free_ptr(p);
+
+    return r;
+}
+
+/* Compute CLAST for a Zreg. */
+static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
+{
+    TCGv_i32 last;
+    TCGLabel *over;
+    TCGv_i64 ele;
+    unsigned vsz, esz = a->esz;
+
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    last = tcg_temp_local_new_i32();
+    over = gen_new_label();
+
+    find_last_active(s, last, esz, a->pg);
+
+    /* There is of course no movcond for a 2048-bit vector,
+     * so we must branch over the actual store.
+     */
+    tcg_gen_brcondi_i32(TCG_COND_LT, last, 0, over);
+
+    if (!before) {
+        incr_last_active(s, last, esz);
+    }
+
+    ele = load_last_active(s, last, a->rm, esz);
+    tcg_temp_free_i32(last);
+
+    vsz = vec_full_reg_size(s);
+    tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd), vsz, vsz, ele);
+    tcg_temp_free_i64(ele);
+
+    /* If this insn used MOVPRFX, we may need a second move. */
+    if (a->rd != a->rn) {
+        TCGLabel *done = gen_new_label();
+        tcg_gen_br(done);
+
+        gen_set_label(over);
+        do_mov_z(s, a->rd, a->rn);
+
+        gen_set_label(done);
+    } else {
+        gen_set_label(over);
+    }
+    return true;
+}
+
+static bool trans_CLASTA_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    return do_clast_vector(s, a, false);
+}
+
+static bool trans_CLASTB_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    return do_clast_vector(s, a, true);
+}
+
+/* Compute CLAST for a scalar. */
+static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
+                            bool before, TCGv_i64 reg_val)
+{
+    TCGv_i32 last = tcg_temp_new_i32();
+    TCGv_i64 ele, cmp, zero;
+
+    find_last_active(s, last, esz, pg);
+
+    /* Extend the original value of last prior to incrementing. */
+    cmp = tcg_temp_new_i64();
+    tcg_gen_ext_i32_i64(cmp, last);
+
+    if (!before) {
+        incr_last_active(s, last, esz);
+    }
+
+    /* The conceit here is that while last < 0 indicates not found, after
+     * adjusting for cpu_env->vfp.zregs[rm], it is still a valid address
+     * from which we can load garbage.  We then discard the garbage with
+     * a conditional move.
+     */
+    ele = load_last_active(s, last, rm, esz);
+    tcg_temp_free_i32(last);
+
+    zero = tcg_const_i64(0);
+    tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
+
+    tcg_temp_free_i64(zero);
+    tcg_temp_free_i64(cmp);
+    tcg_temp_free_i64(ele);
+}
+
+/* Compute CLAST for a Vreg. */
+static bool do_clast_fp(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        int esz = a->esz;
+        int ofs = vec_reg_offset(s, a->rd, 0, esz);
+        TCGv_i64 reg = load_esz(cpu_env, ofs, esz);
+
+        do_clast_scalar(s, esz, a->pg, a->rn, before, reg);
+        write_fp_dreg(s, a->rd, reg);
+        tcg_temp_free_i64(reg);
+    }
+    return true;
+}
+
+static bool trans_CLASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_fp(s, a, false);
+}
+
+static bool trans_CLASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_fp(s, a, true);
+}
+
+/* Compute CLAST for a Xreg. */
+static bool do_clast_general(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    TCGv_i64 reg;
+
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    reg = cpu_reg(s, a->rd);
+    switch (a->esz) {
+    case 0:
+        tcg_gen_ext8u_i64(reg, reg);
+        break;
+    case 1:
+        tcg_gen_ext16u_i64(reg, reg);
+        break;
+    case 2:
+        tcg_gen_ext32u_i64(reg, reg);
+        break;
+    case 3:
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    do_clast_scalar(s, a->esz, a->pg, a->rn, before, reg);
+    return true;
+}
+
+static bool trans_CLASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_general(s, a, false);
+}
+
+static bool trans_CLASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_general(s, a, true);
+}
+
+/* Compute LAST for a scalar. */
+static TCGv_i64 do_last_scalar(DisasContext *s, int esz,
+                               int pg, int rm, bool before)
+{
+    TCGv_i32 last = tcg_temp_new_i32();
+    TCGv_i64 ret;
+
+    find_last_active(s, last, esz, pg);
+    if (before) {
+        wrap_last_active(s, last, esz);
+    } else {
+        incr_last_active(s, last, esz);
+    }
+
+    ret = load_last_active(s, last, rm, esz);
+    tcg_temp_free_i32(last);
+    return ret;
+}
+
+/* Compute LAST for a Vreg. */
+static bool do_last_fp(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
+        write_fp_dreg(s, a->rd, val);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_LASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_fp(s, a, false);
+}
+
+static bool trans_LASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_fp(s, a, true);
+}
+
+/* Compute LAST for a Xreg. */
+static bool do_last_general(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
+        tcg_gen_mov_i64(cpu_reg(s, a->rd), val);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_LASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_general(s, a, false);
+}
+
+static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_general(s, a, true);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ TRN2_z          00000101 .. 1 ..... 011 101 ..... .....         @rd_rn_rm
 # Note esz >= 2
 COMPACT         00000101 .. 100001 100 ... ..... .....          @rd_pg_rn
 
+# SVE conditionally broadcast element to vector
+CLASTA_z        00000101 .. 10100 0 100 ... ..... .....         @rdn_pg_rm
+CLASTB_z        00000101 .. 10100 1 100 ... ..... .....         @rdn_pg_rm
+
+# SVE conditionally copy element to SIMD&FP scalar
+CLASTA_v        00000101 .. 10101 0 100 ... ..... .....         @rd_pg_rn
+CLASTB_v        00000101 .. 10101 1 100 ... ..... .....         @rd_pg_rn
+
+# SVE conditionally copy element to general register
+CLASTA_r        00000101 .. 11000 0 101 ... ..... .....         @rd_pg_rn
+CLASTB_r        00000101 .. 11000 1 101 ... ..... .....         @rd_pg_rn
+
+# SVE copy element to SIMD&FP scalar register
+LASTA_v         00000101 .. 10001 0 100 ... ..... .....         @rd_pg_rn
+LASTB_v         00000101 .. 10001 1 100 ... ..... .....         @rd_pg_rn
+
+# SVE copy element to general register
+LASTA_r         00000101 .. 10000 0 101 ... ..... .....         @rd_pg_rn
+LASTB_r         00000101 .. 10000 1 101 ... ..... .....         @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns branch speculation, which TCG does
not implement.  Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BBM at level 2 (Translation table break-before-make levels)
 - FEAT_BF16 (AArch64 BFloat16 instructions)
 - FEAT_BTI (Branch Target Identification)
+- FEAT_CSV2 (Cache speculation variant 2)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);    /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);     /* FEAT_DIT */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1);    /* FEAT_CSV2 */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_mmfr4 = t;
 
     t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, CSV2, 2);        /* FEAT_CSV2 */
     t = FIELD_DP32(t, ID_PFR0, DIT, 1);         /* FEAT_DIT */
     t = FIELD_DP32(t, ID_PFR0, RAS, 1);         /* FEAT_RAS */
     cpu->isar.id_pfr0 = t;
--
2.25.1


From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 14 +++++++++++++
 target/arm/sve_helper.c    | 41 +++++++++++++++++++++++++++++++-------
 target/arm/translate-sve.c | 38 +++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  7 +++++++
 4 files changed, 93 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_revb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_s(uint8_t byte)
     return word[byte & 0x11];
 }
 
+/* Swap 16-bit words within a 32-bit word.  */
+static inline uint32_t hswap32(uint32_t h)
+{
+    return rol32(h, 16);
+}
+
+/* Swap 16-bit words within a 64-bit word.  */
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+/* Swap 32-bit words within a 64-bit word.  */
+static inline uint64_t wswap64(uint64_t h)
+{
+    return rol64(h, 32);
+}
+
 #define LOGICAL_PPPP(NAME, FUNC) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 { \
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
 
+DO_ZPZ(sve_revb_h, uint16_t, H1_2, bswap16)
+DO_ZPZ(sve_revb_s, uint32_t, H1_4, bswap32)
+DO_ZPZ_D(sve_revb_d, uint64_t, bswap64)
+
+DO_ZPZ(sve_revh_s, uint32_t, H1_4, hswap32)
+DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
+
+DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
+
+DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
+DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
+DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
+DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
+
 /* Three-operand expander, unpredicated, in which the third operand is "wide".
  */
 #define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
     }
 }
 
-static inline uint64_t hswap64(uint64_t h)
-{
-    uint64_t m = 0x0000ffff0000ffffull;
-    h = rol64(h, 32);
-    return ((h & m) << 16) | ((h >> 16) & m);
-}
-
 void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
 {
     intptr_t i, j, opr_sz = simd_oprsz(desc);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return true;
 }
 
+static bool trans_REVB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_revb_h,
+        gen_helper_sve_revb_s,
+        gen_helper_sve_revb_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        NULL,
+        gen_helper_sve_revh_s,
+        gen_helper_sve_revh_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_revw_d : NULL);
+}
+
+static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_rbit_b,
+        gen_helper_sve_rbit_h,
+        gen_helper_sve_rbit_s,
+        gen_helper_sve_rbit_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
 # SVE copy element from general register to vector (predicated)
 CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
 
+# SVE reverse within elements
+# Note esz >= operation size
+REVB 00000101 .. 1001 00 100 ... ..... ..... @rd_pg_rn
+REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
+REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
+RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

There is no branch prediction in TCG, therefore there is no
need to actually include the context number into the predictor.
Therefore all we need to do is add the state for SCXTNUM_ELx.
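
For illustration, guest EL0 code would reach the new state like this
once the OS exposes it (hypothetical snippet, not part of the patch;
the explicit S3_3_C13_C0_7 form avoids needing an assembler that
already knows the register name):

    static inline uint64_t read_scxtnum_el0(void)
    {
        uint64_t v;
        /* SCXTNUM_EL0: op0=3 op1=3 CRn=13 CRm=0 op2=7 */
        asm volatile("mrs %0, S3_3_C13_C0_7" : "=r" (v));
        return v;
    }

This matches the opc0/opc1/crn/crm/opc2 values in the reginfo added
below.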

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst |  3 ++
 target/arm/cpu.h              | 16 +++++++++
 target/arm/cpu.c              |  5 +++
 target/arm/cpu64.c            |  3 +-
 target/arm/helper.c           | 61 ++++++++++++++++++++++++++++++++++-
 5 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BF16 (AArch64 BFloat16 instructions)
 - FEAT_BTI (Branch Target Identification)
 - FEAT_CSV2 (Cache speculation variant 2)
+- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
+- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
+- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         ARMPACKey apdb;
         ARMPACKey apga;
     } keys;
+
+    uint64_t scxtnum_el[4];
 #endif
 
 #if defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_WXN     (1U << 19)
 #define SCTLR_ST      (1U << 20) /* up to ??, RAZ in v6 */
 #define SCTLR_UWXN    (1U << 20) /* v7 onward, AArch32 only */
+#define SCTLR_TSCXT   (1U << 20) /* FEAT_CSV2_1p2, AArch64 only */
 #define SCTLR_FI      (1U << 21) /* up to v7, v8 RES0 */
 #define SCTLR_IESB    (1U << 21) /* v8.2-IESB, AArch64 only */
 #define SCTLR_U       (1U << 22) /* up to v6, RAO in v7 */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
 }
 
+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
+{
+    int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
+    if (key >= 2) {
+        return true;      /* FEAT_CSV2_2 */
+    }
+    if (key == 1) {
+        key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
+        return key >= 2;  /* FEAT_CSV2_1p2 */
+    }
+    return false;
+}
+
 static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
          */
         env->cp15.gcr_el1 = 0x1ffff;
     }
+    /*
+     * Disable access to SCXTNUM_EL0 from CSV2_1p2.
+     * This is not yet exposed from the Linux kernel in any way.
+     */
+    env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
 #else
     /* Reset into the highest available EL */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);      /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);       /* FEAT_DIT */
-    t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1);      /* FEAT_CSV2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2);      /* FEAT_CSV2_2 */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
      * we do for EL2 with the virtualization=on property.
      */
     t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);       /* FEAT_MTE3 */
+    t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
     cpu->isar.id_aa64pfr1 = t;
 
     t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_mte, cpu)) {
             valid_mask |= SCR_ATA;
         }
+        if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+            valid_mask |= SCR_ENSCXT;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
         if (cpu_isar_feature(aa64_mte, cpu)) {
             valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5;
         }
+        if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+            valid_mask |= HCR_ENSCXT;
+        }
     }
 
     /* Clear RES0 bits.  */
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
         { K(3, 0, 5, 6, 0),   K(3, 4, 5, 6, 0),   K(3, 5, 5, 6, 0),
           "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
 
+        { K(3, 0, 13, 0, 7), K(3, 4, 13, 0, 7), K(3, 5, 13, 0, 7),
+          "SCXTNUM_EL1", "SCXTNUM_EL2", "SCXTNUM_EL12",
+          isar_feature_aa64_scxtnum },
+
         /* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
         /* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
     };
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
     },
 };
 
-#endif
+static CPAccessResult access_scxtnum(CPUARMState *env, const ARMCPRegInfo *ri,
+                                     bool isread)
+{
+    uint64_t hcr = arm_hcr_el2_eff(env);
+    int el = arm_current_el(env);
+
+    if (el == 0 && !((hcr & HCR_E2H) && (hcr & HCR_TGE))) {
+        if (env->cp15.sctlr_el[1] & SCTLR_TSCXT) {
+            if (hcr & HCR_TGE) {
+                return CP_ACCESS_TRAP_EL2;
+            }
+            return CP_ACCESS_TRAP;
+        }
+    } else if (el < 2 && (env->cp15.sctlr_el[2] & SCTLR_TSCXT)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    if (el < 2 && arm_is_el2_enabled(env) && !(hcr & HCR_ENSCXT)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    if (el < 3
+        && arm_feature(env, ARM_FEATURE_EL3)
+        && !(env->cp15.scr_el3 & SCR_ENSCXT)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo scxtnum_reginfo[] = {
+    { .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
+      .access = PL0_RW, .accessfn = access_scxtnum,
+      .fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
+    { .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
+      .access = PL1_RW, .accessfn = access_scxtnum,
+      .fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
+    { .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
+      .access = PL2_RW, .accessfn = access_scxtnum,
+      .fieldoffset = offsetof(CPUARMState, scxtnum_el[2]) },
+    { .name = "SCXTNUM_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 7,
+      .access = PL3_RW,
+      .fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
+};
+#endif /* TARGET_AARCH64 */
 
 static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, mte_tco_ro_reginfo);
         define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
     }
+
+    if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+        define_arm_cp_regs(cpu, scxtnum_reginfo);
+    }
 #endif
 
     if (cpu_isar_feature(any_predinv, cpu)) {
--
2.25.1

The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious
use; add a comment documenting it (reverse-engineered from what
the code that sets it is doing).
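
Expressed as code, the decomposition looks like this (hypothetical
helpers for illustration only; they do not exist in the tree):

    static inline unsigned iotlb_addr_to_section(hwaddr iotlb_addr)
    {
        /* the low TARGET_PAGE_BITS hold the physical section number */
        return iotlb_addr & ~TARGET_PAGE_MASK;
    }

    static inline hwaddr iotlb_addr_to_offset(hwaddr iotlb_addr,
                                              target_ulong vaddr)
    {
        /* the rest is an offset: adding the access vaddr back yields
         * the ram_addr_t (NOTDIRTY/ROM) or the MemoryRegion offset */
        return (iotlb_addr & TARGET_PAGE_MASK) + vaddr;
    }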

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
---
 include/exec/cpu-defs.h |  9 +++++++++
 accel/tcg/cputlb.c      | 12 ++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
  * structs into one.)
  */
 typedef struct CPUIOTLBEntry {
+    /*
+     * @addr contains:
+     *  - in the lower TARGET_PAGE_BITS, a physical section number
+     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
+     *    must be added to the virtual address to obtain:
+     *     + the ram_addr_t of the target RAM (if the physical section
+     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
+     *     + the offset within the target MemoryRegion (otherwise)
+     */
     hwaddr addr;
     MemTxAttrs attrs;
 } CPUIOTLBEntry;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
 
     /* refill the tlb */
+    /*
+     * At this point iotlb contains a physical section number in the lower
+     * TARGET_PAGE_BITS, and either
+     *  + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
+     *  + the offset within section->mr of the page base (otherwise)
+     * We subtract the vaddr (which is page aligned and thus won't
+     * disturb the low bits) to give an offset which can be added to the
+     * (non-page-aligned) vaddr of the eventual memory access to get
+     * the MemoryRegion offset for the access.  Note that the vaddr we
+     * subtract here is that of the page base, and not the same as the
+     * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
+     */
     env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
     env->iotlb[mmu_idx][index].attrs = attrs;
 
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns cache speculation, which TCG does
not implement. Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
 - FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
 - FEAT_CSV2_2 (Cache speculation variant 2, version 2)
+- FEAT_CSV3 (Cache speculation variant 3)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);      /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);       /* FEAT_DIT */
     t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2);      /* FEAT_CSV2_2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1);      /* FEAT_CSV3 */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr0 = t;
 
     t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, CSV3, 1);          /* FEAT_CSV3 */
     t = FIELD_DP32(t, ID_PFR2, SSBS, 1);          /* FEAT_SSBS */
     cpu->isar.id_pfr2 = t;
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Rearrange the arithmetic so that we are agnostic about the total size
of the vector and the size of the element. This will allow us to index
up to the 32nd byte and with 16-byte elements.
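
As a worked example of the new arithmetic (model function for
illustration only, not part of the patch):

    static int model_vec_elt_offset(int element, int element_size,
                                    bool host_big_endian)
    {
        int offs = element * element_size;   /* little-endian layout */
        if (host_big_endian && element_size < 8) {
            offs ^= 8 - element_size;        /* flip within the 8-byte unit */
        }
        return offs;
    }

e.g. the first 32-bit element maps to byte offset 0 on a little-endian
host but to offset 4 on a big-endian one, because the 8-byte units of
zregs[] are stored host-endian.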

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.h | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline void assert_fp_access_checked(DisasContext *s)
 static inline int vec_reg_offset(DisasContext *s, int regno,
                                  int element, TCGMemOp size)
 {
-    int offs = 0;
+    int element_size = 1 << size;
+    int offs = element * element_size;
 #ifdef HOST_WORDS_BIGENDIAN
     /* This is complicated slightly because vfp.zregs[n].d[0] is
-     * still the low half and vfp.zregs[n].d[1] the high half
-     * of the 128 bit vector, even on big endian systems.
-     * Calculate the offset assuming a fully bigendian 128 bits,
-     * then XOR to account for the order of the two 64 bit halves.
+     * still the lowest and vfp.zregs[n].d[15] the highest of the
+     * 256 byte vector, even on big endian systems.
+     *
+     * Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     *
+     * For 16 byte elements, the two 8 byte halves will not form a
+     * host int128 if the host is bigendian, since they're in the
+     * wrong order.  However the only 16 byte operation we have is
+     * a move, so we can ignore this for the moment.  More complicated
+     * operations will have to special case loading and storing from
+     * the zregs array.
      */
-    offs += (16 - ((element + 1) * (1 << size)));
-    offs ^= 8;
-#else
-    offs += element * (1 << size);
+    if (element_size < 8) {
+        offs ^= 8 - element_size;
+    }
 #endif
     offs += offsetof(CPUARMState, vfp.zregs[regno]);
     assert_fp_access_checked(s);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns not merging memory access, which TCG does
not implement. Thus we can trivially enable this feature.
Add a comment to handle_hint for the DGH instruction, but no code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/translate-a64.c    | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
 - FEAT_CSV2_2 (Cache speculation variant 2, version 2)
 - FEAT_CSV3 (Cache speculation variant 3)
+- FEAT_DGH (Data gathering hint)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);       /* FEAT_SB */
     t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);  /* FEAT_SPECRES */
     t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);     /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1);      /* FEAT_DGH */
     t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);     /* FEAT_I8MM */
     cpu->isar.id_aa64isar1 = t;
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
         break;
     case 0b00100: /* SEV */
     case 0b00101: /* SEVL */
+    case 0b00110: /* DGH */
         /* we treat all as NOP at least for now */
         break;
     case 0b00111: /* XPACLRI */
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 ++++++++
 2 files changed, 45 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
     return true;
 }
 
+/*
+ *** SVE Integer Wide Immediate - Unpredicated Group
+ */
+
+static bool trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+        uint64_t imm;
+
+        /* Decode the VFP immediate.  */
+        imm = vfp_expand_imm(a->esz, a->imm);
+        imm = dup_const(a->esz, imm);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
+    }
+    return true;
+}
+
+static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
 # SVE integer compare scalar count and limit
 WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
 
+### SVE Integer Wide Immediate - Unpredicated Group
+
+# SVE broadcast floating-point immediate (unpredicated)
+FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
+
+# SVE broadcast integer immediate (unpredicated)
+DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the a76 for virt and sbsa board use.
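
With this applied the new model can be selected in the usual way; an
illustrative invocation (other options elided):

    qemu-system-aarch64 -machine virt -cpu cortex-a76 ...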

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst |  1 +
 hw/arm/sbsa-ref.c        |  1 +
 hw/arm/virt.c            |  1 +
 target/arm/cpu64.c       | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a53`` (64-bit)
 - ``cortex-a57`` (64-bit)
 - ``cortex-a72`` (64-bit)
+- ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
 static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("max"),
 };
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a53"),
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("a64fx"),
     ARM_CPU_TYPE_NAME("host"),
     ARM_CPU_TYPE_NAME("max"),
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
+static void aarch64_a76_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,cortex-a76";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444C004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0  = 0x0000000010305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0  = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1  = 0x0000000000000010ull;
+    cpu->id_afr0       = 0x00000000;
+    cpu->isar.id_dfr0  = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0  = 0x10010131;
+    cpu->isar.id_pfr1  = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2  = 0x00000011;
+    cpu->midr = 0x414fd0b1;          /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.18 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x707fe03a; /* 512KB L2 cache */
+
+    /* From B2.93 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "cortex-a57",         .initfn = aarch64_a57_initfn },
     { .name = "cortex-a53",         .initfn = aarch64_a53_initfn },
     { .name = "cortex-a72",         .initfn = aarch64_a72_initfn },
+    { .name = "cortex-a76",         .initfn = aarch64_a76_initfn },
     { .name = "a64fx",              .initfn = aarch64_a64fx_initfn },
     { .name = "max",                .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 19 +++++++++++++++++++
 target/arm/sve.decode      |  6 ++++++
 2 files changed, 25 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_last_general(s, a, true);
 }
 
+static bool trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
+        TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
 LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
 LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
 
+# SVE copy element from SIMD&FP scalar register
+CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
+
+# SVE copy element from general register to vector (predicated)
+CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the n1 for virt and sbsa board use.
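
As a sanity check on the ID values below (illustrative, not part of
the patch), the MIDR decomposes as expected for this part:

    uint32_t midr = 0x414fd0c1;
    assert(((midr >> 24) & 0xff) == 0x41);   /* implementer: Arm Ltd */
    assert(((midr >> 20) & 0xf) == 0x4);     /* variant: r4 */
    assert(((midr >> 4) & 0xfff) == 0xd0c);  /* part number: Neoverse N1 */
    assert((midr & 0xf) == 0x1);             /* revision: p1, i.e. r4p1 */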

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst |  1 +
 hw/arm/sbsa-ref.c        |  1 +
 hw/arm/virt.c            |  1 +
 target/arm/cpu64.c       | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
+- ``neoverse-n1`` (64-bit)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
 
 Note that the default is ``cortex-a15``, so for an AArch64 guest you must
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
     ARM_CPU_TYPE_NAME("cortex-a76"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("max"),
 };
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a72"),
     ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("a64fx"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("host"),
     ARM_CPU_TYPE_NAME("max"),
 };
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
     cpu->isar.mvfr2 = 0x00000043;
 }
 
+static void aarch64_neoverse_n1_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,neoverse-n1";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444c004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0  = 0x0000000110305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0  = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1  = 0x0000000000000020ull;
+    cpu->id_afr0       = 0x00000000;
+    cpu->isar.id_dfr0  = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0  = 0x10010131;
+    cpu->isar.id_pfr1  = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2  = 0x00000011;
+    cpu->midr = 0x414fd0c1;          /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.23 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
+
+    /* From B2.98 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "cortex-a72",         .initfn = aarch64_a72_initfn },
     { .name = "cortex-a76",         .initfn = aarch64_a76_initfn },
     { .name = "a64fx",              .initfn = aarch64_a64fx_initfn },
+    { .name = "neoverse-n1",        .initfn = aarch64_neoverse_n1_initfn },
     { .name = "max",                .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     { .name = "host",               .initfn = aarch64_host_initfn },
--
2.25.1

From: Julia Suvorova <jusual@mail.ru>

ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.

This patch is required for future Cortex-M0 support.
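
As a worked example (not part of the patch): the 32-bit encoding of
"dmb sy" is 0xf3bf8f5f, and

    (0xf3bf8f5f & 0xfff0d0f0) == 0xf3b08050  /* the dmb pattern below */

so it is accepted, while encodings outside these six pattern/mask
pairs still take the illegal_op path.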

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
 point of use, and mark 'const'. Check for M-and-not-v7
 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 43 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
      * end up actually treating this as two 16-bit insns, though,
      * if it's half of a bl/blx pair that might span a page boundary.
      */
-    if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+    if (arm_dc_feature(s, ARM_FEATURE_THUMB2) ||
+        arm_dc_feature(s, ARM_FEATURE_M)) {
         /* Thumb2 cores (including all M profile ones) always treat
          * 32-bit insns as 32-bit.
          */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
     int conds;
     int logic_cc;
 
-    /* The only 32 bit insn that's allowed for Thumb1 is the combined
-     * BL/BLX prefix and suffix.
+    /*
+     * ARMv6-M supports a limited subset of Thumb2 instructions.
+     * Other Thumb1 architectures allow only 32-bit
+     * combined BL/BLX prefix and suffix.
      */
-    if ((insn & 0xf800e800) != 0xf000e800) {
+    if (arm_dc_feature(s, ARM_FEATURE_M) &&
+        !arm_dc_feature(s, ARM_FEATURE_V7)) {
+        int i;
+        bool found = false;
+        const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
+                                        0xf3b08040 /* dsb */,
+                                        0xf3b08050 /* dmb */,
+                                        0xf3b08060 /* isb */,
+                                        0xf3e08000 /* mrs */,
+                                        0xf000d000 /* bl */};
+        const uint32_t armv6m_mask[] = {0xffe0d000,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xffe0d000,
+                                        0xf800d000};
+
+        for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
+            if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
+                found = true;
+                break;
+            }
+        }
+        if (!found) {
+            goto illegal_op;
+        }
+    } else if ((insn & 0xf800e800) != 0xf000e800) {
         ARCH(6T2);
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
             }
             break;
         case 3: /* Special control operations.  */
-            ARCH(7);
+            if (!arm_dc_feature(s, ARM_FEATURE_V7) &&
+                !(arm_dc_feature(s, ARM_FEATURE_V6) &&
+                  arm_dc_feature(s, ARM_FEATURE_M))) {
+                goto illegal_op;
+            }
             op = (insn >> 4) & 0xf;
             switch (op) {
             case 2: /* clrex */
--
2.17.1

From: Leif Lindholm <quic_llindhol@quicinc.com>

The sbsa-ref machine is continuously evolving. Some of the changes we
want to make in the near future, to align with real components (e.g.
the GIC-700), will break compatibility for existing firmware.

Introduce two new properties to the DT generated on machine generation:
- machine-version-major
  To be incremented when a platform change makes the machine
  incompatible with existing firmware.
- machine-version-minor
  To be incremented when functionality is added to the machine
  without causing incompatibility with existing firmware.
  to be reset to 0 when machine-version-major is incremented.

This versioning scheme is *neither*:
- A QEMU versioned machine type; a given version of QEMU will emulate
  a given version of the platform.
- A reflection of level of SBSA (now SystemReady SR) support provided.

The version will increment on guest-visible functional changes only,
akin to a revision ID register found on a physical platform.

These properties are both introduced with the value 0.
(Hence, a machine where the DT is lacking these nodes is equivalent
to version 0.0.)
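
A consumer in firmware might read the properties along these lines
(hypothetical sketch using libfdt; absent properties default to 0.0
as described above):

    #include <libfdt.h>

    static void sbsa_get_machine_version(const void *fdt,
                                         uint32_t *major, uint32_t *minor)
    {
        const fdt32_t *prop;
        int len;

        /* both properties live on the root node; missing means 0 */
        prop = fdt_getprop(fdt, 0, "machine-version-major", &len);
        *major = (prop && len == sizeof(*prop)) ? fdt32_to_cpu(*prop) : 0;
        prop = fdt_getprop(fdt, 0, "machine-version-minor", &len);
        *minor = (prop && len == sizeof(*prop)) ? fdt32_to_cpu(*prop) : 0;
    }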

Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20220505113947.75714-1-quic_llindhol@quicinc.com
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Radoslaw Biernacki <rad@semihalf.com>
Cc: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)
     qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
     qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
 
+    /*
+     * This versioning scheme is for informing platform fw only. It is neither:
+     * - A QEMU versioned machine type; a given version of QEMU will emulate
+     *   a given version of the platform.
+     * - A reflection of level of SBSA (now SystemReady SR) support provided.
+     *
+     * machine-version-major: updated when changes breaking fw compatibility
+     *                        are introduced.
+     * machine-version-minor: updated when features are added that don't break
+     *                        fw compatibility.
+     */
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
+
     if (ms->numa_state->have_numa_distance) {
         int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
         uint32_t *matrix = g_malloc0(size);
--
2.25.1

Currently we don't support board configurations that put an IOMMU
in the path of the CPU's memory transactions, and instead just
assert() if the memory region found in address_space_translate_for_iotlb()
is an IOMMUMemoryRegion.

Remove this limitation by having the function handle IOMMUs.
This is mostly straightforward, but we must make sure we have
a notifier registered for every IOMMU that a transaction has
passed through, so that we can flush the TLB appropriately
when any of the IOMMUs change their mappings.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
---
 include/exec/exec-all.h |   3 +-
 include/qom/cpu.h       |   3 +
 accel/tcg/cputlb.c      |   3 +-
 exec.c                  | 135 +++++++++++++++++++++++++++++++++++++++-
 4 files changed, 140 insertions(+), 4 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
 
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen);
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot);
 hwaddr memory_region_section_get_iotlb(CPUState *cpu,
                                        MemoryRegionSection *section,
                                        target_ulong vaddr,
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPUState {
     uint16_t pending_tlb_flush;
 
     int hvf_fd;
+
+    /* track IOMMUs whose translations we've cached in the TCG TLB */
+    GArray *iommu_notifiers;
 };
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     }
 
     sz = size;
-    section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz);
+    section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
+                                                attrs, &prot);
     assert(sz >= TARGET_PAGE_SIZE);
 
     tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
     return mr;
 }
 
+typedef struct TCGIOMMUNotifier {
+    IOMMUNotifier n;
+    MemoryRegion *mr;
+    CPUState *cpu;
+    int iommu_idx;
+    bool active;
+} TCGIOMMUNotifier;
+
+static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
+{
+    TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
+
+    if (!notifier->active) {
+        return;
+    }
+    tlb_flush(notifier->cpu);
+    notifier->active = false;
+    /* We leave the notifier struct on the list to avoid reallocating it later.
+     * Generally the number of IOMMUs a CPU deals with will be small.
+     * In any case we can't unregister the iommu notifier from a notify
+     * callback.
+     */
+}
+
+static void tcg_register_iommu_notifier(CPUState *cpu,
+                                        IOMMUMemoryRegion *iommu_mr,
+                                        int iommu_idx)
+{
+    /* Make sure this CPU has an IOMMU notifier registered for this
+     * IOMMU/IOMMU index combination, so that we can flush its TLB
+     * when the IOMMU tells us the mappings we've cached have changed.
+     */
+    MemoryRegion *mr = MEMORY_REGION(iommu_mr);
+    TCGIOMMUNotifier *notifier;
+    int i;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        if (notifier->mr == mr && notifier->iommu_idx == iommu_idx) {
+            break;
+        }
+    }
+    if (i == cpu->iommu_notifiers->len) {
+        /* Not found, add a new entry at the end of the array */
+        cpu->iommu_notifiers = g_array_set_size(cpu->iommu_notifiers, i + 1);
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+
+        notifier->mr = mr;
+        notifier->iommu_idx = iommu_idx;
+        notifier->cpu = cpu;
+        /* Rather than trying to register interest in the specific part
+         * of the iommu's address space that we've accessed and then
+         * expand it later as subsequent accesses touch more of it, we
+         * just register interest in the whole thing, on the assumption
+         * that iommu reconfiguration will be rare.
+         */
+        iommu_notifier_init(&notifier->n,
+                            tcg_iommu_unmap_notify,
+                            IOMMU_NOTIFIER_UNMAP,
+                            0,
+                            HWADDR_MAX,
+                            iommu_idx);
+        memory_region_register_iommu_notifier(notifier->mr, &notifier->n);
+    }
+
+    if (!notifier->active) {
+        notifier->active = true;
+    }
+}
+
+static void tcg_iommu_free_notifier_list(CPUState *cpu)
+{
+    /* Destroy the CPU's notifier list */
+    int i;
+    TCGIOMMUNotifier *notifier;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
+    }
+    g_array_free(cpu->iommu_notifiers, true);
+}
+
 /* Called from RCU critical section */
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen)
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot)
 {
     MemoryRegionSection *section;
+    IOMMUMemoryRegion *iommu_mr;
+    IOMMUMemoryRegionClass *imrc;
+    IOMMUTLBEntry iotlb;
+    int iommu_idx;
     AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
-    section = address_space_translate_internal(d, addr, xlat, plen, false);
+    for (;;) {
+        section = address_space_translate_internal(d, addr, &addr, plen, false);
+
+        iommu_mr = memory_region_get_iommu(section->mr);
+        if (!iommu_mr) {
+            break;
+        }
+
+        imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
+
+        iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
+        /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
+         * doesn't short-cut its translation table walk.
+         */
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
+        addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
+                | (addr & iotlb.addr_mask));
+        /* Update the caller's prot bits to remove permissions the IOMMU
+         * is giving us a failure response for. If we get down to no
+         * permissions left at all we can give up now.
+         */
+        if (!(iotlb.perm & IOMMU_RO)) {
+            *prot &= ~(PAGE_READ | PAGE_EXEC);
+        }
+        if (!(iotlb.perm & IOMMU_WO)) {
+            *prot &= ~PAGE_WRITE;
+        }
+
+        if (!*prot) {
+            goto translate_fail;
+        }
+
+        d = flatview_to_dispatch(address_space_to_flatview(iotlb.target_as));
+    }
 
     assert(!memory_region_is_iommu(section->mr));
+    *xlat = addr;
     return section;
+
+translate_fail:
+    return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 #endif
 
@@ -XXX,XX +XXX,XX @@ void cpu_exec_unrealizefn(CPUState *cpu)
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
     }
+#ifndef CONFIG_USER_ONLY
+    tcg_iommu_free_notifier_list(cpu);
+#endif
 }
 
 Property cpu_common_props[] = {
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
     if (cc->vmsd != NULL) {
         vmstate_register(NULL, cpu->cpu_index, cc->vmsd, cpu);
     }
+
+    cpu->iommu_notifiers = g_array_new(false, true, sizeof(TCGIOMMUNotifier));
 #endif
 }
 
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

This adds cluster-id in CPU instance properties, which will be used
by arm/virt machine. Besides, the cluster-id is also verified or
dumped in various spots:

  * hw/core/machine.c::machine_set_cpu_numa_node() to associate
    CPU with its NUMA node.

  * hw/core/machine.c::machine_numa_finish_cpu_init() to record
    CPU slots with no NUMA mapping set.

  * hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
    cluster-id.
Signed-off-by: Gavin Shan <gshan@redhat.com>
17
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
18
Acked-by: Igor Mammedov <imammedo@redhat.com>
19
Message-id: 20220503140304.855514-2-gshan@redhat.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
15
---
21
---
16
include/exec/exec-all.h | 3 +-
22
qapi/machine.json | 6 ++++--
17
include/qom/cpu.h | 3 +
23
hw/core/machine-hmp-cmds.c | 4 ++++
18
accel/tcg/cputlb.c | 3 +-
24
hw/core/machine.c | 16 ++++++++++++++++
19
exec.c | 135 +++++++++++++++++++++++++++++++++++++++-
25
3 files changed, 24 insertions(+), 2 deletions(-)
20
4 files changed, 140 insertions(+), 4 deletions(-)
21
26
22
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
27
diff --git a/qapi/machine.json b/qapi/machine.json
23
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
24
--- a/include/exec/exec-all.h
29
--- a/qapi/machine.json
25
+++ b/include/exec/exec-all.h
30
+++ b/qapi/machine.json
26
@@ -XXX,XX +XXX,XX @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
31
@@ -XXX,XX +XXX,XX @@
27
32
# @node-id: NUMA node ID the CPU belongs to
28
MemoryRegionSection *
33
# @socket-id: socket number within node/board the CPU belongs to
29
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
34
# @die-id: die number within socket the CPU belongs to (since 4.1)
30
- hwaddr *xlat, hwaddr *plen);
35
-# @core-id: core number within die the CPU belongs to
31
+ hwaddr *xlat, hwaddr *plen,
36
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
32
+ MemTxAttrs attrs, int *prot);
37
+# @core-id: core number within cluster the CPU belongs to
33
hwaddr memory_region_section_get_iotlb(CPUState *cpu,
38
# @thread-id: thread number within core the CPU belongs to
34
MemoryRegionSection *section,
39
#
35
target_ulong vaddr,
40
-# Note: currently there are 5 properties that could be present
36
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
41
+# Note: currently there are 6 properties that could be present
42
# but management should be prepared to pass through other
43
# properties with device_add command to allow for future
44
# interface extension. This also requires the filed names to be kept in
45
@@ -XXX,XX +XXX,XX @@
46
'data': { '*node-id': 'int',
47
'*socket-id': 'int',
48
'*die-id': 'int',
49
+ '*cluster-id': 'int',
50
'*core-id': 'int',
51
'*thread-id': 'int'
52
}
53
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
37
index XXXXXXX..XXXXXXX 100644
54
index XXXXXXX..XXXXXXX 100644
38
--- a/include/qom/cpu.h
55
--- a/hw/core/machine-hmp-cmds.c
39
+++ b/include/qom/cpu.h
56
+++ b/hw/core/machine-hmp-cmds.c
40
@@ -XXX,XX +XXX,XX @@ struct CPUState {
57
@@ -XXX,XX +XXX,XX @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
41
uint16_t pending_tlb_flush;
58
if (c->has_die_id) {
42
59
monitor_printf(mon, " die-id: \"%" PRIu64 "\"\n", c->die_id);
43
int hvf_fd;
60
}
44
+
61
+ if (c->has_cluster_id) {
45
+ /* track IOMMUs whose translations we've cached in the TCG TLB */
62
+ monitor_printf(mon, " cluster-id: \"%" PRIu64 "\"\n",
46
+ GArray *iommu_notifiers;
63
+ c->cluster_id);
47
};
64
+ }
48
65
if (c->has_core_id) {
49
QTAILQ_HEAD(CPUTailQ, CPUState);
66
monitor_printf(mon, " core-id: \"%" PRIu64 "\"\n", c->core_id);
50
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
67
}
68
diff --git a/hw/core/machine.c b/hw/core/machine.c
51
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
52
--- a/accel/tcg/cputlb.c
70
--- a/hw/core/machine.c
53
+++ b/accel/tcg/cputlb.c
71
+++ b/hw/core/machine.c
54
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
72
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
55
}
73
return;
56
74
}
57
sz = size;
75
58
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz);
76
+ if (props->has_cluster_id && !slot->props.has_cluster_id) {
59
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
77
+ error_setg(errp, "cluster-id is not supported");
60
+ attrs, &prot);
78
+ return;
61
assert(sz >= TARGET_PAGE_SIZE);
62
63
tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
64
diff --git a/exec.c b/exec.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/exec.c
67
+++ b/exec.c
68
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
69
return mr;
70
}
71
72
+typedef struct TCGIOMMUNotifier {
73
+ IOMMUNotifier n;
74
+ MemoryRegion *mr;
75
+ CPUState *cpu;
76
+ int iommu_idx;
77
+ bool active;
78
+} TCGIOMMUNotifier;
79
+
80
+static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
81
+{
82
+ TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
83
+
84
+ if (!notifier->active) {
85
+ return;
86
+ }
87
+ tlb_flush(notifier->cpu);
88
+    notifier->active = false;
+    /* We leave the notifier struct on the list to avoid reallocating it later.
+     * Generally the number of IOMMUs a CPU deals with will be small.
+     * In any case we can't unregister the iommu notifier from a notify
+     * callback.
+     */
+}
+
+static void tcg_register_iommu_notifier(CPUState *cpu,
+                                        IOMMUMemoryRegion *iommu_mr,
+                                        int iommu_idx)
+{
+    /* Make sure this CPU has an IOMMU notifier registered for this
+     * IOMMU/IOMMU index combination, so that we can flush its TLB
+     * when the IOMMU tells us the mappings we've cached have changed.
+     */
+    MemoryRegion *mr = MEMORY_REGION(iommu_mr);
+    TCGIOMMUNotifier *notifier;
+    int i;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        if (notifier->mr == mr && notifier->iommu_idx == iommu_idx) {
+            break;
+        }
+    }
+    if (i == cpu->iommu_notifiers->len) {
+        /* Not found, add a new entry at the end of the array */
+        cpu->iommu_notifiers = g_array_set_size(cpu->iommu_notifiers, i + 1);
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+
+        notifier->mr = mr;
+        notifier->iommu_idx = iommu_idx;
+        notifier->cpu = cpu;
+        /* Rather than trying to register interest in the specific part
+         * of the iommu's address space that we've accessed and then
+         * expand it later as subsequent accesses touch more of it, we
+         * just register interest in the whole thing, on the assumption
+         * that iommu reconfiguration will be rare.
+         */
+        iommu_notifier_init(&notifier->n,
+                            tcg_iommu_unmap_notify,
+                            IOMMU_NOTIFIER_UNMAP,
+                            0,
+                            HWADDR_MAX,
+                            iommu_idx);
+        memory_region_register_iommu_notifier(notifier->mr, &notifier->n);
+    }
+
+    if (!notifier->active) {
+        notifier->active = true;
+    }
+}
+
+static void tcg_iommu_free_notifier_list(CPUState *cpu)
+{
+    /* Destroy the CPU's notifier list */
+    int i;
+    TCGIOMMUNotifier *notifier;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
+    }
+    g_array_free(cpu->iommu_notifiers, true);
+}
+
 /* Called from RCU critical section */
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen)
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot)
 {
     MemoryRegionSection *section;
+    IOMMUMemoryRegion *iommu_mr;
+    IOMMUMemoryRegionClass *imrc;
+    IOMMUTLBEntry iotlb;
+    int iommu_idx;
     AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
-    section = address_space_translate_internal(d, addr, xlat, plen, false);
+    for (;;) {
+        section = address_space_translate_internal(d, addr, &addr, plen, false);
+
+        iommu_mr = memory_region_get_iommu(section->mr);
+        if (!iommu_mr) {
+            break;
+        }
+
+        imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
+
+        iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
+        /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
+         * doesn't short-cut its translation table walk.
+         */
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
+        addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
+                | (addr & iotlb.addr_mask));
+        /* Update the caller's prot bits to remove permissions the IOMMU
+         * is giving us a failure response for. If we get down to no
+         * permissions left at all we can give up now.
+         */
+        if (!(iotlb.perm & IOMMU_RO)) {
+            *prot &= ~(PAGE_READ | PAGE_EXEC);
+        }
+        if (!(iotlb.perm & IOMMU_WO)) {
+            *prot &= ~PAGE_WRITE;
+        }
+
+        if (!*prot) {
+            goto translate_fail;
+        }
+
+        d = flatview_to_dispatch(address_space_to_flatview(iotlb.target_as));
+    }
 
     assert(!memory_region_is_iommu(section->mr));
+    *xlat = addr;
     return section;
+
+translate_fail:
+    return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 #endif
 
@@ -XXX,XX +XXX,XX @@ void cpu_exec_unrealizefn(CPUState *cpu)
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
     }
+#ifndef CONFIG_USER_ONLY
+    tcg_iommu_free_notifier_list(cpu);
+#endif
 }
 
 Property cpu_common_props[] = {
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
     if (cc->vmsd != NULL) {
         vmstate_register(NULL, cpu->cpu_index, cc->vmsd, cpu);
     }
+
+    cpu->iommu_notifiers = g_array_new(false, true, sizeof(TCGIOMMUNotifier));
 #endif
 }
 
--
2.17.1

The remaining hunks on the new-series side of this comparison, from "qapi/machine.json: Add cluster-id", are:

+    }
+
     if (props->has_socket_id && !slot->props.has_socket_id) {
         error_setg(errp, "socket-id is not supported");
         return;
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
             continue;
         }
 
+        if (props->has_cluster_id &&
+            props->cluster_id != slot->props.cluster_id) {
+            continue;
+        }
+
         if (props->has_die_id && props->die_id != slot->props.die_id) {
             continue;
         }
@@ -XXX,XX +XXX,XX @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
         }
         g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
     }
+    if (cpu->props.has_cluster_id) {
+        if (s->len) {
+            g_string_append_printf(s, ", ");
+        }
+        g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+    }
     if (cpu->props.has_core_id) {
         if (s->len) {
             g_string_append_printf(s, ", ");
--
2.25.1
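A quick illustration of the address merge in address_space_translate_for_iotlb() above. This is a standalone sketch with invented values, not code from the patch:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Invented example: a 4K page, so iotlb.addr_mask covers bits [11:0] */
    uint64_t addr_mask = 0xfff;
    uint64_t translated_addr = 0x80002000; /* page-aligned IOMMU output */
    uint64_t addr = 0x1234;                /* input address */

    /* Page bits come from the translation, offset bits from the input,
     * exactly as in the patch's addr = ((translated_addr & ~mask) | ...)
     */
    uint64_t result = (translated_addr & ~addr_mask) | (addr & addr_mask);
    printf("0x%" PRIx64 "\n", result); /* prints 0x80002234 */
    return 0;
}
```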
diff view generated by jsdifflib
If an IOMMU supports mappings that care about the memory
transaction attributes, then it no longer has a unique
address -> output mapping, but more than one. We can
represent these using an IOMMU index, analogous to TCG's
mmu indexes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 55 +++++++++++++++++++++++++++++++++++++++++++
 memory.c              | 23 ++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
  * to report whenever mappings are changed, by calling
  * memory_region_notify_iommu() (or, if necessary, by calling
  * memory_region_notify_one() for each registered notifier).
+ *
+ * Conceptually an IOMMU provides a mapping from input address
+ * to an output TLB entry. If the IOMMU is aware of memory transaction
+ * attributes and the output TLB entry depends on the transaction
+ * attributes, we represent this using IOMMU indexes. Each index
+ * selects a particular translation table that the IOMMU has:
+ *   @attrs_to_index returns the IOMMU index for a set of transaction attributes
+ *   @translate takes an input address and an IOMMU index
+ * and the mapping returned can only depend on the input address and the
+ * IOMMU index.
+ *
+ * Most IOMMUs don't care about the transaction attributes and support
+ * only a single IOMMU index. A more complex IOMMU might have one index
+ * for secure transactions and one for non-secure transactions.
  */
 typedef struct IOMMUMemoryRegionClass {
     /* private */
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      */
     int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
+
+    /* Return the IOMMU index to use for a given set of transaction attributes.
+     *
+     * Optional method: if an IOMMU only supports a single IOMMU index then
+     * the default implementation of memory_region_iommu_attrs_to_index()
+     * will return 0.
+     *
+     * The indexes supported by an IOMMU must be contiguous, starting at 0.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attrs: memory transaction attributes
+     */
+    int (*attrs_to_index)(IOMMUMemoryRegion *iommu, MemTxAttrs attrs);
+
+    /* Return the number of IOMMU indexes this IOMMU supports.
+     *
+     * Optional method: if this method is not provided, then
+     * memory_region_iommu_num_indexes() will return 1, indicating that
+     * only a single IOMMU index is supported.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     */
+    int (*num_indexes)(IOMMUMemoryRegion *iommu);
 } IOMMUMemoryRegionClass;
 
 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
                                  enum IOMMUMemoryRegionAttr attr,
                                  void *data);
 
+/**
+ * memory_region_iommu_attrs_to_index: return the IOMMU index to
+ * use for translations with the given memory transaction attributes.
+ *
+ * @iommu_mr: the memory region
+ * @attrs: the memory transaction attributes
+ */
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs);
+
+/**
+ * memory_region_iommu_num_indexes: return the total number of IOMMU
+ * indexes that this IOMMU supports.
+ *
+ * @iommu_mr: the memory region
+ */
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
+
 /**
  * memory_region_name: get a memory region's name
  *
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
     return imrc->get_attr(iommu_mr, attr, data);
 }
 
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->attrs_to_index) {
+        return 0;
+    }
+
+    return imrc->attrs_to_index(iommu_mr, attrs);
+}
+
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->num_indexes) {
+        return 1;
+    }
+
+    return imrc->num_indexes(iommu_mr);
+}
+
 void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
 {
     uint8_t mask = 1 << client;
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

The CPU topology isn't enabled on the arm/virt machine yet, but we're
going to do that in the next patch. After the CPU topology is enabled,
"thread-id=1" becomes invalid because the CPU core is preferred on the
arm/virt machine. It means these two CPUs have 0/1 as their core IDs,
but their thread IDs are both 0. That triggers a test failure, as the
following message indicates:

[14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test ERROR
1.48s killed by signal 6 SIGABRT
>>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
QTEST_QEMU_BINARY=./qemu-system-aarch64 \
QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83 \
/home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
――――――――――――――――――――――――――――――――――――――――――――――
stderr:
qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found

This fixes the issue by providing a comprehensive SMP configuration
in aarch64_numa_cpu(). The SMP configuration isn't used before the
CPU topology is enabled in the next patch.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-3-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
     QTestState *qts;
     g_autofree char *cli = NULL;
 
-    cli = make_cli(data, "-machine smp.cpus=2 "
+    cli = make_cli(data, "-machine "
+                   "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
                   "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
                   "-numa cpu,node-id=1,thread-id=0 "
                   "-numa cpu,node-id=0,thread-id=1");
--
2.25.1
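To make the new optional hooks concrete, here is a minimal sketch of what a two-index IOMMU of the kind the comment describes (one translation table per security state) might register. This is illustrative only; "FooIOMMU" and the class-init function are invented names, not code from this series, and it assumes the QEMU headers above:

```c
#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical two-index IOMMU: index 0 for non-secure transactions,
 * index 1 for secure ones.
 */
static int foo_iommu_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
{
    return attrs.secure ? 1 : 0;
}

static int foo_iommu_num_indexes(IOMMUMemoryRegion *iommu)
{
    return 2;
}

static void foo_iommu_class_init(ObjectClass *klass, void *data)
{
    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);

    imrc->attrs_to_index = foo_iommu_attrs_to_index;
    imrc->num_indexes = foo_iommu_num_indexes;
    /* @translate would then consult the table selected by its
     * iommu_idx argument, so the returned mapping depends only on
     * the input address and the index, as the API requires.
     */
}
```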
diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

On Macronix chips, two bytes can be written to the WRSR. The first
byte configures the status register and the second the configuration
register. It is important to save the configuration value, as it
contains the dummy cycle setting used in dual or quad IO mode.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/block/m25p80.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
     case MAN_MACRONIX:
         s->quad_enable = extract32(s->data[0], 6, 1);
         if (s->len > 1) {
+            s->volatile_cfg = s->data[1];
             s->four_bytes_address_mode = extract32(s->data[1], 5, 1);
         }
         break;
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

Currently, the SMP configuration isn't considered when the CPU
topology is populated. In this case, it's impossible to provide
the default CPU-to-NUMA mapping or association based on the socket
ID of the given CPU.

This takes the SMP configuration into account when the CPU topology
is populated. The die ID for the given CPU isn't assigned since it's
not supported on the arm/virt machine. Besides, the SMP configuration
used in qtest/numa-test/aarch64_numa_cpu() is corrected to avoid a
test failure.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-4-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
     int n;
     unsigned int max_cpus = ms->smp.max_cpus;
     VirtMachineState *vms = VIRT_MACHINE(ms);
+    MachineClass *mc = MACHINE_GET_CLASS(vms);
 
     if (ms->possible_cpus) {
         assert(ms->possible_cpus->len == max_cpus);
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[n].type = ms->cpu_type;
         ms->possible_cpus->cpus[n].arch_id =
             virt_cpu_mp_affinity(vms, n);
+
+        assert(!mc->smp_props.dies_supported);
+        ms->possible_cpus->cpus[n].props.has_socket_id = true;
+        ms->possible_cpus->cpus[n].props.socket_id =
+            n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+        ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+        ms->possible_cpus->cpus[n].props.cluster_id =
+            (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
+        ms->possible_cpus->cpus[n].props.has_core_id = true;
+        ms->possible_cpus->cpus[n].props.core_id =
+            (n / ms->smp.threads) % ms->smp.cores;
         ms->possible_cpus->cpus[n].props.has_thread_id = true;
-        ms->possible_cpus->cpus[n].props.thread_id = n;
+        ms->possible_cpus->cpus[n].props.thread_id =
+            n % ms->smp.threads;
     }
     return ms->possible_cpus;
 }
--
2.25.1
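The index arithmetic in the virt.c hunk above decomposes the flat CPU index n into per-level topology IDs. Here is a standalone sketch of the same formulas, using an invented 2-socket, 1-cluster, 2-core, 2-thread configuration:

```c
#include <stdio.h>

int main(void)
{
    /* Invented SMP configuration: 2 sockets x 1 cluster x 2 cores x 2 threads */
    int sockets = 2, clusters = 1, cores = 2, threads = 2;

    for (int n = 0; n < sockets * clusters * cores * threads; n++) {
        /* Same formulas as virt_possible_cpu_arch_ids() above */
        int socket_id  = n / (clusters * cores * threads);
        int cluster_id = (n / (cores * threads)) % clusters;
        int core_id    = (n / threads) % cores;
        int thread_id  = n % threads;

        /* e.g. CPU#5 -> socket 1, cluster 0, core 0, thread 1 */
        printf("cpu %d: socket %d cluster %d core %d thread %d\n",
               n, socket_id, cluster_id, core_id, thread_id);
    }
    return 0;
}
```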
diff view generated by jsdifflib
From: Shannon Zhao <zhaoshenglong@huawei.com>

While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it
forgot to offset the data array and offset accordingly. This will
overlap the GICR registers' values and leave the last GIC_INTERNAL
irqs' registers out of the update.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_kvm.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         kvm_gicd_access(s, offset, &reg, false);
         *field = reg;
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         reg = *field;
         kvm_gicd_access(s, offset, &reg, true);
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

In aarch64_numa_cpu(), the CPU and NUMA association is something
like below. Two threads in the same core/cluster/socket are
associated with two individual NUMA nodes, which is unreal, as
Igor Mammedov mentioned. We don't expect the association to break
the NUMA-to-socket boundary, which matches the real world.

  NUMA-node  socket  cluster  core  thread
  -----------------------------------------
          0       0        0     0       0
          1       0        0     0       1

This corrects the topology for CPUs and their association with
NUMA nodes. After this patch is applied, the CPU and NUMA
association becomes something like below, which looks real.
Besides, the socket/cluster/core/thread IDs are all checked when
the NUMA node IDs are verified. It helps to check whether the CPU
topology is properly populated or not.

  NUMA-node  socket  cluster  core  thread
  -----------------------------------------
          0       1        0     0       0
          1       0        0     0       0

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-5-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
     g_autofree char *cli = NULL;
 
     cli = make_cli(data, "-machine "
-                   "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
+                   "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
                   "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
-                  "-numa cpu,node-id=1,thread-id=0 "
-                  "-numa cpu,node-id=0,thread-id=1");
+                  "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
+                  "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
     qts = qtest_init(cli);
     cpus = get_cpus(qts, &resp);
     g_assert(cpus);
 
     while ((e = qlist_pop(cpus))) {
         QDict *cpu, *props;
-        int64_t thread, node;
+        int64_t socket, cluster, core, thread, node;
 
         cpu = qobject_to(QDict, e);
         g_assert(qdict_haskey(cpu, "props"));
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
 
         g_assert(qdict_haskey(props, "node-id"));
         node = qdict_get_int(props, "node-id");
+        g_assert(qdict_haskey(props, "socket-id"));
+        socket = qdict_get_int(props, "socket-id");
+        g_assert(qdict_haskey(props, "cluster-id"));
+        cluster = qdict_get_int(props, "cluster-id");
+        g_assert(qdict_haskey(props, "core-id"));
+        core = qdict_get_int(props, "core-id");
         g_assert(qdict_haskey(props, "thread-id"));
         thread = qdict_get_int(props, "thread-id");
 
-        if (thread == 0) {
+        if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 1);
-        } else if (thread == 1) {
+        } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 0);
         } else {
             g_assert(false);
--
2.25.1
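As a side note on the GICv3 fix above, the `(GIC_INTERNAL * 8) / 8` expression reads oddly until you expand it: it is the number of skipped IRQs times 8 bits of priority each, divided by 8 bits per byte. A short sketch of the arithmetic (GIC_INTERNAL is 32 in QEMU's GIC code; this block is illustrative, not from the patch):

```c
#define GIC_INTERNAL 32 /* per-CPU SGIs + PPIs, as in QEMU's GIC headers */

/* Each interrupt has an 8-bit priority field, so skipping the first
 * GIC_INTERNAL interrupts skips (32 * 8) / 8 = 32 bytes of register
 * space; at 4 bytes per GICD_IPRIORITYR<n>, that is exactly the 8
 * RAZ/WI registers mentioned in the patch's comment.
 */
static const unsigned skip_bytes = (GIC_INTERNAL * 8) / 8;      /* 32 */
static const unsigned skip_regs  = ((GIC_INTERNAL * 8) / 8) / 4; /* 8 */
```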
diff view generated by jsdifflib
Deleted patch
The ethernet controller in the AN505 MPC FPGA image is behind
the same AHB Peripheral Protection Controller that handles
the graphics and GPIOs. (In the documentation this is clear
in the block diagram but the ethernet controller was omitted
from the table listing devices connected to the PPC.)
The ethernet sits behind AHB PPCEXP0 interface 5. We had
incorrectly claimed that this was a "gpio4", but there are
only 4 GPIOs in this image.

Correct the QEMU model to match the hardware.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180515171446.10834-1-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
     UnimplementedDeviceState spi[5];
     UnimplementedDeviceState i2c[4];
     UnimplementedDeviceState i2s_audio;
-    UnimplementedDeviceState gpio[5];
+    UnimplementedDeviceState gpio[4];
     UnimplementedDeviceState dma[4];
     UnimplementedDeviceState gfx;
     CMSDKAPBUART uart[5];
     SplitIRQ sec_resp_splitter;
     qemu_or_irq uart_irq_orgate;
+    DeviceState *lan9118;
 } MPS2TZMachineState;
 
 #define TYPE_MPS2TZ_MACHINE "mps2tz"
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
     return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
 }
 
+static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
+                                  const char *name, hwaddr size)
+{
+    SysBusDevice *s;
+    DeviceState *iotkitdev = DEVICE(&mms->iotkit);
+    NICInfo *nd = &nd_table[0];
+
+    /* In hardware this is a LAN9220; the LAN9118 is software compatible
+     * except that it doesn't support the checksum-offload feature.
+     */
+    qemu_check_nic_model(nd, "lan9118");
+    mms->lan9118 = qdev_create(NULL, "lan9118");
+    qdev_set_nic_properties(mms->lan9118, nd);
+    qdev_init_nofail(mms->lan9118);
+
+    s = SYS_BUS_DEVICE(mms->lan9118);
+    sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
+    return sysbus_mmio_get_region(s, 0);
+}
+
 static void mps2tz_common_init(MachineState *machine)
 {
     MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
         { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
         { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
         { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
-        { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
+        { "eth", make_eth_dev, NULL, 0x42000000, 0x100000 },
     },
 }, {
     .name = "ahb_ppcexp1",
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
                                  "cfg_sec_resp", 0));
     }
 
-    /* In hardware this is a LAN9220; the LAN9118 is software compatible
-     * except that it doesn't support the checksum-offload feature.
-     * The ethernet controller is not behind a PPC.
-     */
-    lan9118_init(&nd_table[0], 0x42000000,
-                 qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
-
     create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
 
     armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the mcf5206 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the an5206 board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Thomas Huth <huth@tuxfamily.org>
Message-id: 20180601141223.26630-3-peter.maydell@linaro.org
---
 hw/m68k/mcf5206.c | 48 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/mcf5206.c
+++ b/hw/m68k/mcf5206.c
@@ -XXX,XX +XXX,XX @@ static void m5206_mbar_writel(void *opaque, hwaddr offset,
     m5206_mbar_write(s, offset, value, 4);
 }
 
+static uint64_t m5206_mbar_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return m5206_mbar_readb(opaque, addr);
+    case 2:
+        return m5206_mbar_readw(opaque, addr);
+    case 4:
+        return m5206_mbar_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void m5206_mbar_writefn(void *opaque, hwaddr addr,
+                               uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        m5206_mbar_writeb(opaque, addr, value);
+        break;
+    case 2:
+        m5206_mbar_writew(opaque, addr, value);
+        break;
+    case 4:
+        m5206_mbar_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps m5206_mbar_ops = {
-    .old_mmio = {
-        .read = {
-            m5206_mbar_readb,
-            m5206_mbar_readw,
-            m5206_mbar_readl,
-        },
-        .write = {
-            m5206_mbar_writeb,
-            m5206_mbar_writew,
-            m5206_mbar_writel,
-        },
-    },
+    .read = m5206_mbar_readfn,
+    .write = m5206_mbar_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the wdt_i6300esb device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-5-peter.maydell@linaro.org
---
 hw/watchdog/wdt_i6300esb.c | 48 ++++++++++++++++++++++++++++----------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -XXX,XX +XXX,XX @@ static void i6300esb_mem_writel(void *vp, hwaddr addr, uint32_t val)
     }
 }
 
+static uint64_t i6300esb_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return i6300esb_mem_readb(opaque, addr);
+    case 2:
+        return i6300esb_mem_readw(opaque, addr);
+    case 4:
+        return i6300esb_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void i6300esb_mem_writefn(void *opaque, hwaddr addr,
+                                 uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        i6300esb_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        i6300esb_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        i6300esb_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps i6300esb_ops = {
-    .old_mmio = {
-        .read = {
-            i6300esb_mem_readb,
-            i6300esb_mem_readw,
-            i6300esb_mem_readl,
-        },
-        .write = {
-            i6300esb_mem_writeb,
-            i6300esb_mem_writew,
-            i6300esb_mem_writel,
-        },
-    },
+    .read = i6300esb_mem_readfn,
+    .write = i6300esb_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_LITTLE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the pckbd device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant of the i8042, which is used by the Unicore32 'puv3'
board and the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-6-peter.maydell@linaro.org
---
 hw/input/pckbd.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pckbd.c
+++ b/hw/input/pckbd.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kbd = {
 };
 
 /* Memory mapped interface */
-static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
+static uint64_t kbd_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
     return kbd_read_data(s, 0, 1) & 0xff;
 }
 
-static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
+static void kbd_mm_writefn(void *opaque, hwaddr addr,
+                           uint64_t value, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
     kbd_write_data(s, 0, value & 0xff, 1);
 }
 
+
 static const MemoryRegionOps i8042_mmio_ops = {
+    .read = kbd_mm_readfn,
+    .write = kbd_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
-    .old_mmio = {
-        .read = { kbd_mm_readb, kbd_mm_readb, kbd_mm_readb },
-        .write = { kbd_mm_writeb, kbd_mm_writeb, kbd_mm_writeb },
-    },
 };
 
 void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the parallel device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant, which is used by the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-7-peter.maydell@linaro.org
---
 hw/char/parallel.c | 50 ++++++++++------------------------------------
 1 file changed, 11 insertions(+), 39 deletions(-)

diff --git a/hw/char/parallel.c b/hw/char/parallel.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/parallel.c
+++ b/hw/char/parallel.c
@@ -XXX,XX +XXX,XX @@ static void parallel_isa_realizefn(DeviceState *dev, Error **errp)
 }
 
 /* Memory mapped interface */
-static uint32_t parallel_mm_readb (void *opaque, hwaddr addr)
+static uint64_t parallel_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     ParallelState *s = opaque;
 
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFF;
+    return parallel_ioport_read_sw(s, addr >> s->it_shift) &
+        MAKE_64BIT_MASK(0, size * 8);
 }
 
-static void parallel_mm_writeb (void *opaque,
-                                hwaddr addr, uint32_t value)
+static void parallel_mm_writefn(void *opaque, hwaddr addr,
+                                uint64_t value, unsigned size)
 {
     ParallelState *s = opaque;
 
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFF);
-}
-
-static uint32_t parallel_mm_readw (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFFFF;
-}
-
-static void parallel_mm_writew (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFFFF);
-}
-
-static uint32_t parallel_mm_readl (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift);
-}
-
-static void parallel_mm_writel (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value);
+    parallel_ioport_write_sw(s, addr >> s->it_shift,
+                             value & MAKE_64BIT_MASK(0, size * 8));
 }
 
 static const MemoryRegionOps parallel_mm_ops = {
-    .old_mmio = {
-        .read = { parallel_mm_readb, parallel_mm_readw, parallel_mm_readl },
-        .write = { parallel_mm_writeb, parallel_mm_writew, parallel_mm_writel },
-    },
+    .read = parallel_mm_readfn,
+    .write = parallel_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
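For reference, MAKE_64BIT_MASK(0, size * 8) in the new read/write functions builds the byte-width mask that the old per-size handlers hard-coded. A small standalone check of the values involved; the macro body below mirrors the definition in QEMU's include/qemu/bitops.h:

```c
#include <inttypes.h>
#include <stdio.h>

#define MAKE_64BIT_MASK(shift, length) \
    (((~0ULL) >> (64 - (length))) << (shift))

int main(void)
{
    for (unsigned size = 1; size <= 4; size *= 2) {
        /* size 1 -> 0xff, size 2 -> 0xffff, size 4 -> 0xffffffff */
        printf("size %u: 0x%" PRIx64 "\n", size,
               (uint64_t)MAKE_64BIT_MASK(0, size * 8));
    }
    return 0;
}
```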
diff view generated by jsdifflib
Deleted patch
The stellaris board is still using the legacy armv7m_init() function,
which predates conversion of the ARMv7M into a proper QOM container
object. Make the board code directly create the ARMv7M object instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-2-peter.maydell@linaro.org
---
 hw/arm/stellaris.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/log.h"
 #include "exec/address-spaces.h"
 #include "sysemu/sysemu.h"
+#include "hw/arm/armv7m.h"
 #include "hw/char/pl011.h"
 #include "hw/misc/unimp.h"
 #include "cpu.h"
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
                            &error_fatal);
     memory_region_add_subregion(system_memory, 0x20000000, sram);
 
-    nvic = armv7m_init(system_memory, flash_size, NUM_IRQ_LINES,
-                       ms->kernel_filename, ms->cpu_type);
+    nvic = qdev_create(NULL, TYPE_ARMV7M);
+    qdev_prop_set_uint32(nvic, "num-irq", NUM_IRQ_LINES);
+    qdev_prop_set_string(nvic, "cpu-type", ms->cpu_type);
+    object_property_set_link(OBJECT(nvic), OBJECT(get_system_memory()),
+                             "memory", &error_abort);
+    /* This will exit with an error if the user passed us a bad cpu_type */
+    qdev_init_nofail(nvic);
 
     qdev_connect_gpio_out_named(nvic, "SYSRESETREQ", 0,
                                 qemu_allocate_irq(&do_sys_reset, NULL, 0));
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
     create_unimplemented_device("analogue-comparator", 0x4003c000, 0x1000);
     create_unimplemented_device("hibernation", 0x400fc000, 0x1000);
     create_unimplemented_device("flash-control", 0x400fd000, 0x1000);
+
+    armv7m_load_kernel(ARM_CPU(first_cpu), ms->kernel_filename, flash_size);
 }
 
 /* FIXME: Figure out how to generate these from stellaris_boards. */
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Remove the now-unused armv7m_init() function. This was a legacy from
before we properly QOMified ARMv7M, and it has some flaws:

 * it combines work that needs to be done by an SoC object (creating
   and initializing the TYPE_ARMV7M object) with work that needs to
   be done by the board model (setting the system up to load the ELF
   file specified with -kernel)
 * TYPE_ARMV7M creation failure is fatal, but an SoC object wants to
   arrange to propagate the failure outward
 * it uses allocate-and-create via qdev_create() whereas the current
   preferred style for SoC objects is to do creation in-place

Board and SoC models can instead do the two jobs this function
was doing themselves, in the right places and with whatever their
preferred style/error handling is.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-3-peter.maydell@linaro.org
---
 include/hw/arm/arm.h |  8 ++------
 hw/arm/armv7m.c      | 21 ---------------------
 2 files changed, 2 insertions(+), 27 deletions(-)

diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/arm.h
+++ b/include/hw/arm/arm.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     ARM_ENDIANNESS_BE32,
 } arm_endianness;
 
-/* armv7m.c */
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type);
 /**
  * armv7m_load_kernel:
  * @cpu: CPU
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
 * @mem_size: mem_size: maximum image size to load
 *
 * Load the guest image for an ARMv7M system. This must be called by
- * any ARMv7M board, either directly or via armv7m_init(). (This is
- * necessary to ensure that the CPU resets correctly on system reset,
- * as well as for kernel loading.)
+ * any ARMv7M board. (This is necessary to ensure that the CPU resets
+ * correctly on system reset, as well as for kernel loading.)
 */
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size);
 
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_reset(void *opaque)
     cpu_reset(CPU(cpu));
 }
 
-/* Init CPU and memory for a v7-M based board.
-   mem_size is in bytes.
-   Returns the ARMv7M device. */
-
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type)
-{
-    DeviceState *armv7m;
-
-    armv7m = qdev_create(NULL, TYPE_ARMV7M);
-    qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
-    qdev_prop_set_string(armv7m, "cpu-type", cpu_type);
-    object_property_set_link(OBJECT(armv7m), OBJECT(get_system_memory()),
-                             "memory", &error_abort);
-    /* This will exit with an error if the user passed us a bad cpu_type */
-    qdev_init_nofail(armv7m);
-
-    armv7m_load_kernel(ARM_CPU(first_cpu), kernel_filename, mem_size);
-    return armv7m;
-}
-
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
 {
     int image_size;
--
2.17.1
diff view generated by jsdifflib
Deleted patch
For the IoTKit MPC support, we need to wire together the
interrupt outputs of 17 MPCs; this exceeds the current
value of MAX_OR_LINES. Increase MAX_OR_LINES to 32 (which
should be enough for anyone).

The tricky part is retaining the migration compatibility for
existing OR gates; we add a subsection which is only used
for larger OR gates, and define it such that we can freely
increase MAX_OR_LINES in future (or even move to a dynamically
allocated levels[] array without an upper size limit) without
breaking compatibility.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-10-peter.maydell@linaro.org
---
 include/hw/or-irq.h |  5 ++++-
 hw/core/or-irq.c    | 39 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/or-irq.h
+++ b/include/hw/or-irq.h
@@ -XXX,XX +XXX,XX @@
 
 #define TYPE_OR_IRQ "or-irq"
 
-#define MAX_OR_LINES 16
+/* This can safely be increased if necessary without breaking
+ * migration compatibility (as long as it remains greater than 15).
+ */
+#define MAX_OR_LINES 32
 
 typedef struct OrIRQState qemu_or_irq;
 
diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/or-irq.c
+++ b/hw/core/or-irq.c
@@ -XXX,XX +XXX,XX @@ static void or_irq_init(Object *obj)
     qdev_init_gpio_out(DEVICE(obj), &s->out_irq, 1);
 }
 
+/* The original version of this device had a fixed 16 entries in its
+ * VMState array; devices with more inputs than this need to
+ * migrate the extra lines via a subsection.
+ * The subsection migrates as much of the levels[] array as is needed
+ * (including repeating the first 16 elements), to avoid the awkwardness
+ * of splitting it in two to meet the requirements of VMSTATE_VARRAY_UINT16.
+ */
+#define OLD_MAX_OR_LINES 16
+#if MAX_OR_LINES < OLD_MAX_OR_LINES
+#error MAX_OR_LINES must be at least 16 for migration compatibility
+#endif
+
+static bool vmstate_extras_needed(void *opaque)
+{
+    qemu_or_irq *s = OR_IRQ(opaque);
+
+    return s->num_lines >= OLD_MAX_OR_LINES;
+}
+
+static const VMStateDescription vmstate_or_irq_extras = {
+    .name = "or-irq-extras",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vmstate_extras_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_VARRAY_UINT16_UNSAFE(levels, qemu_or_irq, num_lines, 0,
+                                     vmstate_info_bool, bool),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
 static const VMStateDescription vmstate_or_irq = {
     .name = TYPE_OR_IRQ,
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
-        VMSTATE_BOOL_ARRAY(levels, qemu_or_irq, MAX_OR_LINES),
+        VMSTATE_BOOL_SUB_ARRAY(levels, qemu_or_irq, 0, OLD_MAX_OR_LINES),
         VMSTATE_END_OF_LIST(),
-    }
+    },
+    .subsections = (const VMStateDescription*[]) {
+        &vmstate_or_irq_extras,
+        NULL
+    },
 };
 
 static Property or_irq_properties[] = {
--
2.17.1
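With the limit raised, wiring many inputs into one gate stays the usual qdev dance. Below is a hedged sketch of board code in the qdev style of this era; the mpc array, target_irq, and the wiring are invented placeholders, not code from the series:

```c
/* Sketch: OR together 17 MPC interrupt outputs into one IRQ line. */
DeviceState *orgate = qdev_create(NULL, TYPE_OR_IRQ);

qdev_prop_set_uint16(orgate, "num-lines", 17); /* now > 16 is allowed */
qdev_init_nofail(orgate);
qdev_connect_gpio_out(orgate, 0, target_irq);

for (int i = 0; i < 17; i++) {
    sysbus_connect_irq(SYS_BUS_DEVICE(&mpc[i]), 0,
                       qdev_get_gpio_in(orgate, i));
}
```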
diff view generated by jsdifflib
Now that we have stn_p() and ldn_p(), we can use them in various
functions in exec.c that used to have their own switch-on-size code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-4-peter.maydell@linaro.org
---
 exec.c | 112 +++++----------------------------------------------------
 1 file changed, 8 insertions(+), 104 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
     memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
                                   ram_addr, size);
 
-    switch (size) {
-    case 1:
-        stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 2:
-        stw_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 4:
-        stl_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 8:
-        stq_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    default:
-        abort();
-    }
+    stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
     memory_notdirty_write_complete(&ndi);
 }
 
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
     if (res) {
         return res;
     }
-    switch (len) {
-    case 1:
-        *data = ldub_p(buf);
-        return MEMTX_OK;
-    case 2:
-        *data = lduw_p(buf);
-        return MEMTX_OK;
-    case 4:
-        *data = (uint32_t)ldl_p(buf);
-        return MEMTX_OK;
-    case 8:
-        *data = ldq_p(buf);
-        return MEMTX_OK;
-    default:
-        abort();
-    }
+    *data = ldn_p(buf, len);
+    return MEMTX_OK;
 }
 
 static MemTxResult subpage_write(void *opaque, hwaddr addr,
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
            " value %"PRIx64"\n",
            __func__, subpage, len, addr, value);
 #endif
-    switch (len) {
-    case 1:
-        stb_p(buf, value);
-        break;
-    case 2:
-        stw_p(buf, value);
-        break;
-    case 4:
-        stl_p(buf, value);
-        break;
-    case 8:
-        stq_p(buf, value);
-        break;
-    default:
-        abort();
-    }
+    stn_p(buf, len, value);
     return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
 }
 
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             l = memory_access_size(mr, l, addr1);
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
-            switch (l) {
-            case 8:
-                /* 64 bit write access */
-                val = ldq_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 8,
-                                                       attrs);
-                break;
-            case 4:
-                /* 32 bit write access */
-                val = (uint32_t)ldl_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 4,
-                                                       attrs);
-                break;
-            case 2:
-                /* 16 bit write access */
-                val = lduw_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 2,
-                                                       attrs);
-                break;
-            case 1:
-                /* 8 bit write access */
-                val = ldub_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 1,
-                                                       attrs);
-                break;
-            default:
-                abort();
-            }
+            val = ldn_p(buf, l);
+            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            switch (l) {
-            case 8:
-                /* 64 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 8,
-                                                      attrs);
-                stq_p(buf, val);
-                break;
-            case 4:
-                /* 32 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 4,
-                                                      attrs);
-                stl_p(buf, val);
-                break;
-            case 2:
-                /* 16 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 2,
-                                                      attrs);
-                stw_p(buf, val);
-                break;
-            case 1:
-                /* 8 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 1,
-                                                      attrs);
-                stb_p(buf, val);
-                break;
-            default:
-                abort();
-            }
+            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            stn_p(buf, l, val);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
--
2.17.1

From: Gavin Shan <gshan@redhat.com>

When the CPU-to-NUMA association isn't explicitly provided by users,
the default one is given by mc->get_default_cpu_node_id(). However,
the CPU topology isn't fully considered in the default association,
and this causes CPU topology broken warnings when booting a Linux
guest.

For example, the following warning messages are observed when the
Linux guest is booted with the following command lines.

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host \
  -cpu host \
  -smp 6,sockets=2,cores=3,threads=1 \
  -m 1024M,slots=16,maxmem=64G \
  -object memory-backend-ram,id=mem0,size=128M \
  -object memory-backend-ram,id=mem1,size=128M \
  -object memory-backend-ram,id=mem2,size=128M \
  -object memory-backend-ram,id=mem3,size=128M \
  -object memory-backend-ram,id=mem4,size=128M \
  -object memory-backend-ram,id=mem5,size=384M \
  -numa node,nodeid=0,memdev=mem0 \
  -numa node,nodeid=1,memdev=mem1 \
  -numa node,nodeid=2,memdev=mem2 \
  -numa node,nodeid=3,memdev=mem3 \
  -numa node,nodeid=4,memdev=mem4 \
  -numa node,nodeid=5,memdev=mem5
  :
  alternatives: patching kernel code
  BUG: arch topology borken
  the CLS domain not a subset of the MC domain
  <the above error log repeats>
  BUG: arch topology borken
  the DIE domain not a subset of the NODE domain

With the current implementation of mc->get_default_cpu_node_id(),
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 separately.
That's incorrect, because CPU#0/1/2 should be associated with the
same NUMA node, since they sit in the same socket.

This fixes the issue by considering the socket ID when the default
CPU-to-NUMA association is provided in virt_possible_cpu_arch_ids().
With this applied, no more CPU topology broken warnings are seen
from the Linux guest. The 6 CPUs are associated with NODE#0/1, and
there are no CPUs associated with NODE#2/3/4/5.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-6-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 
 static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx % ms->numa_state->num_nodes;
+    int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+    return socket_id % ms->numa_state->num_nodes;
 }
 
 static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
--
2.25.1
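For readers who haven't met the helpers yet: stn_p() and ldn_p() dispatch on the access size, wrapping the fixed-size store/load helpers used above. Here is a simplified sketch of their behaviour; the real definitions are macro-generated in QEMU's include/qemu/bswap.h, so treat this as an approximation:

```c
/* Simplified sketch of the size-dispatching helpers. */
static inline void stn_p(void *ptr, int sz, uint64_t v)
{
    switch (sz) {
    case 1:
        stb_p(ptr, v);
        break;
    case 2:
        stw_p(ptr, v);
        break;
    case 4:
        stl_p(ptr, v);
        break;
    case 8:
        stq_p(ptr, v);
        break;
    default:
        g_assert_not_reached();
    }
}

static inline uint64_t ldn_p(const void *ptr, int sz)
{
    switch (sz) {
    case 1:
        return ldub_p(ptr);
    case 2:
        return lduw_p(ptr);
    case 4:
        return ldl_p(ptr);
    case 8:
        return ldq_p(ptr);
    default:
        g_assert_not_reached();
    }
}
```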
diff view generated by jsdifflib
1
The API for cpu_transaction_failed() says that it takes the physical
1
From: Gavin Shan <gshan@redhat.com>
2
address for the failed transaction. However we were actually passing
3
it the offset within the target MemoryRegion. We don't currently
4
have any target CPU implementations of this hook that require the
5
physical address; fix this bug so we don't get confused if we ever
6
do add one.
7
2
8
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
3
When the PPTT table is built, the CPU topology is re-calculated, but
4
it's unecessary because the CPU topology has been populated in
5
virt_possible_cpu_arch_ids() on arm/virt machine.
6
7
This reworks build_pptt() to avoid by reusing the existing IDs in
8
ms->possible_cpus. Currently, the only user of build_pptt() is
9
arm/virt machine.
10
11
Signed-off-by: Gavin Shan <gshan@redhat.com>
12
Tested-by: Yanan Wang <wangyanan55@huawei.com>
13
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
14
Acked-by: Igor Mammedov <imammedo@redhat.com>
15
Acked-by: Michael S. Tsirkin <mst@redhat.com>
16
Message-id: 20220503140304.855514-7-gshan@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
13
---
18
---
14
include/exec/exec-all.h | 13 ++++++++++--
19
hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
15
accel/tcg/cputlb.c | 44 +++++++++++++++++++++++++++++------------
20
1 file changed, 48 insertions(+), 63 deletions(-)
16
exec.c | 5 +++--
17
3 files changed, 45 insertions(+), 17 deletions(-)
18
21
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
22
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
20
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/exec-all.h
24
--- a/hw/acpi/aml-build.c
22
+++ b/include/exec/exec-all.h
25
+++ b/hw/acpi/aml-build.c
23
@@ -XXX,XX +XXX,XX @@ void tb_lock_reset(void);
26
@@ -XXX,XX +XXX,XX @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
24
27
const char *oem_id, const char *oem_table_id)
25
#if !defined(CONFIG_USER_ONLY)
26
27
-struct MemoryRegion *iotlb_to_region(CPUState *cpu,
28
- hwaddr index, MemTxAttrs attrs);
29
+/**
30
+ * iotlb_to_section:
31
+ * @cpu: CPU performing the access
32
+ * @index: TCG CPU IOTLB entry
33
+ *
34
+ * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
35
+ * it refers to. @index will have been initially created and returned
36
+ * by memory_region_section_get_iotlb().
37
+ */
38
+struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
39
+ hwaddr index, MemTxAttrs attrs);
40
41
void tlb_fill(CPUState *cpu, target_ulong addr, int size,
42
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
43
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/accel/tcg/cputlb.c
46
+++ b/accel/tcg/cputlb.c
47
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
48
target_ulong addr, uintptr_t retaddr, int size)
49
{
28
{
50
CPUState *cpu = ENV_GET_CPU(env);
29
MachineClass *mc = MACHINE_GET_CLASS(ms);
51
- hwaddr physaddr = iotlbentry->addr;
30
- GQueue *list = g_queue_new();
52
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
31
- guint pptt_start = table_data->len;
53
+ hwaddr mr_offset;
32
- guint parent_offset;
54
+ MemoryRegionSection *section;
33
- guint length, i;
55
+ MemoryRegion *mr;
34
- int uid = 0;
56
uint64_t val;
35
- int socket;
57
bool locked = false;
36
+ CPUArchIdList *cpus = ms->possible_cpus;
58
MemTxResult r;
37
+ int64_t socket_id = -1, cluster_id = -1, core_id = -1;
59
38
+ uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
60
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
39
+ uint32_t pptt_start = table_data->len;
61
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
40
+ int n;
62
+ mr = section->mr;
41
AcpiTable table = { .sig = "PPTT", .rev = 2,
63
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
42
.oem_id = oem_id, .oem_table_id = oem_table_id };
64
cpu->mem_io_pc = retaddr;
43
65
if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
44
acpi_table_begin(&table, table_data);
66
cpu_io_recompile(cpu, retaddr);
45
67
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
46
- for (socket = 0; socket < ms->smp.sockets; socket++) {
68
qemu_mutex_lock_iothread();
47
- g_queue_push_tail(list,
69
locked = true;
48
- GUINT_TO_POINTER(table_data->len - pptt_start));
70
}
49
- build_processor_hierarchy_node(
71
- r = memory_region_dispatch_read(mr, physaddr,
50
- table_data,
72
+ r = memory_region_dispatch_read(mr, mr_offset,
51
- /*
73
&val, size, iotlbentry->attrs);
52
- * Physical package - represents the boundary
74
if (r != MEMTX_OK) {
53
- * of a physical package
75
+ hwaddr physaddr = mr_offset +
54
- */
76
+ section->offset_within_address_space -
55
- (1 << 0),
77
+ section->offset_within_region;
56
- 0, socket, NULL, 0);
78
+
57
- }
79
cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
58
-
80
mmu_idx, iotlbentry->attrs, r, retaddr);
59
- if (mc->smp_props.clusters_supported) {
81
}
60
- length = g_queue_get_length(list);
82
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
61
- for (i = 0; i < length; i++) {
83
uintptr_t retaddr, int size)
62
- int cluster;
84
{
63
-
85
CPUState *cpu = ENV_GET_CPU(env);
64
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
86
- hwaddr physaddr = iotlbentry->addr;
65
- for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
87
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
66
- g_queue_push_tail(list,
88
+ hwaddr mr_offset;
67
- GUINT_TO_POINTER(table_data->len - pptt_start));
89
+ MemoryRegionSection *section;
68
- build_processor_hierarchy_node(
90
+ MemoryRegion *mr;
69
- table_data,
91
bool locked = false;
70
- (0 << 0), /* not a physical package */
92
MemTxResult r;
71
- parent_offset, cluster, NULL, 0);
93
72
- }
94
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
73
+ /*
95
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
74
+    mr = section->mr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, physaddr,
+    r = memory_region_dispatch_write(mr, mr_offset,
                                      val, size, iotlbentry->attrs);
     if (r != MEMTX_OK) {
+        hwaddr physaddr = mr_offset +
+            section->offset_within_address_space -
+            section->offset_within_region;
+
         cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
  */
 tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
 {
-    int mmu_idx, index, pd;
+    int mmu_idx, index;
     void *p;
     MemoryRegion *mr;
+    MemoryRegionSection *section;
     CPUState *cpu = ENV_GET_CPU(env);
     CPUIOTLBEntry *iotlbentry;
-    hwaddr physaddr;
+    hwaddr physaddr, mr_offset;

     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         }
     }
     iotlbentry = &env->iotlb[mmu_idx][index];
-    pd = iotlbentry->addr & ~TARGET_PAGE_MASK;
-    mr = iotlb_to_region(cpu, pd, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
     if (memory_region_is_unassigned(mr)) {
         qemu_mutex_lock_iothread();
         if (memory_region_request_mmio_ptr(mr, addr)) {
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
      * and use the MemTXResult it produced). However it is the
      * simplest place we have currently available for the check.
      */
-    physaddr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    physaddr = mr_offset +
+        section->offset_within_address_space -
+        section->offset_within_region;
     cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
                            iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps readonly_mem_ops = {
     },
 };

-MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index, MemTxAttrs attrs)
+MemoryRegionSection *iotlb_to_section(CPUState *cpu,
+                                      hwaddr index, MemTxAttrs attrs)
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
     AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
     MemoryRegionSection *sections = d->map.sections;

-    return sections[index & ~TARGET_PAGE_MASK].mr;
+    return &sections[index & ~TARGET_PAGE_MASK];
 }

 static void io_mem_init(void)
--
2.17.1

+     * This works with the assumption that cpus[n].props.*_id has been
+     * sorted from top to down levels in mc->possible_cpu_arch_ids().
+     * Otherwise, the unexpected and duplicated containers will be
+     * created.
+     */
+    for (n = 0; n < cpus->len; n++) {
+        if (cpus->cpus[n].props.socket_id != socket_id) {
+            assert(cpus->cpus[n].props.socket_id > socket_id);
+            socket_id = cpus->cpus[n].props.socket_id;
+            cluster_id = -1;
+            core_id = -1;
+            socket_offset = table_data->len - pptt_start;
+            build_processor_hierarchy_node(table_data,
+                (1 << 0), /* Physical package */
+                0, socket_id, NULL, 0);
         }
-    }

-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int core;
-
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (core = 0; core < ms->smp.cores; core++) {
-            if (ms->smp.threads > 1) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, core, NULL, 0);
-            } else {
-                build_processor_hierarchy_node(
-                    table_data,
-                    (1 << 1) | /* ACPI Processor ID valid */
-                    (1 << 3), /* Node is a Leaf */
-                    parent_offset, uid++, NULL, 0);
+        if (mc->smp_props.clusters_supported) {
+            if (cpus->cpus[n].props.cluster_id != cluster_id) {
+                assert(cpus->cpus[n].props.cluster_id > cluster_id);
+                cluster_id = cpus->cpus[n].props.cluster_id;
+                core_id = -1;
+                cluster_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    socket_offset, cluster_id, NULL, 0);
             }
+        } else {
+            cluster_offset = socket_offset;
         }
-    }

-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int thread;
+        if (ms->smp.threads == 1) {
+            build_processor_hierarchy_node(table_data,
+                (1 << 1) | /* ACPI Processor ID valid */
+                (1 << 3), /* Node is a Leaf */
+                cluster_offset, n, NULL, 0);
+        } else {
+            if (cpus->cpus[n].props.core_id != core_id) {
+                assert(cpus->cpus[n].props.core_id > core_id);
+                core_id = cpus->cpus[n].props.core_id;
+                core_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    cluster_offset, core_id, NULL, 0);
+            }

-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (thread = 0; thread < ms->smp.threads; thread++) {
-            build_processor_hierarchy_node(
-                table_data,
+            build_processor_hierarchy_node(table_data,
                 (1 << 1) | /* ACPI Processor ID valid */
                 (1 << 2) | /* Processor is a Thread */
                 (1 << 3), /* Node is a Leaf */
-                parent_offset, uid++, NULL, 0);
+                core_offset, n, NULL, 0);
         }
     }
-
-    g_queue_free(list);
     acpi_table_end(linker, &table);
 }

--
2.25.1

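An aside on the PPTT hunk above: the single pass works only because
mc->possible_cpu_arch_ids() hands back the IDs sorted from socket level
downwards, so each container node is emitted exactly once, at the point
where the ID on its level changes. Below is a stand-alone sketch of the
same dedup pattern in plain C; the topology is made up, and emit_node()
and next_offset are invented stand-ins for
build_processor_hierarchy_node() and table_data->len, not QEMU API:

    /*
     * Sketch only: one linear pass over pre-sorted (socket, cluster,
     * core) IDs, emitting a node the first time each ID appears.
     */
    #include <assert.h>
    #include <stdio.h>

    typedef struct { int socket_id, cluster_id, core_id; } CpuProps;

    static int next_offset; /* stand-in for table_data->len - pptt_start */

    static int emit_node(const char *kind, int parent, int id)
    {
        printf("%-7s id=%d parent=%d -> offset %d\n",
               kind, id, parent, next_offset);
        return next_offset++;
    }

    int main(void)
    {
        /* 2 sockets x 2 clusters x 2 cores, one thread each, pre-sorted */
        static const CpuProps cpus[] = {
            {0, 0, 0}, {0, 0, 1}, {0, 1, 2}, {0, 1, 3},
            {1, 2, 4}, {1, 2, 5}, {1, 3, 6}, {1, 3, 7},
        };
        int socket_id = -1, cluster_id = -1;
        int socket_offset = 0, cluster_offset = 0;
        int n;

        for (n = 0; n < (int)(sizeof(cpus) / sizeof(cpus[0])); n++) {
            if (cpus[n].socket_id != socket_id) {
                assert(cpus[n].socket_id > socket_id); /* needs sorted input */
                socket_id = cpus[n].socket_id;
                cluster_id = -1;            /* restart the level below */
                socket_offset = emit_node("socket", 0, socket_id);
            }
            if (cpus[n].cluster_id != cluster_id) {
                assert(cpus[n].cluster_id > cluster_id);
                cluster_id = cpus[n].cluster_id;
                cluster_offset = emit_node("cluster", socket_offset,
                                           cluster_id);
            }
            /* with one thread per core, every entry is a leaf core */
            emit_node("core", cluster_offset, cpus[n].core_id);
        }
        return 0;
    }

Running it prints two socket containers, four cluster containers and
eight leaf cores; unsorted input trips the asserts, mirroring the
assumption stated in the comment above.
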
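Similarly, for the cputlb/exec.c hunks further up: the iotlb entry now
yields an offset relative to the MemoryRegion, and the address handed to
cpu_transaction_failed() is rebuilt from where the section sits in the
address space. A toy calculation with invented numbers (not QEMU code)
showing that arithmetic:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t mr_offset = 0x204;        /* access offset within the MR */
        uint64_t offset_within_address_space = 0x10000000; /* section base */
        uint64_t offset_within_region = 0x200; /* section start inside MR */

        /* physaddr = mr_offset + AS offset - offset within the region */
        uint64_t physaddr = mr_offset + offset_within_address_space
                            - offset_within_region;

        printf("physaddr = 0x%" PRIx64 "\n", physaddr); /* 0x10000004 */
        return 0;
    }
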
Deleted patch
The codebase has a bit of a mix of different multiline
comment styles. State a preference for the Linux kernel
style:
/*
 * Star on the left for each line.
 * Leading slash-star and trailing star-slash
 * each go on a line of their own.
 */

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180611141716.3813-1-peter.maydell@linaro.org
---
 CODING_STYLE | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/CODING_STYLE b/CODING_STYLE
index XXXXXXX..XXXXXXX 100644
--- a/CODING_STYLE
+++ b/CODING_STYLE
@@ -XXX,XX +XXX,XX @@ We use traditional C-style /* */ comments and avoid // comments.
 Rationale: The // form is valid in C99, so this is purely a matter of
 consistency of style. The checkpatch script will warn you about this.

+Multiline comment blocks should have a row of stars on the left,
+and the initial /* and terminating */ both on their own lines:
+    /*
+     * like
+     * this
+     */
+This is the same format required by the Linux kernel coding style.
+
+(Some of the existing comments in the codebase use the GNU Coding
+Standards form which does not have stars on the left, or other
+variations; avoid these when writing new comments, but don't worry
+about converting to the preferred form unless you're editing that
+comment anyway.)
+
+Rationale: Consistency, and ease of visually picking out a multiline
+comment from the surrounding code.
+
 8. trace-events style

 8.1 0x prefix
--
2.17.1

Deleted patch
There's a common pattern in QEMU where a function needs to perform
a data load or store of an N byte integer in a particular endianness.
At the moment this is handled by doing a switch() on the size and
calling the appropriate ld*_p or st*_p function for each size.

Provide a new family of functions ldn_*_p() and stn_*_p() which
take the size as an argument and do the switch() themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
---
 include/exec/cpu-all.h      |  4 +++
 include/qemu/bswap.h        | 52 +++++++++++++++++++++++++++++++++++++
 docs/devel/loads-stores.rst | 15 +++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_be_p(p, v)
 #define stfl_p(p, v) stfl_be_p(p, v)
 #define stfq_p(p, v) stfq_be_p(p, v)
+#define ldn_p(p, sz) ldn_be_p(p, sz)
+#define stn_p(p, sz, v) stn_be_p(p, sz, v)
 #else
 #define lduw_p(p) lduw_le_p(p)
 #define ldsw_p(p) ldsw_le_p(p)
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_le_p(p, v)
 #define stfl_p(p, v) stfl_le_p(p, v)
 #define stfq_p(p, v) stfq_le_p(p, v)
+#define ldn_p(p, sz) ldn_le_p(p, sz)
+#define stn_p(p, sz, v) stn_le_p(p, sz, v)
 #endif

 /* MMU memory access macros */
diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/bswap.h
+++ b/include/qemu/bswap.h
@@ -XXX,XX +XXX,XX @@ typedef union {
  * For accessors that take a guest address rather than a
  * host address, see the cpu_{ld,st}_* accessors defined in
  * cpu_ldst.h.
+ *
+ * For cases where the size to be used is not fixed at compile time,
+ * there are
+ *  stn{endian}_p(ptr, sz, val)
+ * which stores @val to @ptr as an @endian-order number @sz bytes in size
+ * and
+ *  ldn{endian}_p(ptr, sz)
+ * which loads @sz bytes from @ptr as an unsigned @endian-order number
+ * and returns it in a uint64_t.
  */

 static inline int ldub_p(const void *ptr)
@@ -XXX,XX +XXX,XX @@ static inline unsigned long leul_to_cpu(unsigned long v)
 #endif
 }

+/* Store v to p as a sz byte value in host order */
+#define DO_STN_LDN_P(END) \
+    static inline void stn_## END ## _p(void *ptr, int sz, uint64_t v) \
+    { \
+        switch (sz) { \
+        case 1: \
+            stb_p(ptr, v); \
+            break; \
+        case 2: \
+            stw_ ## END ## _p(ptr, v); \
+            break; \
+        case 4: \
+            stl_ ## END ## _p(ptr, v); \
+            break; \
+        case 8: \
+            stq_ ## END ## _p(ptr, v); \
+            break; \
+        default: \
+            g_assert_not_reached(); \
+        } \
+    } \
+    static inline uint64_t ldn_## END ## _p(const void *ptr, int sz) \
+    { \
+        switch (sz) { \
+        case 1: \
+            return ldub_p(ptr); \
+        case 2: \
+            return lduw_ ## END ## _p(ptr); \
+        case 4: \
+            return (uint32_t)ldl_ ## END ## _p(ptr); \
+        case 8: \
+            return ldq_ ## END ## _p(ptr); \
+        default: \
+            g_assert_not_reached(); \
+        } \
+    }
+
+DO_STN_LDN_P(he)
+DO_STN_LDN_P(le)
+DO_STN_LDN_P(be)
+
+#undef DO_STN_LDN_P
+
 #undef le_bswap
 #undef be_bswap
 #undef le_bswaps
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ The ``_{endian}`` infix is omitted for target-endian accesses.
 The target endian accessors are only available to source
 files which are built per-target.

+There are also functions which take the size as an argument:
+
+load: ``ldn{endian}_p(ptr, sz)``
+
+which performs an unsigned load of ``sz`` bytes from ``ptr``
+as an ``{endian}`` order value and returns it in a uint64_t.
+
+store: ``stn{endian}_p(ptr, sz, val)``
+
+which stores ``val`` to ``ptr`` as an ``{endian}`` order value
+of size ``sz`` bytes.
+
+
 Regexes for git grep
  - ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
+ - ``\<ldn_\([hbl]e\)?_p\>``
+ - ``\<stn_\([hbl]e\)?_p\>``

 ``cpu_{ld,st}_*``
 ~~~~~~~~~~~~~~~~~
--
2.17.1

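For illustration, a hypothetical call site (not part of this patch; the
demo_* names are invented) showing the switch() described in the commit
message collapsing into the new accessors. This fragment only compiles
inside the QEMU tree, against the declarations added above:

    #include "qemu/osdep.h"
    #include "qemu/bswap.h"

    /* before: every caller switches on the access size itself */
    static void demo_write_old(void *buf, unsigned size, uint64_t val)
    {
        switch (size) {
        case 1:
            stb_p(buf, val);
            break;
        case 2:
            stw_le_p(buf, val);
            break;
        case 4:
            stl_le_p(buf, val);
            break;
        case 8:
            stq_le_p(buf, val);
            break;
        default:
            g_assert_not_reached();
        }
    }

    /* after: the size is just another argument */
    static void demo_write_new(void *buf, unsigned size, uint64_t val)
    {
        stn_le_p(buf, size, val);
    }

    static uint64_t demo_read_new(const void *buf, unsigned size)
    {
        return ldn_le_p(buf, size);
    }
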
Deleted patch
In subpage_read() we perform a load of the data into a local buffer
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
depending on its size, storing the result into the uint64_t *data.
Since ldl_p() returns an 'int', this means that for the 4-byte
case we will sign-extend the data, whereas for 1 and 2 byte
reads we zero-extend it.

This ought not to matter since the caller will likely ignore values in
the high bytes of the data, but add a cast so that we're consistent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-3-peter.maydell@linaro.org
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
         *data = lduw_p(buf);
         return MEMTX_OK;
     case 4:
-        *data = ldl_p(buf);
+        *data = (uint32_t)ldl_p(buf);
         return MEMTX_OK;
     case 8:
         *data = ldq_p(buf);
--
2.17.1

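A stand-alone demonstration of the difference the cast makes (plain C,
not QEMU code; the int variable models ldl_p()'s return type):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int loaded = (int)0x80000000u; /* 4-byte value with top bit set */
        uint64_t data;

        data = loaded;           /* sign-extends through the int */
        printf("no cast:   0x%016" PRIx64 "\n", data);
        /* prints 0xffffffff80000000 */

        data = (uint32_t)loaded; /* zero-extends, as the patch now does */
        printf("with cast: 0x%016" PRIx64 "\n", data);
        /* prints 0x0000000080000000 */
        return 0;
    }
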