As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 2702c2d3eb74e3908c0c5dbf3a71c8987595a86e:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h            |  227 ++++-
 target/arm/internals.h      |   45 +-
 target/arm/kvm_arm.h        |   24 +
 target/arm/translate.h      |   21 +
 hw/arm/boot.c               |   18 +
 hw/intc/armv7m_nvic.c       |   12 +-
 hw/net/cadence_gem.c        |    9 +-
 hw/sd/ssi-sd.c              |    2 +
 linux-user/aarch64/signal.c |    4 +-
 linux-user/elfload.c        |   60 +-
 linux-user/syscall.c        |   10 +-
 target/arm/cpu.c            |  242 ++++----
 target/arm/cpu64.c          |  148 +++--
 target/arm/helper.c         |  397 ++++++++----
 target/arm/kvm.c            |   60 ++
 target/arm/kvm32.c          |   13 +
 target/arm/kvm64.c          |   15 +-
 target/arm/machine.c        |   28 +-
 target/arm/op_helper.c      |    2 +-
 target/arm/translate-a64.c  |  715 ++++-----------------
 target/arm/translate.c      | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)

From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends.  They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead.  This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface.  Since the SSI bus doesn't support hotplug, only -device
can be affected.  Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo ssi_sd_info = {
--
2.19.1

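For context (not part of the series): user_creatable is a generic
DeviceClass flag that is only consulted on the -device/device_add path,
so board code can still instantiate the device internally. A minimal
sketch of the pattern, with a hypothetical device name:

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        /* Refuse -device/device_add; board code can still create
         * this device directly through qdev.
         */
        dc->user_creatable = false;
    }
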
From: Dongjiu Geng <gengdongjiu@huawei.com>

This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError
exception, and adds migration support for that exception state.

The SError exception state consists of the SError pending state and the
ESR value; kvm_put/get_vcpu_events() are called when system registers
are set or read.  When migrating, if the source machine has an SError
pending, QEMU performs the migration regardless of whether the target
machine supports specifying a guest ESR value, because even if the
target does not support that, it can still inject an SError with a zero
ESR value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm_arm.h | 24 ++++++++++++++++++
 target/arm/kvm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/kvm32.c   | 13 ++++++++++
 target/arm/kvm64.c   | 13 ++++++++++
 target/arm/machine.c | 22 ++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
          */
     } exception;
 
+    /* Information associated with an SError */
+    struct {
+        uint8_t pending;
+        uint8_t has_esr;
+        uint64_t esr;
+    } serror;
+
     /* Thumb-2 EE state.  */
     uint32_t teecr;
     uint32_t teehbr;
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
  */
 void kvm_arm_reset_vcpu(ARMCPU *cpu);
 
+/**
+ * kvm_arm_init_serror_injection:
+ * @cs: CPUState
+ *
+ * Check whether KVM can set guest SError syndrome.
+ */
+void kvm_arm_init_serror_injection(CPUState *cs);
+
+/**
+ * kvm_get_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Get VCPU related state from kvm.
+ */
+int kvm_get_vcpu_events(ARMCPU *cpu);
+
+/**
+ * kvm_put_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Put VCPU related state to kvm.
+ */
+int kvm_put_vcpu_events(ARMCPU *cpu);
+
 #ifdef CONFIG_KVM
 /**
  * kvm_arm_create_scratch_host_vcpu:
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
 };
 
 static bool cap_has_mp_state;
+static bool cap_has_inject_serror_esr;
 
 static ARMHostCPUFeatures arm_host_cpu_features;
 
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
     return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
 }
 
+void kvm_arm_init_serror_injection(CPUState *cs)
+{
+    cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
+                                    KVM_CAP_ARM_INJECT_SERROR_ESR);
+}
+
 bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
                                       int *fdarray,
                                       struct kvm_vcpu_init *init)
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
     return 0;
 }
 
+int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (cap_has_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 {
 }
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
     cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
 
+    /* Check whether userspace can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     /* Note that we do not call write_cpustate_to_list()
      * here, so we are only writing the tuple list back to
      * KVM. This is safe because nothing can change the
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpscr(env, fpscr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
 
     kvm_arm_init_debug(cs);
 
+    /* Check whether user space can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
 };
 #endif /* AARCH64 */
 
+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
         &vmstate_sve,
 #endif
+        &vmstate_serror,
         NULL
     }
 };
--
2.19.1

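The machine.c part of this follows QEMU's usual migration-subsection
pattern: the subsection is only put on the wire when its .needed
callback returns true, which is why migration to a QEMU without this
state still works as long as no SError is pending. A generic sketch of
the pattern, with hypothetical MyDevice names:

    static bool mystate_needed(void *opaque)
    {
        MyDevice *s = opaque;

        /* Send the subsection only when there is state to migrate */
        return s->level != 0;
    }

    static const VMStateDescription vmstate_mystate = {
        .name = "mydev/mystate",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = mystate_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT8(level, MyDevice),
            VMSTATE_END_OF_LIST()
        }
    };
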
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
      * is used for reset values of non-constant registers; no reset_
      * prefix means a constant register.
+     * Some of these registers are split out into a substructure that
+     * is shared with the translators to control the ISA.
      */
+    struct ARMISARegisters {
+        uint32_t id_isar0;
+        uint32_t id_isar1;
+        uint32_t id_isar2;
+        uint32_t id_isar3;
+        uint32_t id_isar4;
+        uint32_t id_isar5;
+        uint32_t id_isar6;
+        uint32_t mvfr0;
+        uint32_t mvfr1;
+        uint32_t mvfr2;
+        uint64_t id_aa64isar0;
+        uint64_t id_aa64isar1;
+        uint64_t id_aa64pfr0;
+        uint64_t id_aa64pfr1;
+    } isar;
     uint32_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
-    uint32_t mvfr0;
-    uint32_t mvfr1;
-    uint32_t mvfr2;
     uint32_t ctr;
     uint32_t reset_sctlr;
     uint32_t id_pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_mmfr2;
     uint32_t id_mmfr3;
     uint32_t id_mmfr4;
-    uint32_t id_isar0;
-    uint32_t id_isar1;
-    uint32_t id_isar2;
-    uint32_t id_isar3;
-    uint32_t id_isar4;
-    uint32_t id_isar5;
-    uint32_t id_isar6;
-    uint64_t id_aa64pfr0;
-    uint64_t id_aa64pfr1;
     uint64_t id_aa64dfr0;
     uint64_t id_aa64dfr1;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
-    uint64_t id_aa64isar0;
-    uint64_t id_aa64isar1;
     uint64_t id_aa64mmfr0;
     uint64_t id_aa64mmfr1;
     uint32_t dbgdidr;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd5c: /* MMFR3. */
         return cpu->id_mmfr3;
     case 0xd60: /* ISAR0. */
-        return cpu->id_isar0;
+        return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1. */
-        return cpu->id_isar1;
+        return cpu->isar.id_isar1;
     case 0xd68: /* ISAR2. */
-        return cpu->id_isar2;
+        return cpu->isar.id_isar2;
     case 0xd6c: /* ISAR3. */
-        return cpu->id_isar3;
+        return cpu->isar.id_isar3;
     case 0xd70: /* ISAR4. */
-        return cpu->id_isar4;
+        return cpu->isar.id_isar4;
     case 0xd74: /* ISAR5. */
-        return cpu->id_isar5;
+        return cpu->isar.id_isar5;
     case 0xd78: /* CLIDR */
         return cpu->clidr;
     case 0xd7c: /* CTR */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
     g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
 
     env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
-    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
-    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
-    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
+    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
+    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
+    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
     s->halted = cpu->start_powered_off;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
          */
         cpu->id_pfr1 &= ~0xf0;
-        cpu->id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 &= ~0xf000;
     }
 
     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers if we don't have EL2. These are id_pfr1[15:12] and
          * id_aa64pfr0_el1[11:8].
          */
-        cpu->id_aa64pfr0 &= ~0xf00;
+        cpu->isar.id_aa64pfr0 &= ~0xf00;
         cpu->id_pfr1 &= ~0xf000;
     }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4107b362;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4117b363;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fb767;
     cpu->reset_fpsid = 0x410120b5;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222100;
-    cpu->id_isar0 = 0x0140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231121;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x01141;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     cpu->midr = 0x410fb022;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
     cpu->id_pfr0 = 0x111;
     cpu->id_pfr1 = 0x1;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01100103;
     cpu->id_mmfr1 = 0x10020302;
     cpu->id_mmfr2 = 0x01222000;
-    cpu->id_isar0 = 0x00100011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11221011;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 1;
 }
 
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01101110;
-    cpu->id_isar1 = 0x02212000;
-    cpu->id_isar2 = 0x20232232;
-    cpu->id_isar3 = 0x01111131;
-    cpu->id_isar4 = 0x01310132;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
     cpu->clidr = 0x00000000;
     cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01200000;
     cpu->id_mmfr3 = 0x0211;
-    cpu->id_isar0 = 0x02101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232141;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x0010142;
-    cpu->id_isar5 = 0x0;
-    cpu->id_isar6 = 0x0;
+    cpu->isar.id_isar0 = 0x02101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232141;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x0010142;
+    cpu->isar.id_isar5 = 0x0;
+    cpu->isar.id_isar6 = 0x0;
     cpu->mp_is_up = true;
     cpu->pmsav7_dregion = 16;
     define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
     cpu->reset_fpsid = 0x410330c0;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x00011111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x00011111;
     cpu->ctr = 0x82048004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01202000;
     cpu->id_mmfr3 = 0x11;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x12112111;
-    cpu->id_isar2 = 0x21232031;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x12112111;
+    cpu->isar.id_isar2 = 0x21232031;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x15141000;
     cpu->clidr = (1 << 27) | (2 << 24) | 3;
     cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CBAR);
     cpu->midr = 0x410fc090;
     cpu->reset_fpsid = 0x41033090;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x01111111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x01111111;
     cpu->ctr = 0x80038003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01230000;
     cpu->id_mmfr3 = 0x00002111;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x35141000;
     cpu->clidr = (1 << 27) | (1 << 24) | 3;
     cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
     cpu->midr = 0x410fc075;
     cpu->reset_fpsid = 0x41023075;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x84448003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f005;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
     cpu->midr = 0x412fc0f1;
     cpu->reset_fpsid = 0x410430f0;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01240000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f021;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->midr = 0x411fd070;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->midr = 0x410fd034;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x84448004; /* L1Ip = VIPT */
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->midr = 0x410fd083;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034080;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
 static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
-    uint64_t pfr0 = cpu->id_aa64pfr0;
+    uint64_t pfr0 = cpu->isar.id_aa64pfr0;
 
     if (env->gicv3state) {
         pfr0 |= 1 << 24;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar0 },
+              .resetvalue = cpu->isar.id_isar0 },
             { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar1 },
+              .resetvalue = cpu->isar.id_isar1 },
             { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar2 },
+              .resetvalue = cpu->isar.id_isar2 },
             { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar3 },
+              .resetvalue = cpu->isar.id_isar3 },
             { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar4 },
+              .resetvalue = cpu->isar.id_isar4 },
             { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar5 },
+              .resetvalue = cpu->isar.id_isar5 },
             { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar6 },
+              .resetvalue = cpu->isar.id_isar6 },
             REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v6_idregs);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64pfr1},
+              .resetvalue = cpu->isar.id_aa64pfr1},
             { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
              .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar0 },
+              .resetvalue = cpu->isar.id_aa64isar0 },
             { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar1 },
+              .resetvalue = cpu->isar.id_aa64isar1 },
             { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr0 },
+              .resetvalue = cpu->isar.mvfr0 },
             { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr1 },
+              .resetvalue = cpu->isar.mvfr1 },
             { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr2 },
+              .resetvalue = cpu->isar.mvfr2 },
             { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
--
2.19.1

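The payoff of the substructure comes in the later patches, which test
features by decoding fields of these ID registers rather than
consulting a parallel feature bitmap. The decoding uses the
FIELD/FIELD_EX32 helpers from hw/registerfields.h; roughly, as a
simplified sketch (cpu_has_aes is a hypothetical name, though
ID_ISAR5.AES really is bits [7:4]):

    #include "hw/registerfields.h"

    /* Declares shift/mask constants for the 4-bit field at bits [7:4] */
    FIELD(ID_ISAR5, AES, 4, 4)

    static inline bool cpu_has_aes(const ARMISARegisters *id)
    {
        /* A nonzero field value means some AES support is implemented */
        return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
    }
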
From: Richard Henderson <richard.henderson@linaro.org>

Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division.  It is
also wrong to include ARM_FEATURE_LPAE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
     /* Some features automatically imply others: */
     if (arm_feature(env, ARM_FEATURE_V8)) {
-        set_feature(env, ARM_FEATURE_V7VE);
+        if (arm_feature(env, ARM_FEATURE_M)) {
+            set_feature(env, ARM_FEATURE_V7);
+        } else {
+            set_feature(env, ARM_FEATURE_V7VE);
+        }
     }
     if (arm_feature(env, ARM_FEATURE_V7VE)) {
         /* v7 Virtualization Extensions. In real hardware this implies
--
2.19.1

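For reference, the feature constants used here index a plain per-CPU
bit mask, so one feature "implying" another just means setting an
extra bit at realize time. The helpers look approximately like this
(simplified sketch of target/arm's definitions, not part of this
patch):

    static inline void set_feature(CPUARMState *env, int feature)
    {
        env->features |= 1ULL << feature;
    }

    static inline int arm_feature(CPUARMState *env, int feature)
    {
        return (env->features & (1ULL << feature)) != 0;
    }
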
From: Richard Henderson <richard.henderson@linaro.org>

Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 131 +++++++++++++++++++++++++++++++++----
 target/arm/translate.h     |   7 ++
 linux-user/elfload.c       |  46 ++++++++-----
 target/arm/cpu.c           |  27 +++++---
 target/arm/cpu64.c         |  57 +++++++++-------
 target/arm/translate-a64.c | 101 ++++++++++++++--------------
 target/arm/translate.c     |  36 +++++-----
 7 files changed, 273 insertions(+), 132 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
     PSCI_ON_PENDING = 2
 } ARMPSCIState;
 
+typedef struct ARMISARegisters ARMISARegisters;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
     ARM_FEATURE_V8,
     ARM_FEATURE_AARCH64, /* supports 64 bit mode */
-    ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
     ARM_FEATURE_CBAR, /* has cp15 CBAR */
     ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
     ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
     ARM_FEATURE_EL2, /* has EL2 Virtualization support */
     ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
-    ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
     ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
     ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
-    ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
-    ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
-    ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
-    ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c.  */
 extern const uint64_t pred_esz_masks[4];
 
+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+/*
+ * 64-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
+}
+
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
+}
+
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
147
+{
148
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
149
+}
150
+
151
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
152
+{
153
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
154
+}
155
+
156
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
157
+{
158
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
159
+}
160
+
161
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
162
+{
163
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
164
+}
165
+
166
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
167
+{
168
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
169
+}
170
+
171
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
172
+{
173
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
174
+}
175
+
176
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
177
+{
178
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
179
+}
180
+
181
+/*
182
+ * Forward to the above feature tests given an ARMCPU pointer.
183
+ */
184
+#define cpu_isar_feature(name, cpu) \
185
+ ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
186
+
187
#endif
188
diff --git a/target/arm/translate.h b/target/arm/translate.h
189
index XXXXXXX..XXXXXXX 100644
190
--- a/target/arm/translate.h
191
+++ b/target/arm/translate.h
192
@@ -XXX,XX +XXX,XX @@
193
/* internal defines */
194
typedef struct DisasContext {
195
DisasContextBase base;
196
+ const ARMISARegisters *isar;
197
198
target_ulong pc;
199
target_ulong page_start;
200
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
201
return ret;
18
}
202
}
19
203
20
+/*
204
+/*
21
+ *** SVE Integer Wide Immediate - Unpredicated Group
205
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
22
+ */
206
+ */
23
+
207
+#define dc_isar_feature(name, ctx) \
24
+static bool trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
208
+ ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
25
+{
209
+
26
+ if (a->esz == 0) {
210
#endif /* TARGET_ARM_TRANSLATE_H */
27
+ return false;
211
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
28
+ }
29
+ if (sve_access_check(s)) {
30
+ unsigned vsz = vec_full_reg_size(s);
31
+ int dofs = vec_full_reg_offset(s, a->rd);
32
+ uint64_t imm;
33
+
34
+ /* Decode the VFP immediate. */
35
+ imm = vfp_expand_imm(a->esz, a->imm);
36
+ imm = dup_const(a->esz, imm);
37
+
38
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
39
+ }
40
+ return true;
41
+}
42
+
43
+static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
44
+{
45
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
46
+ return false;
47
+ }
48
+ if (sve_access_check(s)) {
49
+ unsigned vsz = vec_full_reg_size(s);
50
+ int dofs = vec_full_reg_offset(s, a->rd);
51
+
52
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
53
+ }
54
+ return true;
55
+}
56
+
57
/*
58
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
59
*/
60
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
61
index XXXXXXX..XXXXXXX 100644
212
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/sve.decode
213
--- a/linux-user/elfload.c
63
+++ b/target/arm/sve.decode
214
+++ b/linux-user/elfload.c
64
@@ -XXX,XX +XXX,XX @@ CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
215
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
65
# SVE integer compare scalar count and limit
216
/* probe for the extra features */
66
WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
217
#define GET_FEATURE(feat, hwcap) \
67
218
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
68
+### SVE Integer Wide Immediate - Unpredicated Group
219
+
69
+
220
+#define GET_FEATURE_ID(feat, hwcap) \
70
+# SVE broadcast floating-point immediate (unpredicated)
221
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
71
+FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
222
+
72
+
223
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
73
+# SVE broadcast integer immediate (unpredicated)
224
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
74
+DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s
225
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
75
+
226
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
76
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
227
ARMCPU *cpu = ARM_CPU(thread_cpu);
77
228
uint32_t hwcaps = 0;
78
# SVE load predicate register
229
230
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
231
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
232
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
233
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
234
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
235
+ GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
236
+ GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
237
+ GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
238
+ GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
239
+ GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
240
return hwcaps;
241
}
242
243
#undef GET_FEATURE
244
+#undef GET_FEATURE_ID
245
246
#else
247
/* 64 bit ARM definitions */
248
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
249
/* probe for the extra features */
250
#define GET_FEATURE(feat, hwcap) \
251
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
252
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
253
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
254
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
255
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
256
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
257
- GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
258
- GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
259
- GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
260
- GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
261
+#define GET_FEATURE_ID(feat, hwcap) \
262
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
263
+
264
+ GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
265
+ GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
266
+ GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
267
+ GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
268
+ GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
269
+ GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
270
+ GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
271
+ GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
272
+ GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
273
GET_FEATURE(ARM_FEATURE_V8_FP16,
274
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
275
- GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
276
- GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
277
- GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
278
- GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
279
+ GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
280
+ GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
281
+ GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
282
+ GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
283
GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
284
+
285
#undef GET_FEATURE
286
+#undef GET_FEATURE_ID
287
288
return hwcaps;
289
}
290
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/cpu.c
293
+++ b/target/arm/cpu.c
294
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
295
cortex_a15_initfn(obj);
296
#ifdef CONFIG_USER_ONLY
297
/* We don't set these in system emulation mode for the moment,
298
- * since we don't correctly set the ID registers to advertise them,
299
+ * since we don't correctly set (all of) the ID registers to
300
+ * advertise them.
301
*/
302
set_feature(&cpu->env, ARM_FEATURE_V8);
303
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
304
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
305
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
306
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
307
- set_feature(&cpu->env, ARM_FEATURE_CRC);
308
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
309
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
310
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
311
+ {
312
+ uint32_t t;
313
+
314
+ t = cpu->isar.id_isar5;
315
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
316
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
317
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
318
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
319
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
320
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
321
+ cpu->isar.id_isar5 = t;
322
+
323
+ t = cpu->isar.id_isar6;
324
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
325
+ cpu->isar.id_isar6 = t;
326
+ }
327
#endif
328
}
329
}
330
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
331
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/cpu64.c
333
+++ b/target/arm/cpu64.c
334
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
336
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
337
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
338
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
339
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
340
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
341
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
342
- set_feature(&cpu->env, ARM_FEATURE_CRC);
343
set_feature(&cpu->env, ARM_FEATURE_EL2);
344
set_feature(&cpu->env, ARM_FEATURE_EL3);
345
set_feature(&cpu->env, ARM_FEATURE_PMU);
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
347
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
348
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
349
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
350
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
351
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
352
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
353
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
354
- set_feature(&cpu->env, ARM_FEATURE_CRC);
355
set_feature(&cpu->env, ARM_FEATURE_EL2);
356
set_feature(&cpu->env, ARM_FEATURE_EL3);
357
set_feature(&cpu->env, ARM_FEATURE_PMU);
358
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
359
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
360
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
361
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
362
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
363
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
364
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
365
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
366
- set_feature(&cpu->env, ARM_FEATURE_CRC);
367
set_feature(&cpu->env, ARM_FEATURE_EL2);
368
set_feature(&cpu->env, ARM_FEATURE_EL3);
369
set_feature(&cpu->env, ARM_FEATURE_PMU);
370
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
371
if (kvm_enabled()) {
372
kvm_arm_set_cpu_features_from_host(cpu);
373
} else {
374
+ uint64_t t;
375
+ uint32_t u;
376
aarch64_a57_initfn(obj);
377
+
378
+ t = cpu->isar.id_aa64isar0;
379
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
380
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
381
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
382
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
383
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
384
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
385
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
386
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
387
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
388
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
389
+ cpu->isar.id_aa64isar0 = t;
390
+
391
+ t = cpu->isar.id_aa64isar1;
392
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
393
+ cpu->isar.id_aa64isar1 = t;
394
+
395
+ /* Replicate the same data to the 32-bit id registers. */
396
+ u = cpu->isar.id_isar5;
397
+ u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
398
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
399
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
400
+ u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
401
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
402
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
403
+ cpu->isar.id_isar5 = u;
404
+
405
+ u = cpu->isar.id_isar6;
406
+ u = FIELD_DP32(u, ID_ISAR6, DP, 1);
407
+ cpu->isar.id_isar6 = u;
408
+
409
#ifdef CONFIG_USER_ONLY
410
/* We don't set these in system emulation mode for the moment,
411
* since we don't correctly set the ID registers to advertise them,
412
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
413
* whereas the architecture requires them to be present in both if
414
* present in either.
415
*/
416
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
417
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
418
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
419
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
420
- set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
421
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
422
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
423
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
424
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
425
set_feature(&cpu->env, ARM_FEATURE_SVE);
426
/* For usermode -cpu max we can use a larger and more efficient DCZ
427
* blocksize since we don't have to follow what the hardware does.
428
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
429
index XXXXXXX..XXXXXXX 100644
430
--- a/target/arm/translate-a64.c
431
+++ b/target/arm/translate-a64.c
432
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
433
}
434
if (rt2 == 31
435
&& ((rt | rs) & 1) == 0
436
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
437
+ && dc_isar_feature(aa64_atomics, s)) {
438
/* CASP / CASPL */
439
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
440
return;
441
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
442
}
443
if (rt2 == 31
444
&& ((rt | rs) & 1) == 0
445
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
446
+ && dc_isar_feature(aa64_atomics, s)) {
447
/* CASPA / CASPAL */
448
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
449
return;
450
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
451
case 0xb: /* CASL */
452
case 0xe: /* CASA */
453
case 0xf: /* CASAL */
454
- if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
455
+ if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
456
gen_compare_and_swap(s, rs, rt, rn, size);
457
return;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
460
int rs = extract32(insn, 16, 5);
461
int rn = extract32(insn, 5, 5);
462
int o3_opc = extract32(insn, 12, 4);
463
- int feature = ARM_FEATURE_V8_ATOMICS;
464
TCGv_i64 tcg_rn, tcg_rs;
465
AtomicThreeOpFn *fn;
466
467
- if (is_vector) {
468
+ if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
469
unallocated_encoding(s);
470
return;
471
}
472
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
473
unallocated_encoding(s);
474
return;
475
}
476
- if (!arm_dc_feature(s, feature)) {
477
- unallocated_encoding(s);
478
- return;
479
- }
480
481
if (rn == 31) {
482
gen_check_sp_alignment(s);
483
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
484
TCGv_i64 tcg_acc, tcg_val;
485
TCGv_i32 tcg_bytes;
486
487
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)
488
+ if (!dc_isar_feature(aa64_crc32, s)
489
|| (sf == 1 && sz != 3)
490
|| (sf == 0 && sz == 3)) {
491
unallocated_encoding(s);
492
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
493
bool u = extract32(insn, 29, 1);
494
TCGv_i32 ele1, ele2, ele3;
495
TCGv_i64 res;
496
- int feature;
497
+ bool feature;
498
499
switch (u * 16 + opcode) {
500
case 0x10: /* SQRDMLAH (vector) */
501
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
502
unallocated_encoding(s);
503
return;
504
}
505
- feature = ARM_FEATURE_V8_RDM;
506
+ feature = dc_isar_feature(aa64_rdm, s);
507
break;
508
default:
509
unallocated_encoding(s);
510
return;
511
}
512
- if (!arm_dc_feature(s, feature)) {
513
+ if (!feature) {
514
unallocated_encoding(s);
515
return;
516
}
517
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
518
return;
519
}
520
if (size == 3) {
521
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
522
+ if (!dc_isar_feature(aa64_pmull, s)) {
523
unallocated_encoding(s);
524
return;
525
}
526
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
527
int size = extract32(insn, 22, 2);
528
bool u = extract32(insn, 29, 1);
529
bool is_q = extract32(insn, 30, 1);
530
- int feature, rot;
531
+ bool feature;
532
+ int rot;
533
534
switch (u * 16 + opcode) {
535
case 0x10: /* SQRDMLAH (vector) */
536
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
537
unallocated_encoding(s);
538
return;
539
}
540
- feature = ARM_FEATURE_V8_RDM;
541
+ feature = dc_isar_feature(aa64_rdm, s);
542
break;
543
case 0x02: /* SDOT (vector) */
544
case 0x12: /* UDOT (vector) */
545
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
546
unallocated_encoding(s);
547
return;
548
}
549
- feature = ARM_FEATURE_V8_DOTPROD;
550
+ feature = dc_isar_feature(aa64_dp, s);
551
break;
552
case 0x18: /* FCMLA, #0 */
553
case 0x19: /* FCMLA, #90 */
554
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
555
unallocated_encoding(s);
556
return;
557
}
558
- feature = ARM_FEATURE_V8_FCMA;
559
+ feature = dc_isar_feature(aa64_fcma, s);
560
break;
561
default:
562
unallocated_encoding(s);
563
return;
564
}
565
- if (!arm_dc_feature(s, feature)) {
566
+ if (!feature) {
567
unallocated_encoding(s);
568
return;
569
}
570
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
571
break;
572
case 0x1d: /* SQRDMLAH */
573
case 0x1f: /* SQRDMLSH */
574
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
575
+ if (!dc_isar_feature(aa64_rdm, s)) {
576
unallocated_encoding(s);
577
return;
578
}
579
break;
580
case 0x0e: /* SDOT */
581
case 0x1e: /* UDOT */
582
- if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
583
+ if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
584
unallocated_encoding(s);
585
return;
586
}
587
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
588
case 0x13: /* FCMLA #90 */
589
case 0x15: /* FCMLA #180 */
590
case 0x17: /* FCMLA #270 */
591
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
592
+ if (!dc_isar_feature(aa64_fcma, s)) {
593
unallocated_encoding(s);
594
return;
595
}
596
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
597
TCGv_i32 tcg_decrypt;
598
CryptoThreeOpIntFn *genfn;
599
600
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
601
- || size != 0) {
602
+ if (!dc_isar_feature(aa64_aes, s) || size != 0) {
603
unallocated_encoding(s);
604
return;
605
}
606
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
607
int rd = extract32(insn, 0, 5);
608
CryptoThreeOpFn *genfn;
609
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
610
- int feature = ARM_FEATURE_V8_SHA256;
611
+ bool feature;
612
613
if (size != 0) {
614
unallocated_encoding(s);
615
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
616
case 2: /* SHA1M */
617
case 3: /* SHA1SU0 */
618
genfn = NULL;
619
- feature = ARM_FEATURE_V8_SHA1;
620
+ feature = dc_isar_feature(aa64_sha1, s);
621
break;
622
case 4: /* SHA256H */
623
genfn = gen_helper_crypto_sha256h;
624
+ feature = dc_isar_feature(aa64_sha256, s);
625
break;
626
case 5: /* SHA256H2 */
627
genfn = gen_helper_crypto_sha256h2;
628
+ feature = dc_isar_feature(aa64_sha256, s);
629
break;
630
case 6: /* SHA256SU1 */
631
genfn = gen_helper_crypto_sha256su1;
632
+ feature = dc_isar_feature(aa64_sha256, s);
633
break;
634
default:
635
unallocated_encoding(s);
636
return;
637
}
638
639
- if (!arm_dc_feature(s, feature)) {
640
+ if (!feature) {
641
unallocated_encoding(s);
642
return;
643
}
644
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
645
int rn = extract32(insn, 5, 5);
646
int rd = extract32(insn, 0, 5);
647
CryptoTwoOpFn *genfn;
648
- int feature;
649
+ bool feature;
650
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
651
652
if (size != 0) {
653
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
654
655
switch (opcode) {
656
case 0: /* SHA1H */
657
- feature = ARM_FEATURE_V8_SHA1;
658
+ feature = dc_isar_feature(aa64_sha1, s);
659
genfn = gen_helper_crypto_sha1h;
660
break;
661
case 1: /* SHA1SU1 */
662
- feature = ARM_FEATURE_V8_SHA1;
663
+ feature = dc_isar_feature(aa64_sha1, s);
664
genfn = gen_helper_crypto_sha1su1;
665
break;
666
case 2: /* SHA256SU0 */
667
- feature = ARM_FEATURE_V8_SHA256;
668
+ feature = dc_isar_feature(aa64_sha256, s);
669
genfn = gen_helper_crypto_sha256su0;
670
break;
671
default:
672
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
673
return;
674
}
675
676
- if (!arm_dc_feature(s, feature)) {
677
+ if (!feature) {
678
unallocated_encoding(s);
679
return;
680
}
681
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
682
int rm = extract32(insn, 16, 5);
683
int rn = extract32(insn, 5, 5);
684
int rd = extract32(insn, 0, 5);
685
- int feature;
686
+ bool feature;
687
CryptoThreeOpFn *genfn;
688
689
if (o == 0) {
690
switch (opcode) {
691
case 0: /* SHA512H */
692
- feature = ARM_FEATURE_V8_SHA512;
693
+ feature = dc_isar_feature(aa64_sha512, s);
694
genfn = gen_helper_crypto_sha512h;
695
break;
696
case 1: /* SHA512H2 */
697
- feature = ARM_FEATURE_V8_SHA512;
698
+ feature = dc_isar_feature(aa64_sha512, s);
699
genfn = gen_helper_crypto_sha512h2;
700
break;
701
case 2: /* SHA512SU1 */
702
- feature = ARM_FEATURE_V8_SHA512;
703
+ feature = dc_isar_feature(aa64_sha512, s);
704
genfn = gen_helper_crypto_sha512su1;
705
break;
706
case 3: /* RAX1 */
707
- feature = ARM_FEATURE_V8_SHA3;
708
+ feature = dc_isar_feature(aa64_sha3, s);
709
genfn = NULL;
710
break;
711
}
712
} else {
713
switch (opcode) {
714
case 0: /* SM3PARTW1 */
715
- feature = ARM_FEATURE_V8_SM3;
716
+ feature = dc_isar_feature(aa64_sm3, s);
717
genfn = gen_helper_crypto_sm3partw1;
718
break;
719
case 1: /* SM3PARTW2 */
720
- feature = ARM_FEATURE_V8_SM3;
721
+ feature = dc_isar_feature(aa64_sm3, s);
722
genfn = gen_helper_crypto_sm3partw2;
723
break;
724
case 2: /* SM4EKEY */
725
- feature = ARM_FEATURE_V8_SM4;
726
+ feature = dc_isar_feature(aa64_sm4, s);
727
genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
822
index XXXXXXX..XXXXXXX 100644
823
--- a/target/arm/translate.c
824
+++ b/target/arm/translate.c
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
827
int q, int rd, int rn, int rm)
828
{
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
830
+ if (dc_isar_feature(aa32_rdm, s)) {
831
int opr_sz = (1 + q) * 8;
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
833
vfp_reg_offset(1, rn),
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
835
return 1;
836
}
837
if (!u) { /* SHA-1 */
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
840
return 1;
841
}
842
ptr1 = vfp_reg_ptr(true, rd);
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
845
tcg_temp_free_i32(tmp4);
846
} else { /* SHA-256 */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
849
return 1;
850
}
851
ptr1 = vfp_reg_ptr(true, rd);
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
79
--
975
--
80
2.17.1
976
2.19.1
From: Richard Henderson <richard.henderson@linaro.org>

Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case. Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.
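
A standalone sketch of why one field suffices (not part of the patch;
the register values are made up): ID_ISAR0.Divide lives in bits
[27:24] and encodes 0 for no divide, 1 for SDIV/UDIV in the Thumb
instruction set only, and 2 for both instruction sets, so testing
"!= 0" versus "> 1" gives the arm-implies-thumb relationship for free:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool thumb_div(uint32_t id_isar0)
    {
        return ((id_isar0 >> 24) & 0xf) != 0;  /* like isar_feature_thumb_div */
    }

    static bool arm_div(uint32_t id_isar0)
    {
        return ((id_isar0 >> 24) & 0xf) > 1;   /* like isar_feature_arm_div */
    }

    int main(void)
    {
        uint32_t r_profile_like = 2u << 24;  /* hypothetical: ARM + Thumb divide */
        uint32_t m_profile_like = 1u << 24;  /* hypothetical: Thumb-only divide */

        assert(arm_div(r_profile_like) && thumb_div(r_profile_like));
        assert(!arm_div(m_profile_like) && thumb_div(m_profile_like));
        return 0;
    }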

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 12 ++++++++++--
 linux-user/elfload.c   |  4 ++--
 target/arm/cpu.c       | 10 +---------
 target/arm/translate.c |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_VFP3,
     ARM_FEATURE_VFP_FP16,
     ARM_FEATURE_NEON,
-    ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
     ARM_FEATURE_M, /* Microcontroller profile. */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
     ARM_FEATURE_THUMB2EE,
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_V5,
     ARM_FEATURE_STRONGARM,
     ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
-    ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
     ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
     ARM_FEATURE_GENERIC_TIMER,
     ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
 /*
  * 32-bit feature tests via id registers.
  */
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
+}
+
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
     GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
-    GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
-    GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
+    GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
+    GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
     /* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
      * Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
      * ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
          * Security Extensions is ARM_FEATURE_EL3.
          */
-        set_feature(env, ARM_FEATURE_ARM_DIV);
+        assert(cpu_isar_feature(arm_div, cpu));
         set_feature(env, ARM_FEATURE_LPAE);
         set_feature(env, ARM_FEATURE_V7);
     }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     if (arm_feature(env, ARM_FEATURE_V5)) {
         set_feature(env, ARM_FEATURE_V4T);
     }
-    if (arm_feature(env, ARM_FEATURE_M)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
-    if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
     if (arm_feature(env, ARM_FEATURE_VFP4)) {
         set_feature(env, ARM_FEATURE_VFP3);
         set_feature(env, ARM_FEATURE_VFP_FP16);
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     ARMCPU *cpu = ARM_CPU(obj);
 
     set_feature(&cpu->env, ARM_FEATURE_V7);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
-    set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
     set_feature(&cpu->env, ARM_FEATURE_V7MP);
     set_feature(&cpu->env, ARM_FEATURE_PMSA);
     cpu->midr = 0x411fc153; /* r1p3 */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
         case 1:
         case 3:
             /* SDIV, UDIV */
-            if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
+            if (!dc_isar_feature(arm_div, s)) {
                 goto illegal_op;
             }
             if (((insn >> 5) & 7) || (rd != 15)) {
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
             tmp2 = load_reg(s, rm);
             if ((op & 0x50) == 0x10) {
                 /* sdiv, udiv */
-                if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
+                if (!dc_isar_feature(thumb_div, s)) {
                     goto illegal_op;
                 }
                 if (op & 0x20)
--
2.19.1
From: Richard Henderson <richard.henderson@linaro.org>

Having V6 alone imply jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
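
For reference, a minimal sketch of the replacement test (not part of
the patch; the register value is hypothetical): ID_ISAR1.Jazelle
occupies bits [31:28], and any nonzero value advertises at least the
trivial implementation, which is all the 'bxj' decode needs:

    #include <stdbool.h>
    #include <stdint.h>

    /* Mirrors isar_feature_jazelle() from the hunk below. */
    static bool jazelle(uint32_t id_isar1)
    {
        return ((id_isar1 >> 28) & 0xf) != 0;
    }

    int main(void)
    {
        uint32_t arm926_like = 1u << 28;  /* hypothetical: trivial Jazelle */
        return jazelle(arm926_like) ? 0 : 1;
    }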

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  6 +++++-
 target/arm/cpu.c       | 17 ++++++++++++++---
 target/arm/translate.c |  2 +-
 3 files changed, 20 insertions(+), 5 deletions(-)

14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
22
--- a/target/arm/cpu.h
17
+++ b/target/arm/helper-sve.h
23
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
19
25
ARM_FEATURE_PMU, /* has PMU support */
20
DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
ARM_FEATURE_VBAR, /* has cp15 VBAR */
21
27
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
22
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
28
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
23
+ i32, ptr, ptr, ptr, ptr, i32)
29
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
24
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_b, TCG_CALL_NO_RWG,
30
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
25
+ i32, ptr, ptr, ptr, ptr, i32)
31
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
26
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_b, TCG_CALL_NO_RWG,
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
27
+ i32, ptr, ptr, ptr, ptr, i32)
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
28
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_b, TCG_CALL_NO_RWG,
29
+ i32, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_b, TCG_CALL_NO_RWG,
31
+ i32, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_b, TCG_CALL_NO_RWG,
33
+ i32, ptr, ptr, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_h, TCG_CALL_NO_RWG,
36
+ i32, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_h, TCG_CALL_NO_RWG,
38
+ i32, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_h, TCG_CALL_NO_RWG,
40
+ i32, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_h, TCG_CALL_NO_RWG,
42
+ i32, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_h, TCG_CALL_NO_RWG,
44
+ i32, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_h, TCG_CALL_NO_RWG,
46
+ i32, ptr, ptr, ptr, ptr, i32)
47
+
48
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_s, TCG_CALL_NO_RWG,
49
+ i32, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_s, TCG_CALL_NO_RWG,
51
+ i32, ptr, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_s, TCG_CALL_NO_RWG,
53
+ i32, ptr, ptr, ptr, ptr, i32)
54
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_s, TCG_CALL_NO_RWG,
55
+ i32, ptr, ptr, ptr, ptr, i32)
56
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_s, TCG_CALL_NO_RWG,
57
+ i32, ptr, ptr, ptr, ptr, i32)
58
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_s, TCG_CALL_NO_RWG,
59
+ i32, ptr, ptr, ptr, ptr, i32)
60
+
61
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_d, TCG_CALL_NO_RWG,
62
+ i32, ptr, ptr, ptr, ptr, i32)
63
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_d, TCG_CALL_NO_RWG,
64
+ i32, ptr, ptr, ptr, ptr, i32)
65
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_d, TCG_CALL_NO_RWG,
66
+ i32, ptr, ptr, ptr, ptr, i32)
67
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_d, TCG_CALL_NO_RWG,
68
+ i32, ptr, ptr, ptr, ptr, i32)
69
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_d, TCG_CALL_NO_RWG,
70
+ i32, ptr, ptr, ptr, ptr, i32)
71
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_d, TCG_CALL_NO_RWG,
72
+ i32, ptr, ptr, ptr, ptr, i32)
73
+
74
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_b, TCG_CALL_NO_RWG,
75
+ i32, ptr, ptr, ptr, ptr, i32)
76
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_b, TCG_CALL_NO_RWG,
77
+ i32, ptr, ptr, ptr, ptr, i32)
78
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_b, TCG_CALL_NO_RWG,
79
+ i32, ptr, ptr, ptr, ptr, i32)
80
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_b, TCG_CALL_NO_RWG,
81
+ i32, ptr, ptr, ptr, ptr, i32)
82
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t iter_predtest_fwd(uint64_t d, uint64_t g, uint32_t flags)
     return flags;
 }
 
+/* This is an iterative function, called for each Pd and Pg word
+ * moving backward.
+ */
+static uint32_t iter_predtest_bwd(uint64_t d, uint64_t g, uint32_t flags)
+{
+    if (likely(g)) {
+        /* Compute C from first (i.e. last) !(D & G).
+           Use bit 2 to signal first G bit seen.  */
+        if (!(flags & 4)) {
+            flags += 4 - 1; /* add bit 2, subtract C from PREDTEST_INIT */
+            flags |= (d & pow2floor(g)) == 0;
+        }
+
+        /* Accumulate Z from each D & G.  */
+        flags |= ((d & g) != 0) << 1;
+
+        /* Compute N from last (i.e. first) D & G.  Replace previous.  */
+        flags = deposit32(flags, 31, 1, (d & (g & -g)) != 0);
+    }
+    return flags;
+}
+
 /* The same for a single word predicate.  */
 uint32_t HELPER(sve_predtest1)(uint64_t d, uint64_t g)
 {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
         d[i] = (pg[H1(i)] & 1 ? nn : mm);
     }
 }
+
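[Editorial aside, not part of the patch: the flag encoding that iter_predtest_bwd() builds up is easier to see on a concrete value. The sketch below recomputes the same N/Z/C answer for one predicate word in forward-reading form; predtest_word() is a hypothetical name, and the GCC/clang builtin __builtin_clzll is assumed. N is set if the first active element is true, Z if no active element is true, and C if the last active element is false, using the same bit layout as the helper (N in bit 31, Z in bit 1, C in bit 0).]

#include <stdint.h>
#include <stdio.h>

static uint32_t predtest_word(uint64_t d, uint64_t g)
{
    uint64_t first = g & -g;                              /* lowest governing bit */
    uint64_t last = g ? 1ull << (63 - __builtin_clzll(g)) : 0;
    uint32_t n = (d & first) != 0;
    uint32_t z = (d & g) == 0;
    uint32_t c = (d & last) == 0;
    return (n << 31) | (z << 1) | c;
}

int main(void)
{
    /* g governs byte elements 0..3; d marks elements 0 and 3 as true,
     * so N is set (first element true) and C is clear (last element true).
     */
    printf("%08x\n", predtest_word(0x0000000001000001ull,
                                   0x0000000001010101ull));
    return 0;
}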
+/* Two operand comparison controlled by a predicate.
+ * ??? It is very tempting to want to be able to expand this inline
+ * with x86 instructions, e.g.
+ *
+ *    vcmpeqw    zm, zn, %ymm0
+ *    vpmovmskb  %ymm0, %eax
+ *    and        $0x5555, %eax
+ *    and        pg, %eax
+ *
+ * or even aarch64, e.g.
+ *
+ *    // mask = 4000 1000 0400 0100 0040 0010 0004 0001
+ *    cmeq       v0.8h, zn, zm
+ *    and        v0.8h, v0.8h, mask
+ *    addv       h0, v0.8h
+ *    and        v0.8b, pg
+ *
+ * However, coming up with an abstraction that allows vector inputs and
+ * a scalar output, and also handles the byte-ordering of sub-uint64_t
+ * scalar outputs, is tricky.
+ */
+#define DO_CMP_PPZZ(NAME, TYPE, OP, H, MASK)                                 \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{                                                                            \
+    intptr_t opr_sz = simd_oprsz(desc);                                      \
+    uint32_t flags = PREDTEST_INIT;                                          \
+    intptr_t i = opr_sz;                                                     \
+    do {                                                                     \
+        uint64_t out = 0, pg;                                                \
+        do {                                                                 \
+            i -= sizeof(TYPE), out <<= sizeof(TYPE);                         \
+            TYPE nn = *(TYPE *)(vn + H(i));                                  \
+            TYPE mm = *(TYPE *)(vm + H(i));                                  \
+            out |= nn OP mm;                                                 \
+        } while (i & 63);                                                    \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                            \
+        out &= pg;                                                           \
+        *(uint64_t *)(vd + (i >> 3)) = out;                                  \
+        flags = iter_predtest_bwd(out, pg, flags);                           \
+    } while (i > 0);                                                         \
+    return flags;                                                            \
+}
+
+#define DO_CMP_PPZZ_B(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZZ_H(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZZ_S(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
+#define DO_CMP_PPZZ_D(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP,     , 0x0101010101010101ull)
+
+DO_CMP_PPZZ_B(sve_cmpeq_ppzz_b, uint8_t,  ==)
+DO_CMP_PPZZ_H(sve_cmpeq_ppzz_h, uint16_t, ==)
+DO_CMP_PPZZ_S(sve_cmpeq_ppzz_s, uint32_t, ==)
+DO_CMP_PPZZ_D(sve_cmpeq_ppzz_d, uint64_t, ==)
+
+DO_CMP_PPZZ_B(sve_cmpne_ppzz_b, uint8_t,  !=)
+DO_CMP_PPZZ_H(sve_cmpne_ppzz_h, uint16_t, !=)
+DO_CMP_PPZZ_S(sve_cmpne_ppzz_s, uint32_t, !=)
+DO_CMP_PPZZ_D(sve_cmpne_ppzz_d, uint64_t, !=)
+
+DO_CMP_PPZZ_B(sve_cmpgt_ppzz_b, int8_t,  >)
+DO_CMP_PPZZ_H(sve_cmpgt_ppzz_h, int16_t, >)
+DO_CMP_PPZZ_S(sve_cmpgt_ppzz_s, int32_t, >)
+DO_CMP_PPZZ_D(sve_cmpgt_ppzz_d, int64_t, >)
+
+DO_CMP_PPZZ_B(sve_cmpge_ppzz_b, int8_t,  >=)
+DO_CMP_PPZZ_H(sve_cmpge_ppzz_h, int16_t, >=)
+DO_CMP_PPZZ_S(sve_cmpge_ppzz_s, int32_t, >=)
+DO_CMP_PPZZ_D(sve_cmpge_ppzz_d, int64_t, >=)
+
+DO_CMP_PPZZ_B(sve_cmphi_ppzz_b, uint8_t,  >)
+DO_CMP_PPZZ_H(sve_cmphi_ppzz_h, uint16_t, >)
+DO_CMP_PPZZ_S(sve_cmphi_ppzz_s, uint32_t, >)
+DO_CMP_PPZZ_D(sve_cmphi_ppzz_d, uint64_t, >)
+
+DO_CMP_PPZZ_B(sve_cmphs_ppzz_b, uint8_t,  >=)
+DO_CMP_PPZZ_H(sve_cmphs_ppzz_h, uint16_t, >=)
+DO_CMP_PPZZ_S(sve_cmphs_ppzz_s, uint32_t, >=)
+DO_CMP_PPZZ_D(sve_cmphs_ppzz_d, uint64_t, >=)
+
+#undef DO_CMP_PPZZ_B
+#undef DO_CMP_PPZZ_H
+#undef DO_CMP_PPZZ_S
+#undef DO_CMP_PPZZ_D
+#undef DO_CMP_PPZZ
+
+/* Similar, but the second source is "wide".  */
+#define DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H, MASK)                          \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{                                                                            \
+    intptr_t opr_sz = simd_oprsz(desc);                                      \
+    uint32_t flags = PREDTEST_INIT;                                          \
+    intptr_t i = opr_sz;                                                     \
+    do {                                                                     \
+        uint64_t out = 0, pg;                                                \
+        do {                                                                 \
+            TYPEW mm = *(TYPEW *)(vm + i - 8);                               \
+            do {                                                             \
+                i -= sizeof(TYPE), out <<= sizeof(TYPE);                     \
+                TYPE nn = *(TYPE *)(vn + H(i));                              \
+                out |= nn OP mm;                                             \
+            } while (i & 7);                                                 \
+        } while (i & 63);                                                    \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                            \
+        out &= pg;                                                           \
+        *(uint64_t *)(vd + (i >> 3)) = out;                                  \
+        flags = iter_predtest_bwd(out, pg, flags);                           \
+    } while (i > 0);                                                         \
+    return flags;                                                            \
+}
+
+#define DO_CMP_PPZW_B(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZW_H(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
+
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t,  uint64_t, ==)
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
+
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t,  uint64_t, !=)
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
+
+DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t,  int64_t, >)
+DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
+DO_CMP_PPZW_S(sve_cmpgt_ppzw_s, int32_t, int64_t, >)
+
+DO_CMP_PPZW_B(sve_cmpge_ppzw_b, int8_t,  int64_t, >=)
+DO_CMP_PPZW_H(sve_cmpge_ppzw_h, int16_t, int64_t, >=)
+DO_CMP_PPZW_S(sve_cmpge_ppzw_s, int32_t, int64_t, >=)
+
+DO_CMP_PPZW_B(sve_cmphi_ppzw_b, uint8_t,  uint64_t, >)
+DO_CMP_PPZW_H(sve_cmphi_ppzw_h, uint16_t, uint64_t, >)
+DO_CMP_PPZW_S(sve_cmphi_ppzw_s, uint32_t, uint64_t, >)
+
+DO_CMP_PPZW_B(sve_cmphs_ppzw_b, uint8_t,  uint64_t, >=)
+DO_CMP_PPZW_H(sve_cmphs_ppzw_h, uint16_t, uint64_t, >=)
+DO_CMP_PPZW_S(sve_cmphs_ppzw_s, uint32_t, uint64_t, >=)
+
+DO_CMP_PPZW_B(sve_cmplt_ppzw_b, int8_t,  int64_t, <)
+DO_CMP_PPZW_H(sve_cmplt_ppzw_h, int16_t, int64_t, <)
+DO_CMP_PPZW_S(sve_cmplt_ppzw_s, int32_t, int64_t, <)
+
+DO_CMP_PPZW_B(sve_cmple_ppzw_b, int8_t,  int64_t, <=)
+DO_CMP_PPZW_H(sve_cmple_ppzw_h, int16_t, int64_t, <=)
+DO_CMP_PPZW_S(sve_cmple_ppzw_s, int32_t, int64_t, <=)
+
+DO_CMP_PPZW_B(sve_cmplo_ppzw_b, uint8_t,  uint64_t, <)
+DO_CMP_PPZW_H(sve_cmplo_ppzw_h, uint16_t, uint64_t, <)
+DO_CMP_PPZW_S(sve_cmplo_ppzw_s, uint32_t, uint64_t, <)
+
+DO_CMP_PPZW_B(sve_cmpls_ppzw_b, uint8_t,  uint64_t, <=)
+DO_CMP_PPZW_H(sve_cmpls_ppzw_h, uint16_t, uint64_t, <=)
+DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
+
+#undef DO_CMP_PPZW_B
+#undef DO_CMP_PPZW_H
+#undef DO_CMP_PPZW_S
+#undef DO_CMP_PPZW
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "trace-tcg.h"
 #include "translate-a64.h"
 
+typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
+                                     TCGv_ptr, TCGv_ptr, TCGv_i32);
+
 /*
  * Helpers for extracting complex instruction fields.
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
     return true;
 }
 
+/*
+ *** SVE Integer Compare - Vectors Group
+ */
+
+static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
+                          gen_helper_gvec_flags_4 *gen_fn)
+{
+    TCGv_ptr pd, zn, zm, pg;
+    unsigned vsz;
+    TCGv_i32 t;
+
+    if (gen_fn == NULL) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    vsz = vec_full_reg_size(s);
+    t = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    pd = tcg_temp_new_ptr();
+    zn = tcg_temp_new_ptr();
+    zm = tcg_temp_new_ptr();
+    pg = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
+    tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    gen_fn(t, pd, zn, zm, pg, t);
+
+    tcg_temp_free_ptr(pd);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(zm);
+    tcg_temp_free_ptr(pg);
+
+    do_pred_flags(t);
+
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+#define DO_PPZZ(NAME, name)                                              \
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a,        \
+                                uint32_t insn)                           \
+{                                                                        \
+    static gen_helper_gvec_flags_4 * const fns[4] = {                    \
+        gen_helper_sve_##name##_ppzz_b, gen_helper_sve_##name##_ppzz_h,  \
+        gen_helper_sve_##name##_ppzz_s, gen_helper_sve_##name##_ppzz_d,  \
+    };                                                                   \
+    return do_ppzz_flags(s, a, fns[a->esz]);                             \
+}
+
+DO_PPZZ(CMPEQ, cmpeq)
+DO_PPZZ(CMPNE, cmpne)
+DO_PPZZ(CMPGT, cmpgt)
+DO_PPZZ(CMPGE, cmpge)
+DO_PPZZ(CMPHI, cmphi)
+DO_PPZZ(CMPHS, cmphs)
+
+#undef DO_PPZZ
+
+#define DO_PPZW(NAME, name)                                              \
+static bool trans_##NAME##_ppzw(DisasContext *s, arg_rprr_esz *a,        \
+                                uint32_t insn)                           \
+{                                                                        \
+    static gen_helper_gvec_flags_4 * const fns[4] = {                    \
+        gen_helper_sve_##name##_ppzw_b, gen_helper_sve_##name##_ppzw_h,  \
+        gen_helper_sve_##name##_ppzw_s, NULL                             \
+    };                                                                   \
+    return do_ppzz_flags(s, a, fns[a->esz]);                             \
+}
+
+DO_PPZW(CMPEQ, cmpeq)
+DO_PPZW(CMPNE, cmpne)
+DO_PPZW(CMPGT, cmpgt)
+DO_PPZW(CMPGE, cmpge)
+DO_PPZW(CMPHI, cmphi)
+DO_PPZW(CMPHS, cmphs)
+DO_PPZW(CMPLT, cmplt)
+DO_PPZW(CMPLE, cmple)
+DO_PPZW(CMPLO, cmplo)
+DO_PPZW(CMPLS, cmpls)
+
+#undef DO_PPZW
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 @rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
                &rprr_esz rm=%reg_movprfx
 @rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
+@pd_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 . rd:4 &rprr_esz
 
 # Three register operand, with governing predicate, vector element size
 @rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
@@ -XXX,XX +XXX,XX @@ SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
 # SVE select vector elements (predicated)
 SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
 
+### SVE Integer Compare - Vectors Group
+
+# SVE integer compare vectors
+CMPHS_ppzz 00100100 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_rm
+CMPHI_ppzz 00100100 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_rm
+CMPGE_ppzz 00100100 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
+CMPGT_ppzz 00100100 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
+CMPEQ_ppzz 00100100 .. 0 ..... 101 ... ..... 0 .... @pd_pg_rn_rm
+CMPNE_ppzz 00100100 .. 0 ..... 101 ... ..... 1 .... @pd_pg_rn_rm
+
+# SVE integer compare with wide elements
+# Note these require esz != 3.
+CMPEQ_ppzw 00100100 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_rm
+CMPNE_ppzw 00100100 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_rm
+CMPGE_ppzw 00100100 .. 0 ..... 010 ... ..... 0 .... @pd_pg_rn_rm
+CMPGT_ppzw 00100100 .. 0 ..... 010 ... ..... 1 .... @pd_pg_rn_rm
+CMPLT_ppzw 00100100 .. 0 ..... 011 ... ..... 0 .... @pd_pg_rn_rm
+CMPLE_ppzw 00100100 .. 0 ..... 011 ... ..... 1 .... @pd_pg_rn_rm
+CMPHS_ppzw 00100100 .. 0 ..... 110 ... ..... 0 .... @pd_pg_rn_rm
+CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
+CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
+CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1
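[Editorial aside, not from either series: the MASK constants in the DO_CMP_PPZZ_* expansions above encode the fact that SVE keeps one predicate bit per vector byte, so for 16-bit elements only every second predicate bit is significant, hence 0x5555...55 for _H. The sketch below models one small chunk of the _H case; cmpeq_h_chunk() is a hypothetical illustration, not QEMU code, and it works on an 8-bit predicate chunk (8 vector bytes, i.e. 4 halfword elements) rather than the helper's 64-bit chunks.]

#include <stdint.h>
#include <stdio.h>

static uint64_t cmpeq_h_chunk(const uint16_t *n, const uint16_t *m,
                              uint64_t pg)
{
    uint64_t out = 0;
    for (int e = 0; e < 4; e++) {
        /* each halfword element owns 2 predicate bit positions */
        out |= (uint64_t)(n[e] == m[e]) << (e * 2);
    }
    /* keep only the active, significant predicate bits */
    return out & pg & 0x5555555555555555ull;
}

int main(void)
{
    uint16_t n[4] = { 1, 2, 3, 4 }, m[4] = { 1, 0, 3, 0 };
    /* all four elements active: elements 0 and 2 compare equal -> 0x11 */
    printf("%#llx\n", (unsigned long long)cmpeq_h_chunk(n, m, 0xff));
    return 0;
}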
 }
 
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
     if (arm_feature(env, ARM_FEATURE_V6)) {
         set_feature(env, ARM_FEATURE_V5);
-        set_feature(env, ARM_FEATURE_JAZELLE);
         if (!arm_feature(env, ARM_FEATURE_M)) {
+            assert(cpu_isar_feature(jazelle, cpu));
             set_feature(env, ARM_FEATURE_AUXCR);
         }
     }
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x41069265;
     cpu->reset_fpsid = 0x41011090;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
 }
 
 static void arm946_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_AUXCR);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x4106a262;
     cpu->reset_fpsid = 0x410110a0;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
     cpu->reset_auxcr = 1;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+
     {
         /* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
         ARMCPRegInfo ifar = {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
 #define ENABLE_ARCH_5     arm_dc_feature(s, ARM_FEATURE_V5)
 /* currently all emulated v5 cores are also v5TE, so don't bother */
 #define ENABLE_ARCH_5TE   arm_dc_feature(s, ARM_FEATURE_V5)
-#define ENABLE_ARCH_5J    arm_dc_feature(s, ARM_FEATURE_JAZELLE)
+#define ENABLE_ARCH_5J    dc_isar_feature(jazelle, s)
 #define ENABLE_ARCH_6     arm_dc_feature(s, ARM_FEATURE_V6)
 #define ENABLE_ARCH_6K    arm_dc_feature(s, ARM_FEATURE_V6K)
 #define ENABLE_ARCH_6T2   arm_dc_feature(s, ARM_FEATURE_THUMB2)
--
2.19.1
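[Editorial aside, not from either series: the isar_feature_jazelle()/FIELD_DP32 pairing above is the core of the "ID registers as source of truth" conversion. The sketch below reduces the FIELD_DP32/FIELD_EX32 round trip to plain C; the real macros live in include/hw/registerfields.h, field_dp32()/field_ex32() are hypothetical names, and the shift/length values (ID_ISAR1.Jazelle at bits [31:28]) follow the ARM ARM.]

#include <stdint.h>
#include <assert.h>

#define JAZELLE_SHIFT  28
#define JAZELLE_LENGTH 4

static uint32_t field_dp32(uint32_t reg, unsigned shift, unsigned len,
                           uint32_t val)
{
    uint32_t mask = ((1u << len) - 1) << shift;
    return (reg & ~mask) | ((val << shift) & mask);
}

static uint32_t field_ex32(uint32_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((1u << len) - 1);
}

int main(void)
{
    uint32_t id_isar1 = 0;
    /* what arm926_initfn now does: advertise Jazelle via the ID register */
    id_isar1 = field_dp32(id_isar1, JAZELLE_SHIFT, JAZELLE_LENGTH, 1);
    /* what isar_feature_jazelle() then observes */
    assert(field_ex32(id_isar1, JAZELLE_SHIFT, JAZELLE_LENGTH) != 0);
    return 0;
}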
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  9 +++++++
 target/arm/sve_helper.c    | 55 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c |  2 ++
 target/arm/sve.decode      |  6 +++++
 4 files changed, 72 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
     }
     swap_memmove(vd + len, vm, opr_sz * 8 - len);
 }
+
+void HELPER(sve_sel_zpzz_b)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_b(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_h)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_h(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_s)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_s(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        d[i] = (pg[H1(i)] & 1 ? nn : mm);
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
     return do_zpzz_ool(s, a, fns[a->esz]);
 }
 
+DO_ZPZZ(SEL, sel)
+
 #undef DO_ZPZZ
 
 /*
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
                &rprr_esz rn=%reg_movprfx
 @rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
                &rprr_esz rm=%reg_movprfx
+@rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
 
 # Three register operand, with governing predicate, vector element size
 @rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
@@ -XXX,XX +XXX,XX @@ RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
 # SVE vector splice (predicated)
 SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
 
+### SVE Select Vectors Group
+
+# SVE select vector elements (predicated)
+SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     | 6 +++++-
 linux-user/elfload.c | 2 +-
 target/arm/cpu.c     | 4 ----
 target/arm/helper.c  | 2 +-
 target/arm/machine.c | 3 +--
 5 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_NEON,
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
-    ARM_FEATURE_THUMB2EE,
     ARM_FEATURE_V7MP,    /* v7 Multiprocessing Extensions */
     ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
     ARM_FEATURE_V4T,
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
 }
 
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
     GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
-    GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
+    GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
     GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7);
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     /* Note that A9 supports the MP extensions even for
      * A9UP and single-core A9MP (which are both different
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
         define_arm_cp_regs(cpu, vmsa_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
+    if (cpu_isar_feature(t32ee, cpu)) {
         define_arm_cp_regs(cpu, t2ee_cp_reginfo);
     }
     if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
 static bool thumb2ee_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_THUMB2EE);
+    return cpu_isar_feature(t32ee, cpu);
 }
 
 static const VMStateDescription vmstate_thumb2ee = {
--
2.19.1
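[Editorial aside, not from either series: sve_sel_zpzz_b above leans on expand_pred_b(), which in QEMU is table-driven. The bit-loop below shows the semantics under that assumption; expand_pred_b_ref() is a hypothetical name. Each of the 8 predicate bits fans out to a full byte of 0xff or 0x00, so the select can be done with plain AND/OR masking.]

#include <stdint.h>
#include <stdio.h>

static uint64_t expand_pred_b_ref(uint8_t byte)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (byte & (1 << i)) {
            r |= 0xffull << (i * 8);
        }
    }
    return r;
}

int main(void)
{
    /* predicate 0b00000101: bytes 0 and 2 come from Zn, the rest from Zm */
    printf("%016llx\n", (unsigned long long)expand_pred_b_ref(0x05));
    return 0;
}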
The ethernet controller in the AN505 MPC FPGA image is behind
the same AHB Peripheral Protection Controller that handles
the graphics and GPIOs. (In the documentation this is clear
in the block diagram but the ethernet controller was omitted
from the table listing devices connected to the PPC.)
The ethernet sits behind AHB PPCEXP0 interface 5. We had
incorrectly claimed that this was a "gpio4", but there are
only 4 GPIOs in this image.

Correct the QEMU model to match the hardware.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180515171446.10834-1-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
     UnimplementedDeviceState spi[5];
     UnimplementedDeviceState i2c[4];
     UnimplementedDeviceState i2s_audio;
-    UnimplementedDeviceState gpio[5];
+    UnimplementedDeviceState gpio[4];
     UnimplementedDeviceState dma[4];
     UnimplementedDeviceState gfx;
     CMSDKAPBUART uart[5];
     SplitIRQ sec_resp_splitter;
     qemu_or_irq uart_irq_orgate;
+    DeviceState *lan9118;
 } MPS2TZMachineState;
 
 #define TYPE_MPS2TZ_MACHINE "mps2tz"
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
     return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
 }
 
+static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
+                                  const char *name, hwaddr size)
+{
+    SysBusDevice *s;
+    DeviceState *iotkitdev = DEVICE(&mms->iotkit);
+    NICInfo *nd = &nd_table[0];
+
+    /* In hardware this is a LAN9220; the LAN9118 is software compatible
+     * except that it doesn't support the checksum-offload feature.
+     */
+    qemu_check_nic_model(nd, "lan9118");
+    mms->lan9118 = qdev_create(NULL, "lan9118");
+    qdev_set_nic_properties(mms->lan9118, nd);
+    qdev_init_nofail(mms->lan9118);
+
+    s = SYS_BUS_DEVICE(mms->lan9118);
+    sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
+    return sysbus_mmio_get_region(s, 0);
+}
+
 static void mps2tz_common_init(MachineState *machine)
 {
     MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
         { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
         { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
         { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
-        { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
+        { "eth", make_eth_dev, NULL, 0x42000000, 0x100000 },
     },
 }, {
     .name = "ahb_ppcexp1",
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
                      "cfg_sec_resp", 0));
     }
 
-    /* In hardware this is a LAN9220; the LAN9118 is software compatible
-     * except that it doesn't support the checksum-offload feature.
-     * The ethernet controller is not behind a PPC.
-     */
-    lan9118_init(&nd_table[0], 0x42000000,
-                 qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
-
     create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
 
     armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h            | 16 +++++++++++++++-
 linux-user/aarch64/signal.c |  4 ++--
 linux-user/elfload.c        |  2 +-
 linux-user/syscall.c        | 10 ++++++----
 target/arm/cpu64.c          |  5 ++++-
 target/arm/helper.c         |  9 ++++++---
 target/arm/machine.c        |  3 +--
 target/arm/translate-a64.c  |  4 ++--
 8 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
 FIELD(ID_AA64ISAR1, SB, 36, 4)
 FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
 
+FIELD(ID_AA64PFR0, EL0, 0, 4)
+FIELD(ID_AA64PFR0, EL1, 4, 4)
+FIELD(ID_AA64PFR0, EL2, 8, 4)
+FIELD(ID_AA64PFR0, EL3, 12, 4)
+FIELD(ID_AA64PFR0, FP, 16, 4)
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
+FIELD(ID_AA64PFR0, GIC, 24, 4)
+FIELD(ID_AA64PFR0, RAS, 28, 4)
+FIELD(ID_AA64PFR0, SVE, 32, 4)
+
 QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
 
 /* If adding a feature bit which corresponds to a Linux ELF
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_SVE, /* has Scalable Vector Extension */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
 }
 
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
         break;
 
     case TARGET_SVE_MAGIC:
-        if (arm_feature(env, ARM_FEATURE_SVE)) {
+        if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
             vq = (env->vfp.zcr_el[1] & 0xf) + 1;
             sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
             if (!sve && size == sve_size) {
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
                                &layout);
 
     /* SVE state needs saving only if it exists.  */
-    if (arm_feature(env, ARM_FEATURE_SVE)) {
+    if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
         vq = (env->vfp.zcr_el[1] & 0xf) + 1;
         sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
         sve_ofs = alloc_sigframe_space(sve_size, &layout);
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
     GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
-    GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
+    GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
 
 #undef GET_FEATURE
 #undef GET_FEATURE_ID
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
          * even though the current architectural maximum is VQ=16.
          */
         ret = -TARGET_EINVAL;
-        if (arm_feature(cpu_env, ARM_FEATURE_SVE)
+        if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
             && arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
             CPUARMState *env = cpu_env;
             ARMCPU *cpu = arm_env_get_cpu(env);
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
         return ret;
     case TARGET_PR_SVE_GET_VL:
         ret = -TARGET_EINVAL;
-        if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
-            CPUARMState *env = cpu_env;
-            ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
+        {
+            ARMCPU *cpu = arm_env_get_cpu(cpu_env);
+            if (cpu_isar_feature(aa64_sve, cpu)) {
+                ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
+            }
         }
         return ret;
 #endif /* AARCH64 */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
         cpu->isar.id_aa64isar1 = t;
 
+        t = cpu->isar.id_aa64pfr0;
+        t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
+        cpu->isar.id_aa64pfr0 = t;
+
         /* Replicate the same data to the 32-bit id registers.  */
         u = cpu->isar.id_isar5;
         u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * present in either.
          */
         set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
-        set_feature(&cpu->env, ARM_FEATURE_SVE);
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
          */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_one_arm_cp_reg(cpu, &sctlr);
     }
 
-    if (arm_feature(env, ARM_FEATURE_SVE)) {
+    if (cpu_isar_feature(aa64_sve, cpu)) {
         define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
         if (arm_feature(env, ARM_FEATURE_EL2)) {
             define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
     uint32_t flags;
 
     if (is_a64(env)) {
+        ARMCPU *cpu = arm_env_get_cpu(env);
+
         *pc = env->pc;
         flags = ARM_TBFLAG_AARCH64_STATE_MASK;
         /* Get control bits for tagged addresses */
         flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
         flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
 
-        if (arm_feature(env, ARM_FEATURE_SVE)) {
+        if (cpu_isar_feature(aa64_sve, cpu)) {
             int sve_el = sve_exception_el(env, current_el);
             uint32_t zcr_len;
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
 void aarch64_sve_change_el(CPUARMState *env, int old_el,
                            int new_el, bool el0_a64)
 {
+    ARMCPU *cpu = arm_env_get_cpu(env);
     int old_len, new_len;
     bool old_a64, new_a64;
 
     /* Nothing to do if no SVE.  */
-    if (!arm_feature(env, ARM_FEATURE_SVE)) {
+    if (!cpu_isar_feature(aa64_sve, cpu)) {
         return;
     }
 
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
 static bool sve_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_SVE);
+    return cpu_isar_feature(aa64_sve, cpu);
 }
 
 /* The first two words of each Zreg is stored in VFP state.  */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
     cpu_fprintf(f, "     FPCR=%08x FPSR=%08x\n",
                 vfp_get_fpcr(env), vfp_get_fpsr(env));
 
-    if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
+    if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
         int j, zcr_len = sve_zcr_len_for_el(env, el);
 
         for (i = 0; i <= FFR_PRED_NUM; i++) {
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
         unallocated_encoding(s);
         break;
     case 0x2:
-        if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
+        if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
             unallocated_encoding(s);
         }
         break;
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  25 +++++++
 target/arm/sve_helper.c    |  41 +++++++++++
 target/arm/translate-sve.c | 144 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  26 +++++++
 4 files changed, 236 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_VPZ
 #undef DO_VPZ_D
 
+/* Two vector operand, one scalar operand, unpredicated.  */
+#define DO_ZZI(NAME, TYPE, OP)                                       \
+void HELPER(NAME)(void *vd, void *vn, uint64_t s64, uint32_t desc)   \
+{                                                                    \
+    intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE);            \
+    TYPE s = s64, *d = vd, *n = vn;                                  \
+    for (i = 0; i < opr_sz; ++i) {                                   \
+        d[i] = OP(n[i], s);                                          \
+    }                                                                \
+}
+
+#define DO_SUBR(X, Y) (Y - X)
+
+DO_ZZI(sve_subri_b, uint8_t, DO_SUBR)
+DO_ZZI(sve_subri_h, uint16_t, DO_SUBR)
+DO_ZZI(sve_subri_s, uint32_t, DO_SUBR)
+DO_ZZI(sve_subri_d, uint64_t, DO_SUBR)
+
+DO_ZZI(sve_smaxi_b, int8_t, DO_MAX)
+DO_ZZI(sve_smaxi_h, int16_t, DO_MAX)
+DO_ZZI(sve_smaxi_s, int32_t, DO_MAX)
+DO_ZZI(sve_smaxi_d, int64_t, DO_MAX)
+
+DO_ZZI(sve_smini_b, int8_t, DO_MIN)
+DO_ZZI(sve_smini_h, int16_t, DO_MIN)
+DO_ZZI(sve_smini_s, int32_t, DO_MIN)
+DO_ZZI(sve_smini_d, int64_t, DO_MIN)
+
+DO_ZZI(sve_umaxi_b, uint8_t, DO_MAX)
+DO_ZZI(sve_umaxi_h, uint16_t, DO_MAX)
+DO_ZZI(sve_umaxi_s, uint32_t, DO_MAX)
+DO_ZZI(sve_umaxi_d, uint64_t, DO_MAX)
+
+DO_ZZI(sve_umini_b, uint8_t, DO_MIN)
+DO_ZZI(sve_umini_h, uint16_t, DO_MIN)
+DO_ZZI(sve_umini_s, uint32_t, DO_MIN)
+DO_ZZI(sve_umini_d, uint64_t, DO_MIN)
+
+#undef DO_ZZI
+
 #undef DO_AND
 #undef DO_ORR
 #undef DO_EOR
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_ASR
 #undef DO_LSR
 #undef DO_LSL
+#undef DO_SUBR
 
 /* Similar to the ARM LastActiveElement pseudocode function, except the
    result is multiplied by the element size.  This includes the not found
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static inline int expand_imm_sh8s(int x)
     return (int8_t)x << (x & 0x100 ? 8 : 0);
 }
 
+static inline int expand_imm_sh8u(int x)
+{
+    return (uint8_t)x << (x & 0x100 ? 8 : 0);
+}
+
 /*
  * Include the generated decoder.
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
     return true;
 }
 
+static bool trans_ADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_addi(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    a->imm = -a->imm;
+    return trans_ADD_zzi(s, a, insn);
+}
+
+static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    static const GVecGen2s op[4] = {
+        { .fni8 = tcg_gen_vec_sub8_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_b,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_8,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_vec_sub16_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_h,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_16,
+          .scalar_first = true },
+        { .fni4 = tcg_gen_sub_i32,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_s,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_32,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_sub_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_d,
+          .opc = INDEX_op_sub_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .vece = MO_64,
+          .scalar_first = true }
+    };
+
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+        tcg_gen_gvec_2s(vec_full_reg_offset(s, a->rd),
+                        vec_full_reg_offset(s, a->rn),
+                        vsz, vsz, c, &op[a->esz]);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+static bool trans_MUL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_muli(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool do_zzi_sat(DisasContext *s, arg_rri_esz *a, uint32_t insn,
+                       bool u, bool d)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_const_i64(a->imm);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, u, d);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_SQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, false);
+}
+
+static bool trans_UQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, false);
+}
+
+static bool trans_SQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, true);
+}
+
+static bool trans_UQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, true);
+}
+
+static bool do_zzi_ool(DisasContext *s, arg_rri_esz *a, gen_helper_gvec_2i *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+
+        tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
+                            vec_full_reg_offset(s, a->rn),
+                            c, vsz, vsz, 0, fn);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+#define DO_ZZI(NAME, name)                                              \
+static bool trans_##NAME##_zzi(DisasContext *s, arg_rri_esz *a,         \
+                               uint32_t insn)                           \
+{                                                                       \
+    static gen_helper_gvec_2i * const fns[4] = {                        \
+        gen_helper_sve_##name##i_b, gen_helper_sve_##name##i_h,         \
+        gen_helper_sve_##name##i_s, gen_helper_sve_##name##i_d,         \
+    };                                                                  \
+    return do_zzi_ool(s, a, fns[a->esz]);                               \
+}
+
+DO_ZZI(SMAX, smax)
+DO_ZZI(UMAX, umax)
+DO_ZZI(SMIN, smin)
+DO_ZZI(UMIN, umin)
+
+#undef DO_ZZI
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 
 # Signed 8-bit immediate, optionally shifted left by 8.
 %sh8_i8s 5:9 !function=expand_imm_sh8s
+# Unsigned 8-bit immediate, optionally shifted left by 8.
+%sh8_i8u 5:9 !function=expand_imm_sh8u
 
 # Either a copy of rd (at bit 0), or a different source
 # as propagated via the MOVPRFX instruction.
@@ -XXX,XX +XXX,XX @@
 @pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
 @rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
                 &rrr_esz rn=%reg_movprfx
+@rdn_sh_i8u ........ esz:2 ...... ...... ..... rd:5 \
+                &rri_esz rn=%reg_movprfx imm=%sh8_i8u
+@rdn_i8u ........ esz:2 ...... ... imm:8 rd:5 \
+                &rri_esz rn=%reg_movprfx
+@rdn_i8s ........ esz:2 ...... ... imm:s8 rd:5 \
+                &rri_esz rn=%reg_movprfx
 
 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
@@ -XXX,XX +XXX,XX @@ FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
 # SVE broadcast integer immediate (unpredicated)
 DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s
 
+# SVE integer add/subtract immediate (unpredicated)
+ADD_zzi 00100101 .. 100 000 11 . ........ ..... @rdn_sh_i8u
+SUB_zzi 00100101 .. 100 001 11 . ........ ..... @rdn_sh_i8u
+SUBR_zzi 00100101 .. 100 011 11 . ........ ..... @rdn_sh_i8u
+SQADD_zzi 00100101 .. 100 100 11 . ........ ..... @rdn_sh_i8u
+UQADD_zzi 00100101 .. 100 101 11 . ........ ..... @rdn_sh_i8u
+SQSUB_zzi 00100101 .. 100 110 11 . ........ ..... @rdn_sh_i8u
+UQSUB_zzi 00100101 .. 100 111 11 . ........ ..... @rdn_sh_i8u
+
+# SVE integer min/max immediate (unpredicated)
+SMAX_zzi 00100101 .. 101 000 110 ........ ..... @rdn_i8s
+UMAX_zzi 00100101 .. 101 001 110 ........ ..... @rdn_i8u
+SMIN_zzi 00100101 .. 101 010 110 ........ ..... @rdn_i8s
+UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
+
+# SVE integer multiply immediate (unpredicated)
+MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 17 +++++++++++++++-
 linux-user/elfload.c       |  6 +-----
 target/arm/cpu64.c         | 16 ++++++++-------
 target/arm/helper.c        |  2 +-
 target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
 target/arm/translate.c     |  6 +++---
 6 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
 }
 
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
+{
+    /*
+     * This is a placeholder for use by VCMA until the rest of
+     * the ARMv8.2-FP16 extension is implemented for aa32 mode.
+     * At which point we can properly set and check MVFR1.FPHP.
+     */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
 }
 
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
+{
+    /* We always set the AdvSIMD and FP fields identically wrt FP16.  */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     hwcaps |= ARM_HWCAP_A64_ASIMD;
 
     /* probe for the extra features */
-#define GET_FEATURE(feat, hwcap) \
-    do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
 #define GET_FEATURE_ID(feat, hwcap) \
     do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
 
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
     GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
     GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_FP16,
-                ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
+    GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
     GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
     GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
     GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
 
-#undef GET_FEATURE
 #undef GET_FEATURE_ID
 
     return hwcaps;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 
         t = cpu->isar.id_aa64pfr0;
         t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
         cpu->isar.id_aa64pfr0 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, DP, 1);
         cpu->isar.id_isar6 = u;
 
-#ifdef CONFIG_USER_ONLY
-        /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
-         * and in some cases they're only available in AArch64 and not AArch32,
-         * whereas the architecture requires them to be present in both if
-         * present in either.
+        /*
+         * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
+         * so do not set MVFR1.FPHP.  Strictly speaking this is not legal,
+         * but it is also not legal to enable SVE without support for FP16,
+         * and enabling SVE in system mode is more useful in the short term.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
+
+#ifdef CONFIG_USER_ONLY
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
          */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
     uint32_t changed;
 
     /* When ARMv8.2-FP16 is not supported, FZ16 is RES0.  */
-    if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
+    if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
         val &= ~FPCR_FZ16;
     }
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
         handle_fp_1src_double(s, opcode, rd, rn);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
         handle_fp_2src_double(s, opcode, rd, rn, rm);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
         handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
         break;
     case 0x6: /* 16-bit float, 32-bit int */
     case 0xe: /* 16-bit float, 64-bit int */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          */
         is_min = extract32(size, 1, 1);
         is_fp = true;
-        if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!is_u && dc_isar_feature(aa64_fp16, s)) {
             size = 1;
         } else if (!is_u || !is_q || extract32(size, 0, 1)) {
             unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
 
     if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
         /* Check for FMOV (vector, immediate) - half-precision */
-        if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
+        if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
     case 0x2f: /* FMINP */
         /* FP op, size[0] is 32 or 64 bit*/
         if (!u) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+            if (!dc_isar_feature(aa64_fp16, s)) {
                 unallocated_encoding(s);
                 return;
             } else {
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 0x2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
         return;
     }
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
     TCGv_ptr fpst;
     bool pairwise = false;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     case 0x1c: /* FCADD, #90 */
     case 0x1e: /* FCADD, #270 */
         if (size == 0
-            || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
+            || (size == 1 && !dc_isar_feature(aa64_fp16, s))
             || (size == 3 && !is_q)) {
             unallocated_encoding(s);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     bool need_fpst = true;
     int rmode;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         }
         break;
     }
-    if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
     int size = extract32(insn, 20, 1);
     data = extract32(insn, 23, 2); /* rot */
if (!dc_isar_feature(aa32_vcma, s)
329
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
330
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
331
return 1;
332
}
333
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
334
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
335
int size = extract32(insn, 20, 1);
336
data = extract32(insn, 24, 1); /* rot */
337
if (!dc_isar_feature(aa32_vcma, s)
338
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
340
return 1;
341
}
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
344
return 1;
345
}
346
if (size == 0) {
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
349
return 1;
350
}
351
/* For fp16, rm is just Vm, and index is M. */
319
--
352
--
320
2.17.1
353
2.19.1
321
354
322
355
diff view generated by jsdifflib
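(For readers following the DO_ZZI macro above: expanded by hand for one case it yields a trans function like the sketch below. This expansion is implied by the macro in the diff rather than spelled out in the patch.)

    static bool trans_SMAX_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
    {
        static gen_helper_gvec_2i * const fns[4] = {
            gen_helper_sve_smaxi_b, gen_helper_sve_smaxi_h,
            gen_helper_sve_smaxi_s, gen_helper_sve_smaxi_d,
        };
        return do_zzi_ool(s, a, fns[a->esz]);
    }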
Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
When initializing a notifier with iommu_notifier_init(), the caller
must pass the IOMMU index that it is interested in. When a change
happens, the IOMMU implementation must pass
memory_region_notify_iommu() the IOMMU index that has changed and
that notifiers must be called for.

IOMMUs which support only a single index don't need to change.
Callers which only really support working with IOMMUs with a single
index can use the result of passing MEMTXATTRS_UNSPECIFIED to
memory_region_iommu_attrs_to_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
---
 include/exec/memory.h    | 7 ++++++-
 hw/i386/intel_iommu.c    | 6 +++---
 hw/ppc/spapr_iommu.c     | 2 +-
 hw/s390x/s390-pci-inst.c | 4 ++--
 hw/vfio/common.c         | 6 +++++-
 hw/virtio/vhost.c        | 7 ++++++-
 memory.c                 | 8 +++++++-
 7 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct IOMMUNotifier {
     /* Notify for address space range start <= addr <= end */
     hwaddr start;
     hwaddr end;
+    int iommu_idx;
     QLIST_ENTRY(IOMMUNotifier) node;
 };
 typedef struct IOMMUNotifier IOMMUNotifier;
 
 static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
                                        IOMMUNotifierFlag flags,
-                                       hwaddr start, hwaddr end)
+                                       hwaddr start, hwaddr end,
+                                       int iommu_idx)
 {
     n->notify = fn;
     n->notifier_flags = flags;
     n->start = start;
     n->end = end;
+    n->iommu_idx = iommu_idx;
 }
 
 /*
@@ -XXX,XX +XXX,XX @@ uint64_t memory_region_iommu_get_min_page_size(IOMMUMemoryRegion *iommu_mr);
  * should be notified with an UNMAP followed by a MAP.
  *
  * @iommu_mr: the memory region that was changed
+ * @iommu_idx: the IOMMU index for the translation table which has changed
  * @entry: the new entry in the IOMMU translation table. The entry
  *         replaces all old entries for the same virtual I/O address range.
  *         Deleted entries have .@perm == 0.
  */
 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry);
 
 /**
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
 static int vtd_sync_shadow_page_hook(IOMMUTLBEntry *entry,
                                      void *private)
 {
-    memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
+    memory_region_notify_iommu((IOMMUMemoryRegion *)private, 0, *entry);
     return 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
                 .addr_mask = size - 1,
                 .perm = IOMMU_NONE,
             };
-            memory_region_notify_iommu(&vtd_as->iommu, entry);
+            memory_region_notify_iommu(&vtd_as->iommu, 0, entry);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
     entry.iova = addr;
     entry.perm = IOMMU_NONE;
     entry.translated_addr = 0;
-    memory_region_notify_iommu(&vtd_dev_as->iommu, entry);
+    memory_region_notify_iommu(&vtd_dev_as->iommu, 0, entry);
 
 done:
     return true;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
     entry.translated_addr = tce & page_mask;
     entry.addr_mask = ~page_mask;
     entry.perm = spapr_tce_iommu_access_flags(tce);
-    memory_region_notify_iommu(&tcet->iommu, entry);
+    memory_region_notify_iommu(&tcet->iommu, 0, entry);
 
     return H_SUCCESS;
 }
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
         }
 
         notify.perm = IOMMU_NONE;
-        memory_region_notify_iommu(&iommu->iommu_mr, notify);
+        memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
         notify.perm = entry->perm;
     }
 
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
         g_hash_table_replace(iommu->iotlb, &cache->iova, cache);
     }
 
-    memory_region_notify_iommu(&iommu->iommu_mr, notify);
+    memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
 }
 
 int rpcit_service_call(S390CPU *cpu, uint8_t r1, uint8_t r2, uintptr_t ra)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
         IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
+        int iommu_idx;
 
         trace_vfio_listener_region_add_iommu(iova, end);
         /*
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
         llend = int128_add(int128_make64(section->offset_within_region),
                            section->size);
         llend = int128_sub(llend, int128_one());
+        iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                       MEMTXATTRS_UNSPECIFIED);
         iommu_notifier_init(&giommu->n, vfio_iommu_map_notify,
                             IOMMU_NOTIFIER_ALL,
                             section->offset_within_region,
-                            int128_get64(llend));
+                            int128_get64(llend),
+                            iommu_idx);
         QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
 
         memory_region_register_iommu_notifier(section->mr, &giommu->n);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
                                          iommu_listener);
     struct vhost_iommu *iommu;
     Int128 end;
+    int iommu_idx;
+    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
 
     if (!memory_region_is_iommu(section->mr)) {
         return;
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
     end = int128_add(int128_make64(section->offset_within_region),
                      section->size);
     end = int128_sub(end, int128_one());
+    iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                   MEMTXATTRS_UNSPECIFIED);
     iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                         IOMMU_NOTIFIER_UNMAP,
                         section->offset_within_region,
-                        int128_get64(end));
+                        int128_get64(end),
+                        iommu_idx);
     iommu->mr = section->mr;
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
     iommu_mr = IOMMU_MEMORY_REGION(mr);
     assert(n->notifier_flags != IOMMU_NOTIFIER_NONE);
     assert(n->start <= n->end);
+    assert(n->iommu_idx >= 0 &&
+           n->iommu_idx < memory_region_iommu_num_indexes(iommu_mr));
+
     QLIST_INSERT_HEAD(&iommu_mr->iommu_notify, n, node);
     memory_region_update_iommu_notify_flags(iommu_mr);
 }
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_one(IOMMUNotifier *notifier,
 }
 
 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry)
 {
     IOMMUNotifier *iommu_notifier;
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
     assert(memory_region_is_iommu(MEMORY_REGION(iommu_mr)));
 
     IOMMU_NOTIFIER_FOREACH(iommu_notifier, iommu_mr) {
-        memory_region_notify_one(iommu_notifier, &entry);
+        if (iommu_notifier->iommu_idx == iommu_idx) {
+            memory_region_notify_one(iommu_notifier, &entry);
+        }
     }
 }
--
2.17.1


For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c    | 10 ++++++++++
 target/arm/translate.c |  7 +------
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
 }
 
+/**
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
+ * @psr: Program Status Register indicating CPU mode
+ *
+ * Returns, for debug logging purposes, a printable representation
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
+ * the low bits of the specified PSR.
+ */
+static inline const char *aarch32_mode_name(uint32_t psr)
+{
+    static const char cpu_mode_names[16][4] = {
+        "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
+        "???", "???", "hyp", "und", "???", "???", "???", "sys"
+    };
+
+    return cpu_mode_names[psr & 0xf];
+}
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                 mask |= CPSR_IL;
                 val |= CPSR_IL;
             }
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "Illegal AArch32 mode switch attempt from %s to %s\n",
+                          aarch32_mode_name(env->uncached_cpsr),
+                          aarch32_mode_name(val));
         } else {
+            qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
+                          write_type == CPSRWriteExceptionReturn ?
+                          "Exception return from AArch32" :
+                          "AArch32 mode switch from",
+                          aarch32_mode_name(env->uncached_cpsr),
+                          aarch32_mode_name(val), env->regs[15]);
             switch_mode(env, val & CPSR_M);
         }
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
     translator_loop(ops, &dc.base, cpu, tb);
 }
 
-static const char *cpu_mode_names[16] = {
-    "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
-    "???", "???", "hyp", "und", "???", "???", "???", "sys"
-};
-
 void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                         int flags)
 {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     psr & CPSR_V ? 'V' : '-',
                     psr & CPSR_T ? 'T' : 'A',
                     ns_status,
-                    cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
+                    aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
     }
 
     if (flags & CPU_DUMP_FPU) {
--
2.19.1
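(Aside: for callers that only care about a single IOMMU index, the conversion pattern used in the vfio and vhost hunks above boils down to the following sketch. `my_map_notify` and `n` are placeholder names for illustration, not code from the patch.)

    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
    IOMMUNotifier n;
    /* Ask the IOMMU which index transactions with no particular
     * attributes would use, and register for just that index.
     */
    int iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                       MEMTXATTRS_UNSPECIFIED);

    iommu_notifier_init(&n, my_map_notify, IOMMU_NOTIFIER_ALL,
                        0, HWADDR_MAX, iommu_idx);
    memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr), &n);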
Convert the parallel device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant, which is used by the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-7-peter.maydell@linaro.org
---
 hw/char/parallel.c | 50 ++++++++++------------------------------------
 1 file changed, 11 insertions(+), 39 deletions(-)

diff --git a/hw/char/parallel.c b/hw/char/parallel.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/parallel.c
+++ b/hw/char/parallel.c
@@ -XXX,XX +XXX,XX @@ static void parallel_isa_realizefn(DeviceState *dev, Error **errp)
 }
 
 /* Memory mapped interface */
-static uint32_t parallel_mm_readb (void *opaque, hwaddr addr)
+static uint64_t parallel_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     ParallelState *s = opaque;
 
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFF;
+    return parallel_ioport_read_sw(s, addr >> s->it_shift) &
+        MAKE_64BIT_MASK(0, size * 8);
 }
 
-static void parallel_mm_writeb (void *opaque,
-                                hwaddr addr, uint32_t value)
+static void parallel_mm_writefn(void *opaque, hwaddr addr,
+                                uint64_t value, unsigned size)
 {
     ParallelState *s = opaque;
 
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFF);
-}
-
-static uint32_t parallel_mm_readw (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFFFF;
-}
-
-static void parallel_mm_writew (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFFFF);
-}
-
-static uint32_t parallel_mm_readl (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift);
-}
-
-static void parallel_mm_writel (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value);
+    parallel_ioport_write_sw(s, addr >> s->it_shift,
+                             value & MAKE_64BIT_MASK(0, size * 8));
 }
 
 static const MemoryRegionOps parallel_mm_ops = {
-    .old_mmio = {
-        .read = { parallel_mm_readb, parallel_mm_readw, parallel_mm_readl },
-        .write = { parallel_mm_writeb, parallel_mm_writew, parallel_mm_writel },
-    },
+    .read = parallel_mm_readfn,
+    .write = parallel_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
--
2.17.1


The switch_mode() function is defined in target/arm/helper.c and used
only in that file and nowhere else, so we can make it file-local
rather than global.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
---
 target/arm/internals.h | 1 -
 target/arm/helper.c    | 6 ++++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
     g_assert_not_reached();
 }
 
-void switch_mode(CPUARMState *, int);
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
 void arm_translate_init(void);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
                                 V8M_SAttributes *sattrs);
 #endif
 
+static void switch_mode(CPUARMState *env, int mode);
+
 static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
 {
     int nregs;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return 0;
 }
 
-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
 
 #else
 
-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     int old_mode;
     int i;
--
2.19.1
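(Aside: the reason one read/write pair can replace the three old_mmio entries is that MAKE_64BIT_MASK(0, size * 8) reproduces the per-size masks the old callbacks applied by hand. A quick compile-time sanity check, not from the patch:)

    /* size = 1, 2, 4 give the masks the old readb/readw/readl used */
    QEMU_BUILD_BUG_ON(MAKE_64BIT_MASK(0, 1 * 8) != 0xff);
    QEMU_BUILD_BUG_ON(MAKE_64BIT_MASK(0, 2 * 8) != 0xffff);
    QEMU_BUILD_BUG_ON(MAKE_64BIT_MASK(0, 4 * 8) != 0xffffffff);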
There's a common pattern in QEMU where a function needs to perform
a data load or store of an N byte integer in a particular endianness.
At the moment this is handled by doing a switch() on the size and
calling the appropriate ld*_p or st*_p function for each size.

Provide a new family of functions ldn_*_p() and stn_*_p() which
take the size as an argument and do the switch() themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
---
 include/exec/cpu-all.h      |  4 +++
 include/qemu/bswap.h        | 52 +++++++++++++++++++++++++++++++++++++
 docs/devel/loads-stores.rst | 15 +++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_be_p(p, v)
 #define stfl_p(p, v) stfl_be_p(p, v)
 #define stfq_p(p, v) stfq_be_p(p, v)
+#define ldn_p(p, sz) ldn_be_p(p, sz)
+#define stn_p(p, sz, v) stn_be_p(p, sz, v)
 #else
 #define lduw_p(p) lduw_le_p(p)
 #define ldsw_p(p) ldsw_le_p(p)
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_le_p(p, v)
 #define stfl_p(p, v) stfl_le_p(p, v)
 #define stfq_p(p, v) stfq_le_p(p, v)
+#define ldn_p(p, sz) ldn_le_p(p, sz)
+#define stn_p(p, sz, v) stn_le_p(p, sz, v)
 #endif
 
 /* MMU memory access macros */
diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/bswap.h
+++ b/include/qemu/bswap.h
@@ -XXX,XX +XXX,XX @@ typedef union {
  * For accessors that take a guest address rather than a
  * host address, see the cpu_{ld,st}_* accessors defined in
  * cpu_ldst.h.
+ *
+ * For cases where the size to be used is not fixed at compile time,
+ * there are
+ *  stn{endian}_p(ptr, sz, val)
+ * which stores @val to @ptr as an @endian-order number @sz bytes in size
+ * and
+ *  ldn{endian}_p(ptr, sz)
+ * which loads @sz bytes from @ptr as an unsigned @endian-order number
+ * and returns it in a uint64_t.
  */
 
 static inline int ldub_p(const void *ptr)
@@ -XXX,XX +XXX,XX @@ static inline unsigned long leul_to_cpu(unsigned long v)
 #endif
 }
 
+/* Store v to p as a sz byte value in host order */
+#define DO_STN_LDN_P(END) \
+    static inline void stn_## END ## _p(void *ptr, int sz, uint64_t v)  \
+    {                                                                   \
+        switch (sz) {                                                   \
+        case 1:                                                         \
+            stb_p(ptr, v);                                              \
+            break;                                                      \
+        case 2:                                                         \
+            stw_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        case 4:                                                         \
+            stl_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        case 8:                                                         \
+            stq_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        default:                                                        \
+            g_assert_not_reached();                                     \
+        }                                                               \
+    }                                                                   \
+    static inline uint64_t ldn_## END ## _p(const void *ptr, int sz)    \
+    {                                                                   \
+        switch (sz) {                                                   \
+        case 1:                                                         \
+            return ldub_p(ptr);                                         \
+        case 2:                                                         \
+            return lduw_ ## END ## _p(ptr);                             \
+        case 4:                                                         \
+            return (uint32_t)ldl_ ## END ## _p(ptr);                    \
+        case 8:                                                         \
+            return ldq_ ## END ## _p(ptr);                              \
+        default:                                                        \
+            g_assert_not_reached();                                     \
+        }                                                               \
+    }
+
+DO_STN_LDN_P(he)
+DO_STN_LDN_P(le)
+DO_STN_LDN_P(be)
+
+#undef DO_STN_LDN_P
+
 #undef le_bswap
 #undef be_bswap
 #undef le_bswaps
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ The ``_{endian}`` infix is omitted for target-endian accesses.
 The target endian accessors are only available to source
 files which are built per-target.
 
+There are also functions which take the size as an argument:
+
+load: ``ldn{endian}_p(ptr, sz)``
+
+which performs an unsigned load of ``sz`` bytes from ``ptr``
+as an ``{endian}`` order value and returns it in a uint64_t.
+
+store: ``stn{endian}_p(ptr, sz, val)``
+
+which stores ``val`` to ``ptr`` as an ``{endian}`` order value
+of size ``sz`` bytes.
+
+
 Regexes for git grep
  - ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
+ - ``\<ldn_\([hbl]e\)?_p\>``
+ - ``\<stn_\([hbl]e\)?_p\>``
 
 ``cpu_{ld,st}_*``
 ~~~~~~~~~~~~~~~~~
--
2.17.1


The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only" to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
 AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
 ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
 AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
 TLBI VALE1, TLBI VAALE1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
 1 file changed, 116 insertions(+), 75 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     raw_write(env, ri, value);
 }
 
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate all (TLBIALL) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate by ASID (TLBIASID) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
 /* IS variants of TLB operations must affect all cores */
 static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
 }
 
+/*
+ * Non-IS variants of TLB operations are upgraded to
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
+ * force broadcast of these operations.
+ */
+static bool tlb_force_broadcast(CPUARMState *env)
+{
+    return (env->cp15.hcr_el2 & HCR_FB) &&
+        arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
+}
+
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate all (TLBIALL) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiall_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimva_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate by ASID (TLBIASID) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiasid_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimvaa_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
  * Page D4-1736 (DDI0487A.b)
  */
 
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
-{
-    CPUState *cs = ENV_GET_CPU(env);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S12NSE1 |
-                            ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }
 
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vmalle1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S1SE1 |
+                            ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S12NSE1 |
+                            ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
 }
 
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 uint64_t value)
-{
-    /* Invalidate by VA, EL1&0 (AArch64 version).
-     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
-     * since we don't support flush-for-specific-ASID-only or
-     * flush-last-level-only.
-     */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-    CPUState *cs = CPU(cpu);
-    uint64_t pageaddr = sextract64(value << 12, 0, 56);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S1SE1 |
-                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S12NSE1 |
-                                 ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }
 
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL1&0 (AArch64 version).
+     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vae1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S1SE1 |
+                                 ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S12NSE1 |
+                                 ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
--
2.19.1
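(Aside: a usage sketch for the new accessors, not part of the patch. Code that previously did a switch on size collapses to single calls; the values here are illustrative only.)

    uint8_t buf[8];

    stn_le_p(buf, 4, 0x12345678);    /* equivalent to stl_le_p(buf, 0x12345678) */
    uint64_t val = ldn_le_p(buf, 4); /* val == 0x12345678, zero-extended */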
Currently we don't support board configurations that put an IOMMU
in the path of the CPU's memory transactions, and instead just
assert() if the memory region found in address_space_translate_for_iotlb()
is an IOMMUMemoryRegion.

Remove this limitation by having the function handle IOMMUs.
This is mostly straightforward, but we must make sure we have
a notifier registered for every IOMMU that a transaction has
passed through, so that we can flush the TLB appropriately
when any of the IOMMUs change their mappings.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
---
 include/exec/exec-all.h |   3 +-
 include/qom/cpu.h       |   3 +
 accel/tcg/cputlb.c      |   3 +-
 exec.c                  | 135 +++++++++++++++++++++++++++++++++++++++-
 4 files changed, 140 insertions(+), 4 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
 
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen);
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot);
 hwaddr memory_region_section_get_iotlb(CPUState *cpu,
                                        MemoryRegionSection *section,
                                        target_ulong vaddr,
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPUState {
     uint16_t pending_tlb_flush;
 
     int hvf_fd;
+
+    /* track IOMMUs whose translations we've cached in the TCG TLB */
+    GArray *iommu_notifiers;
 };
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     }
 
     sz = size;
-    section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz);
+    section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
+                                                attrs, &prot);
     assert(sz >= TARGET_PAGE_SIZE);
 
     tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
     return mr;
 }
 
+typedef struct TCGIOMMUNotifier {
+    IOMMUNotifier n;
+    MemoryRegion *mr;
+    CPUState *cpu;
+    int iommu_idx;
+    bool active;
+} TCGIOMMUNotifier;
+
+static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
+{
+    TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
+
+    if (!notifier->active) {
+        return;
+    }
+    tlb_flush(notifier->cpu);
+    notifier->active = false;
+    /* We leave the notifier struct on the list to avoid reallocating it later.
+     * Generally the number of IOMMUs a CPU deals with will be small.
+     * In any case we can't unregister the iommu notifier from a notify
+     * callback.
+     */
+}
+
+static void tcg_register_iommu_notifier(CPUState *cpu,
+                                        IOMMUMemoryRegion *iommu_mr,
+                                        int iommu_idx)
+{
+    /* Make sure this CPU has an IOMMU notifier registered for this
+     * IOMMU/IOMMU index combination, so that we can flush its TLB
+     * when the IOMMU tells us the mappings we've cached have changed.
+     */
+    MemoryRegion *mr = MEMORY_REGION(iommu_mr);
+    TCGIOMMUNotifier *notifier;
+    int i;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        if (notifier->mr == mr && notifier->iommu_idx == iommu_idx) {
+            break;
+        }
+    }
+    if (i == cpu->iommu_notifiers->len) {
+        /* Not found, add a new entry at the end of the array */
+        cpu->iommu_notifiers = g_array_set_size(cpu->iommu_notifiers, i + 1);
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+
+        notifier->mr = mr;
+        notifier->iommu_idx = iommu_idx;
+        notifier->cpu = cpu;
+        /* Rather than trying to register interest in the specific part
+         * of the iommu's address space that we've accessed and then
+         * expand it later as subsequent accesses touch more of it, we
+         * just register interest in the whole thing, on the assumption
+         * that iommu reconfiguration will be rare.
+         */
+        iommu_notifier_init(&notifier->n,
+                            tcg_iommu_unmap_notify,
+                            IOMMU_NOTIFIER_UNMAP,
+                            0,
+                            HWADDR_MAX,
+                            iommu_idx);
+        memory_region_register_iommu_notifier(notifier->mr, &notifier->n);
+    }
+
+    if (!notifier->active) {
+        notifier->active = true;
+    }
+}
+
+static void tcg_iommu_free_notifier_list(CPUState *cpu)
+{
+    /* Destroy the CPU's notifier list */
+    int i;
+    TCGIOMMUNotifier *notifier;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
+    }
+    g_array_free(cpu->iommu_notifiers, true);
+}
+
 /* Called from RCU critical section */
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen)
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot)
 {
     MemoryRegionSection *section;
+    IOMMUMemoryRegion *iommu_mr;
+    IOMMUMemoryRegionClass *imrc;
+    IOMMUTLBEntry iotlb;
+    int iommu_idx;
     AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
-    section = address_space_translate_internal(d, addr, xlat, plen, false);
+    for (;;) {
+        section = address_space_translate_internal(d, addr, &addr, plen, false);
+
+        iommu_mr = memory_region_get_iommu(section->mr);
+        if (!iommu_mr) {
+            break;
+        }
+
+        imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
+
+        iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
+        /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
+         * doesn't short-cut its translation table walk.
+         */
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
+        addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
+                | (addr & iotlb.addr_mask));
+        /* Update the caller's prot bits to remove permissions the IOMMU
+         * is giving us a failure response for. If we get down to no
+         * permissions left at all we can give up now.
+         */
+        if (!(iotlb.perm & IOMMU_RO)) {
+            *prot &= ~(PAGE_READ | PAGE_EXEC);
+        }
+        if (!(iotlb.perm & IOMMU_WO)) {
+            *prot &= ~PAGE_WRITE;
+        }
+
+        if (!*prot) {
+            goto translate_fail;
+        }
+
+        d = flatview_to_dispatch(address_space_to_flatview(iotlb.target_as));
+    }
 
     assert(!memory_region_is_iommu(section->mr));
+    *xlat = addr;
     return section;
+
+translate_fail:
+    return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 #endif
 
@@ -XXX,XX +XXX,XX @@ void cpu_exec_unrealizefn(CPUState *cpu)
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
     }
+#ifndef CONFIG_USER_ONLY
+    tcg_iommu_free_notifier_list(cpu);
+#endif
 }
 
 Property cpu_common_props[] = {
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
     if (cc->vmsd != NULL) {
         vmstate_register(NULL, cpu->cpu_index, cc->vmsd, cpu);
     }
+
+    cpu->iommu_notifiers = g_array_new(false, true, sizeof(TCGIOMMUNotifier));
 #endif
 }
--
2.17.1


The HCR.DC virtualization configuration register bit has the
following effects:
 * SCTLR.M behaves as if it is 0 for all purposes except
   direct reads of the bit
 * HCR.VM behaves as if it is 1 for all purposes except
   direct reads of the bit
 * the memory type produced by the first stage of the EL1&EL0
   translation regime is Normal Non-Shareable,
   Inner Write-Back Read-Allocate Write-Allocate,
   Outer Write-Back Read-Allocate Write-Allocate.

Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
          * * The Non-secure TTBCR.EAE bit is set to 1
          * * The implementation includes EL2, and the value of HCR.VM is 1
          *
+         * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
+         *
          * ATS1Hx always uses the 64bit format (not supported yet).
          */
         format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
         if (arm_feature(env, ARM_FEATURE_EL2)) {
             if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
-                format64 |= env->cp15.hcr_el2 & HCR_VM;
+                format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
             } else {
                 format64 |= arm_current_el(env) == 2;
             }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
     }
 
     if (mmu_idx == ARMMMUIdx_S2NS) {
-        return (env->cp15.hcr_el2 & HCR_VM) == 0;
+        /* HCR.DC means HCR.VM behaves as 1 */
+        return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
     }
 
     if (env->cp15.hcr_el2 & HCR_TGE) {
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }
 
+    if ((env->cp15.hcr_el2 & HCR_DC) &&
+        (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
+        /* HCR.DC means SCTLR_EL1.M behaves as 0 */
+        return true;
+    }
+
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         /* Combine the S1 and S2 cache attributes, if needed */
         if (!ret && cacheattrs != NULL) {
+            if (env->cp15.hcr_el2 & HCR_DC) {
+                /*
+                 * HCR.DC forces the first stage attributes to
+                 *  Normal Non-Shareable,
+                 *  Inner Write-Back Read-Allocate Write-Allocate,
+                 *  Outer Write-Back Read-Allocate Write-Allocate.
+                 */
+                cacheattrs->attrs = 0xff;
+                cacheattrs->shareability = 0;
+            }
             *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
         }
--
2.19.1
In subpage_read() we perform a load of the data into a local buffer
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
depending on its size, storing the result into the uint64_t *data.
Since ldl_p() returns an 'int', this means that for the 4-byte
case we will sign-extend the data, whereas for 1 and 2 byte
reads we zero-extend it.

This ought not to matter since the caller will likely ignore values in
the high bytes of the data, but add a cast so that we're consistent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-3-peter.maydell@linaro.org
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
         *data = lduw_p(buf);
         return MEMTX_OK;
     case 4:
-        *data = ldl_p(buf);
+        *data = (uint32_t)ldl_p(buf);
         return MEMTX_OK;
     case 8:
         *data = ldq_p(buf);
--
2.17.1


The A/I/F bits in ISR_EL1 should track the virtual interrupt
status, not the physical interrupt status, if the associated
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
always showing the physical interrupt status.

We don't currently implement anything to do with external
aborts, so this applies only to the I and F bits (though it
ought to be possible for the outer guest to present a virtual
external abort to the inner guest, even if QEMU doesn't
emulate physical external aborts, so there is missing
functionality in this area).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t ret = 0;
 
-    if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
-        ret |= CPSR_I;
+    if (arm_hcr_el2_imo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+            ret |= CPSR_I;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+            ret |= CPSR_I;
+        }
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
-        ret |= CPSR_F;
+
+    if (arm_hcr_el2_fmo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+            ret |= CPSR_F;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+            ret |= CPSR_F;
+        }
     }
+
     /* External aborts are not possible in QEMU so A bit is always clear */
     return ret;
 }
--
2.19.1
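(Aside: the net effect of the isr_read() change can be summarised in a sketch. Which pending flag feeds each ISR bit depends only on the corresponding HCR_EL2 routing bit; arm_hcr_el2_imo() is the helper used by this series, and the snippet assumes the env/cs variables from isr_read()'s scope.)

    /* ISR_EL1.I: VIRQ pending if IMO routes IRQs to EL2, else physical IRQ */
    bool isr_i = arm_hcr_el2_imo(env)
        ? !!(cs->interrupt_request & CPU_INTERRUPT_VIRQ)
        : !!(cs->interrupt_request & CPU_INTERRUPT_HARD);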
For the IoTKit MPC support, we need to wire together the
interrupt outputs of 17 MPCs; this exceeds the current
value of MAX_OR_LINES. Increase MAX_OR_LINES to 32 (which
should be enough for anyone).

The tricky part is retaining the migration compatibility for
existing OR gates; we add a subsection which is only used
for larger OR gates, and define it such that we can freely
increase MAX_OR_LINES in future (or even move to a dynamically
allocated levels[] array without an upper size limit) without
breaking compatibility.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-10-peter.maydell@linaro.org
---
 include/hw/or-irq.h |  5 ++++-
 hw/core/or-irq.c    | 39 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/or-irq.h
+++ b/include/hw/or-irq.h
@@ -XXX,XX +XXX,XX @@
 
 #define TYPE_OR_IRQ "or-irq"
 
-#define MAX_OR_LINES 16
+/* This can safely be increased if necessary without breaking
+ * migration compatibility (as long as it remains greater than 15).
+ */
+#define MAX_OR_LINES 32
 
 typedef struct OrIRQState qemu_or_irq;
 
diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/or-irq.c
+++ b/hw/core/or-irq.c
@@ -XXX,XX +XXX,XX @@ static void or_irq_init(Object *obj)
     qdev_init_gpio_out(DEVICE(obj), &s->out_irq, 1);
 }
 
+/* The original version of this device had a fixed 16 entries in its
+ * VMState array; devices with more inputs than this need to
+ * migrate the extra lines via a subsection.
+ * The subsection migrates as much of the levels[] array as is needed
+ * (including repeating the first 16 elements), to avoid the awkwardness
+ * of splitting it in two to meet the requirements of VMSTATE_VARRAY_UINT16.
+ */
+#define OLD_MAX_OR_LINES 16
+#if MAX_OR_LINES < OLD_MAX_OR_LINES
+#error MAX_OR_LINES must be at least 16 for migration compatibility
+#endif
+
+static bool vmstate_extras_needed(void *opaque)
+{
+    qemu_or_irq *s = OR_IRQ(opaque);
+
+    return s->num_lines >= OLD_MAX_OR_LINES;
+}
+
+static const VMStateDescription vmstate_or_irq_extras = {
+    .name = "or-irq-extras",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vmstate_extras_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_VARRAY_UINT16_UNSAFE(levels, qemu_or_irq, num_lines, 0,
+                                     vmstate_info_bool, bool),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
 static const VMStateDescription vmstate_or_irq = {
     .name = TYPE_OR_IRQ,
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
-        VMSTATE_BOOL_ARRAY(levels, qemu_or_irq, MAX_OR_LINES),
+        VMSTATE_BOOL_SUB_ARRAY(levels, qemu_or_irq, 0, OLD_MAX_OR_LINES),
         VMSTATE_END_OF_LIST(),
-    }
+    },
+    .subsections = (const VMStateDescription*[]) {
+        &vmstate_or_irq_extras,
+        NULL
+    },
 };
 
 static Property or_irq_properties[] = {
--
2.17.1


The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
 * if the register is read we must get these bit values from
   cs->interrupt_request
 * if the register is written then we must write the bit
   values back into cs->interrupt_request

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 43 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = ENV_GET_CPU(env);
     uint64_t valid_mask = HCR_MASK;
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     /* Clear RES0 bits.  */
     value &= valid_mask;
 
+    /*
+     * VI and VF are kept in cs->interrupt_request. Modifying that
+     * requires that we have the iothread lock, which is done by
+     * marking the reginfo structs as ARM_CP_IO.
+     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+     * possible for it to be taken immediately, because VIRQ and
+     * VFIQ are masked unless running at EL0 or EL1, and HCR
+     * can only be written at EL2.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+    if (value & HCR_VI) {
+        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    }
+    if (value & HCR_VF) {
+        cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
+    }
+    value &= ~(HCR_VI | HCR_VF);
+
     /* These bits change the MMU setup:
      * HCR_VM enables stage 2 translation
      * HCR_PTW forbids certain page-table setups
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
     hcr_write(env, NULL, value);
 }
 
+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* The VI and VF bits live in cs->interrupt_request */
+    uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+        ret |= HCR_VI;
+    }
+    if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+        ret |= HCR_VF;
+    }
+    return ret;
+}
+
 static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+      .type = ARM_CP_IO,
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_write },
+      .writefn = hcr_write, .readfn = hcr_read },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_writelow },
+      .writefn = hcr_writelow, .readfn = hcr_read },
     { .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
       .type = ARM_CP_ALIAS,
       .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
 
 static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
     { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
       .access = PL2_RW,
       .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
--
2.19.1
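(Aside: the intended round-trip for the VI bit, as a sketch of behaviour inside helper.c rather than a callable API, since hcr_write() and hcr_read() are file-local. Write pends the virtual IRQ; read reconstructs the bit from interrupt_request.)

    hcr_write(env, NULL, env->cp15.hcr_el2 | HCR_VI);
    assert(ENV_GET_CPU(env)->interrupt_request & CPU_INTERRUPT_VIRQ);
    assert(hcr_read(env, NULL) & HCR_VI);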
Add an IOMMU index argument to the translate method of
IOMMUs. Since all of our current IOMMU implementations
support only a single IOMMU index, this has no effect
on the behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-4-peter.maydell@linaro.org
---
 include/exec/memory.h | 3 ++-
 exec.c | 11 +++++++++--
 hw/alpha/typhoon.c | 3 ++-
 hw/arm/smmuv3.c | 2 +-
 hw/dma/rc4030.c | 2 +-
 hw/i386/amd_iommu.c | 2 +-
 hw/i386/intel_iommu.c | 2 +-
 hw/ppc/spapr_iommu.c | 3 ++-
 hw/s390x/s390-pci-bus.c | 2 +-
 hw/sparc/sun4m_iommu.c | 3 ++-
 hw/sparc64/sun4u_iommu.c | 2 +-
 memory.c | 2 +-
 12 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      * @iommu: the IOMMUMemoryRegion
      * @hwaddr: address to be translated within the memory region
      * @flag: requested access permissions
+     * @iommu_idx: IOMMU index for the translation
      */
     IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
-                               IOMMUAccessFlags flag);
+                               IOMMUAccessFlags flag, int iommu_idx);
     /* Returns minimum supported page size in bytes.
      * If this method is not provided then the minimum is assumed to
      * be TARGET_PAGE_SIZE.
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
     do {
         hwaddr addr = *xlat;
         IOMMUMemoryRegionClass *imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
-        IOMMUTLBEntry iotlb = imrc->translate(iommu_mr, addr, is_write ?
-                                              IOMMU_WO : IOMMU_RO);
+        int iommu_idx = 0;
+        IOMMUTLBEntry iotlb;
+
+        if (imrc->attrs_to_index) {
+            iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        }
+
+        iotlb = imrc->translate(iommu_mr, addr, is_write ?
+                                IOMMU_WO : IOMMU_RO, iommu_idx);
 
         if (!(iotlb.perm & (1 << is_write))) {
             goto unassigned;
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -XXX,XX +XXX,XX @@ static bool window_translate(TyphoonWindow *win, hwaddr addr,
    Pchip and generate a machine check interrupt.  */
 static IOMMUTLBEntry typhoon_translate_iommu(IOMMUMemoryRegion *iommu,
                                              hwaddr addr,
-                                             IOMMUAccessFlags flag)
+                                             IOMMUAccessFlags flag,
+                                             int iommu_idx)
 {
     TyphoonPchip *pchip = container_of(iommu, TyphoonPchip, iommu);
     IOMMUTLBEntry ret;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
 }
 
 static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
-                                      IOMMUAccessFlags flag)
+                                      IOMMUAccessFlags flag, int iommu_idx)
 {
     SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
     SMMUv3State *s = sdev->smmu;
diff --git a/hw/dma/rc4030.c b/hw/dma/rc4030.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/rc4030.c
+++ b/hw/dma/rc4030.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps jazzio_ops = {
 };
 
 static IOMMUTLBEntry rc4030_dma_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     rc4030State *s = container_of(iommu, rc4030State, dma_mr);
     IOMMUTLBEntry ret = {
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -XXX,XX +XXX,XX @@ static inline bool amdvi_is_interrupt_addr(hwaddr addr)
 }
 
 static IOMMUTLBEntry amdvi_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                     IOMMUAccessFlags flag)
+                                     IOMMUAccessFlags flag, int iommu_idx)
 {
     AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
     AMDVIState *s = as->iommu_state;
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static void vtd_mem_write(void *opaque, hwaddr addr,
 }
 
 static IOMMUTLBEntry vtd_iommu_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                         IOMMUAccessFlags flag)
+                                         IOMMUAccessFlags flag, int iommu_idx)
 {
     VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace, iommu);
     IntelIOMMUState *s = vtd_as->iommu_state;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static void spapr_tce_free_table(uint64_t *table, int fd, uint32_t nb_table)
 /* Called from RCU critical section */
 static IOMMUTLBEntry spapr_tce_translate_iommu(IOMMUMemoryRegion *iommu,
                                                hwaddr addr,
-                                               IOMMUAccessFlags flag)
+                                               IOMMUAccessFlags flag,
+                                               int iommu_idx)
 {
     sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
     uint64_t tce;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -XXX,XX +XXX,XX @@ uint16_t s390_guest_io_table_walk(uint64_t g_iota, hwaddr addr,
 }
 
 static IOMMUTLBEntry s390_translate_iommu(IOMMUMemoryRegion *mr, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     S390PCIIOMMU *iommu = container_of(mr, S390PCIIOMMU, iommu_mr);
     S390IOTLBEntry *entry;
diff --git a/hw/sparc/sun4m_iommu.c b/hw/sparc/sun4m_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc/sun4m_iommu.c
+++ b/hw/sparc/sun4m_iommu.c
@@ -XXX,XX +XXX,XX @@ static void iommu_bad_addr(IOMMUState *s, hwaddr addr,
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4m_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flags)
+                                           IOMMUAccessFlags flags,
+                                           int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr page, pa;
diff --git a/hw/sparc64/sun4u_iommu.c b/hw/sparc64/sun4u_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc64/sun4u_iommu.c
+++ b/hw/sparc64/sun4u_iommu.c
@@ -XXX,XX +XXX,XX @@
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4u_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flag)
+                                           IOMMUAccessFlags flag, int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr baseaddr, offset;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
     granularity = memory_region_iommu_get_min_page_size(iommu_mr);
 
     for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
-        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE);
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
         if (iotlb.perm != IOMMU_NONE) {
             n->notify(n, &iotlb);
         }
--
2.17.1


If the HCR_EL2 PTW virtualization configuration register bit
is set, then this means that a stage 2 Permission fault must
be generated if a stage 1 translation table access is made
to an address that is mapped as Device memory in stage 2.
Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
---
 target/arm/helper.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
+        ARMCacheAttrs cacheattrs = {};
+        ARMCacheAttrs *pcacheattrs = NULL;
+
+        if (env->cp15.hcr_el2 & HCR_PTW) {
+            /*
+             * PTW means we must fault if this S1 walk touches S2 Device
+             * memory; otherwise we don't care about the attributes and can
+             * save the S2 translation the effort of computing them.
+             */
+            pcacheattrs = &cacheattrs;
+        }
 
         ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
-                                 &txattrs, &s2prot, &s2size, fi, NULL);
+                                 &txattrs, &s2prot, &s2size, fi, pcacheattrs);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->s1ptw = true;
             return ~0;
         }
+        if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
+            /* Access was to Device memory: generate Permission fault */
+            fi->type = ARMFault_Permission;
+            fi->s2addr = addr;
+            fi->stage2 = true;
+            fi->s1ptw = true;
+            return ~0;
+        }
         addr = s2pa;
     }
     return addr;
--
2.19.1
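For anyone carrying an out-of-tree IOMMU model across the signature change
above: a translate implementation that supports only one IOMMU index can
simply ignore the new argument. A minimal identity-map sketch with
hypothetical names (not one of the devices converted here):

static IOMMUTLBEntry mydev_iommu_translate(IOMMUMemoryRegion *iommu,
                                           hwaddr addr,
                                           IOMMUAccessFlags flag,
                                           int iommu_idx)
{
    /* Single-index IOMMU: iommu_idx is always 0 here and can be ignored */
    IOMMUTLBEntry entry = {
        .target_as = &address_space_memory,
        .iova = addr & ~(hwaddr)0xfff,
        .translated_addr = addr & ~(hwaddr)0xfff,   /* identity mapping */
        .addr_mask = 0xfff,                         /* 4K pages */
        .perm = IOMMU_RW,
    };

    return entry;
}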
If an IOMMU supports mappings that care about the memory
transaction attributes, then it no longer has a unique
address -> output mapping, but more than one. We can
represent these using an IOMMU index, analogous to TCG's
mmu indexes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 55 +++++++++++++++++++++++++++++++++++++++++++
 memory.c | 23 ++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
  * to report whenever mappings are changed, by calling
  * memory_region_notify_iommu() (or, if necessary, by calling
  * memory_region_notify_one() for each registered notifier).
+ *
+ * Conceptually an IOMMU provides a mapping from input address
+ * to an output TLB entry. If the IOMMU is aware of memory transaction
+ * attributes and the output TLB entry depends on the transaction
+ * attributes, we represent this using IOMMU indexes. Each index
+ * selects a particular translation table that the IOMMU has:
+ *   @attrs_to_index returns the IOMMU index for a set of transaction attributes
+ *   @translate takes an input address and an IOMMU index
+ * and the mapping returned can only depend on the input address and the
+ * IOMMU index.
+ *
+ * Most IOMMUs don't care about the transaction attributes and support
+ * only a single IOMMU index. A more complex IOMMU might have one index
+ * for secure transactions and one for non-secure transactions.
 */
 typedef struct IOMMUMemoryRegionClass {
     /* private */
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
     */
     int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
+
+    /* Return the IOMMU index to use for a given set of transaction attributes.
+     *
+     * Optional method: if an IOMMU only supports a single IOMMU index then
+     * the default implementation of memory_region_iommu_attrs_to_index()
+     * will return 0.
+     *
+     * The indexes supported by an IOMMU must be contiguous, starting at 0.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attrs: memory transaction attributes
+     */
+    int (*attrs_to_index)(IOMMUMemoryRegion *iommu, MemTxAttrs attrs);
+
+    /* Return the number of IOMMU indexes this IOMMU supports.
+     *
+     * Optional method: if this method is not provided, then
+     * memory_region_iommu_num_indexes() will return 1, indicating that
+     * only a single IOMMU index is supported.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     */
+    int (*num_indexes)(IOMMUMemoryRegion *iommu);
 } IOMMUMemoryRegionClass;
 
 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
                                  enum IOMMUMemoryRegionAttr attr,
                                  void *data);
 
+/**
+ * memory_region_iommu_attrs_to_index: return the IOMMU index to
+ * use for translations with the given memory transaction attributes.
+ *
+ * @iommu_mr: the memory region
+ * @attrs: the memory transaction attributes
+ */
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs);
+
+/**
+ * memory_region_iommu_num_indexes: return the total number of IOMMU
+ * indexes that this IOMMU supports.
+ *
+ * @iommu_mr: the memory region
+ */
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
+
 /**
  * memory_region_name: get a memory region's name
  *
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
     return imrc->get_attr(iommu_mr, attr, data);
 }
 
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->attrs_to_index) {
+        return 0;
+    }
+
+    return imrc->attrs_to_index(iommu_mr, attrs);
+}
+
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->num_indexes) {
+        return 1;
+    }
+
+    return imrc->num_indexes(iommu_mr);
+}
+
 void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
 {
     uint8_t mask = 1 << client;
--
2.17.1


Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
---
 target/arm/internals.h | 5 +++++
 target/arm/helper.c | 4 ++--
 target/arm/kvm64.c | 2 +-
 target/arm/op_helper.c | 2 +-
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
 #define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
 #define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
 
+static inline uint32_t syn_get_ec(uint32_t syn)
+{
+    return syn >> ARM_EL_EC_SHIFT;
+}
+
 /* Utility functions for constructing various kinds of syndrome value.
  * Note that in general we follow the AArch64 syndrome values; in a
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
     uint32_t moe;
 
     /* If this is a debug exception we must update the DBGDSCR.MOE bits */
-    switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
+    switch (syn_get_ec(env->exception.syndrome)) {
     case EC_BREAKPOINT:
     case EC_BREAKPOINT_SAME_EL:
         moe = 1;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
     if (qemu_loglevel_mask(CPU_LOG_INT)
         && !excp_is_internal(cs->exception_index)) {
         qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
-                      env->exception.syndrome >> ARM_EL_EC_SHIFT,
+                      syn_get_ec(env->exception.syndrome),
                       env->exception.syndrome);
     }
 
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
 
 bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
 {
-    int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
+    int hsr_ec = syn_get_ec(debug_exit->hsr);
     ARMCPU *cpu = ARM_CPU(cs);
     CPUClass *cc = CPU_GET_CLASS(cs);
     CPUARMState *env = &cpu->env;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
      * (see DDI0478C.a D1.10.4)
      */
     target_el = 2;
-    if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+    if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
         syndrome = syn_uncategorized();
     }
 }
--
2.19.1
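The "one index for secure, one for non-secure" example from the commit
message above maps onto the new hooks like this (a sketch with invented
mydev_* names, not code from the patch):

static int mydev_iommu_attrs_to_index(IOMMUMemoryRegion *iommu,
                                      MemTxAttrs attrs)
{
    /* Indexes must be contiguous from 0: use 1 for secure transactions */
    return attrs.secure ? 1 : 0;
}

static int mydev_iommu_num_indexes(IOMMUMemoryRegion *iommu)
{
    return 2;
}

IOMMUs that don't implement the hooks keep the old behaviour, since
memory_region_iommu_attrs_to_index() falls back to index 0 and
memory_region_iommu_num_indexes() to 1.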
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious
use; add a comment documenting it (reverse-engineered from what
the code that sets it is doing).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
---
 include/exec/cpu-defs.h | 9 +++++++++
 accel/tcg/cputlb.c | 12 ++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
  * structs into one.)
  */
 typedef struct CPUIOTLBEntry {
+    /*
+     * @addr contains:
+     *  - in the lower TARGET_PAGE_BITS, a physical section number
+     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
+     *    must be added to the virtual address to obtain:
+     *     + the ram_addr_t of the target RAM (if the physical section
+     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
+     *     + the offset within the target MemoryRegion (otherwise)
+     */
     hwaddr addr;
     MemTxAttrs attrs;
 } CPUIOTLBEntry;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
         env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
 
     /* refill the tlb */
+    /*
+     * At this point iotlb contains a physical section number in the lower
+     * TARGET_PAGE_BITS, and either
+     *  + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
+     *  + the offset within section->mr of the page base (otherwise)
+     * We subtract the vaddr (which is page aligned and thus won't
+     * disturb the low bits) to give an offset which can be added to the
+     * (non-page-aligned) vaddr of the eventual memory access to get
+     * the MemoryRegion offset for the access. Note that the vaddr we
+     * subtract here is that of the page base, and not the same as the
+     * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
+     */
     env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
     env->iotlb[mmu_idx][index].attrs = attrs;
--
2.17.1


For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
 * EC == EC_UNCATEGORIZED
 * prefetch aborts
 * data aborts where ISV is 0

(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)

This deals with a corner case noted in a comment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
---
 target/arm/internals.h | 7 ++-----
 target/arm/helper.c | 13 +++++++++++++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
 /* Utility functions for constructing various kinds of syndrome value.
  * Note that in general we follow the AArch64 syndrome values; in a
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
- * mode differs slightly, so if we ever implemented Hyp mode then the
- * syndrome value would need some massaging on exception entry.
- * (One example of this is that AArch64 defaults to IL bit set for
- * exceptions which don't specifically indicate information about the
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
+ * mode differs slightly, and we fix this up when populating HSR in
+ * arm_cpu_do_interrupt_aarch32_hyp().
  */
 static inline uint32_t syn_uncategorized(void)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
     }
 
     if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            /*
+             * QEMU syndrome values are v8-style. v7 has the IL bit
+             * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
+             * If this is a v7 CPU, squash the IL bit in those cases.
+             */
+            if (cs->exception_index == EXCP_PREFETCH_ABORT ||
+                (cs->exception_index == EXCP_DATA_ABORT &&
+                 !(env->exception.syndrome & ARM_EL_ISV)) ||
+                syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
+                env->exception.syndrome &= ~ARM_EL_IL;
+            }
+        }
         env->cp15.esr_el[2] = env->exception.syndrome;
     }
 
--
2.19.1
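The encoding that the new CPUIOTLBEntry comment describes can be checked
with a little standalone arithmetic. This sketch fixes TARGET_PAGE_BITS at
12 and uses invented values; it round-trips what tlb_set_page_with_attrs()
stores and what io_readx()/io_writex() later recover:

#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(uint64_t)((1u << TARGET_PAGE_BITS) - 1))

int main(void)
{
    uint64_t section = 0x3;          /* physical section number */
    uint64_t mr_base = 0x40000000;   /* page's offset within the region */
    uint64_t vaddr_page = 0x7f0000;  /* page-aligned guest virtual address */

    /* What gets stored: section number in the low bits, plus an offset
     * with the page-aligned vaddr already subtracted out.  Both values
     * are page aligned, so the subtraction leaves the low bits alone.
     */
    uint64_t iotlb_addr = (mr_base - vaddr_page) | section;

    /* What io_readx()/io_writex() recover for an access at page + 0x123: */
    uint64_t vaddr = vaddr_page + 0x123;
    uint64_t mr_offset = (iotlb_addr & TARGET_PAGE_MASK) + vaddr;

    assert(mr_offset == mr_base + 0x123);
    assert((iotlb_addr & ~TARGET_PAGE_MASK) == section);
    return 0;
}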
The stellaris board is still using the legacy armv7m_init() function,
which predates conversion of the ARMv7M into a proper QOM container
object. Make the board code directly create the ARMv7M object instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-2-peter.maydell@linaro.org
---
 hw/arm/stellaris.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/log.h"
 #include "exec/address-spaces.h"
 #include "sysemu/sysemu.h"
+#include "hw/arm/armv7m.h"
 #include "hw/char/pl011.h"
 #include "hw/misc/unimp.h"
 #include "cpu.h"
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
                            &error_fatal);
     memory_region_add_subregion(system_memory, 0x20000000, sram);
 
-    nvic = armv7m_init(system_memory, flash_size, NUM_IRQ_LINES,
-                       ms->kernel_filename, ms->cpu_type);
+    nvic = qdev_create(NULL, TYPE_ARMV7M);
+    qdev_prop_set_uint32(nvic, "num-irq", NUM_IRQ_LINES);
+    qdev_prop_set_string(nvic, "cpu-type", ms->cpu_type);
+    object_property_set_link(OBJECT(nvic), OBJECT(get_system_memory()),
+                             "memory", &error_abort);
+    /* This will exit with an error if the user passed us a bad cpu_type */
+    qdev_init_nofail(nvic);
 
     qdev_connect_gpio_out_named(nvic, "SYSRESETREQ", 0,
                                 qemu_allocate_irq(&do_sys_reset, NULL, 0));
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
     create_unimplemented_device("analogue-comparator", 0x4003c000, 0x1000);
     create_unimplemented_device("hibernation", 0x400fc000, 0x1000);
     create_unimplemented_device("flash-control", 0x400fd000, 0x1000);
+
+    armv7m_load_kernel(ARM_CPU(first_cpu), ms->kernel_filename, flash_size);
 }
 
 /* FIXME: Figure out how to generate these from stellaris_boards. */
--
2.17.1


For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
---
 target/arm/internals.h | 14 +++++++++++++-
 target/arm/helper.c | 9 +++++++++
 target/arm/translate.c | 8 ++++----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
  * mode differs slightly, and we fix this up when populating HSR in
  * arm_cpu_do_interrupt_aarch32_hyp().
+ * The exception is FP/SIMD access traps -- these report extra information
+ * when taking an exception to AArch32. For those we include the extra coproc
+ * and TA fields, and mask them out when taking the exception to AArch64.
  */
 static inline uint32_t syn_uncategorized(void)
 {
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
 
 static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
 {
+    /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
     return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
         | (is_16bit ? 0 : ARM_EL_IL)
-        | (cv << 24) | (cond << 20);
+        | (cv << 24) | (cond << 20) | 0xa;
+}
+
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
+{
+    /* AArch32 SIMD trap: TA == 1 coproc == 0 */
+    return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
+        | (is_16bit ? 0 : ARM_EL_IL)
+        | (cv << 24) | (cond << 20) | (1 << 5);
 }
 
 static inline uint32_t syn_sve_access_trap(void)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_HVC:
     case EXCP_HYP_TRAP:
     case EXCP_SMC:
+        if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
+            /*
+             * QEMU internal FP/SIMD syndromes from AArch32 include the
+             * TA and coproc fields which are only exposed if the exception
+             * is taken to AArch32 Hyp mode. Mask them out to get a valid
+             * AArch64 format syndrome.
+             */
+            env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
+        }
         env->cp15.esr_el[new_el] = env->exception.syndrome;
         break;
     case EXCP_IRQ:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
      */
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
 
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
      */
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
 
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
     if (!s->vfp_enabled) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
 
     if (s->fp_excp_el) {
         gen_exception_insn(s, 4, EXCP_UDEF,
-                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
         return 0;
     }
     if (!s->vfp_enabled) {
--
2.19.1
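The new syndrome encodings are easy to sanity-check outside QEMU. This
standalone program restates the two helpers from the patch and the
AArch64 masking step (constants copied from target/arm/internals.h; the
printed values are only illustrative):

#include <stdint.h>
#include <stdio.h>

#define ARM_EL_EC_SHIFT        26
#define ARM_EL_IL              (1u << 25)
#define EC_ADVSIMDFPACCESSTRAP 0x07

static uint32_t syn_fp_access_trap(int cv, int cond, int is_16bit)
{
    /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0, coproc == 0xa */
    return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
        | (is_16bit ? 0 : ARM_EL_IL)
        | (cv << 24) | (cond << 20) | 0xa;
}

static uint32_t syn_simd_access_trap(int cv, int cond, int is_16bit)
{
    /* AArch32 SIMD trap: TA == 1, coproc == 0 */
    return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
        | (is_16bit ? 0 : ARM_EL_IL)
        | (cv << 24) | (cond << 20) | (1 << 5);
}

int main(void)
{
    uint32_t hsr = syn_simd_access_trap(1, 0xe, 0);
    /* Taking the trap to AArch64 masks out the AArch32-only fields: */
    uint32_t esr = hsr & ~((1u << 20) - 1);

    printf("HSR = 0x%08x, ESR_EL2 = 0x%08x\n", hsr, esr);
    return 0;
}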
From: Joel Stanley <joel@jms.id.au>

The ASPEED SoCs contain a single register that returns random data when
read. This models that register so that guests can use it.

The random number data register has a corresponding control register,
however it returns data regardless of the state of the enabled bit, so
the model follows this behaviour.

When the qcrypto call fails we exit as the guest uses the random number
device to feed its entropy pool, which is used for cryptographic
purposes.

Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180613114836.9265-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/visitor.h"
 #include "qemu/bitops.h"
 #include "qemu/log.h"
+#include "crypto/random.h"
 #include "trace.h"
 
 #define TO_REG(offset) ((offset) >> 2)
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2500_a1_resets[ASPEED_SCU_NR_REGS] = {
     [BMC_DEV_ID] = 0x00002402U
 };
 
+static uint32_t aspeed_scu_get_random(void)
+{
+    Error *err = NULL;
+    uint32_t num;
+
+    if (qcrypto_random_bytes((uint8_t *)&num, sizeof(num), &err)) {
+        error_report_err(err);
+        exit(1);
+    }
+
+    return num;
+}
+
 static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
 {
     AspeedSCUState *s = ASPEED_SCU(opaque);
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
     }
 
     switch (reg) {
+    case RNG_DATA:
+        /* On hardware, RNG_DATA works regardless of
+         * the state of the enable bit in RNG_CTRL
+         */
+        s->regs[RNG_DATA] = aspeed_scu_get_random();
+        break;
     case WAKEUP_EN:
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: Read of write-only offset 0x%" HWADDR_PRIx "\n",
--
2.17.1


From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>

"The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there."

For the virt board, we write our startup bootloader at the very
bottom of RAM, so that bit can't be used for the image. To avoid
overlap in case the image requests to be loaded at an offset
smaller than our bootloader, we increment the load offset to the
next 2MB.

This fixes a boot failure for Xen AArch64.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
[PMM: Rephrased a comment a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/config-file.h"
 #include "qemu/option.h"
 #include "exec/address-spaces.h"
+#include "qemu/units.h"
 
 /* Kernel boot protocol is specified in the kernel docs
  * Documentation/arm/Booting and Documentation/arm64/booting.txt
@@ -XXX,XX +XXX,XX @@
 #define ARM64_TEXT_OFFSET_OFFSET 8
 #define ARM64_MAGIC_OFFSET 56
 
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
+
 AddressSpace *arm_boot_address_space(ARMCPU *cpu,
                                      const struct arm_boot_info *info)
 {
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
         code[i] = tswap32(insn);
     }
 
+    assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
+
     rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
 
     g_free(code);
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
     memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
     if (hdrvals[1] != 0) {
         kernel_load_offset = le64_to_cpu(hdrvals[0]);
+
+        /*
+         * We write our startup "bootloader" at the very bottom of RAM,
+         * so that bit can't be used for the image. Luckily the Image
+         * format specification is that the image requests only an offset
+         * from a 2MB boundary, not an absolute load address. So if the
+         * image requests an offset that might mean it overlaps with the
+         * bootloader, we can just load it starting at 2MB+offset rather
+         * than 0MB + offset.
+         */
+        if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
+            kernel_load_offset += 2 * MiB;
+        }
     }
 }
 
--
2.19.1
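The offset adjustment above is simple enough to restate as a standalone
check (constants as in the patch; the example text_offset of 0x80 is
invented):

#include <assert.h>
#include <stdint.h>

#define KiB                 1024
#define MiB                 (1024 * 1024)
#define BOOTLOADER_MAX_SIZE (4 * KiB)

int main(void)
{
    /* An Image asking to be text_offset bytes from a 2MB-aligned base.
     * If that would land on our bootloader at the bottom of RAM, bump
     * the load point to the next 2MB boundary; the boot protocol only
     * promises "2MB aligned base + text_offset", not an absolute address.
     */
    uint64_t kernel_load_offset = 0x80;

    if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
        kernel_load_offset += 2 * MiB;
    }

    assert(kernel_load_offset == 2 * MiB + 0x80);
    return 0;
}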
The API for cpu_transaction_failed() says that it takes the physical
address for the failed transaction. However we were actually passing
it the offset within the target MemoryRegion. We don't currently
have any target CPU implementations of this hook that require the
physical address; fix this bug so we don't get confused if we ever
do add one.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
---
 include/exec/exec-all.h | 13 ++++++++++--
 accel/tcg/cputlb.c | 44 +++++++++++++++++++++++++++++------------
 exec.c | 5 +++--
 3 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tb_lock_reset(void);
 
 #if !defined(CONFIG_USER_ONLY)
 
-struct MemoryRegion *iotlb_to_region(CPUState *cpu,
-                                     hwaddr index, MemTxAttrs attrs);
+/**
+ * iotlb_to_section:
+ * @cpu: CPU performing the access
+ * @index: TCG CPU IOTLB entry
+ *
+ * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
+ * it refers to. @index will have been initially created and returned
+ * by memory_region_section_get_iotlb().
+ */
+struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
+                                             hwaddr index, MemTxAttrs attrs);
 
 void tlb_fill(CPUState *cpu, target_ulong addr, int size,
               MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          target_ulong addr, uintptr_t retaddr, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
-    hwaddr physaddr = iotlbentry->addr;
-    MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
+    hwaddr mr_offset;
+    MemoryRegionSection *section;
+    MemoryRegion *mr;
     uint64_t val;
     bool locked = false;
     MemTxResult r;
 
-    physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, physaddr,
+    r = memory_region_dispatch_read(mr, mr_offset,
                                     &val, size, iotlbentry->attrs);
     if (r != MEMTX_OK) {
+        hwaddr physaddr = mr_offset +
+            section->offset_within_address_space -
+            section->offset_within_region;
+
         cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       uintptr_t retaddr, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
-    hwaddr physaddr = iotlbentry->addr;
-    MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
+    hwaddr mr_offset;
+    MemoryRegionSection *section;
+    MemoryRegion *mr;
     bool locked = false;
     MemTxResult r;
 
-    physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, physaddr,
+    r = memory_region_dispatch_write(mr, mr_offset,
                                      val, size, iotlbentry->attrs);
     if (r != MEMTX_OK) {
+        hwaddr physaddr = mr_offset +
+            section->offset_within_address_space -
+            section->offset_within_region;
+
         cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
  */
 tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
 {
-    int mmu_idx, index, pd;
+    int mmu_idx, index;
     void *p;
     MemoryRegion *mr;
+    MemoryRegionSection *section;
     CPUState *cpu = ENV_GET_CPU(env);
     CPUIOTLBEntry *iotlbentry;
-    hwaddr physaddr;
+    hwaddr physaddr, mr_offset;
 
     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         }
     }
     iotlbentry = &env->iotlb[mmu_idx][index];
-    pd = iotlbentry->addr & ~TARGET_PAGE_MASK;
-    mr = iotlb_to_region(cpu, pd, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
     if (memory_region_is_unassigned(mr)) {
         qemu_mutex_lock_iothread();
         if (memory_region_request_mmio_ptr(mr, addr)) {
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
      * and use the MemTXResult it produced). However it is the
      * simplest place we have currently available for the check.
      */
-    physaddr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    physaddr = mr_offset +
+        section->offset_within_address_space -
+        section->offset_within_region;
     cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
                            iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
 
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps readonly_mem_ops = {
     },
 };
 
-MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index, MemTxAttrs attrs)
+MemoryRegionSection *iotlb_to_section(CPUState *cpu,
+                                      hwaddr index, MemTxAttrs attrs)
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
     AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
     MemoryRegionSection *sections = d->map.sections;
 
-    return sections[index & ~TARGET_PAGE_MASK].mr;
+    return &sections[index & ~TARGET_PAGE_MASK];
 }
 
 static void io_mem_init(void)
--
2.17.1


From: Richard Henderson <rth@twiddle.net>

This can reduce the number of opcodes required for certain
complex forms of load-multiple (e.g. ld4.16b).

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_store = !extract32(insn, 22, 1);
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
 
     int ebytes = 1 << size;
     int elements = (is_q ? 128 : 64) / (8 << size);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);
 
     for (r = 0; r < rpt; r++) {
         int e;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
                     clear_vec_high(s, is_q, tt);
                 }
             }
-            tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+            tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
             tt = (tt + 1) % 32;
         }
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
         tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
         }
     }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }
 
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     bool replicate = false;
     int index = is_q << 3 | S << 2 | size;
     int ebytes, xs;
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
 
     switch (scale) {
     case 3:
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);
 
     for (xs = 0; xs < selem; xs++) {
         if (replicate) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
             do_vec_st(s, rt, index, tcg_addr, scale);
         }
         }
-        tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
         rt = (rt + 1) % 32;
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
         tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
     }
     }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }
--
2.19.1
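The distinction this fix restores, between a MemoryRegion offset and the
physical (bus) address handed to cpu_transaction_failed(), can be
illustrated with made-up numbers:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* A section mapped at 0x10001000 in the address space, covering the
     * part of its MemoryRegion that starts 0x1000 into the region:
     */
    uint64_t offset_within_address_space = 0x10001000;
    uint64_t offset_within_region = 0x1000;

    uint64_t mr_offset = 0x2034;   /* offset of the access within the MR */
    uint64_t physaddr = mr_offset
                        + offset_within_address_space
                        - offset_within_region;

    /* The hook now sees the bus address, not the MemoryRegion offset */
    assert(physaddr == 0x10002034);
    return 0;
}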
Convert the pckbd device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant of the i8042, which is used by the Unicore32 'puv3'
board and the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-6-peter.maydell@linaro.org
---
 hw/input/pckbd.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pckbd.c
+++ b/hw/input/pckbd.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kbd = {
 };
 
 /* Memory mapped interface */
-static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
+static uint64_t kbd_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
         return kbd_read_data(s, 0, 1) & 0xff;
 }
 
-static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
+static void kbd_mm_writefn(void *opaque, hwaddr addr,
+                           uint64_t value, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
         kbd_write_data(s, 0, value & 0xff, 1);
 }
 
+
 static const MemoryRegionOps i8042_mmio_ops = {
+    .read = kbd_mm_readfn,
+    .write = kbd_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
-    .old_mmio = {
-        .read = { kbd_mm_readb, kbd_mm_readb, kbd_mm_readb },
-        .write = { kbd_mm_writeb, kbd_mm_writeb, kbd_mm_writeb },
-    },
 };
 
 void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
--
2.17.1


From: Richard Henderson <richard.henderson@linaro.org>

This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 -
 target/arm/translate.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 
 static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
 {
-    tcg_clear_temp_count();
 }
 
 static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
         tcg_gen_movi_i32(tmp, 0);
         store_cpu_field(tmp, condexec_bits);
     }
-    tcg_clear_temp_count();
 }
 
 static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
--
2.19.1
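The pckbd change follows the usual recipe for retiring old_mmio: one
size-aware read handler and one write handler, with .valid constraining
the access sizes the core will present. For a hypothetical device the
converted shape looks like this (sketch only; the mydev_* names and
register accessors are invented):

static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
{
    MyDevState *s = opaque;

    /* .valid below guarantees 1 <= size <= 4; one handler serves all */
    return mydev_read_reg(s, addr) & ((1ULL << (size * 8)) - 1);
}

static void mydev_write(void *opaque, hwaddr addr,
                        uint64_t value, unsigned size)
{
    MyDevState *s = opaque;

    mydev_write_reg(s, addr, value);
}

static const MemoryRegionOps mydev_ops = {
    .read = mydev_read,
    .write = mydev_write,
    .valid.min_access_size = 1,
    .valid.max_access_size = 4,
    .endianness = DEVICE_NATIVE_ENDIAN,
};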
Now we have stn_p() and ldn_p() we can use them in various
functions in exec.c that used to have their own switch-on-size code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-4-peter.maydell@linaro.org
---
 exec.c | 112 +++++----------------------------------------------------
 1 file changed, 8 insertions(+), 104 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
     memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
                          ram_addr, size);
 
-    switch (size) {
-    case 1:
-        stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 2:
-        stw_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 4:
-        stl_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 8:
-        stq_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    default:
-        abort();
-    }
+    stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
     memory_notdirty_write_complete(&ndi);
 }
 
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
     if (res) {
         return res;
     }
-    switch (len) {
-    case 1:
-        *data = ldub_p(buf);
-        return MEMTX_OK;
-    case 2:
-        *data = lduw_p(buf);
-        return MEMTX_OK;
-    case 4:
-        *data = (uint32_t)ldl_p(buf);
-        return MEMTX_OK;
-    case 8:
-        *data = ldq_p(buf);
-        return MEMTX_OK;
-    default:
-        abort();
-    }
+    *data = ldn_p(buf, len);
+    return MEMTX_OK;
 }
 
 static MemTxResult subpage_write(void *opaque, hwaddr addr,
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
            " value %"PRIx64"\n",
            __func__, subpage, len, addr, value);
 #endif
-    switch (len) {
-    case 1:
-        stb_p(buf, value);
-        break;
-    case 2:
-        stw_p(buf, value);
-        break;
-    case 4:
-        stl_p(buf, value);
-        break;
-    case 8:
-        stq_p(buf, value);
-        break;
-    default:
-        abort();
-    }
+    stn_p(buf, len, value);
     return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
 }
 
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             l = memory_access_size(mr, l, addr1);
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
-            switch (l) {
-            case 8:
-                /* 64 bit write access */
-                val = ldq_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 8,
-                                                       attrs);
-                break;
-            case 4:
-                /* 32 bit write access */
-                val = (uint32_t)ldl_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 4,
-                                                       attrs);
-                break;
-            case 2:
-                /* 16 bit write access */
-                val = lduw_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 2,
-                                                       attrs);
-                break;
-            case 1:
-                /* 8 bit write access */
-                val = ldub_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 1,
-                                                       attrs);
-                break;
-            default:
-                abort();
-            }
+            val = ldn_p(buf, l);
+            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            switch (l) {
-            case 8:
-                /* 64 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 8,
-                                                      attrs);
-                stq_p(buf, val);
-                break;
-            case 4:
-                /* 32 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 4,
-                                                      attrs);
-                stl_p(buf, val);
-                break;
-            case 2:
-                /* 16 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 2,
-                                                      attrs);
-                stw_p(buf, val);
-                break;
-            case 1:
-                /* 8 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 1,
-                                                      attrs);
-                stb_p(buf, val);
-                break;
-            default:
-                abort();
-            }
+            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            stn_p(buf, l, val);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
--
2.17.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 28 +++-------------------------
 1 file changed, 3 insertions(+), 25 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     for (xs = 0; xs < selem; xs++) {
         if (replicate) {
             /* Load and replicate to all elements */
-            uint64_t mulconst;
             TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
             tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
                                 get_mem_index(s), s->be_data + scale);
-            switch (scale) {
-            case 0:
-                mulconst = 0x0101010101010101ULL;
-                break;
-            case 1:
-                mulconst = 0x0001000100010001ULL;
-                break;
-            case 2:
-                mulconst = 0x0000000100000001ULL;
-                break;
-            case 3:
-                mulconst = 0;
-                break;
-            default:
-                g_assert_not_reached();
-            }
-            if (mulconst) {
-                tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
-            }
-            write_vec_element(s, tcg_tmp, rt, 0, MO_64);
-            if (is_q) {
-                write_vec_element(s, tcg_tmp, rt, 1, MO_64);
-            }
+            tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
+                                 (is_q + 1) * 8, vec_full_reg_size(s),
+                                 tcg_tmp);
             tcg_temp_free_i64(tcg_tmp);
-            clear_vec_high(s, is_q, rt);
         } else {
             /* Load/store one element per register */
--
2.19.1
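For reference, the calling convention of the helpers the exec.c patch
switches to: the size is an ordinary runtime value (1, 2, 4 or 8 bytes),
so one call replaces each switch. A sketch of equivalent before/after
forms, with placeholder buf/val/len values (assuming the stn_p()/ldn_p()
signatures introduced earlier in this series):

uint8_t buf[8];
uint64_t val = 0xdeadbeef;
int len = 4;

/* before: */
switch (len) {
case 1: stb_p(buf, val); break;
case 2: stw_p(buf, val); break;
case 4: stl_p(buf, val); break;
case 8: stq_p(buf, val); break;
default: abort();
}

/* after: */
stn_p(buf, len, val);       /* store 'len' bytes of val at buf */
val = ldn_p(buf, len);      /* load 'len' bytes back */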
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For a sequence of loads or stores from a single register,
4
little-endian operations can be promoted to an 8-byte op.
5
This can reduce the number of operations by a factor of 8.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-10-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 2 ++
12
target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
9
target/arm/sve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
13
1 file changed, 40 insertions(+), 26 deletions(-)
10
target/arm/translate-sve.c | 13 +++++++++++++
11
target/arm/sve.decode | 3 +++
12
4 files changed, 55 insertions(+)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
 
     return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
 }
+
+void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
+{
+    intptr_t opr_sz = simd_oprsz(desc) / 8;
+    int esz = simd_data(desc);
+    uint64_t pg, first_g, last_g, len, mask = pred_esz_masks[esz];
+    intptr_t i, first_i, last_i;
+    ARMVectorReg tmp;
+
+    first_i = last_i = 0;
+    first_g = last_g = 0;
+
+    /* Find the extent of the active elements within VG. */
+    for (i = QEMU_ALIGN_UP(opr_sz, 8) - 8; i >= 0; i -= 8) {
+        pg = *(uint64_t *)(vg + i) & mask;
+        if (pg) {
+            if (last_g == 0) {
+                last_g = pg;
+                last_i = i;
+            }
+            first_g = pg;
+            first_i = i;
+        }
+    }
+
+    len = 0;
+    if (first_g != 0) {
+        first_i = first_i * 8 + ctz64(first_g);
+        last_i = last_i * 8 + 63 - clz64(last_g);
+        len = last_i - first_i + (1 << esz);
+        if (vd == vm) {
+            vm = memcpy(&tmp, vm, opr_sz * 8);
+        }
+        swap_memmove(vd, vn + first_i, len);
+    }
+    swap_memmove(vd + len, vm, opr_sz * 8 - len);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_zpz_ool(s, a, fns[a->esz]);
 }
 
+static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           pred_full_reg_offset(s, a->pg),
+                           vsz, vsz, a->esz, gen_helper_sve_splice);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
 REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
 RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
 
+# SVE vector splice (predicated)
+SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1
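A standalone model of the splice semantics in sve_splice above may help review: locate the extent of the active predicate elements, copy that whole slice of the first operand (inactive holes included, as swap_memmove does), then fill the remainder from the low elements of the second operand. This is a simplified sketch, not the QEMU helper: byte-sized elements, one predicate byte per element, and at most 256 elements.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Simplified SPLICE: d = active slice of n, then low elements of m. */
static void splice_bytes(uint8_t *d, const uint8_t *n, const uint8_t *m,
                         const uint8_t *pg, size_t len)
{
    uint8_t tmp[256];           /* scratch; assumes len <= 256 */
    size_t first = len, last = 0, out = 0, i;

    /* Find the extent of the active elements. */
    for (i = 0; i < len; i++) {
        if (pg[i]) {
            if (first == len) {
                first = i;
            }
            last = i;
        }
    }
    /* Copy the whole extent, including inactive positions within it. */
    if (first != len) {
        for (i = first; i <= last; i++) {
            tmp[out++] = n[i];
        }
    }
    /* Fill the rest from the low elements of m. */
    for (i = 0; out < len; i++) {
        tmp[out++] = m[i];
    }
    memcpy(d, tmp, len);
}

int main(void)
{
    uint8_t n[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t m[8] = {90, 91, 92, 93, 94, 95, 96, 97};
    uint8_t pg[8] = {0, 0, 1, 1, 1, 0, 0, 0};
    uint8_t d[8];
    int i;

    splice_bytes(d, n, m, pg, 8);
    for (i = 0; i < 8; i++) {
        printf("%d ", d[i]);    /* prints: 3 4 5 90 91 92 93 94 */
    }
    printf("\n");
    return 0;
}

The real helper performs the extent search on 64-bit predicate words with ctz64/clz64 rather than a per-element loop, and copies into a temporary only when destination and source overlap.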
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
 
 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size)
+                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
 {
-    TCGMemOp memop = s->be_data + size;
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
     read_vec_element(s, tcg_tmp, srcidx, element, size);
-    tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
+    tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
 
     tcg_temp_free_i64(tcg_tmp);
 }
 
 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size)
+                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
 {
-    TCGMemOp memop = s->be_data + size;
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();
 
-    tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
+    tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
     write_vec_element(s, tcg_tmp, destidx, element, size);
 
     tcg_temp_free_i64(tcg_tmp);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
+    TCGMemOp endian = s->be_data;
 
-    int ebytes = 1 << size;
-    int elements = (is_q ? 128 : 64) / (8 << size);
+    int ebytes;     /* bytes per element */
+    int elements;   /* elements per vector */
     int rpt;    /* num iterations */
     int selem;  /* structure elements */
     int r;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
         gen_check_sp_alignment(s);
     }
 
+    /* For our purposes, bytes are always little-endian. */
+    if (size == 0) {
+        endian = MO_LE;
+    }
+
+    /* Consecutive little-endian elements from a single register
+     * can be promoted to a larger little-endian operation.
+     */
+    if (selem == 1 && endian == MO_LE) {
+        size = 3;
+    }
+    ebytes = 1 << size;
+    elements = (is_q ? 16 : 8) / ebytes;
+
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     for (r = 0; r < rpt; r++) {
         int e;
         for (e = 0; e < elements; e++) {
-            int tt = (rt + r) % 32;
             int xs;
             for (xs = 0; xs < selem; xs++) {
+                int tt = (rt + r + xs) % 32;
                 if (is_store) {
-                    do_vec_st(s, tt, e, tcg_addr, size);
+                    do_vec_st(s, tt, e, tcg_addr, size, endian);
                 } else {
-                    do_vec_ld(s, tt, e, tcg_addr, size);
-
-                    /* For non-quad operations, setting a slice of the low
-                     * 64 bits of the register clears the high 64 bits (in
-                     * the ARM ARM pseudocode this is implicit in the fact
-                     * that 'rval' is a 64 bit wide variable).
-                     * For quad operations, we might still need to zero the
-                     * high bits of SVE.  We optimize by noticing that we only
-                     * need to do this the first time we touch a register.
-                     */
-                    if (e == 0 && (r == 0 || xs == selem - 1)) {
-                        clear_vec_high(s, is_q, tt);
-                    }
+                    do_vec_ld(s, tt, e, tcg_addr, size, endian);
                 }
                 tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
-                tt = (tt + 1) % 32;
             }
         }
     }
 
+    if (!is_store) {
+        /* For non-quad operations, setting a slice of the low
+         * 64 bits of the register clears the high 64 bits (in
+         * the ARM ARM pseudocode this is implicit in the fact
+         * that 'rval' is a 64 bit wide variable).
+         * For quad operations, we might still need to zero the
+         * high bits of SVE.
+         */
+        for (r = 0; r < rpt * selem; r++) {
+            int tt = (rt + r) % 32;
+            clear_vec_high(s, is_q, tt);
+        }
+    }
+
     if (is_postidx) {
         int rm = extract32(insn, 16, 5);
         if (rm == 31) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     } else {
         /* Load/store one element per register */
         if (is_load) {
-            do_vec_ld(s, rt, index, tcg_addr, scale);
+            do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
         } else {
-            do_vec_st(s, rt, index, tcg_addr, scale);
+            do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
        }
     }
     tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
--
2.19.1
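The "promote to a larger little-endian operation" trick above works because byte elements have no endianness: storing eight consecutive byte elements leaves the same memory image as one 64-bit little-endian store of the packed value. A small self-contained check of that claim, with illustrative values only (this is not QEMU code):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint8_t elems[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};
    uint8_t one_at_a_time[8], promoted[8];
    uint64_t packed = 0;
    int i;

    /* Element-by-element: element 0 at the lowest address. */
    for (i = 0; i < 8; i++) {
        one_at_a_time[i] = elems[i];
    }

    /* Promoted: pack into one 64-bit value little-endian-wise, then
     * emit its bytes lowest-first.  Explicit shifts keep the demo
     * independent of the host's own endianness. */
    for (i = 0; i < 8; i++) {
        packed |= (uint64_t)elems[i] << (8 * i);
    }
    for (i = 0; i < 8; i++) {
        promoted[i] = (uint8_t)(packed >> (8 * i));
    }

    printf("%s\n", memcmp(one_at_a_time, promoted, 8) == 0 ? "match" : "differ");
    return 0;
}

Hence the patch can set size = 3 whenever selem == 1 and the effective element endianness is MO_LE, cutting the number of guest memory operations by up to 8x.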
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   2 +
 target/arm/sve_helper.c    |  14 ++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++++
 4 files changed, 176 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
         return do_zero(vd, oprsz);
     }
 }
+
+uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t t = n[i] & g[i] & mask;
+        sum += ctpop64(t);
+    }
+    return sum;
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"
 
+typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
+                         TCGv_i64, uint32_t, uint32_t);
+
 typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
@@ -XXX,XX +XXX,XX @@ static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
     return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
 }
 
+/*
+ *** SVE Predicate Count Group
+ */
+
+static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
+{
+    unsigned psz = pred_full_reg_size(s);
+
+    if (psz <= 8) {
+        uint64_t psz_mask;
+
+        tcg_gen_ld_i64(val, cpu_env, pred_full_reg_offset(s, pn));
+        if (pn != pg) {
+            TCGv_i64 g = tcg_temp_new_i64();
+            tcg_gen_ld_i64(g, cpu_env, pred_full_reg_offset(s, pg));
+            tcg_gen_and_i64(val, val, g);
+            tcg_temp_free_i64(g);
+        }
+
+        /* Reduce the pred_esz_masks value simply to reduce the
+         * size of the code generated here.
+         */
+        psz_mask = MAKE_64BIT_MASK(0, psz * 8);
+        tcg_gen_andi_i64(val, val, pred_esz_masks[esz] & psz_mask);
+
+        tcg_gen_ctpop_i64(val, val);
+    } else {
+        TCGv_ptr t_pn = tcg_temp_new_ptr();
+        TCGv_ptr t_pg = tcg_temp_new_ptr();
+        unsigned desc;
+        TCGv_i32 t_desc;
+
+        desc = psz - 2;
+        desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+        tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
+        tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+        t_desc = tcg_const_i32(desc);
+
+        gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
+        tcg_temp_free_ptr(t_pn);
+        tcg_temp_free_ptr(t_pg);
+        tcg_temp_free_i32(t_desc);
+    }
+}
+
+static bool trans_CNTP(DisasContext *s, arg_CNTP *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cntp(s, cpu_reg(s, a->rd), a->esz, a->rn, a->pg);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_r(DisasContext *s, arg_incdec_pred *a,
+                            uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        if (a->d) {
+            tcg_gen_sub_i64(reg, reg, val);
+        } else {
+            tcg_gen_add_i64(reg, reg, val);
+        }
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                            uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 val = tcg_temp_new_i64();
+        GVecGen2sFn *gvec_fn = a->d ? tcg_gen_gvec_subs : tcg_gen_gvec_adds;
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
+                vec_full_reg_offset(s, a->rn), val, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_32(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_32(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_64(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_64(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                             uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_temp_new_i64();
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &ptrue rd esz pat s
 &incdec_cnt rd pat esz imm d u
 &incdec2_cnt rd rn pat esz imm d u
+&incdec_pred rd pg esz d u
+&incdec2_pred rd rn pg esz d u
 
 ###########################################################################
 # Named instruction formats. These are generally used to
@@ -XXX,XX +XXX,XX @@
 
 # One register operand, with governing predicate, vector element size
 @rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
+@rd_pg4_pn ........ esz:2 ... ... .. pg:4 . rn:4 rd:5 &rpr_esz
 
 # Two register operands with a 6-bit signed immediate.
 @rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
@@ -XXX,XX +XXX,XX @@
 @incdec2_cnt ........ esz:2 .. .... ...... pat:5 rd:5 \
              &incdec2_cnt imm=%imm4_16_p1 rn=%reg_movprfx
 
+# One register, predicate.
+# User must fill in U and D.
+@incdec_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 &incdec_pred
+@incdec2_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 \
+              &incdec2_pred rn=%reg_movprfx
+
 ###########################################################################
 # Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
 
@@ -XXX,XX +XXX,XX @@ BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
 # SVE propagate break to next partition
 BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s
 
+### SVE Predicate Count Group
+
+# SVE predicate count
+CNTP 00100101 .. 100 000 10 .... 0 .... ..... @rd_pg4_pn
+
+# SVE inc/dec register by predicate count
+INCDECP_r 00100101 .. 10110 d:1 10001 00 .... ..... @incdec_pred u=1
+
+# SVE inc/dec vector by predicate count
+INCDECP_z 00100101 .. 10110 d:1 10000 00 .... ..... @incdec2_pred u=1
+
+# SVE saturating inc/dec register by predicate count
+SINCDECP_r_32 00100101 .. 1010 d:1 u:1 10001 00 .... ..... @incdec_pred
+SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
+
+# SVE saturating inc/dec vector by predicate count
+SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1
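The counting loop in sve_cntp above is a masked population count over the predicate words. A self-contained sketch of the same idea, using the GCC/Clang builtin __builtin_popcountll in place of QEMU's ctpop64; the mask value is the one SVE uses for 32-bit elements (one predicate bit per byte, so every fourth bit participates):

#include <stdio.h>
#include <stdint.h>

static uint64_t count_active(const uint64_t *pn, const uint64_t *pg,
                             size_t words, uint64_t esz_mask)
{
    uint64_t sum = 0;
    size_t i;

    for (i = 0; i < words; i++) {
        /* operand AND governing predicate AND element-size mask */
        sum += __builtin_popcountll(pn[i] & pg[i] & esz_mask);
    }
    return sum;
}

int main(void)
{
    uint64_t pn[1] = { 0x1111111111111111ull }; /* all 32-bit elements set */
    uint64_t pg[1] = { 0x0000000000001111ull }; /* only the low 4 governed */

    printf("%llu\n",                            /* prints 4 */
           (unsigned long long)count_active(pn, pg, 1,
                                            0x1111111111111111ull));
    return 0;
}

The translate-time fast path in do_cntp does the same thing inline when the whole predicate fits in a single 64-bit word (psz <= 8), avoiding a helper call entirely.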
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
 
 #include "exec/gen-icount.h"
 
-static const char *regnames[] =
+static const char * const regnames[] =
     { "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
       "r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
 
@@ -XXX,XX +XXX,XX @@ static struct {
     int nregs;
     int interleave;
     int spacing;
-} neon_ls_element_type[11] = {
+} const neon_ls_element_type[11] = {
     {4, 4, 1},
     {4, 4, 2},
     {4, 1, 1},
--
2.19.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 19 +++++++++++++++++++
 target/arm/sve.decode      |  6 ++++++
 2 files changed, 25 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_last_general(s, a, true);
 }
 
+static bool trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
+        TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
 LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
 LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
 
+# SVE copy element from SIMD&FP scalar register
+CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
+
+# SVE copy element from general register to vector (predicated)
+CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
     return vfp_reg_offset(0, sreg);
 }
 
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static inline long
+neon_element_offset(int reg, int element, TCGMemOp size)
+{
+    int element_size = 1 << size;
+    int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+    /* Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     */
+    if (element_size < 8) {
+        ofs ^= 8 - element_size;
+    }
+#endif
+    return neon_reg_offset(reg, 0) + ofs;
+}
+
 static TCGv_i32 neon_load_reg(int reg, int pass)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
             tmp = load_reg(s, rd);
             if (insn & (1 << 23)) {
                 /* VDUP */
-                if (size == 0) {
-                    gen_neon_dup_u8(tmp, 0);
-                } else if (size == 1) {
-                    gen_neon_dup_low16(tmp);
-                }
-                for (n = 0; n <= pass * 2; n++) {
-                    tmp2 = tcg_temp_new_i32();
-                    tcg_gen_mov_i32(tmp2, tmp);
-                    neon_store_reg(rn, n, tmp2);
-                }
-                neon_store_reg(rn, n, tmp);
+                int vec_size = pass ? 16 : 8;
+                tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
+                                     vec_size, vec_size, tmp);
+                tcg_temp_free_i32(tmp);
             } else {
                 /* VMOV */
                 switch (size) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_temp_free_i32(tmp);
         } else if ((insn & 0x380) == 0) {
             /* VDUP */
+            int element;
+            TCGMemOp size;
+
             if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                 return 1;
             }
-            if (insn & (1 << 19)) {
-                tmp = neon_load_reg(rm, 1);
-            } else {
-                tmp = neon_load_reg(rm, 0);
-            }
             if (insn & (1 << 16)) {
-                gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
+                size = MO_8;
+                element = (insn >> 17) & 7;
             } else if (insn & (1 << 17)) {
-                if ((insn >> 18) & 1)
-                    gen_neon_dup_high16(tmp);
-                else
-                    gen_neon_dup_low16(tmp);
+                size = MO_16;
+                element = (insn >> 18) & 3;
+            } else {
+                size = MO_32;
+                element = (insn >> 19) & 1;
             }
-            for (pass = 0; pass < (q ? 4 : 2); pass++) {
-                tmp2 = tcg_temp_new_i32();
-                tcg_gen_mov_i32(tmp2, tmp);
-                neon_store_reg(rd, pass, tmp2);
-            }
-            tcg_temp_free_i32(tmp);
+            tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
+                                 neon_element_offset(rm, element, size),
+                                 q ? 16 : 8, q ? 16 : 8);
         } else {
             return 1;
         }
--
2.19.1
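The #ifdef HOST_WORDS_BIGENDIAN fixup in neon_element_offset above can be studied in isolation: on a big-endian host the 8-byte units of the register file hold their bytes in host order, so the offset of a sub-8-byte element is mirrored by XOR-ing with (8 - element_size). A standalone model of just the arithmetic, with the host endianness passed in as a flag instead of a preprocessor test (a sketch, not the QEMU function):

#include <stdio.h>

static long element_offset(int element, int size_log2, int big_endian_host)
{
    int element_size = 1 << size_log2;
    long ofs = element * element_size;

    /* Compute assuming little-endian layout, then XOR to account
     * for the byte order within each 8-byte unit. */
    if (big_endian_host && element_size < 8) {
        ofs ^= 8 - element_size;
    }
    return ofs;
}

int main(void)
{
    int e;

    /* 16-bit elements (size_log2 == 1) within one 8-byte unit. */
    for (e = 0; e < 4; e++) {
        printf("element %d: LE offset %ld, BE-host offset %ld\n",
               e, element_offset(e, 1, 0), element_offset(e, 1, 1));
    }
    return 0;
}

For 16-bit elements the XOR maps the little-endian offsets 0, 2, 4, 6 to 6, 4, 2, 0: the element order is exactly reversed within the 8-byte unit, which is what a byte-swapped store of the unit requires.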
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  18 +++
 target/arm/sve_helper.c    | 248 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 106 ++++++++++++++++
 target/arm/sve.decode      |  19 +++
 4 files changed, 391 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_brkpa, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpb, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpas, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpbs, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brka_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brka_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkas_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkas_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
 #undef DO_CMP_PPZI_S
 #undef DO_CMP_PPZI_D
 #undef DO_CMP_PPZI
+
+/* Similar to the ARM LastActive pseudocode function. */
+static bool last_active_pred(void *vd, void *vg, intptr_t oprsz)
+{
+    intptr_t i;
+
+    for (i = QEMU_ALIGN_UP(oprsz, 8) - 8; i >= 0; i -= 8) {
+        uint64_t pg = *(uint64_t *)(vg + i);
+        if (pg) {
+            return (pow2floor(pg) & *(uint64_t *)(vd + i)) != 0;
+        }
+    }
+    return 0;
+}
+
+/* Compute a mask into RETB that is true for all G, up to and including
+ * (if after) or excluding (if !after) the first G & N.
+ * Return true if BRK found.
+ */
+static bool compute_brk(uint64_t *retb, uint64_t n, uint64_t g,
+                        bool brk, bool after)
+{
+    uint64_t b;
+
+    if (brk) {
+        b = 0;
+    } else if ((g & n) == 0) {
+        /* For all G, no N are set; break not found. */
+        b = g;
+    } else {
+        /* Break somewhere in N.  Locate it. */
+        b = g & n;            /* guard true, pred true */
+        b = b & -b;           /* first such */
+        if (after) {
+            b = b | (b - 1);  /* break after same */
+        } else {
+            b = b - 1;        /* break before same */
+        }
+        brk = true;
+    }
+
+    *retb = b;
+    return brk;
+}
+
+/* Compute a zeroing BRK. */
+static void compute_brk_z(uint64_t *d, uint64_t *n, uint64_t *g,
+                          intptr_t oprsz, bool after)
+{
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_b & this_g;
+    }
+}
+
+/* Likewise, but also compute flags. */
+static uint32_t compute_brks_z(uint64_t *d, uint64_t *n, uint64_t *g,
+                               intptr_t oprsz, bool after)
+{
+    uint32_t flags = PREDTEST_INIT;
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_d, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_d = this_b & this_g;
+        flags = iter_predtest_fwd(this_d, this_g, flags);
+    }
+    return flags;
+}
+
+/* Compute a merging BRK. */
+static void compute_brk_m(uint64_t *d, uint64_t *n, uint64_t *g,
+                          intptr_t oprsz, bool after)
+{
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = (this_b & this_g) | (d[i] & ~this_g);
+    }
+}
+
+/* Likewise, but also compute flags. */
+static uint32_t compute_brks_m(uint64_t *d, uint64_t *n, uint64_t *g,
+                               intptr_t oprsz, bool after)
+{
+    uint32_t flags = PREDTEST_INIT;
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < oprsz / 8; ++i) {
+        uint64_t this_b, this_d = d[i], this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_d = (this_b & this_g) | (this_d & ~this_g);
+        flags = iter_predtest_fwd(this_d, this_g, flags);
+    }
+    return flags;
+}
+
+static uint32_t do_zero(ARMPredicateReg *d, intptr_t oprsz)
+{
+    /* It is quicker to zero the whole predicate than loop on OPRSZ.
+     * The compiler should turn this into 4 64-bit integer stores.
+     */
+    memset(d, 0, sizeof(ARMPredicateReg));
+    return PREDTEST_INIT;
+}
+
+void HELPER(sve_brkpa)(void *vd, void *vn, void *vm, void *vg,
+                       uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        compute_brk_z(vd, vm, vg, oprsz, true);
+    } else {
+        do_zero(vd, oprsz);
+    }
+}
+
+uint32_t HELPER(sve_brkpas)(void *vd, void *vn, void *vm, void *vg,
+                            uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        return compute_brks_z(vd, vm, vg, oprsz, true);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
+
+void HELPER(sve_brkpb)(void *vd, void *vn, void *vm, void *vg,
+                       uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        compute_brk_z(vd, vm, vg, oprsz, false);
+    } else {
+        do_zero(vd, oprsz);
+    }
+}
+
+uint32_t HELPER(sve_brkpbs)(void *vd, void *vn, void *vm, void *vg,
+                            uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        return compute_brks_z(vd, vm, vg, oprsz, false);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
+
+void HELPER(sve_brka_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_z(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_z(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_z(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_z(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brka_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_m(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_m(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_m(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_m(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brkn)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+    if (!last_active_pred(vn, vg, oprsz)) {
+        do_zero(vd, oprsz);
+    }
+}
+
+/* As if PredTest(Ones(PL), D, esz). */
+static uint32_t predtest_ones(ARMPredicateReg *d, intptr_t oprsz,
+                              uint64_t esz_mask)
+{
+    uint32_t flags = PREDTEST_INIT;
+    intptr_t i;
+
+    for (i = 0; i < oprsz / 8; i++) {
+        flags = iter_predtest_fwd(d->p[i], esz_mask, flags);
+    }
+    if (oprsz & 7) {
+        uint64_t mask = ~(-1ULL << (8 * (oprsz & 7)));
+        flags = iter_predtest_fwd(d->p[i], esz_mask & mask, flags);
+    }
+    return flags;
+}
+
+uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+    if (last_active_pred(vn, vg, oprsz)) {
+        return predtest_ones(vd, oprsz, -1);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_PPZI(CMPLS, cmpls)
 
 #undef DO_PPZI
 
+/*
+ *** SVE Partition Break Group
+ */
+
+static bool do_brk3(DisasContext *s, arg_rprr_s *a,
+                    gen_helper_gvec_4 *fn, gen_helper_gvec_flags_4 *fn_s)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+
+    /* Predicate sizes may be smaller and cannot use simd_desc. */
+    TCGv_ptr d = tcg_temp_new_ptr();
+    TCGv_ptr n = tcg_temp_new_ptr();
+    TCGv_ptr m = tcg_temp_new_ptr();
+    TCGv_ptr g = tcg_temp_new_ptr();
+    TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+    tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(m, cpu_env, pred_full_reg_offset(s, a->rm));
+    tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    if (a->s) {
+        fn_s(t, d, n, m, g, t);
+        do_pred_flags(t);
+    } else {
+        fn(d, n, m, g, t);
+    }
+    tcg_temp_free_ptr(d);
+    tcg_temp_free_ptr(n);
+    tcg_temp_free_ptr(m);
+    tcg_temp_free_ptr(g);
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+static bool do_brk2(DisasContext *s, arg_rpr_s *a,
+                    gen_helper_gvec_3 *fn, gen_helper_gvec_flags_3 *fn_s)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+
+    /* Predicate sizes may be smaller and cannot use simd_desc. */
+    TCGv_ptr d = tcg_temp_new_ptr();
+    TCGv_ptr n = tcg_temp_new_ptr();
+    TCGv_ptr g = tcg_temp_new_ptr();
+    TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+    tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    if (a->s) {
+        fn_s(t, d, n, g, t);
+        do_pred_flags(t);
+    } else {
+        fn(d, n, g, t);
+    }
+    tcg_temp_free_ptr(d);
+    tcg_temp_free_ptr(n);
+    tcg_temp_free_ptr(g);
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+static bool trans_BRKPA(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    return do_brk3(s, a, gen_helper_sve_brkpa, gen_helper_sve_brkpas);
+}
+
+static bool trans_BRKPB(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    return do_brk3(s, a, gen_helper_sve_brkpb, gen_helper_sve_brkpbs);
+}
+
+static bool trans_BRKA_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brka_m, gen_helper_sve_brkas_m);
+}
+
+static bool trans_BRKB_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkb_m, gen_helper_sve_brkbs_m);
+}
+
+static bool trans_BRKA_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brka_z, gen_helper_sve_brkas_z);
+}
+
+static bool trans_BRKB_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkb_z, gen_helper_sve_brkbs_z);
+}
+
+static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &rri_esz rd rn imm esz
 &rrr_esz rd rn rm esz
 &rpr_esz rd pg rn esz
+&rpr_s rd pg rn s
 &rprr_s rd pg rn rm s
 &rprr_esz rd pg rn rm esz
 &rprrr_esz rd pg rn rm ra esz
@@ -XXX,XX +XXX,XX @@
 @pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
 @rd_rn ........ esz:2 ...... ...... rn:5 rd:5 &rr_esz
 
+# Two operand with governing predicate, flags setting
+@pd_pg_pn_s ........ . s:1 ...... .. pg:4 . rn:4 . rd:4 &rpr_s
+
 # Three operand with unused vector element size
 @rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
 
@@ -XXX,XX +XXX,XX @@ PFIRST 00100101 01 011 000 11000 00 .... 0 .... @pd_pn_e0
 # SVE predicate next active
 PNEXT 00100101 .. 011 001 11000 10 .... 0 .... @pd_pn
 
+### SVE Partition Break Group
+
+# SVE propagate break from previous partition
+BRKPA 00100101 0. 00 .... 11 .... 0 .... 0 .... @pd_pg_pn_pm_s
+BRKPB 00100101 0. 00 .... 11 .... 0 .... 1 .... @pd_pg_pn_pm_s
+
+# SVE partition break condition
+BRKA_z 00100101 0. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKB_z 00100101 1. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKA_m 00100101 0. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+
+# SVE propagate break to next partition
+BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1
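The heart of compute_brk above is two classic bit tricks: b & -b isolates the lowest set bit of the break candidates, and then b | (b - 1) versus b - 1 selects "break after" versus "break before" semantics. A self-contained demonstration with made-up guard and flag values (not QEMU code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t g = 0xff;   /* guard: the low 8 elements are active   */
    uint64_t n = 0x28;   /* break condition true at bits 3 and 5   */
    uint64_t b = g & n;  /* candidates: 0x28                       */

    b &= -b;             /* lowest candidate only: 0x08            */
    printf("first      = 0x%02llx\n", (unsigned long long)b);
    printf("brk after  = 0x%02llx\n", (unsigned long long)(b | (b - 1)));
    printf("brk before = 0x%02llx\n", (unsigned long long)(b - 1));
    /* prints 0x08, 0x0f, 0x07 */
    return 0;
}

The zeroing and merging variants then differ only in what happens to elements outside the guard: compute_brk_z clears them (d[i] = this_b & this_g), while compute_brk_m preserves the old destination bits there.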
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             return 1;
         }
     } else { /* (insn & 0x00380080) == 0 */
-        int invert;
+        int invert, reg_ofs, vec_size;
+
         if (q && (rd & 1)) {
             return 1;
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         case 14:
             imm |= (imm << 8) | (imm << 16) | (imm << 24);
-            if (invert)
+            if (invert) {
                 imm = ~imm;
+            }
             break;
         case 15:
             if (invert) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                       | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
             break;
         }
-        if (invert)
+        if (invert) {
             imm = ~imm;
+        }
 
-        for (pass = 0; pass < (q ? 4 : 2); pass++) {
-            if (op & 1 && op < 12) {
-                tmp = neon_load_reg(rd, pass);
-                if (invert) {
-                    /* The immediate value has already been inverted, so
-                       BIC becomes AND. */
-                    tcg_gen_andi_i32(tmp, tmp, imm);
-                } else {
-                    tcg_gen_ori_i32(tmp, tmp, imm);
-                }
+        reg_ofs = neon_reg_offset(rd, 0);
+        vec_size = q ? 16 : 8;
+
+        if (op & 1 && op < 12) {
+            if (invert) {
+                /* The immediate value has already been inverted,
+                 * so BIC becomes AND.
+                 */
+                tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+                                  vec_size, vec_size);
             } else {
-                /* VMOV, VMVN.  */
-                tmp = tcg_temp_new_i32();
-                if (op == 14 && invert) {
-                    int n;
-                    uint32_t val;
-                    val = 0;
-                    for (n = 0; n < 4; n++) {
-                        if (imm & (1 << (n + (pass & 1) * 4)))
-                            val |= 0xff << (n * 8);
-                    }
-                    tcg_gen_movi_i32(tmp, val);
-                } else {
-                    tcg_gen_movi_i32(tmp, imm);
-                }
+                tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+                                 vec_size, vec_size);
+            }
+        } else {
+            /* VMOV, VMVN.  */
+            if (op == 14 && invert) {
+                TCGv_i64 t64 = tcg_temp_new_i64();
+
+                for (pass = 0; pass <= q; ++pass) {
+                    uint64_t val = 0;
+                    int n;
+
+                    for (n = 0; n < 8; n++) {
+                        if (imm & (1 << (n + pass * 8))) {
+                            val |= 0xffull << (n * 8);
+                        }
+                    }
+                    tcg_gen_movi_i64(t64, val);
+                    neon_store_reg64(t64, rd + pass);
+                }
+                tcg_temp_free_i64(t64);
+            } else {
+                tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
             }
-            neon_store_reg(rd, pass, tmp);
         }
     }
 } else { /* (insn & 0x00800010 == 0x00800000) */
--
2.19.1
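The op == 14 inverted case above expands each of the 8 immediate bits into a full 0xff byte of a 64-bit pattern, one 64-bit register half per pass. The same expansion loop in isolation, with an illustrative input value (a sketch, not QEMU code):

#include <stdio.h>
#include <stdint.h>

/* Each set bit n of imm selects byte n of the 64-bit result. */
static uint64_t expand_imm_bits(uint8_t imm)
{
    uint64_t val = 0;
    int n;

    for (n = 0; n < 8; n++) {
        if (imm & (1 << n)) {
            val |= 0xffull << (n * 8);
        }
    }
    return val;
}

int main(void)
{
    printf("0x%016llx\n", (unsigned long long)expand_imm_bits(0xa5));
    /* prints 0xff00ff0000ff00ff */
    return 0;
}

Building the value at translate time and emitting one 64-bit move per pass is what lets the patch drop the old per-32-bit-pass loop.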
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 15 ++++++
 target/arm/sve_helper.c    | 72 ++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++
 target/arm/sve.decode      | 10 +++++
 4 files changed, 172 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
     }
 }
+
+#define DO_ZIP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)       \
+{                                                                    \
+    intptr_t oprsz = simd_oprsz(desc);                               \
+    intptr_t i, oprsz_2 = oprsz / 2;                                 \
+    ARMVectorReg tmp_n, tmp_m;                                       \
+    /* We produce output faster than we consume input.               \
+       Therefore we must be mindful of possible overlap.  */         \
+    if (unlikely((vn - vd) < (uintptr_t)oprsz)) {                    \
+        vn = memcpy(&tmp_n, vn, oprsz_2);                            \
+    }                                                                \
+    if (unlikely((vm - vd) < (uintptr_t)oprsz)) {                    \
+        vm = memcpy(&tmp_m, vm, oprsz_2);                            \
+    }                                                                \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                    \
+        *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i));         \
+        *(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
+    }                                                                \
+}
+
+DO_ZIP(sve_zip_b, uint8_t, H1)
+DO_ZIP(sve_zip_h, uint16_t, H1_2)
+DO_ZIP(sve_zip_s, uint32_t, H1_4)
+DO_ZIP(sve_zip_d, uint64_t, )
+
+#define DO_UZP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)       \
+{                                                                    \
+    intptr_t oprsz = simd_oprsz(desc);                               \
+    intptr_t oprsz_2 = oprsz / 2;                                    \
+    intptr_t odd_ofs = simd_data(desc);                              \
+    intptr_t i;                                                      \
+    ARMVectorReg tmp_m;                                              \
+    if (unlikely((vm - vd) < (uintptr_t)oprsz)) {                    \
+        vm = memcpy(&tmp_m, vm, oprsz);                              \
+    }                                                                \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                    \
+        *(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs));   \
+    }                                                                \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                    \
+        *(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
+    }                                                                \
+}
+
+DO_UZP(sve_uzp_b, uint8_t, H1)
+DO_UZP(sve_uzp_h, uint16_t, H1_2)
+DO_UZP(sve_uzp_s, uint32_t, H1_4)
+DO_UZP(sve_uzp_d, uint64_t, )
+
+#define DO_TRN(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)       \
+{                                                                    \
+    intptr_t oprsz = simd_oprsz(desc);                               \
+    intptr_t odd_ofs = simd_data(desc);                              \
+    intptr_t i;                                                      \
+    for (i = 0; i < oprsz; i += 2 * sizeof(TYPE)) {                  \
+        TYPE ae = *(TYPE *)(vn + H(i + odd_ofs));                    \
+        TYPE be = *(TYPE *)(vm + H(i + odd_ofs));                    \
+        *(TYPE *)(vd + H(i + 0)) = ae;                               \
+        *(TYPE *)(vd + H(i + sizeof(TYPE))) = be;                    \
+    }                                                                \
+}
+
+DO_TRN(sve_trn_b, uint8_t, H1)
+DO_TRN(sve_trn_h, uint16_t, H1_2)
+DO_TRN(sve_trn_s, uint32_t, H1_4)
+DO_TRN(sve_trn_d, uint64_t, )
+
+#undef DO_ZIP
+#undef DO_UZP
+#undef DO_TRN
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
     return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
 }
 
+/*
+ *** SVE Permute - Interleaving Group
+ */
+
+static bool do_zip(DisasContext *s, arg_rrr_esz *a, bool high)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_zip_b, gen_helper_sve_zip_h,
+        gen_helper_sve_zip_s, gen_helper_sve_zip_d,
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        unsigned high_ofs = high ? vsz / 2 : 0;
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn) + high_ofs,
+                           vec_full_reg_offset(s, a->rm) + high_ofs,
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool do_zzz_data_ool(DisasContext *s, arg_rrr_esz *a, int data,
+                            gen_helper_gvec_3 *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, data, fn);
+    }
+    return true;
+}
+
+static bool trans_ZIP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zip(s, a, false);
+}
+
+static bool trans_ZIP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zip(s, a, true);
+}
+
+static gen_helper_gvec_3 * const uzp_fns[4] = {
+    gen_helper_sve_uzp_b, gen_helper_sve_uzp_h,
+    gen_helper_sve_uzp_s, gen_helper_sve_uzp_d,
+};
+
+static bool trans_UZP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 0, uzp_fns[a->esz]);
+}
+
+static bool trans_UZP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 1 << a->esz, uzp_fns[a->esz]);
+}
+
+static gen_helper_gvec_3 * const trn_fns[4] = {
+    gen_helper_sve_trn_b, gen_helper_sve_trn_h,
+    gen_helper_sve_trn_s, gen_helper_sve_trn_d,
+};
+
+static bool trans_TRN1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 0, trn_fns[a->esz]);
+}
+
+static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
 PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
 PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
 
+### SVE Permute - Interleaving Group
+
+# SVE permute vector elements
+ZIP1_z 00000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
+ZIP2_z 00000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
+UZP1_z 00000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
+UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
+TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
+TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1
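For review, the effect of the DO_ZIP expansion above on concrete data: ZIP1 interleaves the low halves of the two source vectors element by element. A standalone model with byte elements and an 8-byte vector (the real helper additionally handles operand overlap, the other element sizes, and host byte ordering via the H() macros):

#include <stdio.h>
#include <stdint.h>

static void zip1_bytes(uint8_t *d, const uint8_t *n, const uint8_t *m,
                       size_t oprsz)
{
    size_t half = oprsz / 2, i;

    for (i = 0; i < half; i++) {
        d[2 * i]     = n[i];   /* even slots from the first source  */
        d[2 * i + 1] = m[i];   /* odd slots from the second source  */
    }
}

int main(void)
{
    uint8_t n[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t m[8] = {11, 12, 13, 14, 15, 16, 17, 18};
    uint8_t d[8];
    int i;

    zip1_bytes(d, n, m, 8);
    for (i = 0; i < 8; i++) {
        printf("%d ", d[i]);   /* prints: 1 11 2 12 3 13 4 14 */
    }
    printf("\n");
    return 0;
}

UZP is the inverse (gather the even or odd elements of the concatenated pair), and TRN swaps odd/even element pairs between the two inputs; in the patch all three select odd versus even via the odd_ofs value passed in simd_data.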
From: Richard Henderson <richard.henderson@linaro.org>

Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |   6 ++
 target/arm/translate-a64.c |  61 --------------
 target/arm/translate.c     | 162 +++++++++++++++++++++++++++----------
 3 files changed, 124 insertions(+), 105 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }
 
+
+/* Vector operations shared between ARM and AArch64.  */
+extern const GVecGen3 bsl_op;
+extern const GVecGen3 bit_op;
+extern const GVecGen3 bif_op;
+
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
  */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
         }
     }
 }
 
-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rm);
-    tcg_gen_and_i64(rn, rn, rd);
-    tcg_gen_xor_i64(rd, rm, rn);
-}
-
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_and_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_andc_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rm);
-    tcg_gen_and_vec(vece, rn, rn, rd);
-    tcg_gen_xor_vec(vece, rd, rm, rn);
-}
-
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_and_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_andc_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
 /* Logic op (opcode == 3) subgroup of C3.6.16. */
 static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 {
-    static const GVecGen3 bsl_op = {
-        .fni8 = gen_bsl_i64,
-        .fniv = gen_bsl_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bit_op = {
-        .fni8 = gen_bit_i64,
-        .fniv = gen_bit_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bif_op = {
-        .fni8 = gen_bif_i64,
-        .fniv = gen_bif_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-
     int rd = extract32(insn, 0, 5);
     int rn = extract32(insn, 5, 5);
     int rm = extract32(insn, 16, 5);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     return 0;
 }
 
-/* Bitwise select.  dest = c ? t : f.  Clobbers T and F.  */
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
-{
-    tcg_gen_and_i32(t, t, c);
-    tcg_gen_andc_i32(f, f, c);
-    tcg_gen_or_i32(dest, t, f);
-}
-
 static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
 {
     switch (size) {
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
     return 1;
 }
 
+/*
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
+ */
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rm);
+    tcg_gen_and_i64(rn, rn, rd);
+    tcg_gen_xor_i64(rd, rm, rn);
+}
+
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_and_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_andc_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rm);
+    tcg_gen_and_vec(vece, rn, rn, rd);
+    tcg_gen_xor_vec(vece, rd, rm, rn);
+}
+
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_and_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_andc_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+const GVecGen3 bsl_op = {
+    .fni8 = gen_bsl_i64,
+    .fniv = gen_bsl_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bit_op = {
+    .fni8 = gen_bit_i64,
+    .fniv = gen_bit_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bif_op = {
+    .fni8 = gen_bif_i64,
+    .fniv = gen_bif_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
    We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 {
     int op;
     int q;
-    int rd, rn, rm;
+    int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
     int size;
     int shift;
     int pass;
     int count;
     int pairwise;
     int u;
+    int vec_size;
     uint32_t imm, mask;
     TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
     TCGv_ptr ptr1, ptr2, ptr3;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     VFP_DREG_N(rn, insn);
     VFP_DREG_M(rm, insn);
     size = (insn >> 20) & 3;
+    vec_size = q ? 16 : 8;
+    rd_ofs = neon_reg_offset(rd, 0);
+    rn_ofs = neon_reg_offset(rn, 0);
+    rm_ofs = neon_reg_offset(rm, 0);
+
     if ((insn & (1 << 23)) == 0) {
         /* Three register same length.  */
         op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                               q, rd, rn, rm);
         }
         return 1;
+
+    case NEON_3R_LOGIC: /* Logic ops.  */
+        switch ((u << 2) | size) {
+        case 0: /* VAND */
+            tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 1: /* VBIC */
+            tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
+                              vec_size, vec_size);
+            break;
+        case 2:
+            if (rn == rm) {
+                /* VMOV */
+                tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
+            } else {
+                /* VORR */
+                tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
+                                vec_size, vec_size);
+            }
+            break;
+        case 3: /* VORN */
+            tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 4: /* VEOR */
+            tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 5: /* VBSL */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bsl_op);
+            break;
+        case 6: /* VBIT */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bit_op);
+            break;
+        case 7: /* VBIF */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bif_op);
+            break;
+        }
+        return 0;
     }
-    if (size == 3 && op != NEON_3R_LOGIC) {
+    if (size == 3) {
         /* 64-bit element instructions.  */
         for (pass = 0; pass < (q ? 2 : 1); pass++) {
             neon_load_reg64(cpu_V0, rn + pass);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         case NEON_3R_VRHADD:
             GEN_NEON_INTEGER_OP(rhadd);
             break;
-        case NEON_3R_LOGIC: /* Logic ops.  */
-            switch ((u << 2) | size) {
-            case 0: /* VAND */
-                tcg_gen_and_i32(tmp, tmp, tmp2);
-                break;
-            case 1: /* BIC */
-                tcg_gen_andc_i32(tmp, tmp, tmp2);
-                break;
-            case 2: /* VORR */
-                tcg_gen_or_i32(tmp, tmp, tmp2);
-                break;
-            case 3: /* VORN */
-                tcg_gen_orc_i32(tmp, tmp, tmp2);
-                break;
-            case 4: /* VEOR */
-                tcg_gen_xor_i32(tmp, tmp, tmp2);
-                break;
-            case 5: /* VBSL */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp, tmp2, tmp3);
-                tcg_temp_free_i32(tmp3);
-                break;
-            case 6: /* VBIT */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp, tmp3, tmp2);
-                tcg_temp_free_i32(tmp3);
-                break;
-            case 7: /* VBIF */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp3, tmp, tmp2);
-                tcg_temp_free_i32(tmp3);
-                break;
-            }
-            break;
         case NEON_3R_VHSUB:
             GEN_NEON_INTEGER_OP(hsub);
             break;
--
2.19.1
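The three-instruction gen_bsl_i64 sequence above computes a bitwise select without needing a NOT: rd = rm ^ ((rn ^ rm) & rd), which is the same function as (rn & rd) | (rm & ~rd). A standalone equality check with arbitrary values (illustrative, not QEMU code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t rd = 0xf0f0f0f0f0f0f0f0ull;   /* selector mask  */
    uint64_t rn = 0x1122334455667788ull;   /* "true" source  */
    uint64_t rm = 0x99aabbccddeeff00ull;   /* "false" source */

    uint64_t naive = (rn & rd) | (rm & ~rd);   /* textbook select */
    uint64_t xored = rm ^ ((rn ^ rm) & rd);    /* xor/and/xor     */

    printf("%s\n", naive == xored ? "match" : "differ");  /* match */
    return 0;
}

VBIT and VBIF are the same trick with the operand roles permuted, which is why all three expanders are three ops and can share one GVecGen3 shape with load_dest set.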
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 44 +++++++++++++++
 target/arm/sve_helper.c    | 88 ++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 66 ++++++++++++++++++++++
 target/arm/sve.decode      | 23 ++++++++
 4 files changed, 221 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
                    i32, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
 #undef DO_CMP_PPZW_H
 #undef DO_CMP_PPZW_S
 #undef DO_CMP_PPZW
+
+/* Similar, but the second source is immediate.  */
+#define DO_CMP_PPZI(NAME, TYPE, OP, H, MASK)                         \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)   \
+{                                                                    \
+    intptr_t opr_sz = simd_oprsz(desc);                              \
+    uint32_t flags = PREDTEST_INIT;                                  \
+    TYPE mm = simd_data(desc);                                       \
+    intptr_t i = opr_sz;                                             \
+    do {                                                             \
+        uint64_t out = 0, pg;                                        \
+        do {                                                         \
+            i -= sizeof(TYPE), out <<= sizeof(TYPE);                 \
+            TYPE nn = *(TYPE *)(vn + H(i));                          \
+            out |= nn OP mm;                                         \
+        } while (i & 63);                                            \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                    \
+        out &= pg;                                                   \
+        *(uint64_t *)(vd + (i >> 3)) = out;                          \
+        flags = iter_predtest_bwd(out, pg, flags);                   \
+    } while (i > 0);                                                 \
+    return flags;                                                    \
+}
+
+#define DO_CMP_PPZI_B(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZI_H(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZI_S(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
+#define DO_CMP_PPZI_D(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP,     , 0x0101010101010101ull)
+
+DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t,  ==)
+DO_CMP_PPZI_H(sve_cmpeq_ppzi_h, uint16_t, ==)
+DO_CMP_PPZI_S(sve_cmpeq_ppzi_s, uint32_t, ==)
+DO_CMP_PPZI_D(sve_cmpeq_ppzi_d, uint64_t, ==)
+
+DO_CMP_PPZI_B(sve_cmpne_ppzi_b, uint8_t,  !=)
+DO_CMP_PPZI_H(sve_cmpne_ppzi_h, uint16_t, !=)
+DO_CMP_PPZI_S(sve_cmpne_ppzi_s, uint32_t, !=)
+DO_CMP_PPZI_D(sve_cmpne_ppzi_d, uint64_t, !=)
+
+DO_CMP_PPZI_B(sve_cmpgt_ppzi_b, int8_t,  >)
+DO_CMP_PPZI_H(sve_cmpgt_ppzi_h, int16_t, >)
+DO_CMP_PPZI_S(sve_cmpgt_ppzi_s, int32_t, >)
+DO_CMP_PPZI_D(sve_cmpgt_ppzi_d, int64_t, >)
+
+DO_CMP_PPZI_B(sve_cmpge_ppzi_b, int8_t,  >=)
+DO_CMP_PPZI_H(sve_cmpge_ppzi_h, int16_t, >=)
+DO_CMP_PPZI_S(sve_cmpge_ppzi_s, int32_t, >=)
+DO_CMP_PPZI_D(sve_cmpge_ppzi_d, int64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmphi_ppzi_b, uint8_t,  >)
+DO_CMP_PPZI_H(sve_cmphi_ppzi_h, uint16_t, >)
+DO_CMP_PPZI_S(sve_cmphi_ppzi_s, uint32_t, >)
+DO_CMP_PPZI_D(sve_cmphi_ppzi_d, uint64_t, >)
+
+DO_CMP_PPZI_B(sve_cmphs_ppzi_b, uint8_t,  >=)
+DO_CMP_PPZI_H(sve_cmphs_ppzi_h, uint16_t, >=)
+DO_CMP_PPZI_S(sve_cmphs_ppzi_s, uint32_t, >=)
+DO_CMP_PPZI_D(sve_cmphs_ppzi_d, uint64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmplt_ppzi_b, int8_t,  <)
+DO_CMP_PPZI_H(sve_cmplt_ppzi_h, int16_t, <)
+DO_CMP_PPZI_S(sve_cmplt_ppzi_s, int32_t, <)
+DO_CMP_PPZI_D(sve_cmplt_ppzi_d, int64_t, <)
+
+DO_CMP_PPZI_B(sve_cmple_ppzi_b, int8_t,  <=)
+DO_CMP_PPZI_H(sve_cmple_ppzi_h, int16_t, <=)
+DO_CMP_PPZI_S(sve_cmple_ppzi_s, int32_t, <=)
+DO_CMP_PPZI_D(sve_cmple_ppzi_d, int64_t, <=)
+
+DO_CMP_PPZI_B(sve_cmplo_ppzi_b, uint8_t,  <)
+DO_CMP_PPZI_H(sve_cmplo_ppzi_h, uint16_t, <)
+DO_CMP_PPZI_S(sve_cmplo_ppzi_s, uint32_t, <)
+DO_CMP_PPZI_D(sve_cmplo_ppzi_d, uint64_t, <)
+
+DO_CMP_PPZI_B(sve_cmpls_ppzi_b, uint8_t,  <=)
+DO_CMP_PPZI_H(sve_cmpls_ppzi_h, uint16_t, <=)
+DO_CMP_PPZI_S(sve_cmpls_ppzi_s, uint32_t, <=)
+DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
+
+#undef DO_CMP_PPZI_B
+#undef DO_CMP_PPZI_H
+#undef DO_CMP_PPZI_S
+#undef DO_CMP_PPZI_D
+#undef DO_CMP_PPZI
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"
 
+typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
+                                     TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_ptr, TCGv_i32);
 
@@ -XXX,XX +XXX,XX @@ DO_PPZW(CMPLS, cmpls)
 
 #undef DO_PPZW
 
+/*
+ *** SVE Integer Compare - Immediate Groups
+ */
+
+static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
+                          gen_helper_gvec_flags_3 *gen_fn)
+{
+    TCGv_ptr pd, zn, pg;
+    unsigned vsz;
+    TCGv_i32 t;
+
+    if (gen_fn == NULL) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    vsz = vec_full_reg_size(s);
+    t = tcg_const_i32(simd_desc(vsz, vsz, a->imm));
+    pd = tcg_temp_new_ptr();
+    zn = tcg_temp_new_ptr();
+    pg = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    gen_fn(t, pd, zn, pg, t);
+
+    tcg_temp_free_ptr(pd);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pg);
+
+    do_pred_flags(t);
+
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+#define DO_PPZI(NAME, name)                                               \
+static bool trans_##NAME##_ppzi(DisasContext *s, arg_rpri_esz *a,         \
+                                uint32_t insn)                            \
+{                                                                         \
+    static gen_helper_gvec_flags_3 * const fns[4] = {                     \
+        gen_helper_sve_##name##_ppzi_b, gen_helper_sve_##name##_ppzi_h,   \
+        gen_helper_sve_##name##_ppzi_s, gen_helper_sve_##name##_ppzi_d,   \
+    };                                                                    \
+    return do_ppzi_flags(s, a, fns[a->esz]);                              \
+}
+
+DO_PPZI(CMPEQ, cmpeq)
+DO_PPZI(CMPNE, cmpne)
+DO_PPZI(CMPGT, cmpgt)
+DO_PPZI(CMPGE, cmpge)
+DO_PPZI(CMPHI, cmphi)
+DO_PPZI(CMPHS, cmphs)
+DO_PPZI(CMPLT, cmplt)
+DO_PPZI(CMPLE, cmple)
+DO_PPZI(CMPLO, cmplo)
+DO_PPZI(CMPLS, cmpls)
+
+#undef DO_PPZI
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 @rdn_dbm ........ .. .... dbm:13 rd:5 \
          &rr_dbm rn=%reg_movprfx
 
+# Predicate output, vector and immediate input,
+#   controlling predicate, element size.
+@pd_pg_rn_i7 ........ esz:2 . imm:7 . pg:3 rn:5 . rd:4 &rpri_esz
+@pd_pg_rn_i5 ........ esz:2 . imm:s5 ... pg:3 rn:5 . rd:4 &rpri_esz
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
           &rri imm=%imm9_16_10
@@ -XXX,XX +XXX,XX @@ CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
 CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
 CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
 
+### SVE Integer Compare - Unsigned Immediate Group
+
+# SVE integer compare with unsigned immediate
+CMPHS_ppzi 00100100 .. 1 ....... 0 ... ..... 0 .... @pd_pg_rn_i7
+CMPHI_ppzi 00100100 .. 1 ....... 0 ... ..... 1 .... @pd_pg_rn_i7
+CMPLO_ppzi 00100100 .. 1 ....... 1 ... ..... 0 .... @pd_pg_rn_i7
+CMPLS_ppzi 00100100 .. 1 ....... 1 ... ..... 1 .... @pd_pg_rn_i7
+
+### SVE Integer Compare - Signed Immediate Group
+
+# SVE integer compare with signed immediate
+CMPGE_ppzi 00100101 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_i5
+CMPGT_ppzi 00100101 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_i5
+CMPLT_ppzi 00100101 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_i5
+CMPLE_ppzi 00100101 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_i5
+CMPEQ_ppzi 00100101 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_i5
+CMPNE_ppzi 00100101 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_i5
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         }
         return 0;
+
+    case NEON_3R_VADD_VSUB:
+        if (u) {
+            tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        } else {
+            tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        }
+        return 0;
     }
     if (size == 3) {
         /* 64-bit element instructions.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                 cpu_V1, cpu_V0);
             }
             break;
-        case NEON_3R_VADD_VSUB:
-            if (u) {
-                tcg_gen_sub_i64(CPU_V001);
-            } else {
-                tcg_gen_add_i64(CPU_V001);
-            }
-            break;
         default:
             abort();
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tmp2 = neon_load_reg(rd, pass);
             gen_neon_add(size, tmp, tmp2);
             break;
-        case NEON_3R_VADD_VSUB:
-            if (!u) { /* VADD */
-                gen_neon_add(size, tmp, tmp2);
-            } else { /* VSUB */
-                switch (size) {
-                case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
-                case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
-            break;
         case NEON_3R_VTST_VCEQ:
             if (!u) { /* VTST */
                 switch (size) {
-- 
2.19.1
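A note on the MASK arguments used by DO_CMP_PPZI above: SVE keeps one
predicate bit per vector byte, and an element of 2**esz bytes is governed
by the lowest of its bits, so each per-size mask selects every
(2**esz)-th bit. A sketch of how those constants arise (model_ppzi_mask
is illustrative only, not a QEMU function):

    #include <stdint.h>

    static uint64_t model_ppzi_mask(unsigned esz)
    {
        uint64_t mask = 0;
        for (unsigned bit = 0; bit < 64; bit += 1u << esz) {
            mask |= UINT64_C(1) << bit;
        }
        /* esz=0 -> 0xffffffffffffffff, esz=1 -> 0x5555555555555555,
         * esz=2 -> 0x1111111111111111, esz=3 -> 0x0101010101010101
         */
        return mask;
    }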
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  2 +
 target/arm/sve_helper.c    | 31 ++++++++++++
 target/arm/translate-sve.c | 99 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 +++
 4 files changed, 140 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
     }
     return sum;
 }
+
+uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
+{
+    uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t esz_mask = pred_esz_masks[esz];
+    ARMPredicateReg *d = vd;
+    uint32_t flags;
+    intptr_t i;
+
+    /* Begin with a zero predicate register.  */
+    flags = do_zero(d, oprsz);
+    if (count == 0) {
+        return flags;
+    }
+
+    /* Scale from predicate element count to bits.  */
+    count <<= esz;
+    /* Bound to the bits in the predicate.  */
+    count = MIN(count, oprsz * 8);
+
+    /* Set all of the requested bits.  */
+    for (i = 0; i < count / 64; ++i) {
+        d->p[i] = esz_mask;
+    }
+    if (count & 63) {
+        d->p[i] = MAKE_64BIT_MASK(0, count & 63) & esz_mask;
+    }
+
+    return predtest_ones(d, oprsz, esz_mask);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
     return true;
 }
 
+/*
+ *** SVE Integer Compare Scalars Group
+ */
+
+static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGCond cond = (a->ne ? TCG_COND_NE : TCG_COND_EQ);
+    TCGv_i64 rn = read_cpu_reg(s, a->rn, a->sf);
+    TCGv_i64 rm = read_cpu_reg(s, a->rm, a->sf);
+    TCGv_i64 cmp = tcg_temp_new_i64();
+
+    tcg_gen_setcond_i64(cond, cmp, rn, rm);
+    tcg_gen_extrl_i64_i32(cpu_NF, cmp);
+    tcg_temp_free_i64(cmp);
+
+    /* VF = !NF & !CF.  */
+    tcg_gen_xori_i32(cpu_VF, cpu_NF, 1);
+    tcg_gen_andc_i32(cpu_VF, cpu_VF, cpu_CF);
+
+    /* Both NF and VF actually look at bit 31.  */
+    tcg_gen_neg_i32(cpu_NF, cpu_NF);
+    tcg_gen_neg_i32(cpu_VF, cpu_VF);
+    return true;
+}
+
+static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
+    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
+    TCGv_i64 t0 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i32 t2, t3;
+    TCGv_ptr ptr;
+    unsigned desc, vsz = vec_full_reg_size(s);
+    TCGCond cond;
+
+    if (!a->sf) {
+        if (a->u) {
+            tcg_gen_ext32u_i64(op0, op0);
+            tcg_gen_ext32u_i64(op1, op1);
+        } else {
+            tcg_gen_ext32s_i64(op0, op0);
+            tcg_gen_ext32s_i64(op1, op1);
+        }
+    }
+
+    /* For the helper, compress the different conditions into a computation
+     * of how many iterations for which the condition is true.
+     *
+     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
+     * 2**64 iterations, overflowing to 0.  Of course, predicate registers
+     * aren't that large, so any value >= predicate size is sufficient.
+     */
+    tcg_gen_sub_i64(t0, op1, op0);
+
+    /* t0 = MIN(op1 - op0, vsz).  */
+    tcg_gen_movi_i64(t1, vsz);
+    tcg_gen_umin_i64(t0, t0, t1);
+    if (a->eq) {
+        /* Equality means one more iteration.  */
+        tcg_gen_addi_i64(t0, t0, 1);
+    }
+
+    /* t0 = (condition true ? t0 : 0).  */
+    cond = (a->u
+            ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
+            : (a->eq ? TCG_COND_LE : TCG_COND_LT));
+    tcg_gen_movi_i64(t1, 0);
+    tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+
+    t2 = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(t2, t0);
+    tcg_temp_free_i64(t0);
+    tcg_temp_free_i64(t1);
+
+    desc = (vsz / 8) - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    t3 = tcg_const_i32(desc);
+
+    ptr = tcg_temp_new_ptr();
+    tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
+
+    gen_helper_sve_while(t2, ptr, t2, t3);
+    do_pred_flags(t2);
+
+    tcg_temp_free_ptr(ptr);
+    tcg_temp_free_i32(t2);
+    tcg_temp_free_i32(t3);
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
 # SVE saturating inc/dec vector by predicate count
 SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
 
+### SVE Integer Compare - Scalars Group
+
+# SVE conditionally terminate scalars
+CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
+
+# SVE integer compare scalar count and limit
+WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
-- 
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_temp_free_ptr(ptr1);
             tcg_temp_free_ptr(ptr2);
             break;
+
+        case NEON_2RM_VMVN:
+            tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+            break;
+        case NEON_2RM_VNEG:
+            tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+            break;
+
         default:
         elementwise:
             for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     case NEON_2RM_VCNT:
                         gen_helper_neon_cnt_u8(tmp, tmp);
                         break;
-                    case NEON_2RM_VMVN:
-                        tcg_gen_not_i32(tmp, tmp);
-                        break;
                     case NEON_2RM_VQABS:
                         switch (size) {
                         case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                         default: abort();
                         }
                         break;
-                    case NEON_2RM_VNEG:
-                        tmp2 = tcg_const_i32(0);
-                        gen_neon_rsb(size, tmp, tmp2);
-                        tcg_temp_free_i32(tmp2);
-                        break;
                     case NEON_2RM_VCGT0_F:
                     {
                         TCGv_ptr fpstatus = get_fpstatus_ptr(1);
-- 
2.19.1
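The WHILE translation above reduces all four condition variants to "how
many leading elements are true", which the helper then materializes as
predicate bits. A scalar sketch of that reduction for the unsigned,
strictly-less case (hypothetical helper name, for illustration only):

    #include <stdint.h>

    /* Number of predicate elements set by a WHILELO-style loop for a
     * predicate of 'elems' elements: i runs from op0 while i < op1.
     */
    static uint32_t model_whilelo_count(uint64_t op0, uint64_t op1,
                                        uint32_t elems)
    {
        if (op0 >= op1) {
            return 0;               /* condition false: empty predicate */
        }
        uint64_t n = op1 - op0;     /* nominal iteration count */
        return n < elems ? (uint32_t)n : elems;  /* bound to vector size */
    }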
From: Cédric Le Goater <clg@kaod.org>

On Macronix chips, two bytes can be written to the WRSR. The first byte
will configure the status register and the second the configuration
register. It is important to save the configuration value as it
contains the dummy cycle setting when using dual or quad IO mode.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/block/m25p80.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
     case MAN_MACRONIX:
         s->quad_enable = extract32(s->data[0], 6, 1);
         if (s->len > 1) {
+            s->volatile_cfg = s->data[1];
             s->four_bytes_address_mode = extract32(s->data[1], 5, 1);
         }
         break;
-- 
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                              vec_size, vec_size);
         }
         return 0;
+
+    case NEON_3R_VMUL: /* VMUL */
+        if (u) {
+            /* Polynomial case allows only P8 and is handled below.  */
+            if (size != 0) {
+                return 1;
+            }
+        } else {
+            tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            return 0;
+        }
+        break;
     }
     if (size == 3) {
         /* 64-bit element instructions.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 return 1;
             }
         }
         break;
-    case NEON_3R_VMUL:
-        if (u && (size != 0)) {
-            /* UNDEF on invalid size for polynomial subcase */
-            return 1;
-        }
-        break;
     case NEON_3R_VFM_VQRDMLSH:
         if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
             return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             break;
         case NEON_3R_VMUL:
-            if (u) { /* polynomial */
-                gen_helper_neon_mul_p8(tmp, tmp, tmp2);
-            } else { /* Integer */
-                switch (size) {
-                case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
-                case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
+            /* VMUL.P8; other cases already eliminated.  */
+            gen_helper_neon_mul_p8(tmp, tmp, tmp2);
             break;
         case NEON_3R_VPMAX:
             GEN_NEON_INTEGER_OP(pmax);
-- 
2.19.1
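The VMUL.P8 case kept on the helper path above is a carry-less multiply
over GF(2). A scalar model of the per-element operation (illustrative
only, not the QEMU helper):

    #include <stdint.h>

    static uint8_t model_pmul8(uint8_t a, uint8_t b)
    {
        uint8_t r = 0;
        for (int i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                r ^= (uint8_t)(a << i);  /* XOR instead of add: no carries */
            }
        }
        return r;  /* low 8 bits; VMUL.P8 keeps the truncated product */
    }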
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  3 +++
 target/arm/sve_helper.c    | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 12 ++++++++++++
 target/arm/sve.decode      |  6 ++++++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_TRN(sve_trn_d, uint64_t, )
 #undef DO_ZIP
 #undef DO_UZP
 #undef DO_TRN
+
+void HELPER(sve_compact_s)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc) / 4;
+    uint32_t *d = vd, *n = vn;
+    uint8_t *pg = vg;
+
+    for (i = j = 0; i < opr_sz; i++) {
+        if (pg[H1(i / 2)] & (i & 1 ? 0x10 : 0x01)) {
+            d[H4(j)] = n[H4(i)];
+            j++;
+        }
+    }
+    for (; j < opr_sz; j++) {
+        d[H4(j)] = 0;
+    }
+}
+
+void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn;
+    uint8_t *pg = vg;
+
+    for (i = j = 0; i < opr_sz; i++) {
+        if (pg[H1(i)] & 1) {
+            d[j] = n[i];
+            j++;
+        }
+    }
+    for (; j < opr_sz; j++) {
+        d[j] = 0;
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
     return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
 }
 
+/*
+ *** SVE Permute Vector - Predicated Group
+ */
+
+static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
 TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
 TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
 
+### SVE Permute - Predicated Group
+
+# SVE compress active elements
+# Note esz >= 2
+COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 70 +++++++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             size--;
         }
         shift = (insn >> 16) & ((1 << (3 + size)) - 1);
-        /* To avoid excessive duplication of ops we implement shift
-           by immediate using the variable shift operations.  */
         if (op < 8) {
             /* Shift by immediate:
                VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             /* Right shifts are encoded as N - shift, where N is the
                element size in bits.  */
-            if (op <= 4)
+            if (op <= 4) {
                 shift = shift - (1 << (size + 3));
+            }
+
+            switch (op) {
+            case 0: /* VSHR */
+                /* Right shift comes here negative.  */
+                shift = -shift;
+                /* Shifts larger than the element size are architecturally
+                 * valid.  Unsigned results in all zeros; signed results
+                 * in all sign bits.
+                 */
+                if (!u) {
+                    tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
+                                      MIN(shift, (8 << size) - 1),
+                                      vec_size, vec_size);
+                } else if (shift >= 8 << size) {
+                    tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+                } else {
+                    tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
+                                      vec_size, vec_size);
+                }
+                return 0;
+
+            case 5: /* VSHL, VSLI */
+                if (!u) { /* VSHL */
+                    /* Shifts larger than the element size are
+                     * architecturally valid and results in zero.
+                     */
+                    if (shift >= 8 << size) {
+                        tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+                    } else {
+                        tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
+                                          vec_size, vec_size);
+                    }
+                    return 0;
+                }
+                break;
+            }
+
             if (size == 3) {
                 count = q + 1;
             } else {
                 count = q ? 4: 2;
             }
-            switch (size) {
-            case 0:
-                imm = (uint8_t) shift;
-                imm |= imm << 8;
-                imm |= imm << 16;
-                break;
-            case 1:
-                imm = (uint16_t) shift;
-                imm |= imm << 16;
-                break;
-            case 2:
-            case 3:
-                imm = shift;
-                break;
-            default:
-                abort();
-            }
+
+            /* To avoid excessive duplication of ops we implement shift
+             * by immediate using the variable shift operations.
+             */
+            imm = dup_const(size, shift);
 
             for (pass = 0; pass < count; pass++) {
                 if (size == 3) {
                     neon_load_reg64(cpu_V0, rm + pass);
                     tcg_gen_movi_i64(cpu_V1, imm);
                     switch (op) {
-                    case 0: /* VSHR */
                     case 1: /* VSRA */
                         if (u)
                             gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                             cpu_V0, cpu_V1);
                     }
                     break;
+                    default:
+                        g_assert_not_reached();
                     }
                     if (op == 1 || op == 3) {
                         /* Accumulate.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     tmp2 = tcg_temp_new_i32();
                     tcg_gen_movi_i32(tmp2, imm);
                     switch (op) {
-                    case 0: /* VSHR */
                     case 1: /* VSRA */
                         GEN_NEON_INTEGER_OP(shl);
                         break;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     case 7: /* VQSHL */
                         GEN_NEON_INTEGER_OP_ENV(qshl);
                         break;
+                    default:
+                        g_assert_not_reached();
                     }
                     tcg_temp_free_i32(tmp2);
-- 
2.19.1
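COMPACT, added above, packs the active elements to the low end of the
vector and zero-fills the remainder. A scalar model for 32-bit elements
(illustrative only, not the QEMU helper):

    #include <stdbool.h>
    #include <stdint.h>

    static void model_compact_s(uint32_t *d, const uint32_t *n,
                                const bool *active, int elems)
    {
        int j = 0;
        for (int i = 0; i < elems; i++) {
            if (active[i]) {
                d[j++] = n[i];    /* keep active elements, in order */
            }
        }
        while (j < elems) {
            d[j++] = 0;           /* zero the tail */
        }
    }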
Convert the wdt_i6300esb device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-5-peter.maydell@linaro.org
---
 hw/watchdog/wdt_i6300esb.c | 48 ++++++++++++++++++++++++++++----------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -XXX,XX +XXX,XX @@ static void i6300esb_mem_writel(void *vp, hwaddr addr, uint32_t val)
     }
 }
 
+static uint64_t i6300esb_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return i6300esb_mem_readb(opaque, addr);
+    case 2:
+        return i6300esb_mem_readw(opaque, addr);
+    case 4:
+        return i6300esb_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void i6300esb_mem_writefn(void *opaque, hwaddr addr,
+                                 uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        i6300esb_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        i6300esb_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        i6300esb_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps i6300esb_ops = {
-    .old_mmio = {
-        .read = {
-            i6300esb_mem_readb,
-            i6300esb_mem_readw,
-            i6300esb_mem_readl,
-        },
-        .write = {
-            i6300esb_mem_writeb,
-            i6300esb_mem_writew,
-            i6300esb_mem_writel,
-        },
-    },
+    .read = i6300esb_mem_readfn,
+    .write = i6300esb_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_LITTLE_ENDIAN,
 };
-- 
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Move ssra_op and usra_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |   2 +
 target/arm/translate-a64.c | 106 ----------------------
 target/arm/translate.c     | 139 ++++++++++++++++++++++++++++---
 3 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
 extern const GVecGen3 bsl_op;
 extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
+extern const GVecGen2i ssra_op[4];
+extern const GVecGen2i usra_op[4];
 
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
         }
     }
 }
 
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_sar8i_i64(a, a, shift);
-    tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_sar16i_i64(a, a, shift);
-    tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_sari_i32(a, a, shift);
-    tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_sari_i64(a, a, shift);
-    tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    tcg_gen_sari_vec(vece, a, a, sh);
-    tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_shr8i_i64(a, a, shift);
-    tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_shr16i_i64(a, a, shift);
-    tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_shri_i32(a, a, shift);
-    tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_shri_i64(a, a, shift);
-    tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    tcg_gen_shri_vec(vece, a, a, sh);
-    tcg_gen_add_vec(vece, d, d, a);
-}
-
 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
     uint64_t mask = dup_const(MO_8, 0xff >> shift);
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i ssra_op[4] = {
-        { .fni8 = gen_ssra8_i64,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_8 },
-        { .fni8 = gen_ssra16_i64,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_16 },
-        { .fni4 = gen_ssra32_i32,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_32 },
-        { .fni8 = gen_ssra64_i64,
-          .fniv = gen_ssra_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_64 },
-    };
-    static const GVecGen2i usra_op[4] = {
-        { .fni8 = gen_usra8_i64,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_8, },
-        { .fni8 = gen_usra16_i64,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_16, },
-        { .fni4 = gen_usra32_i32,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_32, },
-        { .fni8 = gen_usra64_i64,
-          .fniv = gen_usra_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_64, },
-    };
     static const GVecGen2i sri_op[4] = {
         { .fni8 = gen_shr8_ins_i64,
           .fniv = gen_shr_ins_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
     .load_dest = true
 };
 
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_sar8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_sar16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_sari_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_sari_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    tcg_gen_sari_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i ssra_op[4] = {
+    { .fni8 = gen_ssra8_i64,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_8 },
+    { .fni8 = gen_ssra16_i64,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_ssra32_i32,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_ssra64_i64,
+      .fniv = gen_ssra_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_64 },
+};
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_shr8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_shr16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    tcg_gen_shri_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i usra_op[4] = {
+    { .fni8 = gen_usra8_i64,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_8, },
+    { .fni8 = gen_usra16_i64,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_16, },
+    { .fni4 = gen_usra32_i32,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_32, },
+    { .fni8 = gen_usra64_i64,
+      .fniv = gen_usra_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_64, },
+};
 
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             return 0;
 
+            case 1: /* VSRA */
+                /* Right shift comes here negative.  */
+                shift = -shift;
+                /* Shifts larger than the element size are architecturally
+                 * valid.  Unsigned results in all zeros; signed results
+                 * in all sign bits.
+                 */
+                if (!u) {
+                    tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+                                    MIN(shift, (8 << size) - 1),
+                                    &ssra_op[size]);
+                } else if (shift >= 8 << size) {
+                    /* rd += 0 */
+                } else {
+                    tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+                                    shift, &usra_op[size]);
+                }
+                return 0;
+
             case 5: /* VSHL, VSLI */
                 if (!u) { /* VSHL */
                     /* Shifts larger than the element size are
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     neon_load_reg64(cpu_V0, rm + pass);
                     tcg_gen_movi_i64(cpu_V1, imm);
                     switch (op) {
-                    case 1: /* VSRA */
-                        if (u)
-                            gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
-                        else
-                            gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
-                        break;
                     case 2: /* VRSHR */
                     case 3: /* VRSRA */
                         if (u)
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     default:
                         g_assert_not_reached();
                     }
-                    if (op == 1 || op == 3) {
+                    if (op == 3) {
                         /* Accumulate.  */
                         neon_load_reg64(cpu_V1, rd + pass);
                         tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     tmp2 = tcg_temp_new_i32();
                     tcg_gen_movi_i32(tmp2, imm);
                     switch (op) {
-                    case 1: /* VSRA */
-                        GEN_NEON_INTEGER_OP(shl);
-                        break;
                     case 2: /* VRSHR */
                     case 3: /* VRSRA */
                         GEN_NEON_INTEGER_OP(rshl);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     }
                     tcg_temp_free_i32(tmp2);
 
-                    if (op == 1 || op == 3) {
+                    if (op == 3) {
                         /* Accumulate.  */
                         tmp2 = neon_load_reg(rd, pass);
                         gen_neon_add(size, tmp, tmp2);
-- 
2.19.1
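The ssra_op and usra_op expanders moved above implement
shift-right-and-accumulate; per element the operation is simply the
following (scalar sketch for the 32-bit lane, illustrative only):

    #include <stdint.h>

    static int32_t model_ssra32(int32_t d, int32_t a, unsigned shift)
    {
        return d + (a >> shift);     /* arithmetic shift, then add */
    }

    static uint32_t model_usra32(uint32_t d, uint32_t a, unsigned shift)
    {
        return d + (a >> shift);     /* logical shift, then add */
    }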
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  23 ++++++
 target/arm/sve_helper.c    | 114 +++++++++++++++++++++++++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++
 4 files changed, 297 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 
 DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_insr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_3(sve_rev_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
         memcpy(vd + n_siz, &tmp, n_ofs);
     }
 }
+
+#define DO_INSR(NAME, TYPE, H)                                       \
+void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc)   \
+{                                                                    \
+    intptr_t opr_sz = simd_oprsz(desc);                              \
+    swap_memmove(vd + sizeof(TYPE), vn, opr_sz - sizeof(TYPE));      \
+    *(TYPE *)(vd + H(0)) = val;                                      \
+}
+
+DO_INSR(sve_insr_b, uint8_t, H1)
+DO_INSR(sve_insr_h, uint16_t, H1_2)
+DO_INSR(sve_insr_s, uint32_t, H1_4)
+DO_INSR(sve_insr_d, uint64_t, )
+
+#undef DO_INSR
+
+void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = bswap64(b);
+        *(uint64_t *)(vd + j) = bswap64(f);
+    }
+}
+
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = hswap64(b);
+        *(uint64_t *)(vd + j) = hswap64(f);
+    }
+}
+
+void HELPER(sve_rev_s)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);

[The remainder of this patch is not present in the source.]

From: Richard Henderson <richard.henderson@linaro.org>

Move shi_op and sli_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |   2 +
 target/arm/translate-a64.c | 152 +----------------------
 target/arm/translate.c     | 244 ++++++++++++++++++++++++++-----------
 3 files changed, 179 insertions(+), 219 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
 extern const GVecGen2i ssra_op[4];
 extern const GVecGen2i usra_op[4];
+extern const GVecGen2i sri_op[4];
+extern const GVecGen2i sli_op[4];
 
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_shri_i32(a, a, shift);
-    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
-}
-
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_shri_i64(a, a, shift);
-    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
-}
-
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
-    tcg_gen_shri_vec(vece, t, a, sh);
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-
-    tcg_temp_free_vec(t);
-    tcg_temp_free_vec(m);
-}
-
 /* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i sri_op[4] = {
-        { .fni8 = gen_shr8_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_8 },
-        { .fni8 = gen_shr16_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_16 },
-        { .fni4 = gen_shr32_ins_i32,
-          .fniv = gen_shr_ins_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_32 },
-        { .fni8 = gen_shr64_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_64 },
-    };
-
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = 2 * (8 << size) - immhb;
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     clear_vec_high(s, is_q, rd);
 }
 
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-    tcg_temp_free_i64(t);
-}
-
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
-}
-
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
-}
-
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    uint64_t mask = (1ull << sh) - 1;
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_dupi_vec(vece, m, mask);
-    tcg_gen_shli_vec(vece, t, a, sh);
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-
-    tcg_temp_free_vec(t);
-    tcg_temp_free_vec(m);
-}
-
 /* SHL/SLI - Vector shift left */
 static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i shi_op[4] = {
-        { .fni8 = gen_shl8_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_8 },
-        { .fni8 = gen_shl16_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_16 },
-        { .fni4 = gen_shl32_ins_i32,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_32 },
-        { .fni8 = gen_shl64_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .opc = INDEX_op_shli_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .vece = MO_64 },
-    };
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = immhb - (8 << size);
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
     }
 
     if (insert) {
-        gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
+        gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
     } else {
         gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
       .vece = MO_64, },
 };
 
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_8, 0xff >> shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
+}
+
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
+}
+
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    if (sh == 0) {
+        tcg_gen_mov_vec(d, a);
+    } else {
+        TCGv_vec t = tcg_temp_new_vec_matching(d);
+        TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+        tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
+        tcg_gen_shri_vec(vece, t, a, sh);
+        tcg_gen_and_vec(vece, d, d, m);
+        tcg_gen_or_vec(vece, d, d, t);
+
+        tcg_temp_free_vec(t);
+        tcg_temp_free_vec(m);
+    }
+}
+
+const GVecGen2i sri_op[4] = {
+    { .fni8 = gen_shr8_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_8 },
+    { .fni8 = gen_shr16_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_shr32_ins_i32,
+      .fniv = gen_shr_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_shr64_ins_i64,
+      .fniv = gen_shr_ins_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_64 },
+};
+
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_8, 0xff << shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
+}
+
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
+}
+
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    if (sh == 0) {
+        tcg_gen_mov_vec(d, a);
+    } else {
+        TCGv_vec t = tcg_temp_new_vec_matching(d);
+        TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+        tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
+        tcg_gen_shli_vec(vece, t, a, sh);
+        tcg_gen_and_vec(vece, d, d, m);
+        tcg_gen_or_vec(vece, d, d, t);
+
+        tcg_temp_free_vec(t);
+        tcg_temp_free_vec(m);
+    }
+}
+
+const GVecGen2i sli_op[4] = {
+    { .fni8 = gen_shl8_ins_i64,
+      .fniv = gen_shl_ins_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shli_vec,

[The remainder of this patch is not present in the source.]
356
+ .vece = MO_8 },
107
+ *(uint64_t *)(vd + i) = rol64(b, 32);
357
+ { .fni8 = gen_shl16_ins_i64,
108
+ *(uint64_t *)(vd + j) = rol64(f, 32);
358
+ .fniv = gen_shl_ins_vec,
109
+ }
359
+ .load_dest = true,
110
+}
360
+ .opc = INDEX_op_shli_vec,
111
+
361
+ .vece = MO_16 },
112
+void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
362
+ { .fni4 = gen_shl32_ins_i32,
113
+{
363
+ .fniv = gen_shl_ins_vec,
114
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
364
+ .load_dest = true,
115
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
365
+ .opc = INDEX_op_shli_vec,
116
+ uint64_t f = *(uint64_t *)(vn + i);
366
+ .vece = MO_32 },
117
+ uint64_t b = *(uint64_t *)(vn + j);
367
+ { .fni8 = gen_shl64_ins_i64,
118
+ *(uint64_t *)(vd + i) = b;
368
+ .fniv = gen_shl_ins_vec,
119
+ *(uint64_t *)(vd + j) = f;
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
120
+ }
370
+ .load_dest = true,
121
+}
371
+ .opc = INDEX_op_shli_vec,
122
+
372
+ .vece = MO_64 },
123
+#define DO_TBL(NAME, TYPE, H) \
373
+};
124
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
374
+
125
+{ \
375
/* Translate a NEON data processing instruction. Return nonzero if the
126
+ intptr_t i, opr_sz = simd_oprsz(desc); \
376
instruction is invalid.
127
+ uintptr_t elem = opr_sz / sizeof(TYPE); \
377
We process data in a mixture of 32-bit and 64-bit chunks.
128
+ TYPE *d = vd, *n = vn, *m = vm; \
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
129
+ ARMVectorReg tmp; \
379
int pairwise;
130
+ if (unlikely(vd == vn)) { \
380
int u;
131
+ n = memcpy(&tmp, vn, opr_sz); \
381
int vec_size;
132
+ } \
382
- uint32_t imm, mask;
133
+ for (i = 0; i < elem; i++) { \
383
+ uint32_t imm;
134
+ TYPE j = m[H(i)]; \
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
135
+ d[H(i)] = j < elem ? n[H(j)] : 0; \
385
TCGv_ptr ptr1, ptr2, ptr3;
136
+ } \
386
TCGv_i64 tmp64;
137
+}
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
138
+
388
}
139
+DO_TBL(sve_tbl_b, uint8_t, H1)
389
return 0;
140
+DO_TBL(sve_tbl_h, uint16_t, H2)
390
141
+DO_TBL(sve_tbl_s, uint32_t, H4)
391
+ case 4: /* VSRI */
142
+DO_TBL(sve_tbl_d, uint64_t, )
392
+ if (!u) {
143
+
393
+ return 1;
144
+#undef TBL
394
+ }
145
+
395
+ /* Right shift comes here negative. */
146
+#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
396
+ shift = -shift;
147
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
397
+ /* Shift out of range leaves destination unchanged. */
148
+{ \
398
+ if (shift < 8 << size) {
149
+ intptr_t i, opr_sz = simd_oprsz(desc); \
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
150
+ TYPED *d = vd; \
400
+ shift, &sri_op[size]);
151
+ TYPES *n = vn; \
401
+ }
152
+ ARMVectorReg tmp; \
402
+ return 0;
153
+ if (unlikely(vn - vd < opr_sz)) { \
403
+
154
+ n = memcpy(&tmp, n, opr_sz / 2); \
404
case 5: /* VSHL, VSLI */
155
+ } \
405
- if (!u) { /* VSHL */
156
+ for (i = 0; i < opr_sz / sizeof(TYPED); i++) { \
406
+ if (u) { /* VSLI */
157
+ d[HD(i)] = n[HS(i)]; \
407
+ /* Shift out of range leaves destination unchanged. */
158
+ } \
408
+ if (shift < 8 << size) {
159
+}
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
160
+
410
+ vec_size, shift, &sli_op[size]);
161
+DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
411
+ }
162
+DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
412
+ } else { /* VSHL */
163
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
413
/* Shifts larger than the element size are
164
+
414
* architecturally valid and results in zero.
165
+DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
415
*/
166
+DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
167
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
168
+
418
vec_size, vec_size);
169
+#undef DO_UNPK
419
}
170
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
420
- return 0;
171
index XXXXXXX..XXXXXXX 100644
421
}
172
--- a/target/arm/translate-sve.c
422
- break;
173
+++ b/target/arm/translate-sve.c
423
+ return 0;
174
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
424
}
175
return true;
425
176
}
426
if (size == 3) {
177
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
178
+/*
428
else
179
+ *** SVE Permute - Unpredicated Group
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
180
+ */
430
break;
181
+
431
- case 4: /* VSRI */
182
+static bool trans_DUP_s(DisasContext *s, arg_DUP_s *a, uint32_t insn)
432
- case 5: /* VSHL, VSLI */
183
+{
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
184
+ if (sve_access_check(s)) {
434
- break;
185
+ unsigned vsz = vec_full_reg_size(s);
435
case 6: /* VQSHLU */
186
+ tcg_gen_gvec_dup_i64(a->esz, vec_full_reg_offset(s, a->rd),
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
187
+ vsz, vsz, cpu_reg_sp(s, a->rn));
437
cpu_V0, cpu_V1);
188
+ }
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
189
+ return true;
439
/* Accumulate. */
190
+}
440
neon_load_reg64(cpu_V1, rd + pass);
191
+
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
192
+static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a, uint32_t insn)
442
- } else if (op == 4 || (op == 5 && u)) {
193
+{
443
- /* Insert */
194
+ if ((a->imm & 0x1f) == 0) {
444
- neon_load_reg64(cpu_V1, rd + pass);
195
+ return false;
445
- uint64_t mask;
196
+ }
446
- if (shift < -63 || shift > 63) {
197
+ if (sve_access_check(s)) {
447
- mask = 0;
198
+ unsigned vsz = vec_full_reg_size(s);
448
- } else {
199
+ unsigned dofs = vec_full_reg_offset(s, a->rd);
449
- if (op == 4) {
200
+ unsigned esz, index;
450
- mask = 0xffffffffffffffffull >> -shift;
201
+
451
- } else {
202
+ esz = ctz32(a->imm);
452
- mask = 0xffffffffffffffffull << shift;
203
+ index = a->imm >> (esz + 1);
453
- }
204
+
454
- }
205
+ if ((index << esz) < vsz) {
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
206
+ unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
207
+ tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
457
}
208
+ } else {
458
neon_store_reg64(cpu_V0, rd + pass);
209
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, 0);
459
} else { /* size < 3 */
210
+ }
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
211
+ }
461
case 3: /* VRSRA */
212
+ return true;
462
GEN_NEON_INTEGER_OP(rshl);
213
+}
463
break;
214
+
464
- case 4: /* VSRI */
215
+static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
465
- case 5: /* VSHL, VSLI */
216
+{
466
- switch (size) {
217
+ typedef void gen_insr(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
218
+ static gen_insr * const fns[4] = {
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
219
+ gen_helper_sve_insr_b, gen_helper_sve_insr_h,
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
220
+ gen_helper_sve_insr_s, gen_helper_sve_insr_d,
470
- default: abort();
221
+ };
471
- }
222
+ unsigned vsz = vec_full_reg_size(s);
472
- break;
223
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
473
case 6: /* VQSHLU */
224
+ TCGv_ptr t_zd = tcg_temp_new_ptr();
474
switch (size) {
225
+ TCGv_ptr t_zn = tcg_temp_new_ptr();
475
case 0:
226
+
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
227
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, a->rd));
477
tmp2 = neon_load_reg(rd, pass);
228
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
478
gen_neon_add(size, tmp, tmp2);
229
+
479
tcg_temp_free_i32(tmp2);
230
+ fns[a->esz](t_zd, t_zn, val, desc);
480
- } else if (op == 4 || (op == 5 && u)) {
231
+
481
- /* Insert */
232
+ tcg_temp_free_ptr(t_zd);
482
- switch (size) {
233
+ tcg_temp_free_ptr(t_zn);
483
- case 0:
234
+ tcg_temp_free_i32(desc);
484
- if (op == 4)
235
+}
485
- mask = 0xff >> -shift;
236
+
486
- else
237
+static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
487
- mask = (uint8_t)(0xff << shift);
238
+{
488
- mask |= mask << 8;
239
+ if (sve_access_check(s)) {
489
- mask |= mask << 16;
240
+ TCGv_i64 t = tcg_temp_new_i64();
490
- break;
241
+ tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
491
- case 1:
242
+ do_insr_i64(s, a, t);
492
- if (op == 4)
243
+ tcg_temp_free_i64(t);
493
- mask = 0xffff >> -shift;
244
+ }
494
- else
245
+ return true;
495
- mask = (uint16_t)(0xffff << shift);
246
+}
496
- mask |= mask << 16;
247
+
497
- break;
248
+static bool trans_INSR_r(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
498
- case 2:
249
+{
499
- if (shift < -31 || shift > 31) {
250
+ if (sve_access_check(s)) {
500
- mask = 0;
251
+ do_insr_i64(s, a, cpu_reg(s, a->rm));
501
- } else {
252
+ }
502
- if (op == 4)
253
+ return true;
503
- mask = 0xffffffffu >> -shift;
254
+}
504
- else
255
+
505
- mask = 0xffffffffu << shift;
256
+static bool trans_REV_v(DisasContext *s, arg_rr_esz *a, uint32_t insn)
506
- }
257
+{
507
- break;
258
+ static gen_helper_gvec_2 * const fns[4] = {
508
- default:
259
+ gen_helper_sve_rev_b, gen_helper_sve_rev_h,
509
- abort();
260
+ gen_helper_sve_rev_s, gen_helper_sve_rev_d
510
- }
261
+ };
511
- tmp2 = neon_load_reg(rd, pass);
262
+
512
- tcg_gen_andi_i32(tmp, tmp, mask);
263
+ if (sve_access_check(s)) {
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
264
+ unsigned vsz = vec_full_reg_size(s);
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
265
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
515
- tcg_temp_free_i32(tmp2);
266
+ vec_full_reg_offset(s, a->rn),
516
}
267
+ vsz, vsz, 0, fns[a->esz]);
517
neon_store_reg(rd, pass, tmp);
268
+ }
518
}
269
+ return true;
270
+}
271
+
272
+static bool trans_TBL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
273
+{
274
+ static gen_helper_gvec_3 * const fns[4] = {
275
+ gen_helper_sve_tbl_b, gen_helper_sve_tbl_h,
276
+ gen_helper_sve_tbl_s, gen_helper_sve_tbl_d
277
+ };
278
+
279
+ if (sve_access_check(s)) {
280
+ unsigned vsz = vec_full_reg_size(s);
281
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
282
+ vec_full_reg_offset(s, a->rn),
283
+ vec_full_reg_offset(s, a->rm),
284
+ vsz, vsz, 0, fns[a->esz]);
285
+ }
286
+ return true;
287
+}
288
+
289
+static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
290
+{
291
+ static gen_helper_gvec_2 * const fns[4][2] = {
292
+ { NULL, NULL },
293
+ { gen_helper_sve_sunpk_h, gen_helper_sve_uunpk_h },
294
+ { gen_helper_sve_sunpk_s, gen_helper_sve_uunpk_s },
295
+ { gen_helper_sve_sunpk_d, gen_helper_sve_uunpk_d },
296
+ };
297
+
298
+ if (a->esz == 0) {
299
+ return false;
300
+ }
301
+ if (sve_access_check(s)) {
302
+ unsigned vsz = vec_full_reg_size(s);
303
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
304
+ vec_full_reg_offset(s, a->rn)
305
+ + (a->h ? vsz / 2 : 0),
306
+ vsz, vsz, 0, fns[a->esz][a->u]);
307
+ }
308
+ return true;
309
+}
310
+
311
/*
312
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
313
*/
314
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
315
index XXXXXXX..XXXXXXX 100644
316
--- a/target/arm/sve.decode
317
+++ b/target/arm/sve.decode
318
@@ -XXX,XX +XXX,XX @@
319
320
%imm4_16_p1 16:4 !function=plus1
321
%imm6_22_5 22:1 5:5
322
+%imm7_22_16 22:2 16:5
323
%imm8_16_10 16:5 10:3
324
%imm9_16_10 16:s6 10:3
325
326
@@ -XXX,XX +XXX,XX @@
327
328
# Three operand, vector element size
329
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
330
+@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
331
+ &rrr_esz rn=%reg_movprfx
332
333
# Three operand with "memory" size, aka immediate left shift
334
@rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
335
@@ -XXX,XX +XXX,XX @@ CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
336
EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
337
&rrri rn=%reg_movprfx imm=%imm8_16_10
338
339
+### SVE Permute - Unpredicated Group
340
+
341
+# SVE broadcast general register
342
+DUP_s 00000101 .. 1 00000 001110 ..... ..... @rd_rn
343
+
344
+# SVE broadcast indexed element
345
+DUP_x 00000101 .. 1 ..... 001000 rn:5 rd:5 \
346
+ &rri imm=%imm7_22_16
347
+
348
+# SVE insert SIMD&FP scalar register
349
+INSR_f 00000101 .. 1 10100 001110 ..... ..... @rdn_rm
350
+
351
+# SVE insert general register
352
+INSR_r 00000101 .. 1 00100 001110 ..... ..... @rdn_rm
353
+
354
+# SVE reverse vector elements
355
+REV_v 00000101 .. 1 11000 001110 ..... ..... @rd_rn
356
+
357
+# SVE vector table lookup
358
+TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
359
+
360
+# SVE unpack vector elements
361
+UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
362
+
363
### SVE Predicate Logical Operations Group
364
365
# SVE predicate logical operations
366
--
519
--
367
2.17.1
520
2.19.1
368
521
369
522
diff view generated by jsdifflib
1
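The sri_op and sli_op tables above both use the same mask-and-merge idea. As a rough scalar model of what one 8-bit lane of SRI (shift right and insert) computes -- an illustration only, not QEMU code, with a made-up helper name:

    #include <stdint.h>

    /* Keep the top 'shift' bits of d; fill the rest with a >> shift. */
    static uint8_t sri8(uint8_t d, uint8_t a, int shift)
    {
        uint8_t mask = 0xff >> shift;            /* bits taken from a */
        return (uint8_t)((d & ~mask) | ((a >> shift) & mask));
    }

SLI is the mirror image: the inserted bits are (a << shift) and the preserved bits are the low 'shift' bits of d, which is exactly the MAKE_64BIT_MASK(0, sh) mask used by gen_shl_ins_vec above.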
Convert the sh7750 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the sh4 r2d board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-2-peter.maydell@linaro.org
---
 hw/sh4/sh7750.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/hw/sh4/sh7750.c b/hw/sh4/sh7750.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sh4/sh7750.c
+++ b/hw/sh4/sh7750.c
@@ -XXX,XX +XXX,XX @@ static void sh7750_mem_writel(void *opaque, hwaddr addr,
 }
 }

+static uint64_t sh7750_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+ switch (size) {
+ case 1:
+ return sh7750_mem_readb(opaque, addr);
+ case 2:
+ return sh7750_mem_readw(opaque, addr);
+ case 4:
+ return sh7750_mem_readl(opaque, addr);
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void sh7750_mem_writefn(void *opaque, hwaddr addr,
+ uint64_t value, unsigned size)
+{
+ switch (size) {
+ case 1:
+ sh7750_mem_writeb(opaque, addr, value);
+ break;
+ case 2:
+ sh7750_mem_writew(opaque, addr, value);
+ break;
+ case 4:
+ sh7750_mem_writel(opaque, addr, value);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+}
+
 static const MemoryRegionOps sh7750_mem_ops = {
- .old_mmio = {
- .read = {sh7750_mem_readb,
- sh7750_mem_readw,
- sh7750_mem_readl },
- .write = {sh7750_mem_writeb,
- sh7750_mem_writew,
- sh7750_mem_writel },
- },
+ .read = sh7750_mem_readfn,
+ .write = sh7750_mem_writefn,
+ .valid.min_access_size = 1,
+ .valid.max_access_size = 4,
 .endianness = DEVICE_NATIVE_ENDIAN,
 };

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Move mla_op and mls_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h | 2 +
 target/arm/translate-a64.c | 106 -----------------------------
 target/arm/translate.c | 134 ++++++++++++++++++++++++++++++-----
 3 files changed, 120 insertions(+), 122 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
 extern const GVecGen3 bsl_op;
 extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
+extern const GVecGen3 mla_op[4];
+extern const GVecGen3 mls_op[4];
 extern const GVecGen2i ssra_op[4];
 extern const GVecGen2i usra_op[4];
 extern const GVecGen2i sri_op[4];
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
 }
 }

-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_add_u8(d, d, a);
-}
-
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_add_u16(d, d, a);
-}
-
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_sub_u8(d, d, a);
-}
-
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_sub_u16(d, d, a);
-}
-
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_sub_i32(d, d, a);
-}
-
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_sub_i64(d, d, a);
-}
-
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_sub_vec(vece, d, d, a);
-}
-
 /* Integer op subgroup of C3.6.16. */
 static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
 {
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
 .prefer_i64 = TCG_TARGET_REG_BITS == 64,
 .vece = MO_64 },
 };
- static const GVecGen3 mla_op[4] = {
- { .fni4 = gen_mla8_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_8 },
- { .fni4 = gen_mla16_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_mla32_i32,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_mla64_i64,
- .fniv = gen_mla_vec,
- .opc = INDEX_op_mul_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .vece = MO_64 },
- };
- static const GVecGen3 mls_op[4] = {
- { .fni4 = gen_mls8_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_8 },
- { .fni4 = gen_mls16_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_mls32_i32,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_mls64_i64,
- .fniv = gen_mls_vec,
- .opc = INDEX_op_mul_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .vece = MO_64 },
- };

 int is_q = extract32(insn, 30, 1);
 int u = extract32(insn, 29, 1);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
 #define NEON_3R_VABA 15
 #define NEON_3R_VADD_VSUB 16
 #define NEON_3R_VTST_VCEQ 17
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
+#define NEON_3R_VML 18 /* VMLA, VMLS */
 #define NEON_3R_VMUL 19
 #define NEON_3R_VPMAX 20
 #define NEON_3R_VPMIN 21
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
 .vece = MO_64 },
 };

+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_add_u8(d, d, a);
+}
+
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_sub_u8(d, d, a);
+}
+
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_add_u16(d, d, a);
+}
+
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_sub_u16(d, d, a);
+}
+
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_sub_i32(d, d, a);
+}
+
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_sub_i64(d, d, a);
+}
+
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_sub_vec(vece, d, d, a);
+}
+
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
+ * these tables are shared with AArch64 which does support them.
+ */
+const GVecGen3 mla_op[4] = {
+ { .fni4 = gen_mla8_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mla16_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mla32_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mla64_i64,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
+const GVecGen3 mls_op[4] = {
+ { .fni4 = gen_mls8_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mls16_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mls32_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mls64_i64,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
 /* Translate a NEON data processing instruction. Return nonzero if the
 instruction is invalid.
 We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 return 0;
 }
 break;
+
+ case NEON_3R_VML: /* VMLA, VMLS */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
+ u ? &mls_op[size] : &mla_op[size]);
+ return 0;
 }
+
 if (size == 3) {
 /* 64-bit element instructions. */
 for (pass = 0; pass < (q ? 2 : 1); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 }
 }
 break;
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- tcg_temp_free_i32(tmp2);
- tmp2 = neon_load_reg(rd, pass);
- if (u) { /* VMLS */
- gen_neon_rsb(size, tmp, tmp2);
- } else { /* VMLA */
- gen_neon_add(size, tmp, tmp2);
- }
- break;
 case NEON_3R_VMUL:
 /* VMUL.P8; other cases already eliminated. */
 gen_helper_neon_mul_p8(tmp, tmp, tmp2);
--
2.19.1
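Per element, the mla_op and mls_op tables implement plain multiply-accumulate. A scalar C sketch of one 32-bit lane (illustrative only; these helper names are invented):

    #include <stdint.h>

    static uint32_t vmla32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d + a * b;    /* VMLA: accumulate the product */
    }

    static uint32_t vmls32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d - a * b;    /* VMLS: subtract the product */
    }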
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 14 +++++++++++++
 target/arm/sve_helper.c | 41 +++++++++++++++++++++++++++++++-------
 target/arm/translate-sve.c | 38 +++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 7 +++++++
 4 files changed, 93 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_revb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_s(uint8_t byte)
 return word[byte & 0x11];
 }

+/* Swap 16-bit words within a 32-bit word. */
+static inline uint32_t hswap32(uint32_t h)
+{
+ return rol32(h, 16);
+}
+
+/* Swap 16-bit words within a 64-bit word. */
+static inline uint64_t hswap64(uint64_t h)
+{
+ uint64_t m = 0x0000ffff0000ffffull;
+ h = rol64(h, 32);
+ return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+/* Swap 32-bit words within a 64-bit word. */
+static inline uint64_t wswap64(uint64_t h)
+{
+ return rol64(h, 32);
+}
+
 #define LOGICAL_PPPP(NAME, FUNC) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 { \
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)

+DO_ZPZ(sve_revb_h, uint16_t, H1_2, bswap16)
+DO_ZPZ(sve_revb_s, uint32_t, H1_4, bswap32)
+DO_ZPZ_D(sve_revb_d, uint64_t, bswap64)
+
+DO_ZPZ(sve_revh_s, uint32_t, H1_4, hswap32)
+DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
+
+DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
+
+DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
+DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
+DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
+DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
+
 /* Three-operand expander, unpredicated, in which the third operand is "wide".
 */
 #define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
 }
 }

-static inline uint64_t hswap64(uint64_t h)
-{
- uint64_t m = 0x0000ffff0000ffffull;
- h = rol64(h, 32);
- return ((h & m) << 16) | ((h >> 16) & m);
-}
-
 void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
 {
 intptr_t i, j, opr_sz = simd_oprsz(desc);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 return true;
 }

+static bool trans_REVB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ NULL,
+ gen_helper_sve_revb_h,
+ gen_helper_sve_revb_s,
+ gen_helper_sve_revb_d,
+ };
+ return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ NULL,
+ NULL,
+ gen_helper_sve_revh_s,
+ gen_helper_sve_revh_d,
+ };
+ return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_revw_d : NULL);
+}
+
+static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_sve_rbit_b,
+ gen_helper_sve_rbit_h,
+ gen_helper_sve_rbit_s,
+ gen_helper_sve_rbit_d,
+ };
+ return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
 # SVE copy element from general register to vector (predicated)
 CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn

+# SVE reverse within elements
+# Note esz >= operation size
+REVB 00000101 .. 1001 00 100 ... ..... ..... @rd_pg_rn
+REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
+REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
+RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Move cmtst_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h | 2 +
 target/arm/translate-a64.c | 38 ------------------
 target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
 3 files changed, 60 insertions(+), 61 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
 extern const GVecGen3 mla_op[4];
 extern const GVecGen3 mls_op[4];
+extern const GVecGen3 cmtst_op[4];
 extern const GVecGen2i ssra_op[4];
 extern const GVecGen2i usra_op[4];
 extern const GVecGen2i sri_op[4];
 extern const GVecGen2i sli_op[4];
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);

 /*
 * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
 }
 }

-/* CMTST : test is "if (X & Y != 0)". */
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_and_i32(d, a, b);
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
- tcg_gen_neg_i32(d, d);
-}
-
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_and_i64(d, a, b);
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
- tcg_gen_neg_i64(d, d);
-}
-
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_and_vec(vece, d, a, b);
- tcg_gen_dupi_vec(vece, a, 0);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
-}
-
 static void handle_3same_64(DisasContext *s, int opcode, bool u,
 TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
 {
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
 /* Integer op subgroup of C3.6.16. */
 static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
 {
- static const GVecGen3 cmtst_op[4] = {
- { .fni4 = gen_helper_neon_tst_u8,
- .fniv = gen_cmtst_vec,
- .vece = MO_8 },
- { .fni4 = gen_helper_neon_tst_u16,
- .fniv = gen_cmtst_vec,
- .vece = MO_16 },
- { .fni4 = gen_cmtst_i32,
- .fniv = gen_cmtst_vec,
- .vece = MO_32 },
- { .fni8 = gen_cmtst_i64,
- .fniv = gen_cmtst_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .vece = MO_64 },
- };
-
 int is_q = extract32(insn, 30, 1);
 int u = extract32(insn, 29, 1);
 int size = extract32(insn, 22, 2);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
 .vece = MO_64 },
 };

+/* CMTST : test is "if (X & Y != 0)". */
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_and_i32(d, a, b);
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
+ tcg_gen_neg_i32(d, d);
+}
+
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_and_i64(d, a, b);
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
+ tcg_gen_neg_i64(d, d);
+}
+
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_and_vec(vece, d, a, b);
+ tcg_gen_dupi_vec(vece, a, 0);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
+}
+
+const GVecGen3 cmtst_op[4] = {
+ { .fni4 = gen_helper_neon_tst_u8,
+ .fniv = gen_cmtst_vec,
+ .vece = MO_8 },
+ { .fni4 = gen_helper_neon_tst_u16,
+ .fniv = gen_cmtst_vec,
+ .vece = MO_16 },
+ { .fni4 = gen_cmtst_i32,
+ .fniv = gen_cmtst_vec,
+ .vece = MO_32 },
+ { .fni8 = gen_cmtst_i64,
+ .fniv = gen_cmtst_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .vece = MO_64 },
+};
+
 /* Translate a NEON data processing instruction. Return nonzero if the
 instruction is invalid.
 We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
 u ? &mls_op[size] : &mla_op[size]);
 return 0;
+
+ case NEON_3R_VTST_VCEQ:
+ if (u) { /* VCEQ */
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ } else { /* VTST */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size, &cmtst_op[size]);
+ }
+ return 0;
+
+ case NEON_3R_VCGT:
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
+ return 0;
+
+ case NEON_3R_VCGE:
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
+ return 0;
 }

 if (size == 3) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 case NEON_3R_VQSUB:
 GEN_NEON_INTEGER_OP_ENV(qsub);
 break;
- case NEON_3R_VCGT:
- GEN_NEON_INTEGER_OP(cgt);
- break;
- case NEON_3R_VCGE:
- GEN_NEON_INTEGER_OP(cge);
- break;
 case NEON_3R_VSHL:
 GEN_NEON_INTEGER_OP(shl);
 break;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 tmp2 = neon_load_reg(rd, pass);
 gen_neon_add(size, tmp, tmp2);
 break;
- case NEON_3R_VTST_VCEQ:
- if (!u) { /* VTST */
- switch (size) {
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
- default: abort();
- }
- } else { /* VCEQ */
- switch (size) {
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
- break;
 case NEON_3R_VMUL:
 /* VMUL.P8; other cases already eliminated. */
 gen_helper_neon_mul_p8(tmp, tmp, tmp2);
--
2.19.1
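Two of the operations above are easy to mis-read, so a worked note may help. CMTST is defined as "test is 'if (X & Y != 0)'": a lane becomes all-ones when the operands share any set bit, all-zeroes otherwise. A scalar sketch (illustration only; the name is invented):

    #include <stdint.h>

    static uint32_t cmtst32(uint32_t a, uint32_t b)
    {
        return (a & b) != 0 ? UINT32_MAX : 0;
    }

Similarly, hswap64() above reverses the four 16-bit words of a 64-bit value: hswap64(0x0001000200030004) yields 0x0004000300020001.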
Convert the pflash_cfi02 device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180601141223.26630-4-peter.maydell@linaro.org
---
 hw/block/pflash_cfi02.c | 97 ++++++++---------------------------------
 1 file changed, 18 insertions(+), 79 deletions(-)

diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/pflash_cfi02.c
+++ b/hw/block/pflash_cfi02.c
@@ -XXX,XX +XXX,XX @@ static void pflash_write (pflash_t *pfl, hwaddr offset,
 pfl->cmd = 0;
 }

-
-static uint32_t pflash_readb_be(void *opaque, hwaddr addr)
+static uint64_t pflash_be_readfn(void *opaque, hwaddr addr, unsigned size)
 {
- return pflash_read(opaque, addr, 1, 1);
+ return pflash_read(opaque, addr, size, 1);
 }

-static uint32_t pflash_readb_le(void *opaque, hwaddr addr)
+static void pflash_be_writefn(void *opaque, hwaddr addr,
+ uint64_t value, unsigned size)
 {
- return pflash_read(opaque, addr, 1, 0);
+ pflash_write(opaque, addr, value, size, 1);
 }

-static uint32_t pflash_readw_be(void *opaque, hwaddr addr)
+static uint64_t pflash_le_readfn(void *opaque, hwaddr addr, unsigned size)
 {
- pflash_t *pfl = opaque;
-
- return pflash_read(pfl, addr, 2, 1);
+ return pflash_read(opaque, addr, size, 0);
 }

-static uint32_t pflash_readw_le(void *opaque, hwaddr addr)
+static void pflash_le_writefn(void *opaque, hwaddr addr,
+ uint64_t value, unsigned size)
 {
- pflash_t *pfl = opaque;
-
- return pflash_read(pfl, addr, 2, 0);
-}
-
-static uint32_t pflash_readl_be(void *opaque, hwaddr addr)
-{
- pflash_t *pfl = opaque;
-
- return pflash_read(pfl, addr, 4, 1);
-}
-
-static uint32_t pflash_readl_le(void *opaque, hwaddr addr)
-{
- pflash_t *pfl = opaque;
-
- return pflash_read(pfl, addr, 4, 0);
-}
-
-static void pflash_writeb_be(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_write(opaque, addr, value, 1, 1);
-}
-
-static void pflash_writeb_le(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_write(opaque, addr, value, 1, 0);
-}
-
-static void pflash_writew_be(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_t *pfl = opaque;
-
- pflash_write(pfl, addr, value, 2, 1);
-}
-
-static void pflash_writew_le(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_t *pfl = opaque;
-
- pflash_write(pfl, addr, value, 2, 0);
-}
-
-static void pflash_writel_be(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_t *pfl = opaque;
-
- pflash_write(pfl, addr, value, 4, 1);
-}
-
-static void pflash_writel_le(void *opaque, hwaddr addr,
- uint32_t value)
-{
- pflash_t *pfl = opaque;
-
- pflash_write(pfl, addr, value, 4, 0);
+ pflash_write(opaque, addr, value, size, 0);
 }

 static const MemoryRegionOps pflash_cfi02_ops_be = {
- .old_mmio = {
- .read = { pflash_readb_be, pflash_readw_be, pflash_readl_be, },
- .write = { pflash_writeb_be, pflash_writew_be, pflash_writel_be, },
- },
+ .read = pflash_be_readfn,
+ .write = pflash_be_writefn,
+ .valid.min_access_size = 1,
+ .valid.max_access_size = 4,
 .endianness = DEVICE_NATIVE_ENDIAN,
 };

 static const MemoryRegionOps pflash_cfi02_ops_le = {
- .old_mmio = {
- .read = { pflash_readb_le, pflash_readw_le, pflash_readl_le, },
- .write = { pflash_writeb_le, pflash_writew_le, pflash_writel_le, },
- },
+ .read = pflash_le_readfn,
+ .write = pflash_le_writefn,
+ .valid.min_access_size = 1,
+ .valid.max_access_size = 4,
 .endianness = DEVICE_NATIVE_ENDIAN,
 };

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
[PMM: added parens in ?: expression]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 81 ++++++++++++++----------------------------
 1 file changed, 26 insertions(+), 55 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
 tcg_temp_free_i32(tmp);
 }

-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
-{
- TCGv_i32 tmp = tcg_temp_new_i32();
- if (shift)
- tcg_gen_shri_i32(var, var, shift);
- tcg_gen_ext8u_i32(var, var);
- tcg_gen_shli_i32(tmp, var, 8);
- tcg_gen_or_i32(var, var, tmp);
- tcg_gen_shli_i32(tmp, var, 16);
- tcg_gen_or_i32(var, var, tmp);
- tcg_temp_free_i32(tmp);
-}
-
 static void gen_neon_dup_low16(TCGv_i32 var)
 {
 TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
 tcg_temp_free_i32(tmp);
 }

-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
-{
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
- TCGv_i32 tmp = tcg_temp_new_i32();
- switch (size) {
- case 0:
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_u8(tmp, 0);
- break;
- case 1:
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_low16(tmp);
- break;
- case 2:
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- break;
- default: /* Avoid compiler warnings. */
- abort();
- }
- return tmp;
-}
-
 static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
 uint32_t dp)
 {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 int load;
 int shift;
 int n;
+ int vec_size;
 TCGv_i32 addr;
 TCGv_i32 tmp;
 TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 }
 addr = tcg_temp_new_i32();
 load_reg_var(s, addr, rn);
- if (nregs == 1) {
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- if (insn & (1 << 5)) {
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
- }
- tcg_temp_free_i32(tmp);
- } else {
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
- stride = (insn & (1 << 5)) ? 2 : 1;
- for (reg = 0; reg < nregs; reg++) {
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- tcg_temp_free_i32(tmp);
- tcg_gen_addi_i32(addr, addr, 1 << size);
- rd += stride;
+
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
+ */
+ stride = (insn & (1 << 5)) ? 2 : 1;
+ vec_size = nregs == 1 ? stride * 8 : 8;
+
+ tmp = tcg_temp_new_i32();
+ for (reg = 0; reg < nregs; reg++) {
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+ s->be_data | size);
+ if ((rd & 1) && vec_size == 16) {
+ /* We cannot write 16 bytes at once because the
+ * destination is unaligned.
+ */
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ 8, 8, tmp);
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
+ neon_reg_offset(rd, 0), 8, 8);
+ } else {
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ vec_size, vec_size, tmp);
 }
+ tcg_gen_addi_i32(addr, addr, 1 << size);
+ rd += stride;
 }
+ tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(addr);
 stride = (1 << size) * nregs;
 } else {
--
2.19.1
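The tcg_gen_gvec_dup_i32() calls above implement the "load one element and replicate it to all lanes" semantics. A plain-C sketch of that replication step (illustration only; the names are invented):

    #include <stddef.h>
    #include <stdint.h>

    static void dup_lanes(uint8_t *reg, size_t vsize,
                          const uint8_t *elem, size_t esize)
    {
        for (size_t i = 0; i < vsize; i += esize) {
            for (size_t j = 0; j < esize; j++) {
                reg[i + j] = elem[j];    /* copy the element into lane i */
            }
        }
    }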
Convert the mcf5206 device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps. This device is used by the an5206 board.
2
3
3
Instead of shifts and masks, use direct loads and stores from the neon
4
register file. Mirror the iteration structure of the ARM pseudocode
5
more closely. Correct the parameters of the VLD2 A2 insn.
6
7
Note that this includes a bugfix for handling of the insn
8
"VLD2 (multiple 2-element structures)" -- we were using an
9
incorrect stride value.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Acked-by: Thomas Huth <huth@tuxfamily.org>
6
Message-id: 20180601141223.26630-3-peter.maydell@linaro.org
7
---
15
---
8
hw/m68k/mcf5206.c | 48 +++++++++++++++++++++++++++++++++++------------
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
9
1 file changed, 36 insertions(+), 12 deletions(-)
17
1 file changed, 74 insertions(+), 96 deletions(-)
10
18
11
diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/m68k/mcf5206.c
21
--- a/target/arm/translate.c
14
+++ b/hw/m68k/mcf5206.c
22
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static void m5206_mbar_writel(void *opaque, hwaddr offset,
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
16
m5206_mbar_write(s, offset, value, 4);
24
return tmp;
17
}
25
}
18
26
19
+static uint64_t m5206_mbar_readfn(void *opaque, hwaddr addr, unsigned size)
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
20
+{
28
+{
21
+ switch (size) {
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
22
+ case 1:
30
+
23
+ return m5206_mbar_readb(opaque, addr);
31
+ switch (mop) {
24
+ case 2:
32
+ case MO_UB:
25
+ return m5206_mbar_readw(opaque, addr);
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
26
+ case 4:
34
+ break;
27
+ return m5206_mbar_readl(opaque, addr);
35
+ case MO_UW:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
37
+ break;
38
+ case MO_UL:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
40
+ break;
41
+ case MO_Q:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
43
+ break;
28
+ default:
44
+ default:
29
+ g_assert_not_reached();
45
+ g_assert_not_reached();
30
+ }
46
+ }
31
+}
47
+}
32
+
48
+
33
+static void m5206_mbar_writefn(void *opaque, hwaddr addr,
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
34
+ uint64_t value, unsigned size)
50
{
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
52
tcg_temp_free_i32(var);
53
}
54
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
35
+{
56
+{
57
+ long offset = neon_element_offset(reg, ele, size);
58
+
36
+ switch (size) {
59
+ switch (size) {
37
+ case 1:
60
+ case MO_8:
38
+ m5206_mbar_writeb(opaque, addr, value);
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
39
+ break;
62
+ break;
40
+ case 2:
63
+ case MO_16:
41
+ m5206_mbar_writew(opaque, addr, value);
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
42
+ break;
65
+ break;
43
+ case 4:
66
+ case MO_32:
44
+ m5206_mbar_writel(opaque, addr, value);
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
68
+ break;
69
+ case MO_64:
70
+ tcg_gen_st_i64(var, cpu_env, offset);
45
+ break;
71
+ break;
46
+ default:
72
+ default:
47
+ g_assert_not_reached();
73
+ g_assert_not_reached();
48
+ }
74
+ }
49
+}
75
+}
50
+
76
+
51
static const MemoryRegionOps m5206_mbar_ops = {
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
52
- .old_mmio = {
78
{
53
- .read = {
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
54
- m5206_mbar_readb,
80
@@ -XXX,XX +XXX,XX @@ static struct {
55
- m5206_mbar_readw,
81
int interleave;
56
- m5206_mbar_readl,
82
int spacing;
57
- },
83
} const neon_ls_element_type[11] = {
58
- .write = {
84
- {4, 4, 1},
59
- m5206_mbar_writeb,
85
- {4, 4, 2},
60
- m5206_mbar_writew,
86
+ {1, 4, 1},
61
- m5206_mbar_writel,
87
+ {1, 4, 2},
62
- },
88
{4, 1, 1},
63
- },
89
- {4, 2, 1},
64
+ .read = m5206_mbar_readfn,
90
- {3, 3, 1},
65
+ .write = m5206_mbar_writefn,
91
- {3, 3, 2},
66
+ .valid.min_access_size = 1,
92
+ {2, 2, 2},
67
+ .valid.max_access_size = 4,
93
+ {1, 3, 1},
68
.endianness = DEVICE_NATIVE_ENDIAN,
94
+ {1, 3, 2},
95
{3, 1, 1},
96
{1, 1, 1},
97
- {2, 2, 1},
98
- {2, 2, 2},
99
+ {1, 2, 1},
100
+ {1, 2, 2},
101
{2, 1, 1}
69
};
102
};
70
103
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
105
int shift;
106
int n;
107
int vec_size;
108
+ int mmu_idx;
109
+ TCGMemOp endian;
110
TCGv_i32 addr;
111
TCGv_i32 tmp;
112
TCGv_i32 tmp2;
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
114
rn = (insn >> 16) & 0xf;
115
rm = insn & 0xf;
116
load = (insn & (1 << 21)) != 0;
117
+ endian = s->be_data;
118
+ mmu_idx = get_mem_index(s);
119
if ((insn & (1 << 23)) == 0) {
120
/* Load store all elements. */
121
op = (insn >> 8) & 0xf;
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
123
nregs = neon_ls_element_type[op].nregs;
124
interleave = neon_ls_element_type[op].interleave;
125
spacing = neon_ls_element_type[op].spacing;
126
- if (size == 3 && (interleave | spacing) != 1)
127
+ if (size == 3 && (interleave | spacing) != 1) {
128
return 1;
129
+ }
130
+ tmp64 = tcg_temp_new_i64();
131
addr = tcg_temp_new_i32();
132
+ tmp2 = tcg_const_i32(1 << size);
133
load_reg_var(s, addr, rn);
134
- stride = (1 << size) * interleave;
135
for (reg = 0; reg < nregs; reg++) {
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
137
- load_reg_var(s, addr, rn);
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
140
- load_reg_var(s, addr, rn);
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
142
- }
143
- if (size == 3) {
144
- tmp64 = tcg_temp_new_i64();
145
- if (load) {
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
147
- neon_store_reg64(tmp64, rd);
148
- } else {
149
- neon_load_reg64(tmp64, rd);
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
151
- }
152
- tcg_temp_free_i64(tmp64);
153
- tcg_gen_addi_i32(addr, addr, stride);
154
- } else {
155
- for (pass = 0; pass < 2; pass++) {
156
- if (size == 2) {
157
- if (load) {
158
- tmp = tcg_temp_new_i32();
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
160
- neon_store_reg(rd, pass, tmp);
161
- } else {
162
- tmp = neon_load_reg(rd, pass);
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
164
- tcg_temp_free_i32(tmp);
165
- }
166
- tcg_gen_addi_i32(addr, addr, stride);
167
- } else if (size == 1) {
168
- if (load) {
169
- tmp = tcg_temp_new_i32();
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
171
- tcg_gen_addi_i32(addr, addr, stride);
172
- tmp2 = tcg_temp_new_i32();
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
174
- tcg_gen_addi_i32(addr, addr, stride);
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
177
- tcg_temp_free_i32(tmp2);
178
- neon_store_reg(rd, pass, tmp);
179
- } else {
180
- tmp = neon_load_reg(rd, pass);
181
- tmp2 = tcg_temp_new_i32();
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
184
- tcg_temp_free_i32(tmp);
185
- tcg_gen_addi_i32(addr, addr, stride);
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
187
- tcg_temp_free_i32(tmp2);
188
- tcg_gen_addi_i32(addr, addr, stride);
189
- }
190
- } else /* size == 0 */ {
191
- if (load) {
192
- tmp2 = NULL;
193
- for (n = 0; n < 4; n++) {
194
- tmp = tcg_temp_new_i32();
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
196
- tcg_gen_addi_i32(addr, addr, stride);
197
- if (n == 0) {
198
- tmp2 = tmp;
199
- } else {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
202
- tcg_temp_free_i32(tmp);
203
- }
204
- }
205
- neon_store_reg(rd, pass, tmp2);
206
- } else {
207
- tmp2 = neon_load_reg(rd, pass);
208
- for (n = 0; n < 4; n++) {
209
- tmp = tcg_temp_new_i32();
210
- if (n == 0) {
211
- tcg_gen_mov_i32(tmp, tmp2);
212
- } else {
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
214
- }
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
216
- tcg_temp_free_i32(tmp);
217
- tcg_gen_addi_i32(addr, addr, stride);
218
- }
219
- tcg_temp_free_i32(tmp2);
220
- }
221
+ for (n = 0; n < 8 >> size; n++) {
222
+ int xs;
223
+ for (xs = 0; xs < interleave; xs++) {
224
+ int tt = rd + reg + spacing * xs;
225
+
226
+ if (load) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
228
+ neon_store_element64(tt, n, size, tmp64);
229
+ } else {
230
+ neon_load_element64(tmp64, tt, n, size);
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
232
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
234
}
235
}
236
- rd += spacing;
237
}
238
tcg_temp_free_i32(addr);
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
71
--
246
--
72
2.17.1
247
2.19.1
73
248
74
249
1
From: Julia Suvorova <jusual@mail.ru>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
3
For a sequence of loads or stores from a single register,
4
instructions and allows their execution.
4
little-endian operations can be promoted to an 8-byte op.
5
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.
5
This can reduce the number of operations by a factor of 8.
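(Editor's illustration of the promotion above, not part of the patch: with a
little-endian view, eight consecutive byte elements occupy memory exactly like
one 64-bit little-endian value, so a single 8-byte access can stand in for
eight 1-byte accesses. The helper below is a hypothetical sketch.)

    #include <stdint.h>
    #include <string.h>

    /* Sketch only: one 8-byte access replaces eight 1-byte loads.
     * Byte n of the copied value is element n, matching the
     * little-endian element order, so no per-byte shifts are needed. */
    static uint64_t load_bytes_promoted(const uint8_t *mem)
    {
        uint64_t v;
        memcpy(&v, mem, sizeof(v));
        return v;
    }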
6
6
7
This patch is required for future Cortex-M0 support.
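(Editor's sketch, not part of the patch: how the armv6m_insn[]/armv6m_mask[]
tables added below classify an instruction word; a candidate matches when the
bits selected by the mask equal the required pattern. The encodings here are
the barrier patterns taken from the patch.)

    #include <stdbool.h>
    #include <stdint.h>

    /* Match when all bits selected by the mask equal the pattern. */
    static bool is_v6m_barrier(uint32_t insn)
    {
        return (insn & 0xfff0d0f0) == 0xf3b08040    /* dsb */
            || (insn & 0xfff0d0f0) == 0xf3b08050    /* dmb */
            || (insn & 0xfff0d0f0) == 0xf3b08060;   /* isb */
    }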
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
8
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
9
Signed-off-by: Julia Suvorova <jusual@mail.ru>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Message-id: 20180612204632.28780-1-jusual@mail.ru
12
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
13
point of use, and mark 'const'. Check for M-and-not-v7
14
rather than M-and-6.]
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
12
---
18
target/arm/translate.c | 43 +++++++++++++++++++++++++++++++++++++-----
13
target/arm/translate.c | 10 ++++++++++
19
1 file changed, 38 insertions(+), 5 deletions(-)
14
1 file changed, 10 insertions(+)
20
15
21
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/translate.c
18
--- a/target/arm/translate.c
24
+++ b/target/arm/translate.c
19
+++ b/target/arm/translate.c
25
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
26
* end up actually treating this as two 16-bit insns, though,
21
if (size == 3 && (interleave | spacing) != 1) {
27
* if it's half of a bl/blx pair that might span a page boundary.
22
return 1;
28
*/
23
}
29
- if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
24
+ /* For our purposes, bytes are always little-endian. */
30
+ if (arm_dc_feature(s, ARM_FEATURE_THUMB2) ||
25
+ if (size == 0) {
31
+ arm_dc_feature(s, ARM_FEATURE_M)) {
26
+ endian = MO_LE;
32
/* Thumb2 cores (including all M profile ones) always treat
33
* 32-bit insns as 32-bit.
34
*/
35
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
36
int conds;
37
int logic_cc;
38
39
- /* The only 32 bit insn that's allowed for Thumb1 is the combined
40
- * BL/BLX prefix and suffix.
41
+ /*
42
+ * ARMv6-M supports a limited subset of Thumb2 instructions.
43
+ * Other Thumb1 architectures allow only 32-bit
44
+ * combined BL/BLX prefix and suffix.
45
*/
46
- if ((insn & 0xf800e800) != 0xf000e800) {
47
+ if (arm_dc_feature(s, ARM_FEATURE_M) &&
48
+ !arm_dc_feature(s, ARM_FEATURE_V7)) {
49
+ int i;
50
+ bool found = false;
51
+ const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
52
+ 0xf3b08040 /* dsb */,
53
+ 0xf3b08050 /* dmb */,
54
+ 0xf3b08060 /* isb */,
55
+ 0xf3e08000 /* mrs */,
56
+ 0xf000d000 /* bl */};
57
+ const uint32_t armv6m_mask[] = {0xffe0d000,
58
+ 0xfff0d0f0,
59
+ 0xfff0d0f0,
60
+ 0xfff0d0f0,
61
+ 0xffe0d000,
62
+ 0xf800d000};
63
+
64
+ for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
65
+ if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
66
+ found = true;
67
+ break;
68
+ }
69
+ }
27
+ }
70
+ if (!found) {
28
+ /* Consecutive little-endian elements from a single register
71
+ goto illegal_op;
29
+ * can be promoted to a larger little-endian operation.
30
+ */
31
+ if (interleave == 1 && endian == MO_LE) {
32
+ size = 3;
72
+ }
33
+ }
73
+ } else if ((insn & 0xf800e800) != 0xf000e800) {
34
tmp64 = tcg_temp_new_i64();
74
ARCH(6T2);
35
addr = tcg_temp_new_i32();
75
}
36
tmp2 = tcg_const_i32(1 << size);
76
77
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
78
}
79
break;
80
case 3: /* Special control operations. */
81
- ARCH(7);
82
+ if (!arm_dc_feature(s, ARM_FEATURE_V7) &&
83
+ !(arm_dc_feature(s, ARM_FEATURE_V6) &&
84
+ arm_dc_feature(s, ARM_FEATURE_M))) {
85
+ goto illegal_op;
86
+ }
87
op = (insn >> 4) & 0xf;
88
switch (op) {
89
case 2: /* clrex */
90
--
37
--
91
2.17.1
38
2.19.1
92
39
93
40
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from
4
the neon register file.
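(Editor's sketch, not part of the patch: the difference between extracting a
16-bit element with shifts and masks and loading it directly at its byte
offset. The flat register array is hypothetical and assumes a little-endian
host layout.)

    #include <stdint.h>

    static uint64_t regfile[32];   /* hypothetical: 4 x 16-bit elements each */

    /* Old style: load the whole doubleword, then shift and mask. */
    static uint16_t get_u16_shiftmask(int reg, int ele)
    {
        return (uint16_t)(regfile[reg] >> (16 * ele));
    }

    /* New style: read the element directly at its offset. */
    static uint16_t get_u16_direct(int reg, int ele)
    {
        return ((const uint16_t *)&regfile[reg])[ele];
    }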
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-7-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 2 +
11
target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
9
target/arm/sve_helper.c | 12 ++
12
1 file changed, 50 insertions(+), 42 deletions(-)
10
target/arm/translate-sve.c | 328 +++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 20 +++
12
4 files changed, 362 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/target/arm/translate.c
17
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
19
DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
return tmp;
20
DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
}
21
21
22
+DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
22
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
23
+{
24
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
23
+
25
+
24
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
+ switch (mop) {
25
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
+ case MO_UB:
26
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
+ tcg_gen_ld8u_i32(var, cpu_env, offset);
27
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve_helper.c
30
+++ b/target/arm/sve_helper.c
31
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
32
d[j] = 0;
33
}
34
}
35
+
36
+/* Similar to the ARM LastActiveElement pseudocode function, except the
37
+ * result is multiplied by the element size. This includes the not found
38
+ * indication; e.g. not found for esz=3 is -8.
39
+ */
40
+int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
41
+{
42
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
43
+ intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
44
+
45
+ return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
46
+}
47
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-sve.c
50
+++ b/target/arm/translate-sve.c
51
@@ -XXX,XX +XXX,XX @@ static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
52
return do_zpz_ool(s, a, fns[a->esz]);
53
}
54
55
+/* Call the helper that computes the ARM LastActiveElement pseudocode
56
+ * function, scaled by the element size. This includes the not found
57
+ * indication; e.g. not found for esz=3 is -8.
58
+ */
59
+static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
60
+{
61
+ /* Predicate sizes may be smaller and cannot use simd_desc. We cannot
62
+ * round up, as we do elsewhere, because we need the exact size.
63
+ */
64
+ TCGv_ptr t_p = tcg_temp_new_ptr();
65
+ TCGv_i32 t_desc;
66
+ unsigned vsz = pred_full_reg_size(s);
67
+ unsigned desc;
68
+
69
+ desc = vsz - 2;
70
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
71
+
72
+ tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
73
+ t_desc = tcg_const_i32(desc);
74
+
75
+ gen_helper_sve_last_active_element(ret, t_p, t_desc);
76
+
77
+ tcg_temp_free_i32(t_desc);
78
+ tcg_temp_free_ptr(t_p);
79
+}
80
+
81
+/* Increment LAST to the offset of the next element in the vector,
82
+ * wrapping around to 0.
83
+ */
84
+static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
85
+{
86
+ unsigned vsz = vec_full_reg_size(s);
87
+
88
+ tcg_gen_addi_i32(last, last, 1 << esz);
89
+ if (is_power_of_2(vsz)) {
90
+ tcg_gen_andi_i32(last, last, vsz - 1);
91
+ } else {
92
+ TCGv_i32 max = tcg_const_i32(vsz);
93
+ TCGv_i32 zero = tcg_const_i32(0);
94
+ tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
95
+ tcg_temp_free_i32(max);
96
+ tcg_temp_free_i32(zero);
97
+ }
98
+}
99
+
100
+/* If LAST < 0, set LAST to the offset of the last element in the vector. */
101
+static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
102
+{
103
+ unsigned vsz = vec_full_reg_size(s);
104
+
105
+ if (is_power_of_2(vsz)) {
106
+ tcg_gen_andi_i32(last, last, vsz - 1);
107
+ } else {
108
+ TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
109
+ TCGv_i32 zero = tcg_const_i32(0);
110
+ tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
111
+ tcg_temp_free_i32(max);
112
+ tcg_temp_free_i32(zero);
113
+ }
114
+}
115
+
116
+/* Load an unsigned element of ESZ from BASE+OFS. */
117
+static TCGv_i64 load_esz(TCGv_ptr base, int ofs, int esz)
118
+{
119
+ TCGv_i64 r = tcg_temp_new_i64();
120
+
121
+ switch (esz) {
122
+ case 0:
123
+ tcg_gen_ld8u_i64(r, base, ofs);
124
+ break;
29
+ break;
125
+ case 1:
30
+ case MO_UW:
126
+ tcg_gen_ld16u_i64(r, base, ofs);
31
+ tcg_gen_ld16u_i32(var, cpu_env, offset);
127
+ break;
32
+ break;
128
+ case 2:
33
+ case MO_UL:
129
+ tcg_gen_ld32u_i64(r, base, ofs);
34
+ tcg_gen_ld_i32(var, cpu_env, offset);
130
+ break;
131
+ case 3:
132
+ tcg_gen_ld_i64(r, base, ofs);
133
+ break;
35
+ break;
134
+ default:
36
+ default:
135
+ g_assert_not_reached();
37
+ g_assert_not_reached();
136
+ }
38
+ }
137
+ return r;
138
+}
39
+}
139
+
40
+
140
+/* Load an unsigned element of ESZ from RM[LAST]. */
41
static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
141
+static TCGv_i64 load_last_active(DisasContext *s, TCGv_i32 last,
42
{
142
+ int rm, int esz)
43
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
44
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
45
tcg_temp_free_i32(var);
46
}
47
48
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
143
+{
49
+{
144
+ TCGv_ptr p = tcg_temp_new_ptr();
50
+ long offset = neon_element_offset(reg, ele, size);
145
+ TCGv_i64 r;
146
+
51
+
147
+ /* Convert offset into vector into offset into ENV.
52
+ switch (size) {
148
+ * The final adjustment for the vector register base
53
+ case MO_8:
149
+ * is added via constant offset to the load.
54
+ tcg_gen_st8_i32(var, cpu_env, offset);
150
+ */
151
+#ifdef HOST_WORDS_BIGENDIAN
152
+ /* Adjust for element ordering. See vec_reg_offset. */
153
+ if (esz < 3) {
154
+ tcg_gen_xori_i32(last, last, 8 - (1 << esz));
155
+ }
156
+#endif
157
+ tcg_gen_ext_i32_ptr(p, last);
158
+ tcg_gen_add_ptr(p, p, cpu_env);
159
+
160
+ r = load_esz(p, vec_full_reg_offset(s, rm), esz);
161
+ tcg_temp_free_ptr(p);
162
+
163
+ return r;
164
+}
165
+
166
+/* Compute CLAST for a Zreg. */
167
+static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
168
+{
169
+ TCGv_i32 last;
170
+ TCGLabel *over;
171
+ TCGv_i64 ele;
172
+ unsigned vsz, esz = a->esz;
173
+
174
+ if (!sve_access_check(s)) {
175
+ return true;
176
+ }
177
+
178
+ last = tcg_temp_local_new_i32();
179
+ over = gen_new_label();
180
+
181
+ find_last_active(s, last, esz, a->pg);
182
+
183
+ /* There is of course no movcond for a 2048-bit vector,
184
+ * so we must branch over the actual store.
185
+ */
186
+ tcg_gen_brcondi_i32(TCG_COND_LT, last, 0, over);
187
+
188
+ if (!before) {
189
+ incr_last_active(s, last, esz);
190
+ }
191
+
192
+ ele = load_last_active(s, last, a->rm, esz);
193
+ tcg_temp_free_i32(last);
194
+
195
+ vsz = vec_full_reg_size(s);
196
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd), vsz, vsz, ele);
197
+ tcg_temp_free_i64(ele);
198
+
199
+ /* If this insn used MOVPRFX, we may need a second move. */
200
+ if (a->rd != a->rn) {
201
+ TCGLabel *done = gen_new_label();
202
+ tcg_gen_br(done);
203
+
204
+ gen_set_label(over);
205
+ do_mov_z(s, a->rd, a->rn);
206
+
207
+ gen_set_label(done);
208
+ } else {
209
+ gen_set_label(over);
210
+ }
211
+ return true;
212
+}
213
+
214
+static bool trans_CLASTA_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
215
+{
216
+ return do_clast_vector(s, a, false);
217
+}
218
+
219
+static bool trans_CLASTB_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
220
+{
221
+ return do_clast_vector(s, a, true);
222
+}
223
+
224
+/* Compute CLAST for a scalar. */
225
+static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
226
+ bool before, TCGv_i64 reg_val)
227
+{
228
+ TCGv_i32 last = tcg_temp_new_i32();
229
+ TCGv_i64 ele, cmp, zero;
230
+
231
+ find_last_active(s, last, esz, pg);
232
+
233
+ /* Extend the original value of last prior to incrementing. */
234
+ cmp = tcg_temp_new_i64();
235
+ tcg_gen_ext_i32_i64(cmp, last);
236
+
237
+ if (!before) {
238
+ incr_last_active(s, last, esz);
239
+ }
240
+
241
+ /* The conceit here is that while last < 0 indicates not found, after
242
+ * adjusting for cpu_env->vfp.zregs[rm], it is still a valid address
243
+ * from which we can load garbage. We then discard the garbage with
244
+ * a conditional move.
245
+ */
246
+ ele = load_last_active(s, last, rm, esz);
247
+ tcg_temp_free_i32(last);
248
+
249
+ zero = tcg_const_i64(0);
250
+ tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
251
+
252
+ tcg_temp_free_i64(zero);
253
+ tcg_temp_free_i64(cmp);
254
+ tcg_temp_free_i64(ele);
255
+}
256
+
257
+/* Compute CLAST for a Vreg. */
258
+static bool do_clast_fp(DisasContext *s, arg_rpr_esz *a, bool before)
259
+{
260
+ if (sve_access_check(s)) {
261
+ int esz = a->esz;
262
+ int ofs = vec_reg_offset(s, a->rd, 0, esz);
263
+ TCGv_i64 reg = load_esz(cpu_env, ofs, esz);
264
+
265
+ do_clast_scalar(s, esz, a->pg, a->rn, before, reg);
266
+ write_fp_dreg(s, a->rd, reg);
267
+ tcg_temp_free_i64(reg);
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_CLASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
273
+{
274
+ return do_clast_fp(s, a, false);
275
+}
276
+
277
+static bool trans_CLASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
278
+{
279
+ return do_clast_fp(s, a, true);
280
+}
281
+
282
+/* Compute CLAST for a Xreg. */
283
+static bool do_clast_general(DisasContext *s, arg_rpr_esz *a, bool before)
284
+{
285
+ TCGv_i64 reg;
286
+
287
+ if (!sve_access_check(s)) {
288
+ return true;
289
+ }
290
+
291
+ reg = cpu_reg(s, a->rd);
292
+ switch (a->esz) {
293
+ case 0:
294
+ tcg_gen_ext8u_i64(reg, reg);
295
+ break;
55
+ break;
296
+ case 1:
56
+ case MO_16:
297
+ tcg_gen_ext16u_i64(reg, reg);
57
+ tcg_gen_st16_i32(var, cpu_env, offset);
298
+ break;
58
+ break;
299
+ case 2:
59
+ case MO_32:
300
+ tcg_gen_ext32u_i64(reg, reg);
60
+ tcg_gen_st_i32(var, cpu_env, offset);
301
+ break;
302
+ case 3:
303
+ break;
61
+ break;
304
+ default:
62
+ default:
305
+ g_assert_not_reached();
63
+ g_assert_not_reached();
306
+ }
64
+ }
307
+
308
+ do_clast_scalar(s, a->esz, a->pg, a->rn, before, reg);
309
+ return true;
310
+}
65
+}
311
+
66
+
312
+static bool trans_CLASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
67
static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
313
+{
68
{
314
+ return do_clast_general(s, a, false);
69
long offset = neon_element_offset(reg, ele, size);
315
+}
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
316
+
71
int stride;
317
+static bool trans_CLASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
72
int size;
318
+{
73
int reg;
319
+ return do_clast_general(s, a, true);
74
- int pass;
320
+}
75
int load;
321
+
76
- int shift;
322
+/* Compute LAST for a scalar. */
77
int n;
323
+static TCGv_i64 do_last_scalar(DisasContext *s, int esz,
78
int vec_size;
324
+ int pg, int rm, bool before)
79
int mmu_idx;
325
+{
80
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
326
+ TCGv_i32 last = tcg_temp_new_i32();
81
} else {
327
+ TCGv_i64 ret;
82
/* Single element. */
328
+
83
int idx = (insn >> 4) & 0xf;
329
+ find_last_active(s, last, esz, pg);
84
- pass = (insn >> 7) & 1;
330
+ if (before) {
85
+ int reg_idx;
331
+ wrap_last_active(s, last, esz);
86
switch (size) {
332
+ } else {
87
case 0:
333
+ incr_last_active(s, last, esz);
88
- shift = ((insn >> 5) & 3) * 8;
334
+ }
89
+ reg_idx = (insn >> 5) & 7;
335
+
90
stride = 1;
336
+ ret = load_last_active(s, last, rm, esz);
91
break;
337
+ tcg_temp_free_i32(last);
92
case 1:
338
+ return ret;
93
- shift = ((insn >> 6) & 1) * 16;
339
+}
94
+ reg_idx = (insn >> 6) & 3;
340
+
95
stride = (insn & (1 << 5)) ? 2 : 1;
341
+/* Compute LAST for a Vreg. */
96
break;
342
+static bool do_last_fp(DisasContext *s, arg_rpr_esz *a, bool before)
97
case 2:
343
+{
98
- shift = 0;
344
+ if (sve_access_check(s)) {
99
+ reg_idx = (insn >> 7) & 1;
345
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
100
stride = (insn & (1 << 6)) ? 2 : 1;
346
+ write_fp_dreg(s, a->rd, val);
101
break;
347
+ tcg_temp_free_i64(val);
102
default:
348
+ }
103
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
349
+ return true;
104
*/
350
+}
105
return 1;
351
+
106
}
352
+static bool trans_LASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
107
+ tmp = tcg_temp_new_i32();
353
+{
108
addr = tcg_temp_new_i32();
354
+ return do_last_fp(s, a, false);
109
load_reg_var(s, addr, rn);
355
+}
110
for (reg = 0; reg < nregs; reg++) {
356
+
111
if (load) {
357
+static bool trans_LASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
112
- tmp = tcg_temp_new_i32();
358
+{
113
- switch (size) {
359
+ return do_last_fp(s, a, true);
114
- case 0:
360
+}
115
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
361
+
116
- break;
362
+/* Compute LAST for a Xreg. */
117
- case 1:
363
+static bool do_last_general(DisasContext *s, arg_rpr_esz *a, bool before)
118
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
364
+{
119
- break;
365
+ if (sve_access_check(s)) {
120
- case 2:
366
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
121
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
367
+ tcg_gen_mov_i64(cpu_reg(s, a->rd), val);
122
- break;
368
+ tcg_temp_free_i64(val);
123
- default: /* Avoid compiler warnings. */
369
+ }
124
- abort();
370
+ return true;
125
- }
371
+}
126
- if (size != 2) {
372
+
127
- tmp2 = neon_load_reg(rd, pass);
373
+static bool trans_LASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
128
- tcg_gen_deposit_i32(tmp, tmp2, tmp,
374
+{
129
- shift, size ? 16 : 8);
375
+ return do_last_general(s, a, false);
130
- tcg_temp_free_i32(tmp2);
376
+}
131
- }
377
+
132
- neon_store_reg(rd, pass, tmp);
378
+static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
133
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
379
+{
134
+ s->be_data | size);
380
+ return do_last_general(s, a, true);
135
+ neon_store_element(rd, reg_idx, size, tmp);
381
+}
136
} else { /* Store */
382
+
137
- tmp = neon_load_reg(rd, pass);
383
/*
138
- if (shift)
384
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
139
- tcg_gen_shri_i32(tmp, tmp, shift);
385
*/
140
- switch (size) {
386
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
141
- case 0:
387
index XXXXXXX..XXXXXXX 100644
142
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
388
--- a/target/arm/sve.decode
143
- break;
389
+++ b/target/arm/sve.decode
144
- case 1:
390
@@ -XXX,XX +XXX,XX @@ TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
145
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
391
# Note esz >= 2
146
- break;
392
COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
147
- case 2:
393
148
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
394
+# SVE conditionally broadcast element to vector
149
- break;
395
+CLASTA_z 00000101 .. 10100 0 100 ... ..... ..... @rdn_pg_rm
150
- }
396
+CLASTB_z 00000101 .. 10100 1 100 ... ..... ..... @rdn_pg_rm
151
- tcg_temp_free_i32(tmp);
397
+
152
+ neon_load_element(tmp, rd, reg_idx, size);
398
+# SVE conditionally copy element to SIMD&FP scalar
153
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
399
+CLASTA_v 00000101 .. 10101 0 100 ... ..... ..... @rd_pg_rn
154
+ s->be_data | size);
400
+CLASTB_v 00000101 .. 10101 1 100 ... ..... ..... @rd_pg_rn
155
}
401
+
156
rd += stride;
402
+# SVE conditionally copy element to general register
157
tcg_gen_addi_i32(addr, addr, 1 << size);
403
+CLASTA_r 00000101 .. 11000 0 101 ... ..... ..... @rd_pg_rn
158
}
404
+CLASTB_r 00000101 .. 11000 1 101 ... ..... ..... @rd_pg_rn
159
tcg_temp_free_i32(addr);
405
+
160
+ tcg_temp_free_i32(tmp);
406
+# SVE copy element to SIMD&FP scalar register
161
stride = nregs * (1 << size);
407
+LASTA_v 00000101 .. 10001 0 100 ... ..... ..... @rd_pg_rn
162
}
408
+LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
163
}
409
+
410
+# SVE copy element to general register
411
+LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
412
+LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
413
+
414
### SVE Predicate Logical Operations Group
415
416
# SVE predicate logical operations
417
--
164
--
418
2.17.1
165
2.19.1
419
166
420
167
1
The codebase has a bit of a mix of different multiline
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
comment styles. State a preference for the Linux kernel
3
style:
4
/*
5
* Star on the left for each line.
6
* Leading slash-star and trailing star-slash
7
* each go on a line of their own.
8
*/
9
2
3
Announce the availability of the various priority queues.
4
This fixes an issue where guest kernels would fail to
5
configure secondary queues due to improper feature bits.
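(Editor's worked example, not part of the patch: the DESCONF6 mask computed
below. MAKE_64BIT_MASK(shift, length) is QEMU's bitops macro; queue 0 is
always available, so only bits 1..num_queues-1 are advertised.)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same definition as QEMU's include/qemu/bitops.h */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        int num_priority_queues = 4;
        uint64_t queues_mask = MAKE_64BIT_MASK(1, num_priority_queues - 1);
        printf("0x%" PRIx64 "\n", queues_mask);   /* 0xe: queues 1..3 */
        return 0;
    }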
6
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Eric Blake <eblake@redhat.com>
12
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
13
Reviewed-by: Markus Armbruster <armbru@redhat.com>
14
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: John Snow <jsnow@redhat.com>
17
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
18
Message-id: 20180611141716.3813-1-peter.maydell@linaro.org
19
---
11
---
20
CODING_STYLE | 17 +++++++++++++++++
12
hw/net/cadence_gem.c | 8 +++++++-
21
1 file changed, 17 insertions(+)
13
1 file changed, 7 insertions(+), 1 deletion(-)
22
14
23
diff --git a/CODING_STYLE b/CODING_STYLE
15
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/CODING_STYLE
17
--- a/hw/net/cadence_gem.c
26
+++ b/CODING_STYLE
18
+++ b/hw/net/cadence_gem.c
27
@@ -XXX,XX +XXX,XX @@ We use traditional C-style /* */ comments and avoid // comments.
19
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
28
Rationale: The // form is valid in C99, so this is purely a matter of
20
int i;
29
consistency of style. The checkpatch script will warn you about this.
21
CadenceGEMState *s = CADENCE_GEM(d);
30
22
const uint8_t *a;
31
+Multiline comment blocks should have a row of stars on the left,
23
+ uint32_t queues_mask = 0;
32
+and the initial /* and terminating */ both on their own lines:
24
33
+ /*
25
DB_PRINT("\n");
34
+ * like
26
35
+ * this
27
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
36
+ */
28
s->regs[GEM_DESCONF] = 0x02500111;
37
+This is the same format required by the Linux kernel coding style.
29
s->regs[GEM_DESCONF2] = 0x2ab13fff;
30
s->regs[GEM_DESCONF5] = 0x002f2045;
31
- s->regs[GEM_DESCONF6] = 0x00000200;
32
+ s->regs[GEM_DESCONF6] = 0x0;
38
+
33
+
39
+(Some of the existing comments in the codebase use the GNU Coding
34
+ if (s->num_priority_queues > 1) {
40
+Standards form which does not have stars on the left, or other
35
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
41
+variations; avoid these when writing new comments, but don't worry
36
+ s->regs[GEM_DESCONF6] |= queues_mask;
42
+about converting to the preferred form unless you're editing that
37
+ }
43
+comment anyway.)
38
44
+
39
/* Set MAC address */
45
+Rationale: Consistency, and ease of visually picking out a multiline
40
a = &s->conf.macaddr.a[0];
46
+comment from the surrounding code.
47
+
48
8. trace-events style
49
50
8.1 0x prefix
51
--
41
--
52
2.17.1
42
2.19.1
53
43
54
44
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it forgot to
3
Announce 64-bit addressing support.
4
offset the data array and index. This will overlap the GICR registers'
5
values and leave the last GIC_INTERNAL IRQs' registers out of the update.
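(Editor's worked arithmetic, for illustration: GIC_INTERNAL is 32 in QEMU and
each IRQ carries an 8-bit priority field, so the sync below must skip 32 bytes
in both the bitmap and the register offset before touching GICD state.)

    /* Sketch of the skip applied by the fix below. */
    enum { GIC_INTERNAL = 32 };
    /* 32 IRQs x 8 bits each, expressed in bytes: (32 * 8) / 8 = 32 */
    static const int skip_bytes = (GIC_INTERNAL * 8) / 8;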
6
4
7
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Cc: qemu-stable@nongnu.org
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
11
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
hw/intc/arm_gicv3_kvm.c | 18 ++++++++++++++++--
11
hw/net/cadence_gem.c | 3 ++-
15
1 file changed, 16 insertions(+), 2 deletions(-)
12
1 file changed, 2 insertions(+), 1 deletion(-)
16
13
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
16
--- a/hw/net/cadence_gem.c
20
+++ b/hw/intc/arm_gicv3_kvm.c
17
+++ b/hw/net/cadence_gem.c
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
18
@@ -XXX,XX +XXX,XX @@
22
uint32_t reg, *field;
19
#define GEM_DESCONF4 (0x0000028C/4)
23
int irq;
20
#define GEM_DESCONF5 (0x00000290/4)
24
21
#define GEM_DESCONF6 (0x00000294/4)
25
- field = (uint32_t *)bmp;
22
+#define GEM_DESCONF6_64B_MASK (1U << 23)
26
+ /* For the KVM GICv3, affinity routing is always enabled, and the first 8
23
#define GEM_DESCONF7 (0x00000298/4)
27
+ * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
24
28
+ * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
25
#define GEM_INT_Q1_STATUS (0x00000400 / 4)
29
+ * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
30
+ * offset.
27
s->regs[GEM_DESCONF] = 0x02500111;
31
+ */
28
s->regs[GEM_DESCONF2] = 0x2ab13fff;
32
+ field = (uint32_t *)(bmp + GIC_INTERNAL);
29
s->regs[GEM_DESCONF5] = 0x002f2045;
33
+ offset += (GIC_INTERNAL * 8) / 8;
30
- s->regs[GEM_DESCONF6] = 0x0;
34
for_each_dist_irq_reg(irq, s->num_irq, 8) {
31
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
35
kvm_gicd_access(s, offset, &reg, false);
32
36
*field = reg;
33
if (s->num_priority_queues > 1) {
37
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
34
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
38
uint32_t reg, *field;
39
int irq;
40
41
- field = (uint32_t *)bmp;
42
+ /* For the KVM GICv3, affinity routing is always enabled, and the first 8
43
+ * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
44
+ * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
45
+ * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
46
+ * offset.
47
+ */
48
+ field = (uint32_t *)(bmp + GIC_INTERNAL);
49
+ offset += (GIC_INTERNAL * 8) / 8;
50
for_each_dist_irq_reg(irq, s->num_irq, 8) {
51
reg = *field;
52
kvm_gicd_access(s, offset, &reg, true);
53
--
35
--
54
2.17.1
36
2.19.1
55
37
56
38
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The EL3 version of this register does not include an ASID,
4
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
5
6
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
5
Message-id: 20180613015641.5667-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 6 +
12
target/arm/helper.c | 2 +-
9
target/arm/sve_helper.c | 290 +++++++++++++++++++++++++++++++++++++
13
1 file changed, 1 insertion(+), 1 deletion(-)
10
target/arm/translate-sve.c | 120 +++++++++++++++
11
target/arm/sve.decode | 18 +++
12
4 files changed, 434 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/helper.c
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
19
DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
20
DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
21
22
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
22
+DEF_HELPER_FLAGS_4(sve_zip_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
23
+DEF_HELPER_FLAGS_4(sve_uzp_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+ .access = PL3_RW, .resetvalue = 0,
24
+DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
25
+DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
26
+DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
27
+
28
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/sve_helper.c
34
+++ b/target/arm/sve_helper.c
35
@@ -XXX,XX +XXX,XX @@ DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
36
DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
37
38
#undef DO_UNPK
39
+
40
+/* Mask of bits included in the even numbered predicates of width esz.
41
+ * We also use this for expand_bits/compress_bits, and so extend the
42
+ * same pattern out to 16-bit units.
43
+ */
44
+static const uint64_t even_bit_esz_masks[5] = {
45
+ 0x5555555555555555ull,
46
+ 0x3333333333333333ull,
47
+ 0x0f0f0f0f0f0f0f0full,
48
+ 0x00ff00ff00ff00ffull,
49
+ 0x0000ffff0000ffffull,
50
+};
51
+
52
+/* Zero-extend units of 2**N bits to units of 2**(N+1) bits.
53
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
54
+ * we call half_shuffle64; this algorithm is from Hacker's Delight,
55
+ * section 7-2 Shuffling Bits.
56
+ */
57
+static uint64_t expand_bits(uint64_t x, int n)
58
+{
59
+ int i;
60
+
61
+ x &= 0xffffffffu;
62
+ for (i = 4; i >= n; i--) {
63
+ int sh = 1 << i;
64
+ x = ((x << sh) | x) & even_bit_esz_masks[i];
65
+ }
66
+ return x;
67
+}
68
+
69
+/* Compress units of 2**(N+1) bits to units of 2**N bits.
70
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
71
+ * we call half_unshuffle64; this algorithm is from Hacker's Delight,
72
+ * section 7-2 Shuffling Bits, where it is called an inverse half shuffle.
73
+ */
74
+static uint64_t compress_bits(uint64_t x, int n)
75
+{
76
+ int i;
77
+
78
+ for (i = n; i <= 4; i++) {
79
+ int sh = 1 << i;
80
+ x &= even_bit_esz_masks[i];
81
+ x = (x >> sh) | x;
82
+ }
83
+ return x & 0xffffffffu;
84
+}
85
+
86
+void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
87
+{
88
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
89
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
90
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
91
+ uint64_t *d = vd;
92
+ intptr_t i;
93
+
94
+ if (oprsz <= 8) {
95
+ uint64_t nn = *(uint64_t *)vn;
96
+ uint64_t mm = *(uint64_t *)vm;
97
+ int half = 4 * oprsz;
98
+
99
+ nn = extract64(nn, high * half, half);
100
+ mm = extract64(mm, high * half, half);
101
+ nn = expand_bits(nn, esz);
102
+ mm = expand_bits(mm, esz);
103
+ d[0] = nn + (mm << (1 << esz));
104
+ } else {
105
+ ARMPredicateReg tmp_n, tmp_m;
106
+
107
+ /* We produce output faster than we consume input.
108
+ Therefore we must be mindful of possible overlap. */
109
+ if ((vn - vd) < (uintptr_t)oprsz) {
110
+ vn = memcpy(&tmp_n, vn, oprsz);
111
+ }
112
+ if ((vm - vd) < (uintptr_t)oprsz) {
113
+ vm = memcpy(&tmp_m, vm, oprsz);
114
+ }
115
+ if (high) {
116
+ high = oprsz >> 1;
117
+ }
118
+
119
+ if ((high & 3) == 0) {
120
+ uint32_t *n = vn, *m = vm;
121
+ high >>= 2;
122
+
123
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
124
+ uint64_t nn = n[H4(high + i)];
125
+ uint64_t mm = m[H4(high + i)];
126
+
127
+ nn = expand_bits(nn, esz);
128
+ mm = expand_bits(mm, esz);
129
+ d[i] = nn + (mm << (1 << esz));
130
+ }
131
+ } else {
132
+ uint8_t *n = vn, *m = vm;
133
+ uint16_t *d16 = vd;
134
+
135
+ for (i = 0; i < oprsz / 2; i++) {
136
+ uint16_t nn = n[H1(high + i)];
137
+ uint16_t mm = m[H1(high + i)];
138
+
139
+ nn = expand_bits(nn, esz);
140
+ mm = expand_bits(mm, esz);
141
+ d16[H2(i)] = nn + (mm << (1 << esz));
142
+ }
143
+ }
144
+ }
145
+}
146
+
147
+void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
148
+{
149
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
150
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
151
+ int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
152
+ uint64_t *d = vd, *n = vn, *m = vm;
153
+ uint64_t l, h;
154
+ intptr_t i;
155
+
156
+ if (oprsz <= 8) {
157
+ l = compress_bits(n[0] >> odd, esz);
158
+ h = compress_bits(m[0] >> odd, esz);
159
+ d[0] = extract64(l + (h << (4 * oprsz)), 0, 8 * oprsz);
160
+ } else {
161
+ ARMPredicateReg tmp_m;
162
+ intptr_t oprsz_16 = oprsz / 16;
163
+
164
+ if ((vm - vd) < (uintptr_t)oprsz) {
165
+ m = memcpy(&tmp_m, vm, oprsz);
166
+ }
167
+
168
+ for (i = 0; i < oprsz_16; i++) {
169
+ l = n[2 * i + 0];
170
+ h = n[2 * i + 1];
171
+ l = compress_bits(l >> odd, esz);
172
+ h = compress_bits(h >> odd, esz);
173
+ d[i] = l + (h << 32);
174
+ }
175
+
176
+ /* For VL which is not a power of 2, the results from M do not
177
+ align nicely with the uint64_t for D. Put the aligned results
178
+ from M into TMP_M and then copy it into place afterward. */
179
+ if (oprsz & 15) {
180
+ d[i] = compress_bits(n[2 * i] >> odd, esz);
181
+
182
+ for (i = 0; i < oprsz_16; i++) {
183
+ l = m[2 * i + 0];
184
+ h = m[2 * i + 1];
185
+ l = compress_bits(l >> odd, esz);
186
+ h = compress_bits(h >> odd, esz);
187
+ tmp_m.p[i] = l + (h << 32);
188
+ }
189
+ tmp_m.p[i] = compress_bits(m[2 * i] >> odd, esz);
190
+
191
+ swap_memmove(vd + oprsz / 2, &tmp_m, oprsz / 2);
192
+ } else {
193
+ for (i = 0; i < oprsz_16; i++) {
194
+ l = m[2 * i + 0];
195
+ h = m[2 * i + 1];
196
+ l = compress_bits(l >> odd, esz);
197
+ h = compress_bits(h >> odd, esz);
198
+ d[oprsz_16 + i] = l + (h << 32);
199
+ }
200
+ }
201
+ }
202
+}
203
+
204
+void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
205
+{
206
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
207
+ uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
208
+ bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
209
+ uint64_t *d = vd, *n = vn, *m = vm;
210
+ uint64_t mask;
211
+ int shr, shl;
212
+ intptr_t i;
213
+
214
+ shl = 1 << esz;
215
+ shr = 0;
216
+ mask = even_bit_esz_masks[esz];
217
+ if (odd) {
218
+ mask <<= shl;
219
+ shr = shl;
220
+ shl = 0;
221
+ }
222
+
223
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
224
+ uint64_t nn = (n[i] & mask) >> shr;
225
+ uint64_t mm = (m[i] & mask) << shl;
226
+ d[i] = nn + mm;
227
+ }
228
+}
229
+
230
+/* Reverse units of 2**N bits. */
231
+static uint64_t reverse_bits_64(uint64_t x, int n)
232
+{
233
+ int i, sh;
234
+
235
+ x = bswap64(x);
236
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
237
+ uint64_t mask = even_bit_esz_masks[i];
238
+ x = ((x & mask) << sh) | ((x >> sh) & mask);
239
+ }
240
+ return x;
241
+}
242
+
243
+static uint8_t reverse_bits_8(uint8_t x, int n)
244
+{
245
+ static const uint8_t mask[3] = { 0x55, 0x33, 0x0f };
246
+ int i, sh;
247
+
248
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
249
+ x = ((x & mask[i]) << sh) | ((x >> sh) & mask[i]);
250
+ }
251
+ return x;
252
+}
253
+
254
+void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)
255
+{
256
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
257
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
258
+ intptr_t i, oprsz_2 = oprsz / 2;
259
+
260
+ if (oprsz <= 8) {
261
+ uint64_t l = *(uint64_t *)vn;
262
+ l = reverse_bits_64(l << (64 - 8 * oprsz), esz);
263
+ *(uint64_t *)vd = l;
264
+ } else if ((oprsz & 15) == 0) {
265
+ for (i = 0; i < oprsz_2; i += 8) {
266
+ intptr_t ih = oprsz - 8 - i;
267
+ uint64_t l = reverse_bits_64(*(uint64_t *)(vn + i), esz);
268
+ uint64_t h = reverse_bits_64(*(uint64_t *)(vn + ih), esz);
269
+ *(uint64_t *)(vd + i) = h;
270
+ *(uint64_t *)(vd + ih) = l;
271
+ }
272
+ } else {
273
+ for (i = 0; i < oprsz_2; i += 1) {
274
+ intptr_t il = H1(i);
275
+ intptr_t ih = H1(oprsz - 1 - i);
276
+ uint8_t l = reverse_bits_8(*(uint8_t *)(vn + il), esz);
277
+ uint8_t h = reverse_bits_8(*(uint8_t *)(vn + ih), esz);
278
+ *(uint8_t *)(vd + il) = h;
279
+ *(uint8_t *)(vd + ih) = l;
280
+ }
281
+ }
282
+}
283
+
284
+void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
285
+{
286
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
287
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
288
+ uint64_t *d = vd;
289
+ intptr_t i;
290
+
291
+ if (oprsz <= 8) {
292
+ uint64_t nn = *(uint64_t *)vn;
293
+ int half = 4 * oprsz;
294
+
295
+ nn = extract64(nn, high * half, half);
296
+ nn = expand_bits(nn, 0);
297
+ d[0] = nn;
298
+ } else {
299
+ ARMPredicateReg tmp_n;
300
+
301
+ /* We produce output faster than we consume input.
302
+ Therefore we must be mindful of possible overlap. */
303
+ if ((vn - vd) < (uintptr_t)oprsz) {
304
+ vn = memcpy(&tmp_n, vn, oprsz);
305
+ }
306
+ if (high) {
307
+ high = oprsz >> 1;
308
+ }
309
+
310
+ if ((high & 3) == 0) {
311
+ uint32_t *n = vn;
312
+ high >>= 2;
313
+
314
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
315
+ uint64_t nn = n[H4(high + i)];
316
+ d[i] = expand_bits(nn, 0);
317
+ }
318
+ } else {
319
+ uint16_t *d16 = vd;
320
+ uint8_t *n = vn;
321
+
322
+ for (i = 0; i < oprsz / 2; i++) {
323
+ uint16_t nn = n[H1(high + i)];
324
+ d16[H2(i)] = expand_bits(nn, 0);
325
+ }
326
+ }
327
+ }
328
+}
329
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
330
index XXXXXXX..XXXXXXX 100644
331
--- a/target/arm/translate-sve.c
332
+++ b/target/arm/translate-sve.c
333
@@ -XXX,XX +XXX,XX @@ static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
334
return true;
335
}
336
337
+/*
338
+ *** SVE Permute - Predicates Group
339
+ */
340
+
341
+static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
342
+ gen_helper_gvec_3 *fn)
343
+{
344
+ if (!sve_access_check(s)) {
345
+ return true;
346
+ }
347
+
348
+ unsigned vsz = pred_full_reg_size(s);
349
+
350
+ /* Predicate sizes may be smaller and cannot use simd_desc.
351
+ We cannot round up, as we do elsewhere, because we need
352
+ the exact size for ZIP2 and REV. We retain the style for
353
+ the other helpers for consistency. */
354
+ TCGv_ptr t_d = tcg_temp_new_ptr();
355
+ TCGv_ptr t_n = tcg_temp_new_ptr();
356
+ TCGv_ptr t_m = tcg_temp_new_ptr();
357
+ TCGv_i32 t_desc;
358
+ int desc;
359
+
360
+ desc = vsz - 2;
361
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
362
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
363
+
364
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
365
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
366
+ tcg_gen_addi_ptr(t_m, cpu_env, pred_full_reg_offset(s, a->rm));
367
+ t_desc = tcg_const_i32(desc);
368
+
369
+ fn(t_d, t_n, t_m, t_desc);
370
+
371
+ tcg_temp_free_ptr(t_d);
372
+ tcg_temp_free_ptr(t_n);
373
+ tcg_temp_free_ptr(t_m);
374
+ tcg_temp_free_i32(t_desc);
375
+ return true;
376
+}
377
+
378
+static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
379
+ gen_helper_gvec_2 *fn)
380
+{
381
+ if (!sve_access_check(s)) {
382
+ return true;
383
+ }
384
+
385
+ unsigned vsz = pred_full_reg_size(s);
386
+ TCGv_ptr t_d = tcg_temp_new_ptr();
387
+ TCGv_ptr t_n = tcg_temp_new_ptr();
388
+ TCGv_i32 t_desc;
389
+ int desc;
390
+
391
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
392
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
393
+
394
+ /* Predicate sizes may be smaller and cannot use simd_desc.
395
+ We cannot round up, as we do elsewhere, because we need
396
+ the exact size for ZIP2 and REV. We retain the style for
397
+ the other helpers for consistency. */
398
+
399
+ desc = vsz - 2;
400
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
401
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
402
+ t_desc = tcg_const_i32(desc);
403
+
404
+ fn(t_d, t_n, t_desc);
405
+
406
+ tcg_temp_free_i32(t_desc);
407
+ tcg_temp_free_ptr(t_d);
408
+ tcg_temp_free_ptr(t_n);
409
+ return true;
410
+}
411
+
412
+static bool trans_ZIP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
413
+{
414
+ return do_perm_pred3(s, a, 0, gen_helper_sve_zip_p);
415
+}
416
+
417
+static bool trans_ZIP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
418
+{
419
+ return do_perm_pred3(s, a, 1, gen_helper_sve_zip_p);
420
+}
421
+
422
+static bool trans_UZP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
423
+{
424
+ return do_perm_pred3(s, a, 0, gen_helper_sve_uzp_p);
425
+}
426
+
427
+static bool trans_UZP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
428
+{
429
+ return do_perm_pred3(s, a, 1, gen_helper_sve_uzp_p);
430
+}
431
+
432
+static bool trans_TRN1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
433
+{
434
+ return do_perm_pred3(s, a, 0, gen_helper_sve_trn_p);
435
+}
436
+
437
+static bool trans_TRN2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
438
+{
439
+ return do_perm_pred3(s, a, 1, gen_helper_sve_trn_p);
440
+}
441
+
442
+static bool trans_REV_p(DisasContext *s, arg_rr_esz *a, uint32_t insn)
443
+{
444
+ return do_perm_pred2(s, a, 0, gen_helper_sve_rev_p);
445
+}
446
+
447
+static bool trans_PUNPKLO(DisasContext *s, arg_PUNPKLO *a, uint32_t insn)
448
+{
449
+ return do_perm_pred2(s, a, 0, gen_helper_sve_punpk_p);
450
+}
451
+
452
+static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
453
+{
454
+ return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
455
+}
456
+
457
/*
458
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
459
*/
460
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
461
index XXXXXXX..XXXXXXX 100644
462
--- a/target/arm/sve.decode
463
+++ b/target/arm/sve.decode
464
@@ -XXX,XX +XXX,XX @@
465
466
# Three operand, vector element size
467
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
468
+@pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
469
@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
470
&rrr_esz rn=%reg_movprfx
471
472
@@ -XXX,XX +XXX,XX @@ TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
473
# SVE unpack vector elements
474
UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
475
476
+### SVE Permute - Predicates Group
477
+
478
+# SVE permute predicate elements
479
+ZIP1_p 00000101 .. 10 .... 010 000 0 .... 0 .... @pd_pn_pm
480
+ZIP2_p 00000101 .. 10 .... 010 001 0 .... 0 .... @pd_pn_pm
481
+UZP1_p 00000101 .. 10 .... 010 010 0 .... 0 .... @pd_pn_pm
482
+UZP2_p 00000101 .. 10 .... 010 011 0 .... 0 .... @pd_pn_pm
483
+TRN1_p 00000101 .. 10 .... 010 100 0 .... 0 .... @pd_pn_pm
484
+TRN2_p 00000101 .. 10 .... 010 101 0 .... 0 .... @pd_pn_pm
485
+
486
+# SVE reverse predicate elements
487
+REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
488
+
489
+# SVE unpack predicate elements
490
+PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
491
+PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
492
+
493
### SVE Predicate Logical Operations Group
494
495
# SVE predicate logical operations
496
--
28
--
497
2.17.1
29
2.19.1
498
30
499
31
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Rearrange the arithmetic so that we are agnostic about the total size
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
4
of the vector and the size of the element. This will allow us to index
4
TLB. However, if the ASID does not change, there is no reason to flush.
5
up to the 32nd byte and with 16-byte elements.
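(Editor's sketch, not part of the patch: the big-endian host correction used
below. Within each 8-byte unit the element order is reversed on a big-endian
host, so XOR-ing the offset with (8 - element_size) relocates the element;
elements of 8 bytes and up need no fixup.)

    #include <stdbool.h>

    /* Offset of element 'ele' of 'esz_bytes' within a vector register,
     * mirroring the rearranged arithmetic in the patch. */
    static int elem_offset(int ele, int esz_bytes, bool bigendian_host)
    {
        int offs = ele * esz_bytes;
        if (bigendian_host && esz_bytes < 8) {
            offs ^= 8 - esz_bytes;
        }
        return offs;
    }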
6
5
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
7
the number of flushes by 30%, or nearly 600k instances.
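(Editor's sketch, not part of the patch: the change-detection test used below.
The ASID sits in TTBR bits [63:48], so extracting that field from old XOR new
is non-zero exactly when the ASID changed.)

    #include <stdbool.h>
    #include <stdint.h>

    /* Equivalent of extract64(old ^ new, 48, 16) != 0 in the patch. */
    static bool asid_changed(uint64_t old_ttbr, uint64_t new_ttbr)
    {
        return (((old_ttbr ^ new_ttbr) >> 48) & 0xffff) != 0;
    }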
8
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20180613015641.5667-2-richard.henderson@linaro.org
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
15
---
12
target/arm/translate-a64.h | 26 +++++++++++++++++---------
16
target/arm/helper.c | 8 +++-----
13
1 file changed, 17 insertions(+), 9 deletions(-)
17
1 file changed, 3 insertions(+), 5 deletions(-)
14
18
15
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.h
21
--- a/target/arm/helper.c
18
+++ b/target/arm/translate-a64.h
22
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static inline void assert_fp_access_checked(DisasContext *s)
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
20
static inline int vec_reg_offset(DisasContext *s, int regno,
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
21
int element, TCGMemOp size)
25
uint64_t value)
22
{
26
{
23
- int offs = 0;
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
24
+ int element_size = 1 << size;
28
- * must flush the TLB.
25
+ int offs = element * element_size;
29
- */
26
#ifdef HOST_WORDS_BIGENDIAN
30
- if (cpreg_field_is_64bit(ri)) {
27
/* This is complicated slightly because vfp.zregs[n].d[0] is
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
28
- * still the low half and vfp.zregs[n].d[1] the high half
32
+ if (cpreg_field_is_64bit(ri) &&
29
- * of the 128 bit vector, even on big endian systems.
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
30
- * Calculate the offset assuming a fully bigendian 128 bits,
34
ARMCPU *cpu = arm_env_get_cpu(env);
31
- * then XOR to account for the order of the two 64 bit halves.
35
-
32
+ * still the lowest and vfp.zregs[n].d[15] the highest of the
36
tlb_flush(CPU(cpu));
33
+ * 256 byte vector, even on big endian systems.
37
}
34
+ *
38
raw_write(env, ri, value);
35
+ * Calculate the offset assuming fully little-endian,
36
+ * then XOR to account for the order of the 8-byte units.
37
+ *
38
+ * For 16 byte elements, the two 8 byte halves will not form a
39
+ * host int128 if the host is bigendian, since they're in the
40
+ * wrong order. However the only 16 byte operation we have is
41
+ * a move, so we can ignore this for the moment. More complicated
42
+ * operations will have to special case loading and storing from
43
+ * the zregs array.
44
*/
45
- offs += (16 - ((element + 1) * (1 << size)));
46
- offs ^= 8;
47
-#else
48
- offs += element * (1 << size);
49
+ if (element_size < 8) {
50
+ offs ^= 8 - element_size;
51
+ }
52
#endif
53
offs += offsetof(CPUARMState, vfp.zregs[regno]);
54
assert_fp_access_checked(s);
55
--
39
--
56
2.17.1
40
2.19.1
57
41
58
42