target-arm queue; this one has a fair scattering of more
miscellaneous things in it which I've sent out this week.
I've shoved those in as well as it seemed the least-effort
way of getting them into master; a few of them are dependencies
on arm-related patches I have brewing.

thanks
-- PMM

The following changes since commit 2702c2d3eb74e3908c0c5dbf3a71c8987595a86e:

  Merge remote-tracking branch 'remotes/stsquad/tags/pull-travis-updates-140618-1' into staging (2018-06-15 12:49:36 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180615

for you to fetch changes up to 14120108f87b3f9e1beacdf0a6096e464e62bb65:

  target/arm: Allow ARMv6-M Thumb2 instructions (2018-06-15 15:23:34 +0100)

----------------------------------------------------------------
target-arm and miscellaneous queue:
 * fix KVM state save/restore for GICv3 priority registers for high IRQ numbers
 * hw/arm/mps2-tz: Put ethernet controller behind PPC
 * hw/sh/sh7750: Convert away from old_mmio
 * hw/m68k/mcf5206: Convert away from old_mmio
 * hw/block/pflash_cfi02: Convert away from old_mmio
 * hw/watchdog/wdt_i6300esb: Convert away from old_mmio
 * hw/input/pckbd: Convert away from old_mmio
 * hw/char/parallel: Convert away from old_mmio
 * armv7m: refactor to get rid of armv7m_init() function
 * arm: Don't crash if user tries to use a Cortex-M CPU without an NVIC
 * hw/core/or-irq: Support more than 16 inputs to an OR gate
 * cpu-defs.h: Document CPUIOTLBEntry 'addr' field
 * cputlb: Pass cpu_transaction_failed() the correct physaddr
 * CODING_STYLE: Define our preferred form for multiline comments
 * Add and use new stn_*_p() and ldn_*_p() memory access functions
 * target/arm: More parts of the upcoming SVE support
 * aspeed_scu: Implement RNG register
 * m25p80: add support for two bytes WRSR for Macronix chips
 * exec.c: Handle IOMMUs being in the path of TCG CPU memory accesses
 * target/arm: Allow ARMv6-M Thumb2 instructions

----------------------------------------------------------------
Cédric Le Goater (1):
      m25p80: add support for two bytes WRSR for Macronix chips

Joel Stanley (1):
      aspeed_scu: Implement RNG register

Julia Suvorova (1):
      target/arm: Allow ARMv6-M Thumb2 instructions

Peter Maydell (21):
      hw/arm/mps2-tz: Put ethernet controller behind PPC
      hw/sh/sh7750: Convert away from old_mmio
      hw/m68k/mcf5206: Convert away from old_mmio
      hw/block/pflash_cfi02: Convert away from old_mmio
      hw/watchdog/wdt_i6300esb: Convert away from old_mmio
      hw/input/pckbd: Convert away from old_mmio
      hw/char/parallel: Convert away from old_mmio
      stellaris: Stop using armv7m_init()
      hw/arm/armv7m: Remove unused armv7m_init() function
      arm: Don't crash if user tries to use a Cortex-M CPU without an NVIC
      hw/core/or-irq: Support more than 16 inputs to an OR gate
      cpu-defs.h: Document CPUIOTLBEntry 'addr' field
      cputlb: Pass cpu_transaction_failed() the correct physaddr
      CODING_STYLE: Define our preferred form for multiline comments
      bswap: Add new stn_*_p() and ldn_*_p() memory access functions
      exec.c: Don't accidentally sign-extend 4-byte loads in subpage_read()
      exec.c: Use stn_p() and ldn_p() instead of explicit switches
      iommu: Add IOMMU index concept to IOMMU API
      iommu: Add IOMMU index argument to notifier APIs
      iommu: Add IOMMU index argument to translate method
      exec.c: Handle IOMMUs in address_space_translate_for_iotlb()

Richard Henderson (18):
      target/arm: Extend vec_reg_offset to larger sizes
      target/arm: Implement SVE Permute - Unpredicated Group
      target/arm: Implement SVE Permute - Predicates Group
      target/arm: Implement SVE Permute - Interleaving Group
      target/arm: Implement SVE compress active elements
      target/arm: Implement SVE conditionally broadcast/extract element
      target/arm: Implement SVE copy to vector (predicated)
      target/arm: Implement SVE reverse within elements
      target/arm: Implement SVE vector splice (predicated)
      target/arm: Implement SVE Select Vectors Group
      target/arm: Implement SVE Integer Compare - Vectors Group
      target/arm: Implement SVE Integer Compare - Immediate Group
      target/arm: Implement SVE Partition Break Group
      target/arm: Implement SVE Predicate Count Group
      target/arm: Implement SVE Integer Compare - Scalars Group
      target/arm: Implement FDUP/DUP
      target/arm: Implement SVE Integer Wide Immediate - Unpredicated Group
      target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group

Shannon Zhao (1):
      arm_gicv3_kvm: kvm_dist_get/put_priority: skip the registers banked by GICR_IPRIORITYR

 include/exec/cpu-all.h      |    4 +
 include/exec/cpu-defs.h     |    9 +
 include/exec/exec-all.h     |   16 +-
 include/exec/memory.h       |   65 +-
 include/hw/arm/arm.h        |    8 +-
 include/hw/or-irq.h         |    5 +-
 include/qemu/bswap.h        |   52 ++
 include/qom/cpu.h           |    3 +
 target/arm/helper-sve.h     |  294 +++++++++
 target/arm/helper.h         |   19 +
 target/arm/translate-a64.h  |   26 +-
 accel/tcg/cputlb.c          |   59 +-
 exec.c                      |  263 ++++----
 hw/alpha/typhoon.c          |    3 +-
 hw/arm/armv7m.c             |   28 +-
 hw/arm/mps2-tz.c            |   32 +-
 hw/arm/smmuv3.c             |    2 +-
 hw/arm/stellaris.c          |   12 +-
 hw/block/m25p80.c           |    1 +
 hw/block/pflash_cfi02.c     |   97 +--
 hw/char/parallel.c          |   50 +-
 hw/core/or-irq.c            |   39 +-
 hw/dma/rc4030.c             |    2 +-
 hw/i386/amd_iommu.c         |    2 +-
 hw/i386/intel_iommu.c       |    8 +-
 hw/input/pckbd.c            |   14 +-
 hw/intc/arm_gicv3_kvm.c     |   18 +-
 hw/intc/armv7m_nvic.c       |    6 +-
 hw/m68k/mcf5206.c           |   48 +-
 hw/misc/aspeed_scu.c        |   20 +
 hw/ppc/spapr_iommu.c        |    5 +-
 hw/s390x/s390-pci-bus.c     |    2 +-
 hw/s390x/s390-pci-inst.c    |    4 +-
 hw/sh4/sh7750.c             |   44 +-
 hw/sparc/sun4m_iommu.c      |    3 +-
 hw/sparc64/sun4u_iommu.c    |    2 +-
 hw/vfio/common.c            |    6 +-
 hw/virtio/vhost.c           |    7 +-
 hw/watchdog/wdt_i6300esb.c  |   48 +-
 memory.c                    |   33 +-
 target/arm/cpu.c            |   18 +
 target/arm/sve_helper.c     | 1250 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c  | 1458 +++++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c      |   43 +-
 target/arm/vec_helper.c     |   69 ++
 CODING_STYLE                |   17 +
 docs/devel/loads-stores.rst |   15 +
 target/arm/sve.decode       |  248 ++++++++
 48 files changed, 4114 insertions(+), 363 deletions(-)
From: Shannon Zhao <zhaoshenglong@huawei.com>

While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it forgot
to offset the data array and index. This overlaps the GICR registers'
values and leaves the last GIC_INTERNAL irqs' registers out of the update.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_kvm.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         kvm_gicd_access(s, offset, &reg, false);
         *field = reg;
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         reg = *field;
         kvm_gicd_access(s, offset, &reg, true);
-- 
2.17.1
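(An aside on the offset arithmetic in the hunks above, as an illustration
rather than part of the patch: each GICD_IPRIORITYR priority field is
8 bits wide, so one interrupt occupies exactly one byte of both the
register space and the bmp array, and skipping the GIC_INTERNAL (32)
banked interrupts advances both by 32 bytes. The deliberately explicit
form "(GIC_INTERNAL * 8) / 8" spells out that bits-per-IRQ over
bits-per-byte conversion:)

/* Illustrative sketch, not QEMU code: byte offset of interrupt N in an
 * 8-bits-per-interrupt register bank such as GICD_IPRIORITYR<n>.
 */
#define BITS_PER_IRQ   8
#define BITS_PER_BYTE  8

static inline unsigned priority_byte_offset(unsigned irq)
{
    return (irq * BITS_PER_IRQ) / BITS_PER_BYTE;   /* == irq */
}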
The ethernet controller in the AN505 MPC FPGA image is behind
the same AHB Peripheral Protection Controller that handles
the graphics and GPIOs. (In the documentation this is clear
in the block diagram but the ethernet controller was omitted
from the table listing devices connected to the PPC.)
The ethernet sits behind AHB PPCEXP0 interface 5. We had
incorrectly claimed that this was a "gpio4", but there are
only 4 GPIOs in this image.

Correct the QEMU model to match the hardware.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180515171446.10834-1-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
     UnimplementedDeviceState spi[5];
     UnimplementedDeviceState i2c[4];
     UnimplementedDeviceState i2s_audio;
-    UnimplementedDeviceState gpio[5];
+    UnimplementedDeviceState gpio[4];
     UnimplementedDeviceState dma[4];
     UnimplementedDeviceState gfx;
     CMSDKAPBUART uart[5];
     SplitIRQ sec_resp_splitter;
     qemu_or_irq uart_irq_orgate;
+    DeviceState *lan9118;
 } MPS2TZMachineState;
 
 #define TYPE_MPS2TZ_MACHINE "mps2tz"
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
     return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
 }
 
+static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
+                                  const char *name, hwaddr size)
+{
+    SysBusDevice *s;
+    DeviceState *iotkitdev = DEVICE(&mms->iotkit);
+    NICInfo *nd = &nd_table[0];
+
+    /* In hardware this is a LAN9220; the LAN9118 is software compatible
+     * except that it doesn't support the checksum-offload feature.
+     */
+    qemu_check_nic_model(nd, "lan9118");
+    mms->lan9118 = qdev_create(NULL, "lan9118");
+    qdev_set_nic_properties(mms->lan9118, nd);
+    qdev_init_nofail(mms->lan9118);
+
+    s = SYS_BUS_DEVICE(mms->lan9118);
+    sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
+    return sysbus_mmio_get_region(s, 0);
+}
+
 static void mps2tz_common_init(MachineState *machine)
 {
     MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
         { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
         { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
         { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
-        { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
+        { "eth", make_eth_dev, NULL, 0x42000000, 0x100000 },
     },
 }, {
     .name = "ahb_ppcexp1",
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
                                 "cfg_sec_resp", 0));
     }
 
-    /* In hardware this is a LAN9220; the LAN9118 is software compatible
-     * except that it doesn't support the checksum-offload feature.
-     * The ethernet controller is not behind a PPC.
-     */
-    lan9118_init(&nd_table[0], 0x42000000,
-                 qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
-
     create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
 
     armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
-- 
2.17.1
Convert the sh7750 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the sh4 r2d board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-2-peter.maydell@linaro.org
---
 hw/sh4/sh7750.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/hw/sh4/sh7750.c b/hw/sh4/sh7750.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sh4/sh7750.c
+++ b/hw/sh4/sh7750.c
@@ -XXX,XX +XXX,XX @@ static void sh7750_mem_writel(void *opaque, hwaddr addr,
     }
 }
 
+static uint64_t sh7750_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return sh7750_mem_readb(opaque, addr);
+    case 2:
+        return sh7750_mem_readw(opaque, addr);
+    case 4:
+        return sh7750_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void sh7750_mem_writefn(void *opaque, hwaddr addr,
+                               uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        sh7750_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        sh7750_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        sh7750_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps sh7750_mem_ops = {
-    .old_mmio = {
-        .read = {sh7750_mem_readb,
-                 sh7750_mem_readw,
-                 sh7750_mem_readl },
-        .write = {sh7750_mem_writeb,
-                  sh7750_mem_writew,
-                  sh7750_mem_writel },
-    },
+    .read = sh7750_mem_readfn,
+    .write = sh7750_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
-- 
2.17.1
Convert the mcf5206 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the an5206 board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Thomas Huth <huth@tuxfamily.org>
Message-id: 20180601141223.26630-3-peter.maydell@linaro.org
---
 hw/m68k/mcf5206.c | 48 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/mcf5206.c
+++ b/hw/m68k/mcf5206.c
@@ -XXX,XX +XXX,XX @@ static void m5206_mbar_writel(void *opaque, hwaddr offset,
     m5206_mbar_write(s, offset, value, 4);
 }
 
+static uint64_t m5206_mbar_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return m5206_mbar_readb(opaque, addr);
+    case 2:
+        return m5206_mbar_readw(opaque, addr);
+    case 4:
+        return m5206_mbar_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void m5206_mbar_writefn(void *opaque, hwaddr addr,
+                               uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        m5206_mbar_writeb(opaque, addr, value);
+        break;
+    case 2:
+        m5206_mbar_writew(opaque, addr, value);
+        break;
+    case 4:
+        m5206_mbar_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps m5206_mbar_ops = {
-    .old_mmio = {
-        .read = {
-            m5206_mbar_readb,
-            m5206_mbar_readw,
-            m5206_mbar_readl,
-        },
-        .write = {
-            m5206_mbar_writeb,
-            m5206_mbar_writew,
-            m5206_mbar_writel,
-        },
-    },
+    .read = m5206_mbar_readfn,
+    .write = m5206_mbar_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
-- 
2.17.1
Convert the pflash_cfi02 device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180601141223.26630-4-peter.maydell@linaro.org
---
 hw/block/pflash_cfi02.c | 97 ++++++++---------------------------------
 1 file changed, 18 insertions(+), 79 deletions(-)

diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/pflash_cfi02.c
+++ b/hw/block/pflash_cfi02.c
@@ -XXX,XX +XXX,XX @@ static void pflash_write (pflash_t *pfl, hwaddr offset,
     pfl->cmd = 0;
 }
 
-
-static uint32_t pflash_readb_be(void *opaque, hwaddr addr)
+static uint64_t pflash_be_readfn(void *opaque, hwaddr addr, unsigned size)
 {
-    return pflash_read(opaque, addr, 1, 1);
+    return pflash_read(opaque, addr, size, 1);
 }
 
-static uint32_t pflash_readb_le(void *opaque, hwaddr addr)
+static void pflash_be_writefn(void *opaque, hwaddr addr,
+                              uint64_t value, unsigned size)
 {
-    return pflash_read(opaque, addr, 1, 0);
+    pflash_write(opaque, addr, value, size, 1);
 }
 
-static uint32_t pflash_readw_be(void *opaque, hwaddr addr)
+static uint64_t pflash_le_readfn(void *opaque, hwaddr addr, unsigned size)
 {
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 2, 1);
+    return pflash_read(opaque, addr, size, 0);
 }
 
-static uint32_t pflash_readw_le(void *opaque, hwaddr addr)
+static void pflash_le_writefn(void *opaque, hwaddr addr,
+                              uint64_t value, unsigned size)
 {
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 2, 0);
-}
-
-static uint32_t pflash_readl_be(void *opaque, hwaddr addr)
-{
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 4, 1);
-}
-
-static uint32_t pflash_readl_le(void *opaque, hwaddr addr)
-{
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 4, 0);
-}
-
-static void pflash_writeb_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_write(opaque, addr, value, 1, 1);
-}
-
-static void pflash_writeb_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_write(opaque, addr, value, 1, 0);
-}
-
-static void pflash_writew_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 2, 1);
-}
-
-static void pflash_writew_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 2, 0);
-}
-
-static void pflash_writel_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 4, 1);
-}
-
-static void pflash_writel_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 4, 0);
+    pflash_write(opaque, addr, value, size, 0);
 }
 
 static const MemoryRegionOps pflash_cfi02_ops_be = {
-    .old_mmio = {
-        .read = { pflash_readb_be, pflash_readw_be, pflash_readl_be, },
-        .write = { pflash_writeb_be, pflash_writew_be, pflash_writel_be, },
-    },
+    .read = pflash_be_readfn,
+    .write = pflash_be_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
 static const MemoryRegionOps pflash_cfi02_ops_le = {
-    .old_mmio = {
-        .read = { pflash_readb_le, pflash_readw_le, pflash_readl_le, },
-        .write = { pflash_writeb_le, pflash_writew_le, pflash_writel_le, },
-    },
+    .read = pflash_le_readfn,
+    .write = pflash_le_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
-- 
2.17.1
Convert the wdt_i6300esb device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-5-peter.maydell@linaro.org
---
 hw/watchdog/wdt_i6300esb.c | 48 ++++++++++++++++++++++++++++----------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -XXX,XX +XXX,XX @@ static void i6300esb_mem_writel(void *vp, hwaddr addr, uint32_t val)
     }
 }
 
+static uint64_t i6300esb_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return i6300esb_mem_readb(opaque, addr);
+    case 2:
+        return i6300esb_mem_readw(opaque, addr);
+    case 4:
+        return i6300esb_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void i6300esb_mem_writefn(void *opaque, hwaddr addr,
+                                 uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        i6300esb_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        i6300esb_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        i6300esb_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps i6300esb_ops = {
-    .old_mmio = {
-        .read = {
-            i6300esb_mem_readb,
-            i6300esb_mem_readw,
-            i6300esb_mem_readl,
-        },
-        .write = {
-            i6300esb_mem_writeb,
-            i6300esb_mem_writew,
-            i6300esb_mem_writel,
-        },
-    },
+    .read = i6300esb_mem_readfn,
+    .write = i6300esb_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_LITTLE_ENDIAN,
 };
 
-- 
2.17.1
Convert the pckbd device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant of the i8042, which is used by the Unicore32 'puv3'
board and the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-6-peter.maydell@linaro.org
---
 hw/input/pckbd.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pckbd.c
+++ b/hw/input/pckbd.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kbd = {
 };
 
 /* Memory mapped interface */
-static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
+static uint64_t kbd_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
     return kbd_read_data(s, 0, 1) & 0xff;
 }
 
-static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
+static void kbd_mm_writefn(void *opaque, hwaddr addr,
+                           uint64_t value, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
     kbd_write_data(s, 0, value & 0xff, 1);
 }
 
+
 static const MemoryRegionOps i8042_mmio_ops = {
+    .read = kbd_mm_readfn,
+    .write = kbd_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
-    .old_mmio = {
-        .read = { kbd_mm_readb, kbd_mm_readb, kbd_mm_readb },
-        .write = { kbd_mm_writeb, kbd_mm_writeb, kbd_mm_writeb },
-    },
 };
 
 void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
-- 
2.17.1
Convert the parallel device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant, which is used by the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-7-peter.maydell@linaro.org
---
 hw/char/parallel.c | 50 ++++++++++------------------------------------
 1 file changed, 11 insertions(+), 39 deletions(-)

diff --git a/hw/char/parallel.c b/hw/char/parallel.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/parallel.c
+++ b/hw/char/parallel.c
@@ -XXX,XX +XXX,XX @@ static void parallel_isa_realizefn(DeviceState *dev, Error **errp)
 }
 
 /* Memory mapped interface */
-static uint32_t parallel_mm_readb (void *opaque, hwaddr addr)
+static uint64_t parallel_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     ParallelState *s = opaque;
 
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFF;
+    return parallel_ioport_read_sw(s, addr >> s->it_shift) &
+        MAKE_64BIT_MASK(0, size * 8);
 }
 
-static void parallel_mm_writeb (void *opaque,
-                                hwaddr addr, uint32_t value)
+static void parallel_mm_writefn(void *opaque, hwaddr addr,
+                                uint64_t value, unsigned size)
 {
     ParallelState *s = opaque;
 
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFF);
-}
-
-static uint32_t parallel_mm_readw (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFFFF;
-}
-
-static void parallel_mm_writew (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFFFF);
-}
-
-static uint32_t parallel_mm_readl (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift);
-}
-
-static void parallel_mm_writel (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value);
+    parallel_ioport_write_sw(s, addr >> s->it_shift,
+                             value & MAKE_64BIT_MASK(0, size * 8));
 }
 
 static const MemoryRegionOps parallel_mm_ops = {
-    .old_mmio = {
-        .read = { parallel_mm_readb, parallel_mm_readw, parallel_mm_readl },
-        .write = { parallel_mm_writeb, parallel_mm_writew, parallel_mm_writel },
-    },
+    .read = parallel_mm_readfn,
+    .write = parallel_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
-- 
2.17.1
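(A note on the masking in the patch above, offered as an illustration
rather than part of the mail: MAKE_64BIT_MASK(shift, length) is QEMU's
bitops helper for building a mask of 'length' set bits starting at bit
'shift', so one expression covers every access size the region accepts.)

/* Illustration only: why "value & MAKE_64BIT_MASK(0, size * 8)" can
 * replace the three hand-written masks from the old_mmio functions.
 *
 *   size == 1:  MAKE_64BIT_MASK(0, 8)  == 0xFF
 *   size == 2:  MAKE_64BIT_MASK(0, 16) == 0xFFFF
 *   size == 4:  MAKE_64BIT_MASK(0, 32) == 0xFFFFFFFF
 */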
The stellaris board is still using the legacy armv7m_init() function,
which predates conversion of the ARMv7M into a proper QOM container
object. Make the board code directly create the ARMv7M object instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-2-peter.maydell@linaro.org
---
 hw/arm/stellaris.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/log.h"
 #include "exec/address-spaces.h"
 #include "sysemu/sysemu.h"
+#include "hw/arm/armv7m.h"
 #include "hw/char/pl011.h"
 #include "hw/misc/unimp.h"
 #include "cpu.h"
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
                            &error_fatal);
     memory_region_add_subregion(system_memory, 0x20000000, sram);
 
-    nvic = armv7m_init(system_memory, flash_size, NUM_IRQ_LINES,
-                       ms->kernel_filename, ms->cpu_type);
+    nvic = qdev_create(NULL, TYPE_ARMV7M);
+    qdev_prop_set_uint32(nvic, "num-irq", NUM_IRQ_LINES);
+    qdev_prop_set_string(nvic, "cpu-type", ms->cpu_type);
+    object_property_set_link(OBJECT(nvic), OBJECT(get_system_memory()),
+                             "memory", &error_abort);
+    /* This will exit with an error if the user passed us a bad cpu_type */
+    qdev_init_nofail(nvic);
 
     qdev_connect_gpio_out_named(nvic, "SYSRESETREQ", 0,
                                 qemu_allocate_irq(&do_sys_reset, NULL, 0));
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
     create_unimplemented_device("analogue-comparator", 0x4003c000, 0x1000);
     create_unimplemented_device("hibernation", 0x400fc000, 0x1000);
     create_unimplemented_device("flash-control", 0x400fd000, 0x1000);
+
+    armv7m_load_kernel(ARM_CPU(first_cpu), ms->kernel_filename, flash_size);
 }
 
 /* FIXME: Figure out how to generate these from stellaris_boards. */
-- 
2.17.1
Remove the now-unused armv7m_init() function. This was a legacy from
before we properly QOMified ARMv7M, and it has some flaws:

 * it combines work that needs to be done by an SoC object (creating
   and initializing the TYPE_ARMV7M object) with work that needs to
   be done by the board model (setting the system up to load the ELF
   file specified with -kernel)
 * TYPE_ARMV7M creation failure is fatal, but an SoC object wants to
   arrange to propagate the failure outward
 * it uses allocate-and-create via qdev_create() whereas the current
   preferred style for SoC objects is to do creation in-place

Board and SoC models can instead do the two jobs this function
was doing themselves, in the right places and with whatever their
preferred style/error handling is.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-3-peter.maydell@linaro.org
---
 include/hw/arm/arm.h |  8 ++------
 hw/arm/armv7m.c      | 21 ---------------------
 2 files changed, 2 insertions(+), 27 deletions(-)

diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/arm.h
+++ b/include/hw/arm/arm.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     ARM_ENDIANNESS_BE32,
 } arm_endianness;
 
-/* armv7m.c */
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type);
 /**
  * armv7m_load_kernel:
  * @cpu: CPU
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
  * @mem_size: mem_size: maximum image size to load
  *
  * Load the guest image for an ARMv7M system. This must be called by
- * any ARMv7M board, either directly or via armv7m_init(). (This is
- * necessary to ensure that the CPU resets correctly on system reset,
- * as well as for kernel loading.)
+ * any ARMv7M board. (This is necessary to ensure that the CPU resets
+ * correctly on system reset, as well as for kernel loading.)
  */
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size);
 
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_reset(void *opaque)
     cpu_reset(CPU(cpu));
 }
 
-/* Init CPU and memory for a v7-M based board.
-   mem_size is in bytes.
-   Returns the ARMv7M device. */
-
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type)
-{
-    DeviceState *armv7m;
-
-    armv7m = qdev_create(NULL, TYPE_ARMV7M);
-    qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
-    qdev_prop_set_string(armv7m, "cpu-type", cpu_type);
-    object_property_set_link(OBJECT(armv7m), OBJECT(get_system_memory()),
-                             "memory", &error_abort);
-    /* This will exit with an error if the user passed us a bad cpu_type */
-    qdev_init_nofail(armv7m);
-
-    armv7m_load_kernel(ARM_CPU(first_cpu), kernel_filename, mem_size);
-    return armv7m;
-}
-
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
 {
     int image_size;
-- 
2.17.1
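(A footnote to the commit message above: the "creation in-place" style it
prefers looks roughly like the sketch below. This is illustrative only;
MySoCState, MY_SOC and my_soc_instance_init are made-up names, while
object_initialize(), qdev_set_parent_bus() and TYPE_ARMV7M are the real
APIs of this era.)

/* Hypothetical SoC model showing the in-place style that replaces
 * allocate-and-create via qdev_create().
 */
typedef struct MySoCState {
    SysBusDevice parent_obj;
    ARMv7MState armv7m;            /* embedded child, not a pointer */
} MySoCState;

#define MY_SOC(obj) OBJECT_CHECK(MySoCState, (obj), TYPE_MY_SOC)

static void my_soc_instance_init(Object *obj)
{
    MySoCState *s = MY_SOC(obj);

    /* Initialize the child in place; no allocation happens here, and
     * the SoC's own realize method can set properties, realize the
     * child and propagate any error outward instead of exiting fatally.
     */
    object_initialize(&s->armv7m, sizeof(s->armv7m), TYPE_ARMV7M);
    qdev_set_parent_bus(DEVICE(&s->armv7m), sysbus_get_default());
}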
The Cortex-M CPU and its NVIC are two intimately intertwined parts of
the same hardware; it is not possible to use one without the other.
Unfortunately a lot of our board models don't do any sanity checking
on the CPU type the user asks for, so a command line like
    qemu-system-arm -M versatilepb -cpu cortex-m3
will create an M3 without an NVIC, and coredump immediately.
In the other direction, trying a non-M-profile CPU in an M-profile
board won't blow up, but doesn't do anything useful either:
    qemu-system-arm -M lm3s6965evb -cpu arm926

Add some checking in the NVIC and CPU realize functions that the
user isn't trying to use an NVIC without an M-profile CPU or
an M-profile CPU without an NVIC, so we can produce a helpful
error message rather than a core dump.

Fixes: https://bugs.launchpad.net/qemu/+bug/1766896
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601160355.15393-1-peter.maydell@linaro.org
---
 hw/arm/armv7m.c       |  7 ++++++-
 hw/intc/armv7m_nvic.c |  6 +++++-
 target/arm/cpu.c      | 18 ++++++++++++++++++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
             return;
         }
     }
+
+    /* Tell the CPU where the NVIC is; it will fail realize if it doesn't
+     * have one.
+     */
+    s->cpu->env.nvic = &s->nvic;
+
     object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
     if (err != NULL) {
         error_propagate(errp, err);
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
     sbd = SYS_BUS_DEVICE(&s->nvic);
     sysbus_connect_irq(sbd, 0,
                        qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
-    s->cpu->env.nvic = &s->nvic;
 
     memory_region_add_subregion(&s->container, 0xe000e000,
                                 sysbus_mmio_get_region(sbd, 0));
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     int regionlen;
 
     s->cpu = ARM_CPU(qemu_get_cpu(0));
-    assert(s->cpu);
+
+    if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
+        error_setg(errp, "The NVIC can only be used with a Cortex-M CPU");
+        return;
+    }
 
     if (s->num_irq > NVIC_MAX_IRQ) {
         error_setg(errp, "num-irq %d exceeds NVIC maximum", s->num_irq);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         return;
     }
 
+#ifndef CONFIG_USER_ONLY
+    /* The NVIC and M-profile CPU are two halves of a single piece of
+     * hardware; trying to use one without the other is a command line
+     * error and will result in segfaults if not caught here.
+     */
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        if (!env->nvic) {
+            error_setg(errp, "This board cannot be used with Cortex-M CPUs");
+            return;
+        }
+    } else {
+        if (env->nvic) {
+            error_setg(errp, "This board can only be used with Cortex-M CPUs");
+            return;
+        }
+    }
+#endif
+
     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
-- 
2.17.1
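A standalone sketch of the FPCR.FZ pattern fixed above (not part of the patch;
float32_squash_input_denormal(), float32_val(), extract32() and make_float32()
are the real QEMU softfloat/bitops calls, but the helper name is hypothetical
and the exp == 0 special case of the real frecpx helper is omitted):

    #include "fpu/softfloat.h"   /* assumed QEMU include paths */
    #include "qemu/bitops.h"

    static float32 frecpx_like(float32 a, float_status *fpst)
    {
        uint32_t val32, sbit, exp;

        /* The fix: flush denormal inputs per FPCR.FZ *before* picking
         * the bits apart, so float_flag_input_denormal (reported to
         * the guest as FPSR.IDC) gets raised.  The numeric result is
         * unaffected, since only sign and exponent are used below.
         */
        a = float32_squash_input_denormal(a, fpst);

        val32 = float32_val(a);
        sbit = 0x80000000u & val32;
        exp = extract32(val32, 23, 8);

        /* FRECPX: result exponent is the bitwise NOT of the operand's */
        return make_float32(sbit | ((~exp & 0xff) << 23));
    }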
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
For the IoTKit MPC support, we need to wire together the
2
add MemTxAttrs as an argument to flatview_translate(); all its
2
interrupt outputs of 17 MPCs; this exceeds the current
3
callers now have attrs available.
3
value of MAX_OR_LINES. Increase MAX_OR_LINES to 32 (which
4
should be enough for anyone).
5
6
The tricky part is retaining the migration compatibility for
7
existing OR gates; we add a subsection which is only used
8
for larger OR gates, and define it such that we can freely
9
increase MAX_OR_LINES in future (or even move to a dynamically
10
allocated levels[] array without an upper size limit) without
11
breaking compatibility.
4
12
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20180604152941.20374-10-peter.maydell@linaro.org
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
9
---
16
---
10
include/exec/memory.h | 7 ++++---
17
include/hw/or-irq.h | 5 ++++-
11
exec.c | 17 +++++++++--------
18
hw/core/or-irq.c | 39 +++++++++++++++++++++++++++++++++++++--
12
2 files changed, 13 insertions(+), 11 deletions(-)
19
2 files changed, 41 insertions(+), 3 deletions(-)
13
20
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
21
diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
23
--- a/include/hw/or-irq.h
17
+++ b/include/exec/memory.h
24
+++ b/include/hw/or-irq.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
25
@@ -XXX,XX +XXX,XX @@
19
*/
26
20
MemoryRegion *flatview_translate(FlatView *fv,
27
#define TYPE_OR_IRQ "or-irq"
21
hwaddr addr, hwaddr *xlat,
28
22
- hwaddr *len, bool is_write);
29
-#define MAX_OR_LINES 16
23
+ hwaddr *len, bool is_write,
30
+/* This can safely be increased if necessary without breaking
24
+ MemTxAttrs attrs);
31
+ * migration compatibility (as long as it remains greater than 15).
25
32
+ */
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
33
+#define MAX_OR_LINES 32
27
hwaddr addr, hwaddr *xlat,
34
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
35
typedef struct OrIRQState qemu_or_irq;
29
MemTxAttrs attrs)
36
30
{
37
diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
31
return flatview_translate(address_space_to_flatview(as),
38
index XXXXXXX..XXXXXXX 100644
32
- addr, xlat, len, is_write);
39
--- a/hw/core/or-irq.c
33
+ addr, xlat, len, is_write, attrs);
40
+++ b/hw/core/or-irq.c
41
@@ -XXX,XX +XXX,XX @@ static void or_irq_init(Object *obj)
42
qdev_init_gpio_out(DEVICE(obj), &s->out_irq, 1);
34
}
43
}
35
44
36
/* address_space_access_valid: check for validity of accessing an address
45
+/* The original version of this device had a fixed 16 entries in its
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
46
+ * VMState array; devices with more inputs than this need to
38
rcu_read_lock();
47
+ * migrate the extra lines via a subsection.
39
fv = address_space_to_flatview(as);
48
+ * The subsection migrates as much of the levels[] array as is needed
40
l = len;
49
+ * (including repeating the first 16 elements), to avoid the awkwardness
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
50
+ * of splitting it in two to meet the requirements of VMSTATE_VARRAY_UINT16.
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
51
+ */
43
if (len == l && memory_access_is_direct(mr, false)) {
52
+#define OLD_MAX_OR_LINES 16
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
53
+#if MAX_OR_LINES < OLD_MAX_OR_LINES
45
memcpy(buf, ptr, len);
54
+#error MAX_OR_LINES must be at least 16 for migration compatibility
46
diff --git a/exec.c b/exec.c
55
+#endif
47
index XXXXXXX..XXXXXXX 100644
56
+
48
--- a/exec.c
57
+static bool vmstate_extras_needed(void *opaque)
49
+++ b/exec.c
58
+{
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
59
+ qemu_or_irq *s = OR_IRQ(opaque);
51
60
+
52
/* Called from RCU critical section */
61
+ return s->num_lines >= OLD_MAX_OR_LINES;
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
62
+}
54
- hwaddr *plen, bool is_write)
63
+
55
+ hwaddr *plen, bool is_write,
64
+static const VMStateDescription vmstate_or_irq_extras = {
56
+ MemTxAttrs attrs)
65
+ .name = "or-irq-extras",
57
{
66
+ .version_id = 1,
58
MemoryRegion *mr;
67
+ .minimum_version_id = 1,
59
MemoryRegionSection section;
68
+ .needed = vmstate_extras_needed,
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
69
+ .fields = (VMStateField[]) {
61
}
70
+ VMSTATE_VARRAY_UINT16_UNSAFE(levels, qemu_or_irq, num_lines, 0,
62
71
+ vmstate_info_bool, bool),
63
l = len;
72
+ VMSTATE_END_OF_LIST(),
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
73
+ },
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
74
+};
66
}
75
+
67
76
static const VMStateDescription vmstate_or_irq = {
68
return result;
77
.name = TYPE_OR_IRQ,
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
78
.version_id = 1,
70
MemTxResult result = MEMTX_OK;
79
.minimum_version_id = 1,
71
80
.fields = (VMStateField[]) {
72
l = len;
81
- VMSTATE_BOOL_ARRAY(levels, qemu_or_irq, MAX_OR_LINES),
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
82
+ VMSTATE_BOOL_SUB_ARRAY(levels, qemu_or_irq, 0, OLD_MAX_OR_LINES),
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
83
VMSTATE_END_OF_LIST(),
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
84
- }
76
addr1, l, mr);
85
+ },
77
86
+ .subsections = (const VMStateDescription*[]) {
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
87
+ &vmstate_or_irq_extras,
79
}
88
+ NULL
80
89
+ },
81
l = len;
90
};
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
91
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
92
static Property or_irq_properties[] = {
84
}
85
86
return result;
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
93
return flatview_read_continue(fv, addr, attrs, buf, len,
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
93
--
124
2.17.1
94
2.17.1
125
95
126
96
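To make the migration-compatibility argument above concrete: the main VMSD
keeps sending exactly the first 16 levels, and the subsection (gated on the
gate's width) carries the full array.  A minimal standalone model of the
.needed predicate, with hypothetical widths:

    #include <assert.h>
    #include <stdbool.h>

    #define OLD_MAX_OR_LINES 16

    /* Mirrors vmstate_extras_needed(): the subsection only appears in
     * the stream for gates at or above the original fixed array size,
     * so smaller pre-existing gates migrate byte-identically. */
    static bool extras_needed(unsigned num_lines)
    {
        return num_lines >= OLD_MAX_OR_LINES;
    }

    int main(void)
    {
        assert(!extras_needed(8));    /* old-style gate: main VMSD only */
        assert(extras_needed(17));    /* the IoTKit MPC case: subsection used */
        return 0;
    }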
1
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
1
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious
2
and friends.
2
use; add a comment documenting it (reverse-engineered from what
3
the code that sets it is doing).
3
4
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
6
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
7
---
9
---
8
include/migration/vmstate.h | 3 +++
10
include/exec/cpu-defs.h | 9 +++++++++
9
1 file changed, 3 insertions(+)
11
accel/tcg/cputlb.c | 12 ++++++++++++
12
2 files changed, 21 insertions(+)
10
13
11
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
14
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/include/migration/vmstate.h
16
--- a/include/exec/cpu-defs.h
14
+++ b/include/migration/vmstate.h
17
+++ b/include/exec/cpu-defs.h
15
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
18
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
16
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
19
* structs into one.)
17
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)
20
*/
18
21
typedef struct CPUIOTLBEntry {
19
+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
22
+ /*
20
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
23
+ * @addr contains:
21
+
24
+ * - in the lower TARGET_PAGE_BITS, a physical section number
22
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
25
+ * - with the lower TARGET_PAGE_BITS masked off, an offset which
23
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
26
+ * must be added to the virtual address to obtain:
27
+ * + the ram_addr_t of the target RAM (if the physical section
28
+ * number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
29
+ * + the offset within the target MemoryRegion (otherwise)
30
+ */
31
hwaddr addr;
32
MemTxAttrs attrs;
33
} CPUIOTLBEntry;
34
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/accel/tcg/cputlb.c
37
+++ b/accel/tcg/cputlb.c
38
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
39
env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
40
41
/* refill the tlb */
42
+ /*
43
+ * At this point iotlb contains a physical section number in the lower
44
+ * TARGET_PAGE_BITS, and either
45
+ * + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
46
+ * + the offset within section->mr of the page base (otherwise)
47
+ * We subtract the vaddr (which is page aligned and thus won't
48
+ * disturb the low bits) to give an offset which can be added to the
49
+ * (non-page-aligned) vaddr of the eventual memory access to get
50
+ * the MemoryRegion offset for the access. Note that the vaddr we
51
+ * subtract here is that of the page base, and not the same as the
52
+ * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
53
+ */
54
env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
55
env->iotlb[mmu_idx][index].attrs = attrs;
24
56
25
--
57
--
26
2.17.1
58
2.17.1
27
59
28
60
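A worked illustration of the 'addr' arithmetic documented above, as standalone
C with hypothetical values (4K pages; the TARGET_PAGE_MASK definition here is
a stand-in, not QEMU's):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK ((uint64_t)-1 << TARGET_PAGE_BITS)

    int main(void)
    {
        /* Hypothetical fill-time values: physical section number 0x3,
         * page base at offset 0x2000 within the MemoryRegion, guest
         * page base vaddr 0x5000. */
        uint64_t iotlb = 0x2000 | 0x3;
        uint64_t vaddr_page = 0x5000;
        uint64_t entry_addr = iotlb - vaddr_page;   /* what gets stored */

        /* A later access at vaddr 0x5084 recovers both pieces: */
        uint64_t vaddr = 0x5084;
        assert(((entry_addr & TARGET_PAGE_MASK) + vaddr) == 0x2084); /* MR offset */
        assert((entry_addr & ~TARGET_PAGE_MASK) == 0x3);             /* section no. */
        return 0;
    }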
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
The API for cpu_transaction_failed() says that it takes the physical
2
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
2
address for the failed transaction. However we were actually passing
3
Its callers either have an attrs value to hand, or don't care
3
it the offset within the target MemoryRegion. We don't currently
4
and can use MEMTXATTRS_UNSPECIFIED.
4
have any target CPU implementations of this hook that require the
5
physical address; fix this bug so we don't get confused if we ever
6
do add one.
5
7
8
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
9
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
10
---
13
---
11
include/exec/exec-all.h | 5 +++--
14
include/exec/exec-all.h | 13 ++++++++++--
12
accel/tcg/translate-all.c | 2 +-
15
accel/tcg/cputlb.c | 44 +++++++++++++++++++++++++++++------------
13
exec.c | 2 +-
16
exec.c | 5 +++--
14
target/xtensa/op_helper.c | 3 ++-
17
3 files changed, 45 insertions(+), 17 deletions(-)
15
4 files changed, 7 insertions(+), 5 deletions(-)
16
18
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
21
--- a/include/exec/exec-all.h
20
+++ b/include/exec/exec-all.h
22
+++ b/include/exec/exec-all.h
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
23
@@ -XXX,XX +XXX,XX @@ void tb_lock_reset(void);
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
24
23
hwaddr paddr, int prot,
25
#if !defined(CONFIG_USER_ONLY)
24
int mmu_idx, target_ulong size);
26
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
27
-struct MemoryRegion *iotlb_to_region(CPUState *cpu,
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
28
- hwaddr index, MemTxAttrs attrs);
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
29
+/**
28
uintptr_t retaddr);
30
+ * iotlb_to_section:
29
#else
31
+ * @cpu: CPU performing the access
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
32
+ * @index: TCG CPU IOTLB entry
31
uint16_t idxmap)
33
+ *
34
+ * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
35
+ * it refers to. @index will have been initially created and returned
36
+ * by memory_region_section_get_iotlb().
37
+ */
38
+struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
39
+ hwaddr index, MemTxAttrs attrs);
40
41
void tlb_fill(CPUState *cpu, target_ulong addr, int size,
42
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
43
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/accel/tcg/cputlb.c
46
+++ b/accel/tcg/cputlb.c
47
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
48
target_ulong addr, uintptr_t retaddr, int size)
32
{
49
{
33
}
50
CPUState *cpu = ENV_GET_CPU(env);
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
51
- hwaddr physaddr = iotlbentry->addr;
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
52
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
36
+ MemTxAttrs attrs)
53
+ hwaddr mr_offset;
54
+ MemoryRegionSection *section;
55
+ MemoryRegion *mr;
56
uint64_t val;
57
bool locked = false;
58
MemTxResult r;
59
60
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
61
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
62
+ mr = section->mr;
63
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
64
cpu->mem_io_pc = retaddr;
65
if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
66
cpu_io_recompile(cpu, retaddr);
67
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
68
qemu_mutex_lock_iothread();
69
locked = true;
70
}
71
- r = memory_region_dispatch_read(mr, physaddr,
72
+ r = memory_region_dispatch_read(mr, mr_offset,
73
&val, size, iotlbentry->attrs);
74
if (r != MEMTX_OK) {
75
+ hwaddr physaddr = mr_offset +
76
+ section->offset_within_address_space -
77
+ section->offset_within_region;
78
+
79
cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
80
mmu_idx, iotlbentry->attrs, r, retaddr);
81
}
82
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
83
uintptr_t retaddr, int size)
37
{
84
{
38
}
85
CPUState *cpu = ENV_GET_CPU(env);
39
#endif
86
- hwaddr physaddr = iotlbentry->addr;
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
87
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
41
index XXXXXXX..XXXXXXX 100644
88
+ hwaddr mr_offset;
42
--- a/accel/tcg/translate-all.c
89
+ MemoryRegionSection *section;
43
+++ b/accel/tcg/translate-all.c
90
+ MemoryRegion *mr;
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
91
bool locked = false;
45
}
92
MemTxResult r;
46
93
47
#if !defined(CONFIG_USER_ONLY)
94
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
95
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
96
+ mr = section->mr;
97
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
98
if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
99
cpu_io_recompile(cpu, retaddr);
100
}
101
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
102
qemu_mutex_lock_iothread();
103
locked = true;
104
}
105
- r = memory_region_dispatch_write(mr, physaddr,
106
+ r = memory_region_dispatch_write(mr, mr_offset,
107
val, size, iotlbentry->attrs);
108
if (r != MEMTX_OK) {
109
+ hwaddr physaddr = mr_offset +
110
+ section->offset_within_address_space -
111
+ section->offset_within_region;
112
+
113
cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
114
mmu_idx, iotlbentry->attrs, r, retaddr);
115
}
116
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
117
*/
118
tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
50
{
119
{
51
ram_addr_t ram_addr;
120
- int mmu_idx, index, pd;
121
+ int mmu_idx, index;
122
void *p;
52
MemoryRegion *mr;
123
MemoryRegion *mr;
124
+ MemoryRegionSection *section;
125
CPUState *cpu = ENV_GET_CPU(env);
126
CPUIOTLBEntry *iotlbentry;
127
- hwaddr physaddr;
128
+ hwaddr physaddr, mr_offset;
129
130
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
131
mmu_idx = cpu_mmu_index(env, true);
132
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
133
}
134
}
135
iotlbentry = &env->iotlb[mmu_idx][index];
136
- pd = iotlbentry->addr & ~TARGET_PAGE_MASK;
137
- mr = iotlb_to_region(cpu, pd, iotlbentry->attrs);
138
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
139
+ mr = section->mr;
140
if (memory_region_is_unassigned(mr)) {
141
qemu_mutex_lock_iothread();
142
if (memory_region_request_mmio_ptr(mr, addr)) {
143
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
144
* and use the MemTXResult it produced). However it is the
145
* simplest place we have currently available for the check.
146
*/
147
- physaddr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
148
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
149
+ physaddr = mr_offset +
150
+ section->offset_within_address_space -
151
+ section->offset_within_region;
152
cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
153
iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
154
53
diff --git a/exec.c b/exec.c
155
diff --git a/exec.c b/exec.c
54
index XXXXXXX..XXXXXXX 100644
156
index XXXXXXX..XXXXXXX 100644
55
--- a/exec.c
157
--- a/exec.c
56
+++ b/exec.c
158
+++ b/exec.c
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
159
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps readonly_mem_ops = {
58
if (phys != -1) {
160
},
59
/* Locks grabbed by tb_invalidate_phys_addr */
161
};
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
162
61
- phys | (pc & ~TARGET_PAGE_MASK));
163
-MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index, MemTxAttrs attrs)
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
164
+MemoryRegionSection *iotlb_to_section(CPUState *cpu,
63
}
165
+ hwaddr index, MemTxAttrs attrs)
166
{
167
int asidx = cpu_asidx_from_attrs(cpu, attrs);
168
CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
169
AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
170
MemoryRegionSection *sections = d->map.sections;
171
172
- return sections[index & ~TARGET_PAGE_MASK].mr;
173
+ return &sections[index & ~TARGET_PAGE_MASK];
64
}
174
}
65
#endif
175
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
176
static void io_mem_init(void)
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/xtensa/op_helper.c
69
+++ b/target/xtensa/op_helper.c
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
72
&paddr, &page_size, &access);
73
if (ret == 0) {
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
76
+ MEMTXATTRS_UNSPECIFIED);
77
}
78
}
79
80
--
177
--
81
2.17.1
178
2.17.1
82
179
83
180
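The physaddr reconstruction above is plain offset bookkeeping; with
hypothetical section numbers it looks like this (standalone C, not part of
the patch):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical MemoryRegionSection: mapped at guest-physical
         * 0x40000000, starting 0x1000 into its MemoryRegion. */
        uint64_t offset_within_address_space = 0x40000000;
        uint64_t offset_within_region = 0x1000;

        uint64_t mr_offset = 0x2084;  /* offset of the access within the MR */
        uint64_t physaddr = mr_offset
                          + offset_within_address_space
                          - offset_within_region;

        /* cpu_transaction_failed() now sees the guest-physical address */
        assert(physaddr == 0x40001084);
        return 0;
    }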
1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
1
The codebase has a bit of a mix of different multiline
2
the new devices they use.
2
comment styles. State a preference for the Linux kernel
3
style:
4
/*
5
* Star on the left for each line.
6
* Leading slash-star and trailing star-slash
7
* each go on a line of their own.
8
*/
3
9
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
11
Reviewed-by: Eric Blake <eblake@redhat.com>
12
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
13
Reviewed-by: Markus Armbruster <armbru@redhat.com>
14
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: John Snow <jsnow@redhat.com>
17
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
18
Message-id: 20180611141716.3813-1-peter.maydell@linaro.org
6
---
19
---
7
MAINTAINERS | 9 +++++++--
20
CODING_STYLE | 17 +++++++++++++++++
8
1 file changed, 7 insertions(+), 2 deletions(-)
21
1 file changed, 17 insertions(+)
9
22
10
diff --git a/MAINTAINERS b/MAINTAINERS
23
diff --git a/CODING_STYLE b/CODING_STYLE
11
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
12
--- a/MAINTAINERS
25
--- a/CODING_STYLE
13
+++ b/MAINTAINERS
26
+++ b/CODING_STYLE
14
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
27
@@ -XXX,XX +XXX,XX @@ We use traditional C-style /* */ comments and avoid // comments.
15
F: include/hw/timer/cmsdk-apb-timer.h
28
Rationale: The // form is valid in C99, so this is purely a matter of
16
F: hw/char/cmsdk-apb-uart.c
29
consistency of style. The checkpatch script will warn you about this.
17
F: include/hw/char/cmsdk-apb-uart.h
30
18
+F: hw/misc/tz-ppc.c
31
+Multiline comment blocks should have a row of stars on the left,
19
+F: include/hw/misc/tz-ppc.h
32
+and the initial /* and terminating */ both on their own lines:
20
33
+ /*
21
ARM cores
34
+ * like
22
M: Peter Maydell <peter.maydell@linaro.org>
35
+ * this
23
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
36
+ */
24
L: qemu-arm@nongnu.org
37
+This is the same format required by the Linux kernel coding style.
25
S: Maintained
38
+
26
F: hw/arm/mps2.c
39
+(Some of the existing comments in the codebase use the GNU Coding
27
-F: hw/misc/mps2-scc.c
40
+Standards form which does not have stars on the left, or other
28
-F: include/hw/misc/mps2-scc.h
41
+variations; avoid these when writing new comments, but don't worry
29
+F: hw/arm/mps2-tz.c
42
+about converting to the preferred form unless you're editing that
30
+F: hw/misc/mps2-*.c
43
+comment anyway.)
31
+F: include/hw/misc/mps2-*.h
44
+
32
+F: hw/arm/iotkit.c
45
+Rationale: Consistency, and ease of visually picking out a multiline
33
+F: include/hw/arm/iotkit.h
46
+comment from the surrounding code.
34
47
+
35
Musicpal
48
8. trace-events style
36
M: Jan Kiszka <jan.kiszka@web.de>
49
50
8.1 0x prefix
37
--
51
--
38
2.17.1
52
2.17.1
39
53
40
54
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
There's a common pattern in QEMU where a function needs to perform
2
add MemTxAttrs as an argument to flatview_access_valid().
2
a data load or store of an N byte integer in a particular endianness.
3
Its callers now all have an attrs value to hand, so we can
3
At the moment this is handled by doing a switch() on the size and
4
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.
4
calling the appropriate ld*_p or st*_p function for each size.
5
6
Provide a new family of functions ldn_*_p() and stn_*_p() which
7
take the size as an argument and do the switch() themselves.
5
8
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
12
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
10
---
13
---
11
exec.c | 12 +++++-------
14
include/exec/cpu-all.h | 4 +++
12
1 file changed, 5 insertions(+), 7 deletions(-)
15
include/qemu/bswap.h | 52 +++++++++++++++++++++++++++++++++++++
16
docs/devel/loads-stores.rst | 15 +++++++++++
17
3 files changed, 71 insertions(+)
13
18
14
diff --git a/exec.c b/exec.c
19
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
21
--- a/include/exec/cpu-all.h
17
+++ b/exec.c
22
+++ b/include/exec/cpu-all.h
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
23
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
19
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
24
#define stq_p(p, v) stq_be_p(p, v)
20
const uint8_t *buf, int len);
25
#define stfl_p(p, v) stfl_be_p(p, v)
21
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
26
#define stfq_p(p, v) stfq_be_p(p, v)
22
- bool is_write);
27
+#define ldn_p(p, sz) ldn_be_p(p, sz)
23
+ bool is_write, MemTxAttrs attrs);
28
+#define stn_p(p, sz, v) stn_be_p(p, sz, v)
24
29
#else
25
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
30
#define lduw_p(p) lduw_le_p(p)
26
unsigned len, MemTxAttrs attrs)
31
#define ldsw_p(p) ldsw_le_p(p)
27
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
32
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
33
#define stq_p(p, v) stq_le_p(p, v)
34
#define stfl_p(p, v) stfl_le_p(p, v)
35
#define stfq_p(p, v) stfq_le_p(p, v)
36
+#define ldn_p(p, sz) ldn_le_p(p, sz)
37
+#define stn_p(p, sz, v) stn_le_p(p, sz, v)
28
#endif
38
#endif
29
39
30
return flatview_access_valid(subpage->fv, addr + subpage->base,
40
/* MMU memory access macros */
31
- len, is_write);
41
diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
32
+ len, is_write, attrs);
42
index XXXXXXX..XXXXXXX 100644
43
--- a/include/qemu/bswap.h
44
+++ b/include/qemu/bswap.h
45
@@ -XXX,XX +XXX,XX @@ typedef union {
46
* For accessors that take a guest address rather than a
47
* host address, see the cpu_{ld,st}_* accessors defined in
48
* cpu_ldst.h.
49
+ *
50
+ * For cases where the size to be used is not fixed at compile time,
51
+ * there are
52
+ * stn{endian}_p(ptr, sz, val)
53
+ * which stores @val to @ptr as an @endian-order number @sz bytes in size
54
+ * and
55
+ * ldn{endian}_p(ptr, sz)
56
+ * which loads @sz bytes from @ptr as an unsigned @endian-order number
57
+ * and returns it in a uint64_t.
58
*/
59
60
static inline int ldub_p(const void *ptr)
61
@@ -XXX,XX +XXX,XX @@ static inline unsigned long leul_to_cpu(unsigned long v)
62
#endif
33
}
63
}
34
64
35
static const MemoryRegionOps subpage_ops = {
65
+/* Store v to p as a sz byte value in host order */
36
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
66
+#define DO_STN_LDN_P(END) \
37
}
67
+ static inline void stn_## END ## _p(void *ptr, int sz, uint64_t v) \
38
68
+ { \
39
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
69
+ switch (sz) { \
40
- bool is_write)
70
+ case 1: \
41
+ bool is_write, MemTxAttrs attrs)
71
+ stb_p(ptr, v); \
42
{
72
+ break; \
43
MemoryRegion *mr;
73
+ case 2: \
44
hwaddr l, xlat;
74
+ stw_ ## END ## _p(ptr, v); \
45
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
75
+ break; \
46
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
76
+ case 4: \
47
if (!memory_access_is_direct(mr, is_write)) {
77
+ stl_ ## END ## _p(ptr, v); \
48
l = memory_access_size(mr, l, addr);
78
+ break; \
49
- /* When our callers all have attrs we'll pass them through here */
79
+ case 8: \
50
- if (!memory_region_access_valid(mr, xlat, l, is_write,
80
+ stq_ ## END ## _p(ptr, v); \
51
- MEMTXATTRS_UNSPECIFIED)) {
81
+ break; \
52
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
82
+ default: \
53
return false;
83
+ g_assert_not_reached(); \
54
}
84
+ } \
55
}
85
+ } \
56
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
86
+ static inline uint64_t ldn_## END ## _p(const void *ptr, int sz) \
57
87
+ { \
58
rcu_read_lock();
88
+ switch (sz) { \
59
fv = address_space_to_flatview(as);
89
+ case 1: \
60
- result = flatview_access_valid(fv, addr, len, is_write);
90
+ return ldub_p(ptr); \
61
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
91
+ case 2: \
62
rcu_read_unlock();
92
+ return lduw_ ## END ## _p(ptr); \
63
return result;
93
+ case 4: \
64
}
94
+ return (uint32_t)ldl_ ## END ## _p(ptr); \
95
+ case 8: \
96
+ return ldq_ ## END ## _p(ptr); \
97
+ default: \
98
+ g_assert_not_reached(); \
99
+ } \
100
+ }
101
+
102
+DO_STN_LDN_P(he)
103
+DO_STN_LDN_P(le)
104
+DO_STN_LDN_P(be)
105
+
106
+#undef DO_STN_LDN_P
107
+
108
#undef le_bswap
109
#undef be_bswap
110
#undef le_bswaps
111
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
112
index XXXXXXX..XXXXXXX 100644
113
--- a/docs/devel/loads-stores.rst
114
+++ b/docs/devel/loads-stores.rst
115
@@ -XXX,XX +XXX,XX @@ The ``_{endian}`` infix is omitted for target-endian accesses.
116
The target endian accessors are only available to source
117
files which are built per-target.
118
119
+There are also functions which take the size as an argument:
120
+
121
+load: ``ldn{endian}_p(ptr, sz)``
122
+
123
+which performs an unsigned load of ``sz`` bytes from ``ptr``
124
+as an ``{endian}`` order value and returns it in a uint64_t.
125
+
126
+store: ``stn{endian}_p(ptr, sz, val)``
127
+
128
+which stores ``val`` to ``ptr`` as an ``{endian}`` order value
129
+of size ``sz`` bytes.
130
+
131
+
132
Regexes for git grep
133
- ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
134
- ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
135
+ - ``\<ldn_\([hbl]e\)?_p\>``
136
+ - ``\<stn_\([hbl]e\)?_p\>``
137
138
``cpu_{ld,st}_*``
139
~~~~~~~~~~~~~~~~~
65
--
140
--
66
2.17.1
141
2.17.1
67
142
68
143
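A quick usage sketch of the accessors added above (the values are arbitrary;
ldn_*_p()/stn_*_p() are exactly the functions this patch defines):

    #include <assert.h>
    #include <stdint.h>
    #include "qemu/bswap.h"   /* provides stn_{he,le,be}_p()/ldn_{he,le,be}_p() */

    int main(void)
    {
        uint8_t buf[8];

        stn_le_p(buf, 4, 0x12345678);   /* buf[0..3] = 78 56 34 12 */
        assert(buf[0] == 0x78 && buf[3] == 0x12);
        assert(ldn_le_p(buf, 4) == 0x12345678);

        stn_be_p(buf, 2, 0xbeef);       /* buf[0..1] = be ef */
        assert(ldn_be_p(buf, 2) == 0xbeef);
        return 0;
    }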
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
In subpage_read() we perform a load of the data into a local buffer
2
add MemTxAttrs as an argument to flatview_extend_translation().
2
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
3
Its callers either have an attrs value to hand, or don't care
3
depending on its size, storing the result into the uint64_t *data.
4
and can use MEMTXATTRS_UNSPECIFIED.
4
Since ldl_p() returns an 'int', this means that for the 4-byte
5
case we will sign-extend the data, whereas for 1 and 2 byte
6
reads we zero-extend it.
7
8
This ought not to matter since the caller will likely ignore values in
9
the high bytes of the data, but add a cast so that we're consistent.
5
10
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
13
Message-id: 20180611171007.4165-3-peter.maydell@linaro.org
10
---
14
---
11
exec.c | 15 ++++++++++-----
15
exec.c | 2 +-
12
1 file changed, 10 insertions(+), 5 deletions(-)
16
1 file changed, 1 insertion(+), 1 deletion(-)
13
17
14
diff --git a/exec.c b/exec.c
18
diff --git a/exec.c b/exec.c
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
20
--- a/exec.c
17
+++ b/exec.c
21
+++ b/exec.c
18
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
22
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
19
23
*data = lduw_p(buf);
20
static hwaddr
24
return MEMTX_OK;
21
flatview_extend_translation(FlatView *fv, hwaddr addr,
25
case 4:
22
- hwaddr target_len,
26
- *data = ldl_p(buf);
23
- MemoryRegion *mr, hwaddr base, hwaddr len,
27
+ *data = (uint32_t)ldl_p(buf);
24
- bool is_write)
28
return MEMTX_OK;
25
+ hwaddr target_len,
29
case 8:
26
+ MemoryRegion *mr, hwaddr base, hwaddr len,
30
*data = ldq_p(buf);
27
+ bool is_write, MemTxAttrs attrs)
28
{
29
hwaddr done = 0;
30
hwaddr xlat;
31
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
32
33
memory_region_ref(mr);
34
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
35
- l, is_write);
36
+ l, is_write, attrs);
37
ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
38
rcu_read_unlock();
39
40
@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
41
mr = cache->mrs.mr;
42
memory_region_ref(mr);
43
if (memory_access_is_direct(mr, is_write)) {
44
+ /* We don't care about the memory attributes here as we're only
45
+ * doing this if we found actual RAM, which behaves the same
46
+ * regardless of attributes; so UNSPECIFIED is fine.
47
+ */
48
l = flatview_extend_translation(cache->fv, addr, len, mr,
49
- cache->xlat, l, is_write);
50
+ cache->xlat, l, is_write,
51
+ MEMTXATTRS_UNSPECIFIED);
52
cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
53
} else {
54
cache->ptr = NULL;
55
--
31
--
56
2.17.1
32
2.17.1
57
33
58
34
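The sign-extension hazard above is easy to see in isolation.  A standalone
model (ldl_p_like() is a stand-in with the same int return type as the
ldl_p() described in the message, not QEMU's implementation):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* The int return type is the source of the sign extension. */
    static int ldl_p_like(const void *ptr)
    {
        int v;
        memcpy(&v, ptr, 4);
        return v;
    }

    int main(void)
    {
        uint8_t buf[4] = { 0xff, 0xff, 0xff, 0xff };
        uint64_t data;

        data = ldl_p_like(buf);            /* sign-extends to all-ones */
        assert(data == UINT64_MAX);

        data = (uint32_t)ldl_p_like(buf);  /* the patch's cast: zero-extends */
        assert(data == 0xffffffffu);
        return 0;
    }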
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Now we have stn_p() and ldn_p() we can use them in various
2
add MemTxAttrs as an argument to address_space_translate_iommu().
2
functions in exec.c that used to have their own switch-on-size code.
3
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
7
Message-id: 20180611171007.4165-4-peter.maydell@linaro.org
8
---
8
---
9
exec.c | 8 +++++---
9
exec.c | 112 +++++----------------------------------------------------
10
1 file changed, 5 insertions(+), 3 deletions(-)
10
1 file changed, 8 insertions(+), 104 deletions(-)
11
11
12
diff --git a/exec.c b/exec.c
12
diff --git a/exec.c b/exec.c
13
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
14
--- a/exec.c
15
+++ b/exec.c
15
+++ b/exec.c
16
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
16
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
17
* @is_write: whether the translation operation is for write
17
memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
18
* @is_mmio: whether this can be MMIO, set true if it can
18
ram_addr, size);
19
* @target_as: the address space targeted by the IOMMU
19
20
+ * @attrs: transaction attributes
20
- switch (size) {
21
*
21
- case 1:
22
* This function is called from RCU critical section. It is the common
22
- stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
23
* part of flatview_do_translate and address_space_translate_cached.
23
- break;
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
24
- case 2:
25
hwaddr *page_mask_out,
25
- stw_p(qemu_map_ram_ptr(NULL, ram_addr), val);
26
bool is_write,
26
- break;
27
bool is_mmio,
27
- case 4:
28
- AddressSpace **target_as)
28
- stl_p(qemu_map_ram_ptr(NULL, ram_addr), val);
29
+ AddressSpace **target_as,
29
- break;
30
+ MemTxAttrs attrs)
30
- case 8:
31
{
31
- stq_p(qemu_map_ram_ptr(NULL, ram_addr), val);
32
MemoryRegionSection *section;
32
- break;
33
hwaddr page_mask = (hwaddr)-1;
33
- default:
34
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
34
- abort();
35
return address_space_translate_iommu(iommu_mr, xlat,
35
- }
36
plen_out, page_mask_out,
36
+ stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
37
is_write, is_mmio,
37
memory_notdirty_write_complete(&ndi);
38
- target_as);
38
}
39
+ target_as, attrs);
39
40
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
41
if (res) {
42
return res;
40
}
43
}
41
if (page_mask_out) {
44
- switch (len) {
42
/* Not behind an IOMMU, use default page size. */
45
- case 1:
43
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
46
- *data = ldub_p(buf);
44
47
- return MEMTX_OK;
45
section = address_space_translate_iommu(iommu_mr, xlat, plen,
48
- case 2:
46
NULL, is_write, true,
49
- *data = lduw_p(buf);
47
- &target_as);
50
- return MEMTX_OK;
48
+ &target_as, attrs);
51
- case 4:
49
return section.mr;
52
- *data = (uint32_t)ldl_p(buf);
53
- return MEMTX_OK;
54
- case 8:
55
- *data = ldq_p(buf);
56
- return MEMTX_OK;
57
- default:
58
- abort();
59
- }
60
+ *data = ldn_p(buf, len);
61
+ return MEMTX_OK;
50
}
62
}
51
63
64
static MemTxResult subpage_write(void *opaque, hwaddr addr,
65
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
66
" value %"PRIx64"\n",
67
__func__, subpage, len, addr, value);
68
#endif
69
- switch (len) {
70
- case 1:
71
- stb_p(buf, value);
72
- break;
73
- case 2:
74
- stw_p(buf, value);
75
- break;
76
- case 4:
77
- stl_p(buf, value);
78
- break;
79
- case 8:
80
- stq_p(buf, value);
81
- break;
82
- default:
83
- abort();
84
- }
85
+ stn_p(buf, len, value);
86
return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
87
}
88
89
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
90
l = memory_access_size(mr, l, addr1);
91
/* XXX: could force current_cpu to NULL to avoid
92
potential bugs */
93
- switch (l) {
94
- case 8:
95
- /* 64 bit write access */
96
- val = ldq_p(buf);
97
- result |= memory_region_dispatch_write(mr, addr1, val, 8,
98
- attrs);
99
- break;
100
- case 4:
101
- /* 32 bit write access */
102
- val = (uint32_t)ldl_p(buf);
103
- result |= memory_region_dispatch_write(mr, addr1, val, 4,
104
- attrs);
105
- break;
106
- case 2:
107
- /* 16 bit write access */
108
- val = lduw_p(buf);
109
- result |= memory_region_dispatch_write(mr, addr1, val, 2,
110
- attrs);
111
- break;
112
- case 1:
113
- /* 8 bit write access */
114
- val = ldub_p(buf);
115
- result |= memory_region_dispatch_write(mr, addr1, val, 1,
116
- attrs);
117
- break;
118
- default:
119
- abort();
120
- }
121
+ val = ldn_p(buf, l);
122
+ result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
123
} else {
124
/* RAM case */
125
ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
126
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
127
/* I/O case */
128
release_lock |= prepare_mmio_access(mr);
129
l = memory_access_size(mr, l, addr1);
130
- switch (l) {
131
- case 8:
132
- /* 64 bit read access */
133
- result |= memory_region_dispatch_read(mr, addr1, &val, 8,
134
- attrs);
135
- stq_p(buf, val);
136
- break;
137
- case 4:
138
- /* 32 bit read access */
139
- result |= memory_region_dispatch_read(mr, addr1, &val, 4,
140
- attrs);
141
- stl_p(buf, val);
142
- break;
143
- case 2:
144
- /* 16 bit read access */
145
- result |= memory_region_dispatch_read(mr, addr1, &val, 2,
146
- attrs);
147
- stw_p(buf, val);
148
- break;
149
- case 1:
150
- /* 8 bit read access */
151
- result |= memory_region_dispatch_read(mr, addr1, &val, 1,
152
- attrs);
153
- stb_p(buf, val);
154
- break;
155
- default:
156
- abort();
157
- }
158
+ result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
159
+ stn_p(buf, l, val);
160
} else {
161
/* RAM case */
162
ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
52
--
163
--
53
2.17.1
164
2.17.1
54
165
55
166
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Rearrange the arithmetic so that we are agnostic about the total size
4
of the vector and the size of the element. This will allow us to index
5
up to the 32nd byte and with 16-byte elements.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180613015641.5667-2-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-a64.h | 26 +++++++++++++++++---------
13
1 file changed, 17 insertions(+), 9 deletions(-)
14
15
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.h
18
+++ b/target/arm/translate-a64.h
19
@@ -XXX,XX +XXX,XX @@ static inline void assert_fp_access_checked(DisasContext *s)
20
static inline int vec_reg_offset(DisasContext *s, int regno,
21
int element, TCGMemOp size)
22
{
23
- int offs = 0;
24
+ int element_size = 1 << size;
25
+ int offs = element * element_size;
26
#ifdef HOST_WORDS_BIGENDIAN
27
/* This is complicated slightly because vfp.zregs[n].d[0] is
28
- * still the low half and vfp.zregs[n].d[1] the high half
29
- * of the 128 bit vector, even on big endian systems.
30
- * Calculate the offset assuming a fully bigendian 128 bits,
31
- * then XOR to account for the order of the two 64 bit halves.
32
+ * still the lowest and vfp.zregs[n].d[15] the highest of the
33
+ * 256 byte vector, even on big endian systems.
34
+ *
35
+ * Calculate the offset assuming fully little-endian,
36
+ * then XOR to account for the order of the 8-byte units.
37
+ *
38
+ * For 16 byte elements, the two 8 byte halves will not form a
39
+ * host int128 if the host is bigendian, since they're in the
40
+ * wrong order. However the only 16 byte operation we have is
41
+ * a move, so we can ignore this for the moment. More complicated
42
+ * operations will have to special case loading and storing from
43
+ * the zregs array.
44
*/
45
- offs += (16 - ((element + 1) * (1 << size)));
46
- offs ^= 8;
47
-#else
48
- offs += element * (1 << size);
49
+ if (element_size < 8) {
50
+ offs ^= 8 - element_size;
51
+ }
52
#endif
53
offs += offsetof(CPUARMState, vfp.zregs[regno]);
54
assert_fp_access_checked(s);
55
--
56
2.17.1
57
58
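The XOR trick above can be checked by hand; a standalone model of the
big-endian branch (the host-endian ifdef and register base offset are
dropped, and the element values are hypothetical):

    #include <assert.h>

    /* Compute the little-endian byte offset of an element, then XOR
     * within the containing 8-byte unit to account for big-endian
     * storage of each uint64_t. */
    static int vec_elem_offset_be(int element, int element_size)
    {
        int offs = element * element_size;
        if (element_size < 8) {
            offs ^= 8 - element_size;
        }
        return offs;
    }

    int main(void)
    {
        /* 16-bit elements: d[0] occupies bytes 0..7 stored big-endian,
         * so LE element 0 (the least significant halfword) sits at
         * byte offset 6. */
        assert(vec_elem_offset_be(0, 2) == 6);
        assert(vec_elem_offset_be(3, 2) == 0);
        assert(vec_elem_offset_be(4, 2) == 14);  /* into d[1] */
        return 0;
    }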
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 23 +++++++
9
target/arm/sve_helper.c | 114 +++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 27 ++++++++
12
4 files changed, 297 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
19
20
DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_insr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
23
+DEF_HELPER_FLAGS_4(sve_insr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
24
+DEF_HELPER_FLAGS_4(sve_insr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
25
+DEF_HELPER_FLAGS_4(sve_insr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
26
+
27
+DEF_HELPER_FLAGS_3(sve_rev_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve_rev_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve_rev_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_3(sve_rev_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+
41
+DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
44
+
45
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
46
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
47
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
48
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/sve_helper.c
51
+++ b/target/arm/sve_helper.c
52
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
53
memcpy(vd + n_siz, &tmp, n_ofs);
54
}
55
}
56
+
57
+#define DO_INSR(NAME, TYPE, H) \
58
+void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc) \
59
+{ \
60
+ intptr_t opr_sz = simd_oprsz(desc); \
61
+ swap_memmove(vd + sizeof(TYPE), vn, opr_sz - sizeof(TYPE)); \
62
+ *(TYPE *)(vd + H(0)) = val; \
63
+}
64
+
65
+DO_INSR(sve_insr_b, uint8_t, H1)
66
+DO_INSR(sve_insr_h, uint16_t, H1_2)
67
+DO_INSR(sve_insr_s, uint32_t, H1_4)
68
+DO_INSR(sve_insr_d, uint64_t, )
69
+
70
+#undef DO_INSR
71
+
72
+void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
73
+{
74
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
75
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
76
+ uint64_t f = *(uint64_t *)(vn + i);
77
+ uint64_t b = *(uint64_t *)(vn + j);
78
+ *(uint64_t *)(vd + i) = bswap64(b);
79
+ *(uint64_t *)(vd + j) = bswap64(f);
80
+ }
81
+}
82
+
83
+static inline uint64_t hswap64(uint64_t h)
84
+{
85
+ uint64_t m = 0x0000ffff0000ffffull;
86
+ h = rol64(h, 32);
87
+ return ((h & m) << 16) | ((h >> 16) & m);
88
+}
89
+
90
+void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
91
+{
92
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
93
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
94
+ uint64_t f = *(uint64_t *)(vn + i);
95
+ uint64_t b = *(uint64_t *)(vn + j);
96
+ *(uint64_t *)(vd + i) = hswap64(b);
97
+ *(uint64_t *)(vd + j) = hswap64(f);
98
+ }
99
+}
100
+
101
+void HELPER(sve_rev_s)(void *vd, void *vn, uint32_t desc)
102
+{
103
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
104
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
105
+ uint64_t f = *(uint64_t *)(vn + i);
106
+ uint64_t b = *(uint64_t *)(vn + j);
107
+ *(uint64_t *)(vd + i) = rol64(b, 32);
108
+ *(uint64_t *)(vd + j) = rol64(f, 32);
109
+ }
110
+}
111
+
112
+void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
113
+{
114
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
115
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
116
+ uint64_t f = *(uint64_t *)(vn + i);
117
+ uint64_t b = *(uint64_t *)(vn + j);
118
+ *(uint64_t *)(vd + i) = b;
119
+ *(uint64_t *)(vd + j) = f;
120
+ }
121
+}
122
+
123
+#define DO_TBL(NAME, TYPE, H) \
124
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
125
+{ \
126
+ intptr_t i, opr_sz = simd_oprsz(desc); \
127
+ uintptr_t elem = opr_sz / sizeof(TYPE); \
128
+ TYPE *d = vd, *n = vn, *m = vm; \
129
+ ARMVectorReg tmp; \
130
+ if (unlikely(vd == vn)) { \
131
+ n = memcpy(&tmp, vn, opr_sz); \
132
+ } \
133
+ for (i = 0; i < elem; i++) { \
134
+ TYPE j = m[H(i)]; \
135
+ d[H(i)] = j < elem ? n[H(j)] : 0; \
136
+ } \
137
+}
138
+
139
+DO_TBL(sve_tbl_b, uint8_t, H1)
140
+DO_TBL(sve_tbl_h, uint16_t, H2)
141
+DO_TBL(sve_tbl_s, uint32_t, H4)
142
+DO_TBL(sve_tbl_d, uint64_t, )
143
+
144
+#undef TBL
145
+
146
+#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
147
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
148
+{ \
149
+ intptr_t i, opr_sz = simd_oprsz(desc); \
150
+ TYPED *d = vd; \
151
+ TYPES *n = vn; \
152
+ ARMVectorReg tmp; \
153
+ if (unlikely(vn - vd < opr_sz)) { \
154
+ n = memcpy(&tmp, n, opr_sz / 2); \
155
+ } \
156
+ for (i = 0; i < opr_sz / sizeof(TYPED); i++) { \
157
+ d[HD(i)] = n[HS(i)]; \
158
+ } \
159
+}
160
+
161
+DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
162
+DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
163
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
164
+
165
+DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
166
+DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
167
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
168
+
169
+#undef DO_UNPK
170
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/target/arm/translate-sve.c
173
+++ b/target/arm/translate-sve.c
174
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
175
return true;
176
}
177
178
+/*
179
+ *** SVE Permute - Unpredicated Group
180
+ */
181
+
182
+static bool trans_DUP_s(DisasContext *s, arg_DUP_s *a, uint32_t insn)
183
+{
184
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_dup_i64(a->esz, vec_full_reg_offset(s, a->rd),
+                             vsz, vsz, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a, uint32_t insn)
+{
+    if ((a->imm & 0x1f) == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        unsigned dofs = vec_full_reg_offset(s, a->rd);
+        unsigned esz, index;
+
+        esz = ctz32(a->imm);
+        index = a->imm >> (esz + 1);
+
+        if ((index << esz) < vsz) {
+            unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
+            tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
+        } else {
+            tcg_gen_gvec_dup64i(dofs, vsz, vsz, 0);
+        }
+    }
+    return true;
+}
+
+static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
+{
+    typedef void gen_insr(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
+    static gen_insr * const fns[4] = {
+        gen_helper_sve_insr_b, gen_helper_sve_insr_h,
+        gen_helper_sve_insr_s, gen_helper_sve_insr_d,
+    };
+    unsigned vsz = vec_full_reg_size(s);
+    TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    TCGv_ptr t_zd = tcg_temp_new_ptr();
+    TCGv_ptr t_zn = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
+
+    fns[a->esz](t_zd, t_zn, val, desc);
+
+    tcg_temp_free_ptr(t_zd);
+    tcg_temp_free_ptr(t_zn);
+    tcg_temp_free_i32(desc);
+}
+
+static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 t = tcg_temp_new_i64();
+        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
+        do_insr_i64(s, a, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
+static bool trans_INSR_r(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_insr_i64(s, a, cpu_reg(s, a->rm));
+    }
+    return true;
+}
+
+static bool trans_REV_v(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4] = {
+        gen_helper_sve_rev_b, gen_helper_sve_rev_h,
+        gen_helper_sve_rev_s, gen_helper_sve_rev_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_TBL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_tbl_b, gen_helper_sve_tbl_h,
+        gen_helper_sve_tbl_s, gen_helper_sve_tbl_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_sve_sunpk_h, gen_helper_sve_uunpk_h },
+        { gen_helper_sve_sunpk_s, gen_helper_sve_uunpk_s },
+        { gen_helper_sve_sunpk_d, gen_helper_sve_uunpk_d },
+    };
+
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn)
+                           + (a->h ? vsz / 2 : 0),
+                           vsz, vsz, 0, fns[a->esz][a->u]);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@

 %imm4_16_p1     16:4 !function=plus1
 %imm6_22_5      22:1 5:5
+%imm7_22_16     22:2 16:5
 %imm8_16_10     16:5 10:3
 %imm9_16_10     16:s6 10:3

@@ -XXX,XX +XXX,XX @@

 # Three operand, vector element size
 @rd_rn_rm       ........ esz:2 . rm:5 ... ... rn:5 rd:5        &rrr_esz
+@rdn_rm         ........ esz:2 ...... ...... rm:5 rd:5 \
+                &rrr_esz rn=%reg_movprfx

 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm   ........ ... rm:5 .... imm:2 rn:5 rd:5         &rrri

@@ -XXX,XX +XXX,XX @@ CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
 EXT             00000101 001 ..... 000 ... rm:5 rd:5 \
                 &rrri rn=%reg_movprfx imm=%imm8_16_10

+### SVE Permute - Unpredicated Group
+
+# SVE broadcast general register
+DUP_s           00000101 .. 1 00000 001110 ..... .....         @rd_rn
+
+# SVE broadcast indexed element
+DUP_x           00000101 .. 1 ..... 001000 rn:5 rd:5 \
+                &rri imm=%imm7_22_16
+
+# SVE insert SIMD&FP scalar register
+INSR_f          00000101 .. 1 10100 001110 ..... .....         @rdn_rm
+
+# SVE insert general register
+INSR_r          00000101 .. 1 00100 001110 ..... .....         @rdn_rm
+
+# SVE reverse vector elements
+REV_v           00000101 .. 1 11000 001110 ..... .....         @rd_rn
+
+# SVE vector table lookup
+TBL             00000101 .. 1 ..... 001100 ..... .....         @rd_rn_rm
+
+# SVE unpack vector elements
+UNPK            00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
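A note on the DUP_x decode above: the element size and index travel in one 7-bit immediate, and trans_DUP_x splits them by finding the lowest set bit. A minimal standalone sketch of that split in plain C, with a made-up example encoding (illustrative only, not QEMU code):

#include <stdio.h>
#include <stdint.h>

/* Count trailing zeros; caller must ensure x != 0. */
static unsigned ctz32(uint32_t x)
{
    unsigned n = 0;
    while (!(x & 1)) {
        x >>= 1;
        n++;
    }
    return n;
}

int main(void)
{
    uint32_t imm = 0x2c;                /* hypothetical example encoding */
    unsigned esz = ctz32(imm);          /* log2 of element size in bytes */
    unsigned index = imm >> (esz + 1);  /* element index within Zn */

    /* Prints: esz=2 (element = 4 bytes), index=5 */
    printf("esz=%u (element = %u bytes), index=%u\n",
           esz, 1u << esz, index);
    return 0;
}

The all-zero low five bits form the unallocated case that trans_DUP_x rejects before this split is attempted.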
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   6 +
 target/arm/sve_helper.c    | 290 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 120 +++++++++++++++
 target/arm/sve.decode      |  18 +++
 4 files changed, 434 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_zip_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
 DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)

 #undef DO_UNPK
+
+/* Mask of bits included in the even numbered predicates of width esz.
+ * We also use this for expand_bits/compress_bits, and so extend the
+ * same pattern out to 16-bit units.
+ */
+static const uint64_t even_bit_esz_masks[5] = {
+    0x5555555555555555ull,
+    0x3333333333333333ull,
+    0x0f0f0f0f0f0f0f0full,
+    0x00ff00ff00ff00ffull,
+    0x0000ffff0000ffffull,
+};
+
+/* Zero-extend units of 2**N bits to units of 2**(N+1) bits.
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
+ * we call half_shuffle64; this algorithm is from Hacker's Delight,
+ * section 7-2 Shuffling Bits.
+ */
+static uint64_t expand_bits(uint64_t x, int n)
+{
+    int i;
+
+    x &= 0xffffffffu;
+    for (i = 4; i >= n; i--) {
+        int sh = 1 << i;
+        x = ((x << sh) | x) & even_bit_esz_masks[i];
+    }
+    return x;
+}
+
+/* Compress units of 2**(N+1) bits to units of 2**N bits.
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
+ * we call half_unshuffle64; this algorithm is from Hacker's Delight,
+ * section 7-2 Shuffling Bits, where it is called an inverse half shuffle.
+ */
+static uint64_t compress_bits(uint64_t x, int n)
+{
+    int i;
+
+    for (i = n; i <= 4; i++) {
+        int sh = 1 << i;
+        x &= even_bit_esz_masks[i];
+        x = (x >> sh) | x;
+    }
+    return x & 0xffffffffu;
+}
+
+void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+    uint64_t *d = vd;
+    intptr_t i;
+
+    if (oprsz <= 8) {
+        uint64_t nn = *(uint64_t *)vn;
+        uint64_t mm = *(uint64_t *)vm;
+        int half = 4 * oprsz;
+
+        nn = extract64(nn, high * half, half);
+        mm = extract64(mm, high * half, half);
+        nn = expand_bits(nn, esz);
+        mm = expand_bits(mm, esz);
+        d[0] = nn + (mm << (1 << esz));
+    } else {
+        ARMPredicateReg tmp_n, tmp_m;
+
+        /* We produce output faster than we consume input.
+           Therefore we must be mindful of possible overlap.  */
+        if ((vn - vd) < (uintptr_t)oprsz) {
+            vn = memcpy(&tmp_n, vn, oprsz);
+        }
+        if ((vm - vd) < (uintptr_t)oprsz) {
+            vm = memcpy(&tmp_m, vm, oprsz);
+        }
+        if (high) {
+            high = oprsz >> 1;
+        }
+
+        if ((high & 3) == 0) {
+            uint32_t *n = vn, *m = vm;
+            high >>= 2;
+
+            for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+                uint64_t nn = n[H4(high + i)];
+                uint64_t mm = m[H4(high + i)];
+
+                nn = expand_bits(nn, esz);
+                mm = expand_bits(mm, esz);
+                d[i] = nn + (mm << (1 << esz));
+            }
+        } else {
+            uint8_t *n = vn, *m = vm;
+            uint16_t *d16 = vd;
+
+            for (i = 0; i < oprsz / 2; i++) {
+                uint16_t nn = n[H1(high + i)];
+                uint16_t mm = m[H1(high + i)];
+
+                nn = expand_bits(nn, esz);
+                mm = expand_bits(mm, esz);
+                d16[H2(i)] = nn + (mm << (1 << esz));
+            }
+        }
+    }
+}
+
+void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint64_t l, h;
+    intptr_t i;
+
+    if (oprsz <= 8) {
+        l = compress_bits(n[0] >> odd, esz);
+        h = compress_bits(m[0] >> odd, esz);
+        d[0] = extract64(l + (h << (4 * oprsz)), 0, 8 * oprsz);
+    } else {
+        ARMPredicateReg tmp_m;
+        intptr_t oprsz_16 = oprsz / 16;
+
+        if ((vm - vd) < (uintptr_t)oprsz) {
+            m = memcpy(&tmp_m, vm, oprsz);
+        }
+
+        for (i = 0; i < oprsz_16; i++) {
+            l = n[2 * i + 0];
+            h = n[2 * i + 1];
+            l = compress_bits(l >> odd, esz);
+            h = compress_bits(h >> odd, esz);
+            d[i] = l + (h << 32);
+        }
+
+        /* For VL which is not a power of 2, the results from M do not
+           align nicely with the uint64_t for D.  Put the aligned results
+           from M into TMP_M and then copy it into place afterward.  */
+        if (oprsz & 15) {
+            d[i] = compress_bits(n[2 * i] >> odd, esz);
+
+            for (i = 0; i < oprsz_16; i++) {
+                l = m[2 * i + 0];
+                h = m[2 * i + 1];
+                l = compress_bits(l >> odd, esz);
+                h = compress_bits(h >> odd, esz);
+                tmp_m.p[i] = l + (h << 32);
+            }
+            tmp_m.p[i] = compress_bits(m[2 * i] >> odd, esz);
+
+            swap_memmove(vd + oprsz / 2, &tmp_m, oprsz / 2);
+        } else {
+            for (i = 0; i < oprsz_16; i++) {
+                l = m[2 * i + 0];
+                h = m[2 * i + 1];
+                l = compress_bits(l >> odd, esz);
+                h = compress_bits(h >> odd, esz);
+                d[oprsz_16 + i] = l + (h << 32);
+            }
+        }
+    }
+}
+
+void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint64_t mask;
+    int shr, shl;
+    intptr_t i;
+
+    shl = 1 << esz;
+    shr = 0;
+    mask = even_bit_esz_masks[esz];
+    if (odd) {
+        mask <<= shl;
+        shr = shl;
+        shl = 0;
+    }
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+        uint64_t nn = (n[i] & mask) >> shr;
+        uint64_t mm = (m[i] & mask) << shl;
+        d[i] = nn + mm;
+    }
+}
+
+/* Reverse units of 2**N bits.  */
+static uint64_t reverse_bits_64(uint64_t x, int n)
+{
+    int i, sh;
+
+    x = bswap64(x);
+    for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
+        uint64_t mask = even_bit_esz_masks[i];
+        x = ((x & mask) << sh) | ((x >> sh) & mask);
+    }
+    return x;
+}
+
+static uint8_t reverse_bits_8(uint8_t x, int n)
+{
+    static const uint8_t mask[3] = { 0x55, 0x33, 0x0f };
+    int i, sh;
+
+    for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
+        x = ((x & mask[i]) << sh) | ((x >> sh) & mask[i]);
+    }
+    return x;
+}
+
+void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t i, oprsz_2 = oprsz / 2;
+
+    if (oprsz <= 8) {
+        uint64_t l = *(uint64_t *)vn;
+        l = reverse_bits_64(l << (64 - 8 * oprsz), esz);
+        *(uint64_t *)vd = l;
+    } else if ((oprsz & 15) == 0) {
+        for (i = 0; i < oprsz_2; i += 8) {
+            intptr_t ih = oprsz - 8 - i;
+            uint64_t l = reverse_bits_64(*(uint64_t *)(vn + i), esz);
+            uint64_t h = reverse_bits_64(*(uint64_t *)(vn + ih), esz);
+            *(uint64_t *)(vd + i) = h;
+            *(uint64_t *)(vd + ih) = l;
+        }
+    } else {
+        for (i = 0; i < oprsz_2; i += 1) {
+            intptr_t il = H1(i);
+            intptr_t ih = H1(oprsz - 1 - i);
+            uint8_t l = reverse_bits_8(*(uint8_t *)(vn + il), esz);
+            uint8_t h = reverse_bits_8(*(uint8_t *)(vn + ih), esz);
+            *(uint8_t *)(vd + il) = h;
+            *(uint8_t *)(vd + ih) = l;
+        }
+    }
+}
+
+void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+    uint64_t *d = vd;
+    intptr_t i;
+
+    if (oprsz <= 8) {
+        uint64_t nn = *(uint64_t *)vn;
+        int half = 4 * oprsz;
+
+        nn = extract64(nn, high * half, half);
+        nn = expand_bits(nn, 0);
+        d[0] = nn;
+    } else {
+        ARMPredicateReg tmp_n;
+
+        /* We produce output faster than we consume input.
+           Therefore we must be mindful of possible overlap.  */
+        if ((vn - vd) < (uintptr_t)oprsz) {
+            vn = memcpy(&tmp_n, vn, oprsz);
+        }
+        if (high) {
+            high = oprsz >> 1;
+        }
+
+        if ((high & 3) == 0) {
+            uint32_t *n = vn;
+            high >>= 2;
+
+            for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+                uint64_t nn = n[H4(high + i)];
+                d[i] = expand_bits(nn, 0);
+            }
+        } else {
+            uint16_t *d16 = vd;
+            uint8_t *n = vn;
+
+            for (i = 0; i < oprsz / 2; i++) {
+                uint16_t nn = n[H1(high + i)];
+                d16[H2(i)] = expand_bits(nn, 0);
+            }
+        }
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
     return true;
 }

+/*
+ *** SVE Permute - Predicates Group
+ */
+
+static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
+                          gen_helper_gvec_3 *fn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+
+    /* Predicate sizes may be smaller and cannot use simd_desc.
+       We cannot round up, as we do elsewhere, because we need
+       the exact size for ZIP2 and REV.  We retain the style for
+       the other helpers for consistency.  */
+    TCGv_ptr t_d = tcg_temp_new_ptr();
+    TCGv_ptr t_n = tcg_temp_new_ptr();
+    TCGv_ptr t_m = tcg_temp_new_ptr();
+    TCGv_i32 t_desc;
+    int desc;
+
+    desc = vsz - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
+
+    tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(t_m, cpu_env, pred_full_reg_offset(s, a->rm));
+    t_desc = tcg_const_i32(desc);
+
+    fn(t_d, t_n, t_m, t_desc);
+
+    tcg_temp_free_ptr(t_d);
+    tcg_temp_free_ptr(t_n);
+    tcg_temp_free_ptr(t_m);
+    tcg_temp_free_i32(t_desc);
+    return true;
+}
+
+static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
+                          gen_helper_gvec_2 *fn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+    TCGv_ptr t_d = tcg_temp_new_ptr();
+    TCGv_ptr t_n = tcg_temp_new_ptr();
+    TCGv_i32 t_desc;
+    int desc;
+
+    tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
+
+    /* Predicate sizes may be smaller and cannot use simd_desc.
+       We cannot round up, as we do elsewhere, because we need
+       the exact size for ZIP2 and REV.  We retain the style for
+       the other helpers for consistency.  */
+
+    desc = vsz - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
+    t_desc = tcg_const_i32(desc);
+
+    fn(t_d, t_n, t_desc);
+
+    tcg_temp_free_i32(t_desc);
+    tcg_temp_free_ptr(t_d);
+    tcg_temp_free_ptr(t_n);
+    return true;
+}
+
+static bool trans_ZIP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 0, gen_helper_sve_zip_p);
+}
+
+static bool trans_ZIP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 1, gen_helper_sve_zip_p);
+}
+
+static bool trans_UZP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 0, gen_helper_sve_uzp_p);
+}
+
+static bool trans_UZP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 1, gen_helper_sve_uzp_p);
+}
+
+static bool trans_TRN1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 0, gen_helper_sve_trn_p);
+}
+
+static bool trans_TRN2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_perm_pred3(s, a, 1, gen_helper_sve_trn_p);
+}
+
+static bool trans_REV_p(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    return do_perm_pred2(s, a, 0, gen_helper_sve_rev_p);
+}
+
+static bool trans_PUNPKLO(DisasContext *s, arg_PUNPKLO *a, uint32_t insn)
+{
+    return do_perm_pred2(s, a, 0, gen_helper_sve_punpk_p);
+}
+
+static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
+{
+    return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@

 # Three operand, vector element size
 @rd_rn_rm       ........ esz:2 . rm:5 ... ... rn:5 rd:5        &rrr_esz
+@pd_pn_pm       ........ esz:2 .. rm:4 ....... rn:4 . rd:4     &rrr_esz
 @rdn_rm         ........ esz:2 ...... ...... rm:5 rd:5 \
                 &rrr_esz rn=%reg_movprfx

@@ -XXX,XX +XXX,XX @@ TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
 # SVE unpack vector elements
 UNPK            00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5

+### SVE Permute - Predicates Group
+
+# SVE permute predicate elements
+ZIP1_p          00000101 .. 10 .... 010 000 0 .... 0 ....      @pd_pn_pm
+ZIP2_p          00000101 .. 10 .... 010 001 0 .... 0 ....      @pd_pn_pm
+UZP1_p          00000101 .. 10 .... 010 010 0 .... 0 ....      @pd_pn_pm
+UZP2_p          00000101 .. 10 .... 010 011 0 .... 0 ....      @pd_pn_pm
+TRN1_p          00000101 .. 10 .... 010 100 0 .... 0 ....      @pd_pn_pm
+TRN2_p          00000101 .. 10 .... 010 101 0 .... 0 ....      @pd_pn_pm
+
+# SVE reverse predicate elements
+REV_p           00000101 .. 11 0100 010 000 0 .... 0 ....      @pd_pn
+
+# SVE unpack predicate elements
+PUNPKLO         00000101 00 11 0000 010 000 0 .... 0 ....      @pd_pn_e0
+PUNPKHI         00000101 00 11 0001 010 000 0 .... 0 ....      @pd_pn_e0
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
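The expand_bits/compress_bits pair in the patch above is the heart of the predicate zip/unzip helpers. A self-contained worked example of expand_bits, reusing the patch's own mask table and algorithm with a small driver (the driver values are illustrative only):

#include <stdio.h>
#include <stdint.h>

static const uint64_t even_bit_esz_masks[5] = {
    0x5555555555555555ull, 0x3333333333333333ull, 0x0f0f0f0f0f0f0f0full,
    0x00ff00ff00ff00ffull, 0x0000ffff0000ffffull,
};

/* Spread 2**n-bit units apart, inserting a zero unit after each
 * (Hacker's Delight, section 7-2, half shuffle). */
static uint64_t expand_bits(uint64_t x, int n)
{
    x &= 0xffffffffu;
    for (int i = 4; i >= n; i--) {
        int sh = 1 << i;
        x = ((x << sh) | x) & even_bit_esz_masks[i];
    }
    return x;
}

int main(void)
{
    /* 0b1011 becomes 0b01000101: one zero bit after each input bit.
     * Prints 0x45. */
    printf("%#llx\n", (unsigned long long)expand_bits(0xb, 0));
    return 0;
}

This is exactly how sve_zip_p interleaves two predicates: expand both inputs, shift one by the element width, and add.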
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 15 ++++++++
 target/arm/sve_helper.c    | 72 ++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      | 10 +++++
 4 files changed, 172 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
         }
     }
 }
+
+#define DO_ZIP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)          \
+{                                                                       \
+    intptr_t oprsz = simd_oprsz(desc);                                  \
+    intptr_t i, oprsz_2 = oprsz / 2;                                    \
+    ARMVectorReg tmp_n, tmp_m;                                          \
+    /* We produce output faster than we consume input.                  \
+       Therefore we must be mindful of possible overlap.  */            \
+    if (unlikely((vn - vd) < (uintptr_t)oprsz)) {                       \
+        vn = memcpy(&tmp_n, vn, oprsz_2);                               \
+    }                                                                   \
+    if (unlikely((vm - vd) < (uintptr_t)oprsz)) {                       \
+        vm = memcpy(&tmp_m, vm, oprsz_2);                               \
+    }                                                                   \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                       \
+        *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i));            \
+        *(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
+    }                                                                   \
+}
+
+DO_ZIP(sve_zip_b, uint8_t, H1)
+DO_ZIP(sve_zip_h, uint16_t, H1_2)
+DO_ZIP(sve_zip_s, uint32_t, H1_4)
+DO_ZIP(sve_zip_d, uint64_t, )
+
+#define DO_UZP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)          \
+{                                                                       \
+    intptr_t oprsz = simd_oprsz(desc);                                  \
+    intptr_t oprsz_2 = oprsz / 2;                                       \
+    intptr_t odd_ofs = simd_data(desc);                                 \
+    intptr_t i;                                                         \
+    ARMVectorReg tmp_m;                                                 \
+    if (unlikely((vm - vd) < (uintptr_t)oprsz)) {                       \
+        vm = memcpy(&tmp_m, vm, oprsz);                                 \
+    }                                                                   \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                       \
+        *(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs));      \
+    }                                                                   \
+    for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                       \
+        *(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
+    }                                                                   \
+}
+
+DO_UZP(sve_uzp_b, uint8_t, H1)
+DO_UZP(sve_uzp_h, uint16_t, H1_2)
+DO_UZP(sve_uzp_s, uint32_t, H1_4)
+DO_UZP(sve_uzp_d, uint64_t, )
+
+#define DO_TRN(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)          \
+{                                                                       \
+    intptr_t oprsz = simd_oprsz(desc);                                  \
+    intptr_t odd_ofs = simd_data(desc);                                 \
+    intptr_t i;                                                         \
+    for (i = 0; i < oprsz; i += 2 * sizeof(TYPE)) {                     \
+        TYPE ae = *(TYPE *)(vn + H(i + odd_ofs));                       \
+        TYPE be = *(TYPE *)(vm + H(i + odd_ofs));                       \
+        *(TYPE *)(vd + H(i + 0)) = ae;                                  \
+        *(TYPE *)(vd + H(i + sizeof(TYPE))) = be;                       \
+    }                                                                   \
+}
+
+DO_TRN(sve_trn_b, uint8_t, H1)
+DO_TRN(sve_trn_h, uint16_t, H1_2)
+DO_TRN(sve_trn_s, uint32_t, H1_4)
+DO_TRN(sve_trn_d, uint64_t, )
+
+#undef DO_ZIP
+#undef DO_UZP
+#undef DO_TRN
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
     return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
 }

+/*
+ *** SVE Permute - Interleaving Group
+ */
+
+static bool do_zip(DisasContext *s, arg_rrr_esz *a, bool high)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_zip_b, gen_helper_sve_zip_h,
+        gen_helper_sve_zip_s, gen_helper_sve_zip_d,
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        unsigned high_ofs = high ? vsz / 2 : 0;
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn) + high_ofs,
+                           vec_full_reg_offset(s, a->rm) + high_ofs,
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool do_zzz_data_ool(DisasContext *s, arg_rrr_esz *a, int data,
+                            gen_helper_gvec_3 *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, data, fn);
+    }
+    return true;
+}
+
+static bool trans_ZIP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zip(s, a, false);
+}
+
+static bool trans_ZIP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zip(s, a, true);
+}
+
+static gen_helper_gvec_3 * const uzp_fns[4] = {
+    gen_helper_sve_uzp_b, gen_helper_sve_uzp_h,
+    gen_helper_sve_uzp_s, gen_helper_sve_uzp_d,
+};
+
+static bool trans_UZP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 0, uzp_fns[a->esz]);
+}
+
+static bool trans_UZP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 1 << a->esz, uzp_fns[a->esz]);
+}
+
+static gen_helper_gvec_3 * const trn_fns[4] = {
+    gen_helper_sve_trn_b, gen_helper_sve_trn_h,
+    gen_helper_sve_trn_s, gen_helper_sve_trn_d,
+};
+
+static bool trans_TRN1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 0, trn_fns[a->esz]);
+}
+
+static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
 PUNPKLO         00000101 00 11 0000 010 000 0 .... 0 ....      @pd_pn_e0
 PUNPKHI         00000101 00 11 0001 010 000 0 .... 0 ....      @pd_pn_e0

+### SVE Permute - Interleaving Group
+
+# SVE permute vector elements
+ZIP1_z          00000101 .. 1 ..... 011 000 ..... .....        @rd_rn_rm
+ZIP2_z          00000101 .. 1 ..... 011 001 ..... .....        @rd_rn_rm
+UZP1_z          00000101 .. 1 ..... 011 010 ..... .....        @rd_rn_rm
+UZP2_z          00000101 .. 1 ..... 011 011 ..... .....        @rd_rn_rm
+TRN1_z          00000101 .. 1 ..... 011 100 ..... .....        @rd_rn_rm
+TRN2_z          00000101 .. 1 ..... 011 101 ..... .....        @rd_rn_rm
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
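For intuition, ZIP1/ZIP2 interleave the low (or high) halves of the two source vectors. A standalone sketch of those semantics for byte elements, ignoring the in-place overlap handling the real helpers do (illustrative only, not QEMU code):

#include <stdio.h>
#include <stdint.h>

/* Interleave one half of n and m into d; high selects which half. */
static void zip(uint8_t *d, const uint8_t *n, const uint8_t *m,
                int oprsz, int high)
{
    int half = oprsz / 2;
    for (int i = 0; i < half; i++) {
        d[2 * i + 0] = n[high * half + i];
        d[2 * i + 1] = m[high * half + i];
    }
}

int main(void)
{
    uint8_t n[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    uint8_t m[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    uint8_t d[8];

    zip(d, n, m, 8, 0);             /* ZIP1: 0 10 1 11 2 12 3 13 */
    for (int i = 0; i < 8; i++) {
        printf("%d ", d[i]);
    }
    printf("\n");
    return 0;
}

In the patch the same effect is had more cheaply: do_zip adds vsz/2 to the source offsets for ZIP2, so one helper serves both insns.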
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  3 +++
 target/arm/sve_helper.c    | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 12 ++++++++++++
 target/arm/sve.decode      |  6 ++++++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_TRN(sve_trn_d, uint64_t, )
 #undef DO_ZIP
 #undef DO_UZP
 #undef DO_TRN
+
+void HELPER(sve_compact_s)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc) / 4;
+    uint32_t *d = vd, *n = vn;
+    uint8_t *pg = vg;
+
+    for (i = j = 0; i < opr_sz; i++) {
+        if (pg[H1(i / 2)] & (i & 1 ? 0x10 : 0x01)) {
+            d[H4(j)] = n[H4(i)];
+            j++;
+        }
+    }
+    for (; j < opr_sz; j++) {
+        d[H4(j)] = 0;
+    }
+}
+
+void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn;
+    uint8_t *pg = vg;
+
+    for (i = j = 0; i < opr_sz; i++) {
+        if (pg[H1(i)] & 1) {
+            d[j] = n[i];
+            j++;
+        }
+    }
+    for (; j < opr_sz; j++) {
+        d[j] = 0;
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
     return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
 }

+/*
+ *** SVE Permute Vector - Predicated Group
+ */
+
+static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
 TRN1_z          00000101 .. 1 ..... 011 100 ..... .....        @rd_rn_rm
 TRN2_z          00000101 .. 1 ..... 011 101 ..... .....        @rd_rn_rm

+### SVE Permute - Predicated Group
+
+# SVE compress active elements
+# Note esz >= 2
+COMPACT         00000101 .. 100001 100 ... ..... .....         @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
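COMPACT packs the active elements toward element 0 and zero-fills the tail, which is what both helpers above do. A standalone sketch of those semantics (illustrative only; the real helper reads predicate bits out of the P register bytes rather than a bool array):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static void compact(uint32_t *d, const uint32_t *n, const bool *active,
                    int nelem)
{
    int j = 0;
    for (int i = 0; i < nelem; i++) {
        if (active[i]) {
            d[j++] = n[i];      /* pack active elements down */
        }
    }
    while (j < nelem) {
        d[j++] = 0;             /* zero the remainder */
    }
}

int main(void)
{
    uint32_t n[4] = {100, 200, 300, 400};
    bool pg[4] = {true, false, true, false};
    uint32_t d[4];

    compact(d, n, pg, 4);       /* -> 100 300 0 0 */
    for (int i = 0; i < 4; i++) {
        printf("%u ", d[i]);
    }
    printf("\n");
    return 0;
}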
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   2 +
 target/arm/sve_helper.c    |  12 ++
 target/arm/translate-sve.c | 328 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  20 +++
 4 files changed, 362 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)

 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
         d[j] = 0;
     }
 }
+
+/* Similar to the ARM LastActiveElement pseudocode function, except the
+ * result is multiplied by the element size.  This includes the not found
+ * indication; e.g. not found for esz=3 is -8.
+ */
+int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+
+    return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_zpz_ool(s, a, fns[a->esz]);
 }

+/* Call the helper that computes the ARM LastActiveElement pseudocode
+ * function, scaled by the element size.  This includes the not found
+ * indication; e.g. not found for esz=3 is -8.
+ */
+static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
+{
+    /* Predicate sizes may be smaller and cannot use simd_desc.  We cannot
+     * round up, as we do elsewhere, because we need the exact size.
+     */
+    TCGv_ptr t_p = tcg_temp_new_ptr();
+    TCGv_i32 t_desc;
+    unsigned vsz = pred_full_reg_size(s);
+    unsigned desc;
+
+    desc = vsz - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+    tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
+    t_desc = tcg_const_i32(desc);
+
+    gen_helper_sve_last_active_element(ret, t_p, t_desc);
+
+    tcg_temp_free_i32(t_desc);
+    tcg_temp_free_ptr(t_p);
+}
+
+/* Increment LAST to the offset of the next element in the vector,
+ * wrapping around to 0.
+ */
+static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+    unsigned vsz = vec_full_reg_size(s);
+
+    tcg_gen_addi_i32(last, last, 1 << esz);
+    if (is_power_of_2(vsz)) {
+        tcg_gen_andi_i32(last, last, vsz - 1);
+    } else {
+        TCGv_i32 max = tcg_const_i32(vsz);
+        TCGv_i32 zero = tcg_const_i32(0);
+        tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
+        tcg_temp_free_i32(max);
+        tcg_temp_free_i32(zero);
+    }
+}
+
+/* If LAST < 0, set LAST to the offset of the last element in the vector.  */
+static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+    unsigned vsz = vec_full_reg_size(s);
+
+    if (is_power_of_2(vsz)) {
+        tcg_gen_andi_i32(last, last, vsz - 1);
+    } else {
+        TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
+        TCGv_i32 zero = tcg_const_i32(0);
+        tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
+        tcg_temp_free_i32(max);
+        tcg_temp_free_i32(zero);
+    }
+}
+
+/* Load an unsigned element of ESZ from BASE+OFS.  */
+static TCGv_i64 load_esz(TCGv_ptr base, int ofs, int esz)
+{
+    TCGv_i64 r = tcg_temp_new_i64();
+
+    switch (esz) {
+    case 0:
+        tcg_gen_ld8u_i64(r, base, ofs);
+        break;
+    case 1:
+        tcg_gen_ld16u_i64(r, base, ofs);
+        break;
+    case 2:
+        tcg_gen_ld32u_i64(r, base, ofs);
+        break;
+    case 3:
+        tcg_gen_ld_i64(r, base, ofs);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    return r;
+}
+
+/* Load an unsigned element of ESZ from RM[LAST].  */
+static TCGv_i64 load_last_active(DisasContext *s, TCGv_i32 last,
+                                 int rm, int esz)
+{
+    TCGv_ptr p = tcg_temp_new_ptr();
+    TCGv_i64 r;
+
+    /* Convert offset into vector into offset into ENV.
+     * The final adjustment for the vector register base
+     * is added via constant offset to the load.
+     */
+#ifdef HOST_WORDS_BIGENDIAN
+    /* Adjust for element ordering.  See vec_reg_offset.  */
+    if (esz < 3) {
+        tcg_gen_xori_i32(last, last, 8 - (1 << esz));
+    }
+#endif
+    tcg_gen_ext_i32_ptr(p, last);
+    tcg_gen_add_ptr(p, p, cpu_env);
+
+    r = load_esz(p, vec_full_reg_offset(s, rm), esz);
+    tcg_temp_free_ptr(p);
+
+    return r;
+}
+
+/* Compute CLAST for a Zreg.  */
+static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
+{
+    TCGv_i32 last;
+    TCGLabel *over;
+    TCGv_i64 ele;
+    unsigned vsz, esz = a->esz;
+
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    last = tcg_temp_local_new_i32();
+    over = gen_new_label();
+
+    find_last_active(s, last, esz, a->pg);
+
+    /* There is of course no movcond for a 2048-bit vector,
+     * so we must branch over the actual store.
+     */
+    tcg_gen_brcondi_i32(TCG_COND_LT, last, 0, over);
+
+    if (!before) {
+        incr_last_active(s, last, esz);
+    }
+
+    ele = load_last_active(s, last, a->rm, esz);
+    tcg_temp_free_i32(last);
+
+    vsz = vec_full_reg_size(s);
+    tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd), vsz, vsz, ele);
+    tcg_temp_free_i64(ele);
+
+    /* If this insn used MOVPRFX, we may need a second move.  */
+    if (a->rd != a->rn) {
+        TCGLabel *done = gen_new_label();
+        tcg_gen_br(done);
+
+        gen_set_label(over);
+        do_mov_z(s, a->rd, a->rn);
+
+        gen_set_label(done);
+    } else {
+        gen_set_label(over);
+    }
+    return true;
+}
+
+static bool trans_CLASTA_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    return do_clast_vector(s, a, false);
+}
+
+static bool trans_CLASTB_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    return do_clast_vector(s, a, true);
+}
+
+/* Compute CLAST for a scalar.  */
+static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
+                            bool before, TCGv_i64 reg_val)
+{
+    TCGv_i32 last = tcg_temp_new_i32();
+    TCGv_i64 ele, cmp, zero;
+
+    find_last_active(s, last, esz, pg);
+
+    /* Extend the original value of last prior to incrementing.  */
+    cmp = tcg_temp_new_i64();
+    tcg_gen_ext_i32_i64(cmp, last);
+
+    if (!before) {
+        incr_last_active(s, last, esz);
+    }
+
+    /* The conceit here is that while last < 0 indicates not found, after
+     * adjusting for cpu_env->vfp.zregs[rm], it is still a valid address
+     * from which we can load garbage.  We then discard the garbage with
+     * a conditional move.
+     */
+    ele = load_last_active(s, last, rm, esz);
+    tcg_temp_free_i32(last);
+
+    zero = tcg_const_i64(0);
+    tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
+
+    tcg_temp_free_i64(zero);
+    tcg_temp_free_i64(cmp);
+    tcg_temp_free_i64(ele);
+}
+
+/* Compute CLAST for a Vreg.  */
+static bool do_clast_fp(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        int esz = a->esz;
+        int ofs = vec_reg_offset(s, a->rd, 0, esz);
+        TCGv_i64 reg = load_esz(cpu_env, ofs, esz);
+
+        do_clast_scalar(s, esz, a->pg, a->rn, before, reg);
+        write_fp_dreg(s, a->rd, reg);
+        tcg_temp_free_i64(reg);
+    }
+    return true;
+}
+
+static bool trans_CLASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_fp(s, a, false);
+}
+
+static bool trans_CLASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_fp(s, a, true);
+}
+
+/* Compute CLAST for a Xreg.  */
+static bool do_clast_general(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    TCGv_i64 reg;
+
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    reg = cpu_reg(s, a->rd);
+    switch (a->esz) {
+    case 0:
+        tcg_gen_ext8u_i64(reg, reg);
+        break;
+    case 1:
+        tcg_gen_ext16u_i64(reg, reg);
+        break;
+    case 2:
+        tcg_gen_ext32u_i64(reg, reg);
+        break;
+    case 3:
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    do_clast_scalar(s, a->esz, a->pg, a->rn, before, reg);
+    return true;
+}
+
+static bool trans_CLASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_general(s, a, false);
+}
+
+static bool trans_CLASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_clast_general(s, a, true);
+}
+
+/* Compute LAST for a scalar.  */
+static TCGv_i64 do_last_scalar(DisasContext *s, int esz,
+                               int pg, int rm, bool before)
+{
+    TCGv_i32 last = tcg_temp_new_i32();
+    TCGv_i64 ret;
+
+    find_last_active(s, last, esz, pg);
+    if (before) {
+        wrap_last_active(s, last, esz);
+    } else {
+        incr_last_active(s, last, esz);
+    }
+
+    ret = load_last_active(s, last, rm, esz);
+    tcg_temp_free_i32(last);
+    return ret;
+}
+
+/* Compute LAST for a Vreg.  */
+static bool do_last_fp(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
+        write_fp_dreg(s, a->rd, val);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_LASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_fp(s, a, false);
+}
+
+static bool trans_LASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_fp(s, a, true);
+}
+
+/* Compute LAST for a Xreg.  */
+static bool do_last_general(DisasContext *s, arg_rpr_esz *a, bool before)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
+        tcg_gen_mov_i64(cpu_reg(s, a->rd), val);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_LASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_general(s, a, false);
+}
+
+static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_last_general(s, a, true);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
 # Note esz >= 2
 COMPACT         00000101 .. 100001 100 ... ..... .....         @rd_pg_rn

+# SVE conditionally broadcast element to vector
+CLASTA_z        00000101 .. 10100 0 100 ... ..... .....        @rdn_pg_rm
+CLASTB_z        00000101 .. 10100 1 100 ... ..... .....        @rdn_pg_rm
+
+# SVE conditionally copy element to SIMD&FP scalar
+CLASTA_v        00000101 .. 10101 0 100 ... ..... .....        @rd_pg_rn
+CLASTB_v        00000101 .. 10101 1 100 ... ..... .....        @rd_pg_rn
+
+# SVE conditionally copy element to general register
+CLASTA_r        00000101 .. 11000 0 101 ... ..... .....        @rd_pg_rn
+CLASTB_r        00000101 .. 11000 1 101 ... ..... .....        @rd_pg_rn
+
+# SVE copy element to SIMD&FP scalar register
+LASTA_v         00000101 .. 10001 0 100 ... ..... .....        @rd_pg_rn
+LASTB_v         00000101 .. 10001 1 100 ... ..... .....        @rd_pg_rn
+
+# SVE copy element to general register
+LASTA_r         00000101 .. 10000 0 101 ... ..... .....        @rd_pg_rn
+LASTB_r         00000101 .. 10000 1 101 ... ..... .....        @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
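The sve_last_active_element helper above scans the predicate for the highest active element, returning a negative value when none is set (the real helper additionally scales the result by the element size, so "not found" for esz=3 is -8). A simplified standalone sketch of the scan for byte elements (illustrative only, not QEMU code):

#include <stdio.h>

/* One predicate bit per byte of each element; bit 0 of the element's
 * first predicate byte marks the element active. */
static int last_active_element(const unsigned char *pg, int nelem,
                               int esz_bytes)
{
    for (int i = nelem - 1; i >= 0; i--) {
        if (pg[i * esz_bytes] & 1) {
            return i;
        }
    }
    return -1;                  /* no active element */
}

int main(void)
{
    unsigned char pg[8] = {1, 0, 1, 0, 0, 0, 0, 0};  /* 8 byte elements */
    printf("%d\n", last_active_element(pg, 8, 1));   /* prints 2 */
    return 0;
}

The CLASTA/CLASTB split in the translator then reduces to whether incr_last_active is applied before loading the element.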
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 19 +++++++++++++++++++
 target/arm/sve.decode      |  6 ++++++
 2 files changed, 25 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_last_general(s, a, true);
 }

+static bool trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
+        TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
+        do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
 LASTA_r         00000101 .. 10000 0 101 ... ..... .....        @rd_pg_rn
 LASTB_r         00000101 .. 10000 1 101 ... ..... .....        @rd_pg_rn

+# SVE copy element from SIMD&FP scalar register
+CPY_m_v         00000101 .. 100000 100 ... ..... .....         @rd_pg_rn
+
+# SVE copy element from general register to vector (predicated)
+CPY_m_r         00000101 .. 101000 101 ... ..... .....         @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
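Both trans_CPY_m_r and trans_CPY_m_v funnel into do_cpy_m, which is defined elsewhere in translate-sve.c and is not shown in this series. Based on the architectural semantics of merging CPY, a standalone sketch of the computation (an assumption about do_cpy_m's behaviour, not its actual code):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Merging predicated copy: active elements take the scalar value,
 * inactive elements keep the destination's old contents. */
static void cpy_m(uint64_t *d, uint64_t val, const bool *active, int nelem)
{
    for (int i = 0; i < nelem; i++) {
        if (active[i]) {
            d[i] = val;
        }
    }
}

int main(void)
{
    uint64_t d[4] = {1, 2, 3, 4};
    bool pg[4] = {false, true, false, true};

    cpy_m(d, 99, pg, 4);        /* -> 1 99 3 99 */
    for (int i = 0; i < 4; i++) {
        printf("%llu ", (unsigned long long)d[i]);
    }
    printf("\n");
    return 0;
}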
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 14 +++++++++++++
 target/arm/sve_helper.c    | 41 +++++++++++++++++++++++++++++++-------
 target/arm/translate-sve.c | 38 +++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  7 +++++++
 4 files changed, 93 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_revb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_s(uint8_t byte)
     return word[byte & 0x11];
 }

+/* Swap 16-bit words within a 32-bit word.  */
+static inline uint32_t hswap32(uint32_t h)
+{
+    return rol32(h, 16);
+}
+
+/* Swap 16-bit words within a 64-bit word.  */
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+/* Swap 32-bit words within a 64-bit word.  */
+static inline uint64_t wswap64(uint64_t h)
+{
+    return rol64(h, 32);
+}
+
 #define LOGICAL_PPPP(NAME, FUNC)                                        \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 {                                                                       \
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)

+DO_ZPZ(sve_revb_h, uint16_t, H1_2, bswap16)
+DO_ZPZ(sve_revb_s, uint32_t, H1_4, bswap32)
+DO_ZPZ_D(sve_revb_d, uint64_t, bswap64)
+
+DO_ZPZ(sve_revh_s, uint32_t, H1_4, hswap32)
+DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
+
+DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
+
+DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
+DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
+DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
+DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
+
 /* Three-operand expander, unpredicated, in which the third operand is "wide".
  */
 #define DO_ZZW(NAME, TYPE, TYPEW, H, OP)                                \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
     }
 }

-static inline uint64_t hswap64(uint64_t h)
-{
-    uint64_t m = 0x0000ffff0000ffffull;
-    h = rol64(h, 32);
-    return ((h & m) << 16) | ((h >> 16) & m);
-}
-
 void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
 {
     intptr_t i, j, opr_sz = simd_oprsz(desc);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return true;
 }

+static bool trans_REVB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_revb_h,
+        gen_helper_sve_revb_s,
+        gen_helper_sve_revb_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        NULL,
+        gen_helper_sve_revh_s,
+        gen_helper_sve_revh_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_revw_d : NULL);
+}
+
+static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_rbit_b,
+        gen_helper_sve_rbit_h,
+        gen_helper_sve_rbit_s,
+        gen_helper_sve_rbit_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
 # SVE copy element from general register to vector (predicated)
 CPY_m_r         00000101 .. 101000 101 ... ..... .....         @rd_pg_rn

+# SVE reverse within elements
+# Note esz >= operation size
+REVB            00000101 .. 1001 00 100 ... ..... .....        @rd_pg_rn
+REVH            00000101 .. 1001 01 100 ... ..... .....        @rd_pg_rn
+REVW            00000101 .. 1001 10 100 ... ..... .....        @rd_pg_rn
+RBIT            00000101 .. 1001 11 100 ... ..... .....        @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
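The hswap64 the patch moves (and the wswap64 it adds) are plain word swaps within a 64-bit unit. A standalone check of both, reusing the patch's definitions with a small driver (the driver value is arbitrary):

#include <stdio.h>
#include <stdint.h>

/* Rotate left; only called with n=32 here, so no n==0 edge case. */
static uint64_t rol64(uint64_t x, int n) { return (x << n) | (x >> (64 - n)); }

/* Swap 16-bit words within a 64-bit word. */
static uint64_t hswap64(uint64_t h)
{
    uint64_t m = 0x0000ffff0000ffffull;
    h = rol64(h, 32);
    return ((h & m) << 16) | ((h >> 16) & m);
}

/* Swap 32-bit words within a 64-bit word. */
static uint64_t wswap64(uint64_t h) { return rol64(h, 32); }

int main(void)
{
    uint64_t x = 0x0011223344556677ull;
    printf("%016llx\n", (unsigned long long)hswap64(x)); /* 6677445522330011 */
    printf("%016llx\n", (unsigned long long)wswap64(x)); /* 4455667700112233 */
    return 0;
}

Composed with bswap64 inside the DO_ZPZ expanders, these give REVH and REVW their per-element byte orderings.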
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  2 ++
 target/arm/sve_helper.c    | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 13 +++++++++++++
 target/arm/sve.decode      |  3 +++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)

     return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
 }
+
+void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
+{
+    intptr_t opr_sz = simd_oprsz(desc) / 8;
+    int esz = simd_data(desc);
+    uint64_t pg, first_g, last_g, len, mask = pred_esz_masks[esz];
+    intptr_t i, first_i, last_i;
+    ARMVectorReg tmp;
+
+    first_i = last_i = 0;
+    first_g = last_g = 0;
+
+    /* Find the extent of the active elements within VG.  */
+    for (i = QEMU_ALIGN_UP(opr_sz, 8) - 8; i >= 0; i -= 8) {
+        pg = *(uint64_t *)(vg + i) & mask;
+        if (pg) {
+            if (last_g == 0) {
+                last_g = pg;
+                last_i = i;
+            }
+            first_g = pg;
+            first_i = i;
+        }
+    }
+
+    len = 0;
+    if (first_g != 0) {
+        first_i = first_i * 8 + ctz64(first_g);
+        last_i = last_i * 8 + 63 - clz64(last_g);
+        len = last_i - first_i + (1 << esz);
+        if (vd == vm) {
+            vm = memcpy(&tmp, vm, opr_sz * 8);
+        }
+        swap_memmove(vd, vn + first_i, len);
+    }
+    swap_memmove(vd + len, vm, opr_sz * 8 - len);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return do_zpz_ool(s, a, fns[a->esz]);
 }

+static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           pred_full_reg_offset(s, a->pg),
+                           vsz, vsz, a->esz, gen_helper_sve_splice);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
 REVW            00000101 .. 1001 10 100 ... ..... .....        @rd_pg_rn
 RBIT            00000101 .. 1001 11 100 ... ..... .....        @rd_pg_rn

+# SVE vector splice (predicated)
+SPLICE          00000101 .. 101 100 100 ... ..... .....        @rdn_pg_rm
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
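sve_splice copies the extent from the first to the last active element of the first operand, then tops up the result from the start of the second. A standalone sketch of those semantics with precomputed first/last element indices (illustrative only; the real helper derives them from the predicate bits and works at byte granularity):

#include <stdio.h>
#include <stdint.h>

static void splice(uint32_t *d, const uint32_t *n, const uint32_t *m,
                   int first, int last, int nelem)
{
    int len = 0;
    if (first >= 0) {
        for (int i = first; i <= last; i++) {
            d[len++] = n[i];    /* active extent of N */
        }
    }
    for (int i = 0; len < nelem; i++) {
        d[len++] = m[i];        /* remainder from the start of M */
    }
}

int main(void)
{
    uint32_t n[4] = {1, 2, 3, 4}, m[4] = {9, 8, 7, 6}, d[4];

    splice(d, n, m, 1, 2, 4);   /* active extent [1,2] -> 2 3 9 8 */
    for (int i = 0; i < 4; i++) {
        printf("%u ", d[i]);
    }
    printf("\n");
    return 0;
}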
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  9 +++++++
 target/arm/sve_helper.c    | 55 ++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c |  2 ++
 target/arm/sve.decode      |  6 +++++
 4 files changed, 72 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(sve_sel_zpzz_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
     }
     swap_memmove(vd + len, vm, opr_sz * 8 - len);
 }
+
+void HELPER(sve_sel_zpzz_b)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_b(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_h)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_h(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_s)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        uint64_t pp = expand_pred_s(pg[H1(i)]);
+        d[i] = (nn & pp) | (mm & ~pp);
+    }
+}
+
+void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        uint64_t nn = n[i], mm = m[i];
+        d[i] = (pg[H1(i)] & 1 ? nn : mm);
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
     return do_zpzz_ool(s, a, fns[a->esz]);
 }

+DO_ZPZZ(SEL, sel)
+
 #undef DO_ZPZZ

 /*
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
                 &rprr_esz rn=%reg_movprfx
 @rdm_pg_rn      ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
                 &rprr_esz rm=%reg_movprfx
+@rd_pg4_rn_rm   ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5        &rprr_esz
119
120
# Three register operand, with governing predicate, vector element size
121
@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
122
@@ -XXX,XX +XXX,XX @@ RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
123
# SVE vector splice (predicated)
124
SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
125
126
+### SVE Select Vectors Group
127
+
128
+# SVE select vector elements (predicated)
129
+SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
130
+
131
### SVE Predicate Logical Operations Group
132
133
# SVE predicate logical operations
134
--
135
2.17.1
136
137
diff view generated by jsdifflib
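The sve_sel_zpzz_* helpers above lean on expand_pred_b() and friends to
widen one predicate bit per element into a full lane mask, so the select
reduces to three bitwise operations per 64-bit word. A minimal model of
that expansion for byte elements (illustrative only; QEMU's actual
expand_pred_b() may well be implemented differently, e.g. via a lookup
table):

#include <stdint.h>

/* Widen one predicate bit per byte element into a 64-bit lane mask. */
static uint64_t expand_pred_b_model(uint8_t byte)
{
    uint64_t r = 0;
    int i;

    for (i = 0; i < 8; i++) {
        if (byte & (1u << i)) {
            r |= 0xffull << (i * 8);    /* bit i set -> byte i all-ones */
        }
    }
    return r;
}

With pp = expand_pred_b_model(pg_byte), the expression
(nn & pp) | (mm & ~pp) picks the bytes of nn where the predicate bit is
set and the bytes of mm elsewhere, exactly as in sve_sel_zpzz_b.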
From: Richard Henderson <richard.henderson@linaro.org>

Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.

The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.

Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/helper-head.h |  2 +-
 target/arm/helper-a64.c    | 35 +++++++++--------
 target/arm/helper.c        | 80 +++++++++++++++++++-------------------
 3 files changed, 59 insertions(+), 58 deletions(-)

diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-head.h
+++ b/include/exec/helper-head.h
@@ -XXX,XX +XXX,XX @@
 #define dh_ctype_int int
 #define dh_ctype_i64 uint64_t
 #define dh_ctype_s64 int64_t
-#define dh_ctype_f16 float16
+#define dh_ctype_f16 uint32_t
 #define dh_ctype_f32 float32
 #define dh_ctype_f64 float64
 #define dh_ctype_ptr void *
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
     return flags;
 }
 
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
 }
 
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare(x, y, fp_status));
 }
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
 #define float64_three make_float64(0x4008000000000000ULL)
 #define float64_one_point_five make_float64(0x3FF8000000000000ULL)
 
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
     return float64_muladd(a, b, float64_two, 0, fpst);
 }
 
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
 }
 
 /* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
     uint16_t val16, sbit;
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
 #define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
 
 #define ADVSIMD_HALFOP(name) \
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
 { \
     float_status *fpst = fpstp; \
     return float16_ ## name(a, b, fpst); \
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
 ADVSIMD_TWOHALFOP(mulx)
 
 /* fused multiply-accumulate */
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
+                                 void *fpstp)
 {
     float_status *fpst = fpstp;
     return float16_muladd(a, b, c, 0, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
 
 #define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
 
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare_quiet(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
                           compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_greater);
 }
 
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
                           compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
 }
 
 /* round to integral */
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
 {
     return float16_round_to_int(x, fp_status);
 }
 
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
 {
     int old_flags = get_float_exception_flags(fp_status), new_flags;
     float16 ret;
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
  * setting the mode appropriately before calling the helper.
  */
 
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
     return float16_to_int16(a, fpst);
 }
 
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
  * Square Root and Reciprocal square root
  */
 
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
 {
     float_status *s = fpstp;
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
 
 /* Integer to float and float to integer conversions */
 
-#define CONV_ITOF(name, fsz, sign) \
-float##fsz HELPER(name)(uint32_t x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
+#define CONV_ITOF(name, ftype, fsz, sign) \
+ftype HELPER(name)(uint32_t x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
 }
 
-#define CONV_FTOI(name, fsz, sign, round) \
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    if (float##fsz##_is_any_nan(x)) { \
-        float_raise(float_flag_invalid, fpst); \
-        return 0; \
-    } \
-    return float##fsz##_to_##sign##int32##round(x, fpst); \
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
+uint32_t HELPER(name)(ftype x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    if (float##fsz##_is_any_nan(x)) { \
+        float_raise(float_flag_invalid, fpst); \
+        return 0; \
+    } \
+    return float##fsz##_to_##sign##int32##round(x, fpst); \
 }
 
-#define FLOAT_CONVS(name, p, fsz, sign) \
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
+    CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
+    CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
+    CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
 
-FLOAT_CONVS(si, h, 16, )
-FLOAT_CONVS(si, s, 32, )
-FLOAT_CONVS(si, d, 64, )
-FLOAT_CONVS(ui, h, 16, u)
-FLOAT_CONVS(ui, s, 32, u)
-FLOAT_CONVS(ui, d, 64, u)
+FLOAT_CONVS(si, h, uint32_t, 16, )
+FLOAT_CONVS(si, s, float32, 32, )
+FLOAT_CONVS(si, d, float64, 64, )
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
+FLOAT_CONVS(ui, s, float32, 32, u)
+FLOAT_CONVS(ui, d, float64, 64, u)
 
 #undef CONV_ITOF
 #undef CONV_FTOI
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
     return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
 }
 
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
 }
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
     }
 }
 
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
 }
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
 }
 
 /* Half precision conversions. */
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
     g_assert_not_reached();
 }
 
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f16 = float16_squash_input_denormal(input, fpst);
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
     return extract64(estimate, 0, 8) << 44;
 }
 
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
 {
     float_status *s = fpstp;
     float16 f16 = float16_squash_input_denormal(input, s);
--
2.17.1


From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 115 +++++++++++++++++++++++
 target/arm/sve_helper.c    | 187 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c |  91 ++++++++++++++++++
 target/arm/sve.decode      |  24 +++++
 4 files changed, 417 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_d, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_b, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_h, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
+                   i32, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t iter_predtest_fwd(uint64_t d, uint64_t g, uint32_t flags)
     return flags;
 }
 
+/* This is an iterative function, called for each Pd and Pg word
+ * moving backward.
+ */
+static uint32_t iter_predtest_bwd(uint64_t d, uint64_t g, uint32_t flags)
+{
+    if (likely(g)) {
+        /* Compute C from first (i.e last) !(D & G).
+           Use bit 2 to signal first G bit seen.  */
+        if (!(flags & 4)) {
+            flags += 4 - 1; /* add bit 2, subtract C from PREDTEST_INIT */
+            flags |= (d & pow2floor(g)) == 0;
+        }
+
+        /* Accumulate Z from each D & G.  */
+        flags |= ((d & g) != 0) << 1;
+
+        /* Compute N from last (i.e first) D & G.  Replace previous.  */
+        flags = deposit32(flags, 31, 1, (d & (g & -g)) != 0);
+    }
+    return flags;
+}
+
 /* The same for a single word predicate.  */
 uint32_t HELPER(sve_predtest1)(uint64_t d, uint64_t g)
 {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
         d[i] = (pg[H1(i)] & 1 ? nn : mm);
     }
 }
 
+/* Two operand comparison controlled by a predicate.
+ * ??? It is very tempting to want to be able to expand this inline
+ * with x86 instructions, e.g.
+ *
+ *    vcmpeqw    zm, zn, %ymm0
+ *    vpmovmskb  %ymm0, %eax
+ *    and        $0x5555, %eax
+ *    and        pg, %eax
+ *
+ * or even aarch64, e.g.
+ *
+ *    // mask = 4000 1000 0400 0100 0040 0010 0004 0001
+ *    cmeq       v0.8h, zn, zm
+ *    and        v0.8h, v0.8h, mask
+ *    addv       h0, v0.8h
+ *    and        v0.8b, pg
+ *
+ * However, coming up with an abstraction that allows vector inputs and
+ * a scalar output, and also handles the byte-ordering of sub-uint64_t
+ * scalar outputs, is tricky.
+ */
+#define DO_CMP_PPZZ(NAME, TYPE, OP, H, MASK)                                 \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{                                                                            \
+    intptr_t opr_sz = simd_oprsz(desc);                                      \
+    uint32_t flags = PREDTEST_INIT;                                          \
+    intptr_t i = opr_sz;                                                     \
+    do {                                                                     \
+        uint64_t out = 0, pg;                                                \
+        do {                                                                 \
+            i -= sizeof(TYPE), out <<= sizeof(TYPE);                         \
+            TYPE nn = *(TYPE *)(vn + H(i));                                  \
+            TYPE mm = *(TYPE *)(vm + H(i));                                  \
+            out |= nn OP mm;                                                 \
+        } while (i & 63);                                                    \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                            \
+        out &= pg;                                                           \
+        *(uint64_t *)(vd + (i >> 3)) = out;                                  \
+        flags = iter_predtest_bwd(out, pg, flags);                           \
+    } while (i > 0);                                                         \
+    return flags;                                                            \
+}
+
+#define DO_CMP_PPZZ_B(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZZ_H(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZZ_S(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
+#define DO_CMP_PPZZ_D(NAME, TYPE, OP) \
+    DO_CMP_PPZZ(NAME, TYPE, OP,     , 0x0101010101010101ull)
+
+DO_CMP_PPZZ_B(sve_cmpeq_ppzz_b, uint8_t,  ==)
+DO_CMP_PPZZ_H(sve_cmpeq_ppzz_h, uint16_t, ==)
+DO_CMP_PPZZ_S(sve_cmpeq_ppzz_s, uint32_t, ==)
+DO_CMP_PPZZ_D(sve_cmpeq_ppzz_d, uint64_t, ==)
+
+DO_CMP_PPZZ_B(sve_cmpne_ppzz_b, uint8_t,  !=)
+DO_CMP_PPZZ_H(sve_cmpne_ppzz_h, uint16_t, !=)
+DO_CMP_PPZZ_S(sve_cmpne_ppzz_s, uint32_t, !=)
+DO_CMP_PPZZ_D(sve_cmpne_ppzz_d, uint64_t, !=)
+
+DO_CMP_PPZZ_B(sve_cmpgt_ppzz_b, int8_t,  >)
+DO_CMP_PPZZ_H(sve_cmpgt_ppzz_h, int16_t, >)
+DO_CMP_PPZZ_S(sve_cmpgt_ppzz_s, int32_t, >)
+DO_CMP_PPZZ_D(sve_cmpgt_ppzz_d, int64_t, >)
+
+DO_CMP_PPZZ_B(sve_cmpge_ppzz_b, int8_t,  >=)
+DO_CMP_PPZZ_H(sve_cmpge_ppzz_h, int16_t, >=)
+DO_CMP_PPZZ_S(sve_cmpge_ppzz_s, int32_t, >=)
+DO_CMP_PPZZ_D(sve_cmpge_ppzz_d, int64_t, >=)
+
+DO_CMP_PPZZ_B(sve_cmphi_ppzz_b, uint8_t,  >)
+DO_CMP_PPZZ_H(sve_cmphi_ppzz_h, uint16_t, >)
+DO_CMP_PPZZ_S(sve_cmphi_ppzz_s, uint32_t, >)
+DO_CMP_PPZZ_D(sve_cmphi_ppzz_d, uint64_t, >)
+
+DO_CMP_PPZZ_B(sve_cmphs_ppzz_b, uint8_t,  >=)
+DO_CMP_PPZZ_H(sve_cmphs_ppzz_h, uint16_t, >=)
+DO_CMP_PPZZ_S(sve_cmphs_ppzz_s, uint32_t, >=)
+DO_CMP_PPZZ_D(sve_cmphs_ppzz_d, uint64_t, >=)
+
+#undef DO_CMP_PPZZ_B
+#undef DO_CMP_PPZZ_H
+#undef DO_CMP_PPZZ_S
+#undef DO_CMP_PPZZ_D
+#undef DO_CMP_PPZZ
+
+/* Similar, but the second source is "wide".  */
+#define DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H, MASK)                          \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{                                                                            \
+    intptr_t opr_sz = simd_oprsz(desc);                                      \
+    uint32_t flags = PREDTEST_INIT;                                          \
+    intptr_t i = opr_sz;                                                     \
+    do {                                                                     \
+        uint64_t out = 0, pg;                                                \
+        do {                                                                 \
+            TYPEW mm = *(TYPEW *)(vm + i - 8);                               \
+            do {                                                             \
+                i -= sizeof(TYPE), out <<= sizeof(TYPE);                     \
+                TYPE nn = *(TYPE *)(vn + H(i));                              \
+                out |= nn OP mm;                                             \
+            } while (i & 7);                                                 \
+        } while (i & 63);                                                    \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                            \
+        out &= pg;                                                           \
+        *(uint64_t *)(vd + (i >> 3)) = out;                                  \
+        flags = iter_predtest_bwd(out, pg, flags);                           \
+    } while (i > 0);                                                         \
+    return flags;                                                            \
+}
+
+#define DO_CMP_PPZW_B(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZW_H(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
+    DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
+
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t,  uint64_t, ==)
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
+
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t,  uint64_t, !=)
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
+
+DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t,  int64_t, >)
+DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
+DO_CMP_PPZW_S(sve_cmpgt_ppzw_s, int32_t, int64_t, >)
+
+DO_CMP_PPZW_B(sve_cmpge_ppzw_b, int8_t,  int64_t, >=)
+DO_CMP_PPZW_H(sve_cmpge_ppzw_h, int16_t, int64_t, >=)
+DO_CMP_PPZW_S(sve_cmpge_ppzw_s, int32_t, int64_t, >=)
+
+DO_CMP_PPZW_B(sve_cmphi_ppzw_b, uint8_t,  uint64_t, >)
+DO_CMP_PPZW_H(sve_cmphi_ppzw_h, uint16_t, uint64_t, >)
+DO_CMP_PPZW_S(sve_cmphi_ppzw_s, uint32_t, uint64_t, >)
+
+DO_CMP_PPZW_B(sve_cmphs_ppzw_b, uint8_t,  uint64_t, >=)
+DO_CMP_PPZW_H(sve_cmphs_ppzw_h, uint16_t, uint64_t, >=)
+DO_CMP_PPZW_S(sve_cmphs_ppzw_s, uint32_t, uint64_t, >=)
+
+DO_CMP_PPZW_B(sve_cmplt_ppzw_b, int8_t,  int64_t, <)
+DO_CMP_PPZW_H(sve_cmplt_ppzw_h, int16_t, int64_t, <)
+DO_CMP_PPZW_S(sve_cmplt_ppzw_s, int32_t, int64_t, <)
+
+DO_CMP_PPZW_B(sve_cmple_ppzw_b, int8_t,  int64_t, <=)
+DO_CMP_PPZW_H(sve_cmple_ppzw_h, int16_t, int64_t, <=)
+DO_CMP_PPZW_S(sve_cmple_ppzw_s, int32_t, int64_t, <=)
+
+DO_CMP_PPZW_B(sve_cmplo_ppzw_b, uint8_t,  uint64_t, <)
+DO_CMP_PPZW_H(sve_cmplo_ppzw_h, uint16_t, uint64_t, <)
+DO_CMP_PPZW_S(sve_cmplo_ppzw_s, uint32_t, uint64_t, <)
+
+DO_CMP_PPZW_B(sve_cmpls_ppzw_b, uint8_t,  uint64_t, <=)
+DO_CMP_PPZW_H(sve_cmpls_ppzw_h, uint16_t, uint64_t, <=)
+DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
+
+#undef DO_CMP_PPZW_B
+#undef DO_CMP_PPZW_H
+#undef DO_CMP_PPZW_S
+#undef DO_CMP_PPZW
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "trace-tcg.h"
 #include "translate-a64.h"
 
+
+typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
+                                     TCGv_ptr, TCGv_ptr, TCGv_i32);
+
 /*
  * Helpers for extracting complex instruction fields.
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
     return true;
 }
 
+/*
+ *** SVE Integer Compare - Vectors Group
+ */
+
+static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
+                          gen_helper_gvec_flags_4 *gen_fn)
+{
+    TCGv_ptr pd, zn, zm, pg;
+    unsigned vsz;
+    TCGv_i32 t;
+
+    if (gen_fn == NULL) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    vsz = vec_full_reg_size(s);
+    t = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    pd = tcg_temp_new_ptr();
+    zn = tcg_temp_new_ptr();
+    zm = tcg_temp_new_ptr();
+    pg = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
+    tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    gen_fn(t, pd, zn, zm, pg, t);
+
+    tcg_temp_free_ptr(pd);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(zm);
+    tcg_temp_free_ptr(pg);
+
+    do_pred_flags(t);
+
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+#define DO_PPZZ(NAME, name)                                               \
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a,         \
+                                uint32_t insn)                            \
+{                                                                         \
+    static gen_helper_gvec_flags_4 * const fns[4] = {                     \
+        gen_helper_sve_##name##_ppzz_b, gen_helper_sve_##name##_ppzz_h,   \
+        gen_helper_sve_##name##_ppzz_s, gen_helper_sve_##name##_ppzz_d,   \
+    };                                                                    \
+    return do_ppzz_flags(s, a, fns[a->esz]);                              \
+}
+
+DO_PPZZ(CMPEQ, cmpeq)
+DO_PPZZ(CMPNE, cmpne)
+DO_PPZZ(CMPGT, cmpgt)
+DO_PPZZ(CMPGE, cmpge)
+DO_PPZZ(CMPHI, cmphi)
+DO_PPZZ(CMPHS, cmphs)
+
+#undef DO_PPZZ
+
+#define DO_PPZW(NAME, name)                                               \
+static bool trans_##NAME##_ppzw(DisasContext *s, arg_rprr_esz *a,         \
+                                uint32_t insn)                            \
+{                                                                         \
+    static gen_helper_gvec_flags_4 * const fns[4] = {                     \
+        gen_helper_sve_##name##_ppzw_b, gen_helper_sve_##name##_ppzw_h,   \
+        gen_helper_sve_##name##_ppzw_s, NULL                              \
+    };                                                                    \
+    return do_ppzz_flags(s, a, fns[a->esz]);                              \
+}
+
+DO_PPZW(CMPEQ, cmpeq)
+DO_PPZW(CMPNE, cmpne)
+DO_PPZW(CMPGT, cmpgt)
+DO_PPZW(CMPGE, cmpge)
+DO_PPZW(CMPHI, cmphi)
+DO_PPZW(CMPHS, cmphs)
+DO_PPZW(CMPLT, cmplt)
+DO_PPZW(CMPLE, cmple)
+DO_PPZW(CMPLO, cmplo)
+DO_PPZW(CMPLS, cmpls)
+
+#undef DO_PPZW
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 @rdm_pg_rn      ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
                 &rprr_esz rm=%reg_movprfx
 @rd_pg4_rn_rm   ........ esz:2 . rm:5  .. pg:4  rn:5 rd:5       &rprr_esz
+@pd_pg_rn_rm    ........ esz:2 . rm:5 ... pg:3 rn:5 . rd:4      &rprr_esz
 
 # Three register operand, with governing predicate, vector element size
 @rda_pg_rn_rm   ........ esz:2 . rm:5  ... pg:3 rn:5 rd:5 \
@@ -XXX,XX +XXX,XX @@ SPLICE          00000101 .. 101 100 100 ... ..... .....         @rdn_pg_rm
 # SVE select vector elements (predicated)
 SEL_zpzz        00000101 .. 1 ..... 11 .... ..... .....         @rd_pg4_rn_rm
 
+### SVE Integer Compare - Vectors Group
+
+# SVE integer compare_vectors
+CMPHS_ppzz      00100100 .. 0 ..... 000 ... ..... 0 ....        @pd_pg_rn_rm
+CMPHI_ppzz      00100100 .. 0 ..... 000 ... ..... 1 ....        @pd_pg_rn_rm
+CMPGE_ppzz      00100100 .. 0 ..... 100 ... ..... 0 ....        @pd_pg_rn_rm
+CMPGT_ppzz      00100100 .. 0 ..... 100 ... ..... 1 ....        @pd_pg_rn_rm
+CMPEQ_ppzz      00100100 .. 0 ..... 101 ... ..... 0 ....        @pd_pg_rn_rm
+CMPNE_ppzz      00100100 .. 0 ..... 101 ... ..... 1 ....        @pd_pg_rn_rm
+
+# SVE integer compare with wide elements
+# Note these require esz != 3.
+CMPEQ_ppzw      00100100 .. 0 ..... 001 ... ..... 0 ....        @pd_pg_rn_rm
+CMPNE_ppzw      00100100 .. 0 ..... 001 ... ..... 1 ....        @pd_pg_rn_rm
+CMPGE_ppzw      00100100 .. 0 ..... 010 ... ..... 0 ....        @pd_pg_rn_rm
+CMPGT_ppzw      00100100 .. 0 ..... 010 ... ..... 1 ....        @pd_pg_rn_rm
+CMPLT_ppzw      00100100 .. 0 ..... 011 ... ..... 0 ....        @pd_pg_rn_rm
+CMPLE_ppzw      00100100 .. 0 ..... 011 ... ..... 1 ....        @pd_pg_rn_rm
+CMPHS_ppzw      00100100 .. 0 ..... 110 ... ..... 0 ....        @pd_pg_rn_rm
+CMPHI_ppzw      00100100 .. 0 ..... 110 ... ..... 1 ....        @pd_pg_rn_rm
+CMPLO_ppzw      00100100 .. 0 ..... 111 ... ..... 0 ....        @pd_pg_rn_rm
+CMPLS_ppzw      00100100 .. 0 ..... 111 ... ..... 1 ....        @pd_pg_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1
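A minimal illustration of the calling-convention point made in the float16
commit message above (sketch only; what actually happens to the upper bits
of the return register is up to the host ABI):

#include <stdint.h>

/* Declared to return a 16-bit type: on x86-64 the caller must not
 * assume bits 31:16 of the register are clear; other ABIs zero- or
 * sign-extend.  Code that reads the full 32-bit register sees
 * ABI-dependent garbage.
 */
uint16_t ret_u16(uint32_t x)
{
    return x;
}

/* Declared to return uint32_t with an explicit narrowing cast: the
 * compiler must emit a real zero-extension, so all hosts agree.  This
 * is the effect of mapping dh_ctype_f16 to uint32_t.
 */
uint32_t ret_u32_of_u16(uint32_t x)
{
    return (uint16_t)x;
}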
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 44 +++++++++++++++++++
 target/arm/sve_helper.c    | 88 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 66 ++++++++++++++++++++++++++++
 target/arm/sve.decode      | 23 ++++++++++
 4 files changed, 221 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
                    i32, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
 #undef DO_CMP_PPZW_H
 #undef DO_CMP_PPZW_S
 #undef DO_CMP_PPZW
+
+/* Similar, but the second source is immediate.  */
+#define DO_CMP_PPZI(NAME, TYPE, OP, H, MASK)                         \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)   \
+{                                                                    \
+    intptr_t opr_sz = simd_oprsz(desc);                              \
+    uint32_t flags = PREDTEST_INIT;                                  \
+    TYPE mm = simd_data(desc);                                       \
+    intptr_t i = opr_sz;                                             \
+    do {                                                             \
+        uint64_t out = 0, pg;                                        \
+        do {                                                         \
+            i -= sizeof(TYPE), out <<= sizeof(TYPE);                 \
+            TYPE nn = *(TYPE *)(vn + H(i));                          \
+            out |= nn OP mm;                                         \
+        } while (i & 63);                                            \
+        pg = *(uint64_t *)(vg + (i >> 3)) & MASK;                    \
+        out &= pg;                                                   \
+        *(uint64_t *)(vd + (i >> 3)) = out;                          \
+        flags = iter_predtest_bwd(out, pg, flags);                   \
+    } while (i > 0);                                                 \
+    return flags;                                                    \
+}
+
+#define DO_CMP_PPZI_B(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1,   0xffffffffffffffffull)
+#define DO_CMP_PPZI_H(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
+#define DO_CMP_PPZI_S(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
+#define DO_CMP_PPZI_D(NAME, TYPE, OP) \
+    DO_CMP_PPZI(NAME, TYPE, OP,     , 0x0101010101010101ull)
+
+DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t,  ==)
+DO_CMP_PPZI_H(sve_cmpeq_ppzi_h, uint16_t, ==)
+DO_CMP_PPZI_S(sve_cmpeq_ppzi_s, uint32_t, ==)
+DO_CMP_PPZI_D(sve_cmpeq_ppzi_d, uint64_t, ==)
+
+DO_CMP_PPZI_B(sve_cmpne_ppzi_b, uint8_t,  !=)
+DO_CMP_PPZI_H(sve_cmpne_ppzi_h, uint16_t, !=)
+DO_CMP_PPZI_S(sve_cmpne_ppzi_s, uint32_t, !=)
+DO_CMP_PPZI_D(sve_cmpne_ppzi_d, uint64_t, !=)
+
+DO_CMP_PPZI_B(sve_cmpgt_ppzi_b, int8_t,  >)
+DO_CMP_PPZI_H(sve_cmpgt_ppzi_h, int16_t, >)
+DO_CMP_PPZI_S(sve_cmpgt_ppzi_s, int32_t, >)
+DO_CMP_PPZI_D(sve_cmpgt_ppzi_d, int64_t, >)
+
+DO_CMP_PPZI_B(sve_cmpge_ppzi_b, int8_t,  >=)
+DO_CMP_PPZI_H(sve_cmpge_ppzi_h, int16_t, >=)
+DO_CMP_PPZI_S(sve_cmpge_ppzi_s, int32_t, >=)
+DO_CMP_PPZI_D(sve_cmpge_ppzi_d, int64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmphi_ppzi_b, uint8_t,  >)
+DO_CMP_PPZI_H(sve_cmphi_ppzi_h, uint16_t, >)
+DO_CMP_PPZI_S(sve_cmphi_ppzi_s, uint32_t, >)
+DO_CMP_PPZI_D(sve_cmphi_ppzi_d, uint64_t, >)
+
+DO_CMP_PPZI_B(sve_cmphs_ppzi_b, uint8_t,  >=)
+DO_CMP_PPZI_H(sve_cmphs_ppzi_h, uint16_t, >=)
+DO_CMP_PPZI_S(sve_cmphs_ppzi_s, uint32_t, >=)
+DO_CMP_PPZI_D(sve_cmphs_ppzi_d, uint64_t, >=)
+
+DO_CMP_PPZI_B(sve_cmplt_ppzi_b, int8_t,  <)
+DO_CMP_PPZI_H(sve_cmplt_ppzi_h, int16_t, <)
+DO_CMP_PPZI_S(sve_cmplt_ppzi_s, int32_t, <)
+DO_CMP_PPZI_D(sve_cmplt_ppzi_d, int64_t, <)
+
+DO_CMP_PPZI_B(sve_cmple_ppzi_b, int8_t,  <=)
+DO_CMP_PPZI_H(sve_cmple_ppzi_h, int16_t, <=)
+DO_CMP_PPZI_S(sve_cmple_ppzi_s, int32_t, <=)
+DO_CMP_PPZI_D(sve_cmple_ppzi_d, int64_t, <=)
+
+DO_CMP_PPZI_B(sve_cmplo_ppzi_b, uint8_t,  <)
+DO_CMP_PPZI_H(sve_cmplo_ppzi_h, uint16_t, <)
+DO_CMP_PPZI_S(sve_cmplo_ppzi_s, uint32_t, <)
+DO_CMP_PPZI_D(sve_cmplo_ppzi_d, uint64_t, <)
+
+DO_CMP_PPZI_B(sve_cmpls_ppzi_b, uint8_t,  <=)
+DO_CMP_PPZI_H(sve_cmpls_ppzi_h, uint16_t, <=)
+DO_CMP_PPZI_S(sve_cmpls_ppzi_s, uint32_t, <=)
+DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
+
+#undef DO_CMP_PPZI_B
+#undef DO_CMP_PPZI_H
+#undef DO_CMP_PPZI_S
+#undef DO_CMP_PPZI_D
+#undef DO_CMP_PPZI
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"
 
 
+typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
+                                     TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_ptr, TCGv_i32);
 
@@ -XXX,XX +XXX,XX @@ DO_PPZW(CMPLS, cmpls)
 
 #undef DO_PPZW
 
+/*
+ *** SVE Integer Compare - Immediate Groups
+ */
+
+static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
+                          gen_helper_gvec_flags_3 *gen_fn)
+{
+    TCGv_ptr pd, zn, pg;
+    unsigned vsz;
+    TCGv_i32 t;
+
+    if (gen_fn == NULL) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    vsz = vec_full_reg_size(s);
+    t = tcg_const_i32(simd_desc(vsz, vsz, a->imm));
+    pd = tcg_temp_new_ptr();
+    zn = tcg_temp_new_ptr();
+    pg = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    gen_fn(t, pd, zn, pg, t);
+
+    tcg_temp_free_ptr(pd);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pg);
+
+    do_pred_flags(t);
+
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+#define DO_PPZI(NAME, name)                                               \
+static bool trans_##NAME##_ppzi(DisasContext *s, arg_rpri_esz *a,         \
+                                uint32_t insn)                            \
+{                                                                         \
+    static gen_helper_gvec_flags_3 * const fns[4] = {                     \
+        gen_helper_sve_##name##_ppzi_b, gen_helper_sve_##name##_ppzi_h,   \
+        gen_helper_sve_##name##_ppzi_s, gen_helper_sve_##name##_ppzi_d,   \
+    };                                                                    \
+    return do_ppzi_flags(s, a, fns[a->esz]);                              \
+}
+
+DO_PPZI(CMPEQ, cmpeq)
+DO_PPZI(CMPNE, cmpne)
+DO_PPZI(CMPGT, cmpgt)
+DO_PPZI(CMPGE, cmpge)
+DO_PPZI(CMPHI, cmphi)
+DO_PPZI(CMPHS, cmphs)
+DO_PPZI(CMPLT, cmplt)
+DO_PPZI(CMPLE, cmple)
+DO_PPZI(CMPLO, cmplo)
+DO_PPZI(CMPLS, cmpls)
+
+#undef DO_PPZI
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 @rdn_dbm        ........ .. .... dbm:13 rd:5 \
                 &rr_dbm rn=%reg_movprfx
 
+# Predicate output, vector and immediate input,
+# controlling predicate, element size.
+@pd_pg_rn_i7    ........ esz:2 . imm:7 . pg:3 rn:5 . rd:4       &rpri_esz
+@pd_pg_rn_i5    ........ esz:2 . imm:s5 ... pg:3 rn:5 . rd:4    &rpri_esz
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9       ........ ........ ...... rn:5 . rd:4 \
                 &rri imm=%imm9_16_10
@@ -XXX,XX +XXX,XX @@ CMPHI_ppzw      00100100 .. 0 ..... 110 ... ..... 1 ....        @pd_pg_rn_rm
 CMPLO_ppzw      00100100 .. 0 ..... 111 ... ..... 0 ....        @pd_pg_rn_rm
 CMPLS_ppzw      00100100 .. 0 ..... 111 ... ..... 1 ....        @pd_pg_rn_rm
 
+### SVE Integer Compare - Unsigned Immediate Group
+
+# SVE integer compare with unsigned immediate
+CMPHS_ppzi      00100100 .. 1 ....... 0 ... ..... 0 ....        @pd_pg_rn_i7
+CMPHI_ppzi      00100100 .. 1 ....... 0 ... ..... 1 ....        @pd_pg_rn_i7
+CMPLO_ppzi      00100100 .. 1 ....... 1 ... ..... 0 ....        @pd_pg_rn_i7
+CMPLS_ppzi      00100100 .. 1 ....... 1 ... ..... 1 ....        @pd_pg_rn_i7
+
+### SVE Integer Compare - Signed Immediate Group
+
+# SVE integer compare with signed immediate
+CMPGE_ppzi      00100101 .. 0 ..... 000 ... ..... 0 ....        @pd_pg_rn_i5
+CMPGT_ppzi      00100101 .. 0 ..... 000 ... ..... 1 ....        @pd_pg_rn_i5
+CMPLT_ppzi      00100101 .. 0 ..... 001 ... ..... 0 ....        @pd_pg_rn_i5
+CMPLE_ppzi      00100101 .. 0 ..... 001 ... ..... 1 ....        @pd_pg_rn_i5
+CMPEQ_ppzi      00100101 .. 0 ..... 100 ... ..... 0 ....        @pd_pg_rn_i5
+CMPNE_ppzi      00100101 .. 0 ..... 100 ... ..... 1 ....        @pd_pg_rn_i5
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1
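The immediate in these forms rides to the helper inside the 32-bit gvec
descriptor: the translator packs it with simd_desc(vsz, vsz, a->imm) and
the helper recovers it as simd_data(desc), sign-extended for the signed
5-bit encodings. A toy model of that round trip (the field layout below
is made up purely for illustration and is not QEMU's actual descriptor
layout; only the pack/unpack symmetry matters):

#include <stdint.h>

#define TOY_DATA_SHIFT 10    /* low bits hold a size field in this model */

static uint32_t toy_desc(uint32_t oprsz_units, int32_t data)
{
    return oprsz_units | ((uint32_t)data << TOY_DATA_SHIFT);
}

static int32_t toy_data(uint32_t desc)
{
    /* Arithmetic shift recovers the sign of a signed immediate. */
    return (int32_t)desc >> TOY_DATA_SHIFT;
}

Provided the immediate fits the field, toy_data(toy_desc(32, -3))
yields -3 again, mirroring how a signed CMPGE immediate survives the
trip from trans_CMPGE_ppzi through to the DO_CMP_PPZI helper.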
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 18 +++
9
target/arm/sve_helper.c | 248 +++++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 106 ++++++++++++++++
11
target/arm/sve.decode | 19 +++
12
4 files changed, 391 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve_brkpa, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve_brkpb, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve_brkpas, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve_brkpbs, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
27
+
28
+DEF_HELPER_FLAGS_4(sve_brka_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve_brkb_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve_brka_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve_brkb_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+
33
+DEF_HELPER_FLAGS_4(sve_brkas_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_brkbs_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_brkas_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
37
+
38
+DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
40
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/sve_helper.c
43
+++ b/target/arm/sve_helper.c
44
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
45
#undef DO_CMP_PPZI_S
46
#undef DO_CMP_PPZI_D
47
#undef DO_CMP_PPZI
48
+
49
+/* Similar to the ARM LastActive pseudocode function. */
50
+static bool last_active_pred(void *vd, void *vg, intptr_t oprsz)
+{
+    intptr_t i;
+
+    for (i = QEMU_ALIGN_UP(oprsz, 8) - 8; i >= 0; i -= 8) {
+        uint64_t pg = *(uint64_t *)(vg + i);
+        if (pg) {
+            return (pow2floor(pg) & *(uint64_t *)(vd + i)) != 0;
+        }
+    }
+    return 0;
+}
+
+/* Compute a mask into RETB that is true for all G, up to and including
+ * (if after) or excluding (if !after) the first G & N.
+ * Return true if BRK found.
+ */
+static bool compute_brk(uint64_t *retb, uint64_t n, uint64_t g,
+                        bool brk, bool after)
+{
+    uint64_t b;
+
+    if (brk) {
+        b = 0;
+    } else if ((g & n) == 0) {
+        /* For all G, no N are set; break not found.  */
+        b = g;
+    } else {
+        /* Break somewhere in N.  Locate it.  */
+        b = g & n;            /* guard true, pred true */
+        b = b & -b;           /* first such */
+        if (after) {
+            b = b | (b - 1);  /* break after same */
+        } else {
+            b = b - 1;        /* break before same */
+        }
+        brk = true;
+    }
+
+    *retb = b;
+    return brk;
+}
+
+/* Compute a zeroing BRK.  */
+static void compute_brk_z(uint64_t *d, uint64_t *n, uint64_t *g,
+                          intptr_t oprsz, bool after)
+{
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_b & this_g;
+    }
+}
+
+/* Likewise, but also compute flags.  */
+static uint32_t compute_brks_z(uint64_t *d, uint64_t *n, uint64_t *g,
+                               intptr_t oprsz, bool after)
+{
+    uint32_t flags = PREDTEST_INIT;
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_d, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_d = this_b & this_g;
+        flags = iter_predtest_fwd(this_d, this_g, flags);
+    }
+    return flags;
+}
+
+/* Compute a merging BRK.  */
+static void compute_brk_m(uint64_t *d, uint64_t *n, uint64_t *g,
+                          intptr_t oprsz, bool after)
+{
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t this_b, this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = (this_b & this_g) | (d[i] & ~this_g);
+    }
+}
+
+/* Likewise, but also compute flags.  */
+static uint32_t compute_brks_m(uint64_t *d, uint64_t *n, uint64_t *g,
+                               intptr_t oprsz, bool after)
+{
+    uint32_t flags = PREDTEST_INIT;
+    bool brk = false;
+    intptr_t i;
+
+    for (i = 0; i < oprsz / 8; ++i) {
+        uint64_t this_b, this_d = d[i], this_g = g[i];
+
+        brk = compute_brk(&this_b, n[i], this_g, brk, after);
+        d[i] = this_d = (this_b & this_g) | (this_d & ~this_g);
+        flags = iter_predtest_fwd(this_d, this_g, flags);
+    }
+    return flags;
+}
+
+static uint32_t do_zero(ARMPredicateReg *d, intptr_t oprsz)
+{
+    /* It is quicker to zero the whole predicate than loop on OPRSZ.
+     * The compiler should turn this into 4 64-bit integer stores.
+     */
+    memset(d, 0, sizeof(ARMPredicateReg));
+    return PREDTEST_INIT;
+}
+
+void HELPER(sve_brkpa)(void *vd, void *vn, void *vm, void *vg,
+                       uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        compute_brk_z(vd, vm, vg, oprsz, true);
+    } else {
+        do_zero(vd, oprsz);
+    }
+}
+
+uint32_t HELPER(sve_brkpas)(void *vd, void *vn, void *vm, void *vg,
+                            uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        return compute_brks_z(vd, vm, vg, oprsz, true);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
+
+void HELPER(sve_brkpb)(void *vd, void *vn, void *vm, void *vg,
+                       uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        compute_brk_z(vd, vm, vg, oprsz, false);
+    } else {
+        do_zero(vd, oprsz);
+    }
+}
+
+uint32_t HELPER(sve_brkpbs)(void *vd, void *vn, void *vm, void *vg,
+                            uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    if (last_active_pred(vn, vg, oprsz)) {
+        return compute_brks_z(vd, vm, vg, oprsz, false);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
+
+void HELPER(sve_brka_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_z(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_z(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_z(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_z(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brka_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_m(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_m(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    compute_brk_m(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    return compute_brks_m(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brkn)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+    if (!last_active_pred(vn, vg, oprsz)) {
+        do_zero(vd, oprsz);
+    }
+}
+
+/* As if PredTest(Ones(PL), D, esz).  */
+static uint32_t predtest_ones(ARMPredicateReg *d, intptr_t oprsz,
+                              uint64_t esz_mask)
+{
+    uint32_t flags = PREDTEST_INIT;
+    intptr_t i;
+
+    for (i = 0; i < oprsz / 8; i++) {
+        flags = iter_predtest_fwd(d->p[i], esz_mask, flags);
+    }
+    if (oprsz & 7) {
+        uint64_t mask = ~(-1ULL << (8 * (oprsz & 7)));
+        flags = iter_predtest_fwd(d->p[i], esz_mask & mask, flags);
+    }
+    return flags;
+}
+
+uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+    if (last_active_pred(vn, vg, oprsz)) {
+        return predtest_ones(vd, oprsz, -1);
+    } else {
+        return do_zero(vd, oprsz);
+    }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_PPZI(CMPLS, cmpls)

 #undef DO_PPZI

+/*
+ *** SVE Partition Break Group
+ */
+
+static bool do_brk3(DisasContext *s, arg_rprr_s *a,
+                    gen_helper_gvec_4 *fn, gen_helper_gvec_flags_4 *fn_s)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+
+    /* Predicate sizes may be smaller and cannot use simd_desc.  */
+    TCGv_ptr d = tcg_temp_new_ptr();
+    TCGv_ptr n = tcg_temp_new_ptr();
+    TCGv_ptr m = tcg_temp_new_ptr();
+    TCGv_ptr g = tcg_temp_new_ptr();
+    TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+    tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(m, cpu_env, pred_full_reg_offset(s, a->rm));
+    tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    if (a->s) {
+        fn_s(t, d, n, m, g, t);
+        do_pred_flags(t);
+    } else {
+        fn(d, n, m, g, t);
+    }
+    tcg_temp_free_ptr(d);
+    tcg_temp_free_ptr(n);
+    tcg_temp_free_ptr(m);
+    tcg_temp_free_ptr(g);
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+static bool do_brk2(DisasContext *s, arg_rpr_s *a,
+                    gen_helper_gvec_3 *fn, gen_helper_gvec_flags_3 *fn_s)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    unsigned vsz = pred_full_reg_size(s);
+
+    /* Predicate sizes may be smaller and cannot use simd_desc.  */
+    TCGv_ptr d = tcg_temp_new_ptr();
+    TCGv_ptr n = tcg_temp_new_ptr();
+    TCGv_ptr g = tcg_temp_new_ptr();
+    TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+    tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+    tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+    if (a->s) {
+        fn_s(t, d, n, g, t);
+        do_pred_flags(t);
+    } else {
+        fn(d, n, g, t);
+    }
+    tcg_temp_free_ptr(d);
+    tcg_temp_free_ptr(n);
+    tcg_temp_free_ptr(g);
+    tcg_temp_free_i32(t);
+    return true;
+}
+
+static bool trans_BRKPA(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    return do_brk3(s, a, gen_helper_sve_brkpa, gen_helper_sve_brkpas);
+}
+
+static bool trans_BRKPB(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+    return do_brk3(s, a, gen_helper_sve_brkpb, gen_helper_sve_brkpbs);
+}
+
+static bool trans_BRKA_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brka_m, gen_helper_sve_brkas_m);
+}
+
+static bool trans_BRKB_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkb_m, gen_helper_sve_brkbs_m);
+}
+
+static bool trans_BRKA_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brka_z, gen_helper_sve_brkas_z);
+}
+
+static bool trans_BRKB_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkb_z, gen_helper_sve_brkbs_z);
+}
+
+static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+    return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &rri_esz rd rn imm esz
 &rrr_esz rd rn rm esz
 &rpr_esz rd pg rn esz
+&rpr_s rd pg rn s
 &rprr_s rd pg rn rm s
 &rprr_esz rd pg rn rm esz
 &rprrr_esz rd pg rn rm ra esz
@@ -XXX,XX +XXX,XX @@
 @pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
 @rd_rn ........ esz:2 ...... ...... rn:5 rd:5 &rr_esz

+# Two operand with governing predicate, flags setting
+@pd_pg_pn_s ........ . s:1 ...... .. pg:4 . rn:4 . rd:4 &rpr_s
+
 # Three operand with unused vector element size
 @rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0

@@ -XXX,XX +XXX,XX @@ PFIRST 00100101 01 011 000 11000 00 .... 0 .... @pd_pn_e0
 # SVE predicate next active
 PNEXT 00100101 .. 011 001 11000 10 .... 0 .... @pd_pn

+### SVE Partition Break Group
+
+# SVE propagate break from previous partition
+BRKPA 00100101 0. 00 .... 11 .... 0 .... 0 .... @pd_pg_pn_pm_s
+BRKPB 00100101 0. 00 .... 11 .... 0 .... 1 .... @pd_pg_pn_pm_s
+
+# SVE partition break condition
+BRKA_z 00100101 0. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKB_z 00100101 1. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKA_m 00100101 0. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+
+# SVE propagate break to next partition
+BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
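
A brief illustration of the bit trick at the heart of compute_brk() in the
patch above: b & -b isolates the lowest set bit of b, and b | (b - 1) then
fills in every bit below it as well. The following standalone sketch (mine,
not part of the patch) walks one guard/predicate pair through both the
break-after and break-before cases:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the compute_brk() mask construction: find the first bit
     * where both guard G and predicate N are set, then build a mask of
     * everything up to (BRKA) or before (BRKB) that break point.
     */
    int main(void)
    {
        uint64_t g = 0xff;   /* guard: elements 0-7 active */
        uint64_t n = 0x30;   /* predicate: first true element at bit 4 */
        uint64_t b = g & n;  /* candidate break positions: 0x30 */

        b &= -b;             /* isolate the first one: 0x10 */
        printf("break after:  %#llx\n", (unsigned long long)(b | (b - 1))); /* 0x1f */
        printf("break before: %#llx\n", (unsigned long long)(b - 1));       /* 0xf */
        return 0;
    }
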
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   2 +
 target/arm/sve_helper.c    |  14 ++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++++
 4 files changed, 176 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
         return do_zero(vd, oprsz);
     }
 }
+
+uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t t = n[i] & g[i] & mask;
+        sum += ctpop64(t);
+    }
+    return sum;
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"


+typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
+                         TCGv_i64, uint32_t, uint32_t);
+
 typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
@@ -XXX,XX +XXX,XX @@ static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
     return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
 }

+/*
+ *** SVE Predicate Count Group
+ */
+
+static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
+{
+    unsigned psz = pred_full_reg_size(s);
+
+    if (psz <= 8) {
+        uint64_t psz_mask;
+
+        tcg_gen_ld_i64(val, cpu_env, pred_full_reg_offset(s, pn));
+        if (pn != pg) {
+            TCGv_i64 g = tcg_temp_new_i64();
+            tcg_gen_ld_i64(g, cpu_env, pred_full_reg_offset(s, pg));
+            tcg_gen_and_i64(val, val, g);
+            tcg_temp_free_i64(g);
+        }
+
+        /* Reduce the pred_esz_masks value simply to reduce the
+         * size of the code generated here.
+         */
+        psz_mask = MAKE_64BIT_MASK(0, psz * 8);
+        tcg_gen_andi_i64(val, val, pred_esz_masks[esz] & psz_mask);
+
+        tcg_gen_ctpop_i64(val, val);
+    } else {
+        TCGv_ptr t_pn = tcg_temp_new_ptr();
+        TCGv_ptr t_pg = tcg_temp_new_ptr();
+        unsigned desc;
+        TCGv_i32 t_desc;
+
+        desc = psz - 2;
+        desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+        tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
+        tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+        t_desc = tcg_const_i32(desc);
+
+        gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
+        tcg_temp_free_ptr(t_pn);
+        tcg_temp_free_ptr(t_pg);
+        tcg_temp_free_i32(t_desc);
+    }
+}
+
+static bool trans_CNTP(DisasContext *s, arg_CNTP *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cntp(s, cpu_reg(s, a->rd), a->esz, a->rn, a->pg);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_r(DisasContext *s, arg_incdec_pred *a,
+                            uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        if (a->d) {
+            tcg_gen_sub_i64(reg, reg, val);
+        } else {
+            tcg_gen_add_i64(reg, reg, val);
+        }
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                            uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 val = tcg_temp_new_i64();
+        GVecGen2sFn *gvec_fn = a->d ? tcg_gen_gvec_subs : tcg_gen_gvec_adds;
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
+                vec_full_reg_offset(s, a->rn), val, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_32(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_32(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_64(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_64(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                             uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_temp_new_i64();
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
+    }
+    return true;
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &ptrue rd esz pat s
 &incdec_cnt rd pat esz imm d u
 &incdec2_cnt rd rn pat esz imm d u
+&incdec_pred rd pg esz d u
+&incdec2_pred rd rn pg esz d u

###########################################################################
# Named instruction formats. These are generally used to
@@ -XXX,XX +XXX,XX @@

 # One register operand, with governing predicate, vector element size
 @rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
+@rd_pg4_pn ........ esz:2 ... ... .. pg:4 . rn:4 rd:5 &rpr_esz

 # Two register operands with a 6-bit signed immediate.
 @rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
@@ -XXX,XX +XXX,XX @@
 @incdec2_cnt ........ esz:2 .. .... ...... pat:5 rd:5 \
             &incdec2_cnt imm=%imm4_16_p1 rn=%reg_movprfx

+# One register, predicate.
+# User must fill in U and D.
+@incdec_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 &incdec_pred
+@incdec2_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 \
+              &incdec2_pred rn=%reg_movprfx
+
###########################################################################
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.

@@ -XXX,XX +XXX,XX @@ BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
 # SVE propagate break to next partition
 BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s

+### SVE Predicate Count Group
+
+# SVE predicate count
+CNTP 00100101 .. 100 000 10 .... 0 .... ..... @rd_pg4_pn
+
+# SVE inc/dec register by predicate count
+INCDECP_r 00100101 .. 10110 d:1 10001 00 .... ..... @incdec_pred u=1
+
+# SVE inc/dec vector by predicate count
+INCDECP_z 00100101 .. 10110 d:1 10000 00 .... ..... @incdec2_pred u=1
+
+# SVE saturating inc/dec register by predicate count
+SINCDECP_r_32 00100101 .. 1010 d:1 u:1 10001 00 .... ..... @incdec_pred
+SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
+
+# SVE saturating inc/dec vector by predicate count
+SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
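
One detail of the sve_cntp() helper above that is easy to miss: an SVE
predicate holds one bit per vector *byte*, so for 2-, 4- and 8-byte elements
only every 2nd, 4th or 8th bit is significant, and the helper masks the rest
away before the popcount. A rough standalone equivalent, with per-size masks
written out to mirror what pred_esz_masks holds (an illustration using a GCC
builtin, not the patch's code):

    #include <stdint.h>

    static const uint64_t esz_masks[4] = {
        0xffffffffffffffffull,  /* bytes: every predicate bit */
        0x5555555555555555ull,  /* halfwords: every 2nd bit */
        0x1111111111111111ull,  /* words: every 4th bit */
        0x0101010101010101ull,  /* doublewords: every 8th bit */
    };

    /* Count active elements of PN governed by PG, over WORDS uint64s. */
    static uint64_t count_active(const uint64_t *pn, const uint64_t *pg,
                                 int words, int esz)
    {
        uint64_t sum = 0;
        for (int i = 0; i < words; ++i) {
            sum += __builtin_popcountll(pn[i] & pg[i] & esz_masks[esz]);
        }
        return sum;
    }
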
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  2 +
 target/arm/sve_helper.c    | 31 ++++++++++++
 target/arm/translate-sve.c | 99 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 +++
 4 files changed, 140 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
     return sum;
 }
+
+uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
+{
+    uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t esz_mask = pred_esz_masks[esz];
+    ARMPredicateReg *d = vd;
+    uint32_t flags;
+    intptr_t i;
+
+    /* Begin with a zero predicate register.  */
+    flags = do_zero(d, oprsz);
+    if (count == 0) {
+        return flags;
+    }
+
+    /* Scale from predicate element count to bits.  */
+    count <<= esz;
+    /* Bound to the bits in the predicate.  */
+    count = MIN(count, oprsz * 8);
+
+    /* Set all of the requested bits.  */
+    for (i = 0; i < count / 64; ++i) {
+        d->p[i] = esz_mask;
+    }
+    if (count & 63) {
+        d->p[i] = MAKE_64BIT_MASK(0, count & 63) & esz_mask;
+    }
+
+    return predtest_ones(d, oprsz, esz_mask);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
     return true;
 }

+/*
+ *** SVE Integer Compare Scalars Group
+ */
+
+static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGCond cond = (a->ne ? TCG_COND_NE : TCG_COND_EQ);
+    TCGv_i64 rn = read_cpu_reg(s, a->rn, a->sf);
+    TCGv_i64 rm = read_cpu_reg(s, a->rm, a->sf);
+    TCGv_i64 cmp = tcg_temp_new_i64();
+
+    tcg_gen_setcond_i64(cond, cmp, rn, rm);
+    tcg_gen_extrl_i64_i32(cpu_NF, cmp);
+    tcg_temp_free_i64(cmp);
+
+    /* VF = !NF & !CF.  */
+    tcg_gen_xori_i32(cpu_VF, cpu_NF, 1);
+    tcg_gen_andc_i32(cpu_VF, cpu_VF, cpu_CF);
+
+    /* Both NF and VF actually look at bit 31.  */
+    tcg_gen_neg_i32(cpu_NF, cpu_NF);
+    tcg_gen_neg_i32(cpu_VF, cpu_VF);
+    return true;
+}
+
+static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
+    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
+    TCGv_i64 t0 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i32 t2, t3;
+    TCGv_ptr ptr;
+    unsigned desc, vsz = vec_full_reg_size(s);
+    TCGCond cond;
+
+    if (!a->sf) {
+        if (a->u) {
+            tcg_gen_ext32u_i64(op0, op0);
+            tcg_gen_ext32u_i64(op1, op1);
+        } else {
+            tcg_gen_ext32s_i64(op0, op0);
+            tcg_gen_ext32s_i64(op1, op1);
+        }
+    }
+
+    /* For the helper, compress the different conditions into a computation
+     * of how many iterations for which the condition is true.
+     *
+     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
+     * 2**64 iterations, overflowing to 0.  Of course, predicate registers
+     * aren't that large, so any value >= predicate size is sufficient.
+     */
+    tcg_gen_sub_i64(t0, op1, op0);
+
+    /* t0 = MIN(op1 - op0, vsz).  */
+    tcg_gen_movi_i64(t1, vsz);
+    tcg_gen_umin_i64(t0, t0, t1);
+    if (a->eq) {
+        /* Equality means one more iteration.  */
+        tcg_gen_addi_i64(t0, t0, 1);
+    }
+
+    /* t0 = (condition true ? t0 : 0).  */
+    cond = (a->u
+            ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
+            : (a->eq ? TCG_COND_LE : TCG_COND_LT));
+    tcg_gen_movi_i64(t1, 0);
+    tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+
+    t2 = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(t2, t0);
+    tcg_temp_free_i64(t0);
+    tcg_temp_free_i64(t1);
+
+    desc = (vsz / 8) - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    t3 = tcg_const_i32(desc);
+
+    ptr = tcg_temp_new_ptr();
+    tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
+
+    gen_helper_sve_while(t2, ptr, t2, t3);
+    do_pred_flags(t2);
+
+    tcg_temp_free_ptr(ptr);
+    tcg_temp_free_i32(t2);
+    tcg_temp_free_i32(t3);
+    return true;
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
 # SVE saturating inc/dec vector by predicate count
 SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred

+### SVE Integer Compare - Scalars Group
+
+# SVE conditionally terminate scalars
+CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
+
+# SVE integer compare scalar count and limit
+WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
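
The comment inside trans_WHILE() above carries the key idea of the patch:
every WHILE variant reduces to "how many leading elements satisfy the
condition", and the helper then simply sets that many predicate bits. A
scalar model of that reduction for the unsigned flavours (my own sketch,
mirroring the TCG sub/umin/addi/movcond sequence, not code from the patch):

    #include <stdint.h>

    /* Number of leading true elements for WHILELO (eq=0) / WHILELS (eq=1),
     * clamped so that unsigned wrap-around (op0 > op1) cannot look like a
     * huge count; the helper bounds the result to the predicate size anyway.
     */
    static uint32_t while_count(uint64_t op0, uint64_t op1,
                                int eq, uint32_t elements)
    {
        uint64_t n = op1 - op0;
        n = (n < elements ? n : elements) + (eq ? 1 : 0);

        /* If the condition fails for the first element, none are set. */
        return (eq ? op0 <= op1 : op0 < op1) ? (uint32_t)n : 0;
    }
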
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 ++++++++
 2 files changed, 45 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
     return true;
 }

+/*
+ *** SVE Integer Wide Immediate - Unpredicated Group
+ */
+
+static bool trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+        uint64_t imm;
+
+        /* Decode the VFP immediate.  */
+        imm = vfp_expand_imm(a->esz, a->imm);
+        imm = dup_const(a->esz, imm);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
+    }
+    return true;
+}
+
+static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
+    }
+    return true;
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
 # SVE integer compare scalar count and limit
 WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4

+### SVE Integer Wide Immediate - Unpredicated Group
+
+# SVE broadcast floating-point immediate (unpredicated)
+FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
+
+# SVE broadcast integer immediate (unpredicated)
+DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
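
For reference, the 8-bit immediate that FDUP hands to vfp_expand_imm() above
follows the ARM pseudocode VFPExpandImm() rule: imm8 = abcdefgh becomes sign
a, exponent NOT(b):bbbbb:cd, fraction efgh padded with zeros. A sketch of the
32-bit case (my own rewrite of the pseudocode, not QEMU's implementation):

    #include <stdint.h>

    /* VFPExpandImm sketch, 32-bit case: imm8 = abcdefgh. */
    static uint32_t vfp_expand_imm32(uint8_t imm8)
    {
        uint32_t sign = (imm8 >> 7) & 1;
        uint32_t b    = (imm8 >> 6) & 1;
        uint32_t cd   = (imm8 >> 4) & 3;
        uint32_t efgh = imm8 & 0xf;
        uint32_t exp  = ((b ^ 1) << 7) | (b ? 0x7c : 0) | cd;

        return (sign << 31) | (exp << 23) | (efgh << 19);
    }

    /* Example: 0x70 expands to 0x3f800000, i.e. 1.0f. */
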
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  25 +++
 target/arm/sve_helper.c    |  41 +++++++++++
 target/arm/translate-sve.c | 144 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  26 +++++++
 4 files changed, 236 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)

 DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_VPZ
 #undef DO_VPZ_D

+/* Two vector operand, one scalar operand, unpredicated.  */
+#define DO_ZZI(NAME, TYPE, OP)                                       \
+void HELPER(NAME)(void *vd, void *vn, uint64_t s64, uint32_t desc)   \
+{                                                                    \
+    intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE);            \
+    TYPE s = s64, *d = vd, *n = vn;                                  \
+    for (i = 0; i < opr_sz; ++i) {                                   \
+        d[i] = OP(n[i], s);                                          \
+    }                                                                \
+}
+
+#define DO_SUBR(X, Y)   (Y - X)
+
+DO_ZZI(sve_subri_b, uint8_t, DO_SUBR)
+DO_ZZI(sve_subri_h, uint16_t, DO_SUBR)
+DO_ZZI(sve_subri_s, uint32_t, DO_SUBR)
+DO_ZZI(sve_subri_d, uint64_t, DO_SUBR)
+
+DO_ZZI(sve_smaxi_b, int8_t, DO_MAX)
+DO_ZZI(sve_smaxi_h, int16_t, DO_MAX)
+DO_ZZI(sve_smaxi_s, int32_t, DO_MAX)
+DO_ZZI(sve_smaxi_d, int64_t, DO_MAX)
+
+DO_ZZI(sve_smini_b, int8_t, DO_MIN)
+DO_ZZI(sve_smini_h, int16_t, DO_MIN)
+DO_ZZI(sve_smini_s, int32_t, DO_MIN)
+DO_ZZI(sve_smini_d, int64_t, DO_MIN)
+
+DO_ZZI(sve_umaxi_b, uint8_t, DO_MAX)
+DO_ZZI(sve_umaxi_h, uint16_t, DO_MAX)
+DO_ZZI(sve_umaxi_s, uint32_t, DO_MAX)
+DO_ZZI(sve_umaxi_d, uint64_t, DO_MAX)
+
+DO_ZZI(sve_umini_b, uint8_t, DO_MIN)
+DO_ZZI(sve_umini_h, uint16_t, DO_MIN)
+DO_ZZI(sve_umini_s, uint32_t, DO_MIN)
+DO_ZZI(sve_umini_d, uint64_t, DO_MIN)
+
+#undef DO_ZZI
+
 #undef DO_AND
 #undef DO_ORR
 #undef DO_EOR
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_ASR
 #undef DO_LSR
 #undef DO_LSL
+#undef DO_SUBR

 /* Similar to the ARM LastActiveElement pseudocode function, except the
    result is multiplied by the element size. This includes the not found
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static inline int expand_imm_sh8s(int x)
     return (int8_t)x << (x & 0x100 ? 8 : 0);
 }

+static inline int expand_imm_sh8u(int x)
+{
+    return (uint8_t)x << (x & 0x100 ? 8 : 0);
+}
+
 /*
  * Include the generated decoder.
  */
@@ -XXX,XX +XXX,XX @@ static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
     return true;
 }

+static bool trans_ADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_addi(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    a->imm = -a->imm;
+    return trans_ADD_zzi(s, a, insn);
+}
+
+static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    static const GVecGen2s op[4] = {
+        { .fni8 = tcg_gen_vec_sub8_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_b,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_8,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_vec_sub16_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_h,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_16,
+          .scalar_first = true },
+        { .fni4 = tcg_gen_sub_i32,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_s,
+          .opc = INDEX_op_sub_vec,
+          .vece = MO_32,
+          .scalar_first = true },
+        { .fni8 = tcg_gen_sub_i64,
+          .fniv = tcg_gen_sub_vec,
+          .fno = gen_helper_sve_subri_d,
+          .opc = INDEX_op_sub_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .vece = MO_64,
+          .scalar_first = true }
+    };
+
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+        tcg_gen_gvec_2s(vec_full_reg_offset(s, a->rd),
+                        vec_full_reg_offset(s, a->rn),
+                        vsz, vsz, c, &op[a->esz]);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+static bool trans_MUL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_muli(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool do_zzi_sat(DisasContext *s, arg_rri_esz *a, uint32_t insn,
+                       bool u, bool d)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_const_i64(a->imm);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, u, d);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_SQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, false);
+}
+
+static bool trans_UQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, false);
+}
+
+static bool trans_SQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, true);
+}
+
+static bool trans_UQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, true);
+}
+
+static bool do_zzi_ool(DisasContext *s, arg_rri_esz *a, gen_helper_gvec_2i *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+
+        tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
+                            vec_full_reg_offset(s, a->rn),
+                            c, vsz, vsz, 0, fn);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+#define DO_ZZI(NAME, name)                                              \
+static bool trans_##NAME##_zzi(DisasContext *s, arg_rri_esz *a,         \
+                               uint32_t insn)                           \
+{                                                                       \
+    static gen_helper_gvec_2i * const fns[4] = {                        \
+        gen_helper_sve_##name##i_b, gen_helper_sve_##name##i_h,         \
+        gen_helper_sve_##name##i_s, gen_helper_sve_##name##i_d,         \
+    };                                                                  \
+    return do_zzi_ool(s, a, fns[a->esz]);                               \
+}
+
+DO_ZZI(SMAX, smax)
+DO_ZZI(UMAX, umax)
+DO_ZZI(SMIN, smin)
+DO_ZZI(UMIN, umin)
+
+#undef DO_ZZI
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@

 # Signed 8-bit immediate, optionally shifted left by 8.
 %sh8_i8s 5:9 !function=expand_imm_sh8s
+# Unsigned 8-bit immediate, optionally shifted left by 8.
+%sh8_i8u 5:9 !function=expand_imm_sh8u

 # Either a copy of rd (at bit 0), or a different source
 # as propagated via the MOVPRFX instruction.
@@ -XXX,XX +XXX,XX @@
 @pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
 @rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
         &rrr_esz rn=%reg_movprfx
+@rdn_sh_i8u ........ esz:2 ...... ...... ..... rd:5 \
+            &rri_esz rn=%reg_movprfx imm=%sh8_i8u
+@rdn_i8u ........ esz:2 ...... ... imm:8 rd:5 \
+         &rri_esz rn=%reg_movprfx
+@rdn_i8s ........ esz:2 ...... ... imm:s8 rd:5 \
+         &rri_esz rn=%reg_movprfx

 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
@@ -XXX,XX +XXX,XX @@ FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
 # SVE broadcast integer immediate (unpredicated)
 DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s

+# SVE integer add/subtract immediate (unpredicated)
+ADD_zzi 00100101 .. 100 000 11 . ........ ..... @rdn_sh_i8u
+SUB_zzi 00100101 .. 100 001 11 . ........ ..... @rdn_sh_i8u
+SUBR_zzi 00100101 .. 100 011 11 . ........ ..... @rdn_sh_i8u
+SQADD_zzi 00100101 .. 100 100 11 . ........ ..... @rdn_sh_i8u
+UQADD_zzi 00100101 .. 100 101 11 . ........ ..... @rdn_sh_i8u
+SQSUB_zzi 00100101 .. 100 110 11 . ........ ..... @rdn_sh_i8u
+UQSUB_zzi 00100101 .. 100 111 11 . ........ ..... @rdn_sh_i8u
+
+# SVE integer min/max immediate (unpredicated)
+SMAX_zzi 00100101 .. 101 000 110 ........ ..... @rdn_i8s
+UMAX_zzi 00100101 .. 101 001 110 ........ ..... @rdn_i8u
+SMIN_zzi 00100101 .. 101 010 110 ........ ..... @rdn_i8s
+UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
+
+# SVE integer multiply immediate (unpredicated)
+MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
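
A wrinkle in the patch above worth spelling out: SUBR is *reverse* subtract,
so the immediate is the minuend. That is why DO_SUBR swaps its operands, and
why plain SUB can be folded into ADD with a negated immediate while SUBR
needs its own expansion. What DO_ZZI(sve_subri_b, uint8_t, DO_SUBR) expands
to, approximately:

    #include <stddef.h>
    #include <stdint.h>

    /* Unpredicated per-element reverse subtract: d[i] = imm - n[i]. */
    static void subri_b(uint8_t *d, const uint8_t *n, uint8_t imm,
                        size_t oprsz)
    {
        for (size_t i = 0; i < oprsz; ++i) {
            d[i] = imm - n[i];
        }
    }
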
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 14 ++++++++
 target/arm/helper.h        | 19 +++++++++++
 target/arm/translate-sve.c | 42 +++++++++++++++++
 target/arm/vec_helper.c    | 69 ++++++++++++++++++++++++++++++++
 target/arm/sve.decode      | 10 ++++++
 5 files changed, 154 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_5(gvec_recps_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_rsqrts_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fsub_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_ZZI(UMIN, umin)

 #undef DO_ZZI

+/*
+ *** SVE Floating Point Arithmetic - Unpredicated Group
+ */
+
+static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
+                      gen_helper_gvec_3_ptr *fn)
+{
+    if (fn == NULL) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           status, vsz, vsz, 0, fn);
+        tcg_temp_free_ptr(status);
+    }
+    return true;
+}
+
+
+#define DO_FP3(NAME, name)                                               \
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a, uint32_t insn) \
+{                                                                        \
+    static gen_helper_gvec_3_ptr * const fns[4] = {                      \
+        NULL, gen_helper_gvec_##name##_h,                                \
+        gen_helper_gvec_##name##_s, gen_helper_gvec_##name##_d           \
+    };                                                                   \
+    return do_zzz_fp(s, a, fns[a->esz]);                                 \
+}
+
+DO_FP3(FADD_zzz, fadd)
+DO_FP3(FSUB_zzz, fsub)
+DO_FP3(FMUL_zzz, fmul)
+DO_FP3(FTSMUL, ftsmul)
+DO_FP3(FRECPS, recps)
+DO_FP3(FRSQRTS, rsqrts)
+
+#undef DO_FP3
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
     }
     clear_tail(d, opr_sz, simd_maxsz(desc));
 }
+
+/* Floating-point trigonometric starting value.
+ * See the ARM ARM pseudocode function FPTrigSMul.
+ */
+static float16 float16_ftsmul(float16 op1, uint16_t op2, float_status *stat)
+{
+    float16 result = float16_mul(op1, op1, stat);
+    if (!float16_is_any_nan(result)) {
+        result = float16_set_sign(result, op2 & 1);
+    }
+    return result;
+}
+
+static float32 float32_ftsmul(float32 op1, uint32_t op2, float_status *stat)
+{
+    float32 result = float32_mul(op1, op1, stat);
+    if (!float32_is_any_nan(result)) {
+        result = float32_set_sign(result, op2 & 1);
+    }
+    return result;
+}
+
+static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
+{
+    float64 result = float64_mul(op1, op1, stat);
+    if (!float64_is_any_nan(result)) {
+        result = float64_set_sign(result, op2 & 1);
+    }
+    return result;
+}
+
+#define DO_3OP(NAME, FUNC, TYPE)                                           \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{                                                                          \
+    intptr_t i, oprsz = simd_oprsz(desc);                                  \
+    TYPE *d = vd, *n = vn, *m = vm;                                        \
+    for (i = 0; i < oprsz / sizeof(TYPE); i++) {                           \
+        d[i] = FUNC(n[i], m[i], stat);                                     \
+    }                                                                      \
+}
+
+DO_3OP(gvec_fadd_h, float16_add, float16)
+DO_3OP(gvec_fadd_s, float32_add, float32)
+DO_3OP(gvec_fadd_d, float64_add, float64)
+
+DO_3OP(gvec_fsub_h, float16_sub, float16)
+DO_3OP(gvec_fsub_s, float32_sub, float32)
+DO_3OP(gvec_fsub_d, float64_sub, float64)
+
+DO_3OP(gvec_fmul_h, float16_mul, float16)
+DO_3OP(gvec_fmul_s, float32_mul, float32)
+DO_3OP(gvec_fmul_d, float64_mul, float64)
+
+DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
+DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
+DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
+
+#ifdef TARGET_AARCH64
+
+DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
+DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
+DO_3OP(gvec_recps_d, helper_recpsf_f64, float64)
+
+DO_3OP(gvec_rsqrts_h, helper_rsqrtsf_f16, float16)
+DO_3OP(gvec_rsqrts_s, helper_rsqrtsf_f32, float32)
+DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
+
+#endif
+#undef DO_3OP
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
 # SVE integer multiply immediate (unpredicated)
 MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s

+### SVE Floating Point Arithmetic - Unpredicated Group
+
+# SVE floating-point arithmetic (unpredicated)
+FADD_zzz 01100101 .. 0 ..... 000 000 ..... ..... @rd_rn_rm
+FSUB_zzz 01100101 .. 0 ..... 000 001 ..... ..... @rd_rn_rm
+FMUL_zzz 01100101 .. 0 ..... 000 010 ..... ..... @rd_rn_rm
+FTSMUL 01100101 .. 0 ..... 000 011 ..... ..... @rd_rn_rm
+FRECPS 01100101 .. 0 ..... 000 110 ..... ..... @rd_rn_rm
+FRSQRTS 01100101 .. 0 ..... 000 111 ..... ..... @rd_rn_rm
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
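
FTSMUL above is compact enough to restate: square the first operand, then
copy bit 0 of the second operand into the sign bit, leaving NaNs alone. A
behavioural sketch using host floats rather than QEMU's softfloat (so
rounding and NaN-propagation details differ):

    #include <math.h>
    #include <stdint.h>

    static float ftsmul(float op1, uint32_t op2)
    {
        float r = op1 * op1;
        if (!isnan(r)) {
            /* The sign of the result comes from bit 0 of op2. */
            r = (op2 & 1) ? -fabsf(r) : fabsf(r);
        }
        return r;
    }
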
From: Igor Mammedov <imammedo@redhat.com>

When QEMU is started with the following CLI:
  -machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
it crashes with an abort at accel/kvm/kvm-all.c:2164:
  KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument

This is caused by the implicit dependency of kvm_arm_gicv3_reset() on
arm_gicv3_icc_reset(), where the latter is called by the CPU reset
callback.

However commit
  3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
broke CPU reset callback registration in the case where the

  arm_load_kernel()
  ...
  if (!info->kernel_filename || info->firmware_loaded)

branch is taken; i.e. it is sufficient to provide a firmware image, or to
provide no kernel on the CLI, to skip CPU reset callback registration,
whereas before the offending commit the callback was registered
unconditionally.

Fix it by registering the callback right at the beginning of
arm_load_kernel(), unconditionally, instead of doing it at the end.

NOTE: we probably should eliminate that dependency anyway, as well as
separate the arch-specific CPU reset parts from arm_load_kernel() into
the CPU itself, but that is refactoring that would probably have to
happen later anyway for CPU hotplug to work.

Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
     static const ARMInsnFixup *primary_loader;
     AddressSpace *as = arm_boot_address_space(cpu, info);

+    /* CPU objects (unlike devices) are not automatically reset on system
+     * reset, so we must always register a handler to do so. If we're
+     * actually loading a kernel, the handler is also responsible for
+     * arranging that we start it correctly.
+     */
+    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
+        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
+    }
+
     /* The board code is not supposed to set secure_board_setup unless
      * running its code in secure mode is actually possible, and KVM
      * doesn't support secure.
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
         ARM_CPU(cs)->env.boot_info = info;
     }

-    /* CPU objects (unlike devices) are not automatically reset on system
-     * reset, so we must always register a handler to do so. If we're
-     * actually loading a kernel, the handler is also responsible for
-     * arranging that we start it correctly.
-     */
-    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
-        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
-    }
-
     if (!info->skip_dtb_autoload && have_dtb(info)) {
         if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
             exit(1);
--
2.17.1


diff view generated by jsdifflib
From: Joel Stanley <joel@jms.id.au>

The ASPEED SoCs contain a single register that returns random data when
read. This models that register so that guests can use it.

The random number data register has a corresponding control register;
however, it returns data regardless of the state of the enabled bit, so
the model follows this behaviour.

When the qcrypto call fails we exit, as the guest uses the random number
device to feed its entropy pool, which is used for cryptographic
purposes.

Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180613114836.9265-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/visitor.h"
 #include "qemu/bitops.h"
 #include "qemu/log.h"
+#include "crypto/random.h"
 #include "trace.h"

 #define TO_REG(offset) ((offset) >> 2)
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2500_a1_resets[ASPEED_SCU_NR_REGS] = {
     [BMC_DEV_ID] = 0x00002402U
 };

+static uint32_t aspeed_scu_get_random(void)
+{
+    Error *err = NULL;
+    uint32_t num;
+
+    if (qcrypto_random_bytes((uint8_t *)&num, sizeof(num), &err)) {
+        error_report_err(err);
+        exit(1);
+    }
+
+    return num;
+}
+
 static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
 {
     AspeedSCUState *s = ASPEED_SCU(opaque);
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
     }

     switch (reg) {
+    case RNG_DATA:
+        /* On hardware, RNG_DATA works regardless of
+         * the state of the enable bit in RNG_CTRL
+         */
+        s->regs[RNG_DATA] = aspeed_scu_get_random();
+        break;
     case WAKEUP_EN:
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: Read of write-only offset 0x%" HWADDR_PRIx "\n",
--
2.17.1


diff view generated by jsdifflib
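
[Editor's note: for illustration, a bare-metal guest could consume the
modelled register with an MMIO read along these lines. This is a minimal
sketch only; the SCU base address and RNG_DATA offset below are my own
placeholders, not values taken from the patch.

    #include <stdint.h>

    #define SCU_BASE 0x1e6e2000UL   /* assumed ASPEED SCU base */
    #define RNG_DATA_OFF 0x78       /* assumed random-data register offset */

    static uint32_t scu_rng_read(void)
    {
        /* Each read returns fresh random data, whether or not the
         * enable bit in the control register is set, matching the
         * behaviour described in the commit message above.
         */
        return *(volatile uint32_t *)(SCU_BASE + RNG_DATA_OFF);
    }
]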
From: Paolo Bonzini <pbonzini@redhat.com>

cpregs_keys is a uint32_t*, so the allocation should use uint32_t.
g_new is even better because it is type-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/gdbstub.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
     RegisterSysregXmlParam param = {cs, s};

     cpu->dyn_xml.num_cpregs = 0;
-    cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
-                                        g_hash_table_size(cpu->cp_regs));
+    cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
     g_string_printf(s, "<?xml version=\"1.0\"?>");
     g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
     g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
--
2.17.1


diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

On Macronix chips, two bytes can be written to the WRSR. The first byte
configures the status register and the second the configuration
register. It is important to save the configuration value, as it
contains the dummy cycle setting when using dual or quad IO mode.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/block/m25p80.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
     case MAN_MACRONIX:
         s->quad_enable = extract32(s->data[0], 6, 1);
         if (s->len > 1) {
+            s->volatile_cfg = s->data[1];
             s->four_bytes_address_mode = extract32(s->data[1], 5, 1);
         }
         break;
--
2.17.1


diff view generated by jsdifflib
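
[Editor's note: the type-safety point in the gdbstub fix above is worth
spelling out with a sketch of my own (not code from the patch). With
g_malloc() the element type appears only inside sizeof(), so the original
code silently sized the array for pointers rather than for uint32_t values;
g_new() ties the element type to the returned pointer type.

    size_t n = g_hash_table_size(cpu->cp_regs);

    /* Buggy: allocates n * sizeof(uint32_t *) bytes, i.e. pointer-sized
     * elements, even though the array holds uint32_t values.
     */
    uint32_t *keys = g_malloc(sizeof(uint32_t *) * n);

    /* Correct and type-safe: g_new(T, n) allocates n * sizeof(T) bytes
     * and returns T *, so a mismatched element type is caught at compile
     * time instead of silently mis-sizing the allocation.
     */
    uint32_t *keys2 = g_new(uint32_t, n);

On 64-bit hosts the original mistake merely over-allocates, so it was
wasteful rather than unsafe; the g_new() form also documents intent.]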
Add more detail to the documentation for memory_region_init_iommu()
and other IOMMU-related functions and data structures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 10 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
     IOMMU_ATTR_SPAPR_TCE_FD
 };

+/**
+ * IOMMUMemoryRegionClass:
+ *
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
+ * and provide an implementation of at least the @translate method here
+ * to handle requests to the memory region. Other methods are optional.
+ *
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
+ * to report whenever mappings are changed, by calling
+ * memory_region_notify_iommu() (or, if necessary, by calling
+ * memory_region_notify_one() for each registered notifier).
+ */
 typedef struct IOMMUMemoryRegionClass {
     /* private */
     struct DeviceClass parent_class;

     /*
-     * Return a TLB entry that contains a given address. Flag should
-     * be the access permission of this translation operation. We can
-     * set flag to IOMMU_NONE to mean that we don't need any
-     * read/write permission checks, like, when for region replay.
+     * Return a TLB entry that contains a given address.
+     *
+     * The IOMMUAccessFlags indicated via @flag are optional and may
+     * be specified as IOMMU_NONE to indicate that the caller needs
+     * the full translation information for both reads and writes. If
+     * the access flags are specified then the IOMMU implementation
+     * may use this as an optimization, to stop doing a page table
+     * walk as soon as it knows that the requested permissions are not
+     * allowed. If IOMMU_NONE is passed then the IOMMU must do the
+     * full page table walk and report the permissions in the returned
+     * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
+     * return different mappings for reads and writes.)
+     *
+     * The returned information remains valid while the caller is
+     * holding the big QEMU lock or is inside an RCU critical section;
+     * if the caller wishes to cache the mapping beyond that it must
+     * register an IOMMU notifier so it can invalidate its cached
+     * information when the IOMMU mapping changes.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @hwaddr: address to be translated within the memory region
+     * @flag: requested access permissions
      */
     IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
                                IOMMUAccessFlags flag);
-    /* Returns minimum supported page size */
+    /* Returns minimum supported page size in bytes.
+     * If this method is not provided then the minimum is assumed to
+     * be TARGET_PAGE_SIZE.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     */
     uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
-    /* Called when IOMMU Notifier flag changed */
+    /* Called when IOMMU Notifier flag changes (ie when the set of
+     * events which IOMMU users are requesting notification for changes).
+     * Optional method -- need not be provided if the IOMMU does not
+     * need to know exactly which events must be notified.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @old_flags: events which previously needed to be notified
+     * @new_flags: events which now need to be notified
+     */
     void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
                                 IOMMUNotifierFlag old_flags,
                                 IOMMUNotifierFlag new_flags);
-    /* Set this up to provide customized IOMMU replay function */
+    /* Called to handle memory_region_iommu_replay().
+     *
+     * The default implementation of memory_region_iommu_replay() is to
+     * call the IOMMU translate method for every page in the address space
+     * with flag == IOMMU_NONE and then call the notifier if translate
+     * returns a valid mapping. If this method is implemented then it
+     * overrides the default behaviour, and must provide the full semantics
+     * of memory_region_iommu_replay(), by calling @notifier for every
+     * translation present in the IOMMU.
+     *
+     * Optional method -- an IOMMU only needs to provide this method
+     * if the default is inefficient or produces undesirable side effects.
+     *
+     * Note: this is not related to record-and-replay functionality.
+     */
     void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);

-    /* Get IOMMU misc attributes */
-    int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
+    /* Get IOMMU misc attributes. This is an optional method that
+     * can be used to allow users of the IOMMU to get implementation-specific
+     * information. The IOMMU implements this method to handle calls
+     * by IOMMU users to memory_region_iommu_get_attr() by filling in
+     * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
+     * the IOMMU supports. If the method is unimplemented then
+     * memory_region_iommu_get_attr() will always return -EINVAL.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attr: attribute being queried
+     * @data: memory to fill in with the attribute data
+     *
+     * Returns 0 on success, or a negative errno; in particular
+     * returns -EINVAL for unrecognized or unimplemented attribute types.
+     */
+    int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
 } IOMMUMemoryRegionClass;

@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
  * An IOMMU region translates addresses and forwards accesses to a target
  * memory region.
  *
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
+ * @_iommu_mr should be a pointer to enough memory for an instance of
+ * that subclass, @instance_size is the size of that subclass, and
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
+ * instance of the subclass, and its methods will then be called to handle
+ * accesses to the memory region. See the documentation of
+ * #IOMMUMemoryRegionClass for further details.
+ *
  * @_iommu_mr: the #IOMMUMemoryRegion to be initialized
  * @instance_size: the IOMMUMemoryRegion subclass instance size
  * @mrtypename: the type name of the #IOMMUMemoryRegion
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
  * a notifier with the minimum page granularity returned by
  * mr->iommu_ops->get_page_size().
  *
+ * Note: this is not related to record-and-replay functionality.
+ *
  * @iommu_mr: the memory region to observe
  * @n: the notifier to which to replay iommu mappings
  */
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
  * memory_region_iommu_replay_all: replay existing IOMMU translations
  * to all the notifiers registered.
  *
+ * Note: this is not related to record-and-replay functionality.
+ *
  * @iommu_mr: the memory region to observe
  */
 void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
  * memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
  * defined on the IOMMU.
  *
- * Returns 0 if succeded, error code otherwise.
+ * Returns 0 on success, or a negative errno otherwise. In particular,
+ * -EINVAL indicates that the IOMMU does not support the requested
+ * attribute.
  *
  * @iommu_mr: the memory region
  * @attr: the requested attribute
--
2.17.1


diff view generated by jsdifflib
If an IOMMU supports mappings that care about the memory
transaction attributes, then it no longer has a unique
address -> output mapping, but more than one. We can
represent these using an IOMMU index, analogous to TCG's
mmu indexes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 55 +++++++++++++++++++++++++++++++++++++++++++
 memory.c              | 23 ++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
  * to report whenever mappings are changed, by calling
  * memory_region_notify_iommu() (or, if necessary, by calling
  * memory_region_notify_one() for each registered notifier).
+ *
+ * Conceptually an IOMMU provides a mapping from input address
+ * to an output TLB entry. If the IOMMU is aware of memory transaction
+ * attributes and the output TLB entry depends on the transaction
+ * attributes, we represent this using IOMMU indexes. Each index
+ * selects a particular translation table that the IOMMU has:
+ *   @attrs_to_index returns the IOMMU index for a set of transaction attributes
+ *   @translate takes an input address and an IOMMU index
+ * and the mapping returned can only depend on the input address and the
+ * IOMMU index.
+ *
+ * Most IOMMUs don't care about the transaction attributes and support
+ * only a single IOMMU index. A more complex IOMMU might have one index
+ * for secure transactions and one for non-secure transactions.
 */
 typedef struct IOMMUMemoryRegionClass {
     /* private */
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      */
     int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
+
+    /* Return the IOMMU index to use for a given set of transaction attributes.
+     *
+     * Optional method: if an IOMMU only supports a single IOMMU index then
+     * the default implementation of memory_region_iommu_attrs_to_index()
+     * will return 0.
+     *
+     * The indexes supported by an IOMMU must be contiguous, starting at 0.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attrs: memory transaction attributes
+     */
+    int (*attrs_to_index)(IOMMUMemoryRegion *iommu, MemTxAttrs attrs);
+
+    /* Return the number of IOMMU indexes this IOMMU supports.
+     *
+     * Optional method: if this method is not provided, then
+     * memory_region_iommu_num_indexes() will return 1, indicating that
+     * only a single IOMMU index is supported.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     */
+    int (*num_indexes)(IOMMUMemoryRegion *iommu);
 } IOMMUMemoryRegionClass;

 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
                                  enum IOMMUMemoryRegionAttr attr,
                                  void *data);

+/**
+ * memory_region_iommu_attrs_to_index: return the IOMMU index to
+ * use for translations with the given memory transaction attributes.
+ *
+ * @iommu_mr: the memory region
+ * @attrs: the memory transaction attributes
+ */
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs);
+
+/**
+ * memory_region_iommu_num_indexes: return the total number of IOMMU
+ * indexes that this IOMMU supports.
+ *
+ * @iommu_mr: the memory region
+ */
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
+
 /**
  * memory_region_name: get a memory region's name
  *
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
     return imrc->get_attr(iommu_mr, attr, data);
 }

+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->attrs_to_index) {
+        return 0;
+    }
+
+    return imrc->attrs_to_index(iommu_mr, attrs);
+}
+
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->num_indexes) {
+        return 1;
+    }
+
+    return imrc->num_indexes(iommu_mr);
+}
+
 void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
 {
     uint8_t mask = 1 << client;
--
2.17.1


diff view generated by jsdifflib
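
[Editor's note: as a concrete illustration of the two new optional hooks,
an IOMMU that keeps separate secure and non-secure translation tables
might implement them like this. This is a hypothetical sketch of my own,
not code from this series:

    static int my_iommu_attrs_to_index(IOMMUMemoryRegion *iommu,
                                       MemTxAttrs attrs)
    {
        /* Index 0: non-secure transactions; index 1: secure ones.
         * Indexes must be contiguous and start at 0.
         */
        return attrs.secure ? 1 : 0;
    }

    static int my_iommu_num_indexes(IOMMUMemoryRegion *iommu)
    {
        return 2;
    }

An IOMMU with a single translation table simply leaves both methods NULL
and gets index 0 and a count of 1 from the default implementations above.]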
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
---
 include/exec/memory.h     |  4 +++-
 accel/tcg/translate-all.c |  2 +-
 exec.c                    | 14 +++++++++-----
 hw/vfio/common.c          |  3 ++-
 memory_ldst.inc.c         | 18 +++++++++---------
 target/riscv/helper.c     |  2 +-
 6 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
  * #MemoryRegion.
  * @len: pointer to length
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
 MemoryRegion *flatview_translate(FlatView *fv,
                                  hwaddr addr, hwaddr *xlat,
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,

 static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     hwaddr addr, hwaddr *xlat,
-                                                    hwaddr *len, bool is_write)
+                                                    hwaddr *len, bool is_write,
+                                                    MemTxAttrs attrs)
 {
     return flatview_translate(address_space_to_flatview(as),
                               addr, xlat, len, is_write);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
     hwaddr l = 1;

     rcu_read_lock();
-    mr = address_space_translate(as, addr, &addr, &l, false);
+    mr = address_space_translate(as, addr, &addr, &l, false, attrs);
     if (!(memory_region_is_ram(mr)
           || memory_region_is_romd(mr))) {
         rcu_read_unlock();
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
     rcu_read_lock();
     while (len > 0) {
         l = len;
-        mr = address_space_translate(as, addr, &addr1, &l, true);
+        mr = address_space_translate(as, addr, &addr1, &l, true,
+                                     MEMTXATTRS_UNSPECIFIED);

         if (!(memory_region_is_ram(mr) ||
               memory_region_is_romd(mr))) {
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
  */
 static inline MemoryRegion *address_space_translate_cached(
     MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
-    hwaddr *plen, bool is_write)
+    hwaddr *plen, bool is_write, MemTxAttrs attrs)
 {
     MemoryRegionSection section;
     MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_read_continue(cache->fv,
                            addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                            addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_write_continue(cache->fv,
                             addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                             addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)

     rcu_read_lock();
     mr = address_space_translate(&address_space_memory,
-                                 phys_addr, &phys_addr, &l, false);
+                                 phys_addr, &phys_addr, &l, false,
+                                 MEMTXATTRS_UNSPECIFIED);

     res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
     rcu_read_unlock();
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
      */
     mr = address_space_translate(&address_space_memory,
                                  iotlb->translated_addr,
-                                 &xlat, &len, writable);
+                                 &xlat, &len, writable,
+                                 MEMTXATTRS_UNSPECIFIED);
     if (!memory_region_is_ram(mr)) {
         error_report("iommu map to non memory area %"HWADDR_PRIx"",
                      xlat);
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 4 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 8 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (!IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 2 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
         r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 2 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 8 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

diff --git a/target/riscv/helper.c b/target/riscv/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.c
+++ b/target/riscv/helper.c
@@ -XXX,XX +XXX,XX @@ restart:
             MemoryRegion *mr;
             hwaddr l = sizeof(target_ulong), addr1;
             mr = address_space_translate(cs->as, pte_addr,
-                                         &addr1, &l, false);
+                                         &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
             if (memory_access_is_direct(mr, true)) {
                 target_ulong *pte_pa =
                     qemu_map_ram_ptr(mr->ram_block, addr1);
--
2.17.1


diff view generated by jsdifflib
Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
When initializing a notifier with iommu_notifier_init(), the caller
must pass the IOMMU index that it is interested in. When a change
happens, the IOMMU implementation must pass
memory_region_notify_iommu() the IOMMU index that has changed and
that notifiers must be called for.

IOMMUs which support only a single index don't need to change.
Callers which only really support working with IOMMUs with a single
index can use the result of passing MEMTXATTRS_UNSPECIFIED to
memory_region_iommu_attrs_to_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
---
 include/exec/memory.h    | 7 ++++++-
 hw/i386/intel_iommu.c    | 6 +++---
 hw/ppc/spapr_iommu.c     | 2 +-
 hw/s390x/s390-pci-inst.c | 4 ++--
 hw/vfio/common.c         | 6 +++++-
 hw/virtio/vhost.c        | 7 ++++++-
 memory.c                 | 8 +++++++-
 7 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct IOMMUNotifier {
     /* Notify for address space range start <= addr <= end */
     hwaddr start;
     hwaddr end;
+    int iommu_idx;
     QLIST_ENTRY(IOMMUNotifier) node;
 };
 typedef struct IOMMUNotifier IOMMUNotifier;

 static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
                                        IOMMUNotifierFlag flags,
-                                       hwaddr start, hwaddr end)
+                                       hwaddr start, hwaddr end,
+                                       int iommu_idx)
 {
     n->notify = fn;
     n->notifier_flags = flags;
     n->start = start;
     n->end = end;
+    n->iommu_idx = iommu_idx;
 }

 /*
@@ -XXX,XX +XXX,XX @@ uint64_t memory_region_iommu_get_min_page_size(IOMMUMemoryRegion *iommu_mr);
  * should be notified with an UNMAP followed by a MAP.
  *
  * @iommu_mr: the memory region that was changed
+ * @iommu_idx: the IOMMU index for the translation table which has changed
  * @entry: the new entry in the IOMMU translation table. The entry
  *         replaces all old entries for the same virtual I/O address range.
  *         Deleted entries have .@perm == 0.
 */
 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry);

 /**
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
 static int vtd_sync_shadow_page_hook(IOMMUTLBEntry *entry,
                                      void *private)
 {
-    memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
+    memory_region_notify_iommu((IOMMUMemoryRegion *)private, 0, *entry);
     return 0;
 }

@@ -XXX,XX +XXX,XX @@ static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
             .addr_mask = size - 1,
             .perm = IOMMU_NONE,
         };
-        memory_region_notify_iommu(&vtd_as->iommu, entry);
+        memory_region_notify_iommu(&vtd_as->iommu, 0, entry);
     }
 }
@@ -XXX,XX +XXX,XX @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
     entry.iova = addr;
     entry.perm = IOMMU_NONE;
     entry.translated_addr = 0;
-    memory_region_notify_iommu(&vtd_dev_as->iommu, entry);
+    memory_region_notify_iommu(&vtd_dev_as->iommu, 0, entry);

 done:
     return true;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
     entry.translated_addr = tce & page_mask;
     entry.addr_mask = ~page_mask;
     entry.perm = spapr_tce_iommu_access_flags(tce);
-    memory_region_notify_iommu(&tcet->iommu, entry);
+    memory_region_notify_iommu(&tcet->iommu, 0, entry);

     return H_SUCCESS;
 }
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
     }

     notify.perm = IOMMU_NONE;
-    memory_region_notify_iommu(&iommu->iommu_mr, notify);
+    memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
     notify.perm = entry->perm;
 }

@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
         g_hash_table_replace(iommu->iotlb, &cache->iova, cache);
     }

-    memory_region_notify_iommu(&iommu->iommu_mr, notify);
+    memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
 }

 int rpcit_service_call(S390CPU *cpu, uint8_t r1, uint8_t r2, uintptr_t ra)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
         IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
+        int iommu_idx;

         trace_vfio_listener_region_add_iommu(iova, end);
         /*
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
         llend = int128_add(int128_make64(section->offset_within_region),
                            section->size);
         llend = int128_sub(llend, int128_one());
+        iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                       MEMTXATTRS_UNSPECIFIED);
         iommu_notifier_init(&giommu->n, vfio_iommu_map_notify,
                             IOMMU_NOTIFIER_ALL,
                             section->offset_within_region,
-                            int128_get64(llend));
+                            int128_get64(llend),
+                            iommu_idx);
         QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);

         memory_region_register_iommu_notifier(section->mr, &giommu->n);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
                                          iommu_listener);
     struct vhost_iommu *iommu;
     Int128 end;
+    int iommu_idx;
+    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);

     if (!memory_region_is_iommu(section->mr)) {
         return;
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
     end = int128_add(int128_make64(section->offset_within_region),
                      section->size);
     end = int128_sub(end, int128_one());
+    iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                   MEMTXATTRS_UNSPECIFIED);
     iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                         IOMMU_NOTIFIER_UNMAP,
                         section->offset_within_region,
-                        int128_get64(end));
+                        int128_get64(end),
+                        iommu_idx);
     iommu->mr = section->mr;
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
     iommu_mr = IOMMU_MEMORY_REGION(mr);
     assert(n->notifier_flags != IOMMU_NOTIFIER_NONE);
     assert(n->start <= n->end);
+    assert(n->iommu_idx >= 0 &&
+           n->iommu_idx < memory_region_iommu_num_indexes(iommu_mr));
+
     QLIST_INSERT_HEAD(&iommu_mr->iommu_notify, n, node);
     memory_region_update_iommu_notify_flags(iommu_mr);
 }
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_one(IOMMUNotifier *notifier,
 }

 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry)
 {
     IOMMUNotifier *iommu_notifier;
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
     assert(memory_region_is_iommu(MEMORY_REGION(iommu_mr)));

     IOMMU_NOTIFIER_FOREACH(iommu_notifier, iommu_mr) {
-        memory_region_notify_one(iommu_notifier, &entry);
+        if (iommu_notifier->iommu_idx == iommu_idx) {
+            memory_region_notify_one(iommu_notifier, &entry);
+        }
     }
 }
--
2.17.1


diff view generated by jsdifflib
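
[Editor's note: the calling pattern for notifier users is then roughly the
following sketch, mirroring what the vfio and vhost hunks above do; the
my_unmap_notify callback name is hypothetical.

    int iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                       MEMTXATTRS_UNSPECIFIED);

    /* Register interest in UNMAP events for the whole region,
     * but only for the translation table selected by iommu_idx.
     */
    iommu_notifier_init(&n, my_unmap_notify, IOMMU_NOTIFIER_UNMAP,
                        0, HWADDR_MAX, iommu_idx);
    memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr), &n);

Since memory_region_notify_iommu() now filters on the notifier's index,
a caller that only cares about one set of transaction attributes is never
woken for changes to the other translation tables.]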
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
callback. We'll need this for subpage_accepts().

We could take the approach we used with the read and write
callbacks and add a new _with_attrs version, but since there
are so few implementations of the accepts hook we just change
them all.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
---
 include/exec/memory.h | 3 ++-
 exec.c                | 9 ++++++---
 hw/hppa/dino.c        | 3 ++-
 hw/nvram/fw_cfg.c     | 12 ++++++++----
 hw/scsi/esp.c         | 3 ++-
 hw/xen/xen_pt_msi.c   | 3 ++-
 memory.c              | 5 +++--
 7 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
          * as a machine check exception).
          */
         bool (*accepts)(void *opaque, hwaddr addr,
-                        unsigned size, bool is_write);
+                        unsigned size, bool is_write,
+                        MemTxAttrs attrs);
     } valid;
     /* Internal implementation constraints: */
     struct {
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
 }

 static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write;
 }
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
 }

 static bool subpage_accepts(void *opaque, hwaddr addr,
-                            unsigned len, bool is_write)
+                            unsigned len, bool is_write,
+                            MemTxAttrs attrs)
 {
     subpage_t *subpage = opaque;
 #if defined(DEBUG_SUBPAGE)
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
 }

 static bool readonly_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write;
 }
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/hppa/dino.c
+++ b/hw/hppa/dino.c
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
 }

 static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
-                                unsigned size, bool is_write)
+                                unsigned size, bool is_write,
+                                MemTxAttrs attrs)
 {
     switch (addr) {
     case DINO_IAR0:
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/nvram/fw_cfg.c
+++ b/hw/nvram/fw_cfg.c
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
                          (size == 8 && addr == 0));
 }

 static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
-                                  unsigned size, bool is_write)
+                                  unsigned size, bool is_write,
+                                  MemTxAttrs attrs)
 {
     return addr == 0;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write && size == 2;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
-                              unsigned size, bool is_write)
+                              unsigned size, bool is_write,
+                              MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 2);
 }
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/scsi/esp.c
+++ b/hw/scsi/esp.c
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
 }

 static bool esp_mem_accepts(void *opaque, hwaddr addr,
-                            unsigned size, bool is_write)
+                            unsigned size, bool is_write,
+                            MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 4);
 }
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/xen/xen_pt_msi.c
+++ b/hw/xen/xen_pt_msi.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
 }

 static bool pci_msix_accepts(void *opaque, hwaddr addr,
-                             unsigned size, bool is_write)
+                             unsigned size, bool is_write,
+                             MemTxAttrs attrs)
 {
     return !(addr & (size - 1));
 }
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
 }

 static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
-                                   unsigned size, bool is_write)
+                                   unsigned size, bool is_write,
+                                   MemTxAttrs attrs)
 {
     return false;
 }
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     for (i = 0; i < size; i += access_size) {
         if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
-                                    is_write)) {
+                                    is_write, attrs)) {
             return false;
         }
     }
--
2.17.1


diff view generated by jsdifflib
Add an IOMMU index argument to the translate method of
IOMMUs. Since all of our current IOMMU implementations
support only a single IOMMU index, this has no effect
on the behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-4-peter.maydell@linaro.org
---
 include/exec/memory.h    | 3 ++-
 exec.c                   | 11 +++++++++--
 hw/alpha/typhoon.c       | 3 ++-
 hw/arm/smmuv3.c          | 2 +-
 hw/dma/rc4030.c          | 2 +-
 hw/i386/amd_iommu.c      | 2 +-
 hw/i386/intel_iommu.c    | 2 +-
 hw/ppc/spapr_iommu.c     | 3 ++-
 hw/s390x/s390-pci-bus.c  | 2 +-
 hw/sparc/sun4m_iommu.c   | 3 ++-
 hw/sparc64/sun4u_iommu.c | 2 +-
 memory.c                 | 2 +-
 12 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      * @iommu: the IOMMUMemoryRegion
      * @hwaddr: address to be translated within the memory region
      * @flag: requested access permissions
+     * @iommu_idx: IOMMU index for the translation
      */
     IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
-                               IOMMUAccessFlags flag);
+                               IOMMUAccessFlags flag, int iommu_idx);
     /* Returns minimum supported page size in bytes.
      * If this method is not provided then the minimum is assumed to
      * be TARGET_PAGE_SIZE.
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
     do {
         hwaddr addr = *xlat;
         IOMMUMemoryRegionClass *imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
-        IOMMUTLBEntry iotlb = imrc->translate(iommu_mr, addr, is_write ?
-                                              IOMMU_WO : IOMMU_RO);
+        int iommu_idx = 0;
+        IOMMUTLBEntry iotlb;
+
+        if (imrc->attrs_to_index) {
+            iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        }
+
+        iotlb = imrc->translate(iommu_mr, addr, is_write ?
+                                IOMMU_WO : IOMMU_RO, iommu_idx);

         if (!(iotlb.perm & (1 << is_write))) {
             goto unassigned;
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -XXX,XX +XXX,XX @@ static bool window_translate(TyphoonWindow *win, hwaddr addr,
    Pchip and generate a machine check interrupt.  */
 static IOMMUTLBEntry typhoon_translate_iommu(IOMMUMemoryRegion *iommu,
                                              hwaddr addr,
-                                             IOMMUAccessFlags flag)
+                                             IOMMUAccessFlags flag,
+                                             int iommu_idx)
 {
     TyphoonPchip *pchip = container_of(iommu, TyphoonPchip, iommu);
     IOMMUTLBEntry ret;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
 }

 static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
-                                      IOMMUAccessFlags flag)
+                                      IOMMUAccessFlags flag, int iommu_idx)
 {
     SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
     SMMUv3State *s = sdev->smmu;
diff --git a/hw/dma/rc4030.c b/hw/dma/rc4030.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/rc4030.c
+++ b/hw/dma/rc4030.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps jazzio_ops = {
 };

 static IOMMUTLBEntry rc4030_dma_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     rc4030State *s = container_of(iommu, rc4030State, dma_mr);
     IOMMUTLBEntry ret = {
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -XXX,XX +XXX,XX @@ static inline bool amdvi_is_interrupt_addr(hwaddr addr)
 }

 static IOMMUTLBEntry amdvi_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                     IOMMUAccessFlags flag)
+                                     IOMMUAccessFlags flag, int iommu_idx)
 {
     AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
     AMDVIState *s = as->iommu_state;
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static void vtd_mem_write(void *opaque, hwaddr addr,
 }

 static IOMMUTLBEntry vtd_iommu_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                         IOMMUAccessFlags flag)
+                                         IOMMUAccessFlags flag, int iommu_idx)
 {
     VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace, iommu);
     IntelIOMMUState *s = vtd_as->iommu_state;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static void spapr_tce_free_table(uint64_t *table, int fd, uint32_t nb_table)
 /* Called from RCU critical section */
 static IOMMUTLBEntry spapr_tce_translate_iommu(IOMMUMemoryRegion *iommu,
                                                hwaddr addr,
-                                               IOMMUAccessFlags flag)
+                                               IOMMUAccessFlags flag,
+                                               int iommu_idx)
 {
     sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
     uint64_t tce;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -XXX,XX +XXX,XX @@ uint16_t s390_guest_io_table_walk(uint64_t g_iota, hwaddr addr,
 }

 static IOMMUTLBEntry s390_translate_iommu(IOMMUMemoryRegion *mr, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     S390PCIIOMMU *iommu = container_of(mr, S390PCIIOMMU, iommu_mr);
     S390IOTLBEntry *entry;
diff --git a/hw/sparc/sun4m_iommu.c b/hw/sparc/sun4m_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc/sun4m_iommu.c
+++ b/hw/sparc/sun4m_iommu.c
@@ -XXX,XX +XXX,XX @@ static void iommu_bad_addr(IOMMUState *s, hwaddr addr,
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4m_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flags)
+                                           IOMMUAccessFlags flags,
+                                           int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr page, pa;
diff --git a/hw/sparc64/sun4u_iommu.c b/hw/sparc64/sun4u_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc64/sun4u_iommu.c
+++ b/hw/sparc64/sun4u_iommu.c
@@ -XXX,XX +XXX,XX @@
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4u_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flag)
+                                           IOMMUAccessFlags flag, int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr baseaddr, offset;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
     granularity = memory_region_iommu_get_min_page_size(iommu_mr);

     for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
-        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE);
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
         if (iotlb.perm != IOMMU_NONE) {
             n->notify(n, &iotlb);
         }
--
2.17.1


diff view generated by jsdifflib
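
[Editor's note: with the transaction attributes now available to the
valid.accepts hook, a device model could start making attribute-dependent
validity decisions. A hypothetical sketch of my own (none of the devices
converted above actually do this yet):

    static bool mydev_mem_accepts(void *opaque, hwaddr addr,
                                  unsigned size, bool is_write,
                                  MemTxAttrs attrs)
    {
        /* Reject any transaction that is not marked secure; the core
         * memory layer then treats the access as invalid, just as it
         * would for a bad size or offset.
         */
        return attrs.secure;
    }
]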
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Currently we don't support board configurations that put an IOMMU
2
add MemTxAttrs as an argument to flatview_do_translate().
2
in the path of the CPU's memory transactions, and instead just
3
assert() if the memory region fonud in address_space_translate_for_iotlb()
4
is an IOMMUMemoryRegion.
5
6
Remove this limitation by having the function handle IOMMUs.
7
This is mostly straightforward, but we must make sure we have
8
a notifier registered for every IOMMU that a transaction has
9
passed through, so that we can flush the TLB appropriately
10
when any of the IOMMUs change their mappings.
3
11
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
7
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
8
---
15
---
9
exec.c | 9 ++++++---
16
include/exec/exec-all.h | 3 +-
10
1 file changed, 6 insertions(+), 3 deletions(-)
17
include/qom/cpu.h | 3 +
11
18
accel/tcg/cputlb.c | 3 +-
19
exec.c | 135 +++++++++++++++++++++++++++++++++++++++-
20
4 files changed, 140 insertions(+), 4 deletions(-)
21
22
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/include/exec/exec-all.h
25
+++ b/include/exec/exec-all.h
26
@@ -XXX,XX +XXX,XX @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
27
28
MemoryRegionSection *
29
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
30
- hwaddr *xlat, hwaddr *plen);
31
+ hwaddr *xlat, hwaddr *plen,
32
+ MemTxAttrs attrs, int *prot);
33
hwaddr memory_region_section_get_iotlb(CPUState *cpu,
34
MemoryRegionSection *section,
35
target_ulong vaddr,
36
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/qom/cpu.h
39
+++ b/include/qom/cpu.h
40
@@ -XXX,XX +XXX,XX @@ struct CPUState {
41
uint16_t pending_tlb_flush;
42
43
int hvf_fd;
44
+
45
+ /* track IOMMUs whose translations we've cached in the TCG TLB */
46
+ GArray *iommu_notifiers;
47
};
48
49
QTAILQ_HEAD(CPUTailQ, CPUState);
50
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/accel/tcg/cputlb.c
53
+++ b/accel/tcg/cputlb.c
54
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
55
}
56
57
sz = size;
58
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz);
59
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
60
+ attrs, &prot);
61
assert(sz >= TARGET_PAGE_SIZE);
62
63
tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
12
diff --git a/exec.c b/exec.c
64
diff --git a/exec.c b/exec.c
13
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
66
--- a/exec.c
15
+++ b/exec.c
67
+++ b/exec.c
16
@@ -XXX,XX +XXX,XX @@ unassigned:
68
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
17
* @is_write: whether the translation operation is for write
69
return mr;
18
* @is_mmio: whether this can be MMIO, set true if it can
70
}
19
* @target_as: the address space targeted by the IOMMU
71
20
+ * @attrs: memory transaction attributes
72
+typedef struct TCGIOMMUNotifier {
21
*
73
+ IOMMUNotifier n;
22
* This function is called from RCU critical section
74
+ MemoryRegion *mr;
23
*/
75
+ CPUState *cpu;
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
76
+ int iommu_idx;
25
hwaddr *page_mask_out,
77
+ bool active;
26
bool is_write,
78
+} TCGIOMMUNotifier;
27
bool is_mmio,
79
+
28
- AddressSpace **target_as)
80
+static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
29
+ AddressSpace **target_as,
81
+{
30
+ MemTxAttrs attrs)
82
+ TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
83
+
84
+ if (!notifier->active) {
85
+ return;
86
+ }
87
+ tlb_flush(notifier->cpu);
88
+ notifier->active = false;
89
+ /* We leave the notifier struct on the list to avoid reallocating it later.
90
+ * Generally the number of IOMMUs a CPU deals with will be small.
91
+ * In any case we can't unregister the iommu notifier from a notify
92
+ * callback.
93
+ */
94
+}
95
+
96
+static void tcg_register_iommu_notifier(CPUState *cpu,
97
+ IOMMUMemoryRegion *iommu_mr,
98
+ int iommu_idx)
99
+{
100
+ /* Make sure this CPU has an IOMMU notifier registered for this
101
+ * IOMMU/IOMMU index combination, so that we can flush its TLB
102
+ * when the IOMMU tells us the mappings we've cached have changed.
103
+ */
104
+ MemoryRegion *mr = MEMORY_REGION(iommu_mr);
105
+ TCGIOMMUNotifier *notifier;
106
+ int i;
107
+
108
+ for (i = 0; i < cpu->iommu_notifiers->len; i++) {
109
+ notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
110
+ if (notifier->mr == mr && notifier->iommu_idx == iommu_idx) {
111
+ break;
112
+ }
113
+ }
114
+ if (i == cpu->iommu_notifiers->len) {
115
+ /* Not found, add a new entry at the end of the array */
116
+ cpu->iommu_notifiers = g_array_set_size(cpu->iommu_notifiers, i + 1);
117
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+
+        notifier->mr = mr;
+        notifier->iommu_idx = iommu_idx;
+        notifier->cpu = cpu;
+        /* Rather than trying to register interest in the specific part
+         * of the iommu's address space that we've accessed and then
+         * expand it later as subsequent accesses touch more of it, we
+         * just register interest in the whole thing, on the assumption
+         * that iommu reconfiguration will be rare.
+         */
+        iommu_notifier_init(&notifier->n,
+                            tcg_iommu_unmap_notify,
+                            IOMMU_NOTIFIER_UNMAP,
+                            0,
+                            HWADDR_MAX,
+                            iommu_idx);
+        memory_region_register_iommu_notifier(notifier->mr, &notifier->n);
+    }
+
+    if (!notifier->active) {
+        notifier->active = true;
+    }
+}
+
+static void tcg_iommu_free_notifier_list(CPUState *cpu)
+{
+    /* Destroy the CPU's notifier list */
+    int i;
+    TCGIOMMUNotifier *notifier;
+
+    for (i = 0; i < cpu->iommu_notifiers->len; i++) {
+        notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
+        memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
+    }
+    g_array_free(cpu->iommu_notifiers, true);
+}
+
 /* Called from RCU critical section */
 MemoryRegionSection *
 address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
-                                  hwaddr *xlat, hwaddr *plen)
+                                  hwaddr *xlat, hwaddr *plen,
+                                  MemTxAttrs attrs, int *prot)
 {
     MemoryRegionSection *section;
+    IOMMUMemoryRegion *iommu_mr;
+    IOMMUMemoryRegionClass *imrc;
+    IOMMUTLBEntry iotlb;
+    int iommu_idx;
     AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);

-    section = address_space_translate_internal(d, addr, xlat, plen, false);
+    for (;;) {
+        section = address_space_translate_internal(d, addr, &addr, plen, false);
+
+        iommu_mr = memory_region_get_iommu(section->mr);
+        if (!iommu_mr) {
+            break;
+        }
+
+        imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
+
+        iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
+        /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
+         * doesn't short-cut its translation table walk.
+         */
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
+        addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
+                | (addr & iotlb.addr_mask));
+        /* Update the caller's prot bits to remove permissions the IOMMU
+         * is giving us a failure response for. If we get down to no
+         * permissions left at all we can give up now.
+         */
+        if (!(iotlb.perm & IOMMU_RO)) {
+            *prot &= ~(PAGE_READ | PAGE_EXEC);
+        }
+        if (!(iotlb.perm & IOMMU_WO)) {
+            *prot &= ~PAGE_WRITE;
+        }
+
+        if (!*prot) {
+            goto translate_fail;
+        }
+
+        d = flatview_to_dispatch(address_space_to_flatview(iotlb.target_as));
+    }

     assert(!memory_region_is_iommu(section->mr));
+    *xlat = addr;
     return section;
+
+translate_fail:
+    return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 #endif

@@ -XXX,XX +XXX,XX @@ void cpu_exec_unrealizefn(CPUState *cpu)
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
     }
+#ifndef CONFIG_USER_ONLY
+    tcg_iommu_free_notifier_list(cpu);
+#endif
 }

 Property cpu_common_props[] = {
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
     if (cc->vmsd != NULL) {
         vmstate_register(NULL, cpu->cpu_index, cc->vmsd, cpu);
     }
+
+    cpu->iommu_notifiers = g_array_new(false, true, sizeof(TCGIOMMUNotifier));
 #endif
 }
--
2.17.1
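
The heart of the change above is the translate loop: walk the dispatch
table, and each time the section turns out to be an IOMMU, fold its
translation into the address and intersect its permissions into *prot,
giving up as soon as no permissions are left. The following standalone
toy model illustrates that flow; everything in it (toy_translate,
NUM_IOMMUS, the fixed 4K mask, the read-only mapping) is invented for
illustration and is not QEMU's API.

#include <stdio.h>
#include <stdint.h>

#define PAGE_READ  0x1
#define PAGE_WRITE 0x2
#define PAGE_EXEC  0x4

#define IOMMU_RO 0x1
#define IOMMU_WO 0x2

#define NUM_IOMMUS 1   /* length of the toy IOMMU chain */

/* One stage of translation plus the permissions that stage grants. */
typedef struct {
    uint64_t translated_addr;
    uint64_t addr_mask;   /* low bits kept from the input address */
    unsigned perm;        /* IOMMU_RO | IOMMU_WO */
} ToyIOTLBEntry;

/* Stand-in for imrc->translate(): a read-only 4K mapping at an offset. */
static ToyIOTLBEntry toy_translate(uint64_t addr)
{
    ToyIOTLBEntry e = {
        .translated_addr = addr + 0x10000000ULL,
        .addr_mask = 0xfffULL,
        .perm = IOMMU_RO,
    };
    return e;
}

/* Returns the final address, or UINT64_MAX once no permissions remain. */
static uint64_t walk(uint64_t addr, int *prot)
{
    int stage;

    for (stage = 0; ; stage++) {
        if (stage == NUM_IOMMUS) {
            /* Like the "!iommu_mr" break: nothing left to translate. */
            return addr;
        }

        ToyIOTLBEntry iotlb = toy_translate(addr);

        /* Compose: replace the page bits, keep the offset bits. */
        addr = (iotlb.translated_addr & ~iotlb.addr_mask)
               | (addr & iotlb.addr_mask);

        /* Strip whatever this stage refuses; give up at zero. */
        if (!(iotlb.perm & IOMMU_RO)) {
            *prot &= ~(PAGE_READ | PAGE_EXEC);
        }
        if (!(iotlb.perm & IOMMU_WO)) {
            *prot &= ~PAGE_WRITE;
        }
        if (!*prot) {
            return UINT64_MAX;   /* the translate_fail path */
        }
    }
}

int main(void)
{
    int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
    uint64_t pa = walk(0x1234, &prot);

    printf("pa=0x%llx prot=0x%x\n", (unsigned long long)pa, prot);
    return 0;
}

Run, this prints pa=0x10001234 prot=0x5: the write bit has been
stripped by the read-only stage, which is exactly the information the
real loop hands on to the TLB fill.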
diff view generated by jsdifflib

 {
     MemoryRegionSection *section;
     IOMMUMemoryRegion *iommu_mr;
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
      * but page mask.
      */
     section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
-                                    NULL, &page_mask, is_write, false, &as);
+                                    NULL, &page_mask, is_write, false, &as,
+                                    attrs);

     /* Illegal translation */
     if (section.mr == &io_mem_unassigned) {
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,

     /* This can be MMIO, so setup MMIO bit. */
     section = flatview_do_translate(fv, addr, xlat, plen, NULL,
-                                    is_write, true, &as);
+                                    is_write, true, &as, attrs);
     mr = section.mr;

     if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
--
2.17.1

diff view generated by jsdifflib
From: Jan Kiszka <jan.kiszka@siemens.com>

There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
     uint64_t value = cs->ich_apr[grp][regno];

     trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;

     trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);

@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
     uint64_t value;

     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+    int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;

     if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
         return icv_ap_read(env, ri);
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
     GICv3CPUState *cs = icc_cs_from_env(env);

     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+    int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;

     if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
         icv_ap_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
     uint64_t value;

     value = cs->ich_apr[grp][regno];
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;

     trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);

--
2.17.1
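
The flip is easy to demonstrate outside QEMU. In this register scheme
the Group 0 active-priority registers sit at an even CRm encoding
(CRm=8 for AP0R<n>) and the Group 1 ones at the odd encoding after it
(CRm=9 for AP1R<n>), so bit 0 of CRm selects the group. Below is a
minimal standalone sketch of the decode; the enum values and the CRm
numbers are illustrative stand-ins rather than the QEMU definitions.

#include <stdio.h>

/* Stand-ins for QEMU's group indices; the values are illustrative. */
enum { GICV3_G0, GICV3_G1, GICV3_G1NS };

static const char *grp_name[] = { "G0", "G1", "G1NS" };

/* Non-secure (ICV) decode after the fix: odd CRm selects Group 1NS. */
static int icv_ap_group(int crm)
{
    return (crm & 1) ? GICV3_G1NS : GICV3_G0;
}

/* The pre-fix expression, kept for comparison: the arms are swapped. */
static int icv_ap_group_buggy(int crm)
{
    return crm & 1 ? GICV3_G0 : GICV3_G1NS;
}

int main(void)
{
    int crm;

    for (crm = 8; crm <= 9; crm++) {
        printf("CRm=%d: fixed=%s buggy=%s\n", crm,
               grp_name[icv_ap_group(crm)],
               grp_name[icv_ap_group_buggy(crm)]);
    }
    return 0;
}

The fixed decode maps CRm=8 to G0 and CRm=9 to G1NS; the buggy
expression returned exactly the opposite group for every access, which
is how guest priorities ended up in the wrong active-priority bank.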
diff view generated by jsdifflib

From: Julia Suvorova <jusual@mail.ru>

ARMv6-M supports six Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.

This patch is required for future Cortex-M0 support.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
 point of use, and mark 'const'. Check for M-and-not-v7
 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 43 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
      * end up actually treating this as two 16-bit insns, though,
      * if it's half of a bl/blx pair that might span a page boundary.
      */
-    if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+    if (arm_dc_feature(s, ARM_FEATURE_THUMB2) ||
+        arm_dc_feature(s, ARM_FEATURE_M)) {
         /* Thumb2 cores (including all M profile ones) always treat
          * 32-bit insns as 32-bit.
          */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
     int conds;
     int logic_cc;

-    /* The only 32 bit insn that's allowed for Thumb1 is the combined
-     * BL/BLX prefix and suffix.
+    /*
+     * ARMv6-M supports a limited subset of Thumb2 instructions.
+     * Other Thumb1 architectures allow only 32-bit
+     * combined BL/BLX prefix and suffix.
      */
-    if ((insn & 0xf800e800) != 0xf000e800) {
+    if (arm_dc_feature(s, ARM_FEATURE_M) &&
+        !arm_dc_feature(s, ARM_FEATURE_V7)) {
+        int i;
+        bool found = false;
+        const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
+                                        0xf3b08040 /* dsb */,
+                                        0xf3b08050 /* dmb */,
+                                        0xf3b08060 /* isb */,
+                                        0xf3e08000 /* mrs */,
+                                        0xf000d000 /* bl */};
+        const uint32_t armv6m_mask[] = {0xffe0d000,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xffe0d000,
+                                        0xf800d000};
+
+        for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
+            if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
+                found = true;
+                break;
+            }
+        }
+        if (!found) {
+            goto illegal_op;
+        }
+    } else if ((insn & 0xf800e800) != 0xf000e800) {
         ARCH(6T2);
     }

@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
         }
         break;
     case 3: /* Special control operations. */
-        ARCH(7);
+        if (!arm_dc_feature(s, ARM_FEATURE_V7) &&
+            !(arm_dc_feature(s, ARM_FEATURE_V6) &&
+              arm_dc_feature(s, ARM_FEATURE_M))) {
+            goto illegal_op;
+        }
         op = (insn >> 4) & 0xf;
         switch (op) {
         case 2: /* clrex */
--
2.17.1
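
The whitelist added above is a parallel mask/match table: an insn is
accepted if, under the i-th mask, it equals the i-th pattern. Below is
a self-contained harness around the same loop, handy for
sanity-checking encodings; the two test words in main() are
illustrative examples and are not taken from the patch.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

/* The six 32-bit encodings ARMv6-M permits, as listed in the patch. */
static const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
                                       0xf3b08040 /* dsb */,
                                       0xf3b08050 /* dmb */,
                                       0xf3b08060 /* isb */,
                                       0xf3e08000 /* mrs */,
                                       0xf000d000 /* bl */};
static const uint32_t armv6m_mask[] = {0xffe0d000,
                                       0xfff0d0f0,
                                       0xfff0d0f0,
                                       0xfff0d0f0,
                                       0xffe0d000,
                                       0xf800d000};

/* True if insn matches one of the whitelisted encodings. */
static bool armv6m_allows(uint32_t insn)
{
    size_t i;

    for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
        if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    /* 0xf3bf8f4f encodes "dsb sy"; 0xea4f0000 is a 32-bit Thumb2
     * data-processing encoding that ARMv6-M must reject.
     */
    printf("dsb sy: %s\n", armv6m_allows(0xf3bf8f4f) ? "ok" : "illegal_op");
    printf("other : %s\n", armv6m_allows(0xea4f0000) ? "ok" : "illegal_op");
    return 0;
}

Note how the bl entry's mask, 0xf800d000, pins down only the bits that
identify the BL prefix/suffix pair, matching the commit message's point
that ARMv6-M, like Thumb2 cores, always treats BL as a single 32-bit
insn.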
diff view generated by jsdifflib