target-arm queue; this one has a fair scattering of more
miscellaneous things in it which I've sent out this week.
I've shoved those in as well as it seemed the least-effort
way of getting them into master; a few of them are dependencies
on arm-related patches I have brewing.

thanks
-- PMM

The following changes since commit 8e5943260a8f765216674ee87ce8588cc4e7463e:

  Merge remote-tracking branch 'remotes/stsquad/tags/pull-travis-updates-140618-1' into staging (2018-06-15 12:49:36 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180615

for you to fetch changes up to 14120108f87b3f9e1beacdf0a6096e464e62bb65:

  target/arm: Allow ARMv6-M Thumb2 instructions (2018-06-15 15:23:34 +0100)

----------------------------------------------------------------
target-arm and miscellaneous queue:
 * fix KVM state save/restore for GICv3 priority registers for high IRQ numbers
 * hw/arm/mps2-tz: Put ethernet controller behind PPC
 * hw/sh/sh7750: Convert away from old_mmio
 * hw/m68k/mcf5206: Convert away from old_mmio
 * hw/block/pflash_cfi02: Convert away from old_mmio
 * hw/watchdog/wdt_i6300esb: Convert away from old_mmio
 * hw/input/pckbd: Convert away from old_mmio
 * hw/char/parallel: Convert away from old_mmio
 * armv7m: refactor to get rid of armv7m_init() function
 * arm: Don't crash if user tries to use a Cortex-M CPU without an NVIC
 * hw/core/or-irq: Support more than 16 inputs to an OR gate
 * cpu-defs.h: Document CPUIOTLBEntry 'addr' field
 * cputlb: Pass cpu_transaction_failed() the correct physaddr
 * CODING_STYLE: Define our preferred form for multiline comments
 * Add and use new stn_*_p() and ldn_*_p() memory access functions
 * target/arm: More parts of the upcoming SVE support
 * aspeed_scu: Implement RNG register
 * m25p80: add support for two bytes WRSR for Macronix chips
 * exec.c: Handle IOMMUs being in the path of TCG CPU memory accesses
 * target/arm: Allow ARMv6-M Thumb2 instructions

----------------------------------------------------------------
Cédric Le Goater (1):
      m25p80: add support for two bytes WRSR for Macronix chips

Joel Stanley (1):
      aspeed_scu: Implement RNG register

Julia Suvorova (1):
      target/arm: Allow ARMv6-M Thumb2 instructions

Peter Maydell (21):
      hw/arm/mps2-tz: Put ethernet controller behind PPC
      hw/sh/sh7750: Convert away from old_mmio
      hw/m68k/mcf5206: Convert away from old_mmio
      hw/block/pflash_cfi02: Convert away from old_mmio
      hw/watchdog/wdt_i6300esb: Convert away from old_mmio
      hw/input/pckbd: Convert away from old_mmio
      hw/char/parallel: Convert away from old_mmio
      stellaris: Stop using armv7m_init()
      hw/arm/armv7m: Remove unused armv7m_init() function
      arm: Don't crash if user tries to use a Cortex-M CPU without an NVIC
      hw/core/or-irq: Support more than 16 inputs to an OR gate
      cpu-defs.h: Document CPUIOTLBEntry 'addr' field
      cputlb: Pass cpu_transaction_failed() the correct physaddr
      CODING_STYLE: Define our preferred form for multiline comments
      bswap: Add new stn_*_p() and ldn_*_p() memory access functions
      exec.c: Don't accidentally sign-extend 4-byte loads in subpage_read()
      exec.c: Use stn_p() and ldn_p() instead of explicit switches
      iommu: Add IOMMU index concept to IOMMU API
      iommu: Add IOMMU index argument to notifier APIs
      iommu: Add IOMMU index argument to translate method
      exec.c: Handle IOMMUs in address_space_translate_for_iotlb()

Richard Henderson (18):
      target/arm: Extend vec_reg_offset to larger sizes
      target/arm: Implement SVE Permute - Unpredicated Group
      target/arm: Implement SVE Permute - Predicates Group
      target/arm: Implement SVE Permute - Interleaving Group
      target/arm: Implement SVE compress active elements
      target/arm: Implement SVE conditionally broadcast/extract element
      target/arm: Implement SVE copy to vector (predicated)
      target/arm: Implement SVE reverse within elements
      target/arm: Implement SVE vector splice (predicated)
      target/arm: Implement SVE Select Vectors Group
      target/arm: Implement SVE Integer Compare - Vectors Group
      target/arm: Implement SVE Integer Compare - Immediate Group
      target/arm: Implement SVE Partition Break Group
      target/arm: Implement SVE Predicate Count Group
      target/arm: Implement SVE Integer Compare - Scalars Group
      target/arm: Implement FDUP/DUP
      target/arm: Implement SVE Integer Wide Immediate - Unpredicated Group
      target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group

Shannon Zhao (1):
      arm_gicv3_kvm: kvm_dist_get/put_priority: skip the registers banked by GICR_IPRIORITYR

 include/exec/cpu-all.h      |    4 +
 include/exec/cpu-defs.h     |    9 +
 include/exec/exec-all.h     |   16 +-
 include/exec/memory.h       |   65 +-
 include/hw/arm/arm.h        |    8 +-
 include/hw/or-irq.h         |    5 +-
 include/qemu/bswap.h        |   52 ++
 include/qom/cpu.h           |    3 +
 target/arm/helper-sve.h     |  294 +++++++++
 target/arm/helper.h         |   19 +
 target/arm/translate-a64.h  |   26 +-
 accel/tcg/cputlb.c          |   59 +-
 exec.c                      |  263 ++++----
 hw/alpha/typhoon.c          |    3 +-
 hw/arm/armv7m.c             |   28 +-
 hw/arm/mps2-tz.c            |   32 +-
 hw/arm/smmuv3.c             |    2 +-
 hw/arm/stellaris.c          |   12 +-
 hw/block/m25p80.c           |    1 +
 hw/block/pflash_cfi02.c     |   97 +--
 hw/char/parallel.c          |   50 +-
 hw/core/or-irq.c            |   39 +-
 hw/dma/rc4030.c             |    2 +-
 hw/i386/amd_iommu.c         |    2 +-
 hw/i386/intel_iommu.c       |    8 +-
 hw/input/pckbd.c            |   14 +-
 hw/intc/arm_gicv3_kvm.c     |   18 +-
 hw/intc/armv7m_nvic.c       |    6 +-
 hw/m68k/mcf5206.c           |   48 +-
 hw/misc/aspeed_scu.c        |   20 +
 hw/ppc/spapr_iommu.c        |    5 +-
 hw/s390x/s390-pci-bus.c     |    2 +-
 hw/s390x/s390-pci-inst.c    |    4 +-
 hw/sh4/sh7750.c             |   44 +-
 hw/sparc/sun4m_iommu.c      |    3 +-
 hw/sparc64/sun4u_iommu.c    |    2 +-
 hw/vfio/common.c            |    6 +-
 hw/virtio/vhost.c           |    7 +-
 hw/watchdog/wdt_i6300esb.c  |   48 +-
 memory.c                    |   33 +-
 target/arm/cpu.c            |   18 +
 target/arm/sve_helper.c     | 1250 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c  | 1458 +++++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c      |   43 +-
 target/arm/vec_helper.c     |   69 ++
 CODING_STYLE                |   17 +
 docs/devel/loads-stores.rst |   15 +
 target/arm/sve.decode       |  248 ++++++++
 48 files changed, 4114 insertions(+), 363 deletions(-)


One last arm pullreq before I stop work for the end of the year...

-- PMM

The following changes since commit 2702c2d3eb74e3908c0c5dbf3a71c8987595a86e:

  Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging (2019-12-20 12:46:10 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20191220

for you to fetch changes up to c8fa6079eb35888587f1be27c1590da4edcc5098:

  arm/arm-powerctl: rebuild hflags after setting CP15 bits in arm_set_cpu_on() (2019-12-20 14:03:00 +0000)

----------------------------------------------------------------
target-arm queue:
 * Support emulating the generic timers at frequencies other than 62.5MHz
 * Various fixes for SMMUv3 emulation bugs
 * Improve assert error message for hflags mismatches
 * arm-powerctl: rebuild hflags after setting CP15 bits in arm_set_cpu_on()

----------------------------------------------------------------
Andrew Jeffery (4):
      target/arm: Remove redundant scaling of nexttick
      target/arm: Abstract the generic timer frequency
      target/arm: Prepare generic timer for per-platform CNTFRQ
      ast2600: Configure CNTFRQ at 1125MHz

Niek Linnenbank (1):
      arm/arm-powerctl: rebuild hflags after setting CP15 bits in arm_set_cpu_on()

Philippe Mathieu-Daudé (1):
      target/arm: Display helpful message when hflags mismatch

Simon Veith (6):
      hw/arm/smmuv3: Apply address mask to linear strtab base address
      hw/arm/smmuv3: Correct SMMU_BASE_ADDR_MASK value
      hw/arm/smmuv3: Check stream IDs against actual table LOG2SIZE
      hw/arm/smmuv3: Align stream table base address to table size
      hw/arm/smmuv3: Use correct bit positions in EVT_SET_ADDR2 macro
      hw/arm/smmuv3: Report F_STE_FETCH fault address in correct word position

 hw/arm/smmuv3-internal.h  |  6 ++---
 target/arm/cpu.h          |  5 ++++
 hw/arm/aspeed_ast2600.c   |  3 +++
 hw/arm/smmuv3.c           | 28 +++++++++++++++-----
 target/arm/arm-powerctl.c |  3 +++
 target/arm/cpu.c          | 65 +++++++++++++++++++++++++++++++++++++------
 target/arm/helper.c       | 42 +++++++++++++++++++++++-------
 7 files changed, 125 insertions(+), 27 deletions(-)

diff view generated by jsdifflib
Deleted patch
From: Shannon Zhao <zhaoshenglong@huawei.com>

While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it forgot
to offset the data array and index. This will overwrite the GICR
registers' values and leave the last GIC_INTERNAL irqs' registers out of
the update.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_kvm.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         kvm_gicd_access(s, offset, &reg, false);
         *field = reg;
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
     uint32_t reg, *field;
     int irq;
 
-    field = (uint32_t *)bmp;
+    /* For the KVM GICv3, affinity routing is always enabled, and the first 8
+     * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
+     * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
+     * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
+     * offset.
+     */
+    field = (uint32_t *)(bmp + GIC_INTERNAL);
+    offset += (GIC_INTERNAL * 8) / 8;
     for_each_dist_irq_reg(irq, s->num_irq, 8) {
         reg = *field;
         kvm_gicd_access(s, offset, &reg, true);
--
2.17.1
diff view generated by jsdifflib
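
A concrete sanity check of the skip arithmetic in the patch above may help.
This is a standalone sketch, not QEMU code; it assumes only that GIC_INTERNAL
is 32 (as in QEMU's GICv3 model), that priority registers hold one byte per
interrupt, and uses 0x400 as an illustrative GICD_IPRIORITYR base:

    #include <stdint.h>
    #include <stdio.h>

    #define GIC_INTERNAL 32

    int main(void)
    {
        uint8_t bmp[1024] = { 0 };       /* one priority byte per interrupt */
        uint32_t offset = 0x400;         /* GICD_IPRIORITYR<0> (illustrative) */

        /* 32 internal IRQs at 8 bits each = 8 RAZ/WI 32-bit registers, so both
         * the register offset and the destination pointer advance by 32 bytes.
         */
        uint32_t *field = (uint32_t *)(bmp + GIC_INTERNAL);
        offset += (GIC_INTERNAL * 8) / 8;

        printf("sync starts at offset 0x%x, writing into bmp[%d]\n",
               offset, (int)((uint8_t *)field - bmp));   /* 0x420, bmp[32] */
        return 0;
    }

Without the skip, the first GIC_INTERNAL priority bytes read back from the
distributor would land on top of the redistributor-banked values.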
Deleted patch
The ethernet controller in the AN505 MPC FPGA image is behind
the same AHB Peripheral Protection Controller that handles
the graphics and GPIOs. (In the documentation this is clear
in the block diagram but the ethernet controller was omitted
from the table listing devices connected to the PPC.)
The ethernet sits behind AHB PPCEXP0 interface 5. We had
incorrectly claimed that this was a "gpio4", but there are
only 4 GPIOs in this image.

Correct the QEMU model to match the hardware.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180515171446.10834-1-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ typedef struct {
     UnimplementedDeviceState spi[5];
     UnimplementedDeviceState i2c[4];
     UnimplementedDeviceState i2s_audio;
-    UnimplementedDeviceState gpio[5];
+    UnimplementedDeviceState gpio[4];
     UnimplementedDeviceState dma[4];
     UnimplementedDeviceState gfx;
     CMSDKAPBUART uart[5];
     SplitIRQ sec_resp_splitter;
     qemu_or_irq uart_irq_orgate;
+    DeviceState *lan9118;
 } MPS2TZMachineState;
 
 #define TYPE_MPS2TZ_MACHINE "mps2tz"
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
     return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
 }
 
+static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
+                                  const char *name, hwaddr size)
+{
+    SysBusDevice *s;
+    DeviceState *iotkitdev = DEVICE(&mms->iotkit);
+    NICInfo *nd = &nd_table[0];
+
+    /* In hardware this is a LAN9220; the LAN9118 is software compatible
+     * except that it doesn't support the checksum-offload feature.
+     */
+    qemu_check_nic_model(nd, "lan9118");
+    mms->lan9118 = qdev_create(NULL, "lan9118");
+    qdev_set_nic_properties(mms->lan9118, nd);
+    qdev_init_nofail(mms->lan9118);
+
+    s = SYS_BUS_DEVICE(mms->lan9118);
+    sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
+    return sysbus_mmio_get_region(s, 0);
+}
+
 static void mps2tz_common_init(MachineState *machine)
 {
     MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
             { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
             { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
             { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
-            { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
+            { "eth", make_eth_dev, NULL, 0x42000000, 0x100000 },
         },
     }, {
         .name = "ahb_ppcexp1",
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
                                      "cfg_sec_resp", 0));
     }
 
-    /* In hardware this is a LAN9220; the LAN9118 is software compatible
-     * except that it doesn't support the checksum-offload feature.
-     * The ethernet controller is not behind a PPC.
-     */
-    lan9118_init(&nd_table[0], 0x42000000,
-                 qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
-
     create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
 
     armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the sh7750 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the sh4 r2d board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-2-peter.maydell@linaro.org
---
 hw/sh4/sh7750.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/hw/sh4/sh7750.c b/hw/sh4/sh7750.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sh4/sh7750.c
+++ b/hw/sh4/sh7750.c
@@ -XXX,XX +XXX,XX @@ static void sh7750_mem_writel(void *opaque, hwaddr addr,
     }
 }
 
+static uint64_t sh7750_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return sh7750_mem_readb(opaque, addr);
+    case 2:
+        return sh7750_mem_readw(opaque, addr);
+    case 4:
+        return sh7750_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void sh7750_mem_writefn(void *opaque, hwaddr addr,
+                               uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        sh7750_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        sh7750_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        sh7750_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps sh7750_mem_ops = {
-    .old_mmio = {
-        .read = {sh7750_mem_readb,
-                 sh7750_mem_readw,
-                 sh7750_mem_readl },
-        .write = {sh7750_mem_writeb,
-                  sh7750_mem_writew,
-                  sh7750_mem_writel },
-    },
+    .read = sh7750_mem_readfn,
+    .write = sh7750_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
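
The next several patches repeat this same mechanical conversion on other
devices, so the shape is worth stating once: the three fixed-width old_mmio
callbacks become a single size-dispatching read/write pair, with the legal
widths declared via .valid. A generic outline only (it assumes QEMU's
MemoryRegionOps API; dev_readb/dev_readw/dev_readl and the write helpers
stand in for whichever accessors the device already has):

    static uint64_t dev_readfn(void *opaque, hwaddr addr, unsigned size)
    {
        /* The memory core passes the access width; sizes outside the
         * .valid bounds below are adjusted or rejected before we get here.
         */
        switch (size) {
        case 1: return dev_readb(opaque, addr);
        case 2: return dev_readw(opaque, addr);
        case 4: return dev_readl(opaque, addr);
        default: g_assert_not_reached();
        }
    }

    static const MemoryRegionOps dev_ops = {
        .read = dev_readfn,
        .write = dev_writefn,          /* symmetric switch over sizes */
        .valid.min_access_size = 1,
        .valid.max_access_size = 4,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };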
Deleted patch
Convert the mcf5206 device away from using the old_mmio field
of MemoryRegionOps. This device is used by the an5206 board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Thomas Huth <huth@tuxfamily.org>
Message-id: 20180601141223.26630-3-peter.maydell@linaro.org
---
 hw/m68k/mcf5206.c | 48 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/mcf5206.c
+++ b/hw/m68k/mcf5206.c
@@ -XXX,XX +XXX,XX @@ static void m5206_mbar_writel(void *opaque, hwaddr offset,
     m5206_mbar_write(s, offset, value, 4);
 }
 
+static uint64_t m5206_mbar_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return m5206_mbar_readb(opaque, addr);
+    case 2:
+        return m5206_mbar_readw(opaque, addr);
+    case 4:
+        return m5206_mbar_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void m5206_mbar_writefn(void *opaque, hwaddr addr,
+                               uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        m5206_mbar_writeb(opaque, addr, value);
+        break;
+    case 2:
+        m5206_mbar_writew(opaque, addr, value);
+        break;
+    case 4:
+        m5206_mbar_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps m5206_mbar_ops = {
-    .old_mmio = {
-        .read = {
-            m5206_mbar_readb,
-            m5206_mbar_readw,
-            m5206_mbar_readl,
-        },
-        .write = {
-            m5206_mbar_writeb,
-            m5206_mbar_writew,
-            m5206_mbar_writel,
-        },
-    },
+    .read = m5206_mbar_readfn,
+    .write = m5206_mbar_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the pflash_cfi02 device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180601141223.26630-4-peter.maydell@linaro.org
---
 hw/block/pflash_cfi02.c | 97 ++++++++---------------------------------
 1 file changed, 18 insertions(+), 79 deletions(-)

diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/pflash_cfi02.c
+++ b/hw/block/pflash_cfi02.c
@@ -XXX,XX +XXX,XX @@ static void pflash_write (pflash_t *pfl, hwaddr offset,
     pfl->cmd = 0;
 }
 
-
-static uint32_t pflash_readb_be(void *opaque, hwaddr addr)
+static uint64_t pflash_be_readfn(void *opaque, hwaddr addr, unsigned size)
 {
-    return pflash_read(opaque, addr, 1, 1);
+    return pflash_read(opaque, addr, size, 1);
 }
 
-static uint32_t pflash_readb_le(void *opaque, hwaddr addr)
+static void pflash_be_writefn(void *opaque, hwaddr addr,
+                              uint64_t value, unsigned size)
 {
-    return pflash_read(opaque, addr, 1, 0);
+    pflash_write(opaque, addr, value, size, 1);
 }
 
-static uint32_t pflash_readw_be(void *opaque, hwaddr addr)
+static uint64_t pflash_le_readfn(void *opaque, hwaddr addr, unsigned size)
 {
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 2, 1);
+    return pflash_read(opaque, addr, size, 0);
 }
 
-static uint32_t pflash_readw_le(void *opaque, hwaddr addr)
+static void pflash_le_writefn(void *opaque, hwaddr addr,
+                              uint64_t value, unsigned size)
 {
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 2, 0);
-}
-
-static uint32_t pflash_readl_be(void *opaque, hwaddr addr)
-{
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 4, 1);
-}
-
-static uint32_t pflash_readl_le(void *opaque, hwaddr addr)
-{
-    pflash_t *pfl = opaque;
-
-    return pflash_read(pfl, addr, 4, 0);
-}
-
-static void pflash_writeb_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_write(opaque, addr, value, 1, 1);
-}
-
-static void pflash_writeb_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_write(opaque, addr, value, 1, 0);
-}
-
-static void pflash_writew_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 2, 1);
-}
-
-static void pflash_writew_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 2, 0);
-}
-
-static void pflash_writel_be(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 4, 1);
-}
-
-static void pflash_writel_le(void *opaque, hwaddr addr,
-                             uint32_t value)
-{
-    pflash_t *pfl = opaque;
-
-    pflash_write(pfl, addr, value, 4, 0);
+    pflash_write(opaque, addr, value, size, 0);
 }
 
 static const MemoryRegionOps pflash_cfi02_ops_be = {
-    .old_mmio = {
-        .read = { pflash_readb_be, pflash_readw_be, pflash_readl_be, },
-        .write = { pflash_writeb_be, pflash_writew_be, pflash_writel_be, },
-    },
+    .read = pflash_be_readfn,
+    .write = pflash_be_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
 static const MemoryRegionOps pflash_cfi02_ops_le = {
-    .old_mmio = {
-        .read = { pflash_readb_le, pflash_readw_le, pflash_readl_le, },
-        .write = { pflash_writeb_le, pflash_writew_le, pflash_writel_le, },
-    },
+    .read = pflash_le_readfn,
+    .write = pflash_le_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the wdt_i6300esb device away from using the old_mmio field
of MemoryRegionOps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-5-peter.maydell@linaro.org
---
 hw/watchdog/wdt_i6300esb.c | 48 ++++++++++++++++++++++++++++----------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -XXX,XX +XXX,XX @@ static void i6300esb_mem_writel(void *vp, hwaddr addr, uint32_t val)
     }
 }
 
+static uint64_t i6300esb_mem_readfn(void *opaque, hwaddr addr, unsigned size)
+{
+    switch (size) {
+    case 1:
+        return i6300esb_mem_readb(opaque, addr);
+    case 2:
+        return i6300esb_mem_readw(opaque, addr);
+    case 4:
+        return i6300esb_mem_readl(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void i6300esb_mem_writefn(void *opaque, hwaddr addr,
+                                 uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        i6300esb_mem_writeb(opaque, addr, value);
+        break;
+    case 2:
+        i6300esb_mem_writew(opaque, addr, value);
+        break;
+    case 4:
+        i6300esb_mem_writel(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps i6300esb_ops = {
-    .old_mmio = {
-        .read = {
-            i6300esb_mem_readb,
-            i6300esb_mem_readw,
-            i6300esb_mem_readl,
-        },
-        .write = {
-            i6300esb_mem_writeb,
-            i6300esb_mem_writew,
-            i6300esb_mem_writel,
-        },
-    },
+    .read = i6300esb_mem_readfn,
+    .write = i6300esb_mem_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_LITTLE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the pckbd device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant of the i8042, which is used by the Unicore32 'puv3'
board and the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-6-peter.maydell@linaro.org
---
 hw/input/pckbd.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pckbd.c
+++ b/hw/input/pckbd.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kbd = {
 };
 
 /* Memory mapped interface */
-static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
+static uint64_t kbd_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
         return kbd_read_data(s, 0, 1) & 0xff;
 }
 
-static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
+static void kbd_mm_writefn(void *opaque, hwaddr addr,
+                           uint64_t value, unsigned size)
 {
     KBDState *s = opaque;
 
@@ -XXX,XX +XXX,XX @@ static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
         kbd_write_data(s, 0, value & 0xff, 1);
 }
 
+
 static const MemoryRegionOps i8042_mmio_ops = {
+    .read = kbd_mm_readfn,
+    .write = kbd_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
-    .old_mmio = {
-        .read = { kbd_mm_readb, kbd_mm_readb, kbd_mm_readb },
-        .write = { kbd_mm_writeb, kbd_mm_writeb, kbd_mm_writeb },
-    },
 };
 
 void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Convert the parallel device away from using the old_mmio field
of MemoryRegionOps. This change only affects the memory-mapped
variant, which is used by the MIPS Jazz boards 'magnum' and 'pica61'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601141223.26630-7-peter.maydell@linaro.org
---
 hw/char/parallel.c | 50 ++++++++++------------------------------------
 1 file changed, 11 insertions(+), 39 deletions(-)

diff --git a/hw/char/parallel.c b/hw/char/parallel.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/parallel.c
+++ b/hw/char/parallel.c
@@ -XXX,XX +XXX,XX @@ static void parallel_isa_realizefn(DeviceState *dev, Error **errp)
 }
 
 /* Memory mapped interface */
-static uint32_t parallel_mm_readb (void *opaque, hwaddr addr)
+static uint64_t parallel_mm_readfn(void *opaque, hwaddr addr, unsigned size)
 {
     ParallelState *s = opaque;
 
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFF;
+    return parallel_ioport_read_sw(s, addr >> s->it_shift) &
+        MAKE_64BIT_MASK(0, size * 8);
 }
 
-static void parallel_mm_writeb (void *opaque,
-                                hwaddr addr, uint32_t value)
+static void parallel_mm_writefn(void *opaque, hwaddr addr,
+                                uint64_t value, unsigned size)
 {
     ParallelState *s = opaque;
 
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFF);
-}
-
-static uint32_t parallel_mm_readw (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFFFF;
-}
-
-static void parallel_mm_writew (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFFFF);
-}
-
-static uint32_t parallel_mm_readl (void *opaque, hwaddr addr)
-{
-    ParallelState *s = opaque;
-
-    return parallel_ioport_read_sw(s, addr >> s->it_shift);
-}
-
-static void parallel_mm_writel (void *opaque,
-                                hwaddr addr, uint32_t value)
-{
-    ParallelState *s = opaque;
-
-    parallel_ioport_write_sw(s, addr >> s->it_shift, value);
+    parallel_ioport_write_sw(s, addr >> s->it_shift,
+                             value & MAKE_64BIT_MASK(0, size * 8));
 }
 
 static const MemoryRegionOps parallel_mm_ops = {
-    .old_mmio = {
-        .read = { parallel_mm_readb, parallel_mm_readw, parallel_mm_readl },
-        .write = { parallel_mm_writeb, parallel_mm_writew, parallel_mm_writel },
-    },
+    .read = parallel_mm_readfn,
+    .write = parallel_mm_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
--
2.17.1
diff view generated by jsdifflib
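
The parallel device folds the width variants differently from the earlier
conversions: instead of dispatching to separate helpers it masks the value to
the access width with MAKE_64BIT_MASK(0, size * 8). A quick numeric check of
the masks (a standalone sketch; the macro body below mirrors the shape of
QEMU's bitops.h definition):

    #include <stdint.h>
    #include <stdio.h>

    /* Same shape as QEMU's MAKE_64BIT_MASK(shift, length) */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        for (unsigned size = 1; size <= 4; size *= 2) {
            printf("size %u -> mask 0x%llx\n", size,
                   (unsigned long long)MAKE_64BIT_MASK(0, size * 8));
        }
        /* size 1 -> 0xff, size 2 -> 0xffff, size 4 -> 0xffffffff */
        return 0;
    }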
Deleted patch
The stellaris board is still using the legacy armv7m_init() function,
which predates conversion of the ARMv7M into a proper QOM container
object. Make the board code directly create the ARMv7M object instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-2-peter.maydell@linaro.org
---
 hw/arm/stellaris.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/log.h"
 #include "exec/address-spaces.h"
 #include "sysemu/sysemu.h"
+#include "hw/arm/armv7m.h"
 #include "hw/char/pl011.h"
 #include "hw/misc/unimp.h"
 #include "cpu.h"
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
                            &error_fatal);
     memory_region_add_subregion(system_memory, 0x20000000, sram);
 
-    nvic = armv7m_init(system_memory, flash_size, NUM_IRQ_LINES,
-                       ms->kernel_filename, ms->cpu_type);
+    nvic = qdev_create(NULL, TYPE_ARMV7M);
+    qdev_prop_set_uint32(nvic, "num-irq", NUM_IRQ_LINES);
+    qdev_prop_set_string(nvic, "cpu-type", ms->cpu_type);
+    object_property_set_link(OBJECT(nvic), OBJECT(get_system_memory()),
+                             "memory", &error_abort);
+    /* This will exit with an error if the user passed us a bad cpu_type */
+    qdev_init_nofail(nvic);
 
     qdev_connect_gpio_out_named(nvic, "SYSRESETREQ", 0,
                                 qemu_allocate_irq(&do_sys_reset, NULL, 0));
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
     create_unimplemented_device("analogue-comparator", 0x4003c000, 0x1000);
     create_unimplemented_device("hibernation", 0x400fc000, 0x1000);
     create_unimplemented_device("flash-control", 0x400fd000, 0x1000);
+
+    armv7m_load_kernel(ARM_CPU(first_cpu), ms->kernel_filename, flash_size);
 }
 
 /* FIXME: Figure out how to generate these from stellaris_boards. */
--
2.17.1
diff view generated by jsdifflib
Deleted patch
Remove the now-unused armv7m_init() function. This was a legacy from
before we properly QOMified ARMv7M, and it has some flaws:

 * it combines work that needs to be done by an SoC object (creating
   and initializing the TYPE_ARMV7M object) with work that needs to
   be done by the board model (setting the system up to load the ELF
   file specified with -kernel)
 * TYPE_ARMV7M creation failure is fatal, but an SoC object wants to
   arrange to propagate the failure outward
 * it uses allocate-and-create via qdev_create() whereas the current
   preferred style for SoC objects is to do creation in-place

Board and SoC models can instead do the two jobs this function
was doing themselves, in the right places and with whatever their
preferred style/error handling is.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601144328.23817-3-peter.maydell@linaro.org
---
 include/hw/arm/arm.h |  8 ++------
 hw/arm/armv7m.c      | 21 ---------------------
 2 files changed, 2 insertions(+), 27 deletions(-)

diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/arm.h
+++ b/include/hw/arm/arm.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
     ARM_ENDIANNESS_BE32,
 } arm_endianness;
 
-/* armv7m.c */
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type);
 /**
  * armv7m_load_kernel:
  * @cpu: CPU
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
 * @mem_size: mem_size: maximum image size to load
 *
 * Load the guest image for an ARMv7M system. This must be called by
- * any ARMv7M board, either directly or via armv7m_init(). (This is
- * necessary to ensure that the CPU resets correctly on system reset,
- * as well as for kernel loading.)
+ * any ARMv7M board. (This is necessary to ensure that the CPU resets
+ * correctly on system reset, as well as for kernel loading.)
 */
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size);
 
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_reset(void *opaque)
     cpu_reset(CPU(cpu));
 }
 
-/* Init CPU and memory for a v7-M based board.
-   mem_size is in bytes.
-   Returns the ARMv7M device. */
-
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                         const char *kernel_filename, const char *cpu_type)
-{
-    DeviceState *armv7m;
-
-    armv7m = qdev_create(NULL, TYPE_ARMV7M);
-    qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
-    qdev_prop_set_string(armv7m, "cpu-type", cpu_type);
-    object_property_set_link(OBJECT(armv7m), OBJECT(get_system_memory()),
-                             "memory", &error_abort);
-    /* This will exit with an error if the user passed us a bad cpu_type */
-    qdev_init_nofail(armv7m);
-
-    armv7m_load_kernel(ARM_CPU(first_cpu), kernel_filename, mem_size);
-    return armv7m;
-}
-
 void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
 {
     int image_size;
--
2.17.1
diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

On Macronix chips, two bytes can be written to the WRSR. The first byte
will configure the status register and the second the configuration
register. It is important to save the configuration value as it
contains the dummy cycle setting when using dual or quad IO mode.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/block/m25p80.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
     case MAN_MACRONIX:
         s->quad_enable = extract32(s->data[0], 6, 1);
         if (s->len > 1) {
+            s->volatile_cfg = s->data[1];
             s->four_bytes_address_mode = extract32(s->data[1], 5, 1);
         }
         break;
--
2.17.1


From: Andrew Jeffery <andrew@aj.id.au>

The corner-case codepath was adjusting nexttick such that overflow
wouldn't occur when timer_mod() scaled the value back up. Remove a use
of GTIMER_SCALE and avoid unnecessary operations by calling
timer_mod_ns() directly.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: f8c680720e3abe55476e6d9cb604ad27fdbeb2e0.1576215453.git-series.andrew@aj.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
          * timer expires we will reset the timer for any remaining period.
          */
         if (nexttick > INT64_MAX / GTIMER_SCALE) {
-            nexttick = INT64_MAX / GTIMER_SCALE;
+            timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX);
+        } else {
+            timer_mod(cpu->gt_timer[timeridx], nexttick);
         }
-        timer_mod(cpu->gt_timer[timeridx], nexttick);
         trace_arm_gt_recalc(timeridx, irqstate, nexttick);
     } else {
         /* Timer disabled: ISTATUS and timer output always clear */
--
2.20.1
diff view generated by jsdifflib
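
The overflow guard in gt_recalc_timer() rewards a worked example: timer_mod()
multiplies its tick argument by the timer's scale factor, so with a scale of
16 (QEMU's fixed 62.5MHz rate) any nexttick above INT64_MAX / 16 would
overflow int64_t; scheduling in nanoseconds sidesteps the multiply entirely.
A standalone sketch of the same guard (schedule_ns/schedule_ticks are
illustrative stand-ins, not QEMU API):

    #include <stdint.h>
    #include <stdio.h>

    #define GTIMER_SCALE 16   /* ns per tick at the fixed 62.5MHz rate */

    static void schedule_ns(int64_t ns)
    {
        printf("expire at %lld ns\n", (long long)ns);
    }

    static void schedule_ticks(int64_t t)
    {
        schedule_ns(t * GTIMER_SCALE);   /* this is the multiply that can overflow */
    }

    static void recalc(int64_t nexttick)
    {
        if (nexttick > INT64_MAX / GTIMER_SCALE) {
            schedule_ns(INT64_MAX);      /* saturate in ns instead of overflowing */
        } else {
            schedule_ticks(nexttick);
        }
    }

    int main(void)
    {
        recalc(1000);                         /* 16000 ns */
        recalc(INT64_MAX / GTIMER_SCALE + 1); /* pinned to INT64_MAX */
        return 0;
    }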
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    | 14 +++++++++++++
 target/arm/sve_helper.c    | 41 +++++++++++++++++++++++++++++-------
 target/arm/translate-sve.c | 38 +++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  7 +++++++
 4 files changed, 93 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_revb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_s(uint8_t byte)
     return word[byte & 0x11];
 }
 
+/* Swap 16-bit words within a 32-bit word. */
+static inline uint32_t hswap32(uint32_t h)
+{
+    return rol32(h, 16);
+}
+
+/* Swap 16-bit words within a 64-bit word. */
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+/* Swap 32-bit words within a 64-bit word. */
+static inline uint64_t wswap64(uint64_t h)
+{
+    return rol64(h, 32);
+}
+
 #define LOGICAL_PPPP(NAME, FUNC) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 { \
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
 
+DO_ZPZ(sve_revb_h, uint16_t, H1_2, bswap16)
+DO_ZPZ(sve_revb_s, uint32_t, H1_4, bswap32)
+DO_ZPZ_D(sve_revb_d, uint64_t, bswap64)
+
+DO_ZPZ(sve_revh_s, uint32_t, H1_4, hswap32)
+DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
+
+DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
+
+DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
+DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
+DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
+DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
+
 /* Three-operand expander, unpredicated, in which the third operand is "wide".
  */
 #define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
     }
 }
 
-static inline uint64_t hswap64(uint64_t h)
-{
-    uint64_t m = 0x0000ffff0000ffffull;
-    h = rol64(h, 32);
-    return ((h & m) << 16) | ((h >> 16) & m);
-}
-
 void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
 {
     intptr_t i, j, opr_sz = simd_oprsz(desc);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
     return true;
 }
 
+static bool trans_REVB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        gen_helper_sve_revb_h,
+        gen_helper_sve_revb_s,
+        gen_helper_sve_revb_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        NULL,
+        NULL,
+        gen_helper_sve_revh_s,
+        gen_helper_sve_revh_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_REVW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_revw_d : NULL);
+}
+
+static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_rbit_b,
+        gen_helper_sve_rbit_h,
+        gen_helper_sve_rbit_s,
+        gen_helper_sve_rbit_d,
+    };
+    return do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
 # SVE copy element from general register to vector (predicated)
 CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
 
+# SVE reverse within elements
+# Note esz >= operation size
+REVB 00000101 .. 1001 00 100 ... ..... ..... @rd_pg_rn
+REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
+REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
+RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
--
2.17.1


From: Andrew Jeffery <andrew@aj.id.au>

Prepare for SoCs such as the ASPEED AST2600 whose firmware configures
CNTFRQ to values significantly larger than the static 62.5MHz value
currently derived from GTIMER_SCALE. As the OS potentially derives its
timer periods from the CNTFRQ value the lack of support for running
QEMUTimers at the appropriate rate leads to sticky behaviour in the
guest.

Substitute the GTIMER_SCALE constant with use of a helper to derive the
period from gt_cntfrq_hz stored in struct ARMCPU. Initially set
gt_cntfrq_hz to the frequency associated with GTIMER_SCALE so current
behaviour is maintained.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 40bd8df043f66e1ccfb3e9482999d099ac72bb2e.1576215453.git-series.andrew@aj.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  5 +++++
 target/arm/cpu.c    |  8 ++++++++
 target/arm/helper.c | 10 +++++++---
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      */
     DECLARE_BITMAP(sve_vq_map, ARM_MAX_VQ);
     DECLARE_BITMAP(sve_vq_init, ARM_MAX_VQ);
+
+    /* Generic timer counter frequency, in Hz */
+    uint64_t gt_cntfrq_hz;
 };
 
+unsigned int gt_cntfrq_period_ns(ARMCPU *cpu);
+
 void arm_cpu_post_init(Object *obj);
 
 uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
     if (tcg_enabled()) {
         cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
     }
+
+    cpu->gt_cntfrq_hz = NANOSECONDS_PER_SECOND / GTIMER_SCALE;
 }
 
 static Property arm_cpu_reset_cbar_property =
@@ -XXX,XX +XXX,XX @@ static void arm_set_init_svtor(Object *obj, Visitor *v, const char *name,
     visit_type_uint32(v, name, &cpu->init_svtor, errp);
 }
 
+unsigned int gt_cntfrq_period_ns(ARMCPU *cpu)
+{
+    return NANOSECONDS_PER_SECOND > cpu->gt_cntfrq_hz ?
+        NANOSECONDS_PER_SECOND / cpu->gt_cntfrq_hz : 1;
+}
+
 void arm_cpu_post_init(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
 
 static uint64_t gt_get_countervalue(CPUARMState *env)
 {
-    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / GTIMER_SCALE;
+    ARMCPU *cpu = env_archcpu(env);
+
+    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / gt_cntfrq_period_ns(cpu);
 }
 
 static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
          * set the timer for as far in the future as possible. When the
          * timer expires we will reset the timer for any remaining period.
          */
-        if (nexttick > INT64_MAX / GTIMER_SCALE) {
+        if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) {
             timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX);
         } else {
             timer_mod(cpu->gt_timer[timeridx], nexttick);
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
 
 static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
+    ARMCPU *cpu = env_archcpu(env);
+
     /* Currently we have no support for QEMUTimer in linux-user so we
      * can't call gt_get_countervalue(env), instead we directly
      * call the lower level functions.
      */
-    return cpu_get_clock() / GTIMER_SCALE;
+    return cpu_get_clock() / gt_cntfrq_period_ns(cpu);
 }
 
 static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
--
2.20.1
diff view generated by jsdifflib
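
gt_cntfrq_period_ns() is just a floor division clamped to 1ns, which is easy
to sanity-check numerically. A standalone sketch (the constant mirrors QEMU's
NANOSECONDS_PER_SECOND; this is not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    #define NANOSECONDS_PER_SECOND 1000000000ULL

    static unsigned int period_ns(uint64_t hz)
    {
        /* Same shape as gt_cntfrq_period_ns(): truncate, clamp to 1ns */
        return NANOSECONDS_PER_SECOND > hz ? NANOSECONDS_PER_SECOND / hz : 1;
    }

    int main(void)
    {
        printf("%u\n", period_ns(62500000));   /* 16 -- the legacy GTIMER_SCALE */
        printf("%u\n", period_ns(1125000000)); /* 1  -- >= 1GHz is clamped */
        printf("%u\n", period_ns(24000000));   /* 41 -- truncated from 41.666... */
        return 0;
    }

The truncation in the third case is exactly the imprecision the next patch's
commit message discusses at length.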
The Cortex-M CPU and its NVIC are two intimately intertwined parts of
the same hardware; it is not possible to use one without the other.
Unfortunately a lot of our board models don't do any sanity checking
on the CPU type the user asks for, so a command line like
    qemu-system-arm -M versatilepb -cpu cortex-m3
will create an M3 without an NVIC, and coredump immediately.
In the other direction, trying a non-M-profile CPU in an M-profile
board won't blow up, but doesn't do anything useful either:
    qemu-system-arm -M lm3s6965evb -cpu arm926

Add some checking in the NVIC and CPU realize functions that the
user isn't trying to use an NVIC without an M-profile CPU or
an M-profile CPU without an NVIC, so we can produce a helpful
error message rather than a core dump.

Fixes: https://bugs.launchpad.net/qemu/+bug/1766896
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601160355.15393-1-peter.maydell@linaro.org
---
 hw/arm/armv7m.c       |  7 ++++++-
 hw/intc/armv7m_nvic.c |  6 +++++-
 target/arm/cpu.c      | 18 ++++++++++++++++++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
             return;
         }
     }
+
+    /* Tell the CPU where the NVIC is; it will fail realize if it doesn't
+     * have one.
+     */
+    s->cpu->env.nvic = &s->nvic;
+
     object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
     if (err != NULL) {
         error_propagate(errp, err);
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
     sbd = SYS_BUS_DEVICE(&s->nvic);
     sysbus_connect_irq(sbd, 0,
                        qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
-    s->cpu->env.nvic = &s->nvic;
 
     memory_region_add_subregion(&s->container, 0xe000e000,
                                 sysbus_mmio_get_region(sbd, 0));
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     int regionlen;
 
     s->cpu = ARM_CPU(qemu_get_cpu(0));
-    assert(s->cpu);
+
+    if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
+        error_setg(errp, "The NVIC can only be used with a Cortex-M CPU");
+        return;
+    }
 
     if (s->num_irq > NVIC_MAX_IRQ) {
         error_setg(errp, "num-irq %d exceeds NVIC maximum", s->num_irq);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
             return;
         }
     }
 
+#ifndef CONFIG_USER_ONLY
+    /* The NVIC and M-profile CPU are two halves of a single piece of
+     * hardware; trying to use one without the other is a command line
+     * error and will result in segfaults if not caught here.
+     */
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        if (!env->nvic) {
+            error_setg(errp, "This board cannot be used with Cortex-M CPUs");
+            return;
+        }
+    } else {
+        if (env->nvic) {
+            error_setg(errp, "This board can only be used with Cortex-M CPUs");
+            return;
+        }
+    }
+#endif
+
     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
--
2.17.1


From: Andrew Jeffery <andrew@aj.id.au>

The ASPEED AST2600 clocks the generic timer at the rate of HPLL. On
recent firmwares this is at 1125MHz, which is considerably quicker than
the assumed 62.5MHz of the current generic timer implementation. The
delta between the value as read from CNTFRQ and the true rate of the
underlying QEMUTimer leads to sticky behaviour in AST2600 guests.

Add a feature-gated property exposing CNTFRQ for ARM CPUs providing the
generic timer. This allows platforms to configure CNTFRQ (and the
associated QEMUTimer) to the appropriate frequency prior to starting the
guest.

As the platform can now determine the rate of CNTFRQ we're exposed to
limitations of QEMUTimer that didn't previously materialise: In the
course of emulation we need to arbitrarily and accurately convert
between guest ticks and time, but we're constrained by QEMUTimer's use
of an integer scaling factor. The effect is QEMUTimer cannot exactly
capture the period of frequencies that do not cleanly divide
NANOSECONDS_PER_SECOND for scaling ticks to time. As such, provide an
equally inaccurate scaling factor for scaling time to ticks so at least
a self-consistent inverse relationship holds.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: a22db9325f96e39f76e3c2baddcb712149f46bf2.1576215453.git-series.andrew@aj.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c    | 61 +++++++++++++++++++++++++++++++++++++--------
 target/arm/helper.c |  9 ++++++-
 2 files changed, 59 insertions(+), 11 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
     if (tcg_enabled()) {
         cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
     }
-
-    cpu->gt_cntfrq_hz = NANOSECONDS_PER_SECOND / GTIMER_SCALE;
 }
 
+static Property arm_cpu_gt_cntfrq_property =
+            DEFINE_PROP_UINT64("cntfrq", ARMCPU, gt_cntfrq_hz,
+                               NANOSECONDS_PER_SECOND / GTIMER_SCALE);
+
 static Property arm_cpu_reset_cbar_property =
             DEFINE_PROP_UINT64("reset-cbar", ARMCPU, reset_cbar, 0);
 
@@ -XXX,XX +XXX,XX @@ static void arm_set_init_svtor(Object *obj, Visitor *v, const char *name,
 
 unsigned int gt_cntfrq_period_ns(ARMCPU *cpu)
 {
+    /*
+     * The exact approach to calculating guest ticks is:
+     *
+     *     muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), cpu->gt_cntfrq_hz,
+     *              NANOSECONDS_PER_SECOND);
+     *
+     * We don't do that. Rather we intentionally use integer division
+     * truncation below and in the caller for the conversion of host monotonic
+     * time to guest ticks to provide the exact inverse for the semantics of
+     * the QEMUTimer scale factor. QEMUTimer's scale factor is an integer, so
+     * it loses precision when representing frequencies where
+     * `(NANOSECONDS_PER_SECOND % cpu->gt_cntfrq) > 0` holds. Failing to
+     * provide an exact inverse leads to scheduling timers with negative
+     * periods, which in turn leads to sticky behaviour in the guest.
+     *
+     * Finally, CNTFRQ is effectively capped at 1GHz to ensure our scale factor
+     * cannot become zero.
+     */
     return NANOSECONDS_PER_SECOND > cpu->gt_cntfrq_hz ?
         NANOSECONDS_PER_SECOND / cpu->gt_cntfrq_hz : 1;
 }
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
 
     qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property,
                              &error_abort);
+
+    if (arm_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER)) {
+        qdev_property_add_static(DEVICE(cpu), &arm_cpu_gt_cntfrq_property,
+                                 &error_abort);
+    }
 }
 
 static void arm_cpu_finalizefn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         }
     }
 
-    cpu->gt_timer[GTIMER_PHYS] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
-                                           arm_gt_ptimer_cb, cpu);
-    cpu->gt_timer[GTIMER_VIRT] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
-                                           arm_gt_vtimer_cb, cpu);
-    cpu->gt_timer[GTIMER_HYP] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
-                                          arm_gt_htimer_cb, cpu);
-    cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
-                                          arm_gt_stimer_cb, cpu);
+
+    {
+        uint64_t scale;
+
+        if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
+            if (!cpu->gt_cntfrq_hz) {
+                error_setg(errp, "Invalid CNTFRQ: %"PRId64"Hz",
+                           cpu->gt_cntfrq_hz);
+                return;
+            }
+            scale = gt_cntfrq_period_ns(cpu);
+        } else {
+            scale = GTIMER_SCALE;
+        }
+
+        cpu->gt_timer[GTIMER_PHYS] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
+                                               arm_gt_ptimer_cb, cpu);
+        cpu->gt_timer[GTIMER_VIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
+                                               arm_gt_vtimer_cb, cpu);
+        cpu->gt_timer[GTIMER_HYP] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
+                                              arm_gt_htimer_cb, cpu);
+        cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
+                                              arm_gt_stimer_cb, cpu);
+    }
 #endif
 
     cpu_exec_realizefn(cs, &local_err);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void arm_gt_stimer_cb(void *opaque)
     gt_recalc_timer(cpu, GTIMER_SEC);
 }
 
+static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    cpu->env.cp15.c14_cntfrq = cpu->gt_cntfrq_hz;
+}
+
 static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
     /* Note that CNTFRQ is purely reads-as-written for the benefit
      * of software; writing it doesn't actually change the timer frequency.
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0,
       .access = PL1_RW | PL0_R, .accessfn = gt_cntfrq_access,
       .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq),
-      .resetvalue = (1000 * 1000 * 1000) / GTIMER_SCALE,
+      .resetfn = arm_gt_cntfrq_reset,
     },
     /* overall control: mostly access permissions */
     { .name = "CNTKCTL", .state = ARM_CP_STATE_BOTH,
--
2.20.1
diff view generated by jsdifflib
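To see why gt_cntfrq_period_ns() clamps, it helps to plug in numbers. A
minimal standalone sketch (not QEMU code; the constant and the helper body
mirror the patch above):

    #include <stdint.h>
    #include <stdio.h>

    #define NANOSECONDS_PER_SECOND 1000000000ull

    /* same arithmetic as gt_cntfrq_period_ns() in the patch */
    static uint64_t period_ns(uint64_t cntfrq_hz)
    {
        return NANOSECONDS_PER_SECOND > cntfrq_hz ?
            NANOSECONDS_PER_SECOND / cntfrq_hz : 1;
    }

    int main(void)
    {
        /* 16: the legacy 62.5MHz GTIMER_SCALE case divides exactly */
        printf("%llu\n", (unsigned long long)period_ns(62500000ull));
        /* 1: above 1GHz the division would truncate to 0, so clamp */
        printf("%llu\n", (unsigned long long)period_ns(1125000000ull));
        return 0;
    }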
Deleted patch
For the IoTKit MPC support, we need to wire together the
interrupt outputs of 17 MPCs; this exceeds the current
value of MAX_OR_LINES. Increase MAX_OR_LINES to 32 (which
should be enough for anyone).

The tricky part is retaining the migration compatibility for
existing OR gates; we add a subsection which is only used
for larger OR gates, and define it such that we can freely
increase MAX_OR_LINES in future (or even move to a dynamically
allocated levels[] array without an upper size limit) without
breaking compatibility.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-10-peter.maydell@linaro.org
---
 include/hw/or-irq.h |  5 ++++-
 hw/core/or-irq.c    | 39 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/or-irq.h
+++ b/include/hw/or-irq.h
@@ -XXX,XX +XXX,XX @@

 #define TYPE_OR_IRQ "or-irq"

-#define MAX_OR_LINES 16
+/* This can safely be increased if necessary without breaking
+ * migration compatibility (as long as it remains greater than 15).
+ */
+#define MAX_OR_LINES 32

 typedef struct OrIRQState qemu_or_irq;

diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/or-irq.c
+++ b/hw/core/or-irq.c
@@ -XXX,XX +XXX,XX @@ static void or_irq_init(Object *obj)
     qdev_init_gpio_out(DEVICE(obj), &s->out_irq, 1);
 }

+/* The original version of this device had a fixed 16 entries in its
+ * VMState array; devices with more inputs than this need to
+ * migrate the extra lines via a subsection.
+ * The subsection migrates as much of the levels[] array as is needed
+ * (including repeating the first 16 elements), to avoid the awkwardness
+ * of splitting it in two to meet the requirements of VMSTATE_VARRAY_UINT16.
+ */
+#define OLD_MAX_OR_LINES 16
+#if MAX_OR_LINES < OLD_MAX_OR_LINES
+#error MAX_OR_LINES must be at least 16 for migration compatibility
+#endif
+
+static bool vmstate_extras_needed(void *opaque)
+{
+    qemu_or_irq *s = OR_IRQ(opaque);
+
+    return s->num_lines >= OLD_MAX_OR_LINES;
+}
+
+static const VMStateDescription vmstate_or_irq_extras = {
+    .name = "or-irq-extras",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vmstate_extras_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_VARRAY_UINT16_UNSAFE(levels, qemu_or_irq, num_lines, 0,
+                                     vmstate_info_bool, bool),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
 static const VMStateDescription vmstate_or_irq = {
     .name = TYPE_OR_IRQ,
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
-        VMSTATE_BOOL_ARRAY(levels, qemu_or_irq, MAX_OR_LINES),
+        VMSTATE_BOOL_SUB_ARRAY(levels, qemu_or_irq, 0, OLD_MAX_OR_LINES),
         VMSTATE_END_OF_LIST(),
-    }
+    },
+    .subsections = (const VMStateDescription*[]) {
+        &vmstate_or_irq_extras,
+        NULL
+    },
 };

 static Property or_irq_properties[] = {
--
2.17.1
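A board that needs the larger gate might wire it up as in this hypothetical
fragment (mpc[] and cpu_irq are stand-ins; the qdev calls are the ones board
code of this era uses):

    DeviceState *orgate = DEVICE(object_new(TYPE_OR_IRQ));
    int i;

    /* 17 inputs now fits within the new MAX_OR_LINES of 32 */
    object_property_set_int(OBJECT(orgate), 17, "num-lines", &error_fatal);
    object_property_set_bool(OBJECT(orgate), true, "realized", &error_fatal);

    /* the gate has a single output... */
    qdev_connect_gpio_out(orgate, 0, cpu_irq);

    /* ...and one input per MPC interrupt line */
    for (i = 0; i < 17; i++) {
        sysbus_connect_irq(SYS_BUS_DEVICE(mpc[i]), 0,
                           qdev_get_gpio_in(orgate, i));
    }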
Deleted patch
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious
use; add a comment documenting it (reverse-engineered from what
the code that sets it is doing).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
---
 include/exec/cpu-defs.h |  9 +++++++++
 accel/tcg/cputlb.c      | 12 ++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
  * structs into one.)
  */
 typedef struct CPUIOTLBEntry {
+    /*
+     * @addr contains:
+     *  - in the lower TARGET_PAGE_BITS, a physical section number
+     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
+     *    must be added to the virtual address to obtain:
+     *     + the ram_addr_t of the target RAM (if the physical section
+     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
+     *     + the offset within the target MemoryRegion (otherwise)
+     */
     hwaddr addr;
     MemTxAttrs attrs;
 } CPUIOTLBEntry;

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];

     /* refill the tlb */
+    /*
+     * At this point iotlb contains a physical section number in the lower
+     * TARGET_PAGE_BITS, and either
+     *  + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
+     *  + the offset within section->mr of the page base (otherwise)
+     * We subtract the vaddr (which is page aligned and thus won't
+     * disturb the low bits) to give an offset which can be added to the
+     * (non-page-aligned) vaddr of the eventual memory access to get
+     * the MemoryRegion offset for the access. Note that the vaddr we
+     * subtract here is that of the page base, and not the same as the
+     * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
+     */
     env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
     env->iotlb[mmu_idx][index].attrs = attrs;

--
2.17.1
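The new comment is easier to digest next to the decode it implies; a sketch
of the consumer side (vaddr here is the virtual address of the eventual
access, as in io_readx() later in this series):

    /* low bits: which MemoryRegionSection this page maps to */
    hwaddr section_idx = iotlbentry->addr & ~TARGET_PAGE_MASK;

    /* high bits: add back the access vaddr to get the region offset
     * (or the ram_addr_t, for the NOTDIRTY/ROM special sections)
     */
    hwaddr mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + vaddr;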
Deleted patch
The API for cpu_transaction_failed() says that it takes the physical
address for the failed transaction. However we were actually passing
it the offset within the target MemoryRegion. We don't currently
have any target CPU implementations of this hook that require the
physical address; fix this bug so we don't get confused if we ever
do add one.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
---
 include/exec/exec-all.h | 13 ++++++++++--
 accel/tcg/cputlb.c      | 44 +++++++++++++++++++++++++++++------------
 exec.c                  |  5 +++--
 3 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tb_lock_reset(void);

 #if !defined(CONFIG_USER_ONLY)

-struct MemoryRegion *iotlb_to_region(CPUState *cpu,
-                                     hwaddr index, MemTxAttrs attrs);
+/**
+ * iotlb_to_section:
+ * @cpu: CPU performing the access
+ * @index: TCG CPU IOTLB entry
+ *
+ * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
+ * it refers to. @index will have been initially created and returned
+ * by memory_region_section_get_iotlb().
+ */
+struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
+                                             hwaddr index, MemTxAttrs attrs);

 void tlb_fill(CPUState *cpu, target_ulong addr, int size,
               MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          target_ulong addr, uintptr_t retaddr, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
-    hwaddr physaddr = iotlbentry->addr;
-    MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
+    hwaddr mr_offset;
+    MemoryRegionSection *section;
+    MemoryRegion *mr;
     uint64_t val;
     bool locked = false;
     MemTxResult r;

-    physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, physaddr,
+    r = memory_region_dispatch_read(mr, mr_offset,
                                     &val, size, iotlbentry->attrs);
     if (r != MEMTX_OK) {
+        hwaddr physaddr = mr_offset +
+            section->offset_within_address_space -
+            section->offset_within_region;
+
         cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       uintptr_t retaddr, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
-    hwaddr physaddr = iotlbentry->addr;
-    MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
+    hwaddr mr_offset;
+    MemoryRegionSection *section;
+    MemoryRegion *mr;
     bool locked = false;
     MemTxResult r;

-    physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, physaddr,
+    r = memory_region_dispatch_write(mr, mr_offset,
                                      val, size, iotlbentry->attrs);
     if (r != MEMTX_OK) {
+        hwaddr physaddr = mr_offset +
+            section->offset_within_address_space -
+            section->offset_within_region;
+
         cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
  */
 tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
 {
-    int mmu_idx, index, pd;
+    int mmu_idx, index;
     void *p;
     MemoryRegion *mr;
+    MemoryRegionSection *section;
     CPUState *cpu = ENV_GET_CPU(env);
     CPUIOTLBEntry *iotlbentry;
-    hwaddr physaddr;
+    hwaddr physaddr, mr_offset;

     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         }
     }
     iotlbentry = &env->iotlb[mmu_idx][index];
-    pd = iotlbentry->addr & ~TARGET_PAGE_MASK;
-    mr = iotlb_to_region(cpu, pd, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    mr = section->mr;
     if (memory_region_is_unassigned(mr)) {
         qemu_mutex_lock_iothread();
         if (memory_region_request_mmio_ptr(mr, addr)) {
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
      * and use the MemTXResult it produced). However it is the
      * simplest place we have currently available for the check.
      */
-    physaddr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    physaddr = mr_offset +
+        section->offset_within_address_space -
+        section->offset_within_region;
     cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
                            iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps readonly_mem_ops = {
     },
 };

-MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index, MemTxAttrs attrs)
+MemoryRegionSection *iotlb_to_section(CPUState *cpu,
+                                      hwaddr index, MemTxAttrs attrs)
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
     AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
     MemoryRegionSection *sections = d->map.sections;

-    return sections[index & ~TARGET_PAGE_MASK].mr;
+    return &sections[index & ~TARGET_PAGE_MASK];
 }

 static void io_mem_init(void)
--
2.17.1
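The offset-to-physical conversion that this patch repeats three times is
worth spelling out once on its own: an offset within section->mr becomes a
full physical address by rebasing it from region-relative to
address-space-relative coordinates.

    hwaddr physaddr = mr_offset
                    + section->offset_within_address_space
                    - section->offset_within_region;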
Deleted patch
The codebase has a bit of a mix of different multiline
comment styles. State a preference for the Linux kernel
style:
    /*
     * Star on the left for each line.
     * Leading slash-star and trailing star-slash
     * each go on a line of their own.
     */

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180611141716.3813-1-peter.maydell@linaro.org
---
 CODING_STYLE | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/CODING_STYLE b/CODING_STYLE
index XXXXXXX..XXXXXXX 100644
--- a/CODING_STYLE
+++ b/CODING_STYLE
@@ -XXX,XX +XXX,XX @@ We use traditional C-style /* */ comments and avoid // comments.
 Rationale: The // form is valid in C99, so this is purely a matter of
 consistency of style. The checkpatch script will warn you about this.

+Multiline comment blocks should have a row of stars on the left,
+and the initial /* and terminating */ both on their own lines:
+    /*
+     * like
+     * this
+     */
+This is the same format required by the Linux kernel coding style.
+
+(Some of the existing comments in the codebase use the GNU Coding
+Standards form which does not have stars on the left, or other
+variations; avoid these when writing new comments, but don't worry
+about converting to the preferred form unless you're editing that
+comment anyway.)
+
+Rationale: Consistency, and ease of visually picking out a multiline
+comment from the surrounding code.
+
 8. trace-events style

 8.1 0x prefix
--
2.17.1
Deleted patch
There's a common pattern in QEMU where a function needs to perform
a data load or store of an N byte integer in a particular endianness.
At the moment this is handled by doing a switch() on the size and
calling the appropriate ld*_p or st*_p function for each size.

Provide a new family of functions ldn_*_p() and stn_*_p() which
take the size as an argument and do the switch() themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
---
 include/exec/cpu-all.h      |  4 +++
 include/qemu/bswap.h        | 52 +++++++++++++++++++++++++++++++++++++
 docs/devel/loads-stores.rst | 15 +++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_be_p(p, v)
 #define stfl_p(p, v) stfl_be_p(p, v)
 #define stfq_p(p, v) stfq_be_p(p, v)
+#define ldn_p(p, sz) ldn_be_p(p, sz)
+#define stn_p(p, sz, v) stn_be_p(p, sz, v)
 #else
 #define lduw_p(p) lduw_le_p(p)
 #define ldsw_p(p) ldsw_le_p(p)
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
 #define stq_p(p, v) stq_le_p(p, v)
 #define stfl_p(p, v) stfl_le_p(p, v)
 #define stfq_p(p, v) stfq_le_p(p, v)
+#define ldn_p(p, sz) ldn_le_p(p, sz)
+#define stn_p(p, sz, v) stn_le_p(p, sz, v)
 #endif

 /* MMU memory access macros */
diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/bswap.h
+++ b/include/qemu/bswap.h
@@ -XXX,XX +XXX,XX @@ typedef union {
  * For accessors that take a guest address rather than a
  * host address, see the cpu_{ld,st}_* accessors defined in
  * cpu_ldst.h.
+ *
+ * For cases where the size to be used is not fixed at compile time,
+ * there are
+ *  stn{endian}_p(ptr, sz, val)
+ * which stores @val to @ptr as an @endian-order number @sz bytes in size
+ * and
+ *  ldn{endian}_p(ptr, sz)
+ * which loads @sz bytes from @ptr as an unsigned @endian-order number
+ * and returns it in a uint64_t.
 */

 static inline int ldub_p(const void *ptr)
@@ -XXX,XX +XXX,XX @@ static inline unsigned long leul_to_cpu(unsigned long v)
 #endif
 }

+/* Store v to p as a sz byte value in host order */
+#define DO_STN_LDN_P(END) \
+    static inline void stn_## END ## _p(void *ptr, int sz, uint64_t v)  \
+    {                                                                   \
+        switch (sz) {                                                   \
+        case 1:                                                         \
+            stb_p(ptr, v);                                              \
+            break;                                                      \
+        case 2:                                                         \
+            stw_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        case 4:                                                         \
+            stl_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        case 8:                                                         \
+            stq_ ## END ## _p(ptr, v);                                  \
+            break;                                                      \
+        default:                                                        \
+            g_assert_not_reached();                                     \
+        }                                                               \
+    }                                                                   \
+    static inline uint64_t ldn_## END ## _p(const void *ptr, int sz)    \
+    {                                                                   \
+        switch (sz) {                                                   \
+        case 1:                                                         \
+            return ldub_p(ptr);                                         \
+        case 2:                                                         \
+            return lduw_ ## END ## _p(ptr);                             \
+        case 4:                                                         \
+            return (uint32_t)ldl_ ## END ## _p(ptr);                    \
+        case 8:                                                         \
+            return ldq_ ## END ## _p(ptr);                              \
+        default:                                                        \
+            g_assert_not_reached();                                     \
+        }                                                               \
+    }

+DO_STN_LDN_P(he)
+DO_STN_LDN_P(le)
+DO_STN_LDN_P(be)
+
+#undef DO_STN_LDN_P
+
 #undef le_bswap
 #undef be_bswap
 #undef le_bswaps
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ The ``_{endian}`` infix is omitted for target-endian accesses.
 The target endian accessors are only available to source
 files which are built per-target.

+There are also functions which take the size as an argument:
+
+load: ``ldn{endian}_p(ptr, sz)``
+
+which performs an unsigned load of ``sz`` bytes from ``ptr``
+as an ``{endian}`` order value and returns it in a uint64_t.
+
+store: ``stn{endian}_p(ptr, sz, val)``
+
+which stores ``val`` to ``ptr`` as an ``{endian}`` order value
+of size ``sz`` bytes.
+
+
 Regexes for git grep
  - ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
+ - ``\<ldn_\([hbl]e\)?_p\>``
+ - ``\<stn_\([hbl]e\)?_p\>``

 ``cpu_{ld,st}_*``
 ~~~~~~~~~~~~~~~~~
--
2.17.1
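A before/after sketch of the call-site pattern the new accessors replace
(buf, size and val stand in for whatever the converted caller uses):

    switch (size) {                 /* old: one call per access width */
    case 1: stb_p(buf, val); break;
    case 2: stw_le_p(buf, val); break;
    case 4: stl_le_p(buf, val); break;
    case 8: stq_le_p(buf, val); break;
    default: abort();
    }

    stn_le_p(buf, size, val);       /* new: size is a runtime argument */
    val = ldn_le_p(buf, size);      /* and likewise for loads */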
Deleted patch
In subpage_read() we perform a load of the data into a local buffer
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
depending on its size, storing the result into the uint64_t *data.
Since ldl_p() returns an 'int', this means that for the 4-byte
case we will sign-extend the data, whereas for 1 and 2 byte
reads we zero-extend it.

This ought not to matter since the caller will likely ignore values in
the high bytes of the data, but add a cast so that we're consistent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-3-peter.maydell@linaro.org
---
 exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
         *data = lduw_p(buf);
         return MEMTX_OK;
     case 4:
-        *data = ldl_p(buf);
+        *data = (uint32_t)ldl_p(buf);
         return MEMTX_OK;
     case 8:
         *data = ldq_p(buf);
--
2.17.1
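The sign-extension the cast guards against, in isolation (a hypothetical
value with bit 31 set):

    int32_t raw = 0x80000000;       /* what ldl_p() effectively returns */
    uint64_t bad = raw;             /* 0xffffffff80000000: sign-extended */
    uint64_t good = (uint32_t)raw;  /* 0x0000000080000000: zero-extended */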
Deleted patch
Now we have stn_p() and ldn_p() we can use them in various
functions in exec.c that used to have their own switch-on-size code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-4-peter.maydell@linaro.org
---
 exec.c | 112 +++++----------------------------------------------------
 1 file changed, 8 insertions(+), 104 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
     memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
                          ram_addr, size);

-    switch (size) {
-    case 1:
-        stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 2:
-        stw_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 4:
-        stl_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    case 8:
-        stq_p(qemu_map_ram_ptr(NULL, ram_addr), val);
-        break;
-    default:
-        abort();
-    }
+    stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
     memory_notdirty_write_complete(&ndi);
 }

@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
     if (res) {
         return res;
     }
-    switch (len) {
-    case 1:
-        *data = ldub_p(buf);
-        return MEMTX_OK;
-    case 2:
-        *data = lduw_p(buf);
-        return MEMTX_OK;
-    case 4:
-        *data = (uint32_t)ldl_p(buf);
-        return MEMTX_OK;
-    case 8:
-        *data = ldq_p(buf);
-        return MEMTX_OK;
-    default:
-        abort();
-    }
+    *data = ldn_p(buf, len);
+    return MEMTX_OK;
 }

 static MemTxResult subpage_write(void *opaque, hwaddr addr,
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
            " value %"PRIx64"\n",
            __func__, subpage, len, addr, value);
 #endif
-    switch (len) {
-    case 1:
-        stb_p(buf, value);
-        break;
-    case 2:
-        stw_p(buf, value);
-        break;
-    case 4:
-        stl_p(buf, value);
-        break;
-    case 8:
-        stq_p(buf, value);
-        break;
-    default:
-        abort();
-    }
+    stn_p(buf, len, value);
     return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
 }

@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             l = memory_access_size(mr, l, addr1);
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
-            switch (l) {
-            case 8:
-                /* 64 bit write access */
-                val = ldq_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 8,
-                                                       attrs);
-                break;
-            case 4:
-                /* 32 bit write access */
-                val = (uint32_t)ldl_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 4,
-                                                       attrs);
-                break;
-            case 2:
-                /* 16 bit write access */
-                val = lduw_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 2,
-                                                       attrs);
-                break;
-            case 1:
-                /* 8 bit write access */
-                val = ldub_p(buf);
-                result |= memory_region_dispatch_write(mr, addr1, val, 1,
-                                                       attrs);
-                break;
-            default:
-                abort();
-            }
+            val = ldn_p(buf, l);
+            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            switch (l) {
-            case 8:
-                /* 64 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 8,
-                                                      attrs);
-                stq_p(buf, val);
-                break;
-            case 4:
-                /* 32 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 4,
-                                                      attrs);
-                stl_p(buf, val);
-                break;
-            case 2:
-                /* 16 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 2,
-                                                      attrs);
-                stw_p(buf, val);
-                break;
-            case 1:
-                /* 8 bit read access */
-                result |= memory_region_dispatch_read(mr, addr1, &val, 1,
-                                                      attrs);
-                stb_p(buf, val);
-                break;
-            default:
-                abort();
-            }
+            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            stn_p(buf, l, val);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
--
2.17.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Rearrange the arithmetic so that we are agnostic about the total size
of the vector and the size of the element. This will allow us to index
up to the 32nd byte and with 16-byte elements.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.h | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline void assert_fp_access_checked(DisasContext *s)
 static inline int vec_reg_offset(DisasContext *s, int regno,
                                  int element, TCGMemOp size)
 {
-    int offs = 0;
+    int element_size = 1 << size;
+    int offs = element * element_size;
 #ifdef HOST_WORDS_BIGENDIAN
     /* This is complicated slightly because vfp.zregs[n].d[0] is
-     * still the low half and vfp.zregs[n].d[1] the high half
-     * of the 128 bit vector, even on big endian systems.
-     * Calculate the offset assuming a fully bigendian 128 bits,
-     * then XOR to account for the order of the two 64 bit halves.
+     * still the lowest and vfp.zregs[n].d[15] the highest of the
+     * 256 byte vector, even on big endian systems.
+     *
+     * Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     *
+     * For 16 byte elements, the two 8 byte halves will not form a
+     * host int128 if the host is bigendian, since they're in the
+     * wrong order. However the only 16 byte operation we have is
+     * a move, so we can ignore this for the moment. More complicated
+     * operations will have to special case loading and storing from
+     * the zregs array.
      */
-    offs += (16 - ((element + 1) * (1 << size)));
-    offs ^= 8;
-#else
-    offs += element * (1 << size);
+    if (element_size < 8) {
+        offs ^= 8 - element_size;
+    }
 #endif
     offs += offsetof(CPUARMState, vfp.zregs[regno]);
     assert_fp_access_checked(s);
--
2.17.1
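A worked instance of the new arithmetic, assuming a big-endian host (values
chosen only for illustration): element 3 of a 16-bit-element vector sits at
little-endian byte offset 6; XOR-ing with 8 - element_size = 6 gives offset
0, the mirrored position within the same 8-byte unit, which is where a
big-endian host keeps that element.

    int element_size = 1 << MO_16;      /* 2 bytes */
    int offs = 3 * element_size;        /* 6, assuming little-endian layout */
    #ifdef HOST_WORDS_BIGENDIAN
    if (element_size < 8) {
        offs ^= 8 - element_size;       /* 6 ^ 6 = 0: swizzle in the unit */
    }
    #endif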
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  23 +++++++
 target/arm/sve_helper.c    | 114 +++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++++
 4 files changed, 297 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)

 DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sve_insr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_insr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_3(sve_rev_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
         memcpy(vd + n_siz, &tmp, n_ofs);
     }
 }
+
+#define DO_INSR(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc) \
+{                                                                  \
+    intptr_t opr_sz = simd_oprsz(desc);                            \
+    swap_memmove(vd + sizeof(TYPE), vn, opr_sz - sizeof(TYPE));    \
+    *(TYPE *)(vd + H(0)) = val;                                    \
+}
+
+DO_INSR(sve_insr_b, uint8_t, H1)
+DO_INSR(sve_insr_h, uint16_t, H1_2)
+DO_INSR(sve_insr_s, uint32_t, H1_4)
+DO_INSR(sve_insr_d, uint64_t, )
+
+#undef DO_INSR
+
+void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = bswap64(b);
+        *(uint64_t *)(vd + j) = bswap64(f);
+    }
+}
+
+static inline uint64_t hswap64(uint64_t h)
+{
+    uint64_t m = 0x0000ffff0000ffffull;
+    h = rol64(h, 32);
+    return ((h & m) << 16) | ((h >> 16) & m);
+}
+
+void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = hswap64(b);
+        *(uint64_t *)(vd + j) = hswap64(f);
+    }
+}
+
+void HELPER(sve_rev_s)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = rol64(b, 32);
+        *(uint64_t *)(vd + j) = rol64(f, 32);
+    }
+}
+
+void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
+        uint64_t f = *(uint64_t *)(vn + i);
+        uint64_t b = *(uint64_t *)(vn + j);
+        *(uint64_t *)(vd + i) = b;
+        *(uint64_t *)(vd + j) = f;
+    }
+}
+
+#define DO_TBL(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{                                                              \
+    intptr_t i, opr_sz = simd_oprsz(desc);                     \
+    uintptr_t elem = opr_sz / sizeof(TYPE);                    \
+    TYPE *d = vd, *n = vn, *m = vm;                            \
+    ARMVectorReg tmp;                                          \
+    if (unlikely(vd == vn)) {                                  \
+        n = memcpy(&tmp, vn, opr_sz);                          \
+    }                                                          \
+    for (i = 0; i < elem; i++) {                               \
+        TYPE j = m[H(i)];                                      \
+        d[H(i)] = j < elem ? n[H(j)] : 0;                      \
+    }                                                          \
+}
+
+DO_TBL(sve_tbl_b, uint8_t, H1)
+DO_TBL(sve_tbl_h, uint16_t, H2)
+DO_TBL(sve_tbl_s, uint32_t, H4)
+DO_TBL(sve_tbl_d, uint64_t, )
+
+#undef DO_TBL
+
+#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
+{                                                    \
+    intptr_t i, opr_sz = simd_oprsz(desc);           \
+    TYPED *d = vd;                                   \
+    TYPES *n = vn;                                   \
+    ARMVectorReg tmp;                                \
+    if (unlikely(vn - vd < opr_sz)) {                \
+        n = memcpy(&tmp, n, opr_sz / 2);             \
+    }                                                \
+    for (i = 0; i < opr_sz / sizeof(TYPED); i++) {   \
+        d[HD(i)] = n[HS(i)];                         \
+    }                                                \
+}
+
+DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
+DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
+
+DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
+DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
+
+#undef DO_UNPK
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
     return true;
 }

+/*
+ *** SVE Permute - Unpredicated Group
+ */
+
+static bool trans_DUP_s(DisasContext *s, arg_DUP_s *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_dup_i64(a->esz, vec_full_reg_offset(s, a->rd),
+                             vsz, vsz, cpu_reg_sp(s, a->rn));
+    }
+    return true;
+}
+
+static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a, uint32_t insn)
+{
+    if ((a->imm & 0x1f) == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        unsigned dofs = vec_full_reg_offset(s, a->rd);
+        unsigned esz, index;
+
+        esz = ctz32(a->imm);
+        index = a->imm >> (esz + 1);
+
+        if ((index << esz) < vsz) {
+            unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
+            tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
+        } else {
+            tcg_gen_gvec_dup64i(dofs, vsz, vsz, 0);
+        }
+    }
+    return true;
+}
+
+static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
+{
+    typedef void gen_insr(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
+    static gen_insr * const fns[4] = {
+        gen_helper_sve_insr_b, gen_helper_sve_insr_h,
+        gen_helper_sve_insr_s, gen_helper_sve_insr_d,
+    };
+    unsigned vsz = vec_full_reg_size(s);
+    TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+    TCGv_ptr t_zd = tcg_temp_new_ptr();
+    TCGv_ptr t_zn = tcg_temp_new_ptr();
+
+    tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, a->rd));
+    tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
+
+    fns[a->esz](t_zd, t_zn, val, desc);
+
+    tcg_temp_free_ptr(t_zd);
+    tcg_temp_free_ptr(t_zn);
+    tcg_temp_free_i32(desc);
+}
+
+static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 t = tcg_temp_new_i64();
+        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
+        do_insr_i64(s, a, t);
+        tcg_temp_free_i64(t);
+    }
+    return true;
+}
+
+static bool trans_INSR_r(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_insr_i64(s, a, cpu_reg(s, a->rm));
+    }
+    return true;
+}
+
+static bool trans_REV_v(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4] = {
+        gen_helper_sve_rev_b, gen_helper_sve_rev_h,
+        gen_helper_sve_rev_s, gen_helper_sve_rev_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_TBL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_sve_tbl_b, gen_helper_sve_tbl_h,
+        gen_helper_sve_tbl_s, gen_helper_sve_tbl_d
+    };
+
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vsz, vsz, 0, fns[a->esz]);
+    }
+    return true;
+}
+
+static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
+{
+    static gen_helper_gvec_2 * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_sve_sunpk_h, gen_helper_sve_uunpk_h },
+        { gen_helper_sve_sunpk_s, gen_helper_sve_uunpk_s },
+        { gen_helper_sve_sunpk_d, gen_helper_sve_uunpk_d },
+    };
+
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn)
+                           + (a->h ? vsz / 2 : 0),
+                           vsz, vsz, 0, fns[a->esz][a->u]);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@

 %imm4_16_p1 16:4 !function=plus1
 %imm6_22_5 22:1 5:5
+%imm7_22_16 22:2 16:5
 %imm8_16_10 16:5 10:3
 %imm9_16_10 16:s6 10:3

@@ -XXX,XX +XXX,XX @@

 # Three operand, vector element size
 @rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
+@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
+        &rrr_esz rn=%reg_movprfx

 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
@@ -XXX,XX +XXX,XX @@ CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
 EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
     &rrri rn=%reg_movprfx imm=%imm8_16_10

+### SVE Permute - Unpredicated Group
+
+# SVE broadcast general register
+DUP_s 00000101 .. 1 00000 001110 ..... ..... @rd_rn
+
+# SVE broadcast indexed element
+DUP_x 00000101 .. 1 ..... 001000 rn:5 rd:5 \
+      &rri imm=%imm7_22_16
+
+# SVE insert SIMD&FP scalar register
+INSR_f 00000101 .. 1 10100 001110 ..... ..... @rdn_rm
+
+# SVE insert general register
+INSR_r 00000101 .. 1 00100 001110 ..... ..... @rdn_rm
+
+# SVE reverse vector elements
+REV_v 00000101 .. 1 11000 001110 ..... ..... @rd_rn
+
+# SVE vector table lookup
+TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
+
+# SVE unpack vector elements
+UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
+
 ### SVE Predicate Logical Operations Group

 # SVE predicate logical operations
--
2.17.1
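The out-of-range rule in DO_TBL, restated on plain arrays (a standalone
sketch, not the helper itself): indices select from the first-operand
table, and any index at or beyond the element count produces zero instead
of wrapping.

    uint8_t n[4] = { 10, 20, 30, 40 };  /* table */
    uint8_t m[4] = { 3, 0, 7, 1 };      /* indices; 7 is out of range */
    uint8_t d[4];
    int i;

    for (i = 0; i < 4; i++) {
        d[i] = m[i] < 4 ? n[m[i]] : 0;  /* -> { 40, 10, 0, 20 } */
    }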
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 6 +
9
target/arm/sve_helper.c | 290 +++++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 120 +++++++++++++++
11
target/arm/sve.decode | 18 +++
12
4 files changed, 434 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_zip_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_uzp_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
+
28
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/sve_helper.c
34
+++ b/target/arm/sve_helper.c
35
@@ -XXX,XX +XXX,XX @@ DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
36
DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
37
38
#undef DO_UNPK
39
+
40
+/* Mask of bits included in the even numbered predicates of width esz.
41
+ * We also use this for expand_bits/compress_bits, and so extend the
42
+ * same pattern out to 16-bit units.
43
+ */
44
+static const uint64_t even_bit_esz_masks[5] = {
45
+ 0x5555555555555555ull,
46
+ 0x3333333333333333ull,
47
+ 0x0f0f0f0f0f0f0f0full,
48
+ 0x00ff00ff00ff00ffull,
49
+ 0x0000ffff0000ffffull,
50
+};
51
+
52
+/* Zero-extend units of 2**N bits to units of 2**(N+1) bits.
53
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
54
+ * we call half_shuffle64; this algorithm is from Hacker's Delight,
55
+ * section 7-2 Shuffling Bits.
56
+ */
57
+static uint64_t expand_bits(uint64_t x, int n)
58
+{
59
+ int i;
60
+
61
+ x &= 0xffffffffu;
62
+ for (i = 4; i >= n; i--) {
63
+ int sh = 1 << i;
64
+ x = ((x << sh) | x) & even_bit_esz_masks[i];
65
+ }
66
+ return x;
67
+}
68
+
69
+/* Compress units of 2**(N+1) bits to units of 2**N bits.
70
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
71
+ * we call half_unshuffle64; this algorithm is from Hacker's Delight,
72
+ * section 7-2 Shuffling Bits, where it is called an inverse half shuffle.
73
+ */
74
+static uint64_t compress_bits(uint64_t x, int n)
75
+{
76
+ int i;
77
+
78
+ for (i = n; i <= 4; i++) {
79
+ int sh = 1 << i;
80
+ x &= even_bit_esz_masks[i];
81
+ x = (x >> sh) | x;
82
+ }
83
+ return x & 0xffffffffu;
84
+}
85
+
86
+void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
87
+{
88
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
89
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
90
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
91
+ uint64_t *d = vd;
92
+ intptr_t i;
93
+
94
+ if (oprsz <= 8) {
95
+ uint64_t nn = *(uint64_t *)vn;
96
+ uint64_t mm = *(uint64_t *)vm;
97
+ int half = 4 * oprsz;
98
+
99
+ nn = extract64(nn, high * half, half);
100
+ mm = extract64(mm, high * half, half);
101
+ nn = expand_bits(nn, esz);
102
+ mm = expand_bits(mm, esz);
103
+ d[0] = nn + (mm << (1 << esz));
104
+ } else {
105
+ ARMPredicateReg tmp_n, tmp_m;
106
+
107
+ /* We produce output faster than we consume input.
108
+ Therefore we must be mindful of possible overlap. */
109
+ if ((vn - vd) < (uintptr_t)oprsz) {
110
+ vn = memcpy(&tmp_n, vn, oprsz);
111
+ }
112
+ if ((vm - vd) < (uintptr_t)oprsz) {
113
+ vm = memcpy(&tmp_m, vm, oprsz);
114
+ }
115
+ if (high) {
116
+ high = oprsz >> 1;
117
+ }
118
+
119
+ if ((high & 3) == 0) {
120
+ uint32_t *n = vn, *m = vm;
121
+ high >>= 2;
122
+
123
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
124
+ uint64_t nn = n[H4(high + i)];
125
+ uint64_t mm = m[H4(high + i)];
126
+
127
+ nn = expand_bits(nn, esz);
128
+ mm = expand_bits(mm, esz);
129
+ d[i] = nn + (mm << (1 << esz));
130
+ }
131
+ } else {
132
+ uint8_t *n = vn, *m = vm;
133
+ uint16_t *d16 = vd;
134
+
135
+ for (i = 0; i < oprsz / 2; i++) {
136
+ uint16_t nn = n[H1(high + i)];
137
+ uint16_t mm = m[H1(high + i)];
138
+
139
+ nn = expand_bits(nn, esz);
140
+ mm = expand_bits(mm, esz);
141
+ d16[H2(i)] = nn + (mm << (1 << esz));
142
+ }
143
+ }
144
+ }
145
+}
146
+
147
+void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
148
+{
149
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
150
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
151
+ int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
152
+ uint64_t *d = vd, *n = vn, *m = vm;
153
+ uint64_t l, h;
154
+ intptr_t i;
155
+
156
+ if (oprsz <= 8) {
157
+ l = compress_bits(n[0] >> odd, esz);
158
+ h = compress_bits(m[0] >> odd, esz);
159
+ d[0] = extract64(l + (h << (4 * oprsz)), 0, 8 * oprsz);
160
+ } else {
161
+ ARMPredicateReg tmp_m;
162
+ intptr_t oprsz_16 = oprsz / 16;
163
+
164
+ if ((vm - vd) < (uintptr_t)oprsz) {
165
+ m = memcpy(&tmp_m, vm, oprsz);
166
+ }
167
+
168
+ for (i = 0; i < oprsz_16; i++) {
169
+ l = n[2 * i + 0];
170
+ h = n[2 * i + 1];
171
+ l = compress_bits(l >> odd, esz);
172
+ h = compress_bits(h >> odd, esz);
173
+ d[i] = l + (h << 32);
174
+ }
175
+
176
+ /* For VL which is not a power of 2, the results from M do not
177
+ align nicely with the uint64_t for D. Put the aligned results
178
+ from M into TMP_M and then copy it into place afterward. */
179
+ if (oprsz & 15) {
180
+ d[i] = compress_bits(n[2 * i] >> odd, esz);
181
+
182
+ for (i = 0; i < oprsz_16; i++) {
183
+ l = m[2 * i + 0];
184
+ h = m[2 * i + 1];
185
+ l = compress_bits(l >> odd, esz);
186
+ h = compress_bits(h >> odd, esz);
187
+ tmp_m.p[i] = l + (h << 32);
188
+ }
189
+ tmp_m.p[i] = compress_bits(m[2 * i] >> odd, esz);
190
+
191
+ swap_memmove(vd + oprsz / 2, &tmp_m, oprsz / 2);
192
+ } else {
193
+ for (i = 0; i < oprsz_16; i++) {
194
+ l = m[2 * i + 0];
195
+ h = m[2 * i + 1];
196
+ l = compress_bits(l >> odd, esz);
197
+ h = compress_bits(h >> odd, esz);
198
+ d[oprsz_16 + i] = l + (h << 32);
199
+ }
200
+ }
201
+ }
202
+}
203
+
204
+void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
205
+{
206
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
207
+ uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
208
+ bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
209
+ uint64_t *d = vd, *n = vn, *m = vm;
210
+ uint64_t mask;
211
+ int shr, shl;
212
+ intptr_t i;
213
+
214
+ shl = 1 << esz;
215
+ shr = 0;
216
+ mask = even_bit_esz_masks[esz];
217
+ if (odd) {
218
+ mask <<= shl;
219
+ shr = shl;
220
+ shl = 0;
221
+ }
222
+
223
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
224
+ uint64_t nn = (n[i] & mask) >> shr;
225
+ uint64_t mm = (m[i] & mask) << shl;
226
+ d[i] = nn + mm;
227
+ }
228
+}
229
+
230
+/* Reverse units of 2**N bits. */
231
+static uint64_t reverse_bits_64(uint64_t x, int n)
232
+{
233
+ int i, sh;
234
+
235
+ x = bswap64(x);
236
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
237
+ uint64_t mask = even_bit_esz_masks[i];
238
+ x = ((x & mask) << sh) | ((x >> sh) & mask);
239
+ }
240
+ return x;
241
+}
242
+
243
+static uint8_t reverse_bits_8(uint8_t x, int n)
244
+{
245
+ static const uint8_t mask[3] = { 0x55, 0x33, 0x0f };
246
+ int i, sh;
247
+
248
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
249
+ x = ((x & mask[i]) << sh) | ((x >> sh) & mask[i]);
250
+ }
251
+ return x;
252
+}
253
+
254
+void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)
255
+{
256
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
257
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
258
+ intptr_t i, oprsz_2 = oprsz / 2;
259
+
260
+ if (oprsz <= 8) {
261
+ uint64_t l = *(uint64_t *)vn;
262
+ l = reverse_bits_64(l << (64 - 8 * oprsz), esz);
263
+ *(uint64_t *)vd = l;
264
+ } else if ((oprsz & 15) == 0) {
265
+ for (i = 0; i < oprsz_2; i += 8) {
266
+ intptr_t ih = oprsz - 8 - i;
267
+ uint64_t l = reverse_bits_64(*(uint64_t *)(vn + i), esz);
268
+ uint64_t h = reverse_bits_64(*(uint64_t *)(vn + ih), esz);
269
+ *(uint64_t *)(vd + i) = h;
270
+ *(uint64_t *)(vd + ih) = l;
271
+ }
272
+ } else {
273
+ for (i = 0; i < oprsz_2; i += 1) {
274
+ intptr_t il = H1(i);
275
+ intptr_t ih = H1(oprsz - 1 - i);
276
+ uint8_t l = reverse_bits_8(*(uint8_t *)(vn + il), esz);
277
+ uint8_t h = reverse_bits_8(*(uint8_t *)(vn + ih), esz);
278
+ *(uint8_t *)(vd + il) = h;
279
+ *(uint8_t *)(vd + ih) = l;
280
+ }
281
+ }
282
+}
283
+
284
+void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
285
+{
286
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
287
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
288
+ uint64_t *d = vd;
289
+ intptr_t i;
290
+
291
+ if (oprsz <= 8) {
292
+ uint64_t nn = *(uint64_t *)vn;
293
+ int half = 4 * oprsz;
294
+
295
+ nn = extract64(nn, high * half, half);
296
+ nn = expand_bits(nn, 0);
297
+ d[0] = nn;
298
+ } else {
299
+ ARMPredicateReg tmp_n;
300
+
301
+ /* We produce output faster than we consume input.
302
+ Therefore we must be mindful of possible overlap. */
303
+ if ((vn - vd) < (uintptr_t)oprsz) {
304
+ vn = memcpy(&tmp_n, vn, oprsz);
305
+ }
306
+ if (high) {
307
+ high = oprsz >> 1;
308
+ }
309
+
310
+ if ((high & 3) == 0) {
311
+ uint32_t *n = vn;
312
+ high >>= 2;
313
+
314
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
315
+ uint64_t nn = n[H4(high + i)];
316
+ d[i] = expand_bits(nn, 0);
317
+ }
318
+ } else {
319
+ uint16_t *d16 = vd;
320
+ uint8_t *n = vn;
321
+
322
+ for (i = 0; i < oprsz / 2; i++) {
323
+ uint16_t nn = n[H1(high + i)];
324
+ d16[H2(i)] = expand_bits(nn, 0);
325
+ }
326
+ }
327
+ }
328
+}
329
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
330
index XXXXXXX..XXXXXXX 100644
331
--- a/target/arm/translate-sve.c
332
+++ b/target/arm/translate-sve.c
333
@@ -XXX,XX +XXX,XX @@ static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
334
return true;
335
}
336
337
+/*
338
+ *** SVE Permute - Predicates Group
339
+ */
340
+
341
+static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
342
+ gen_helper_gvec_3 *fn)
343
+{
344
+ if (!sve_access_check(s)) {
345
+ return true;
346
+ }
347
+
348
+ unsigned vsz = pred_full_reg_size(s);
349
+
350
+ /* Predicate sizes may be smaller and cannot use simd_desc.
351
+ We cannot round up, as we do elsewhere, because we need
352
+ the exact size for ZIP2 and REV. We retain the style for
353
+ the other helpers for consistency. */
354
+ TCGv_ptr t_d = tcg_temp_new_ptr();
355
+ TCGv_ptr t_n = tcg_temp_new_ptr();
356
+ TCGv_ptr t_m = tcg_temp_new_ptr();
357
+ TCGv_i32 t_desc;
358
+ int desc;
359
+
360
+ desc = vsz - 2;
361
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
362
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
363
+
364
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
365
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
366
+ tcg_gen_addi_ptr(t_m, cpu_env, pred_full_reg_offset(s, a->rm));
367
+ t_desc = tcg_const_i32(desc);
368
+
369
+ fn(t_d, t_n, t_m, t_desc);
370
+
371
+ tcg_temp_free_ptr(t_d);
372
+ tcg_temp_free_ptr(t_n);
373
+ tcg_temp_free_ptr(t_m);
374
+ tcg_temp_free_i32(t_desc);
375
+ return true;
376
+}
377
+
378
+static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
379
+ gen_helper_gvec_2 *fn)
380
+{
381
+ if (!sve_access_check(s)) {
382
+ return true;
383
+ }
384
+
385
+ unsigned vsz = pred_full_reg_size(s);
386
+ TCGv_ptr t_d = tcg_temp_new_ptr();
387
+ TCGv_ptr t_n = tcg_temp_new_ptr();
388
+ TCGv_i32 t_desc;
389
+ int desc;
390
+
391
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
392
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
393
+
394
+ /* Predicate sizes may be smaller and cannot use simd_desc.
395
+ We cannot round up, as we do elsewhere, because we need
396
+ the exact size for ZIP2 and REV. We retain the style for
397
+ the other helpers for consistency. */
398
+
399
+ desc = vsz - 2;
400
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
401
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
402
+ t_desc = tcg_const_i32(desc);
403
+
404
+ fn(t_d, t_n, t_desc);
405
+
406
+ tcg_temp_free_i32(t_desc);
407
+ tcg_temp_free_ptr(t_d);
408
+ tcg_temp_free_ptr(t_n);
409
+ return true;
410
+}
411
+
412
+static bool trans_ZIP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
413
+{
414
+ return do_perm_pred3(s, a, 0, gen_helper_sve_zip_p);
415
+}
416
+
417
+static bool trans_ZIP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
418
+{
419
+ return do_perm_pred3(s, a, 1, gen_helper_sve_zip_p);
420
+}
421
+
422
+static bool trans_UZP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
423
+{
424
+ return do_perm_pred3(s, a, 0, gen_helper_sve_uzp_p);
425
+}
426
+
427
+static bool trans_UZP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
428
+{
429
+ return do_perm_pred3(s, a, 1, gen_helper_sve_uzp_p);
430
+}
431
+
432
+static bool trans_TRN1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
433
+{
434
+ return do_perm_pred3(s, a, 0, gen_helper_sve_trn_p);
435
+}
436
+
437
+static bool trans_TRN2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
438
+{
439
+ return do_perm_pred3(s, a, 1, gen_helper_sve_trn_p);
440
+}
441
+
442
+static bool trans_REV_p(DisasContext *s, arg_rr_esz *a, uint32_t insn)
443
+{
444
+ return do_perm_pred2(s, a, 0, gen_helper_sve_rev_p);
445
+}
446
+
447
+static bool trans_PUNPKLO(DisasContext *s, arg_PUNPKLO *a, uint32_t insn)
448
+{
449
+ return do_perm_pred2(s, a, 0, gen_helper_sve_punpk_p);
450
+}
451
+
452
+static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
453
+{
454
+ return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
455
+}
456
+
457
/*
458
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
459
*/
460
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
461
index XXXXXXX..XXXXXXX 100644
462
--- a/target/arm/sve.decode
463
+++ b/target/arm/sve.decode
464
@@ -XXX,XX +XXX,XX @@
465
466
# Three operand, vector element size
467
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
468
+@pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
469
@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
470
&rrr_esz rn=%reg_movprfx
471
472
@@ -XXX,XX +XXX,XX @@ TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
473
# SVE unpack vector elements
474
UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
475
476
+### SVE Permute - Predicates Group
477
+
478
+# SVE permute predicate elements
479
+ZIP1_p 00000101 .. 10 .... 010 000 0 .... 0 .... @pd_pn_pm
480
+ZIP2_p 00000101 .. 10 .... 010 001 0 .... 0 .... @pd_pn_pm
481
+UZP1_p 00000101 .. 10 .... 010 010 0 .... 0 .... @pd_pn_pm
482
+UZP2_p 00000101 .. 10 .... 010 011 0 .... 0 .... @pd_pn_pm
483
+TRN1_p 00000101 .. 10 .... 010 100 0 .... 0 .... @pd_pn_pm
484
+TRN2_p 00000101 .. 10 .... 010 101 0 .... 0 .... @pd_pn_pm
485
+
486
+# SVE reverse predicate elements
487
+REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
488
+
489
+# SVE unpack predicate elements
490
+PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
491
+PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
492
+
493
### SVE Predicate Logical Operations Group
494
495
# SVE predicate logical operations
496
--
2.17.1

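The desc packing in do_perm_pred3() above is easy to misread, so here is a tiny stand-alone C sketch of the same encode/decode round trip. The SIMD_OPRSZ_BITS and SIMD_DATA_SHIFT values below are assumptions for illustration, not copied from the QEMU headers; the point is only that vsz - 2 (predicate sizes run from 2 to 32 bytes) fits below the data field, which then carries esz and high_odd:

#include <stdint.h>
#include <stdio.h>

#define SIMD_OPRSZ_BITS 5   /* assumed field widths, mirroring the   */
#define SIMD_DATA_SHIFT 10  /* layout the helpers above decode       */

/* simplified local copies of the QEMU bitops helpers */
static uint32_t deposit32(uint32_t x, int pos, int len, uint32_t val)
{
    uint32_t mask = ((1u << len) - 1) << pos;
    return (x & ~mask) | ((val << pos) & mask);
}

static uint32_t extract32(uint32_t x, int pos, int len)
{
    return (x >> pos) & ((1u << len) - 1);
}

int main(void)
{
    unsigned vsz = 32;       /* predicate bytes: at most 32, so vsz-2 fits in 5 bits */
    unsigned esz = 1, high_odd = 1;

    uint32_t desc = vsz - 2;
    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
    desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);

    /* the helper recovers the three fields the same way */
    printf("oprsz=%u esz=%u high_odd=%u\n",
           extract32(desc, 0, SIMD_OPRSZ_BITS) + 2,
           extract32(desc, SIMD_DATA_SHIFT, 2),
           extract32(desc, SIMD_DATA_SHIFT + 2, 2));
    return 0;
}
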
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 15 ++++++++
target/arm/sve_helper.c | 72 ++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 10 +++++
4 files changed, 172 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+
37
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
38
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
39
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
40
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/sve_helper.c
43
+++ b/target/arm/sve_helper.c
44
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
45
}
46
}
47
}
48
+
49
+#define DO_ZIP(NAME, TYPE, H) \
50
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
51
+{ \
52
+ intptr_t oprsz = simd_oprsz(desc); \
53
+ intptr_t i, oprsz_2 = oprsz / 2; \
54
+ ARMVectorReg tmp_n, tmp_m; \
55
+ /* We produce output faster than we consume input. \
56
+ Therefore we must be mindful of possible overlap. */ \
57
+ if (unlikely((vn - vd) < (uintptr_t)oprsz)) { \
58
+ vn = memcpy(&tmp_n, vn, oprsz_2); \
59
+ } \
60
+ if (unlikely((vm - vd) < (uintptr_t)oprsz)) { \
61
+ vm = memcpy(&tmp_m, vm, oprsz_2); \
62
+ } \
63
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
64
+ *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i)); \
65
+ *(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
66
+ } \
67
+}
68
+
69
+DO_ZIP(sve_zip_b, uint8_t, H1)
70
+DO_ZIP(sve_zip_h, uint16_t, H1_2)
71
+DO_ZIP(sve_zip_s, uint32_t, H1_4)
72
+DO_ZIP(sve_zip_d, uint64_t, )
73
+
74
+#define DO_UZP(NAME, TYPE, H) \
75
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
76
+{ \
77
+ intptr_t oprsz = simd_oprsz(desc); \
78
+ intptr_t oprsz_2 = oprsz / 2; \
79
+ intptr_t odd_ofs = simd_data(desc); \
80
+ intptr_t i; \
81
+ ARMVectorReg tmp_m; \
82
+ if (unlikely((vm - vd) < (uintptr_t)oprsz)) { \
83
+ vm = memcpy(&tmp_m, vm, oprsz); \
84
+ } \
85
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
86
+ *(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs)); \
87
+ } \
88
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
89
+ *(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
90
+ } \
91
+}
92
+
93
+DO_UZP(sve_uzp_b, uint8_t, H1)
94
+DO_UZP(sve_uzp_h, uint16_t, H1_2)
95
+DO_UZP(sve_uzp_s, uint32_t, H1_4)
96
+DO_UZP(sve_uzp_d, uint64_t, )
97
+
98
+#define DO_TRN(NAME, TYPE, H) \
99
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
100
+{ \
101
+ intptr_t oprsz = simd_oprsz(desc); \
102
+ intptr_t odd_ofs = simd_data(desc); \
103
+ intptr_t i; \
104
+ for (i = 0; i < oprsz; i += 2 * sizeof(TYPE)) { \
105
+ TYPE ae = *(TYPE *)(vn + H(i + odd_ofs)); \
106
+ TYPE be = *(TYPE *)(vm + H(i + odd_ofs)); \
107
+ *(TYPE *)(vd + H(i + 0)) = ae; \
108
+ *(TYPE *)(vd + H(i + sizeof(TYPE))) = be; \
109
+ } \
110
+}
111
+
112
+DO_TRN(sve_trn_b, uint8_t, H1)
113
+DO_TRN(sve_trn_h, uint16_t, H1_2)
114
+DO_TRN(sve_trn_s, uint32_t, H1_4)
115
+DO_TRN(sve_trn_d, uint64_t, )
116
+
117
+#undef DO_ZIP
118
+#undef DO_UZP
119
+#undef DO_TRN
120
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/target/arm/translate-sve.c
123
+++ b/target/arm/translate-sve.c
124
@@ -XXX,XX +XXX,XX @@ static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
125
return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
126
}
127
128
+/*
129
+ *** SVE Permute - Interleaving Group
130
+ */
131
+
132
+static bool do_zip(DisasContext *s, arg_rrr_esz *a, bool high)
133
+{
134
+ static gen_helper_gvec_3 * const fns[4] = {
135
+ gen_helper_sve_zip_b, gen_helper_sve_zip_h,
136
+ gen_helper_sve_zip_s, gen_helper_sve_zip_d,
137
+ };
138
+
139
+ if (sve_access_check(s)) {
140
+ unsigned vsz = vec_full_reg_size(s);
141
+ unsigned high_ofs = high ? vsz / 2 : 0;
142
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
143
+ vec_full_reg_offset(s, a->rn) + high_ofs,
144
+ vec_full_reg_offset(s, a->rm) + high_ofs,
145
+ vsz, vsz, 0, fns[a->esz]);
146
+ }
147
+ return true;
148
+}
149
+
150
+static bool do_zzz_data_ool(DisasContext *s, arg_rrr_esz *a, int data,
151
+ gen_helper_gvec_3 *fn)
152
+{
153
+ if (sve_access_check(s)) {
154
+ unsigned vsz = vec_full_reg_size(s);
155
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
156
+ vec_full_reg_offset(s, a->rn),
157
+ vec_full_reg_offset(s, a->rm),
158
+ vsz, vsz, data, fn);
159
+ }
160
+ return true;
161
+}
162
+
163
+static bool trans_ZIP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
164
+{
165
+ return do_zip(s, a, false);
166
+}
167
+
168
+static bool trans_ZIP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
169
+{
170
+ return do_zip(s, a, true);
171
+}
172
+
173
+static gen_helper_gvec_3 * const uzp_fns[4] = {
174
+ gen_helper_sve_uzp_b, gen_helper_sve_uzp_h,
175
+ gen_helper_sve_uzp_s, gen_helper_sve_uzp_d,
176
+};
177
+
178
+static bool trans_UZP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
179
+{
180
+ return do_zzz_data_ool(s, a, 0, uzp_fns[a->esz]);
181
+}
182
+
183
+static bool trans_UZP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
184
+{
185
+ return do_zzz_data_ool(s, a, 1 << a->esz, uzp_fns[a->esz]);
186
+}
187
+
188
+static gen_helper_gvec_3 * const trn_fns[4] = {
189
+ gen_helper_sve_trn_b, gen_helper_sve_trn_h,
190
+ gen_helper_sve_trn_s, gen_helper_sve_trn_d,
191
+};
192
+
193
+static bool trans_TRN1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
194
+{
195
+ return do_zzz_data_ool(s, a, 0, trn_fns[a->esz]);
196
+}
197
+
198
+static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
199
+{
200
+ return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
201
+}
202
+
203
/*
204
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
205
*/
206
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/sve.decode
209
+++ b/target/arm/sve.decode
210
@@ -XXX,XX +XXX,XX @@ REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
211
PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
212
PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
213
214
+### SVE Permute - Interleaving Group
215
+
216
+# SVE permute vector elements
217
+ZIP1_z 00000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
218
+ZIP2_z 00000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
219
+UZP1_z 00000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
220
+UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
221
+TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
222
+TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
223
+
224
### SVE Predicate Logical Operations Group
225
226
# SVE predicate logical operations
227
--
2.17.1

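As a reference for the three interleaving operations the helpers above implement, this stand-alone C model (toy uint8_t arrays stand in for vector registers; it is not QEMU code) produces the same element shuffles. ZIP2 is the same zip applied to the high halves, which is what high_ofs selects, and UZP2/TRN2 take the odd rather than even lanes, which is what the data argument selects. The copy-to-a-temporary-then-memcpy shape mirrors the overlap concern the DO_ZIP comment describes:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N 8                            /* a toy 8-byte "vector" */

static void zip(uint8_t *d, const uint8_t *n, const uint8_t *m)
{
    uint8_t t[N];
    for (int i = 0; i < N / 2; i++) {
        t[2 * i + 0] = n[i];           /* even lanes from the first input */
        t[2 * i + 1] = m[i];           /* odd lanes from the second */
    }
    memcpy(d, t, N);
}

static void uzp(uint8_t *d, const uint8_t *n, const uint8_t *m, int odd)
{
    uint8_t t[N];
    for (int i = 0; i < N / 2; i++) {
        t[i] = n[2 * i + odd];         /* even (or odd) lanes of n ... */
        t[N / 2 + i] = m[2 * i + odd]; /* ... then of m */
    }
    memcpy(d, t, N);
}

static void trn(uint8_t *d, const uint8_t *n, const uint8_t *m, int odd)
{
    uint8_t t[N];
    for (int i = 0; i < N; i += 2) {   /* interleave within each pair */
        t[i + 0] = n[i + odd];
        t[i + 1] = m[i + odd];
    }
    memcpy(d, t, N);
}

static void show(const char *tag, const uint8_t *d)
{
    printf("%s:", tag);
    for (int i = 0; i < N; i++) {
        printf(" %d", d[i]);
    }
    printf("\n");
}

int main(void)
{
    uint8_t a[N] = {0, 1, 2, 3, 4, 5, 6, 7};
    uint8_t b[N] = {10, 11, 12, 13, 14, 15, 16, 17};
    uint8_t d[N];

    zip(d, a, b);    show("zip1", d);  /* 0 10 1 11 2 12 3 13 */
    uzp(d, a, b, 0); show("uzp1", d);  /* 0 2 4 6 10 12 14 16 */
    trn(d, a, b, 1); show("trn2", d);  /* 1 11 3 13 5 15 7 17 */
    return 0;
}
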
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 3 +++
target/arm/sve_helper.c | 34 ++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 12 ++++++++++++
target/arm/sve.decode | 6 ++++++
4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+
25
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/sve_helper.c
31
+++ b/target/arm/sve_helper.c
32
@@ -XXX,XX +XXX,XX @@ DO_TRN(sve_trn_d, uint64_t, )
33
#undef DO_ZIP
34
#undef DO_UZP
35
#undef DO_TRN
36
+
37
+void HELPER(sve_compact_s)(void *vd, void *vn, void *vg, uint32_t desc)
38
+{
39
+ intptr_t i, j, opr_sz = simd_oprsz(desc) / 4;
40
+ uint32_t *d = vd, *n = vn;
41
+ uint8_t *pg = vg;
42
+
43
+ for (i = j = 0; i < opr_sz; i++) {
44
+ if (pg[H1(i / 2)] & (i & 1 ? 0x10 : 0x01)) {
45
+ d[H4(j)] = n[H4(i)];
46
+ j++;
47
+ }
48
+ }
49
+ for (; j < opr_sz; j++) {
50
+ d[H4(j)] = 0;
51
+ }
52
+}
53
+
54
+void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
55
+{
56
+ intptr_t i, j, opr_sz = simd_oprsz(desc) / 8;
57
+ uint64_t *d = vd, *n = vn;
58
+ uint8_t *pg = vg;
59
+
60
+ for (i = j = 0; i < opr_sz; i++) {
61
+ if (pg[H1(i)] & 1) {
62
+ d[j] = n[i];
63
+ j++;
64
+ }
65
+ }
66
+ for (; j < opr_sz; j++) {
67
+ d[j] = 0;
68
+ }
69
+}
70
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-sve.c
73
+++ b/target/arm/translate-sve.c
74
@@ -XXX,XX +XXX,XX @@ static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
75
return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
76
}
77
78
+/*
79
+ *** SVE Permute Vector - Predicated Group
80
+ */
81
+
82
+static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
83
+{
84
+ static gen_helper_gvec_3 * const fns[4] = {
85
+ NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
86
+ };
87
+ return do_zpz_ool(s, a, fns[a->esz]);
88
+}
89
+
90
/*
91
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
92
*/
93
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/sve.decode
96
+++ b/target/arm/sve.decode
97
@@ -XXX,XX +XXX,XX @@ UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
98
TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
99
TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
100
101
+### SVE Permute - Predicated Group
102
+
103
+# SVE compress active elements
104
+# Note esz >= 2
105
+COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
106
+
107
### SVE Predicate Logical Operations Group
108
109
# SVE predicate logical operations
110
--
2.17.1

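The two compact helpers above share one shape: pack the active elements down to the low end of the destination and zero-fill the rest. A minimal stand-alone sketch of that result, with one bool per element standing in for the real predicate layout (the actual helpers read one predicate bit per byte of the element):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t n[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    _Bool   pg[8] = {0, 1, 1, 0, 0, 1, 0, 0};
    uint32_t d[8];
    int j = 0;

    for (int i = 0; i < 8; i++) {       /* pack active elements down */
        if (pg[i]) {
            d[j++] = n[i];
        }
    }
    for (; j < 8; j++) {                /* zero-fill the tail */
        d[j] = 0;
    }
    for (int i = 0; i < 8; i++) {
        printf("%u ", d[i]);            /* 11 12 15 0 0 0 0 0 */
    }
    printf("\n");
    return 0;
}
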
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 2 +
target/arm/sve_helper.c | 12 ++
target/arm/translate-sve.c | 328 +++++++++++++++++++++++++++++++++++++
target/arm/sve.decode | 20 +++
4 files changed, 362 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
23
+
24
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve_helper.c
30
+++ b/target/arm/sve_helper.c
31
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
32
d[j] = 0;
33
}
34
}
35
+
36
+/* Similar to the ARM LastActiveElement pseudocode function, except the
37
+ * result is multiplied by the element size. This includes the not found
38
+ * indication; e.g. not found for esz=3 is -8.
39
+ */
40
+int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
41
+{
42
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
43
+ intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
44
+
45
+ return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
46
+}
47
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-sve.c
50
+++ b/target/arm/translate-sve.c
51
@@ -XXX,XX +XXX,XX @@ static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
52
return do_zpz_ool(s, a, fns[a->esz]);
53
}
54
55
+/* Call the helper that computes the ARM LastActiveElement pseudocode
56
+ * function, scaled by the element size. This includes the not found
57
+ * indication; e.g. not found for esz=3 is -8.
58
+ */
59
+static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
60
+{
61
+ /* Predicate sizes may be smaller and cannot use simd_desc. We cannot
62
+ * round up, as we do elsewhere, because we need the exact size.
63
+ */
64
+ TCGv_ptr t_p = tcg_temp_new_ptr();
65
+ TCGv_i32 t_desc;
66
+ unsigned vsz = pred_full_reg_size(s);
67
+ unsigned desc;
68
+
69
+ desc = vsz - 2;
70
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
71
+
72
+ tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
73
+ t_desc = tcg_const_i32(desc);
74
+
75
+ gen_helper_sve_last_active_element(ret, t_p, t_desc);
76
+
77
+ tcg_temp_free_i32(t_desc);
78
+ tcg_temp_free_ptr(t_p);
79
+}
80
+
81
+/* Increment LAST to the offset of the next element in the vector,
82
+ * wrapping around to 0.
83
+ */
84
+static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
85
+{
86
+ unsigned vsz = vec_full_reg_size(s);
87
+
88
+ tcg_gen_addi_i32(last, last, 1 << esz);
89
+ if (is_power_of_2(vsz)) {
90
+ tcg_gen_andi_i32(last, last, vsz - 1);
91
+ } else {
92
+ TCGv_i32 max = tcg_const_i32(vsz);
93
+ TCGv_i32 zero = tcg_const_i32(0);
94
+ tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
95
+ tcg_temp_free_i32(max);
96
+ tcg_temp_free_i32(zero);
97
+ }
98
+}
99
+
100
+/* If LAST < 0, set LAST to the offset of the last element in the vector. */
101
+static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
102
+{
103
+ unsigned vsz = vec_full_reg_size(s);
104
+
105
+ if (is_power_of_2(vsz)) {
106
+ tcg_gen_andi_i32(last, last, vsz - 1);
107
+ } else {
108
+ TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
109
+ TCGv_i32 zero = tcg_const_i32(0);
110
+ tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
111
+ tcg_temp_free_i32(max);
112
+ tcg_temp_free_i32(zero);
113
+ }
114
+}
115
+
116
+/* Load an unsigned element of ESZ from BASE+OFS. */
117
+static TCGv_i64 load_esz(TCGv_ptr base, int ofs, int esz)
118
+{
119
+ TCGv_i64 r = tcg_temp_new_i64();
120
+
121
+ switch (esz) {
122
+ case 0:
123
+ tcg_gen_ld8u_i64(r, base, ofs);
124
+ break;
125
+ case 1:
126
+ tcg_gen_ld16u_i64(r, base, ofs);
127
+ break;
128
+ case 2:
129
+ tcg_gen_ld32u_i64(r, base, ofs);
130
+ break;
131
+ case 3:
132
+ tcg_gen_ld_i64(r, base, ofs);
133
+ break;
134
+ default:
135
+ g_assert_not_reached();
136
+ }
137
+ return r;
138
+}
139
+
140
+/* Load an unsigned element of ESZ from RM[LAST]. */
141
+static TCGv_i64 load_last_active(DisasContext *s, TCGv_i32 last,
142
+ int rm, int esz)
143
+{
144
+ TCGv_ptr p = tcg_temp_new_ptr();
145
+ TCGv_i64 r;
146
+
147
+ /* Convert offset into vector into offset into ENV.
148
+ * The final adjustment for the vector register base
149
+ * is added via constant offset to the load.
150
+ */
151
+#ifdef HOST_WORDS_BIGENDIAN
152
+ /* Adjust for element ordering. See vec_reg_offset. */
153
+ if (esz < 3) {
154
+ tcg_gen_xori_i32(last, last, 8 - (1 << esz));
155
+ }
156
+#endif
157
+ tcg_gen_ext_i32_ptr(p, last);
158
+ tcg_gen_add_ptr(p, p, cpu_env);
159
+
160
+ r = load_esz(p, vec_full_reg_offset(s, rm), esz);
161
+ tcg_temp_free_ptr(p);
162
+
163
+ return r;
164
+}
165
+
166
+/* Compute CLAST for a Zreg. */
167
+static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
168
+{
169
+ TCGv_i32 last;
170
+ TCGLabel *over;
171
+ TCGv_i64 ele;
172
+ unsigned vsz, esz = a->esz;
173
+
174
+ if (!sve_access_check(s)) {
175
+ return true;
176
+ }
177
+
178
+ last = tcg_temp_local_new_i32();
179
+ over = gen_new_label();
180
+
181
+ find_last_active(s, last, esz, a->pg);
182
+
183
+ /* There is of course no movcond for a 2048-bit vector,
184
+ * so we must branch over the actual store.
185
+ */
186
+ tcg_gen_brcondi_i32(TCG_COND_LT, last, 0, over);
187
+
188
+ if (!before) {
189
+ incr_last_active(s, last, esz);
190
+ }
191
+
192
+ ele = load_last_active(s, last, a->rm, esz);
193
+ tcg_temp_free_i32(last);
194
+
195
+ vsz = vec_full_reg_size(s);
196
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd), vsz, vsz, ele);
197
+ tcg_temp_free_i64(ele);
198
+
199
+ /* If this insn used MOVPRFX, we may need a second move. */
200
+ if (a->rd != a->rn) {
201
+ TCGLabel *done = gen_new_label();
202
+ tcg_gen_br(done);
203
+
204
+ gen_set_label(over);
205
+ do_mov_z(s, a->rd, a->rn);
206
+
207
+ gen_set_label(done);
208
+ } else {
209
+ gen_set_label(over);
210
+ }
211
+ return true;
212
+}
213
+
214
+static bool trans_CLASTA_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
215
+{
216
+ return do_clast_vector(s, a, false);
217
+}
218
+
219
+static bool trans_CLASTB_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
220
+{
221
+ return do_clast_vector(s, a, true);
222
+}
223
+
224
+/* Compute CLAST for a scalar. */
225
+static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
226
+ bool before, TCGv_i64 reg_val)
227
+{
228
+ TCGv_i32 last = tcg_temp_new_i32();
229
+ TCGv_i64 ele, cmp, zero;
230
+
231
+ find_last_active(s, last, esz, pg);
232
+
233
+ /* Extend the original value of last prior to incrementing. */
234
+ cmp = tcg_temp_new_i64();
235
+ tcg_gen_ext_i32_i64(cmp, last);
236
+
237
+ if (!before) {
238
+ incr_last_active(s, last, esz);
239
+ }
240
+
241
+ /* The conceit here is that while last < 0 indicates not found, after
242
+ * adjusting for cpu_env->vfp.zregs[rm], it is still a valid address
243
+ * from which we can load garbage. We then discard the garbage with
244
+ * a conditional move.
245
+ */
246
+ ele = load_last_active(s, last, rm, esz);
247
+ tcg_temp_free_i32(last);
248
+
249
+ zero = tcg_const_i64(0);
250
+ tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
251
+
252
+ tcg_temp_free_i64(zero);
253
+ tcg_temp_free_i64(cmp);
254
+ tcg_temp_free_i64(ele);
255
+}
256
+
257
+/* Compute CLAST for a Vreg. */
258
+static bool do_clast_fp(DisasContext *s, arg_rpr_esz *a, bool before)
259
+{
260
+ if (sve_access_check(s)) {
261
+ int esz = a->esz;
262
+ int ofs = vec_reg_offset(s, a->rd, 0, esz);
263
+ TCGv_i64 reg = load_esz(cpu_env, ofs, esz);
264
+
265
+ do_clast_scalar(s, esz, a->pg, a->rn, before, reg);
266
+ write_fp_dreg(s, a->rd, reg);
267
+ tcg_temp_free_i64(reg);
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_CLASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
273
+{
274
+ return do_clast_fp(s, a, false);
275
+}
276
+
277
+static bool trans_CLASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
278
+{
279
+ return do_clast_fp(s, a, true);
280
+}
281
+
282
+/* Compute CLAST for a Xreg. */
283
+static bool do_clast_general(DisasContext *s, arg_rpr_esz *a, bool before)
284
+{
285
+ TCGv_i64 reg;
286
+
287
+ if (!sve_access_check(s)) {
288
+ return true;
289
+ }
290
+
291
+ reg = cpu_reg(s, a->rd);
292
+ switch (a->esz) {
293
+ case 0:
294
+ tcg_gen_ext8u_i64(reg, reg);
295
+ break;
296
+ case 1:
297
+ tcg_gen_ext16u_i64(reg, reg);
298
+ break;
299
+ case 2:
300
+ tcg_gen_ext32u_i64(reg, reg);
301
+ break;
302
+ case 3:
303
+ break;
304
+ default:
305
+ g_assert_not_reached();
306
+ }
307
+
308
+ do_clast_scalar(s, a->esz, a->pg, a->rn, before, reg);
309
+ return true;
310
+}
311
+
312
+static bool trans_CLASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
313
+{
314
+ return do_clast_general(s, a, false);
315
+}
316
+
317
+static bool trans_CLASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
318
+{
319
+ return do_clast_general(s, a, true);
320
+}
321
+
322
+/* Compute LAST for a scalar. */
323
+static TCGv_i64 do_last_scalar(DisasContext *s, int esz,
324
+ int pg, int rm, bool before)
325
+{
326
+ TCGv_i32 last = tcg_temp_new_i32();
327
+ TCGv_i64 ret;
328
+
329
+ find_last_active(s, last, esz, pg);
330
+ if (before) {
331
+ wrap_last_active(s, last, esz);
332
+ } else {
333
+ incr_last_active(s, last, esz);
334
+ }
335
+
336
+ ret = load_last_active(s, last, rm, esz);
337
+ tcg_temp_free_i32(last);
338
+ return ret;
339
+}
340
+
341
+/* Compute LAST for a Vreg. */
342
+static bool do_last_fp(DisasContext *s, arg_rpr_esz *a, bool before)
343
+{
344
+ if (sve_access_check(s)) {
345
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
346
+ write_fp_dreg(s, a->rd, val);
347
+ tcg_temp_free_i64(val);
348
+ }
349
+ return true;
350
+}
351
+
352
+static bool trans_LASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
353
+{
354
+ return do_last_fp(s, a, false);
355
+}
356
+
357
+static bool trans_LASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
358
+{
359
+ return do_last_fp(s, a, true);
360
+}
361
+
362
+/* Compute LAST for a Xreg. */
363
+static bool do_last_general(DisasContext *s, arg_rpr_esz *a, bool before)
364
+{
365
+ if (sve_access_check(s)) {
366
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
367
+ tcg_gen_mov_i64(cpu_reg(s, a->rd), val);
368
+ tcg_temp_free_i64(val);
369
+ }
370
+ return true;
371
+}
372
+
373
+static bool trans_LASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
374
+{
375
+ return do_last_general(s, a, false);
376
+}
377
+
378
+static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
379
+{
380
+ return do_last_general(s, a, true);
381
+}
382
+
383
/*
384
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
385
*/
386
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
387
index XXXXXXX..XXXXXXX 100644
388
--- a/target/arm/sve.decode
389
+++ b/target/arm/sve.decode
390
@@ -XXX,XX +XXX,XX @@ TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
391
# Note esz >= 2
392
COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
393
394
+# SVE conditionally broadcast element to vector
395
+CLASTA_z 00000101 .. 10100 0 100 ... ..... ..... @rdn_pg_rm
396
+CLASTB_z 00000101 .. 10100 1 100 ... ..... ..... @rdn_pg_rm
397
+
398
+# SVE conditionally copy element to SIMD&FP scalar
399
+CLASTA_v 00000101 .. 10101 0 100 ... ..... ..... @rd_pg_rn
400
+CLASTB_v 00000101 .. 10101 1 100 ... ..... ..... @rd_pg_rn
401
+
402
+# SVE conditionally copy element to general register
403
+CLASTA_r 00000101 .. 11000 0 101 ... ..... ..... @rd_pg_rn
404
+CLASTB_r 00000101 .. 11000 1 101 ... ..... ..... @rd_pg_rn
405
+
406
+# SVE copy element to SIMD&FP scalar register
407
+LASTA_v 00000101 .. 10001 0 100 ... ..... ..... @rd_pg_rn
408
+LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
409
+
410
+# SVE copy element to general register
411
+LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
412
+LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
413
+
414
### SVE Predicate Logical Operations Group
415
416
# SVE predicate logical operations
417
--
2.17.1

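For readers following the before/after distinction in do_last_scalar() above, this sketch models the underlying semantics: LASTB extracts the last active element, LASTA the element one past it, wrapping at the vector end. It is a plain C model assuming one bool per element, not the element-size-scaled offsets the helper actually works in:

#include <stdio.h>

/* index of the last active element, or -1 if none (the real helper
 * returns the index scaled by the element size, e.g. -8 for esz=3) */
static int last_active(const _Bool *pg, int nelem)
{
    for (int i = nelem - 1; i >= 0; i--) {
        if (pg[i]) {
            return i;
        }
    }
    return -1;
}

int main(void)
{
    _Bool pg[8] = {0, 1, 0, 0, 1, 0, 0, 0};
    int zm[8] = {10, 11, 12, 13, 14, 15, 16, 17};

    int last = last_active(pg, 8);                   /* 4 */
    if (last >= 0) {
        printf("LASTB=%d LASTA=%d\n",
               zm[last],                             /* 14 */
               zm[(last + 1) % 8]);                  /* 15, wrapping */
    } else {
        printf("no active element\n");  /* CLAST keeps the old value here */
    }
    return 0;
}
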
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 19 +++++++++++++++++++
target/arm/sve.decode | 6 ++++++
2 files changed, 25 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
15
+++ b/target/arm/translate-sve.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
17
return do_last_general(s, a, true);
18
}
19
20
+static bool trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
21
+{
22
+ if (sve_access_check(s)) {
23
+ do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
24
+ }
25
+ return true;
26
+}
27
+
28
+static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
29
+{
30
+ if (sve_access_check(s)) {
31
+ int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
32
+ TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
33
+ do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
34
+ tcg_temp_free_i64(t);
35
+ }
36
+ return true;
37
+}
38
+
39
/*
40
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
41
*/
42
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/sve.decode
45
+++ b/target/arm/sve.decode
46
@@ -XXX,XX +XXX,XX @@ LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
47
LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
48
LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
49
50
+# SVE copy element from SIMD&FP scalar register
51
+CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
52
+
53
+# SVE copy element from general register to vector (predicated)
54
+CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
55
+
56
### SVE Predicate Logical Operations Group
57
58
# SVE predicate logical operations
59
--
2.17.1

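Both CPY variants above reduce to the same merging move, sketched here with hypothetical toy arrays rather than the cpu_env register layout: active lanes take the scalar, inactive lanes keep the old destination value:

#include <stdio.h>

int main(void)
{
    int d[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    _Bool pg[8] = {1, 0, 1, 0, 1, 0, 1, 0};
    int scalar = 99;

    for (int i = 0; i < 8; i++) {
        if (pg[i]) {
            d[i] = scalar;       /* active lanes take the scalar */
        }                        /* inactive lanes are unchanged */
    }
    for (int i = 0; i < 8; i++) {
        printf("%d ", d[i]);     /* 99 1 99 3 99 5 99 7 */
    }
    printf("\n");
    return 0;
}
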
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 2 ++
target/arm/sve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 13 +++++++++++++
target/arm/sve.decode | 3 +++
4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
23
+
24
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve_helper.c
30
+++ b/target/arm/sve_helper.c
31
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
32
33
return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
34
}
35
+
36
+void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
37
+{
38
+ intptr_t opr_sz = simd_oprsz(desc) / 8;
39
+ int esz = simd_data(desc);
40
+ uint64_t pg, first_g, last_g, len, mask = pred_esz_masks[esz];
41
+ intptr_t i, first_i, last_i;
42
+ ARMVectorReg tmp;
43
+
44
+ first_i = last_i = 0;
45
+ first_g = last_g = 0;
46
+
47
+ /* Find the extent of the active elements within VG. */
48
+ for (i = QEMU_ALIGN_UP(opr_sz, 8) - 8; i >= 0; i -= 8) {
49
+ pg = *(uint64_t *)(vg + i) & mask;
50
+ if (pg) {
51
+ if (last_g == 0) {
52
+ last_g = pg;
53
+ last_i = i;
54
+ }
55
+ first_g = pg;
56
+ first_i = i;
57
+ }
58
+ }
59
+
60
+ len = 0;
61
+ if (first_g != 0) {
62
+ first_i = first_i * 8 + ctz64(first_g);
63
+ last_i = last_i * 8 + 63 - clz64(last_g);
64
+ len = last_i - first_i + (1 << esz);
65
+ if (vd == vm) {
66
+ vm = memcpy(&tmp, vm, opr_sz * 8);
67
+ }
68
+ swap_memmove(vd, vn + first_i, len);
69
+ }
70
+ swap_memmove(vd + len, vm, opr_sz * 8 - len);
71
+}
72
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/translate-sve.c
75
+++ b/target/arm/translate-sve.c
76
@@ -XXX,XX +XXX,XX @@ static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
77
return do_zpz_ool(s, a, fns[a->esz]);
78
}
79
80
+static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
81
+{
82
+ if (sve_access_check(s)) {
83
+ unsigned vsz = vec_full_reg_size(s);
84
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
85
+ vec_full_reg_offset(s, a->rn),
86
+ vec_full_reg_offset(s, a->rm),
87
+ pred_full_reg_offset(s, a->pg),
88
+ vsz, vsz, a->esz, gen_helper_sve_splice);
89
+ }
90
+ return true;
91
+}
92
+
93
/*
94
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
95
*/
96
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/sve.decode
99
+++ b/target/arm/sve.decode
100
@@ -XXX,XX +XXX,XX @@ REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
101
REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
102
RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
103
104
+# SVE vector splice (predicated)
105
+SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
106
+
107
### SVE Predicate Logical Operations Group
108
109
# SVE predicate logical operations
110
--
2.17.1

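The first/last scan in sve_splice() above is the subtle part, so this stand-alone model shows the intended result: everything from the first to the last active element of the first operand is taken (interior predicate bits do not matter), and the remainder is filled from the start of the second operand:

#include <stdio.h>

int main(void)
{
    int zn[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    int zm[8] = {20, 21, 22, 23, 24, 25, 26, 27};
    _Bool pg[8] = {0, 0, 1, 1, 1, 0, 0, 0};   /* first=2, last=4 */
    int d[8];
    int len = 0;

    int first = -1, last = -1;
    for (int i = 0; i < 8; i++) {       /* find the active extent */
        if (pg[i]) {
            if (first < 0) {
                first = i;
            }
            last = i;
        }
    }
    if (first >= 0) {
        for (int i = first; i <= last; i++) {
            d[len++] = zn[i];           /* 12 13 14 */
        }
    }
    for (int i = 0; len < 8; i++, len++) {
        d[len] = zm[i];                 /* then 20 21 22 23 24 */
    }
    for (int i = 0; i < 8; i++) {
        printf("%d ", d[i]);            /* 12 13 14 20 21 22 23 24 */
    }
    printf("\n");
    return 0;
}
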
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 9 +++++++
target/arm/sve_helper.c | 55 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 2 ++
target/arm/sve.decode | 6 +++++
4 files changed, 72 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_b, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
32
void, ptr, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
34
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/sve_helper.c
37
+++ b/target/arm/sve_helper.c
38
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
39
}
40
swap_memmove(vd + len, vm, opr_sz * 8 - len);
41
}
42
+
43
+void HELPER(sve_sel_zpzz_b)(void *vd, void *vn, void *vm,
44
+ void *vg, uint32_t desc)
45
+{
46
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
47
+ uint64_t *d = vd, *n = vn, *m = vm;
48
+ uint8_t *pg = vg;
49
+
50
+ for (i = 0; i < opr_sz; i += 1) {
51
+ uint64_t nn = n[i], mm = m[i];
52
+ uint64_t pp = expand_pred_b(pg[H1(i)]);
53
+ d[i] = (nn & pp) | (mm & ~pp);
54
+ }
55
+}
56
+
57
+void HELPER(sve_sel_zpzz_h)(void *vd, void *vn, void *vm,
58
+ void *vg, uint32_t desc)
59
+{
60
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
61
+ uint64_t *d = vd, *n = vn, *m = vm;
62
+ uint8_t *pg = vg;
63
+
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ uint64_t nn = n[i], mm = m[i];
66
+ uint64_t pp = expand_pred_h(pg[H1(i)]);
67
+ d[i] = (nn & pp) | (mm & ~pp);
68
+ }
69
+}
70
+
71
+void HELPER(sve_sel_zpzz_s)(void *vd, void *vn, void *vm,
72
+ void *vg, uint32_t desc)
73
+{
74
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
75
+ uint64_t *d = vd, *n = vn, *m = vm;
76
+ uint8_t *pg = vg;
77
+
78
+ for (i = 0; i < opr_sz; i += 1) {
79
+ uint64_t nn = n[i], mm = m[i];
80
+ uint64_t pp = expand_pred_s(pg[H1(i)]);
81
+ d[i] = (nn & pp) | (mm & ~pp);
82
+ }
83
+}
84
+
85
+void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
86
+ void *vg, uint32_t desc)
87
+{
88
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
89
+ uint64_t *d = vd, *n = vn, *m = vm;
90
+ uint8_t *pg = vg;
91
+
92
+ for (i = 0; i < opr_sz; i += 1) {
93
+ uint64_t nn = n[i], mm = m[i];
94
+ d[i] = (pg[H1(i)] & 1 ? nn : mm);
95
+ }
96
+}
97
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
98
index XXXXXXX..XXXXXXX 100644
99
--- a/target/arm/translate-sve.c
100
+++ b/target/arm/translate-sve.c
101
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
102
return do_zpzz_ool(s, a, fns[a->esz]);
103
}
104
105
+DO_ZPZZ(SEL, sel)
106
+
107
#undef DO_ZPZZ
108
109
/*
110
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/sve.decode
113
+++ b/target/arm/sve.decode
114
@@ -XXX,XX +XXX,XX @@
115
&rprr_esz rn=%reg_movprfx
116
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
117
&rprr_esz rm=%reg_movprfx
118
+@rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
119
120
# Three register operand, with governing predicate, vector element size
121
@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
122
@@ -XXX,XX +XXX,XX @@ RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
123
# SVE vector splice (predicated)
124
SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
125
126
+### SVE Select Vectors Group
127
+
128
+# SVE select vector elements (predicated)
129
+SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
130
+
131
### SVE Predicate Logical Operations Group
132
133
# SVE predicate logical operations
134
--
2.17.1

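The byte/half/word SEL helpers above depend on expanding one predicate bit per lane into an all-zeros or all-ones lane mask, so the select becomes pure bitwise logic with no per-lane branches. A simplified model of that trick (QEMU's real expand_pred_b is table-driven; this loop only shows the bit-to-lane mapping):

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

static uint64_t expand_pred_b(uint8_t bits)   /* simplified model */
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (bits & (1 << i)) {
            r |= (uint64_t)0xff << (8 * i);   /* bit i -> byte lane i */
        }
    }
    return r;
}

int main(void)
{
    uint64_t n = 0x1111111111111111ull;
    uint64_t m = 0x2222222222222222ull;
    uint64_t p = expand_pred_b(0x0f);     /* low four byte lanes active */
    uint64_t d = (n & p) | (m & ~p);      /* the select itself */

    printf("%016" PRIx64 "\n", d);        /* 2222222211111111 */
    return 0;
}
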
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 115 +++++++++++++++++++++++
target/arm/sve_helper.c | 187 +++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 91 ++++++++++++++++++
target/arm/sve.decode | 24 +++++
4 files changed, 417 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
20
DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
23
+ i32, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_b, TCG_CALL_NO_RWG,
25
+ i32, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_b, TCG_CALL_NO_RWG,
27
+ i32, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_b, TCG_CALL_NO_RWG,
29
+ i32, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_b, TCG_CALL_NO_RWG,
31
+ i32, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_b, TCG_CALL_NO_RWG,
33
+ i32, ptr, ptr, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_h, TCG_CALL_NO_RWG,
36
+ i32, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_h, TCG_CALL_NO_RWG,
38
+ i32, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_h, TCG_CALL_NO_RWG,
40
+ i32, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_h, TCG_CALL_NO_RWG,
42
+ i32, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_h, TCG_CALL_NO_RWG,
44
+ i32, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_h, TCG_CALL_NO_RWG,
46
+ i32, ptr, ptr, ptr, ptr, i32)
47
+
48
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_s, TCG_CALL_NO_RWG,
49
+ i32, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_s, TCG_CALL_NO_RWG,
51
+ i32, ptr, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_s, TCG_CALL_NO_RWG,
53
+ i32, ptr, ptr, ptr, ptr, i32)
54
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_s, TCG_CALL_NO_RWG,
55
+ i32, ptr, ptr, ptr, ptr, i32)
56
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_s, TCG_CALL_NO_RWG,
57
+ i32, ptr, ptr, ptr, ptr, i32)
58
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_s, TCG_CALL_NO_RWG,
59
+ i32, ptr, ptr, ptr, ptr, i32)
60
+
61
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_d, TCG_CALL_NO_RWG,
62
+ i32, ptr, ptr, ptr, ptr, i32)
63
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzz_d, TCG_CALL_NO_RWG,
64
+ i32, ptr, ptr, ptr, ptr, i32)
65
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzz_d, TCG_CALL_NO_RWG,
66
+ i32, ptr, ptr, ptr, ptr, i32)
67
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzz_d, TCG_CALL_NO_RWG,
68
+ i32, ptr, ptr, ptr, ptr, i32)
69
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzz_d, TCG_CALL_NO_RWG,
70
+ i32, ptr, ptr, ptr, ptr, i32)
71
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzz_d, TCG_CALL_NO_RWG,
72
+ i32, ptr, ptr, ptr, ptr, i32)
73
+
74
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_b, TCG_CALL_NO_RWG,
75
+ i32, ptr, ptr, ptr, ptr, i32)
76
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_b, TCG_CALL_NO_RWG,
77
+ i32, ptr, ptr, ptr, ptr, i32)
78
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_b, TCG_CALL_NO_RWG,
79
+ i32, ptr, ptr, ptr, ptr, i32)
80
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_b, TCG_CALL_NO_RWG,
81
+ i32, ptr, ptr, ptr, ptr, i32)
82
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_b, TCG_CALL_NO_RWG,
83
+ i32, ptr, ptr, ptr, ptr, i32)
84
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_b, TCG_CALL_NO_RWG,
85
+ i32, ptr, ptr, ptr, ptr, i32)
86
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_b, TCG_CALL_NO_RWG,
87
+ i32, ptr, ptr, ptr, ptr, i32)
88
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_b, TCG_CALL_NO_RWG,
89
+ i32, ptr, ptr, ptr, ptr, i32)
90
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_b, TCG_CALL_NO_RWG,
91
+ i32, ptr, ptr, ptr, ptr, i32)
92
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_b, TCG_CALL_NO_RWG,
93
+ i32, ptr, ptr, ptr, ptr, i32)
94
+
95
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_h, TCG_CALL_NO_RWG,
96
+ i32, ptr, ptr, ptr, ptr, i32)
97
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_h, TCG_CALL_NO_RWG,
98
+ i32, ptr, ptr, ptr, ptr, i32)
99
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_h, TCG_CALL_NO_RWG,
100
+ i32, ptr, ptr, ptr, ptr, i32)
101
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_h, TCG_CALL_NO_RWG,
102
+ i32, ptr, ptr, ptr, ptr, i32)
103
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_h, TCG_CALL_NO_RWG,
104
+ i32, ptr, ptr, ptr, ptr, i32)
105
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_h, TCG_CALL_NO_RWG,
106
+ i32, ptr, ptr, ptr, ptr, i32)
107
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_h, TCG_CALL_NO_RWG,
108
+ i32, ptr, ptr, ptr, ptr, i32)
109
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_h, TCG_CALL_NO_RWG,
110
+ i32, ptr, ptr, ptr, ptr, i32)
111
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_h, TCG_CALL_NO_RWG,
112
+ i32, ptr, ptr, ptr, ptr, i32)
113
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_h, TCG_CALL_NO_RWG,
114
+ i32, ptr, ptr, ptr, ptr, i32)
115
+
116
+DEF_HELPER_FLAGS_5(sve_cmpeq_ppzw_s, TCG_CALL_NO_RWG,
117
+ i32, ptr, ptr, ptr, ptr, i32)
118
+DEF_HELPER_FLAGS_5(sve_cmpne_ppzw_s, TCG_CALL_NO_RWG,
119
+ i32, ptr, ptr, ptr, ptr, i32)
120
+DEF_HELPER_FLAGS_5(sve_cmpge_ppzw_s, TCG_CALL_NO_RWG,
121
+ i32, ptr, ptr, ptr, ptr, i32)
122
+DEF_HELPER_FLAGS_5(sve_cmpgt_ppzw_s, TCG_CALL_NO_RWG,
123
+ i32, ptr, ptr, ptr, ptr, i32)
124
+DEF_HELPER_FLAGS_5(sve_cmphi_ppzw_s, TCG_CALL_NO_RWG,
125
+ i32, ptr, ptr, ptr, ptr, i32)
126
+DEF_HELPER_FLAGS_5(sve_cmphs_ppzw_s, TCG_CALL_NO_RWG,
127
+ i32, ptr, ptr, ptr, ptr, i32)
128
+DEF_HELPER_FLAGS_5(sve_cmple_ppzw_s, TCG_CALL_NO_RWG,
129
+ i32, ptr, ptr, ptr, ptr, i32)
130
+DEF_HELPER_FLAGS_5(sve_cmplt_ppzw_s, TCG_CALL_NO_RWG,
131
+ i32, ptr, ptr, ptr, ptr, i32)
132
+DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
133
+ i32, ptr, ptr, ptr, ptr, i32)
134
+DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
135
+ i32, ptr, ptr, ptr, ptr, i32)
136
+
137
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
138
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
139
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
140
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/target/arm/sve_helper.c
143
+++ b/target/arm/sve_helper.c
144
@@ -XXX,XX +XXX,XX @@ static uint32_t iter_predtest_fwd(uint64_t d, uint64_t g, uint32_t flags)
145
return flags;
146
}
147
148
+/* This is an iterative function, called for each Pd and Pg word
149
+ * moving backward.
150
+ */
151
+static uint32_t iter_predtest_bwd(uint64_t d, uint64_t g, uint32_t flags)
152
+{
153
+ if (likely(g)) {
154
+ /* Compute C from first (i.e last) !(D & G).
155
+ Use bit 2 to signal first G bit seen. */
156
+ if (!(flags & 4)) {
157
+ flags += 4 - 1; /* add bit 2, subtract C from PREDTEST_INIT */
158
+ flags |= (d & pow2floor(g)) == 0;
159
+ }
160
+
161
+ /* Accumulate Z from each D & G. */
162
+ flags |= ((d & g) != 0) << 1;
163
+
164
+ /* Compute N from last (i.e first) D & G. Replace previous. */
165
+ flags = deposit32(flags, 31, 1, (d & (g & -g)) != 0);
166
+ }
167
+ return flags;
168
+}
169
+
170
/* The same for a single word predicate. */
171
uint32_t HELPER(sve_predtest1)(uint64_t d, uint64_t g)
172
{
173
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
174
d[i] = (pg[H1(i)] & 1 ? nn : mm);
175
}
176
}
177
+
178
+/* Two operand comparison controlled by a predicate.
179
+ * ??? It is very tempting to want to be able to expand this inline
180
+ * with x86 instructions, e.g.
181
+ *
182
+ * vcmpeqw zm, zn, %ymm0
183
+ * vpmovmskb %ymm0, %eax
184
+ * and $0x5555, %eax
185
+ * and pg, %eax
186
+ *
187
+ * or even aarch64, e.g.
188
+ *
189
+ * // mask = 4000 1000 0400 0100 0040 0010 0004 0001
190
+ * cmeq v0.8h, zn, zm
191
+ * and v0.8h, v0.8h, mask
192
+ * addv h0, v0.8h
193
+ * and v0.8b, pg
194
+ *
195
+ * However, coming up with an abstraction that allows vector inputs and
196
+ * a scalar output, and also handles the byte-ordering of sub-uint64_t
197
+ * scalar outputs, is tricky.
198
+ */
199
+#define DO_CMP_PPZZ(NAME, TYPE, OP, H, MASK) \
200
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
201
+{ \
202
+ intptr_t opr_sz = simd_oprsz(desc); \
203
+ uint32_t flags = PREDTEST_INIT; \
204
+ intptr_t i = opr_sz; \
205
+ do { \
206
+ uint64_t out = 0, pg; \
207
+ do { \
208
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
209
+ TYPE nn = *(TYPE *)(vn + H(i)); \
210
+ TYPE mm = *(TYPE *)(vm + H(i)); \
211
+ out |= nn OP mm; \
212
+ } while (i & 63); \
213
+ pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
214
+ out &= pg; \
215
+ *(uint64_t *)(vd + (i >> 3)) = out; \
216
+ flags = iter_predtest_bwd(out, pg, flags); \
217
+ } while (i > 0); \
218
+ return flags; \
219
+}
220
+
221
+#define DO_CMP_PPZZ_B(NAME, TYPE, OP) \
222
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1, 0xffffffffffffffffull)
223
+#define DO_CMP_PPZZ_H(NAME, TYPE, OP) \
224
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
225
+#define DO_CMP_PPZZ_S(NAME, TYPE, OP) \
226
+ DO_CMP_PPZZ(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
227
+#define DO_CMP_PPZZ_D(NAME, TYPE, OP) \
228
+ DO_CMP_PPZZ(NAME, TYPE, OP, , 0x0101010101010101ull)
229
+
230
+DO_CMP_PPZZ_B(sve_cmpeq_ppzz_b, uint8_t, ==)
231
+DO_CMP_PPZZ_H(sve_cmpeq_ppzz_h, uint16_t, ==)
232
+DO_CMP_PPZZ_S(sve_cmpeq_ppzz_s, uint32_t, ==)
233
+DO_CMP_PPZZ_D(sve_cmpeq_ppzz_d, uint64_t, ==)
234
+
235
+DO_CMP_PPZZ_B(sve_cmpne_ppzz_b, uint8_t, !=)
236
+DO_CMP_PPZZ_H(sve_cmpne_ppzz_h, uint16_t, !=)
237
+DO_CMP_PPZZ_S(sve_cmpne_ppzz_s, uint32_t, !=)
238
+DO_CMP_PPZZ_D(sve_cmpne_ppzz_d, uint64_t, !=)
239
+
240
+DO_CMP_PPZZ_B(sve_cmpgt_ppzz_b, int8_t, >)
241
+DO_CMP_PPZZ_H(sve_cmpgt_ppzz_h, int16_t, >)
242
+DO_CMP_PPZZ_S(sve_cmpgt_ppzz_s, int32_t, >)
243
+DO_CMP_PPZZ_D(sve_cmpgt_ppzz_d, int64_t, >)
244
+
245
+DO_CMP_PPZZ_B(sve_cmpge_ppzz_b, int8_t, >=)
246
+DO_CMP_PPZZ_H(sve_cmpge_ppzz_h, int16_t, >=)
247
+DO_CMP_PPZZ_S(sve_cmpge_ppzz_s, int32_t, >=)
248
+DO_CMP_PPZZ_D(sve_cmpge_ppzz_d, int64_t, >=)
249
+
250
+DO_CMP_PPZZ_B(sve_cmphi_ppzz_b, uint8_t, >)
251
+DO_CMP_PPZZ_H(sve_cmphi_ppzz_h, uint16_t, >)
252
+DO_CMP_PPZZ_S(sve_cmphi_ppzz_s, uint32_t, >)
253
+DO_CMP_PPZZ_D(sve_cmphi_ppzz_d, uint64_t, >)
254
+
255
+DO_CMP_PPZZ_B(sve_cmphs_ppzz_b, uint8_t, >=)
256
+DO_CMP_PPZZ_H(sve_cmphs_ppzz_h, uint16_t, >=)
257
+DO_CMP_PPZZ_S(sve_cmphs_ppzz_s, uint32_t, >=)
258
+DO_CMP_PPZZ_D(sve_cmphs_ppzz_d, uint64_t, >=)
259
+
260
+#undef DO_CMP_PPZZ_B
261
+#undef DO_CMP_PPZZ_H
262
+#undef DO_CMP_PPZZ_S
263
+#undef DO_CMP_PPZZ_D
264
+#undef DO_CMP_PPZZ
265
+
266
+/* Similar, but the second source is "wide". */
267
+#define DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H, MASK) \
268
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
269
+{ \
270
+ intptr_t opr_sz = simd_oprsz(desc); \
271
+ uint32_t flags = PREDTEST_INIT; \
272
+ intptr_t i = opr_sz; \
273
+ do { \
274
+ uint64_t out = 0, pg; \
275
+ do { \
276
+ TYPEW mm = *(TYPEW *)(vm + i - 8); \
277
+ do { \
278
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
279
+ TYPE nn = *(TYPE *)(vn + H(i)); \
280
+ out |= nn OP mm; \
281
+ } while (i & 7); \
282
+ } while (i & 63); \
283
+ pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
284
+ out &= pg; \
285
+ *(uint64_t *)(vd + (i >> 3)) = out; \
286
+ flags = iter_predtest_bwd(out, pg, flags); \
287
+ } while (i > 0); \
288
+ return flags; \
289
+}
290
+
291
+#define DO_CMP_PPZW_B(NAME, TYPE, TYPEW, OP) \
292
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1, 0xffffffffffffffffull)
293
+#define DO_CMP_PPZW_H(NAME, TYPE, TYPEW, OP) \
294
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_2, 0x5555555555555555ull)
295
+#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
296
+ DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
297
+
298
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
299
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
300
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
301
+
302
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
303
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
304
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
305
+
306
+DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
307
+DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
308
+DO_CMP_PPZW_S(sve_cmpgt_ppzw_s, int32_t, int64_t, >)
309
+
310
+DO_CMP_PPZW_B(sve_cmpge_ppzw_b, int8_t, int64_t, >=)
311
+DO_CMP_PPZW_H(sve_cmpge_ppzw_h, int16_t, int64_t, >=)
312
+DO_CMP_PPZW_S(sve_cmpge_ppzw_s, int32_t, int64_t, >=)
313
+
314
+DO_CMP_PPZW_B(sve_cmphi_ppzw_b, uint8_t, uint64_t, >)
315
+DO_CMP_PPZW_H(sve_cmphi_ppzw_h, uint16_t, uint64_t, >)
316
+DO_CMP_PPZW_S(sve_cmphi_ppzw_s, uint32_t, uint64_t, >)
317
+
318
+DO_CMP_PPZW_B(sve_cmphs_ppzw_b, uint8_t, uint64_t, >=)
319
+DO_CMP_PPZW_H(sve_cmphs_ppzw_h, uint16_t, uint64_t, >=)
320
+DO_CMP_PPZW_S(sve_cmphs_ppzw_s, uint32_t, uint64_t, >=)
321
+
322
+DO_CMP_PPZW_B(sve_cmplt_ppzw_b, int8_t, int64_t, <)
323
+DO_CMP_PPZW_H(sve_cmplt_ppzw_h, int16_t, int64_t, <)
324
+DO_CMP_PPZW_S(sve_cmplt_ppzw_s, int32_t, int64_t, <)
325
+
326
+DO_CMP_PPZW_B(sve_cmple_ppzw_b, int8_t, int64_t, <=)
327
+DO_CMP_PPZW_H(sve_cmple_ppzw_h, int16_t, int64_t, <=)
328
+DO_CMP_PPZW_S(sve_cmple_ppzw_s, int32_t, int64_t, <=)
329
+
330
+DO_CMP_PPZW_B(sve_cmplo_ppzw_b, uint8_t, uint64_t, <)
331
+DO_CMP_PPZW_H(sve_cmplo_ppzw_h, uint16_t, uint64_t, <)
332
+DO_CMP_PPZW_S(sve_cmplo_ppzw_s, uint32_t, uint64_t, <)
333
+
334
+DO_CMP_PPZW_B(sve_cmpls_ppzw_b, uint8_t, uint64_t, <=)
335
+DO_CMP_PPZW_H(sve_cmpls_ppzw_h, uint16_t, uint64_t, <=)
336
+DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
337
+
338
+#undef DO_CMP_PPZW_B
339
+#undef DO_CMP_PPZW_H
340
+#undef DO_CMP_PPZW_S
341
+#undef DO_CMP_PPZW
342
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
343
index XXXXXXX..XXXXXXX 100644
344
--- a/target/arm/translate-sve.c
345
+++ b/target/arm/translate-sve.c
346
@@ -XXX,XX +XXX,XX @@
347
#include "trace-tcg.h"
348
#include "translate-a64.h"
349
350
+
351
+typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
352
+ TCGv_ptr, TCGv_ptr, TCGv_i32);
353
+
354
/*
355
* Helpers for extracting complex instruction fields.
356
*/
357
@@ -XXX,XX +XXX,XX @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
358
return true;
359
}
360
361
+/*
362
+ *** SVE Integer Compare - Vectors Group
363
+ */
364
+
365
+static bool do_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
366
+ gen_helper_gvec_flags_4 *gen_fn)
367
+{
368
+ TCGv_ptr pd, zn, zm, pg;
369
+ unsigned vsz;
370
+ TCGv_i32 t;
371
+
372
+ if (gen_fn == NULL) {
373
+ return false;
374
+ }
375
+ if (!sve_access_check(s)) {
376
+ return true;
377
+ }
378
+
379
+ vsz = vec_full_reg_size(s);
380
+ t = tcg_const_i32(simd_desc(vsz, vsz, 0));
381
+ pd = tcg_temp_new_ptr();
382
+ zn = tcg_temp_new_ptr();
383
+ zm = tcg_temp_new_ptr();
384
+ pg = tcg_temp_new_ptr();
385
+
386
+ tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
387
+ tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
388
+ tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
389
+ tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
390
+
391
+ gen_fn(t, pd, zn, zm, pg, t);
392
+
393
+ tcg_temp_free_ptr(pd);
394
+ tcg_temp_free_ptr(zn);
395
+ tcg_temp_free_ptr(zm);
396
+ tcg_temp_free_ptr(pg);
397
+
398
+ do_pred_flags(t);
399
+
400
+ tcg_temp_free_i32(t);
401
+ return true;
402
+}
403
+
404
+#define DO_PPZZ(NAME, name) \
405
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a, \
406
+ uint32_t insn) \
407
+{ \
408
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
409
+ gen_helper_sve_##name##_ppzz_b, gen_helper_sve_##name##_ppzz_h, \
410
+ gen_helper_sve_##name##_ppzz_s, gen_helper_sve_##name##_ppzz_d, \
411
+ }; \
412
+ return do_ppzz_flags(s, a, fns[a->esz]); \
413
+}
414
+
415
+DO_PPZZ(CMPEQ, cmpeq)
416
+DO_PPZZ(CMPNE, cmpne)
417
+DO_PPZZ(CMPGT, cmpgt)
418
+DO_PPZZ(CMPGE, cmpge)
419
+DO_PPZZ(CMPHI, cmphi)
420
+DO_PPZZ(CMPHS, cmphs)
421
+
422
+#undef DO_PPZZ
423
+
424
+#define DO_PPZW(NAME, name) \
425
+static bool trans_##NAME##_ppzw(DisasContext *s, arg_rprr_esz *a, \
426
+ uint32_t insn) \
427
+{ \
428
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
429
+ gen_helper_sve_##name##_ppzw_b, gen_helper_sve_##name##_ppzw_h, \
430
+ gen_helper_sve_##name##_ppzw_s, NULL \
431
+ }; \
432
+ return do_ppzz_flags(s, a, fns[a->esz]); \
433
+}
434
+
435
+DO_PPZW(CMPEQ, cmpeq)
436
+DO_PPZW(CMPNE, cmpne)
437
+DO_PPZW(CMPGT, cmpgt)
438
+DO_PPZW(CMPGE, cmpge)
439
+DO_PPZW(CMPHI, cmphi)
440
+DO_PPZW(CMPHS, cmphs)
441
+DO_PPZW(CMPLT, cmplt)
442
+DO_PPZW(CMPLE, cmple)
443
+DO_PPZW(CMPLO, cmplo)
444
+DO_PPZW(CMPLS, cmpls)
445
+
446
+#undef DO_PPZW
447
+
448
/*
449
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
450
*/
451
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
452
index XXXXXXX..XXXXXXX 100644
453
--- a/target/arm/sve.decode
454
+++ b/target/arm/sve.decode
455
@@ -XXX,XX +XXX,XX @@
456
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
457
&rprr_esz rm=%reg_movprfx
458
@rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
459
+@pd_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 . rd:4 &rprr_esz
460
461
# Three register operand, with governing predicate, vector element size
462
@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
463
@@ -XXX,XX +XXX,XX @@ SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
464
# SVE select vector elements (predicated)
465
SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
466
467
+### SVE Integer Compare - Vectors Group
468
+
469
+# SVE integer compare_vectors
470
+CMPHS_ppzz 00100100 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_rm
471
+CMPHI_ppzz 00100100 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_rm
472
+CMPGE_ppzz 00100100 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
473
+CMPGT_ppzz 00100100 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
474
+CMPEQ_ppzz 00100100 .. 0 ..... 101 ... ..... 0 .... @pd_pg_rn_rm
475
+CMPNE_ppzz 00100100 .. 0 ..... 101 ... ..... 1 .... @pd_pg_rn_rm
476
+
477
+# SVE integer compare with wide elements
478
+# Note these require esz != 3.
479
+CMPEQ_ppzw 00100100 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_rm
480
+CMPNE_ppzw 00100100 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_rm
481
+CMPGE_ppzw 00100100 .. 0 ..... 010 ... ..... 0 .... @pd_pg_rn_rm
482
+CMPGT_ppzw 00100100 .. 0 ..... 010 ... ..... 1 .... @pd_pg_rn_rm
483
+CMPLT_ppzw 00100100 .. 0 ..... 011 ... ..... 0 .... @pd_pg_rn_rm
484
+CMPLE_ppzw 00100100 .. 0 ..... 011 ... ..... 1 .... @pd_pg_rn_rm
485
+CMPHS_ppzw 00100100 .. 0 ..... 110 ... ..... 0 .... @pd_pg_rn_rm
486
+CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
487
+CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
488
+CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
489
+
490
### SVE Predicate Logical Operations Group
491
492
# SVE predicate logical operations
493
--
2.17.1

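All of these compare helpers return PREDTEST-style flags via iter_predtest_bwd(). As a reminder of the architectural convention, modeled here under the simplifying assumption of an all-true governing predicate: N is the first element's result, Z is set only when no element matched, C is set when the last element did not match, and V is always zero:

#include <stdio.h>

int main(void)
{
    _Bool res[8] = {1, 0, 0, 1, 0, 0, 0, 0};  /* per-lane compare result */
    int any = 0;

    for (int i = 0; i < 8; i++) {
        any |= res[i];
    }
    int n = res[0];      /* N: first element matched */
    int z = !any;        /* Z: nothing matched */
    int c = !res[7];     /* C: last element did not match */
    printf("N=%d Z=%d C=%d V=0\n", n, z, c);   /* N=1 Z=0 C=1 */
    return 0;
}
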
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 44 +++++++++++++++++++
target/arm/sve_helper.c | 88 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 66 ++++++++++++++++++++++++++++
target/arm/sve.decode | 23 ++++++++++
4 files changed, 221 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
20
i32, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
32
+
33
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
49
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
54
+
55
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
56
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
57
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
58
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
59
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
61
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
63
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
65
+
66
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
67
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
68
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
69
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/sve_helper.c
72
+++ b/target/arm/sve_helper.c
73
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
74
#undef DO_CMP_PPZW_H
75
#undef DO_CMP_PPZW_S
76
#undef DO_CMP_PPZW
77
+
78
+/* Similar, but the second source is immediate. */
79
+#define DO_CMP_PPZI(NAME, TYPE, OP, H, MASK) \
80
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
81
+{ \
82
+ intptr_t opr_sz = simd_oprsz(desc); \
83
+ uint32_t flags = PREDTEST_INIT; \
84
+ TYPE mm = simd_data(desc); \
85
+ intptr_t i = opr_sz; \
86
+ do { \
87
+ uint64_t out = 0, pg; \
88
+ do { \
89
+ i -= sizeof(TYPE), out <<= sizeof(TYPE); \
90
+ TYPE nn = *(TYPE *)(vn + H(i)); \
91
+ out |= nn OP mm; \
92
+ } while (i & 63); \
93
+ pg = *(uint64_t *)(vg + (i >> 3)) & MASK; \
94
+ out &= pg; \
95
+ *(uint64_t *)(vd + (i >> 3)) = out; \
96
+ flags = iter_predtest_bwd(out, pg, flags); \
97
+ } while (i > 0); \
98
+ return flags; \
99
+}
100
+
101
+#define DO_CMP_PPZI_B(NAME, TYPE, OP) \
102
+ DO_CMP_PPZI(NAME, TYPE, OP, H1, 0xffffffffffffffffull)
103
+#define DO_CMP_PPZI_H(NAME, TYPE, OP) \
104
+ DO_CMP_PPZI(NAME, TYPE, OP, H1_2, 0x5555555555555555ull)
105
+#define DO_CMP_PPZI_S(NAME, TYPE, OP) \
106
+ DO_CMP_PPZI(NAME, TYPE, OP, H1_4, 0x1111111111111111ull)
107
+#define DO_CMP_PPZI_D(NAME, TYPE, OP) \
108
+ DO_CMP_PPZI(NAME, TYPE, OP, , 0x0101010101010101ull)
109
+
110
+DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t, ==)
111
+DO_CMP_PPZI_H(sve_cmpeq_ppzi_h, uint16_t, ==)
112
+DO_CMP_PPZI_S(sve_cmpeq_ppzi_s, uint32_t, ==)
113
+DO_CMP_PPZI_D(sve_cmpeq_ppzi_d, uint64_t, ==)
114
+
115
+DO_CMP_PPZI_B(sve_cmpne_ppzi_b, uint8_t, !=)
116
+DO_CMP_PPZI_H(sve_cmpne_ppzi_h, uint16_t, !=)
117
+DO_CMP_PPZI_S(sve_cmpne_ppzi_s, uint32_t, !=)
118
+DO_CMP_PPZI_D(sve_cmpne_ppzi_d, uint64_t, !=)
119
+
120
+DO_CMP_PPZI_B(sve_cmpgt_ppzi_b, int8_t, >)
121
+DO_CMP_PPZI_H(sve_cmpgt_ppzi_h, int16_t, >)
122
+DO_CMP_PPZI_S(sve_cmpgt_ppzi_s, int32_t, >)
123
+DO_CMP_PPZI_D(sve_cmpgt_ppzi_d, int64_t, >)
124
+
125
+DO_CMP_PPZI_B(sve_cmpge_ppzi_b, int8_t, >=)
126
+DO_CMP_PPZI_H(sve_cmpge_ppzi_h, int16_t, >=)
127
+DO_CMP_PPZI_S(sve_cmpge_ppzi_s, int32_t, >=)
128
+DO_CMP_PPZI_D(sve_cmpge_ppzi_d, int64_t, >=)
129
+
130
+DO_CMP_PPZI_B(sve_cmphi_ppzi_b, uint8_t, >)
131
+DO_CMP_PPZI_H(sve_cmphi_ppzi_h, uint16_t, >)
132
+DO_CMP_PPZI_S(sve_cmphi_ppzi_s, uint32_t, >)
133
+DO_CMP_PPZI_D(sve_cmphi_ppzi_d, uint64_t, >)
134
+
135
+DO_CMP_PPZI_B(sve_cmphs_ppzi_b, uint8_t, >=)
136
+DO_CMP_PPZI_H(sve_cmphs_ppzi_h, uint16_t, >=)
137
+DO_CMP_PPZI_S(sve_cmphs_ppzi_s, uint32_t, >=)
138
+DO_CMP_PPZI_D(sve_cmphs_ppzi_d, uint64_t, >=)
139
+
140
+DO_CMP_PPZI_B(sve_cmplt_ppzi_b, int8_t, <)
141
+DO_CMP_PPZI_H(sve_cmplt_ppzi_h, int16_t, <)
142
+DO_CMP_PPZI_S(sve_cmplt_ppzi_s, int32_t, <)
143
+DO_CMP_PPZI_D(sve_cmplt_ppzi_d, int64_t, <)
144
+
145
+DO_CMP_PPZI_B(sve_cmple_ppzi_b, int8_t, <=)
146
+DO_CMP_PPZI_H(sve_cmple_ppzi_h, int16_t, <=)
147
+DO_CMP_PPZI_S(sve_cmple_ppzi_s, int32_t, <=)
148
+DO_CMP_PPZI_D(sve_cmple_ppzi_d, int64_t, <=)
149
+
150
+DO_CMP_PPZI_B(sve_cmplo_ppzi_b, uint8_t, <)
151
+DO_CMP_PPZI_H(sve_cmplo_ppzi_h, uint16_t, <)
152
+DO_CMP_PPZI_S(sve_cmplo_ppzi_s, uint32_t, <)
153
+DO_CMP_PPZI_D(sve_cmplo_ppzi_d, uint64_t, <)
154
+
155
+DO_CMP_PPZI_B(sve_cmpls_ppzi_b, uint8_t, <=)
156
+DO_CMP_PPZI_H(sve_cmpls_ppzi_h, uint16_t, <=)
157
+DO_CMP_PPZI_S(sve_cmpls_ppzi_s, uint32_t, <=)
158
+DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
159
+
160
+#undef DO_CMP_PPZI_B
161
+#undef DO_CMP_PPZI_H
162
+#undef DO_CMP_PPZI_S
163
+#undef DO_CMP_PPZI_D
164
+#undef DO_CMP_PPZI
165
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
166
index XXXXXXX..XXXXXXX 100644
167
--- a/target/arm/translate-sve.c
168
+++ b/target/arm/translate-sve.c
169
@@ -XXX,XX +XXX,XX @@
170
#include "translate-a64.h"
171
172
173
+typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
174
+ TCGv_ptr, TCGv_i32);
175
typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
176
TCGv_ptr, TCGv_ptr, TCGv_i32);
177
178
@@ -XXX,XX +XXX,XX @@ DO_PPZW(CMPLS, cmpls)
179
180
#undef DO_PPZW
181
182
+/*
183
+ *** SVE Integer Compare - Immediate Groups
184
+ */
185
+
186
+static bool do_ppzi_flags(DisasContext *s, arg_rpri_esz *a,
187
+ gen_helper_gvec_flags_3 *gen_fn)
188
+{
189
+ TCGv_ptr pd, zn, pg;
190
+ unsigned vsz;
191
+ TCGv_i32 t;
192
+
193
+ if (gen_fn == NULL) {
194
+ return false;
195
+ }
196
+ if (!sve_access_check(s)) {
197
+ return true;
198
+ }
199
+
200
+ vsz = vec_full_reg_size(s);
201
+ t = tcg_const_i32(simd_desc(vsz, vsz, a->imm));
202
+ pd = tcg_temp_new_ptr();
203
+ zn = tcg_temp_new_ptr();
204
+ pg = tcg_temp_new_ptr();
205
+
206
+ tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
207
+ tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
208
+ tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
209
+
210
+ gen_fn(t, pd, zn, pg, t);
211
+
212
+ tcg_temp_free_ptr(pd);
213
+ tcg_temp_free_ptr(zn);
214
+ tcg_temp_free_ptr(pg);
215
+
216
+ do_pred_flags(t);
217
+
218
+ tcg_temp_free_i32(t);
219
+ return true;
220
+}
221
+
222
+#define DO_PPZI(NAME, name) \
223
+static bool trans_##NAME##_ppzi(DisasContext *s, arg_rpri_esz *a, \
224
+ uint32_t insn) \
225
+{ \
226
+ static gen_helper_gvec_flags_3 * const fns[4] = { \
227
+ gen_helper_sve_##name##_ppzi_b, gen_helper_sve_##name##_ppzi_h, \
228
+ gen_helper_sve_##name##_ppzi_s, gen_helper_sve_##name##_ppzi_d, \
229
+ }; \
230
+ return do_ppzi_flags(s, a, fns[a->esz]); \
231
+}
232
+
233
+DO_PPZI(CMPEQ, cmpeq)
234
+DO_PPZI(CMPNE, cmpne)
235
+DO_PPZI(CMPGT, cmpgt)
236
+DO_PPZI(CMPGE, cmpge)
237
+DO_PPZI(CMPHI, cmphi)
238
+DO_PPZI(CMPHS, cmphs)
239
+DO_PPZI(CMPLT, cmplt)
240
+DO_PPZI(CMPLE, cmple)
241
+DO_PPZI(CMPLO, cmplo)
242
+DO_PPZI(CMPLS, cmpls)
243
+
244
+#undef DO_PPZI
245
+
246
/*
247
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
248
*/
249
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
250
index XXXXXXX..XXXXXXX 100644
251
--- a/target/arm/sve.decode
252
+++ b/target/arm/sve.decode
253
@@ -XXX,XX +XXX,XX @@
254
@rdn_dbm ........ .. .... dbm:13 rd:5 \
255
&rr_dbm rn=%reg_movprfx
256
257
+# Predicate output, vector and immediate input,
258
+# controlling predicate, element size.
259
+@pd_pg_rn_i7 ........ esz:2 . imm:7 . pg:3 rn:5 . rd:4 &rpri_esz
260
+@pd_pg_rn_i5 ........ esz:2 . imm:s5 ... pg:3 rn:5 . rd:4 &rpri_esz
261
+
262
# Basic Load/Store with 9-bit immediate offset
263
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
264
&rri imm=%imm9_16_10
265
@@ -XXX,XX +XXX,XX @@ CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
266
CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
267
CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
268
269
+### SVE Integer Compare - Unsigned Immediate Group
270
+
271
+# SVE integer compare with unsigned immediate
272
+CMPHS_ppzi 00100100 .. 1 ....... 0 ... ..... 0 .... @pd_pg_rn_i7
273
+CMPHI_ppzi 00100100 .. 1 ....... 0 ... ..... 1 .... @pd_pg_rn_i7
274
+CMPLO_ppzi 00100100 .. 1 ....... 1 ... ..... 0 .... @pd_pg_rn_i7
275
+CMPLS_ppzi 00100100 .. 1 ....... 1 ... ..... 1 .... @pd_pg_rn_i7
276
+
277
+### SVE Integer Compare - Signed Immediate Group
278
+
279
+# SVE integer compare with signed immediate
280
+CMPGE_ppzi 00100101 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_i5
281
+CMPGT_ppzi 00100101 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_i5
282
+CMPLT_ppzi 00100101 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_i5
283
+CMPLE_ppzi 00100101 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_i5
284
+CMPEQ_ppzi 00100101 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_i5
285
+CMPNE_ppzi 00100101 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_i5
286
+
287
### SVE Predicate Logical Operations Group
288
289
# SVE predicate logical operations
290
--
291
2.17.1
292
293
diff view generated by jsdifflib
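Note (editorial, not part of the patch): HELPER(NAME) expands to
helper_NAME, so as a reading aid, hand-expanding
DO_CMP_PPZI_B(sve_cmpeq_ppzi_b, uint8_t, ==) from the macros above gives
approximately the following helper:

    uint32_t helper_sve_cmpeq_ppzi_b(void *vd, void *vn, void *vg,
                                     uint32_t desc)
    {
        intptr_t opr_sz = simd_oprsz(desc);
        uint32_t flags = PREDTEST_INIT;
        uint8_t mm = simd_data(desc);      /* the immediate operand */
        intptr_t i = opr_sz;
        do {
            uint64_t out = 0, pg;
            do {
                /* One result bit per byte element, built back to front. */
                i -= sizeof(uint8_t), out <<= sizeof(uint8_t);
                uint8_t nn = *(uint8_t *)(vn + H1(i));
                out |= nn == mm;
            } while (i & 63);
            pg = *(uint64_t *)(vg + (i >> 3)) & 0xffffffffffffffffull;
            out &= pg;
            *(uint64_t *)(vd + (i >> 3)) = out;
            flags = iter_predtest_bwd(out, pg, flags);
        } while (i > 0);
        return flags;
    }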
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 18 +++
 target/arm/sve_helper.c | 248 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 106 ++++++++++++++++
 target/arm/sve.decode | 19 +++
 4 files changed, 391 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_brkpa, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpb, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpas, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpbs, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brka_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brka_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkas_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkas_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
 #undef DO_CMP_PPZI_S
 #undef DO_CMP_PPZI_D
 #undef DO_CMP_PPZI
+
+/* Similar to the ARM LastActive pseudocode function. */
+static bool last_active_pred(void *vd, void *vg, intptr_t oprsz)
+{
+ intptr_t i;
+
+ for (i = QEMU_ALIGN_UP(oprsz, 8) - 8; i >= 0; i -= 8) {
+ uint64_t pg = *(uint64_t *)(vg + i);
+ if (pg) {
+ return (pow2floor(pg) & *(uint64_t *)(vd + i)) != 0;
+ }
+ }
+ return 0;
+}
+
+/* Compute a mask into RETB that is true for all G, up to and including
+ * (if after) or excluding (if !after) the first G & N.
+ * Return true if BRK found.
+ */
+static bool compute_brk(uint64_t *retb, uint64_t n, uint64_t g,
+ bool brk, bool after)
+{
+ uint64_t b;
+
+ if (brk) {
+ b = 0;
+ } else if ((g & n) == 0) {
+ /* For all G, no N are set; break not found. */
+ b = g;
+ } else {
+ /* Break somewhere in N. Locate it. */
+ b = g & n; /* guard true, pred true */
+ b = b & -b; /* first such */
+ if (after) {
+ b = b | (b - 1); /* break after same */
+ } else {
+ b = b - 1; /* break before same */
+ }
+ brk = true;
+ }
+
+ *retb = b;
+ return brk;
+}
+
+/* Compute a zeroing BRK. */
+static void compute_brk_z(uint64_t *d, uint64_t *n, uint64_t *g,
+ intptr_t oprsz, bool after)
+{
+ bool brk = false;
+ intptr_t i;
+
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+ uint64_t this_b, this_g = g[i];
+
+ brk = compute_brk(&this_b, n[i], this_g, brk, after);
+ d[i] = this_b & this_g;
+ }
+}
+
+/* Likewise, but also compute flags. */
+static uint32_t compute_brks_z(uint64_t *d, uint64_t *n, uint64_t *g,
+ intptr_t oprsz, bool after)
+{
+ uint32_t flags = PREDTEST_INIT;
+ bool brk = false;
+ intptr_t i;
+
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+ uint64_t this_b, this_d, this_g = g[i];
+
+ brk = compute_brk(&this_b, n[i], this_g, brk, after);
+ d[i] = this_d = this_b & this_g;
+ flags = iter_predtest_fwd(this_d, this_g, flags);
+ }
+ return flags;
+}
+
+/* Compute a merging BRK. */
+static void compute_brk_m(uint64_t *d, uint64_t *n, uint64_t *g,
+ intptr_t oprsz, bool after)
+{
+ bool brk = false;
+ intptr_t i;
+
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+ uint64_t this_b, this_g = g[i];
+
+ brk = compute_brk(&this_b, n[i], this_g, brk, after);
+ d[i] = (this_b & this_g) | (d[i] & ~this_g);
+ }
+}
+
+/* Likewise, but also compute flags. */
+static uint32_t compute_brks_m(uint64_t *d, uint64_t *n, uint64_t *g,
+ intptr_t oprsz, bool after)
+{
+ uint32_t flags = PREDTEST_INIT;
+ bool brk = false;
+ intptr_t i;
+
+ for (i = 0; i < oprsz / 8; ++i) {
+ uint64_t this_b, this_d = d[i], this_g = g[i];
+
+ brk = compute_brk(&this_b, n[i], this_g, brk, after);
+ d[i] = this_d = (this_b & this_g) | (this_d & ~this_g);
+ flags = iter_predtest_fwd(this_d, this_g, flags);
+ }
+ return flags;
+}
+
+static uint32_t do_zero(ARMPredicateReg *d, intptr_t oprsz)
+{
+ /* It is quicker to zero the whole predicate than loop on OPRSZ.
+ * The compiler should turn this into 4 64-bit integer stores.
+ */
+ memset(d, 0, sizeof(ARMPredicateReg));
+ return PREDTEST_INIT;
+}
+
+void HELPER(sve_brkpa)(void *vd, void *vn, void *vm, void *vg,
+ uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ if (last_active_pred(vn, vg, oprsz)) {
+ compute_brk_z(vd, vm, vg, oprsz, true);
+ } else {
+ do_zero(vd, oprsz);
+ }
+}
+
+uint32_t HELPER(sve_brkpas)(void *vd, void *vn, void *vm, void *vg,
+ uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ if (last_active_pred(vn, vg, oprsz)) {
+ return compute_brks_z(vd, vm, vg, oprsz, true);
+ } else {
+ return do_zero(vd, oprsz);
+ }
+}
+
+void HELPER(sve_brkpb)(void *vd, void *vn, void *vm, void *vg,
+ uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ if (last_active_pred(vn, vg, oprsz)) {
+ compute_brk_z(vd, vm, vg, oprsz, false);
+ } else {
+ do_zero(vd, oprsz);
+ }
+}
+
+uint32_t HELPER(sve_brkpbs)(void *vd, void *vn, void *vm, void *vg,
+ uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ if (last_active_pred(vn, vg, oprsz)) {
+ return compute_brks_z(vd, vm, vg, oprsz, false);
+ } else {
+ return do_zero(vd, oprsz);
+ }
+}
+
+void HELPER(sve_brka_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ compute_brk_z(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ return compute_brks_z(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ compute_brk_z(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ return compute_brks_z(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brka_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ compute_brk_m(vd, vn, vg, oprsz, true);
+}
+
+uint32_t HELPER(sve_brkas_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ return compute_brks_m(vd, vn, vg, oprsz, true);
+}
+
+void HELPER(sve_brkb_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ compute_brk_m(vd, vn, vg, oprsz, false);
+}
+
+uint32_t HELPER(sve_brkbs_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+ return compute_brks_m(vd, vn, vg, oprsz, false);
+}
+
+void HELPER(sve_brkn)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+ if (!last_active_pred(vn, vg, oprsz)) {
+ do_zero(vd, oprsz);
+ }
+}
+
+/* As if PredTest(Ones(PL), D, esz). */
+static uint32_t predtest_ones(ARMPredicateReg *d, intptr_t oprsz,
+ uint64_t esz_mask)
+{
+ uint32_t flags = PREDTEST_INIT;
+ intptr_t i;
+
+ for (i = 0; i < oprsz / 8; i++) {
+ flags = iter_predtest_fwd(d->p[i], esz_mask, flags);
+ }
+ if (oprsz & 7) {
+ uint64_t mask = ~(-1ULL << (8 * (oprsz & 7)));
+ flags = iter_predtest_fwd(d->p[i], esz_mask & mask, flags);
+ }
+ return flags;
+}
+
+uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
+{
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+
+ if (last_active_pred(vn, vg, oprsz)) {
+ return predtest_ones(vd, oprsz, -1);
+ } else {
+ return do_zero(vd, oprsz);
+ }
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_PPZI(CMPLS, cmpls)

 #undef DO_PPZI

+/*
+ *** SVE Partition Break Group
+ */
+
+static bool do_brk3(DisasContext *s, arg_rprr_s *a,
+ gen_helper_gvec_4 *fn, gen_helper_gvec_flags_4 *fn_s)
+{
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ unsigned vsz = pred_full_reg_size(s);
+
+ /* Predicate sizes may be smaller and cannot use simd_desc. */
+ TCGv_ptr d = tcg_temp_new_ptr();
+ TCGv_ptr n = tcg_temp_new_ptr();
+ TCGv_ptr m = tcg_temp_new_ptr();
+ TCGv_ptr g = tcg_temp_new_ptr();
+ TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+ tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+ tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+ tcg_gen_addi_ptr(m, cpu_env, pred_full_reg_offset(s, a->rm));
+ tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+ if (a->s) {
+ fn_s(t, d, n, m, g, t);
+ do_pred_flags(t);
+ } else {
+ fn(d, n, m, g, t);
+ }
+ tcg_temp_free_ptr(d);
+ tcg_temp_free_ptr(n);
+ tcg_temp_free_ptr(m);
+ tcg_temp_free_ptr(g);
+ tcg_temp_free_i32(t);
+ return true;
+}
+
+static bool do_brk2(DisasContext *s, arg_rpr_s *a,
+ gen_helper_gvec_3 *fn, gen_helper_gvec_flags_3 *fn_s)
+{
+ if (!sve_access_check(s)) {
+ return true;
+ }
+
+ unsigned vsz = pred_full_reg_size(s);
+
+ /* Predicate sizes may be smaller and cannot use simd_desc. */
+ TCGv_ptr d = tcg_temp_new_ptr();
+ TCGv_ptr n = tcg_temp_new_ptr();
+ TCGv_ptr g = tcg_temp_new_ptr();
+ TCGv_i32 t = tcg_const_i32(vsz - 2);
+
+ tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
+ tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
+ tcg_gen_addi_ptr(g, cpu_env, pred_full_reg_offset(s, a->pg));
+
+ if (a->s) {
+ fn_s(t, d, n, g, t);
+ do_pred_flags(t);
+ } else {
+ fn(d, n, g, t);
+ }
+ tcg_temp_free_ptr(d);
+ tcg_temp_free_ptr(n);
+ tcg_temp_free_ptr(g);
+ tcg_temp_free_i32(t);
+ return true;
+}
+
+static bool trans_BRKPA(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+ return do_brk3(s, a, gen_helper_sve_brkpa, gen_helper_sve_brkpas);
+}
+
+static bool trans_BRKPB(DisasContext *s, arg_rprr_s *a, uint32_t insn)
+{
+ return do_brk3(s, a, gen_helper_sve_brkpb, gen_helper_sve_brkpbs);
+}
+
+static bool trans_BRKA_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+ return do_brk2(s, a, gen_helper_sve_brka_m, gen_helper_sve_brkas_m);
+}
+
+static bool trans_BRKB_m(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+ return do_brk2(s, a, gen_helper_sve_brkb_m, gen_helper_sve_brkbs_m);
+}
+
+static bool trans_BRKA_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+ return do_brk2(s, a, gen_helper_sve_brka_z, gen_helper_sve_brkas_z);
+}
+
+static bool trans_BRKB_z(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+ return do_brk2(s, a, gen_helper_sve_brkb_z, gen_helper_sve_brkbs_z);
+}
+
+static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
+{
+ return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
+}
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &rri_esz rd rn imm esz
 &rrr_esz rd rn rm esz
 &rpr_esz rd pg rn esz
+&rpr_s rd pg rn s
 &rprr_s rd pg rn rm s
 &rprr_esz rd pg rn rm esz
 &rprrr_esz rd pg rn rm ra esz
@@ -XXX,XX +XXX,XX @@
 @pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
 @rd_rn ........ esz:2 ...... ...... rn:5 rd:5 &rr_esz

+# Two operand with governing predicate, flags setting
+@pd_pg_pn_s ........ . s:1 ...... .. pg:4 . rn:4 . rd:4 &rpr_s
+
 # Three operand with unused vector element size
 @rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0

@@ -XXX,XX +XXX,XX @@ PFIRST 00100101 01 011 000 11000 00 .... 0 .... @pd_pn_e0
 # SVE predicate next active
 PNEXT 00100101 .. 011 001 11000 10 .... 0 .... @pd_pn

+### SVE Partition Break Group
+
+# SVE propagate break from previous partition
+BRKPA 00100101 0. 00 .... 11 .... 0 .... 0 .... @pd_pg_pn_pm_s
+BRKPB 00100101 0. 00 .... 11 .... 0 .... 1 .... @pd_pg_pn_pm_s
+
+# SVE partition break condition
+BRKA_z 00100101 0. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKB_z 00100101 1. 01000001 .... 0 .... 0 .... @pd_pg_pn_s
+BRKA_m 00100101 0. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
+
+# SVE propagate break to next partition
+BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1
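Note (editorial, not from the patch): the bit trick in compute_brk() is
compact enough to deserve a worked example. With guard g all-true and the
first active n bit at position 4:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t g = 0xff, n = 0x10;   /* first G & N at bit 4 */
        uint64_t b = g & n;            /* 0x10: guard true and pred true */
        b &= -b;                       /* 0x10: isolate the first such bit */
        assert((b | (b - 1)) == 0x1f); /* "after": up to and including bit 4 */
        assert((b - 1) == 0x0f);       /* "!after": up to but excluding bit 4 */
        return 0;
    }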
From: Joel Stanley <joel@jms.id.au>

The ASPEED SoCs contain a single register that returns random data when
read. This models that register so that guests can use it.

The random number data register has a corresponding control register,
however it returns data regardless of the state of the enabled bit, so
the model follows this behaviour.

When the qcrypto call fails we exit as the guest uses the random number
device to feed its entropy pool, which is used for cryptographic
purposes.

Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180613114836.9265-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/visitor.h"
 #include "qemu/bitops.h"
 #include "qemu/log.h"
+#include "crypto/random.h"
 #include "trace.h"

 #define TO_REG(offset) ((offset) >> 2)
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2500_a1_resets[ASPEED_SCU_NR_REGS] = {
 [BMC_DEV_ID] = 0x00002402U
 };

+static uint32_t aspeed_scu_get_random(void)
+{
+ Error *err = NULL;
+ uint32_t num;
+
+ if (qcrypto_random_bytes((uint8_t *)&num, sizeof(num), &err)) {
+ error_report_err(err);
+ exit(1);
+ }
+
+ return num;
+}
+
 static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
 {
 AspeedSCUState *s = ASPEED_SCU(opaque);
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
 }

 switch (reg) {
+ case RNG_DATA:
+ /* On hardware, RNG_DATA works regardless of
+ * the state of the enable bit in RNG_CTRL
+ */
+ s->regs[RNG_DATA] = aspeed_scu_get_random();
+ break;
 case WAKEUP_EN:
 qemu_log_mask(LOG_GUEST_ERROR,
 "%s: Read of write-only offset 0x%" HWADDR_PRIx "\n",
--
2.17.1

From: Andrew Jeffery <andrew@aj.id.au>

This matches the configuration set by u-boot on the AST2600.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 080ca1267a09381c43cf3c50d434fb6c186f2b6e.1576215453.git-series.andrew@aj.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/aspeed_ast2600.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_ast2600.c
+++ b/hw/arm/aspeed_ast2600.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_realize(DeviceState *dev, Error **errp)
 object_property_set_int(OBJECT(&s->cpu[i]), aspeed_calc_affinity(i),
 "mp-affinity", &error_abort);

+ object_property_set_int(OBJECT(&s->cpu[i]), 1125000000, "cntfrq",
+ &error_abort);
+
 /*
 * TODO: the secondary CPUs are started and a boot helper
 * is needed when using -kernel
--
2.20.1
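Note (editorial): CNTFRQ is the generic timer's count frequency in Hz, so
1125000000 is 1.125GHz. Together with the per-platform CNTFRQ support in the
earlier patches of this series, this replaces the historical fixed 62.5MHz
rate for the AST2600. Back-of-the-envelope arithmetic on the tick periods:

    #include <stdio.h>

    int main(void)
    {
        double legacy_hz = 62.5e6;   /* QEMU's historical fixed CNTFRQ */
        double ast2600_hz = 1.125e9; /* value programmed by this patch */
        printf("%.3f ns/tick\n", 1e9 / legacy_hz);  /* 16.000 */
        printf("%.3f ns/tick\n", 1e9 / ast2600_hz); /* ~0.889 */
        return 0;
    }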
From: Julia Suvorova <jusual@mail.ru>

ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets BL instruction as 32-bit.

This patch is required for future Cortex-M0 support.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
 point of use, and mark 'const'. Check for M-and-not-v7
 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 43 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
 * end up actually treating this as two 16-bit insns, though,
 * if it's half of a bl/blx pair that might span a page boundary.
 */
- if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+ if (arm_dc_feature(s, ARM_FEATURE_THUMB2) ||
+ arm_dc_feature(s, ARM_FEATURE_M)) {
 /* Thumb2 cores (including all M profile ones) always treat
 * 32-bit insns as 32-bit.
 */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
 int conds;
 int logic_cc;

- /* The only 32 bit insn that's allowed for Thumb1 is the combined
- * BL/BLX prefix and suffix.
+ /*
+ * ARMv6-M supports a limited subset of Thumb2 instructions.
+ * Other Thumb1 architectures allow only 32-bit
+ * combined BL/BLX prefix and suffix.
 */
- if ((insn & 0xf800e800) != 0xf000e800) {
+ if (arm_dc_feature(s, ARM_FEATURE_M) &&
+ !arm_dc_feature(s, ARM_FEATURE_V7)) {
+ int i;
+ bool found = false;
+ const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
+ 0xf3b08040 /* dsb */,
+ 0xf3b08050 /* dmb */,
+ 0xf3b08060 /* isb */,
+ 0xf3e08000 /* mrs */,
+ 0xf000d000 /* bl */};
+ const uint32_t armv6m_mask[] = {0xffe0d000,
+ 0xfff0d0f0,
+ 0xfff0d0f0,
+ 0xfff0d0f0,
+ 0xffe0d000,
+ 0xf800d000};
+
+ for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
+ if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ goto illegal_op;
+ }
+ } else if ((insn & 0xf800e800) != 0xf000e800) {
 ARCH(6T2);
 }

@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
 }
 break;
 case 3: /* Special control operations. */
- ARCH(7);
+ if (!arm_dc_feature(s, ARM_FEATURE_V7) &&
+ !(arm_dc_feature(s, ARM_FEATURE_V6) &&
+ arm_dc_feature(s, ARM_FEATURE_M))) {
+ goto illegal_op;
+ }
 op = (insn >> 4) & 0xf;
 switch (op) {
 case 2: /* clrex */
--
2.17.1

From: Simon Veith <sveith@amazon.de>

In the SMMU_STRTAB_BASE register, the stream table base address only
occupies bits [51:6]. Other bits, such as RA (bit [62]), must be masked
out to obtain the base address.

The branch for 2-level stream tables correctly applies this mask by way
of SMMU_BASE_ADDR_MASK, but the one for linear stream tables does not.

Apply the missing mask in that case as well so that the correct stream
base address is used by guests which configure a linear stream table.

Linux guests are unaffected by this change because they choose a 2-level
stream table layout for the QEMU SMMUv3, based on the size of its stream
ID space.

ref. ARM IHI 0070C, section 6.3.23.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-2-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
 }
 addr = l2ptr + l2_ste_offset * sizeof(*ste);
 } else {
- addr = s->strtab_base + sid * sizeof(*ste);
+ addr = (s->strtab_base & SMMU_BASE_ADDR_MASK) + sid * sizeof(*ste);
 }

 if (smmu_get_ste(s, addr, ste, event)) {
--
2.20.1
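Note (editorial): the ARMv6-M check above is a plain mask/value table match.
A self-contained sketch using the same tables as the patch:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    /* True if insn is one of the six 32-bit encodings ARMv6-M permits. */
    static bool armv6m_allows_32bit_insn(uint32_t insn)
    {
        static const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
                                               0xf3b08040 /* dsb */,
                                               0xf3b08050 /* dmb */,
                                               0xf3b08060 /* isb */,
                                               0xf3e08000 /* mrs */,
                                               0xf000d000 /* bl */};
        static const uint32_t armv6m_mask[] = {0xffe0d000, 0xfff0d0f0,
                                               0xfff0d0f0, 0xfff0d0f0,
                                               0xffe0d000, 0xf800d000};
        size_t i;

        for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
            if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
                return true;
            }
        }
        return false;
    }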
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 14 ++++++
 target/arm/helper.h | 19 +++++++++++
 target/arm/translate-sve.c | 42 +++++++++++++++++++++++
 target/arm/vec_helper.c | 69 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 10 ++++++
 5 files changed, 154 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_5(gvec_recps_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_rsqrts_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fsub_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_ZZI(UMIN, umin)

 #undef DO_ZZI

+/*
+ *** SVE Floating Point Arithmetic - Unpredicated Group
+ */
+
+static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
+ gen_helper_gvec_3_ptr *fn)
+{
+ if (fn == NULL) {
+ return false;
+ }
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ status, vsz, vsz, 0, fn);
+ tcg_temp_free_ptr(status);
+ }
+ return true;
+}
+
+
+#define DO_FP3(NAME, name) \
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a, uint32_t insn) \
+{ \
+ static gen_helper_gvec_3_ptr * const fns[4] = { \
+ NULL, gen_helper_gvec_##name##_h, \
+ gen_helper_gvec_##name##_s, gen_helper_gvec_##name##_d \
+ }; \
+ return do_zzz_fp(s, a, fns[a->esz]); \
+}
+
+DO_FP3(FADD_zzz, fadd)
+DO_FP3(FSUB_zzz, fsub)
+DO_FP3(FMUL_zzz, fmul)
+DO_FP3(FTSMUL, ftsmul)
+DO_FP3(FRECPS, recps)
+DO_FP3(FRSQRTS, rsqrts)
+
+#undef DO_FP3
+
 /*
 *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
 */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
 }
 clear_tail(d, opr_sz, simd_maxsz(desc));
 }
+
+/* Floating-point trigonometric starting value.
+ * See the ARM ARM pseudocode function FPTrigSMul.
+ */
+static float16 float16_ftsmul(float16 op1, uint16_t op2, float_status *stat)
+{
+ float16 result = float16_mul(op1, op1, stat);
+ if (!float16_is_any_nan(result)) {
+ result = float16_set_sign(result, op2 & 1);
+ }
+ return result;
+}
+
+static float32 float32_ftsmul(float32 op1, uint32_t op2, float_status *stat)
+{
+ float32 result = float32_mul(op1, op1, stat);
+ if (!float32_is_any_nan(result)) {
+ result = float32_set_sign(result, op2 & 1);
+ }
+ return result;
+}
+
+static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
+{
+ float64 result = float64_mul(op1, op1, stat);
+ if (!float64_is_any_nan(result)) {
+ result = float64_set_sign(result, op2 & 1);
+ }
+ return result;
+}
+
+#define DO_3OP(NAME, FUNC, TYPE) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{ \
+ intptr_t i, oprsz = simd_oprsz(desc); \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
+ d[i] = FUNC(n[i], m[i], stat); \
+ } \
+}
+
+DO_3OP(gvec_fadd_h, float16_add, float16)
+DO_3OP(gvec_fadd_s, float32_add, float32)
+DO_3OP(gvec_fadd_d, float64_add, float64)
+
+DO_3OP(gvec_fsub_h, float16_sub, float16)
+DO_3OP(gvec_fsub_s, float32_sub, float32)
+DO_3OP(gvec_fsub_d, float64_sub, float64)
+
+DO_3OP(gvec_fmul_h, float16_mul, float16)
+DO_3OP(gvec_fmul_s, float32_mul, float32)
+DO_3OP(gvec_fmul_d, float64_mul, float64)
+
+DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
+DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
+DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
+
+#ifdef TARGET_AARCH64
+
+DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
+DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
+DO_3OP(gvec_recps_d, helper_recpsf_f64, float64)
+
+DO_3OP(gvec_rsqrts_h, helper_rsqrtsf_f16, float16)
+DO_3OP(gvec_rsqrts_s, helper_rsqrtsf_f32, float32)
+DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
+
+#endif
+#undef DO_3OP
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
 # SVE integer multiply immediate (unpredicated)
 MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s

+### SVE Floating Point Arithmetic - Unpredicated Group
+
+# SVE floating-point arithmetic (unpredicated)
+FADD_zzz 01100101 .. 0 ..... 000 000 ..... ..... @rd_rn_rm
+FSUB_zzz 01100101 .. 0 ..... 000 001 ..... ..... @rd_rn_rm
+FMUL_zzz 01100101 .. 0 ..... 000 010 ..... ..... @rd_rn_rm
+FTSMUL 01100101 .. 0 ..... 000 011 ..... ..... @rd_rn_rm
+FRECPS 01100101 .. 0 ..... 000 110 ..... ..... @rd_rn_rm
+FRSQRTS 01100101 .. 0 ..... 000 111 ..... ..... @rd_rn_rm
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group

 # SVE load predicate register
--
2.17.1

From: Simon Veith <sveith@amazon.de>

There are two issues with the current value of SMMU_BASE_ADDR_MASK:

- At the lower end, we are clearing bits [4:0]. Per the SMMUv3 spec,
 we should also be treating bit 5 as zero in the base address.
- At the upper end, we are clearing bits [63:48]. Per the SMMUv3 spec,
 only bits [63:52] must be explicitly treated as zero.

Update the SMMU_BASE_ADDR_MASK value to mask out bits [63:52] and [5:0].

ref. ARM IHI 0070C, section 6.3.23.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-3-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GERROR_IRQ_CFG2, 0x74)

 #define A_STRTAB_BASE 0x80 /* 64b */

-#define SMMU_BASE_ADDR_MASK 0xffffffffffe0
+#define SMMU_BASE_ADDR_MASK 0xfffffffffffc0

 REG32(STRTAB_BASE_CFG, 0x88)
 FIELD(STRTAB_BASE_CFG, FMT, 16, 2)
--
2.20.1
43
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Simon Veith <sveith@amazon.de>
2
2
3
When checking whether a stream ID is in range of the stream table, we
4
have so far been only checking it against our implementation limit
5
(SMMU_IDR1_SIDSIZE). However, the guest can program the
6
STRTAB_BASE_CFG.LOG2SIZE field to a size that is smaller than this
7
limit.
8
9
Check the stream ID against this limit as well to match the hardware
10
behavior of raising C_BAD_STREAMID events in case the limit is exceeded.
11
Also, ensure that we do not go one entry beyond the end of the table by
12
checking that its index is strictly smaller than the table size.
13
14
ref. ARM IHI 0070C, section 6.3.24.
15
16
Signed-off-by: Simon Veith <sveith@amazon.de>
17
Acked-by: Eric Auger <eric.auger@redhat.com>
18
Tested-by: Eric Auger <eric.auger@redhat.com>
19
Message-id: 1576509312-13083-4-git-send-email-sveith@amazon.de
20
Cc: Eric Auger <eric.auger@redhat.com>
21
Cc: qemu-devel@nongnu.org
22
Cc: qemu-arm@nongnu.org
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
25
---
8
target/arm/helper-sve.h | 25 +++++++
26
hw/arm/smmuv3.c | 8 ++++++--
9
target/arm/sve_helper.c | 41 +++++++++++
27
1 file changed, 6 insertions(+), 2 deletions(-)
10
target/arm/translate-sve.c | 144 +++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 26 +++++++
12
4 files changed, 236 insertions(+)
13
28
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
29
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
15
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
31
--- a/hw/arm/smmuv3.c
17
+++ b/target/arm/helper-sve.h
32
+++ b/hw/arm/smmuv3.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
33
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
19
DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
34
SMMUEventInfo *event)
20
35
{
21
DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
36
dma_addr_t addr;
22
+
37
+ uint32_t log2size;
23
+DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
38
int ret;
24
+DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
39
25
+DEF_HELPER_FLAGS_4(sve_subri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
40
trace_smmuv3_find_ste(sid, s->features, s->sid_split);
26
+DEF_HELPER_FLAGS_4(sve_subri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
41
- /* Check SID range */
27
+
42
- if (sid > (1 << SMMU_IDR1_SIDSIZE)) {
28
+DEF_HELPER_FLAGS_4(sve_smaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
43
+ log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
29
+DEF_HELPER_FLAGS_4(sve_smaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
44
+ /*
30
+DEF_HELPER_FLAGS_4(sve_smaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
45
+ * Check SID range against both guest-configured and implementation limits
31
+DEF_HELPER_FLAGS_4(sve_smaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
46
+ */
32
+
47
+ if (sid >= (1 << MIN(log2size, SMMU_IDR1_SIDSIZE))) {
33
+DEF_HELPER_FLAGS_4(sve_smini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
48
event->type = SMMU_EVT_C_BAD_STREAMID;
34
+DEF_HELPER_FLAGS_4(sve_smini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
49
return -EINVAL;
35
+DEF_HELPER_FLAGS_4(sve_smini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
50
}
36
+DEF_HELPER_FLAGS_4(sve_smini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
37
+
38
+DEF_HELPER_FLAGS_4(sve_umaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
39
+DEF_HELPER_FLAGS_4(sve_umaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
40
+DEF_HELPER_FLAGS_4(sve_umaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
41
+DEF_HELPER_FLAGS_4(sve_umaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
42
+
43
+DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
44
+DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
45
+DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
46
+DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
47
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/sve_helper.c
50
+++ b/target/arm/sve_helper.c
51
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
52
#undef DO_VPZ
53
#undef DO_VPZ_D
54
55
+/* Two vector operand, one scalar operand, unpredicated. */
56
+#define DO_ZZI(NAME, TYPE, OP) \
57
+void HELPER(NAME)(void *vd, void *vn, uint64_t s64, uint32_t desc) \
58
+{ \
59
+ intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE); \
60
+ TYPE s = s64, *d = vd, *n = vn; \
61
+ for (i = 0; i < opr_sz; ++i) { \
62
+ d[i] = OP(n[i], s); \
63
+ } \
64
+}
65
+
66
+#define DO_SUBR(X, Y) (Y - X)
67
+
68
+DO_ZZI(sve_subri_b, uint8_t, DO_SUBR)
69
+DO_ZZI(sve_subri_h, uint16_t, DO_SUBR)
70
+DO_ZZI(sve_subri_s, uint32_t, DO_SUBR)
71
+DO_ZZI(sve_subri_d, uint64_t, DO_SUBR)
72
+
73
+DO_ZZI(sve_smaxi_b, int8_t, DO_MAX)
74
+DO_ZZI(sve_smaxi_h, int16_t, DO_MAX)
75
+DO_ZZI(sve_smaxi_s, int32_t, DO_MAX)
76
+DO_ZZI(sve_smaxi_d, int64_t, DO_MAX)
77
+
78
+DO_ZZI(sve_smini_b, int8_t, DO_MIN)
79
+DO_ZZI(sve_smini_h, int16_t, DO_MIN)
80
+DO_ZZI(sve_smini_s, int32_t, DO_MIN)
81
+DO_ZZI(sve_smini_d, int64_t, DO_MIN)
82
+
83
+DO_ZZI(sve_umaxi_b, uint8_t, DO_MAX)
84
+DO_ZZI(sve_umaxi_h, uint16_t, DO_MAX)
85
+DO_ZZI(sve_umaxi_s, uint32_t, DO_MAX)
86
+DO_ZZI(sve_umaxi_d, uint64_t, DO_MAX)
87
+
88
+DO_ZZI(sve_umini_b, uint8_t, DO_MIN)
89
+DO_ZZI(sve_umini_h, uint16_t, DO_MIN)
90
+DO_ZZI(sve_umini_s, uint32_t, DO_MIN)
91
+DO_ZZI(sve_umini_d, uint64_t, DO_MIN)
92
+
93
+#undef DO_ZZI
94
+
95
#undef DO_AND
96
#undef DO_ORR
97
#undef DO_EOR
98
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
99
#undef DO_ASR
100
#undef DO_LSR
101
#undef DO_LSL
102
+#undef DO_SUBR
103
104
/* Similar to the ARM LastActiveElement pseudocode function, except the
105
result is multiplied by the element size. This includes the not found
106
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-sve.c
109
+++ b/target/arm/translate-sve.c
110
@@ -XXX,XX +XXX,XX @@ static inline int expand_imm_sh8s(int x)
111
return (int8_t)x << (x & 0x100 ? 8 : 0);
112
}
113
114
+static inline int expand_imm_sh8u(int x)
115
+{
116
+ return (uint8_t)x << (x & 0x100 ? 8 : 0);
117
+}
118
+
119
/*
120
* Include the generated decoder.
121
*/
122
@@ -XXX,XX +XXX,XX @@ static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
123
return true;
124
}
125
126
+static bool trans_ADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
127
+{
128
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
129
+ return false;
130
+ }
131
+ if (sve_access_check(s)) {
132
+ unsigned vsz = vec_full_reg_size(s);
133
+ tcg_gen_gvec_addi(a->esz, vec_full_reg_offset(s, a->rd),
134
+ vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
135
+ }
136
+ return true;
137
+}
138
+
139
+static bool trans_SUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
140
+{
141
+ a->imm = -a->imm;
142
+ return trans_ADD_zzi(s, a, insn);
143
+}
144
+
145
+static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
146
+{
147
+ static const GVecGen2s op[4] = {
148
+ { .fni8 = tcg_gen_vec_sub8_i64,
149
+ .fniv = tcg_gen_sub_vec,
150
+ .fno = gen_helper_sve_subri_b,
151
+ .opc = INDEX_op_sub_vec,
152
+ .vece = MO_8,
153
+ .scalar_first = true },
154
+ { .fni8 = tcg_gen_vec_sub16_i64,
155
+ .fniv = tcg_gen_sub_vec,
156
+ .fno = gen_helper_sve_subri_h,
157
+ .opc = INDEX_op_sub_vec,
158
+ .vece = MO_16,
159
+ .scalar_first = true },
160
+ { .fni4 = tcg_gen_sub_i32,
161
+ .fniv = tcg_gen_sub_vec,
162
+ .fno = gen_helper_sve_subri_s,
163
+ .opc = INDEX_op_sub_vec,
164
+ .vece = MO_32,
165
+ .scalar_first = true },
166
+ { .fni8 = tcg_gen_sub_i64,
167
+ .fniv = tcg_gen_sub_vec,
168
+ .fno = gen_helper_sve_subri_d,
169
+ .opc = INDEX_op_sub_vec,
170
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
171
+ .vece = MO_64,
172
+ .scalar_first = true }
173
+ };
174
+
175
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
176
+ return false;
177
+ }
178
+ if (sve_access_check(s)) {
179
+ unsigned vsz = vec_full_reg_size(s);
180
+ TCGv_i64 c = tcg_const_i64(a->imm);
181
+        tcg_gen_gvec_2s(vec_full_reg_offset(s, a->rd),
+                        vec_full_reg_offset(s, a->rn),
+                        vsz, vsz, c, &op[a->esz]);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+static bool trans_MUL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_muli(a->esz, vec_full_reg_offset(s, a->rd),
+                          vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
+    }
+    return true;
+}
+
+static bool do_zzi_sat(DisasContext *s, arg_rri_esz *a, uint32_t insn,
+                       bool u, bool d)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_const_i64(a->imm);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, u, d);
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_SQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, false);
+}
+
+static bool trans_UQADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, false);
+}
+
+static bool trans_SQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, false, true);
+}
+
+static bool trans_UQSUB_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+    return do_zzi_sat(s, a, insn, true, true);
+}
+
+static bool do_zzi_ool(DisasContext *s, arg_rri_esz *a, gen_helper_gvec_2i *fn)
+{
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 c = tcg_const_i64(a->imm);
+
+        tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
+                            vec_full_reg_offset(s, a->rn),
+                            c, vsz, vsz, 0, fn);
+        tcg_temp_free_i64(c);
+    }
+    return true;
+}
+
+#define DO_ZZI(NAME, name) \
+static bool trans_##NAME##_zzi(DisasContext *s, arg_rri_esz *a,     \
+                               uint32_t insn)                       \
+{                                                                   \
+    static gen_helper_gvec_2i * const fns[4] = {                    \
+        gen_helper_sve_##name##i_b, gen_helper_sve_##name##i_h,     \
+        gen_helper_sve_##name##i_s, gen_helper_sve_##name##i_d,     \
+    };                                                              \
+    return do_zzi_ool(s, a, fns[a->esz]);                           \
+}
+
+DO_ZZI(SMAX, smax)
+DO_ZZI(UMAX, umax)
+DO_ZZI(SMIN, smin)
+DO_ZZI(UMIN, umin)
+
+#undef DO_ZZI
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 
 # Signed 8-bit immediate, optionally shifted left by 8.
 %sh8_i8s        5:9 !function=expand_imm_sh8s
+# Unsigned 8-bit immediate, optionally shifted left by 8.
+%sh8_i8u        5:9 !function=expand_imm_sh8u
 
 # Either a copy of rd (at bit 0), or a different source
 # as propagated via the MOVPRFX instruction.
@@ -XXX,XX +XXX,XX @@
 @pd_pn_pm       ........ esz:2 .. rm:4 ....... rn:4 . rd:4      &rrr_esz
 @rdn_rm         ........ esz:2 ...... ...... rm:5 rd:5 \
                 &rrr_esz rn=%reg_movprfx
+@rdn_sh_i8u     ........ esz:2 ...... ...... ..... rd:5 \
+                &rri_esz rn=%reg_movprfx imm=%sh8_i8u
+@rdn_i8u        ........ esz:2 ...... ... imm:8 rd:5 \
+                &rri_esz rn=%reg_movprfx
+@rdn_i8s        ........ esz:2 ...... ... imm:s8 rd:5 \
+                &rri_esz rn=%reg_movprfx
 
 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm   ........ ... rm:5 .... imm:2 rn:5 rd:5          &rrri
@@ -XXX,XX +XXX,XX @@ FDUP            00100101 esz:2 111 00 1110 imm:8 rd:5
 # SVE broadcast integer immediate (unpredicated)
 DUP_i           00100101 esz:2 111 00 011 . ........ rd:5       imm=%sh8_i8s
 
+# SVE integer add/subtract immediate (unpredicated)
+ADD_zzi         00100101 .. 100 000 11 . ........ .....         @rdn_sh_i8u
+SUB_zzi         00100101 .. 100 001 11 . ........ .....         @rdn_sh_i8u
+SUBR_zzi        00100101 .. 100 011 11 . ........ .....         @rdn_sh_i8u
+SQADD_zzi       00100101 .. 100 100 11 . ........ .....         @rdn_sh_i8u
+UQADD_zzi       00100101 .. 100 101 11 . ........ .....         @rdn_sh_i8u
+SQSUB_zzi       00100101 .. 100 110 11 . ........ .....         @rdn_sh_i8u
+UQSUB_zzi       00100101 .. 100 111 11 . ........ .....         @rdn_sh_i8u
+
+# SVE integer min/max immediate (unpredicated)
+SMAX_zzi        00100101 .. 101 000 110 ........ .....          @rdn_i8s
+UMAX_zzi        00100101 .. 101 001 110 ........ .....          @rdn_i8u
+SMIN_zzi        00100101 .. 101 010 110 ........ .....          @rdn_i8s
+UMIN_zzi        00100101 .. 101 011 110 ........ .....          @rdn_i8u
+
+# SVE integer multiply immediate (unpredicated)
+MUL_zzi         00100101 .. 110 000 110 ........ .....          @rdn_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1
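An aside on the decode lines above: %sh8_i8s and %sh8_i8u each pull a 9-bit field in which bit 8 selects an optional left shift of the 8-bit immediate by 8. A minimal C sketch of what such expander hooks can look like (illustrative only, not quoted from the series):

    #include <stdint.h>

    /* Bit 8 of the 9-bit field requests "shift the immediate left by 8". */
    static int expand_imm_sh8s(int x)      /* signed variant, for %sh8_i8s */
    {
        return (int8_t)x << (x & 0x100 ? 8 : 0);
    }

    static int expand_imm_sh8u(int x)      /* unsigned variant, for %sh8_i8u */
    {
        return (uint8_t)x << (x & 0x100 ? 8 : 0);
    }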
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |   2 +
 target/arm/sve_helper.c    |  14 ++++
 target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  27 ++++++++
 4 files changed, 176 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
         return do_zero(vd, oprsz);
     }
 }
+
+uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
+{
+    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
+    intptr_t i;
+
+    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+        uint64_t t = n[i] & g[i] & mask;
+        sum += ctpop64(t);
+    }
+    return sum;
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-a64.h"
 
+typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
+                         TCGv_i64, uint32_t, uint32_t);
+
 typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                      TCGv_ptr, TCGv_i32);
 typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
@@ -XXX,XX +XXX,XX @@ static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
     return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
 }
 
+/*
+ *** SVE Predicate Count Group
+ */
+
+static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
+{
+    unsigned psz = pred_full_reg_size(s);
+
+    if (psz <= 8) {
+        uint64_t psz_mask;
+
+        tcg_gen_ld_i64(val, cpu_env, pred_full_reg_offset(s, pn));
+        if (pn != pg) {
+            TCGv_i64 g = tcg_temp_new_i64();
+            tcg_gen_ld_i64(g, cpu_env, pred_full_reg_offset(s, pg));
+            tcg_gen_and_i64(val, val, g);
+            tcg_temp_free_i64(g);
+        }
+
+        /* Reduce the pred_esz_masks value simply to reduce the
+         * size of the code generated here.
+         */
+        psz_mask = MAKE_64BIT_MASK(0, psz * 8);
+        tcg_gen_andi_i64(val, val, pred_esz_masks[esz] & psz_mask);
+
+        tcg_gen_ctpop_i64(val, val);
+    } else {
+        TCGv_ptr t_pn = tcg_temp_new_ptr();
+        TCGv_ptr t_pg = tcg_temp_new_ptr();
+        unsigned desc;
+        TCGv_i32 t_desc;
+
+        desc = psz - 2;
+        desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+        tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
+        tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+        t_desc = tcg_const_i32(desc);
+
+        gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
+        tcg_temp_free_ptr(t_pn);
+        tcg_temp_free_ptr(t_pg);
+        tcg_temp_free_i32(t_desc);
+    }
+}
+
+static bool trans_CNTP(DisasContext *s, arg_CNTP *a, uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        do_cntp(s, cpu_reg(s, a->rd), a->esz, a->rn, a->pg);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_r(DisasContext *s, arg_incdec_pred *a,
+                            uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        if (a->d) {
+            tcg_gen_sub_i64(reg, reg, val);
+        } else {
+            tcg_gen_add_i64(reg, reg, val);
+        }
+        tcg_temp_free_i64(val);
+    }
+    return true;
+}
+
+static bool trans_INCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                            uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        TCGv_i64 val = tcg_temp_new_i64();
+        GVecGen2sFn *gvec_fn = a->d ? tcg_gen_gvec_subs : tcg_gen_gvec_adds;
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
+                vec_full_reg_offset(s, a->rn), val, vsz, vsz);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_32(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_32(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_r_64(DisasContext *s, arg_incdec_pred *a,
+                                uint32_t insn)
+{
+    if (sve_access_check(s)) {
+        TCGv_i64 reg = cpu_reg(s, a->rd);
+        TCGv_i64 val = tcg_temp_new_i64();
+
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_64(reg, val, a->u, a->d);
+    }
+    return true;
+}
+
+static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+                             uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_i64 val = tcg_temp_new_i64();
+        do_cntp(s, val, a->esz, a->pg, a->pg);
+        do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@
 &ptrue          rd esz pat s
 &incdec_cnt     rd pat esz imm d u
 &incdec2_cnt    rd rn pat esz imm d u
+&incdec_pred    rd pg esz d u
+&incdec2_pred   rd rn pg esz d u
 
 ###########################################################################
 # Named instruction formats.  These are generally used to
@@ -XXX,XX +XXX,XX @@
 
 # One register operand, with governing predicate, vector element size
 @rd_pg_rn       ........ esz:2 ... ... ... pg:3 rn:5 rd:5       &rpr_esz
+@rd_pg4_pn      ........ esz:2 ... ... .. pg:4 . rn:4 rd:5      &rpr_esz
 
 # Two register operands with a 6-bit signed immediate.
 @rd_rn_i6       ........ ... rn:5 ..... imm:s6 rd:5             &rri
@@ -XXX,XX +XXX,XX @@
 @incdec2_cnt    ........ esz:2 .. .... ...... pat:5 rd:5 \
                 &incdec2_cnt imm=%imm4_16_p1 rn=%reg_movprfx
 
+# One register, predicate.
+# User must fill in U and D.
+@incdec_pred    ........ esz:2 .... .. ..... .. pg:4 rd:5       &incdec_pred
+@incdec2_pred   ........ esz:2 .... .. ..... .. pg:4 rd:5 \
+                &incdec2_pred rn=%reg_movprfx
+
###########################################################################
 # Instruction patterns.  Grouped according to the SVE encodingindex.xhtml.
 
@@ -XXX,XX +XXX,XX @@ BRKB_m          00100101 1. 01000001 .... 0 .... 1 ....     @pd_pg_pn_s
 # SVE propagate break to next partition
 BRKN            00100101 0. 01100001 .... 0 .... 0 ....         @pd_pg_pn_s
 
+### SVE Predicate Count Group
+
+# SVE predicate count
+CNTP            00100101 .. 100 000 10 .... 0 .... .....        @rd_pg4_pn
+
+# SVE inc/dec register by predicate count
+INCDECP_r       00100101 .. 10110 d:1 10001 00 .... .....       @incdec_pred u=1
+
+# SVE inc/dec vector by predicate count
+INCDECP_z       00100101 .. 10110 d:1 10000 00 .... .....       @incdec2_pred u=1
+
+# SVE saturating inc/dec register by predicate count
+SINCDECP_r_32   00100101 .. 1010 d:1 u:1 10001 00 .... .....    @incdec_pred
+SINCDECP_r_64   00100101 .. 1010 d:1 u:1 10001 10 .... .....    @incdec_pred
+
+# SVE saturating inc/dec vector by predicate count
+SINCDECP_z      00100101 .. 1010 d:1 u:1 10000 00 .... .....    @incdec2_pred
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1

From: Simon Veith <sveith@amazon.de>

Per the specification, and as observed in hardware, the SMMUv3 aligns
the SMMU_STRTAB_BASE address to the size of the table by masking out the
respective least significant bits in the ADDR field.

Apply this masking logic to our smmu_find_ste() lookup function per the
specification.

ref. ARM IHI 0070C, section 6.3.23.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-5-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ bad_ste:
 static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
                          SMMUEventInfo *event)
 {
-    dma_addr_t addr;
+    dma_addr_t addr, strtab_base;
     uint32_t log2size;
+    int strtab_size_shift;
     int ret;
 
     trace_smmuv3_find_ste(sid, s->features, s->sid_split);
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
     }
     if (s->features & SMMU_FEATURE_2LVL_STE) {
         int l1_ste_offset, l2_ste_offset, max_l2_ste, span;
-        dma_addr_t strtab_base, l1ptr, l2ptr;
+        dma_addr_t l1ptr, l2ptr;
         STEDesc l1std;
 
-        strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK;
+        /*
+         * Align strtab base address to table size. For this purpose, assume it
+         * is not bounded by SMMU_IDR1_SIDSIZE.
+         */
+        strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
+        strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+                      ~MAKE_64BIT_MASK(0, strtab_size_shift);
         l1_ste_offset = sid >> s->sid_split;
         l2_ste_offset = sid & ((1 << s->sid_split) - 1);
         l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
         }
         addr = l2ptr + l2_ste_offset * sizeof(*ste);
     } else {
-        addr = (s->strtab_base & SMMU_BASE_ADDR_MASK) + sid * sizeof(*ste);
+        strtab_size_shift = log2size + 5;
+        strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+                      ~MAKE_64BIT_MASK(0, strtab_size_shift);
+        addr = strtab_base + sid * sizeof(*ste);
     }
 
     if (smmu_get_ste(s, addr, ste, event)) {
--
2.20.1
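To see what the strtab alignment hunk above does with concrete numbers, here is a small self-contained check in plain C (the values of log2size and sid_split are made up for the example):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        int log2size = 16, sid_split = 8;   /* example 2-level config */
        int shift = MAX(5, log2size - sid_split - 1 + 3);   /* == 10 */
        uint64_t base = 0x80001234ULL;      /* deliberately misaligned */
        uint64_t aligned = base & ~MAKE_64BIT_MASK(0, shift);

        /* prints: shift=10 aligned=0x80001000 */
        printf("shift=%d aligned=0x%" PRIx64 "\n", shift, aligned);
        return 0;
    }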
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  2 +
 target/arm/sve_helper.c    | 31 ++++++++++++
 target/arm/translate-sve.c | 99 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 +++
 4 files changed, 140 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
     }
     return sum;
 }
+
+uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
+{
+    uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    uint64_t esz_mask = pred_esz_masks[esz];
+    ARMPredicateReg *d = vd;
+    uint32_t flags;
+    intptr_t i;
+
+    /* Begin with a zero predicate register.  */
+    flags = do_zero(d, oprsz);
+    if (count == 0) {
+        return flags;
+    }
+
+    /* Scale from predicate element count to bits.  */
+    count <<= esz;
+    /* Bound to the bits in the predicate.  */
+    count = MIN(count, oprsz * 8);
+
+    /* Set all of the requested bits.  */
+    for (i = 0; i < count / 64; ++i) {
+        d->p[i] = esz_mask;
+    }
+    if (count & 63) {
+        d->p[i] = MAKE_64BIT_MASK(0, count & 63) & esz_mask;
+    }
+
+    return predtest_ones(d, oprsz, esz_mask);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
     return true;
 }
 
+/*
+ *** SVE Integer Compare Scalars Group
+ */
+
+static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGCond cond = (a->ne ? TCG_COND_NE : TCG_COND_EQ);
+    TCGv_i64 rn = read_cpu_reg(s, a->rn, a->sf);
+    TCGv_i64 rm = read_cpu_reg(s, a->rm, a->sf);
+    TCGv_i64 cmp = tcg_temp_new_i64();
+
+    tcg_gen_setcond_i64(cond, cmp, rn, rm);
+    tcg_gen_extrl_i64_i32(cpu_NF, cmp);
+    tcg_temp_free_i64(cmp);
+
+    /* VF = !NF & !CF.  */
+    tcg_gen_xori_i32(cpu_VF, cpu_NF, 1);
+    tcg_gen_andc_i32(cpu_VF, cpu_VF, cpu_CF);
+
+    /* Both NF and VF actually look at bit 31.  */
+    tcg_gen_neg_i32(cpu_NF, cpu_NF);
+    tcg_gen_neg_i32(cpu_VF, cpu_VF);
+    return true;
+}
+
+static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
+{
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
+    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
+    TCGv_i64 t0 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i32 t2, t3;
+    TCGv_ptr ptr;
+    unsigned desc, vsz = vec_full_reg_size(s);
+    TCGCond cond;
+
+    if (!a->sf) {
+        if (a->u) {
+            tcg_gen_ext32u_i64(op0, op0);
+            tcg_gen_ext32u_i64(op1, op1);
+        } else {
+            tcg_gen_ext32s_i64(op0, op0);
+            tcg_gen_ext32s_i64(op1, op1);
+        }
+    }
+
+    /* For the helper, compress the different conditions into a computation
+     * of how many iterations for which the condition is true.
+     *
+     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
+     * 2**64 iterations, overflowing to 0.  Of course, predicate registers
+     * aren't that large, so any value >= predicate size is sufficient.
+     */
+    tcg_gen_sub_i64(t0, op1, op0);
+
+    /* t0 = MIN(op1 - op0, vsz).  */
+    tcg_gen_movi_i64(t1, vsz);
+    tcg_gen_umin_i64(t0, t0, t1);
+    if (a->eq) {
+        /* Equality means one more iteration.  */
+        tcg_gen_addi_i64(t0, t0, 1);
+    }
+
+    /* t0 = (condition true ? t0 : 0).  */
+    cond = (a->u
+            ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
+            : (a->eq ? TCG_COND_LE : TCG_COND_LT));
+    tcg_gen_movi_i64(t1, 0);
+    tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+
+    t2 = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(t2, t0);
+    tcg_temp_free_i64(t0);
+    tcg_temp_free_i64(t1);
+
+    desc = (vsz / 8) - 2;
+    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    t3 = tcg_const_i32(desc);
+
+    ptr = tcg_temp_new_ptr();
+    tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
+
+    gen_helper_sve_while(t2, ptr, t2, t3);
+    do_pred_flags(t2);
+
+    tcg_temp_free_ptr(ptr);
+    tcg_temp_free_i32(t2);
+    tcg_temp_free_i32(t3);
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ SINCDECP_r_64   00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
 # SVE saturating inc/dec vector by predicate count
 SINCDECP_z      00100101 .. 1010 d:1 u:1 10000 00 .... .....    @incdec2_pred
 
+### SVE Integer Compare - Scalars Group
+
+# SVE conditionally terminate scalars
+CTERM           00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
+
+# SVE integer compare scalar count and limit
+WHILE           00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1

From: Simon Veith <sveith@amazon.de>

The bit offsets in the EVT_SET_ADDR2 macro do not match those specified
in the ARM SMMUv3 Architecture Specification. In all events that use
this macro, e.g. F_WALK_EABT, the faulting fetch address or IPA actually
occupies the 32-bit words 6 and 7 in the event record contiguously, with
the upper and lower unused bits clear due to alignment or maximum
supported address bits. How many bits are clear depends on the
individual event type.

Update the macro to write to the correct words in the event record so
that guest drivers can obtain accurate address information on events.

ref. ARM IHI 0070C, sections 7.3.12 through 7.3.16.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-6-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUEventInfo {
     } while (0)
 #define EVT_SET_ADDR2(x, addr)                                    \
     do {                                                          \
-        (x)->word[7] = deposit32((x)->word[7], 3, 29, addr >> 16);    \
-        (x)->word[7] = deposit32((x)->word[7], 0, 16, addr & 0xffff); \
+        (x)->word[7] = (uint32_t)(addr >> 32);                    \
+        (x)->word[6] = (uint32_t)(addr & 0xffffffff);             \
     } while (0)
 
 void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
--
2.20.1
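The EVT_SET_ADDR2 change above is easiest to see in isolation: after the fix, a 64-bit address occupies words 6 and 7 of the event record contiguously. A stand-alone C sketch mirroring the macro (hypothetical record type, for illustration only):

    #include <stdint.h>

    typedef struct {
        uint32_t word[8];   /* an SMMUv3 event record is 8 words */
    } EvtRecordSketch;

    static void evt_set_addr2(EvtRecordSketch *x, uint64_t addr)
    {
        x->word[7] = (uint32_t)(addr >> 32);          /* high half */
        x->word[6] = (uint32_t)(addr & 0xffffffff);   /* low half */
    }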
Add an IOMMU index argument to the translate method of
IOMMUs. Since all of our current IOMMU implementations
support only a single IOMMU index, this has no effect
on the behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-4-peter.maydell@linaro.org
---
 include/exec/memory.h    |  3 ++-
 exec.c                   | 11 +++++++++--
 hw/alpha/typhoon.c       |  3 ++-
 hw/arm/smmuv3.c          |  2 +-
 hw/dma/rc4030.c          |  2 +-
 hw/i386/amd_iommu.c      |  2 +-
 hw/i386/intel_iommu.c    |  2 +-
 hw/ppc/spapr_iommu.c     |  3 ++-
 hw/s390x/s390-pci-bus.c  |  2 +-
 hw/sparc/sun4m_iommu.c   |  3 ++-
 hw/sparc64/sun4u_iommu.c |  2 +-
 memory.c                 |  2 +-
 12 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      * @iommu: the IOMMUMemoryRegion
      * @hwaddr: address to be translated within the memory region
      * @flag: requested access permissions
+     * @iommu_idx: IOMMU index for the translation
      */
     IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
-                               IOMMUAccessFlags flag);
+                               IOMMUAccessFlags flag, int iommu_idx);
     /* Returns minimum supported page size in bytes.
      * If this method is not provided then the minimum is assumed to
      * be TARGET_PAGE_SIZE.
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
     do {
         hwaddr addr = *xlat;
         IOMMUMemoryRegionClass *imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
-        IOMMUTLBEntry iotlb = imrc->translate(iommu_mr, addr, is_write ?
-                                              IOMMU_WO : IOMMU_RO);
+        int iommu_idx = 0;
+        IOMMUTLBEntry iotlb;
+
+        if (imrc->attrs_to_index) {
+            iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
+        }
+
+        iotlb = imrc->translate(iommu_mr, addr, is_write ?
+                                IOMMU_WO : IOMMU_RO, iommu_idx);
 
         if (!(iotlb.perm & (1 << is_write))) {
             goto unassigned;
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -XXX,XX +XXX,XX @@ static bool window_translate(TyphoonWindow *win, hwaddr addr,
    Pchip and generate a machine check interrupt.  */
 static IOMMUTLBEntry typhoon_translate_iommu(IOMMUMemoryRegion *iommu,
                                              hwaddr addr,
-                                             IOMMUAccessFlags flag)
+                                             IOMMUAccessFlags flag,
+                                             int iommu_idx)
 {
     TyphoonPchip *pchip = container_of(iommu, TyphoonPchip, iommu);
     IOMMUTLBEntry ret;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
 }
 
 static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
-                                      IOMMUAccessFlags flag)
+                                      IOMMUAccessFlags flag, int iommu_idx)
 {
     SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
     SMMUv3State *s = sdev->smmu;
diff --git a/hw/dma/rc4030.c b/hw/dma/rc4030.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/rc4030.c
+++ b/hw/dma/rc4030.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps jazzio_ops = {
 };
 
 static IOMMUTLBEntry rc4030_dma_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     rc4030State *s = container_of(iommu, rc4030State, dma_mr);
     IOMMUTLBEntry ret = {
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -XXX,XX +XXX,XX @@ static inline bool amdvi_is_interrupt_addr(hwaddr addr)
 }
 
 static IOMMUTLBEntry amdvi_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                     IOMMUAccessFlags flag)
+                                     IOMMUAccessFlags flag, int iommu_idx)
 {
     AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
     AMDVIState *s = as->iommu_state;
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static void vtd_mem_write(void *opaque, hwaddr addr,
 }
 
 static IOMMUTLBEntry vtd_iommu_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
-                                         IOMMUAccessFlags flag)
+                                         IOMMUAccessFlags flag, int iommu_idx)
 {
     VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace, iommu);
     IntelIOMMUState *s = vtd_as->iommu_state;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static void spapr_tce_free_table(uint64_t *table, int fd, uint32_t nb_table)
 /* Called from RCU critical section */
 static IOMMUTLBEntry spapr_tce_translate_iommu(IOMMUMemoryRegion *iommu,
                                                hwaddr addr,
-                                               IOMMUAccessFlags flag)
+                                               IOMMUAccessFlags flag,
+                                               int iommu_idx)
 {
     sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
     uint64_t tce;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -XXX,XX +XXX,XX @@ uint16_t s390_guest_io_table_walk(uint64_t g_iota, hwaddr addr,
 }
 
 static IOMMUTLBEntry s390_translate_iommu(IOMMUMemoryRegion *mr, hwaddr addr,
-                                          IOMMUAccessFlags flag)
+                                          IOMMUAccessFlags flag, int iommu_idx)
 {
     S390PCIIOMMU *iommu = container_of(mr, S390PCIIOMMU, iommu_mr);
     S390IOTLBEntry *entry;
diff --git a/hw/sparc/sun4m_iommu.c b/hw/sparc/sun4m_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc/sun4m_iommu.c
+++ b/hw/sparc/sun4m_iommu.c
@@ -XXX,XX +XXX,XX @@ static void iommu_bad_addr(IOMMUState *s, hwaddr addr,
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4m_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flags)
+                                           IOMMUAccessFlags flags,
+                                           int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr page, pa;
diff --git a/hw/sparc64/sun4u_iommu.c b/hw/sparc64/sun4u_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sparc64/sun4u_iommu.c
+++ b/hw/sparc64/sun4u_iommu.c
@@ -XXX,XX +XXX,XX @@
 /* Called from RCU critical section */
 static IOMMUTLBEntry sun4u_translate_iommu(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
-                                           IOMMUAccessFlags flag)
+                                           IOMMUAccessFlags flag, int iommu_idx)
 {
     IOMMUState *is = container_of(iommu, IOMMUState, iommu);
     hwaddr baseaddr, offset;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
     granularity = memory_region_iommu_get_min_page_size(iommu_mr);
 
     for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
-        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE);
+        iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
         if (iotlb.perm != IOMMU_NONE) {
             n->notify(n, &iotlb);
         }
--
2.17.1

From: Simon Veith <sveith@amazon.de>

The smmuv3_record_event() function that generates the F_STE_FETCH error
uses the EVT_SET_ADDR macro to record the fetch address, placing it in
32-bit words 4 and 5.

The correct position for this address is in words 6 and 7, per the
SMMUv3 Architecture Specification.

Update the function to use the EVT_SET_ADDR2 macro instead, which is the
macro intended for writing to these words.

ref. ARM IHI 0070C, section 7.3.4.

Signed-off-by: Simon Veith <sveith@amazon.de>
Acked-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1576509312-13083-7-git-send-email-sveith@amazon.de
Cc: Eric Auger <eric.auger@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
     case SMMU_EVT_F_STE_FETCH:
         EVT_SET_SSID(&evt, info->u.f_ste_fetch.ssid);
         EVT_SET_SSV(&evt, info->u.f_ste_fetch.ssv);
-        EVT_SET_ADDR(&evt, info->u.f_ste_fetch.addr);
+        EVT_SET_ADDR2(&evt, info->u.f_ste_fetch.addr);
         break;
     case SMMU_EVT_C_BAD_STE:
         EVT_SET_SSID(&evt, info->u.c_bad_ste.ssid);
--
2.20.1
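For the translate-method change in the first patch above: an IOMMU model with more than one translation table could dispatch on the new iommu_idx argument roughly as sketched below. MyIOMMUState and my_walk_table are hypothetical names, written here for illustration only; no in-tree device is claimed to look like this.

    /* Sketch: one translation table per IOMMU index, e.g. 0 = non-secure,
     * 1 = secure, as suggested by the API documentation.
     */
    static IOMMUTLBEntry my_iommu_translate(IOMMUMemoryRegion *iommu,
                                            hwaddr addr,
                                            IOMMUAccessFlags flag,
                                            int iommu_idx)
    {
        MyIOMMUState *s = container_of(iommu, MyIOMMUState, iommu);

        /* iommu_idx selects which table to walk */
        return my_walk_table(&s->tables[iommu_idx], addr, flag);
    }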
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode      |  8 ++++++++
 2 files changed, 45 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
     return true;
 }
 
+/*
+ *** SVE Integer Wide Immediate - Unpredicated Group
+ */
+
+static bool trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
+{
+    if (a->esz == 0) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+        uint64_t imm;
+
+        /* Decode the VFP immediate.  */
+        imm = vfp_expand_imm(a->esz, a->imm);
+        imm = dup_const(a->esz, imm);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
+    }
+    return true;
+}
+
+static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
+{
+    if (a->esz == 0 && extract32(insn, 13, 1)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        int dofs = vec_full_reg_offset(s, a->rd);
+
+        tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
+    }
+    return true;
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ CTERM           00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
 # SVE integer compare scalar count and limit
 WHILE           00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
 
+### SVE Integer Wide Immediate - Unpredicated Group
+
+# SVE broadcast floating-point immediate (unpredicated)
+FDUP            00100101 esz:2 111 00 1110 imm:8 rd:5
+
+# SVE broadcast integer immediate (unpredicated)
+DUP_i           00100101 esz:2 111 00 011 . ........ rd:5       imm=%sh8_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
--
2.17.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Instead of crashing in a confusing way, give some hint to the user
about why we aborted. They might then report the issue without having
to use a debugger.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20191209134552.27733-1-philmd@redhat.com
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(rebuild_hflags_a64)(CPUARMState *env, int el)
     env->hflags = rebuild_hflags_a64(env, el, fp_el, mmu_idx);
 }
 
+static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
+{
+#ifdef CONFIG_DEBUG_TCG
+    uint32_t env_flags_current = env->hflags;
+    uint32_t env_flags_rebuilt = rebuild_hflags_internal(env);
+
+    if (unlikely(env_flags_current != env_flags_rebuilt)) {
+        fprintf(stderr, "TCG hflags mismatch (current:0x%08x rebuilt:0x%08x)\n",
+                env_flags_current, env_flags_rebuilt);
+        abort();
+    }
+#endif
+}
+
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *pflags)
 {
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
     uint32_t pstate_for_ss;
 
     *cs_base = 0;
-#ifdef CONFIG_DEBUG_TCG
-    assert(flags == rebuild_hflags_internal(env));
-#endif
+    assert_hflags_rebuild_correctly(env);
 
     if (FIELD_EX32(flags, TBFLAG_ANY, AARCH64_STATE)) {
         *pc = env->pc;
--
2.20.1
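For readers unfamiliar with dup_const() as used in trans_FDUP and trans_DUP_i above: it replicates an element of size esz across a 64-bit value. A behavioural stand-in in plain C (not TCG's actual implementation):

    #include <stdint.h>

    /* esz is log2 of the element size in bytes: 0 = 8-bit ... 3 = 64-bit. */
    static uint64_t dup_const_sketch(unsigned esz, uint64_t imm)
    {
        switch (esz) {
        case 0: return 0x0101010101010101ULL * (uint8_t)imm;
        case 1: return 0x0001000100010001ULL * (uint16_t)imm;
        case 2: return 0x0000000100000001ULL * (uint32_t)imm;
        default: return imm;   /* 64-bit element: already full width */
        }
    }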
If an IOMMU supports mappings that care about the memory
transaction attributes, then it no longer has a unique
address -> output mapping, but more than one. We can
represent these using an IOMMU index, analogous to TCG's
mmu indexes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 55 +++++++++++++++++++++++++++++++++++++++
 memory.c              | 23 ++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
  * to report whenever mappings are changed, by calling
  * memory_region_notify_iommu() (or, if necessary, by calling
  * memory_region_notify_one() for each registered notifier).
+ *
+ * Conceptually an IOMMU provides a mapping from input address
+ * to an output TLB entry. If the IOMMU is aware of memory transaction
+ * attributes and the output TLB entry depends on the transaction
+ * attributes, we represent this using IOMMU indexes. Each index
+ * selects a particular translation table that the IOMMU has:
+ *   @attrs_to_index returns the IOMMU index for a set of transaction attributes
+ *   @translate takes an input address and an IOMMU index
+ * and the mapping returned can only depend on the input address and the
+ * IOMMU index.
+ *
+ * Most IOMMUs don't care about the transaction attributes and support
+ * only a single IOMMU index. A more complex IOMMU might have one index
+ * for secure transactions and one for non-secure transactions.
  */
 typedef struct IOMMUMemoryRegionClass {
     /* private */
@@ -XXX,XX +XXX,XX @@ typedef struct IOMMUMemoryRegionClass {
      */
     int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
+
+    /* Return the IOMMU index to use for a given set of transaction attributes.
+     *
+     * Optional method: if an IOMMU only supports a single IOMMU index then
+     * the default implementation of memory_region_iommu_attrs_to_index()
+     * will return 0.
+     *
+     * The indexes supported by an IOMMU must be contiguous, starting at 0.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attrs: memory transaction attributes
+     */
+    int (*attrs_to_index)(IOMMUMemoryRegion *iommu, MemTxAttrs attrs);
+
+    /* Return the number of IOMMU indexes this IOMMU supports.
+     *
+     * Optional method: if this method is not provided, then
+     * memory_region_iommu_num_indexes() will return 1, indicating that
+     * only a single IOMMU index is supported.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     */
+    int (*num_indexes)(IOMMUMemoryRegion *iommu);
 } IOMMUMemoryRegionClass;
 
 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
                                  enum IOMMUMemoryRegionAttr attr,
                                  void *data);
 
+/**
+ * memory_region_iommu_attrs_to_index: return the IOMMU index to
+ * use for translations with the given memory transaction attributes.
+ *
+ * @iommu_mr: the memory region
+ * @attrs: the memory transaction attributes
+ */
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs);
+
+/**
+ * memory_region_iommu_num_indexes: return the total number of IOMMU
+ * indexes that this IOMMU supports.
+ *
+ * @iommu_mr: the memory region
+ */
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
+
 /**
  * memory_region_name: get a memory region's name
  *
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ int memory_region_iommu_get_attr(IOMMUMemoryRegion *iommu_mr,
     return imrc->get_attr(iommu_mr, attr, data);
 }
 
+int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
+                                       MemTxAttrs attrs)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->attrs_to_index) {
+        return 0;
+    }
+
+    return imrc->attrs_to_index(iommu_mr, attrs);
+}
+
+int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+    if (!imrc->num_indexes) {
+        return 1;
+    }
+
+    return imrc->num_indexes(iommu_mr);
+}
+
 void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
 {
     uint8_t mask = 1 << client;
--
2.17.1
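The "secure vs non-secure" example in the API comment above maps naturally onto the two new hooks; for a hypothetical two-index IOMMU the implementations could be as small as this (a sketch, not taken from the patch):

    static int my_iommu_attrs_to_index(IOMMUMemoryRegion *iommu,
                                       MemTxAttrs attrs)
    {
        return attrs.secure ? 1 : 0;   /* index 1: secure transactions */
    }

    static int my_iommu_num_indexes(IOMMUMemoryRegion *iommu)
    {
        return 2;
    }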
Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
When initializing a notifier with iommu_notifier_init(), the caller
must pass the IOMMU index that it is interested in. When a change
happens, the IOMMU implementation must pass
memory_region_notify_iommu() the IOMMU index that has changed and
that notifiers must be called for.

IOMMUs which support only a single index don't need to change.
Callers which only really support working with IOMMUs with a single
index can use the result of passing MEMTXATTRS_UNSPECIFIED to
memory_region_iommu_attrs_to_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
---
 include/exec/memory.h    | 7 ++++++-
 hw/i386/intel_iommu.c    | 6 +++---
 hw/ppc/spapr_iommu.c     | 2 +-
 hw/s390x/s390-pci-inst.c | 4 ++--
 hw/vfio/common.c         | 6 +++++-
 hw/virtio/vhost.c        | 7 ++++++-
 memory.c                 | 8 +++++++-
 7 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct IOMMUNotifier {
     /* Notify for address space range start <= addr <= end */
     hwaddr start;
     hwaddr end;
+    int iommu_idx;
     QLIST_ENTRY(IOMMUNotifier) node;
 };
 typedef struct IOMMUNotifier IOMMUNotifier;
 
 static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
                                        IOMMUNotifierFlag flags,
-                                       hwaddr start, hwaddr end)
+                                       hwaddr start, hwaddr end,
+                                       int iommu_idx)
 {
     n->notify = fn;
     n->notifier_flags = flags;
     n->start = start;
     n->end = end;
+    n->iommu_idx = iommu_idx;
 }
 
 /*
@@ -XXX,XX +XXX,XX @@ uint64_t memory_region_iommu_get_min_page_size(IOMMUMemoryRegion *iommu_mr);
  * should be notified with an UNMAP followed by a MAP.
  *
  * @iommu_mr: the memory region that was changed
+ * @iommu_idx: the IOMMU index for the translation table which has changed
  * @entry: the new entry in the IOMMU translation table.  The entry
  *         replaces all old entries for the same virtual I/O address range.
  *         Deleted entries have .@perm == 0.
  */
 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry);
 
 /**
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
 static int vtd_sync_shadow_page_hook(IOMMUTLBEntry *entry,
                                      void *private)
 {
-    memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
+    memory_region_notify_iommu((IOMMUMemoryRegion *)private, 0, *entry);
     return 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
                 .addr_mask = size - 1,
                 .perm = IOMMU_NONE,
             };
-            memory_region_notify_iommu(&vtd_as->iommu, entry);
+            memory_region_notify_iommu(&vtd_as->iommu, 0, entry);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
     entry.iova = addr;
     entry.perm = IOMMU_NONE;
     entry.translated_addr = 0;
-    memory_region_notify_iommu(&vtd_dev_as->iommu, entry);
+    memory_region_notify_iommu(&vtd_dev_as->iommu, 0, entry);
 
 done:
     return true;
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -XXX,XX +XXX,XX @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
     entry.translated_addr = tce & page_mask;
     entry.addr_mask = ~page_mask;
     entry.perm = spapr_tce_iommu_access_flags(tce);
-    memory_region_notify_iommu(&tcet->iommu, entry);
+    memory_region_notify_iommu(&tcet->iommu, 0, entry);
 
     return H_SUCCESS;
 }
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
         }
 
         notify.perm = IOMMU_NONE;
-        memory_region_notify_iommu(&iommu->iommu_mr, notify);
+        memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
         notify.perm = entry->perm;
     }
 
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
         g_hash_table_replace(iommu->iotlb, &cache->iova, cache);
     }
 
-    memory_region_notify_iommu(&iommu->iommu_mr, notify);
+    memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
 }
 
 int rpcit_service_call(S390CPU *cpu, uint8_t r1, uint8_t r2, uintptr_t ra)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
         IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
+        int iommu_idx;
 
         trace_vfio_listener_region_add_iommu(iova, end);
         /*
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
         llend = int128_add(int128_make64(section->offset_within_region),
                            section->size);
         llend = int128_sub(llend, int128_one());
+        iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                       MEMTXATTRS_UNSPECIFIED);
         iommu_notifier_init(&giommu->n, vfio_iommu_map_notify,
                             IOMMU_NOTIFIER_ALL,
                             section->offset_within_region,
-                            int128_get64(llend));
+                            int128_get64(llend),
+                            iommu_idx);
         QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
 
         memory_region_register_iommu_notifier(section->mr, &giommu->n);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
                                         iommu_listener);
     struct vhost_iommu *iommu;
     Int128 end;
+    int iommu_idx;
+    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
 
     if (!memory_region_is_iommu(section->mr)) {
         return;
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
     end = int128_add(int128_make64(section->offset_within_region),
                      section->size);
     end = int128_sub(end, int128_one());
+    iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                   MEMTXATTRS_UNSPECIFIED);
     iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                         IOMMU_NOTIFIER_UNMAP,
                         section->offset_within_region,
-                        int128_get64(end));
+                        int128_get64(end),
+                        iommu_idx);
     iommu->mr = section->mr;
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
     iommu_mr = IOMMU_MEMORY_REGION(mr);
     assert(n->notifier_flags != IOMMU_NOTIFIER_NONE);
     assert(n->start <= n->end);
+    assert(n->iommu_idx >= 0 &&
+           n->iommu_idx < memory_region_iommu_num_indexes(iommu_mr));
+
     QLIST_INSERT_HEAD(&iommu_mr->iommu_notify, n, node);
     memory_region_update_iommu_notify_flags(iommu_mr);
 }
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_one(IOMMUNotifier *notifier,
 }
 
 void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
+                                int iommu_idx,
                                 IOMMUTLBEntry entry)
 {
     IOMMUNotifier *iommu_notifier;
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
     assert(memory_region_is_iommu(MEMORY_REGION(iommu_mr)));
 
     IOMMU_NOTIFIER_FOREACH(iommu_notifier, iommu_mr) {
-        memory_region_notify_one(iommu_notifier, &entry);
+        if (iommu_notifier->iommu_idx == iommu_idx) {
+            memory_region_notify_one(iommu_notifier, &entry);
+        }
     }
 }
--
2.17.1
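Putting the two pieces together, a caller that only cares about one set of attributes follows the same pattern as the vfio and vhost hunks above; a condensed sketch (the callback is passed in as fn, and the wrapper name is made up for illustration):

    static void register_unmap_notifier(IOMMUMemoryRegion *iommu_mr,
                                        IOMMUNotifier *n, IOMMUNotify fn)
    {
        /* Pick the index matching the attributes we care about. */
        int idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                     MEMTXATTRS_UNSPECIFIED);

        iommu_notifier_init(n, fn, IOMMU_NOTIFIER_UNMAP,
                            0, HWADDR_MAX, idx);
        memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr), n);
    }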
1
Currently we don't support board configurations that put an IOMMU
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
2
in the path of the CPU's memory transactions, and instead just
3
assert() if the memory region fonud in address_space_translate_for_iotlb()
4
is an IOMMUMemoryRegion.
5
2
6
Remove this limitation by having the function handle IOMMUs.
3
After setting CP15 bits in arm_set_cpu_on() the cached hflags must
7
This is mostly straightforward, but we must make sure we have
4
be rebuild to reflect the changed processor state. Without rebuilding,
8
a notifier registered for every IOMMU that a transaction has
5
the cached hflags would be inconsistent until the next call to
9
passed through, so that we can flush the TLB appropriately
6
arm_rebuild_hflags(). When QEMU is compiled with debugging enabled
10
when any of the IOMMUs change their mappings.
7
(--enable-debug), this problem is captured shortly after the first
8
call to arm_set_cpu_on() for CPUs running in ARM 32-bit non-secure mode:
11
9
10
qemu-system-arm: target/arm/helper.c:11359: cpu_get_tb_cpu_state:
11
Assertion `flags == rebuild_hflags_internal(env)' failed.
12
Aborted (core dumped)
13
14
Fixes: 0c7f8c43daf65
15
Cc: qemu-stable@nongnu.org
16
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
15
---
19
---
16
include/exec/exec-all.h | 3 +-
20
target/arm/arm-powerctl.c | 3 +++
17
include/qom/cpu.h | 3 +
21
1 file changed, 3 insertions(+)
18
accel/tcg/cputlb.c | 3 +-
19
exec.c | 135 +++++++++++++++++++++++++++++++++++++++-
20
4 files changed, 140 insertions(+), 4 deletions(-)
21
22
22
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
23
diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c
23
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
24
--- a/include/exec/exec-all.h
25
--- a/target/arm/arm-powerctl.c
25
+++ b/include/exec/exec-all.h
26
+++ b/target/arm/arm-powerctl.c
26
@@ -XXX,XX +XXX,XX @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
27
@@ -XXX,XX +XXX,XX @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state,
27
28
target_cpu->env.regs[0] = info->context_id;
28
MemoryRegionSection *
29
}
29
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
30
30
- hwaddr *xlat, hwaddr *plen);
31
+ /* CP15 update requires rebuilding hflags */
31
+ hwaddr *xlat, hwaddr *plen,
32
+ arm_rebuild_hflags(&target_cpu->env);
32
+ MemTxAttrs attrs, int *prot);
33
hwaddr memory_region_section_get_iotlb(CPUState *cpu,
34
MemoryRegionSection *section,
35
target_ulong vaddr,
36
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/qom/cpu.h
39
+++ b/include/qom/cpu.h
40
@@ -XXX,XX +XXX,XX @@ struct CPUState {
41
uint16_t pending_tlb_flush;
42
43
int hvf_fd;
44
+
33
+
45
+ /* track IOMMUs whose translations we've cached in the TCG TLB */
34
/* Start the new CPU at the requested address */
46
+ GArray *iommu_notifiers;
35
cpu_set_pc(target_cpu_state, info->entry);
47
};
48
49
QTAILQ_HEAD(CPUTailQ, CPUState);
50
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/accel/tcg/cputlb.c
53
+++ b/accel/tcg/cputlb.c
54
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
55
}
56
57
sz = size;
58
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz);
59
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
60
+ attrs, &prot);
61
assert(sz >= TARGET_PAGE_SIZE);
62
63
tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
64
diff --git a/exec.c b/exec.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/exec.c
67
+++ b/exec.c
68
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
69
return mr;
70
}
71
72
+typedef struct TCGIOMMUNotifier {
73
+ IOMMUNotifier n;
74
+ MemoryRegion *mr;
75
+ CPUState *cpu;
76
+ int iommu_idx;
77
+ bool active;
78
+} TCGIOMMUNotifier;
79
+
80
+static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
81
+{
82
+ TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
83
+
84
+ if (!notifier->active) {
85
+ return;
86
+ }
87
+ tlb_flush(notifier->cpu);
88
+ notifier->active = false;
89
+ /* We leave the notifier struct on the list to avoid reallocating it later.
90
+ * Generally the number of IOMMUs a CPU deals with will be small.
91
+ * In any case we can't unregister the iommu notifier from a notify
92
+ * callback.
93
+ */
94
+}
95
+
96
+static void tcg_register_iommu_notifier(CPUState *cpu,
97
+ IOMMUMemoryRegion *iommu_mr,
98
+ int iommu_idx)
99
+{
100
+ /* Make sure this CPU has an IOMMU notifier registered for this
101
+ * IOMMU/IOMMU index combination, so that we can flush its TLB
102
+ * when the IOMMU tells us the mappings we've cached have changed.
103
+ */
104
+ MemoryRegion *mr = MEMORY_REGION(iommu_mr);
105
+ TCGIOMMUNotifier *notifier;
106
+ int i;
107
+
108
+ for (i = 0; i < cpu->iommu_notifiers->len; i++) {
109
+ notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
110
+ if (notifier->mr == mr && notifier->iommu_idx == iommu_idx) {
111
+ break;
112
+ }
113
+ }
114
+ if (i == cpu->iommu_notifiers->len) {
115
+ /* Not found, add a new entry at the end of the array */
116
+ cpu->iommu_notifiers = g_array_set_size(cpu->iommu_notifiers, i + 1);
117
+ notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
118
+
119
+ notifier->mr = mr;
120
+ notifier->iommu_idx = iommu_idx;
121
+ notifier->cpu = cpu;
122
+ /* Rather than trying to register interest in the specific part
123
+ * of the iommu's address space that we've accessed and then
124
+ * expand it later as subsequent accesses touch more of it, we
125
+ * just register interest in the whole thing, on the assumption
126
+ * that iommu reconfiguration will be rare.
127
+ */
128
+ iommu_notifier_init(&notifier->n,
129
+ tcg_iommu_unmap_notify,
130
+ IOMMU_NOTIFIER_UNMAP,
131
+ 0,
132
+ HWADDR_MAX,
133
+ iommu_idx);
134
+ memory_region_register_iommu_notifier(notifier->mr, &notifier->n);
135
+ }
136
+
137
+ if (!notifier->active) {
138
+ notifier->active = true;
139
+ }
140
+}
141
+
142
+static void tcg_iommu_free_notifier_list(CPUState *cpu)
143
+{
144
+ /* Destroy the CPU's notifier list */
145
+ int i;
146
+ TCGIOMMUNotifier *notifier;
147
+
148
+ for (i = 0; i < cpu->iommu_notifiers->len; i++) {
149
+ notifier = &g_array_index(cpu->iommu_notifiers, TCGIOMMUNotifier, i);
150
+ memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
151
+ }
152
+ g_array_free(cpu->iommu_notifiers, true);
153
+}
154
+
155
/* Called from RCU critical section */
156
MemoryRegionSection *
157
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
158
- hwaddr *xlat, hwaddr *plen)
159
+ hwaddr *xlat, hwaddr *plen,
160
+ MemTxAttrs attrs, int *prot)
161
{
162
MemoryRegionSection *section;
163
+ IOMMUMemoryRegion *iommu_mr;
164
+ IOMMUMemoryRegionClass *imrc;
165
+ IOMMUTLBEntry iotlb;
166
+ int iommu_idx;
167
AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
168
169
- section = address_space_translate_internal(d, addr, xlat, plen, false);
170
+ for (;;) {
171
+ section = address_space_translate_internal(d, addr, &addr, plen, false);
172
+
173
+ iommu_mr = memory_region_get_iommu(section->mr);
174
+ if (!iommu_mr) {
175
+ break;
176
+ }
177
+
178
+ imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
179
+
180
+ iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
181
+ tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
182
+ /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
183
+ * doesn't short-cut its translation table walk.
184
+ */
185
+ iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
186
+ addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
187
+ | (addr & iotlb.addr_mask));
188
+ /* Update the caller's prot bits to remove permissions the IOMMU
189
+ * is giving us a failure response for. If we get down to no
190
+ * permissions left at all we can give up now.
191
+ */
192
+ if (!(iotlb.perm & IOMMU_RO)) {
193
+ *prot &= ~(PAGE_READ | PAGE_EXEC);
194
+ }
195
+ if (!(iotlb.perm & IOMMU_WO)) {
196
+ *prot &= ~PAGE_WRITE;
197
+ }
198
+
199
+ if (!*prot) {
200
+ goto translate_fail;
201
+ }
202
+
203
+ d = flatview_to_dispatch(address_space_to_flatview(iotlb.target_as));
204
+ }
205
206
assert(!memory_region_is_iommu(section->mr));
207
+ *xlat = addr;
208
return section;
209
+
210
+translate_fail:
211
+ return &d->map.sections[PHYS_SECTION_UNASSIGNED];
212
}
213
#endif
214
215
@@ -XXX,XX +XXX,XX @@ void cpu_exec_unrealizefn(CPUState *cpu)
216
if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
217
vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
218
}
219
+#ifndef CONFIG_USER_ONLY
220
+ tcg_iommu_free_notifier_list(cpu);
221
+#endif
222
}
223
224
Property cpu_common_props[] = {
225
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
226
if (cc->vmsd != NULL) {
227
vmstate_register(NULL, cpu->cpu_index, cc->vmsd, cpu);
228
}
229
+
230
+ cpu->iommu_notifiers = g_array_new(false, true, sizeof(TCGIOMMUNotifier));
231
#endif
232
}
233
36
234
--
37
--
235
2.17.1
38
2.20.1
236
39
237
40
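As a worked example of the address composition used in the iotlb loop of the first patch above, (translated_addr & ~addr_mask) | (addr & addr_mask), with made-up numbers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = 0x1234567;                 /* input address */
        uint64_t translated_addr = 0x40001234000;  /* from the IOTLB entry */
        uint64_t addr_mask = 0xfff;                /* 4K translation page */

        /* keep the page offset from the input, page base from the IOTLB */
        uint64_t out = (translated_addr & ~addr_mask) | (addr & addr_mask);
        /* prints: 0x40001234567 */
        printf("0x%" PRIx64 "\n", out);
        return 0;
    }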