As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 784c2e4f232adf5ef47a84a262ec72a07d068d6a:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h | 227 ++++-
 target/arm/internals.h | 45 +-
 target/arm/kvm_arm.h | 24 +
 target/arm/translate.h | 21 +
 hw/arm/boot.c | 18 +
 hw/intc/armv7m_nvic.c | 12 +-
 hw/net/cadence_gem.c | 9 +-
 hw/sd/ssi-sd.c | 2 +
 linux-user/aarch64/signal.c | 4 +-
 linux-user/elfload.c | 60 +-
 linux-user/syscall.c | 10 +-
 target/arm/cpu.c | 242 ++++----
 target/arm/cpu64.c | 148 +++--
 target/arm/helper.c | 397 ++++++++----
 target/arm/kvm.c | 60 ++
 target/arm/kvm32.c | 13 +
 target/arm/kvm64.c | 15 +-
 target/arm/machine.c | 28 +-
 target/arm/op_helper.c | 2 +-
 target/arm/translate-a64.c | 715 ++++-----------------
 target/arm/translate.c | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)


Another arm pullreq; nothing particularly exciting here.

-- PMM

The following changes since commit e27d5b488ef08408691bfed61f34ee2858136287:

  Merge remote-tracking branch 'remotes/juanquintela/tags/pull-migration-pull-request' into staging (2020-02-28 14:02:31 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200228

for you to fetch changes up to 1904f9b5f1d94fe12fe021db6b504c87d684f6db:

  hw/intc/arm_gic_kvm: Don't assume kernel can provide a GICv2 (2020-02-28 16:14:57 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm: Use TYPE_PL011 to create serial port
 * target/arm: Set ID_MMFR4.HPDS for aarch64_max_initfn
 * hw/arm/integratorcp: Map the audio codec controller
 * GICv2: Correctly implement the limited number of priority bits
 * target/arm: refactoring of VFP related feature checks and decode
 * xilinx_zynq: Fix USB port instantiation
 * acceptance tests for n800, n810, integratorcp
 * Implement v8.3-RCPC, v8.4-RCPC, v8.3-CCIDX
 * arm_gic_kvm: Don't assume kernel can provide a GICv2
   (provide better error message for user error)

----------------------------------------------------------------
Gavin Shan (1):
      hw/arm: Use TYPE_PL011 to create serial port

Guenter Roeck (2):
      hw/arm/xilinx_zynq: Fix USB port instantiation
      hw/usb/hcd-ehci-sysbus: Remove obsolete xlnx, ps7-usb class

Peter Maydell (5):
      target/arm: Fix wrong use of FIELD_EX32 on ID_AA64DFR0
      target/arm: Implement v8.3-RCPC
      target/arm: Implement v8.4-RCPC
      target/arm: Implement ARMv8.3-CCIDX
      hw/intc/arm_gic_kvm: Don't assume kernel can provide a GICv2

Philippe Mathieu-Daudé (3):
      hw/arm/integratorcp: Map the audio codec controller
      tests/acceptance: Extract boot_integratorcp() from test_integratorcp()
      tests/acceptance/integratorcp: Verify Tux is displayed on framebuffer

Richard Henderson (17):
      target/arm: Set ID_MMFR4.HPDS for aarch64_max_initfn
      target/arm: Add isar_feature_aa32_vfp_simd
      target/arm: Rename isar_feature_aa32_fpdp_v2
      target/arm: Add isar_feature_aa32_{fpsp_v2, fpsp_v3, fpdp_v3}
      target/arm: Add isar_feature_aa64_fp_simd, isar_feature_aa32_vfp
      target/arm: Perform fpdp_v2 check first
      target/arm: Replace ARM_FEATURE_VFP3 checks with fp{sp, dp}_v3
      target/arm: Add missing checks for fpsp_v2
      target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac
      target/arm: Remove ARM_FEATURE_VFP check from disas_vfp_insn
      target/arm: Move VLLDM and VLSTM to vfp.decode
      target/arm: Move the vfp decodetree calls next to the base isa
      linux-user/arm: Replace ARM_FEATURE_VFP* tests for HWCAP
      target/arm: Remove ARM_FEATURE_VFP*
      target/arm: Add formats for some vfp 2 and 3-register insns
      target/arm: Split VFM decode
      target/arm: Split VMINMAXNM decode

Sai Pavan Boddu (3):
      arm_gic: Mask the un-supported priority bits
      cpu/a9mpcore: Set number of GIC priority bits to 5
      cpu/arm11mpcore: Set number of GIC priority bits to 4

Thomas Huth (2):
      tests/acceptance: Add a test for the N800 and N810 arm machines
      tests/acceptance: Add a test for the integratorcp arm machine

 include/hw/intc/arm_gic.h | 2 +
 include/hw/intc/arm_gic_common.h | 1 +
 target/arm/cpu.h | 88 +++++-
 hw/arm/integratorcp.c | 1 +
 hw/arm/sbsa-ref.c | 3 +-
 hw/arm/virt.c | 3 +-
 hw/arm/xilinx_zynq.c | 5 +-
 hw/arm/xlnx-versal.c | 3 +-
 hw/cpu/a9mpcore.c | 4 +
 hw/cpu/arm11mpcore.c | 5 +
 hw/intc/arm_gic.c | 33 +-
 hw/intc/arm_gic_common.c | 1 +
 hw/intc/arm_gic_kvm.c | 9 +
 hw/intc/armv7m_nvic.c | 20 +-
 hw/usb/hcd-ehci-sysbus.c | 17 -
 linux-user/arm/signal.c | 4 +-
 linux-user/elfload.c | 25 +-
 target/arm/arch_dump.c | 11 +-
 target/arm/cpu.c | 44 +--
 target/arm/cpu64.c | 5 +-
 target/arm/helper.c | 23 +-
 target/arm/kvm32.c | 5 -
 target/arm/kvm64.c | 1 -
 target/arm/m_helper.c | 11 +-
 target/arm/machine.c | 5 +-
 target/arm/translate-a64.c | 114 +++++++
 target/arm/translate-vfp.inc.c | 448 +++++++++++++++++----------
 target/arm/translate.c | 122 ++------
 MAINTAINERS | 2 +
 hw/arm/Kconfig | 1 +
 target/arm/vfp-uncond.decode | 12 +-
 target/arm/vfp.decode | 153 ++++-----
 tests/acceptance/machine_arm_integratorcp.py | 99 ++++++
 tests/acceptance/machine_arm_n8x0.py | 49 +++
 34 files changed, 865 insertions(+), 464 deletions(-)
 create mode 100644 tests/acceptance/machine_arm_integratorcp.py
 create mode 100644 tests/acceptance/machine_arm_n8x0.py
diff view generated by jsdifflib
Deleted patch
1
From: Markus Armbruster <armbru@redhat.com>
2
1
3
Device models aren't supposed to go on fishing expeditions for
4
backends. They should expose suitable properties for the user to set.
5
For onboard devices, board code sets them.
6
7
Device ssi-sd picks up its block backend in its init() method with
8
drive_get_next() instead. This mistake is already marked FIXME since
9
commit af9e40a.
10
11
Unset user_creatable to remove the mistake from our external
12
interface. Since the SSI bus doesn't support hotplug, only -device
13
can be affected. Only certain ARM machines have ssi-sd and provide an
14
SSI bus for it; this patch breaks -device ssi-sd for these machines.
15
No actual use of -device ssi-sd is known.
16
17
Signed-off-by: Markus Armbruster <armbru@redhat.com>
18
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
Acked-by: Thomas Huth <thuth@redhat.com>
20
Message-id: 20181009060835.4608-1-armbru@redhat.com
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
23
hw/sd/ssi-sd.c | 2 ++
24
1 file changed, 2 insertions(+)
25
26
diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/sd/ssi-sd.c
29
+++ b/hw/sd/ssi-sd.c
30
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
31
k->cs_polarity = SSI_CS_LOW;
32
dc->vmsd = &vmstate_ssi_sd;
33
dc->reset = ssi_sd_reset;
34
+ /* Reason: init() method uses drive_get_next() */
35
+ dc->user_creatable = false;
36
}
37
38
static const TypeInfo ssi_sd_info = {
39
--
40
2.19.1
41
42
diff view generated by jsdifflib
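For context on the "board code sets them" remark in the commit message above, a minimal sketch of the board-side pattern used elsewhere in the tree at the time for SD cards on an sd-bus controller; the variable names (sd_ctrl) are made up for the illustration, and ssi-sd exposed no such drive property, which is exactly why user_creatable is cleared here:

    /* Sketch only: board code looks up the -drive option and attaches it
     * via a device property, instead of the device calling
     * drive_get_next() itself.
     */
    DriveInfo *dinfo = drive_get_next(IF_SD);
    DeviceState *card = qdev_create(qdev_get_child_bus(sd_ctrl, "sd-bus"),
                                    TYPE_SD_CARD);
    if (dinfo) {
        qdev_prop_set_drive(card, "drive", blk_by_legacy_dinfo(dinfo),
                            &error_fatal);
    }
    qdev_init_nofail(card);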
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
3
This uses TYPE_PL011 when creating the serial port so that the code
4
tlb. However, if the ASID does not change there is no reason to flush.
4
looks cleaner.
5
5
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
6
Signed-off-by: Gavin Shan <gshan@redhat.com>
7
the number of flushes by 30%, or nearly 600k instances.
8
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Message-id: 20200224222223.4128-1-gshan@redhat.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
target/arm/helper.c | 8 +++-----
12
hw/arm/sbsa-ref.c | 3 ++-
17
1 file changed, 3 insertions(+), 5 deletions(-)
13
hw/arm/virt.c | 3 ++-
14
hw/arm/xlnx-versal.c | 3 ++-
15
3 files changed, 6 insertions(+), 3 deletions(-)
18
16
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
19
--- a/hw/arm/sbsa-ref.c
22
+++ b/target/arm/helper.c
20
+++ b/hw/arm/sbsa-ref.c
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
21
@@ -XXX,XX +XXX,XX @@
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
22
#include "hw/pci-host/gpex.h"
25
uint64_t value)
23
#include "hw/qdev-properties.h"
24
#include "hw/usb.h"
25
+#include "hw/char/pl011.h"
26
#include "net/net.h"
27
28
#define RAMLIMIT_GB 8192
29
@@ -XXX,XX +XXX,XX @@ static void create_uart(const SBSAMachineState *sms, int uart,
26
{
30
{
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
31
hwaddr base = sbsa_ref_memmap[uart].base;
28
- * must flush the TLB.
32
int irq = sbsa_ref_irqmap[uart];
29
- */
33
- DeviceState *dev = qdev_create(NULL, "pl011");
30
- if (cpreg_field_is_64bit(ri)) {
34
+ DeviceState *dev = qdev_create(NULL, TYPE_PL011);
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
35
SysBusDevice *s = SYS_BUS_DEVICE(dev);
32
+ if (cpreg_field_is_64bit(ri) &&
36
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
37
qdev_prop_set_chr(dev, "chardev", chr);
34
ARMCPU *cpu = arm_env_get_cpu(env);
38
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
35
-
39
index XXXXXXX..XXXXXXX 100644
36
tlb_flush(CPU(cpu));
40
--- a/hw/arm/virt.c
37
}
41
+++ b/hw/arm/virt.c
38
raw_write(env, ri, value);
42
@@ -XXX,XX +XXX,XX @@
43
#include "hw/mem/nvdimm.h"
44
#include "hw/acpi/generic_event_device.h"
45
#include "hw/virtio/virtio-iommu.h"
46
+#include "hw/char/pl011.h"
47
48
#define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
49
static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
50
@@ -XXX,XX +XXX,XX @@ static void create_uart(const VirtMachineState *vms, int uart,
51
int irq = vms->irqmap[uart];
52
const char compat[] = "arm,pl011\0arm,primecell";
53
const char clocknames[] = "uartclk\0apb_pclk";
54
- DeviceState *dev = qdev_create(NULL, "pl011");
55
+ DeviceState *dev = qdev_create(NULL, TYPE_PL011);
56
SysBusDevice *s = SYS_BUS_DEVICE(dev);
57
58
qdev_prop_set_chr(dev, "chardev", chr);
59
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/arm/xlnx-versal.c
62
+++ b/hw/arm/xlnx-versal.c
63
@@ -XXX,XX +XXX,XX @@
64
#include "hw/misc/unimp.h"
65
#include "hw/intc/arm_gicv3_common.h"
66
#include "hw/arm/xlnx-versal.h"
67
+#include "hw/char/pl011.h"
68
69
#define XLNX_VERSAL_ACPU_TYPE ARM_CPU_TYPE_NAME("cortex-a72")
70
#define GEM_REVISION 0x40070106
71
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
72
DeviceState *dev;
73
MemoryRegion *mr;
74
75
- dev = qdev_create(NULL, "pl011");
76
+ dev = qdev_create(NULL, TYPE_PL011);
77
s->lpd.iou.uart[i] = SYS_BUS_DEVICE(dev);
78
qdev_prop_set_chr(dev, "chardev", serial_hd(i));
79
object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
39
--
80
--
40
2.19.1
81
2.20.1
41
82
42
83
diff view generated by jsdifflib
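A small illustration of the check used in the TLB-flush change above (not part of either patch; the helper name is made up): the ASID field of TTBR0/TTBR1 occupies bits [63:48], so XORing the old and new register values and extracting that field is non-zero exactly when the ASID changes.

    #include "qemu/bitops.h"

    /* Sketch: does this 64-bit TTBR write change the ASID (bits [63:48])? */
    static bool ttbr_write_changes_asid(uint64_t old_ttbr, uint64_t new_ttbr)
    {
        return extract64(old_ttbr ^ new_ttbr, 48, 16) != 0;
    }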
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For a sequence of loads or stores from a single register,
3
We had set this for aarch32-only in arm_max_initfn, but
4
little-endian operations can be promoted to an 8-byte op.
4
failed to set the same bit for aarch64.
5
This can reduce the number of operations by a factor of 8.
6
5
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
7
Message-id: 20200218190958.745-2-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
target/arm/translate.c | 10 ++++++++++
11
target/arm/cpu64.c | 1 +
14
1 file changed, 10 insertions(+)
12
1 file changed, 1 insertion(+)
15
13
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.c
16
--- a/target/arm/cpu64.c
19
+++ b/target/arm/translate.c
17
+++ b/target/arm/cpu64.c
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
21
if (size == 3 && (interleave | spacing) != 1) {
19
cpu->isar.id_mmfr3 = u;
22
return 1;
20
23
}
21
u = cpu->isar.id_mmfr4;
24
+ /* For our purposes, bytes are always little-endian. */
22
+ u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
25
+ if (size == 0) {
23
u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
26
+ endian = MO_LE;
24
cpu->isar.id_mmfr4 = u;
27
+ }
25
28
+ /* Consecutive little-endian elements from a single register
29
+ * can be promoted to a larger little-endian operation.
30
+ */
31
+ if (interleave == 1 && endian == MO_LE) {
32
+ size = 3;
33
+ }
34
tmp64 = tcg_temp_new_i64();
35
addr = tcg_temp_new_i32();
36
tmp2 = tcg_const_i32(1 << size);
37
--
26
--
38
2.19.1
27
2.20.1
39
28
40
29
diff view generated by jsdifflib
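For readers unfamiliar with the FIELD_DP32() helper used in the ID_MMFR4.HPDS change above, a rough plain-C equivalent (illustration only; the helper name is made up): it deposits a value into the named bitfield, and ID_MMFR4.HPDS sits at bits [19:16].

    #include <stdint.h>

    /* Roughly what FIELD_DP32(u, ID_MMFR4, HPDS, 1) does. */
    static uint32_t id_mmfr4_set_hpds(uint32_t id_mmfr4, uint32_t val)
    {
        uint32_t mask = 0xfu << 16;                 /* HPDS is bits [19:16] */
        return (id_mmfr4 & ~mask) | ((val << 16) & mask);
    }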
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Announce 64bit addressing support.
3
The Linux kernel displays errors when trying to detect the PL041
4
audio interface:
4
5
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Linux version 4.16.0 (linus@genomnajs) (gcc version 7.2.1 20171011 (Linaro GCC 7.2-2017.11)) #142 PREEMPT Wed May 9 13:24:55 CEST 2018
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00093177
7
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
8
CPU: VIVT data cache, VIVT instruction cache
9
OF: fdt: Machine model: ARM Integrator/CP
10
...
11
OF: amba_device_add() failed (-19) for /fpga/aaci@1d000000
12
13
Since we have it already modelled, simply plug it.
14
15
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
16
Message-id: 20200223233033.15371-2-f4bug@amsat.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
19
---
11
hw/net/cadence_gem.c | 3 ++-
20
hw/arm/integratorcp.c | 1 +
12
1 file changed, 2 insertions(+), 1 deletion(-)
21
hw/arm/Kconfig | 1 +
22
2 files changed, 2 insertions(+)
13
23
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
24
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/net/cadence_gem.c
26
--- a/hw/arm/integratorcp.c
17
+++ b/hw/net/cadence_gem.c
27
+++ b/hw/arm/integratorcp.c
18
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@ static void integratorcp_init(MachineState *machine)
19
#define GEM_DESCONF4 (0x0000028C/4)
29
qdev_get_gpio_in_named(icp, ICP_GPIO_MMC_WPROT, 0));
20
#define GEM_DESCONF5 (0x00000290/4)
30
qdev_connect_gpio_out(dev, 1,
21
#define GEM_DESCONF6 (0x00000294/4)
31
qdev_get_gpio_in_named(icp, ICP_GPIO_MMC_CARDIN, 0));
22
+#define GEM_DESCONF6_64B_MASK (1U << 23)
32
+ sysbus_create_varargs("pl041", 0x1d000000, pic[25], NULL);
23
#define GEM_DESCONF7 (0x00000298/4)
33
24
34
if (nd_table[0].used)
25
#define GEM_INT_Q1_STATUS (0x00000400 / 4)
35
smc91c111_init(&nd_table[0], 0xc8000000, pic[27]);
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
36
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
27
s->regs[GEM_DESCONF] = 0x02500111;
37
index XXXXXXX..XXXXXXX 100644
28
s->regs[GEM_DESCONF2] = 0x2ab13fff;
38
--- a/hw/arm/Kconfig
29
s->regs[GEM_DESCONF5] = 0x002f2045;
39
+++ b/hw/arm/Kconfig
30
- s->regs[GEM_DESCONF6] = 0x0;
40
@@ -XXX,XX +XXX,XX @@ config INTEGRATOR
31
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
41
select INTEGRATOR_DEBUG
32
42
select PL011 # UART
33
if (s->num_priority_queues > 1) {
43
select PL031 # RTC
34
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
44
+ select PL041 # audio
45
select PL050 # keyboard/mouse
46
select PL110 # pl111 LCD controller
47
select PL181 # display
35
--
48
--
36
2.19.1
49
2.20.1
37
50
38
51
diff view generated by jsdifflib
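A sketch of what the one-line sysbus_create_varargs() call in the integratorcp change above amounts to, open-coded with the qdev/sysbus APIs of this era (illustration only; the function name is made up):

    #include "hw/sysbus.h"

    static DeviceState *map_pl041(hwaddr base, qemu_irq irq)
    {
        DeviceState *dev = qdev_create(NULL, "pl041");

        qdev_init_nofail(dev);
        sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);   /* MMIO region 0 */
        sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, irq); /* IRQ line 0 */
        return dev;
    }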
1
From: Richard Henderson <rth@twiddle.net>
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
2
2
3
This can reduce the number of opcodes required for certain
3
The GICv2 allows the implementation to implement a variable number
4
complex forms of load-multiple (e.g. ld4.16b).
4
of priority bits; unimplemented bits in the priority registers
5
are read as zeros, writes ignored. We were previously always
6
implementing a full 8 bits of priority, which is allowed but not
7
what the real hardware typically does (which is usually to have
8
4 or 5 bits of priority).
5
9
6
Signed-off-by: Richard Henderson <rth@twiddle.net>
10
Add a new device property to allow the number of implemented
7
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
11
priority bits to be specified.
12
13
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
14
Message-id: 1582537164-764-2-git-send-email-sai.pavan.boddu@xilinx.com
15
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
[PMM: improved commit message]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
19
---
11
target/arm/translate-a64.c | 12 ++++++++----
20
include/hw/intc/arm_gic.h | 2 ++
12
1 file changed, 8 insertions(+), 4 deletions(-)
21
include/hw/intc/arm_gic_common.h | 1 +
22
hw/intc/arm_gic.c | 33 ++++++++++++++++++++++++++++++--
23
hw/intc/arm_gic_common.c | 1 +
24
4 files changed, 35 insertions(+), 2 deletions(-)
13
25
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
26
diff --git a/include/hw/intc/arm_gic.h b/include/hw/intc/arm_gic.h
15
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-a64.c
28
--- a/include/hw/intc/arm_gic.h
17
+++ b/target/arm/translate-a64.c
29
+++ b/include/hw/intc/arm_gic.h
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
30
@@ -XXX,XX +XXX,XX @@
19
bool is_store = !extract32(insn, 22, 1);
31
20
bool is_postidx = extract32(insn, 23, 1);
32
/* Number of SGI target-list bits */
21
bool is_q = extract32(insn, 30, 1);
33
#define GIC_TARGETLIST_BITS 8
22
- TCGv_i64 tcg_addr, tcg_rn;
34
+#define GIC_MAX_PRIORITY_BITS 8
23
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
35
+#define GIC_MIN_PRIORITY_BITS 4
24
36
25
int ebytes = 1 << size;
37
#define TYPE_ARM_GIC "arm_gic"
26
int elements = (is_q ? 128 : 64) / (8 << size);
38
#define ARM_GIC(obj) \
27
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
39
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
28
tcg_rn = cpu_reg_sp(s, rn);
40
index XXXXXXX..XXXXXXX 100644
29
tcg_addr = tcg_temp_new_i64();
41
--- a/include/hw/intc/arm_gic_common.h
30
tcg_gen_mov_i64(tcg_addr, tcg_rn);
42
+++ b/include/hw/intc/arm_gic_common.h
31
+ tcg_ebytes = tcg_const_i64(ebytes);
43
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
32
44
uint16_t priority_mask[GIC_NCPU_VCPU];
33
for (r = 0; r < rpt; r++) {
45
uint16_t running_priority[GIC_NCPU_VCPU];
34
int e;
46
uint16_t current_pending[GIC_NCPU_VCPU];
35
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
47
+ uint32_t n_prio_bits;
36
clear_vec_high(s, is_q, tt);
48
37
}
49
/* If we present the GICv2 without security extensions to a guest,
38
}
50
* the guest can configure the GICC_CTLR to configure group 1 binary point
39
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
51
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
40
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
52
index XXXXXXX..XXXXXXX 100644
41
tt = (tt + 1) % 32;
53
--- a/hw/intc/arm_gic.c
42
}
54
+++ b/hw/intc/arm_gic.c
55
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
56
return ret;
57
}
58
59
+static uint32_t gic_fullprio_mask(GICState *s, int cpu)
60
+{
61
+ /*
62
+ * Return a mask word which clears the unimplemented priority
63
+ * bits from a priority value for an interrupt. (Not to be
64
+ * confused with the group priority, whose mask depends on BPR.)
65
+ */
66
+ int priBits;
67
+
68
+ if (gic_is_vcpu(cpu)) {
69
+ priBits = GIC_VIRT_MAX_GROUP_PRIO_BITS;
70
+ } else {
71
+ priBits = s->n_prio_bits;
72
+ }
73
+ return ~0U << (8 - priBits);
74
+}
75
+
76
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
77
MemTxAttrs attrs)
78
{
79
@@ -XXX,XX +XXX,XX @@ void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
80
val = 0x80 | (val >> 1); /* Non-secure view */
81
}
82
83
+ val &= gic_fullprio_mask(s, cpu);
84
+
85
if (irq < GIC_INTERNAL) {
86
s->priority1[irq][cpu] = val;
87
} else {
88
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
43
}
89
}
44
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
90
prio = (prio << 1) & 0xff; /* Non-secure view */
45
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
91
}
92
- return prio;
93
+ return prio & gic_fullprio_mask(s, cpu);
94
}
95
96
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
97
@@ -XXX,XX +XXX,XX @@ static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
98
return;
46
}
99
}
47
}
100
}
48
+ tcg_temp_free_i64(tcg_ebytes);
101
- s->priority_mask[cpu] = pmask;
49
tcg_temp_free_i64(tcg_addr);
102
+ s->priority_mask[cpu] = pmask & gic_fullprio_mask(s, cpu);
50
}
103
}
51
104
52
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
105
static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
53
bool replicate = false;
106
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
54
int index = is_q << 3 | S << 2 | size;
107
return;
55
int ebytes, xs;
56
- TCGv_i64 tcg_addr, tcg_rn;
57
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
58
59
switch (scale) {
60
case 3:
61
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
62
tcg_rn = cpu_reg_sp(s, rn);
63
tcg_addr = tcg_temp_new_i64();
64
tcg_gen_mov_i64(tcg_addr, tcg_rn);
65
+ tcg_ebytes = tcg_const_i64(ebytes);
66
67
for (xs = 0; xs < selem; xs++) {
68
if (replicate) {
69
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
70
do_vec_st(s, rt, index, tcg_addr, scale);
71
}
72
}
73
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
74
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
75
rt = (rt + 1) % 32;
76
}
108
}
77
109
78
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
110
+ if (s->n_prio_bits > GIC_MAX_PRIORITY_BITS ||
79
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
111
+ (s->virt_extn ? s->n_prio_bits < GIC_VIRT_MAX_GROUP_PRIO_BITS :
80
}
112
+ s->n_prio_bits < GIC_MIN_PRIORITY_BITS)) {
81
}
113
+ error_setg(errp, "num-priority-bits cannot be greater than %d"
82
+ tcg_temp_free_i64(tcg_ebytes);
114
+ " or less than %d", GIC_MAX_PRIORITY_BITS,
83
tcg_temp_free_i64(tcg_addr);
115
+ s->virt_extn ? GIC_VIRT_MAX_GROUP_PRIO_BITS :
84
}
116
+ GIC_MIN_PRIORITY_BITS);
117
+ return;
118
+ }
119
+
120
/* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
121
* enabled, virtualization extensions related interfaces (main virtual
122
* interface (s->vifaceiomem[0]) and virtual CPU interface).
123
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/hw/intc/arm_gic_common.c
126
+++ b/hw/intc/arm_gic_common.c
127
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
128
DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
129
/* True if the GIC should implement the virtualization extensions */
130
DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
131
+ DEFINE_PROP_UINT32("num-priority-bits", GICState, n_prio_bits, 8),
132
DEFINE_PROP_END_OF_LIST(),
133
};
85
134
86
--
135
--
87
2.19.1
136
2.20.1
88
137
89
138
diff view generated by jsdifflib
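A worked example of the priority masking introduced above (numbers only, not part of the patch): with n_prio_bits = 5 the mask is ~0U << 3 = 0xf8, so a guest write of 0x2b to a priority register is stored and read back as 0x28, while with the full 8 bits it is kept unchanged.

    #include <stdint.h>
    #include <stdio.h>

    /* Unimplemented low-order priority bits read as zero, so writes are
     * masked on the way in.
     */
    static uint8_t prio_mask(unsigned n_prio_bits)
    {
        return (uint8_t)(~0u << (8 - n_prio_bits));
    }

    int main(void)
    {
        printf("0x%02x\n", 0x2b & prio_mask(5)); /* 0x28: bits [2:0] dropped */
        printf("0x%02x\n", 0x2b & prio_mask(8)); /* 0x2b: full 8-bit priority */
        return 0;
    }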
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
2
2
3
Announce the availability of the various priority queues.
3
All A9 CPUs have a GIC with 5 bits of priority.
4
This fixes an issue where guest kernels would fail to
5
configure secondary queues due to improper feature bits.
6
4
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
5
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
8
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 1582537164-764-3-git-send-email-sai.pavan.boddu@xilinx.com
8
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/net/cadence_gem.c | 8 +++++++-
12
hw/cpu/a9mpcore.c | 4 ++++
13
1 file changed, 7 insertions(+), 1 deletion(-)
13
1 file changed, 4 insertions(+)
14
14
15
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
15
diff --git a/hw/cpu/a9mpcore.c b/hw/cpu/a9mpcore.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/net/cadence_gem.c
17
--- a/hw/cpu/a9mpcore.c
18
+++ b/hw/net/cadence_gem.c
18
+++ b/hw/cpu/a9mpcore.c
19
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
19
@@ -XXX,XX +XXX,XX @@
20
int i;
20
#include "hw/qdev-properties.h"
21
CadenceGEMState *s = CADENCE_GEM(d);
21
#include "hw/core/cpu.h"
22
const uint8_t *a;
22
23
+ uint32_t queues_mask = 0;
23
+#define A9_GIC_NUM_PRIORITY_BITS 5
24
25
DB_PRINT("\n");
26
27
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
28
s->regs[GEM_DESCONF] = 0x02500111;
29
s->regs[GEM_DESCONF2] = 0x2ab13fff;
30
s->regs[GEM_DESCONF5] = 0x002f2045;
31
- s->regs[GEM_DESCONF6] = 0x00000200;
32
+ s->regs[GEM_DESCONF6] = 0x0;
33
+
24
+
34
+ if (s->num_priority_queues > 1) {
25
static void a9mp_priv_set_irq(void *opaque, int irq, int level)
35
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
26
{
36
+ s->regs[GEM_DESCONF6] |= queues_mask;
27
A9MPPrivState *s = (A9MPPrivState *)opaque;
37
+ }
28
@@ -XXX,XX +XXX,XX @@ static void a9mp_priv_realize(DeviceState *dev, Error **errp)
38
29
gicdev = DEVICE(&s->gic);
39
/* Set MAC address */
30
qdev_prop_set_uint32(gicdev, "num-cpu", s->num_cpu);
40
a = &s->conf.macaddr.a[0];
31
qdev_prop_set_uint32(gicdev, "num-irq", s->num_irq);
32
+ qdev_prop_set_uint32(gicdev, "num-priority-bits",
33
+ A9_GIC_NUM_PRIORITY_BITS);
34
35
/* Make the GIC's TZ support match the CPUs. We assume that
36
* either all the CPUs have TZ, or none do.
41
--
37
--
42
2.19.1
38
2.20.1
43
39
44
40
diff view generated by jsdifflib
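A note on the DESCONF6 queue mask computed in the cadence_gem priority-queue change above (illustration, not part of either patch): MAKE_64BIT_MASK(1, n - 1) builds a mask of n - 1 set bits starting at bit 1, so with num_priority_queues = 3 the register advertises queues 1 and 2, while queue 0 is always present and gets no flag bit.

    #include <stdint.h>

    /* QEMU's MAKE_64BIT_MASK(shift, length): 'length' consecutive set bits
     * starting at bit 'shift' (definition reproduced for illustration).
     */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    /* With num_priority_queues == 3 the patch sets DESCONF6 bits 1 and 2. */
    static const uint64_t queues_mask_example = MAKE_64BIT_MASK(1, 3 - 1); /* 0x6 */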
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
2
2
3
The EL3 version of this register does not include an ASID,
3
The GIC built into the ARM11MPCore is always implemented with 4
4
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
4
priority bits; set the GIC property accordingly.
5
5
6
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
6
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
8
Message-id: 1582537164-764-4-git-send-email-sai.pavan.boddu@xilinx.com
9
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
[PMM: tweaked commit message]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
target/arm/helper.c | 2 +-
14
hw/cpu/arm11mpcore.c | 5 +++++
13
1 file changed, 1 insertion(+), 1 deletion(-)
15
1 file changed, 5 insertions(+)
14
16
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/hw/cpu/arm11mpcore.c b/hw/cpu/arm11mpcore.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
19
--- a/hw/cpu/arm11mpcore.c
18
+++ b/target/arm/helper.c
20
+++ b/hw/cpu/arm11mpcore.c
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
21
@@ -XXX,XX +XXX,XX @@
20
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
22
#include "hw/irq.h"
21
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
23
#include "hw/qdev-properties.h"
22
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
24
23
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
25
+#define ARM11MPCORE_NUM_GIC_PRIORITY_BITS 4
24
+ .access = PL3_RW, .resetvalue = 0,
26
25
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
27
static void mpcore_priv_set_irq(void *opaque, int irq, int level)
26
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
28
{
27
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
29
@@ -XXX,XX +XXX,XX @@ static void mpcore_priv_realize(DeviceState *dev, Error **errp)
30
31
qdev_prop_set_uint32(gicdev, "num-cpu", s->num_cpu);
32
qdev_prop_set_uint32(gicdev, "num-irq", s->num_irq);
33
+ qdev_prop_set_uint32(gicdev, "num-priority-bits",
34
+ ARM11MPCORE_NUM_GIC_PRIORITY_BITS);
35
+
36
+
37
object_property_set_bool(OBJECT(&s->gic), true, "realized", &err);
38
if (err != NULL) {
39
error_propagate(errp, err);
28
--
40
--
29
2.19.1
41
2.20.1
30
42
31
43
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Create struct ARMISARegisters, to be accessed during translation.
3
Use this in the places that were checking ARM_FEATURE_VFP, and
4
are obviously testing for the existence of the register set
5
as opposed to testing for some particular instruction extension.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20200224222232.13807-2-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/cpu.h | 32 ++++----
12
target/arm/cpu.h | 9 +++++++++
11
hw/intc/armv7m_nvic.c | 12 +--
13
hw/intc/armv7m_nvic.c | 20 ++++++++++----------
12
target/arm/cpu.c | 178 +++++++++++++++++++++---------------------
14
linux-user/arm/signal.c | 4 ++--
13
target/arm/cpu64.c | 70 ++++++++---------
15
target/arm/arch_dump.c | 11 ++++++-----
14
target/arm/helper.c | 28 +++----
16
target/arm/cpu.c | 4 ++--
15
5 files changed, 162 insertions(+), 158 deletions(-)
17
target/arm/helper.c | 4 ++--
18
target/arm/m_helper.c | 11 ++++++-----
19
7 files changed, 37 insertions(+), 26 deletions(-)
16
20
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
23
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
25
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
22
* ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
26
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
23
* is used for reset values of non-constant registers; no reset_
27
}
24
* prefix means a constant register.
28
25
+ * Some of these registers are split out into a substructure that
29
+static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
26
+ * is shared with the translators to control the ISA.
30
+{
27
*/
31
+ /*
28
+ struct ARMISARegisters {
32
+ * Return true if either VFP or SIMD is implemented.
29
+ uint32_t id_isar0;
33
+ * In this case, a minimum of VFP w/ D0-D15.
30
+ uint32_t id_isar1;
34
+ */
31
+ uint32_t id_isar2;
35
+ return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) > 0;
32
+ uint32_t id_isar3;
36
+}
33
+ uint32_t id_isar4;
37
+
34
+ uint32_t id_isar5;
38
static inline bool isar_feature_aa32_simd_r32(const ARMISARegisters *id)
35
+ uint32_t id_isar6;
39
{
36
+ uint32_t mvfr0;
40
/* Return true if D16-D31 are implemented */
37
+ uint32_t mvfr1;
38
+ uint32_t mvfr2;
39
+ uint64_t id_aa64isar0;
40
+ uint64_t id_aa64isar1;
41
+ uint64_t id_aa64pfr0;
42
+ uint64_t id_aa64pfr1;
43
+ } isar;
44
uint32_t midr;
45
uint32_t revidr;
46
uint32_t reset_fpsid;
47
- uint32_t mvfr0;
48
- uint32_t mvfr1;
49
- uint32_t mvfr2;
50
uint32_t ctr;
51
uint32_t reset_sctlr;
52
uint32_t id_pfr0;
53
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
54
uint32_t id_mmfr2;
55
uint32_t id_mmfr3;
56
uint32_t id_mmfr4;
57
- uint32_t id_isar0;
58
- uint32_t id_isar1;
59
- uint32_t id_isar2;
60
- uint32_t id_isar3;
61
- uint32_t id_isar4;
62
- uint32_t id_isar5;
63
- uint32_t id_isar6;
64
- uint64_t id_aa64pfr0;
65
- uint64_t id_aa64pfr1;
66
uint64_t id_aa64dfr0;
67
uint64_t id_aa64dfr1;
68
uint64_t id_aa64afr0;
69
uint64_t id_aa64afr1;
70
- uint64_t id_aa64isar0;
71
- uint64_t id_aa64isar1;
72
uint64_t id_aa64mmfr0;
73
uint64_t id_aa64mmfr1;
74
uint32_t dbgdidr;
75
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
41
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
76
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
77
--- a/hw/intc/armv7m_nvic.c
43
--- a/hw/intc/armv7m_nvic.c
78
+++ b/hw/intc/armv7m_nvic.c
44
+++ b/hw/intc/armv7m_nvic.c
79
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
45
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
80
case 0xd5c: /* MMFR3. */
46
case 0xd84: /* CSSELR */
81
return cpu->id_mmfr3;
47
return cpu->env.v7m.csselr[attrs.secure];
82
case 0xd60: /* ISAR0. */
48
case 0xd88: /* CPACR */
83
- return cpu->id_isar0;
49
- if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
84
+ return cpu->isar.id_isar0;
50
+ if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
85
case 0xd64: /* ISAR1. */
51
return 0;
86
- return cpu->id_isar1;
52
}
87
+ return cpu->isar.id_isar1;
53
return cpu->env.v7m.cpacr[attrs.secure];
88
case 0xd68: /* ISAR2. */
54
case 0xd8c: /* NSACR */
89
- return cpu->id_isar2;
55
- if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
90
+ return cpu->isar.id_isar2;
56
+ if (!attrs.secure || !cpu_isar_feature(aa32_vfp_simd, cpu)) {
91
case 0xd6c: /* ISAR3. */
57
return 0;
92
- return cpu->id_isar3;
58
}
93
+ return cpu->isar.id_isar3;
59
return cpu->env.v7m.nsacr;
94
case 0xd70: /* ISAR4. */
60
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
95
- return cpu->id_isar4;
61
}
96
+ return cpu->isar.id_isar4;
62
return cpu->env.v7m.sfar;
97
case 0xd74: /* ISAR5. */
63
case 0xf34: /* FPCCR */
98
- return cpu->id_isar5;
64
- if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
99
+ return cpu->isar.id_isar5;
65
+ if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
100
case 0xd78: /* CLIDR */
66
return 0;
101
return cpu->clidr;
67
}
102
case 0xd7c: /* CTR */
68
if (attrs.secure) {
69
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
70
return value;
71
}
72
case 0xf38: /* FPCAR */
73
- if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
74
+ if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
75
return 0;
76
}
77
return cpu->env.v7m.fpcar[attrs.secure];
78
case 0xf3c: /* FPDSCR */
79
- if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
80
+ if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
81
return 0;
82
}
83
return cpu->env.v7m.fpdscr[attrs.secure];
84
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
85
}
86
break;
87
case 0xd88: /* CPACR */
88
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
89
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
90
/* We implement only the Floating Point extension's CP10/CP11 */
91
cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
92
}
93
break;
94
case 0xd8c: /* NSACR */
95
- if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
96
+ if (attrs.secure && cpu_isar_feature(aa32_vfp_simd, cpu)) {
97
/* We implement only the Floating Point extension's CP10/CP11 */
98
cpu->env.v7m.nsacr = value & (3 << 10);
99
}
100
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
101
break;
102
}
103
case 0xf34: /* FPCCR */
104
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
105
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
106
/* Not all bits here are banked. */
107
uint32_t fpccr_s;
108
109
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
110
}
111
break;
112
case 0xf38: /* FPCAR */
113
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
114
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
115
value &= ~7;
116
cpu->env.v7m.fpcar[attrs.secure] = value;
117
}
118
break;
119
case 0xf3c: /* FPDSCR */
120
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
121
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
122
value &= 0x07c00000;
123
cpu->env.v7m.fpdscr[attrs.secure] = value;
124
}
125
diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/linux-user/arm/signal.c
128
+++ b/linux-user/arm/signal.c
129
@@ -XXX,XX +XXX,XX @@ static void setup_sigframe_v2(struct target_ucontext_v2 *uc,
130
setup_sigcontext(&uc->tuc_mcontext, env, set->sig[0]);
131
/* Save coprocessor signal frame. */
132
regspace = uc->tuc_regspace;
133
- if (arm_feature(env, ARM_FEATURE_VFP)) {
134
+ if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
135
regspace = setup_sigframe_v2_vfp(regspace, env);
136
}
137
if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
138
@@ -XXX,XX +XXX,XX @@ static int do_sigframe_return_v2(CPUARMState *env,
139
140
/* Restore coprocessor signal frame */
141
regspace = uc->tuc_regspace;
142
- if (arm_feature(env, ARM_FEATURE_VFP)) {
143
+ if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
144
regspace = restore_sigframe_v2_vfp(env, regspace);
145
if (!regspace) {
146
return 1;
147
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
148
index XXXXXXX..XXXXXXX 100644
149
--- a/target/arm/arch_dump.c
150
+++ b/target/arm/arch_dump.c
151
@@ -XXX,XX +XXX,XX @@ int arm_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
152
int cpuid, void *opaque)
153
{
154
struct arm_note note;
155
- CPUARMState *env = &ARM_CPU(cs)->env;
156
+ ARMCPU *cpu = ARM_CPU(cs);
157
+ CPUARMState *env = &cpu->env;
158
DumpState *s = opaque;
159
- int ret, i, fpvalid = !!arm_feature(env, ARM_FEATURE_VFP);
160
+ int ret, i;
161
+ bool fpvalid = cpu_isar_feature(aa32_vfp_simd, cpu);
162
163
arm_note_init(&note, s, "CORE", 5, NT_PRSTATUS, sizeof(note.prstatus));
164
165
@@ -XXX,XX +XXX,XX @@ int cpu_get_dump_info(ArchDumpInfo *info,
166
ssize_t cpu_get_note_size(int class, int machine, int nr_cpus)
167
{
168
ARMCPU *cpu = ARM_CPU(first_cpu);
169
- CPUARMState *env = &cpu->env;
170
size_t note_size;
171
172
if (class == ELFCLASS64) {
173
@@ -XXX,XX +XXX,XX @@ ssize_t cpu_get_note_size(int class, int machine, int nr_cpus)
174
note_size += AARCH64_PRFPREG_NOTE_SIZE;
175
#ifdef TARGET_AARCH64
176
if (cpu_isar_feature(aa64_sve, cpu)) {
177
- note_size += AARCH64_SVE_NOTE_SIZE(env);
178
+ note_size += AARCH64_SVE_NOTE_SIZE(&cpu->env);
179
}
180
#endif
181
} else {
182
note_size = ARM_PRSTATUS_NOTE_SIZE;
183
- if (arm_feature(env, ARM_FEATURE_VFP)) {
184
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
185
note_size += ARM_VFP_NOTE_SIZE;
186
}
187
}
103
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
188
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
104
index XXXXXXX..XXXXXXX 100644
189
index XXXXXXX..XXXXXXX 100644
105
--- a/target/arm/cpu.c
190
--- a/target/arm/cpu.c
106
+++ b/target/arm/cpu.c
191
+++ b/target/arm/cpu.c
107
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
192
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
108
g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
193
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
109
194
}
110
env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
195
111
- env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
196
- if (arm_feature(env, ARM_FEATURE_VFP)) {
112
- env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
197
+ if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
113
- env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
198
env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
114
+ env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
199
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
115
+ env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
200
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
116
+ env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
201
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
117
202
int numvfpregs = 0;
118
cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
203
if (cpu_isar_feature(aa32_simd_r32, cpu)) {
119
s->halted = cpu->start_powered_off;
204
numvfpregs = 32;
120
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
205
- } else if (arm_feature(env, ARM_FEATURE_VFP)) {
121
* registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
206
+ } else if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
122
*/
207
numvfpregs = 16;
123
cpu->id_pfr1 &= ~0xf0;
208
}
124
- cpu->id_aa64pfr0 &= ~0xf000;
209
for (i = 0; i < numvfpregs; i++) {
125
+ cpu->isar.id_aa64pfr0 &= ~0xf000;
126
}
127
128
if (!cpu->has_el2) {
129
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
130
* registers if we don't have EL2. These are id_pfr1[15:12] and
131
* id_aa64pfr0_el1[11:8].
132
*/
133
- cpu->id_aa64pfr0 &= ~0xf00;
134
+ cpu->isar.id_aa64pfr0 &= ~0xf00;
135
cpu->id_pfr1 &= ~0xf000;
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
139
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
140
cpu->midr = 0x4107b362;
141
cpu->reset_fpsid = 0x410120b4;
142
- cpu->mvfr0 = 0x11111111;
143
- cpu->mvfr1 = 0x00000000;
144
+ cpu->isar.mvfr0 = 0x11111111;
145
+ cpu->isar.mvfr1 = 0x00000000;
146
cpu->ctr = 0x1dd20d2;
147
cpu->reset_sctlr = 0x00050078;
148
cpu->id_pfr0 = 0x111;
149
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
150
cpu->id_mmfr0 = 0x01130003;
151
cpu->id_mmfr1 = 0x10030302;
152
cpu->id_mmfr2 = 0x01222110;
153
- cpu->id_isar0 = 0x00140011;
154
- cpu->id_isar1 = 0x12002111;
155
- cpu->id_isar2 = 0x11231111;
156
- cpu->id_isar3 = 0x01102131;
157
- cpu->id_isar4 = 0x141;
158
+ cpu->isar.id_isar0 = 0x00140011;
159
+ cpu->isar.id_isar1 = 0x12002111;
160
+ cpu->isar.id_isar2 = 0x11231111;
161
+ cpu->isar.id_isar3 = 0x01102131;
162
+ cpu->isar.id_isar4 = 0x141;
163
cpu->reset_auxcr = 7;
164
}
165
166
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
167
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
168
cpu->midr = 0x4117b363;
169
cpu->reset_fpsid = 0x410120b4;
170
- cpu->mvfr0 = 0x11111111;
171
- cpu->mvfr1 = 0x00000000;
172
+ cpu->isar.mvfr0 = 0x11111111;
173
+ cpu->isar.mvfr1 = 0x00000000;
174
cpu->ctr = 0x1dd20d2;
175
cpu->reset_sctlr = 0x00050078;
176
cpu->id_pfr0 = 0x111;
177
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
178
cpu->id_mmfr0 = 0x01130003;
179
cpu->id_mmfr1 = 0x10030302;
180
cpu->id_mmfr2 = 0x01222110;
181
- cpu->id_isar0 = 0x00140011;
182
- cpu->id_isar1 = 0x12002111;
183
- cpu->id_isar2 = 0x11231111;
184
- cpu->id_isar3 = 0x01102131;
185
- cpu->id_isar4 = 0x141;
186
+ cpu->isar.id_isar0 = 0x00140011;
187
+ cpu->isar.id_isar1 = 0x12002111;
188
+ cpu->isar.id_isar2 = 0x11231111;
189
+ cpu->isar.id_isar3 = 0x01102131;
190
+ cpu->isar.id_isar4 = 0x141;
191
cpu->reset_auxcr = 7;
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
195
set_feature(&cpu->env, ARM_FEATURE_EL3);
196
cpu->midr = 0x410fb767;
197
cpu->reset_fpsid = 0x410120b5;
198
- cpu->mvfr0 = 0x11111111;
199
- cpu->mvfr1 = 0x00000000;
200
+ cpu->isar.mvfr0 = 0x11111111;
201
+ cpu->isar.mvfr1 = 0x00000000;
202
cpu->ctr = 0x1dd20d2;
203
cpu->reset_sctlr = 0x00050078;
204
cpu->id_pfr0 = 0x111;
205
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
206
cpu->id_mmfr0 = 0x01130003;
207
cpu->id_mmfr1 = 0x10030302;
208
cpu->id_mmfr2 = 0x01222100;
209
- cpu->id_isar0 = 0x0140011;
210
- cpu->id_isar1 = 0x12002111;
211
- cpu->id_isar2 = 0x11231121;
212
- cpu->id_isar3 = 0x01102131;
213
- cpu->id_isar4 = 0x01141;
214
+ cpu->isar.id_isar0 = 0x0140011;
215
+ cpu->isar.id_isar1 = 0x12002111;
216
+ cpu->isar.id_isar2 = 0x11231121;
217
+ cpu->isar.id_isar3 = 0x01102131;
218
+ cpu->isar.id_isar4 = 0x01141;
219
cpu->reset_auxcr = 7;
220
}
221
222
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
223
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
224
cpu->midr = 0x410fb022;
225
cpu->reset_fpsid = 0x410120b4;
226
- cpu->mvfr0 = 0x11111111;
227
- cpu->mvfr1 = 0x00000000;
228
+ cpu->isar.mvfr0 = 0x11111111;
229
+ cpu->isar.mvfr1 = 0x00000000;
230
cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
231
cpu->id_pfr0 = 0x111;
232
cpu->id_pfr1 = 0x1;
233
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
234
cpu->id_mmfr0 = 0x01100103;
235
cpu->id_mmfr1 = 0x10020302;
236
cpu->id_mmfr2 = 0x01222000;
237
- cpu->id_isar0 = 0x00100011;
238
- cpu->id_isar1 = 0x12002111;
239
- cpu->id_isar2 = 0x11221011;
240
- cpu->id_isar3 = 0x01102131;
241
- cpu->id_isar4 = 0x141;
242
+ cpu->isar.id_isar0 = 0x00100011;
243
+ cpu->isar.id_isar1 = 0x12002111;
244
+ cpu->isar.id_isar2 = 0x11221011;
245
+ cpu->isar.id_isar3 = 0x01102131;
246
+ cpu->isar.id_isar4 = 0x141;
247
cpu->reset_auxcr = 1;
248
}
249
250
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
251
cpu->id_mmfr1 = 0x00000000;
252
cpu->id_mmfr2 = 0x00000000;
253
cpu->id_mmfr3 = 0x00000000;
254
- cpu->id_isar0 = 0x01141110;
255
- cpu->id_isar1 = 0x02111000;
256
- cpu->id_isar2 = 0x21112231;
257
- cpu->id_isar3 = 0x01111110;
258
- cpu->id_isar4 = 0x01310102;
259
- cpu->id_isar5 = 0x00000000;
260
- cpu->id_isar6 = 0x00000000;
261
+ cpu->isar.id_isar0 = 0x01141110;
262
+ cpu->isar.id_isar1 = 0x02111000;
263
+ cpu->isar.id_isar2 = 0x21112231;
264
+ cpu->isar.id_isar3 = 0x01111110;
265
+ cpu->isar.id_isar4 = 0x01310102;
266
+ cpu->isar.id_isar5 = 0x00000000;
267
+ cpu->isar.id_isar6 = 0x00000000;
268
}
269
270
static void cortex_m4_initfn(Object *obj)
271
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
272
cpu->id_mmfr1 = 0x00000000;
273
cpu->id_mmfr2 = 0x00000000;
274
cpu->id_mmfr3 = 0x00000000;
275
- cpu->id_isar0 = 0x01141110;
276
- cpu->id_isar1 = 0x02111000;
277
- cpu->id_isar2 = 0x21112231;
278
- cpu->id_isar3 = 0x01111110;
279
- cpu->id_isar4 = 0x01310102;
280
- cpu->id_isar5 = 0x00000000;
281
- cpu->id_isar6 = 0x00000000;
282
+ cpu->isar.id_isar0 = 0x01141110;
283
+ cpu->isar.id_isar1 = 0x02111000;
284
+ cpu->isar.id_isar2 = 0x21112231;
285
+ cpu->isar.id_isar3 = 0x01111110;
286
+ cpu->isar.id_isar4 = 0x01310102;
287
+ cpu->isar.id_isar5 = 0x00000000;
288
+ cpu->isar.id_isar6 = 0x00000000;
289
}
290
291
static void cortex_m33_initfn(Object *obj)
292
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
293
cpu->id_mmfr1 = 0x00000000;
294
cpu->id_mmfr2 = 0x01000000;
295
cpu->id_mmfr3 = 0x00000000;
296
- cpu->id_isar0 = 0x01101110;
297
- cpu->id_isar1 = 0x02212000;
298
- cpu->id_isar2 = 0x20232232;
299
- cpu->id_isar3 = 0x01111131;
300
- cpu->id_isar4 = 0x01310132;
301
- cpu->id_isar5 = 0x00000000;
302
- cpu->id_isar6 = 0x00000000;
303
+ cpu->isar.id_isar0 = 0x01101110;
304
+ cpu->isar.id_isar1 = 0x02212000;
305
+ cpu->isar.id_isar2 = 0x20232232;
306
+ cpu->isar.id_isar3 = 0x01111131;
307
+ cpu->isar.id_isar4 = 0x01310132;
308
+ cpu->isar.id_isar5 = 0x00000000;
309
+ cpu->isar.id_isar6 = 0x00000000;
310
cpu->clidr = 0x00000000;
311
cpu->ctr = 0x8000c000;
312
}
313
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
314
cpu->id_mmfr1 = 0x00000000;
315
cpu->id_mmfr2 = 0x01200000;
316
cpu->id_mmfr3 = 0x0211;
317
- cpu->id_isar0 = 0x02101111;
318
- cpu->id_isar1 = 0x13112111;
319
- cpu->id_isar2 = 0x21232141;
320
- cpu->id_isar3 = 0x01112131;
321
- cpu->id_isar4 = 0x0010142;
322
- cpu->id_isar5 = 0x0;
323
- cpu->id_isar6 = 0x0;
324
+ cpu->isar.id_isar0 = 0x02101111;
325
+ cpu->isar.id_isar1 = 0x13112111;
326
+ cpu->isar.id_isar2 = 0x21232141;
327
+ cpu->isar.id_isar3 = 0x01112131;
328
+ cpu->isar.id_isar4 = 0x0010142;
329
+ cpu->isar.id_isar5 = 0x0;
330
+ cpu->isar.id_isar6 = 0x0;
331
cpu->mp_is_up = true;
332
cpu->pmsav7_dregion = 16;
333
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
334
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_EL3);
336
cpu->midr = 0x410fc080;
337
cpu->reset_fpsid = 0x410330c0;
338
- cpu->mvfr0 = 0x11110222;
339
- cpu->mvfr1 = 0x00011111;
340
+ cpu->isar.mvfr0 = 0x11110222;
341
+ cpu->isar.mvfr1 = 0x00011111;
342
cpu->ctr = 0x82048004;
343
cpu->reset_sctlr = 0x00c50078;
344
cpu->id_pfr0 = 0x1031;
345
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
346
cpu->id_mmfr1 = 0x20000000;
347
cpu->id_mmfr2 = 0x01202000;
348
cpu->id_mmfr3 = 0x11;
349
- cpu->id_isar0 = 0x00101111;
350
- cpu->id_isar1 = 0x12112111;
351
- cpu->id_isar2 = 0x21232031;
352
- cpu->id_isar3 = 0x11112131;
353
- cpu->id_isar4 = 0x00111142;
354
+ cpu->isar.id_isar0 = 0x00101111;
355
+ cpu->isar.id_isar1 = 0x12112111;
356
+ cpu->isar.id_isar2 = 0x21232031;
357
+ cpu->isar.id_isar3 = 0x11112131;
358
+ cpu->isar.id_isar4 = 0x00111142;
359
cpu->dbgdidr = 0x15141000;
360
cpu->clidr = (1 << 27) | (2 << 24) | 3;
361
cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
362
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
363
set_feature(&cpu->env, ARM_FEATURE_CBAR);
364
cpu->midr = 0x410fc090;
365
cpu->reset_fpsid = 0x41033090;
366
- cpu->mvfr0 = 0x11110222;
367
- cpu->mvfr1 = 0x01111111;
368
+ cpu->isar.mvfr0 = 0x11110222;
369
+ cpu->isar.mvfr1 = 0x01111111;
370
cpu->ctr = 0x80038003;
371
cpu->reset_sctlr = 0x00c50078;
372
cpu->id_pfr0 = 0x1031;
373
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
374
cpu->id_mmfr1 = 0x20000000;
375
cpu->id_mmfr2 = 0x01230000;
376
cpu->id_mmfr3 = 0x00002111;
377
- cpu->id_isar0 = 0x00101111;
378
- cpu->id_isar1 = 0x13112111;
379
- cpu->id_isar2 = 0x21232041;
380
- cpu->id_isar3 = 0x11112131;
381
- cpu->id_isar4 = 0x00111142;
382
+ cpu->isar.id_isar0 = 0x00101111;
383
+ cpu->isar.id_isar1 = 0x13112111;
384
+ cpu->isar.id_isar2 = 0x21232041;
385
+ cpu->isar.id_isar3 = 0x11112131;
386
+ cpu->isar.id_isar4 = 0x00111142;
387
cpu->dbgdidr = 0x35141000;
388
cpu->clidr = (1 << 27) | (1 << 24) | 3;
389
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
390
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
391
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
392
cpu->midr = 0x410fc075;
393
cpu->reset_fpsid = 0x41023075;
394
- cpu->mvfr0 = 0x10110222;
395
- cpu->mvfr1 = 0x11111111;
396
+ cpu->isar.mvfr0 = 0x10110222;
397
+ cpu->isar.mvfr1 = 0x11111111;
398
cpu->ctr = 0x84448003;
399
cpu->reset_sctlr = 0x00c50078;
400
cpu->id_pfr0 = 0x00001131;
401
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
402
/* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
403
* table 4-41 gives 0x02101110, which includes the arm div insns.
404
*/
405
- cpu->id_isar0 = 0x02101110;
406
- cpu->id_isar1 = 0x13112111;
407
- cpu->id_isar2 = 0x21232041;
408
- cpu->id_isar3 = 0x11112131;
409
- cpu->id_isar4 = 0x10011142;
410
+ cpu->isar.id_isar0 = 0x02101110;
411
+ cpu->isar.id_isar1 = 0x13112111;
412
+ cpu->isar.id_isar2 = 0x21232041;
413
+ cpu->isar.id_isar3 = 0x11112131;
414
+ cpu->isar.id_isar4 = 0x10011142;
415
cpu->dbgdidr = 0x3515f005;
416
cpu->clidr = 0x0a200023;
417
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
418
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
419
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
420
cpu->midr = 0x412fc0f1;
421
cpu->reset_fpsid = 0x410430f0;
422
- cpu->mvfr0 = 0x10110222;
423
- cpu->mvfr1 = 0x11111111;
424
+ cpu->isar.mvfr0 = 0x10110222;
425
+ cpu->isar.mvfr1 = 0x11111111;
426
cpu->ctr = 0x8444c004;
427
cpu->reset_sctlr = 0x00c50078;
428
cpu->id_pfr0 = 0x00001131;
429
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
430
cpu->id_mmfr1 = 0x20000000;
431
cpu->id_mmfr2 = 0x01240000;
432
cpu->id_mmfr3 = 0x02102211;
433
- cpu->id_isar0 = 0x02101110;
434
- cpu->id_isar1 = 0x13112111;
435
- cpu->id_isar2 = 0x21232041;
436
- cpu->id_isar3 = 0x11112131;
437
- cpu->id_isar4 = 0x10011142;
438
+ cpu->isar.id_isar0 = 0x02101110;
439
+ cpu->isar.id_isar1 = 0x13112111;
440
+ cpu->isar.id_isar2 = 0x21232041;
441
+ cpu->isar.id_isar3 = 0x11112131;
442
+ cpu->isar.id_isar4 = 0x10011142;
443
cpu->dbgdidr = 0x3515f021;
444
cpu->clidr = 0x0a200023;
445
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
446
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/target/arm/cpu64.c
449
+++ b/target/arm/cpu64.c
450
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
451
cpu->midr = 0x411fd070;
452
cpu->revidr = 0x00000000;
453
cpu->reset_fpsid = 0x41034070;
454
- cpu->mvfr0 = 0x10110222;
455
- cpu->mvfr1 = 0x12111111;
456
- cpu->mvfr2 = 0x00000043;
457
+ cpu->isar.mvfr0 = 0x10110222;
458
+ cpu->isar.mvfr1 = 0x12111111;
459
+ cpu->isar.mvfr2 = 0x00000043;
460
cpu->ctr = 0x8444c004;
461
cpu->reset_sctlr = 0x00c50838;
462
cpu->id_pfr0 = 0x00000131;
463
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
464
cpu->id_mmfr1 = 0x40000000;
465
cpu->id_mmfr2 = 0x01260000;
466
cpu->id_mmfr3 = 0x02102211;
467
- cpu->id_isar0 = 0x02101110;
468
- cpu->id_isar1 = 0x13112111;
469
- cpu->id_isar2 = 0x21232042;
470
- cpu->id_isar3 = 0x01112131;
471
- cpu->id_isar4 = 0x00011142;
472
- cpu->id_isar5 = 0x00011121;
473
- cpu->id_isar6 = 0;
474
- cpu->id_aa64pfr0 = 0x00002222;
475
+ cpu->isar.id_isar0 = 0x02101110;
476
+ cpu->isar.id_isar1 = 0x13112111;
477
+ cpu->isar.id_isar2 = 0x21232042;
478
+ cpu->isar.id_isar3 = 0x01112131;
479
+ cpu->isar.id_isar4 = 0x00011142;
480
+ cpu->isar.id_isar5 = 0x00011121;
481
+ cpu->isar.id_isar6 = 0;
482
+ cpu->isar.id_aa64pfr0 = 0x00002222;
483
cpu->id_aa64dfr0 = 0x10305106;
484
cpu->pmceid0 = 0x00000000;
485
cpu->pmceid1 = 0x00000000;
486
- cpu->id_aa64isar0 = 0x00011120;
487
+ cpu->isar.id_aa64isar0 = 0x00011120;
488
cpu->id_aa64mmfr0 = 0x00001124;
489
cpu->dbgdidr = 0x3516d000;
490
cpu->clidr = 0x0a200023;
491
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
492
cpu->midr = 0x410fd034;
493
cpu->revidr = 0x00000000;
494
cpu->reset_fpsid = 0x41034070;
495
- cpu->mvfr0 = 0x10110222;
496
- cpu->mvfr1 = 0x12111111;
497
- cpu->mvfr2 = 0x00000043;
498
+ cpu->isar.mvfr0 = 0x10110222;
499
+ cpu->isar.mvfr1 = 0x12111111;
500
+ cpu->isar.mvfr2 = 0x00000043;
501
cpu->ctr = 0x84448004; /* L1Ip = VIPT */
502
cpu->reset_sctlr = 0x00c50838;
503
cpu->id_pfr0 = 0x00000131;
504
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
505
cpu->id_mmfr1 = 0x40000000;
506
cpu->id_mmfr2 = 0x01260000;
507
cpu->id_mmfr3 = 0x02102211;
508
- cpu->id_isar0 = 0x02101110;
509
- cpu->id_isar1 = 0x13112111;
510
- cpu->id_isar2 = 0x21232042;
511
- cpu->id_isar3 = 0x01112131;
512
- cpu->id_isar4 = 0x00011142;
513
- cpu->id_isar5 = 0x00011121;
514
- cpu->id_isar6 = 0;
515
- cpu->id_aa64pfr0 = 0x00002222;
516
+ cpu->isar.id_isar0 = 0x02101110;
517
+ cpu->isar.id_isar1 = 0x13112111;
518
+ cpu->isar.id_isar2 = 0x21232042;
519
+ cpu->isar.id_isar3 = 0x01112131;
520
+ cpu->isar.id_isar4 = 0x00011142;
521
+ cpu->isar.id_isar5 = 0x00011121;
522
+ cpu->isar.id_isar6 = 0;
523
+ cpu->isar.id_aa64pfr0 = 0x00002222;
524
cpu->id_aa64dfr0 = 0x10305106;
525
- cpu->id_aa64isar0 = 0x00011120;
526
+ cpu->isar.id_aa64isar0 = 0x00011120;
527
cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
528
cpu->dbgdidr = 0x3516d000;
529
cpu->clidr = 0x0a200023;
530
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
531
cpu->midr = 0x410fd083;
532
cpu->revidr = 0x00000000;
533
cpu->reset_fpsid = 0x41034080;
534
- cpu->mvfr0 = 0x10110222;
535
- cpu->mvfr1 = 0x12111111;
536
- cpu->mvfr2 = 0x00000043;
537
+ cpu->isar.mvfr0 = 0x10110222;
538
+ cpu->isar.mvfr1 = 0x12111111;
539
+ cpu->isar.mvfr2 = 0x00000043;
540
cpu->ctr = 0x8444c004;
541
cpu->reset_sctlr = 0x00c50838;
542
cpu->id_pfr0 = 0x00000131;
543
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
544
cpu->id_mmfr1 = 0x40000000;
545
cpu->id_mmfr2 = 0x01260000;
546
cpu->id_mmfr3 = 0x02102211;
547
- cpu->id_isar0 = 0x02101110;
548
- cpu->id_isar1 = 0x13112111;
549
- cpu->id_isar2 = 0x21232042;
550
- cpu->id_isar3 = 0x01112131;
551
- cpu->id_isar4 = 0x00011142;
552
- cpu->id_isar5 = 0x00011121;
553
- cpu->id_aa64pfr0 = 0x00002222;
554
+ cpu->isar.id_isar0 = 0x02101110;
555
+ cpu->isar.id_isar1 = 0x13112111;
556
+ cpu->isar.id_isar2 = 0x21232042;
557
+ cpu->isar.id_isar3 = 0x01112131;
558
+ cpu->isar.id_isar4 = 0x00011142;
559
+ cpu->isar.id_isar5 = 0x00011121;
560
+ cpu->isar.id_aa64pfr0 = 0x00002222;
561
cpu->id_aa64dfr0 = 0x10305106;
562
cpu->pmceid0 = 0x00000000;
563
cpu->pmceid1 = 0x00000000;
564
- cpu->id_aa64isar0 = 0x00011120;
565
+ cpu->isar.id_aa64isar0 = 0x00011120;
566
cpu->id_aa64mmfr0 = 0x00001124;
567
cpu->dbgdidr = 0x3516d000;
568
cpu->clidr = 0x0a200023;
569
diff --git a/target/arm/helper.c b/target/arm/helper.c
210
diff --git a/target/arm/helper.c b/target/arm/helper.c
570
index XXXXXXX..XXXXXXX 100644
211
index XXXXXXX..XXXXXXX 100644
571
--- a/target/arm/helper.c
212
--- a/target/arm/helper.c
572
+++ b/target/arm/helper.c
213
+++ b/target/arm/helper.c
573
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
214
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
574
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
215
* ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP.
575
{
216
* TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell.
576
ARMCPU *cpu = arm_env_get_cpu(env);
217
*/
577
- uint64_t pfr0 = cpu->id_aa64pfr0;
218
- if (arm_feature(env, ARM_FEATURE_VFP)) {
578
+ uint64_t pfr0 = cpu->isar.id_aa64pfr0;
219
+ if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
579
220
/* VFP coprocessor: cp10 & cp11 [23:20] */
580
if (env->gicv3state) {
221
mask |= (1 << 31) | (1 << 30) | (0xf << 20);
581
pfr0 |= 1 << 24;
222
582
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
223
@@ -XXX,XX +XXX,XX @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
583
{ .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
224
} else if (cpu_isar_feature(aa32_simd_r32, cpu)) {
584
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
225
gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
585
.access = PL1_R, .type = ARM_CP_CONST,
226
35, "arm-vfp3.xml", 0);
586
- .resetvalue = cpu->id_isar0 },
227
- } else if (arm_feature(env, ARM_FEATURE_VFP)) {
587
+ .resetvalue = cpu->isar.id_isar0 },
228
+ } else if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
588
{ .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
229
gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
589
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
230
19, "arm-vfp.xml", 0);
590
.access = PL1_R, .type = ARM_CP_CONST,
231
}
591
- .resetvalue = cpu->id_isar1 },
232
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
592
+ .resetvalue = cpu->isar.id_isar1 },
233
index XXXXXXX..XXXXXXX 100644
593
{ .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
234
--- a/target/arm/m_helper.c
594
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
235
+++ b/target/arm/m_helper.c
595
.access = PL1_R, .type = ARM_CP_CONST,
236
@@ -XXX,XX +XXX,XX @@ static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
596
- .resetvalue = cpu->id_isar2 },
237
*/
597
+ .resetvalue = cpu->isar.id_isar2 },
238
uint32_t sig = 0xfefa125a;
598
{ .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
239
599
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
240
- if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
600
.access = PL1_R, .type = ARM_CP_CONST,
241
+ if (!cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))
601
- .resetvalue = cpu->id_isar3 },
242
+ || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
602
+ .resetvalue = cpu->isar.id_isar3 },
243
sig |= 1;
603
{ .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
244
}
604
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
245
return sig;
605
.access = PL1_R, .type = ARM_CP_CONST,
246
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
606
- .resetvalue = cpu->id_isar4 },
247
607
+ .resetvalue = cpu->isar.id_isar4 },
248
if (dotailchain) {
608
{ .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
249
/* Sanitize LR FType and PREFIX bits */
609
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
250
- if (!arm_feature(env, ARM_FEATURE_VFP)) {
610
.access = PL1_R, .type = ARM_CP_CONST,
251
+ if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
611
- .resetvalue = cpu->id_isar5 },
252
lr |= R_V7M_EXCRET_FTYPE_MASK;
612
+ .resetvalue = cpu->isar.id_isar5 },
253
}
613
{ .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
254
lr = deposit32(lr, 24, 8, 0xff);
614
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
255
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
615
.access = PL1_R, .type = ARM_CP_CONST,
256
616
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
257
ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
617
{ .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
258
618
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
259
- if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
619
.access = PL1_R, .type = ARM_CP_CONST,
260
+ if (!ftype && !cpu_isar_feature(aa32_vfp_simd, cpu)) {
620
- .resetvalue = cpu->id_isar6 },
261
qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
621
+ .resetvalue = cpu->isar.id_isar6 },
262
"exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
622
REGINFO_SENTINEL
263
"if FPU not present\n",
623
};
264
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
624
define_arm_cp_regs(cpu, v6_idregs);
265
* SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
625
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
266
* RES0 if the FPU is not present, and is stored in the S bank
626
{ .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
267
*/
627
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
268
- if (arm_feature(env, ARM_FEATURE_VFP) &&
628
.access = PL1_R, .type = ARM_CP_CONST,
269
+ if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env)) &&
629
- .resetvalue = cpu->id_aa64pfr1},
270
extract32(env->v7m.nsacr, 10, 1)) {
630
+ .resetvalue = cpu->isar.id_aa64pfr1},
271
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
631
{ .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
272
env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
632
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
273
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
633
.access = PL1_R, .type = ARM_CP_CONST,
274
env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
634
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
275
env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
635
{ .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
276
}
636
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
277
- if (arm_feature(env, ARM_FEATURE_VFP)) {
637
.access = PL1_R, .type = ARM_CP_CONST,
278
+ if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
638
- .resetvalue = cpu->id_aa64isar0 },
279
/*
639
+ .resetvalue = cpu->isar.id_aa64isar0 },
280
* SFPA is RAZ/WI from NS or if no FPU.
640
{ .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
281
* FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
641
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
642
.access = PL1_R, .type = ARM_CP_CONST,
643
- .resetvalue = cpu->id_aa64isar1 },
644
+ .resetvalue = cpu->isar.id_aa64isar1 },
645
{ .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
646
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
647
.access = PL1_R, .type = ARM_CP_CONST,
648
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
649
{ .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
650
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
651
.access = PL1_R, .type = ARM_CP_CONST,
652
- .resetvalue = cpu->mvfr0 },
653
+ .resetvalue = cpu->isar.mvfr0 },
654
{ .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
655
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
656
.access = PL1_R, .type = ARM_CP_CONST,
657
- .resetvalue = cpu->mvfr1 },
658
+ .resetvalue = cpu->isar.mvfr1 },
659
{ .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
660
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
661
.access = PL1_R, .type = ARM_CP_CONST,
662
- .resetvalue = cpu->mvfr2 },
663
+ .resetvalue = cpu->isar.mvfr2 },
664
{ .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
665
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
666
.access = PL1_R, .type = ARM_CP_CONST,
667
--
282
--
668
2.19.1
283
2.20.1
669
284
670
285
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from the neon
3
The old name, isar_feature_aa32_fpdp, does not reflect
4
register file. Mirror the iteration structure of the ARM pseudocode
4
that the test includes VFPv2. We will introduce separate
5
more closely. Correct the parameters of the VLD2 A2 insn.
5
feature tests for VFPv3.
6
6
7
Note that this includes a bugfix for handling of the insn
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
"VLD2 (multiple 2-element structures)" -- we were using an
9
incorrect stride value.
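
As a side note on the rename above (an illustrative sketch, not part of the patch:
the example_* helpers and the open-coded field extraction are invented here),
MVFR0.FPDP is a level field in bits [11:8], so a VFPv2 check only needs it to be
nonzero while a VFPv3 check needs a value of 2 or more:

    #include <stdbool.h>
    #include <stdint.h>

    /* MVFR0.FPDP, bits [11:8]: 1 = VFPv2-level double precision,
     * 2 = VFPv3-level.  Names are made up for illustration. */
    static inline bool example_fpdp_v2(uint32_t mvfr0)
    {
        return ((mvfr0 >> 8) & 0xf) > 0;
    }

    static inline bool example_fpdp_v3(uint32_t mvfr0)
    {
        return ((mvfr0 >> 8) & 0xf) >= 2;
    }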
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
9
Message-id: 20200224222232.13807-3-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
12
target/arm/cpu.h | 4 ++--
17
1 file changed, 74 insertions(+), 96 deletions(-)
13
target/arm/translate-vfp.inc.c | 40 +++++++++++++++++-----------------
18
14
2 files changed, 22 insertions(+), 22 deletions(-)
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
18
--- a/target/arm/cpu.h
22
+++ b/target/arm/translate.c
19
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
20
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
24
return tmp;
21
return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
25
}
22
}
26
23
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
24
-static inline bool isar_feature_aa32_fpdp(const ARMISARegisters *id)
28
+{
25
+static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
30
+
31
+ switch (mop) {
32
+ case MO_UB:
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
34
+ break;
35
+ case MO_UW:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
37
+ break;
38
+ case MO_UL:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
40
+ break;
41
+ case MO_Q:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
43
+ break;
44
+ default:
45
+ g_assert_not_reached();
46
+ }
47
+}
48
+
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
50
{
26
{
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
27
- /* Return true if CPU supports double precision floating point */
52
tcg_temp_free_i32(var);
28
+ /* Return true if CPU supports double precision floating point, VFPv2 */
29
return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
53
}
30
}
54
31
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
32
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
56
+{
33
index XXXXXXX..XXXXXXX 100644
57
+ long offset = neon_element_offset(reg, ele, size);
34
--- a/target/arm/translate-vfp.inc.c
58
+
35
+++ b/target/arm/translate-vfp.inc.c
59
+ switch (size) {
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
60
+ case MO_8:
37
return false;
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
38
}
62
+ break;
39
63
+ case MO_16:
40
- if (dp && !dc_isar_feature(aa32_fpdp, s)) {
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
41
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
65
+ break;
42
return false;
66
+ case MO_32:
43
}
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
44
68
+ break;
45
@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
69
+ case MO_64:
46
return false;
70
+ tcg_gen_st_i64(var, cpu_env, offset);
47
}
71
+ break;
48
72
+ default:
49
- if (dp && !dc_isar_feature(aa32_fpdp, s)) {
73
+ g_assert_not_reached();
50
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
74
+ }
51
return false;
75
+}
52
}
76
+
53
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
54
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
78
{
55
return false;
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
56
}
80
@@ -XXX,XX +XXX,XX @@ static struct {
57
81
int interleave;
58
- if (dp && !dc_isar_feature(aa32_fpdp, s)) {
82
int spacing;
59
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
83
} const neon_ls_element_type[11] = {
60
return false;
84
- {4, 4, 1},
61
}
85
- {4, 4, 2},
62
86
+ {1, 4, 1},
63
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
87
+ {1, 4, 2},
64
return false;
88
{4, 1, 1},
65
}
89
- {4, 2, 1},
66
90
- {3, 3, 1},
67
- if (dp && !dc_isar_feature(aa32_fpdp, s)) {
91
- {3, 3, 2},
68
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
92
+ {2, 2, 2},
69
return false;
93
+ {1, 3, 1},
70
}
94
+ {1, 3, 2},
71
95
{3, 1, 1},
72
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
96
{1, 1, 1},
73
return false;
97
- {2, 2, 1},
74
}
98
- {2, 2, 2},
75
99
+ {1, 2, 1},
76
- if (!dc_isar_feature(aa32_fpdp, s)) {
100
+ {1, 2, 2},
77
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
101
{2, 1, 1}
78
return false;
102
};
79
}
103
80
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
81
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
105
int shift;
82
return false;
106
int n;
83
}
107
int vec_size;
84
108
+ int mmu_idx;
85
- if (!dc_isar_feature(aa32_fpdp, s)) {
109
+ TCGMemOp endian;
86
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
110
TCGv_i32 addr;
87
return false;
111
TCGv_i32 tmp;
88
}
112
TCGv_i32 tmp2;
89
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
90
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
114
rn = (insn >> 16) & 0xf;
91
return false;
115
rm = insn & 0xf;
92
}
116
load = (insn & (1 << 21)) != 0;
93
117
+ endian = s->be_data;
94
- if (!dc_isar_feature(aa32_fpdp, s)) {
118
+ mmu_idx = get_mem_index(s);
95
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
119
if ((insn & (1 << 23)) == 0) {
96
return false;
120
/* Load store all elements. */
97
}
121
op = (insn >> 8) & 0xf;
98
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
99
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
123
nregs = neon_ls_element_type[op].nregs;
100
return false;
124
interleave = neon_ls_element_type[op].interleave;
101
}
125
spacing = neon_ls_element_type[op].spacing;
102
126
- if (size == 3 && (interleave | spacing) != 1)
103
- if (!dc_isar_feature(aa32_fpdp, s)) {
127
+ if (size == 3 && (interleave | spacing) != 1) {
104
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
128
return 1;
105
return false;
129
+ }
106
}
130
+ tmp64 = tcg_temp_new_i64();
107
131
addr = tcg_temp_new_i32();
108
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
132
+ tmp2 = tcg_const_i32(1 << size);
109
return false;
133
load_reg_var(s, addr, rn);
110
}
134
- stride = (1 << size) * interleave;
111
135
for (reg = 0; reg < nregs; reg++) {
112
- if (!dc_isar_feature(aa32_fpdp, s)) {
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
113
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
137
- load_reg_var(s, addr, rn);
114
return false;
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
115
}
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
116
140
- load_reg_var(s, addr, rn);
117
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
118
return false;
142
- }
119
}
143
- if (size == 3) {
120
144
- tmp64 = tcg_temp_new_i64();
121
- if (!dc_isar_feature(aa32_fpdp, s)) {
145
- if (load) {
122
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
123
return false;
147
- neon_store_reg64(tmp64, rd);
124
}
148
- } else {
125
149
- neon_load_reg64(tmp64, rd);
126
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
127
return false;
151
- }
128
}
152
- tcg_temp_free_i64(tmp64);
129
153
- tcg_gen_addi_i32(addr, addr, stride);
130
- if (!dc_isar_feature(aa32_fpdp, s)) {
154
- } else {
131
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
155
- for (pass = 0; pass < 2; pass++) {
132
return false;
156
- if (size == 2) {
133
}
157
- if (load) {
134
158
- tmp = tcg_temp_new_i32();
135
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
136
return false;
160
- neon_store_reg(rd, pass, tmp);
137
}
161
- } else {
138
162
- tmp = neon_load_reg(rd, pass);
139
- if (!dc_isar_feature(aa32_fpdp, s)) {
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
140
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
164
- tcg_temp_free_i32(tmp);
141
return false;
165
- }
142
}
166
- tcg_gen_addi_i32(addr, addr, stride);
143
167
- } else if (size == 1) {
144
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
168
- if (load) {
145
return false;
169
- tmp = tcg_temp_new_i32();
146
}
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
147
171
- tcg_gen_addi_i32(addr, addr, stride);
148
- if (!dc_isar_feature(aa32_fpdp, s)) {
172
- tmp2 = tcg_temp_new_i32();
149
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
150
return false;
174
- tcg_gen_addi_i32(addr, addr, stride);
151
}
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
152
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
153
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
177
- tcg_temp_free_i32(tmp2);
154
return false;
178
- neon_store_reg(rd, pass, tmp);
155
}
179
- } else {
156
180
- tmp = neon_load_reg(rd, pass);
157
- if (!dc_isar_feature(aa32_fpdp, s)) {
181
- tmp2 = tcg_temp_new_i32();
158
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
159
return false;
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
160
}
184
- tcg_temp_free_i32(tmp);
161
185
- tcg_gen_addi_i32(addr, addr, stride);
162
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
163
return false;
187
- tcg_temp_free_i32(tmp2);
164
}
188
- tcg_gen_addi_i32(addr, addr, stride);
165
189
- }
166
- if (!dc_isar_feature(aa32_fpdp, s)) {
190
- } else /* size == 0 */ {
167
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
191
- if (load) {
168
return false;
192
- tmp2 = NULL;
169
}
193
- for (n = 0; n < 4; n++) {
170
194
- tmp = tcg_temp_new_i32();
171
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
172
return false;
196
- tcg_gen_addi_i32(addr, addr, stride);
173
}
197
- if (n == 0) {
174
198
- tmp2 = tmp;
175
- if (!dc_isar_feature(aa32_fpdp, s)) {
199
- } else {
176
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
177
return false;
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
178
}
202
- tcg_temp_free_i32(tmp);
179
203
- }
180
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
204
- }
181
return false;
205
- neon_store_reg(rd, pass, tmp2);
182
}
206
- } else {
183
207
- tmp2 = neon_load_reg(rd, pass);
184
- if (!dc_isar_feature(aa32_fpdp, s)) {
208
- for (n = 0; n < 4; n++) {
185
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
209
- tmp = tcg_temp_new_i32();
186
return false;
210
- if (n == 0) {
187
}
211
- tcg_gen_mov_i32(tmp, tmp2);
188
212
- } else {
189
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
190
return false;
214
- }
191
}
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
192
216
- tcg_temp_free_i32(tmp);
193
- if (!dc_isar_feature(aa32_fpdp, s)) {
217
- tcg_gen_addi_i32(addr, addr, stride);
194
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
218
- }
195
return false;
219
- tcg_temp_free_i32(tmp2);
196
}
220
- }
197
221
+ for (n = 0; n < 8 >> size; n++) {
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
222
+ int xs;
199
return false;
223
+ for (xs = 0; xs < interleave; xs++) {
200
}
224
+ int tt = rd + reg + spacing * xs;
201
225
+
202
- if (!dc_isar_feature(aa32_fpdp, s)) {
226
+ if (load) {
203
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
204
return false;
228
+ neon_store_element64(tt, n, size, tmp64);
205
}
229
+ } else {
206
230
+ neon_load_element64(tmp64, tt, n, size);
207
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
208
return false;
232
}
209
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
210
234
}
211
- if (!dc_isar_feature(aa32_fpdp, s)) {
235
}
212
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
236
- rd += spacing;
213
return false;
237
}
214
}
238
tcg_temp_free_i32(addr);
215
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
246
--
216
--
247
2.19.1
217
2.20.1
248
218
249
219
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move cmtst_op expanders from translate-a64.c.
3
We will shortly use these to test for VFPv2 and VFPv3
4
in different situations.
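
For reference, CMTST sets each element to all ones when (operand1 AND operand2)
is nonzero and to all zeros otherwise, which is what the and/setcond/neg sequence
in the expanders computes. A scalar sketch (illustrative only; the function name
is invented):

    #include <stdint.h>

    /* Per-lane CMTST reference: all ones if (x & y) != 0, else zero. */
    static uint32_t example_cmtst_u32(uint32_t x, uint32_t y)
    {
        return (x & y) ? UINT32_MAX : 0;
    }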
4
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
8
Message-id: 20200224222232.13807-4-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate.h | 2 +
11
target/arm/cpu.h | 18 ++++++++++++++++++
11
target/arm/translate-a64.c | 38 ------------------
12
1 file changed, 18 insertions(+)
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
13
3 files changed, 60 insertions(+), 61 deletions(-)
14
13
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
16
--- a/target/arm/cpu.h
18
+++ b/target/arm/translate.h
17
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
18
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
20
extern const GVecGen3 bif_op;
19
return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
21
extern const GVecGen3 mla_op[4];
22
extern const GVecGen3 mls_op[4];
23
+extern const GVecGen3 cmtst_op[4];
24
extern const GVecGen2i ssra_op[4];
25
extern const GVecGen2i usra_op[4];
26
extern const GVecGen2i sri_op[4];
27
extern const GVecGen2i sli_op[4];
28
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
29
30
/*
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-a64.c
35
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
37
}
38
}
20
}
39
21
40
-/* CMTST : test is "if (X & Y != 0)". */
22
+static inline bool isar_feature_aa32_fpsp_v2(const ARMISARegisters *id)
41
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
42
-{
43
- tcg_gen_and_i32(d, a, b);
44
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
45
- tcg_gen_neg_i32(d, d);
46
-}
47
-
48
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
49
-{
50
- tcg_gen_and_i64(d, a, b);
51
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
52
- tcg_gen_neg_i64(d, d);
53
-}
54
-
55
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
56
-{
57
- tcg_gen_and_vec(vece, d, a, b);
58
- tcg_gen_dupi_vec(vece, a, 0);
59
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
60
-}
61
-
62
static void handle_3same_64(DisasContext *s, int opcode, bool u,
63
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
64
{
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
66
/* Integer op subgroup of C3.6.16. */
67
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
68
{
69
- static const GVecGen3 cmtst_op[4] = {
70
- { .fni4 = gen_helper_neon_tst_u8,
71
- .fniv = gen_cmtst_vec,
72
- .vece = MO_8 },
73
- { .fni4 = gen_helper_neon_tst_u16,
74
- .fniv = gen_cmtst_vec,
75
- .vece = MO_16 },
76
- { .fni4 = gen_cmtst_i32,
77
- .fniv = gen_cmtst_vec,
78
- .vece = MO_32 },
79
- { .fni8 = gen_cmtst_i64,
80
- .fniv = gen_cmtst_vec,
81
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
82
- .vece = MO_64 },
83
- };
84
-
85
int is_q = extract32(insn, 30, 1);
86
int u = extract32(insn, 29, 1);
87
int size = extract32(insn, 22, 2);
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate.c
91
+++ b/target/arm/translate.c
92
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
93
.vece = MO_64 },
94
};
95
96
+/* CMTST : test is "if (X & Y != 0)". */
97
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
98
+{
23
+{
99
+ tcg_gen_and_i32(d, a, b);
24
+ /* Return true if CPU supports single precision floating point, VFPv2 */
100
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
25
+ return FIELD_EX32(id->mvfr0, MVFR0, FPSP) > 0;
101
+ tcg_gen_neg_i32(d, d);
102
+}
26
+}
103
+
27
+
104
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
28
+static inline bool isar_feature_aa32_fpsp_v3(const ARMISARegisters *id)
105
+{
29
+{
106
+ tcg_gen_and_i64(d, a, b);
30
+ /* Return true if CPU supports single precision floating point, VFPv3 */
107
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
31
+ return FIELD_EX32(id->mvfr0, MVFR0, FPSP) >= 2;
108
+ tcg_gen_neg_i64(d, d);
109
+}
32
+}
110
+
33
+
111
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
34
static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
35
{
36
/* Return true if CPU supports double precision floating point, VFPv2 */
37
return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
38
}
39
40
+static inline bool isar_feature_aa32_fpdp_v3(const ARMISARegisters *id)
112
+{
41
+{
113
+ tcg_gen_and_vec(vece, d, a, b);
42
+ /* Return true if CPU supports double precision floating point, VFPv3 */
114
+ tcg_gen_dupi_vec(vece, a, 0);
43
+ return FIELD_EX32(id->mvfr0, MVFR0, FPDP) >= 2;
115
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
116
+}
44
+}
117
+
45
+
118
+const GVecGen3 cmtst_op[4] = {
46
/*
119
+ { .fni4 = gen_helper_neon_tst_u8,
47
* We always set the FP and SIMD FP16 fields to indicate identical
120
+ .fniv = gen_cmtst_vec,
48
* levels of support (assuming SIMD is implemented at all), so
121
+ .vece = MO_8 },
122
+ { .fni4 = gen_helper_neon_tst_u16,
123
+ .fniv = gen_cmtst_vec,
124
+ .vece = MO_16 },
125
+ { .fni4 = gen_cmtst_i32,
126
+ .fniv = gen_cmtst_vec,
127
+ .vece = MO_32 },
128
+ { .fni8 = gen_cmtst_i64,
129
+ .fniv = gen_cmtst_vec,
130
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
131
+ .vece = MO_64 },
132
+};
133
+
134
/* Translate a NEON data processing instruction. Return nonzero if the
135
instruction is invalid.
136
We process data in a mixture of 32-bit and 64-bit chunks.
137
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
138
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
139
u ? &mls_op[size] : &mla_op[size]);
140
return 0;
141
+
142
+ case NEON_3R_VTST_VCEQ:
143
+ if (u) { /* VCEQ */
144
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
145
+ vec_size, vec_size);
146
+ } else { /* VTST */
147
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
148
+ vec_size, vec_size, &cmtst_op[size]);
149
+ }
150
+ return 0;
151
+
152
+ case NEON_3R_VCGT:
153
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
154
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
155
+ return 0;
156
+
157
+ case NEON_3R_VCGE:
158
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
159
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
160
+ return 0;
161
}
162
163
if (size == 3) {
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
165
case NEON_3R_VQSUB:
166
GEN_NEON_INTEGER_OP_ENV(qsub);
167
break;
168
- case NEON_3R_VCGT:
169
- GEN_NEON_INTEGER_OP(cgt);
170
- break;
171
- case NEON_3R_VCGE:
172
- GEN_NEON_INTEGER_OP(cge);
173
- break;
174
case NEON_3R_VSHL:
175
GEN_NEON_INTEGER_OP(shl);
176
break;
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
178
tmp2 = neon_load_reg(rd, pass);
179
gen_neon_add(size, tmp, tmp2);
180
break;
181
- case NEON_3R_VTST_VCEQ:
182
- if (!u) { /* VTST */
183
- switch (size) {
184
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
185
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
186
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
187
- default: abort();
188
- }
189
- } else { /* VCEQ */
190
- switch (size) {
191
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
192
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
193
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
194
- default: abort();
195
- }
196
- }
197
- break;
198
case NEON_3R_VMUL:
199
/* VMUL.P8; other cases already eliminated. */
200
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
201
--
49
--
202
2.19.1
50
2.20.1
203
51
204
52
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
3
We cannot easily create "any" functions for these, because the
4
ID_AA64PFR0 fields for FP and SIMD signal "enabled" with zero.
5
This means that an aarch32-only cpu will return incorrect results
6
when testing the aarch64 registers.
7
8
To use these, we must either have context or additionally test
9
vs ARM_FEATURE_AARCH64.
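
A minimal sketch of the resulting check (the example_* name and the open-coded
field offsets are assumptions for illustration, not the patch itself):
ID_AA64PFR0.FP, bits [19:16], reads 0xf when FP is not implemented, so it is
only meaningful when the CPU implements AArch64; otherwise fall back to the
AArch32 MVFR0 fields.

    #include <stdbool.h>
    #include <stdint.h>

    static bool example_has_fp_simd(bool is_aarch64,
                                    uint64_t id_aa64pfr0, uint32_t mvfr0)
    {
        if (is_aarch64) {
            /* ID_AA64PFR0.FP == 0xf means "not implemented" */
            return ((id_aa64pfr0 >> 16) & 0xf) != 0xf;
        }
        /* AArch32-only: any nonzero FPSP [7:4] or FPDP [11:8] level */
        return ((mvfr0 >> 4) & 0xf) != 0 || ((mvfr0 >> 8) & 0xf) != 0;
    }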
10
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20200224222232.13807-5-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
15
---
9
target/arm/cpu.h | 6 +++++-
16
target/arm/cpu.h | 11 +++++++++++
10
linux-user/elfload.c | 2 +-
17
target/arm/cpu.c | 9 ++++++---
11
target/arm/cpu.c | 4 ----
18
target/arm/machine.c | 5 +++--
12
target/arm/helper.c | 2 +-
19
3 files changed, 20 insertions(+), 5 deletions(-)
13
target/arm/machine.c | 3 +--
14
5 files changed, 8 insertions(+), 9 deletions(-)
15
20
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
23
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ enum arm_features {
25
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpdp_v3(const ARMISARegisters *id)
21
ARM_FEATURE_NEON,
26
return FIELD_EX32(id->mvfr0, MVFR0, FPDP) >= 2;
22
ARM_FEATURE_M, /* Microcontroller profile. */
23
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
24
- ARM_FEATURE_THUMB2EE,
25
ARM_FEATURE_V7MP, /* v7 Multiprocessing Extensions */
26
ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
27
ARM_FEATURE_V4T,
28
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
29
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
30
}
27
}
31
28
32
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
29
+static inline bool isar_feature_aa32_vfp(const ARMISARegisters *id)
33
+{
30
+{
34
+ return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
31
+ return isar_feature_aa32_fpsp_v2(id) || isar_feature_aa32_fpdp_v2(id);
35
+}
32
+}
36
+
33
+
37
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
34
/*
35
* We always set the FP and SIMD FP16 fields to indicate identical
36
* levels of support (assuming SIMD is implemented at all), so
37
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
38
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
39
}
40
41
+static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
42
+{
43
+ /* We always set the AdvSIMD and FP fields identically. */
44
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) != 0xf;
45
+}
46
+
47
static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
38
{
48
{
39
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
49
/* We always set the AdvSIMD and FP fields identically wrt FP16. */
40
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/linux-user/elfload.c
43
+++ b/linux-user/elfload.c
44
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
45
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
46
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
47
GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
48
- GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
49
+ GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
50
GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
51
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
52
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
53
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
50
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
54
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/cpu.c
52
--- a/target/arm/cpu.c
56
+++ b/target/arm/cpu.c
53
+++ b/target/arm/cpu.c
57
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
54
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
58
set_feature(&cpu->env, ARM_FEATURE_V7);
55
* KVM does not currently allow us to lie to the guest about its
59
set_feature(&cpu->env, ARM_FEATURE_VFP3);
56
* ID/feature registers, so the guest always sees what the host has.
60
set_feature(&cpu->env, ARM_FEATURE_NEON);
57
*/
61
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
58
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
62
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
59
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)
63
set_feature(&cpu->env, ARM_FEATURE_EL3);
60
+ ? cpu_isar_feature(aa64_fp_simd, cpu)
64
cpu->midr = 0x410fc080;
61
+ : cpu_isar_feature(aa32_vfp, cpu)) {
65
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
62
cpu->has_vfp = true;
66
set_feature(&cpu->env, ARM_FEATURE_VFP3);
63
if (!kvm_enabled()) {
67
set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
64
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_vfp_property);
68
set_feature(&cpu->env, ARM_FEATURE_NEON);
65
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
69
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
66
* We rely on no XScale CPU having VFP so we can use the same bits in the
70
set_feature(&cpu->env, ARM_FEATURE_EL3);
67
* TB flags field for VECSTRIDE and XSCALE_CPAR.
71
/* Note that A9 supports the MP extensions even for
68
*/
72
* A9UP and single-core A9MP (which are both different
69
- assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
73
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
70
- arm_feature(env, ARM_FEATURE_XSCALE)));
74
set_feature(&cpu->env, ARM_FEATURE_V7VE);
71
+ assert(arm_feature(&cpu->env, ARM_FEATURE_AARCH64) ||
75
set_feature(&cpu->env, ARM_FEATURE_VFP4);
72
+ !cpu_isar_feature(aa32_vfp_simd, cpu) ||
76
set_feature(&cpu->env, ARM_FEATURE_NEON);
73
+ !arm_feature(env, ARM_FEATURE_XSCALE));
77
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
74
78
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
75
if (arm_feature(env, ARM_FEATURE_V7) &&
79
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
76
!arm_feature(env, ARM_FEATURE_M) &&
80
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
81
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
82
set_feature(&cpu->env, ARM_FEATURE_V7VE);
83
set_feature(&cpu->env, ARM_FEATURE_VFP4);
84
set_feature(&cpu->env, ARM_FEATURE_NEON);
85
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
86
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
88
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
94
define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
95
define_arm_cp_regs(cpu, vmsa_cp_reginfo);
96
}
97
- if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
98
+ if (cpu_isar_feature(t32ee, cpu)) {
99
define_arm_cp_regs(cpu, t2ee_cp_reginfo);
100
}
101
if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
102
diff --git a/target/arm/machine.c b/target/arm/machine.c
77
diff --git a/target/arm/machine.c b/target/arm/machine.c
103
index XXXXXXX..XXXXXXX 100644
78
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/machine.c
79
--- a/target/arm/machine.c
105
+++ b/target/arm/machine.c
80
+++ b/target/arm/machine.c
106
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
81
@@ -XXX,XX +XXX,XX @@
107
static bool thumb2ee_needed(void *opaque)
82
static bool vfp_needed(void *opaque)
108
{
83
{
109
ARMCPU *cpu = opaque;
84
ARMCPU *cpu = opaque;
110
- CPUARMState *env = &cpu->env;
85
- CPUARMState *env = &cpu->env;
111
86
112
- return arm_feature(env, ARM_FEATURE_THUMB2EE);
87
- return arm_feature(env, ARM_FEATURE_VFP);
113
+ return cpu_isar_feature(t32ee, cpu);
88
+ return (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)
89
+ ? cpu_isar_feature(aa64_fp_simd, cpu)
90
+ : cpu_isar_feature(aa32_vfp_simd, cpu));
114
}
91
}
115
92
116
static const VMStateDescription vmstate_thumb2ee = {
93
static int get_fpscr(QEMUFile *f, void *opaque, size_t size,
117
--
94
--
118
2.19.1
95
2.20.1
119
96
120
97
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Shuffle the order of the checks so that we test the ISA
4
before we test anything else, such as the register arguments.
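
In other words, each trans_* function ends up with a consistent ordering: ISA
feature check, then D16-D31 register check, then the FP access check. A
shortened sketch of that shape (trans_EXAMPLE_dp and arg_EXAMPLE_dp are
invented names; this paraphrases the pattern in the diff below rather than
compiling on its own):

    static bool trans_EXAMPLE_dp(DisasContext *s, arg_EXAMPLE_dp *a)
    {
        if (!dc_isar_feature(aa32_fpdp_v2, s)) {
            return false;                   /* 1. ISA support first */
        }
        /* 2. UNDEF accesses to D16-D31 if they don't exist */
        if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
            return false;
        }
        if (!vfp_access_check(s)) {         /* 3. FP enabled / trap */
            return true;
        }
        /* ... emit the operation ... */
        return true;
    }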
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
8
Message-id: 20200224222232.13807-7-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
11
target/arm/translate-vfp.inc.c | 140 +++++++++++++++++----------------
9
1 file changed, 48 insertions(+), 22 deletions(-)
12
1 file changed, 71 insertions(+), 69 deletions(-)
10
13
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
16
--- a/target/arm/translate-vfp.inc.c
14
+++ b/target/arm/translate.c
17
+++ b/target/arm/translate-vfp.inc.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
16
size--;
19
return false;
17
}
20
}
18
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
21
19
- /* To avoid excessive duplication of ops we implement shift
22
- /* UNDEF accesses to D16-D31 if they don't exist */
20
- by immediate using the variable shift operations. */
23
- if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
21
if (op < 8) {
24
- ((a->vm | a->vn | a->vd) & 0x10)) {
22
/* Shift by immediate:
25
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
23
VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
26
return false;
24
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
27
}
25
}
28
26
/* Right shifts are encoded as N - shift, where N is the
29
- if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
27
element size in bits. */
30
+ /* UNDEF accesses to D16-D31 if they don't exist */
28
- if (op <= 4)
31
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
29
+ if (op <= 4) {
32
+ ((a->vm | a->vn | a->vd) & 0x10)) {
30
shift = shift - (1 << (size + 3));
33
return false;
31
+ }
34
}
32
+
35
33
+ switch (op) {
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
34
+ case 0: /* VSHR */
37
return false;
35
+ /* Right shift comes here negative. */
38
}
36
+ shift = -shift;
39
37
+ /* Shifts larger than the element size are architecturally
40
- /* UNDEF accesses to D16-D31 if they don't exist */
38
+ * valid. Unsigned results in all zeros; signed results
41
- if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
39
+ * in all sign bits.
42
- ((a->vm | a->vn | a->vd) & 0x10)) {
40
+ */
43
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
41
+ if (!u) {
44
return false;
42
+ tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
45
}
43
+ MIN(shift, (8 << size) - 1),
46
44
+ vec_size, vec_size);
47
- if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
45
+ } else if (shift >= 8 << size) {
48
+ /* UNDEF accesses to D16-D31 if they don't exist */
46
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
49
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
47
+ } else {
50
+ ((a->vm | a->vn | a->vd) & 0x10)) {
48
+ tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
51
return false;
49
+ vec_size, vec_size);
52
}
50
+ }
53
51
+ return 0;
54
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
52
+
55
return false;
53
+ case 5: /* VSHL, VSLI */
56
}
54
+ if (!u) { /* VSHL */
57
55
+ /* Shifts larger than the element size are
58
- /* UNDEF accesses to D16-D31 if they don't exist */
56
+ * architecturally valid and results in zero.
59
- if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
57
+ */
60
- ((a->vm | a->vd) & 0x10)) {
58
+ if (shift >= 8 << size) {
61
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
59
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
62
return false;
60
+ } else {
63
}
61
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
64
62
+ vec_size, vec_size);
65
- if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
63
+ }
66
+ /* UNDEF accesses to D16-D31 if they don't exist */
64
+ return 0;
67
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
65
+ }
68
+ ((a->vm | a->vd) & 0x10)) {
66
+ break;
69
return false;
67
+ }
70
}
68
+
71
69
if (size == 3) {
72
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
70
count = q + 1;
73
return false;
71
} else {
74
}
72
count = q ? 4: 2;
75
73
}
76
- /* UNDEF accesses to D16-D31 if they don't exist */
74
- switch (size) {
77
- if (dp && !dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
75
- case 0:
78
+ if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
76
- imm = (uint8_t) shift;
79
return false;
77
- imm |= imm << 8;
80
}
78
- imm |= imm << 16;
81
79
- break;
82
- if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
80
- case 1:
83
+ /* UNDEF accesses to D16-D31 if they don't exist */
81
- imm = (uint16_t) shift;
84
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
82
- imm |= imm << 16;
85
return false;
83
- break;
86
}
84
- case 2:
87
85
- case 3:
88
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
86
- imm = shift;
89
TCGv_i64 f0, f1, fd;
87
- break;
90
TCGv_ptr fpst;
88
- default:
91
89
- abort();
92
- /* UNDEF accesses to D16-D31 if they don't exist */
90
- }
93
- if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vn | vm) & 0x10)) {
91
+
94
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
92
+ /* To avoid excessive duplication of ops we implement shift
95
return false;
93
+ * by immediate using the variable shift operations.
96
}
94
+ */
97
95
+ imm = dup_const(size, shift);
98
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
96
99
+ /* UNDEF accesses to D16-D31 if they don't exist */
97
for (pass = 0; pass < count; pass++) {
100
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vn | vm) & 0x10)) {
98
if (size == 3) {
101
return false;
99
neon_load_reg64(cpu_V0, rm + pass);
102
}
100
tcg_gen_movi_i64(cpu_V1, imm);
103
101
switch (op) {
104
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
102
- case 0: /* VSHR */
105
int veclen = s->vec_len;
103
case 1: /* VSRA */
106
TCGv_i64 f0, fd;
104
if (u)
107
105
gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
108
- /* UNDEF accesses to D16-D31 if they don't exist */
106
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
109
- if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
107
cpu_V0, cpu_V1);
110
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
108
}
111
return false;
109
break;
112
}
110
+ default:
113
111
+ g_assert_not_reached();
114
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
112
}
115
+ /* UNDEF accesses to D16-D31 if they don't exist */
113
if (op == 1 || op == 3) {
116
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
114
/* Accumulate. */
117
return false;
115
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
118
}
116
tmp2 = tcg_temp_new_i32();
119
117
tcg_gen_movi_i32(tmp2, imm);
120
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
118
switch (op) {
121
return false;
119
- case 0: /* VSHR */
122
}
120
case 1: /* VSRA */
123
121
GEN_NEON_INTEGER_OP(shl);
124
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
122
break;
125
+ /* UNDEF accesses to D16-D31 if they don't exist. */
123
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
126
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
124
case 7: /* VQSHL */
127
+ ((a->vd | a->vn | a->vm) & 0x10)) {
125
GEN_NEON_INTEGER_OP_ENV(qshl);
128
return false;
126
break;
129
}
127
+ default:
130
128
+ g_assert_not_reached();
131
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
129
}
132
130
tcg_temp_free_i32(tmp2);
133
vd = a->vd;
134
135
- /* UNDEF accesses to D16-D31 if they don't exist. */
136
- if (!dc_isar_feature(aa32_simd_r32, s) && (vd & 0x10)) {
137
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
138
return false;
139
}
140
141
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
142
+ /* UNDEF accesses to D16-D31 if they don't exist. */
143
+ if (!dc_isar_feature(aa32_simd_r32, s) && (vd & 0x10)) {
144
return false;
145
}
146
147
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
148
{
149
TCGv_i64 vd, vm;
150
151
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
152
+ return false;
153
+ }
154
+
155
/* Vm/M bits must be zero for the Z variant */
156
if (a->z && a->vm != 0) {
157
return false;
158
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
159
return false;
160
}
161
162
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
163
- return false;
164
- }
165
-
166
if (!vfp_access_check(s)) {
167
return true;
168
}
169
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
170
TCGv_i32 tmp;
171
TCGv_i64 vd;
172
173
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
174
+ return false;
175
+ }
176
+
177
if (!dc_isar_feature(aa32_fp16_dpconv, s)) {
178
return false;
179
}
180
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
181
return false;
182
}
183
184
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
185
- return false;
186
- }
187
-
188
if (!vfp_access_check(s)) {
189
return true;
190
}
191
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
192
TCGv_i32 tmp;
193
TCGv_i64 vm;
194
195
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
196
+ return false;
197
+ }
198
+
199
if (!dc_isar_feature(aa32_fp16_dpconv, s)) {
200
return false;
201
}
202
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
203
return false;
204
}
205
206
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
207
- return false;
208
- }
209
-
210
if (!vfp_access_check(s)) {
211
return true;
212
}
213
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
214
TCGv_ptr fpst;
215
TCGv_i64 tmp;
216
217
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
218
+ return false;
219
+ }
220
+
221
if (!dc_isar_feature(aa32_vrint, s)) {
222
return false;
223
}
224
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
225
return false;
226
}
227
228
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
229
- return false;
230
- }
231
-
232
if (!vfp_access_check(s)) {
233
return true;
234
}
235
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
236
TCGv_i64 tmp;
237
TCGv_i32 tcg_rmode;
238
239
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
240
+ return false;
241
+ }
242
+
243
if (!dc_isar_feature(aa32_vrint, s)) {
244
return false;
245
}
246
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
247
return false;
248
}
249
250
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
251
- return false;
252
- }
253
-
254
if (!vfp_access_check(s)) {
255
return true;
256
}
257
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
258
TCGv_ptr fpst;
259
TCGv_i64 tmp;
260
261
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
262
+ return false;
263
+ }
264
+
265
if (!dc_isar_feature(aa32_vrint, s)) {
266
return false;
267
}
268
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
269
return false;
270
}
271
272
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
273
- return false;
274
- }
275
-
276
if (!vfp_access_check(s)) {
277
return true;
278
}
279
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
280
TCGv_i64 vd;
281
TCGv_i32 vm;
282
283
- /* UNDEF accesses to D16-D31 if they don't exist. */
284
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
285
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
286
return false;
287
}
288
289
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
290
+ /* UNDEF accesses to D16-D31 if they don't exist. */
291
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
292
return false;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
296
TCGv_i64 vm;
297
TCGv_i32 vd;
298
299
- /* UNDEF accesses to D16-D31 if they don't exist. */
300
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
301
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
302
return false;
303
}
304
305
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
306
+ /* UNDEF accesses to D16-D31 if they don't exist. */
307
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
308
return false;
309
}
310
311
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
312
TCGv_i64 vd;
313
TCGv_ptr fpst;
314
315
- /* UNDEF accesses to D16-D31 if they don't exist. */
316
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
317
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
318
return false;
319
}
320
321
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
322
+ /* UNDEF accesses to D16-D31 if they don't exist. */
323
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
324
return false;
325
}
326
327
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
328
TCGv_i32 vd;
329
TCGv_i64 vm;
330
331
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
332
+ return false;
333
+ }
334
+
335
if (!dc_isar_feature(aa32_jscvt, s)) {
336
return false;
337
}
338
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
339
return false;
340
}
341
342
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
343
- return false;
344
- }
345
-
346
if (!vfp_access_check(s)) {
347
return true;
348
}
349
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
350
TCGv_ptr fpst;
351
int frac_bits;
352
353
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
354
+ return false;
355
+ }
356
+
357
if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
358
return false;
359
}
360
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
361
return false;
362
}
363
364
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
365
- return false;
366
- }
367
-
368
if (!vfp_access_check(s)) {
369
return true;
370
}
371
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
372
TCGv_i64 vm;
373
TCGv_ptr fpst;
374
375
- /* UNDEF accesses to D16-D31 if they don't exist. */
376
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
377
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
378
return false;
379
}
380
381
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
382
+ /* UNDEF accesses to D16-D31 if they don't exist. */
383
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
384
return false;
385
}
1
The HCR.DC virtualization configuration register bit has the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
following effects:
3
* SCTLR.M behaves as if it is 0 for all purposes except
4
direct reads of the bit
5
* HCR.VM behaves as if it is 1 for all purposes except
6
direct reads of the bit
7
* the memory type produced by the first stage of the EL1&EL0
8
translation regime is Normal Non-Shareable,
9
Inner Write-Back Read-Allocate Write-Allocate,
10
Outer Write-Back Read-Allocate Write-Allocate.
11
2
12
Implement this behaviour.
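As a rough standalone illustration of the first two effects (ordinary C with made-up bit positions, not the QEMU implementation):

    /* Sketch only: HCR_VM, HCR_DC and SCTLR_M are placeholder masks. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HCR_VM  (1u << 0)
    #define HCR_DC  (1u << 12)
    #define SCTLR_M (1u << 0)

    /* Stage-1 MMU enable: SCTLR.M still reads back unchanged, but
     * behaves as 0 whenever HCR.DC is set. */
    static bool effective_sctlr_m(uint32_t sctlr, uint32_t hcr)
    {
        return (hcr & HCR_DC) ? false : (sctlr & SCTLR_M) != 0;
    }

    /* Stage-2 enable: HCR.VM behaves as 1 whenever HCR.DC is set. */
    static bool effective_hcr_vm(uint32_t hcr)
    {
        return (hcr & (HCR_VM | HCR_DC)) != 0;
    }

    int main(void)
    {
        uint32_t sctlr = SCTLR_M, hcr = HCR_DC;
        printf("M=%d VM=%d\n", effective_sctlr_m(sctlr, hcr),
               effective_hcr_vm(hcr));
        return 0;
    }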
3
Sort this check to the start of a trans_* function.
4
Merge this with any existing test for fpdp_v2.
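Schematically, the resulting shape of each trans_* function is like the sketch below (stub context type standing in for DisasContext and the dc_isar_feature() helpers, not the real decoder):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool fpdp_v2; bool simd_r32; } Ctx;

    static bool trans_example(Ctx *s, int vd)
    {
        if (!s->fpdp_v2) {                   /* ISA check first: UNDEF */
            return false;
        }
        if (!s->simd_r32 && (vd & 0x10)) {   /* then per-operand checks */
            return false;
        }
        return true;                         /* then the actual decode */
    }

    int main(void)
    {
        Ctx s = { .fpdp_v2 = true, .simd_r32 = false };
        printf("%d\n", trans_example(&s, 0x12));
        return 0;
    }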
13
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200224222232.13807-8-richard.henderson@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
17
---
10
---
18
target/arm/helper.c | 23 +++++++++++++++++++++--
11
target/arm/translate-vfp.inc.c | 24 ++++++++----------------
19
1 file changed, 21 insertions(+), 2 deletions(-)
12
1 file changed, 8 insertions(+), 16 deletions(-)
20
13
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
22
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
16
--- a/target/arm/translate-vfp.inc.c
24
+++ b/target/arm/helper.c
17
+++ b/target/arm/translate-vfp.inc.c
25
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
18
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
26
* * The Non-secure TTBCR.EAE bit is set to 1
19
* VFPv2 allows access to FPSID from userspace; VFPv3 restricts
27
* * The implementation includes EL2, and the value of HCR.VM is 1
20
* all ID registers to privileged access only.
28
*
29
+ * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
30
+ *
31
* ATS1Hx always uses the 64bit format (not supported yet).
32
*/
21
*/
33
format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
22
- if (IS_USER(s) && arm_dc_feature(s, ARM_FEATURE_VFP3)) {
34
23
+ if (IS_USER(s) && dc_isar_feature(aa32_fpsp_v3, s)) {
35
if (arm_feature(env, ARM_FEATURE_EL2)) {
24
return false;
36
if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
25
}
37
- format64 |= env->cp15.hcr_el2 & HCR_VM;
26
ignore_vfp_enabled = true;
38
+ format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
27
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
39
} else {
28
case ARM_VFP_FPINST:
40
format64 |= arm_current_el(env) == 2;
29
case ARM_VFP_FPINST2:
41
}
30
/* Not present in VFPv3 */
42
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
31
- if (IS_USER(s) || arm_dc_feature(s, ARM_FEATURE_VFP3)) {
32
+ if (IS_USER(s) || dc_isar_feature(aa32_fpsp_v3, s)) {
33
return false;
34
}
35
break;
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
37
38
vd = a->vd;
39
40
- if (!dc_isar_feature(aa32_fpshvec, s) &&
41
- (veclen != 0 || s->vec_stride != 0)) {
42
+ if (!dc_isar_feature(aa32_fpsp_v3, s)) {
43
return false;
43
}
44
}
44
45
45
if (mmu_idx == ARMMMUIdx_S2NS) {
46
- if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
46
- return (env->cp15.hcr_el2 & HCR_VM) == 0;
47
+ if (!dc_isar_feature(aa32_fpshvec, s) &&
47
+ /* HCR.DC means HCR.VM behaves as 1 */
48
+ (veclen != 0 || s->vec_stride != 0)) {
48
+ return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
49
return false;
49
}
50
}
50
51
51
if (env->cp15.hcr_el2 & HCR_TGE) {
52
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
52
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
53
53
}
54
vd = a->vd;
55
56
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
57
+ if (!dc_isar_feature(aa32_fpdp_v3, s)) {
58
return false;
54
}
59
}
55
60
56
+ if ((env->cp15.hcr_el2 & HCR_DC) &&
61
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
57
+ (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
62
return false;
58
+ /* HCR.DC means SCTLR_EL1.M behaves as 0 */
63
}
59
+ return true;
64
60
+ }
65
- if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
61
+
66
- return false;
62
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
67
- }
63
}
68
-
64
69
if (!vfp_access_check(s)) {
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
70
return true;
66
71
}
67
/* Combine the S1 and S2 cache attributes, if needed */
72
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
68
if (!ret && cacheattrs != NULL) {
73
TCGv_ptr fpst;
69
+ if (env->cp15.hcr_el2 & HCR_DC) {
74
int frac_bits;
70
+ /*
75
71
+ * HCR.DC forces the first stage attributes to
76
- if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
72
+ * Normal Non-Shareable,
77
+ if (!dc_isar_feature(aa32_fpsp_v3, s)) {
73
+ * Inner Write-Back Read-Allocate Write-Allocate,
78
return false;
74
+ * Outer Write-Back Read-Allocate Write-Allocate.
79
}
75
+ */
80
76
+ cacheattrs->attrs = 0xff;
81
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
77
+ cacheattrs->shareability = 0;
82
TCGv_ptr fpst;
78
+ }
83
int frac_bits;
79
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
84
80
}
85
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
86
- return false;
87
- }
88
-
89
- if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
90
+ if (!dc_isar_feature(aa32_fpdp_v3, s)) {
91
return false;
92
}
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from
3
We will eventually remove the early ARM_FEATURE_VFP test,
4
the neon register file.
4
so add a proper test for each trans_* that does not already
5
5
have another ISA test.
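For illustration, the same idea in ordinary C outside TCG: pulling a 16-bit lane out of a 32-bit word with shift and mask versus loading it directly from the backing store (the layout here is a toy, not the real register file):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static uint8_t regfile[16];    /* toy stand-in for the register file */

    static uint16_t load_u16_shift_mask(int word_idx, int half)
    {
        uint32_t w;
        memcpy(&w, &regfile[word_idx * 4], 4);
        return (uint16_t)(w >> (half * 16));   /* shift, then truncate */
    }

    static uint16_t load_u16_direct(int elt_idx)
    {
        uint16_t h;
        memcpy(&h, &regfile[elt_idx * 2], 2);  /* direct 16-bit load */
        return h;
    }

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            regfile[i] = (uint8_t)i;
        }
        printf("%04x %04x\n", load_u16_shift_mask(1, 1), load_u16_direct(3));
        return 0;
    }

On a little-endian host the two reads return the same lane; a big-endian host also needs an index fixup, which is what the neon_element_offset helper later in this series provides.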
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
9
Message-id: 20200224222232.13807-9-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
12
target/arm/translate-vfp.inc.c | 78 ++++++++++++++++++++++++++++++----
12
1 file changed, 50 insertions(+), 42 deletions(-)
13
1 file changed, 69 insertions(+), 9 deletions(-)
13
14
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
--- a/target/arm/translate-vfp.inc.c
17
+++ b/target/arm/translate.c
18
+++ b/target/arm/translate-vfp.inc.c
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
19
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
19
return tmp;
20
int pass;
20
}
21
uint32_t offset;
21
22
22
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
23
+ /* SIZE == 2 is a VFP instruction; otherwise NEON. */
23
+{
24
+ if (a->size == 2
24
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
25
+ ? !dc_isar_feature(aa32_fpsp_v2, s)
25
+
26
+ : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
26
+ switch (mop) {
27
+ return false;
27
+ case MO_UB:
28
+ }
28
+ tcg_gen_ld8u_i32(var, cpu_env, offset);
29
+
29
+ break;
30
/* UNDEF accesses to D16-D31 if they don't exist */
30
+ case MO_UW:
31
if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
31
+ tcg_gen_ld16u_i32(var, cpu_env, offset);
32
return false;
32
+ break;
33
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
33
+ case MO_UL:
34
pass = extract32(offset, 2, 1);
34
+ tcg_gen_ld_i32(var, cpu_env, offset);
35
offset = extract32(offset, 0, 2) * 8;
35
+ break;
36
36
+ default:
37
- if (a->size != 2 && !arm_dc_feature(s, ARM_FEATURE_NEON)) {
37
+ g_assert_not_reached();
38
- return false;
38
+ }
39
- }
39
+}
40
-
40
+
41
if (!vfp_access_check(s)) {
41
static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
42
return true;
43
}
44
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
45
int pass;
46
uint32_t offset;
47
48
+ /* SIZE == 2 is a VFP instruction; otherwise NEON. */
49
+ if (a->size == 2
50
+ ? !dc_isar_feature(aa32_fpsp_v2, s)
51
+ : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
52
+ return false;
53
+ }
54
+
55
/* UNDEF accesses to D16-D31 if they don't exist */
56
if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
57
return false;
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
59
pass = extract32(offset, 2, 1);
60
offset = extract32(offset, 0, 2) * 8;
61
62
- if (a->size != 2 && !arm_dc_feature(s, ARM_FEATURE_NEON)) {
63
- return false;
64
- }
65
-
66
if (!vfp_access_check(s)) {
67
return true;
68
}
69
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
70
TCGv_i32 tmp;
71
bool ignore_vfp_enabled = false;
72
73
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
74
+ return false;
75
+ }
76
+
77
if (arm_dc_feature(s, ARM_FEATURE_M)) {
78
/*
79
* The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
80
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
42
{
81
{
43
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
82
TCGv_i32 tmp;
44
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
83
45
tcg_temp_free_i32(var);
84
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
46
}
85
+ return false;
47
86
+ }
48
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
87
+
49
+{
88
if (!vfp_access_check(s)) {
50
+ long offset = neon_element_offset(reg, ele, size);
89
return true;
51
+
90
}
52
+ switch (size) {
91
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
53
+ case MO_8:
54
+ tcg_gen_st8_i32(var, cpu_env, offset);
55
+ break;
56
+ case MO_16:
57
+ tcg_gen_st16_i32(var, cpu_env, offset);
58
+ break;
59
+ case MO_32:
60
+ tcg_gen_st_i32(var, cpu_env, offset);
61
+ break;
62
+ default:
63
+ g_assert_not_reached();
64
+ }
65
+}
66
+
67
static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
68
{
92
{
69
long offset = neon_element_offset(reg, ele, size);
93
TCGv_i32 tmp;
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
94
71
int stride;
95
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
72
int size;
96
+ return false;
73
int reg;
97
+ }
74
- int pass;
98
+
75
int load;
99
/*
76
- int shift;
100
* VMOV between two general-purpose registers and two single precision
77
int n;
101
* floating point registers
78
int vec_size;
102
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
79
int mmu_idx;
103
80
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
104
/*
81
} else {
105
* VMOV between two general-purpose registers and one double precision
82
/* Single element. */
106
- * floating point register
83
int idx = (insn >> 4) & 0xf;
107
+ * floating point register. Note that this does not require support
84
- pass = (insn >> 7) & 1;
108
+ * for double precision arithmetic.
85
+ int reg_idx;
109
*/
86
switch (size) {
110
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
87
case 0:
111
+ return false;
88
- shift = ((insn >> 5) & 3) * 8;
112
+ }
89
+ reg_idx = (insn >> 5) & 7;
113
90
stride = 1;
114
/* UNDEF accesses to D16-D31 if they don't exist */
91
break;
115
if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
92
case 1:
116
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
93
- shift = ((insn >> 6) & 1) * 16;
117
uint32_t offset;
94
+ reg_idx = (insn >> 6) & 3;
118
TCGv_i32 addr, tmp;
95
stride = (insn & (1 << 5)) ? 2 : 1;
119
96
break;
120
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
97
case 2:
121
+ return false;
98
- shift = 0;
122
+ }
99
+ reg_idx = (insn >> 7) & 1;
123
+
100
stride = (insn & (1 << 6)) ? 2 : 1;
124
if (!vfp_access_check(s)) {
101
break;
125
return true;
102
default:
126
}
103
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
127
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
104
*/
128
TCGv_i32 addr;
105
return 1;
129
TCGv_i64 tmp;
106
}
130
107
+ tmp = tcg_temp_new_i32();
131
+ /* Note that this does not require support for double arithmetic. */
108
addr = tcg_temp_new_i32();
132
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
109
load_reg_var(s, addr, rn);
133
+ return false;
110
for (reg = 0; reg < nregs; reg++) {
134
+ }
111
if (load) {
135
+
112
- tmp = tcg_temp_new_i32();
136
/* UNDEF accesses to D16-D31 if they don't exist */
113
- switch (size) {
137
if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
114
- case 0:
138
return false;
115
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
139
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
116
- break;
140
TCGv_i32 addr, tmp;
117
- case 1:
141
int i, n;
118
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
142
119
- break;
143
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
120
- case 2:
144
+ return false;
121
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
145
+ }
122
- break;
146
+
123
- default: /* Avoid compiler warnings. */
147
n = a->imm;
124
- abort();
148
125
- }
149
if (n == 0 || (a->vd + n) > 32) {
126
- if (size != 2) {
150
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
127
- tmp2 = neon_load_reg(rd, pass);
151
TCGv_i64 tmp;
128
- tcg_gen_deposit_i32(tmp, tmp2, tmp,
152
int i, n;
129
- shift, size ? 16 : 8);
153
130
- tcg_temp_free_i32(tmp2);
154
+ /* Note that this does not require support for double arithmetic. */
131
- }
155
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
132
- neon_store_reg(rd, pass, tmp);
156
+ return false;
133
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
157
+ }
134
+ s->be_data | size);
158
+
135
+ neon_store_element(rd, reg_idx, size, tmp);
159
n = a->imm >> 1;
136
} else { /* Store */
160
137
- tmp = neon_load_reg(rd, pass);
161
if (n == 0 || (a->vd + n) > 32 || n > 16) {
138
- if (shift)
162
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
139
- tcg_gen_shri_i32(tmp, tmp, shift);
163
TCGv_i32 f0, f1, fd;
140
- switch (size) {
164
TCGv_ptr fpst;
141
- case 0:
165
142
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
166
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
143
- break;
167
+ return false;
144
- case 1:
168
+ }
145
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
169
+
146
- break;
170
if (!dc_isar_feature(aa32_fpshvec, s) &&
147
- case 2:
171
(veclen != 0 || s->vec_stride != 0)) {
148
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
172
return false;
149
- break;
173
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
150
- }
174
int veclen = s->vec_len;
151
- tcg_temp_free_i32(tmp);
175
TCGv_i32 f0, fd;
152
+ neon_load_element(tmp, rd, reg_idx, size);
176
153
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
177
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
154
+ s->be_data | size);
178
+ return false;
155
}
179
+ }
156
rd += stride;
180
+
157
tcg_gen_addi_i32(addr, addr, 1 << size);
181
if (!dc_isar_feature(aa32_fpshvec, s) &&
158
}
182
(veclen != 0 || s->vec_stride != 0)) {
159
tcg_temp_free_i32(addr);
183
return false;
160
+ tcg_temp_free_i32(tmp);
184
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
161
stride = nregs * (1 << size);
185
{
162
}
186
TCGv_i32 vd, vm;
187
188
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
189
+ return false;
190
+ }
191
+
192
/* Vm/M bits must be zero for the Z variant */
193
if (a->z && a->vm != 0) {
194
return false;
195
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
196
TCGv_i32 vm;
197
TCGv_ptr fpst;
198
199
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
200
+ return false;
201
+ }
202
+
203
if (!vfp_access_check(s)) {
204
return true;
205
}
206
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
207
TCGv_i32 vm;
208
TCGv_ptr fpst;
209
210
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
211
+ return false;
212
+ }
213
+
214
if (!vfp_access_check(s)) {
215
return true;
163
}
216
}
164
--
217
--
165
2.19.1
218
2.20.1
166
219
167
220
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Having V6 alone imply jazelle was wrong for cortex-m0.
3
All remaining tests for VFP4 are for fused multiply-add insns.
4
Change to an assertion for V6 & !M.
5
4
6
This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
5
Since the MVFR1 field is used for both VFP and NEON, move its adjustment
7
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
6
from the !has_neon block to the (!has_vfp && !has_neon) block.
8
7
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Test for vfp of the appropraite width alongside the test for simdfmac
9
within translate-vfp.inc.c. Within disas_neon_data_insn, we have
10
already tested for ARM_FEATURE_NEON.
11
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 20200224222232.13807-10-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
16
---
15
target/arm/cpu.h | 6 +++++-
17
target/arm/cpu.h | 12 ++++++++++++
16
target/arm/cpu.c | 17 ++++++++++++++---
18
target/arm/cpu.c | 6 +++++-
17
target/arm/translate.c | 2 +-
19
target/arm/translate-vfp.inc.c | 22 ++++++++++++++++++----
18
3 files changed, 20 insertions(+), 5 deletions(-)
20
target/arm/translate.c | 2 +-
21
4 files changed, 36 insertions(+), 6 deletions(-)
19
22
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
25
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
27
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_dpconv(const ARMISARegisters *id)
25
ARM_FEATURE_PMU, /* has PMU support */
28
return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 1;
26
ARM_FEATURE_VBAR, /* has cp15 VBAR */
27
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
28
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
29
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
30
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
31
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
34
}
29
}
35
30
36
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
31
+/*
32
+ * Note that this ID register field covers both VFP and Neon FMAC,
33
+ * so should usually be tested in combination with some other
34
+ * check that confirms the presence of whichever of VFP or Neon is
35
+ * relevant, to avoid accidentally enabling a Neon feature on
36
+ * a VFP-no-Neon core or vice-versa.
37
+ */
38
+static inline bool isar_feature_aa32_simdfmac(const ARMISARegisters *id)
37
+{
39
+{
38
+ return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
40
+ return FIELD_EX32(id->mvfr1, MVFR1, SIMDFMAC) != 0;
39
+}
41
+}
40
+
42
+
41
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
43
static inline bool isar_feature_aa32_vsel(const ARMISARegisters *id)
42
{
44
{
43
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
45
return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 1;
44
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/cpu.c
48
--- a/target/arm/cpu.c
47
+++ b/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
48
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
51
u = FIELD_DP32(u, MVFR1, SIMDINT, 0);
52
u = FIELD_DP32(u, MVFR1, SIMDSP, 0);
53
u = FIELD_DP32(u, MVFR1, SIMDHP, 0);
54
- u = FIELD_DP32(u, MVFR1, SIMDFMAC, 0);
55
cpu->isar.mvfr1 = u;
56
57
u = cpu->isar.mvfr2;
58
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
59
u = cpu->isar.mvfr0;
60
u = FIELD_DP32(u, MVFR0, SIMDREG, 0);
61
cpu->isar.mvfr0 = u;
62
+
63
+ /* Despite the name, this field covers both VFP and Neon */
64
+ u = cpu->isar.mvfr1;
65
+ u = FIELD_DP32(u, MVFR1, SIMDFMAC, 0);
66
+ cpu->isar.mvfr1 = u;
49
}
67
}
50
if (arm_feature(env, ARM_FEATURE_V6)) {
68
51
set_feature(env, ARM_FEATURE_V5);
69
if (arm_feature(env, ARM_FEATURE_M) && !cpu->has_dsp) {
52
- set_feature(env, ARM_FEATURE_JAZELLE);
70
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
53
if (!arm_feature(env, ARM_FEATURE_M)) {
71
index XXXXXXX..XXXXXXX 100644
54
+ assert(cpu_isar_feature(jazelle, cpu));
72
--- a/target/arm/translate-vfp.inc.c
55
set_feature(env, ARM_FEATURE_AUXCR);
73
+++ b/target/arm/translate-vfp.inc.c
56
}
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
75
76
/*
77
* Present in VFPv4 only.
78
+ * Note that we can't rely on the SIMDFMAC check alone, because
79
+ * in a Neon-no-VFP core that ID register field will be non-zero.
80
+ */
81
+ if (!dc_isar_feature(aa32_simdfmac, s) ||
82
+ !dc_isar_feature(aa32_fpsp_v2, s)) {
83
+ return false;
84
+ }
85
+ /*
86
* In v7A, UNPREDICTABLE with non-zero vector length/stride; from
87
* v8A, must UNDEF. We choose to UNDEF for both v7A and v8A.
88
*/
89
- if (!arm_dc_feature(s, ARM_FEATURE_VFP4) ||
90
- (s->vec_len != 0 || s->vec_stride != 0)) {
91
+ if (s->vec_len != 0 || s->vec_stride != 0) {
92
return false;
57
}
93
}
58
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
94
59
set_feature(&cpu->env, ARM_FEATURE_VFP);
95
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
60
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
96
61
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
97
/*
62
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
98
* Present in VFPv4 only.
63
cpu->midr = 0x41069265;
99
+ * Note that we can't rely on the SIMDFMAC check alone, because
64
cpu->reset_fpsid = 0x41011090;
100
+ * in a Neon-no-VFP core that ID register field will be non-zero.
65
cpu->ctr = 0x1dd20d2;
101
+ */
66
cpu->reset_sctlr = 0x00090078;
102
+ if (!dc_isar_feature(aa32_simdfmac, s) ||
67
+
103
+ !dc_isar_feature(aa32_fpdp_v2, s)) {
104
+ return false;
105
+ }
68
+ /*
106
+ /*
69
+ * ARMv5 does not have the ID_ISAR registers, but we can still
107
* In v7A, UNPREDICTABLE with non-zero vector length/stride; from
70
+ * set the field to indicate Jazelle support within QEMU.
108
* v8A, must UNDEF. We choose to UNDEF for both v7A and v8A.
71
+ */
109
*/
72
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
110
- if (!arm_dc_feature(s, ARM_FEATURE_VFP4) ||
73
}
111
- (s->vec_len != 0 || s->vec_stride != 0)) {
74
112
+ if (s->vec_len != 0 || s->vec_stride != 0) {
75
static void arm946_initfn(Object *obj)
113
return false;
76
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
114
}
77
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
115
78
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
79
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
80
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
81
cpu->midr = 0x4106a262;
82
cpu->reset_fpsid = 0x410110a0;
83
cpu->ctr = 0x1dd20d2;
84
cpu->reset_sctlr = 0x00090078;
85
cpu->reset_auxcr = 1;
86
+
87
+ /*
88
+ * ARMv5 does not have the ID_ISAR registers, but we can still
89
+ * set the field to indicate Jazelle support within QEMU.
90
+ */
91
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
92
+
93
{
94
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
95
ARMCPRegInfo ifar = {
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
116
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
117
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
118
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
119
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@
120
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
101
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
121
}
102
/* currently all emulated v5 cores are also v5TE, so don't bother */
122
break;
103
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
123
case NEON_3R_VFM_VQRDMLSH:
104
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
124
- if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
105
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
125
+ if (!dc_isar_feature(aa32_simdfmac, s)) {
106
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
126
return 1;
107
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
127
}
108
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
128
break;
109
--
129
--
110
2.19.1
130
2.20.1
111
131
112
132
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We now have proper ISA checks within each trans_* function.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
7
Message-id: 20200224222232.13807-11-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate.c | 31 +++++++++++++++----------------
10
target/arm/translate.c | 4 ----
9
1 file changed, 15 insertions(+), 16 deletions(-)
11
1 file changed, 4 deletions(-)
10
12
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
15
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
16
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
16
vec_size, vec_size);
18
*/
17
}
19
static int disas_vfp_insn(DisasContext *s, uint32_t insn)
18
return 0;
20
{
19
+
21
- if (!arm_dc_feature(s, ARM_FEATURE_VFP)) {
20
+ case NEON_3R_VMUL: /* VMUL */
22
- return 1;
21
+ if (u) {
23
- }
22
+ /* Polynomial case allows only P8 and is handled below. */
24
-
23
+ if (size != 0) {
25
/*
24
+ return 1;
26
* If the decodetree decoder handles this insn it will always
25
+ }
27
* emit code to either execute the insn or generate an appropriate
26
+ } else {
27
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
28
+ vec_size, vec_size);
29
+ return 0;
30
+ }
31
+ break;
32
}
33
if (size == 3) {
34
/* 64-bit element instructions. */
35
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
36
return 1;
37
}
38
break;
39
- case NEON_3R_VMUL:
40
- if (u && (size != 0)) {
41
- /* UNDEF on invalid size for polynomial subcase */
42
- return 1;
43
- }
44
- break;
45
case NEON_3R_VFM_VQRDMLSH:
46
if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
47
return 1;
48
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
49
}
50
break;
51
case NEON_3R_VMUL:
52
- if (u) { /* polynomial */
53
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
54
- } else { /* Integer */
55
- switch (size) {
56
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
57
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
58
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
59
- default: abort();
60
- }
61
- }
62
+ /* VMUL.P8; other cases already eliminated. */
63
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
64
break;
65
case NEON_3R_VPMAX:
66
GEN_NEON_INTEGER_OP(pmax);
67
--
28
--
68
2.19.1
29
2.20.1
69
30
70
31
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Also introduces neon_element_offset to find the env offset
3
Now that we no longer have an early check for ARM_FEATURE_VFP,
4
of a specific element within a neon register.
4
we can use the proper ISA check in trans_VLLDM_VLSTM.
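A standalone model of that offset calculation, including the host-endian fixup the real helper applies (the 8-byte D register size is the only real constant here; the base address is a toy):

    #include <stdio.h>

    /* Toy model: offset of element ELE of size 2**SIZE within D register
     * REG, counting from the least significant end of the register. */
    static long neon_element_offset(int reg, int element, int size_log2)
    {
        int element_size = 1 << size_log2;
        long ofs = element * element_size;
    #ifdef BIG_ENDIAN_HOST    /* stand-in for HOST_WORDS_BIGENDIAN */
        if (element_size < 8) {
            ofs ^= 8 - element_size;
        }
    #endif
        return reg * 8 + ofs;  /* toy base: 8 bytes per D register */
    }

    int main(void)
    {
        printf("%ld\n", neon_element_offset(1, 3, 1)); /* 16-bit lane 3 of d1 */
        return 0;
    }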
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200224222232.13807-12-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
11
target/arm/translate-vfp.inc.c | 39 +++++++++++++++++++++++++
12
1 file changed, 36 insertions(+), 27 deletions(-)
12
target/arm/translate.c | 53 ++++++----------------------------
13
target/arm/vfp.decode | 2 ++
14
3 files changed, 50 insertions(+), 44 deletions(-)
13
15
16
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.inc.c
19
+++ b/target/arm/translate-vfp.inc.c
20
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
21
tcg_temp_free_ptr(fpst);
22
return true;
23
}
24
+
25
+/*
26
+ * Decode VLLDM and VLSTM are nonstandard because:
27
+ * * if there is no FPU then these insns must NOP in
28
+ * Secure state and UNDEF in Nonsecure state
29
+ * * if there is an FPU then these insns do not have
30
+ * the usual behaviour that vfp_access_check() provides of
31
+ * being controlled by CPACR/NSACR enable bits or the
32
+ * lazy-stacking logic.
33
+ */
34
+static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
35
+{
36
+ TCGv_i32 fptr;
37
+
38
+ if (!arm_dc_feature(s, ARM_FEATURE_M) ||
39
+ !arm_dc_feature(s, ARM_FEATURE_V8)) {
40
+ return false;
41
+ }
42
+ /* If not secure, UNDEF. */
43
+ if (!s->v8m_secure) {
44
+ return false;
45
+ }
46
+ /* If no fpu, NOP. */
47
+ if (!dc_isar_feature(aa32_vfp, s)) {
48
+ return true;
49
+ }
50
+
51
+ fptr = load_reg(s, a->rn);
52
+ if (a->l) {
53
+ gen_helper_v7m_vlldm(cpu_env, fptr);
54
+ } else {
55
+ gen_helper_v7m_vlstm(cpu_env, fptr);
56
+ }
57
+ tcg_temp_free_i32(fptr);
58
+
59
+ /* End the TB, because we have updated FP control bits */
60
+ s->base.is_jmp = DISAS_UPDATE;
61
+ return true;
62
+}
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
63
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
64
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
65
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
66
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
67
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
19
return vfp_reg_offset(0, sreg);
68
goto illegal_op; /* op0 = 0b11 : unallocated */
20
}
69
}
21
70
22
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
71
- /*
23
+ * where 0 is the least significant end of the register.
72
- * Decode VLLDM and VLSTM first: these are nonstandard because:
24
+ */
73
- * * if there is no FPU then these insns must NOP in
25
+static inline long
74
- * Secure state and UNDEF in Nonsecure state
26
+neon_element_offset(int reg, int element, TCGMemOp size)
75
- * * if there is an FPU then these insns do not have
27
+{
76
- * the usual behaviour that disas_vfp_insn() provides of
28
+ int element_size = 1 << size;
77
- * being controlled by CPACR/NSACR enable bits or the
29
+ int ofs = element * element_size;
78
- * lazy-stacking logic.
30
+#ifdef HOST_WORDS_BIGENDIAN
79
- */
31
+ /* Calculate the offset assuming fully little-endian,
80
- if (arm_dc_feature(s, ARM_FEATURE_V8) &&
32
+ * then XOR to account for the order of the 8-byte units.
81
- (insn & 0xffa00f00) == 0xec200a00) {
33
+ */
82
- /* 0b1110_1100_0x1x_xxxx_xxxx_1010_xxxx_xxxx
34
+ if (element_size < 8) {
83
- * - VLLDM, VLSTM
35
+ ofs ^= 8 - element_size;
84
- * We choose to UNDEF if the RAZ bits are non-zero.
36
+ }
85
- */
37
+#endif
86
- if (!s->v8m_secure || (insn & 0x0040f0ff)) {
38
+ return neon_reg_offset(reg, 0) + ofs;
87
+ if (disas_vfp_insn(s, insn)) {
39
+}
88
+ if (((insn >> 8) & 0xe) == 10 &&
89
+ dc_isar_feature(aa32_fpsp_v2, s)) {
90
+ /* FP, and the CPU supports it */
91
goto illegal_op;
92
+ } else {
93
+ /* All other insns: NOCP */
94
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
95
+ syn_uncategorized(),
96
+ default_exception_el(s));
97
}
98
-
99
- if (arm_dc_feature(s, ARM_FEATURE_VFP)) {
100
- uint32_t rn = (insn >> 16) & 0xf;
101
- TCGv_i32 fptr = load_reg(s, rn);
102
-
103
- if (extract32(insn, 20, 1)) {
104
- gen_helper_v7m_vlldm(cpu_env, fptr);
105
- } else {
106
- gen_helper_v7m_vlstm(cpu_env, fptr);
107
- }
108
- tcg_temp_free_i32(fptr);
109
-
110
- /* End the TB, because we have updated FP control bits */
111
- s->base.is_jmp = DISAS_UPDATE;
112
- }
113
- break;
114
}
115
- if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
116
- ((insn >> 8) & 0xe) == 10) {
117
- /* FP, and the CPU supports it */
118
- if (disas_vfp_insn(s, insn)) {
119
- goto illegal_op;
120
- }
121
- break;
122
- }
123
-
124
- /* All other insns: NOCP */
125
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized(),
126
- default_exception_el(s));
127
break;
128
}
129
if ((insn & 0xfe000a00) == 0xfc000800
130
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
131
index XXXXXXX..XXXXXXX 100644
132
--- a/target/arm/vfp.decode
133
+++ b/target/arm/vfp.decode
134
@@ -XXX,XX +XXX,XX @@ VCVT_sp_int ---- 1110 1.11 110 s:1 .... 1010 rz:1 1.0 .... \
135
vd=%vd_sp vm=%vm_sp
136
VCVT_dp_int ---- 1110 1.11 110 s:1 .... 1011 rz:1 1.0 .... \
137
vd=%vd_sp vm=%vm_dp
40
+
138
+
41
static TCGv_i32 neon_load_reg(int reg, int pass)
139
+VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
42
{
43
TCGv_i32 tmp = tcg_temp_new_i32();
44
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
45
tmp = load_reg(s, rd);
46
if (insn & (1 << 23)) {
47
/* VDUP */
48
- if (size == 0) {
49
- gen_neon_dup_u8(tmp, 0);
50
- } else if (size == 1) {
51
- gen_neon_dup_low16(tmp);
52
- }
53
- for (n = 0; n <= pass * 2; n++) {
54
- tmp2 = tcg_temp_new_i32();
55
- tcg_gen_mov_i32(tmp2, tmp);
56
- neon_store_reg(rn, n, tmp2);
57
- }
58
- neon_store_reg(rn, n, tmp);
59
+ int vec_size = pass ? 16 : 8;
60
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
61
+ vec_size, vec_size, tmp);
62
+ tcg_temp_free_i32(tmp);
63
} else {
64
/* VMOV */
65
switch (size) {
66
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
67
tcg_temp_free_i32(tmp);
68
} else if ((insn & 0x380) == 0) {
69
/* VDUP */
70
+ int element;
71
+ TCGMemOp size;
72
+
73
if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
74
return 1;
75
}
76
- if (insn & (1 << 19)) {
77
- tmp = neon_load_reg(rm, 1);
78
- } else {
79
- tmp = neon_load_reg(rm, 0);
80
- }
81
if (insn & (1 << 16)) {
82
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
83
+ size = MO_8;
84
+ element = (insn >> 17) & 7;
85
} else if (insn & (1 << 17)) {
86
- if ((insn >> 18) & 1)
87
- gen_neon_dup_high16(tmp);
88
- else
89
- gen_neon_dup_low16(tmp);
90
+ size = MO_16;
91
+ element = (insn >> 18) & 3;
92
+ } else {
93
+ size = MO_32;
94
+ element = (insn >> 19) & 1;
95
}
96
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
97
- tmp2 = tcg_temp_new_i32();
98
- tcg_gen_mov_i32(tmp2, tmp);
99
- neon_store_reg(rd, pass, tmp2);
100
- }
101
- tcg_temp_free_i32(tmp);
102
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
103
+ neon_element_offset(rm, element, size),
104
+ q ? 16 : 8, q ? 16 : 8);
105
} else {
106
return 1;
107
}
108
--
140
--
109
2.19.1
141
2.20.1
110
142
111
143
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Have the calls adjacent as an intermediate step toward
4
actually merging the decodes.
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
5
[PMM: added parens in ?: expression]
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200224222232.13807-13-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/translate.c | 81 ++++++++++++++----------------------------
11
target/arm/translate.c | 83 +++++++++++++++---------------------------
10
1 file changed, 26 insertions(+), 55 deletions(-)
12
1 file changed, 29 insertions(+), 54 deletions(-)
11
13
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate.c
16
--- a/target/arm/translate.c
15
+++ b/target/arm/translate.c
17
+++ b/target/arm/translate.c
16
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
17
tcg_temp_free_i32(tmp);
18
}
19
20
-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
21
-{
22
- TCGv_i32 tmp = tcg_temp_new_i32();
23
- if (shift)
24
- tcg_gen_shri_i32(var, var, shift);
25
- tcg_gen_ext8u_i32(var, var);
26
- tcg_gen_shli_i32(tmp, var, 8);
27
- tcg_gen_or_i32(var, var, tmp);
28
- tcg_gen_shli_i32(tmp, var, 16);
29
- tcg_gen_or_i32(var, var, tmp);
30
- tcg_temp_free_i32(tmp);
31
-}
32
-
33
static void gen_neon_dup_low16(TCGv_i32 var)
34
{
35
TCGv_i32 tmp = tcg_temp_new_i32();
36
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
18
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
37
tcg_temp_free_i32(tmp);
19
tcg_temp_free_i32(tmp);
38
}
20
}
39
21
40
-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
22
-/*
23
- * Disassemble a VFP instruction. Returns nonzero if an error occurred
24
- * (ie. an undefined instruction).
25
- */
26
-static int disas_vfp_insn(DisasContext *s, uint32_t insn)
41
-{
27
-{
42
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
28
- /*
43
- TCGv_i32 tmp = tcg_temp_new_i32();
29
- * If the decodetree decoder handles this insn it will always
44
- switch (size) {
30
- * emit code to either execute the insn or generate an appropriate
45
- case 0:
31
- * exception; so we don't need to ever return non-zero to tell
46
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
32
- * the calling code to emit an UNDEF exception.
47
- gen_neon_dup_u8(tmp, 0);
33
- */
48
- break;
34
- if (extract32(insn, 28, 4) == 0xf) {
49
- case 1:
35
- if (disas_vfp_uncond(s, insn)) {
50
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
36
- return 0;
51
- gen_neon_dup_low16(tmp);
37
- }
52
- break;
38
- } else {
53
- case 2:
39
- if (disas_vfp(s, insn)) {
54
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
40
- return 0;
55
- break;
41
- }
56
- default: /* Avoid compiler warnings. */
57
- abort();
58
- }
42
- }
59
- return tmp;
43
- /* If the decodetree decoder didn't handle this insn, it must be UNDEF */
44
- return 1;
60
-}
45
-}
61
-
46
-
62
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
47
static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
63
uint32_t dp)
64
{
48
{
65
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
49
#ifndef CONFIG_USER_ONLY
66
int load;
50
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
67
int shift;
51
ARCH(5);
68
int n;
52
69
+ int vec_size;
53
/* Unconditional instructions. */
70
TCGv_i32 addr;
54
- if (disas_a32_uncond(s, insn)) {
71
TCGv_i32 tmp;
55
+ /* TODO: Perhaps merge these into one decodetree output file. */
72
TCGv_i32 tmp2;
56
+ if (disas_a32_uncond(s, insn) ||
73
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
57
+ disas_vfp_uncond(s, insn)) {
58
return;
59
}
60
/* fall back to legacy decoder */
61
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
74
}
62
}
75
addr = tcg_temp_new_i32();
63
return;
76
load_reg_var(s, addr, rn);
64
}
77
- if (nregs == 1) {
65
- if ((insn & 0x0f000e10) == 0x0e000a00) {
78
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
66
- /* VFP. */
79
- tmp = gen_load_and_replicate(s, addr, size);
67
- if (disas_vfp_insn(s, insn)) {
80
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
68
- goto illegal_op;
81
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
69
- }
82
- if (insn & (1 << 5)) {
70
- return;
83
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
71
- }
84
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
72
if ((insn & 0x0e000f00) == 0x0c000100) {
73
if (arm_dc_feature(s, ARM_FEATURE_IWMMXT)) {
74
/* iWMMXt register transfer. */
75
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
76
arm_skip_unless(s, cond);
77
}
78
79
- if (disas_a32(s, insn)) {
80
+ /* TODO: Perhaps merge these into one decodetree output file. */
81
+ if (disas_a32(s, insn) ||
82
+ disas_vfp(s, insn)) {
83
return;
84
}
85
/* fall back to legacy decoder */
86
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
87
case 0xd:
88
case 0xe:
89
if (((insn >> 8) & 0xe) == 10) {
90
- /* VFP. */
91
- if (disas_vfp_insn(s, insn)) {
92
- goto illegal_op;
93
- }
94
- } else if (disas_coproc_insn(s, insn)) {
95
+ /* VFP, but failed disas_vfp. */
96
+ goto illegal_op;
97
+ }
98
+ if (disas_coproc_insn(s, insn)) {
99
/* Coprocessor. */
100
goto illegal_op;
101
}
102
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
103
ARCH(6T2);
104
}
105
106
- if (disas_t32(s, insn)) {
107
+ /*
108
+ * TODO: Perhaps merge these into one decodetree output file.
109
+ * Note disas_vfp is written for a32 with cond field in the
110
+ * top nibble. The t32 encoding requires 0xe in the top nibble.
111
+ */
112
+ if (disas_t32(s, insn) ||
113
+ disas_vfp_uncond(s, insn) ||
114
+ ((insn >> 28) == 0xe && disas_vfp(s, insn))) {
115
return;
116
}
117
/* fall back to legacy decoder */
118
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
119
goto illegal_op; /* op0 = 0b11 : unallocated */
120
}
121
122
- if (disas_vfp_insn(s, insn)) {
123
- if (((insn >> 8) & 0xe) == 10 &&
124
- dc_isar_feature(aa32_fpsp_v2, s)) {
125
- /* FP, and the CPU supports it */
126
- goto illegal_op;
127
- } else {
128
- /* All other insns: NOCP */
129
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
130
- syn_uncategorized(),
131
- default_exception_el(s));
85
- }
132
- }
86
- tcg_temp_free_i32(tmp);
133
+ if (((insn >> 8) & 0xe) == 10 &&
87
- } else {
134
+ dc_isar_feature(aa32_fpsp_v2, s)) {
88
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
135
+ /* FP, and the CPU supports it */
89
- stride = (insn & (1 << 5)) ? 2 : 1;
136
+ goto illegal_op;
90
- for (reg = 0; reg < nregs; reg++) {
137
+ } else {
91
- tmp = gen_load_and_replicate(s, addr, size);
138
+ /* All other insns: NOCP */
92
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
139
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
93
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
140
+ syn_uncategorized(),
94
- tcg_temp_free_i32(tmp);
141
+ default_exception_el(s));
95
- tcg_gen_addi_i32(addr, addr, 1 << size);
96
- rd += stride;
97
+
98
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
99
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
100
+ */
101
+ stride = (insn & (1 << 5)) ? 2 : 1;
102
+ vec_size = nregs == 1 ? stride * 8 : 8;
103
+
104
+ tmp = tcg_temp_new_i32();
105
+ for (reg = 0; reg < nregs; reg++) {
106
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
107
+ s->be_data | size);
108
+ if ((rd & 1) && vec_size == 16) {
109
+ /* We cannot write 16 bytes at once because the
110
+ * destination is unaligned.
111
+ */
112
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
113
+ 8, 8, tmp);
114
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
115
+ neon_reg_offset(rd, 0), 8, 8);
116
+ } else {
117
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
118
+ vec_size, vec_size, tmp);
119
}
120
+ tcg_gen_addi_i32(addr, addr, 1 << size);
121
+ rd += stride;
122
}
142
}
123
+ tcg_temp_free_i32(tmp);
143
break;
124
tcg_temp_free_i32(addr);
144
}
125
stride = (1 << size) * nregs;
145
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
146
goto illegal_op;
147
}
148
} else if (((insn >> 8) & 0xe) == 10) {
149
- if (disas_vfp_insn(s, insn)) {
150
- goto illegal_op;
151
- }
152
+ /* VFP, but failed disas_vfp. */
153
+ goto illegal_op;
126
} else {
154
} else {
155
if (insn & (1 << 28))
156
goto illegal_op;
127
--
157
--
128
2.19.1
158
2.20.1
129
159
130
160
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For a sequence of loads or stores from a single register,
3
Use isar feature tests instead of feature bit tests.
4
little-endian operations can be promoted to an 8-byte op.
4
5
This can reduce the number of operations by a factor of 8.
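The promotion is the same idea as this standalone sketch, where eight consecutive byte copies out of a little-endian register image collapse into one 64-bit access:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t reg[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        uint8_t out_a[8], out_b[8];

        for (int i = 0; i < 8; i++) {   /* eight 1-byte operations */
            out_a[i] = reg[i];
        }

        uint64_t v;                     /* one promoted 8-byte operation */
        memcpy(&v, reg, sizeof(v));
        memcpy(out_b, &v, sizeof(v));

        printf("%d\n", memcmp(out_a, out_b, 8) == 0);
        return 0;
    }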
5
Although none of QEMU's current CPUs have VFPv3 without D32,
6
replace the large comment explaining why with one line that
7
sets ARM_HWCAP_ARM_VFPv3D16 under the correct conditions.
8
Mirror the test sequence used in the linux kernel.
6
9
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20200224222232.13807-14-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
14
---
12
target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
15
linux-user/elfload.c | 23 +++++++++++++----------
13
1 file changed, 40 insertions(+), 26 deletions(-)
16
1 file changed, 13 insertions(+), 10 deletions(-)
14
17
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
18
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
20
--- a/linux-user/elfload.c
18
+++ b/target/arm/translate-a64.c
21
+++ b/linux-user/elfload.c
19
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
22
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
20
23
21
/* Store from vector register to memory */
24
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
22
static void do_vec_st(DisasContext *s, int srcidx, int element,
25
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
23
- TCGv_i64 tcg_addr, int size)
26
- GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
24
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
27
GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
25
{
28
GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
26
- TCGMemOp memop = s->be_data + size;
29
GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
27
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
30
- GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
28
31
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
29
read_vec_element(s, tcg_tmp, srcidx, element, size);
32
- GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
30
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
33
+ GET_FEATURE(ARM_FEATURE_LPAE, ARM_HWCAP_ARM_LPAE);
31
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
34
GET_FEATURE_ID(aa32_arm_div, ARM_HWCAP_ARM_IDIVA);
32
35
GET_FEATURE_ID(aa32_thumb_div, ARM_HWCAP_ARM_IDIVT);
33
tcg_temp_free_i64(tcg_tmp);
36
- /* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
34
}
37
- * Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
35
38
- * ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
36
/* Load from memory to vector register */
39
- * to our VFP_FP16 feature bit.
37
static void do_vec_ld(DisasContext *s, int destidx, int element,
40
- */
38
- TCGv_i64 tcg_addr, int size)
41
- GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPD32);
39
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
42
- GET_FEATURE(ARM_FEATURE_LPAE, ARM_HWCAP_ARM_LPAE);
40
{
43
+ GET_FEATURE_ID(aa32_vfp, ARM_HWCAP_ARM_VFP);
41
- TCGMemOp memop = s->be_data + size;
42
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
43
44
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
45
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
46
write_vec_element(s, tcg_tmp, destidx, element, size);
47
48
tcg_temp_free_i64(tcg_tmp);
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
50
bool is_postidx = extract32(insn, 23, 1);
51
bool is_q = extract32(insn, 30, 1);
52
TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
53
+ TCGMemOp endian = s->be_data;
54
55
- int ebytes = 1 << size;
56
- int elements = (is_q ? 128 : 64) / (8 << size);
57
+ int ebytes; /* bytes per element */
58
+ int elements; /* elements per vector */
59
int rpt; /* num iterations */
60
int selem; /* structure elements */
61
int r;
62
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
63
gen_check_sp_alignment(s);
64
}
65
66
+ /* For our purposes, bytes are always little-endian. */
67
+ if (size == 0) {
68
+ endian = MO_LE;
69
+ }
70
+
44
+
71
+ /* Consecutive little-endian elements from a single register
45
+ if (cpu_isar_feature(aa32_fpsp_v3, cpu) ||
72
+ * can be promoted to a larger little-endian operation.
46
+ cpu_isar_feature(aa32_fpdp_v3, cpu)) {
73
+ */
47
+ hwcaps |= ARM_HWCAP_ARM_VFPv3;
74
+ if (selem == 1 && endian == MO_LE) {
48
+ if (cpu_isar_feature(aa32_simd_r32, cpu)) {
75
+ size = 3;
49
+ hwcaps |= ARM_HWCAP_ARM_VFPD32;
76
+ }
50
+ } else {
77
+ ebytes = 1 << size;
51
+ hwcaps |= ARM_HWCAP_ARM_VFPv3D16;
78
+ elements = (is_q ? 16 : 8) / ebytes;
79
+
80
tcg_rn = cpu_reg_sp(s, rn);
81
tcg_addr = tcg_temp_new_i64();
82
tcg_gen_mov_i64(tcg_addr, tcg_rn);
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
84
for (r = 0; r < rpt; r++) {
85
int e;
86
for (e = 0; e < elements; e++) {
87
- int tt = (rt + r) % 32;
88
int xs;
89
for (xs = 0; xs < selem; xs++) {
90
+ int tt = (rt + r + xs) % 32;
91
if (is_store) {
92
- do_vec_st(s, tt, e, tcg_addr, size);
93
+ do_vec_st(s, tt, e, tcg_addr, size, endian);
94
} else {
95
- do_vec_ld(s, tt, e, tcg_addr, size);
96
-
97
- /* For non-quad operations, setting a slice of the low
98
- * 64 bits of the register clears the high 64 bits (in
99
- * the ARM ARM pseudocode this is implicit in the fact
100
- * that 'rval' is a 64 bit wide variable).
101
- * For quad operations, we might still need to zero the
102
- * high bits of SVE. We optimize by noticing that we only
103
- * need to do this the first time we touch a register.
104
- */
105
- if (e == 0 && (r == 0 || xs == selem - 1)) {
106
- clear_vec_high(s, is_q, tt);
107
- }
108
+ do_vec_ld(s, tt, e, tcg_addr, size, endian);
109
}
110
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
111
- tt = (tt + 1) % 32;
112
}
113
}
114
}
115
116
+ if (!is_store) {
117
+ /* For non-quad operations, setting a slice of the low
118
+ * 64 bits of the register clears the high 64 bits (in
119
+ * the ARM ARM pseudocode this is implicit in the fact
120
+ * that 'rval' is a 64 bit wide variable).
121
+ * For quad operations, we might still need to zero the
122
+ * high bits of SVE.
123
+ */
124
+ for (r = 0; r < rpt * selem; r++) {
125
+ int tt = (rt + r) % 32;
126
+ clear_vec_high(s, is_q, tt);
127
+ }
52
+ }
128
+ }
53
+ }
129
+
54
+ GET_FEATURE_ID(aa32_simdfmac, ARM_HWCAP_ARM_VFPv4);
130
if (is_postidx) {
55
131
int rm = extract32(insn, 16, 5);
56
return hwcaps;
132
if (rm == 31) {
57
}
133
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
134
} else {
135
/* Load/store one element per register */
136
if (is_load) {
137
- do_vec_ld(s, rt, index, tcg_addr, scale);
138
+ do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
139
} else {
140
- do_vec_st(s, rt, index, tcg_addr, scale);
141
+ do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
142
}
143
}
144
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
145
--
58
--
146
2.19.1
59
2.20.1
147
60
148
61
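As a side note on the endianness handling in the load/store-multiple patch above: the promotion relies on a simple property of little-endian layouts, namely that loading N consecutive little-endian elements into adjacent register slots produces the same bytes as one larger little-endian load. A standalone C sketch of that equivalence (illustration only, not part of the patch):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Standalone illustration; not QEMU code. */
    /* 8 bytes of guest memory holding four 16-bit little-endian elements */
    uint8_t mem[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
    uint64_t as_one_64 = 0, as_four_16 = 0;

    /* One MO_LE | MO_64 load of the whole slice */
    for (int i = 0; i < 8; i++) {
        as_one_64 |= (uint64_t)mem[i] << (8 * i);
    }

    /* Four MO_LE | MO_16 element loads written to consecutive slots */
    for (int e = 0; e < 4; e++) {
        uint64_t elt = mem[2 * e] | (mem[2 * e + 1] << 8);
        as_four_16 |= elt << (16 * e);
    }

    /* Same register contents either way, which is why the patch can widen
     * the selem == 1 && endian == MO_LE case to a single size = 3 access. */
    assert(as_one_64 == as_four_16);
    return 0;
}
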
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Both arm and thumb2 division are controlled by the same ISAR field,
3
We have converted all tests against these features
4
which takes care of the arm implies thumb case. Having M imply
4
to ISAR tests.
5
thumb2 division was wrong for cortex-m0, which is v6m and does not
5
6
have thumb2 at all, much less thumb2 division.
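For reference, ID_ISAR0.DIVIDE (bits [27:24]) encodes divide support as 0 (none), 1 (Thumb only) or 2 (ARM and Thumb), which is why the new isar_feature_thumb_div test checks for a nonzero field while isar_feature_arm_div checks for a value above 1. A minimal standalone sketch of that mapping (the hwcap constants here are placeholders for illustration, not the real linux-user values):

#include <stdint.h>
#include <stdio.h>

/* Placeholder hwcap bits, for illustration only */
#define HWCAP_IDIVT (1u << 0)
#define HWCAP_IDIVA (1u << 1)

/* ID_ISAR0.DIVIDE is bits [27:24]: 0 = none, 1 = Thumb, 2 = ARM + Thumb */
static uint32_t divide_field(uint32_t id_isar0)
{
    return (id_isar0 >> 24) & 0xf;
}

static uint32_t divide_hwcaps(uint32_t id_isar0)
{
    uint32_t div = divide_field(id_isar0);
    uint32_t hwcaps = 0;

    if (div != 0) {        /* corresponds to isar_feature_thumb_div */
        hwcaps |= HWCAP_IDIVT;
    }
    if (div > 1) {         /* corresponds to isar_feature_arm_div */
        hwcaps |= HWCAP_IDIVA;
    }
    return hwcaps;
}

int main(void)
{
    /* A Cortex-A15-style ID_ISAR0 advertises both encodings (DIVIDE = 2) */
    printf("hwcaps = %#x\n", divide_hwcaps(0x02101110));
    return 0;
}
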
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
8
Message-id: 20200224222232.13807-15-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
target/arm/cpu.h | 12 ++++++++++--
11
target/arm/cpu.h | 3 ---
15
linux-user/elfload.c | 4 ++--
12
target/arm/cpu.c | 25 -------------------------
16
target/arm/cpu.c | 10 +---------
13
target/arm/cpu64.c | 3 ---
17
target/arm/translate.c | 4 ++--
14
target/arm/kvm32.c | 5 -----
18
4 files changed, 15 insertions(+), 15 deletions(-)
15
target/arm/kvm64.c | 1 -
16
5 files changed, 37 deletions(-)
19
17
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
20
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
21
+++ b/target/arm/cpu.h
22
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
23
* mapping in linux-user/elfload.c:get_elf_hwcap().
24
*/
25
enum arm_features {
26
- ARM_FEATURE_VFP,
27
ARM_FEATURE_AUXCR, /* ARM1026 Auxiliary control register. */
28
ARM_FEATURE_XSCALE, /* Intel XScale extensions. */
29
ARM_FEATURE_IWMMXT, /* Intel iwMMXt extension. */
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
30
@@ -XXX,XX +XXX,XX @@ enum arm_features {
25
ARM_FEATURE_VFP3,
31
ARM_FEATURE_V7,
26
ARM_FEATURE_VFP_FP16,
32
ARM_FEATURE_THUMB2,
33
ARM_FEATURE_PMSA, /* no MMU; may have Memory Protection Unit */
34
- ARM_FEATURE_VFP3,
27
ARM_FEATURE_NEON,
35
ARM_FEATURE_NEON,
28
- ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
29
ARM_FEATURE_M, /* Microcontroller profile. */
36
ARM_FEATURE_M, /* Microcontroller profile. */
30
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
37
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
31
ARM_FEATURE_THUMB2EE,
32
@@ -XXX,XX +XXX,XX @@ enum arm_features {
38
@@ -XXX,XX +XXX,XX @@ enum arm_features {
33
ARM_FEATURE_V5,
39
ARM_FEATURE_V5,
34
ARM_FEATURE_STRONGARM,
40
ARM_FEATURE_STRONGARM,
35
ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
41
ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
36
- ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
42
- ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
37
ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
38
ARM_FEATURE_GENERIC_TIMER,
43
ARM_FEATURE_GENERIC_TIMER,
39
ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
44
ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
40
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
45
ARM_FEATURE_DUMMY_C15_REGS, /* RAZ/WI all of cp15 crn=15 */
41
/*
42
* 32-bit feature tests via id registers.
43
*/
44
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
45
+{
46
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
47
+}
48
+
49
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
50
+{
51
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
52
+}
53
+
54
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
55
{
56
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/linux-user/elfload.c
60
+++ b/linux-user/elfload.c
61
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
62
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
63
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
64
GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
65
- GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
66
- GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
67
+ GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
68
+ GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
69
/* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
70
* Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
71
* ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
72
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
73
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/cpu.c
48
--- a/target/arm/cpu.c
75
+++ b/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
50
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
51
if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
52
set_feature(&cpu->env, ARM_FEATURE_PMSA);
53
}
54
- /* Similarly for the VFP feature bits */
55
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP4)) {
56
- set_feature(&cpu->env, ARM_FEATURE_VFP3);
57
- }
58
- if (arm_feature(&cpu->env, ARM_FEATURE_VFP3)) {
59
- set_feature(&cpu->env, ARM_FEATURE_VFP);
60
- }
61
62
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
63
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
76
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
64
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
77
* Presence of EL2 itself is ARM_FEATURE_EL2, and of the
65
uint64_t t;
78
* Security Extensions is ARM_FEATURE_EL3.
66
uint32_t u;
79
*/
67
80
- set_feature(env, ARM_FEATURE_ARM_DIV);
68
- unset_feature(env, ARM_FEATURE_VFP);
81
+ assert(cpu_isar_feature(arm_div, cpu));
69
- unset_feature(env, ARM_FEATURE_VFP3);
82
set_feature(env, ARM_FEATURE_LPAE);
70
- unset_feature(env, ARM_FEATURE_VFP4);
83
set_feature(env, ARM_FEATURE_V7);
71
-
72
t = cpu->isar.id_aa64isar1;
73
t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 0);
74
cpu->isar.id_aa64isar1 = t;
75
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
76
77
cpu->dtb_compatible = "arm,arm926";
78
set_feature(&cpu->env, ARM_FEATURE_V5);
79
- set_feature(&cpu->env, ARM_FEATURE_VFP);
80
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
81
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
82
cpu->midr = 0x41069265;
83
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
84
85
cpu->dtb_compatible = "arm,arm1026";
86
set_feature(&cpu->env, ARM_FEATURE_V5);
87
- set_feature(&cpu->env, ARM_FEATURE_VFP);
88
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
89
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
90
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
91
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
92
93
cpu->dtb_compatible = "arm,arm1136";
94
set_feature(&cpu->env, ARM_FEATURE_V6);
95
- set_feature(&cpu->env, ARM_FEATURE_VFP);
96
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
97
set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
98
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
99
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
100
cpu->dtb_compatible = "arm,arm1136";
101
set_feature(&cpu->env, ARM_FEATURE_V6K);
102
set_feature(&cpu->env, ARM_FEATURE_V6);
103
- set_feature(&cpu->env, ARM_FEATURE_VFP);
104
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
105
set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
106
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
107
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
108
109
cpu->dtb_compatible = "arm,arm1176";
110
set_feature(&cpu->env, ARM_FEATURE_V6K);
111
- set_feature(&cpu->env, ARM_FEATURE_VFP);
112
set_feature(&cpu->env, ARM_FEATURE_VAPA);
113
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
114
set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
115
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
116
117
cpu->dtb_compatible = "arm,arm11mpcore";
118
set_feature(&cpu->env, ARM_FEATURE_V6K);
119
- set_feature(&cpu->env, ARM_FEATURE_VFP);
120
set_feature(&cpu->env, ARM_FEATURE_VAPA);
121
set_feature(&cpu->env, ARM_FEATURE_MPIDR);
122
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
123
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
124
set_feature(&cpu->env, ARM_FEATURE_M);
125
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
126
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
127
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
128
cpu->midr = 0x410fc240; /* r0p0 */
129
cpu->pmsav7_dregion = 8;
130
cpu->isar.mvfr0 = 0x10110021;
131
@@ -XXX,XX +XXX,XX @@ static void cortex_m7_initfn(Object *obj)
132
set_feature(&cpu->env, ARM_FEATURE_M);
133
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
134
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
135
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
136
cpu->midr = 0x411fc272; /* r1p2 */
137
cpu->pmsav7_dregion = 8;
138
cpu->isar.mvfr0 = 0x10110221;
139
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
140
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
141
set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
142
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
143
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
144
cpu->midr = 0x410fd213; /* r0p3 */
145
cpu->pmsav7_dregion = 16;
146
cpu->sau_sregion = 8;
147
@@ -XXX,XX +XXX,XX @@ static void cortex_r5f_initfn(Object *obj)
148
ARMCPU *cpu = ARM_CPU(obj);
149
150
cortex_r5_initfn(obj);
151
- set_feature(&cpu->env, ARM_FEATURE_VFP3);
152
cpu->isar.mvfr0 = 0x10110221;
153
cpu->isar.mvfr1 = 0x00000011;
154
}
155
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
156
157
cpu->dtb_compatible = "arm,cortex-a8";
158
set_feature(&cpu->env, ARM_FEATURE_V7);
159
- set_feature(&cpu->env, ARM_FEATURE_VFP3);
160
set_feature(&cpu->env, ARM_FEATURE_NEON);
161
set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
162
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
163
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
164
165
cpu->dtb_compatible = "arm,cortex-a9";
166
set_feature(&cpu->env, ARM_FEATURE_V7);
167
- set_feature(&cpu->env, ARM_FEATURE_VFP3);
168
set_feature(&cpu->env, ARM_FEATURE_NEON);
169
set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
170
set_feature(&cpu->env, ARM_FEATURE_EL3);
171
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
172
173
cpu->dtb_compatible = "arm,cortex-a7";
174
set_feature(&cpu->env, ARM_FEATURE_V7VE);
175
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
176
set_feature(&cpu->env, ARM_FEATURE_NEON);
177
set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
178
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
179
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
180
181
cpu->dtb_compatible = "arm,cortex-a15";
182
set_feature(&cpu->env, ARM_FEATURE_V7VE);
183
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
184
set_feature(&cpu->env, ARM_FEATURE_NEON);
185
set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
186
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
187
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
188
index XXXXXXX..XXXXXXX 100644
189
--- a/target/arm/cpu64.c
190
+++ b/target/arm/cpu64.c
191
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
192
193
cpu->dtb_compatible = "arm,cortex-a57";
194
set_feature(&cpu->env, ARM_FEATURE_V8);
195
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
196
set_feature(&cpu->env, ARM_FEATURE_NEON);
197
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
198
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
199
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
200
201
cpu->dtb_compatible = "arm,cortex-a53";
202
set_feature(&cpu->env, ARM_FEATURE_V8);
203
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
204
set_feature(&cpu->env, ARM_FEATURE_NEON);
205
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
206
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
207
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
208
209
cpu->dtb_compatible = "arm,cortex-a72";
210
set_feature(&cpu->env, ARM_FEATURE_V8);
211
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
212
set_feature(&cpu->env, ARM_FEATURE_NEON);
213
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
214
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
215
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
216
index XXXXXXX..XXXXXXX 100644
217
--- a/target/arm/kvm32.c
218
+++ b/target/arm/kvm32.c
219
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
220
* bits, but a few must be tested.
221
*/
222
set_feature(&features, ARM_FEATURE_V7VE);
223
- set_feature(&features, ARM_FEATURE_VFP3);
224
set_feature(&features, ARM_FEATURE_GENERIC_TIMER);
225
226
if (extract32(id_pfr0, 12, 4) == 1) {
227
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
228
if (extract32(ahcf->isar.mvfr1, 12, 4) == 1) {
229
set_feature(&features, ARM_FEATURE_NEON);
84
}
230
}
85
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
231
- if (extract32(ahcf->isar.mvfr1, 28, 4) == 1) {
86
if (arm_feature(env, ARM_FEATURE_V5)) {
232
- /* FMAC support implies VFPv4 */
87
set_feature(env, ARM_FEATURE_V4T);
233
- set_feature(&features, ARM_FEATURE_VFP4);
88
}
89
- if (arm_feature(env, ARM_FEATURE_M)) {
90
- set_feature(env, ARM_FEATURE_THUMB_DIV);
91
- }
234
- }
92
- if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
235
93
- set_feature(env, ARM_FEATURE_THUMB_DIV);
236
ahcf->features = features;
94
- }
237
95
if (arm_feature(env, ARM_FEATURE_VFP4)) {
238
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
96
set_feature(env, ARM_FEATURE_VFP3);
239
index XXXXXXX..XXXXXXX 100644
97
set_feature(env, ARM_FEATURE_VFP_FP16);
240
--- a/target/arm/kvm64.c
98
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
241
+++ b/target/arm/kvm64.c
99
ARMCPU *cpu = ARM_CPU(obj);
242
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
100
243
* feature bits.
101
set_feature(&cpu->env, ARM_FEATURE_V7);
244
*/
102
- set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
245
set_feature(&features, ARM_FEATURE_V8);
103
- set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
246
- set_feature(&features, ARM_FEATURE_VFP4);
104
set_feature(&cpu->env, ARM_FEATURE_V7MP);
247
set_feature(&features, ARM_FEATURE_NEON);
105
set_feature(&cpu->env, ARM_FEATURE_PMSA);
248
set_feature(&features, ARM_FEATURE_AARCH64);
106
cpu->midr = 0x411fc153; /* r1p3 */
249
set_feature(&features, ARM_FEATURE_PMU);
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
112
case 1:
113
case 3:
114
/* SDIV, UDIV */
115
- if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
116
+ if (!dc_isar_feature(arm_div, s)) {
117
goto illegal_op;
118
}
119
if (((insn >> 5) & 7) || (rd != 15)) {
120
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
121
tmp2 = load_reg(s, rm);
122
if ((op & 0x50) == 0x10) {
123
/* sdiv, udiv */
124
- if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
125
+ if (!dc_isar_feature(thumb_div, s)) {
126
goto illegal_op;
127
}
128
if (op & 0x20)
129
--
250
--
130
2.19.1
251
2.20.1
131
252
132
253
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Those vfp instructions without extra opcode fields can
4
share a common @format for brevity.
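The %vd_sp/%vd_dp style operand fields that these formats bundle together simply reassemble the split VFP register number from the D bit and the Vd field. A standalone C sketch of the two compositions, assuming the decodetree field definitions shown later in vfp-uncond.decode (illustration only, not part of the patch):

#include <stdint.h>
#include <stdio.h>

static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & ((1u << length) - 1);
}

/* %vd_sp 12:4 22:1 -- single-precision Sd is Vd:D (D is the low bit) */
static int vd_sp(uint32_t insn)
{
    return (extract32(insn, 12, 4) << 1) | extract32(insn, 22, 1);
}

/* %vd_dp 22:1 12:4 -- double-precision Dd is D:Vd (D is the high bit) */
static int vd_dp(uint32_t insn)
{
    return (extract32(insn, 22, 1) << 4) | extract32(insn, 12, 4);
}

int main(void)
{
    uint32_t insn = 0x00403000;   /* D = 1, Vd = 3, all other bits clear */

    printf("Sd = s%d, Dd = d%d\n", vd_sp(insn), vd_dp(insn));  /* s7, d19 */
    return 0;
}
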
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
8
Message-id: 20200224222232.13807-16-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate.c | 16 ++++++++--------
11
target/arm/vfp.decode | 134 ++++++++++++++++--------------------------
9
1 file changed, 8 insertions(+), 8 deletions(-)
12
1 file changed, 52 insertions(+), 82 deletions(-)
10
13
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
16
--- a/target/arm/vfp.decode
14
+++ b/target/arm/translate.c
17
+++ b/target/arm/vfp.decode
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@
16
tcg_temp_free_ptr(ptr1);
19
17
tcg_temp_free_ptr(ptr2);
20
%vmov_imm 16:4 0:4
18
break;
21
22
+@vfp_dnm_s ................................ vm=%vm_sp vn=%vn_sp vd=%vd_sp
23
+@vfp_dnm_d ................................ vm=%vm_dp vn=%vn_dp vd=%vd_dp
19
+
24
+
20
+ case NEON_2RM_VMVN:
25
+@vfp_dm_ss ................................ vm=%vm_sp vd=%vd_sp
21
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
26
+@vfp_dm_dd ................................ vm=%vm_dp vd=%vd_dp
22
+ break;
27
+@vfp_dm_ds ................................ vm=%vm_sp vd=%vd_dp
23
+ case NEON_2RM_VNEG:
28
+@vfp_dm_sd ................................ vm=%vm_dp vd=%vd_sp
24
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
25
+ break;
26
+
29
+
27
default:
30
# VMOV scalar to general-purpose register; note that this does
28
elementwise:
31
# include some Neon cases.
29
for (pass = 0; pass < (q ? 4 : 2); pass++) {
32
VMOV_to_gp ---- 1110 u:1 1. 1 .... rt:4 1011 ... 1 0000 \
30
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
33
@@ -XXX,XX +XXX,XX @@ VDUP ---- 1110 1 b:1 q:1 0 .... rt:4 1011 . 0 e:1 1 0000 \
31
case NEON_2RM_VCNT:
34
vn=%vn_dp
32
gen_helper_neon_cnt_u8(tmp, tmp);
35
33
break;
36
VMSR_VMRS ---- 1110 111 l:1 reg:4 rt:4 1010 0001 0000
34
- case NEON_2RM_VMVN:
37
-VMOV_single ---- 1110 000 l:1 .... rt:4 1010 . 001 0000 \
35
- tcg_gen_not_i32(tmp, tmp);
38
- vn=%vn_sp
36
- break;
39
+VMOV_single ---- 1110 000 l:1 .... rt:4 1010 . 001 0000 vn=%vn_sp
37
case NEON_2RM_VQABS:
40
38
switch (size) {
41
-VMOV_64_sp ---- 1100 010 op:1 rt2:4 rt:4 1010 00.1 .... \
39
case 0:
42
- vm=%vm_sp
40
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
43
-VMOV_64_dp ---- 1100 010 op:1 rt2:4 rt:4 1011 00.1 .... \
41
default: abort();
44
- vm=%vm_dp
42
}
45
+VMOV_64_sp ---- 1100 010 op:1 rt2:4 rt:4 1010 00.1 .... vm=%vm_sp
43
break;
46
+VMOV_64_dp ---- 1100 010 op:1 rt2:4 rt:4 1011 00.1 .... vm=%vm_dp
44
- case NEON_2RM_VNEG:
47
45
- tmp2 = tcg_const_i32(0);
48
# Note that the half-precision variants of VLDR and VSTR are
46
- gen_neon_rsb(size, tmp, tmp2);
49
# not part of this decodetree at all because they have bits [9:8] == 0b01
47
- tcg_temp_free_i32(tmp2);
50
-VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 \
48
- break;
51
- vd=%vd_sp
49
case NEON_2RM_VCGT0_F:
52
-VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 \
50
{
53
- vd=%vd_dp
51
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
54
+VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
55
+VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
56
57
# We split the load/store multiple up into two patterns to avoid
58
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
59
@@ -XXX,XX +XXX,XX @@ VLDM_VSTM_dp ---- 1101 0.1 l:1 rn:4 .... 1011 imm:8 \
60
vd=%vd_dp p=1 u=0 w=1
61
62
# 3-register VFP data-processing; bits [23,21:20,6] identify the operation.
63
-VMLA_sp ---- 1110 0.00 .... .... 1010 .0.0 .... \
64
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
65
-VMLA_dp ---- 1110 0.00 .... .... 1011 .0.0 .... \
66
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
67
+VMLA_sp ---- 1110 0.00 .... .... 1010 .0.0 .... @vfp_dnm_s
68
+VMLA_dp ---- 1110 0.00 .... .... 1011 .0.0 .... @vfp_dnm_d
69
70
-VMLS_sp ---- 1110 0.00 .... .... 1010 .1.0 .... \
71
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
72
-VMLS_dp ---- 1110 0.00 .... .... 1011 .1.0 .... \
73
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
74
+VMLS_sp ---- 1110 0.00 .... .... 1010 .1.0 .... @vfp_dnm_s
75
+VMLS_dp ---- 1110 0.00 .... .... 1011 .1.0 .... @vfp_dnm_d
76
77
-VNMLS_sp ---- 1110 0.01 .... .... 1010 .0.0 .... \
78
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
79
-VNMLS_dp ---- 1110 0.01 .... .... 1011 .0.0 .... \
80
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
81
+VNMLS_sp ---- 1110 0.01 .... .... 1010 .0.0 .... @vfp_dnm_s
82
+VNMLS_dp ---- 1110 0.01 .... .... 1011 .0.0 .... @vfp_dnm_d
83
84
-VNMLA_sp ---- 1110 0.01 .... .... 1010 .1.0 .... \
85
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
86
-VNMLA_dp ---- 1110 0.01 .... .... 1011 .1.0 .... \
87
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
88
+VNMLA_sp ---- 1110 0.01 .... .... 1010 .1.0 .... @vfp_dnm_s
89
+VNMLA_dp ---- 1110 0.01 .... .... 1011 .1.0 .... @vfp_dnm_d
90
91
-VMUL_sp ---- 1110 0.10 .... .... 1010 .0.0 .... \
92
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
93
-VMUL_dp ---- 1110 0.10 .... .... 1011 .0.0 .... \
94
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
95
+VMUL_sp ---- 1110 0.10 .... .... 1010 .0.0 .... @vfp_dnm_s
96
+VMUL_dp ---- 1110 0.10 .... .... 1011 .0.0 .... @vfp_dnm_d
97
98
-VNMUL_sp ---- 1110 0.10 .... .... 1010 .1.0 .... \
99
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
100
-VNMUL_dp ---- 1110 0.10 .... .... 1011 .1.0 .... \
101
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
102
+VNMUL_sp ---- 1110 0.10 .... .... 1010 .1.0 .... @vfp_dnm_s
103
+VNMUL_dp ---- 1110 0.10 .... .... 1011 .1.0 .... @vfp_dnm_d
104
105
-VADD_sp ---- 1110 0.11 .... .... 1010 .0.0 .... \
106
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
107
-VADD_dp ---- 1110 0.11 .... .... 1011 .0.0 .... \
108
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
109
+VADD_sp ---- 1110 0.11 .... .... 1010 .0.0 .... @vfp_dnm_s
110
+VADD_dp ---- 1110 0.11 .... .... 1011 .0.0 .... @vfp_dnm_d
111
112
-VSUB_sp ---- 1110 0.11 .... .... 1010 .1.0 .... \
113
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
114
-VSUB_dp ---- 1110 0.11 .... .... 1011 .1.0 .... \
115
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
116
+VSUB_sp ---- 1110 0.11 .... .... 1010 .1.0 .... @vfp_dnm_s
117
+VSUB_dp ---- 1110 0.11 .... .... 1011 .1.0 .... @vfp_dnm_d
118
119
-VDIV_sp ---- 1110 1.00 .... .... 1010 .0.0 .... \
120
- vm=%vm_sp vn=%vn_sp vd=%vd_sp
121
-VDIV_dp ---- 1110 1.00 .... .... 1011 .0.0 .... \
122
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
123
+VDIV_sp ---- 1110 1.00 .... .... 1010 .0.0 .... @vfp_dnm_s
124
+VDIV_dp ---- 1110 1.00 .... .... 1011 .0.0 .... @vfp_dnm_d
125
126
VFM_sp ---- 1110 1.01 .... .... 1010 . o2:1 . 0 .... \
127
vm=%vm_sp vn=%vn_sp vd=%vd_sp o1=1
128
@@ -XXX,XX +XXX,XX @@ VMOV_imm_sp ---- 1110 1.11 .... .... 1010 0000 .... \
129
VMOV_imm_dp ---- 1110 1.11 .... .... 1011 0000 .... \
130
vd=%vd_dp imm=%vmov_imm
131
132
-VMOV_reg_sp ---- 1110 1.11 0000 .... 1010 01.0 .... \
133
- vd=%vd_sp vm=%vm_sp
134
-VMOV_reg_dp ---- 1110 1.11 0000 .... 1011 01.0 .... \
135
- vd=%vd_dp vm=%vm_dp
136
+VMOV_reg_sp ---- 1110 1.11 0000 .... 1010 01.0 .... @vfp_dm_ss
137
+VMOV_reg_dp ---- 1110 1.11 0000 .... 1011 01.0 .... @vfp_dm_dd
138
139
-VABS_sp ---- 1110 1.11 0000 .... 1010 11.0 .... \
140
- vd=%vd_sp vm=%vm_sp
141
-VABS_dp ---- 1110 1.11 0000 .... 1011 11.0 .... \
142
- vd=%vd_dp vm=%vm_dp
143
+VABS_sp ---- 1110 1.11 0000 .... 1010 11.0 .... @vfp_dm_ss
144
+VABS_dp ---- 1110 1.11 0000 .... 1011 11.0 .... @vfp_dm_dd
145
146
-VNEG_sp ---- 1110 1.11 0001 .... 1010 01.0 .... \
147
- vd=%vd_sp vm=%vm_sp
148
-VNEG_dp ---- 1110 1.11 0001 .... 1011 01.0 .... \
149
- vd=%vd_dp vm=%vm_dp
150
+VNEG_sp ---- 1110 1.11 0001 .... 1010 01.0 .... @vfp_dm_ss
151
+VNEG_dp ---- 1110 1.11 0001 .... 1011 01.0 .... @vfp_dm_dd
152
153
-VSQRT_sp ---- 1110 1.11 0001 .... 1010 11.0 .... \
154
- vd=%vd_sp vm=%vm_sp
155
-VSQRT_dp ---- 1110 1.11 0001 .... 1011 11.0 .... \
156
- vd=%vd_dp vm=%vm_dp
157
+VSQRT_sp ---- 1110 1.11 0001 .... 1010 11.0 .... @vfp_dm_ss
158
+VSQRT_dp ---- 1110 1.11 0001 .... 1011 11.0 .... @vfp_dm_dd
159
160
VCMP_sp ---- 1110 1.11 010 z:1 .... 1010 e:1 1.0 .... \
161
vd=%vd_sp vm=%vm_sp
162
@@ -XXX,XX +XXX,XX @@ VCVT_f32_f16 ---- 1110 1.11 0010 .... 1010 t:1 1.0 .... \
163
VCVT_f64_f16 ---- 1110 1.11 0010 .... 1011 t:1 1.0 .... \
164
vd=%vd_dp vm=%vm_sp
165
166
-# VCVTB and VCVTT to f16: Vd format is always vd_sp; Vm format depends on size bit
167
+# VCVTB and VCVTT to f16: Vd format is always vd_sp;
168
+# Vm format depends on size bit
169
VCVT_f16_f32 ---- 1110 1.11 0011 .... 1010 t:1 1.0 .... \
170
vd=%vd_sp vm=%vm_sp
171
VCVT_f16_f64 ---- 1110 1.11 0011 .... 1011 t:1 1.0 .... \
172
vd=%vd_sp vm=%vm_dp
173
174
-VRINTR_sp ---- 1110 1.11 0110 .... 1010 01.0 .... \
175
- vd=%vd_sp vm=%vm_sp
176
-VRINTR_dp ---- 1110 1.11 0110 .... 1011 01.0 .... \
177
- vd=%vd_dp vm=%vm_dp
178
+VRINTR_sp ---- 1110 1.11 0110 .... 1010 01.0 .... @vfp_dm_ss
179
+VRINTR_dp ---- 1110 1.11 0110 .... 1011 01.0 .... @vfp_dm_dd
180
181
-VRINTZ_sp ---- 1110 1.11 0110 .... 1010 11.0 .... \
182
- vd=%vd_sp vm=%vm_sp
183
-VRINTZ_dp ---- 1110 1.11 0110 .... 1011 11.0 .... \
184
- vd=%vd_dp vm=%vm_dp
185
+VRINTZ_sp ---- 1110 1.11 0110 .... 1010 11.0 .... @vfp_dm_ss
186
+VRINTZ_dp ---- 1110 1.11 0110 .... 1011 11.0 .... @vfp_dm_dd
187
188
-VRINTX_sp ---- 1110 1.11 0111 .... 1010 01.0 .... \
189
- vd=%vd_sp vm=%vm_sp
190
-VRINTX_dp ---- 1110 1.11 0111 .... 1011 01.0 .... \
191
- vd=%vd_dp vm=%vm_dp
192
+VRINTX_sp ---- 1110 1.11 0111 .... 1010 01.0 .... @vfp_dm_ss
193
+VRINTX_dp ---- 1110 1.11 0111 .... 1011 01.0 .... @vfp_dm_dd
194
195
-# VCVT between single and double: Vm precision depends on size; Vd is its reverse
196
-VCVT_sp ---- 1110 1.11 0111 .... 1010 11.0 .... \
197
- vd=%vd_dp vm=%vm_sp
198
-VCVT_dp ---- 1110 1.11 0111 .... 1011 11.0 .... \
199
- vd=%vd_sp vm=%vm_dp
200
+# VCVT between single and double:
201
+# Vm precision depends on size; Vd is its reverse
202
+VCVT_sp ---- 1110 1.11 0111 .... 1010 11.0 .... @vfp_dm_ds
203
+VCVT_dp ---- 1110 1.11 0111 .... 1011 11.0 .... @vfp_dm_sd
204
205
# VCVT from integer to floating point: Vm always single; Vd depends on size
206
VCVT_int_sp ---- 1110 1.11 1000 .... 1010 s:1 1.0 .... \
207
@@ -XXX,XX +XXX,XX @@ VCVT_int_dp ---- 1110 1.11 1000 .... 1011 s:1 1.0 .... \
208
vd=%vd_dp vm=%vm_sp
209
210
# VJCVT is always dp to sp
211
-VJCVT ---- 1110 1.11 1001 .... 1011 11.0 .... \
212
- vd=%vd_sp vm=%vm_dp
213
+VJCVT ---- 1110 1.11 1001 .... 1011 11.0 .... @vfp_dm_sd
214
215
# VCVT between floating-point and fixed-point. The immediate value
216
# is in the same format as a Vm single-precision register number.
52
--
217
--
53
2.19.1
218
2.20.1
54
219
55
220
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move mla_op and mls_op expanders from translate-a64.c.
3
Passing the raw o1 and o2 fields from the manual is less
4
instructive than it might be. Do the full decode and let
5
the trans_* functions pass in booleans to a helper.
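The two booleans simply select which operands are negated before the fused multiply-add, matching the "VFNMA : fd = muladd(-fd, fn, fm)" comment kept in the helper. The mapping used by the new trans_VFMA/VFMS/VFNMA/VFNMS functions can be checked against libm's fma() in a small standalone sketch (illustration only, not part of the patch):

#include <math.h>
#include <stdio.h>

/* Standalone illustration; mirrors do_vfm_*: optionally negate n (the
 * multiplicand) and d (the addend) before the fused multiply-add. */
static float vfm(float d, float n, float m, int neg_n, int neg_d)
{
    return fmaf(neg_n ? -n : n, m, neg_d ? -d : d);
}

int main(void)
{
    float d = 1.0f, n = 2.0f, m = 3.0f;

    printf("VFMA  = %f\n", vfm(d, n, m, 0, 0));   /*  d + n*m =  7 */
    printf("VFMS  = %f\n", vfm(d, n, m, 1, 0));   /*  d - n*m = -5 */
    printf("VFNMA = %f\n", vfm(d, n, m, 0, 1));   /* -d + n*m =  5 */
    printf("VFNMS = %f\n", vfm(d, n, m, 1, 1));   /* -d - n*m = -7 */
    return 0;
}
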
4
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
9
Message-id: 20200224222232.13807-17-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/translate.h | 2 +
12
target/arm/translate-vfp.inc.c | 52 ++++++++++++++++++++++++++++++----
11
target/arm/translate-a64.c | 106 -----------------------------
13
target/arm/vfp.decode | 17 +++++------
12
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
14
2 files changed, 55 insertions(+), 14 deletions(-)
13
3 files changed, 120 insertions(+), 122 deletions(-)
14
15
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
--- a/target/arm/translate-vfp.inc.c
18
+++ b/target/arm/translate.h
19
+++ b/target/arm/translate-vfp.inc.c
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
@@ -XXX,XX +XXX,XX @@ static bool trans_VDIV_dp(DisasContext *s, arg_VDIV_dp *a)
20
extern const GVecGen3 bsl_op;
21
return do_vfp_3op_dp(s, gen_helper_vfp_divd, a->vd, a->vn, a->vm, false);
21
extern const GVecGen3 bit_op;
22
}
22
extern const GVecGen3 bif_op;
23
23
+extern const GVecGen3 mla_op[4];
24
-static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
24
+extern const GVecGen3 mls_op[4];
25
+static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
25
extern const GVecGen2i ssra_op[4];
26
{
26
extern const GVecGen2i usra_op[4];
27
/*
27
extern const GVecGen2i sri_op[4];
28
* VFNMA : fd = muladd(-fd, fn, fm)
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
29
index XXXXXXX..XXXXXXX 100644
30
30
--- a/target/arm/translate-a64.c
31
neon_load_reg32(vn, a->vn);
31
+++ b/target/arm/translate-a64.c
32
neon_load_reg32(vm, a->vm);
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
33
- if (a->o2) {
34
+ if (neg_n) {
35
/* VFNMS, VFMS */
36
gen_helper_vfp_negs(vn, vn);
33
}
37
}
38
neon_load_reg32(vd, a->vd);
39
- if (a->o1 & 1) {
40
+ if (neg_d) {
41
/* VFNMA, VFNMS */
42
gen_helper_vfp_negs(vd, vd);
43
}
44
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
45
return true;
34
}
46
}
35
47
36
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
48
-static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
37
-{
49
+static bool trans_VFMA_sp(DisasContext *s, arg_VFMA_sp *a)
38
- gen_helper_neon_mul_u8(a, a, b);
39
- gen_helper_neon_add_u8(d, d, a);
40
-}
41
-
42
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
43
-{
44
- gen_helper_neon_mul_u16(a, a, b);
45
- gen_helper_neon_add_u16(d, d, a);
46
-}
47
-
48
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
49
-{
50
- tcg_gen_mul_i32(a, a, b);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
55
-{
56
- tcg_gen_mul_i64(a, a, b);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
61
-{
62
- tcg_gen_mul_vec(vece, a, a, b);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
67
-{
68
- gen_helper_neon_mul_u8(a, a, b);
69
- gen_helper_neon_sub_u8(d, d, a);
70
-}
71
-
72
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
73
-{
74
- gen_helper_neon_mul_u16(a, a, b);
75
- gen_helper_neon_sub_u16(d, d, a);
76
-}
77
-
78
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
79
-{
80
- tcg_gen_mul_i32(a, a, b);
81
- tcg_gen_sub_i32(d, d, a);
82
-}
83
-
84
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
85
-{
86
- tcg_gen_mul_i64(a, a, b);
87
- tcg_gen_sub_i64(d, d, a);
88
-}
89
-
90
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
-{
92
- tcg_gen_mul_vec(vece, a, a, b);
93
- tcg_gen_sub_vec(vece, d, d, a);
94
-}
95
-
96
/* Integer op subgroup of C3.6.16. */
97
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
98
{
99
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
100
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
.vece = MO_64 },
102
};
103
- static const GVecGen3 mla_op[4] = {
104
- { .fni4 = gen_mla8_i32,
105
- .fniv = gen_mla_vec,
106
- .opc = INDEX_op_mul_vec,
107
- .load_dest = true,
108
- .vece = MO_8 },
109
- { .fni4 = gen_mla16_i32,
110
- .fniv = gen_mla_vec,
111
- .opc = INDEX_op_mul_vec,
112
- .load_dest = true,
113
- .vece = MO_16 },
114
- { .fni4 = gen_mla32_i32,
115
- .fniv = gen_mla_vec,
116
- .opc = INDEX_op_mul_vec,
117
- .load_dest = true,
118
- .vece = MO_32 },
119
- { .fni8 = gen_mla64_i64,
120
- .fniv = gen_mla_vec,
121
- .opc = INDEX_op_mul_vec,
122
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
123
- .load_dest = true,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen3 mls_op[4] = {
127
- { .fni4 = gen_mls8_i32,
128
- .fniv = gen_mls_vec,
129
- .opc = INDEX_op_mul_vec,
130
- .load_dest = true,
131
- .vece = MO_8 },
132
- { .fni4 = gen_mls16_i32,
133
- .fniv = gen_mls_vec,
134
- .opc = INDEX_op_mul_vec,
135
- .load_dest = true,
136
- .vece = MO_16 },
137
- { .fni4 = gen_mls32_i32,
138
- .fniv = gen_mls_vec,
139
- .opc = INDEX_op_mul_vec,
140
- .load_dest = true,
141
- .vece = MO_32 },
142
- { .fni8 = gen_mls64_i64,
143
- .fniv = gen_mls_vec,
144
- .opc = INDEX_op_mul_vec,
145
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
146
- .load_dest = true,
147
- .vece = MO_64 },
148
- };
149
150
int is_q = extract32(insn, 30, 1);
151
int u = extract32(insn, 29, 1);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
157
#define NEON_3R_VABA 15
158
#define NEON_3R_VADD_VSUB 16
159
#define NEON_3R_VTST_VCEQ 17
160
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
161
+#define NEON_3R_VML 18 /* VMLA, VMLS */
162
#define NEON_3R_VMUL 19
163
#define NEON_3R_VPMAX 20
164
#define NEON_3R_VPMIN 21
165
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
166
.vece = MO_64 },
167
};
168
169
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
170
+{
50
+{
171
+ gen_helper_neon_mul_u8(a, a, b);
51
+ return do_vfm_sp(s, a, false, false);
172
+ gen_helper_neon_add_u8(d, d, a);
173
+}
52
+}
174
+
53
+
175
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
54
+static bool trans_VFMS_sp(DisasContext *s, arg_VFMS_sp *a)
176
+{
55
+{
177
+ gen_helper_neon_mul_u8(a, a, b);
56
+ return do_vfm_sp(s, a, true, false);
178
+ gen_helper_neon_sub_u8(d, d, a);
179
+}
57
+}
180
+
58
+
181
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
59
+static bool trans_VFNMA_sp(DisasContext *s, arg_VFNMA_sp *a)
182
+{
60
+{
183
+ gen_helper_neon_mul_u16(a, a, b);
61
+ return do_vfm_sp(s, a, false, true);
184
+ gen_helper_neon_add_u16(d, d, a);
185
+}
62
+}
186
+
63
+
187
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
64
+static bool trans_VFNMS_sp(DisasContext *s, arg_VFNMS_sp *a)
188
+{
65
+{
189
+ gen_helper_neon_mul_u16(a, a, b);
66
+ return do_vfm_sp(s, a, true, true);
190
+ gen_helper_neon_sub_u16(d, d, a);
191
+}
67
+}
192
+
68
+
193
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
69
+static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
70
{
71
/*
72
* VFNMA : fd = muladd(-fd, fn, fm)
73
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
74
75
neon_load_reg64(vn, a->vn);
76
neon_load_reg64(vm, a->vm);
77
- if (a->o2) {
78
+ if (neg_n) {
79
/* VFNMS, VFMS */
80
gen_helper_vfp_negd(vn, vn);
81
}
82
neon_load_reg64(vd, a->vd);
83
- if (a->o1 & 1) {
84
+ if (neg_d) {
85
/* VFNMA, VFNMS */
86
gen_helper_vfp_negd(vd, vd);
87
}
88
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
89
return true;
90
}
91
92
+static bool trans_VFMA_dp(DisasContext *s, arg_VFMA_dp *a)
194
+{
93
+{
195
+ tcg_gen_mul_i32(a, a, b);
94
+ return do_vfm_dp(s, a, false, false);
196
+ tcg_gen_add_i32(d, d, a);
197
+}
95
+}
198
+
96
+
199
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
97
+static bool trans_VFMS_dp(DisasContext *s, arg_VFMS_dp *a)
200
+{
98
+{
201
+ tcg_gen_mul_i32(a, a, b);
99
+ return do_vfm_dp(s, a, true, false);
202
+ tcg_gen_sub_i32(d, d, a);
203
+}
100
+}
204
+
101
+
205
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
102
+static bool trans_VFNMA_dp(DisasContext *s, arg_VFNMA_dp *a)
206
+{
103
+{
207
+ tcg_gen_mul_i64(a, a, b);
104
+ return do_vfm_dp(s, a, false, true);
208
+ tcg_gen_add_i64(d, d, a);
209
+}
105
+}
210
+
106
+
211
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
107
+static bool trans_VFNMS_dp(DisasContext *s, arg_VFNMS_dp *a)
212
+{
108
+{
213
+ tcg_gen_mul_i64(a, a, b);
109
+ return do_vfm_dp(s, a, true, true);
214
+ tcg_gen_sub_i64(d, d, a);
215
+}
110
+}
216
+
111
+
217
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
112
static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
218
+{
113
{
219
+ tcg_gen_mul_vec(vece, a, a, b);
114
uint32_t delta_d = 0;
220
+ tcg_gen_add_vec(vece, d, d, a);
115
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
221
+}
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/vfp.decode
118
+++ b/target/arm/vfp.decode
119
@@ -XXX,XX +XXX,XX @@ VSUB_dp ---- 1110 0.11 .... .... 1011 .1.0 .... @vfp_dnm_d
120
VDIV_sp ---- 1110 1.00 .... .... 1010 .0.0 .... @vfp_dnm_s
121
VDIV_dp ---- 1110 1.00 .... .... 1011 .0.0 .... @vfp_dnm_d
122
123
-VFM_sp ---- 1110 1.01 .... .... 1010 . o2:1 . 0 .... \
124
- vm=%vm_sp vn=%vn_sp vd=%vd_sp o1=1
125
-VFM_dp ---- 1110 1.01 .... .... 1011 . o2:1 . 0 .... \
126
- vm=%vm_dp vn=%vn_dp vd=%vd_dp o1=1
127
-VFM_sp ---- 1110 1.10 .... .... 1010 . o2:1 . 0 .... \
128
- vm=%vm_sp vn=%vn_sp vd=%vd_sp o1=2
129
-VFM_dp ---- 1110 1.10 .... .... 1011 . o2:1 . 0 .... \
130
- vm=%vm_dp vn=%vn_dp vd=%vd_dp o1=2
131
+VFMA_sp ---- 1110 1.10 .... .... 1010 .0. 0 .... @vfp_dnm_s
132
+VFMS_sp ---- 1110 1.10 .... .... 1010 .1. 0 .... @vfp_dnm_s
133
+VFNMA_sp ---- 1110 1.01 .... .... 1010 .0. 0 .... @vfp_dnm_s
134
+VFNMS_sp ---- 1110 1.01 .... .... 1010 .1. 0 .... @vfp_dnm_s
222
+
135
+
223
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
136
+VFMA_dp ---- 1110 1.10 .... .... 1011 .0.0 .... @vfp_dnm_d
224
+{
137
+VFMS_dp ---- 1110 1.10 .... .... 1011 .1.0 .... @vfp_dnm_d
225
+ tcg_gen_mul_vec(vece, a, a, b);
138
+VFNMA_dp ---- 1110 1.01 .... .... 1011 .0.0 .... @vfp_dnm_d
226
+ tcg_gen_sub_vec(vece, d, d, a);
139
+VFNMS_dp ---- 1110 1.01 .... .... 1011 .1.0 .... @vfp_dnm_d
227
+}
140
228
+
141
VMOV_imm_sp ---- 1110 1.11 .... .... 1010 0000 .... \
229
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
142
vd=%vd_sp imm=%vmov_imm
230
+ * these tables are shared with AArch64 which does support them.
231
+ */
232
+const GVecGen3 mla_op[4] = {
233
+ { .fni4 = gen_mla8_i32,
234
+ .fniv = gen_mla_vec,
235
+ .opc = INDEX_op_mul_vec,
236
+ .load_dest = true,
237
+ .vece = MO_8 },
238
+ { .fni4 = gen_mla16_i32,
239
+ .fniv = gen_mla_vec,
240
+ .opc = INDEX_op_mul_vec,
241
+ .load_dest = true,
242
+ .vece = MO_16 },
243
+ { .fni4 = gen_mla32_i32,
244
+ .fniv = gen_mla_vec,
245
+ .opc = INDEX_op_mul_vec,
246
+ .load_dest = true,
247
+ .vece = MO_32 },
248
+ { .fni8 = gen_mla64_i64,
249
+ .fniv = gen_mla_vec,
250
+ .opc = INDEX_op_mul_vec,
251
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
252
+ .load_dest = true,
253
+ .vece = MO_64 },
254
+};
255
+
256
+const GVecGen3 mls_op[4] = {
257
+ { .fni4 = gen_mls8_i32,
258
+ .fniv = gen_mls_vec,
259
+ .opc = INDEX_op_mul_vec,
260
+ .load_dest = true,
261
+ .vece = MO_8 },
262
+ { .fni4 = gen_mls16_i32,
263
+ .fniv = gen_mls_vec,
264
+ .opc = INDEX_op_mul_vec,
265
+ .load_dest = true,
266
+ .vece = MO_16 },
267
+ { .fni4 = gen_mls32_i32,
268
+ .fniv = gen_mls_vec,
269
+ .opc = INDEX_op_mul_vec,
270
+ .load_dest = true,
271
+ .vece = MO_32 },
272
+ { .fni8 = gen_mls64_i64,
273
+ .fniv = gen_mls_vec,
274
+ .opc = INDEX_op_mul_vec,
275
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
276
+ .load_dest = true,
277
+ .vece = MO_64 },
278
+};
279
+
280
/* Translate a NEON data processing instruction. Return nonzero if the
281
instruction is invalid.
282
We process data in a mixture of 32-bit and 64-bit chunks.
283
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
284
return 0;
285
}
286
break;
287
+
288
+ case NEON_3R_VML: /* VMLA, VMLS */
289
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
290
+ u ? &mls_op[size] : &mla_op[size]);
291
+ return 0;
292
}
293
+
294
if (size == 3) {
295
/* 64-bit element instructions. */
296
for (pass = 0; pass < (q ? 2 : 1); pass++) {
297
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
298
}
299
}
300
break;
301
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
302
- switch (size) {
303
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
304
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
305
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
306
- default: abort();
307
- }
308
- tcg_temp_free_i32(tmp2);
309
- tmp2 = neon_load_reg(rd, pass);
310
- if (u) { /* VMLS */
311
- gen_neon_rsb(size, tmp, tmp2);
312
- } else { /* VMLA */
313
- gen_neon_add(size, tmp, tmp2);
314
- }
315
- break;
316
case NEON_3R_VMUL:
317
/* VMUL.P8; other cases already eliminated. */
318
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
319
--
143
--
320
2.19.1
144
2.20.1
321
145
322
146
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move shi_op and sli_op expanders from translate-a64.c.
3
Passing the raw op field from the manual is less instructive
4
than it might be. Do the full decode and use the existing
5
helpers to perform the expansion.
4
6
7
Since these are v8 insns, VECLEN+VECSTRIDE are already RES0.
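The insert-shift expanders being moved implement a plain mask-and-merge per element: shift the source, then splice it into the destination while keeping the destination bits the shift would have vacated. A standalone sketch of the 8-bit lane case (illustration only, not part of the patch):

#include <assert.h>
#include <stdint.h>

/* Standalone illustration of one 8-bit lane; not QEMU code. */

/* VSRI #sh: low (8 - sh) bits come from a >> sh, top sh bits of d survive */
static uint8_t sri8(uint8_t d, uint8_t a, int sh)
{
    uint8_t mask = 0xff >> sh;
    return (d & ~mask) | ((a >> sh) & mask);
}

/* VSLI #sh: top (8 - sh) bits come from a << sh, low sh bits of d survive */
static uint8_t sli8(uint8_t d, uint8_t a, int sh)
{
    uint8_t mask = 0xff << sh;
    return (d & ~mask) | ((a << sh) & mask);
}

int main(void)
{
    assert(sri8(0xf0, 0xab, 4) == 0xfa);  /* 0xf. kept, 0x.a inserted */
    assert(sli8(0x0f, 0xab, 4) == 0xbf);  /* 0x.f kept, 0xb. inserted */
    return 0;
}
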
8
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
11
Message-id: 20200224222232.13807-18-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
13
---
10
target/arm/translate.h | 2 +
14
target/arm/translate-vfp.inc.c | 109 +++++++++++----------------------
11
target/arm/translate-a64.c | 152 +----------------------
15
target/arm/vfp-uncond.decode | 12 ++--
12
target/arm/translate.c | 244 ++++++++++++++++++++++++++-----------
16
2 files changed, 44 insertions(+), 77 deletions(-)
13
3 files changed, 179 insertions(+), 219 deletions(-)
14
17
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
18
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
20
--- a/target/arm/translate-vfp.inc.c
18
+++ b/target/arm/translate.h
21
+++ b/target/arm/translate-vfp.inc.c
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
22
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
20
extern const GVecGen3 bif_op;
23
return true;
21
extern const GVecGen2i ssra_op[4];
22
extern const GVecGen2i usra_op[4];
23
+extern const GVecGen2i sri_op[4];
24
+extern const GVecGen2i sli_op[4];
25
26
/*
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
33
}
34
}
24
}
35
25
36
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
26
-static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
37
-{
27
-{
38
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
28
- uint32_t rd, rn, rm;
39
- TCGv_i64 t = tcg_temp_new_i64();
29
- bool dp = a->dp;
30
- bool vmin = a->op;
31
- TCGv_ptr fpst;
40
-
32
-
41
- tcg_gen_shri_i64(t, a, shift);
33
- if (!dc_isar_feature(aa32_vminmaxnm, s)) {
42
- tcg_gen_andi_i64(t, t, mask);
34
- return false;
43
- tcg_gen_andi_i64(d, d, ~mask);
35
- }
44
- tcg_gen_or_i64(d, d, t);
36
-
45
- tcg_temp_free_i64(t);
37
- if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
38
- return false;
39
- }
40
-
41
- /* UNDEF accesses to D16-D31 if they don't exist */
42
- if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
43
- ((a->vm | a->vn | a->vd) & 0x10)) {
44
- return false;
45
- }
46
-
47
- rd = a->vd;
48
- rn = a->vn;
49
- rm = a->vm;
50
-
51
- if (!vfp_access_check(s)) {
52
- return true;
53
- }
54
-
55
- fpst = get_fpstatus_ptr(0);
56
-
57
- if (dp) {
58
- TCGv_i64 frn, frm, dest;
59
-
60
- frn = tcg_temp_new_i64();
61
- frm = tcg_temp_new_i64();
62
- dest = tcg_temp_new_i64();
63
-
64
- neon_load_reg64(frn, rn);
65
- neon_load_reg64(frm, rm);
66
- if (vmin) {
67
- gen_helper_vfp_minnumd(dest, frn, frm, fpst);
68
- } else {
69
- gen_helper_vfp_maxnumd(dest, frn, frm, fpst);
70
- }
71
- neon_store_reg64(dest, rd);
72
- tcg_temp_free_i64(frn);
73
- tcg_temp_free_i64(frm);
74
- tcg_temp_free_i64(dest);
75
- } else {
76
- TCGv_i32 frn, frm, dest;
77
-
78
- frn = tcg_temp_new_i32();
79
- frm = tcg_temp_new_i32();
80
- dest = tcg_temp_new_i32();
81
-
82
- neon_load_reg32(frn, rn);
83
- neon_load_reg32(frm, rm);
84
- if (vmin) {
85
- gen_helper_vfp_minnums(dest, frn, frm, fpst);
86
- } else {
87
- gen_helper_vfp_maxnums(dest, frn, frm, fpst);
88
- }
89
- neon_store_reg32(dest, rd);
90
- tcg_temp_free_i32(frn);
91
- tcg_temp_free_i32(frm);
92
- tcg_temp_free_i32(dest);
93
- }
94
-
95
- tcg_temp_free_ptr(fpst);
96
- return true;
46
-}
97
-}
47
-
98
-
48
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
99
/*
49
-{
100
* Table for converting the most common AArch32 encoding of
50
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
101
* rounding mode to arm_fprounding order (which matches the
51
- TCGv_i64 t = tcg_temp_new_i64();
102
@@ -XXX,XX +XXX,XX @@ static bool trans_VDIV_dp(DisasContext *s, arg_VDIV_dp *a)
52
-
103
return do_vfp_3op_dp(s, gen_helper_vfp_divd, a->vd, a->vn, a->vm, false);
53
- tcg_gen_shri_i64(t, a, shift);
54
- tcg_gen_andi_i64(t, t, mask);
55
- tcg_gen_andi_i64(d, d, ~mask);
56
- tcg_gen_or_i64(d, d, t);
57
- tcg_temp_free_i64(t);
58
-}
59
-
60
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
61
-{
62
- tcg_gen_shri_i32(a, a, shift);
63
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
64
-}
65
-
66
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
67
-{
68
- tcg_gen_shri_i64(a, a, shift);
69
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
70
-}
71
-
72
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
73
-{
74
- uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
75
- TCGv_vec t = tcg_temp_new_vec_matching(d);
76
- TCGv_vec m = tcg_temp_new_vec_matching(d);
77
-
78
- tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
79
- tcg_gen_shri_vec(vece, t, a, sh);
80
- tcg_gen_and_vec(vece, d, d, m);
81
- tcg_gen_or_vec(vece, d, d, t);
82
-
83
- tcg_temp_free_vec(t);
84
- tcg_temp_free_vec(m);
85
-}
86
-
87
/* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
88
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
89
int immh, int immb, int opcode, int rn, int rd)
90
{
91
- static const GVecGen2i sri_op[4] = {
92
- { .fni8 = gen_shr8_ins_i64,
93
- .fniv = gen_shr_ins_vec,
94
- .load_dest = true,
95
- .opc = INDEX_op_shri_vec,
96
- .vece = MO_8 },
97
- { .fni8 = gen_shr16_ins_i64,
98
- .fniv = gen_shr_ins_vec,
99
- .load_dest = true,
100
- .opc = INDEX_op_shri_vec,
101
- .vece = MO_16 },
102
- { .fni4 = gen_shr32_ins_i32,
103
- .fniv = gen_shr_ins_vec,
104
- .load_dest = true,
105
- .opc = INDEX_op_shri_vec,
106
- .vece = MO_32 },
107
- { .fni8 = gen_shr64_ins_i64,
108
- .fniv = gen_shr_ins_vec,
109
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
110
- .load_dest = true,
111
- .opc = INDEX_op_shri_vec,
112
- .vece = MO_64 },
113
- };
114
-
115
int size = 32 - clz32(immh) - 1;
116
int immhb = immh << 3 | immb;
117
int shift = 2 * (8 << size) - immhb;
118
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
119
clear_vec_high(s, is_q, rd);
120
}
104
}
121
105
122
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
106
+static bool trans_VMINNM_sp(DisasContext *s, arg_VMINNM_sp *a)
123
-{
124
- uint64_t mask = dup_const(MO_8, 0xff << shift);
125
- TCGv_i64 t = tcg_temp_new_i64();
126
-
127
- tcg_gen_shli_i64(t, a, shift);
128
- tcg_gen_andi_i64(t, t, mask);
129
- tcg_gen_andi_i64(d, d, ~mask);
130
- tcg_gen_or_i64(d, d, t);
131
- tcg_temp_free_i64(t);
132
-}
133
-
134
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
135
-{
136
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
137
- TCGv_i64 t = tcg_temp_new_i64();
138
-
139
- tcg_gen_shli_i64(t, a, shift);
140
- tcg_gen_andi_i64(t, t, mask);
141
- tcg_gen_andi_i64(d, d, ~mask);
142
- tcg_gen_or_i64(d, d, t);
143
- tcg_temp_free_i64(t);
144
-}
145
-
146
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
147
-{
148
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
149
-}
150
-
151
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
-{
153
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
154
-}
155
-
156
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
157
-{
158
- uint64_t mask = (1ull << sh) - 1;
159
- TCGv_vec t = tcg_temp_new_vec_matching(d);
160
- TCGv_vec m = tcg_temp_new_vec_matching(d);
161
-
162
- tcg_gen_dupi_vec(vece, m, mask);
163
- tcg_gen_shli_vec(vece, t, a, sh);
164
- tcg_gen_and_vec(vece, d, d, m);
165
- tcg_gen_or_vec(vece, d, d, t);
166
-
167
- tcg_temp_free_vec(t);
168
- tcg_temp_free_vec(m);
169
-}
170
-
171
/* SHL/SLI - Vector shift left */
172
static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
173
int immh, int immb, int opcode, int rn, int rd)
174
{
175
- static const GVecGen2i shi_op[4] = {
176
- { .fni8 = gen_shl8_ins_i64,
177
- .fniv = gen_shl_ins_vec,
178
- .opc = INDEX_op_shli_vec,
179
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
180
- .load_dest = true,
181
- .vece = MO_8 },
182
- { .fni8 = gen_shl16_ins_i64,
183
- .fniv = gen_shl_ins_vec,
184
- .opc = INDEX_op_shli_vec,
185
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
- .load_dest = true,
187
- .vece = MO_16 },
188
- { .fni4 = gen_shl32_ins_i32,
189
- .fniv = gen_shl_ins_vec,
190
- .opc = INDEX_op_shli_vec,
191
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
192
- .load_dest = true,
193
- .vece = MO_32 },
194
- { .fni8 = gen_shl64_ins_i64,
195
- .fniv = gen_shl_ins_vec,
196
- .opc = INDEX_op_shli_vec,
197
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
198
- .load_dest = true,
199
- .vece = MO_64 },
200
- };
201
int size = 32 - clz32(immh) - 1;
202
int immhb = immh << 3 | immb;
203
int shift = immhb - (8 << size);
204
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
205
}
206
207
if (insert) {
208
- gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
209
+ gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
210
} else {
211
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
212
}
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
218
.vece = MO_64, },
219
};
220
221
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
222
+{
107
+{
223
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
108
+ if (!dc_isar_feature(aa32_vminmaxnm, s)) {
224
+ TCGv_i64 t = tcg_temp_new_i64();
109
+ return false;
225
+
110
+ }
226
+ tcg_gen_shri_i64(t, a, shift);
111
+ return do_vfp_3op_sp(s, gen_helper_vfp_minnums,
227
+ tcg_gen_andi_i64(t, t, mask);
112
+ a->vd, a->vn, a->vm, false);
228
+ tcg_gen_andi_i64(d, d, ~mask);
229
+ tcg_gen_or_i64(d, d, t);
230
+ tcg_temp_free_i64(t);
231
+}
113
+}
232
+
114
+
233
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
115
+static bool trans_VMAXNM_sp(DisasContext *s, arg_VMAXNM_sp *a)
234
+{
116
+{
235
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
117
+ if (!dc_isar_feature(aa32_vminmaxnm, s)) {
236
+ TCGv_i64 t = tcg_temp_new_i64();
118
+ return false;
237
+
119
+ }
238
+ tcg_gen_shri_i64(t, a, shift);
120
+ return do_vfp_3op_sp(s, gen_helper_vfp_maxnums,
239
+ tcg_gen_andi_i64(t, t, mask);
121
+ a->vd, a->vn, a->vm, false);
240
+ tcg_gen_andi_i64(d, d, ~mask);
241
+ tcg_gen_or_i64(d, d, t);
242
+ tcg_temp_free_i64(t);
243
+}
122
+}
244
+
123
+
245
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
124
+static bool trans_VMINNM_dp(DisasContext *s, arg_VMINNM_dp *a)
246
+{
125
+{
247
+ tcg_gen_shri_i32(a, a, shift);
126
+ if (!dc_isar_feature(aa32_vminmaxnm, s)) {
248
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
127
+ return false;
128
+ }
129
+ return do_vfp_3op_dp(s, gen_helper_vfp_minnumd,
130
+ a->vd, a->vn, a->vm, false);
249
+}
131
+}
250
+
132
+
251
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
133
+static bool trans_VMAXNM_dp(DisasContext *s, arg_VMAXNM_dp *a)
252
+{
134
+{
253
+ tcg_gen_shri_i64(a, a, shift);
135
+ if (!dc_isar_feature(aa32_vminmaxnm, s)) {
254
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
136
+ return false;
137
+ }
138
+ return do_vfp_3op_dp(s, gen_helper_vfp_maxnumd,
139
+ a->vd, a->vn, a->vm, false);
255
+}
140
+}
256
+
141
+
257
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
142
static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
258
+{
143
{
259
+ if (sh == 0) {
144
/*
260
+ tcg_gen_mov_vec(d, a);
145
diff --git a/target/arm/vfp-uncond.decode b/target/arm/vfp-uncond.decode
261
+ } else {
146
index XXXXXXX..XXXXXXX 100644
262
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
147
--- a/target/arm/vfp-uncond.decode
263
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
148
+++ b/target/arm/vfp-uncond.decode
149
@@ -XXX,XX +XXX,XX @@
150
%vd_dp 22:1 12:4
151
%vd_sp 12:4 22:1
152
153
+@vfp_dnm_s ................................ vm=%vm_sp vn=%vn_sp vd=%vd_sp
154
+@vfp_dnm_d ................................ vm=%vm_dp vn=%vn_dp vd=%vd_dp
264
+
155
+
265
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
156
VSEL 1111 1110 0. cc:2 .... .... 1010 .0.0 .... \
266
+ tcg_gen_shri_vec(vece, t, a, sh);
157
vm=%vm_sp vn=%vn_sp vd=%vd_sp dp=0
267
+ tcg_gen_and_vec(vece, d, d, m);
158
VSEL 1111 1110 0. cc:2 .... .... 1011 .0.0 .... \
268
+ tcg_gen_or_vec(vece, d, d, t);
159
vm=%vm_dp vn=%vn_dp vd=%vd_dp dp=1
160
161
-VMINMAXNM 1111 1110 1.00 .... .... 1010 . op:1 .0 .... \
162
- vm=%vm_sp vn=%vn_sp vd=%vd_sp dp=0
163
-VMINMAXNM 1111 1110 1.00 .... .... 1011 . op:1 .0 .... \
164
- vm=%vm_dp vn=%vn_dp vd=%vd_dp dp=1
165
+VMAXNM_sp 1111 1110 1.00 .... .... 1010 .0.0 .... @vfp_dnm_s
166
+VMINNM_sp 1111 1110 1.00 .... .... 1010 .1.0 .... @vfp_dnm_s
269
+
167
+
270
+ tcg_temp_free_vec(t);
168
+VMAXNM_dp 1111 1110 1.00 .... .... 1011 .0.0 .... @vfp_dnm_d
271
+ tcg_temp_free_vec(m);
169
+VMINNM_dp 1111 1110 1.00 .... .... 1011 .1.0 .... @vfp_dnm_d
272
+ }
170
273
+}
171
VRINT 1111 1110 1.11 10 rm:2 .... 1010 01.0 .... \
274
+
172
vm=%vm_sp vd=%vd_sp dp=0
275
+const GVecGen2i sri_op[4] = {
276
+ { .fni8 = gen_shr8_ins_i64,
277
+ .fniv = gen_shr_ins_vec,
278
+ .load_dest = true,
279
+ .opc = INDEX_op_shri_vec,
280
+ .vece = MO_8 },
281
+ { .fni8 = gen_shr16_ins_i64,
282
+ .fniv = gen_shr_ins_vec,
283
+ .load_dest = true,
284
+ .opc = INDEX_op_shri_vec,
285
+ .vece = MO_16 },
286
+ { .fni4 = gen_shr32_ins_i32,
287
+ .fniv = gen_shr_ins_vec,
288
+ .load_dest = true,
289
+ .opc = INDEX_op_shri_vec,
290
+ .vece = MO_32 },
291
+ { .fni8 = gen_shr64_ins_i64,
292
+ .fniv = gen_shr_ins_vec,
293
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
294
+ .load_dest = true,
295
+ .opc = INDEX_op_shri_vec,
296
+ .vece = MO_64 },
297
+};
298
+
299
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
300
+{
301
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
302
+ TCGv_i64 t = tcg_temp_new_i64();
303
+
304
+ tcg_gen_shli_i64(t, a, shift);
305
+ tcg_gen_andi_i64(t, t, mask);
306
+ tcg_gen_andi_i64(d, d, ~mask);
307
+ tcg_gen_or_i64(d, d, t);
308
+ tcg_temp_free_i64(t);
309
+}
310
+
311
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
312
+{
313
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
314
+ TCGv_i64 t = tcg_temp_new_i64();
315
+
316
+ tcg_gen_shli_i64(t, a, shift);
317
+ tcg_gen_andi_i64(t, t, mask);
318
+ tcg_gen_andi_i64(d, d, ~mask);
319
+ tcg_gen_or_i64(d, d, t);
320
+ tcg_temp_free_i64(t);
321
+}
322
+
323
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
324
+{
325
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
326
+}
327
+
328
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
329
+{
330
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
331
+}
332
+
333
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
334
+{
335
+ if (sh == 0) {
336
+ tcg_gen_mov_vec(d, a);
337
+ } else {
338
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
339
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
340
+
341
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
342
+ tcg_gen_shli_vec(vece, t, a, sh);
343
+ tcg_gen_and_vec(vece, d, d, m);
344
+ tcg_gen_or_vec(vece, d, d, t);
345
+
346
+ tcg_temp_free_vec(t);
347
+ tcg_temp_free_vec(m);
348
+ }
349
+}
350
+
351
+const GVecGen2i sli_op[4] = {
352
+ { .fni8 = gen_shl8_ins_i64,
353
+ .fniv = gen_shl_ins_vec,
354
+ .load_dest = true,
355
+ .opc = INDEX_op_shli_vec,
356
+ .vece = MO_8 },
357
+ { .fni8 = gen_shl16_ins_i64,
358
+ .fniv = gen_shl_ins_vec,
359
+ .load_dest = true,
360
+ .opc = INDEX_op_shli_vec,
361
+ .vece = MO_16 },
362
+ { .fni4 = gen_shl32_ins_i32,
363
+ .fniv = gen_shl_ins_vec,
364
+ .load_dest = true,
365
+ .opc = INDEX_op_shli_vec,
366
+ .vece = MO_32 },
367
+ { .fni8 = gen_shl64_ins_i64,
368
+ .fniv = gen_shl_ins_vec,
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
370
+ .load_dest = true,
371
+ .opc = INDEX_op_shli_vec,
372
+ .vece = MO_64 },
373
+};
374
+
375
/* Translate a NEON data processing instruction. Return nonzero if the
376
instruction is invalid.
377
We process data in a mixture of 32-bit and 64-bit chunks.
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
379
int pairwise;
380
int u;
381
int vec_size;
382
- uint32_t imm, mask;
383
+ uint32_t imm;
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
385
TCGv_ptr ptr1, ptr2, ptr3;
386
TCGv_i64 tmp64;
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
388
}
389
return 0;
390
391
+ case 4: /* VSRI */
392
+ if (!u) {
393
+ return 1;
394
+ }
395
+ /* Right shift comes here negative. */
396
+ shift = -shift;
397
+ /* Shift out of range leaves destination unchanged. */
398
+ if (shift < 8 << size) {
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
400
+ shift, &sri_op[size]);
401
+ }
402
+ return 0;
403
+
404
case 5: /* VSHL, VSLI */
405
- if (!u) { /* VSHL */
406
+ if (u) { /* VSLI */
407
+ /* Shift out of range leaves destination unchanged. */
408
+ if (shift < 8 << size) {
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
410
+ vec_size, shift, &sli_op[size]);
411
+ }
412
+ } else { /* VSHL */
413
/* Shifts larger than the element size are
414
* architecturally valid and results in zero.
415
*/
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
418
vec_size, vec_size);
419
}
420
- return 0;
421
}
422
- break;
423
+ return 0;
424
}
425
426
if (size == 3) {
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
428
else
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
430
break;
431
- case 4: /* VSRI */
432
- case 5: /* VSHL, VSLI */
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
434
- break;
435
case 6: /* VQSHLU */
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
437
cpu_V0, cpu_V1);
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
439
/* Accumulate. */
440
neon_load_reg64(cpu_V1, rd + pass);
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
442
- } else if (op == 4 || (op == 5 && u)) {
443
- /* Insert */
444
- neon_load_reg64(cpu_V1, rd + pass);
445
- uint64_t mask;
446
- if (shift < -63 || shift > 63) {
447
- mask = 0;
448
- } else {
449
- if (op == 4) {
450
- mask = 0xffffffffffffffffull >> -shift;
451
- } else {
452
- mask = 0xffffffffffffffffull << shift;
453
- }
454
- }
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
457
}
458
neon_store_reg64(cpu_V0, rd + pass);
459
} else { /* size < 3 */
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
461
case 3: /* VRSRA */
462
GEN_NEON_INTEGER_OP(rshl);
463
break;
464
- case 4: /* VSRI */
465
- case 5: /* VSHL, VSLI */
466
- switch (size) {
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
470
- default: abort();
471
- }
472
- break;
473
case 6: /* VQSHLU */
474
switch (size) {
475
case 0:
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
477
tmp2 = neon_load_reg(rd, pass);
478
gen_neon_add(size, tmp, tmp2);
479
tcg_temp_free_i32(tmp2);
480
- } else if (op == 4 || (op == 5 && u)) {
481
- /* Insert */
482
- switch (size) {
483
- case 0:
484
- if (op == 4)
485
- mask = 0xff >> -shift;
486
- else
487
- mask = (uint8_t)(0xff << shift);
488
- mask |= mask << 8;
489
- mask |= mask << 16;
490
- break;
491
- case 1:
492
- if (op == 4)
493
- mask = 0xffff >> -shift;
494
- else
495
- mask = (uint16_t)(0xffff << shift);
496
- mask |= mask << 16;
497
- break;
498
- case 2:
499
- if (shift < -31 || shift > 31) {
500
- mask = 0;
501
- } else {
502
- if (op == 4)
503
- mask = 0xffffffffu >> -shift;
504
- else
505
- mask = 0xffffffffu << shift;
506
- }
507
- break;
508
- default:
509
- abort();
510
- }
511
- tmp2 = neon_load_reg(rd, pass);
512
- tcg_gen_andi_i32(tmp, tmp, mask);
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
515
- tcg_temp_free_i32(tmp2);
516
}
517
neon_store_reg(rd, pass, tmp);
518
}
519
--
173
--
520
2.19.1
174
2.20.1
521
175
522
176
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
USB ports on Xilinx Zynq must be instantiated as TYPE_CHIPIDEA to work.
4
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
4
Linux expects and checks various chipidea registers, which do not exist
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
with the basic EHCI emulation. This patch fixes the problem.
6
7
Without this patch, USB ports fail to instantiate under Linux.
8
9
ci_hdrc ci_hdrc.0: doesn't support host
10
ci_hdrc ci_hdrc.0: no supported roles
11
12
With this patch, USB ports are instantiated, and it is possible
13
to boot from USB drive.
14
15
ci_hdrc ci_hdrc.0: EHCI Host Controller
16
ci_hdrc ci_hdrc.0: new USB bus registered, assigned bus number 1
17
ci_hdrc ci_hdrc.0: USB 2.0 started, EHCI 1.00
18
usb 1-1: new full-speed USB device number 2 using ci_hdrc
19
usb 1-1: not running at top speed; connect to a high speed hub
20
usb 1-1: config 1 interface 0 altsetting 0 endpoint 0x81 has invalid maxpacket 512, setting to 64
21
usb 1-1: config 1 interface 0 altsetting 0 endpoint 0x2 has invalid maxpacket 512, setting to 64
22
usb-storage 1-1:1.0: USB Mass Storage device detected
23
scsi host0: usb-storage 1-1:1.0
24
25
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
26
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
27
Message-id: 20200215122354.13706-2-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
29
---
8
target/arm/translate.c | 29 ++++++++++-------------------
30
hw/arm/xilinx_zynq.c | 5 +++--
9
1 file changed, 10 insertions(+), 19 deletions(-)
31
1 file changed, 3 insertions(+), 2 deletions(-)
10
32
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
33
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
12
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
35
--- a/hw/arm/xilinx_zynq.c
14
+++ b/target/arm/translate.c
36
+++ b/hw/arm/xilinx_zynq.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
37
@@ -XXX,XX +XXX,XX @@
16
break;
38
#include "hw/loader.h"
17
}
39
#include "hw/misc/zynq-xadc.h"
18
return 0;
40
#include "hw/ssi/ssi.h"
19
+
41
+#include "hw/usb/chipidea.h"
20
+ case NEON_3R_VADD_VSUB:
42
#include "qemu/error-report.h"
21
+ if (u) {
43
#include "hw/sd/sdhci.h"
22
+ tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
44
#include "hw/char/cadence_uart.h"
23
+ vec_size, vec_size);
45
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
24
+ } else {
46
zynq_init_spi_flashes(0xE0007000, pic[81-IRQ_OFFSET], false);
25
+ tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
47
zynq_init_spi_flashes(0xE000D000, pic[51-IRQ_OFFSET], true);
26
+ vec_size, vec_size);
48
27
+ }
49
- sysbus_create_simple("xlnx,ps7-usb", 0xE0002000, pic[53-IRQ_OFFSET]);
28
+ return 0;
50
- sysbus_create_simple("xlnx,ps7-usb", 0xE0003000, pic[76-IRQ_OFFSET]);
29
}
51
+ sysbus_create_simple(TYPE_CHIPIDEA, 0xE0002000, pic[53 - IRQ_OFFSET]);
30
if (size == 3) {
52
+ sysbus_create_simple(TYPE_CHIPIDEA, 0xE0003000, pic[76 - IRQ_OFFSET]);
31
/* 64-bit element instructions. */
53
32
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
54
cadence_uart_create(0xE0000000, pic[59 - IRQ_OFFSET], serial_hd(0));
33
cpu_V1, cpu_V0);
55
cadence_uart_create(0xE0001000, pic[82 - IRQ_OFFSET], serial_hd(1));
34
}
35
break;
36
- case NEON_3R_VADD_VSUB:
37
- if (u) {
38
- tcg_gen_sub_i64(CPU_V001);
39
- } else {
40
- tcg_gen_add_i64(CPU_V001);
41
- }
42
- break;
43
default:
44
abort();
45
}
46
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
47
tmp2 = neon_load_reg(rd, pass);
48
gen_neon_add(size, tmp, tmp2);
49
break;
50
- case NEON_3R_VADD_VSUB:
51
- if (!u) { /* VADD */
52
- gen_neon_add(size, tmp, tmp2);
53
- } else { /* VSUB */
54
- switch (size) {
55
- case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
56
- case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
57
- case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
58
- default: abort();
59
- }
60
- }
61
- break;
62
case NEON_3R_VTST_VCEQ:
63
if (!u) { /* VTST */
64
switch (size) {
65
--
56
--
66
2.19.1
57
2.20.1
67
58
68
59
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Move ssra_op and usra_op expanders from translate-a64.c.
3
Xilinx USB devices are now instantiated through TYPE_CHIPIDEA,
4
and xlnx support in the EHCI code is no longer needed.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
6
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
7
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200215122354.13706-3-linux@roeck-us.net
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate.h | 2 +
11
hw/usb/hcd-ehci-sysbus.c | 17 -----------------
11
target/arm/translate-a64.c | 106 ----------------------------
12
1 file changed, 17 deletions(-)
12
target/arm/translate.c | 139 ++++++++++++++++++++++++++++++++++---
13
3 files changed, 130 insertions(+), 117 deletions(-)
14
13
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/hw/usb/hcd-ehci-sysbus.c b/hw/usb/hcd-ehci-sysbus.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
16
--- a/hw/usb/hcd-ehci-sysbus.c
18
+++ b/target/arm/translate.h
17
+++ b/hw/usb/hcd-ehci-sysbus.c
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
18
@@ -XXX,XX +XXX,XX @@ static const TypeInfo ehci_platform_type_info = {
20
extern const GVecGen3 bsl_op;
19
.class_init = ehci_platform_class_init,
21
extern const GVecGen3 bit_op;
20
};
22
extern const GVecGen3 bif_op;
21
23
+extern const GVecGen2i ssra_op[4];
22
-static void ehci_xlnx_class_init(ObjectClass *oc, void *data)
24
+extern const GVecGen2i usra_op[4];
25
26
/*
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
33
}
34
}
35
36
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
37
-{
23
-{
38
- tcg_gen_vec_sar8i_i64(a, a, shift);
24
- SysBusEHCIClass *sec = SYS_BUS_EHCI_CLASS(oc);
39
- tcg_gen_vec_add8_i64(d, d, a);
25
- DeviceClass *dc = DEVICE_CLASS(oc);
26
-
27
- set_bit(DEVICE_CATEGORY_USB, dc->categories);
28
- sec->capsbase = 0x100;
29
- sec->opregbase = 0x140;
40
-}
30
-}
41
-
31
-
42
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
32
-static const TypeInfo ehci_xlnx_type_info = {
43
-{
33
- .name = "xlnx,ps7-usb",
44
- tcg_gen_vec_sar16i_i64(a, a, shift);
34
- .parent = TYPE_SYS_BUS_EHCI,
45
- tcg_gen_vec_add16_i64(d, d, a);
35
- .class_init = ehci_xlnx_class_init,
46
-}
36
-};
47
-
37
-
48
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
38
static void ehci_exynos4210_class_init(ObjectClass *oc, void *data)
49
-{
50
- tcg_gen_sari_i32(a, a, shift);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
55
-{
56
- tcg_gen_sari_i64(a, a, shift);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
61
-{
62
- tcg_gen_sari_vec(vece, a, a, sh);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
67
-{
68
- tcg_gen_vec_shr8i_i64(a, a, shift);
69
- tcg_gen_vec_add8_i64(d, d, a);
70
-}
71
-
72
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
73
-{
74
- tcg_gen_vec_shr16i_i64(a, a, shift);
75
- tcg_gen_vec_add16_i64(d, d, a);
76
-}
77
-
78
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
79
-{
80
- tcg_gen_shri_i32(a, a, shift);
81
- tcg_gen_add_i32(d, d, a);
82
-}
83
-
84
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
85
-{
86
- tcg_gen_shri_i64(a, a, shift);
87
- tcg_gen_add_i64(d, d, a);
88
-}
89
-
90
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
91
-{
92
- tcg_gen_shri_vec(vece, a, a, sh);
93
- tcg_gen_add_vec(vece, d, d, a);
94
-}
95
-
96
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
97
{
39
{
98
uint64_t mask = dup_const(MO_8, 0xff >> shift);
40
SysBusEHCIClass *sec = SYS_BUS_EHCI_CLASS(oc);
99
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
41
@@ -XXX,XX +XXX,XX @@ static void ehci_sysbus_register_types(void)
100
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
101
int immh, int immb, int opcode, int rn, int rd)
102
{
42
{
103
- static const GVecGen2i ssra_op[4] = {
43
type_register_static(&ehci_type_info);
104
- { .fni8 = gen_ssra8_i64,
44
type_register_static(&ehci_platform_type_info);
105
- .fniv = gen_ssra_vec,
45
- type_register_static(&ehci_xlnx_type_info);
106
- .load_dest = true,
46
type_register_static(&ehci_exynos4210_type_info);
107
- .opc = INDEX_op_sari_vec,
47
type_register_static(&ehci_tegra2_type_info);
108
- .vece = MO_8 },
48
type_register_static(&ehci_ppc4xx_type_info);
109
- { .fni8 = gen_ssra16_i64,
110
- .fniv = gen_ssra_vec,
111
- .load_dest = true,
112
- .opc = INDEX_op_sari_vec,
113
- .vece = MO_16 },
114
- { .fni4 = gen_ssra32_i32,
115
- .fniv = gen_ssra_vec,
116
- .load_dest = true,
117
- .opc = INDEX_op_sari_vec,
118
- .vece = MO_32 },
119
- { .fni8 = gen_ssra64_i64,
120
- .fniv = gen_ssra_vec,
121
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
122
- .load_dest = true,
123
- .opc = INDEX_op_sari_vec,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen2i usra_op[4] = {
127
- { .fni8 = gen_usra8_i64,
128
- .fniv = gen_usra_vec,
129
- .load_dest = true,
130
- .opc = INDEX_op_shri_vec,
131
- .vece = MO_8, },
132
- { .fni8 = gen_usra16_i64,
133
- .fniv = gen_usra_vec,
134
- .load_dest = true,
135
- .opc = INDEX_op_shri_vec,
136
- .vece = MO_16, },
137
- { .fni4 = gen_usra32_i32,
138
- .fniv = gen_usra_vec,
139
- .load_dest = true,
140
- .opc = INDEX_op_shri_vec,
141
- .vece = MO_32, },
142
- { .fni8 = gen_usra64_i64,
143
- .fniv = gen_usra_vec,
144
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
145
- .load_dest = true,
146
- .opc = INDEX_op_shri_vec,
147
- .vece = MO_64, },
148
- };
149
static const GVecGen2i sri_op[4] = {
150
{ .fni8 = gen_shr8_ins_i64,
151
.fniv = gen_shr_ins_vec,
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
157
.load_dest = true
158
};
159
160
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
161
+{
162
+ tcg_gen_vec_sar8i_i64(a, a, shift);
163
+ tcg_gen_vec_add8_i64(d, d, a);
164
+}
165
+
166
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
167
+{
168
+ tcg_gen_vec_sar16i_i64(a, a, shift);
169
+ tcg_gen_vec_add16_i64(d, d, a);
170
+}
171
+
172
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
173
+{
174
+ tcg_gen_sari_i32(a, a, shift);
175
+ tcg_gen_add_i32(d, d, a);
176
+}
177
+
178
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
179
+{
180
+ tcg_gen_sari_i64(a, a, shift);
181
+ tcg_gen_add_i64(d, d, a);
182
+}
183
+
184
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
185
+{
186
+ tcg_gen_sari_vec(vece, a, a, sh);
187
+ tcg_gen_add_vec(vece, d, d, a);
188
+}
189
+
190
+const GVecGen2i ssra_op[4] = {
191
+ { .fni8 = gen_ssra8_i64,
192
+ .fniv = gen_ssra_vec,
193
+ .load_dest = true,
194
+ .opc = INDEX_op_sari_vec,
195
+ .vece = MO_8 },
196
+ { .fni8 = gen_ssra16_i64,
197
+ .fniv = gen_ssra_vec,
198
+ .load_dest = true,
199
+ .opc = INDEX_op_sari_vec,
200
+ .vece = MO_16 },
201
+ { .fni4 = gen_ssra32_i32,
202
+ .fniv = gen_ssra_vec,
203
+ .load_dest = true,
204
+ .opc = INDEX_op_sari_vec,
205
+ .vece = MO_32 },
206
+ { .fni8 = gen_ssra64_i64,
207
+ .fniv = gen_ssra_vec,
208
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
209
+ .load_dest = true,
210
+ .opc = INDEX_op_sari_vec,
211
+ .vece = MO_64 },
212
+};
213
+
214
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
215
+{
216
+ tcg_gen_vec_shr8i_i64(a, a, shift);
217
+ tcg_gen_vec_add8_i64(d, d, a);
218
+}
219
+
220
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
221
+{
222
+ tcg_gen_vec_shr16i_i64(a, a, shift);
223
+ tcg_gen_vec_add16_i64(d, d, a);
224
+}
225
+
226
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
227
+{
228
+ tcg_gen_shri_i32(a, a, shift);
229
+ tcg_gen_add_i32(d, d, a);
230
+}
231
+
232
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
233
+{
234
+ tcg_gen_shri_i64(a, a, shift);
235
+ tcg_gen_add_i64(d, d, a);
236
+}
237
+
238
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
239
+{
240
+ tcg_gen_shri_vec(vece, a, a, sh);
241
+ tcg_gen_add_vec(vece, d, d, a);
242
+}
243
+
244
+const GVecGen2i usra_op[4] = {
245
+ { .fni8 = gen_usra8_i64,
246
+ .fniv = gen_usra_vec,
247
+ .load_dest = true,
248
+ .opc = INDEX_op_shri_vec,
249
+ .vece = MO_8, },
250
+ { .fni8 = gen_usra16_i64,
251
+ .fniv = gen_usra_vec,
252
+ .load_dest = true,
253
+ .opc = INDEX_op_shri_vec,
254
+ .vece = MO_16, },
255
+ { .fni4 = gen_usra32_i32,
256
+ .fniv = gen_usra_vec,
257
+ .load_dest = true,
258
+ .opc = INDEX_op_shri_vec,
259
+ .vece = MO_32, },
260
+ { .fni8 = gen_usra64_i64,
261
+ .fniv = gen_usra_vec,
262
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
263
+ .load_dest = true,
264
+ .opc = INDEX_op_shri_vec,
265
+ .vece = MO_64, },
266
+};
267
268
/* Translate a NEON data processing instruction. Return nonzero if the
269
instruction is invalid.
270
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
271
}
272
return 0;
273
274
+ case 1: /* VSRA */
275
+ /* Right shift comes here negative. */
276
+ shift = -shift;
277
+ /* Shifts larger than the element size are architecturally
278
+ * valid. Unsigned results in all zeros; signed results
279
+ * in all sign bits.
280
+ */
281
+ if (!u) {
282
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
283
+ MIN(shift, (8 << size) - 1),
284
+ &ssra_op[size]);
285
+ } else if (shift >= 8 << size) {
286
+ /* rd += 0 */
287
+ } else {
288
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
289
+ shift, &usra_op[size]);
290
+ }
291
+ return 0;
292
+
293
case 5: /* VSHL, VSLI */
294
if (!u) { /* VSHL */
295
/* Shifts larger than the element size are
296
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
297
neon_load_reg64(cpu_V0, rm + pass);
298
tcg_gen_movi_i64(cpu_V1, imm);
299
switch (op) {
300
- case 1: /* VSRA */
301
- if (u)
302
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
303
- else
304
- gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
305
- break;
306
case 2: /* VRSHR */
307
case 3: /* VRSRA */
308
if (u)
309
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
310
default:
311
g_assert_not_reached();
312
}
313
- if (op == 1 || op == 3) {
314
+ if (op == 3) {
315
/* Accumulate. */
316
neon_load_reg64(cpu_V1, rd + pass);
317
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
318
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
319
tmp2 = tcg_temp_new_i32();
320
tcg_gen_movi_i32(tmp2, imm);
321
switch (op) {
322
- case 1: /* VSRA */
323
- GEN_NEON_INTEGER_OP(shl);
324
- break;
325
case 2: /* VRSHR */
326
case 3: /* VRSRA */
327
GEN_NEON_INTEGER_OP(rshl);
328
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
329
}
330
tcg_temp_free_i32(tmp2);
331
332
- if (op == 1 || op == 3) {
333
+ if (op == 3) {
334
/* Accumulate. */
335
tmp2 = neon_load_reg(rd, pass);
336
gen_neon_add(size, tmp, tmp2);
337
--
49
--
338
2.19.1
50
2.20.1
339
51
340
52
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Thomas Huth <thuth@redhat.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Old kernels from the Meego project can be used to check that Linux
4
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
4
is at least starting on these machines.
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
6
Signed-off-by: Thomas Huth <thuth@redhat.com>
7
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200225172501.29609-2-philmd@redhat.com
12
Message-Id: <20200129131920.22302-1-thuth@redhat.com>
13
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
15
---
8
target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
16
MAINTAINERS | 1 +
9
1 file changed, 39 insertions(+), 28 deletions(-)
17
tests/acceptance/machine_arm_n8x0.py | 49 ++++++++++++++++++++++++++++
18
2 files changed, 50 insertions(+)
19
create mode 100644 tests/acceptance/machine_arm_n8x0.py
10
20
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
21
diff --git a/MAINTAINERS b/MAINTAINERS
12
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
23
--- a/MAINTAINERS
14
+++ b/target/arm/translate.c
24
+++ b/MAINTAINERS
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
25
@@ -XXX,XX +XXX,XX @@ F: hw/rtc/twl92230.c
16
return 1;
26
F: include/hw/display/blizzard.h
17
}
27
F: include/hw/input/tsc2xxx.h
18
} else { /* (insn & 0x00380080) == 0 */
28
F: include/hw/misc/cbus.h
19
- int invert;
29
+F: tests/acceptance/machine_arm_n8x0.py
20
+ int invert, reg_ofs, vec_size;
30
31
Palm
32
M: Andrzej Zaborowski <balrogg@gmail.com>
33
diff --git a/tests/acceptance/machine_arm_n8x0.py b/tests/acceptance/machine_arm_n8x0.py
34
new file mode 100644
35
index XXXXXXX..XXXXXXX
36
--- /dev/null
37
+++ b/tests/acceptance/machine_arm_n8x0.py
38
@@ -XXX,XX +XXX,XX @@
39
+# Functional test that boots a Linux kernel and checks the console
40
+#
41
+# Copyright (c) 2020 Red Hat, Inc.
42
+#
43
+# Author:
44
+# Thomas Huth <thuth@redhat.com>
45
+#
46
+# This work is licensed under the terms of the GNU GPL, version 2 or
47
+# later. See the COPYING file in the top-level directory.
21
+
48
+
22
if (q && (rd & 1)) {
49
+import os
23
return 1;
24
}
25
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
26
break;
27
case 14:
28
imm |= (imm << 8) | (imm << 16) | (imm << 24);
29
- if (invert)
30
+ if (invert) {
31
imm = ~imm;
32
+ }
33
break;
34
case 15:
35
if (invert) {
36
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
37
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
38
break;
39
}
40
- if (invert)
41
+ if (invert) {
42
imm = ~imm;
43
+ }
44
45
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
46
- if (op & 1 && op < 12) {
47
- tmp = neon_load_reg(rd, pass);
48
- if (invert) {
49
- /* The immediate value has already been inverted, so
50
- BIC becomes AND. */
51
- tcg_gen_andi_i32(tmp, tmp, imm);
52
- } else {
53
- tcg_gen_ori_i32(tmp, tmp, imm);
54
- }
55
+ reg_ofs = neon_reg_offset(rd, 0);
56
+ vec_size = q ? 16 : 8;
57
+
50
+
58
+ if (op & 1 && op < 12) {
51
+from avocado import skipUnless
59
+ if (invert) {
52
+from avocado_qemu import Test
60
+ /* The immediate value has already been inverted,
53
+from avocado_qemu import wait_for_console_pattern
61
+ * so BIC becomes AND.
62
+ */
63
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
64
+ vec_size, vec_size);
65
} else {
66
- /* VMOV, VMVN. */
67
- tmp = tcg_temp_new_i32();
68
- if (op == 14 && invert) {
69
- int n;
70
- uint32_t val;
71
- val = 0;
72
- for (n = 0; n < 4; n++) {
73
- if (imm & (1 << (n + (pass & 1) * 4)))
74
- val |= 0xff << (n * 8);
75
- }
76
- tcg_gen_movi_i32(tmp, val);
77
- } else {
78
- tcg_gen_movi_i32(tmp, imm);
79
- }
80
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
81
+ vec_size, vec_size);
82
+ }
83
+ } else {
84
+ /* VMOV, VMVN. */
85
+ if (op == 14 && invert) {
86
+ TCGv_i64 t64 = tcg_temp_new_i64();
87
+
54
+
88
+ for (pass = 0; pass <= q; ++pass) {
55
+class N8x0Machine(Test):
89
+ uint64_t val = 0;
56
+ """Boots the Linux kernel and checks that the console is operational"""
90
+ int n;
91
+
57
+
92
+ for (n = 0; n < 8; n++) {
58
+ timeout = 90
93
+ if (imm & (1 << (n + pass * 8))) {
59
+
94
+ val |= 0xffull << (n * 8);
60
+ def __do_test_n8x0(self):
95
+ }
61
+ kernel_url = ('http://stskeeps.subnetmask.net/meego-n8x0/'
96
+ }
62
+ 'meego-arm-n8x0-1.0.80.20100712.1431-'
97
+ tcg_gen_movi_i64(t64, val);
63
+ 'vmlinuz-2.6.35~rc4-129.1-n8x0')
98
+ neon_store_reg64(t64, rd + pass);
64
+ kernel_hash = 'e9d5ab8d7548923a0061b6fbf601465e479ed269'
99
+ }
65
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
100
+ tcg_temp_free_i64(t64);
66
+
101
+ } else {
67
+ self.vm.set_console(console_index=1)
102
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
68
+ self.vm.add_args('-kernel', kernel_path,
103
}
69
+ '-append', 'printk.time=0 console=ttyS1')
104
- neon_store_reg(rd, pass, tmp);
70
+ self.vm.launch()
105
}
71
+ wait_for_console_pattern(self, 'TSC2005 driver initializing')
106
}
72
+
107
} else { /* (insn & 0x00800010 == 0x00800000) */
73
+ @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
74
+ def test_n800(self):
75
+ """
76
+ :avocado: tags=arch:arm
77
+ :avocado: tags=machine:n800
78
+ """
79
+ self.__do_test_n8x0()
80
+
81
+ @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
82
+ def test_n810(self):
83
+ """
84
+ :avocado: tags=arch:arm
85
+ :avocado: tags=machine:n810
86
+ """
87
+ self.__do_test_n8x0()
108
--
88
--
109
2.19.1
89
2.20.1
110
90
111
91
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Thomas Huth <thuth@redhat.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
There is a kernel and initrd available on GitHub which we can use
4
for testing this machine.
5
6
Signed-off-by: Thomas Huth <thuth@redhat.com>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
8
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
[PMM: drop change to now-deleted cpu_mode_names array]
9
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200225172501.29609-3-philmd@redhat.com
12
Message-Id: <20200131170233.14584-1-thuth@redhat.com>
13
[PMD: Renamed test method, moved description from class to method]
14
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
16
---
10
target/arm/translate.c | 4 ++--
17
MAINTAINERS | 1 +
11
1 file changed, 2 insertions(+), 2 deletions(-)
18
tests/acceptance/machine_arm_integratorcp.py | 43 ++++++++++++++++++++
19
2 files changed, 44 insertions(+)
20
create mode 100644 tests/acceptance/machine_arm_integratorcp.py
12
21
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
22
diff --git a/MAINTAINERS b/MAINTAINERS
14
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
24
--- a/MAINTAINERS
16
+++ b/target/arm/translate.c
25
+++ b/MAINTAINERS
17
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
26
@@ -XXX,XX +XXX,XX @@ S: Maintained
18
27
F: hw/arm/integratorcp.c
19
#include "exec/gen-icount.h"
28
F: hw/misc/arm_integrator_debug.c
20
29
F: include/hw/misc/arm_integrator_debug.h
21
-static const char *regnames[] =
30
+F: tests/acceptance/machine_arm_integratorcp.py
22
+static const char * const regnames[] =
31
23
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
32
MCIMX6UL EVK / i.MX6ul
24
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
33
M: Peter Maydell <peter.maydell@linaro.org>
25
34
diff --git a/tests/acceptance/machine_arm_integratorcp.py b/tests/acceptance/machine_arm_integratorcp.py
26
@@ -XXX,XX +XXX,XX @@ static struct {
35
new file mode 100644
27
int nregs;
36
index XXXXXXX..XXXXXXX
28
int interleave;
37
--- /dev/null
29
int spacing;
38
+++ b/tests/acceptance/machine_arm_integratorcp.py
30
-} neon_ls_element_type[11] = {
39
@@ -XXX,XX +XXX,XX @@
31
+} const neon_ls_element_type[11] = {
40
+# Functional test that boots a Linux kernel and checks the console
32
{4, 4, 1},
41
+#
33
{4, 4, 2},
42
+# Copyright (c) 2020 Red Hat, Inc.
34
{4, 1, 1},
43
+#
44
+# Author:
45
+# Thomas Huth <thuth@redhat.com>
46
+#
47
+# This work is licensed under the terms of the GNU GPL, version 2 or
48
+# later. See the COPYING file in the top-level directory.
49
+
50
+import os
51
+
52
+from avocado import skipUnless
53
+from avocado_qemu import Test
54
+from avocado_qemu import wait_for_console_pattern
55
+
56
+class IntegratorMachine(Test):
57
+
58
+ timeout = 90
59
+
60
+ @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
61
+ def test_integratorcp_console(self):
62
+ """
63
+ Boots the Linux kernel and checks that the console is operational
64
+ :avocado: tags=arch:arm
65
+ :avocado: tags=machine:integratorcp
66
+ """
67
+ kernel_url = ('https://github.com/zayac/qemu-arm/raw/master/'
68
+ 'arm-test/kernel/zImage.integrator')
69
+ kernel_hash = '0d7adba893c503267c946a3cbdc63b4b54f25468'
70
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
71
+
72
+ initrd_url = ('https://github.com/zayac/qemu-arm/raw/master/'
73
+ 'arm-test/kernel/arm_root.img')
74
+ initrd_hash = 'b51e4154285bf784e017a37586428332d8c7bd8b'
75
+ initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
76
+
77
+ self.vm.set_console()
78
+ self.vm.add_args('-kernel', kernel_path,
79
+ '-initrd', initrd_path,
80
+ '-append', 'printk.time=0 console=ttyAMA0')
81
+ self.vm.launch()
82
+ wait_for_console_pattern(self, 'Log in as root')
35
--
83
--
36
2.19.1
84
2.20.1
37
85
38
86
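Both acceptance tests added above follow the same avocado_qemu boot-and-check-console pattern. As a rough standalone sketch of that pattern only (the class name, machine tag, kernel URL, asset hash and console string below are placeholders, not values taken from the patches):

# Rough sketch of the avocado_qemu pattern the acceptance tests above follow;
# the machine tag, kernel URL, asset hash and console string are placeholders.
import os

from avocado import skipUnless
from avocado_qemu import Test
from avocado_qemu import wait_for_console_pattern


class ExampleBootMachine(Test):

    timeout = 90

    @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
    def test_boot_console(self):
        """
        :avocado: tags=arch:arm
        :avocado: tags=machine:some-arm-machine
        """
        # Download (and cache) the guest kernel, verifying its hash.
        kernel_path = self.fetch_asset('https://example.org/zImage',
                                       asset_hash='0123456789abcdef0123456789abcdef01234567')
        self.vm.set_console()
        self.vm.add_args('-kernel', kernel_path,
                         '-append', 'printk.time=0 console=ttyAMA0')
        self.vm.launch()
        # Pass as soon as the expected string shows up on the serial console.
        wait_for_console_pattern(self, 'Booting Linux')

The real tests differ mainly in which assets they fetch, which machine tag they carry, and which console pattern they wait for.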
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
As we want to re-use this code, extract it as a new function.
4
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
4
Since we are using the PL011 serial console, add an Avocado tag
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
to ease filtering of tests.
6
7
Reviewed-by: Thomas Huth <thuth@redhat.com>
8
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20200225172501.29609-4-philmd@redhat.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
12
---
8
target/arm/translate-a64.c | 28 +++-------------------------
13
tests/acceptance/machine_arm_integratorcp.py | 18 +++++++++++-------
9
1 file changed, 3 insertions(+), 25 deletions(-)
14
1 file changed, 11 insertions(+), 7 deletions(-)
10
15
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/tests/acceptance/machine_arm_integratorcp.py b/tests/acceptance/machine_arm_integratorcp.py
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
18
--- a/tests/acceptance/machine_arm_integratorcp.py
14
+++ b/target/arm/translate-a64.c
19
+++ b/tests/acceptance/machine_arm_integratorcp.py
15
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ class IntegratorMachine(Test):
16
for (xs = 0; xs < selem; xs++) {
21
17
if (replicate) {
22
timeout = 90
18
/* Load and replicate to all elements */
23
19
- uint64_t mulconst;
24
- @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
20
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
25
- def test_integratorcp_console(self):
21
26
- """
22
tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
27
- Boots the Linux kernel and checks that the console is operational
23
get_mem_index(s), s->be_data + scale);
28
- :avocado: tags=arch:arm
24
- switch (scale) {
29
- :avocado: tags=machine:integratorcp
25
- case 0:
30
- """
26
- mulconst = 0x0101010101010101ULL;
31
+ def boot_integratorcp(self):
27
- break;
32
kernel_url = ('https://github.com/zayac/qemu-arm/raw/master/'
28
- case 1:
33
'arm-test/kernel/zImage.integrator')
29
- mulconst = 0x0001000100010001ULL;
34
kernel_hash = '0d7adba893c503267c946a3cbdc63b4b54f25468'
30
- break;
35
@@ -XXX,XX +XXX,XX @@ class IntegratorMachine(Test):
31
- case 2:
36
'-initrd', initrd_path,
32
- mulconst = 0x0000000100000001ULL;
37
'-append', 'printk.time=0 console=ttyAMA0')
33
- break;
38
self.vm.launch()
34
- case 3:
39
+
35
- mulconst = 0;
40
+ @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
36
- break;
41
+ def test_integratorcp_console(self):
37
- default:
42
+ """
38
- g_assert_not_reached();
43
+ Boots the Linux kernel and checks that the console is operational
39
- }
44
+ :avocado: tags=arch:arm
40
- if (mulconst) {
45
+ :avocado: tags=machine:integratorcp
41
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
46
+ :avocado: tags=device:pl011
42
- }
47
+ """
43
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
48
+ self.boot_integratorcp()
44
- if (is_q) {
49
wait_for_console_pattern(self, 'Log in as root')
45
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
46
- }
47
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
48
+ (is_q + 1) * 8, vec_full_reg_size(s),
49
+ tcg_tmp);
50
tcg_temp_free_i64(tcg_tmp);
51
- clear_vec_high(s, is_q, rt);
52
} else {
53
/* Load/store one element per register */
54
if (is_load) {
55
--
50
--
56
2.19.1
51
2.20.1
57
52
58
53
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.
3
Add a test that verifies the Tux logo is displayed on the framebuffer.
4
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
We simply follow the OpenCV "Template Matching with Multiple Objects"
6
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
6
tutorial, replacing Lionel Messi by Tux:
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
https://docs.opencv.org/4.2.0/d4/dc6/tutorial_py_template_matching.html
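As a rough standalone sketch of that matching step (outside avocado; the input file names below are made up and are not part of the patch):

# Rough sketch of the OpenCV template-matching check the test performs;
# 'screendump.pbm' and 'tux.ppm' are made-up local file names.
import cv2
import numpy as np

screendump = cv2.cvtColor(cv2.imread('screendump.pbm'), cv2.COLOR_BGR2GRAY)
template = cv2.imread('tux.ppm', 0)                 # 0 = read as grayscale
result = cv2.matchTemplate(screendump, template, cv2.TM_CCOEFF_NORMED)
for x, y in zip(*np.where(result >= 0.92)[::-1]):   # same 0.92 threshold as the test
    print('found Tux at position [x, y] = (%d, %d)' % (x, y))

The actual test drives this through avocado and the QEMU monitor's screendump command, as the diff below shows.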
8
9
When OpenCV and NumPy are installed, this test can be run using:
10
11
$ AVOCADO_ALLOW_UNTRUSTED_CODE=hmmm \
12
avocado --show=app,framebuffer run -t device:framebuffer \
13
tests/acceptance/machine_arm_integratorcp.py
14
JOB ID : 8c46b0f8269242e87d738247883ea2a470df949e
15
JOB LOG : avocado/job-results/job-2020-01-31T21.38-8c46b0f/job.log
16
(1/1) tests/acceptance/machine_arm_integratorcp.py:IntegratorMachine.test_framebuffer_tux_logo:
17
framebuffer: found Tux at position [x, y] = (0, 0)
18
PASS (3.96 s)
19
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
20
JOB TIME : 4.23 s
21
22
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
23
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
24
Message-id: 20200225172501.29609-5-philmd@redhat.com
25
Message-Id: <20200131211102.29612-3-f4bug@amsat.org>
26
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
28
---
10
target/arm/translate.h | 6 ++
29
tests/acceptance/machine_arm_integratorcp.py | 52 ++++++++++++++++++++
11
target/arm/translate-a64.c | 61 --------------
30
1 file changed, 52 insertions(+)
12
target/arm/translate.c | 162 +++++++++++++++++++++++++++----------
13
3 files changed, 124 insertions(+), 105 deletions(-)
14
31
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
32
diff --git a/tests/acceptance/machine_arm_integratorcp.py b/tests/acceptance/machine_arm_integratorcp.py
16
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
34
--- a/tests/acceptance/machine_arm_integratorcp.py
18
+++ b/target/arm/translate.h
35
+++ b/tests/acceptance/machine_arm_integratorcp.py
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
36
@@ -XXX,XX +XXX,XX @@
20
return ret;
37
# later. See the COPYING file in the top-level directory.
21
}
38
39
import os
40
+import logging
41
42
from avocado import skipUnless
43
from avocado_qemu import Test
44
from avocado_qemu import wait_for_console_pattern
22
45
23
+
46
+
24
+/* Vector operations shared between ARM and AArch64. */
47
+NUMPY_AVAILABLE = True
25
+extern const GVecGen3 bsl_op;
48
+try:
26
+extern const GVecGen3 bit_op;
49
+ import numpy as np
27
+extern const GVecGen3 bif_op;
50
+except ImportError:
51
+ NUMPY_AVAILABLE = False
28
+
52
+
29
/*
53
+CV2_AVAILABLE = True
30
* Forward to the isar_feature_* tests given a DisasContext pointer.
54
+try:
31
*/
55
+ import cv2
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
56
+except ImportError:
33
index XXXXXXX..XXXXXXX 100644
57
+ CV2_AVAILABLE = False
34
--- a/target/arm/translate-a64.c
35
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
37
}
38
}
39
40
-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
41
-{
42
- tcg_gen_xor_i64(rn, rn, rm);
43
- tcg_gen_and_i64(rn, rn, rd);
44
- tcg_gen_xor_i64(rd, rm, rn);
45
-}
46
-
47
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
48
-{
49
- tcg_gen_xor_i64(rn, rn, rd);
50
- tcg_gen_and_i64(rn, rn, rm);
51
- tcg_gen_xor_i64(rd, rd, rn);
52
-}
53
-
54
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
55
-{
56
- tcg_gen_xor_i64(rn, rn, rd);
57
- tcg_gen_andc_i64(rn, rn, rm);
58
- tcg_gen_xor_i64(rd, rd, rn);
59
-}
60
-
61
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
62
-{
63
- tcg_gen_xor_vec(vece, rn, rn, rm);
64
- tcg_gen_and_vec(vece, rn, rn, rd);
65
- tcg_gen_xor_vec(vece, rd, rm, rn);
66
-}
67
-
68
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
69
-{
70
- tcg_gen_xor_vec(vece, rn, rn, rd);
71
- tcg_gen_and_vec(vece, rn, rn, rm);
72
- tcg_gen_xor_vec(vece, rd, rd, rn);
73
-}
74
-
75
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
76
-{
77
- tcg_gen_xor_vec(vece, rn, rn, rd);
78
- tcg_gen_andc_vec(vece, rn, rn, rm);
79
- tcg_gen_xor_vec(vece, rd, rd, rn);
80
-}
81
-
82
/* Logic op (opcode == 3) subgroup of C3.6.16. */
83
static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
84
{
85
- static const GVecGen3 bsl_op = {
86
- .fni8 = gen_bsl_i64,
87
- .fniv = gen_bsl_vec,
88
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
89
- .load_dest = true
90
- };
91
- static const GVecGen3 bit_op = {
92
- .fni8 = gen_bit_i64,
93
- .fniv = gen_bit_vec,
94
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
95
- .load_dest = true
96
- };
97
- static const GVecGen3 bif_op = {
98
- .fni8 = gen_bif_i64,
99
- .fniv = gen_bif_vec,
100
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
- .load_dest = true
102
- };
103
-
104
int rd = extract32(insn, 0, 5);
105
int rn = extract32(insn, 5, 5);
106
int rm = extract32(insn, 16, 5);
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
112
return 0;
113
}
114
115
-/* Bitwise select. dest = c ? t : f. Clobbers T and F. */
116
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
117
-{
118
- tcg_gen_and_i32(t, t, c);
119
- tcg_gen_andc_i32(f, f, c);
120
- tcg_gen_or_i32(dest, t, f);
121
-}
122
-
123
static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
124
{
125
switch (size) {
126
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
127
return 1;
128
}
129
130
+/*
131
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
132
+ */
133
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
134
+{
135
+ tcg_gen_xor_i64(rn, rn, rm);
136
+ tcg_gen_and_i64(rn, rn, rd);
137
+ tcg_gen_xor_i64(rd, rm, rn);
138
+}
139
+
140
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
141
+{
142
+ tcg_gen_xor_i64(rn, rn, rd);
143
+ tcg_gen_and_i64(rn, rn, rm);
144
+ tcg_gen_xor_i64(rd, rd, rn);
145
+}
146
+
147
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
148
+{
149
+ tcg_gen_xor_i64(rn, rn, rd);
150
+ tcg_gen_andc_i64(rn, rn, rm);
151
+ tcg_gen_xor_i64(rd, rd, rn);
152
+}
153
+
154
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
155
+{
156
+ tcg_gen_xor_vec(vece, rn, rn, rm);
157
+ tcg_gen_and_vec(vece, rn, rn, rd);
158
+ tcg_gen_xor_vec(vece, rd, rm, rn);
159
+}
160
+
161
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
162
+{
163
+ tcg_gen_xor_vec(vece, rn, rn, rd);
164
+ tcg_gen_and_vec(vece, rn, rn, rm);
165
+ tcg_gen_xor_vec(vece, rd, rd, rn);
166
+}
167
+
168
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
169
+{
170
+ tcg_gen_xor_vec(vece, rn, rn, rd);
171
+ tcg_gen_andc_vec(vece, rn, rn, rm);
172
+ tcg_gen_xor_vec(vece, rd, rd, rn);
173
+}
174
+
175
+const GVecGen3 bsl_op = {
176
+ .fni8 = gen_bsl_i64,
177
+ .fniv = gen_bsl_vec,
178
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
179
+ .load_dest = true
180
+};
181
+
182
+const GVecGen3 bit_op = {
183
+ .fni8 = gen_bit_i64,
184
+ .fniv = gen_bit_vec,
185
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
+ .load_dest = true
187
+};
188
+
189
+const GVecGen3 bif_op = {
190
+ .fni8 = gen_bif_i64,
191
+ .fniv = gen_bif_vec,
192
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
193
+ .load_dest = true
194
+};
195
+
58
+
196
+
59
+
197
/* Translate a NEON data processing instruction. Return nonzero if the
60
class IntegratorMachine(Test):
198
instruction is invalid.
61
199
We process data in a mixture of 32-bit and 64-bit chunks.
62
timeout = 90
200
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
63
@@ -XXX,XX +XXX,XX @@ class IntegratorMachine(Test):
201
{
64
"""
202
int op;
65
self.boot_integratorcp()
203
int q;
66
wait_for_console_pattern(self, 'Log in as root')
204
- int rd, rn, rm;
205
+ int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
206
int size;
207
int shift;
208
int pass;
209
int count;
210
int pairwise;
211
int u;
212
+ int vec_size;
213
uint32_t imm, mask;
214
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
215
TCGv_ptr ptr1, ptr2, ptr3;
216
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
217
VFP_DREG_N(rn, insn);
218
VFP_DREG_M(rm, insn);
219
size = (insn >> 20) & 3;
220
+ vec_size = q ? 16 : 8;
221
+ rd_ofs = neon_reg_offset(rd, 0);
222
+ rn_ofs = neon_reg_offset(rn, 0);
223
+ rm_ofs = neon_reg_offset(rm, 0);
224
+
67
+
225
if ((insn & (1 << 23)) == 0) {
68
+ @skipUnless(NUMPY_AVAILABLE, 'Python NumPy not installed')
226
/* Three register same length. */
69
+ @skipUnless(CV2_AVAILABLE, 'Python OpenCV not installed')
227
op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
70
+ @skipUnless(os.getenv('AVOCADO_ALLOW_UNTRUSTED_CODE'), 'untrusted code')
228
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
71
+ def test_framebuffer_tux_logo(self):
229
q, rd, rn, rm);
72
+ """
230
}
73
+ Boot Linux and verify the Tux logo is displayed on the framebuffer.
231
return 1;
74
+ :avocado: tags=arch:arm
75
+ :avocado: tags=machine:integratorcp
76
+ :avocado: tags=device:pl110
77
+ :avocado: tags=device:framebuffer
78
+ """
79
+ screendump_path = os.path.join(self.workdir, "screendump.pbm")
80
+ tuxlogo_url = ('https://github.com/torvalds/linux/raw/v2.6.12/'
81
+ 'drivers/video/logo/logo_linux_vga16.ppm')
82
+ tuxlogo_hash = '3991c2ddbd1ddaecda7601f8aafbcf5b02dc86af'
83
+ tuxlogo_path = self.fetch_asset(tuxlogo_url, asset_hash=tuxlogo_hash)
232
+
84
+
233
+ case NEON_3R_LOGIC: /* Logic ops. */
85
+ self.boot_integratorcp()
234
+ switch ((u << 2) | size) {
86
+ framebuffer_ready = 'Console: switching to colour frame buffer device'
235
+ case 0: /* VAND */
87
+ wait_for_console_pattern(self, framebuffer_ready)
236
+ tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
88
+ self.vm.command('human-monitor-command', command_line='stop')
237
+ vec_size, vec_size);
89
+ self.vm.command('human-monitor-command',
238
+ break;
90
+ command_line='screendump %s' % screendump_path)
239
+ case 1: /* VBIC */
91
+ logger = logging.getLogger('framebuffer')
240
+ tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
92
+
241
+ vec_size, vec_size);
93
+ cpu_count = 1
242
+ break;
94
+ match_threshold = 0.92
243
+ case 2:
95
+ screendump_bgr = cv2.imread(screendump_path)
244
+ if (rn == rm) {
96
+ screendump_gray = cv2.cvtColor(screendump_bgr, cv2.COLOR_BGR2GRAY)
245
+ /* VMOV */
97
+ result = cv2.matchTemplate(screendump_gray, cv2.imread(tuxlogo_path, 0),
246
+ tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
98
+ cv2.TM_CCOEFF_NORMED)
247
+ } else {
99
+ loc = np.where(result >= match_threshold)
248
+ /* VORR */
100
+ tux_count = 0
249
+ tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
101
+ for tux_count, pt in enumerate(zip(*loc[::-1]), start=1):
250
+ vec_size, vec_size);
102
+ logger.debug('found Tux at position [x, y] = %s', pt)
251
+ }
103
+ self.assertGreaterEqual(tux_count, cpu_count)
252
+ break;
253
+ case 3: /* VORN */
254
+ tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
255
+ vec_size, vec_size);
256
+ break;
257
+ case 4: /* VEOR */
258
+ tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
259
+ vec_size, vec_size);
260
+ break;
261
+ case 5: /* VBSL */
262
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
263
+ vec_size, vec_size, &bsl_op);
264
+ break;
265
+ case 6: /* VBIT */
266
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
267
+ vec_size, vec_size, &bit_op);
268
+ break;
269
+ case 7: /* VBIF */
270
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
271
+ vec_size, vec_size, &bif_op);
272
+ break;
273
+ }
274
+ return 0;
275
}
276
- if (size == 3 && op != NEON_3R_LOGIC) {
277
+ if (size == 3) {
278
/* 64-bit element instructions. */
279
for (pass = 0; pass < (q ? 2 : 1); pass++) {
280
neon_load_reg64(cpu_V0, rn + pass);
281
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
282
case NEON_3R_VRHADD:
283
GEN_NEON_INTEGER_OP(rhadd);
284
break;
285
- case NEON_3R_LOGIC: /* Logic ops. */
286
- switch ((u << 2) | size) {
287
- case 0: /* VAND */
288
- tcg_gen_and_i32(tmp, tmp, tmp2);
289
- break;
290
- case 1: /* BIC */
291
- tcg_gen_andc_i32(tmp, tmp, tmp2);
292
- break;
293
- case 2: /* VORR */
294
- tcg_gen_or_i32(tmp, tmp, tmp2);
295
- break;
296
- case 3: /* VORN */
297
- tcg_gen_orc_i32(tmp, tmp, tmp2);
298
- break;
299
- case 4: /* VEOR */
300
- tcg_gen_xor_i32(tmp, tmp, tmp2);
301
- break;
302
- case 5: /* VBSL */
303
- tmp3 = neon_load_reg(rd, pass);
304
- gen_neon_bsl(tmp, tmp, tmp2, tmp3);
305
- tcg_temp_free_i32(tmp3);
306
- break;
307
- case 6: /* VBIT */
308
- tmp3 = neon_load_reg(rd, pass);
309
- gen_neon_bsl(tmp, tmp, tmp3, tmp2);
310
- tcg_temp_free_i32(tmp3);
311
- break;
312
- case 7: /* VBIF */
313
- tmp3 = neon_load_reg(rd, pass);
314
- gen_neon_bsl(tmp, tmp3, tmp, tmp2);
315
- tcg_temp_free_i32(tmp3);
316
- break;
317
- }
318
- break;
319
case NEON_3R_VHSUB:
320
GEN_NEON_INTEGER_OP(hsub);
321
break;
322
--
104
--
323
2.19.1
105
2.20.1
324
106
325
107
1
From: Dongjiu Geng <gengdongjiu@huawei.com>
1
We missed an instance of using FIELD_EX32 on a 64-bit ID
2
register, in isar_feature_aa64_pmu_8_4(). Fix it.
2
3
3
This patch extends the qemu-kvm state sync logic with support for
4
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError exception state.
5
It also supports migration of the exception state.
6
7
The SError exception state includes the SError pending state and the ESR value;
8
kvm_put/get_vcpu_events() will be called when setting or getting the system
9
registers. When migrating, if the source machine has an SError pending,
10
QEMU will do this migration regardless of whether the target machine supports
11
specifying the guest ESR value, because if the target machine does not support that,
12
it can still inject the SError with a zero ESR value.
13
14
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
15
Reviewed-by: Andrew Jones <drjones@redhat.com>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200224172846.13053-2-peter.maydell@linaro.org
19
---
8
---
20
target/arm/cpu.h | 7 ++++++
9
target/arm/cpu.h | 4 ++--
21
target/arm/kvm_arm.h | 24 ++++++++++++++++++
10
1 file changed, 2 insertions(+), 2 deletions(-)
22
target/arm/kvm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++
23
target/arm/kvm32.c | 13 ++++++++++
24
target/arm/kvm64.c | 13 ++++++++++
25
target/arm/machine.c | 22 ++++++++++++++++
26
6 files changed, 139 insertions(+)
27
11
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
14
--- a/target/arm/cpu.h
31
+++ b/target/arm/cpu.h
15
+++ b/target/arm/cpu.h
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
16
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pmu_8_1(const ARMISARegisters *id)
33
*/
17
34
} exception;
18
static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
35
19
{
36
+ /* Information associated with an SError */
20
- return FIELD_EX32(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
37
+ struct {
21
- FIELD_EX32(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
38
+ uint8_t pending;
22
+ return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
39
+ uint8_t has_esr;
23
+ FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
40
+ uint64_t esr;
41
+ } serror;
42
+
43
/* Thumb-2 EE state. */
44
uint32_t teecr;
45
uint32_t teehbr;
46
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/kvm_arm.h
49
+++ b/target/arm/kvm_arm.h
50
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
51
*/
52
void kvm_arm_reset_vcpu(ARMCPU *cpu);
53
54
+/**
55
+ * kvm_arm_init_serror_injection:
56
+ * @cs: CPUState
57
+ *
58
+ * Check whether KVM can set guest SError syndrome.
59
+ */
60
+void kvm_arm_init_serror_injection(CPUState *cs);
61
+
62
+/**
63
+ * kvm_get_vcpu_events:
64
+ * @cpu: ARMCPU
65
+ *
66
+ * Get VCPU related state from kvm.
67
+ */
68
+int kvm_get_vcpu_events(ARMCPU *cpu);
69
+
70
+/**
71
+ * kvm_put_vcpu_events:
72
+ * @cpu: ARMCPU
73
+ *
74
+ * Put VCPU related state to kvm.
75
+ */
76
+int kvm_put_vcpu_events(ARMCPU *cpu);
77
+
78
#ifdef CONFIG_KVM
79
/**
80
* kvm_arm_create_scratch_host_vcpu:
81
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/kvm.c
84
+++ b/target/arm/kvm.c
85
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
86
};
87
88
static bool cap_has_mp_state;
89
+static bool cap_has_inject_serror_esr;
90
91
static ARMHostCPUFeatures arm_host_cpu_features;
92
93
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
94
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
95
}
24
}
96
25
97
+void kvm_arm_init_serror_injection(CPUState *cs)
26
/*
98
+{
99
+ cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
100
+ KVM_CAP_ARM_INJECT_SERROR_ESR);
101
+}
102
+
103
bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
104
int *fdarray,
105
struct kvm_vcpu_init *init)
106
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
107
return 0;
108
}
109
110
+int kvm_put_vcpu_events(ARMCPU *cpu)
111
+{
112
+ CPUARMState *env = &cpu->env;
113
+ struct kvm_vcpu_events events;
114
+ int ret;
115
+
116
+ if (!kvm_has_vcpu_events()) {
117
+ return 0;
118
+ }
119
+
120
+ memset(&events, 0, sizeof(events));
121
+ events.exception.serror_pending = env->serror.pending;
122
+
123
+ /* Inject SError to guest with specified syndrome if host kernel
124
+ * supports it, otherwise inject SError without syndrome.
125
+ */
126
+ if (cap_has_inject_serror_esr) {
127
+ events.exception.serror_has_esr = env->serror.has_esr;
128
+ events.exception.serror_esr = env->serror.esr;
129
+ }
130
+
131
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
132
+ if (ret) {
133
+ error_report("failed to put vcpu events");
134
+ }
135
+
136
+ return ret;
137
+}
138
+
139
+int kvm_get_vcpu_events(ARMCPU *cpu)
140
+{
141
+ CPUARMState *env = &cpu->env;
142
+ struct kvm_vcpu_events events;
143
+ int ret;
144
+
145
+ if (!kvm_has_vcpu_events()) {
146
+ return 0;
147
+ }
148
+
149
+ memset(&events, 0, sizeof(events));
150
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
151
+ if (ret) {
152
+ error_report("failed to get vcpu events");
153
+ return ret;
154
+ }
155
+
156
+ env->serror.pending = events.exception.serror_pending;
157
+ env->serror.has_esr = events.exception.serror_has_esr;
158
+ env->serror.esr = events.exception.serror_esr;
159
+
160
+ return 0;
161
+}
162
+
163
void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
164
{
165
}
166
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
167
index XXXXXXX..XXXXXXX 100644
168
--- a/target/arm/kvm32.c
169
+++ b/target/arm/kvm32.c
170
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
171
}
172
cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
173
174
+ /* Check whether userspace can specify guest syndrome value */
175
+ kvm_arm_init_serror_injection(cs);
176
+
177
return kvm_arm_init_cpreg_list(cpu);
178
}
179
180
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
181
return ret;
182
}
183
184
+ ret = kvm_put_vcpu_events(cpu);
185
+ if (ret) {
186
+ return ret;
187
+ }
188
+
189
/* Note that we do not call write_cpustate_to_list()
190
* here, so we are only writing the tuple list back to
191
* KVM. This is safe because nothing can change the
192
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
193
}
194
vfp_set_fpscr(env, fpscr);
195
196
+ ret = kvm_get_vcpu_events(cpu);
197
+ if (ret) {
198
+ return ret;
199
+ }
200
+
201
if (!write_kvmstate_to_list(cpu)) {
202
return EINVAL;
203
}
204
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
205
index XXXXXXX..XXXXXXX 100644
206
--- a/target/arm/kvm64.c
207
+++ b/target/arm/kvm64.c
208
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
209
210
kvm_arm_init_debug(cs);
211
212
+ /* Check whether user space can specify guest syndrome value */
213
+ kvm_arm_init_serror_injection(cs);
214
+
215
return kvm_arm_init_cpreg_list(cpu);
216
}
217
218
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
219
return ret;
220
}
221
222
+ ret = kvm_put_vcpu_events(cpu);
223
+ if (ret) {
224
+ return ret;
225
+ }
226
+
227
if (!write_list_to_kvmstate(cpu, level)) {
228
return EINVAL;
229
}
230
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
231
}
232
vfp_set_fpcr(env, fpr);
233
234
+ ret = kvm_get_vcpu_events(cpu);
235
+ if (ret) {
236
+ return ret;
237
+ }
238
+
239
if (!write_kvmstate_to_list(cpu)) {
240
return EINVAL;
241
}
242
diff --git a/target/arm/machine.c b/target/arm/machine.c
243
index XXXXXXX..XXXXXXX 100644
244
--- a/target/arm/machine.c
245
+++ b/target/arm/machine.c
246
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
247
};
248
#endif /* AARCH64 */
249
250
+static bool serror_needed(void *opaque)
251
+{
252
+ ARMCPU *cpu = opaque;
253
+ CPUARMState *env = &cpu->env;
254
+
255
+ return env->serror.pending != 0;
256
+}
257
+
258
+static const VMStateDescription vmstate_serror = {
259
+ .name = "cpu/serror",
260
+ .version_id = 1,
261
+ .minimum_version_id = 1,
262
+ .needed = serror_needed,
263
+ .fields = (VMStateField[]) {
264
+ VMSTATE_UINT8(env.serror.pending, ARMCPU),
265
+ VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
266
+ VMSTATE_UINT64(env.serror.esr, ARMCPU),
267
+ VMSTATE_END_OF_LIST()
268
+ }
269
+};
270
+
271
static bool m_needed(void *opaque)
272
{
273
ARMCPU *cpu = opaque;
274
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
275
#ifdef TARGET_AARCH64
276
&vmstate_sve,
277
#endif
278
+ &vmstate_serror,
279
NULL
280
}
281
};
282
--
27
--
283
2.19.1
28
2.20.1
284
29
285
30
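As a side note on the FIELD_EX32 -> FIELD_EX64 change in the ID_AA64DFR0 hunk above, here is a minimal stand-alone sketch of why the 64-bit extract matters. The two helpers are simplified stand-ins for QEMU's extract32()/extract64(), and the field positions are invented purely for illustration; this is not code from the patch.

#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for QEMU's extract32()/extract64() helpers. */
static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0U >> (32 - length));
}

static uint64_t extract64(uint64_t value, int start, int length)
{
    return (value >> start) & (~0ULL >> (64 - length));
}

int main(void)
{
    /* Hypothetical 64-bit ID register: one field at [11:8], one at [39:36]. */
    uint64_t id_reg = (0x5ULL << 8) | (0xfULL << 36);

    /* A field in the low word survives a 32-bit extract... */
    assert(extract32((uint32_t)id_reg, 8, 4) == extract64(id_reg, 8, 4));

    /*
     * ...but anything above bit 31 is lost once the register value has
     * been truncated to 32 bits, which is why FIELD_EX64 is the right
     * macro for 64-bit ID registers such as ID_AA64DFR0.
     */
    assert(extract64(id_reg, 36, 4) == 0xf);
    return 0;
}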
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Instantiating mps2-an505 (cortex-m33) will fail make check when
4
V7VE asserts that ID_ISAR0.Divide includes ARM division. It is
5
also wrong to include ARM_FEATURE_LPAE.
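
A stand-alone sketch of the implication logic this fixes; the feature names mirror the patch, but the plain-bool representation and the reduced set of implications are simplifications, not the real arm_cpu_realizefn().

#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool v8, m_profile, v7, v7ve, lpae;
} Features;

static void apply_implications(Features *f)
{
    if (f->v8) {
        if (f->m_profile) {
            f->v7 = true;      /* M-profile v8 must not pick up V7VE */
        } else {
            f->v7ve = true;
        }
    }
    if (f->v7ve) {
        f->v7 = true;
        f->lpae = true;        /* V7VE drags in LPAE, which M-profile lacks */
    }
}

int main(void)
{
    Features m33 = { .v8 = true, .m_profile = true };
    apply_implications(&m33);
    assert(m33.v7 && !m33.v7ve && !m33.lpae);   /* the corrected behaviour */

    Features a57 = { .v8 = true };
    apply_implications(&a57);
    assert(a57.v7ve && a57.lpae);               /* A-profile is unchanged */
    return 0;
}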
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/cpu.c | 6 +++++-
13
1 file changed, 5 insertions(+), 1 deletion(-)
14
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.c
18
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
20
21
/* Some features automatically imply others: */
22
if (arm_feature(env, ARM_FEATURE_V8)) {
23
- set_feature(env, ARM_FEATURE_V7VE);
24
+ if (arm_feature(env, ARM_FEATURE_M)) {
25
+ set_feature(env, ARM_FEATURE_V7);
26
+ } else {
27
+ set_feature(env, ARM_FEATURE_V7VE);
28
+ }
29
}
30
if (arm_feature(env, ARM_FEATURE_V7VE)) {
31
/* v7 Virtualization Extensions. In real hardware this implies
32
--
33
2.19.1
34
35
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The v8.3-RCPC extension implements three new load instructions
2
which provide slightly weaker consistency guarantees than the
3
existing load-acquire operations. For QEMU we choose to simply
4
implement them with a full LDAQ barrier.
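
By analogy only (plain C11, not QEMU code): the guarantee a guest relies on from an RCpc load-acquire, namely that data written before the paired store-release is visible once the load observes the flag, still holds when the load is given full load-acquire strength, so the stronger mapping is safe, merely potentially slower.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static int payload;
static atomic_int flag;

static void *producer(void *arg)
{
    (void)arg;
    payload = 42;                                           /* plain store */
    atomic_store_explicit(&flag, 1, memory_order_release);  /* release */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    /* Acquire load: at least as strong as the ordering the guest asked for. */
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0) {
        /* spin until the flag is observed */
    }
    assert(payload == 42);   /* guaranteed by the release/acquire pairing */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;   /* build with -pthread */
}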
2
5
3
Most of the v8 extensions are self-contained within the ISAR
4
registers and are not implied by other feature bits, which
5
makes them the easiest to convert.
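
A stand-alone sketch of the pattern being introduced here, with a local helper standing in for QEMU's FIELD_EX32/registerfields.h machinery. The AES field is modelled at ID_ISAR5 bits [7:4] as in the architecture, where 1 means the AES instructions are present and 2 additionally means PMULL.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t id_isar5;   /* the ID register value is the source of truth */
} ISARegs;

static uint32_t field_ex32(uint32_t reg, int start, int length)
{
    return (reg >> start) & (~0U >> (32 - length));
}

static bool isar_feature_aes(const ISARegs *id)
{
    return field_ex32(id->id_isar5, 4, 4) != 0;
}

static bool isar_feature_pmull(const ISARegs *id)
{
    return field_ex32(id->id_isar5, 4, 4) > 1;
}

int main(void)
{
    ISARegs id = { .id_isar5 = 2u << 4 };               /* AES field = 2 */
    assert(isar_feature_aes(&id) && isar_feature_pmull(&id));

    id.id_isar5 = 1u << 4;                              /* AES field = 1 */
    assert(isar_feature_aes(&id) && !isar_feature_pmull(&id));
    return 0;
}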
6
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
12
---
9
---
13
target/arm/cpu.h | 131 +++++++++++++++++++++++++++++++++----
10
target/arm/cpu.h | 5 +++++
14
target/arm/translate.h | 7 ++
11
linux-user/elfload.c | 1 +
15
linux-user/elfload.c | 46 ++++++++-----
12
target/arm/cpu64.c | 1 +
16
target/arm/cpu.c | 27 +++++---
13
target/arm/translate-a64.c | 24 ++++++++++++++++++++++++
17
target/arm/cpu64.c | 57 +++++++++-------
14
4 files changed, 31 insertions(+)
18
target/arm/translate-a64.c | 101 ++++++++++++++--------------
19
target/arm/translate.c | 36 +++++-----
20
7 files changed, 273 insertions(+), 132 deletions(-)
21
15
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.h
18
--- a/target/arm/cpu.h
25
+++ b/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
26
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
20
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
27
PSCI_ON_PENDING = 2
21
FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
28
} ARMPSCIState;
22
}
29
23
30
+typedef struct ARMISARegisters ARMISARegisters;
24
+static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
31
+
32
/**
33
* ARMCPU:
34
* @env: #CPUARMState
35
@@ -XXX,XX +XXX,XX @@ enum arm_features {
36
ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
37
ARM_FEATURE_V8,
38
ARM_FEATURE_AARCH64, /* supports 64 bit mode */
39
- ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
40
ARM_FEATURE_CBAR, /* has cp15 CBAR */
41
ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
42
ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
43
ARM_FEATURE_EL2, /* has EL2 Virtualization support */
44
ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
45
- ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
46
- ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
47
- ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
48
ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
49
ARM_FEATURE_PMU, /* has PMU support */
50
ARM_FEATURE_VBAR, /* has cp15 VBAR */
51
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
52
ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
53
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
54
- ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
55
- ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
56
- ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
57
- ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
58
- ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
59
- ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
60
- ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
61
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
62
- ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
63
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
64
};
65
66
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
67
/* Shared between translate-sve.c and sve_helper.c. */
68
extern const uint64_t pred_esz_masks[4];
69
70
+/*
71
+ * 32-bit feature tests via id registers.
72
+ */
73
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
74
+{
25
+{
75
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
26
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
76
+}
27
+}
77
+
28
+
78
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
29
/*
79
+{
30
* Feature tests for "does this exist in either 32-bit or 64-bit?"
80
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
31
*/
81
+}
82
+
83
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
84
+{
85
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
86
+}
87
+
88
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
89
+{
90
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
91
+}
92
+
93
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
94
+{
95
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
96
+}
97
+
98
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
99
+{
100
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
101
+}
102
+
103
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
104
+{
105
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
106
+}
107
+
108
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
109
+{
110
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
111
+}
112
+
113
+/*
114
+ * 64-bit feature tests via id registers.
115
+ */
116
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
117
+{
118
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
119
+}
120
+
121
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
122
+{
123
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
124
+}
125
+
126
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
127
+{
128
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
129
+}
130
+
131
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
132
+{
133
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
134
+}
135
+
136
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
137
+{
138
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
139
+}
140
+
141
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
142
+{
143
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
144
+}
145
+
146
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
147
+{
148
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
149
+}
150
+
151
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
152
+{
153
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
154
+}
155
+
156
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
157
+{
158
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
159
+}
160
+
161
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
162
+{
163
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
164
+}
165
+
166
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
167
+{
168
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
169
+}
170
+
171
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
172
+{
173
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
174
+}
175
+
176
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
177
+{
178
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
179
+}
180
+
181
+/*
182
+ * Forward to the above feature tests given an ARMCPU pointer.
183
+ */
184
+#define cpu_isar_feature(name, cpu) \
185
+ ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
186
+
187
#endif
188
diff --git a/target/arm/translate.h b/target/arm/translate.h
189
index XXXXXXX..XXXXXXX 100644
190
--- a/target/arm/translate.h
191
+++ b/target/arm/translate.h
192
@@ -XXX,XX +XXX,XX @@
193
/* internal defines */
194
typedef struct DisasContext {
195
DisasContextBase base;
196
+ const ARMISARegisters *isar;
197
198
target_ulong pc;
199
target_ulong page_start;
200
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
201
return ret;
202
}
203
204
+/*
205
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
206
+ */
207
+#define dc_isar_feature(name, ctx) \
208
+ ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
209
+
210
#endif /* TARGET_ARM_TRANSLATE_H */
211
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
32
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
212
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
213
--- a/linux-user/elfload.c
34
--- a/linux-user/elfload.c
214
+++ b/linux-user/elfload.c
35
+++ b/linux-user/elfload.c
215
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
36
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
216
/* probe for the extra features */
37
GET_FEATURE_ID(aa64_sb, ARM_HWCAP_A64_SB);
217
#define GET_FEATURE(feat, hwcap) \
38
GET_FEATURE_ID(aa64_condm_4, ARM_HWCAP_A64_FLAGM);
218
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
39
GET_FEATURE_ID(aa64_dcpop, ARM_HWCAP_A64_DCPOP);
219
+
40
+ GET_FEATURE_ID(aa64_rcpc_8_3, ARM_HWCAP_A64_LRCPC);
220
+#define GET_FEATURE_ID(feat, hwcap) \
41
221
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
222
+
223
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
224
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
225
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
226
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
227
ARMCPU *cpu = ARM_CPU(thread_cpu);
228
uint32_t hwcaps = 0;
229
230
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
231
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
232
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
233
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
234
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
235
+ GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
236
+ GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
237
+ GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
238
+ GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
239
+ GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
240
return hwcaps;
42
return hwcaps;
241
}
242
243
#undef GET_FEATURE
244
+#undef GET_FEATURE_ID
245
246
#else
247
/* 64 bit ARM definitions */
248
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
249
/* probe for the extra features */
250
#define GET_FEATURE(feat, hwcap) \
251
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
252
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
253
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
254
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
255
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
256
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
257
- GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
258
- GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
259
- GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
260
- GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
261
+#define GET_FEATURE_ID(feat, hwcap) \
262
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
263
+
264
+ GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
265
+ GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
266
+ GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
267
+ GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
268
+ GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
269
+ GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
270
+ GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
271
+ GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
272
+ GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
273
GET_FEATURE(ARM_FEATURE_V8_FP16,
274
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
275
- GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
276
- GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
277
- GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
278
- GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
279
+ GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
280
+ GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
281
+ GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
282
+ GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
283
GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
284
+
285
#undef GET_FEATURE
286
+#undef GET_FEATURE_ID
287
288
return hwcaps;
289
}
290
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/cpu.c
293
+++ b/target/arm/cpu.c
294
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
295
cortex_a15_initfn(obj);
296
#ifdef CONFIG_USER_ONLY
297
/* We don't set these in system emulation mode for the moment,
298
- * since we don't correctly set the ID registers to advertise them,
299
+ * since we don't correctly set (all of) the ID registers to
300
+ * advertise them.
301
*/
302
set_feature(&cpu->env, ARM_FEATURE_V8);
303
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
304
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
305
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
306
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
307
- set_feature(&cpu->env, ARM_FEATURE_CRC);
308
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
309
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
310
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
311
+ {
312
+ uint32_t t;
313
+
314
+ t = cpu->isar.id_isar5;
315
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
316
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
317
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
318
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
319
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
320
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
321
+ cpu->isar.id_isar5 = t;
322
+
323
+ t = cpu->isar.id_isar6;
324
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
325
+ cpu->isar.id_isar6 = t;
326
+ }
327
#endif
328
}
329
}
43
}
330
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
44
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
331
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/cpu64.c
46
--- a/target/arm/cpu64.c
333
+++ b/target/arm/cpu64.c
47
+++ b/target/arm/cpu64.c
334
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
336
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
337
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
338
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
339
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
340
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
341
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
342
- set_feature(&cpu->env, ARM_FEATURE_CRC);
343
set_feature(&cpu->env, ARM_FEATURE_EL2);
344
set_feature(&cpu->env, ARM_FEATURE_EL3);
345
set_feature(&cpu->env, ARM_FEATURE_PMU);
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
347
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
348
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
349
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
350
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
351
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
352
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
353
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
354
- set_feature(&cpu->env, ARM_FEATURE_CRC);
355
set_feature(&cpu->env, ARM_FEATURE_EL2);
356
set_feature(&cpu->env, ARM_FEATURE_EL3);
357
set_feature(&cpu->env, ARM_FEATURE_PMU);
358
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
359
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
360
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
361
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
362
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
363
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
364
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
365
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
366
- set_feature(&cpu->env, ARM_FEATURE_CRC);
367
set_feature(&cpu->env, ARM_FEATURE_EL2);
368
set_feature(&cpu->env, ARM_FEATURE_EL3);
369
set_feature(&cpu->env, ARM_FEATURE_PMU);
370
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
48
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
371
if (kvm_enabled()) {
49
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
372
kvm_arm_set_cpu_features_from_host(cpu);
50
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
373
} else {
51
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
374
+ uint64_t t;
52
+ t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 1); /* ARMv8.3-RCPC */
375
+ uint32_t u;
53
cpu->isar.id_aa64isar1 = t;
376
aarch64_a57_initfn(obj);
54
377
+
55
t = cpu->isar.id_aa64pfr0;
378
+ t = cpu->isar.id_aa64isar0;
379
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
380
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
381
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
382
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
383
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
384
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
385
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
386
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
387
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
388
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
389
+ cpu->isar.id_aa64isar0 = t;
390
+
391
+ t = cpu->isar.id_aa64isar1;
392
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
393
+ cpu->isar.id_aa64isar1 = t;
394
+
395
+ /* Replicate the same data to the 32-bit id registers. */
396
+ u = cpu->isar.id_isar5;
397
+ u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
398
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
399
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
400
+ u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
401
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
402
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
403
+ cpu->isar.id_isar5 = u;
404
+
405
+ u = cpu->isar.id_isar6;
406
+ u = FIELD_DP32(u, ID_ISAR6, DP, 1);
407
+ cpu->isar.id_isar6 = u;
408
+
409
#ifdef CONFIG_USER_ONLY
410
/* We don't set these in system emulation mode for the moment,
411
* since we don't correctly set the ID registers to advertise them,
412
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
413
* whereas the architecture requires them to be present in both if
414
* present in either.
415
*/
416
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
417
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
418
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
419
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
420
- set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
421
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
422
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
423
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
424
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
425
set_feature(&cpu->env, ARM_FEATURE_SVE);
426
/* For usermode -cpu max we can use a larger and more efficient DCZ
427
* blocksize since we don't have to follow what the hardware does.
428
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
56
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
429
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
430
--- a/target/arm/translate-a64.c
58
--- a/target/arm/translate-a64.c
431
+++ b/target/arm/translate-a64.c
59
+++ b/target/arm/translate-a64.c
432
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
433
}
434
if (rt2 == 31
435
&& ((rt | rs) & 1) == 0
436
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
437
+ && dc_isar_feature(aa64_atomics, s)) {
438
/* CASP / CASPL */
439
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
440
return;
441
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
442
}
443
if (rt2 == 31
444
&& ((rt | rs) & 1) == 0
445
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
446
+ && dc_isar_feature(aa64_atomics, s)) {
447
/* CASPA / CASPAL */
448
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
449
return;
450
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
451
case 0xb: /* CASL */
452
case 0xe: /* CASA */
453
case 0xf: /* CASAL */
454
- if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
455
+ if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
456
gen_compare_and_swap(s, rs, rt, rn, size);
457
return;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
60
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
460
int rs = extract32(insn, 16, 5);
61
int rs = extract32(insn, 16, 5);
461
int rn = extract32(insn, 5, 5);
62
int rn = extract32(insn, 5, 5);
462
int o3_opc = extract32(insn, 12, 4);
63
int o3_opc = extract32(insn, 12, 4);
463
- int feature = ARM_FEATURE_V8_ATOMICS;
64
+ bool r = extract32(insn, 22, 1);
464
TCGv_i64 tcg_rn, tcg_rs;
65
+ bool a = extract32(insn, 23, 1);
66
TCGv_i64 tcg_rs, clean_addr;
465
AtomicThreeOpFn *fn;
67
AtomicThreeOpFn *fn;
466
68
467
- if (is_vector) {
468
+ if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
469
unallocated_encoding(s);
470
return;
471
}
472
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
69
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
473
unallocated_encoding(s);
70
case 010: /* SWP */
474
return;
71
fn = tcg_gen_atomic_xchg_i64;
475
}
476
- if (!arm_dc_feature(s, feature)) {
477
- unallocated_encoding(s);
478
- return;
479
- }
480
481
if (rn == 31) {
482
gen_check_sp_alignment(s);
483
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
484
TCGv_i64 tcg_acc, tcg_val;
485
TCGv_i32 tcg_bytes;
486
487
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)
488
+ if (!dc_isar_feature(aa64_crc32, s)
489
|| (sf == 1 && sz != 3)
490
|| (sf == 0 && sz == 3)) {
491
unallocated_encoding(s);
492
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
493
bool u = extract32(insn, 29, 1);
494
TCGv_i32 ele1, ele2, ele3;
495
TCGv_i64 res;
496
- int feature;
497
+ bool feature;
498
499
switch (u * 16 + opcode) {
500
case 0x10: /* SQRDMLAH (vector) */
501
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
502
unallocated_encoding(s);
503
return;
504
}
505
- feature = ARM_FEATURE_V8_RDM;
506
+ feature = dc_isar_feature(aa64_rdm, s);
507
break;
72
break;
73
+ case 014: /* LDAPR, LDAPRH, LDAPRB */
74
+ if (!dc_isar_feature(aa64_rcpc_8_3, s) ||
75
+ rs != 31 || a != 1 || r != 0) {
76
+ unallocated_encoding(s);
77
+ return;
78
+ }
79
+ break;
508
default:
80
default:
509
unallocated_encoding(s);
81
unallocated_encoding(s);
510
return;
82
return;
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
84
gen_check_sp_alignment(s);
511
}
85
}
512
- if (!arm_dc_feature(s, feature)) {
86
clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
513
+ if (!feature) {
87
+
514
unallocated_encoding(s);
88
+ if (o3_opc == 014) {
515
return;
89
+ /*
516
}
90
+ * LDAPR* are a special case because they are a simple load, not a
517
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
91
+ * fetch-and-do-something op.
518
return;
92
+ * The architectural consistency requirements here are weaker than
519
}
93
+ * full load-acquire (we only need "load-acquire processor consistent"),
520
if (size == 3) {
94
+ * but we choose to implement them as full LDAQ.
521
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
95
+ */
522
+ if (!dc_isar_feature(aa64_pmull, s)) {
96
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, false,
523
unallocated_encoding(s);
97
+ true, rt, disas_ldst_compute_iss_sf(size, false, 0), true);
524
return;
98
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
525
}
99
+ return;
526
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
100
+ }
527
int size = extract32(insn, 22, 2);
101
+
528
bool u = extract32(insn, 29, 1);
102
tcg_rs = read_cpu_reg(s, rs, true);
529
bool is_q = extract32(insn, 30, 1);
103
530
- int feature, rot;
104
if (o3_opc == 1) { /* LDCLR */
531
+ bool feature;
532
+ int rot;
533
534
switch (u * 16 + opcode) {
535
case 0x10: /* SQRDMLAH (vector) */
536
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
537
unallocated_encoding(s);
538
return;
539
}
540
- feature = ARM_FEATURE_V8_RDM;
541
+ feature = dc_isar_feature(aa64_rdm, s);
542
break;
543
case 0x02: /* SDOT (vector) */
544
case 0x12: /* UDOT (vector) */
545
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
546
unallocated_encoding(s);
547
return;
548
}
549
- feature = ARM_FEATURE_V8_DOTPROD;
550
+ feature = dc_isar_feature(aa64_dp, s);
551
break;
552
case 0x18: /* FCMLA, #0 */
553
case 0x19: /* FCMLA, #90 */
554
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
555
unallocated_encoding(s);
556
return;
557
}
558
- feature = ARM_FEATURE_V8_FCMA;
559
+ feature = dc_isar_feature(aa64_fcma, s);
560
break;
561
default:
562
unallocated_encoding(s);
563
return;
564
}
565
- if (!arm_dc_feature(s, feature)) {
566
+ if (!feature) {
567
unallocated_encoding(s);
568
return;
569
}
570
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
571
break;
572
case 0x1d: /* SQRDMLAH */
573
case 0x1f: /* SQRDMLSH */
574
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
575
+ if (!dc_isar_feature(aa64_rdm, s)) {
576
unallocated_encoding(s);
577
return;
578
}
579
break;
580
case 0x0e: /* SDOT */
581
case 0x1e: /* UDOT */
582
- if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
583
+ if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
584
unallocated_encoding(s);
585
return;
586
}
587
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
588
case 0x13: /* FCMLA #90 */
589
case 0x15: /* FCMLA #180 */
590
case 0x17: /* FCMLA #270 */
591
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
592
+ if (!dc_isar_feature(aa64_fcma, s)) {
593
unallocated_encoding(s);
594
return;
595
}
596
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
597
TCGv_i32 tcg_decrypt;
598
CryptoThreeOpIntFn *genfn;
599
600
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
601
- || size != 0) {
602
+ if (!dc_isar_feature(aa64_aes, s) || size != 0) {
603
unallocated_encoding(s);
604
return;
605
}
606
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
607
int rd = extract32(insn, 0, 5);
608
CryptoThreeOpFn *genfn;
609
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
610
- int feature = ARM_FEATURE_V8_SHA256;
611
+ bool feature;
612
613
if (size != 0) {
614
unallocated_encoding(s);
615
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
616
case 2: /* SHA1M */
617
case 3: /* SHA1SU0 */
618
genfn = NULL;
619
- feature = ARM_FEATURE_V8_SHA1;
620
+ feature = dc_isar_feature(aa64_sha1, s);
621
break;
622
case 4: /* SHA256H */
623
genfn = gen_helper_crypto_sha256h;
624
+ feature = dc_isar_feature(aa64_sha256, s);
625
break;
626
case 5: /* SHA256H2 */
627
genfn = gen_helper_crypto_sha256h2;
628
+ feature = dc_isar_feature(aa64_sha256, s);
629
break;
630
case 6: /* SHA256SU1 */
631
genfn = gen_helper_crypto_sha256su1;
632
+ feature = dc_isar_feature(aa64_sha256, s);
633
break;
634
default:
635
unallocated_encoding(s);
636
return;
637
}
638
639
- if (!arm_dc_feature(s, feature)) {
640
+ if (!feature) {
641
unallocated_encoding(s);
642
return;
643
}
644
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
645
int rn = extract32(insn, 5, 5);
646
int rd = extract32(insn, 0, 5);
647
CryptoTwoOpFn *genfn;
648
- int feature;
649
+ bool feature;
650
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
651
652
if (size != 0) {
653
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
654
655
switch (opcode) {
656
case 0: /* SHA1H */
657
- feature = ARM_FEATURE_V8_SHA1;
658
+ feature = dc_isar_feature(aa64_sha1, s);
659
genfn = gen_helper_crypto_sha1h;
660
break;
661
case 1: /* SHA1SU1 */
662
- feature = ARM_FEATURE_V8_SHA1;
663
+ feature = dc_isar_feature(aa64_sha1, s);
664
genfn = gen_helper_crypto_sha1su1;
665
break;
666
case 2: /* SHA256SU0 */
667
- feature = ARM_FEATURE_V8_SHA256;
668
+ feature = dc_isar_feature(aa64_sha256, s);
669
genfn = gen_helper_crypto_sha256su0;
670
break;
671
default:
672
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
673
return;
674
}
675
676
- if (!arm_dc_feature(s, feature)) {
677
+ if (!feature) {
678
unallocated_encoding(s);
679
return;
680
}
681
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
682
int rm = extract32(insn, 16, 5);
683
int rn = extract32(insn, 5, 5);
684
int rd = extract32(insn, 0, 5);
685
- int feature;
686
+ bool feature;
687
CryptoThreeOpFn *genfn;
688
689
if (o == 0) {
690
switch (opcode) {
691
case 0: /* SHA512H */
692
- feature = ARM_FEATURE_V8_SHA512;
693
+ feature = dc_isar_feature(aa64_sha512, s);
694
genfn = gen_helper_crypto_sha512h;
695
break;
696
case 1: /* SHA512H2 */
697
- feature = ARM_FEATURE_V8_SHA512;
698
+ feature = dc_isar_feature(aa64_sha512, s);
699
genfn = gen_helper_crypto_sha512h2;
700
break;
701
case 2: /* SHA512SU1 */
702
- feature = ARM_FEATURE_V8_SHA512;
703
+ feature = dc_isar_feature(aa64_sha512, s);
704
genfn = gen_helper_crypto_sha512su1;
705
break;
706
case 3: /* RAX1 */
707
- feature = ARM_FEATURE_V8_SHA3;
708
+ feature = dc_isar_feature(aa64_sha3, s);
709
genfn = NULL;
710
break;
711
}
712
} else {
713
switch (opcode) {
714
case 0: /* SM3PARTW1 */
715
- feature = ARM_FEATURE_V8_SM3;
716
+ feature = dc_isar_feature(aa64_sm3, s);
717
genfn = gen_helper_crypto_sm3partw1;
718
break;
719
case 1: /* SM3PARTW2 */
720
- feature = ARM_FEATURE_V8_SM3;
721
+ feature = dc_isar_feature(aa64_sm3, s);
722
genfn = gen_helper_crypto_sm3partw2;
723
break;
724
case 2: /* SM4EKEY */
725
- feature = ARM_FEATURE_V8_SM4;
726
+ feature = dc_isar_feature(aa64_sm4, s);
727
genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
822
index XXXXXXX..XXXXXXX 100644
823
--- a/target/arm/translate.c
824
+++ b/target/arm/translate.c
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
827
int q, int rd, int rn, int rm)
828
{
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
830
+ if (dc_isar_feature(aa32_rdm, s)) {
831
int opr_sz = (1 + q) * 8;
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
833
vfp_reg_offset(1, rn),
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
835
return 1;
836
}
837
if (!u) { /* SHA-1 */
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
840
return 1;
841
}
842
ptr1 = vfp_reg_ptr(true, rd);
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
845
tcg_temp_free_i32(tmp4);
846
} else { /* SHA-256 */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
849
return 1;
850
}
851
ptr1 = vfp_reg_ptr(true, rd);
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
975
--
105
--
976
2.19.1
106
2.20.1
977
107
978
108
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The v8.4-RCPC extension implements some new instructions:
2
* LDAPUR, LDAPURB, LDAPURH, LDAPURSB, LDAPURSH, LDAPURSW
3
* STLUR, STLURB, STLURH
2
4
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
These are all in a new subgroup of encodings that sits below the
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
top-level "Loads and Stores" group in the Arm ARM.
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
7
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
The STLUR* instructions have standard store-release semantics; the
9
LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose
10
to implement them as the slightly stronger Load-Acquire.
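
A small self-contained sketch of pulling the fields back out of that encoding group, using simplified stand-ins for QEMU's extract32()/sextract32(). The instruction word is synthesised from the field layout documented in the new decode function further down, purely to show why the imm9 offset needs a signed extract.

#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for QEMU's extract32()/sextract32(). */
static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0U >> (32 - length));
}

static int32_t sextract32(uint32_t value, int start, int length)
{
    /* Shift the field up to the sign bit, then arithmetic-shift back down. */
    return (int32_t)(value << (32 - length - start)) >> (32 - length);
}

int main(void)
{
    /*
     * size[31:30] 0 1 1 0 0 1 opc[23:22] 0 imm9[20:12] 0 0 Rn[9:5] Rt[4:0]
     * (the layout quoted in disas_ldst_ldapr_stlr() below)
     */
    uint32_t size = 3, opc = 1, rn = 1, rt = 0;
    int32_t imm9 = -4;                      /* a negative unscaled offset */
    uint32_t insn = (size << 30) | (0x19u << 24) | (opc << 22) |
                    (((uint32_t)imm9 & 0x1ffu) << 12) | (rn << 5) | rt;

    assert(extract32(insn, 0, 5) == rt);
    assert(extract32(insn, 5, 5) == rn);
    assert(extract32(insn, 22, 2) == opc);
    assert(extract32(insn, 30, 2) == size);
    /* The offset must come back sign-extended, hence sextract32. */
    assert(sextract32(insn, 12, 9) == -4);
    return 0;
}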
11
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
8
---
15
---
9
target/arm/cpu.h | 16 +++++++++++++++-
16
target/arm/cpu.h | 5 +++
10
linux-user/aarch64/signal.c | 4 ++--
17
linux-user/elfload.c | 1 +
11
linux-user/elfload.c | 2 +-
18
target/arm/cpu64.c | 2 +-
12
linux-user/syscall.c | 10 ++++++----
19
target/arm/translate-a64.c | 90 ++++++++++++++++++++++++++++++++++++++
13
target/arm/cpu64.c | 5 ++++-
20
4 files changed, 97 insertions(+), 1 deletion(-)
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
21
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
24
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
25
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
26
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
27
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
26
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
36
+
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
38
39
/* If adding a feature bit which corresponds to a Linux ELF
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
41
ARM_FEATURE_PMU, /* has PMU support */
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
47
};
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
50
}
28
}
51
29
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
30
+static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
53
+{
31
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
32
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
55
+}
33
+}
56
+
34
+
57
/*
35
/*
58
* Forward to the above feature tests given an ARMCPU pointer.
36
* Feature tests for "does this exist in either 32-bit or 64-bit?"
59
*/
37
*/
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/linux-user/aarch64/signal.c
63
+++ b/linux-user/aarch64/signal.c
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
65
break;
66
67
case TARGET_SVE_MAGIC:
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
72
if (!sve && size == sve_size) {
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
74
&layout);
75
76
/* SVE state needs saving only if it exists. */
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
38
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
83
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
84
--- a/linux-user/elfload.c
40
--- a/linux-user/elfload.c
85
+++ b/linux-user/elfload.c
41
+++ b/linux-user/elfload.c
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
42
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
43
GET_FEATURE_ID(aa64_condm_4, ARM_HWCAP_A64_FLAGM);
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
44
GET_FEATURE_ID(aa64_dcpop, ARM_HWCAP_A64_DCPOP);
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
45
GET_FEATURE_ID(aa64_rcpc_8_3, ARM_HWCAP_A64_LRCPC);
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
46
+ GET_FEATURE_ID(aa64_rcpc_8_4, ARM_HWCAP_A64_ILRCPC);
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
47
92
48
return hwcaps;
93
#undef GET_FEATURE
49
}
94
#undef GET_FEATURE_ID
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/linux-user/syscall.c
98
+++ b/linux-user/syscall.c
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
100
* even though the current architectural maximum is VQ=16.
101
*/
102
ret = -TARGET_EINVAL;
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
106
CPUARMState *env = cpu_env;
107
ARMCPU *cpu = arm_env_get_cpu(env);
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
109
return ret;
110
case TARGET_PR_SVE_GET_VL:
111
ret = -TARGET_EINVAL;
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
113
- CPUARMState *env = cpu_env;
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
115
+ {
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
119
+ }
120
}
121
return ret;
122
#endif /* AARCH64 */
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
50
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
124
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/cpu64.c
52
--- a/target/arm/cpu64.c
126
+++ b/target/arm/cpu64.c
53
+++ b/target/arm/cpu64.c
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
54
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
55
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
56
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
57
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
58
- t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 1); /* ARMv8.3-RCPC */
59
+ t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
129
cpu->isar.id_aa64isar1 = t;
60
cpu->isar.id_aa64isar1 = t;
130
61
131
+ t = cpu->isar.id_aa64pfr0;
62
t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_one_arm_cp_reg(cpu, &sctlr);
152
}
153
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
160
uint32_t flags;
161
162
if (is_a64(env)) {
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
164
+
165
*pc = env->pc;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
167
/* Get control bits for tagged addresses */
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
170
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
173
int sve_el = sve_exception_el(env, current_el);
174
uint32_t zcr_len;
175
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
178
int new_el, bool el0_a64)
179
{
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
181
int old_len, new_len;
182
bool old_a64, new_a64;
183
184
/* Nothing to do if no SVE. */
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
187
return;
188
}
189
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
191
index XXXXXXX..XXXXXXX 100644
192
--- a/target/arm/machine.c
193
+++ b/target/arm/machine.c
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
195
static bool sve_needed(void *opaque)
196
{
197
ARMCPU *cpu = opaque;
198
- CPUARMState *env = &cpu->env;
199
200
- return arm_feature(env, ARM_FEATURE_SVE);
201
+ return cpu_isar_feature(aa64_sve, cpu);
202
}
203
204
/* The first two words of each Zreg is stored in VFP state. */
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
63
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
206
index XXXXXXX..XXXXXXX 100644
64
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate-a64.c
65
--- a/target/arm/translate-a64.c
208
+++ b/target/arm/translate-a64.c
66
+++ b/target/arm/translate-a64.c
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
67
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
68
}
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
69
}
212
70
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
71
+/*
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
72
+ * LDAPR/STLR (unscaled immediate)
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
73
+ *
216
74
+ * 31 30 24 22 21 12 10 5 0
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
75
+ * +------+-------------+-----+---+--------+-----+----+-----+
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
76
+ * | size | 0 1 1 0 0 1 | opc | 0 | imm9 | 0 0 | Rn | Rt |
77
+ * +------+-------------+-----+---+--------+-----+----+-----+
78
+ *
79
+ * Rt: source or destination register
80
+ * Rn: base register
81
+ * imm9: unscaled immediate offset
82
+ * opc: 00: STLUR*, 01/10/11: various LDAPUR*
83
+ * size: size of load/store
84
+ */
85
+static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
86
+{
87
+ int rt = extract32(insn, 0, 5);
88
+ int rn = extract32(insn, 5, 5);
89
+ int offset = sextract32(insn, 12, 9);
90
+ int opc = extract32(insn, 22, 2);
91
+ int size = extract32(insn, 30, 2);
92
+ TCGv_i64 clean_addr, dirty_addr;
93
+ bool is_store = false;
94
+ bool is_signed = false;
95
+ bool extend = false;
96
+ bool iss_sf;
97
+
98
+ if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
99
+ unallocated_encoding(s);
100
+ return;
101
+ }
102
+
103
+ switch (opc) {
104
+ case 0: /* STLURB */
105
+ is_store = true;
106
+ break;
107
+ case 1: /* LDAPUR* */
108
+ break;
109
+ case 2: /* LDAPURS* 64-bit variant */
110
+ if (size == 3) {
111
+ unallocated_encoding(s);
112
+ return;
113
+ }
114
+ is_signed = true;
115
+ break;
116
+ case 3: /* LDAPURS* 32-bit variant */
117
+ if (size > 1) {
118
+ unallocated_encoding(s);
119
+ return;
120
+ }
121
+ is_signed = true;
122
+ extend = true; /* zero-extend 32->64 after signed load */
123
+ break;
124
+ default:
125
+ g_assert_not_reached();
126
+ }
127
+
128
+ iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
129
+
130
+ if (rn == 31) {
131
+ gen_check_sp_alignment(s);
132
+ }
133
+
134
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
135
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
136
+ clean_addr = clean_data_tbi(s, dirty_addr);
137
+
138
+ if (is_store) {
139
+ /* Store-Release semantics */
140
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
141
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, size, true, rt, iss_sf, true);
142
+ } else {
143
+ /*
144
+ * Load-AcquirePC semantics; we implement as the slightly more
145
+ * restrictive Load-Acquire.
146
+ */
147
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, is_signed, extend,
148
+ true, rt, iss_sf, true);
149
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
150
+ }
151
+}
152
+
153
/* Load/store register (all forms) */
154
static void disas_ldst_reg(DisasContext *s, uint32_t insn)
155
{
156
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
157
case 0x0d: /* AdvSIMD load/store single structure */
158
disas_ldst_single_struct(s, insn);
159
break;
160
+ case 0x19: /* LDAPR/STLR (unscaled immediate) */
161
+ if (extract32(insn, 10, 2) != 0 ||
162
+ extract32(insn, 21, 1) != 0) {
163
+ unallocated_encoding(s);
164
+ break;
165
+ }
166
+ disas_ldst_ldapr_stlr(s, insn);
167
+ break;
168
default:
219
unallocated_encoding(s);
169
unallocated_encoding(s);
220
break;
170
break;
221
case 0x2:
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
224
unallocated_encoding(s);
225
}
226
break;
227
--
171
--
228
2.19.1
172
2.20.1
229
173
230
174
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers
2
have a format that uses the full 64 bit width of the register, and
3
adds a new CCSIDR2 register so AArch32 can get at the high 32 bits.
2
4
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
QEMU doesn't implement caches, so we just treat these ID registers as
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
opaque values that are set to the correct constant values for each
5
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
7
CPU. The only thing we need to do is allow 64-bit values in our
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
ccsidr[] array and provide the CCSIDR2 accessors.
9
10
We don't set the CCIDX field in our 'max' CPU because the CCSIDR
11
constant values we use are the same as the ones used by the
12
Cortex-A57 and they are in the old 32-bit format. This means
13
that the extra regdef added here is unused currently, but it
14
means that whenever in the future we add a CPU that does need
15
the new 64-bit format it will just work when we set the ccsidr
16
values and the ID registers for it.
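As a rough sketch of how the widened array is consumed (using the same
extract64() bitfield helper the patch itself uses; the two function names
here are purely illustrative and not part of the patch), the AArch32 view
of a 64-bit CCSIDR entry is just its two halves:

    /* Illustrative only: CCSIDR returns bits [31:0] of the stored value,
     * CCSIDR2 returns bits [63:32], which is what the new ccsidr2_read()
     * accessor does.
     */
    static uint32_t ccsidr_low(uint64_t ccsidr)  { return extract64(ccsidr, 0, 32); }
    static uint32_t ccsidr_high(uint64_t ccsidr) { return extract64(ccsidr, 32, 32); }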
17
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20200224182626.29252-1-peter.maydell@linaro.org
8
---
21
---
9
target/arm/cpu.h | 17 +++++++++++++++-
22
target/arm/cpu.h | 17 ++++++++++++++++-
10
linux-user/elfload.c | 6 +-----
23
target/arm/helper.c | 19 +++++++++++++++++++
11
target/arm/cpu64.c | 16 ++++++++-------
24
2 files changed, 35 insertions(+), 1 deletion(-)
12
target/arm/helper.c | 2 +-
13
target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
14
target/arm/translate.c | 6 +++---
15
6 files changed, 50 insertions(+), 37 deletions(-)
16
25
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
28
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
29
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ enum arm_features {
30
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
22
ARM_FEATURE_PMU, /* has PMU support */
31
/* The elements of this array are the CCSIDR values for each cache,
23
ARM_FEATURE_VBAR, /* has cp15 VBAR */
32
* in the order L1DCache, L1ICache, L2DCache, L2ICache, etc.
24
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
33
*/
25
- ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
34
- uint32_t ccsidr[16];
26
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
35
+ uint64_t ccsidr[16];
27
};
36
uint64_t reset_cbar;
28
37
uint32_t reset_auxcr;
29
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
38
bool reset_hivecs;
30
return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
39
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ac2(const ARMISARegisters *id)
40
return FIELD_EX32(id->id_mmfr4, ID_MMFR4, AC2) != 0;
31
}
41
}
32
42
33
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
43
+static inline bool isar_feature_aa32_ccidx(const ARMISARegisters *id)
34
+{
44
+{
35
+ /*
45
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, CCIDX) != 0;
36
+ * This is a placeholder for use by VCMA until the rest of
37
+ * the ARMv8.2-FP16 extension is implemented for aa32 mode.
38
+ * At which point we can properly set and check MVFR1.FPHP.
39
+ */
40
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
41
+}
46
+}
42
+
47
+
43
/*
48
/*
44
* 64-bit feature tests via id registers.
49
* 64-bit feature tests via id registers.
45
*/
50
*/
46
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
51
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
47
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
52
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
48
}
53
}
49
54
50
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
55
+static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
51
+{
56
+{
52
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
57
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
53
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
54
+}
58
+}
55
+
59
+
56
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
60
/*
57
{
61
* Feature tests for "does this exist in either 32-bit or 64-bit?"
58
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
62
*/
59
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
63
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_pmu_8_4(const ARMISARegisters *id)
60
index XXXXXXX..XXXXXXX 100644
64
return isar_feature_aa64_pmu_8_4(id) || isar_feature_aa32_pmu_8_4(id);
61
--- a/linux-user/elfload.c
65
}
62
+++ b/linux-user/elfload.c
66
63
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
67
+static inline bool isar_feature_any_ccidx(const ARMISARegisters *id)
64
hwcaps |= ARM_HWCAP_A64_ASIMD;
68
+{
65
69
+ return isar_feature_aa64_ccidx(id) || isar_feature_aa32_ccidx(id);
66
/* probe for the extra features */
70
+}
67
-#define GET_FEATURE(feat, hwcap) \
68
- do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
69
#define GET_FEATURE_ID(feat, hwcap) \
70
do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
71
72
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
73
GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
74
GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
75
GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
76
- GET_FEATURE(ARM_FEATURE_V8_FP16,
77
- ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
78
+ GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
79
GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
80
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
81
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
82
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
83
GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
84
85
-#undef GET_FEATURE
86
#undef GET_FEATURE_ID
87
88
return hwcaps;
89
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/cpu64.c
92
+++ b/target/arm/cpu64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
94
95
t = cpu->isar.id_aa64pfr0;
96
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
97
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
98
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
99
cpu->isar.id_aa64pfr0 = t;
100
101
/* Replicate the same data to the 32-bit id registers. */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
103
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
104
cpu->isar.id_isar6 = u;
105
106
-#ifdef CONFIG_USER_ONLY
107
- /* We don't set these in system emulation mode for the moment,
108
- * since we don't correctly set the ID registers to advertise them,
109
- * and in some cases they're only available in AArch64 and not AArch32,
110
- * whereas the architecture requires them to be present in both if
111
- * present in either.
112
+ /*
113
+ * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
114
+ * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
115
+ * but it is also not legal to enable SVE without support for FP16,
116
+ * and enabling SVE in system mode is more useful in the short term.
117
*/
118
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
119
+
71
+
120
+#ifdef CONFIG_USER_ONLY
72
/*
121
/* For usermode -cpu max we can use a larger and more efficient DCZ
73
* Forward to the above feature tests given an ARMCPU pointer.
122
* blocksize since we don't have to follow what the hardware does.
74
*/
123
*/
124
diff --git a/target/arm/helper.c b/target/arm/helper.c
75
diff --git a/target/arm/helper.c b/target/arm/helper.c
125
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/helper.c
77
--- a/target/arm/helper.c
127
+++ b/target/arm/helper.c
78
+++ b/target/arm/helper.c
128
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
79
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
129
uint32_t changed;
80
REGINFO_SENTINEL
130
81
};
131
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
82
132
- if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
83
+static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri)
133
+ if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
84
+{
134
val &= ~FPCR_FZ16;
85
+ /* Read the high 32 bits of the current CCSIDR */
86
+ return extract64(ccsidr_read(env, ri), 32, 32);
87
+}
88
+
89
+static const ARMCPRegInfo ccsidr2_reginfo[] = {
90
+ { .name = "CCSIDR2", .state = ARM_CP_STATE_BOTH,
91
+ .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 2,
92
+ .access = PL1_R,
93
+ .accessfn = access_aa64_tid2,
94
+ .readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
95
+ REGINFO_SENTINEL
96
+};
97
+
98
static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
99
bool isread)
100
{
101
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
102
define_arm_cp_regs(cpu, predinv_reginfo);
135
}
103
}
136
104
137
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
105
+ if (cpu_isar_feature(any_ccidx, cpu)) {
138
index XXXXXXX..XXXXXXX 100644
106
+ define_arm_cp_regs(cpu, ccsidr2_reginfo);
139
--- a/target/arm/translate-a64.c
107
+ }
140
+++ b/target/arm/translate-a64.c
108
+
141
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
109
#ifndef CONFIG_USER_ONLY
142
break;
110
/*
143
case 3:
111
* Register redirections and aliases must be done last,
144
size = MO_16;
145
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
146
+ if (dc_isar_feature(aa64_fp16, s)) {
147
break;
148
}
149
/* fallthru */
150
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
151
break;
152
case 3:
153
size = MO_16;
154
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
155
+ if (dc_isar_feature(aa64_fp16, s)) {
156
break;
157
}
158
/* fallthru */
159
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
160
break;
161
case 3:
162
sz = MO_16;
163
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
164
+ if (dc_isar_feature(aa64_fp16, s)) {
165
break;
166
}
167
/* fallthru */
168
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
169
handle_fp_1src_double(s, opcode, rd, rn);
170
break;
171
case 3:
172
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
173
+ if (!dc_isar_feature(aa64_fp16, s)) {
174
unallocated_encoding(s);
175
return;
176
}
177
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
178
handle_fp_2src_double(s, opcode, rd, rn, rm);
179
break;
180
case 3:
181
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
182
+ if (!dc_isar_feature(aa64_fp16, s)) {
183
unallocated_encoding(s);
184
return;
185
}
186
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
187
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
188
break;
189
case 3:
190
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
191
+ if (!dc_isar_feature(aa64_fp16, s)) {
192
unallocated_encoding(s);
193
return;
194
}
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
196
break;
197
case 3:
198
sz = MO_16;
199
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
200
+ if (dc_isar_feature(aa64_fp16, s)) {
201
break;
202
}
203
/* fallthru */
204
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
205
case 1: /* float64 */
206
break;
207
case 3: /* float16 */
208
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
209
+ if (dc_isar_feature(aa64_fp16, s)) {
210
break;
211
}
212
/* fallthru */
213
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
214
break;
215
case 0x6: /* 16-bit float, 32-bit int */
216
case 0xe: /* 16-bit float, 64-bit int */
217
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
218
+ if (dc_isar_feature(aa64_fp16, s)) {
219
break;
220
}
221
/* fallthru */
222
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
223
case 1: /* float64 */
224
break;
225
case 3: /* float16 */
226
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
227
+ if (dc_isar_feature(aa64_fp16, s)) {
228
break;
229
}
230
/* fallthru */
231
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
232
*/
233
is_min = extract32(size, 1, 1);
234
is_fp = true;
235
- if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
236
+ if (!is_u && dc_isar_feature(aa64_fp16, s)) {
237
size = 1;
238
} else if (!is_u || !is_q || extract32(size, 0, 1)) {
239
unallocated_encoding(s);
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
241
242
if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
243
/* Check for FMOV (vector, immediate) - half-precision */
244
- if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
245
+ if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
246
unallocated_encoding(s);
247
return;
248
}
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
250
case 0x2f: /* FMINP */
251
/* FP op, size[0] is 32 or 64 bit*/
252
if (!u) {
253
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
254
+ if (!dc_isar_feature(aa64_fp16, s)) {
255
unallocated_encoding(s);
256
return;
257
} else {
258
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
259
size = MO_32;
260
} else if (immh & 2) {
261
size = MO_16;
262
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
263
+ if (!dc_isar_feature(aa64_fp16, s)) {
264
unallocated_encoding(s);
265
return;
266
}
267
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
268
size = MO_32;
269
} else if (immh & 0x2) {
270
size = MO_16;
271
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
272
+ if (!dc_isar_feature(aa64_fp16, s)) {
273
unallocated_encoding(s);
274
return;
275
}
276
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
277
return;
278
}
279
280
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
281
+ if (!dc_isar_feature(aa64_fp16, s)) {
282
unallocated_encoding(s);
283
}
284
285
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
286
TCGv_ptr fpst;
287
bool pairwise = false;
288
289
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
290
+ if (!dc_isar_feature(aa64_fp16, s)) {
291
unallocated_encoding(s);
292
return;
293
}
294
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
295
case 0x1c: /* FCADD, #90 */
296
case 0x1e: /* FCADD, #270 */
297
if (size == 0
298
- || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
299
+ || (size == 1 && !dc_isar_feature(aa64_fp16, s))
300
|| (size == 3 && !is_q)) {
301
unallocated_encoding(s);
302
return;
303
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
304
bool need_fpst = true;
305
int rmode;
306
307
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
308
+ if (!dc_isar_feature(aa64_fp16, s)) {
309
unallocated_encoding(s);
310
return;
311
}
312
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
313
}
314
break;
315
}
316
- if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
317
+ if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
318
unallocated_encoding(s);
319
return;
320
}
321
diff --git a/target/arm/translate.c b/target/arm/translate.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/translate.c
324
+++ b/target/arm/translate.c
325
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
326
int size = extract32(insn, 20, 1);
327
data = extract32(insn, 23, 2); /* rot */
328
if (!dc_isar_feature(aa32_vcma, s)
329
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
330
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
331
return 1;
332
}
333
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
334
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
335
int size = extract32(insn, 20, 1);
336
data = extract32(insn, 24, 1); /* rot */
337
if (!dc_isar_feature(aa32_vcma, s)
338
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
340
return 1;
341
}
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
344
return 1;
345
}
346
if (size == 0) {
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
349
return 1;
350
}
351
/* For fp16, rm is just Vm, and index is M. */
352
--
112
--
353
2.19.1
113
2.20.1
354
114
355
115
diff view generated by jsdifflib
Deleted patch
1
For AArch32, exception return happens through certain kinds
2
of CPSR write. We don't currently have any CPU_LOG_INT logging
3
of these events (unlike AArch64, where we log in the ERET
4
instruction). Add some suitable logging.
5
1
6
This will log exception returns like this:
7
Exception return from AArch32 hyp to usr PC 0x80100374
8
9
paralleling the existing logging in the exception_return
10
helper for AArch64 exception returns:
11
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
12
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c
13
14
(Note that an AArch32 exception return can only be
15
AArch32->AArch32, never to AArch64.)
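The new messages are emitted via the existing log masks, so (assuming the
usual -d option names for those masks) they can be seen with:

    qemu-system-arm -d int ...           # CPU_LOG_INT: exception entry/return
    qemu-system-arm -d guest_errors ...  # LOG_GUEST_ERROR: illegal mode switch attempts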
16
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
20
---
21
target/arm/internals.h | 18 ++++++++++++++++++
22
target/arm/helper.c | 10 ++++++++++
23
target/arm/translate.c | 7 +------
24
3 files changed, 29 insertions(+), 6 deletions(-)
25
26
diff --git a/target/arm/internals.h b/target/arm/internals.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/internals.h
29
+++ b/target/arm/internals.h
30
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
31
}
32
}
33
34
+/**
35
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
36
+ * @psr: Program Status Register indicating CPU mode
37
+ *
38
+ * Returns, for debug logging purposes, a printable representation
39
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
40
+ * the low bits of the specified PSR.
41
+ */
42
+static inline const char *aarch32_mode_name(uint32_t psr)
43
+{
44
+ static const char cpu_mode_names[16][4] = {
45
+ "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
46
+ "???", "???", "hyp", "und", "???", "???", "???", "sys"
47
+ };
48
+
49
+ return cpu_mode_names[psr & 0xf];
50
+}
51
+
52
#endif
53
diff --git a/target/arm/helper.c b/target/arm/helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/helper.c
56
+++ b/target/arm/helper.c
57
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
58
mask |= CPSR_IL;
59
val |= CPSR_IL;
60
}
61
+ qemu_log_mask(LOG_GUEST_ERROR,
62
+ "Illegal AArch32 mode switch attempt from %s to %s\n",
63
+ aarch32_mode_name(env->uncached_cpsr),
64
+ aarch32_mode_name(val));
65
} else {
66
+ qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
67
+ write_type == CPSRWriteExceptionReturn ?
68
+ "Exception return from AArch32" :
69
+ "AArch32 mode switch from",
70
+ aarch32_mode_name(env->uncached_cpsr),
71
+ aarch32_mode_name(val), env->regs[15]);
72
switch_mode(env, val & CPSR_M);
73
}
74
}
75
diff --git a/target/arm/translate.c b/target/arm/translate.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/translate.c
78
+++ b/target/arm/translate.c
79
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
80
translator_loop(ops, &dc.base, cpu, tb);
81
}
82
83
-static const char *cpu_mode_names[16] = {
84
- "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
85
- "???", "???", "hyp", "und", "???", "???", "???", "sys"
86
-};
87
-
88
void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
89
int flags)
90
{
91
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
92
psr & CPSR_V ? 'V' : '-',
93
psr & CPSR_T ? 'T' : 'A',
94
ns_status,
95
- cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
96
+ aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
97
}
98
99
if (flags & CPU_DUMP_FPU) {
100
--
101
2.19.1
102
103
diff view generated by jsdifflib
Deleted patch
1
The switch_mode() function is defined in target/arm/helper.c and used
2
only in that file and nowhere else, so we can make it file-local
3
rather than global.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
8
---
9
target/arm/internals.h | 1 -
10
target/arm/helper.c | 6 ++++--
11
2 files changed, 4 insertions(+), 3 deletions(-)
12
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/internals.h
16
+++ b/target/arm/internals.h
17
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
18
g_assert_not_reached();
19
}
20
21
-void switch_mode(CPUARMState *, int);
22
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
23
void arm_translate_init(void);
24
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
30
V8M_SAttributes *sattrs);
31
#endif
32
33
+static void switch_mode(CPUARMState *env, int mode);
34
+
35
static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
36
{
37
int nregs;
38
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
39
return 0;
40
}
41
42
-void switch_mode(CPUARMState *env, int mode)
43
+static void switch_mode(CPUARMState *env, int mode)
44
{
45
ARMCPU *cpu = arm_env_get_cpu(env);
46
47
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
48
49
#else
50
51
-void switch_mode(CPUARMState *env, int mode)
52
+static void switch_mode(CPUARMState *env, int mode)
53
{
54
int old_mode;
55
int i;
56
--
57
2.19.1
58
59
diff view generated by jsdifflib
Deleted patch
1
The HCR.FB virtualization configuration register bit requests that
2
TLB maintenance, branch predictor invalidate-all and icache
3
invalidate-all operations performed in NS EL1 should be upgraded
4
from "local CPU only" to "broadcast within Inner Shareable domain".
5
For QEMU we NOP the branch predictor and icache operations, so
6
we only need to upgrade the TLB invalidates:
7
AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
8
ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
9
AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
10
TLBI VALE1, TLBI VAALE1
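Each of the affected write functions gets the same small dispatch, sketched
here for TLBIALL (the real hunks for all of the ops are below):

    if (tlb_force_broadcast(env)) {
        /* upgraded: behave like the Inner Shareable (broadcast) variant */
        tlbiall_is_write(env, NULL, value);
        return;
    }
    /* otherwise the operation stays local to this CPU */
    tlb_flush(CPU(cpu));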
11
1
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
15
---
16
target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
17
1 file changed, 116 insertions(+), 75 deletions(-)
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
raw_write(env, ri, value);
25
}
26
27
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
- uint64_t value)
29
-{
30
- /* Invalidate all (TLBIALL) */
31
- ARMCPU *cpu = arm_env_get_cpu(env);
32
-
33
- tlb_flush(CPU(cpu));
34
-}
35
-
36
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
37
- uint64_t value)
38
-{
39
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
40
- ARMCPU *cpu = arm_env_get_cpu(env);
41
-
42
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
43
-}
44
-
45
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
- uint64_t value)
47
-{
48
- /* Invalidate by ASID (TLBIASID) */
49
- ARMCPU *cpu = arm_env_get_cpu(env);
50
-
51
- tlb_flush(CPU(cpu));
52
-}
53
-
54
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
- uint64_t value)
56
-{
57
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
58
- ARMCPU *cpu = arm_env_get_cpu(env);
59
-
60
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
61
-}
62
-
63
/* IS variants of TLB operations must affect all cores */
64
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
65
uint64_t value)
66
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
67
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
68
}
69
70
+/*
71
+ * Non-IS variants of TLB operations are upgraded to
72
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
73
+ * force broadcast of these operations.
74
+ */
75
+static bool tlb_force_broadcast(CPUARMState *env)
76
+{
77
+ return (env->cp15.hcr_el2 & HCR_FB) &&
78
+ arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
79
+}
80
+
81
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
82
+ uint64_t value)
83
+{
84
+ /* Invalidate all (TLBIALL) */
85
+ ARMCPU *cpu = arm_env_get_cpu(env);
86
+
87
+ if (tlb_force_broadcast(env)) {
88
+ tlbiall_is_write(env, NULL, value);
89
+ return;
90
+ }
91
+
92
+ tlb_flush(CPU(cpu));
93
+}
94
+
95
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
96
+ uint64_t value)
97
+{
98
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
99
+ ARMCPU *cpu = arm_env_get_cpu(env);
100
+
101
+ if (tlb_force_broadcast(env)) {
102
+ tlbimva_is_write(env, NULL, value);
103
+ return;
104
+ }
105
+
106
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
107
+}
108
+
109
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
+ uint64_t value)
111
+{
112
+ /* Invalidate by ASID (TLBIASID) */
113
+ ARMCPU *cpu = arm_env_get_cpu(env);
114
+
115
+ if (tlb_force_broadcast(env)) {
116
+ tlbiasid_is_write(env, NULL, value);
117
+ return;
118
+ }
119
+
120
+ tlb_flush(CPU(cpu));
121
+}
122
+
123
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+ uint64_t value)
125
+{
126
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
127
+ ARMCPU *cpu = arm_env_get_cpu(env);
128
+
129
+ if (tlb_force_broadcast(env)) {
130
+ tlbimvaa_is_write(env, NULL, value);
131
+ return;
132
+ }
133
+
134
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
135
+}
136
+
137
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
uint64_t value)
139
{
140
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
141
* Page D4-1736 (DDI0487A.b)
142
*/
143
144
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
- uint64_t value)
146
-{
147
- CPUState *cs = ENV_GET_CPU(env);
148
-
149
- if (arm_is_secure_below_el3(env)) {
150
- tlb_flush_by_mmuidx(cs,
151
- ARMMMUIdxBit_S1SE1 |
152
- ARMMMUIdxBit_S1SE0);
153
- } else {
154
- tlb_flush_by_mmuidx(cs,
155
- ARMMMUIdxBit_S12NSE1 |
156
- ARMMMUIdxBit_S12NSE0);
157
- }
158
-}
159
-
160
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
}
165
}
166
167
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
168
+ uint64_t value)
169
+{
170
+ CPUState *cs = ENV_GET_CPU(env);
171
+
172
+ if (tlb_force_broadcast(env)) {
173
+ tlbi_aa64_vmalle1_write(env, NULL, value);
174
+ return;
175
+ }
176
+
177
+ if (arm_is_secure_below_el3(env)) {
178
+ tlb_flush_by_mmuidx(cs,
179
+ ARMMMUIdxBit_S1SE1 |
180
+ ARMMMUIdxBit_S1SE0);
181
+ } else {
182
+ tlb_flush_by_mmuidx(cs,
183
+ ARMMMUIdxBit_S12NSE1 |
184
+ ARMMMUIdxBit_S12NSE0);
185
+ }
186
+}
187
+
188
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
uint64_t value)
190
{
191
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
192
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
193
}
194
195
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
- uint64_t value)
197
-{
198
- /* Invalidate by VA, EL1&0 (AArch64 version).
199
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
200
- * since we don't support flush-for-specific-ASID-only or
201
- * flush-last-level-only.
202
- */
203
- ARMCPU *cpu = arm_env_get_cpu(env);
204
- CPUState *cs = CPU(cpu);
205
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
206
-
207
- if (arm_is_secure_below_el3(env)) {
208
- tlb_flush_page_by_mmuidx(cs, pageaddr,
209
- ARMMMUIdxBit_S1SE1 |
210
- ARMMMUIdxBit_S1SE0);
211
- } else {
212
- tlb_flush_page_by_mmuidx(cs, pageaddr,
213
- ARMMMUIdxBit_S12NSE1 |
214
- ARMMMUIdxBit_S12NSE0);
215
- }
216
-}
217
-
218
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
uint64_t value)
220
{
221
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
222
}
223
}
224
225
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
226
+ uint64_t value)
227
+{
228
+ /* Invalidate by VA, EL1&0 (AArch64 version).
229
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
230
+ * since we don't support flush-for-specific-ASID-only or
231
+ * flush-last-level-only.
232
+ */
233
+ ARMCPU *cpu = arm_env_get_cpu(env);
234
+ CPUState *cs = CPU(cpu);
235
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
236
+
237
+ if (tlb_force_broadcast(env)) {
238
+ tlbi_aa64_vae1is_write(env, NULL, value);
239
+ return;
240
+ }
241
+
242
+ if (arm_is_secure_below_el3(env)) {
243
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
244
+ ARMMMUIdxBit_S1SE1 |
245
+ ARMMMUIdxBit_S1SE0);
246
+ } else {
247
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
248
+ ARMMMUIdxBit_S12NSE1 |
249
+ ARMMMUIdxBit_S12NSE0);
250
+ }
251
+}
252
+
253
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
uint64_t value)
255
{
256
--
257
2.19.1
258
259
diff view generated by jsdifflib
Deleted patch
1
The A/I/F bits in ISR_EL1 should track the virtual interrupt
2
status, not the physical interrupt status, if the associated
3
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
4
always showing the physical interrupt status.
5
1
6
We don't currently implement anything to do with external
7
aborts, so this applies only to the I and F bits (though it
8
ought to be possible for the outer guest to present a virtual
9
external abort to the inner guest, even if QEMU doesn't
10
emulate physical external aborts, so there is missing
11
functionality in this area).
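The selection for the I bit ends up roughly as below (the F bit is handled
the same way using FMO and VFIQ); this is just a sketch of the logic in the
patch:

    if (arm_hcr_el2_imo(env)) {
        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
            ret |= CPSR_I;    /* virtual IRQ pending */
        }
    } else {
        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
            ret |= CPSR_I;    /* physical IRQ pending */
        }
    }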
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
16
---
17
target/arm/helper.c | 22 ++++++++++++++++++----
18
1 file changed, 18 insertions(+), 4 deletions(-)
19
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
25
CPUState *cs = ENV_GET_CPU(env);
26
uint64_t ret = 0;
27
28
- if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
29
- ret |= CPSR_I;
30
+ if (arm_hcr_el2_imo(env)) {
31
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
32
+ ret |= CPSR_I;
33
+ }
34
+ } else {
35
+ if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
36
+ ret |= CPSR_I;
37
+ }
38
}
39
- if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
40
- ret |= CPSR_F;
41
+
42
+ if (arm_hcr_el2_fmo(env)) {
43
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
44
+ ret |= CPSR_F;
45
+ }
46
+ } else {
47
+ if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
48
+ ret |= CPSR_F;
49
+ }
50
}
51
+
52
/* External aborts are not possible in QEMU so A bit is always clear */
53
return ret;
54
}
55
--
56
2.19.1
57
58
diff view generated by jsdifflib
Deleted patch
1
The HCR_EL2 VI and VF bits are supposed to track whether there is
2
a pending virtual IRQ or virtual FIQ. For QEMU we store the
3
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
4
* if the register is read we must get these bit values from
5
cs->interrupt_request
6
* if the register is written then we must write the bit
7
values back into cs->interrupt_request
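The read side therefore reconstructs VI and VF from the pending interrupt
state rather than from the stored register value; roughly:

    /* sketch of the new hcr_read() */
    uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
        ret |= HCR_VI;
    }
    if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
        ret |= HCR_VF;
    }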
8
1
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
12
---
13
target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
14
1 file changed, 43 insertions(+), 4 deletions(-)
15
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
19
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
21
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
22
{
23
ARMCPU *cpu = arm_env_get_cpu(env);
24
+ CPUState *cs = ENV_GET_CPU(env);
25
uint64_t valid_mask = HCR_MASK;
26
27
if (arm_feature(env, ARM_FEATURE_EL3)) {
28
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
29
/* Clear RES0 bits. */
30
value &= valid_mask;
31
32
+ /*
33
+ * VI and VF are kept in cs->interrupt_request. Modifying that
34
+ * requires that we have the iothread lock, which is done by
35
+ * marking the reginfo structs as ARM_CP_IO.
36
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
37
+ * possible for it to be taken immediately, because VIRQ and
38
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
39
+ * can only be written at EL2.
40
+ */
41
+ g_assert(qemu_mutex_iothread_locked());
42
+ if (value & HCR_VI) {
43
+ cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
44
+ } else {
45
+ cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
46
+ }
47
+ if (value & HCR_VF) {
48
+ cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
49
+ } else {
50
+ cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
51
+ }
52
+ value &= ~(HCR_VI | HCR_VF);
53
+
54
/* These bits change the MMU setup:
55
* HCR_VM enables stage 2 translation
56
* HCR_PTW forbids certain page-table setups
57
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
58
hcr_write(env, NULL, value);
59
}
60
61
+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
62
+{
63
+ /* The VI and VF bits live in cs->interrupt_request */
64
+ uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
65
+ CPUState *cs = ENV_GET_CPU(env);
66
+
67
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
68
+ ret |= HCR_VI;
69
+ }
70
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
71
+ ret |= HCR_VF;
72
+ }
73
+ return ret;
74
+}
75
+
76
static const ARMCPRegInfo el2_cp_reginfo[] = {
77
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
78
+ .type = ARM_CP_IO,
79
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
80
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
81
- .writefn = hcr_write },
82
+ .writefn = hcr_write, .readfn = hcr_read },
83
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
84
- .type = ARM_CP_ALIAS,
85
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
86
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
87
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
88
- .writefn = hcr_writelow },
89
+ .writefn = hcr_writelow, .readfn = hcr_read },
90
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
91
.type = ARM_CP_ALIAS,
92
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
93
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
94
95
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
96
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
97
- .type = ARM_CP_ALIAS,
98
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
99
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
100
.access = PL2_RW,
101
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
102
--
103
2.19.1
104
105
diff view generated by jsdifflib
Deleted patch
1
If the HCR_EL2 PTW virtualization configuration register bit
2
is set, then a stage 2 Permission fault must
3
be generated if a stage 1 translation table access is made
4
to an address that is mapped as Device memory in stage 2.
5
Implement this.
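The check itself is cheap once the stage 2 walk has been asked to report the
memory attributes; roughly (a sketch of the logic added below, relying on the
MAIR-style encoding in which Device memory types have the top four attribute
bits clear):

    if ((env->cp15.hcr_el2 & HCR_PTW) && (cacheattrs.attrs & 0xf0) == 0) {
        /* S1 walk touched S2 Device memory: stage 2 Permission fault */
        fi->type = ARMFault_Permission;
        fi->stage2 = true;
        fi->s1ptw = true;
    }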
6
1
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
10
---
11
target/arm/helper.c | 21 ++++++++++++++++++++-
12
1 file changed, 20 insertions(+), 1 deletion(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
19
hwaddr s2pa;
20
int s2prot;
21
int ret;
22
+ ARMCacheAttrs cacheattrs = {};
23
+ ARMCacheAttrs *pcacheattrs = NULL;
24
+
25
+ if (env->cp15.hcr_el2 & HCR_PTW) {
26
+ /*
27
+ * PTW means we must fault if this S1 walk touches S2 Device
28
+ * memory; otherwise we don't care about the attributes and can
29
+ * save the S2 translation the effort of computing them.
30
+ */
31
+ pcacheattrs = &cacheattrs;
32
+ }
33
34
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
35
- &txattrs, &s2prot, &s2size, fi, NULL);
36
+ &txattrs, &s2prot, &s2size, fi, pcacheattrs);
37
if (ret) {
38
assert(fi->type != ARMFault_None);
39
fi->s2addr = addr;
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
41
fi->s1ptw = true;
42
return ~0;
43
}
44
+ if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
45
+ /* Access was to Device memory: generate Permission fault */
46
+ fi->type = ARMFault_Permission;
47
+ fi->s2addr = addr;
48
+ fi->stage2 = true;
49
+ fi->s1ptw = true;
50
+ return ~0;
51
+ }
52
addr = s2pa;
53
}
54
return addr;
55
--
56
2.19.1
57
58
diff view generated by jsdifflib
1
Create and use a utility function to extract the EC field
1
In our KVM GICv2 realize function, we try to cope with old kernels
2
from a syndrome, rather than open-coding the shift.
2
that don't provide the device control API (KVM_CAP_DEVICE_CTRL): we
3
try to use the device control, and if that fails we fall back to
4
assuming that the kernel has the old style KVM_CREATE_IRQCHIP and
5
that it will provide a GICv2.
6
7
This doesn't cater for the possibility of a kernel and hardware which
8
only provide a GICv3, which is very common now. On that setup we
9
will abort() later on in kvm_arm_pmu_set_irq() when we try to wire up
10
an interrupt to the GIC we failed to create:
11
12
qemu-system-aarch64: PMU: KVM_SET_DEVICE_ATTR: Invalid argument
13
qemu-system-aarch64: failed to set irq for PMU
14
Aborted
15
16
If the kernel advertises KVM_CAP_DEVICE_CTRL we should trust it if it
17
says it can't create a GICv2, rather than assuming it has one. We
18
can then produce a more helpful error message including a hint about
19
the most probable reason for the failure.
20
21
If the kernel doesn't advertise KVM_CAP_DEVICE_CTRL then it is truly
22
ancient by this point, but we might as well still fall back to a
23
KVM_CREATE_IRQCHIP GICv2.
24
25
With this patch then the user misconfiguration which previously
26
caused an abort now prints:
27
qemu-system-aarch64: Initialization of device kvm-arm-gic failed: error creating in-kernel VGIC: No such device
28
Perhaps the host CPU does not support GICv2?
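On such hosts the usual fix is to request a GICv3 explicitly rather than
relying on the default, along the lines of (exact option spelling depends on
the machine type in use):

    qemu-system-aarch64 -M virt,gic-version=3 -enable-kvm ...
    qemu-system-aarch64 -M virt,gic-version=host -enable-kvm ...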
3
29
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
31
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
32
Reviewed-by: Andrew Jones <drjones@redhat.com>
33
Tested-by: Andrew Jones <drjones@redhat.com>
34
Message-id: 20200225182435.1131-1-peter.maydell@linaro.org
7
---
35
---
8
target/arm/internals.h | 5 +++++
36
hw/intc/arm_gic_kvm.c | 9 +++++++++
9
target/arm/helper.c | 4 ++--
37
1 file changed, 9 insertions(+)
10
target/arm/kvm64.c | 2 +-
11
target/arm/op_helper.c | 2 +-
12
4 files changed, 9 insertions(+), 4 deletions(-)
13
38
14
diff --git a/target/arm/internals.h b/target/arm/internals.h
39
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
15
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/internals.h
41
--- a/hw/intc/arm_gic_kvm.c
17
+++ b/target/arm/internals.h
42
+++ b/hw/intc/arm_gic_kvm.c
18
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
43
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
19
#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
44
KVM_DEV_ARM_VGIC_CTRL_INIT, NULL, true,
20
#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
45
&error_abort);
21
22
+static inline uint32_t syn_get_ec(uint32_t syn)
23
+{
24
+ return syn >> ARM_EL_EC_SHIFT;
25
+}
26
+
27
/* Utility functions for constructing various kinds of syndrome value.
28
* Note that in general we follow the AArch64 syndrome values; in a
29
* few cases the value in HSR for exceptions taken to AArch32 Hyp
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/helper.c
33
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
35
uint32_t moe;
36
37
/* If this is a debug exception we must update the DBGDSCR.MOE bits */
38
- switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
39
+ switch (syn_get_ec(env->exception.syndrome)) {
40
case EC_BREAKPOINT:
41
case EC_BREAKPOINT_SAME_EL:
42
moe = 1;
43
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
44
if (qemu_loglevel_mask(CPU_LOG_INT)
45
&& !excp_is_internal(cs->exception_index)) {
46
qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
47
- env->exception.syndrome >> ARM_EL_EC_SHIFT,
48
+ syn_get_ec(env->exception.syndrome),
49
env->exception.syndrome);
50
}
51
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/kvm64.c
55
+++ b/target/arm/kvm64.c
56
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
57
58
bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
59
{
60
- int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
61
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
62
ARMCPU *cpu = ARM_CPU(cs);
63
CPUClass *cc = CPU_GET_CLASS(cs);
64
CPUARMState *env = &cpu->env;
65
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/op_helper.c
68
+++ b/target/arm/op_helper.c
69
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
70
* (see DDI0478C.a D1.10.4)
71
*/
72
target_el = 2;
73
- if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
74
+ if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
75
syndrome = syn_uncategorized();
76
}
46
}
47
+ } else if (kvm_check_extension(kvm_state, KVM_CAP_DEVICE_CTRL)) {
48
+ error_setg_errno(errp, -ret, "error creating in-kernel VGIC");
49
+ error_append_hint(errp,
50
+ "Perhaps the host CPU does not support GICv2?\n");
51
} else if (ret != -ENODEV && ret != -ENOTSUP) {
52
+ /*
53
+ * Very ancient kernel without KVM_CAP_DEVICE_CTRL: assume that
54
+ * ENODEV or ENOTSUP mean "can't create GICv2 with KVM_CREATE_DEVICE",
55
+ * and that we will get a GICv2 via KVM_CREATE_IRQCHIP.
56
+ */
57
error_setg_errno(errp, -ret, "error creating in-kernel VGIC");
58
return;
77
}
59
}
78
--
60
--
79
2.19.1
61
2.20.1
80
62
81
63
diff view generated by jsdifflib
Deleted patch
1
For the v7 version of the Arm architecture, the IL bit in
2
syndrome register values where the field is not valid was
3
defined to be UNK/SBZP. In v8 this is RES1, which is what
4
QEMU currently implements. Handle the desired v7 behaviour
5
by squashing the IL bit for the affected cases:
6
* EC == EC_UNCATEGORIZED
7
* prefetch aborts
8
* data aborts where ISV is 0
9
1
10
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
11
section G7.2.70, "illegal state exception", can't happen
12
on a v7 CPU.)
13
14
This deals with a corner case noted in a comment.
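The adjustment itself is small and applied just before the syndrome is
written to HSR; condensed from the hunk below:

    if (!arm_feature(env, ARM_FEATURE_V8)) {
        if (cs->exception_index == EXCP_PREFETCH_ABORT ||
            (cs->exception_index == EXCP_DATA_ABORT &&
             !(env->exception.syndrome & ARM_EL_ISV)) ||
            syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
            /* IL is not a valid field here on v7, so squash it */
            env->exception.syndrome &= ~ARM_EL_IL;
        }
    }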
15
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
19
---
20
target/arm/internals.h | 7 ++-----
21
target/arm/helper.c | 13 +++++++++++++
22
2 files changed, 15 insertions(+), 5 deletions(-)
23
24
diff --git a/target/arm/internals.h b/target/arm/internals.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/internals.h
27
+++ b/target/arm/internals.h
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
29
/* Utility functions for constructing various kinds of syndrome value.
30
* Note that in general we follow the AArch64 syndrome values; in a
31
* few cases the value in HSR for exceptions taken to AArch32 Hyp
32
- * mode differs slightly, so if we ever implemented Hyp mode then the
33
- * syndrome value would need some massaging on exception entry.
34
- * (One example of this is that AArch64 defaults to IL bit set for
35
- * exceptions which don't specifically indicate information about the
36
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
37
+ * mode differs slightly, and we fix this up when populating HSR in
38
+ * arm_cpu_do_interrupt_aarch32_hyp().
39
*/
40
static inline uint32_t syn_uncategorized(void)
41
{
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/helper.c
45
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
47
}
48
49
if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
50
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
51
+ /*
52
+ * QEMU syndrome values are v8-style. v7 has the IL bit
53
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
54
+ * If this is a v7 CPU, squash the IL bit in those cases.
55
+ */
56
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
57
+ (cs->exception_index == EXCP_DATA_ABORT &&
58
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
59
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
60
+ env->exception.syndrome &= ~ARM_EL_IL;
61
+ }
62
+ }
63
env->cp15.esr_el[2] = env->exception.syndrome;
64
}
65
66
--
67
2.19.1
68
69
diff view generated by jsdifflib
Deleted patch
1
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
2
provided in HSR has more information than is reported to AArch64.
3
Specifically, there are extra fields TA and coproc which indicate
4
whether the trapped instruction was FP or SIMD. Add this extra
5
information to the syndromes we construct, and mask it out when
6
taking the exception to AArch64.
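Concretely, the AArch32 Hyp view of the syndrome now carries coproc == 0xa
for a VFP trap and TA (bit 5) set for an Advanced SIMD trap, and the AArch64
path strips the AArch32-only fields again; a rough sketch (is_simd is just a
stand-in for which decode path trapped, not a variable in the patch):

    /* extra AArch32-only fields */
    syndrome |= is_simd ? (1 << 5)   /* TA: Advanced SIMD */
                        : 0xa;       /* coproc: VFP */
    /* and when the exception is taken to AArch64 instead: */
    syndrome &= ~MAKE_64BIT_MASK(0, 20);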
7
1
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
11
---
12
target/arm/internals.h | 14 +++++++++++++-
13
target/arm/helper.c | 9 +++++++++
14
target/arm/translate.c | 8 ++++----
15
3 files changed, 26 insertions(+), 5 deletions(-)
16
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/internals.h
20
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
22
* few cases the value in HSR for exceptions taken to AArch32 Hyp
23
* mode differs slightly, and we fix this up when populating HSR in
24
* arm_cpu_do_interrupt_aarch32_hyp().
25
+ * The exception is FP/SIMD access traps -- these report extra information
26
+ * when taking an exception to AArch32. For those we include the extra coproc
27
+ * and TA fields, and mask them out when taking the exception to AArch64.
28
*/
29
static inline uint32_t syn_uncategorized(void)
30
{
31
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
32
33
static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
34
{
35
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
36
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
37
| (is_16bit ? 0 : ARM_EL_IL)
38
- | (cv << 24) | (cond << 20);
39
+ | (cv << 24) | (cond << 20) | 0xa;
40
+}
41
+
42
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
43
+{
44
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
45
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
46
+ | (is_16bit ? 0 : ARM_EL_IL)
47
+ | (cv << 24) | (cond << 20) | (1 << 5);
48
}
49
50
static inline uint32_t syn_sve_access_trap(void)
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/helper.c
54
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
56
case EXCP_HVC:
57
case EXCP_HYP_TRAP:
58
case EXCP_SMC:
59
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
60
+ /*
61
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
62
+ * TA and coproc fields which are only exposed if the exception
63
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
64
+ * AArch64 format syndrome.
65
+ */
66
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
67
+ }
68
env->cp15.esr_el[new_el] = env->exception.syndrome;
69
break;
70
case EXCP_IRQ:
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
76
*/
77
if (s->fp_excp_el) {
78
gen_exception_insn(s, 4, EXCP_UDEF,
79
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
80
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
81
return 0;
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
85
*/
86
if (s->fp_excp_el) {
87
gen_exception_insn(s, 4, EXCP_UDEF,
88
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
89
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
90
return 0;
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
94
95
if (s->fp_excp_el) {
96
gen_exception_insn(s, 4, EXCP_UDEF,
97
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
98
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
99
return 0;
100
}
101
if (!s->vfp_enabled) {
102
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
103
104
if (s->fp_excp_el) {
105
gen_exception_insn(s, 4, EXCP_UDEF,
106
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
107
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
108
return 0;
109
}
110
if (!s->vfp_enabled) {
111
--
112
2.19.1
113
114
diff view generated by jsdifflib
Deleted patch
1
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>
2
1
3
"The Image must be placed text_offset bytes from a 2MB aligned base
4
address anywhere in usable system RAM and called there."
5
6
For the virt board, we write our startup bootloader at the very
7
bottom of RAM, so that bit can't be used for the image. To avoid
8
overlap in case the image requests to be loaded at an offset
9
smaller than our bootloader, we increment the load offset to the
10
next 2MB.
11
12
This fixes a boot failure for Xen AArch64.
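As a worked example with made-up numbers: an Image header asking for
text_offset == 0x1000 would land on top of the bootloader at the bottom of
RAM, so the loader bumps the offset by 2MB, which still satisfies the quoted
requirement because text_offset is relative to any 2MB-aligned base:

    /* sketch of the adjustment added below (BOOTLOADER_MAX_SIZE is 4K) */
    if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
        kernel_load_offset += 2 * MiB;   /* e.g. 0x1000 -> 0x201000 */
    }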
13
14
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
15
Tested-by: Andre Przywara <andre.przywara@arm.com>
16
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
17
[PMM: Rephrased a comment a bit]
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
hw/arm/boot.c | 18 ++++++++++++++++++
22
1 file changed, 18 insertions(+)
23
24
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/arm/boot.c
27
+++ b/hw/arm/boot.c
28
@@ -XXX,XX +XXX,XX @@
29
#include "qemu/config-file.h"
30
#include "qemu/option.h"
31
#include "exec/address-spaces.h"
32
+#include "qemu/units.h"
33
34
/* Kernel boot protocol is specified in the kernel docs
35
* Documentation/arm/Booting and Documentation/arm64/booting.txt
36
@@ -XXX,XX +XXX,XX @@
37
#define ARM64_TEXT_OFFSET_OFFSET 8
38
#define ARM64_MAGIC_OFFSET 56
39
40
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
41
+
42
AddressSpace *arm_boot_address_space(ARMCPU *cpu,
43
const struct arm_boot_info *info)
44
{
45
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
46
code[i] = tswap32(insn);
47
}
48
49
+ assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
50
+
51
rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
52
53
g_free(code);
54
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
55
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
56
if (hdrvals[1] != 0) {
57
kernel_load_offset = le64_to_cpu(hdrvals[0]);
58
+
59
+ /*
60
+ * We write our startup "bootloader" at the very bottom of RAM,
61
+ * so that bit can't be used for the image. Luckily the Image
62
+ * format specification is that the image requests only an offset
63
+ * from a 2MB boundary, not an absolute load address. So if the
64
+ * image requests an offset that might mean it overlaps with the
65
+ * bootloader, we can just load it starting at 2MB+offset rather
66
+ * than 0MB + offset.
67
+ */
68
+ if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
69
+ kernel_load_offset += 2 * MiB;
70
+ }
71
}
72
}
73
74
--
75
2.19.1
76
77
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This is done generically in translator_loop.
4
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-a64.c | 1 -
13
target/arm/translate.c | 1 -
14
2 files changed, 2 deletions(-)
15
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
19
+++ b/target/arm/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
21
22
static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
23
{
24
- tcg_clear_temp_count();
25
}
26
27
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
28
diff --git a/target/arm/translate.c b/target/arm/translate.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate.c
31
+++ b/target/arm/translate.c
32
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
33
tcg_gen_movi_i32(tmp, 0);
34
store_cpu_field(tmp, condexec_bits);
35
}
36
- tcg_clear_temp_count();
37
}
38
39
static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
40
--
41
2.19.1
42
43
diff view generated by jsdifflib