First pullreq of the 3.1 release cycle, with lots of
Arm related patches accumulated during freeze. Most
notable here is Luc's GICv2 virtualization support and
my execute-from-MMIO patches.

I stopped looking at my to-review queue towards the
end of freeze, since 45 patches is already pushing what
I consider a reasonable sized pullreq; once this goes into
master I'll start working through it again.

thanks
-- PMM

The following changes since commit 38441756b70eec5807b5f60dad11a93a91199866:

  Update version for v3.0.0 release (2018-08-14 16:38:43 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180814

for you to fetch changes up to 054e7adf4e64e4acb3b033348ebf7cc871baa34f:

  target/arm: Fix typo in helper_sve_movz_d (2018-08-14 17:17:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement more of ARMv6-M support
 * Support direct execution from non-RAM regions;
   use this to implement execution from small (<1K) MPU regions
 * GICv2: implement the virtualization extensions
 * support a virtualization-capable GICv2 in the virt and
   xlnx-zynqmp boards
 * arm: Fix return code of arm_load_elf() so we can detect
   failure to load the file correctly
 * Implement HCR_EL2.TGE ("trap general exceptions") bit
 * Implement tailchaining for M profile cores
 * Fix bugs in SVE compare, saturating add/sub, WHILE, MOVZ

----------------------------------------------------------------
Adam Lackorzynski (1):
      arm: Fix return code of arm_load_elf

Julia Suvorova (4):
      target/arm: Forbid unprivileged mode for M Baseline
      nvic: Handle ARMv6-M SCS reserved registers
      arm: Add ARMv6-M programmer's model support
      nvic: Change NVIC to support ARMv6-M

Luc Michel (20):
      intc/arm_gic: Refactor operations on the distributor
      intc/arm_gic: Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers
      intc/arm_gic: Remove some dead code and put some functions static
      vmstate.h: Provide VMSTATE_UINT16_SUB_ARRAY
      intc/arm_gic: Add the virtualization extensions to the GIC state
      intc/arm_gic: Add virtual interface register definitions
      intc/arm_gic: Add virtualization extensions helper macros and functions
      intc/arm_gic: Refactor secure/ns access check in the CPU interface
      intc/arm_gic: Add virtualization enabled IRQ helper functions
      intc/arm_gic: Implement virtualization extensions in gic_(activate_irq|drop_prio)
      intc/arm_gic: Implement virtualization extensions in gic_acknowledge_irq
      intc/arm_gic: Implement virtualization extensions in gic_(deactivate|complete_irq)
      intc/arm_gic: Implement virtualization extensions in gic_cpu_(read|write)
      intc/arm_gic: Wire the vCPU interface
      intc/arm_gic: Implement the virtual interface registers
      intc/arm_gic: Implement gic_update_virt() function
      intc/arm_gic: Implement maintenance interrupt generation
      intc/arm_gic: Improve traces
      xlnx-zynqmp: Improve GIC wiring and MMIO mapping
      arm/virt: Add support for GICv2 virtualization extensions

Peter Maydell (16):
      accel/tcg: Pass read access type through to io_readx()
      accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
      accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
      accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
      accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
      target/arm: Allow execution from small regions
      accel/tcg: Check whether TLB entry is RAM consistently with how we set it up
      target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
      target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
      target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
      target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
      target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
      target/arm: Improve exception-taken logging
      target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
      target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
      target/arm: Implement tailchaining for M profile cores

Richard Henderson (4):
      target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
      target/arm: Fix typo in do_sat_addsub_64
      target/arm: Reorganize SVE WHILE
      target/arm: Fix typo in helper_sve_movz_d

 accel/tcg/softmmu_template.h | 11 +-
 hw/intc/gic_internal.h | 282 +++++++++--
 include/exec/exec-all.h | 2 -
 include/hw/arm/virt.h | 4 +-
 include/hw/arm/xlnx-zynqmp.h | 4 +-
 include/hw/intc/arm_gic_common.h | 43 +-
 include/hw/intc/armv7m_nvic.h | 1 +
 include/migration/vmstate.h | 3 +
 include/qom/cpu.h | 6 +
 target/arm/cpu.h | 62 ++-
 accel/tcg/cpu-exec.c | 3 +
 accel/tcg/cputlb.c | 111 +----
 accel/tcg/translate-all.c | 23 +-
 exec.c | 6 -
 hw/arm/boot.c | 8 +-
 hw/arm/virt-acpi-build.c | 6 +-
 hw/arm/virt.c | 52 ++-
 hw/arm/xlnx-zynqmp.c | 92 +++-
 hw/intc/arm_gic.c | 987 +++++++++++++++++++++++++++++++--------
 hw/intc/arm_gic_common.c | 154 ++++--
 hw/intc/arm_gic_kvm.c | 31 +-
 hw/intc/arm_gicv3_cpuif.c | 19 +-
 hw/intc/armv7m_nvic.c | 82 +++-
 memory.c | 3 +-
 target/arm/cpu.c | 4 +
 target/arm/helper.c | 127 +++--
 target/arm/op_helper.c | 14 +
 target/arm/sve_helper.c | 19 +-
 target/arm/translate-sve.c | 51 +-
 hw/intc/trace-events | 12 +-
 30 files changed, 1724 insertions(+), 498 deletions(-)


The following changes since commit 53f306f316549d20c76886903181413d20842423:

  Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging (2021-06-21 11:26:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210621

for you to fetch changes up to a83f1d9263d281f938a3984cda7104d55affd43a:

  docs/system: arm: Add nRF boards description (2021-06-21 17:24:33 +0100)

----------------------------------------------------------------
target-arm queue:
 * Don't require 'virt' board to be compiled in for ACPI GHES code
 * docs: Document which architecture extensions we emulate
 * Fix bugs in M-profile FPCXT_NS accesses
 * First slice of MVE patches
 * Implement MTE3
 * docs/system: arm: Add nRF boards description

----------------------------------------------------------------
Alexandre Iooss (1):
      docs/system: arm: Add nRF boards description

Peter Collingbourne (1):
      target/arm: Implement MTE3

Peter Maydell (55):
      hw/acpi: Provide stub version of acpi_ghes_record_errors()
      hw/acpi: Provide function acpi_ghes_present()
      target/arm: Use acpi_ghes_present() to see if we report ACPI memory errors
      docs/system/arm: Document which architecture extensions we emulate
      target/arm/translate-vfp.c: Whitespace fixes
      target/arm: Handle FPU being disabled in FPCXT_NS accesses
      target/arm: Don't NOCP fault for FPCXT_NS accesses
      target/arm: Handle writeback in VLDR/VSTR sysreg with no memory access
      target/arm: Factor FP context update code out into helper function
      target/arm: Split vfp_access_check() into A and M versions
      target/arm: Handle FPU check for FPCXT_NS insns via vfp_access_check_m()
      target/arm: Implement MVE VLDR/VSTR (non-widening forms)
      target/arm: Implement widening/narrowing MVE VLDR/VSTR insns
      target/arm: Implement MVE VCLZ
      target/arm: Implement MVE VCLS
      target/arm: Implement MVE VREV16, VREV32, VREV64
      target/arm: Implement MVE VMVN (register)
      target/arm: Implement MVE VABS
      target/arm: Implement MVE VNEG
      tcg: Make gen_dup_i32/i64() public as tcg_gen_dup_i32/i64
      target/arm: Implement MVE VDUP
      target/arm: Implement MVE VAND, VBIC, VORR, VORN, VEOR
      target/arm: Implement MVE VADD, VSUB, VMUL
      target/arm: Implement MVE VMULH
      target/arm: Implement MVE VRMULH
      target/arm: Implement MVE VMAX, VMIN
      target/arm: Implement MVE VABD
      target/arm: Implement MVE VHADD, VHSUB
      target/arm: Implement MVE VMULL
      target/arm: Implement MVE VMLALDAV
      target/arm: Implement MVE VMLSLDAV
      target/arm: Implement MVE VRMLALDAVH, VRMLSLDAVH
      target/arm: Implement MVE VADD (scalar)
      target/arm: Implement MVE VSUB, VMUL (scalar)
      target/arm: Implement MVE VHADD, VHSUB (scalar)
      target/arm: Implement MVE VBRSR
      target/arm: Implement MVE VPST
      target/arm: Implement MVE VQADD and VQSUB
      target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)
      target/arm: Implement MVE VQDMULL scalar
      target/arm: Implement MVE VQDMULH, VQRDMULH (vector)
      target/arm: Implement MVE VQADD, VQSUB (vector)
      target/arm: Implement MVE VQSHL (vector)
      target/arm: Implement MVE VQRSHL
      target/arm: Implement MVE VSHL insn
      target/arm: Implement MVE VRSHL
      target/arm: Implement MVE VQDMLADH and VQRDMLADH
      target/arm: Implement MVE VQDMLSDH and VQRDMLSDH
      target/arm: Implement MVE VQDMULL (vector)
      target/arm: Implement MVE VRHADD
      target/arm: Implement MVE VADC, VSBC
      target/arm: Implement MVE VCADD
      target/arm: Implement MVE VHCADD
      target/arm: Implement MVE VADDV
      target/arm: Make VMOV scalar <-> gpreg beatwise for MVE

 docs/system/arm/emulation.rst | 103 ++++
 docs/system/arm/nrf.rst | 51 ++
 docs/system/target-arm.rst | 7 +
 include/hw/acpi/ghes.h | 9 +
 include/tcg/tcg-op.h | 8 +
 include/tcg/tcg.h | 1 -
 target/arm/helper-mve.h | 357 +++++++++++++
 target/arm/helper.h | 2 +
 target/arm/internals.h | 11 +
 target/arm/translate-a32.h | 3 +
 target/arm/translate.h | 10 +
 target/arm/m-nocp.decode | 24 +
 target/arm/mve.decode | 240 +++++++++
 target/arm/vfp.decode | 14 -
 hw/acpi/ghes-stub.c | 22 +
 hw/acpi/ghes.c | 17 +
 target/arm/cpu64.c | 2 +-
 target/arm/kvm64.c | 6 +-
 target/arm/mte_helper.c | 82 +--
 target/arm/mve_helper.c | 1160 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate-m-nocp.c | 550 +++++++++++++++++++
 target/arm/translate-mve.c | 759 +++++++++++++++++++++++++++
 target/arm/translate-vfp.c | 741 +++++++-------------------
 tcg/tcg-op-gvec.c | 20 +-
 MAINTAINERS | 1 +
 hw/acpi/meson.build | 6 +-
 target/arm/meson.build | 1 +
 27 files changed, 3578 insertions(+), 629 deletions(-)
 create mode 100644 docs/system/arm/emulation.rst
 create mode 100644 docs/system/arm/nrf.rst
 create mode 100644 target/arm/helper-mve.h
 create mode 100644 hw/acpi/ghes-stub.c
 create mode 100644 target/arm/mve_helper.c

Generic code in target/arm wants to call acpi_ghes_record_errors();
2
provide a stub version so that we don't fail to link when
3
CONFIG_ACPI_APEI is not set. This requires us to add a new
4
ghes-stub.c file to contain it and the meson.build mechanics
5
to use it when appropriate.
1
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
10
Message-id: 20210603171259.27962-2-peter.maydell@linaro.org
11
---
12
hw/acpi/ghes-stub.c | 17 +++++++++++++++++
13
hw/acpi/meson.build | 6 +++---
14
2 files changed, 20 insertions(+), 3 deletions(-)
15
create mode 100644 hw/acpi/ghes-stub.c
16
17
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
18
new file mode 100644
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/hw/acpi/ghes-stub.c
22
@@ -XXX,XX +XXX,XX @@
23
+/*
24
+ * Support for generating APEI tables and recording CPER for Guests:
25
+ * stub functions.
26
+ *
27
+ * Copyright (c) 2021 Linaro, Ltd
28
+ *
29
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
30
+ * See the COPYING file in the top-level directory.
31
+ */
32
+
33
+#include "qemu/osdep.h"
34
+#include "hw/acpi/ghes.h"
35
+
36
+int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
37
+{
38
+ return -1;
39
+}
40
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/acpi/meson.build
43
+++ b/hw/acpi/meson.build
44
@@ -XXX,XX +XXX,XX @@ acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
45
acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
46
acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
47
acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
48
-acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'))
49
+acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'), if_false: files('ghes-stub.c'))
50
acpi_ss.add(when: 'CONFIG_ACPI_X86', if_true: files('core.c', 'piix4.c', 'pcihp.c'), if_false: files('acpi-stub.c'))
51
acpi_ss.add(when: 'CONFIG_ACPI_X86_ICH', if_true: files('ich9.c', 'tco.c'))
52
acpi_ss.add(when: 'CONFIG_IPMI', if_true: files('ipmi.c'), if_false: files('ipmi-stub.c'))
53
acpi_ss.add(when: 'CONFIG_PC', if_false: files('acpi-x86-stub.c'))
54
acpi_ss.add(when: 'CONFIG_TPM', if_true: files('tpm.c'))
55
-softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c'))
56
+softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c', 'ghes-stub.c'))
57
softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
58
softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
59
- 'acpi-x86-stub.c', 'ipmi-stub.c'))
60
+ 'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c'))
61
--
62
2.20.1
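
The point of the stub is that generic code can call acpi_ghes_record_errors() unconditionally and simply handle a failure return. A minimal sketch of such a caller (a hypothetical function, not part of this series; only the function signature and the stub's -1 return are taken from the patch above, and ACPI_HEST_SRC_ID_SEA is assumed to be the source id defined in hw/acpi/ghes.h):

    #include "qemu/osdep.h"
    #include "hw/acpi/ghes.h"

    /*
     * Links whether or not CONFIG_ACPI_APEI is set, because either
     * ghes.c or ghes-stub.c always provides acpi_ghes_record_errors().
     */
    static void report_memory_error(uint64_t paddr)
    {
        if (acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr) < 0) {
            /* error was not recorded (for instance, the stub was linked in) */
        }
    }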
From: Luc Michel <luc.michel@greensocs.com>
1
Allow code elsewhere in the system to check whether the ACPI GHES
2
table is present, so it can determine whether it is OK to try to
3
record an error by calling acpi_ghes_record_errors().
2
4
3
Add some traces to the ARM GIC to catch register accesses (distributor,
5
(We don't need to migrate the new 'present' field in AcpiGhesState,
4
(v)cpu interface and virtual interface), and to take into account
6
because it is set once at system initialization and doesn't change.)
5
virtualization extensions (print `vcpu` instead of `cpu` when needed).
6
7
7
Also add some virtualization extensions specific traces: LR updating
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
and maintenance IRQ generation.
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
11
Message-id: 20210603171259.27962-3-peter.maydell@linaro.org
12
---
13
include/hw/acpi/ghes.h | 9 +++++++++
14
hw/acpi/ghes-stub.c | 5 +++++
15
hw/acpi/ghes.c | 17 +++++++++++++++++
16
3 files changed, 31 insertions(+)
9
17
10
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
18
diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20180727095421.386-19-luc.michel@greensocs.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
hw/intc/arm_gic.c | 31 +++++++++++++++++++++++++------
17
hw/intc/trace-events | 12 ++++++++++--
18
2 files changed, 35 insertions(+), 8 deletions(-)
19
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
21
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gic.c
20
--- a/include/hw/acpi/ghes.h
23
+++ b/hw/intc/arm_gic.c
21
+++ b/include/hw/acpi/ghes.h
24
@@ -XXX,XX +XXX,XX @@ static inline void gic_update_internal(GICState *s, bool virt)
22
@@ -XXX,XX +XXX,XX @@ enum {
25
}
23
26
24
typedef struct AcpiGhesState {
27
if (best_irq != 1023) {
25
uint64_t ghes_addr_le;
28
- trace_gic_update_bestirq(cpu, best_irq, best_prio,
26
+ bool present; /* True if GHES is present at all on this board */
29
- s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
27
} AcpiGhesState;
30
+ trace_gic_update_bestirq(virt ? "vcpu" : "cpu", cpu,
28
31
+ best_irq, best_prio,
29
void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
32
+ s->priority_mask[cpu_iface],
30
@@ -XXX,XX +XXX,XX @@ void acpi_build_hest(GArray *table_data, BIOSLinker *linker,
33
+ s->running_priority[cpu_iface]);
31
void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
34
}
32
GArray *hardware_errors);
35
33
int acpi_ghes_record_errors(uint8_t notify, uint64_t error_physical_addr);
36
irq_level = fiq_level = 0;
34
+
37
@@ -XXX,XX +XXX,XX @@ static void gic_update_maintenance(GICState *s)
35
+/**
38
gic_compute_misr(s, cpu);
36
+ * acpi_ghes_present: Report whether ACPI GHES table is present
39
maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
37
+ *
40
38
+ * Returns: true if the system has an ACPI GHES table and it is
41
+ trace_gic_update_maintenance_irq(cpu, maint_level);
39
+ * safe to call acpi_ghes_record_errors() to record a memory error.
42
qemu_set_irq(s->maintenance_irq[cpu], maint_level);
40
+ */
43
}
41
+bool acpi_ghes_present(void);
42
#endif
43
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/acpi/ghes-stub.c
46
+++ b/hw/acpi/ghes-stub.c
47
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
48
{
49
return -1;
44
}
50
}
45
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
46
* is in the wrong group.
47
*/
48
irq = gic_get_current_pending_irq(s, cpu, attrs);
49
- trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
50
+ trace_gic_acknowledge_irq(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
51
+ gic_get_vcpu_real_id(cpu), irq);
52
53
if (irq >= GIC_MAXIRQ) {
54
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
55
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_dist_read(void *opaque, hwaddr offset, uint64_t *data,
56
switch (size) {
57
case 1:
58
*data = gic_dist_readb(opaque, offset, attrs);
59
- return MEMTX_OK;
60
+ break;
61
case 2:
62
*data = gic_dist_readb(opaque, offset, attrs);
63
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
64
- return MEMTX_OK;
65
+ break;
66
case 4:
67
*data = gic_dist_readb(opaque, offset, attrs);
68
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
69
*data |= gic_dist_readb(opaque, offset + 2, attrs) << 16;
70
*data |= gic_dist_readb(opaque, offset + 3, attrs) << 24;
71
- return MEMTX_OK;
72
+ break;
73
default:
74
return MEMTX_ERROR;
75
}
76
+
51
+
77
+ trace_gic_dist_read(offset, size, *data);
52
+bool acpi_ghes_present(void)
78
+ return MEMTX_OK;
53
+{
54
+ return false;
55
+}
56
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/acpi/ghes.c
59
+++ b/hw/acpi/ghes.c
60
@@ -XXX,XX +XXX,XX @@ void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
61
/* Create a read-write fw_cfg file for Address */
62
fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
63
NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
64
+
65
+ ags->present = true;
79
}
66
}
80
67
81
static void gic_dist_writeb(void *opaque, hwaddr offset,
68
int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
82
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
69
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
83
static MemTxResult gic_dist_write(void *opaque, hwaddr offset, uint64_t data,
70
84
unsigned size, MemTxAttrs attrs)
71
return ret;
85
{
72
}
86
+ trace_gic_dist_write(offset, size, data);
87
+
73
+
88
switch (size) {
74
+bool acpi_ghes_present(void)
89
case 1:
75
+{
90
gic_dist_writeb(opaque, offset, data, attrs);
76
+ AcpiGedState *acpi_ged_state;
91
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
77
+ AcpiGhesState *ags;
92
*data = 0;
93
break;
94
}
95
+
78
+
96
+ trace_gic_cpu_read(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
79
+ acpi_ged_state = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED,
97
+ gic_get_vcpu_real_id(cpu), offset, *data);
80
+ NULL));
98
return MEMTX_OK;
99
}
100
101
static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
102
uint32_t value, MemTxAttrs attrs)
103
{
104
+ trace_gic_cpu_write(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
105
+ gic_get_vcpu_real_id(cpu), offset, value);
106
+
81
+
107
switch (offset) {
82
+ if (!acpi_ged_state) {
108
case 0x00: /* Control */
83
+ return false;
109
gic_set_cpu_control(s, cpu, value, attrs);
84
+ }
110
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
85
+ ags = &acpi_ged_state->ghes_state;
111
return MEMTX_OK;
86
+ return ags->present;
112
}
87
+}
113
114
+ trace_gic_hyp_read(addr, *data);
115
return MEMTX_OK;
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
119
GICState *s = ARM_GIC(opaque);
120
int vcpu = cpu + GIC_NCPU;
121
122
+ trace_gic_hyp_write(addr, value);
123
+
124
switch (addr) {
125
case A_GICH_HCR: /* Hypervisor Control */
126
s->h_hcr[cpu] = value & GICH_HCR_MASK;
127
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
128
}
129
130
s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
131
+ trace_gic_lr_entry(cpu, lr_idx, s->h_lr[lr_idx][cpu]);
132
break;
133
}
134
135
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/intc/trace-events
138
+++ b/hw/intc/trace-events
139
@@ -XXX,XX +XXX,XX @@ aspeed_vic_write(uint64_t offset, unsigned size, uint32_t data) "To 0x%" PRIx64
140
gic_enable_irq(int irq) "irq %d enabled"
141
gic_disable_irq(int irq) "irq %d disabled"
142
gic_set_irq(int irq, int level, int cpumask, int target) "irq %d level %d cpumask 0x%x target 0x%x"
143
-gic_update_bestirq(int cpu, int irq, int prio, int priority_mask, int running_priority) "cpu %d irq %d priority %d cpu priority mask %d cpu running priority %d"
144
+gic_update_bestirq(const char *s, int cpu, int irq, int prio, int priority_mask, int running_priority) "%s %d irq %d priority %d cpu priority mask %d cpu running priority %d"
145
gic_update_set_irq(int cpu, const char *name, int level) "cpu[%d]: %s = %d"
146
-gic_acknowledge_irq(int cpu, int irq) "cpu %d acknowledged irq %d"
147
+gic_acknowledge_irq(const char *s, int cpu, int irq) "%s %d acknowledged irq %d"
148
+gic_cpu_write(const char *s, int cpu, int addr, uint32_t val) "%s %d iface write at 0x%08x 0x%08" PRIx32
149
+gic_cpu_read(const char *s, int cpu, int addr, uint32_t val) "%s %d iface read at 0x%08x: 0x%08" PRIx32
150
+gic_hyp_read(int addr, uint32_t val) "hyp read at 0x%08x: 0x%08" PRIx32
151
+gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08" PRIx32
152
+gic_dist_read(int addr, unsigned int size, uint32_t val) "dist read at 0x%08x size %u: 0x%08" PRIx32
153
+gic_dist_write(int addr, unsigned int size, uint32_t val) "dist write at 0x%08x size %u: 0x%08" PRIx32
154
+gic_lr_entry(int cpu, int entry, uint32_t val) "cpu %d: new lr entry %d: 0x%08" PRIx32
155
+gic_update_maintenance_irq(int cpu, int val) "cpu %d: maintenance = %d"
156
157
# hw/intc/arm_gicv3_cpuif.c
158
gicv3_icc_pmr_read(uint32_t cpu, uint64_t val) "GICv3 ICC_PMR read cpu 0x%x value 0x%" PRIx64
159
--
88
--
160
2.18.0
89
2.20.1
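A minimal sketch of how callers are expected to use the new check (this is essentially the pattern the next patch applies in target/arm/kvm64.c; 'paddr' stands in for whatever guest physical address the caller has, and the source id constant is the same assumption as in the earlier sketch):

    if (acpi_ghes_present()) {
        /* a GHES table exists, so it is safe to try to record the error */
        acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr);
    }
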
The virt_is_acpi_enabled() function is specific to the virt board, as
2
is the check for its 'ras' property. Use the new acpi_ghes_present()
3
function to check whether we should report memory errors via
4
acpi_ghes_record_errors().
1
5
6
This avoids a link error if QEMU was built without support for the
7
virt board, and provides a mechanism that can be used by any future
8
board models that want to add ACPI memory error reporting support
9
(they only need to call acpi_ghes_add_fw_cfg()).
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
14
Message-id: 20210603171259.27962-4-peter.maydell@linaro.org
15
---
16
target/arm/kvm64.c | 6 +-----
17
1 file changed, 1 insertion(+), 5 deletions(-)
18
19
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm64.c
22
+++ b/target/arm/kvm64.c
23
@@ -XXX,XX +XXX,XX @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
24
{
25
ram_addr_t ram_addr;
26
hwaddr paddr;
27
- Object *obj = qdev_get_machine();
28
- VirtMachineState *vms = VIRT_MACHINE(obj);
29
- bool acpi_enabled = virt_is_acpi_enabled(vms);
30
31
assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);
32
33
- if (acpi_enabled && addr &&
34
- object_property_get_bool(obj, "ras", NULL)) {
35
+ if (acpi_ghes_present() && addr) {
36
ram_addr = qemu_ram_addr_from_host(addr);
37
if (ram_addr != RAM_ADDR_INVALID &&
38
kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
39
--
40
2.20.1
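For a future board model that wants this error reporting path, the only wiring required is the acpi_ghes_add_fw_cfg() call mentioned above; a rough sketch (the board-side names here are hypothetical, the signature is the one visible in include/hw/acpi/ghes.h):

    /* in the board's ACPI setup, after building the HEST/GHES tables */
    acpi_ghes_add_fw_cfg(&ged_state->ghes_state, fw_cfg, hardware_errors);
    /*
     * This sets the new 'present' flag, so acpi_ghes_present() returns
     * true and kvm_arch_on_sigbus_vcpu() will record memory errors.
     */
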
From: Richard Henderson <richard.henderson@linaro.org>
1
These days the Arm architecture has a wide range of fine-grained
2
optional extra architectural features. We implement quite a lot
3
of these but by no means all of them. Document what we do implement,
4
so that users can find out without having to dig through back-issues
5
of our Changelog on the wiki.
2
6
3
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20210617140328.28622-1-peter.maydell@linaro.org
8
Tested-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
target/arm/sve_helper.c | 2 +-
12
docs/system/arm/emulation.rst | 102 ++++++++++++++++++++++++++++++++++
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
docs/system/target-arm.rst | 6 ++
14
2 files changed, 108 insertions(+)
15
create mode 100644 docs/system/arm/emulation.rst
15
16
16
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
17
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
18
new file mode 100644
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/docs/system/arm/emulation.rst
22
@@ -XXX,XX +XXX,XX @@
23
+A-profile CPU architecture support
24
+==================================
25
+
26
+QEMU's TCG emulation includes support for the Armv5, Armv6, Armv7 and
27
+Armv8 versions of the A-profile architecture. It also has support for
28
+the following architecture extensions:
29
+
30
+- FEAT_AA32BF16 (AArch32 BFloat16 instructions)
31
+- FEAT_AA32HPD (AArch32 hierarchical permission disables)
32
+- FEAT_AA32I8MM (AArch32 Int8 matrix multiplication instructions)
33
+- FEAT_AES (AESD and AESE instructions)
34
+- FEAT_BF16 (AArch64 BFloat16 instructions)
35
+- FEAT_BTI (Branch Target Identification)
36
+- FEAT_DIT (Data Independent Timing instructions)
37
+- FEAT_DPB (DC CVAP instruction)
38
+- FEAT_DotProd (Advanced SIMD dot product instructions)
39
+- FEAT_FCMA (Floating-point complex number instructions)
40
+- FEAT_FHM (Floating-point half-precision multiplication instructions)
41
+- FEAT_FP16 (Half-precision floating-point data processing)
42
+- FEAT_FRINTTS (Floating-point to integer instructions)
43
+- FEAT_FlagM (Flag manipulation instructions v2)
44
+- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
45
+- FEAT_HPDS (Hierarchical permission disables)
46
+- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
47
+- FEAT_JSCVT (JavaScript conversion instructions)
48
+- FEAT_LOR (Limited ordering regions)
49
+- FEAT_LRCPC (Load-acquire RCpc instructions)
50
+- FEAT_LRCPC2 (Load-acquire RCpc instructions v2)
51
+- FEAT_LSE (Large System Extensions)
52
+- FEAT_MTE (Memory Tagging Extension)
53
+- FEAT_MTE2 (Memory Tagging Extension)
54
+- FEAT_PAN (Privileged access never)
55
+- FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
56
+- FEAT_PAuth (Pointer authentication)
57
+- FEAT_PMULL (PMULL, PMULL2 instructions)
58
+- FEAT_PMUv3p1 (PMU Extensions v3.1)
59
+- FEAT_PMUv3p4 (PMU Extensions v3.4)
60
+- FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
61
+- FEAT_RNG (Random number generator)
62
+- FEAT_SB (Speculation Barrier)
63
+- FEAT_SEL2 (Secure EL2)
64
+- FEAT_SHA1 (SHA1 instructions)
65
+- FEAT_SHA256 (SHA256 instructions)
66
+- FEAT_SHA3 (Advanced SIMD SHA3 instructions)
67
+- FEAT_SHA512 (Advanced SIMD SHA512 instructions)
68
+- FEAT_SM3 (Advanced SIMD SM3 instructions)
69
+- FEAT_SM4 (Advanced SIMD SM4 instructions)
70
+- FEAT_SPECRES (Speculation restriction instructions)
71
+- FEAT_SSBS (Speculative Store Bypass Safe)
72
+- FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
73
+- FEAT_TLBIRANGE (TLB invalidate range instructions)
74
+- FEAT_TTCNP (Translation table Common not private translations)
75
+- FEAT_TTST (Small translation tables)
76
+- FEAT_UAO (Unprivileged Access Override control)
77
+- FEAT_VHE (Virtualization Host Extensions)
78
+- FEAT_VMID16 (16-bit VMID)
79
+- FEAT_XNX (Translation table stage 2 Unprivileged Execute-never)
80
+- SVE (The Scalable Vector Extension)
81
+- SVE2 (The Scalable Vector Extension v2)
82
+
83
+For information on the specifics of these extensions, please refer
84
+to the `Armv8-A Arm Architecture Reference Manual
85
+<https://developer.arm.com/documentation/ddi0487/latest>`_.
86
+
87
+When a specific named CPU is being emulated, only those features which
88
+are present in hardware for that CPU are emulated. (If a feature is
89
+not in the list above then it is not supported, even if the real
90
+hardware should have it.) The ``max`` CPU enables all features.
91
+
92
+R-profile CPU architecture support
93
+==================================
94
+
95
+QEMU's TCG emulation support for R-profile CPUs is currently limited.
96
+We emulate only the Cortex-R5 and Cortex-R5F CPUs.
97
+
98
+M-profile CPU architecture support
99
+==================================
100
+
101
+QEMU's TCG emulation includes support for Armv6-M, Armv7-M, Armv8-M, and
102
+Armv8.1-M versions of the M-profile architecture. It also has support
103
+for the following architecture extensions:
104
+
105
+- FP (Floating-point Extension)
106
+- FPCXT (FPCXT access instructions)
107
+- HP (Half-precision floating-point instructions)
108
+- LOB (Low Overhead loops and Branch future)
109
+- M (Main Extension)
110
+- MPU (Memory Protection Unit Extension)
111
+- PXN (Privileged Execute Never)
112
+- RAS (Reliability, Serviceability and Availability): "minimum RAS Extension" only
113
+- S (Security Extension)
114
+- ST (System Timer Extension)
115
+
116
+For information on the specifics of these extensions, please refer
117
+to the `Armv8-M Arm Architecture Reference Manual
118
+<https://developer.arm.com/documentation/ddi0553/latest>`_.
119
+
120
+When a specific named CPU is being emulated, only those features which
121
+are present in hardware for that CPU are emulated. (If a feature is
122
+not in the list above then it is not supported, even if the real
123
+hardware should have it.) There is no equivalent of the ``max`` CPU for
124
+M-profile.
125
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
17
index XXXXXXX..XXXXXXX 100644
126
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve_helper.c
127
--- a/docs/system/target-arm.rst
19
+++ b/target/arm/sve_helper.c
128
+++ b/docs/system/target-arm.rst
20
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
129
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
21
uint64_t *d = vd, *n = vn;
130
arm/virt
22
uint8_t *pg = vg;
131
arm/xlnx-versal-virt
23
for (i = 0; i < opr_sz; i += 1) {
132
24
- d[i] = n[1] & -(uint64_t)(pg[H1(i)] & 1);
133
+Emulated CPU architecture support
25
+ d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
134
+=================================
26
}
135
+
27
}
136
+.. toctree::
137
+ arm/emulation
138
+
139
Arm CPU features
140
================
28
141
29
--
142
--
30
2.18.0
143
2.20.1
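As a usage note for the new document: the 'max' CPU it describes is the way to get every feature in the A-profile list at once, for example (assuming an AArch64 guest on the virt board):

    qemu-system-aarch64 -machine virt -cpu max ...

whereas a named CPU such as '-cpu cortex-a57' only gets the features that the real hardware has.
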
From: Luc Michel <luc.michel@greensocs.com>
1
In the code for handling VFP system register accesses there is some
2
stray whitespace after a unary '-' operator, and also some incorrect
3
indent in a couple of function prototypes. We're about to move this
4
code to another file, so fix the code style issues first so
5
checkpatch doesn't complain about the code-movement patch.
2
6
3
Add support for GICv2 virtualization extensions by mapping the necessary
7
Cc: qemu-stable@nongnu.org
4
I/O regions and connecting the maintenance IRQ lines.
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210618141019.10671-2-peter.maydell@linaro.org
11
---
12
target/arm/translate-vfp.c | 11 +++++------
13
1 file changed, 5 insertions(+), 6 deletions(-)
5
14
6
Declare those additions in the device tree and in the ACPI tables.
15
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
7
8
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-21-luc.michel@greensocs.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/hw/arm/virt.h | 4 +++-
14
hw/arm/virt-acpi-build.c | 6 +++--
15
hw/arm/virt.c | 52 +++++++++++++++++++++++++++++++++-------
16
3 files changed, 50 insertions(+), 12 deletions(-)
17
18
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/virt.h
17
--- a/target/arm/translate-vfp.c
21
+++ b/include/hw/arm/virt.h
18
+++ b/target/arm/translate-vfp.c
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
23
#define NUM_VIRTIO_TRANSPORTS 32
20
}
24
#define NUM_SMMU_IRQS 4
21
25
22
static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
26
-#define ARCH_GICV3_MAINT_IRQ 9
23
-
27
+#define ARCH_GIC_MAINT_IRQ 9
24
fp_sysreg_loadfn *loadfn,
28
25
- void *opaque)
29
#define ARCH_TIMER_VIRT_IRQ 11
26
+ void *opaque)
30
#define ARCH_TIMER_S_EL1_IRQ 13
27
{
31
@@ -XXX,XX +XXX,XX @@ enum {
28
/* Do a write to an M-profile floating point system register */
32
VIRT_GIC_DIST,
29
TCGv_i32 tmp;
33
VIRT_GIC_CPU,
30
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
34
VIRT_GIC_V2M,
31
}
35
+ VIRT_GIC_HYP,
32
36
+ VIRT_GIC_VCPU,
33
static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
37
VIRT_GIC_ITS,
34
- fp_sysreg_storefn *storefn,
38
VIRT_GIC_REDIST,
35
- void *opaque)
39
VIRT_GIC_REDIST2,
36
+ fp_sysreg_storefn *storefn,
40
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
37
+ void *opaque)
41
index XXXXXXX..XXXXXXX 100644
38
{
42
--- a/hw/arm/virt-acpi-build.c
39
/* Do a read from an M-profile floating point system register */
43
+++ b/hw/arm/virt-acpi-build.c
40
TCGv_i32 tmp;
44
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
41
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
45
gicc->length = sizeof(*gicc);
42
TCGv_i32 addr;
46
if (vms->gic_version == 2) {
43
47
gicc->base_address = cpu_to_le64(memmap[VIRT_GIC_CPU].base);
44
if (!a->a) {
48
+ gicc->gich_base_address = cpu_to_le64(memmap[VIRT_GIC_HYP].base);
45
- offset = - offset;
49
+ gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
46
+ offset = -offset;
50
}
51
gicc->cpu_interface_number = cpu_to_le32(i);
52
gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
53
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
54
if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
55
gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
56
}
57
- if (vms->virt && vms->gic_version == 3) {
58
- gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GICV3_MAINT_IRQ));
59
+ if (vms->virt) {
60
+ gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GIC_MAINT_IRQ));
61
}
62
}
47
}
63
48
64
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
49
addr = load_reg(s, a->rn);
65
index XXXXXXX..XXXXXXX 100644
50
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
66
--- a/hw/arm/virt.c
51
TCGv_i32 value = tcg_temp_new_i32();
67
+++ b/hw/arm/virt.c
52
68
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
53
if (!a->a) {
69
[VIRT_GIC_DIST] = { 0x08000000, 0x00010000 },
54
- offset = - offset;
70
[VIRT_GIC_CPU] = { 0x08010000, 0x00010000 },
55
+ offset = -offset;
71
[VIRT_GIC_V2M] = { 0x08020000, 0x00001000 },
72
+ [VIRT_GIC_HYP] = { 0x08030000, 0x00010000 },
73
+ [VIRT_GIC_VCPU] = { 0x08040000, 0x00010000 },
74
/* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
75
[VIRT_GIC_ITS] = { 0x08080000, 0x00020000 },
76
/* This redistributor space allows up to 2*64kB*123 CPUs */
77
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)
78
79
if (vms->virt) {
80
qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
81
- GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
82
+ GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
83
GIC_FDT_IRQ_FLAGS_LEVEL_HI);
84
}
85
} else {
86
/* 'cortex-a15-gic' means 'GIC v2' */
87
qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
88
"arm,cortex-a15-gic");
89
- qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
90
- 2, vms->memmap[VIRT_GIC_DIST].base,
91
- 2, vms->memmap[VIRT_GIC_DIST].size,
92
- 2, vms->memmap[VIRT_GIC_CPU].base,
93
- 2, vms->memmap[VIRT_GIC_CPU].size);
94
+ if (!vms->virt) {
95
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
96
+ 2, vms->memmap[VIRT_GIC_DIST].base,
97
+ 2, vms->memmap[VIRT_GIC_DIST].size,
98
+ 2, vms->memmap[VIRT_GIC_CPU].base,
99
+ 2, vms->memmap[VIRT_GIC_CPU].size);
100
+ } else {
101
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
102
+ 2, vms->memmap[VIRT_GIC_DIST].base,
103
+ 2, vms->memmap[VIRT_GIC_DIST].size,
104
+ 2, vms->memmap[VIRT_GIC_CPU].base,
105
+ 2, vms->memmap[VIRT_GIC_CPU].size,
106
+ 2, vms->memmap[VIRT_GIC_HYP].base,
107
+ 2, vms->memmap[VIRT_GIC_HYP].size,
108
+ 2, vms->memmap[VIRT_GIC_VCPU].base,
109
+ 2, vms->memmap[VIRT_GIC_VCPU].size);
110
+ qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
111
+ GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
112
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
113
+ }
114
}
56
}
115
57
116
qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
58
addr = load_reg(s, a->rn);
117
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
118
qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
119
MIN(smp_cpus - redist0_count, redist1_capacity));
120
}
121
+ } else {
122
+ if (!kvm_irqchip_in_kernel()) {
123
+ qdev_prop_set_bit(gicdev, "has-virtualization-extensions",
124
+ vms->virt);
125
+ }
126
}
127
qdev_init_nofail(gicdev);
128
gicbusdev = SYS_BUS_DEVICE(gicdev);
129
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
130
}
131
} else {
132
sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
133
+ if (vms->virt) {
134
+ sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_HYP].base);
135
+ sysbus_mmio_map(gicbusdev, 3, vms->memmap[VIRT_GIC_VCPU].base);
136
+ }
137
}
138
139
/* Wire the outputs from each CPU's generic timer and the GICv3
140
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
141
ppibase + timer_irq[irq]));
142
}
143
144
- qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", 0,
145
- qdev_get_gpio_in(gicdev, ppibase
146
- + ARCH_GICV3_MAINT_IRQ));
147
+ if (type == 3) {
148
+ qemu_irq irq = qdev_get_gpio_in(gicdev,
149
+ ppibase + ARCH_GIC_MAINT_IRQ);
150
+ qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt",
151
+ 0, irq);
152
+ } else if (vms->virt) {
153
+ qemu_irq irq = qdev_get_gpio_in(gicdev,
154
+ ppibase + ARCH_GIC_MAINT_IRQ);
155
+ sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
156
+ }
157
+
158
qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
159
qdev_get_gpio_in(gicdev, ppibase
160
+ VIRTUAL_PMU_IRQ));
161
--
59
--
162
2.18.0
60
2.20.1
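With the virt board change above, a guest can be given a virtualization-capable GICv2 under TCG with something like (option names as assumed for the current virt board; with an in-kernel irqchip the 'has-virtualization-extensions' property is deliberately not set, as the create_gic() hunk above shows):

    qemu-system-aarch64 -machine virt,virtualization=on,gic-version=2 -cpu cortex-a57 ...
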
From: Julia Suvorova <jusual@mail.ru>
1
If the guest makes an FPCXT_NS access when the FPU is disabled,
2
one of two things happens:
3
* if there is no active FP context, then the insn behaves the
4
same way as if the FPU was enabled: writes ignored, reads
5
same value as FPDSCR_NS
6
* if there is an active FP context, then we take a NOCP
7
exception
2
8
3
The differences from ARMv7-M NVIC are:
9
Add code to the sysreg read/write functions which emits
4
* ARMv6-M only supports up to 32 external interrupts
10
code to take the NOCP exception in the latter case.
5
(configurable feature already). The ICTR is reserved.
6
* Active Bit Register is reserved.
7
* ARMv6-M supports 4 priority levels against 256 in ARMv7-M.
8
11
9
Signed-off-by: Julia Suvorova <jusual@mail.ru>
12
At the moment this will never be used, because the NOCP checks in
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
m-nocp.decode happen first, and so the trans functions are never
14
called when the FPU is disabled. The code will be needed when we
15
move the sysreg access insns to before the NOCP patterns in the
16
following commit.
17
18
Cc: qemu-stable@nongnu.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20210618141019.10671-3-peter.maydell@linaro.org
12
---
22
---
13
include/hw/intc/armv7m_nvic.h | 1 +
23
target/arm/translate-vfp.c | 32 ++++++++++++++++++++++++++++++--
14
hw/intc/armv7m_nvic.c | 21 ++++++++++++++++++---
24
1 file changed, 30 insertions(+), 2 deletions(-)
15
2 files changed, 19 insertions(+), 3 deletions(-)
16
25
17
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
26
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
18
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/intc/armv7m_nvic.h
28
--- a/target/arm/translate-vfp.c
20
+++ b/include/hw/intc/armv7m_nvic.h
29
+++ b/target/arm/translate-vfp.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
30
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
22
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
31
lab_end = gen_new_label();
23
/* The PRIGROUP field in AIRCR is banked */
32
/* fpInactive case: write is a NOP, so branch to end */
24
uint32_t prigroup[M_REG_NUM_BANKS];
33
gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
25
+ uint8_t num_prio_bits;
34
- /* !fpInactive: PreserveFPState(), and reads same as FPCXT_S */
26
35
+ /*
27
/* v8M NVIC_ITNS state (stored as a bool per bit) */
36
+ * !fpInactive: if FPU disabled, take NOCP exception;
28
bool itns[NVIC_MAX_VECTORS];
37
+ * otherwise PreserveFPState(), and then FPCXT_NS writes
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
38
+ * behave the same as FPCXT_S writes.
30
index XXXXXXX..XXXXXXX 100644
39
+ */
31
--- a/hw/intc/armv7m_nvic.c
40
+ if (s->fp_excp_el) {
32
+++ b/hw/intc/armv7m_nvic.c
41
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
33
@@ -XXX,XX +XXX,XX @@ static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
42
+ syn_uncategorized(), s->fp_excp_el);
34
assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
43
+ /*
35
assert(irq < s->num_irq);
44
+ * This was only a conditional exception, so override
36
45
+ * gen_exception_insn()'s default to DISAS_NORETURN
37
+ prio &= MAKE_64BIT_MASK(8 - s->num_prio_bits, s->num_prio_bits);
46
+ */
38
+
47
+ s->base.is_jmp = DISAS_NEXT;
39
if (secure) {
40
assert(exc_is_banked(irq));
41
s->sec_vectors[irq].prio = prio;
42
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
43
44
switch (offset) {
45
case 4: /* Interrupt Control Type. */
46
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
47
+ goto bad_offset;
48
+ }
49
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
50
case 0xc: /* CPPWR */
51
if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
52
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
53
"Setting VECTRESET when not in DEBUG mode "
54
"is UNPREDICTABLE\n");
55
}
56
- s->prigroup[attrs.secure] = extract32(value,
57
- R_V7M_AIRCR_PRIGROUP_SHIFT,
58
- R_V7M_AIRCR_PRIGROUP_LENGTH);
59
+ if (arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
60
+ s->prigroup[attrs.secure] =
61
+ extract32(value,
62
+ R_V7M_AIRCR_PRIGROUP_SHIFT,
63
+ R_V7M_AIRCR_PRIGROUP_LENGTH);
64
+ }
65
if (attrs.secure) {
66
/* These bits are only writable by secure */
67
cpu->env.v7m.aircr = value &
68
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
69
break;
70
case 0x300 ... 0x33f: /* NVIC Active */
71
val = 0;
72
+
73
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_V7)) {
74
+ break;
48
+ break;
75
+ }
49
+ }
76
+
50
gen_preserve_fp_state(s);
77
startvec = 8 * (offset - 0x300) + NVIC_FIRST_IRQ; /* vector # */
51
/* fall through */
78
52
case ARM_VFP_FPCXT_S:
79
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
53
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
80
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
54
tcg_gen_br(lab_end);
81
/* include space for internal exception vectors */
55
82
s->num_irq += NVIC_FIRST_IRQ;
56
gen_set_label(lab_active);
83
57
- /* !fpInactive: Reads the same as FPCXT_S, but side effects differ */
84
+ s->num_prio_bits = arm_feature(&s->cpu->env, ARM_FEATURE_V7) ? 8 : 2;
58
+ /*
85
+
59
+ * !fpInactive: if FPU disabled, take NOCP exception;
86
object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
60
+ * otherwise PreserveFPState(), and then FPCXT_NS
87
"realized", &err);
61
+ * reads the same as FPCXT_S.
88
if (err != NULL) {
62
+ */
63
+ if (s->fp_excp_el) {
64
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
65
+ syn_uncategorized(), s->fp_excp_el);
66
+ /*
67
+ * This was only a conditional exception, so override
68
+ * gen_exception_insn()'s default to DISAS_NORETURN
69
+ */
70
+ s->base.is_jmp = DISAS_NEXT;
71
+ break;
72
+ }
73
gen_preserve_fp_state(s);
74
tmp = tcg_temp_new_i32();
75
sfpa = tcg_temp_new_i32();
89
--
76
--
90
2.18.0
77
2.20.1
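
To make the new NVIC priority masking concrete: num_prio_bits is 2 for ARMv6-M and 8 otherwise, so the mask applied in set_prio() works out as follows (a worked example, not code from the patch):

    /*
     * v7-M and later: num_prio_bits = 8
     *     prio &= MAKE_64BIT_MASK(0, 8);  -> mask 0xff, 256 priority levels
     * v6-M: num_prio_bits = 2
     *     prio &= MAKE_64BIT_MASK(6, 2);  -> mask 0xc0, only 0x00/0x40/0x80/0xc0
     */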
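The FPCXT_NS behaviour implemented by the patch above can be summarised in pseudocode (a sketch of the rule stated in its commit message, not code from the series; fpInactive is the condition checked by gen_branch_fpInactive(), i.e. FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0):

    /* FPCXT_NS access while the FPU is disabled */
    if (fpInactive) {
        /*
         * No active FP context: behave as if the FPU were enabled,
         * i.e. writes are ignored and reads return FPDSCR_NS.
         */
    } else {
        /* Active FP context: take a NOCP exception. */
    }
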
From: Luc Michel <luc.michel@greensocs.com>
1
The M-profile architecture requires that accesses to FPCXT_NS when
2
there is no active FP state must not take a NOCP fault even if the
3
FPU is disabled. We were not implementing this correctly, because
4
in our decode we catch the NOCP faults early in m-nocp.decode.
2
5
3
Add the read/write functions to handle accesses to the vCPU interface.
6
Fix this bug by moving all the handling of M-profile FP system
4
Those accesses are forwarded to the real CPU interface, with the CPU id
7
register accesses from vfp.decode into m-nocp.decode and putting
5
being converted to the corresponding vCPU id (vCPU id = CPU id +
8
it above the NOCP blocks. This provides the correct behaviour:
6
GIC_NCPU).
9
* for accesses other than FPCXT_NS the trans functions call
10
vfp_access_check(), which will check for FPU disabled and
11
raise a NOCP exception if necessary
12
* for FPCXT_NS we have the special case code that doesn't
13
call vfp_access_check()
14
* when these trans functions want to raise an UNDEF they return
15
false, so the decoder will fall through into the NOCP blocks.
16
This means that NOCP correctly takes precedence over UNDEF
17
for these insns. (This is a difference from the other insns
18
handled by m-nocp.decode, where UNDEF takes precedence and
19
which we implement by having those trans functions call
20
unallocated_encoding() in the appropriate places.)
7
21
8
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
22
[Note for backport to stable: this commit has a semantic dependency
9
Message-id: 20180727095421.386-15-luc.michel@greensocs.com
23
on commit 9a486856e9173af, which was not marked as cc-stable because
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
24
we didn't know we'd need it for a for-stable bugfix.]
25
26
Cc: qemu-stable@nongnu.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Message-id: 20210618141019.10671-4-peter.maydell@linaro.org
12
---
30
---
13
hw/intc/arm_gic.c | 37 +++++++++++++++++++++++++++++++++++--
31
target/arm/translate-a32.h | 1 +
14
1 file changed, 35 insertions(+), 2 deletions(-)
32
target/arm/m-nocp.decode | 24 ++
33
target/arm/vfp.decode | 14 -
34
target/arm/translate-m-nocp.c | 514 +++++++++++++++++++++++++++++++++
35
target/arm/translate-vfp.c | 517 +---------------------------------
36
5 files changed, 542 insertions(+), 528 deletions(-)
15
37
16
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
38
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
17
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gic.c
40
--- a/target/arm/translate-a32.h
19
+++ b/hw/intc/arm_gic.c
41
+++ b/target/arm/translate-a32.h
20
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_do_cpu_write(void *opaque, hwaddr addr,
42
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
21
return gic_cpu_write(s, id, addr, value, attrs);
43
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
44
void arm_gen_condlabel(DisasContext *s);
45
bool vfp_access_check(DisasContext *s);
46
+void gen_preserve_fp_state(DisasContext *s);
47
void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
48
void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
49
void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
50
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/m-nocp.decode
53
+++ b/target/arm/m-nocp.decode
54
@@ -XXX,XX +XXX,XX @@
55
56
&nocp cp
57
58
+# M-profile VLDR/VSTR to sysreg
59
+%vldr_sysreg 22:1 13:3
60
+%imm7_0x4 0:7 !function=times_4
61
+
62
+&vldr_sysreg rn reg imm a w p
63
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
64
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
65
+
66
{
67
# Special cases which do not take an early NOCP: VLLDM and VLSTM
68
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
69
@@ -XXX,XX +XXX,XX @@
70
VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
71
VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
72
73
+ # FP system register accesses: these are a special case because accesses
74
+ # to FPCXT_NS succeed even if the FPU is disabled. We therefore need
75
+ # to handle them before the big NOCP blocks. Note that within these
76
+ # insns NOCP still has higher priority than UNDEFs; this is implemented
77
+ # by their returning 'false' for UNDEF so as to fall through into the
78
+ # NOCP check (in contrast to VLLDM etc, which call unallocated_encoding()
79
+ # for the UNDEFs there that must take precedence over NOCP.)
80
+
81
+ VMSR_VMRS ---- 1110 111 l:1 reg:4 rt:4 1010 0001 0000
82
+
83
+ # P=0 W=0 is SEE "Related encodings", so split into two patterns
84
+ VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
85
+ VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
86
+ VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
87
+ VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
88
+
89
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
90
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
91
# From v8.1M onwards this range will also NOCP:
92
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/vfp.decode
95
+++ b/target/arm/vfp.decode
96
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
97
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
98
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
99
100
-# M-profile VLDR/VSTR to sysreg
101
-%vldr_sysreg 22:1 13:3
102
-%imm7_0x4 0:7 !function=times_4
103
-
104
-&vldr_sysreg rn reg imm a w p
105
-@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
106
- reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
107
-
108
-# P=0 W=0 is SEE "Related encodings", so split into two patterns
109
-VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
110
-VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
111
-VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
112
-VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
113
-
114
# We split the load/store multiple up into two patterns to avoid
115
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
116
# grouping:
117
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/target/arm/translate-m-nocp.c
120
+++ b/target/arm/translate-m-nocp.c
121
@@ -XXX,XX +XXX,XX @@
122
123
#include "qemu/osdep.h"
124
#include "tcg/tcg-op.h"
125
+#include "tcg/tcg-op-gvec.h"
126
#include "translate.h"
127
#include "translate-a32.h"
128
129
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
130
return true;
22
}
131
}
23
132
24
+static MemTxResult gic_thisvcpu_read(void *opaque, hwaddr addr, uint64_t *data,
133
+/*
25
+ unsigned size, MemTxAttrs attrs)
134
+ * M-profile provides two different sets of instructions that can
135
+ * access floating point system registers: VMSR/VMRS (which move
136
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
137
+ * move directly to/from memory). In some cases there are also side
138
+ * effects which must happen after any write to memory (which could
139
+ * cause an exception). So we implement the common logic for the
140
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
141
+ * which take pointers to callback functions which will perform the
142
+ * actual "read/write general purpose register" and "read/write
143
+ * memory" operations.
144
+ */
145
+
146
+/*
147
+ * Emit code to store the sysreg to its final destination; frees the
148
+ * TCG temp 'value' it is passed.
149
+ */
150
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
151
+/*
152
+ * Emit code to load the value to be copied to the sysreg; returns
153
+ * a new TCG temporary
154
+ */
155
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
156
+
157
+/* Common decode/access checks for fp sysreg read/write */
158
+typedef enum FPSysRegCheckResult {
159
+ FPSysRegCheckFailed, /* caller should return false */
160
+ FPSysRegCheckDone, /* caller should return true */
161
+ FPSysRegCheckContinue, /* caller should continue generating code */
162
+} FPSysRegCheckResult;
163
+
164
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
26
+{
165
+{
27
+ GICState *s = (GICState *)opaque;
166
+ if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
28
+
167
+ return FPSysRegCheckFailed;
29
+ return gic_cpu_read(s, gic_get_current_vcpu(s), addr, data, attrs);
168
+ }
169
+
170
+ switch (regno) {
171
+ case ARM_VFP_FPSCR:
172
+ case QEMU_VFP_FPSCR_NZCV:
173
+ break;
174
+ case ARM_VFP_FPSCR_NZCVQC:
175
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
176
+ return FPSysRegCheckFailed;
177
+ }
178
+ break;
179
+ case ARM_VFP_FPCXT_S:
180
+ case ARM_VFP_FPCXT_NS:
181
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
182
+ return FPSysRegCheckFailed;
183
+ }
184
+ if (!s->v8m_secure) {
185
+ return FPSysRegCheckFailed;
186
+ }
187
+ break;
188
+ case ARM_VFP_VPR:
189
+ case ARM_VFP_P0:
190
+ if (!dc_isar_feature(aa32_mve, s)) {
191
+ return FPSysRegCheckFailed;
192
+ }
193
+ break;
194
+ default:
195
+ return FPSysRegCheckFailed;
196
+ }
197
+
198
+ /*
199
+ * FPCXT_NS is a special case: it has specific handling for
200
+ * "current FP state is inactive", and must do the PreserveFPState()
201
+ * but not the usual full set of actions done by ExecuteFPCheck().
202
+ * So we don't call vfp_access_check() and the callers must handle this.
203
+ */
204
+ if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
205
+ return FPSysRegCheckDone;
206
+ }
207
+ return FPSysRegCheckContinue;
30
+}
208
+}
31
+
209
+
32
+static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
210
+static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
33
+ uint64_t value, unsigned size,
211
+ TCGLabel *label)
34
+ MemTxAttrs attrs)
35
+{
212
+{
36
+ GICState *s = (GICState *)opaque;
213
+ /*
37
+
214
+ * FPCXT_NS is a special case: it has specific handling for
38
+ return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
215
+ * "current FP state is inactive", and must do the PreserveFPState()
216
+ * but not the usual full set of actions done by ExecuteFPCheck().
217
+ * We don't have a TB flag that matches the fpInactive check, so we
218
+ * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
219
+ *
220
+ * Emit code that checks fpInactive and does a conditional
221
+ * branch to label based on it:
222
+ * if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
223
+ * if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
224
+ */
225
+ assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
226
+
227
+ /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
228
+ TCGv_i32 aspen, fpca;
229
+ aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
230
+ fpca = load_cpu_field(v7m.control[M_REG_S]);
231
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
232
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
233
+ tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
234
+ tcg_gen_or_i32(fpca, fpca, aspen);
235
+ tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
236
+ tcg_temp_free_i32(aspen);
237
+ tcg_temp_free_i32(fpca);
39
+}
238
+}
40
+
239
+
41
static const MemoryRegionOps gic_ops[2] = {
240
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
42
{
241
+ fp_sysreg_loadfn *loadfn,
43
.read_with_attrs = gic_dist_read,
242
+ void *opaque)
44
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
243
+{
45
.endianness = DEVICE_NATIVE_ENDIAN,
244
+ /* Do a write to an M-profile floating point system register */
46
};
245
+ TCGv_i32 tmp;
47
246
+ TCGLabel *lab_end = NULL;
48
+static const MemoryRegionOps gic_virt_ops[2] = {
247
+
248
+ switch (fp_sysreg_checks(s, regno)) {
249
+ case FPSysRegCheckFailed:
250
+ return false;
251
+ case FPSysRegCheckDone:
252
+ return true;
253
+ case FPSysRegCheckContinue:
254
+ break;
255
+ }
256
+
257
+ switch (regno) {
258
+ case ARM_VFP_FPSCR:
259
+ tmp = loadfn(s, opaque);
260
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
261
+ tcg_temp_free_i32(tmp);
262
+ gen_lookup_tb(s);
263
+ break;
264
+ case ARM_VFP_FPSCR_NZCVQC:
49
+ {
265
+ {
50
+ .read_with_attrs = NULL,
266
+ TCGv_i32 fpscr;
51
+ .write_with_attrs = NULL,
267
+ tmp = loadfn(s, opaque);
52
+ .endianness = DEVICE_NATIVE_ENDIAN,
268
+ if (dc_isar_feature(aa32_mve, s)) {
53
+ },
269
+ /* QC is only present for MVE; otherwise RES0 */
270
+ TCGv_i32 qc = tcg_temp_new_i32();
271
+ tcg_gen_andi_i32(qc, tmp, FPCR_QC);
272
+ /*
273
+ * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
274
+ * here writing the same value into all elements is simplest.
275
+ */
276
+ tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
277
+ 16, 16, qc);
278
+ }
279
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
280
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
281
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
282
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
283
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
284
+ tcg_temp_free_i32(tmp);
285
+ break;
286
+ }
287
+ case ARM_VFP_FPCXT_NS:
288
+ lab_end = gen_new_label();
289
+ /* fpInactive case: write is a NOP, so branch to end */
290
+ gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
291
+ /*
292
+ * !fpInactive: if FPU disabled, take NOCP exception;
293
+ * otherwise PreserveFPState(), and then FPCXT_NS writes
294
+ * behave the same as FPCXT_S writes.
295
+ */
296
+ if (s->fp_excp_el) {
297
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
298
+ syn_uncategorized(), s->fp_excp_el);
299
+ /*
300
+ * This was only a conditional exception, so override
301
+ * gen_exception_insn()'s default to DISAS_NORETURN
302
+ */
303
+ s->base.is_jmp = DISAS_NEXT;
304
+ break;
305
+ }
306
+ gen_preserve_fp_state(s);
307
+ /* fall through */
308
+ case ARM_VFP_FPCXT_S:
54
+ {
309
+ {
55
+ .read_with_attrs = gic_thisvcpu_read,
310
+ TCGv_i32 sfpa, control;
56
+ .write_with_attrs = gic_thisvcpu_write,
311
+ /*
57
+ .endianness = DEVICE_NATIVE_ENDIAN,
312
+ * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
58
+ }
313
+ * bits [27:0] from value and zeroes bits [31:28].
59
+};
314
+ */
60
+
315
+ tmp = loadfn(s, opaque);
61
static void arm_gic_realize(DeviceState *dev, Error **errp)
316
+ sfpa = tcg_temp_new_i32();
317
+ tcg_gen_shri_i32(sfpa, tmp, 31);
318
+ control = load_cpu_field(v7m.control[M_REG_S]);
319
+ tcg_gen_deposit_i32(control, control, sfpa,
320
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
321
+ store_cpu_field(control, v7m.control[M_REG_S]);
322
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
323
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
324
+ tcg_temp_free_i32(tmp);
325
+ tcg_temp_free_i32(sfpa);
326
+ break;
327
+ }
328
+ case ARM_VFP_VPR:
329
+ /* Behaves as NOP if not privileged */
330
+ if (IS_USER(s)) {
331
+ break;
332
+ }
333
+ tmp = loadfn(s, opaque);
334
+ store_cpu_field(tmp, v7m.vpr);
335
+ break;
336
+ case ARM_VFP_P0:
337
+ {
338
+ TCGv_i32 vpr;
339
+ tmp = loadfn(s, opaque);
340
+ vpr = load_cpu_field(v7m.vpr);
341
+ tcg_gen_deposit_i32(vpr, vpr, tmp,
342
+ R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
343
+ store_cpu_field(vpr, v7m.vpr);
344
+ tcg_temp_free_i32(tmp);
345
+ break;
346
+ }
347
+ default:
348
+ g_assert_not_reached();
349
+ }
350
+ if (lab_end) {
351
+ gen_set_label(lab_end);
352
+ }
353
+ return true;
354
+}
355
+
356
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
357
+ fp_sysreg_storefn *storefn,
358
+ void *opaque)
359
+{
360
+ /* Do a read from an M-profile floating point system register */
361
+ TCGv_i32 tmp;
362
+ TCGLabel *lab_end = NULL;
363
+ bool lookup_tb = false;
364
+
365
+ switch (fp_sysreg_checks(s, regno)) {
366
+ case FPSysRegCheckFailed:
367
+ return false;
368
+ case FPSysRegCheckDone:
369
+ return true;
370
+ case FPSysRegCheckContinue:
371
+ break;
372
+ }
373
+
374
+ if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
375
+ /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
376
+ regno = QEMU_VFP_FPSCR_NZCV;
377
+ }
378
+
379
+ switch (regno) {
380
+ case ARM_VFP_FPSCR:
381
+ tmp = tcg_temp_new_i32();
382
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
383
+ storefn(s, opaque, tmp);
384
+ break;
385
+ case ARM_VFP_FPSCR_NZCVQC:
386
+ tmp = tcg_temp_new_i32();
387
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
388
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
389
+ storefn(s, opaque, tmp);
390
+ break;
391
+ case QEMU_VFP_FPSCR_NZCV:
392
+ /*
393
+ * Read just NZCV; this is a special case to avoid the
394
+ * helper call for the "VMRS to CPSR.NZCV" insn.
395
+ */
396
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
397
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
398
+ storefn(s, opaque, tmp);
399
+ break;
400
+ case ARM_VFP_FPCXT_S:
401
+ {
402
+ TCGv_i32 control, sfpa, fpscr;
403
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
404
+ tmp = tcg_temp_new_i32();
405
+ sfpa = tcg_temp_new_i32();
406
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
407
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
408
+ control = load_cpu_field(v7m.control[M_REG_S]);
409
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
410
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
411
+ tcg_gen_or_i32(tmp, tmp, sfpa);
412
+ tcg_temp_free_i32(sfpa);
413
+ /*
414
+ * Store result before updating FPSCR etc, in case
415
+ * it is a memory write which causes an exception.
416
+ */
417
+ storefn(s, opaque, tmp);
418
+ /*
419
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
420
+ * CONTROL.SFPA; so we'll end the TB here.
421
+ */
422
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
423
+ store_cpu_field(control, v7m.control[M_REG_S]);
424
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
425
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
426
+ tcg_temp_free_i32(fpscr);
427
+ lookup_tb = true;
428
+ break;
429
+ }
430
+ case ARM_VFP_FPCXT_NS:
431
+ {
432
+ TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
433
+ TCGLabel *lab_active = gen_new_label();
434
+
435
+ lookup_tb = true;
436
+
437
+ gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
438
+ /* fpInactive case: reads as FPDSCR_NS */
439
+ TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
440
+ storefn(s, opaque, tmp);
441
+ lab_end = gen_new_label();
442
+ tcg_gen_br(lab_end);
443
+
444
+ gen_set_label(lab_active);
445
+ /*
446
+ * !fpInactive: if FPU disabled, take NOCP exception;
447
+ * otherwise PreserveFPState(), and then FPCXT_NS
448
+ * reads the same as FPCXT_S.
449
+ */
450
+ if (s->fp_excp_el) {
451
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
452
+ syn_uncategorized(), s->fp_excp_el);
453
+ /*
454
+ * This was only a conditional exception, so override
455
+ * gen_exception_insn()'s default to DISAS_NORETURN
456
+ */
457
+ s->base.is_jmp = DISAS_NEXT;
458
+ break;
459
+ }
460
+ gen_preserve_fp_state(s);
461
+ tmp = tcg_temp_new_i32();
462
+ sfpa = tcg_temp_new_i32();
463
+ fpscr = tcg_temp_new_i32();
464
+ gen_helper_vfp_get_fpscr(fpscr, cpu_env);
465
+ tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
466
+ control = load_cpu_field(v7m.control[M_REG_S]);
467
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
468
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
469
+ tcg_gen_or_i32(tmp, tmp, sfpa);
470
+ tcg_temp_free_i32(control);
471
+ /* Store result before updating FPSCR, in case it faults */
472
+ storefn(s, opaque, tmp);
473
+ /* If SFPA is zero then set FPSCR from FPDSCR_NS */
474
+ fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
475
+ zero = tcg_const_i32(0);
476
+ tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
477
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
478
+ tcg_temp_free_i32(zero);
479
+ tcg_temp_free_i32(sfpa);
480
+ tcg_temp_free_i32(fpdscr);
481
+ tcg_temp_free_i32(fpscr);
482
+ break;
483
+ }
484
+ case ARM_VFP_VPR:
485
+ /* Behaves as NOP if not privileged */
486
+ if (IS_USER(s)) {
487
+ break;
488
+ }
489
+ tmp = load_cpu_field(v7m.vpr);
490
+ storefn(s, opaque, tmp);
491
+ break;
492
+ case ARM_VFP_P0:
493
+ tmp = load_cpu_field(v7m.vpr);
494
+ tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
495
+ storefn(s, opaque, tmp);
496
+ break;
497
+ default:
498
+ g_assert_not_reached();
499
+ }
500
+
501
+ if (lab_end) {
502
+ gen_set_label(lab_end);
503
+ }
504
+ if (lookup_tb) {
505
+ gen_lookup_tb(s);
506
+ }
507
+ return true;
508
+}
509
+
510
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
511
+{
512
+ arg_VMSR_VMRS *a = opaque;
513
+
514
+ if (a->rt == 15) {
515
+ /* Set the 4 flag bits in the CPSR */
516
+ gen_set_nzcv(value);
517
+ tcg_temp_free_i32(value);
518
+ } else {
519
+ store_reg(s, a->rt, value);
520
+ }
521
+}
522
+
523
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
524
+{
525
+ arg_VMSR_VMRS *a = opaque;
526
+
527
+ return load_reg(s, a->rt);
528
+}
529
+
530
+static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
531
+{
532
+ /*
533
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
534
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
535
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
536
+ * we only care about the top 4 bits of FPSCR there.
537
+ */
538
+ if (a->rt == 15) {
539
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
540
+ a->reg = QEMU_VFP_FPSCR_NZCV;
541
+ } else {
542
+ return false;
543
+ }
544
+ }
545
+
546
+ if (a->l) {
547
+ /* VMRS, move FP system register to gp register */
548
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
549
+ } else {
550
+ /* VMSR, move gp register to FP system register */
551
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
552
+ }
553
+}
554
+
555
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
556
+{
557
+ arg_vldr_sysreg *a = opaque;
558
+ uint32_t offset = a->imm;
559
+ TCGv_i32 addr;
560
+
561
+ if (!a->a) {
562
+ offset = -offset;
563
+ }
564
+
565
+ addr = load_reg(s, a->rn);
566
+ if (a->p) {
567
+ tcg_gen_addi_i32(addr, addr, offset);
568
+ }
569
+
570
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
571
+ gen_helper_v8m_stackcheck(cpu_env, addr);
572
+ }
573
+
574
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
575
+ MO_UL | MO_ALIGN | s->be_data);
576
+ tcg_temp_free_i32(value);
577
+
578
+ if (a->w) {
579
+ /* writeback */
580
+ if (!a->p) {
581
+ tcg_gen_addi_i32(addr, addr, offset);
582
+ }
583
+ store_reg(s, a->rn, addr);
584
+ } else {
585
+ tcg_temp_free_i32(addr);
586
+ }
587
+}
588
+
589
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
590
+{
591
+ arg_vldr_sysreg *a = opaque;
592
+ uint32_t offset = a->imm;
593
+ TCGv_i32 addr;
594
+ TCGv_i32 value = tcg_temp_new_i32();
595
+
596
+ if (!a->a) {
597
+ offset = -offset;
598
+ }
599
+
600
+ addr = load_reg(s, a->rn);
601
+ if (a->p) {
602
+ tcg_gen_addi_i32(addr, addr, offset);
603
+ }
604
+
605
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
606
+ gen_helper_v8m_stackcheck(cpu_env, addr);
607
+ }
608
+
609
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
610
+ MO_UL | MO_ALIGN | s->be_data);
611
+
612
+ if (a->w) {
613
+ /* writeback */
614
+ if (!a->p) {
615
+ tcg_gen_addi_i32(addr, addr, offset);
616
+ }
617
+ store_reg(s, a->rn, addr);
618
+ } else {
619
+ tcg_temp_free_i32(addr);
620
+ }
621
+ return value;
622
+}
623
+
624
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
625
+{
626
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
627
+ return false;
628
+ }
629
+ if (a->rn == 15) {
630
+ return false;
631
+ }
632
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
633
+}
634
+
635
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
636
+{
637
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
638
+ return false;
639
+ }
640
+ if (a->rn == 15) {
641
+ return false;
642
+ }
643
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
644
+}
645
+
646
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
62
{
647
{
63
/* Device instance realize function for the GIC sysbus device */
648
/*
64
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
649
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
65
return;
650
index XXXXXXX..XXXXXXX 100644
651
--- a/target/arm/translate-vfp.c
652
+++ b/target/arm/translate-vfp.c
653
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
654
* Generate code for M-profile lazy FP state preservation if needed;
655
* this corresponds to the pseudocode PreserveFPState() function.
656
*/
657
-static void gen_preserve_fp_state(DisasContext *s)
658
+void gen_preserve_fp_state(DisasContext *s)
659
{
660
if (s->v7m_lspact) {
661
/*
662
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
663
return true;
664
}
665
666
-/*
667
- * M-profile provides two different sets of instructions that can
668
- * access floating point system registers: VMSR/VMRS (which move
669
- * to/from a general purpose register) and VLDR/VSTR sysreg (which
670
- * move directly to/from memory). In some cases there are also side
671
- * effects which must happen after any write to memory (which could
672
- * cause an exception). So we implement the common logic for the
673
- * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
674
- * which take pointers to callback functions which will perform the
675
- * actual "read/write general purpose register" and "read/write
676
- * memory" operations.
677
- */
678
-
679
-/*
680
- * Emit code to store the sysreg to its final destination; frees the
681
- * TCG temp 'value' it is passed.
682
- */
683
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
684
-/*
685
- * Emit code to load the value to be copied to the sysreg; returns
686
- * a new TCG temporary
687
- */
688
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
689
-
690
-/* Common decode/access checks for fp sysreg read/write */
691
-typedef enum FPSysRegCheckResult {
692
- FPSysRegCheckFailed, /* caller should return false */
693
- FPSysRegCheckDone, /* caller should return true */
694
- FPSysRegCheckContinue, /* caller should continue generating code */
695
-} FPSysRegCheckResult;
696
-
697
-static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
698
-{
699
- if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
700
- return FPSysRegCheckFailed;
701
- }
702
-
703
- switch (regno) {
704
- case ARM_VFP_FPSCR:
705
- case QEMU_VFP_FPSCR_NZCV:
706
- break;
707
- case ARM_VFP_FPSCR_NZCVQC:
708
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
709
- return FPSysRegCheckFailed;
710
- }
711
- break;
712
- case ARM_VFP_FPCXT_S:
713
- case ARM_VFP_FPCXT_NS:
714
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
715
- return FPSysRegCheckFailed;
716
- }
717
- if (!s->v8m_secure) {
718
- return FPSysRegCheckFailed;
719
- }
720
- break;
721
- case ARM_VFP_VPR:
722
- case ARM_VFP_P0:
723
- if (!dc_isar_feature(aa32_mve, s)) {
724
- return FPSysRegCheckFailed;
725
- }
726
- break;
727
- default:
728
- return FPSysRegCheckFailed;
729
- }
730
-
731
- /*
732
- * FPCXT_NS is a special case: it has specific handling for
733
- * "current FP state is inactive", and must do the PreserveFPState()
734
- * but not the usual full set of actions done by ExecuteFPCheck().
735
- * So we don't call vfp_access_check() and the callers must handle this.
736
- */
737
- if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
738
- return FPSysRegCheckDone;
739
- }
740
- return FPSysRegCheckContinue;
741
-}
742
-
743
-static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
744
- TCGLabel *label)
745
-{
746
- /*
747
- * FPCXT_NS is a special case: it has specific handling for
748
- * "current FP state is inactive", and must do the PreserveFPState()
749
- * but not the usual full set of actions done by ExecuteFPCheck().
750
- * We don't have a TB flag that matches the fpInactive check, so we
751
- * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
752
- *
753
- * Emit code that checks fpInactive and does a conditional
754
- * branch to label based on it:
755
- * if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
756
- * if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
757
- */
758
- assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
759
-
760
- /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
761
- TCGv_i32 aspen, fpca;
762
- aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
763
- fpca = load_cpu_field(v7m.control[M_REG_S]);
764
- tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
765
- tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
766
- tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
767
- tcg_gen_or_i32(fpca, fpca, aspen);
768
- tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
769
- tcg_temp_free_i32(aspen);
770
- tcg_temp_free_i32(fpca);
771
-}
772
-
773
-static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
774
- fp_sysreg_loadfn *loadfn,
775
- void *opaque)
776
-{
777
- /* Do a write to an M-profile floating point system register */
778
- TCGv_i32 tmp;
779
- TCGLabel *lab_end = NULL;
780
-
781
- switch (fp_sysreg_checks(s, regno)) {
782
- case FPSysRegCheckFailed:
783
- return false;
784
- case FPSysRegCheckDone:
785
- return true;
786
- case FPSysRegCheckContinue:
787
- break;
788
- }
789
-
790
- switch (regno) {
791
- case ARM_VFP_FPSCR:
792
- tmp = loadfn(s, opaque);
793
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
794
- tcg_temp_free_i32(tmp);
795
- gen_lookup_tb(s);
796
- break;
797
- case ARM_VFP_FPSCR_NZCVQC:
798
- {
799
- TCGv_i32 fpscr;
800
- tmp = loadfn(s, opaque);
801
- if (dc_isar_feature(aa32_mve, s)) {
802
- /* QC is only present for MVE; otherwise RES0 */
803
- TCGv_i32 qc = tcg_temp_new_i32();
804
- tcg_gen_andi_i32(qc, tmp, FPCR_QC);
805
- /*
806
- * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
807
- * here writing the same value into all elements is simplest.
808
- */
809
- tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
810
- 16, 16, qc);
811
- }
812
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
813
- fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
814
- tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
815
- tcg_gen_or_i32(fpscr, fpscr, tmp);
816
- store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
817
- tcg_temp_free_i32(tmp);
818
- break;
819
- }
820
- case ARM_VFP_FPCXT_NS:
821
- lab_end = gen_new_label();
822
- /* fpInactive case: write is a NOP, so branch to end */
823
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
824
- /*
825
- * !fpInactive: if FPU disabled, take NOCP exception;
826
- * otherwise PreserveFPState(), and then FPCXT_NS writes
827
- * behave the same as FPCXT_S writes.
828
- */
829
- if (s->fp_excp_el) {
830
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
831
- syn_uncategorized(), s->fp_excp_el);
832
- /*
833
- * This was only a conditional exception, so override
834
- * gen_exception_insn()'s default to DISAS_NORETURN
835
- */
836
- s->base.is_jmp = DISAS_NEXT;
837
- break;
838
- }
839
- gen_preserve_fp_state(s);
840
- /* fall through */
841
- case ARM_VFP_FPCXT_S:
842
- {
843
- TCGv_i32 sfpa, control;
844
- /*
845
- * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
846
- * bits [27:0] from value and zeroes bits [31:28].
847
- */
848
- tmp = loadfn(s, opaque);
849
- sfpa = tcg_temp_new_i32();
850
- tcg_gen_shri_i32(sfpa, tmp, 31);
851
- control = load_cpu_field(v7m.control[M_REG_S]);
852
- tcg_gen_deposit_i32(control, control, sfpa,
853
- R_V7M_CONTROL_SFPA_SHIFT, 1);
854
- store_cpu_field(control, v7m.control[M_REG_S]);
855
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
856
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
857
- tcg_temp_free_i32(tmp);
858
- tcg_temp_free_i32(sfpa);
859
- break;
860
- }
861
- case ARM_VFP_VPR:
862
- /* Behaves as NOP if not privileged */
863
- if (IS_USER(s)) {
864
- break;
865
- }
866
- tmp = loadfn(s, opaque);
867
- store_cpu_field(tmp, v7m.vpr);
868
- break;
869
- case ARM_VFP_P0:
870
- {
871
- TCGv_i32 vpr;
872
- tmp = loadfn(s, opaque);
873
- vpr = load_cpu_field(v7m.vpr);
874
- tcg_gen_deposit_i32(vpr, vpr, tmp,
875
- R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
876
- store_cpu_field(vpr, v7m.vpr);
877
- tcg_temp_free_i32(tmp);
878
- break;
879
- }
880
- default:
881
- g_assert_not_reached();
882
- }
883
- if (lab_end) {
884
- gen_set_label(lab_end);
885
- }
886
- return true;
887
-}
888
-
889
-static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
890
- fp_sysreg_storefn *storefn,
891
- void *opaque)
892
-{
893
- /* Do a read from an M-profile floating point system register */
894
- TCGv_i32 tmp;
895
- TCGLabel *lab_end = NULL;
896
- bool lookup_tb = false;
897
-
898
- switch (fp_sysreg_checks(s, regno)) {
899
- case FPSysRegCheckFailed:
900
- return false;
901
- case FPSysRegCheckDone:
902
- return true;
903
- case FPSysRegCheckContinue:
904
- break;
905
- }
906
-
907
- if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
908
- /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
909
- regno = QEMU_VFP_FPSCR_NZCV;
910
- }
911
-
912
- switch (regno) {
913
- case ARM_VFP_FPSCR:
914
- tmp = tcg_temp_new_i32();
915
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
916
- storefn(s, opaque, tmp);
917
- break;
918
- case ARM_VFP_FPSCR_NZCVQC:
919
- tmp = tcg_temp_new_i32();
920
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
921
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
922
- storefn(s, opaque, tmp);
923
- break;
924
- case QEMU_VFP_FPSCR_NZCV:
925
- /*
926
- * Read just NZCV; this is a special case to avoid the
927
- * helper call for the "VMRS to CPSR.NZCV" insn.
928
- */
929
- tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
930
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
931
- storefn(s, opaque, tmp);
932
- break;
933
- case ARM_VFP_FPCXT_S:
934
- {
935
- TCGv_i32 control, sfpa, fpscr;
936
- /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
937
- tmp = tcg_temp_new_i32();
938
- sfpa = tcg_temp_new_i32();
939
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
940
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
941
- control = load_cpu_field(v7m.control[M_REG_S]);
942
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
943
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
944
- tcg_gen_or_i32(tmp, tmp, sfpa);
945
- tcg_temp_free_i32(sfpa);
946
- /*
947
- * Store result before updating FPSCR etc, in case
948
- * it is a memory write which causes an exception.
949
- */
950
- storefn(s, opaque, tmp);
951
- /*
952
- * Now we must reset FPSCR from FPDSCR_NS, and clear
953
- * CONTROL.SFPA; so we'll end the TB here.
954
- */
955
- tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
956
- store_cpu_field(control, v7m.control[M_REG_S]);
957
- fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
958
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
959
- tcg_temp_free_i32(fpscr);
960
- lookup_tb = true;
961
- break;
962
- }
963
- case ARM_VFP_FPCXT_NS:
964
- {
965
- TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
966
- TCGLabel *lab_active = gen_new_label();
967
-
968
- lookup_tb = true;
969
-
970
- gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
971
- /* fpInactive case: reads as FPDSCR_NS */
972
- TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
973
- storefn(s, opaque, tmp);
974
- lab_end = gen_new_label();
975
- tcg_gen_br(lab_end);
976
-
977
- gen_set_label(lab_active);
978
- /*
979
- * !fpInactive: if FPU disabled, take NOCP exception;
980
- * otherwise PreserveFPState(), and then FPCXT_NS
981
- * reads the same as FPCXT_S.
982
- */
983
- if (s->fp_excp_el) {
984
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
985
- syn_uncategorized(), s->fp_excp_el);
986
- /*
987
- * This was only a conditional exception, so override
988
- * gen_exception_insn()'s default to DISAS_NORETURN
989
- */
990
- s->base.is_jmp = DISAS_NEXT;
991
- break;
992
- }
993
- gen_preserve_fp_state(s);
994
- tmp = tcg_temp_new_i32();
995
- sfpa = tcg_temp_new_i32();
996
- fpscr = tcg_temp_new_i32();
997
- gen_helper_vfp_get_fpscr(fpscr, cpu_env);
998
- tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
999
- control = load_cpu_field(v7m.control[M_REG_S]);
1000
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
1001
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
1002
- tcg_gen_or_i32(tmp, tmp, sfpa);
1003
- tcg_temp_free_i32(control);
1004
- /* Store result before updating FPSCR, in case it faults */
1005
- storefn(s, opaque, tmp);
1006
- /* If SFPA is zero then set FPSCR from FPDSCR_NS */
1007
- fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
1008
- zero = tcg_const_i32(0);
1009
- tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
1010
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
1011
- tcg_temp_free_i32(zero);
1012
- tcg_temp_free_i32(sfpa);
1013
- tcg_temp_free_i32(fpdscr);
1014
- tcg_temp_free_i32(fpscr);
1015
- break;
1016
- }
1017
- case ARM_VFP_VPR:
1018
- /* Behaves as NOP if not privileged */
1019
- if (IS_USER(s)) {
1020
- break;
1021
- }
1022
- tmp = load_cpu_field(v7m.vpr);
1023
- storefn(s, opaque, tmp);
1024
- break;
1025
- case ARM_VFP_P0:
1026
- tmp = load_cpu_field(v7m.vpr);
1027
- tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
1028
- storefn(s, opaque, tmp);
1029
- break;
1030
- default:
1031
- g_assert_not_reached();
1032
- }
1033
-
1034
- if (lab_end) {
1035
- gen_set_label(lab_end);
1036
- }
1037
- if (lookup_tb) {
1038
- gen_lookup_tb(s);
1039
- }
1040
- return true;
1041
-}
1042
-
1043
-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
1044
-{
1045
- arg_VMSR_VMRS *a = opaque;
1046
-
1047
- if (a->rt == 15) {
1048
- /* Set the 4 flag bits in the CPSR */
1049
- gen_set_nzcv(value);
1050
- tcg_temp_free_i32(value);
1051
- } else {
1052
- store_reg(s, a->rt, value);
1053
- }
1054
-}
1055
-
1056
-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
1057
-{
1058
- arg_VMSR_VMRS *a = opaque;
1059
-
1060
- return load_reg(s, a->rt);
1061
-}
1062
-
1063
-static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1064
-{
1065
- /*
1066
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
1067
- * FPSCR -> r15 is a special case which writes to the PSR flags;
1068
- * set a->reg to a special value to tell gen_M_fp_sysreg_read()
1069
- * we only care about the top 4 bits of FPSCR there.
1070
- */
1071
- if (a->rt == 15) {
1072
- if (a->l && a->reg == ARM_VFP_FPSCR) {
1073
- a->reg = QEMU_VFP_FPSCR_NZCV;
1074
- } else {
1075
- return false;
1076
- }
1077
- }
1078
-
1079
- if (a->l) {
1080
- /* VMRS, move FP system register to gp register */
1081
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
1082
- } else {
1083
- /* VMSR, move gp register to FP system register */
1084
- return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
1085
- }
1086
-}
1087
-
1088
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1089
{
1090
TCGv_i32 tmp;
1091
bool ignore_vfp_enabled = false;
1092
1093
if (arm_dc_feature(s, ARM_FEATURE_M)) {
1094
- return gen_M_VMSR_VMRS(s, a);
1095
+ /* M profile version was already handled in m-nocp.decode */
1096
+ return false;
66
}
1097
}
67
1098
68
- /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
1099
if (!dc_isar_feature(aa32_fpsp_v2, s)) {
69
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
1100
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
70
+ /* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
1101
return true;
71
+ * enabled, virtualization extensions related interfaces (main virtual
1102
}
72
+ * interface (s->vifaceiomem[0]) and virtual CPU interface).
1103
73
+ */
1104
-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
74
+ gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, gic_virt_ops);
1105
-{
75
1106
- arg_vldr_sysreg *a = opaque;
76
/* Extra core-specific regions for the CPU interfaces. This is
1107
- uint32_t offset = a->imm;
77
* necessary for "franken-GIC" implementations, for example on
1108
- TCGv_i32 addr;
1109
-
1110
- if (!a->a) {
1111
- offset = -offset;
1112
- }
1113
-
1114
- addr = load_reg(s, a->rn);
1115
- if (a->p) {
1116
- tcg_gen_addi_i32(addr, addr, offset);
1117
- }
1118
-
1119
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
1120
- gen_helper_v8m_stackcheck(cpu_env, addr);
1121
- }
1122
-
1123
- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
1124
- MO_UL | MO_ALIGN | s->be_data);
1125
- tcg_temp_free_i32(value);
1126
-
1127
- if (a->w) {
1128
- /* writeback */
1129
- if (!a->p) {
1130
- tcg_gen_addi_i32(addr, addr, offset);
1131
- }
1132
- store_reg(s, a->rn, addr);
1133
- } else {
1134
- tcg_temp_free_i32(addr);
1135
- }
1136
-}
1137
-
1138
-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
1139
-{
1140
- arg_vldr_sysreg *a = opaque;
1141
- uint32_t offset = a->imm;
1142
- TCGv_i32 addr;
1143
- TCGv_i32 value = tcg_temp_new_i32();
1144
-
1145
- if (!a->a) {
1146
- offset = -offset;
1147
- }
1148
-
1149
- addr = load_reg(s, a->rn);
1150
- if (a->p) {
1151
- tcg_gen_addi_i32(addr, addr, offset);
1152
- }
1153
-
1154
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
1155
- gen_helper_v8m_stackcheck(cpu_env, addr);
1156
- }
1157
-
1158
- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
1159
- MO_UL | MO_ALIGN | s->be_data);
1160
-
1161
- if (a->w) {
1162
- /* writeback */
1163
- if (!a->p) {
1164
- tcg_gen_addi_i32(addr, addr, offset);
1165
- }
1166
- store_reg(s, a->rn, addr);
1167
- } else {
1168
- tcg_temp_free_i32(addr);
1169
- }
1170
- return value;
1171
-}
1172
-
1173
-static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1174
-{
1175
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1176
- return false;
1177
- }
1178
- if (a->rn == 15) {
1179
- return false;
1180
- }
1181
- return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
1182
-}
1183
-
1184
-static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1185
-{
1186
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1187
- return false;
1188
- }
1189
- if (a->rn == 15) {
1190
- return false;
1191
- }
1192
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
1193
-}
1194
1195
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
1196
{
--
2.18.0

--
2.20.1

A few subcases of VLDR/VSTR sysreg succeed but do not perform a
memory access:
 * VSTR of VPR when unprivileged
 * VLDR to VPR when unprivileged
 * VLDR to FPCXT_NS when fpInactive

In these cases, even though we don't do the memory access we should
still update the base register and perform the stack limit check if
the insn's addressing mode specifies writeback. Our implementation
failed to do this, because we handle these side-effects inside the
memory_to_fp_sysreg() and fp_sysreg_to_memory() callback functions,
which are only called if there's something to load or store.

Fix this by adding an extra argument to the callbacks which is set to
true to actually perform the access and false to only do side effects
like writeback, and calling the callback with do_access = false
for the three cases listed above.

This produces slightly suboptimal code for the case of a write
to FPCXT_NS when the FPU is inactive and the insn didn't have
side effects (ie no writeback, or via VMSR), in which case we'll
generate a conditional branch over an unconditional branch.
But this doesn't seem to be important enough to merit requiring
the callback to report back whether it generated any code or not.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-5-peter.maydell@linaro.org
---
target/arm/translate-m-nocp.c | 102 ++++++++++++++++++++++----------
1 file changed, 72 insertions(+), 30 deletions(-)

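The shape of this fix is a general one: when a callback owns both the access
itself and the addressing side effects, it needs an explicit flag so callers
can request the side effects alone. A minimal standalone C sketch of that
pattern follows (illustrative only, not QEMU code; the names and types here
are invented for the example):

#include <stdbool.h>
#include <stdio.h>

/*
 * Like memory_to_fp_sysreg()/fp_sysreg_to_memory() after this patch:
 * do_access == false still performs base-register writeback.
 */
static int load_with_writeback(int *base, int offset, bool writeback,
                               bool do_access)
{
    int addr = *base + offset;
    int value = 0;

    if (do_access) {
        value = addr;      /* stand-in for the real memory load */
    }
    if (writeback) {
        *base = addr;      /* side effect happens even for a NOP access */
    }
    return value;
}

int main(void)
{
    int base = 0x1000;
    load_with_writeback(&base, 8, true, false);   /* access skipped */
    printf("base after side-effect-only call: 0x%x\n", base);
    return 0;
}
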
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)

/*
* Emit code to store the sysreg to its final destination; frees the
- * TCG temp 'value' it is passed.
+ * TCG temp 'value' it is passed. do_access is true to do the store,
+ * and false to skip it and only perform side-effects like base
+ * register writeback.
*/
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value,
+ bool do_access);
/*
* Emit code to load the value to be copied to the sysreg; returns
- * a new TCG temporary
+ * a new TCG temporary. do_access is true to do the store,
+ * and false to skip it and only perform side-effects like base
+ * register writeback.
*/
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque,
+ bool do_access);

/* Common decode/access checks for fp sysreg read/write */
typedef enum FPSysRegCheckResult {
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,

switch (regno) {
case ARM_VFP_FPSCR:
- tmp = loadfn(s, opaque);
+ tmp = loadfn(s, opaque, true);
gen_helper_vfp_set_fpscr(cpu_env, tmp);
tcg_temp_free_i32(tmp);
gen_lookup_tb(s);
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
case ARM_VFP_FPSCR_NZCVQC:
{
TCGv_i32 fpscr;
- tmp = loadfn(s, opaque);
+ tmp = loadfn(s, opaque, true);
if (dc_isar_feature(aa32_mve, s)) {
/* QC is only present for MVE; otherwise RES0 */
TCGv_i32 qc = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
break;
}
case ARM_VFP_FPCXT_NS:
+ {
+ TCGLabel *lab_active = gen_new_label();
+
lab_end = gen_new_label();
- /* fpInactive case: write is a NOP, so branch to end */
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
+ gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
+ /*
+ * fpInactive case: write is a NOP, so only do side effects
+ * like register writeback before we branch to end
+ */
+ loadfn(s, opaque, false);
+ tcg_gen_br(lab_end);
+
+ gen_set_label(lab_active);
/*
* !fpInactive: if FPU disabled, take NOCP exception;
* otherwise PreserveFPState(), and then FPCXT_NS writes
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
break;
}
gen_preserve_fp_state(s);
- /* fall through */
+ }
+ /* fall through */
case ARM_VFP_FPCXT_S:
{
TCGv_i32 sfpa, control;
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
* Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
* bits [27:0] from value and zeroes bits [31:28].
*/
- tmp = loadfn(s, opaque);
+ tmp = loadfn(s, opaque, true);
sfpa = tcg_temp_new_i32();
tcg_gen_shri_i32(sfpa, tmp, 31);
control = load_cpu_field(v7m.control[M_REG_S]);
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
case ARM_VFP_VPR:
/* Behaves as NOP if not privileged */
if (IS_USER(s)) {
+ loadfn(s, opaque, false);
break;
}
- tmp = loadfn(s, opaque);
+ tmp = loadfn(s, opaque, true);
store_cpu_field(tmp, v7m.vpr);
break;
case ARM_VFP_P0:
{
TCGv_i32 vpr;
- tmp = loadfn(s, opaque);
+ tmp = loadfn(s, opaque, true);
vpr = load_cpu_field(v7m.vpr);
tcg_gen_deposit_i32(vpr, vpr, tmp,
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
case ARM_VFP_FPSCR:
tmp = tcg_temp_new_i32();
gen_helper_vfp_get_fpscr(tmp, cpu_env);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
break;
case ARM_VFP_FPSCR_NZCVQC:
tmp = tcg_temp_new_i32();
gen_helper_vfp_get_fpscr(tmp, cpu_env);
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
break;
case QEMU_VFP_FPSCR_NZCV:
/*
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
*/
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
break;
case ARM_VFP_FPCXT_S:
{
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
* Store result before updating FPSCR etc, in case
* it is a memory write which causes an exception.
*/
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
/*
* Now we must reset FPSCR from FPDSCR_NS, and clear
* CONTROL.SFPA; so we'll end the TB here.
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
/* fpInactive case: reads as FPDSCR_NS */
TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
lab_end = gen_new_label();
tcg_gen_br(lab_end);

@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
tcg_gen_or_i32(tmp, tmp, sfpa);
tcg_temp_free_i32(control);
/* Store result before updating FPSCR, in case it faults */
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
/* If SFPA is zero then set FPSCR from FPDSCR_NS */
fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
zero = tcg_const_i32(0);
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
case ARM_VFP_VPR:
/* Behaves as NOP if not privileged */
if (IS_USER(s)) {
+ storefn(s, opaque, NULL, false);
break;
}
tmp = load_cpu_field(v7m.vpr);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
break;
case ARM_VFP_P0:
tmp = load_cpu_field(v7m.vpr);
tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
- storefn(s, opaque, tmp);
+ storefn(s, opaque, tmp, true);
break;
default:
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
return true;
}

-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value,
+ bool do_access)
{
arg_VMSR_VMRS *a = opaque;

+ if (!do_access) {
+ return;
+ }
+
if (a->rt == 15) {
/* Set the 4 flag bits in the CPSR */
gen_set_nzcv(value);
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
}
}

-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque, bool do_access)
{
arg_VMSR_VMRS *a = opaque;

+ if (!do_access) {
+ return NULL;
+ }
return load_reg(s, a->rt);
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
}
}

-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value,
+ bool do_access)
{
arg_vldr_sysreg *a = opaque;
uint32_t offset = a->imm;
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
offset = -offset;
}

+ if (!do_access && !a->w) {
+ return;
+ }
+
addr = load_reg(s, a->rn);
if (a->p) {
tcg_gen_addi_i32(addr, addr, offset);
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
gen_helper_v8m_stackcheck(cpu_env, addr);
}

- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
- MO_UL | MO_ALIGN | s->be_data);
- tcg_temp_free_i32(value);
+ if (do_access) {
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
+ MO_UL | MO_ALIGN | s->be_data);
+ tcg_temp_free_i32(value);
+ }

if (a->w) {
/* writeback */
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
}
}

-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque,
+ bool do_access)
{
arg_vldr_sysreg *a = opaque;
uint32_t offset = a->imm;
TCGv_i32 addr;
- TCGv_i32 value = tcg_temp_new_i32();
+ TCGv_i32 value = NULL;

if (!a->a) {
offset = -offset;
}

+ if (!do_access && !a->w) {
+ return NULL;
+ }
+
addr = load_reg(s, a->rn);
if (a->p) {
tcg_gen_addi_i32(addr, addr, offset);
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
gen_helper_v8m_stackcheck(cpu_env, addr);
}

- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
- MO_UL | MO_ALIGN | s->be_data);
+ if (do_access) {
+ value = tcg_temp_new_i32();
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
+ MO_UL | MO_ALIGN | s->be_data);
+ }

if (a->w) {
/* writeback */
--
2.20.1
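As the comments in the hunks above note, an FPCXT_S read returns FPSCR bits
[27:0] with CONTROL.SFPA in bit [31], and a write does the reverse split. A
standalone C sketch of that packing (illustrative only; plain C rather than
the TCG ops used in the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FPCR_NZCV_MASK 0xf0000000u   /* bits [31:28] */

static uint32_t fpcxt_s_read(uint32_t fpscr, bool sfpa)
{
    /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
    return (fpscr & ~FPCR_NZCV_MASK) | ((uint32_t)sfpa << 31);
}

static void fpcxt_s_write(uint32_t value, uint32_t *fpscr, bool *sfpa)
{
    /* New FPSCR takes bits [27:0]; bit [31] of the value becomes SFPA */
    *fpscr = value & ~FPCR_NZCV_MASK;
    *sfpa = value >> 31;
}

int main(void)
{
    uint32_t fpscr = 0x03000010;
    bool sfpa = true;
    uint32_t v = fpcxt_s_read(fpscr, sfpa);
    printf("FPCXT_S read: 0x%08x\n", v);
    fpcxt_s_write(v, &fpscr, &sfpa);
    printf("FPSCR=0x%08x SFPA=%d\n", fpscr, sfpa);
    return 0;
}
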
1
From: Luc Michel <luc.michel@greensocs.com>
1
Factor the code in full_vfp_access_check() which updates the
2
ownership of the FP context and creates a new FP context
3
out into its own function.
2
4
3
Implement virtualization extensions in the gic_acknowledge_irq()
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
function. This function changes the state of the highest priority IRQ
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
from pending to active.
7
Message-id: 20210618141019.10671-6-peter.maydell@linaro.org
8
---
9
target/arm/translate-vfp.c | 104 +++++++++++++++++++++----------------
10
1 file changed, 58 insertions(+), 46 deletions(-)
6
11
7
When the current CPU is a vCPU, modifying the state of an IRQ modifies
12
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
8
the corresponding LR entry. However if we clear the pending flag before
9
setting the active one, we lose track of the LR entry as it becomes
10
invalid. The next call to gic_get_lr_entry() will fail.
11
12
To overcome this issue, we call gic_activate_irq() before
13
gic_clear_pending(). This does not change the general behaviour of
14
gic_acknowledge_irq.
15
16
We also move the SGI case in gic_clear_pending_sgi() to enhance
17
code readability as the virtualization extensions support adds a if-else
18
level.
19
20
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Message-id: 20180727095421.386-12-luc.michel@greensocs.com
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
25
hw/intc/arm_gic.c | 52 ++++++++++++++++++++++++++++++-----------------
26
1 file changed, 33 insertions(+), 19 deletions(-)
27
28
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
29
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/intc/arm_gic.c
14
--- a/target/arm/translate-vfp.c
31
+++ b/hw/intc/arm_gic.c
15
+++ b/target/arm/translate-vfp.c
32
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
16
@@ -XXX,XX +XXX,XX @@ void gen_preserve_fp_state(DisasContext *s)
33
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
17
}
34
}
18
}
35
19
36
+static inline uint32_t gic_clear_pending_sgi(GICState *s, int irq, int cpu)
20
+/*
21
+ * Generate code for M-profile FP context handling: update the
22
+ * ownership of the FP context, and create a new context if
23
+ * necessary. This corresponds to the parts of the pseudocode
24
+ * ExecuteFPCheck() after the inital PreserveFPState() call.
25
+ */
26
+static void gen_update_fp_context(DisasContext *s)
37
+{
27
+{
38
+ int src;
28
+ /* Update ownership of FP context: set FPCCR.S to match current state */
39
+ uint32_t ret;
29
+ if (s->v8m_fpccr_s_wrong) {
30
+ TCGv_i32 tmp;
40
+
31
+
41
+ if (!gic_is_vcpu(cpu)) {
32
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
42
+ /* Lookup the source CPU for the SGI and clear this in the
33
+ if (s->v8m_secure) {
43
+ * sgi_pending map. Return the src and clear the overall pending
34
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
44
+ * state on this CPU if the SGI is not pending from any CPUs.
35
+ } else {
45
+ */
36
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
46
+ assert(s->sgi_pending[irq][cpu] != 0);
47
+ src = ctz32(s->sgi_pending[irq][cpu]);
48
+ s->sgi_pending[irq][cpu] &= ~(1 << src);
49
+ if (s->sgi_pending[irq][cpu] == 0) {
50
+ gic_clear_pending(s, irq, cpu);
51
+ }
37
+ }
52
+ ret = irq | ((src & 0x7) << 10);
38
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
53
+ } else {
39
+ /* Don't need to do this for any further FP insns in this TB */
54
+ uint32_t *lr_entry = gic_get_lr_entry(s, irq, cpu);
40
+ s->v8m_fpccr_s_wrong = false;
55
+ src = GICH_LR_CPUID(*lr_entry);
56
+
57
+ gic_clear_pending(s, irq, cpu);
58
+ ret = irq | (src << 10);
59
+ }
41
+ }
60
+
42
+
61
+ return ret;
43
+ if (s->v7m_new_fp_ctxt_needed) {
44
+ /*
45
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
46
+ * the FPSCR, and VPR.
47
+ */
48
+ TCGv_i32 control, fpscr;
49
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
50
+
51
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
52
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
53
+ tcg_temp_free_i32(fpscr);
54
+ if (dc_isar_feature(aa32_mve, s)) {
55
+ TCGv_i32 z32 = tcg_const_i32(0);
56
+ store_cpu_field(z32, v7m.vpr);
57
+ }
58
+
59
+ /*
60
+ * We don't need to arrange to end the TB, because the only
61
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
62
+ * and VECSTRIDE, and those don't exist for M-profile.
63
+ */
64
+
65
+ if (s->v8m_secure) {
66
+ bits |= R_V7M_CONTROL_SFPA_MASK;
67
+ }
68
+ control = load_cpu_field(v7m.control[M_REG_S]);
69
+ tcg_gen_ori_i32(control, control, bits);
70
+ store_cpu_field(control, v7m.control[M_REG_S]);
71
+ /* Don't need to do this for any further FP insns in this TB */
72
+ s->v7m_new_fp_ctxt_needed = false;
73
+ }
62
+}
74
+}
63
+
75
+
64
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
76
/*
65
{
77
* Check that VFP access is enabled. If it is, do the necessary
66
- int ret, irq, src;
78
* M-profile lazy-FP handling and then return true.
67
- int cm = 1 << cpu;
79
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
68
+ int ret, irq;
80
/* Trigger lazy-state preservation if necessary */
69
81
gen_preserve_fp_state(s);
70
/* gic_get_current_pending_irq() will return 1022 or 1023 appropriately
82
71
* for the case where this GIC supports grouping and the pending interrupt
83
- /* Update ownership of FP context: set FPCCR.S to match current state */
72
* is in the wrong group.
84
- if (s->v8m_fpccr_s_wrong) {
73
*/
85
- TCGv_i32 tmp;
74
irq = gic_get_current_pending_irq(s, cpu, attrs);
86
-
75
- trace_gic_acknowledge_irq(cpu, irq);
87
- tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
76
+ trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
88
- if (s->v8m_secure) {
77
89
- tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
78
if (irq >= GIC_MAXIRQ) {
90
- } else {
79
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
91
- tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
80
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
92
- }
81
return 1023;
93
- store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
94
- /* Don't need to do this for any further FP insns in this TB */
95
- s->v8m_fpccr_s_wrong = false;
96
- }
97
-
98
- if (s->v7m_new_fp_ctxt_needed) {
99
- /*
100
- * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
101
- * the FPSCR, and VPR.
102
- */
103
- TCGv_i32 control, fpscr;
104
- uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
105
-
106
- fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
107
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
108
- tcg_temp_free_i32(fpscr);
109
- if (dc_isar_feature(aa32_mve, s)) {
110
- TCGv_i32 z32 = tcg_const_i32(0);
111
- store_cpu_field(z32, v7m.vpr);
112
- }
113
-
114
- /*
115
- * We don't need to arrange to end the TB, because the only
116
- * parts of FPSCR which we cache in the TB flags are the VECLEN
117
- * and VECSTRIDE, and those don't exist for M-profile.
118
- */
119
-
120
- if (s->v8m_secure) {
121
- bits |= R_V7M_CONTROL_SFPA_MASK;
122
- }
123
- control = load_cpu_field(v7m.control[M_REG_S]);
124
- tcg_gen_ori_i32(control, control, bits);
125
- store_cpu_field(control, v7m.control[M_REG_S]);
126
- /* Don't need to do this for any further FP insns in this TB */
127
- s->v7m_new_fp_ctxt_needed = false;
128
- }
129
+ /* Update ownership of FP context and create new FP context if needed */
130
+ gen_update_fp_context(s);
82
}
131
}
83
132
84
+ gic_activate_irq(s, cpu, irq);
133
return true;
85
+
86
if (s->revision == REV_11MPCORE) {
87
/* Clear pending flags for both level and edge triggered interrupts.
88
* Level triggered IRQs will be reasserted once they become inactive.
89
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
90
ret = irq;
91
} else {
92
if (irq < GIC_NR_SGIS) {
93
- /* Lookup the source CPU for the SGI and clear this in the
94
- * sgi_pending map. Return the src and clear the overall pending
95
- * state on this CPU if the SGI is not pending from any CPUs.
96
- */
97
- assert(s->sgi_pending[irq][cpu] != 0);
98
- src = ctz32(s->sgi_pending[irq][cpu]);
99
- s->sgi_pending[irq][cpu] &= ~(1 << src);
100
- if (s->sgi_pending[irq][cpu] == 0) {
101
- gic_clear_pending(s, irq, cpu);
102
- }
103
- ret = irq | ((src & 0x7) << 10);
104
+ ret = gic_clear_pending_sgi(s, irq, cpu);
105
} else {
106
- /* Clear pending state for both level and edge triggered
107
- * interrupts. (level triggered interrupts with an active line
108
- * remain pending, see gic_test_pending)
109
- */
110
gic_clear_pending(s, irq, cpu);
111
ret = irq;
112
}
113
}
114
115
- gic_activate_irq(s, cpu, irq);
116
gic_update(s);
117
DPRINTF("ACK %d\n", irq);
118
return ret;
119
--
134
--
120
2.18.0
135
2.20.1
121
136
122
137
diff view generated by jsdifflib
1
Improve the exception-taken logging by logging in
1
vfp_access_check and its helper routine full_vfp_access_check() has
2
v7m_exception_taken() the exception we're going to take
2
gradually grown and is now an awkward mix of A-profile only and
3
and whether it is secure/nonsecure.
3
M-profile only pieces. Refactor it into an A-profile only and an
4
4
M-profile only version, taking advantage of the fact that now the
5
This requires us to move logging at many callsites from after the
5
only direct call to full_vfp_access_check() is in A-profile-only
6
call to before it, so that the logging appears in a sensible order.
6
code.
7
8
(This will make tail-chaining produce more useful logs; for the
9
current callers of v7m_exception_taken() we know which exception
10
we're going to take, so custom log messages at the callsite sufficed;
11
for tail-chaining only v7m_exception_taken() knows the exception
12
number that we're going to tail-chain to.)
13
7
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20210618141019.10671-7-peter.maydell@linaro.org
17
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
18
---
11
---
19
target/arm/helper.c | 17 +++++++++++------
12
target/arm/translate-vfp.c | 79 +++++++++++++++++++++++---------------
20
1 file changed, 11 insertions(+), 6 deletions(-)
13
1 file changed, 48 insertions(+), 31 deletions(-)
21
14
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
23
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
17
--- a/target/arm/translate-vfp.c
25
+++ b/target/arm/helper.c
18
+++ b/target/arm/translate-vfp.c
26
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
19
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
27
bool push_failed = false;
20
}
28
21
29
armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
22
/*
30
+ qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
23
- * Check that VFP access is enabled. If it is, do the necessary
31
+ targets_secure ? "secure" : "nonsecure", exc);
24
- * M-profile lazy-FP handling and then return true.
32
25
- * If not, emit code to generate an appropriate exception and
33
if (arm_feature(env, ARM_FEATURE_V8)) {
26
- * return false.
34
if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
27
+ * Check that VFP access is enabled, A-profile specific version.
35
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
28
+ *
36
* we might now want to take a different exception which
29
+ * If VFP is enabled, return true. If not, emit code to generate an
37
* targets a different security state, so try again from the top.
30
+ * appropriate exception and return false.
38
*/
31
* The ignore_vfp_enabled argument specifies that we should ignore
39
+ qemu_log_mask(CPU_LOG_INT,
32
- * whether VFP is enabled via FPEXC[EN]: this should be true for FMXR/FMRX
40
+ "...derived exception on callee-saves register stacking");
33
+ * whether VFP is enabled via FPEXC.EN: this should be true for FMXR/FMRX
41
v7m_exception_taken(cpu, lr, true, true);
34
* accesses to FPSID, FPEXC, MVFR0, MVFR1, MVFR2, and false for all other insns.
42
return;
35
*/
36
-static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
37
+static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
38
{
39
if (s->fp_excp_el) {
40
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
41
- /*
42
- * M-profile mostly catches the "FPU disabled" case early, in
43
- * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
44
- * which do coprocessor-checks are outside the large ranges of
45
- * the encoding space handled by the patterns in m-nocp.decode,
46
- * and for them we may need to raise NOCP here.
47
- */
48
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
49
- syn_uncategorized(), s->fp_excp_el);
50
- } else {
51
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
52
- syn_fp_access_trap(1, 0xe, false),
53
- s->fp_excp_el);
54
- }
55
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
56
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
57
return false;
43
}
58
}
44
59
45
if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
60
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
46
/* Vector load failed: derived exception */
61
unallocated_encoding(s);
47
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
62
return false;
48
v7m_exception_taken(cpu, lr, true, true);
49
return;
50
}
63
}
51
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
64
+ return true;
52
if (sfault) {
65
+}
53
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
66
54
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
67
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
55
- v7m_exception_taken(cpu, excret, true, false);
68
- /* Handle M-profile lazy FP state mechanics */
56
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
69
-
57
"stackframe: failed EXC_RETURN.ES validity check\n");
70
- /* Trigger lazy-state preservation if necessary */
58
+ v7m_exception_taken(cpu, excret, true, false);
71
- gen_preserve_fp_state(s);
59
return;
72
-
73
- /* Update ownership of FP context and create new FP context if needed */
74
- gen_update_fp_context(s);
75
+/*
76
+ * Check that VFP access is enabled, M-profile specific version.
77
+ *
78
+ * If VFP is enabled, do the necessary M-profile lazy-FP handling and then
79
+ * return true. If not, emit code to generate an appropriate exception and
80
+ * return false.
81
+ */
82
+static bool vfp_access_check_m(DisasContext *s)
83
+{
84
+ if (s->fp_excp_el) {
85
+ /*
86
+ * M-profile mostly catches the "FPU disabled" case early, in
87
+ * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
88
+ * which do coprocessor-checks are outside the large ranges of
89
+ * the encoding space handled by the patterns in m-nocp.decode,
90
+ * and for them we may need to raise NOCP here.
91
+ */
92
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
93
+ syn_uncategorized(), s->fp_excp_el);
94
+ return false;
60
}
95
}
61
96
62
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
97
+ /* Handle M-profile lazy FP state mechanics */
63
*/
98
+
64
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
99
+ /* Trigger lazy-state preservation if necessary */
65
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
100
+ gen_preserve_fp_state(s);
66
- v7m_exception_taken(cpu, excret, true, false);
101
+
67
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
102
+ /* Update ownership of FP context and create new FP context if needed */
68
"stackframe: failed exception return integrity check\n");
103
+ gen_update_fp_context(s);
69
+ v7m_exception_taken(cpu, excret, true, false);
104
+
70
return;
105
return true;
106
}
107
108
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
109
*/
110
bool vfp_access_check(DisasContext *s)
111
{
112
- return full_vfp_access_check(s, false);
113
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
114
+ return vfp_access_check_m(s);
115
+ } else {
116
+ return vfp_access_check_a(s, false);
117
+ }
118
}
119
120
static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
121
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
122
return false;
71
}
123
}
72
124
73
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
125
- if (!full_vfp_access_check(s, ignore_vfp_enabled)) {
74
/* Take a SecureFault on the current stack */
126
+ /*
75
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
127
+ * Call vfp_access_check_a() directly, because we need to tell
76
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
128
+ * it to ignore FPEXC.EN for some register accesses.
77
- v7m_exception_taken(cpu, excret, true, false);
129
+ */
78
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
130
+ if (!vfp_access_check_a(s, ignore_vfp_enabled)) {
79
"stackframe: failed exception return integrity "
131
return true;
80
"signature check\n");
81
+ v7m_exception_taken(cpu, excret, true, false);
82
return;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
86
/* v7m_stack_read() pended a fault, so take it (as a tail
87
* chained exception on the same stack frame)
88
*/
89
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
90
v7m_exception_taken(cpu, excret, true, false);
91
return;
92
}
93
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
94
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
95
env->v7m.secure);
96
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
97
- v7m_exception_taken(cpu, excret, true, false);
98
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
99
"stackframe: failed exception return integrity "
100
"check\n");
101
+ v7m_exception_taken(cpu, excret, true, false);
102
return;
103
}
104
}
105
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
108
ignore_stackfaults = v7m_push_stack(cpu);
109
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
110
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
111
"failed exception return integrity check\n");
112
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
113
return;
114
}
132
}
115
133
116
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
117
118
ignore_stackfaults = v7m_push_stack(cpu);
119
v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
120
- qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
121
}
122
123
/* Function used to synchronize QEMU's AArch64 register set with AArch32
124
--
134
--
125
2.18.0
135
2.20.1
126
136
127
137
New patch
1
Instead of open-coding the "take NOCP exception if FPU disabled,
2
otherwise call gen_preserve_fp_state()" code in the accessors for
3
FPCXT_NS, add an argument to vfp_access_check_m() which tells it to
4
skip the gen_update_fp_context() call, so we can use it for the
5
FPCXT_NS case.
1
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210618141019.10671-8-peter.maydell@linaro.org
10
---
11
target/arm/translate-a32.h | 2 +-
12
target/arm/translate-m-nocp.c | 10 ++--------
13
target/arm/translate-vfp.c | 13 ++++++++-----
14
3 files changed, 11 insertions(+), 14 deletions(-)
15
16
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a32.h
19
+++ b/target/arm/translate-a32.h
20
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
21
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
22
void arm_gen_condlabel(DisasContext *s);
23
bool vfp_access_check(DisasContext *s);
24
-void gen_preserve_fp_state(DisasContext *s);
25
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update);
26
void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
27
void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
28
void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
29
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-m-nocp.c
32
+++ b/target/arm/translate-m-nocp.c
33
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
34
* otherwise PreserveFPState(), and then FPCXT_NS writes
35
* behave the same as FPCXT_S writes.
36
*/
37
- if (s->fp_excp_el) {
38
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
39
- syn_uncategorized(), s->fp_excp_el);
40
+ if (!vfp_access_check_m(s, true)) {
41
/*
42
* This was only a conditional exception, so override
43
* gen_exception_insn()'s default to DISAS_NORETURN
44
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
45
s->base.is_jmp = DISAS_NEXT;
46
break;
47
}
48
- gen_preserve_fp_state(s);
49
}
50
/* fall through */
51
case ARM_VFP_FPCXT_S:
52
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
53
* otherwise PreserveFPState(), and then FPCXT_NS
54
* reads the same as FPCXT_S.
55
*/
56
- if (s->fp_excp_el) {
57
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
58
- syn_uncategorized(), s->fp_excp_el);
59
+ if (!vfp_access_check_m(s, true)) {
60
/*
61
* This was only a conditional exception, so override
62
* gen_exception_insn()'s default to DISAS_NORETURN
63
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
64
s->base.is_jmp = DISAS_NEXT;
65
break;
66
}
67
- gen_preserve_fp_state(s);
68
tmp = tcg_temp_new_i32();
69
sfpa = tcg_temp_new_i32();
70
fpscr = tcg_temp_new_i32();
71
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate-vfp.c
74
+++ b/target/arm/translate-vfp.c
75
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
76
* Generate code for M-profile lazy FP state preservation if needed;
77
* this corresponds to the pseudocode PreserveFPState() function.
78
*/
79
-void gen_preserve_fp_state(DisasContext *s)
80
+static void gen_preserve_fp_state(DisasContext *s)
81
{
82
if (s->v7m_lspact) {
83
/*
84
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
85
* If VFP is enabled, do the necessary M-profile lazy-FP handling and then
86
* return true. If not, emit code to generate an appropriate exception and
87
* return false.
88
+ * skip_context_update is true to skip the "update FP context" part of this.
89
*/
90
-static bool vfp_access_check_m(DisasContext *s)
91
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
92
{
93
if (s->fp_excp_el) {
94
/*
95
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
96
/* Trigger lazy-state preservation if necessary */
97
gen_preserve_fp_state(s);
98
99
- /* Update ownership of FP context and create new FP context if needed */
100
- gen_update_fp_context(s);
101
+ if (!skip_context_update) {
102
+ /* Update ownership of FP context and create new FP context if needed */
103
+ gen_update_fp_context(s);
104
+ }
105
106
return true;
107
}
108
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
109
bool vfp_access_check(DisasContext *s)
110
{
111
if (arm_dc_feature(s, ARM_FEATURE_M)) {
112
- return vfp_access_check_m(s);
113
+ return vfp_access_check_m(s, false);
114
} else {
115
return vfp_access_check_a(s, false);
116
}
117
--
118
2.20.1
119
120
1
When we raise a synchronous exception, if HCR_EL2.TGE is set then
1
Implement the forms of the MVE VLDR and VSTR insns which perform
2
exceptions targeting NS EL1 must be redirected to EL2. Implement
2
non-widening loads of bytes, halfwords or words from memory into
3
this in raise_exception() -- all synchronous exceptions go through
3
vector elements of the same width (encodings T5, T6, T7).
4
this function.
5
4
6
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
5
(At the moment we know for MVE and M-profile in general that
7
already honours HCR_EL2.TGE when it determines the target EL
6
vfp_access_check() can never return false, but we include the
8
in arm_phys_excp_target_el().)
7
conventional return-true-on-failure check for consistency
8
with non-M-profile translation code.)
9
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
12
Message-id: 20210617121628.20116-2-peter.maydell@linaro.org
13
---
13
---
14
target/arm/op_helper.c | 14 ++++++++++++++
14
target/arm/{translate-mve.c => helper-mve.h} | 19 +-
15
1 file changed, 14 insertions(+)
15
target/arm/helper.h | 2 +
16
target/arm/internals.h | 11 ++
17
target/arm/mve.decode | 22 +++
18
target/arm/mve_helper.c | 172 +++++++++++++++++++
19
target/arm/translate-mve.c | 119 +++++++++++++
20
target/arm/meson.build | 1 +
21
7 files changed, 334 insertions(+), 12 deletions(-)
22
copy target/arm/{translate-mve.c => helper-mve.h} (61%)
23
create mode 100644 target/arm/mve_helper.c
16
24
17
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
25
diff --git a/target/arm/translate-mve.c b/target/arm/helper-mve.h
18
index XXXXXXX..XXXXXXX 100644
26
similarity index 61%
19
--- a/target/arm/op_helper.c
27
copy from target/arm/translate-mve.c
20
+++ b/target/arm/op_helper.c
28
copy to target/arm/helper-mve.h
21
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUARMState *env, uint32_t excp,
29
index XXXXXXX..XXXXXXX 100644
22
{
30
--- a/target/arm/translate-mve.c
23
CPUState *cs = CPU(arm_env_get_cpu(env));
31
+++ b/target/arm/helper-mve.h
24
32
@@ -XXX,XX +XXX,XX @@
25
+ if ((env->cp15.hcr_el2 & HCR_TGE) &&
33
/*
26
+ target_el == 1 && !arm_is_secure(env)) {
34
- * ARM translation: M-profile MVE instructions
35
+ * M-profile MVE specific helper definitions
36
*
37
* Copyright (c) 2021 Linaro, Ltd.
38
*
39
@@ -XXX,XX +XXX,XX @@
40
* You should have received a copy of the GNU Lesser General Public
41
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
42
*/
43
-
44
-#include "qemu/osdep.h"
45
-#include "tcg/tcg-op.h"
46
-#include "tcg/tcg-op-gvec.h"
47
-#include "exec/exec-all.h"
48
-#include "exec/gen-icount.h"
49
-#include "translate.h"
50
-#include "translate-a32.h"
51
-
52
-/* Include the generated decoder */
53
-#include "decode-mve.c.inc"
54
+DEF_HELPER_FLAGS_3(mve_vldrb, TCG_CALL_NO_WG, void, env, ptr, i32)
55
+DEF_HELPER_FLAGS_3(mve_vldrh, TCG_CALL_NO_WG, void, env, ptr, i32)
56
+DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
57
+DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
58
+DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
59
+DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
60
diff --git a/target/arm/helper.h b/target/arm/helper.h
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/helper.h
63
+++ b/target/arm/helper.h
64
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
65
#include "helper-a64.h"
66
#include "helper-sve.h"
67
#endif
68
+
69
+#include "helper-mve.h"
70
diff --git a/target/arm/internals.h b/target/arm/internals.h
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/internals.h
73
+++ b/target/arm/internals.h
74
@@ -XXX,XX +XXX,XX @@ static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr)
75
return ptr;
76
}
77
78
+/* Values for M-profile PSR.ECI for MVE insns */
79
+enum MVEECIState {
80
+ ECI_NONE = 0, /* No completed beats */
81
+ ECI_A0 = 1, /* Completed: A0 */
82
+ ECI_A0A1 = 2, /* Completed: A0, A1 */
83
+ /* 3 is reserved */
84
+ ECI_A0A1A2 = 4, /* Completed: A0, A1, A2 */
85
+ ECI_A0A1A2B0 = 5, /* Completed: A0, A1, A2, B0 */
86
+ /* All other values reserved */
87
+};
88
+
89
#endif
90
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
91
index XXXXXXX..XXXXXXX 100644
92
--- a/target/arm/mve.decode
93
+++ b/target/arm/mve.decode
94
@@ -XXX,XX +XXX,XX @@
95
#
96
# This file is processed by scripts/decodetree.py
97
#
98
+
99
+%qd 22:1 13:3
100
+
101
+&vldr_vstr rn qd imm p a w size l
102
+
103
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
104
+
105
+# Vector loads and stores
106
+
107
+# Non-widening loads/stores (P=0 W=0 is 'related encoding')
108
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
109
+ size=0 p=0 w=1
110
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111101 ....... @vldr_vstr \
111
+ size=1 p=0 w=1
112
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111110 ....... @vldr_vstr \
113
+ size=2 p=0 w=1
114
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111100 ....... @vldr_vstr \
115
+ size=0 p=1
116
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
117
+ size=1 p=1
118
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
119
+ size=2 p=1
120
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
121
new file mode 100644
122
index XXXXXXX..XXXXXXX
123
--- /dev/null
124
+++ b/target/arm/mve_helper.c
125
@@ -XXX,XX +XXX,XX @@
126
+/*
127
+ * M-profile MVE Operations
128
+ *
129
+ * Copyright (c) 2021 Linaro, Ltd.
130
+ *
131
+ * This library is free software; you can redistribute it and/or
132
+ * modify it under the terms of the GNU Lesser General Public
133
+ * License as published by the Free Software Foundation; either
134
+ * version 2.1 of the License, or (at your option) any later version.
135
+ *
136
+ * This library is distributed in the hope that it will be useful,
137
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
138
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
139
+ * Lesser General Public License for more details.
140
+ *
141
+ * You should have received a copy of the GNU Lesser General Public
142
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
143
+ */
144
+
145
+#include "qemu/osdep.h"
146
+#include "cpu.h"
147
+#include "internals.h"
148
+#include "vec_internal.h"
149
+#include "exec/helper-proto.h"
150
+#include "exec/cpu_ldst.h"
151
+#include "exec/exec-all.h"
152
+
153
+static uint16_t mve_element_mask(CPUARMState *env)
154
+{
155
+ /*
156
+ * Return the mask of which elements in the MVE vector should be
157
+ * updated. This is a combination of multiple things:
158
+ * (1) by default, we update every lane in the vector
159
+ * (2) VPT predication stores its state in the VPR register;
160
+ * (3) low-overhead-branch tail predication will mask out part
161
+ * the vector on the final iteration of the loop
162
+ * (4) if EPSR.ECI is set then we must execute only some beats
163
+ * of the insn
164
+ * We combine all these into a 16-bit result with the same semantics
165
+ * as VPR.P0: 0 to mask the lane, 1 if it is active.
166
+ * 8-bit vector ops will look at all bits of the result;
167
+ * 16-bit ops will look at bits 0, 2, 4, ...;
168
+ * 32-bit ops will look at bits 0, 4, 8 and 12.
169
+ * Compare pseudocode GetCurInstrBeat(), though that only returns
170
+ * the 4-bit slice of the mask corresponding to a single beat.
171
+ */
172
+ uint16_t mask = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
173
+
174
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK01_MASK)) {
175
+ mask |= 0xff;
176
+ }
177
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK23_MASK)) {
178
+ mask |= 0xff00;
179
+ }
180
+
181
+ if (env->v7m.ltpsize < 4 &&
182
+ env->regs[14] <= (1 << (4 - env->v7m.ltpsize))) {
27
+ /*
183
+ /*
28
+ * Redirect NS EL1 exceptions to NS EL2. These are reported with
184
+ * Tail predication active, and this is the last loop iteration.
29
+ * their original syndrome register value, with the exception of
185
+ * The element size is (1 << ltpsize), and we only want to process
30
+ * SIMD/FP access traps, which are reported as uncategorized
186
+ * loopcount elements, so we want to retain the least significant
31
+ * (see DDI0478C.a D1.10.4)
187
+ * (loopcount * esize) predicate bits and zero out bits above that.
32
+ */
188
+ */
33
+ target_el = 2;
189
+ int masklen = env->regs[14] << env->v7m.ltpsize;
34
+ if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
190
+ assert(masklen <= 16);
35
+ syndrome = syn_uncategorized();
191
+ mask &= MAKE_64BIT_MASK(0, masklen);
192
+ }
193
+
194
+ if ((env->condexec_bits & 0xf) == 0) {
195
+ /*
196
+ * ECI bits indicate which beats are already executed;
197
+ * we handle this by effectively predicating them out.
198
+ */
199
+ int eci = env->condexec_bits >> 4;
200
+ switch (eci) {
201
+ case ECI_NONE:
202
+ break;
203
+ case ECI_A0:
204
+ mask &= 0xfff0;
205
+ break;
206
+ case ECI_A0A1:
207
+ mask &= 0xff00;
208
+ break;
209
+ case ECI_A0A1A2:
210
+ case ECI_A0A1A2B0:
211
+ mask &= 0xf000;
212
+ break;
213
+ default:
214
+ g_assert_not_reached();
36
+ }
215
+ }
37
+ }
216
+ }
38
+
217
+
39
assert(!excp_is_internal(excp));
218
+ return mask;
40
cs->exception_index = excp;
219
+}
41
env->exception.syndrome = syndrome;
220
+
221
+static void mve_advance_vpt(CPUARMState *env)
222
+{
223
+ /* Advance the VPT and ECI state if necessary */
224
+ uint32_t vpr = env->v7m.vpr;
225
+ unsigned mask01, mask23;
226
+
227
+ if ((env->condexec_bits & 0xf) == 0) {
228
+ env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
229
+ (ECI_A0 << 4) : (ECI_NONE << 4);
230
+ }
231
+
232
+ if (!(vpr & (R_V7M_VPR_MASK01_MASK | R_V7M_VPR_MASK23_MASK))) {
233
+ /* VPT not enabled, nothing to do */
234
+ return;
235
+ }
236
+
237
+ mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
238
+ mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
239
+ if (mask01 > 8) {
240
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
241
+ vpr ^= 0xff;
242
+ }
243
+ if (mask23 > 8) {
244
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
245
+ vpr ^= 0xff00;
246
+ }
247
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
248
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
249
+ env->v7m.vpr = vpr;
250
+}
251
+
252
+
253
+#define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE) \
254
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
255
+ { \
256
+ TYPE *d = vd; \
257
+ uint16_t mask = mve_element_mask(env); \
258
+ unsigned b, e; \
259
+ /* \
260
+ * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
261
+ * beats so we don't care if we update part of the dest and \
262
+ * then take an exception. \
263
+ */ \
264
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
265
+ if (mask & (1 << b)) { \
266
+ d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
267
+ } \
268
+ addr += MSIZE; \
269
+ } \
270
+ mve_advance_vpt(env); \
271
+ }
272
+
273
+#define DO_VSTR(OP, MSIZE, STTYPE, ESIZE, TYPE) \
274
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
275
+ { \
276
+ TYPE *d = vd; \
277
+ uint16_t mask = mve_element_mask(env); \
278
+ unsigned b, e; \
279
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
280
+ if (mask & (1 << b)) { \
281
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
282
+ } \
283
+ addr += MSIZE; \
284
+ } \
285
+ mve_advance_vpt(env); \
286
+ }
287
+
288
+DO_VLDR(vldrb, 1, ldub, 1, uint8_t)
289
+DO_VLDR(vldrh, 2, lduw, 2, uint16_t)
290
+DO_VLDR(vldrw, 4, ldl, 4, uint32_t)
291
+
292
+DO_VSTR(vstrb, 1, stb, 1, uint8_t)
293
+DO_VSTR(vstrh, 2, stw, 2, uint16_t)
294
+DO_VSTR(vstrw, 4, stl, 4, uint32_t)
295
+
296
+#undef DO_VLDR
297
+#undef DO_VSTR
298
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
299
index XXXXXXX..XXXXXXX 100644
300
--- a/target/arm/translate-mve.c
301
+++ b/target/arm/translate-mve.c
302
@@ -XXX,XX +XXX,XX @@
303
304
/* Include the generated decoder */
305
#include "decode-mve.c.inc"
306
+
307
+typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
308
+
309
+/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
310
+static inline long mve_qreg_offset(unsigned reg)
311
+{
312
+ return offsetof(CPUARMState, vfp.zregs[reg].d[0]);
313
+}
314
+
315
+static TCGv_ptr mve_qreg_ptr(unsigned reg)
316
+{
317
+ TCGv_ptr ret = tcg_temp_new_ptr();
318
+ tcg_gen_addi_ptr(ret, cpu_env, mve_qreg_offset(reg));
319
+ return ret;
320
+}
321
+
322
+static bool mve_check_qreg_bank(DisasContext *s, int qmask)
323
+{
324
+ /*
325
+ * Check whether Qregs are in range. For v8.1M only Q0..Q7
326
+ * are supported, see VFPSmallRegisterBank().
327
+ */
328
+ return qmask < 8;
329
+}
330
+
331
+static bool mve_eci_check(DisasContext *s)
332
+{
333
+ /*
334
+ * This is a beatwise insn: check that ECI is valid (not a
335
+ * reserved value) and note that we are handling it.
336
+ * Return true if OK, false if we generated an exception.
337
+ */
338
+ s->eci_handled = true;
339
+ switch (s->eci) {
340
+ case ECI_NONE:
341
+ case ECI_A0:
342
+ case ECI_A0A1:
343
+ case ECI_A0A1A2:
344
+ case ECI_A0A1A2B0:
345
+ return true;
346
+ default:
347
+ /* Reserved value: INVSTATE UsageFault */
348
+ gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized(),
349
+ default_exception_el(s));
350
+ return false;
351
+ }
352
+}
353
+
354
+static void mve_update_eci(DisasContext *s)
355
+{
356
+ /*
357
+ * The helper function will always update the CPUState field,
358
+ * so we only need to update the DisasContext field.
359
+ */
360
+ if (s->eci) {
361
+ s->eci = (s->eci == ECI_A0A1A2B0) ? ECI_A0 : ECI_NONE;
362
+ }
363
+}
364
+
365
+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
366
+{
367
+ TCGv_i32 addr;
368
+ uint32_t offset;
369
+ TCGv_ptr qreg;
370
+
371
+ if (!dc_isar_feature(aa32_mve, s) ||
372
+ !mve_check_qreg_bank(s, a->qd) ||
373
+ !fn) {
374
+ return false;
375
+ }
376
+
377
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
378
+ if (a->rn == 15 || (a->rn == 13 && a->w)) {
379
+ return false;
380
+ }
381
+
382
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
383
+ return true;
384
+ }
385
+
386
+ offset = a->imm << a->size;
387
+ if (!a->a) {
388
+ offset = -offset;
389
+ }
390
+ addr = load_reg(s, a->rn);
391
+ if (a->p) {
392
+ tcg_gen_addi_i32(addr, addr, offset);
393
+ }
394
+
395
+ qreg = mve_qreg_ptr(a->qd);
396
+ fn(cpu_env, qreg, addr);
397
+ tcg_temp_free_ptr(qreg);
398
+
399
+ /*
400
+ * Writeback always happens after the last beat of the insn,
401
+ * regardless of predication
402
+ */
403
+ if (a->w) {
404
+ if (!a->p) {
405
+ tcg_gen_addi_i32(addr, addr, offset);
406
+ }
407
+ store_reg(s, a->rn, addr);
408
+ } else {
409
+ tcg_temp_free_i32(addr);
410
+ }
411
+ mve_update_eci(s);
412
+ return true;
413
+}
414
+
415
+static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
416
+{
417
+ static MVEGenLdStFn * const ldstfns[4][2] = {
418
+ { gen_helper_mve_vstrb, gen_helper_mve_vldrb },
419
+ { gen_helper_mve_vstrh, gen_helper_mve_vldrh },
420
+ { gen_helper_mve_vstrw, gen_helper_mve_vldrw },
421
+ { NULL, NULL }
422
+ };
423
+ return do_ldst(s, a, ldstfns[a->size][a->l]);
424
+}
425
diff --git a/target/arm/meson.build b/target/arm/meson.build
426
index XXXXXXX..XXXXXXX 100644
427
--- a/target/arm/meson.build
428
+++ b/target/arm/meson.build
429
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
430
'helper.c',
431
'iwmmxt_helper.c',
432
'm_helper.c',
433
+ 'mve_helper.c',
434
'neon_helper.c',
435
'op_helper.c',
436
'tlb_helper.c',
42
--
437
--
43
2.18.0
438
2.20.1
44
439
45
440
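To make the predication scheme in the VLDR/VSTR patch above concrete, here is a small standalone C sketch (my own illustration, not QEMU code; the function names and values are invented for the example). It mimics how a 16-bit byte-granularity mask, in the spirit of mve_element_mask(), gates which 32-bit elements a VLDRW-style load actually updates: only bit 0 of each 4-bit group is tested, as in the DO_VLDR macro.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a predicated VLDRW: 4-byte elements, one
 * mask bit per byte of the 128-bit vector; element e is active if
 * mask bit (e * 4) is set. */
static void predicated_vldrw(uint32_t *vec, const uint32_t *mem, uint16_t mask)
{
    for (int e = 0; e < 4; e++) {
        if (mask & (1u << (e * 4))) {
            vec[e] = mem[e];
        }
    }
}

int main(void)
{
    uint32_t vec[4] = { 0, 0, 0, 0 };
    uint32_t mem[4] = { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };

    /* Predicate with only elements 0 and 2 active (mask bits 0 and 8) */
    predicated_vldrw(vec, mem, 0x0101);

    for (int e = 0; e < 4; e++) {
        printf("lane %d: 0x%08x\n", e, vec[e]);
    }
    return 0;
}

Lanes 1 and 3 keep their previous contents, which is the merging behaviour the mask semantics describe.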
New patch
1
Implement the variants of MVE VLDR (encodings T1, T2) which perform
2
"widening" loads where bytes or halfwords are loaded from memory and
3
zero or sign-extended into halfword or word length vector elements,
4
and the narrowing MVE VSTR (encodings T1, T2) where bytes or
5
halfwords are stored from halfword or word elements.
1
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210617121628.20116-3-peter.maydell@linaro.org
10
---
11
target/arm/helper-mve.h | 10 ++++++++++
12
target/arm/mve.decode | 25 +++++++++++++++++++++++--
13
target/arm/mve_helper.c | 11 +++++++++++
14
target/arm/translate-mve.c | 14 ++++++++++++++
15
4 files changed, 58 insertions(+), 2 deletions(-)
16
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-mve.h
20
+++ b/target/arm/helper-mve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
22
DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
23
DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
24
DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_3(mve_vldrb_sh, TCG_CALL_NO_WG, void, env, ptr, i32)
27
+DEF_HELPER_FLAGS_3(mve_vldrb_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
28
+DEF_HELPER_FLAGS_3(mve_vldrb_uh, TCG_CALL_NO_WG, void, env, ptr, i32)
29
+DEF_HELPER_FLAGS_3(mve_vldrb_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
30
+DEF_HELPER_FLAGS_3(mve_vldrh_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
31
+DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
32
+DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
33
+DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
34
+DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
35
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/mve.decode
38
+++ b/target/arm/mve.decode
39
@@ -XXX,XX +XXX,XX @@
40
41
%qd 22:1 13:3
42
43
-&vldr_vstr rn qd imm p a w size l
44
+&vldr_vstr rn qd imm p a w size l u
45
46
-@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
47
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
48
+# Note that both Rn and Qd are 3 bits only (no D bit)
49
+@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
50
51
# Vector loads and stores
52
53
+# Widening loads and narrowing stores:
54
+# for these P=0 W=0 is 'related encoding'; sz=11 is 'related encoding'
55
+# This means we need to expand out to multiple patterns for P, W, SZ.
56
+# For stores the U bit must be 0 but we catch that in the trans_ function.
57
+# The naming scheme here is "VLDSTB_H == in-memory byte load/store to/from
58
+# signed halfword element in register", etc.
59
+VLDSTB_H 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 01 ....... @vldst_wn \
60
+ p=0 w=1 size=1
61
+VLDSTB_H 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 01 ....... @vldst_wn \
62
+ p=1 size=1
63
+VLDSTB_W 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 10 ....... @vldst_wn \
64
+ p=0 w=1 size=2
65
+VLDSTB_W 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 10 ....... @vldst_wn \
66
+ p=1 size=2
67
+VLDSTH_W 111 . 110 0 a:1 0 1 . 1 ... ... 0 111 10 ....... @vldst_wn \
68
+ p=0 w=1 size=2
69
+VLDSTH_W 111 . 110 1 a:1 0 w:1 . 1 ... ... 0 111 10 ....... @vldst_wn \
70
+ p=1 size=2
71
+
72
# Non-widening loads/stores (P=0 W=0 is 'related encoding')
73
VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
74
size=0 p=0 w=1
75
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/mve_helper.c
78
+++ b/target/arm/mve_helper.c
79
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrb, 1, stb, 1, uint8_t)
80
DO_VSTR(vstrh, 2, stw, 2, uint16_t)
81
DO_VSTR(vstrw, 4, stl, 4, uint32_t)
82
83
+DO_VLDR(vldrb_sh, 1, ldsb, 2, int16_t)
84
+DO_VLDR(vldrb_sw, 1, ldsb, 4, int32_t)
85
+DO_VLDR(vldrb_uh, 1, ldub, 2, uint16_t)
86
+DO_VLDR(vldrb_uw, 1, ldub, 4, uint32_t)
87
+DO_VLDR(vldrh_sw, 2, ldsw, 4, int32_t)
88
+DO_VLDR(vldrh_uw, 2, lduw, 4, uint32_t)
89
+
90
+DO_VSTR(vstrb_h, 1, stb, 2, int16_t)
91
+DO_VSTR(vstrb_w, 1, stb, 4, int32_t)
92
+DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
93
+
94
#undef DO_VLDR
95
#undef DO_VSTR
96
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate-mve.c
99
+++ b/target/arm/translate-mve.c
100
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
101
};
102
return do_ldst(s, a, ldstfns[a->size][a->l]);
103
}
104
+
105
+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST) \
106
+ static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a) \
107
+ { \
108
+ static MVEGenLdStFn * const ldstfns[2][2] = { \
109
+ { gen_helper_mve_##ST, gen_helper_mve_##SLD }, \
110
+ { NULL, gen_helper_mve_##ULD }, \
111
+ }; \
112
+ return do_ldst(s, a, ldstfns[a->u][a->l]); \
113
+ }
114
+
115
+DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
116
+DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
117
+DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
118
--
119
2.20.1
120
121
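A similar standalone sketch (again my own illustration, not QEMU code) for the widening VLDRB.S32 form described above: consecutive bytes are fetched and sign-extended into 32-bit elements, so the memory stride (1 byte) differs from the element stride (4 bytes), matching the MSIZE/ESIZE split in the DO_VLDR macro.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical widening byte-to-word load: one predicate bit per byte
 * of the destination element; only bit (e * 4) is tested for element e. */
static void widening_vldrb_sw(int32_t *vec, const int8_t *mem, uint16_t mask)
{
    for (int e = 0; e < 4; e++) {
        if (mask & (1u << (e * 4))) {
            vec[e] = mem[e];    /* int8_t -> int32_t sign-extends */
        }
    }
}

int main(void)
{
    int32_t vec[4] = { 0, 0, 0, 0 };
    int8_t mem[4] = { -1, 2, -3, 4 };

    widening_vldrb_sw(vec, mem, 0xffff);    /* all elements active */
    for (int e = 0; e < 4; e++) {
        printf("lane %d: %d\n", e, vec[e]);
    }
    return 0;
}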
New patch
1
1
Implement the MVE VCLZ insn (and the necessary machinery
2
for MVE 1-input vector ops).
3
4
Note that for non-load instructions predication is always performed
5
at a byte level granularity regardless of element size (R_ZLSJ),
6
and so the masking logic here differs from that used in the VLDR
7
and VSTR helpers.
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210617121628.20116-4-peter.maydell@linaro.org
12
---
13
target/arm/helper-mve.h | 4 ++
14
target/arm/mve.decode | 8 ++++
15
target/arm/mve_helper.c | 82 ++++++++++++++++++++++++++++++++++++++
16
target/arm/translate-mve.c | 38 ++++++++++++++++++
17
4 files changed, 132 insertions(+)
18
19
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper-mve.h
22
+++ b/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
24
DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
25
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
26
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
27
+
28
+DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
29
+DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
30
+DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
31
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/mve.decode
34
+++ b/target/arm/mve.decode
35
@@ -XXX,XX +XXX,XX @@
36
#
37
38
%qd 22:1 13:3
39
+%qm 5:1 1:3
40
41
&vldr_vstr rn qd imm p a w size l u
42
+&1op qd qm size
43
44
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
45
# Note that both Rn and Qd are 3 bits only (no D bit)
46
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
47
48
+@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
49
+
50
# Vector loads and stores
51
52
# Widening loads and narrowing stores:
53
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
54
size=1 p=1
55
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
56
size=2 p=1
57
+
58
+# Vector miscellaneous
59
+
60
+VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
61
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/mve_helper.c
64
+++ b/target/arm/mve_helper.c
65
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
66
67
#undef DO_VLDR
68
#undef DO_VSTR
69
+
70
+/*
71
+ * The mergemask(D, R, M) macro performs the operation "*D = R" but
72
+ * storing only the bytes which correspond to 1 bits in M,
73
+ * leaving other bytes in *D unchanged. We use _Generic
74
+ * to select the correct implementation based on the type of D.
75
+ */
76
+
77
+static void mergemask_ub(uint8_t *d, uint8_t r, uint16_t mask)
78
+{
79
+ if (mask & 1) {
80
+ *d = r;
81
+ }
82
+}
83
+
84
+static void mergemask_sb(int8_t *d, int8_t r, uint16_t mask)
85
+{
86
+ mergemask_ub((uint8_t *)d, r, mask);
87
+}
88
+
89
+static void mergemask_uh(uint16_t *d, uint16_t r, uint16_t mask)
90
+{
91
+ uint16_t bmask = expand_pred_b_data[mask & 3];
92
+ *d = (*d & ~bmask) | (r & bmask);
93
+}
94
+
95
+static void mergemask_sh(int16_t *d, int16_t r, uint16_t mask)
96
+{
97
+ mergemask_uh((uint16_t *)d, r, mask);
98
+}
99
+
100
+static void mergemask_uw(uint32_t *d, uint32_t r, uint16_t mask)
101
+{
102
+ uint32_t bmask = expand_pred_b_data[mask & 0xf];
103
+ *d = (*d & ~bmask) | (r & bmask);
104
+}
105
+
106
+static void mergemask_sw(int32_t *d, int32_t r, uint16_t mask)
107
+{
108
+ mergemask_uw((uint32_t *)d, r, mask);
109
+}
110
+
111
+static void mergemask_uq(uint64_t *d, uint64_t r, uint16_t mask)
112
+{
113
+ uint64_t bmask = expand_pred_b_data[mask & 0xff];
114
+ *d = (*d & ~bmask) | (r & bmask);
115
+}
116
+
117
+static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
118
+{
119
+ mergemask_uq((uint64_t *)d, r, mask);
120
+}
121
+
122
+#define mergemask(D, R, M) \
123
+ _Generic(D, \
124
+ uint8_t *: mergemask_ub, \
125
+ int8_t *: mergemask_sb, \
126
+ uint16_t *: mergemask_uh, \
127
+ int16_t *: mergemask_sh, \
128
+ uint32_t *: mergemask_uw, \
129
+ int32_t *: mergemask_sw, \
130
+ uint64_t *: mergemask_uq, \
131
+ int64_t *: mergemask_sq)(D, R, M)
132
+
133
+#define DO_1OP(OP, ESIZE, TYPE, FN) \
134
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
135
+ { \
136
+ TYPE *d = vd, *m = vm; \
137
+ uint16_t mask = mve_element_mask(env); \
138
+ unsigned e; \
139
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
140
+ mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)]), mask); \
141
+ } \
142
+ mve_advance_vpt(env); \
143
+ }
144
+
145
+#define DO_CLZ_B(N) (clz32(N) - 24)
146
+#define DO_CLZ_H(N) (clz32(N) - 16)
147
+
148
+DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
149
+DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
150
+DO_1OP(vclzw, 4, uint32_t, clz32)
151
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
152
index XXXXXXX..XXXXXXX 100644
153
--- a/target/arm/translate-mve.c
154
+++ b/target/arm/translate-mve.c
155
@@ -XXX,XX +XXX,XX @@
156
#include "decode-mve.c.inc"
157
158
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
159
+typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
160
161
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
162
static inline long mve_qreg_offset(unsigned reg)
163
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
164
DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
165
DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
166
DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
167
+
168
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
169
+{
170
+ TCGv_ptr qd, qm;
171
+
172
+ if (!dc_isar_feature(aa32_mve, s) ||
173
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
174
+ !fn) {
175
+ return false;
176
+ }
177
+
178
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
179
+ return true;
180
+ }
181
+
182
+ qd = mve_qreg_ptr(a->qd);
183
+ qm = mve_qreg_ptr(a->qm);
184
+ fn(cpu_env, qd, qm);
185
+ tcg_temp_free_ptr(qd);
186
+ tcg_temp_free_ptr(qm);
187
+ mve_update_eci(s);
188
+ return true;
189
+}
190
+
191
+#define DO_1OP(INSN, FN) \
192
+ static bool trans_##INSN(DisasContext *s, arg_1op *a) \
193
+ { \
194
+ static MVEGenOneOpFn * const fns[] = { \
195
+ gen_helper_mve_##FN##b, \
196
+ gen_helper_mve_##FN##h, \
197
+ gen_helper_mve_##FN##w, \
198
+ NULL, \
199
+ }; \
200
+ return do_1op(s, a, fns[a->size]); \
201
+ }
202
+
203
+DO_1OP(VCLZ, vclz)
204
--
205
2.20.1
206
207
diff view generated by jsdifflib
1
On exception return for M-profile, we must restore the CONTROL.SPSEL
1
Implement the MVE VCLS insn.
2
bit from the EXCRET value before we do any kind of tailchaining,
3
including for the derived exceptions on integrity check failures.
4
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
5
exception entry for the tailchained exception.
6
2
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
5
Message-id: 20210617121628.20116-5-peter.maydell@linaro.org
10
---
6
---
11
target/arm/helper.c | 16 ++++++++++------
7
target/arm/helper-mve.h | 4 ++++
12
1 file changed, 10 insertions(+), 6 deletions(-)
8
target/arm/mve.decode | 1 +
9
target/arm/mve_helper.c | 7 +++++++
10
target/arm/translate-mve.c | 1 +
11
4 files changed, 13 insertions(+)
13
12
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
15
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
19
}
18
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
19
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
20
21
+DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
22
+DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+
25
DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/mve.decode
31
+++ b/target/arm/mve.decode
32
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
33
34
# Vector miscellaneous
35
36
+VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
37
VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
38
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/mve_helper.c
41
+++ b/target/arm/mve_helper.c
42
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
43
mve_advance_vpt(env); \
20
}
44
}
21
45
22
+ /*
46
+#define DO_CLS_B(N) (clrsb32(N) - 24)
23
+ * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
47
+#define DO_CLS_H(N) (clrsb32(N) - 16)
24
+ * Handler mode (and will be until we write the new XPSR.Interrupt
25
+ * field) this does not switch around the current stack pointer.
26
+ * We must do this before we do any kind of tailchaining, including
27
+ * for the derived exceptions on integrity check failures, or we will
28
+ * give the guest an incorrect EXCRET.SPSEL value on exception entry.
29
+ */
30
+ write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
31
+
48
+
32
if (sfault) {
49
+DO_1OP(vclsb, 1, int8_t, DO_CLS_B)
33
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
50
+DO_1OP(vclsh, 2, int16_t, DO_CLS_H)
34
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+DO_1OP(vclsw, 4, int32_t, clrsb32)
35
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
52
+
36
return;
53
#define DO_CLZ_B(N) (clz32(N) - 24)
54
#define DO_CLZ_H(N) (clz32(N) - 16)
55
56
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/translate-mve.c
59
+++ b/target/arm/translate-mve.c
60
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
37
}
61
}
38
62
39
- /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
63
DO_1OP(VCLZ, vclz)
40
- * Handler mode (and will be until we write the new XPSR.Interrupt
64
+DO_1OP(VCLS, vcls)
41
- * field) this does not switch around the current stack pointer.
42
- */
43
- write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
44
-
45
switch_v7m_security_state(env, return_to_secure);
46
47
{
48
--
65
--
49
2.18.0
66
2.20.1
50
67
51
68
New patch
1
Implement the MVE instructions VREV16, VREV32 and VREV64.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210617121628.20116-6-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 7 +++++++
8
target/arm/mve.decode | 4 ++++
9
target/arm/mve_helper.c | 7 +++++++
10
target/arm/translate-mve.c | 33 +++++++++++++++++++++++++++++++++
11
4 files changed, 51 insertions(+)
12
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper-mve.h
16
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
18
DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
19
DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
20
DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
21
+
22
+DEF_HELPER_FLAGS_3(mve_vrev16b, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+DEF_HELPER_FLAGS_3(mve_vrev32b, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
+DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
+DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/mve.decode
31
+++ b/target/arm/mve.decode
32
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
33
34
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
35
VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
36
+
37
+VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
38
+VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
39
+VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
40
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/mve_helper.c
43
+++ b/target/arm/mve_helper.c
44
@@ -XXX,XX +XXX,XX @@ DO_1OP(vclsw, 4, int32_t, clrsb32)
45
DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
46
DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
47
DO_1OP(vclzw, 4, uint32_t, clz32)
48
+
49
+DO_1OP(vrev16b, 2, uint16_t, bswap16)
50
+DO_1OP(vrev32b, 4, uint32_t, bswap32)
51
+DO_1OP(vrev32h, 4, uint32_t, hswap32)
52
+DO_1OP(vrev64b, 8, uint64_t, bswap64)
53
+DO_1OP(vrev64h, 8, uint64_t, hswap64)
54
+DO_1OP(vrev64w, 8, uint64_t, wswap64)
55
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/translate-mve.c
58
+++ b/target/arm/translate-mve.c
59
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
60
61
DO_1OP(VCLZ, vclz)
62
DO_1OP(VCLS, vcls)
63
+
64
+static bool trans_VREV16(DisasContext *s, arg_1op *a)
65
+{
66
+ static MVEGenOneOpFn * const fns[] = {
67
+ gen_helper_mve_vrev16b,
68
+ NULL,
69
+ NULL,
70
+ NULL,
71
+ };
72
+ return do_1op(s, a, fns[a->size]);
73
+}
74
+
75
+static bool trans_VREV32(DisasContext *s, arg_1op *a)
76
+{
77
+ static MVEGenOneOpFn * const fns[] = {
78
+ gen_helper_mve_vrev32b,
79
+ gen_helper_mve_vrev32h,
80
+ NULL,
81
+ NULL,
82
+ };
83
+ return do_1op(s, a, fns[a->size]);
84
+}
85
+
86
+static bool trans_VREV64(DisasContext *s, arg_1op *a)
87
+{
88
+ static MVEGenOneOpFn * const fns[] = {
89
+ gen_helper_mve_vrev64b,
90
+ gen_helper_mve_vrev64h,
91
+ gen_helper_mve_vrev64w,
92
+ NULL,
93
+ };
94
+ return do_1op(s, a, fns[a->size]);
95
+}
96
--
97
2.20.1
98
99
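For VREV32.16 the per-element operation is just a halfword swap inside each 32-bit value (the patch above uses the hswap32 helper for this). A minimal standalone sketch of the same operation, written here from scratch rather than taken from QEMU:

#include <stdint.h>
#include <stdio.h>

/* Swap the two halfwords inside a 32-bit value. */
static uint32_t halfword_swap32(uint32_t x)
{
    return (x << 16) | (x >> 16);
}

int main(void)
{
    printf("0x%08x\n", halfword_swap32(0x11223344));   /* 0x33441122 */
    return 0;
}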
New patch
1
Implement the MVE VMVN (register) operation. Note that for
2
predication this operation is byte-by-byte.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210617121628.20116-7-peter.maydell@linaro.org
7
---
8
target/arm/helper-mve.h | 2 ++
9
target/arm/mve.decode | 3 +++
10
target/arm/mve_helper.c | 4 ++++
11
target/arm/translate-mve.c | 5 +++++
12
4 files changed, 14 insertions(+)
13
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
19
DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
20
DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
21
DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
22
+
23
+DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/mve.decode
27
+++ b/target/arm/mve.decode
28
@@ -XXX,XX +XXX,XX @@
29
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
30
31
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
32
+@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
33
34
# Vector loads and stores
35
36
@@ -XXX,XX +XXX,XX @@ VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
37
VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
38
VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
39
VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
40
+
41
+VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
42
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/mve_helper.c
45
+++ b/target/arm/mve_helper.c
46
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev32h, 4, uint32_t, hswap32)
47
DO_1OP(vrev64b, 8, uint64_t, bswap64)
48
DO_1OP(vrev64h, 8, uint64_t, hswap64)
49
DO_1OP(vrev64w, 8, uint64_t, wswap64)
50
+
51
+#define DO_NOT(N) (~(N))
52
+
53
+DO_1OP(vmvn, 8, uint64_t, DO_NOT)
54
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/translate-mve.c
57
+++ b/target/arm/translate-mve.c
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
59
};
60
return do_1op(s, a, fns[a->size]);
61
}
62
+
63
+static bool trans_VMVN(DisasContext *s, arg_1op *a)
64
+{
65
+ return do_1op(s, a, gen_helper_mve_vmvn);
66
+}
67
--
68
2.20.1
69
70
diff view generated by jsdifflib
New patch
1
Implement the MVE VABS functions (both integer and floating point).
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210617121628.20116-8-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 6 ++++++
8
target/arm/mve.decode | 3 +++
9
target/arm/mve_helper.c | 13 +++++++++++++
10
target/arm/translate-mve.c | 15 +++++++++++++++
11
4 files changed, 37 insertions(+)
12
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper-mve.h
16
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
18
DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
19
20
DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
21
+
22
+DEF_HELPER_FLAGS_3(mve_vabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
+DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/mve.decode
30
+++ b/target/arm/mve.decode
31
@@ -XXX,XX +XXX,XX @@ VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
32
VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
33
34
VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
35
+
36
+VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
37
+VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
38
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/mve_helper.c
41
+++ b/target/arm/mve_helper.c
42
@@ -XXX,XX +XXX,XX @@
43
#include "exec/helper-proto.h"
44
#include "exec/cpu_ldst.h"
45
#include "exec/exec-all.h"
46
+#include "tcg/tcg.h"
47
48
static uint16_t mve_element_mask(CPUARMState *env)
49
{
50
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev64w, 8, uint64_t, wswap64)
51
#define DO_NOT(N) (~(N))
52
53
DO_1OP(vmvn, 8, uint64_t, DO_NOT)
54
+
55
+#define DO_ABS(N) ((N) < 0 ? -(N) : (N))
56
+#define DO_FABSH(N) ((N) & dup_const(MO_16, 0x7fff))
57
+#define DO_FABSS(N) ((N) & dup_const(MO_32, 0x7fffffff))
58
+
59
+DO_1OP(vabsb, 1, int8_t, DO_ABS)
60
+DO_1OP(vabsh, 2, int16_t, DO_ABS)
61
+DO_1OP(vabsw, 4, int32_t, DO_ABS)
62
+
63
+/* We can do these 64 bits at a time */
64
+DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
65
+DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
66
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/translate-mve.c
69
+++ b/target/arm/translate-mve.c
70
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
71
72
DO_1OP(VCLZ, vclz)
73
DO_1OP(VCLS, vcls)
74
+DO_1OP(VABS, vabs)
75
76
static bool trans_VREV16(DisasContext *s, arg_1op *a)
77
{
78
@@ -XXX,XX +XXX,XX @@ static bool trans_VMVN(DisasContext *s, arg_1op *a)
79
{
80
return do_1op(s, a, gen_helper_mve_vmvn);
81
}
82
+
83
+static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
84
+{
85
+ static MVEGenOneOpFn * const fns[] = {
86
+ NULL,
87
+ gen_helper_mve_vfabsh,
88
+ gen_helper_mve_vfabss,
89
+ NULL,
90
+ };
91
+ if (!dc_isar_feature(aa32_mve_fp, s)) {
92
+ return false;
93
+ }
94
+ return do_1op(s, a, fns[a->size]);
95
+}
96
--
97
2.20.1
98
99
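The floating-point VABS forms above never need to interpret the values: clearing the sign bit of each lane is enough, which is why the helpers can work on 64 bits of the vector at a time. A standalone sketch (my own example, not QEMU code) for a pair of float32 lanes held in one 64-bit chunk:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Clear bit 31 of each 32-bit lane held in the 64-bit chunk. */
static uint64_t fabs_two_f32(uint64_t chunk)
{
    return chunk & 0x7fffffff7fffffffULL;
}

int main(void)
{
    float in[2] = { -1.5f, 2.25f };
    float out[2];
    uint64_t chunk;

    memcpy(&chunk, in, sizeof(chunk));
    chunk = fabs_two_f32(chunk);
    memcpy(out, &chunk, sizeof(out));

    printf("%f %f\n", out[0], out[1]);   /* 1.500000 2.250000 */
    return 0;
}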
New patch
1
Implement the MVE VNEG insn (both integer and floating point forms).
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210617121628.20116-9-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 6 ++++++
8
target/arm/mve.decode | 2 ++
9
target/arm/mve_helper.c | 12 ++++++++++++
10
target/arm/translate-mve.c | 15 +++++++++++++++
11
4 files changed, 35 insertions(+)
12
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper-mve.h
16
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
18
DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
19
DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
20
DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
21
+
22
+DEF_HELPER_FLAGS_3(mve_vnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
+DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/mve.decode
30
+++ b/target/arm/mve.decode
31
@@ -XXX,XX +XXX,XX @@ VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
32
33
VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
34
VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
35
+VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
36
+VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
37
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/mve_helper.c
40
+++ b/target/arm/mve_helper.c
41
@@ -XXX,XX +XXX,XX @@ DO_1OP(vabsw, 4, int32_t, DO_ABS)
42
/* We can do these 64 bits at a time */
43
DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
44
DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
45
+
46
+#define DO_NEG(N) (-(N))
47
+#define DO_FNEGH(N) ((N) ^ dup_const(MO_16, 0x8000))
48
+#define DO_FNEGS(N) ((N) ^ dup_const(MO_32, 0x80000000))
49
+
50
+DO_1OP(vnegb, 1, int8_t, DO_NEG)
51
+DO_1OP(vnegh, 2, int16_t, DO_NEG)
52
+DO_1OP(vnegw, 4, int32_t, DO_NEG)
53
+
54
+/* We can do these 64 bits at a time */
55
+DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
56
+DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
57
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate-mve.c
60
+++ b/target/arm/translate-mve.c
61
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
62
DO_1OP(VCLZ, vclz)
63
DO_1OP(VCLS, vcls)
64
DO_1OP(VABS, vabs)
65
+DO_1OP(VNEG, vneg)
66
67
static bool trans_VREV16(DisasContext *s, arg_1op *a)
68
{
69
@@ -XXX,XX +XXX,XX @@ static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
70
}
71
return do_1op(s, a, fns[a->size]);
72
}
73
+
74
+static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
75
+{
76
+ static MVEGenOneOpFn * const fns[] = {
77
+ NULL,
78
+ gen_helper_mve_vfnegh,
79
+ gen_helper_mve_vfnegs,
80
+ NULL,
81
+ };
82
+ if (!dc_isar_feature(aa32_mve_fp, s)) {
83
+ return false;
84
+ }
85
+ return do_1op(s, a, fns[a->size]);
86
+}
87
--
2.20.1

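As an aside on the VNEG patch above: negating an IEEE float only needs its
sign bit flipped, which is why DO_FNEGH/DO_FNEGS can work on 64 bits (four
f16 or two f32 lanes) at a time. A standalone plain-C sketch of the idea
(not QEMU code; the helper names here are invented for illustration):

    #include <stdint.h>
    #include <assert.h>

    /* Replicate a 16-bit pattern across a 64-bit word, like dup_const(MO_16, x). */
    static uint64_t dup16(uint16_t x)
    {
        return 0x0001000100010001ULL * x;
    }

    /* Negate four float16 lanes at once by flipping each lane's sign bit. */
    static uint64_t fneg_h_model(uint64_t lanes)
    {
        return lanes ^ dup16(0x8000);
    }

    int main(void)
    {
        /* float16 1.0 is 0x3c00; negation gives 0xbc00 in every lane. */
        assert(fneg_h_model(0x3c003c003c003c00ULL) == 0xbc00bc00bc00bc00ULL);
        return 0;
    }
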
The Arm MVE VDUP implementation would like to be able to emit code to
duplicate a byte or halfword value into an i32. We have code to do
this already in tcg-op-gvec.c, so all we need to do is make the
functions global.

For consistency with other functions made available to the frontends:
 * we rename to tcg_gen_dup_*
 * we expose both the _i32 and _i64 forms
 * we provide the #define for a _tl form

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210617121628.20116-10-peter.maydell@linaro.org
---
 include/tcg/tcg-op.h |  8 ++++++++
 include/tcg/tcg.h    |  1 -
 tcg/tcg-op-gvec.c    | 20 ++++++++++----------
 3 files changed, 18 insertions(+), 11 deletions(-)

From: Luc Michel <luc.michel@greensocs.com>

Some functions are now only used in arm_gic.c, so make them static.
Some of them were only used by the NVIC implementation and are not
used any more, so remove them.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-4-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h |  4 ----
 hw/intc/arm_gic.c      | 23 ++---------------------
 2 files changed, 2 insertions(+), 25 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
21
diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/gic_internal.h
23
--- a/include/tcg/tcg-op.h
20
+++ b/hw/intc/gic_internal.h
24
+++ b/include/tcg/tcg-op.h
21
@@ -XXX,XX +XXX,XX @@
25
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
22
/* The special cases for the revision property: */
26
void tcg_gen_umax_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
23
#define REV_11MPCORE 0
27
void tcg_gen_abs_i32(TCGv_i32, TCGv_i32);
24
28
25
-void gic_set_pending_private(GICState *s, int cpu, int irq);
29
+/* Replicate a value of size @vece from @in to all the lanes in @out */
26
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
30
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in);
27
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
31
+
28
-void gic_update(GICState *s);
32
static inline void tcg_gen_discard_i32(TCGv_i32 arg)
29
-void gic_init_irqs_and_distributor(GICState *s);
33
{
30
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
34
tcg_gen_op1_i32(INDEX_op_discard, arg);
31
MemTxAttrs attrs);
35
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
32
36
void tcg_gen_umax_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
37
void tcg_gen_abs_i64(TCGv_i64, TCGv_i64);
38
39
+/* Replicate a value of size @vece from @in to all the lanes in @out */
40
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in);
41
+
42
#if TCG_TARGET_REG_BITS == 64
43
static inline void tcg_gen_discard_i64(TCGv_i64 arg)
44
{
45
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
46
#define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i64
47
#define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i64
48
#define tcg_gen_dup_tl_vec tcg_gen_dup_i64_vec
49
+#define tcg_gen_dup_tl tcg_gen_dup_i64
50
#else
51
#define tcg_gen_movi_tl tcg_gen_movi_i32
52
#define tcg_gen_mov_tl tcg_gen_mov_i32
53
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
54
#define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i32
55
#define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i32
56
#define tcg_gen_dup_tl_vec tcg_gen_dup_i32_vec
57
+#define tcg_gen_dup_tl tcg_gen_dup_i32
58
#endif
59
60
#if UINTPTR_MAX == UINT32_MAX
61
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
34
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/arm_gic.c
63
--- a/include/tcg/tcg.h
36
+++ b/hw/intc/arm_gic.c
64
+++ b/include/tcg/tcg.h
37
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
65
@@ -XXX,XX +XXX,XX @@ uint64_t dup_const(unsigned vece, uint64_t c);
38
66
: (qemu_build_not_reached_always(), 0)) \
39
/* TODO: Many places that call this routine could be optimized. */
67
: dup_const(VECE, C))
40
/* Update interrupt status after enabled or pending bits have been changed. */
68
41
-void gic_update(GICState *s)
69
-
42
+static void gic_update(GICState *s)
70
/*
71
* Memory helpers that will be used by TCG generated code.
72
*/
73
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/tcg/tcg-op-gvec.c
76
+++ b/tcg/tcg-op-gvec.c
77
@@ -XXX,XX +XXX,XX @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
78
}
79
80
/* Duplicate IN into OUT as per VECE. */
81
-static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
82
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
43
{
83
{
44
int best_irq;
84
switch (vece) {
45
int best_prio;
85
case MO_8:
46
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
86
@@ -XXX,XX +XXX,XX @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
47
}
87
}
48
}
88
}
49
89
50
-void gic_set_pending_private(GICState *s, int cpu, int irq)
90
-static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
51
-{
91
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
52
- int cm = 1 << cpu;
53
-
54
- if (gic_test_pending(s, irq, cm)) {
55
- return;
56
- }
57
-
58
- DPRINTF("Set %d pending cpu %d\n", irq, cpu);
59
- GIC_DIST_SET_PENDING(irq, cm);
60
- gic_update(s);
61
-}
62
-
63
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
64
int cm, int target)
65
{
92
{
66
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
93
switch (vece) {
67
GIC_DIST_CLEAR_ACTIVE(irq, cm);
94
case MO_8:
95
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
96
&& (vece != MO_32 || !check_size_impl(oprsz, 4))) {
97
t_64 = tcg_temp_new_i64();
98
tcg_gen_extu_i32_i64(t_64, in_32);
99
- gen_dup_i64(vece, t_64, t_64);
100
+ tcg_gen_dup_i64(vece, t_64, t_64);
101
} else {
102
t_32 = tcg_temp_new_i32();
103
- gen_dup_i32(vece, t_32, in_32);
104
+ tcg_gen_dup_i32(vece, t_32, in_32);
105
}
106
} else if (in_64) {
107
/* We are given a 64-bit variable input. */
108
t_64 = tcg_temp_new_i64();
109
- gen_dup_i64(vece, t_64, in_64);
110
+ tcg_gen_dup_i64(vece, t_64, in_64);
111
} else {
112
/* We are given a constant input. */
113
/* For 64-bit hosts, use 64-bit constants for "simple" constants
114
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2s(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
115
} else if (g->fni8 && check_size_impl(oprsz, 8)) {
116
TCGv_i64 t64 = tcg_temp_new_i64();
117
118
- gen_dup_i64(g->vece, t64, c);
119
+ tcg_gen_dup_i64(g->vece, t64, c);
120
expand_2s_i64(dofs, aofs, oprsz, t64, g->scalar_first, g->fni8);
121
tcg_temp_free_i64(t64);
122
} else if (g->fni4 && check_size_impl(oprsz, 4)) {
123
TCGv_i32 t32 = tcg_temp_new_i32();
124
125
tcg_gen_extrl_i64_i32(t32, c);
126
- gen_dup_i32(g->vece, t32, t32);
127
+ tcg_gen_dup_i32(g->vece, t32, t32);
128
expand_2s_i32(dofs, aofs, oprsz, t32, g->scalar_first, g->fni4);
129
tcg_temp_free_i32(t32);
130
} else {
131
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
132
TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
133
{
134
TCGv_i64 tmp = tcg_temp_new_i64();
135
- gen_dup_i64(vece, tmp, c);
136
+ tcg_gen_dup_i64(vece, tmp, c);
137
tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ands);
138
tcg_temp_free_i64(tmp);
68
}
139
}
69
140
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
70
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
141
TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
71
+static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
72
{
142
{
73
int cm = 1 << cpu;
143
TCGv_i64 tmp = tcg_temp_new_i64();
74
int group;
144
- gen_dup_i64(vece, tmp, c);
75
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
145
+ tcg_gen_dup_i64(vece, tmp, c);
76
.endianness = DEVICE_NATIVE_ENDIAN,
146
tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_xors);
77
};
147
tcg_temp_free_i64(tmp);
78
148
}
79
-/* This function is used by nvic model */
149
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
80
-void gic_init_irqs_and_distributor(GICState *s)
150
TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
81
-{
82
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
83
-}
84
-
85
static void arm_gic_realize(DeviceState *dev, Error **errp)
86
{
151
{
87
/* Device instance realize function for the GIC sysbus device */
152
TCGv_i64 tmp = tcg_temp_new_i64();
153
- gen_dup_i64(vece, tmp, c);
154
+ tcg_gen_dup_i64(vece, tmp, c);
155
tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ors);
156
tcg_temp_free_i64(tmp);
157
}
88
--
158
--
89
2.18.0
159
2.20.1
90
160
91
161
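To make the dup semantics in the tcg_gen_dup_* patch above concrete, here is
a plain-C model of what replicating at MO_8 into a 32-bit value means. This
only models the result; the real tcg_gen_dup_i32() emits TCG ops rather than
doing host arithmetic like this:

    #include <stdint.h>
    #include <assert.h>

    /* Model of an MO_8 dup into 32 bits: 0x000000ab -> 0xabababab. */
    static uint32_t dup8_model(uint32_t in)
    {
        return (in & 0xff) * 0x01010101u;
    }

    int main(void)
    {
        assert(dup8_model(0xab) == 0xabababab);
        return 0;
    }
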
Implement the MVE VDUP insn, which duplicates a value from
a general-purpose register into every lane of a vector
register (subject to predication).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-11-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/mve.decode      | 10 ++++++++++
 target/arm/mve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-mve.c | 27 +++++++++++++++++++++++++++
 4 files changed, 55 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    |  5 ----
 target/arm/translate-sve.c | 49 +++++++++++++++++++++++++-------------
 2 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
23
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/sve_helper.c
17
--- a/target/arm/helper-mve.h
25
+++ b/target/arm/sve_helper.c
18
+++ b/target/arm/helper-mve.h
26
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
27
return flags;
20
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
28
}
21
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
29
22
30
- /* Scale from predicate element count to bits. */
23
+DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
31
- count <<= esz;
24
+
32
- /* Bound to the bits in the predicate. */
25
DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
33
- count = MIN(count, oprsz * 8);
26
DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
34
-
27
DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
35
/* Set all of the requested bits. */
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
36
for (i = 0; i < count / 64; ++i) {
37
d->p[i] = esz_mask;
38
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
39
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-sve.c
30
--- a/target/arm/mve.decode
41
+++ b/target/arm/translate-sve.c
31
+++ b/target/arm/mve.decode
42
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
32
@@ -XXX,XX +XXX,XX @@
43
33
44
static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
34
%qd 22:1 13:3
45
{
35
%qm 5:1 1:3
46
- if (!sve_access_check(s)) {
36
+%qn 7:1 17:3
47
- return true;
37
48
- }
38
&vldr_vstr rn qd imm p a w size l u
49
-
39
&1op qd qm size
50
- TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
40
@@ -XXX,XX +XXX,XX @@ VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
51
- TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
41
VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
52
- TCGv_i64 t0 = tcg_temp_new_i64();
42
VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
53
- TCGv_i64 t1 = tcg_temp_new_i64();
43
VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
54
+ TCGv_i64 op0, op1, t0, t1, tmax;
44
+
55
TCGv_i32 t2, t3;
45
+&vdup qd rt size
56
TCGv_ptr ptr;
46
+# Qd is in the fields usually named Qn
57
unsigned desc, vsz = vec_full_reg_size(s);
47
+@vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
58
TCGCond cond;
48
+
59
49
+# B and E bits encode size, which we decode here to the usual size values
60
+ if (!sve_access_check(s)) {
50
+VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
51
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
52
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
53
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/mve_helper.c
56
+++ b/target/arm/mve_helper.c
57
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
58
uint64_t *: mergemask_uq, \
59
int64_t *: mergemask_sq)(D, R, M)
60
61
+void HELPER(mve_vdup)(CPUARMState *env, void *vd, uint32_t val)
62
+{
63
+ /*
64
+ * The generated code already replicated an 8 or 16 bit constant
65
+ * into the 32-bit value, so we only need to write the 32-bit
66
+ * value to all elements of the Qreg, allowing for predication.
67
+ */
68
+ uint32_t *d = vd;
69
+ uint16_t mask = mve_element_mask(env);
70
+ unsigned e;
71
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
72
+ mergemask(&d[H4(e)], val, mask);
73
+ }
74
+ mve_advance_vpt(env);
75
+}
76
+
77
#define DO_1OP(OP, ESIZE, TYPE, FN) \
78
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
79
{ \
80
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/translate-mve.c
83
+++ b/target/arm/translate-mve.c
84
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
85
DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
86
DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
87
88
+static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
89
+{
90
+ TCGv_ptr qd;
91
+ TCGv_i32 rt;
92
+
93
+ if (!dc_isar_feature(aa32_mve, s) ||
94
+ !mve_check_qreg_bank(s, a->qd)) {
95
+ return false;
96
+ }
97
+ if (a->rt == 13 || a->rt == 15) {
98
+ /* UNPREDICTABLE; we choose to UNDEF */
99
+ return false;
100
+ }
101
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
61
+ return true;
102
+ return true;
62
+ }
103
+ }
63
+
104
+
64
+ op0 = read_cpu_reg(s, a->rn, 1);
105
+ qd = mve_qreg_ptr(a->qd);
65
+ op1 = read_cpu_reg(s, a->rm, 1);
106
+ rt = load_reg(s, a->rt);
107
+ tcg_gen_dup_i32(a->size, rt, rt);
108
+ gen_helper_mve_vdup(cpu_env, qd, rt);
109
+ tcg_temp_free_ptr(qd);
110
+ tcg_temp_free_i32(rt);
111
+ mve_update_eci(s);
112
+ return true;
113
+}
66
+
114
+
67
if (!a->sf) {
115
static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
68
if (a->u) {
116
{
69
tcg_gen_ext32u_i64(op0, op0);
117
TCGv_ptr qd, qm;
70
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
71
72
/* For the helper, compress the different conditions into a computation
73
* of how many iterations for which the condition is true.
74
- *
75
- * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
76
- * 2**64 iterations, overflowing to 0. Of course, predicate registers
77
- * aren't that large, so any value >= predicate size is sufficient.
78
*/
79
+ t0 = tcg_temp_new_i64();
80
+ t1 = tcg_temp_new_i64();
81
tcg_gen_sub_i64(t0, op1, op0);
82
83
- /* t0 = MIN(op1 - op0, vsz). */
84
- tcg_gen_movi_i64(t1, vsz);
85
- tcg_gen_umin_i64(t0, t0, t1);
86
+ tmax = tcg_const_i64(vsz >> a->esz);
87
if (a->eq) {
88
/* Equality means one more iteration. */
89
tcg_gen_addi_i64(t0, t0, 1);
90
+
91
+ /* If op1 is max (un)signed integer (and the only time the addition
92
+ * above could overflow), then we produce an all-true predicate by
93
+ * setting the count to the vector length. This is because the
94
+ * pseudocode is described as an increment + compare loop, and the
95
+ * max integer would always compare true.
96
+ */
97
+ tcg_gen_movi_i64(t1, (a->sf
98
+ ? (a->u ? UINT64_MAX : INT64_MAX)
99
+ : (a->u ? UINT32_MAX : INT32_MAX)));
100
+ tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
101
}
102
103
- /* t0 = (condition true ? t0 : 0). */
104
+ /* Bound to the maximum. */
105
+ tcg_gen_umin_i64(t0, t0, tmax);
106
+ tcg_temp_free_i64(tmax);
107
+
108
+ /* Set the count to zero if the condition is false. */
109
cond = (a->u
110
? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
111
: (a->eq ? TCG_COND_LE : TCG_COND_LT));
112
tcg_gen_movi_i64(t1, 0);
113
tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
114
+ tcg_temp_free_i64(t1);
115
116
+ /* Since we're bounded, pass as a 32-bit type. */
117
t2 = tcg_temp_new_i32();
118
tcg_gen_extrl_i64_i32(t2, t0);
119
tcg_temp_free_i64(t0);
120
- tcg_temp_free_i64(t1);
121
+
122
+ /* Scale elements to bits. */
123
+ tcg_gen_shli_i32(t2, t2, a->esz);
124
125
desc = (vsz / 8) - 2;
126
desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
127
--
118
--
128
2.18.0
119
2.20.1
129
120
130
121
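A small plain-C model of the predicated write that the VDUP helper above
performs for 32-bit lanes, simplified to whole-lane predication (the real
mergemask() can also predicate individual bytes within a lane, which this
sketch deliberately omits):

    #include <stdint.h>
    #include <stdio.h>

    /* Write a replicated value only to lanes whose 4 mask bits are set. */
    static void vdup_w_model(uint32_t q[4], uint32_t val, uint16_t mask)
    {
        for (unsigned e = 0; e < 4; e++, mask >>= 4) {
            if (mask & 0xf) {
                q[e] = val;
            }
        }
    }

    int main(void)
    {
        uint32_t q[4] = { 0, 0, 0, 0 };
        vdup_w_model(q, 0xdeadbeef, 0x00ff);   /* only lanes 0 and 1 enabled */
        printf("%08x %08x %08x %08x\n", q[0], q[1], q[2], q[3]);
        return 0;
    }
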
Implement the MVE vector logical operations operating
on two registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-12-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  6 ++++++
 target/arm/mve.decode      |  9 +++++++++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+)

From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_deactivate_irq() and
gic_complete_irq() functions.

When the guest writes an invalid vIRQ to V_EOIR or V_DIR, since the
GICv2 specification is not entirely clear here, we adopt the behaviour
observed on real hardware:
  * When V_CTRL.EOIMode is false (EOI split is disabled):
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
         triggers a priority drop, and increments V_HCR.EOICount.
      -> If V_APR is already cleared, nothing happens.

    - An invalid vIRQ write to V_DIR is ignored.

  * When V_CTRL.EOIMode is true:
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
         triggers a priority drop.
      -> If V_APR is already cleared, nothing happens.

    - An invalid vIRQ write to V_DIR increments V_HCR.EOICount.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180727095421.386-13-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 51 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 47 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
32
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
34
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/arm_gic.c
16
--- a/target/arm/helper-mve.h
36
+++ b/hw/intc/arm_gic.c
17
+++ b/target/arm/helper-mve.h
37
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
38
{
19
DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
39
int group;
20
DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
40
21
DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
41
- if (irq >= s->num_irq) {
22
+
42
+ if (irq >= GIC_MAXIRQ || (!gic_is_vcpu(cpu) && irq >= s->num_irq)) {
23
+DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
43
/*
24
+DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
44
* This handles two cases:
25
+DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
* 1. If software writes the ID of a spurious interrupt [ie 1023]
26
+DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
46
* to the GICC_DIR, the GIC ignores that write.
27
+DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
47
* 2. If software writes the number of a non-existent interrupt
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
48
* this must be a subcase of "value written is not an active interrupt"
29
index XXXXXXX..XXXXXXX 100644
49
- * and so this is UNPREDICTABLE. We choose to ignore it.
30
--- a/target/arm/mve.decode
50
+ * and so this is UNPREDICTABLE. We choose to ignore it. For vCPUs,
31
+++ b/target/arm/mve.decode
51
+ * all IRQs potentially exist, so this limit does not apply.
32
@@ -XXX,XX +XXX,XX @@
52
*/
33
53
return;
34
&vldr_vstr rn qd imm p a w size l u
54
}
35
&1op qd qm size
55
36
+&2op qd qm qn size
56
- group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
37
57
-
38
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
58
if (!gic_eoi_split(s, cpu, attrs)) {
39
# Note that both Rn and Qd are 3 bits only (no D bit)
59
/* This is UNPREDICTABLE; we choose to ignore it */
40
@@ -XXX,XX +XXX,XX @@
60
qemu_log_mask(LOG_GUEST_ERROR,
41
61
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
42
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
62
return;
43
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
63
}
44
+@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
64
45
65
+ if (gic_is_vcpu(cpu) && !gic_virq_is_valid(s, irq, cpu)) {
46
# Vector loads and stores
66
+ /* This vIRQ does not have an LR entry which is either active or
47
67
+ * pending and active. Increment EOICount and ignore the write.
48
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
68
+ */
49
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
69
+ int rcpu = gic_get_vcpu_real_id(cpu);
50
size=2 p=1
70
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
51
71
+ return;
52
+# Vector 2-op
53
+VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
54
+VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
55
+VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
56
+VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
57
+VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
58
+
59
# Vector miscellaneous
60
61
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
62
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/mve_helper.c
65
+++ b/target/arm/mve_helper.c
66
@@ -XXX,XX +XXX,XX @@ DO_1OP(vnegw, 4, int32_t, DO_NEG)
67
/* We can do these 64 bits at a time */
68
DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
69
DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
70
+
71
+#define DO_2OP(OP, ESIZE, TYPE, FN) \
72
+ void HELPER(glue(mve_, OP))(CPUARMState *env, \
73
+ void *vd, void *vn, void *vm) \
74
+ { \
75
+ TYPE *d = vd, *n = vn, *m = vm; \
76
+ uint16_t mask = mve_element_mask(env); \
77
+ unsigned e; \
78
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
79
+ mergemask(&d[H##ESIZE(e)], \
80
+ FN(n[H##ESIZE(e)], m[H##ESIZE(e)]), mask); \
81
+ } \
82
+ mve_advance_vpt(env); \
72
+ }
83
+ }
73
+
84
+
74
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
85
+#define DO_AND(N, M) ((N) & (M))
86
+#define DO_BIC(N, M) ((N) & ~(M))
87
+#define DO_ORR(N, M) ((N) | (M))
88
+#define DO_ORN(N, M) ((N) | ~(M))
89
+#define DO_EOR(N, M) ((N) ^ (M))
75
+
90
+
76
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
91
+DO_2OP(vand, 8, uint64_t, DO_AND)
77
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
92
+DO_2OP(vbic, 8, uint64_t, DO_BIC)
78
return;
93
+DO_2OP(vorr, 8, uint64_t, DO_ORR)
79
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
94
+DO_2OP(vorn, 8, uint64_t, DO_ORN)
80
int group;
95
+DO_2OP(veor, 8, uint64_t, DO_EOR)
81
96
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
82
DPRINTF("EOI %d\n", irq);
97
index XXXXXXX..XXXXXXX 100644
83
+ if (gic_is_vcpu(cpu)) {
98
--- a/target/arm/translate-mve.c
84
+ /* The call to gic_prio_drop() will clear a bit in GICH_APR iff the
99
+++ b/target/arm/translate-mve.c
85
+ * running prio is < 0x100.
100
@@ -XXX,XX +XXX,XX @@
86
+ */
101
87
+ bool prio_drop = s->running_priority[cpu] < 0x100;
102
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
103
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
104
+typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
105
106
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
107
static inline long mve_qreg_offset(unsigned reg)
108
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
109
}
110
return do_1op(s, a, fns[a->size]);
111
}
88
+
112
+
89
+ if (irq >= GIC_MAXIRQ) {
113
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
90
+ /* Ignore spurious interrupt */
114
+{
91
+ return;
115
+ TCGv_ptr qd, qn, qm;
92
+ }
93
+
116
+
94
+ gic_drop_prio(s, cpu, 0);
117
+ if (!dc_isar_feature(aa32_mve, s) ||
95
+
118
+ !mve_check_qreg_bank(s, a->qd | a->qn | a->qm) ||
96
+ if (!gic_eoi_split(s, cpu, attrs)) {
119
+ !fn) {
97
+ bool valid = gic_virq_is_valid(s, irq, cpu);
120
+ return false;
98
+ if (prio_drop && !valid) {
121
+ }
99
+ /* We are in a situation where:
122
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
100
+ * - V_CTRL.EOIMode is false (no EOI split),
123
+ return true;
101
+ * - The call to gic_drop_prio() cleared a bit in GICH_APR,
102
+ * - This vIRQ does not have an LR entry which is either
103
+ * active or pending and active.
104
+ * In that case, we must increment EOICount.
105
+ */
106
+ int rcpu = gic_get_vcpu_real_id(cpu);
107
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
108
+ } else if (valid) {
109
+ gic_clear_active(s, irq, cpu);
110
+ }
111
+ }
112
+
113
+ return;
114
+ }
124
+ }
115
+
125
+
116
if (irq >= s->num_irq) {
126
+ qd = mve_qreg_ptr(a->qd);
117
/* This handles two cases:
127
+ qn = mve_qreg_ptr(a->qn);
118
* 1. If software writes the ID of a spurious interrupt [ie 1023]
128
+ qm = mve_qreg_ptr(a->qm);
129
+ fn(cpu_env, qd, qn, qm);
130
+ tcg_temp_free_ptr(qd);
131
+ tcg_temp_free_ptr(qn);
132
+ tcg_temp_free_ptr(qm);
133
+ mve_update_eci(s);
134
+ return true;
135
+}
136
+
137
+#define DO_LOGIC(INSN, HELPER) \
138
+ static bool trans_##INSN(DisasContext *s, arg_2op *a) \
139
+ { \
140
+ return do_2op(s, a, HELPER); \
141
+ }
142
+
143
+DO_LOGIC(VAND, gen_helper_mve_vand)
144
+DO_LOGIC(VBIC, gen_helper_mve_vbic)
145
+DO_LOGIC(VORR, gen_helper_mve_vorr)
146
+DO_LOGIC(VORN, gen_helper_mve_vorn)
147
+DO_LOGIC(VEOR, gen_helper_mve_veor)
119
--
148
--
120
2.18.0
149
2.20.1
121
150
122
151
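For the vector logical operations patch above, the lane size is irrelevant,
so the helpers run over the 128-bit vector as two 64-bit chunks. Roughly what
DO_2OP(vand, 8, uint64_t, DO_AND) produces, as a plain-C sketch with the
mergemask() predication machinery left out:

    #include <stdint.h>

    /* Unpredicated model of the MVE VAND helper: 16 bytes of vector
     * processed as two uint64_t chunks, since AND is the same whatever
     * the element size. */
    static void mve_vand_model(uint64_t d[2], const uint64_t n[2],
                               const uint64_t m[2])
    {
        for (unsigned e = 0; e < 16 / 8; e++) {
            d[e] = n[e] & m[e];
        }
    }
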
Implement the MVE VADD, VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-13-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 12 ++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 14 ++++++++++++
 target/arm/translate-mve.c | 16 ++++++++++++++++
 4 files changed, 47 insertions(+)

Now that all the callers can handle get_page_addr_code() returning -1,
remove all the code which tries to handle execution from MMIO regions
or small-MMU-region RAM areas. This means that we can correctly
execute from these areas, rather than ending up either aborting QEMU
or delivering an incorrect guest exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
---
 accel/tcg/cputlb.c | 95 +++++-----------------------------------------
 1 file changed, 10 insertions(+), 85 deletions(-)

16
12
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
15
--- a/target/arm/helper-mve.h
20
+++ b/accel/tcg/cputlb.c
16
+++ b/target/arm/helper-mve.h
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
prot, mmu_idx, size);
18
DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
}
19
DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
20
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
-static void report_bad_exec(CPUState *cpu, target_ulong addr)
21
+
26
-{
22
+DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
- /* Accidentally executing outside RAM or ROM is quite common for
23
+DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
- * several user-error situations, so report it in a way that
24
+DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
- * makes it clear that this isn't a QEMU bug and provide suggestions
25
+
30
- * about what a user could do to fix things.
26
+DEF_HELPER_FLAGS_4(mve_vsubb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
- */
27
+DEF_HELPER_FLAGS_4(mve_vsubh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
- error_report("Trying to execute code outside RAM or ROM at 0x"
28
+DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
- TARGET_FMT_lx, addr);
29
+
34
- error_printf("This usually means one of the following happened:\n\n"
30
+DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
- "(1) You told QEMU to execute a kernel for the wrong machine "
31
+DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
- "type, and it crashed on startup (eg trying to run a "
32
+DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
- "raspberry pi kernel on a versatilepb QEMU machine)\n"
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
38
- "(2) You didn't give QEMU a kernel or BIOS filename at all, "
34
index XXXXXXX..XXXXXXX 100644
39
- "and QEMU executed a ROM full of no-op instructions until "
35
--- a/target/arm/mve.decode
40
- "it fell off the end\n"
36
+++ b/target/arm/mve.decode
41
- "(3) Your guest kernel has a bug and crashed by jumping "
37
@@ -XXX,XX +XXX,XX @@
42
- "off into nowhere\n\n"
38
43
- "This is almost always one of the first two, so check your "
39
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
44
- "command line and that you are using the right type of kernel "
40
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
45
- "for this machine.\n"
41
+@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
46
- "If you think option (3) is likely then you can try debugging "
42
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
47
- "your guest with the -d debug options; in particular "
43
48
- "-d guest_errors will cause the log to include a dump of the "
44
# Vector loads and stores
49
- "guest register state at this point.\n\n"
45
@@ -XXX,XX +XXX,XX @@ VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
50
- "Execution cannot continue; stopping here.\n\n");
46
VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
51
-
47
VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
52
- /* Report also to the logs, with more detail including register dump */
48
53
- qemu_log_mask(LOG_GUEST_ERROR, "qemu: fatal: Trying to execute code "
49
+VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
54
- "outside RAM or ROM at 0x" TARGET_FMT_lx "\n", addr);
50
+VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
55
- log_cpu_state_mask(LOG_GUEST_ERROR, cpu, CPU_DUMP_FPU | CPU_DUMP_CCOP);
51
+VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
56
-}
52
+
57
-
53
# Vector miscellaneous
58
static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
54
59
{
55
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
60
ram_addr_t ram_addr;
56
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
61
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
57
index XXXXXXX..XXXXXXX 100644
62
MemoryRegionSection *section;
58
--- a/target/arm/mve_helper.c
63
CPUState *cpu = ENV_GET_CPU(env);
59
+++ b/target/arm/mve_helper.c
64
CPUIOTLBEntry *iotlbentry;
60
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
65
- hwaddr physaddr, mr_offset;
61
mve_advance_vpt(env); \
66
67
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
68
mmu_idx = cpu_mmu_index(env, true);
69
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
70
if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
71
/*
72
* This is a TLB_RECHECK access, where the MMU protection
73
- * covers a smaller range than a target page, and we must
74
- * repeat the MMU check here. This tlb_fill() call might
75
- * longjump out if this access should cause a guest exception.
76
- */
77
- int index;
78
- target_ulong tlb_addr;
79
-
80
- tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
81
-
82
- index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
83
- tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
84
- if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
85
- /* RAM access. We can't handle this, so for now just stop */
86
- cpu_abort(cpu, "Unable to handle guest executing from RAM within "
87
- "a small MPU region at 0x" TARGET_FMT_lx, addr);
88
- }
89
- /*
90
- * Fall through to handle IO accesses (which will almost certainly
91
- * also result in failure)
92
+ * covers a smaller range than a target page. Return -1 to
93
+ * indicate that we cannot simply execute from RAM here;
94
+ * we will perform the necessary repeat of the MMU check
95
+ * when the "execute a single insn" code performs the
96
+ * load of the guest insn.
97
*/
98
+ return -1;
99
}
62
}
100
63
101
iotlbentry = &env->iotlb[mmu_idx][index];
64
+/* provide unsigned 2-op helpers for all sizes */
102
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
65
+#define DO_2OP_U(OP, FN) \
103
mr = section->mr;
66
+ DO_2OP(OP##b, 1, uint8_t, FN) \
104
if (memory_region_is_unassigned(mr)) {
67
+ DO_2OP(OP##h, 2, uint16_t, FN) \
105
- qemu_mutex_lock_iothread();
68
+ DO_2OP(OP##w, 4, uint32_t, FN)
106
- if (memory_region_request_mmio_ptr(mr, addr)) {
69
+
107
- qemu_mutex_unlock_iothread();
70
#define DO_AND(N, M) ((N) & (M))
108
- /* A MemoryRegion is potentially added so re-run the
71
#define DO_BIC(N, M) ((N) & ~(M))
109
- * get_page_addr_code.
72
#define DO_ORR(N, M) ((N) | (M))
110
- */
73
@@ -XXX,XX +XXX,XX @@ DO_2OP(vbic, 8, uint64_t, DO_BIC)
111
- return get_page_addr_code(env, addr);
74
DO_2OP(vorr, 8, uint64_t, DO_ORR)
112
- }
75
DO_2OP(vorn, 8, uint64_t, DO_ORN)
113
- qemu_mutex_unlock_iothread();
76
DO_2OP(veor, 8, uint64_t, DO_EOR)
114
-
77
+
115
- /* Give the new-style cpu_transaction_failed() hook first chance
78
+#define DO_ADD(N, M) ((N) + (M))
116
- * to handle this.
79
+#define DO_SUB(N, M) ((N) - (M))
117
- * This is not the ideal place to detect and generate CPU
80
+#define DO_MUL(N, M) ((N) * (M))
118
- * exceptions for instruction fetch failure (for instance
81
+
119
- * we don't know the length of the access that the CPU would
82
+DO_2OP_U(vadd, DO_ADD)
120
- * use, and it would be better to go ahead and try the access
83
+DO_2OP_U(vsub, DO_SUB)
121
- * and use the MemTXResult it produced). However it is the
84
+DO_2OP_U(vmul, DO_MUL)
122
- * simplest place we have currently available for the check.
85
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
123
+ /*
86
index XXXXXXX..XXXXXXX 100644
124
+ * Not guest RAM, so there is no ram_addr_t for it. Return -1,
87
--- a/target/arm/translate-mve.c
125
+ * and we will execute a single insn from this device.
88
+++ b/target/arm/translate-mve.c
126
*/
89
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VBIC, gen_helper_mve_vbic)
127
- mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
90
DO_LOGIC(VORR, gen_helper_mve_vorr)
128
- physaddr = mr_offset +
91
DO_LOGIC(VORN, gen_helper_mve_vorn)
129
- section->offset_within_address_space -
92
DO_LOGIC(VEOR, gen_helper_mve_veor)
130
- section->offset_within_region;
93
+
131
- cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
94
+#define DO_2OP(INSN, FN) \
132
- iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
95
+ static bool trans_##INSN(DisasContext *s, arg_2op *a) \
133
-
96
+ { \
134
- cpu_unassigned_access(cpu, addr, false, true, 0, 4);
97
+ static MVEGenTwoOpFn * const fns[] = { \
135
- /* The CPU's unassigned access hook might have longjumped out
98
+ gen_helper_mve_##FN##b, \
136
- * with an exception. If it didn't (or there was no hook) then
99
+ gen_helper_mve_##FN##h, \
137
- * we can't proceed further.
100
+ gen_helper_mve_##FN##w, \
138
- */
101
+ NULL, \
139
- report_bad_exec(cpu, addr);
102
+ }; \
140
- exit(1);
103
+ return do_2op(s, a, fns[a->size]); \
141
+ return -1;
104
+ }
142
}
105
+
143
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
106
+DO_2OP(VADD, vadd)
144
return qemu_ram_addr_from_host_nofail(p);
107
+DO_2OP(VSUB, vsub)
108
+DO_2OP(VMUL, vmul)
145
--
109
--
146
2.18.0
110
2.20.1
147
111
148
112
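As a concrete reading of the VADD/VSUB/VMUL patch above: each insn is an
independent, wrapping operation per lane. A small plain-C model for the
16-bit case (illustration only; predication is ignored):

    #include <stdint.h>
    #include <assert.h>

    /* Model of VADD.I16 across a 128-bit vector: eight 16-bit lanes,
     * each addition wrapping modulo 2^16. */
    static void vadd_i16_model(uint16_t d[8], const uint16_t n[8],
                               const uint16_t m[8])
    {
        for (int e = 0; e < 8; e++) {
            d[e] = (uint16_t)(n[e] + m[e]);
        }
    }

    int main(void)
    {
        uint16_t n[8] = { 0xfffe, 1, 2, 3, 4, 5, 6, 7 };
        uint16_t m[8] = { 3, 1, 1, 1, 1, 1, 1, 1 };
        uint16_t d[8];
        vadd_i16_model(d, n, m);
        assert(d[0] == 1);      /* 0xfffe + 3 wraps to 0x0001 */
        return 0;
    }
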
Implement the MVE VMULH insn, which performs a vector
multiply and returns the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-14-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 38 insertions(+)

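Before the patch body, a small plain-C model of what "multiply and return
the high half" means for one 16-bit lane (illustration only; the helpers in
the patch do the same thing via a wide enough computation type that one
expression covers both signednesses):

    #include <stdint.h>
    #include <assert.h>

    /* High half of a 16x16->32 multiply, unsigned and signed variants. */
    static uint16_t mulh_u16(uint16_t n, uint16_t m)
    {
        return (uint16_t)(((uint32_t)n * m) >> 16);
    }

    static int16_t mulh_s16(int16_t n, int16_t m)
    {
        return (int16_t)(((int32_t)n * m) >> 16);
    }

    int main(void)
    {
        assert(mulh_u16(0x8000, 0x8000) == 0x4000);  /* 0x40000000 >> 16 */
        assert(mulh_s16(-32768, -32768) == 0x4000);  /* (-2^15)^2 = 2^30 */
        return 0;
    }
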
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
+
23
+DEF_HELPER_FLAGS_4(mve_vmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+DEF_HELPER_FLAGS_4(mve_vmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@ VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
34
VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
35
VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
36
37
+VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
38
+VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
39
+
40
# Vector miscellaneous
41
42
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
43
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/mve_helper.c
46
+++ b/target/arm/mve_helper.c
47
@@ -XXX,XX +XXX,XX @@ DO_2OP(veor, 8, uint64_t, DO_EOR)
48
DO_2OP_U(vadd, DO_ADD)
49
DO_2OP_U(vsub, DO_SUB)
50
DO_2OP_U(vmul, DO_MUL)
51
+
52
+/*
53
+ * Because the computation type is at least twice as large as required,
54
+ * these work for both signed and unsigned source types.
55
+ */
56
+static inline uint8_t do_mulh_b(int32_t n, int32_t m)
57
+{
58
+ return (n * m) >> 8;
59
+}
60
+
61
+static inline uint16_t do_mulh_h(int32_t n, int32_t m)
62
+{
63
+ return (n * m) >> 16;
64
+}
65
+
66
+static inline uint32_t do_mulh_w(int64_t n, int64_t m)
67
+{
68
+ return (n * m) >> 32;
69
+}
70
+
71
+DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
72
+DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
73
+DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
74
+DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
75
+DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
76
+DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
77
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/translate-mve.c
80
+++ b/target/arm/translate-mve.c
81
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VEOR, gen_helper_mve_veor)
82
DO_2OP(VADD, vadd)
83
DO_2OP(VSUB, vsub)
84
DO_2OP(VMUL, vmul)
85
+DO_2OP(VMULH_S, vmulhs)
86
+DO_2OP(VMULH_U, vmulhu)
87
--
88
2.20.1
89
90
diff view generated by jsdifflib
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VRMULH insn, which performs a rounding multiply
2
and then returns the high half.
2
3
3
Implement the maintenance interrupt generation that is part of the GICv2
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
virtualization extensions.
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210617121628.20116-15-peter.maydell@linaro.org
7
---
8
target/arm/helper-mve.h | 7 +++++++
9
target/arm/mve.decode | 3 +++
10
target/arm/mve_helper.c | 22 ++++++++++++++++++++++
11
target/arm/translate-mve.c | 2 ++
12
4 files changed, 34 insertions(+)
5
13
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20180727095421.386-18-luc.michel@greensocs.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/intc/arm_gic.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
12
1 file changed, 97 insertions(+)
13
14
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/arm_gic.c
16
--- a/target/arm/helper-mve.h
17
+++ b/hw/intc/arm_gic.c
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
19
DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
+
23
+DEF_HELPER_FLAGS_4(mve_vrmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+DEF_HELPER_FLAGS_4(mve_vrmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
34
VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
35
VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
36
37
+VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
38
+VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
39
+
40
# Vector miscellaneous
41
42
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
43
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/mve_helper.c
46
+++ b/target/arm/mve_helper.c
47
@@ -XXX,XX +XXX,XX @@ static inline uint32_t do_mulh_w(int64_t n, int64_t m)
48
return (n * m) >> 32;
20
}
49
}
21
50
22
+static inline void gic_extract_lr_info(GICState *s, int cpu,
51
+static inline uint8_t do_rmulh_b(int32_t n, int32_t m)
23
+ int *num_eoi, int *num_valid, int *num_pending)
24
+{
52
+{
25
+ int lr_idx;
53
+ return (n * m + (1U << 7)) >> 8;
26
+
27
+ *num_eoi = 0;
28
+ *num_valid = 0;
29
+ *num_pending = 0;
30
+
31
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
32
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
33
+
34
+ if (gic_lr_entry_is_eoi(*entry)) {
35
+ (*num_eoi)++;
36
+ }
37
+
38
+ if (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID) {
39
+ (*num_valid)++;
40
+ }
41
+
42
+ if (GICH_LR_STATE(*entry) == GICH_LR_STATE_PENDING) {
43
+ (*num_pending)++;
44
+ }
45
+ }
46
+}
54
+}
47
+
55
+
48
+static void gic_compute_misr(GICState *s, int cpu)
56
+static inline uint16_t do_rmulh_h(int32_t n, int32_t m)
49
+{
57
+{
50
+ uint32_t value = 0;
58
+ return (n * m + (1U << 15)) >> 16;
51
+ int vcpu = cpu + GIC_NCPU;
52
+
53
+ int num_eoi, num_valid, num_pending;
54
+
55
+ gic_extract_lr_info(s, cpu, &num_eoi, &num_valid, &num_pending);
56
+
57
+ /* EOI */
58
+ if (num_eoi) {
59
+ value |= R_GICH_MISR_EOI_MASK;
60
+ }
61
+
62
+ /* U: true if only 0 or 1 LR entry is valid */
63
+ if ((s->h_hcr[cpu] & R_GICH_HCR_UIE_MASK) && (num_valid < 2)) {
64
+ value |= R_GICH_MISR_U_MASK;
65
+ }
66
+
67
+ /* LRENP: EOICount is not 0 */
68
+ if ((s->h_hcr[cpu] & R_GICH_HCR_LRENPIE_MASK) &&
69
+ ((s->h_hcr[cpu] & R_GICH_HCR_EOICount_MASK) != 0)) {
70
+ value |= R_GICH_MISR_LRENP_MASK;
71
+ }
72
+
73
+ /* NP: no pending interrupts */
74
+ if ((s->h_hcr[cpu] & R_GICH_HCR_NPIE_MASK) && (num_pending == 0)) {
75
+ value |= R_GICH_MISR_NP_MASK;
76
+ }
77
+
78
+ /* VGrp0E: group0 virq signaling enabled */
79
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0EIE_MASK) &&
80
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
81
+ value |= R_GICH_MISR_VGrp0E_MASK;
82
+ }
83
+
84
+ /* VGrp0D: group0 virq signaling disabled */
85
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0DIE_MASK) &&
86
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
87
+ value |= R_GICH_MISR_VGrp0D_MASK;
88
+ }
89
+
90
+ /* VGrp1E: group1 virq signaling enabled */
91
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1EIE_MASK) &&
92
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
93
+ value |= R_GICH_MISR_VGrp1E_MASK;
94
+ }
95
+
96
+ /* VGrp1D: group1 virq signaling disabled */
97
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1DIE_MASK) &&
98
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
99
+ value |= R_GICH_MISR_VGrp1D_MASK;
100
+ }
101
+
102
+ s->h_misr[cpu] = value;
103
+}
59
+}
104
+
60
+
105
+static void gic_update_maintenance(GICState *s)
61
+static inline uint32_t do_rmulh_w(int64_t n, int64_t m)
106
+{
62
+{
107
+ int cpu = 0;
63
+ return (n * m + (1U << 31)) >> 32;
108
+ int maint_level;
109
+
110
+ for (cpu = 0; cpu < s->num_cpu; cpu++) {
111
+ gic_compute_misr(s, cpu);
112
+ maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
113
+
114
+ qemu_set_irq(s->maintenance_irq[cpu], maint_level);
115
+ }
116
+}
64
+}
117
+
65
+
118
static void gic_update_virt(GICState *s)
66
DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
119
{
67
DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
120
gic_update_internal(s, true);
68
DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
121
+ gic_update_maintenance(s);
69
DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
122
}
70
DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
123
71
DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
124
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
72
+
73
+DO_2OP(vrmulhsb, 1, int8_t, do_rmulh_b)
74
+DO_2OP(vrmulhsh, 2, int16_t, do_rmulh_h)
75
+DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
76
+DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
77
+DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
78
+DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
79
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-mve.c
82
+++ b/target/arm/translate-mve.c
83
@@ -XXX,XX +XXX,XX @@ DO_2OP(VSUB, vsub)
84
DO_2OP(VMUL, vmul)
85
DO_2OP(VMULH_S, vmulhs)
86
DO_2OP(VMULH_U, vmulhu)
87
+DO_2OP(VRMULH_S, vrmulhs)
88
+DO_2OP(VRMULH_U, vrmulhu)
125
--
89
--
126
2.18.0
90
2.20.1
127
91
128
92
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE VMAX and VMIN insns.
2
2
3
Used the wrong temporary in the computation of subtractive overflow.
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210617121628.20116-16-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 14 ++++++++++++++
8
target/arm/mve.decode | 5 +++++
9
target/arm/mve_helper.c | 14 ++++++++++++++
10
target/arm/translate-mve.c | 4 ++++
11
4 files changed, 37 insertions(+)
4
12
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
8
Tested-by: Alex Bennée <alex.bennee@linaro.org>
9
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/translate-sve.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
16
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-sve.c
15
--- a/target/arm/helper-mve.h
19
+++ b/target/arm/translate-sve.c
16
+++ b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
/* Detect signed overflow for subtraction. */
18
DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
tcg_gen_xor_i64(t0, reg, val);
19
DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
tcg_gen_sub_i64(t1, reg, val);
20
DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
- tcg_gen_xor_i64(reg, reg, t0);
21
+
25
+ tcg_gen_xor_i64(reg, reg, t1);
22
+DEF_HELPER_FLAGS_4(mve_vmaxsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
tcg_gen_and_i64(t0, t0, reg);
23
+DEF_HELPER_FLAGS_4(mve_vmaxsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
24
+DEF_HELPER_FLAGS_4(mve_vmaxsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
/* Bound the result. */
25
+DEF_HELPER_FLAGS_4(mve_vmaxub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+DEF_HELPER_FLAGS_4(mve_vmaxuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+DEF_HELPER_FLAGS_4(mve_vmaxuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+
29
+DEF_HELPER_FLAGS_4(mve_vminsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
+DEF_HELPER_FLAGS_4(mve_vminsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
+DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
+DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
+DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
34
+DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/mve.decode
38
+++ b/target/arm/mve.decode
39
@@ -XXX,XX +XXX,XX @@ VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
40
VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
41
VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
42
43
+VMAX_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
44
+VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
45
+VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
46
+VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
47
+
48
# Vector miscellaneous
49
50
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
51
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/mve_helper.c
54
+++ b/target/arm/mve_helper.c
55
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
56
DO_2OP(OP##h, 2, uint16_t, FN) \
57
DO_2OP(OP##w, 4, uint32_t, FN)
58
59
+/* provide signed 2-op helpers for all sizes */
60
+#define DO_2OP_S(OP, FN) \
61
+ DO_2OP(OP##b, 1, int8_t, FN) \
62
+ DO_2OP(OP##h, 2, int16_t, FN) \
63
+ DO_2OP(OP##w, 4, int32_t, FN)
64
+
65
#define DO_AND(N, M) ((N) & (M))
66
#define DO_BIC(N, M) ((N) & ~(M))
67
#define DO_ORR(N, M) ((N) | (M))
68
@@ -XXX,XX +XXX,XX @@ DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
69
DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
70
DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
71
DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
72
+
73
+#define DO_MAX(N, M) ((N) >= (M) ? (N) : (M))
74
+#define DO_MIN(N, M) ((N) >= (M) ? (M) : (N))
75
+
76
+DO_2OP_S(vmaxs, DO_MAX)
77
+DO_2OP_U(vmaxu, DO_MAX)
78
+DO_2OP_S(vmins, DO_MIN)
79
+DO_2OP_U(vminu, DO_MIN)
80
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/translate-mve.c
83
+++ b/target/arm/translate-mve.c
84
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULH_S, vmulhs)
85
DO_2OP(VMULH_U, vmulhu)
86
DO_2OP(VRMULH_S, vrmulhs)
87
DO_2OP(VRMULH_U, vrmulhu)
88
+DO_2OP(VMAX_S, vmaxs)
89
+DO_2OP(VMAX_U, vmaxu)
90
+DO_2OP(VMIN_S, vmins)
91
+DO_2OP(VMIN_U, vminu)
29
--
92
--
30
2.18.0
93
2.20.1
31
94
32
95
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

The normal vector element is sign-extended before
comparing with the wide vector element.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 #define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
     DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
 
-DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
-DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
-DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, int8_t, uint64_t, ==)
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, int16_t, uint64_t, ==)
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, int32_t, uint64_t, ==)
 
-DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
-DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
-DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, int8_t, uint64_t, !=)
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, int16_t, uint64_t, !=)
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, int32_t, uint64_t, !=)
 
 DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
 DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
--
2.18.0

Implement the MVE VABD insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-17-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 7 +++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 5 +++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 17 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vabdsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
 VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 
+VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vmaxs, DO_MAX)
 DO_2OP_U(vmaxu, DO_MAX)
 DO_2OP_S(vmins, DO_MIN)
 DO_2OP_U(vminu, DO_MIN)
+
+#define DO_ABD(N, M)  ((N) >= (M) ? (N) - (M) : (M) - (N))
+
+DO_2OP_S(vabds, DO_ABD)
+DO_2OP_U(vabdu, DO_ABD)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMAX_S, vmaxs)
 DO_2OP(VMAX_U, vmaxu)
 DO_2OP(VMIN_S, vmins)
 DO_2OP(VMIN_U, vminu)
+DO_2OP(VABD_S, vabds)
+DO_2OP(VABD_U, vabdu)
--
2.20.1
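As a plain-C illustration of what the VABD helpers above compute per element: the absolute difference is taken after the operands are promoted to a wider type, so it cannot overflow even for extreme values. This mirrors the idea of the DO_ABD macro but is a standalone sketch, not the QEMU code.

#include <stdint.h>
#include <stdio.h>

/* Absolute difference of two signed bytes.  The operands are promoted
 * to int before subtracting, so |n - m| cannot overflow even for the
 * extreme int8_t values. */
static uint8_t abd_s8(int8_t n, int8_t m)
{
    return (uint8_t)(n >= m ? n - m : m - n);
}

int main(void)
{
    printf("%u\n", (unsigned)abd_s8(-128, 127));  /* prints 255 */
    printf("%u\n", (unsigned)abd_s8(10, 3));      /* prints 7 */
    return 0;
}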
Implement MVE VHADD and VHSUB insns, which perform an addition
or subtraction and then halve the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-18-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 14 ++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 25 +++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 48 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h

From: Luc Michel <luc.michel@greensocs.com>

Add some helper functions to gic_internal.h to get or change the state
of an IRQ. When the current CPU is not a vCPU, the call is forwarded to
the GIC distributor. Otherwise, it acts on the list register matching
the IRQ in the current CPU virtual interface.

gic_clear_active can have a side effect on the distributor, even in the
vCPU case, when the corresponding LR has the HW field set.

Use those functions in the CPU interface code path to prepare for the
vCPU interface implementation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727095421.386-10-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 83 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      | 32 +++++++---------
 2 files changed, 97 insertions(+), 18 deletions(-)
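The new gic_* accessors described in the message above all follow one dispatch shape: if the CPU index refers to a virtual interface, act on the matching list register, otherwise fall back to the distributor. The sketch below shows that shape in isolation with toy types and invented names (ToyGICState, toy_is_vcpu, toy_test_group); the real helpers are in the diff that follows.

#include <stdbool.h>
#include <stdint.h>

/* Toy state: one cached list-register group bit per virtual interface,
 * plus the group bit the distributor would report. */
typedef struct {
    uint32_t lr_group;
    bool dist_group;
} ToyGICState;

/* Assumption for this sketch only: virtual CPU indices start at 8. */
static bool toy_is_vcpu(int cpu)
{
    return cpu >= 8;
}

/* Same dispatch shape as the gic_test_group() helper in the diff below:
 * a vCPU reads its list register, a physical CPU asks the distributor. */
static bool toy_test_group(ToyGICState *s, int cpu)
{
    if (toy_is_vcpu(cpu)) {
        return s->lr_group & 1;
    } else {
        return s->dist_group;
    }
}

int main(void)
{
    ToyGICState s = { .lr_group = 1, .dist_group = false };
    return toy_test_group(&s, 9) && !toy_test_group(&s, 0) ? 0 : 1;
}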
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/gic_internal.h
16
--- a/target/arm/helper-mve.h
27
+++ b/hw/intc/gic_internal.h
17
+++ b/target/arm/helper-mve.h
28
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
19
DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
20
DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
21
DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
+#define GICH_LR_CLEAR_PENDING(entry) \
33
+ ((entry) &= ~(GICH_LR_STATE_PENDING << R_GICH_LR0_State_SHIFT))
34
+#define GICH_LR_SET_ACTIVE(entry) \
35
+ ((entry) |= (GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
36
+#define GICH_LR_CLEAR_ACTIVE(entry) \
37
+ ((entry) &= ~(GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
38
+
22
+
39
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
23
+DEF_HELPER_FLAGS_4(mve_vhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
40
* GICv2 and GICv2 with security extensions:
24
+DEF_HELPER_FLAGS_4(mve_vhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
41
*/
25
+DEF_HELPER_FLAGS_4(mve_vhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
42
@@ -XXX,XX +XXX,XX @@ static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
26
+DEF_HELPER_FLAGS_4(mve_vhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
43
g_assert_not_reached();
27
+DEF_HELPER_FLAGS_4(mve_vhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
44
}
28
+DEF_HELPER_FLAGS_4(mve_vhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
29
+
46
+static inline bool gic_test_group(GICState *s, int irq, int cpu)
30
+DEF_HELPER_FLAGS_4(mve_vhsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
+DEF_HELPER_FLAGS_4(mve_vhsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
+DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
+DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
34
+DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
+DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/mve.decode
39
+++ b/target/arm/mve.decode
40
@@ -XXX,XX +XXX,XX @@ VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
41
VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
42
VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
43
44
+VHADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
45
+VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
46
+VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
47
+VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
48
+
49
# Vector miscellaneous
50
51
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/mve_helper.c
55
+++ b/target/arm/mve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vminu, DO_MIN)
57
58
DO_2OP_S(vabds, DO_ABD)
59
DO_2OP_U(vabdu, DO_ABD)
60
+
61
+static inline uint32_t do_vhadd_u(uint32_t n, uint32_t m)
47
+{
62
+{
48
+ if (gic_is_vcpu(cpu)) {
63
+ return ((uint64_t)n + m) >> 1;
49
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
50
+ return GICH_LR_GROUP(*entry);
51
+ } else {
52
+ return GIC_DIST_TEST_GROUP(irq, 1 << cpu);
53
+ }
54
+}
64
+}
55
+
65
+
56
+static inline void gic_clear_pending(GICState *s, int irq, int cpu)
66
+static inline int32_t do_vhadd_s(int32_t n, int32_t m)
57
+{
67
+{
58
+ if (gic_is_vcpu(cpu)) {
68
+ return ((int64_t)n + m) >> 1;
59
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
60
+ GICH_LR_CLEAR_PENDING(*entry);
61
+ } else {
62
+ /* Clear pending state for both level and edge triggered
63
+ * interrupts. (level triggered interrupts with an active line
64
+ * remain pending, see gic_test_pending)
65
+ */
66
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
67
+ : (1 << cpu));
68
+ }
69
+}
69
+}
70
+
70
+
71
+static inline void gic_set_active(GICState *s, int irq, int cpu)
71
+static inline uint32_t do_vhsub_u(uint32_t n, uint32_t m)
72
+{
72
+{
73
+ if (gic_is_vcpu(cpu)) {
73
+ return ((uint64_t)n - m) >> 1;
74
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
75
+ GICH_LR_SET_ACTIVE(*entry);
76
+ } else {
77
+ GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
78
+ }
79
+}
74
+}
80
+
75
+
81
+static inline void gic_clear_active(GICState *s, int irq, int cpu)
76
+static inline int32_t do_vhsub_s(int32_t n, int32_t m)
82
+{
77
+{
83
+ if (gic_is_vcpu(cpu)) {
78
+ return ((int64_t)n - m) >> 1;
84
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
85
+ GICH_LR_CLEAR_ACTIVE(*entry);
86
+
87
+ if (GICH_LR_HW(*entry)) {
88
+ /* Hardware interrupt. We must forward the deactivation request to
89
+ * the distributor.
90
+ */
91
+ int phys_irq = GICH_LR_PHYS_ID(*entry);
92
+ int rcpu = gic_get_vcpu_real_id(cpu);
93
+
94
+ if (phys_irq < GIC_NR_SGIS || phys_irq >= GIC_MAXIRQ) {
95
+ /* UNPREDICTABLE behaviour, we choose to ignore the request */
96
+ return;
97
+ }
98
+
99
+ /* This is equivalent to a NS write to DIR on the physical CPU
100
+ * interface. Hence group0 interrupt deactivation is ignored if
101
+ * the GIC is secure.
102
+ */
103
+ if (!s->security_extn || GIC_DIST_TEST_GROUP(phys_irq, 1 << rcpu)) {
104
+ GIC_DIST_CLEAR_ACTIVE(phys_irq, 1 << rcpu);
105
+ }
106
+ }
107
+ } else {
108
+ GIC_DIST_CLEAR_ACTIVE(irq, 1 << cpu);
109
+ }
110
+}
79
+}
111
+
80
+
112
+static inline int gic_get_priority(GICState *s, int irq, int cpu)
81
+DO_2OP_S(vhadds, do_vhadd_s)
113
+{
82
+DO_2OP_U(vhaddu, do_vhadd_u)
114
+ if (gic_is_vcpu(cpu)) {
83
+DO_2OP_S(vhsubs, do_vhsub_s)
115
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
84
+DO_2OP_U(vhsubu, do_vhsub_u)
116
+ return GICH_LR_PRIORITY(*entry);
85
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
117
+ } else {
118
+ return GIC_DIST_GET_PRIORITY(irq, cpu);
119
+ }
120
+}
121
+
122
#endif /* QEMU_ARM_GIC_INTERNAL_H */
123
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
124
index XXXXXXX..XXXXXXX 100644
86
index XXXXXXX..XXXXXXX 100644
125
--- a/hw/intc/arm_gic.c
87
--- a/target/arm/translate-mve.c
126
+++ b/hw/intc/arm_gic.c
88
+++ b/target/arm/translate-mve.c
127
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
89
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMIN_S, vmins)
128
uint16_t pending_irq = s->current_pending[cpu];
90
DO_2OP(VMIN_U, vminu)
129
91
DO_2OP(VABD_S, vabds)
130
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
92
DO_2OP(VABD_U, vabdu)
131
- int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
93
+DO_2OP(VHADD_S, vhadds)
132
+ int group = gic_test_group(s, pending_irq, cpu);
94
+DO_2OP(VHADD_U, vhaddu)
133
+
95
+DO_2OP(VHSUB_S, vhsubs)
134
/* On a GIC without the security extensions, reading this register
96
+DO_2OP(VHSUB_U, vhsubu)
135
* behaves in the same way as a secure access to a GIC with them.
136
*/
137
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
138
139
if (gic_has_groups(s) &&
140
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
141
- GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
142
+ gic_test_group(s, irq, cpu)) {
143
bpr = s->abpr[cpu] - 1;
144
assert(bpr >= 0);
145
} else {
146
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
147
*/
148
mask = ~0U << ((bpr & 7) + 1);
149
150
- return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
151
+ return gic_get_priority(s, irq, cpu) & mask;
152
}
153
154
static void gic_activate_irq(GICState *s, int cpu, int irq)
155
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
156
int regno = preemption_level / 32;
157
int bitno = preemption_level % 32;
158
159
- if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
160
+ if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
161
s->nsapr[regno][cpu] |= (1 << bitno);
162
} else {
163
s->apr[regno][cpu] |= (1 << bitno);
164
}
165
166
s->running_priority[cpu] = prio;
167
- GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
168
+ gic_set_active(s, irq, cpu);
169
}
170
171
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
172
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
173
return irq;
174
}
175
176
- if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
177
+ if (gic_get_priority(s, irq, cpu) >= s->running_priority[cpu]) {
178
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
179
return 1023;
180
}
181
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
182
/* Clear pending flags for both level and edge triggered interrupts.
183
* Level triggered IRQs will be reasserted once they become inactive.
184
*/
185
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
186
- : cm);
187
+ gic_clear_pending(s, irq, cpu);
188
ret = irq;
189
} else {
190
if (irq < GIC_NR_SGIS) {
191
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
192
src = ctz32(s->sgi_pending[irq][cpu]);
193
s->sgi_pending[irq][cpu] &= ~(1 << src);
194
if (s->sgi_pending[irq][cpu] == 0) {
195
- GIC_DIST_CLEAR_PENDING(irq,
196
- GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
197
- : cm);
198
+ gic_clear_pending(s, irq, cpu);
199
}
200
ret = irq | ((src & 0x7) << 10);
201
} else {
202
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
203
* interrupts. (level triggered interrupts with an active line
204
* remain pending, see gic_test_pending)
205
*/
206
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
207
- : cm);
208
+ gic_clear_pending(s, irq, cpu);
209
ret = irq;
210
}
211
}
212
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
213
214
static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
215
{
216
- int cm = 1 << cpu;
217
int group;
218
219
if (irq >= s->num_irq) {
220
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
221
return;
222
}
223
224
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
225
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
226
227
if (!gic_eoi_split(s, cpu, attrs)) {
228
/* This is UNPREDICTABLE; we choose to ignore it */
229
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
230
return;
231
}
232
233
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
234
+ gic_clear_active(s, irq, cpu);
235
}
236
237
static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
238
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
239
}
240
}
241
242
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
243
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
244
245
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
246
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
247
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
248
249
/* In GICv2 the guest can choose to split priority-drop and deactivate */
250
if (!gic_eoi_split(s, cpu, attrs)) {
251
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
252
+ gic_clear_active(s, irq, cpu);
253
}
254
gic_update(s);
255
}
256
--
2.18.0
--
2.20.1
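To make the "add and then halve" operation of the VHADD/VHSUB patch above concrete: the addition is done in a wider type so the carry out of the element width is not lost before the shift, which is the same idea as the patch's do_vhadd_u helper. The code below is a standalone illustration, not the QEMU implementation.

#include <stdint.h>
#include <stdio.h>

/* Halving add: do the addition in a 64-bit intermediate so the carry
 * out of 32 bits is not lost, then shift right by one. */
static uint32_t vhadd_u32(uint32_t n, uint32_t m)
{
    return (uint32_t)(((uint64_t)n + m) >> 1);
}

int main(void)
{
    /* 0xffffffff + 0xfffffffe overflows 32 bits; the wide intermediate
     * keeps the carry, so the halved result is still correct. */
    printf("0x%08x\n", vhadd_u32(0xffffffffu, 0xfffffffeu));  /* 0xfffffffe */
    return 0;
}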
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Implement the MVE VMULL insn, which multiplies two single
width integer elements to produce a double width result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-19-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 14 ++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 57 insertions(+)
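For the VMULL patch above, each double-width output lane is the product of the two narrow elements taken from either the bottom (even) or top (odd) positions of that lane, which is exactly the le * 2 + TOP indexing used by the DO_2OP_L macro further down. The sketch below only illustrates that indexing and is independent of the TCG implementation.

#include <stdint.h>
#include <stdio.h>

/* Multiply the bottom (top = 0) or top (top = 1) 16-bit element of each
 * 32-bit output lane, producing one 32-bit product per lane. */
static void vmull_s16(int32_t *d, const int16_t *n, const int16_t *m,
                      int out_lanes, int top)
{
    for (int le = 0; le < out_lanes; le++) {
        d[le] = (int32_t)n[le * 2 + top] * m[le * 2 + top];
    }
}

int main(void)
{
    int16_t n[4] = { -3, 100, 7, -20 };
    int16_t m[4] = {  9,   2, 5,   4 };
    int32_t bot[2], top[2];

    vmull_s16(bot, n, m, 2, 0);   /* lanes 0 and 2: -27, 35 */
    vmull_s16(top, n, m, 2, 1);   /* lanes 1 and 3: 200, -80 */
    printf("%d %d %d %d\n", bot[0], bot[1], top[0], top[1]);
    return 0;
}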
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
--- a/target/arm/helper-mve.h
16
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
18
if (mmu_idx == ARMMMUIdx_S2NS) {
19
DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
return (env->cp15.hcr_el2 & HCR_VM) == 0;
20
DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
}
21
DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
+
22
+
22
+ if (env->cp15.hcr_el2 & HCR_TGE) {
23
+DEF_HELPER_FLAGS_4(mve_vmullbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
+ /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
24
+DEF_HELPER_FLAGS_4(mve_vmullbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+ if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
25
+DEF_HELPER_FLAGS_4(mve_vmullbsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+ return true;
26
+DEF_HELPER_FLAGS_4(mve_vmullbub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+ }
27
+DEF_HELPER_FLAGS_4(mve_vmullbuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+DEF_HELPER_FLAGS_4(mve_vmullbuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+
30
+DEF_HELPER_FLAGS_4(mve_vmulltsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
+DEF_HELPER_FLAGS_4(mve_vmulltsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
+DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
+DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
34
+DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
+DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/mve.decode
39
+++ b/target/arm/mve.decode
40
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
41
VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
42
VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
43
44
+VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
45
+VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
46
+VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
47
+VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
48
+
49
# Vector miscellaneous
50
51
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/mve_helper.c
55
+++ b/target/arm/mve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
57
DO_2OP(OP##h, 2, int16_t, FN) \
58
DO_2OP(OP##w, 4, int32_t, FN)
59
60
+/*
61
+ * "Long" operations where two half-sized inputs (taken from either the
62
+ * top or the bottom of the input vector) produce a double-width result.
63
+ * Here ESIZE, TYPE are for the input, and LESIZE, LTYPE for the output.
64
+ */
65
+#define DO_2OP_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
66
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
67
+ { \
68
+ LTYPE *d = vd; \
69
+ TYPE *n = vn, *m = vm; \
70
+ uint16_t mask = mve_element_mask(env); \
71
+ unsigned le; \
72
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
73
+ LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], \
74
+ m[H##ESIZE(le * 2 + TOP)]); \
75
+ mergemask(&d[H##LESIZE(le)], r, mask); \
76
+ } \
77
+ mve_advance_vpt(env); \
27
+ }
78
+ }
28
+
79
+
29
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
80
#define DO_AND(N, M) ((N) & (M))
30
}
81
#define DO_BIC(N, M) ((N) & ~(M))
31
82
#define DO_ORR(N, M) ((N) | (M))
83
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vadd, DO_ADD)
84
DO_2OP_U(vsub, DO_SUB)
85
DO_2OP_U(vmul, DO_MUL)
86
87
+DO_2OP_L(vmullbsb, 0, 1, int8_t, 2, int16_t, DO_MUL)
88
+DO_2OP_L(vmullbsh, 0, 2, int16_t, 4, int32_t, DO_MUL)
89
+DO_2OP_L(vmullbsw, 0, 4, int32_t, 8, int64_t, DO_MUL)
90
+DO_2OP_L(vmullbub, 0, 1, uint8_t, 2, uint16_t, DO_MUL)
91
+DO_2OP_L(vmullbuh, 0, 2, uint16_t, 4, uint32_t, DO_MUL)
92
+DO_2OP_L(vmullbuw, 0, 4, uint32_t, 8, uint64_t, DO_MUL)
93
+
94
+DO_2OP_L(vmulltsb, 1, 1, int8_t, 2, int16_t, DO_MUL)
95
+DO_2OP_L(vmulltsh, 1, 2, int16_t, 4, int32_t, DO_MUL)
96
+DO_2OP_L(vmulltsw, 1, 4, int32_t, 8, int64_t, DO_MUL)
97
+DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
98
+DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
99
+DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)
100
+
101
/*
102
* Because the computation type is at least twice as large as required,
103
* these work for both signed and unsigned source types.
104
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/arm/translate-mve.c
107
+++ b/target/arm/translate-mve.c
108
@@ -XXX,XX +XXX,XX @@ DO_2OP(VHADD_S, vhadds)
109
DO_2OP(VHADD_U, vhaddu)
110
DO_2OP(VHSUB_S, vhsubs)
111
DO_2OP(VHSUB_U, vhsubu)
112
+DO_2OP(VMULL_BS, vmullbs)
113
+DO_2OP(VMULL_BU, vmullbu)
114
+DO_2OP(VMULL_TS, vmullts)
115
+DO_2OP(VMULL_TU, vmulltu)
32
--
116
--
33
2.18.0
117
2.20.1
34
118
35
119
1
Now that we have full support for small regions, including execution,
1
Implement the MVE VMLALDAV insn, which multiplies pairs of integer
2
we can remove the workarounds where we marked all small regions as
2
elements, accumulating them into a 64-bit result in a pair of
3
non-executable for the M-profile MPU and SAU.
3
general-purpose registers.
4
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210617121628.20116-20-peter.maydell@linaro.org
8
Tested-by: Cédric Le Goater <clg@kaod.org>
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
11
---
8
---
12
target/arm/helper.c | 23 -----------------------
9
target/arm/helper-mve.h | 8 ++++
13
1 file changed, 23 deletions(-)
10
target/arm/translate.h | 10 ++++
14
11
target/arm/mve.decode | 15 ++++++
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
target/arm/mve_helper.c | 34 ++++++++++++++
16
index XXXXXXX..XXXXXXX 100644
13
target/arm/translate-mve.c | 96 ++++++++++++++++++++++++++++++++++++++
17
--- a/target/arm/helper.c
14
5 files changed, 163 insertions(+)
18
+++ b/target/arm/helper.c
15
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
20
17
index XXXXXXX..XXXXXXX 100644
21
fi->type = ARMFault_Permission;
18
--- a/target/arm/helper-mve.h
22
fi->level = 1;
19
+++ b/target/arm/helper-mve.h
23
- /*
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
- * Core QEMU code can't handle execution from small pages yet, so
21
DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
- * don't try it. This way we'll get an MPU exception, rather than
22
DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
- * eventually causing QEMU to exit in get_page_addr_code().
23
DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
- */
24
+
28
- if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
25
+DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
29
- qemu_log_mask(LOG_UNIMP,
26
+DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
30
- "MPU: No support for execution from regions "
27
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
31
- "smaller than 1K\n");
28
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
32
- *prot &= ~PAGE_EXEC;
29
+
33
- }
30
+DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
34
return !(*prot & (1 << access_type));
31
+DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
32
diff --git a/target/arm/translate.h b/target/arm/translate.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.h
35
+++ b/target/arm/translate.h
36
@@ -XXX,XX +XXX,XX @@ static inline int negate(DisasContext *s, int x)
37
return -x;
35
}
38
}
36
39
37
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
40
+static inline int plus_1(DisasContext *s, int x)
38
41
+{
39
fi->type = ARMFault_Permission;
42
+ return x + 1;
40
fi->level = 1;
43
+}
41
- /*
44
+
42
- * Core QEMU code can't handle execution from small pages yet, so
45
static inline int plus_2(DisasContext *s, int x)
43
- * don't try it. This means any attempted execution will generate
46
{
44
- * an MPU exception, rather than eventually causing QEMU to exit in
47
return x + 2;
45
- * get_page_addr_code().
48
@@ -XXX,XX +XXX,XX @@ static inline int times_4(DisasContext *s, int x)
46
- */
49
return x * 4;
47
- if (*is_subpage && (*prot & PAGE_EXEC)) {
48
- qemu_log_mask(LOG_UNIMP,
49
- "MPU: No support for execution from regions "
50
- "smaller than 1K\n");
51
- *prot &= ~PAGE_EXEC;
52
- }
53
return !(*prot & (1 << access_type));
54
}
50
}
55
51
52
+static inline int times_2_plus_1(DisasContext *s, int x)
53
+{
54
+ return x * 2 + 1;
55
+}
56
+
57
static inline int arm_dc_feature(DisasContext *dc, int feature)
58
{
59
return (dc->features & (1ULL << feature)) != 0;
60
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/mve.decode
63
+++ b/target/arm/mve.decode
64
@@ -XXX,XX +XXX,XX @@ VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
65
VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
66
VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
67
VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
68
+
69
+# multiply-add long dual accumulate
70
+# rdahi: bits [3:1] from insn, bit 0 is 1
71
+# rdalo: bits [3:1] from insn, bit 0 is 0
72
+%rdahi 20:3 !function=times_2_plus_1
73
+%rdalo 13:3 !function=times_2
74
+# size bit is 0 for 16 bit, 1 for 32 bit
75
+%size_16 16:1 !function=plus_1
76
+
77
+&vmlaldav rdahi rdalo size qn qm x a
78
+
79
+@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
80
+ qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
81
+VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
82
+VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
83
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/mve_helper.c
86
+++ b/target/arm/mve_helper.c
87
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhadds, do_vhadd_s)
88
DO_2OP_U(vhaddu, do_vhadd_u)
89
DO_2OP_S(vhsubs, do_vhsub_s)
90
DO_2OP_U(vhsubu, do_vhsub_u)
91
+
92
+
93
+/*
94
+ * Multiply add long dual accumulate ops.
95
+ */
96
+#define DO_LDAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \
97
+ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
98
+ void *vm, uint64_t a) \
99
+ { \
100
+ uint16_t mask = mve_element_mask(env); \
101
+ unsigned e; \
102
+ TYPE *n = vn, *m = vm; \
103
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
104
+ if (mask & 1) { \
105
+ if (e & 1) { \
106
+ a ODDACC \
107
+ (int64_t)n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
108
+ } else { \
109
+ a EVENACC \
110
+ (int64_t)n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
111
+ } \
112
+ } \
113
+ } \
114
+ mve_advance_vpt(env); \
115
+ return a; \
116
+ }
117
+
118
+DO_LDAV(vmlaldavsh, 2, int16_t, false, +=, +=)
119
+DO_LDAV(vmlaldavxsh, 2, int16_t, true, +=, +=)
120
+DO_LDAV(vmlaldavsw, 4, int32_t, false, +=, +=)
121
+DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
122
+
123
+DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
124
+DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
125
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/translate-mve.c
128
+++ b/target/arm/translate-mve.c
129
@@ -XXX,XX +XXX,XX @@
130
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
131
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
132
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
133
+typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
134
135
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
136
static inline long mve_qreg_offset(unsigned reg)
137
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
138
}
139
}
140
141
+static bool mve_skip_first_beat(DisasContext *s)
142
+{
143
+ /* Return true if PSR.ECI says we must skip the first beat of this insn */
144
+ switch (s->eci) {
145
+ case ECI_NONE:
146
+ return false;
147
+ case ECI_A0:
148
+ case ECI_A0A1:
149
+ case ECI_A0A1A2:
150
+ case ECI_A0A1A2B0:
151
+ return true;
152
+ default:
153
+ g_assert_not_reached();
154
+ }
155
+}
156
+
157
static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
158
{
159
TCGv_i32 addr;
160
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
161
DO_2OP(VMULL_BU, vmullbu)
162
DO_2OP(VMULL_TS, vmullts)
163
DO_2OP(VMULL_TU, vmulltu)
164
+
165
+static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
166
+ MVEGenDualAccOpFn *fn)
167
+{
168
+ TCGv_ptr qn, qm;
169
+ TCGv_i64 rda;
170
+ TCGv_i32 rdalo, rdahi;
171
+
172
+ if (!dc_isar_feature(aa32_mve, s) ||
173
+ !mve_check_qreg_bank(s, a->qn | a->qm) ||
174
+ !fn) {
175
+ return false;
176
+ }
177
+ /*
178
+ * rdahi == 13 is UNPREDICTABLE; rdahi == 15 is a related
179
+ * encoding; rdalo always has bit 0 clear so cannot be 13 or 15.
180
+ */
181
+ if (a->rdahi == 13 || a->rdahi == 15) {
182
+ return false;
183
+ }
184
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
185
+ return true;
186
+ }
187
+
188
+ qn = mve_qreg_ptr(a->qn);
189
+ qm = mve_qreg_ptr(a->qm);
190
+
191
+ /*
192
+ * This insn is subject to beat-wise execution. Partial execution
193
+ * of an A=0 (no-accumulate) insn which does not execute the first
194
+ * beat must start with the current rda value, not 0.
195
+ */
196
+ if (a->a || mve_skip_first_beat(s)) {
197
+ rda = tcg_temp_new_i64();
198
+ rdalo = load_reg(s, a->rdalo);
199
+ rdahi = load_reg(s, a->rdahi);
200
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
201
+ tcg_temp_free_i32(rdalo);
202
+ tcg_temp_free_i32(rdahi);
203
+ } else {
204
+ rda = tcg_const_i64(0);
205
+ }
206
+
207
+ fn(rda, cpu_env, qn, qm, rda);
208
+ tcg_temp_free_ptr(qn);
209
+ tcg_temp_free_ptr(qm);
210
+
211
+ rdalo = tcg_temp_new_i32();
212
+ rdahi = tcg_temp_new_i32();
213
+ tcg_gen_extrl_i64_i32(rdalo, rda);
214
+ tcg_gen_extrh_i64_i32(rdahi, rda);
215
+ store_reg(s, a->rdalo, rdalo);
216
+ store_reg(s, a->rdahi, rdahi);
217
+ tcg_temp_free_i64(rda);
218
+ mve_update_eci(s);
219
+ return true;
220
+}
221
+
222
+static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
223
+{
224
+ static MVEGenDualAccOpFn * const fns[4][2] = {
225
+ { NULL, NULL },
226
+ { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
227
+ { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
228
+ { NULL, NULL },
229
+ };
230
+ return do_long_dual_acc(s, a, fns[a->size][a->x]);
231
+}
232
+
233
+static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
234
+{
235
+ static MVEGenDualAccOpFn * const fns[4][2] = {
236
+ { NULL, NULL },
237
+ { gen_helper_mve_vmlaldavuh, NULL },
238
+ { gen_helper_mve_vmlaldavuw, NULL },
239
+ { NULL, NULL },
240
+ };
241
+ return do_long_dual_acc(s, a, fns[a->size][a->x]);
242
+}
56
--
243
--
57
2.18.0
244
2.20.1
58
245
59
246
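As a plain-C picture of what the VMLALDAV patch above accumulates, and of how the 64-bit result maps onto the RdaLo/RdaHi register pair described in its decode comments: the code below is illustrative only, with invented names, and is unrelated to the QEMU helpers or TCG front end.

#include <stdint.h>
#include <stdio.h>

/* Multiply pairs of 16-bit elements and accumulate into a 64-bit value. */
static int64_t mlaldav(const int16_t *n, const int16_t *m, int elems, int64_t acc)
{
    for (int e = 0; e < elems; e++) {
        acc += (int64_t)n[e] * m[e];
    }
    return acc;
}

int main(void)
{
    int16_t n[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    int16_t m[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };
    int64_t acc = mlaldav(n, m, 8, 0);

    /* Split the result the way the instruction returns it: low and high
     * 32-bit halves in two general-purpose registers. */
    uint32_t rdalo = (uint32_t)acc;
    uint32_t rdahi = (uint32_t)(acc >> 32);
    printf("acc=%lld rdalo=0x%08x rdahi=0x%08x\n", (long long)acc, rdalo, rdahi);
    return 0;
}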
1
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
1
Implement the MVE insn VMLSLDAV, which multiplies source elements,
2
and TDA, which we implement in the functions access_tdra(),
2
alternately adding and subtracting them, and accumulates into a
3
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
3
64-bit result in a pair of general purpose registers.
4
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
5
Implement this by having the access functions check MDCR_EL2.TDE
6
and HCR_EL2.TGE.
7
4
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
7
Message-id: 20210617121628.20116-21-peter.maydell@linaro.org
11
---
8
---
12
target/arm/helper.c | 18 ++++++++++++------
9
target/arm/helper-mve.h | 5 +++++
13
1 file changed, 12 insertions(+), 6 deletions(-)
10
target/arm/mve.decode | 2 ++
11
target/arm/mve_helper.c | 5 +++++
12
target/arm/translate-mve.c | 11 +++++++++++
13
4 files changed, 23 insertions(+)
14
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
17
--- a/target/arm/helper-mve.h
18
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper-mve.h
19
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
20
bool isread)
20
21
{
21
DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
22
int el = arm_current_el(env);
22
DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
23
+ bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
23
+
24
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
24
+DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
25
+ (env->cp15.hcr_el2 & HCR_TGE);
25
+DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
26
26
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
27
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)
27
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
28
- && !arm_is_secure_below_el3(env)) {
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
29
+ if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
29
index XXXXXXX..XXXXXXX 100644
30
return CP_ACCESS_TRAP_EL2;
30
--- a/target/arm/mve.decode
31
}
31
+++ b/target/arm/mve.decode
32
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
32
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
33
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
33
qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
34
bool isread)
34
VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
35
{
35
VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
36
int el = arm_current_el(env);
36
+
37
+ bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
37
+VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
38
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
38
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
39
+ (env->cp15.hcr_el2 & HCR_TGE);
39
index XXXXXXX..XXXXXXX 100644
40
40
--- a/target/arm/mve_helper.c
41
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
41
+++ b/target/arm/mve_helper.c
42
- && !arm_is_secure_below_el3(env)) {
42
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
43
+ if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
43
44
return CP_ACCESS_TRAP_EL2;
44
DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
45
}
45
DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
46
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
46
+
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
47
+DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
48
bool isread)
48
+DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
49
{
49
+DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
50
int el = arm_current_el(env);
50
+DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
51
+ bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
51
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
52
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
52
index XXXXXXX..XXXXXXX 100644
53
+ (env->cp15.hcr_el2 & HCR_TGE);
53
--- a/target/arm/translate-mve.c
54
54
+++ b/target/arm/translate-mve.c
55
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDA)
55
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
56
- && !arm_is_secure_below_el3(env)) {
56
};
57
+ if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
57
return do_long_dual_acc(s, a, fns[a->size][a->x]);
58
return CP_ACCESS_TRAP_EL2;
58
}
59
}
59
+
60
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
60
+static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
61
+{
62
+ static MVEGenDualAccOpFn * const fns[4][2] = {
63
+ { NULL, NULL },
64
+ { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
65
+ { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
66
+ { NULL, NULL },
67
+ };
68
+ return do_long_dual_acc(s, a, fns[a->size][a->x]);
69
+}
61
--
70
--
62
2.18.0
71
2.20.1
63
72
64
73
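The debug-trap change above reduces to a single predicate per register group: trap to EL2 if the specific MDCR_EL2 bit is set, or if MDCR_EL2.TDE or HCR_EL2.TGE forces that behaviour. The check is sketched below in isolation; the bit positions are illustrative stand-ins rather than QEMU's definitions.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions only; the real masks live in QEMU's headers. */
#define MDCR_TDA  (1u << 9)
#define MDCR_TDE  (1u << 8)
#define HCR_TGE   (1ULL << 27)

/* EL2 traps the debug register access if TDA is set explicitly, or if
 * TDE/TGE make TDA behave as if it were set. */
static bool el2_traps_tda(uint32_t mdcr_el2, uint64_t hcr_el2)
{
    return (mdcr_el2 & MDCR_TDA) ||
           (mdcr_el2 & MDCR_TDE) ||
           (hcr_el2 & HCR_TGE);
}

int main(void)
{
    return el2_traps_tda(MDCR_TDE, 0) ? 0 : 1;
}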
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VRMLALDAVH and VRMLSLDAVH insns, which accumulate
2
the results of a rounded multiply of pairs of elements into a 72-bit
3
accumulator, returning the top 64 bits in a pair of general purpose
4
registers.
2
5
3
This commit improve the way the GIC is realized and connected in the
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
ZynqMP SoC. The security extensions are enabled only if requested in the
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
machine state. The same goes for the virtualization extensions.
8
Message-id: 20210617121628.20116-22-peter.maydell@linaro.org
9
---
10
target/arm/helper-mve.h | 8 ++++++++
11
target/arm/mve.decode | 7 +++++++
12
target/arm/mve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
13
target/arm/translate-mve.c | 24 ++++++++++++++++++++++++
14
4 files changed, 76 insertions(+)
6
15
7
All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ,
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
vIRQ and vFIQ. The missing CPU to GIC timers IRQ connections are also
9
added (HYP and SEC timers).
10
11
The GIC maintenance IRQs are back-wired to the correct GIC PPIs.
12
13
Finally, the MMIO mappings are reworked to take into account the ZynqMP
14
specifics. The GIC (v)CPU interface is aliased 16 times:
15
* for the first 0x1000 bytes from 0xf9010000 to 0xf901f000
16
* for the second 0x1000 bytes from 0xf9020000 to 0xf902f000
17
Mappings of the virtual interface and virtual CPU interface are mapped
18
only when virtualization extensions are requested. The
19
XlnxZynqMPGICRegion struct has been enhanced to be able to catch all
20
this information.
21
22
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
23
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
24
Message-id: 20180727095421.386-20-luc.michel@greensocs.com
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
27
include/hw/arm/xlnx-zynqmp.h | 4 +-
28
hw/arm/xlnx-zynqmp.c | 92 ++++++++++++++++++++++++++++++++----
29
2 files changed, 86 insertions(+), 10 deletions(-)
30
31
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
32
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
33
--- a/include/hw/arm/xlnx-zynqmp.h
18
--- a/target/arm/helper-mve.h
34
+++ b/include/hw/arm/xlnx-zynqmp.h
19
+++ b/target/arm/helper-mve.h
35
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
36
#define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
21
DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
37
#define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
22
DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
38
23
DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
39
-#define XLNX_ZYNQMP_GIC_REGIONS 2
24
+
40
+#define XLNX_ZYNQMP_GIC_REGIONS 6
25
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
41
26
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
42
/* ZynqMP maps the ARM GIC regions (GICC, GICD ...) at consecutive 64k offsets
27
+
43
* and under-decodes the 64k region. This mirrors the 4k regions to every 4k
28
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
29
+
30
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
31
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
32
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/mve.decode
35
+++ b/target/arm/mve.decode
36
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
37
38
@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
39
qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
40
+@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \
41
+ qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
42
VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
43
VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
44
45
VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
46
+
47
+VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
48
+VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
49
+
50
+VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
51
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/mve_helper.c
54
+++ b/target/arm/mve_helper.c
44
@@ -XXX,XX +XXX,XX @@
55
@@ -XXX,XX +XXX,XX @@
45
*/
56
*/
46
57
47
#define XLNX_ZYNQMP_GIC_REGION_SIZE 0x1000
58
#include "qemu/osdep.h"
48
-#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE - 1)
59
+#include "qemu/int128.h"
49
+#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE)
60
#include "cpu.h"
50
61
#include "internals.h"
51
#define XLNX_ZYNQMP_MAX_LOW_RAM_SIZE 0x80000000ull
62
#include "vec_internal.h"
52
63
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
53
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
64
DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
65
DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
66
DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
67
+
68
+/*
69
+ * Rounding multiply add long dual accumulate high: we must keep
70
+ * a 72-bit internal accumulator value and return the top 64 bits.
71
+ */
72
+#define DO_LDAVH(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC, TO128) \
73
+ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
74
+ void *vm, uint64_t a) \
75
+ { \
76
+ uint16_t mask = mve_element_mask(env); \
77
+ unsigned e; \
78
+ TYPE *n = vn, *m = vm; \
79
+ Int128 acc = int128_lshift(TO128(a), 8); \
80
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
81
+ if (mask & 1) { \
82
+ if (e & 1) { \
83
+ acc = ODDACC(acc, TO128(n[H##ESIZE(e - 1 * XCHG)] * \
84
+ m[H##ESIZE(e)])); \
85
+ } else { \
86
+ acc = EVENACC(acc, TO128(n[H##ESIZE(e + 1 * XCHG)] * \
87
+ m[H##ESIZE(e)])); \
88
+ } \
89
+ acc = int128_add(acc, 1 << 7); \
90
+ } \
91
+ } \
92
+ mve_advance_vpt(env); \
93
+ return int128_getlo(int128_rshift(acc, 8)); \
94
+ }
95
+
96
+DO_LDAVH(vrmlaldavhsw, 4, int32_t, false, int128_add, int128_add, int128_makes64)
97
+DO_LDAVH(vrmlaldavhxsw, 4, int32_t, true, int128_add, int128_add, int128_makes64)
98
+
99
+DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64)
100
+
101
+DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
102
+DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
103
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
54
index XXXXXXX..XXXXXXX 100644
104
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/xlnx-zynqmp.c
105
--- a/target/arm/translate-mve.c
56
+++ b/hw/arm/xlnx-zynqmp.c
106
+++ b/target/arm/translate-mve.c
57
@@ -XXX,XX +XXX,XX @@
107
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
58
108
};
59
#define ARM_PHYS_TIMER_PPI 30
109
return do_long_dual_acc(s, a, fns[a->size][a->x]);
60
#define ARM_VIRT_TIMER_PPI 27
110
}
61
+#define ARM_HYP_TIMER_PPI 26
62
+#define ARM_SEC_TIMER_PPI 29
63
+#define GIC_MAINTENANCE_PPI 25
64
65
#define GEM_REVISION 0x40070106
66
67
#define GIC_BASE_ADDR 0xf9000000
68
#define GIC_DIST_ADDR 0xf9010000
69
#define GIC_CPU_ADDR 0xf9020000
70
+#define GIC_VIFACE_ADDR 0xf9040000
71
+#define GIC_VCPU_ADDR 0xf9060000
72
73
#define SATA_INTR 133
74
#define SATA_ADDR 0xFD0C0000
75
@@ -XXX,XX +XXX,XX @@ static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
76
typedef struct XlnxZynqMPGICRegion {
77
int region_index;
78
uint32_t address;
79
+ uint32_t offset;
80
+ bool virt;
81
} XlnxZynqMPGICRegion;
82
83
static const XlnxZynqMPGICRegion xlnx_zynqmp_gic_regions[] = {
84
- { .region_index = 0, .address = GIC_DIST_ADDR, },
85
- { .region_index = 1, .address = GIC_CPU_ADDR, },
86
+ /* Distributor */
87
+ {
88
+ .region_index = 0,
89
+ .address = GIC_DIST_ADDR,
90
+ .offset = 0,
91
+ .virt = false
92
+ },
93
+
111
+
94
+ /* CPU interface */
112
+static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
95
+ {
113
+{
96
+ .region_index = 1,
114
+ static MVEGenDualAccOpFn * const fns[] = {
97
+ .address = GIC_CPU_ADDR,
115
+ gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
98
+ .offset = 0,
116
+ };
99
+ .virt = false
117
+ return do_long_dual_acc(s, a, fns[a->x]);
100
+ },
118
+}
101
+ {
102
+ .region_index = 1,
103
+ .address = GIC_CPU_ADDR + 0x10000,
104
+ .offset = 0x1000,
105
+ .virt = false
106
+ },
107
+
119
+
108
+ /* Virtual interface */
120
+static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
109
+ {
121
+{
110
+ .region_index = 2,
122
+ static MVEGenDualAccOpFn * const fns[] = {
111
+ .address = GIC_VIFACE_ADDR,
123
+ gen_helper_mve_vrmlaldavhuw, NULL,
112
+ .offset = 0,
124
+ };
113
+ .virt = true
125
+ return do_long_dual_acc(s, a, fns[a->x]);
114
+ },
126
+}
115
+
127
+
116
+ /* Virtual CPU interface */
128
+static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
117
+ {
129
+{
118
+ .region_index = 3,
130
+ static MVEGenDualAccOpFn * const fns[] = {
119
+ .address = GIC_VCPU_ADDR,
131
+ gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
120
+ .offset = 0,
132
+ };
121
+ .virt = true
133
+ return do_long_dual_acc(s, a, fns[a->x]);
122
+ },
134
+}
123
+ {
124
+ .region_index = 3,
125
+ .address = GIC_VCPU_ADDR + 0x10000,
126
+ .offset = 0x1000,
127
+ .virt = true
128
+ },
129
};
130
131
static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
132
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
133
qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", GIC_NUM_SPI_INTR + 32);
134
qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
135
qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
136
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
137
+ qdev_prop_set_bit(DEVICE(&s->gic),
138
+ "has-virtualization-extensions", s->virt);
139
140
/* Realize APUs before realizing the GIC. KVM requires this. */
141
for (i = 0; i < num_apus; i++) {
142
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
143
for (i = 0; i < XLNX_ZYNQMP_GIC_REGIONS; i++) {
144
SysBusDevice *gic = SYS_BUS_DEVICE(&s->gic);
145
const XlnxZynqMPGICRegion *r = &xlnx_zynqmp_gic_regions[i];
146
- MemoryRegion *mr = sysbus_mmio_get_region(gic, r->region_index);
147
+ MemoryRegion *mr;
148
uint32_t addr = r->address;
149
int j;
150
151
- sysbus_mmio_map(gic, r->region_index, addr);
152
+ if (r->virt && !s->virt) {
153
+ continue;
154
+ }
155
156
+ mr = sysbus_mmio_get_region(gic, r->region_index);
157
for (j = 0; j < XLNX_ZYNQMP_GIC_ALIASES; j++) {
158
MemoryRegion *alias = &s->gic_mr[i][j];
159
160
- addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
161
memory_region_init_alias(alias, OBJECT(s), "zynqmp-gic-alias", mr,
162
- 0, XLNX_ZYNQMP_GIC_REGION_SIZE);
163
+ r->offset, XLNX_ZYNQMP_GIC_REGION_SIZE);
164
memory_region_add_subregion(system_memory, addr, alias);
165
+
166
+ addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
167
}
168
}
169
170
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
171
sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
172
qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
173
ARM_CPU_IRQ));
174
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus,
175
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
176
+ ARM_CPU_FIQ));
177
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 2,
178
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
179
+ ARM_CPU_VIRQ));
180
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 3,
181
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
182
+ ARM_CPU_VFIQ));
183
irq = qdev_get_gpio_in(DEVICE(&s->gic),
184
arm_gic_ppi_index(i, ARM_PHYS_TIMER_PPI));
185
- qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 0, irq);
186
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_PHYS, irq);
187
irq = qdev_get_gpio_in(DEVICE(&s->gic),
188
arm_gic_ppi_index(i, ARM_VIRT_TIMER_PPI));
189
- qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 1, irq);
190
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_VIRT, irq);
191
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
192
+ arm_gic_ppi_index(i, ARM_HYP_TIMER_PPI));
193
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_HYP, irq);
194
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
195
+ arm_gic_ppi_index(i, ARM_SEC_TIMER_PPI));
196
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_SEC, irq);
197
+
198
+ if (s->virt) {
199
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
200
+ arm_gic_ppi_index(i, GIC_MAINTENANCE_PPI));
201
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 4, irq);
202
+ }
203
}
204
205
if (s->has_rpu) {
206
--
135
--
207
2.18.0
136
2.20.1
208
137
209
138
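The "keep a 72-bit internal accumulator and return the top 64 bits" wording of the VRMLALDAVH patch above can be pictured with a wider-than-64-bit integer. The sketch below mirrors only the shape of that computation, using the compiler's __int128 extension where the QEMU code uses its Int128 type; it is an illustration, not the implementation.

#include <stdint.h>
#include <stdio.h>

/* Widen the incoming 64-bit accumulator by 8 bits, add the 64-bit
 * products of 32-bit elements plus a rounding constant, and hand back
 * the top 64 bits. */
static uint64_t rmlaldavh_shape(const int32_t *n, const int32_t *m,
                                int elems, uint64_t a)
{
    __int128 acc = (__int128)(int64_t)a << 8;

    for (int e = 0; e < elems; e++) {
        acc += (__int128)((int64_t)n[e] * m[e]);
        acc += 1 << 7;                  /* rounding constant */
    }
    return (uint64_t)(acc >> 8);
}

int main(void)
{
    int32_t n[4] = { 1 << 20, -(1 << 20), 3, 4 };
    int32_t m[4] = { 1 << 12, 1 << 12, 5, 6 };
    printf("%llu\n", (unsigned long long)rmlaldavh_shape(n, m, 4, 0));
    return 0;
}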
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the scalar form of the MVE VADD insn. This takes the
2
scalar operand from a general purpose register.
2
3
3
Add the gic_update_virt() function to update the vCPU interface states
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
and raise vIRQ and vFIQ as needed. This commit renames gic_update() to
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
gic_update_internal() and generalizes it to handle both cases, with a
6
Message-id: 20210617121628.20116-23-peter.maydell@linaro.org
6
`virt' parameter to track whether we are updating the CPU or vCPU
7
---
7
interfaces.
8
target/arm/helper-mve.h | 4 ++++
9
target/arm/mve.decode | 7 ++++++
10
target/arm/mve_helper.c | 22 +++++++++++++++++++
11
target/arm/translate-mve.c | 45 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 78 insertions(+)
8
13
9
The main difference between CPU and vCPU is the way we select the best
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
10
IRQ. This part has been split into the gic_get_best_(v)irq functions.
11
For the virt case, the LRs are iterated to find the best candidate.
12
13
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20180727095421.386-17-luc.michel@greensocs.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
hw/intc/arm_gic.c | 175 +++++++++++++++++++++++++++++++++++-----------
19
1 file changed, 136 insertions(+), 39 deletions(-)
20
21
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
22
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/intc/arm_gic.c
16
--- a/target/arm/helper-mve.h
24
+++ b/hw/intc/arm_gic.c
17
+++ b/target/arm/helper-mve.h
25
@@ -XXX,XX +XXX,XX @@ static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
19
DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
}
20
DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
21
29
+static inline void gic_get_best_irq(GICState *s, int cpu,
22
+DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+ int *best_irq, int *best_prio, int *group)
23
+DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+{
24
+DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+ int irq;
33
+ int cm = 1 << cpu;
34
+
25
+
35
+ *best_irq = 1023;
26
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
36
+ *best_prio = 0x100;
27
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
28
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@
34
&vldr_vstr rn qd imm p a w size l u
35
&1op qd qm size
36
&2op qd qm qn size
37
+&2scalar qd qn rm size
38
39
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
40
# Note that both Rn and Qd are 3 bits only (no D bit)
41
@@ -XXX,XX +XXX,XX @@
42
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
43
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
44
45
+@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
37
+
46
+
38
+ for (irq = 0; irq < s->num_irq; irq++) {
47
# Vector loads and stores
39
+ if (GIC_DIST_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
48
40
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
49
# Widening loads and narrowing stores:
41
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
50
@@ -XXX,XX +XXX,XX @@ VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_no
42
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < *best_prio) {
51
VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
43
+ *best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
52
44
+ *best_irq = irq;
53
VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
45
+ }
54
+
46
+ }
55
+# Scalar operations
56
+
57
+VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
58
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mve_helper.c
61
+++ b/target/arm/mve_helper.c
62
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhsubs, do_vhsub_s)
63
DO_2OP_U(vhsubu, do_vhsub_u)
64
65
66
+#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
67
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
68
+ uint32_t rm) \
69
+ { \
70
+ TYPE *d = vd, *n = vn; \
71
+ TYPE m = rm; \
72
+ uint16_t mask = mve_element_mask(env); \
73
+ unsigned e; \
74
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
75
+ mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m), mask); \
76
+ } \
77
+ mve_advance_vpt(env); \
47
+ }
78
+ }
48
+
79
+
49
+ if (*best_irq < 1023) {
80
+/* provide unsigned 2-op scalar helpers for all sizes */
50
+ *group = GIC_DIST_TEST_GROUP(*best_irq, cm);
81
+#define DO_2OP_SCALAR_U(OP, FN) \
51
+ }
82
+ DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
52
+}
83
+ DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
84
+ DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
53
+
85
+
54
+static inline void gic_get_best_virq(GICState *s, int cpu,
86
+DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
55
+ int *best_irq, int *best_prio, int *group)
87
+
88
/*
89
* Multiply add long dual accumulate ops.
90
*/
91
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/translate-mve.c
94
+++ b/target/arm/translate-mve.c
95
@@ -XXX,XX +XXX,XX @@
96
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
97
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
98
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
99
+typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
100
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
101
102
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
103
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BU, vmullbu)
104
DO_2OP(VMULL_TS, vmullts)
105
DO_2OP(VMULL_TU, vmulltu)
106
107
+static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
108
+ MVEGenTwoOpScalarFn fn)
56
+{
109
+{
57
+ int lr_idx = 0;
110
+ TCGv_ptr qd, qn;
111
+ TCGv_i32 rm;
58
+
112
+
59
+ *best_irq = 1023;
113
+ if (!dc_isar_feature(aa32_mve, s) ||
60
+ *best_prio = 0x100;
114
+ !mve_check_qreg_bank(s, a->qd | a->qn) ||
61
+
115
+ !fn) {
62
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
63
+ uint32_t lr_entry = s->h_lr[lr_idx][cpu];
64
+ int state = GICH_LR_STATE(lr_entry);
65
+
66
+ if (state == GICH_LR_STATE_PENDING) {
67
+ int prio = GICH_LR_PRIORITY(lr_entry);
68
+
69
+ if (prio < *best_prio) {
70
+ *best_prio = prio;
71
+ *best_irq = GICH_LR_VIRT_ID(lr_entry);
72
+ *group = GICH_LR_GROUP(lr_entry);
73
+ }
74
+ }
75
+ }
76
+}
77
+
78
+/* Return true if IRQ signaling is enabled for the given cpu and at least one
79
+ * of the given groups:
80
+ * - in the non-virt case, the distributor must be enabled for one of the
81
+ * given groups
82
+ * - in the virt case, the virtual interface must be enabled.
83
+ * - in all cases, the (v)CPU interface must be enabled for one of the given
84
+ * groups.
85
+ */
86
+static inline bool gic_irq_signaling_enabled(GICState *s, int cpu, bool virt,
87
+ int group_mask)
88
+{
89
+ if (!virt && !(s->ctlr & group_mask)) {
90
+ return false;
116
+ return false;
91
+ }
117
+ }
92
+
118
+ if (a->rm == 13 || a->rm == 15) {
93
+ if (virt && !(s->h_hcr[cpu] & R_GICH_HCR_EN_MASK)) {
119
+ /* UNPREDICTABLE */
94
+ return false;
120
+ return false;
95
+ }
121
+ }
96
+
122
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
97
+ if (!(s->cpu_ctlr[cpu] & group_mask)) {
123
+ return true;
98
+ return false;
99
+ }
124
+ }
100
+
125
+
126
+ qd = mve_qreg_ptr(a->qd);
127
+ qn = mve_qreg_ptr(a->qn);
128
+ rm = load_reg(s, a->rm);
129
+ fn(cpu_env, qd, qn, rm);
130
+ tcg_temp_free_i32(rm);
131
+ tcg_temp_free_ptr(qd);
132
+ tcg_temp_free_ptr(qn);
133
+ mve_update_eci(s);
101
+ return true;
134
+ return true;
102
+}
135
+}
103
+
136
+
104
/* TODO: Many places that call this routine could be optimized. */
137
+#define DO_2OP_SCALAR(INSN, FN) \
105
/* Update interrupt status after enabled or pending bits have been changed. */
138
+ static bool trans_##INSN(DisasContext *s, arg_2scalar *a) \
106
-static void gic_update(GICState *s)
139
+ { \
107
+static inline void gic_update_internal(GICState *s, bool virt)
140
+ static MVEGenTwoOpScalarFn * const fns[] = { \
108
{
141
+ gen_helper_mve_##FN##b, \
109
int best_irq;
142
+ gen_helper_mve_##FN##h, \
110
int best_prio;
143
+ gen_helper_mve_##FN##w, \
111
- int irq;
144
+ NULL, \
112
int irq_level, fiq_level;
145
+ }; \
113
- int cpu;
146
+ return do_2op_scalar(s, a, fns[a->size]); \
114
- int cm;
115
+ int cpu, cpu_iface;
116
+ int group = 0;
117
+ qemu_irq *irq_lines = virt ? s->parent_virq : s->parent_irq;
118
+ qemu_irq *fiq_lines = virt ? s->parent_vfiq : s->parent_fiq;
119
120
for (cpu = 0; cpu < s->num_cpu; cpu++) {
121
- cm = 1 << cpu;
122
- s->current_pending[cpu] = 1023;
123
- if (!(s->ctlr & (GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1))
124
- || !(s->cpu_ctlr[cpu] & (GICC_CTLR_EN_GRP0 | GICC_CTLR_EN_GRP1))) {
125
- qemu_irq_lower(s->parent_irq[cpu]);
126
- qemu_irq_lower(s->parent_fiq[cpu]);
127
+ cpu_iface = virt ? (cpu + GIC_NCPU) : cpu;
128
+
129
+ s->current_pending[cpu_iface] = 1023;
130
+ if (!gic_irq_signaling_enabled(s, cpu, virt,
131
+ GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1)) {
132
+ qemu_irq_lower(irq_lines[cpu]);
133
+ qemu_irq_lower(fiq_lines[cpu]);
134
continue;
135
}
136
- best_prio = 0x100;
137
- best_irq = 1023;
138
- for (irq = 0; irq < s->num_irq; irq++) {
139
- if (GIC_DIST_TEST_ENABLED(irq, cm) &&
140
- gic_test_pending(s, irq, cm) &&
141
- (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
142
- (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
143
- if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
144
- best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
145
- best_irq = irq;
146
- }
147
- }
148
+
149
+ if (virt) {
150
+ gic_get_best_virq(s, cpu, &best_irq, &best_prio, &group);
151
+ } else {
152
+ gic_get_best_irq(s, cpu, &best_irq, &best_prio, &group);
153
}
154
155
if (best_irq != 1023) {
156
trace_gic_update_bestirq(cpu, best_irq, best_prio,
157
- s->priority_mask[cpu], s->running_priority[cpu]);
158
+ s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
159
}
160
161
irq_level = fiq_level = 0;
162
163
- if (best_prio < s->priority_mask[cpu]) {
164
- s->current_pending[cpu] = best_irq;
165
- if (best_prio < s->running_priority[cpu]) {
166
- int group = GIC_DIST_TEST_GROUP(best_irq, cm);
167
-
168
- if (extract32(s->ctlr, group, 1) &&
169
- extract32(s->cpu_ctlr[cpu], group, 1)) {
170
- if (group == 0 && s->cpu_ctlr[cpu] & GICC_CTLR_FIQ_EN) {
171
+ if (best_prio < s->priority_mask[cpu_iface]) {
172
+ s->current_pending[cpu_iface] = best_irq;
173
+ if (best_prio < s->running_priority[cpu_iface]) {
174
+ if (gic_irq_signaling_enabled(s, cpu, virt, 1 << group)) {
175
+ if (group == 0 &&
176
+ s->cpu_ctlr[cpu_iface] & GICC_CTLR_FIQ_EN) {
177
DPRINTF("Raised pending FIQ %d (cpu %d)\n",
178
- best_irq, cpu);
179
+ best_irq, cpu_iface);
180
fiq_level = 1;
181
- trace_gic_update_set_irq(cpu, "fiq", fiq_level);
182
+ trace_gic_update_set_irq(cpu, virt ? "vfiq" : "fiq",
183
+ fiq_level);
184
} else {
185
DPRINTF("Raised pending IRQ %d (cpu %d)\n",
186
- best_irq, cpu);
187
+ best_irq, cpu_iface);
188
irq_level = 1;
189
- trace_gic_update_set_irq(cpu, "irq", irq_level);
190
+ trace_gic_update_set_irq(cpu, virt ? "virq" : "irq",
191
+ irq_level);
192
}
193
}
194
}
195
}
196
197
- qemu_set_irq(s->parent_irq[cpu], irq_level);
198
- qemu_set_irq(s->parent_fiq[cpu], fiq_level);
199
+ qemu_set_irq(irq_lines[cpu], irq_level);
200
+ qemu_set_irq(fiq_lines[cpu], fiq_level);
201
}
202
}
203
204
+static void gic_update(GICState *s)
205
+{
206
+ gic_update_internal(s, false);
207
+}
208
+
209
/* Return true if this LR is empty, i.e. the corresponding bit
210
* in ELRSR is set.
211
*/
212
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
213
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
214
}
215
216
+static void gic_update_virt(GICState *s)
217
+{
218
+ gic_update_internal(s, true);
219
+}
220
+
221
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
222
int cm, int target)
223
{
224
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
225
}
226
}
227
228
- gic_update(s);
229
+ if (gic_is_vcpu(cpu)) {
230
+ gic_update_virt(s);
231
+ } else {
232
+ gic_update(s);
233
+ }
234
DPRINTF("ACK %d\n", irq);
235
return ret;
236
}
237
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
238
*/
239
int rcpu = gic_get_vcpu_real_id(cpu);
240
s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
241
+
242
+ /* Update the virtual interface in case a maintenance interrupt should
243
+ * be raised.
244
+ */
245
+ gic_update_virt(s);
246
return;
247
}
248
249
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
250
}
251
}
252
253
+ gic_update_virt(s);
254
return;
255
}
256
257
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
258
"gic_cpu_write: Bad offset %x\n", (int)offset);
259
return MEMTX_OK;
260
}
261
- gic_update(s);
262
+
263
+ if (gic_is_vcpu(cpu)) {
264
+ gic_update_virt(s);
265
+ } else {
266
+ gic_update(s);
267
+ }
147
+ }
268
+
148
+
269
return MEMTX_OK;
149
+DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
270
}
150
+
271
151
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
272
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
152
MVEGenDualAccOpFn *fn)
273
return MEMTX_OK;
153
{
274
}
275
276
+ gic_update_virt(s);
277
return MEMTX_OK;
278
}
279
280
--
154
--
281
2.18.0
155
2.20.1
282
156
283
157
diff view generated by jsdifflib
1
In do_v7m_exception_exit(), we use the exc_secure variable to track
1
Implement the scalar forms of the MVE VSUB and VMUL insns.
2
whether the exception we're returning from is secure or non-secure.
3
Unfortunately the statement initializing this was accidentally
4
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
5
which meant that we were using the wrong value for NMI handlers.
6
Move the initialization out to the right place.
7
2
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Message-id: 20210617121628.20116-24-peter.maydell@linaro.org
11
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
12
---
6
---
13
target/arm/helper.c | 2 +-
7
target/arm/helper-mve.h | 8 ++++++++
14
1 file changed, 1 insertion(+), 1 deletion(-)
8
target/arm/mve.decode | 2 ++
9
target/arm/mve_helper.c | 2 ++
10
target/arm/translate-mve.c | 2 ++
11
4 files changed, 14 insertions(+)
15
12
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
15
--- a/target/arm/helper-mve.h
19
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
/* For all other purposes, treat ES as 0 (R_HXSR) */
18
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
excret &= ~R_V7M_EXCRET_ES_MASK;
19
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
}
20
24
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
21
+DEF_HELPER_FLAGS_4(mve_vsub_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
+DEF_HELPER_FLAGS_4(mve_vsub_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(mve_vsub_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+
25
+DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+
29
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
30
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
31
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
32
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/mve.decode
35
+++ b/target/arm/mve.decode
36
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no
37
# Scalar operations
38
39
VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
40
+VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
41
+VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
42
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/mve_helper.c
45
+++ b/target/arm/mve_helper.c
46
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
47
DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
48
49
DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
50
+DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
51
+DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
52
53
/*
54
* Multiply add long dual accumulate ops.
55
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/translate-mve.c
58
+++ b/target/arm/translate-mve.c
59
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
25
}
60
}
26
61
27
if (env->v7m.exception != ARMV7M_EXCP_NMI) {
62
DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
28
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
63
+DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
29
* which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
64
+DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
30
*/
65
31
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
66
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
32
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
67
MVEGenDualAccOpFn *fn)
33
if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
34
env->v7m.faultmask[exc_secure] = 0;
35
}
36
--
68
--
37
2.18.0
69
2.20.1
38
70
39
71
diff view generated by jsdifflib
1
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
1
Implement the scalar variants of the MVE VHADD and VHSUB insns.
2
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
3
Implement this in arm_excp_unmasked().
4
2
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
5
Message-id: 20210617121628.20116-25-peter.maydell@linaro.org
8
---
6
---
9
target/arm/cpu.h | 6 ++++--
7
target/arm/helper-mve.h | 16 ++++++++++++++++
10
1 file changed, 4 insertions(+), 2 deletions(-)
8
target/arm/mve.decode | 4 ++++
9
target/arm/mve_helper.c | 8 ++++++++
10
target/arm/translate-mve.c | 4 ++++
11
4 files changed, 32 insertions(+)
11
12
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.h
15
--- a/target/arm/helper-mve.h
15
+++ b/target/arm/cpu.h
16
+++ b/target/arm/helper-mve.h
16
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
17
break;
18
DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
18
19
DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
19
case EXCP_VFIQ:
20
20
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
21
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
+ if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
22
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
23
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
/* VFIQs are only taken when hypervized and non-secure. */
24
+
24
return false;
25
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
}
26
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
return !(env->daif & PSTATE_F);
27
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
case EXCP_VIRQ:
28
+
28
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
29
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+ if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
30
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
31
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
/* VIRQs are only taken when hypervized and non-secure. */
32
+
32
return false;
33
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
}
34
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+
37
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
38
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
39
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
40
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/mve.decode
43
+++ b/target/arm/mve.decode
44
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no
45
VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
46
VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
47
VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
48
+VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
49
+VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
50
+VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
51
+VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/mve_helper.c
55
+++ b/target/arm/mve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
57
DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
58
DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
59
DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
60
+#define DO_2OP_SCALAR_S(OP, FN) \
61
+ DO_2OP_SCALAR(OP##b, 1, int8_t, FN) \
62
+ DO_2OP_SCALAR(OP##h, 2, int16_t, FN) \
63
+ DO_2OP_SCALAR(OP##w, 4, int32_t, FN)
64
65
DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
66
DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
67
DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
68
+DO_2OP_SCALAR_S(vhadds_scalar, do_vhadd_s)
69
+DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
70
+DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
71
+DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
72
73
/*
74
* Multiply add long dual accumulate ops.
75
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/translate-mve.c
78
+++ b/target/arm/translate-mve.c
79
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
80
DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
81
DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
82
DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
83
+DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
84
+DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
85
+DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
86
+DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
87
88
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
89
MVEGenDualAccOpFn *fn)
34
--
90
--
35
2.18.0
91
2.20.1
36
92
37
93
diff view generated by jsdifflib
1
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1
Implement the MVE VBRSR insn, which reverses a specified
2
1 for all purposes other than direct reads" if HCR_EL2.TGE
2
number of bits in each element, setting the rest to zero.
3
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
4
purposes other than direct reads" if HCR_EL2.TGE is set
5
and HCR_EL2.E2H is 1.
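
As an informal aside on the VBRSR semantics described above (not part of either patch in this view; rev8() is a local stand-in for QEMU's revbit8()), a stand-alone C model of the byte-size operation with a worked example:

#include <stdint.h>
#include <stdio.h>

/* Reverse all eight bits of a byte. */
static uint8_t rev8(uint8_t x)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (r << 1) | ((x >> i) & 1);
    }
    return r;
}

/* Byte-size VBRSR: reverse the low 'm' bits of n, zero the rest. */
static uint8_t vbrsr_byte(uint8_t n, uint32_t m)
{
    m &= 0xff;
    if (m == 0) {
        return 0;
    }
    n = rev8(n);
    if (m < 8) {
        n >>= 8 - m;
    }
    return n;
}

int main(void)
{
    /* The low 4 bits of 0xb2 are 0b0010; bit-reversed they become 0b0100. */
    printf("%#x\n", (unsigned)vbrsr_byte(0xb2, 4));   /* prints 0x4 */
    return 0;
}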
6
7
To avoid having to check E2H and TGE everywhere we test IMO and
8
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
9
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
10
case will never be true, but we include the logic to save effort when
11
we eventually do get to that.
12
13
(Note that in several of these callsites the change doesn't
14
actually make a difference as either the callsite is handling
15
TGE specially anyway, or the CPU can't get into that situation
16
with TGE set; we change everywhere for consistency.)
17
3
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
6
Message-id: 20210617121628.20116-26-peter.maydell@linaro.org
21
---
7
---
22
target/arm/cpu.h | 64 +++++++++++++++++++++++++++++++++++----
8
target/arm/helper-mve.h | 4 ++++
23
hw/intc/arm_gicv3_cpuif.c | 19 ++++++------
9
target/arm/mve.decode | 1 +
24
target/arm/helper.c | 6 ++--
10
target/arm/mve_helper.c | 43 ++++++++++++++++++++++++++++++++++++++
25
3 files changed, 71 insertions(+), 18 deletions(-)
11
target/arm/translate-mve.c | 1 +
12
4 files changed, 49 insertions(+)
26
13
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
28
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
16
--- a/target/arm/helper-mve.h
30
+++ b/target/arm/cpu.h
17
+++ b/target/arm/helper-mve.h
31
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
#define HCR_RW (1ULL << 31)
19
DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
#define HCR_CD (1ULL << 32)
20
DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
#define HCR_ID (1ULL << 33)
21
35
+#define HCR_E2H (1ULL << 34)
22
+DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+/*
23
+DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
24
+DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+ * HCR_MASK and then clear it again if the feature bit is not set in
25
+
39
+ * hcr_write().
26
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
40
+ */
27
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
41
#define HCR_MASK ((1ULL << 34) - 1)
28
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
42
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
43
#define SCR_NS (1U << 0)
30
index XXXXXXX..XXXXXXX 100644
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
31
--- a/target/arm/mve.decode
45
# define TARGET_VIRT_ADDR_SPACE_BITS 32
32
+++ b/target/arm/mve.decode
46
#endif
33
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
47
34
VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
48
+/**
35
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
49
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
36
VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
50
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
37
+VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
51
+ * "behaves as 1 for all purposes other than direct read/write" or
38
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
52
+ * "behaves as 0 for all purposes other than direct read/write"
39
index XXXXXXX..XXXXXXX 100644
53
+ */
40
--- a/target/arm/mve_helper.c
54
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
41
+++ b/target/arm/mve_helper.c
42
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
43
DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
44
DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
45
46
+static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
55
+{
47
+{
56
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
48
+ m &= 0xff;
57
+ case HCR_TGE:
49
+ if (m == 0) {
58
+ return true;
50
+ return 0;
59
+ case HCR_TGE | HCR_E2H:
60
+ return false;
61
+ default:
62
+ return env->cp15.hcr_el2 & HCR_IMO;
63
+ }
51
+ }
52
+ n = revbit8(n);
53
+ if (m < 8) {
54
+ n >>= 8 - m;
55
+ }
56
+ return n;
64
+}
57
+}
65
+
58
+
66
+/**
59
+static inline uint32_t do_vbrsrh(uint32_t n, uint32_t m)
67
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
68
+ */
69
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
70
+{
60
+{
71
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
61
+ m &= 0xff;
72
+ case HCR_TGE:
62
+ if (m == 0) {
73
+ return true;
63
+ return 0;
74
+ case HCR_TGE | HCR_E2H:
75
+ return false;
76
+ default:
77
+ return env->cp15.hcr_el2 & HCR_FMO;
78
+ }
64
+ }
65
+ n = revbit16(n);
66
+ if (m < 16) {
67
+ n >>= 16 - m;
68
+ }
69
+ return n;
79
+}
70
+}
80
+
71
+
81
+/**
72
+static inline uint32_t do_vbrsrw(uint32_t n, uint32_t m)
82
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
83
+ */
84
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
85
+{
73
+{
86
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
74
+ m &= 0xff;
87
+ case HCR_TGE:
75
+ if (m == 0) {
88
+ return true;
76
+ return 0;
89
+ case HCR_TGE | HCR_E2H:
90
+ return false;
91
+ default:
92
+ return env->cp15.hcr_el2 & HCR_AMO;
93
+ }
77
+ }
78
+ n = revbit32(n);
79
+ if (m < 32) {
80
+ n >>= 32 - m;
81
+ }
82
+ return n;
94
+}
83
+}
95
+
84
+
96
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
85
+DO_2OP_SCALAR(vbrsrb, 1, uint8_t, do_vbrsrb)
97
unsigned int target_el)
86
+DO_2OP_SCALAR(vbrsrh, 2, uint16_t, do_vbrsrh)
98
{
87
+DO_2OP_SCALAR(vbrsrw, 4, uint32_t, do_vbrsrw)
99
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
88
+
100
break;
89
/*
101
90
* Multiply add long dual accumulate ops.
102
case EXCP_VFIQ:
91
*/
103
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
92
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
104
- || (env->cp15.hcr_el2 & HCR_TGE)) {
105
+ if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
106
/* VFIQs are only taken when hypervized and non-secure. */
107
return false;
108
}
109
return !(env->daif & PSTATE_F);
110
case EXCP_VIRQ:
111
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
112
- || (env->cp15.hcr_el2 & HCR_TGE)) {
113
+ if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
114
/* VIRQs are only taken when hypervized and non-secure. */
115
return false;
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
118
* to the CPSR.F setting otherwise we further assess the state
119
* below.
120
*/
121
- hcr = (env->cp15.hcr_el2 & HCR_FMO);
122
+ hcr = arm_hcr_el2_fmo(env);
123
scr = (env->cp15.scr_el3 & SCR_FIQ);
124
125
/* When EL3 is 32-bit, the SCR.FW bit controls whether the
126
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
127
* when setting the target EL, so it does not have a further
128
* affect here.
129
*/
130
- hcr = (env->cp15.hcr_el2 & HCR_IMO);
131
+ hcr = arm_hcr_el2_imo(env);
132
scr = false;
133
break;
134
default:
135
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
136
index XXXXXXX..XXXXXXX 100644
93
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/intc/arm_gicv3_cpuif.c
94
--- a/target/arm/translate-mve.c
138
+++ b/hw/intc/arm_gicv3_cpuif.c
95
+++ b/target/arm/translate-mve.c
139
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
96
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
140
* * access if NS EL1 and either IMO or FMO == 1:
97
DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
141
* CTLR, DIR, PMR, RPR
98
DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
142
*/
99
DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
143
- return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
100
+DO_2OP_SCALAR(VBRSR, vbrsr)
144
+ bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
101
145
+ ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
102
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
146
+
103
MVEGenDualAccOpFn *fn)
147
+ return flagmatch && arm_current_el(env) == 1
148
&& !arm_is_secure_below_el3(env);
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
152
/* No need to include !IsSecure in route_*_to_el2 as it's only
153
* tested in cases where we know !IsSecure is true.
154
*/
155
- route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
156
- route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
157
+ route_fiq_to_el2 = arm_hcr_el2_fmo(env);
158
+ route_irq_to_el2 = arm_hcr_el2_imo(env);
159
160
switch (arm_current_el(env)) {
161
case 3:
162
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
163
switch (el) {
164
case 1:
165
if (arm_is_secure_below_el3(env) ||
166
- ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
167
+ (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
168
r = CP_ACCESS_TRAP_EL3;
169
}
170
break;
171
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
172
static CPAccessResult gicv3_sgi_access(CPUARMState *env,
173
const ARMCPRegInfo *ri, bool isread)
174
{
175
- if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
176
+ if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
177
arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
178
/* Takes priority over a possible EL3 trap */
179
return CP_ACCESS_TRAP_EL2;
180
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
181
if (env->cp15.scr_el3 & SCR_FIQ) {
182
switch (el) {
183
case 1:
184
- if (arm_is_secure_below_el3(env) ||
185
- ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
186
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
187
r = CP_ACCESS_TRAP_EL3;
188
}
189
break;
190
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
191
if (env->cp15.scr_el3 & SCR_IRQ) {
192
switch (el) {
193
case 1:
194
- if (arm_is_secure_below_el3(env) ||
195
- ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
196
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
197
r = CP_ACCESS_TRAP_EL3;
198
}
199
break;
200
diff --git a/target/arm/helper.c b/target/arm/helper.c
201
index XXXXXXX..XXXXXXX 100644
202
--- a/target/arm/helper.c
203
+++ b/target/arm/helper.c
204
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
205
switch (excp_idx) {
206
case EXCP_IRQ:
207
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
208
- hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
209
+ hcr = arm_hcr_el2_imo(env);
210
break;
211
case EXCP_FIQ:
212
scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
213
- hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
214
+ hcr = arm_hcr_el2_fmo(env);
215
break;
216
default:
217
scr = ((env->cp15.scr_el3 & SCR_EA) == SCR_EA);
218
- hcr = ((env->cp15.hcr_el2 & HCR_AMO) == HCR_AMO);
219
+ hcr = arm_hcr_el2_amo(env);
220
break;
221
};
222
223
--
104
--
224
2.18.0
105
2.20.1
225
106
226
107
diff view generated by jsdifflib
1
When we support execution from non-RAM MMIO regions, get_page_addr_code()
1
Implement the MVE VPST insn, which sets the predicate mask
2
will return -1 to indicate that there is no RAM at the requested address.
2
fields in the VPR to the immediate value encoded in the insn.
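
A rough stand-alone sketch of what VPST does to the VPR when no ECI beats are skipped; this is not part of the patch, and it assumes the usual VPR layout with P0 in bits [15:0], MASK01 in [19:16] and MASK23 in [23:20]:

#include <stdint.h>
#include <stdio.h>

/* Write the 4-bit immediate mask into both MASK01 and MASK23. */
static uint32_t vpst_update(uint32_t vpr, uint32_t mask4)
{
    vpr &= ~0x00ff0000u;          /* clear MASK01 and MASK23 */
    vpr |= (mask4 & 0xf) << 16;   /* MASK01 */
    vpr |= (mask4 & 0xf) << 20;   /* MASK23 */
    return vpr;
}

int main(void)
{
    printf("%#010x\n", vpst_update(0, 0x8));   /* prints 0x00880000 */
    return 0;
}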
3
Handle this in tb_check_watchpoint() -- if the exception happened for a
4
PC which doesn't correspond to RAM then there is no need to invalidate
5
any TBs, because the one-instruction TB will not have been cached.
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Tested-by: Cédric Le Goater <clg@kaod.org>
6
Message-id: 20210617121628.20116-27-peter.maydell@linaro.org
10
Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
11
---
7
---
12
accel/tcg/translate-all.c | 4 +++-
8
target/arm/mve.decode | 4 +++
13
1 file changed, 3 insertions(+), 1 deletion(-)
9
target/arm/translate-mve.c | 59 ++++++++++++++++++++++++++++++++++++++
10
2 files changed, 63 insertions(+)
14
11
15
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
12
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/accel/tcg/translate-all.c
14
--- a/target/arm/mve.decode
18
+++ b/accel/tcg/translate-all.c
15
+++ b/target/arm/mve.decode
19
@@ -XXX,XX +XXX,XX @@ void tb_check_watchpoint(CPUState *cpu)
16
@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
20
17
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
21
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
18
VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
22
addr = get_page_addr_code(env, pc);
19
VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
23
- tb_invalidate_phys_range(addr, addr + 1);
20
+
24
+ if (addr != -1) {
21
+# Predicate operations
25
+ tb_invalidate_phys_range(addr, addr + 1);
22
+%mask_22_13 22:1 13:3
26
+ }
23
+VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
24
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/translate-mve.c
27
+++ b/target/arm/translate-mve.c
28
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
27
}
29
}
28
}
30
}
29
31
32
+static void mve_update_and_store_eci(DisasContext *s)
33
+{
34
+ /*
35
+ * For insns which don't call a helper function that will call
36
+ * mve_advance_vpt(), this version updates s->eci and also stores
37
+ * it out to the CPUState field.
38
+ */
39
+ if (s->eci) {
40
+ mve_update_eci(s);
41
+ store_cpu_field(tcg_constant_i32(s->eci << 4), condexec_bits);
42
+ }
43
+}
44
+
45
static bool mve_skip_first_beat(DisasContext *s)
46
{
47
/* Return true if PSR.ECI says we must skip the first beat of this insn */
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
49
};
50
return do_long_dual_acc(s, a, fns[a->x]);
51
}
52
+
53
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
54
+{
55
+ TCGv_i32 vpr;
56
+
57
+ /* mask == 0 is a "related encoding" */
58
+ if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
59
+ return false;
60
+ }
61
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
62
+ return true;
63
+ }
64
+ /*
65
+ * Set the VPR mask fields. We take advantage of MASK01 and MASK23
66
+ * being adjacent fields in the register.
67
+ *
68
+ * This insn is not predicated, but it is subject to beat-wise
69
+ * execution, and the mask is updated on the odd-numbered beats.
70
+ * So if PSR.ECI says we should skip beat 1, we mustn't update the
71
+ * 01 mask field.
72
+ */
73
+ vpr = load_cpu_field(v7m.vpr);
74
+ switch (s->eci) {
75
+ case ECI_NONE:
76
+ case ECI_A0:
77
+ /* Update both 01 and 23 fields */
78
+ tcg_gen_deposit_i32(vpr, vpr,
79
+ tcg_constant_i32(a->mask | (a->mask << 4)),
80
+ R_V7M_VPR_MASK01_SHIFT,
81
+ R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
82
+ break;
83
+ case ECI_A0A1:
84
+ case ECI_A0A1A2:
85
+ case ECI_A0A1A2B0:
86
+ /* Update only the 23 mask field */
87
+ tcg_gen_deposit_i32(vpr, vpr,
88
+ tcg_constant_i32(a->mask),
89
+ R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
90
+ break;
91
+ default:
92
+ g_assert_not_reached();
93
+ }
94
+ store_cpu_field(vpr, v7m.vpr);
95
+ mve_update_and_store_eci(s);
96
+ return true;
97
+}
30
--
98
--
31
2.18.0
99
2.20.1
32
100
33
101
diff view generated by jsdifflib
1
Tailchaining is an optimization in handling of exception return
1
Implement the MVE VQADD and VQSUB insns, which perform saturating
2
for M-profile cores: if we are about to pop the exception stack
2
addition and subtraction of a scalar to each element. Note that individual bytes of
3
for an exception return, but there is a pending exception which
3
each result element are used or discarded according to the predicate
4
is higher priority than the priority we are returning to, then
4
mask, but FPSCR.QC is only set if the predicate mask for the lowest
5
instead of unstacking and then immediately taking the exception
5
byte of the element is set.
6
and stacking registers again, we can chain to the pending
7
exception without unstacking and stacking.
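
To make the FPSCR.QC rule for these saturating scalar ops concrete, here is an informal stand-alone C sketch (not part of either patch) for 32-bit elements; the write-back is simplified to whole lanes, whereas the real helper merges byte by byte, but the QC behaviour follows the description above:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Saturating add of a scalar to four 32-bit lanes under a byte-granular mask. */
static void vqadd_s32(int32_t *d, const int32_t *n, int32_t rm,
                      uint16_t mask, bool *qc)
{
    for (int e = 0; e < 4; e++, mask >>= 4) {
        int64_t r = (int64_t)n[e] + rm;
        bool sat = false;
        if (r > INT32_MAX) {
            r = INT32_MAX;
            sat = true;
        } else if (r < INT32_MIN) {
            r = INT32_MIN;
            sat = true;
        }
        if (mask & 0xf) {
            d[e] = (int32_t)r;   /* simplified: whole-lane write-back */
        }
        /* QC only cares about the predicate bit of the lane's lowest byte. */
        *qc |= sat && (mask & 1);
    }
}

int main(void)
{
    int32_t n[4] = { INT32_MAX, 1, 2, 3 };
    int32_t d[4] = { 0 };
    bool qc = false;
    /* Lane 0 saturates, but its lowest-byte predicate bit is clear... */
    vqadd_s32(d, n, 1, 0xfffe, &qc);
    printf("qc=%d\n", qc);   /* ...so QC stays 0 */
    return 0;
}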
8
9
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
10
exceptions; for v8M this is architecturally required. Implement it
11
in QEMU for all M-profile cores, since in practice v6M and v7M
12
hardware implementations generally do have it.
13
14
(We were already doing tailchaining for derived exceptions which
15
happened during exception return, like the validity checks and
16
stack access failures; these have always been required to be
17
tailchained for all versions of the architecture.)
18
6
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
9
Message-id: 20210617121628.20116-28-peter.maydell@linaro.org
22
---
10
---
23
target/arm/helper.c | 16 ++++++++++++++++
11
target/arm/helper-mve.h | 16 ++++++++++
24
1 file changed, 16 insertions(+)
12
target/arm/mve.decode | 5 +++
13
target/arm/mve_helper.c | 62 ++++++++++++++++++++++++++++++++++++++
14
target/arm/translate-mve.c | 4 +++
15
4 files changed, 87 insertions(+)
25
16
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
27
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
19
--- a/target/arm/helper-mve.h
29
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper-mve.h
30
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
return;
22
DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+
33
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+
41
DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
42
DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
44
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/mve.decode
47
+++ b/target/arm/mve.decode
48
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
49
VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
50
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
51
VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
52
+
53
+VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
54
+VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
55
+VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
56
+VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
57
VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
58
59
# Predicate operations
60
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/mve_helper.c
63
+++ b/target/arm/mve_helper.c
64
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
65
DO_2OP_S(vhsubs, do_vhsub_s)
66
DO_2OP_U(vhsubu, do_vhsub_u)
67
68
+static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
69
+{
70
+ if (val > max) {
71
+ *s = true;
72
+ return max;
73
+ } else if (val < min) {
74
+ *s = true;
75
+ return min;
76
+ }
77
+ return val;
78
+}
79
+
80
+#define DO_SQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, INT8_MIN, INT8_MAX, s)
81
+#define DO_SQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, INT16_MIN, INT16_MAX, s)
82
+#define DO_SQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, INT32_MIN, INT32_MAX, s)
83
+
84
+#define DO_UQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT8_MAX, s)
85
+#define DO_UQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT16_MAX, s)
86
+#define DO_UQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT32_MAX, s)
87
+
88
+#define DO_SQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, INT8_MIN, INT8_MAX, s)
89
+#define DO_SQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, INT16_MIN, INT16_MAX, s)
90
+#define DO_SQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, INT32_MIN, INT32_MAX, s)
91
+
92
+#define DO_UQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT8_MAX, s)
93
+#define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
94
+#define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)
95
96
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
97
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
98
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
99
mve_advance_vpt(env); \
32
}
100
}
33
101
34
+ /*
102
+#define DO_2OP_SAT_SCALAR(OP, ESIZE, TYPE, FN) \
35
+ * Tailchaining: if there is currently a pending exception that
103
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
36
+ * is high enough priority to preempt execution at the level we're
104
+ uint32_t rm) \
37
+ * about to return to, then just directly take that exception now,
105
+ { \
38
+ * avoiding an unstack-and-then-stack. Note that now we have
106
+ TYPE *d = vd, *n = vn; \
39
+ * deactivated the previous exception by calling armv7m_nvic_complete_irq()
107
+ TYPE m = rm; \
40
+ * our current execution priority is already the execution priority we are
108
+ uint16_t mask = mve_element_mask(env); \
41
+ * returning to -- none of the state we would unstack or set based on
109
+ unsigned e; \
42
+ * the EXCRET value affects it.
110
+ bool qc = false; \
43
+ */
111
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
44
+ if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
112
+ bool sat = false; \
45
+ qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
113
+ mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m, &sat), \
46
+ v7m_exception_taken(cpu, excret, true, false);
114
+ mask); \
47
+ return;
115
+ qc |= sat & mask & 1; \
116
+ } \
117
+ if (qc) { \
118
+ env->vfp.qc[0] = qc; \
119
+ } \
120
+ mve_advance_vpt(env); \
48
+ }
121
+ }
49
+
122
+
50
switch_v7m_security_state(env, return_to_secure);
123
/* provide unsigned 2-op scalar helpers for all sizes */
51
124
#define DO_2OP_SCALAR_U(OP, FN) \
52
{
125
DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
126
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
127
DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
128
DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
129
130
+DO_2OP_SAT_SCALAR(vqaddu_scalarb, 1, uint8_t, DO_UQADD_B)
131
+DO_2OP_SAT_SCALAR(vqaddu_scalarh, 2, uint16_t, DO_UQADD_H)
132
+DO_2OP_SAT_SCALAR(vqaddu_scalarw, 4, uint32_t, DO_UQADD_W)
133
+DO_2OP_SAT_SCALAR(vqadds_scalarb, 1, int8_t, DO_SQADD_B)
134
+DO_2OP_SAT_SCALAR(vqadds_scalarh, 2, int16_t, DO_SQADD_H)
135
+DO_2OP_SAT_SCALAR(vqadds_scalarw, 4, int32_t, DO_SQADD_W)
136
+
137
+DO_2OP_SAT_SCALAR(vqsubu_scalarb, 1, uint8_t, DO_UQSUB_B)
138
+DO_2OP_SAT_SCALAR(vqsubu_scalarh, 2, uint16_t, DO_UQSUB_H)
139
+DO_2OP_SAT_SCALAR(vqsubu_scalarw, 4, uint32_t, DO_UQSUB_W)
140
+DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
141
+DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
142
+DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)
143
+
144
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
145
{
146
m &= 0xff;
147
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
148
index XXXXXXX..XXXXXXX 100644
149
--- a/target/arm/translate-mve.c
150
+++ b/target/arm/translate-mve.c
151
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
152
DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
153
DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
154
DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
155
+DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
156
+DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
157
+DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
158
+DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
159
DO_2OP_SCALAR(VBRSR, vbrsr)
160
161
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
53
--
162
--
54
2.18.0
163
2.20.1
55
164
56
165
diff view generated by jsdifflib
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VQDMULH and VQRDMULH scalar insns, which multiply
2
elements by the scalar, double, possibly round, take the high half
3
and saturate.
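
For illustration only (not part of the patch), a stand-alone C sketch of the 16-bit arithmetic, including the "double and shift by esize" to "shift by esize-1" simplification used by the patch and the rounding difference between the two insns:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* 16-bit VQDMULH: multiply, double, take the high half, saturate.
 * "Double then shift right by 16" is folded into a single shift by 15. */
static int16_t qdmulh16(int16_t n, int16_t m, bool *sat)
{
    int64_t r = ((int64_t)n * m) >> 15;
    if (r > INT16_MAX) {
        r = INT16_MAX;
        *sat = true;
    } else if (r < INT16_MIN) {
        r = INT16_MIN;
        *sat = true;
    }
    return r;
}

/* 16-bit VQRDMULH: as above, but round before the shift. */
static int16_t qrdmulh16(int16_t n, int16_t m, bool *sat)
{
    int64_t r = ((int64_t)n * m + (1 << 14)) >> 15;
    if (r > INT16_MAX) {
        r = INT16_MAX;
        *sat = true;
    } else if (r < INT16_MIN) {
        r = INT16_MIN;
        *sat = true;
    }
    return r;
}

int main(void)
{
    bool sat = false;
    /* 0x4000 * 0x4000: high half of the doubled product is 0x2000 */
    printf("%#x\n", (unsigned)(uint16_t)qdmulh16(0x4000, 0x4000, &sat));
    /* rounding makes the difference between 0x2000 and 0x2001 here */
    printf("%#x vs %#x\n",
           (unsigned)(uint16_t)qdmulh16(0x4001, 0x4000, &sat),
           (unsigned)(uint16_t)qrdmulh16(0x4001, 0x4000, &sat));
    /* -32768 * -32768 doubles to 2^31, which saturates to 0x7fff */
    printf("%#x sat=%d\n",
           (unsigned)(uint16_t)qdmulh16(INT16_MIN, INT16_MIN, &sat), sat);
    return 0;
}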
2
4
3
Implement virtualization extensions in the gic_cpu_read() and
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
gic_cpu_write() functions. Those are the last bits missing to fully
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
support virtualization extensions in the CPU interface path.
7
Message-id: 20210617121628.20116-29-peter.maydell@linaro.org
8
---
9
target/arm/helper-mve.h | 8 ++++++++
10
target/arm/mve.decode | 3 +++
11
target/arm/mve_helper.c | 25 +++++++++++++++++++++++++
12
target/arm/translate-mve.c | 2 ++
13
4 files changed, 38 insertions(+)
6
14
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180727095421.386-14-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/intc/arm_gic.c | 20 +++++++++++++++-----
13
1 file changed, 15 insertions(+), 5 deletions(-)
14
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gic.c
17
--- a/target/arm/helper-mve.h
18
+++ b/hw/intc/arm_gic.c
18
+++ b/target/arm/helper-mve.h
19
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
20
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
20
DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
{
21
DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
int regno = (offset - 0xd0) / 4;
22
23
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
23
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
24
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
- if (regno >= GIC_NR_APRS || s->revision != 2) {
25
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+ if (regno >= nr_aprs || s->revision != 2) {
26
+
27
*data = 0;
27
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+ } else if (gic_is_vcpu(cpu)) {
28
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+ *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
29
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
30
+
31
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
31
DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
*data = gic_apr_ns_view(s, regno, cpu);
32
DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
33
DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
int regno = (offset - 0xe0) / 4;
34
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
35
35
index XXXXXXX..XXXXXXX 100644
36
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
36
--- a/target/arm/mve.decode
37
- gic_cpu_ns_access(s, cpu, attrs)) {
37
+++ b/target/arm/mve.decode
38
+ gic_cpu_ns_access(s, cpu, attrs) || gic_is_vcpu(cpu)) {
38
@@ -XXX,XX +XXX,XX @@ VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
39
*data = 0;
39
VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
40
} else {
40
VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
41
*data = s->nsapr[regno][cpu];
41
42
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
42
+VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
43
s->abpr[cpu] = MAX(value & 0x7, GIC_MIN_ABPR);
43
+VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
44
}
44
+
45
} else {
45
# Predicate operations
46
- s->bpr[cpu] = MAX(value & 0x7, GIC_MIN_BPR);
46
%mask_22_13 22:1 13:3
47
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
47
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
48
+ s->bpr[cpu] = MAX(value & 0x7, min_bpr);
48
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
49
}
49
index XXXXXXX..XXXXXXX 100644
50
break;
50
--- a/target/arm/mve_helper.c
51
case 0x10: /* End Of Interrupt */
51
+++ b/target/arm/mve_helper.c
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
52
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
53
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
53
#define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
54
{
54
#define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)
55
int regno = (offset - 0xd0) / 4;
55
56
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
56
+/*
57
57
+ * For QDMULH and QRDMULH we simplify "double and shift by esize" into
58
- if (regno >= GIC_NR_APRS || s->revision != 2) {
58
+ * "shift by esize-1", adjusting the QRDMULH rounding constant to match.
59
+ if (regno >= nr_aprs || s->revision != 2) {
59
+ */
60
return MEMTX_OK;
60
+#define DO_QDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m) >> 7, \
61
}
61
+ INT8_MIN, INT8_MAX, s)
62
- if (gic_cpu_ns_access(s, cpu, attrs)) {
62
+#define DO_QDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m) >> 15, \
63
+ if (gic_is_vcpu(cpu)) {
63
+ INT16_MIN, INT16_MAX, s)
64
+ s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
64
+#define DO_QDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m) >> 31, \
65
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
65
+ INT32_MIN, INT32_MAX, s)
66
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
66
+
67
gic_apr_write_ns_view(s, regno, cpu, value);
67
+#define DO_QRDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 6)) >> 7, \
68
} else {
68
+ INT8_MIN, INT8_MAX, s)
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
69
+#define DO_QRDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 14)) >> 15, \
70
if (regno >= GIC_NR_APRS || s->revision != 2) {
70
+ INT16_MIN, INT16_MAX, s)
71
return MEMTX_OK;
71
+#define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
72
}
72
+ INT32_MIN, INT32_MAX, s)
73
+ if (gic_is_vcpu(cpu)) {
73
+
74
+ return MEMTX_OK;
74
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
75
+ }
75
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
76
if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
76
uint32_t rm) \
77
return MEMTX_OK;
77
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
78
}
78
DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
79
DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)
80
81
+DO_2OP_SAT_SCALAR(vqdmulh_scalarb, 1, int8_t, DO_QDMULH_B)
82
+DO_2OP_SAT_SCALAR(vqdmulh_scalarh, 2, int16_t, DO_QDMULH_H)
83
+DO_2OP_SAT_SCALAR(vqdmulh_scalarw, 4, int32_t, DO_QDMULH_W)
84
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
85
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
86
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
87
+
88
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
89
{
90
m &= 0xff;
91
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/translate-mve.c
94
+++ b/target/arm/translate-mve.c
95
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
96
DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
97
DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
98
DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
99
+DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
100
+DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
101
DO_2OP_SCALAR(VBRSR, vbrsr)
102
103
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
79
--
104
--
80
2.18.0
105
2.20.1
81
106
82
107
diff view generated by jsdifflib
New patch
1
1
Implement the MVE VQDMULL scalar insn. This multiplies the top or
2
bottom half of each element by the scalar, doubles and saturates
3
to a double-width result.
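
For illustration only (not part of the patch), a stand-alone C sketch of the 16x16->32 arithmetic and the one input pair that can saturate:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* 16x16->32 VQDMULL: multiply, double, saturate to 32 bits. */
static int32_t qdmull16(int16_t n, int16_t m, bool *sat)
{
    int64_t r = 2 * (int64_t)n * m;
    if (r > INT32_MAX) {
        r = INT32_MAX;
        *sat = true;
    } else if (r < INT32_MIN) {
        r = INT32_MIN;
        *sat = true;
    }
    return r;
}

int main(void)
{
    bool sat = false;
    printf("%#x\n", (unsigned)qdmull16(0x4000, 2, &sat));   /* 0x10000 */
    /* -32768 * -32768 doubles to 2^31 and saturates, setting QC */
    printf("%#x sat=%d\n", (unsigned)qdmull16(INT16_MIN, INT16_MIN, &sat), sat);
    return 0;
}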
4
5
Note that this encoding overlaps with VQADD and VQSUB; it uses
6
what in VQADD and VQSUB would be the 'size=0b11' encoding.
7
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210617121628.20116-30-peter.maydell@linaro.org
11
---
12
target/arm/helper-mve.h | 5 +++
13
target/arm/mve.decode | 23 +++++++++++---
14
target/arm/mve_helper.c | 65 ++++++++++++++++++++++++++++++++++++++
15
target/arm/translate-mve.c | 30 ++++++++++++++++++
16
4 files changed, 119 insertions(+), 4 deletions(-)
17
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-mve.h
21
+++ b/target/arm/helper-mve.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+
31
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
32
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
33
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
34
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/mve.decode
37
+++ b/target/arm/mve.decode
38
@@ -XXX,XX +XXX,XX @@
39
%qm 5:1 1:3
40
%qn 7:1 17:3
41
42
+# VQDMULL has size in bit 28: 0 for 16 bit, 1 for 32 bit
43
+%size_28 28:1 !function=plus_1
44
+
45
&vldr_vstr rn qd imm p a w size l u
46
&1op qd qm size
47
&2op qd qm qn size
48
@@ -XXX,XX +XXX,XX @@
49
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
50
51
@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
52
+@2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
53
54
# Vector loads and stores
55
56
@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
57
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
58
VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
59
60
-VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
61
-VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
62
-VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
63
-VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
64
+{
65
+ VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
66
+ VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
67
+ VQDMULLB_scalar 111 . 1110 0 . 11 ... 0 ... 0 1111 . 110 .... @2scalar_nosz \
68
+ size=%size_28
69
+}
70
+
71
+{
72
+ VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
73
+ VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
74
+ VQDMULLT_scalar 111 . 1110 0 . 11 ... 0 ... 1 1111 . 110 .... @2scalar_nosz \
75
+ size=%size_28
76
+}
77
+
78
VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
79
80
VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
81
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
82
83
+
84
# Predicate operations
85
%mask_22_13 22:1 13:3
86
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
87
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/target/arm/mve_helper.c
90
+++ b/target/arm/mve_helper.c
91
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
92
DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
93
DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
94
95
+/*
96
+ * Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
97
+ * input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
98
+ * SATMASK specifies which bits of the predicate mask matter for determining
99
+ * whether to propagate a saturation indication into FPSCR.QC -- for
100
+ * the 16x16->32 case we must check only the bit corresponding to the T or B
101
+ * half that we used, but for the 32x32->64 case we propagate if the mask
102
+ * bit is set for either half.
103
+ */
104
+#define DO_2OP_SAT_SCALAR_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK) \
105
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
106
+ uint32_t rm) \
107
+ { \
108
+ LTYPE *d = vd; \
109
+ TYPE *n = vn; \
110
+ TYPE m = rm; \
111
+ uint16_t mask = mve_element_mask(env); \
112
+ unsigned le; \
113
+ bool qc = false; \
114
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
115
+ bool sat = false; \
116
+ LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], m, &sat); \
117
+ mergemask(&d[H##LESIZE(le)], r, mask); \
118
+ qc |= sat && (mask & SATMASK); \
119
+ } \
120
+ if (qc) { \
121
+ env->vfp.qc[0] = qc; \
122
+ } \
123
+ mve_advance_vpt(env); \
124
+ }
125
+
126
+static inline int32_t do_qdmullh(int16_t n, int16_t m, bool *sat)
127
+{
128
+ int64_t r = ((int64_t)n * m) * 2;
129
+ return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat);
130
+}
131
+
132
+static inline int64_t do_qdmullw(int32_t n, int32_t m, bool *sat)
133
+{
134
+ /* The multiply can't overflow, but the doubling might */
135
+ int64_t r = (int64_t)n * m;
136
+ if (r > INT64_MAX / 2) {
137
+ *sat = true;
138
+ return INT64_MAX;
139
+ } else if (r < INT64_MIN / 2) {
140
+ *sat = true;
141
+ return INT64_MIN;
142
+ } else {
143
+ return r * 2;
144
+ }
145
+}
146
+
147
+#define SATMASK16B 1
148
+#define SATMASK16T (1 << 2)
149
+#define SATMASK32 ((1 << 4) | 1)
150
+
151
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarh, 0, 2, int16_t, 4, int32_t, \
152
+ do_qdmullh, SATMASK16B)
153
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarw, 0, 4, int32_t, 8, int64_t, \
154
+ do_qdmullw, SATMASK32)
155
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
156
+ do_qdmullh, SATMASK16T)
157
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
158
+ do_qdmullw, SATMASK32)
159
+
160
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
161
{
162
m &= 0xff;
163
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
164
index XXXXXXX..XXXXXXX 100644
165
--- a/target/arm/translate-mve.c
166
+++ b/target/arm/translate-mve.c
167
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
168
DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
169
DO_2OP_SCALAR(VBRSR, vbrsr)
170
171
+static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
172
+{
173
+ static MVEGenTwoOpScalarFn * const fns[] = {
174
+ NULL,
175
+ gen_helper_mve_vqdmullb_scalarh,
176
+ gen_helper_mve_vqdmullb_scalarw,
177
+ NULL,
178
+ };
179
+ if (a->qd == a->qn && a->size == MO_32) {
180
+ /* UNPREDICTABLE; we choose to undef */
181
+ return false;
182
+ }
183
+ return do_2op_scalar(s, a, fns[a->size]);
184
+}
185
+
186
+static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
187
+{
188
+ static MVEGenTwoOpScalarFn * const fns[] = {
189
+ NULL,
190
+ gen_helper_mve_vqdmullt_scalarh,
191
+ gen_helper_mve_vqdmullt_scalarw,
192
+ NULL,
193
+ };
194
+ if (a->qd == a->qn && a->size == MO_32) {
195
+ /* UNPREDICTABLE; we choose to undef */
196
+ return false;
197
+ }
198
+ return do_2op_scalar(s, a, fns[a->size]);
199
+}
200
+
201
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
202
MVEGenDualAccOpFn *fn)
203
{
204
--
205
2.20.1
206
207
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the vector forms of the MVE VQDMULH and VQRDMULH insns.
2
2
3
Implement virtualization extensions in gic_activate_irq() and
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
gic_drop_prio() and in gic_get_prio_from_apr_bits() called by
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
gic_drop_prio().
5
Message-id: 20210617121628.20116-31-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 8 ++++++++
8
target/arm/mve.decode | 3 +++
9
target/arm/mve_helper.c | 27 +++++++++++++++++++++++++++
10
target/arm/translate-mve.c | 2 ++
11
4 files changed, 40 insertions(+)
6
12
7
When the current CPU is a vCPU:
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
- Use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt
9
counterparts,
10
- the vCPU APR is stored in the virtual interface, in h_apr.
11
12
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 20180727095421.386-11-luc.michel@greensocs.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
hw/intc/arm_gic.c | 50 +++++++++++++++++++++++++++++++++++------------
18
1 file changed, 38 insertions(+), 12 deletions(-)
19
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gic.c
15
--- a/target/arm/helper-mve.h
23
+++ b/hw/intc/arm_gic.c
16
+++ b/target/arm/helper-mve.h
24
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
* and update the running priority.
18
DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
*/
19
DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
int prio = gic_get_group_priority(s, cpu, irq);
20
28
- int preemption_level = prio >> (GIC_MIN_BPR + 1);
21
+DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
22
+DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
+ int preemption_level = prio >> (min_bpr + 1);
23
+DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
int regno = preemption_level / 32;
24
+
32
int bitno = preemption_level % 32;
25
+DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
+ uint32_t *papr = NULL;
26
+DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
34
27
+DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
- if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
28
+
36
- s->nsapr[regno][cpu] |= (1 << bitno);
29
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+ if (gic_is_vcpu(cpu)) {
30
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+ assert(regno == 0);
31
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+ papr = &s->h_apr[gic_get_vcpu_real_id(cpu)];
32
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
40
+ } else if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
33
index XXXXXXX..XXXXXXX 100644
41
+ papr = &s->nsapr[regno][cpu];
34
--- a/target/arm/mve.decode
42
} else {
35
+++ b/target/arm/mve.decode
43
- s->apr[regno][cpu] |= (1 << bitno);
36
@@ -XXX,XX +XXX,XX @@ VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
44
+ papr = &s->apr[regno][cpu];
37
VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
38
VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
39
40
+VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
41
+VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
42
+
43
# Vector miscellaneous
44
45
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
46
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/mve_helper.c
49
+++ b/target/arm/mve_helper.c
50
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
51
mve_advance_vpt(env); \
45
}
52
}
46
53
47
+ *papr |= (1 << bitno);
54
+#define DO_2OP_SAT(OP, ESIZE, TYPE, FN) \
48
+
55
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
49
s->running_priority[cpu] = prio;
56
+ { \
50
gic_set_active(s, irq, cpu);
57
+ TYPE *d = vd, *n = vn, *m = vm; \
51
}
58
+ uint16_t mask = mve_element_mask(env); \
52
@@ -XXX,XX +XXX,XX @@ static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
59
+ unsigned e; \
53
* on the set bits in the Active Priority Registers.
60
+ bool qc = false; \
54
*/
61
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
55
int i;
62
+ bool sat = false; \
56
+
63
+ TYPE r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)], &sat); \
57
+ if (gic_is_vcpu(cpu)) {
64
+ mergemask(&d[H##ESIZE(e)], r, mask); \
58
+ uint32_t apr = s->h_apr[gic_get_vcpu_real_id(cpu)];
65
+ qc |= sat & mask & 1; \
59
+ if (apr) {
66
+ } \
60
+ return ctz32(apr) << (GIC_VIRT_MIN_BPR + 1);
67
+ if (qc) { \
61
+ } else {
68
+ env->vfp.qc[0] = qc; \
62
+ return 0x100;
69
+ } \
63
+ }
70
+ mve_advance_vpt(env); \
64
+ }
71
+ }
65
+
72
+
66
for (i = 0; i < GIC_NR_APRS; i++) {
73
#define DO_AND(N, M) ((N) & (M))
67
uint32_t apr = s->apr[i][cpu] | s->nsapr[i][cpu];
74
#define DO_BIC(N, M) ((N) & ~(M))
68
if (!apr) {
75
#define DO_ORR(N, M) ((N) | (M))
69
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
76
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
70
* running priority will be wrong, so interrupts that should preempt
77
#define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
71
* might not do so, and interrupts that should not preempt might do so.
78
INT32_MIN, INT32_MAX, s)
72
*/
79
73
- int i;
80
+DO_2OP_SAT(vqdmulhb, 1, int8_t, DO_QDMULH_B)
74
+ if (gic_is_vcpu(cpu)) {
81
+DO_2OP_SAT(vqdmulhh, 2, int16_t, DO_QDMULH_H)
75
+ int rcpu = gic_get_vcpu_real_id(cpu);
82
+DO_2OP_SAT(vqdmulhw, 4, int32_t, DO_QDMULH_W)
76
77
- for (i = 0; i < GIC_NR_APRS; i++) {
78
- uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
79
- if (!*papr) {
80
- continue;
81
+ if (s->h_apr[rcpu]) {
82
+ /* Clear lowest set bit */
83
+ s->h_apr[rcpu] &= s->h_apr[rcpu] - 1;
84
+ }
85
+ } else {
86
+ int i;
87
+
83
+
88
+ for (i = 0; i < GIC_NR_APRS; i++) {
84
+DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
89
+ uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
85
+DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
90
+ if (!*papr) {
86
+DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)
91
+ continue;
87
+
92
+ }
88
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
93
+ /* Clear lowest set bit */
89
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
94
+ *papr &= *papr - 1;
90
uint32_t rm) \
95
+ break;
91
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
96
}
92
index XXXXXXX..XXXXXXX 100644
97
- /* Clear lowest set bit */
93
--- a/target/arm/translate-mve.c
98
- *papr &= *papr - 1;
94
+++ b/target/arm/translate-mve.c
99
- break;
95
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
100
}
96
DO_2OP(VMULL_BU, vmullbu)
101
97
DO_2OP(VMULL_TS, vmullts)
102
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
98
DO_2OP(VMULL_TU, vmulltu)
99
+DO_2OP(VQDMULH, vqdmulh)
100
+DO_2OP(VQRDMULH, vqrdmulh)
101
102
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
103
MVEGenTwoOpScalarFn fn)
103
--
104
--
104
2.18.0
105
2.20.1
105
106
106
107
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the vector forms of the MVE VQADD and VQSUB insns.
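(The per-lane operation these plug into DO_2OP_SAT is an ordinary saturating
add or subtract; a stand-alone sketch of the signed 16-bit add follows - the
name is invented, and the DO_SQADD_*/DO_UQADD_* macros themselves are defined
elsewhere in the series:)

    #include <stdint.h>
    #include <stdbool.h>

    /* Signed saturating add of two 16-bit lanes; 'sat' is what feeds
     * FPSCR.QC via the qc accumulation in DO_2OP_SAT. */
    static int16_t sqadd16(int16_t a, int16_t b, bool *sat)
    {
        int32_t r = (int32_t)a + b;
        if (r > INT16_MAX) {
            *sat = true;
            return INT16_MAX;
        }
        if (r < INT16_MIN) {
            *sat = true;
            return INT16_MIN;
        }
        return (int16_t)r;
    }
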
2
2
3
Add the register definitions for the virtual interface of the GICv2.
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210617121628.20116-32-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 16 ++++++++++++++++
8
target/arm/mve.decode | 5 +++++
9
target/arm/mve_helper.c | 14 ++++++++++++++
10
target/arm/translate-mve.c | 4 ++++
11
4 files changed, 39 insertions(+)
4
12
5
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180727095421.386-7-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/gic_internal.h | 65 ++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 65 insertions(+)
12
13
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/gic_internal.h
15
--- a/target/arm/helper-mve.h
16
+++ b/hw/intc/gic_internal.h
16
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
18
#ifndef QEMU_ARM_GIC_INTERNAL_H
18
DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
#define QEMU_ARM_GIC_INTERNAL_H
19
DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
20
21
+#include "hw/registerfields.h"
21
+DEF_HELPER_FLAGS_4(mve_vqaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
#include "hw/intc/arm_gic.h"
22
+DEF_HELPER_FLAGS_4(mve_vqaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
23
+DEF_HELPER_FLAGS_4(mve_vqaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
#define ALL_CPU_MASK ((unsigned)(((1 << GIC_NCPU) - 1)))
25
@@ -XXX,XX +XXX,XX @@
26
#define GICC_CTLR_EOIMODE (1U << 9)
27
#define GICC_CTLR_EOIMODE_NS (1U << 10)
28
29
+REG32(GICH_HCR, 0x0)
30
+ FIELD(GICH_HCR, EN, 0, 1)
31
+ FIELD(GICH_HCR, UIE, 1, 1)
32
+ FIELD(GICH_HCR, LRENPIE, 2, 1)
33
+ FIELD(GICH_HCR, NPIE, 3, 1)
34
+ FIELD(GICH_HCR, VGRP0EIE, 4, 1)
35
+ FIELD(GICH_HCR, VGRP0DIE, 5, 1)
36
+ FIELD(GICH_HCR, VGRP1EIE, 6, 1)
37
+ FIELD(GICH_HCR, VGRP1DIE, 7, 1)
38
+ FIELD(GICH_HCR, EOICount, 27, 5)
39
+
24
+
40
+#define GICH_HCR_MASK \
25
+DEF_HELPER_FLAGS_4(mve_vqaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
41
+ (R_GICH_HCR_EN_MASK | R_GICH_HCR_UIE_MASK | \
26
+DEF_HELPER_FLAGS_4(mve_vqadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
42
+ R_GICH_HCR_LRENPIE_MASK | R_GICH_HCR_NPIE_MASK | \
27
+DEF_HELPER_FLAGS_4(mve_vqadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
43
+ R_GICH_HCR_VGRP0EIE_MASK | R_GICH_HCR_VGRP0DIE_MASK | \
44
+ R_GICH_HCR_VGRP1EIE_MASK | R_GICH_HCR_VGRP1DIE_MASK | \
45
+ R_GICH_HCR_EOICount_MASK)
46
+
28
+
47
+REG32(GICH_VTR, 0x4)
29
+DEF_HELPER_FLAGS_4(mve_vqsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
48
+ FIELD(GICH_VTR, ListRegs, 0, 6)
30
+DEF_HELPER_FLAGS_4(mve_vqsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
49
+ FIELD(GICH_VTR, PREbits, 26, 3)
31
+DEF_HELPER_FLAGS_4(mve_vqsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
50
+ FIELD(GICH_VTR, PRIbits, 29, 3)
51
+
32
+
52
+REG32(GICH_VMCR, 0x8)
33
+DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
53
+ FIELD(GICH_VMCR, VMCCtlr, 0, 10)
34
+DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
54
+ FIELD(GICH_VMCR, VMABP, 18, 3)
35
+DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
55
+ FIELD(GICH_VMCR, VMBP, 21, 3)
56
+ FIELD(GICH_VMCR, VMPriMask, 27, 5)
57
+
36
+
58
+REG32(GICH_MISR, 0x10)
37
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
59
+ FIELD(GICH_MISR, EOI, 0, 1)
38
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
60
+ FIELD(GICH_MISR, U, 1, 1)
39
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
61
+ FIELD(GICH_MISR, LRENP, 2, 1)
40
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
62
+ FIELD(GICH_MISR, NP, 3, 1)
41
index XXXXXXX..XXXXXXX 100644
63
+ FIELD(GICH_MISR, VGrp0E, 4, 1)
42
--- a/target/arm/mve.decode
64
+ FIELD(GICH_MISR, VGrp0D, 5, 1)
43
+++ b/target/arm/mve.decode
65
+ FIELD(GICH_MISR, VGrp1E, 6, 1)
44
@@ -XXX,XX +XXX,XX @@ VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
66
+ FIELD(GICH_MISR, VGrp1D, 7, 1)
45
VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
46
VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
47
48
+VQADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
49
+VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
50
+VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
51
+VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
67
+
52
+
68
+REG32(GICH_EISR0, 0x20)
53
# Vector miscellaneous
69
+REG32(GICH_EISR1, 0x24)
54
70
+REG32(GICH_ELRSR0, 0x30)
55
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
71
+REG32(GICH_ELRSR1, 0x34)
56
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
72
+REG32(GICH_APR, 0xf0)
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/mve_helper.c
59
+++ b/target/arm/mve_helper.c
60
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
61
DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
62
DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)
63
64
+DO_2OP_SAT(vqaddub, 1, uint8_t, DO_UQADD_B)
65
+DO_2OP_SAT(vqadduh, 2, uint16_t, DO_UQADD_H)
66
+DO_2OP_SAT(vqadduw, 4, uint32_t, DO_UQADD_W)
67
+DO_2OP_SAT(vqaddsb, 1, int8_t, DO_SQADD_B)
68
+DO_2OP_SAT(vqaddsh, 2, int16_t, DO_SQADD_H)
69
+DO_2OP_SAT(vqaddsw, 4, int32_t, DO_SQADD_W)
73
+
70
+
74
+REG32(GICH_LR0, 0x100)
71
+DO_2OP_SAT(vqsubub, 1, uint8_t, DO_UQSUB_B)
75
+ FIELD(GICH_LR0, VirtualID, 0, 10)
72
+DO_2OP_SAT(vqsubuh, 2, uint16_t, DO_UQSUB_H)
76
+ FIELD(GICH_LR0, PhysicalID, 10, 10)
73
+DO_2OP_SAT(vqsubuw, 4, uint32_t, DO_UQSUB_W)
77
+ FIELD(GICH_LR0, CPUID, 10, 3)
74
+DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
78
+ FIELD(GICH_LR0, EOI, 19, 1)
75
+DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
79
+ FIELD(GICH_LR0, Priority, 23, 5)
76
+DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
80
+ FIELD(GICH_LR0, State, 28, 2)
81
+ FIELD(GICH_LR0, Grp1, 30, 1)
82
+ FIELD(GICH_LR0, HW, 31, 1)
83
+
77
+
84
+/* Last LR register */
78
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
85
+REG32(GICH_LR63, 0x1fc)
79
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
86
+
80
uint32_t rm) \
87
+#define GICH_LR_MASK \
81
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
88
+ (R_GICH_LR0_VirtualID_MASK | R_GICH_LR0_PhysicalID_MASK | \
82
index XXXXXXX..XXXXXXX 100644
89
+ R_GICH_LR0_CPUID_MASK | R_GICH_LR0_EOI_MASK | \
83
--- a/target/arm/translate-mve.c
90
+ R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
84
+++ b/target/arm/translate-mve.c
91
+ R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
85
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_TS, vmullts)
92
+
86
DO_2OP(VMULL_TU, vmulltu)
93
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
87
DO_2OP(VQDMULH, vqdmulh)
94
* GICv2 and GICv2 with security extensions:
88
DO_2OP(VQRDMULH, vqrdmulh)
95
*/
89
+DO_2OP(VQADD_S, vqadds)
90
+DO_2OP(VQADD_U, vqaddu)
91
+DO_2OP(VQSUB_S, vqsubs)
92
+DO_2OP(VQSUB_U, vqsubu)
93
94
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
95
MVEGenTwoOpScalarFn fn)
96
--
96
--
97
2.18.0
97
2.20.1
98
98
99
99
1
We set up TLB entries in tlb_set_page_with_attrs(), where we have
1
Implement the MVE VQSHL insn (encoding T4, which is the
2
some logic for determining whether the TLB entry is considered
2
vector-shift-by-vector version).
3
to be RAM-backed, and thus has a valid addend field. When we
4
look at the TLB entry in get_page_addr_code(), we use different
5
logic for determining whether to treat the page as RAM-backed
6
and use the addend field. This is confusing, and in fact buggy,
7
because the code in tlb_set_page_with_attrs() correctly decides
8
that rom_device memory regions not in romd mode are not RAM-backed,
9
but the code in get_page_addr_code() thinks they are RAM-backed.
10
This typically results in "Bad ram pointer" assertion if the
11
guest tries to execute from such a memory region.
12
3
13
Fix this by making get_page_addr_code() just look at the
4
The DO_SQSHL_OP and DO_UQSHL_OP macros here are derived from
14
TLB_MMIO bit in the code_address field of the TLB, which
5
the neon_helper.c code for qshl_u{8,16,32} and qshl_s{8,16,32}.
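(Roughly, one lane of the unrounded operation looks like the stand-alone
sketch below.  Illustrative only: the shift count is the signed low byte of
the other operand's element, and the real do_sqrshl_bhs() also copes with
out-of-range counts, which this ignores.)

    #include <stdint.h>
    #include <stdbool.h>

    /* Saturating shift of a signed 16-bit element by a signed count:
     * positive counts shift left and may saturate, negative counts are
     * plain arithmetic right shifts.  Assumes -16 < shift < 16. */
    static int16_t qshl16(int16_t n, int8_t shift, bool *sat)
    {
        if (shift < 0) {
            return n >> -shift;           /* right shift cannot saturate */
        }
        int32_t r = n * (1 << shift);     /* fits: |r| <= 2^30 */
        if (r > INT16_MAX) {
            *sat = true;
            return INT16_MAX;
        }
        if (r < INT16_MIN) {
            *sat = true;
            return INT16_MIN;
        }
        return (int16_t)r;
    }
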
15
tlb_set_page_with_attrs() sets if and only if the addend
16
field is not valid for code execution.
17
6
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20210617121628.20116-33-peter.maydell@linaro.org
21
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
22
---
10
---
23
include/exec/exec-all.h | 2 --
11
target/arm/helper-mve.h | 8 ++++++++
24
accel/tcg/cputlb.c | 29 ++++++++---------------------
12
target/arm/mve.decode | 12 ++++++++++++
25
exec.c | 6 ------
13
target/arm/mve_helper.c | 34 ++++++++++++++++++++++++++++++++++
26
3 files changed, 8 insertions(+), 29 deletions(-)
14
target/arm/translate-mve.c | 2 ++
15
4 files changed, 56 insertions(+)
27
16
28
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
29
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
30
--- a/include/exec/exec-all.h
19
--- a/target/arm/helper-mve.h
31
+++ b/include/exec/exec-all.h
20
+++ b/target/arm/helper-mve.h
32
@@ -XXX,XX +XXX,XX @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
33
hwaddr paddr, hwaddr xlat,
22
DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
34
int prot,
23
DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
target_ulong *address);
24
36
-bool memory_region_is_unassigned(MemoryRegion *mr);
25
+DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
-
26
+DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
38
#endif
27
+DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
39
28
+
40
/* vl.c */
29
+DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
41
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
30
+DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
+DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
+
33
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
42
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/cputlb.c
38
--- a/target/arm/mve.decode
44
+++ b/accel/tcg/cputlb.c
39
+++ b/target/arm/mve.decode
45
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
40
@@ -XXX,XX +XXX,XX @@
46
{
41
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
47
int mmu_idx, index;
42
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
48
void *p;
43
49
- MemoryRegion *mr;
44
+# The _rev suffix indicates that Vn and Vm are reversed. This is
50
- MemoryRegionSection *section;
45
+# the case for shifts. In the Arm ARM these insns are documented
51
- CPUState *cpu = ENV_GET_CPU(env);
46
+# with the Vm and Vn fields in their usual places, but in the
52
- CPUIOTLBEntry *iotlbentry;
47
+# assembly the operands are listed "backwards", ie in the order
53
48
+# Qd, Qm, Qn where other insns use Qd, Qn, Qm. For QEMU we choose
54
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
49
+# to consider Vm and Vn as being in different fields in the insn.
55
mmu_idx = cpu_mmu_index(env, true);
50
+# This gives us consistency with A64 and Neon.
56
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
51
+@2op_rev .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qn qn=%qm
57
assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
52
+
53
@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
54
@2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
55
56
@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
57
VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
58
VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
59
60
+VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
61
+VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
62
+
63
# Vector miscellaneous
64
65
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
66
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/mve_helper.c
69
+++ b/target/arm/mve_helper.c
70
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
71
mve_advance_vpt(env); \
58
}
72
}
59
73
60
- if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
74
+/* provide unsigned 2-op helpers for all sizes */
61
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code &
75
+#define DO_2OP_SAT_U(OP, FN) \
62
+ (TLB_RECHECK | TLB_MMIO))) {
76
+ DO_2OP_SAT(OP##b, 1, uint8_t, FN) \
63
/*
77
+ DO_2OP_SAT(OP##h, 2, uint16_t, FN) \
64
- * This is a TLB_RECHECK access, where the MMU protection
78
+ DO_2OP_SAT(OP##w, 4, uint32_t, FN)
65
- * covers a smaller range than a target page. Return -1 to
79
+
66
- * indicate that we cannot simply execute from RAM here;
80
+/* provide signed 2-op helpers for all sizes */
67
- * we will perform the necessary repeat of the MMU check
81
+#define DO_2OP_SAT_S(OP, FN) \
68
- * when the "execute a single insn" code performs the
82
+ DO_2OP_SAT(OP##b, 1, int8_t, FN) \
69
- * load of the guest insn.
83
+ DO_2OP_SAT(OP##h, 2, int16_t, FN) \
70
+ * Return -1 if we can't translate and execute from an entire
84
+ DO_2OP_SAT(OP##w, 4, int32_t, FN)
71
+ * page of RAM here, which will cause us to execute by loading
85
+
72
+ * and translating one insn at a time, without caching:
86
#define DO_AND(N, M) ((N) & (M))
73
+ * - TLB_RECHECK: means the MMU protection covers a smaller range
87
#define DO_BIC(N, M) ((N) & ~(M))
74
+ * than a target page, so we must redo the MMU check every insn
88
#define DO_ORR(N, M) ((N) | (M))
75
+ * - TLB_MMIO: region is not backed by RAM
89
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
76
*/
90
DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
77
return -1;
91
DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
78
}
92
79
93
+/*
80
- iotlbentry = &env->iotlb[mmu_idx][index];
94
+ * This wrapper fixes up the impedance mismatch between do_sqrshl_bhs()
81
- section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
95
+ * and friends wanting a uint32_t* sat and our needing a bool*.
82
- mr = section->mr;
96
+ */
83
- if (memory_region_is_unassigned(mr)) {
97
+#define WRAP_QRSHL_HELPER(FN, N, M, ROUND, satp) \
84
- /*
98
+ ({ \
85
- * Not guest RAM, so there is no ram_addr_t for it. Return -1,
99
+ uint32_t su32 = 0; \
86
- * and we will execute a single insn from this device.
100
+ typeof(N) r = FN(N, (int8_t)(M), sizeof(N) * 8, ROUND, &su32); \
87
- */
101
+ if (su32) { \
88
- return -1;
102
+ *satp = true; \
89
- }
103
+ } \
90
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
104
+ r; \
91
return qemu_ram_addr_from_host_nofail(p);
105
+ })
92
}
106
+
93
diff --git a/exec.c b/exec.c
107
+#define DO_SQSHL_OP(N, M, satp) \
108
+ WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
109
+#define DO_UQSHL_OP(N, M, satp) \
110
+ WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
111
+
112
+DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
113
+DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
114
+
115
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
116
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
117
uint32_t rm) \
118
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
94
index XXXXXXX..XXXXXXX 100644
119
index XXXXXXX..XXXXXXX 100644
95
--- a/exec.c
120
--- a/target/arm/translate-mve.c
96
+++ b/exec.c
121
+++ b/target/arm/translate-mve.c
97
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
122
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
98
}
123
DO_2OP(VQADD_U, vqaddu)
99
}
124
DO_2OP(VQSUB_S, vqsubs)
100
125
DO_2OP(VQSUB_U, vqsubu)
101
-bool memory_region_is_unassigned(MemoryRegion *mr)
126
+DO_2OP(VQSHL_S, vqshls)
102
-{
127
+DO_2OP(VQSHL_U, vqshlu)
103
- return mr != &io_mem_rom && mr != &io_mem_notdirty && !mr->rom_device
128
104
- && mr != &io_mem_watch;
129
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
105
-}
130
MVEGenTwoOpScalarFn fn)
106
-
107
/* Called from RCU critical section */
108
static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
109
hwaddr addr,
110
--
131
--
111
2.18.0
132
2.20.1
112
133
113
134
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VQRSHL (vector) insn.  Again, the code to perform
2
the actual shifts is borrowed from neon_helper.c.
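(The rounding in question is the usual "add half, then shift": for a right
shift the helper adds half the weight of the bits about to be dropped before
shifting.  A minimal stand-alone illustration, name invented:)

    #include <stdint.h>

    /* Rounding arithmetic right shift of a 16-bit lane, 1 <= shift <= 15. */
    static int16_t rshr16(int16_t n, unsigned shift)
    {
        return (int16_t)(((int32_t)n + (1 << (shift - 1))) >> shift);
    }

Positive shift counts behave as for VQSHL, i.e. as saturating left shifts.
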
2
3
3
Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
a VMState.
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210617121628.20116-34-peter.maydell@linaro.org
7
---
8
target/arm/helper-mve.h | 8 ++++++++
9
target/arm/mve.decode | 3 +++
10
target/arm/mve_helper.c | 6 ++++++
11
target/arm/translate-mve.c | 2 ++
12
4 files changed, 19 insertions(+)
5
13
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-5-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/migration/vmstate.h | 3 +++
13
1 file changed, 3 insertions(+)
14
15
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/include/migration/vmstate.h
16
--- a/target/arm/helper-mve.h
18
+++ b/include/migration/vmstate.h
17
+++ b/target/arm/helper-mve.h
19
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
#define VMSTATE_UINT16_ARRAY(_f, _s, _n) \
19
DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
VMSTATE_UINT16_ARRAY_V(_f, _s, _n, 0)
20
DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
21
23
+#define VMSTATE_UINT16_SUB_ARRAY(_f, _s, _start, _num) \
22
+DEF_HELPER_FLAGS_4(mve_vqrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_uint16, uint16_t)
23
+DEF_HELPER_FLAGS_4(mve_vqrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+DEF_HELPER_FLAGS_4(mve_vqrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+
25
+
26
#define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
26
+DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
VMSTATE_UINT16_2DARRAY_V(_f, _s, _n1, _n2, 0)
27
+DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
28
+DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+
30
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
38
VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
39
VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
40
41
+VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
42
+VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
43
+
44
# Vector miscellaneous
45
46
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
47
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/mve_helper.c
50
+++ b/target/arm/mve_helper.c
51
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
52
WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
53
#define DO_UQSHL_OP(N, M, satp) \
54
WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
55
+#define DO_SQRSHL_OP(N, M, satp) \
56
+ WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, true, satp)
57
+#define DO_UQRSHL_OP(N, M, satp) \
58
+ WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, true, satp)
59
60
DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
61
DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
62
+DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
63
+DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)
64
65
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
66
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
67
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/arm/translate-mve.c
70
+++ b/target/arm/translate-mve.c
71
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
72
DO_2OP(VQSUB_U, vqsubu)
73
DO_2OP(VQSHL_S, vqshls)
74
DO_2OP(VQSHL_U, vqshlu)
75
+DO_2OP(VQRSHL_S, vqrshls)
76
+DO_2OP(VQRSHL_U, vqrshlu)
77
78
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
79
MVEGenTwoOpScalarFn fn)
29
--
80
--
30
2.18.0
81
2.20.1
31
82
32
83
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VSHL insn (vector form).
2
2
3
Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2.
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
These registers allow setting or clearing the active state of an IRQ in the
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
distributor.
5
Message-id: 20210617121628.20116-35-peter.maydell@linaro.org
6
---
7
target/arm/helper-mve.h | 8 ++++++++
8
target/arm/mve.decode | 3 +++
9
target/arm/mve_helper.c | 6 ++++++
10
target/arm/translate-mve.c | 2 ++
11
4 files changed, 19 insertions(+)
6
12
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180727095421.386-3-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/intc/arm_gic.c | 61 +++++++++++++++++++++++++++++++++++++++++++----
13
1 file changed, 57 insertions(+), 4 deletions(-)
14
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gic.c
15
--- a/target/arm/helper-mve.h
18
+++ b/hw/intc/arm_gic.c
16
+++ b/target/arm/helper-mve.h
19
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
}
18
DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
}
19
DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
} else if (offset < 0x400) {
20
23
- /* Interrupt Active. */
21
+DEF_HELPER_FLAGS_4(mve_vshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
- irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
22
+DEF_HELPER_FLAGS_4(mve_vshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+ /* Interrupt Set/Clear Active. */
23
+DEF_HELPER_FLAGS_4(mve_vshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+ if (offset < 0x380) {
27
+ irq = (offset - 0x300) * 8;
28
+ } else if (s->revision == 2) {
29
+ irq = (offset - 0x380) * 8;
30
+ } else {
31
+ goto bad_reg;
32
+ }
33
+
24
+
34
+ irq += GIC_BASE_IRQ;
25
+DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
if (irq >= s->num_irq)
26
+DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
goto bad_reg;
27
+DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
res = 0;
38
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
39
GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
40
}
41
}
42
+ } else if (offset < 0x380) {
43
+ /* Interrupt Set Active. */
44
+ if (s->revision != 2) {
45
+ goto bad_reg;
46
+ }
47
+
28
+
48
+ irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
29
DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
49
+ if (irq >= s->num_irq) {
30
DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
50
+ goto bad_reg;
31
DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
51
+ }
32
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/mve.decode
35
+++ b/target/arm/mve.decode
36
@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
37
VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
38
VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
39
40
+VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
41
+VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
52
+
42
+
53
+ /* This register is banked per-cpu for PPIs */
43
VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
54
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
44
VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
45
46
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/mve_helper.c
49
+++ b/target/arm/mve_helper.c
50
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
51
DO_2OP_S(vhsubs, do_vhsub_s)
52
DO_2OP_U(vhsubu, do_vhsub_u)
53
54
+#define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
55
+#define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
55
+
56
+
56
+ for (i = 0; i < 8; i++) {
57
+DO_2OP_S(vshls, DO_VSHLS)
57
+ if (s->security_extn && !attrs.secure &&
58
+DO_2OP_U(vshlu, DO_VSHLU)
58
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
59
+ continue; /* Ignore Non-secure access of Group0 IRQ */
60
+ }
61
+
59
+
62
+ if (value & (1 << i)) {
60
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
63
+ GIC_DIST_SET_ACTIVE(irq + i, cm);
61
{
64
+ }
62
if (val > max) {
65
+ }
63
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
66
} else if (offset < 0x400) {
64
index XXXXXXX..XXXXXXX 100644
67
- /* Interrupt Active. */
65
--- a/target/arm/translate-mve.c
68
- goto bad_reg;
66
+++ b/target/arm/translate-mve.c
69
+ /* Interrupt Clear Active. */
67
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
70
+ if (s->revision != 2) {
68
DO_2OP(VQADD_U, vqaddu)
71
+ goto bad_reg;
69
DO_2OP(VQSUB_S, vqsubs)
72
+ }
70
DO_2OP(VQSUB_U, vqsubu)
73
+
71
+DO_2OP(VSHL_S, vshls)
74
+ irq = (offset - 0x380) * 8 + GIC_BASE_IRQ;
72
+DO_2OP(VSHL_U, vshlu)
75
+ if (irq >= s->num_irq) {
73
DO_2OP(VQSHL_S, vqshls)
76
+ goto bad_reg;
74
DO_2OP(VQSHL_U, vqshlu)
77
+ }
75
DO_2OP(VQRSHL_S, vqrshls)
78
+
79
+ /* This register is banked per-cpu for PPIs */
80
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
81
+
82
+ for (i = 0; i < 8; i++) {
83
+ if (s->security_extn && !attrs.secure &&
84
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
85
+ continue; /* Ignore Non-secure access of Group0 IRQ */
86
+ }
87
+
88
+ if (value & (1 << i)) {
89
+ GIC_DIST_CLEAR_ACTIVE(irq + i, cm);
90
+ }
91
+ }
92
} else if (offset < 0x800) {
93
/* Interrupt Priority. */
94
irq = (offset - 0x400) + GIC_BASE_IRQ;
95
--
76
--
96
2.18.0
77
2.20.1
97
78
98
79
1
When we support execution from non-RAM MMIO regions, get_page_addr_code()
1
Implement the MVE VRSHL insn (vector form).
2
will return -1 to indicate that there is no RAM at the requested address.
3
Handle this in the cpu-exec TB hashtable lookup code, treating it as
4
"no match found".
5
6
Note that the call to get_page_addr_code() in tb_lookup_cmp() needs
7
no changes -- a return of -1 will already correctly result in the
8
function returning false.
9
2
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Emilio G. Cota <cota@braap.org>
5
Message-id: 20210617121628.20116-36-peter.maydell@linaro.org
13
Tested-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
15
---
6
---
16
accel/tcg/cpu-exec.c | 3 +++
7
target/arm/helper-mve.h | 8 ++++++++
17
1 file changed, 3 insertions(+)
8
target/arm/mve.decode | 3 +++
9
target/arm/mve_helper.c | 4 ++++
10
target/arm/translate-mve.c | 2 ++
11
4 files changed, 17 insertions(+)
18
12
19
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
20
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
21
--- a/accel/tcg/cpu-exec.c
15
--- a/target/arm/helper-mve.h
22
+++ b/accel/tcg/cpu-exec.c
16
+++ b/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
desc.trace_vcpu_dstate = *cpu->trace_dstate;
18
DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
desc.pc = pc;
19
DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
phys_pc = get_page_addr_code(desc.env, pc);
20
27
+ if (phys_pc == -1) {
21
+DEF_HELPER_FLAGS_4(mve_vrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+ return NULL;
22
+DEF_HELPER_FLAGS_4(mve_vrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+ }
23
+DEF_HELPER_FLAGS_4(mve_vrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
24
+
31
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
25
+DEF_HELPER_FLAGS_4(mve_vrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
26
+DEF_HELPER_FLAGS_4(mve_vrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+DEF_HELPER_FLAGS_4(mve_vrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+
29
DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
31
DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
32
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/mve.decode
35
+++ b/target/arm/mve.decode
36
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
37
VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
38
VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
39
40
+VRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
41
+VRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
42
+
43
VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
44
VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
45
46
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/mve_helper.c
49
+++ b/target/arm/mve_helper.c
50
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
51
52
#define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
53
#define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
54
+#define DO_VRSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)
55
+#define DO_VRSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)
56
57
DO_2OP_S(vshls, DO_VSHLS)
58
DO_2OP_U(vshlu, DO_VSHLU)
59
+DO_2OP_S(vrshls, DO_VRSHLS)
60
+DO_2OP_U(vrshlu, DO_VRSHLU)
61
62
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
63
{
64
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate-mve.c
67
+++ b/target/arm/translate-mve.c
68
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
69
DO_2OP(VQSUB_U, vqsubu)
70
DO_2OP(VSHL_S, vshls)
71
DO_2OP(VSHL_U, vshlu)
72
+DO_2OP(VRSHL_S, vrshls)
73
+DO_2OP(VRSHL_U, vrshlu)
74
DO_2OP(VQSHL_S, vqshls)
75
DO_2OP(VQSHL_U, vqshlu)
76
DO_2OP(VQRSHL_S, vqrshls)
33
--
77
--
34
2.18.0
78
2.20.1
35
79
36
80
1
The io_readx() function needs to know whether the load it is
1
Implement the MVE VQDMLADH and VQRDMLADH insns. These multiply
2
doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
2
elements, and then add pairs of products, double, possibly round,
3
can pass the right value to the cpu_transaction_failed()
3
saturate and return the high half of the result.
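(A stand-alone sketch of that per-pair arithmetic for the 16-bit element
size; illustrative only, the name is invented and the real helpers are in
the mve_helper.c hunk below:)

    #include <stdint.h>
    #include <stdbool.h>

    /* (a*b + c*d) * 2, optionally rounded, saturated to 32 bits; the
     * insn result is the high half.  'round' is 0 for VQDMLADH and 1
     * for VQRDMLADH. */
    static int16_t qdmladh16(int16_t a, int16_t b, int16_t c, int16_t d,
                             int round, bool *sat)
    {
        int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2
                    + ((int64_t)round << 15);
        if (r > INT32_MAX) {
            *sat = true;
            r = INT32_MAX;
        } else if (r < INT32_MIN) {
            *sat = true;
            r = INT32_MIN;
        }
        return (int16_t)(r >> 16);    /* high half of the saturated value */
    }
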
4
function. Plumb this information through from the softmmu
5
code.
6
7
This is currently not often going to give the wrong answer,
8
because usually instruction fetches go via get_page_addr_code().
9
However once we switch over to handling execution from non-RAM by
10
creating single-insn TBs, the path for an insn fetch to generate
11
a bus error will be through cpu_ld*_code() and io_readx(),
12
so without this change we will generate a d-side fault when we
13
should generate an i-side fault.
14
15
We also have to pass the access type via a CPU struct global
16
down to unassigned_mem_read(), for the benefit of the targets
17
which still use the cpu_unassigned_access() hook (m68k, mips,
18
sparc, xtensa).
19
4
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210617121628.20116-37-peter.maydell@linaro.org
23
Tested-by: Cédric Le Goater <clg@kaod.org>
24
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
25
---
8
---
26
accel/tcg/softmmu_template.h | 11 +++++++----
9
target/arm/helper-mve.h | 16 +++++++
27
include/qom/cpu.h | 6 ++++++
10
target/arm/mve.decode | 5 +++
28
accel/tcg/cputlb.c | 5 +++--
11
target/arm/mve_helper.c | 89 ++++++++++++++++++++++++++++++++++++++
29
memory.c | 3 ++-
12
target/arm/translate-mve.c | 4 ++
30
4 files changed, 18 insertions(+), 7 deletions(-)
13
4 files changed, 114 insertions(+)
31
14
32
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
33
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
34
--- a/accel/tcg/softmmu_template.h
17
--- a/target/arm/helper-mve.h
35
+++ b/accel/tcg/softmmu_template.h
18
+++ b/target/arm/helper-mve.h
36
@@ -XXX,XX +XXX,XX @@ static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
size_t mmu_idx, size_t index,
20
DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
38
target_ulong addr,
21
DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
39
uintptr_t retaddr,
22
40
- bool recheck)
23
+DEF_HELPER_FLAGS_4(mve_vqdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
41
+ bool recheck,
24
+DEF_HELPER_FLAGS_4(mve_vqdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
42
+ MMUAccessType access_type)
25
+DEF_HELPER_FLAGS_4(mve_vqdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
43
{
26
+
44
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
27
+DEF_HELPER_FLAGS_4(mve_vqdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
28
+DEF_HELPER_FLAGS_4(mve_vqdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
46
- DATA_SIZE);
29
+DEF_HELPER_FLAGS_4(mve_vqdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
47
+ access_type, DATA_SIZE);
30
+
48
}
31
+DEF_HELPER_FLAGS_4(mve_vqrdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
49
#endif
32
+DEF_HELPER_FLAGS_4(mve_vqrdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
50
33
+DEF_HELPER_FLAGS_4(mve_vqrdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
51
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
34
+
52
/* ??? Note that the io helpers always read data in the target
35
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
53
byte ordering. We should push the LE/BE request down into io. */
36
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
54
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
37
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
55
- tlb_addr & TLB_RECHECK);
38
+
56
+ tlb_addr & TLB_RECHECK,
39
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
57
+ READ_ACCESS_TYPE);
40
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
58
res = TGT_LE(res);
41
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
59
return res;
42
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
60
}
61
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
62
/* ??? Note that the io helpers always read data in the target
63
byte ordering. We should push the LE/BE request down into io. */
64
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
65
- tlb_addr & TLB_RECHECK);
66
+ tlb_addr & TLB_RECHECK,
67
+ READ_ACCESS_TYPE);
68
res = TGT_BE(res);
69
return res;
70
}
71
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
72
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
73
--- a/include/qom/cpu.h
44
--- a/target/arm/mve.decode
74
+++ b/include/qom/cpu.h
45
+++ b/target/arm/mve.decode
75
@@ -XXX,XX +XXX,XX @@ struct CPUState {
46
@@ -XXX,XX +XXX,XX @@ VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
76
*/
47
VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
77
uintptr_t mem_io_pc;
48
VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
78
vaddr mem_io_vaddr;
49
50
+VQDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
51
+VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
52
+VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
53
+VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
54
+
55
# Vector miscellaneous
56
57
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
58
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mve_helper.c
61
+++ b/target/arm/mve_helper.c
62
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
63
DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
64
DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)
65
66
+/*
67
+ * Multiply add dual returning high half
68
+ * The 'FN' here takes four inputs A, B, C, D, a 0/1 indicator of
69
+ * whether to add the rounding constant, and the pointer to the
70
+ * saturation flag, and should do "(A * B + C * D) * 2 + rounding constant",
71
+ * saturate to twice the input size and return the high half; or
72
+ * (A * B - C * D) etc for VQDMLSDH.
73
+ */
74
+#define DO_VQDMLADH_OP(OP, ESIZE, TYPE, XCHG, ROUND, FN) \
75
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
76
+ void *vm) \
77
+ { \
78
+ TYPE *d = vd, *n = vn, *m = vm; \
79
+ uint16_t mask = mve_element_mask(env); \
80
+ unsigned e; \
81
+ bool qc = false; \
82
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
83
+ bool sat = false; \
84
+ if ((e & 1) == XCHG) { \
85
+ TYPE r = FN(n[H##ESIZE(e)], \
86
+ m[H##ESIZE(e - XCHG)], \
87
+ n[H##ESIZE(e + (1 - 2 * XCHG))], \
88
+ m[H##ESIZE(e + (1 - XCHG))], \
89
+ ROUND, &sat); \
90
+ mergemask(&d[H##ESIZE(e)], r, mask); \
91
+ qc |= sat & mask & 1; \
92
+ } \
93
+ } \
94
+ if (qc) { \
95
+ env->vfp.qc[0] = qc; \
96
+ } \
97
+ mve_advance_vpt(env); \
98
+ }
99
+
100
+static int8_t do_vqdmladh_b(int8_t a, int8_t b, int8_t c, int8_t d,
101
+ int round, bool *sat)
102
+{
103
+ int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 7);
104
+ return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
105
+}
106
+
107
+static int16_t do_vqdmladh_h(int16_t a, int16_t b, int16_t c, int16_t d,
108
+ int round, bool *sat)
109
+{
110
+ int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 15);
111
+ return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
112
+}
113
+
114
+static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
115
+ int round, bool *sat)
116
+{
117
+ int64_t m1 = (int64_t)a * b;
118
+ int64_t m2 = (int64_t)c * d;
119
+ int64_t r;
79
+ /*
120
+ /*
80
+ * This is only needed for the legacy cpu_unassigned_access() hook;
121
+ * Architecturally we should do the entire add, double, round
81
+ * when all targets using it have been converted to use
122
+ * and then check for saturation. We do three saturating adds,
82
+ * cpu_transaction_failed() instead it can be removed.
123
+ * but we need to be careful about the order. If the first
124
+ * m1 + m2 saturates then it's impossible for the *2+rc to
125
+ * bring it back into the non-saturated range. However, if
126
+ * m1 + m2 is negative then it's possible that doing the doubling
127
+ * would take the intermediate result below INT64_MAX and the
128
+ * addition of the rounding constant then brings it back in range.
129
+ * So we add half the rounding constant before doubling rather
130
+ * than adding the rounding constant after the doubling.
83
+ */
131
+ */
84
+ MMUAccessType mem_io_access_type;
132
+ if (sadd64_overflow(m1, m2, &r) ||
85
133
+ sadd64_overflow(r, (round << 30), &r) ||
86
int kvm_fd;
134
+ sadd64_overflow(r, r, &r)) {
87
struct KVMState *kvm_state;
135
+ *sat = true;
88
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
136
+ return r < 0 ? INT32_MAX : INT32_MIN;
137
+ }
138
+ return r >> 32;
139
+}
140
+
141
+DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
142
+DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
143
+DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
144
+DO_VQDMLADH_OP(vqdmladhxb, 1, int8_t, 1, 0, do_vqdmladh_b)
145
+DO_VQDMLADH_OP(vqdmladhxh, 2, int16_t, 1, 0, do_vqdmladh_h)
146
+DO_VQDMLADH_OP(vqdmladhxw, 4, int32_t, 1, 0, do_vqdmladh_w)
147
+
148
+DO_VQDMLADH_OP(vqrdmladhb, 1, int8_t, 0, 1, do_vqdmladh_b)
149
+DO_VQDMLADH_OP(vqrdmladhh, 2, int16_t, 0, 1, do_vqdmladh_h)
150
+DO_VQDMLADH_OP(vqrdmladhw, 4, int32_t, 0, 1, do_vqdmladh_w)
151
+DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
152
+DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
153
+DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)
154
+
155
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
156
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
157
uint32_t rm) \
158
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
89
index XXXXXXX..XXXXXXX 100644
159
index XXXXXXX..XXXXXXX 100644
90
--- a/accel/tcg/cputlb.c
160
--- a/target/arm/translate-mve.c
91
+++ b/accel/tcg/cputlb.c
161
+++ b/target/arm/translate-mve.c
92
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
162
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSHL_S, vqshls)
93
static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
163
DO_2OP(VQSHL_U, vqshlu)
94
int mmu_idx,
164
DO_2OP(VQRSHL_S, vqrshls)
95
target_ulong addr, uintptr_t retaddr,
165
DO_2OP(VQRSHL_U, vqrshlu)
96
- bool recheck, int size)
166
+DO_2OP(VQDMLADH, vqdmladh)
97
+ bool recheck, MMUAccessType access_type, int size)
167
+DO_2OP(VQDMLADHX, vqdmladhx)
98
{
168
+DO_2OP(VQRDMLADH, vqrdmladh)
99
CPUState *cpu = ENV_GET_CPU(env);
169
+DO_2OP(VQRDMLADHX, vqrdmladhx)
100
hwaddr mr_offset;
170
101
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
171
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
102
}
172
MVEGenTwoOpScalarFn fn)
103
104
cpu->mem_io_vaddr = addr;
105
+ cpu->mem_io_access_type = access_type;
106
107
if (mr->global_locking && !qemu_mutex_iothread_locked()) {
108
qemu_mutex_lock_iothread();
109
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
110
section->offset_within_address_space -
111
section->offset_within_region;
112
113
- cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
114
+ cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
115
mmu_idx, iotlbentry->attrs, r, retaddr);
116
}
117
if (locked) {
118
diff --git a/memory.c b/memory.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/memory.c
121
+++ b/memory.c
122
@@ -XXX,XX +XXX,XX @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
123
printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
124
#endif
125
if (current_cpu != NULL) {
126
- cpu_unassigned_access(current_cpu, addr, false, false, 0, size);
127
+ bool is_exec = current_cpu->mem_io_access_type == MMU_INST_FETCH;
128
+ cpu_unassigned_access(current_cpu, addr, false, is_exec, 0, size);
129
}
130
return 0;
131
}
132
--
2.18.0

--
2.20.1
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VQDMLSDH and VQRDMLSDH insns, which are
2
like VQDMLADH and VQRDMLADH except that products are subtracted
3
rather than added.
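
As an illustrative sketch only (not part of the patch), one signed 16-bit
lane of VQDMLSDH (the non-rounding form) works out as follows, mirroring
the do_vqdmlsdh_h helper added below; the function name here is hypothetical:

    #include <stdint.h>

    /* (a*b - c*d), doubled, saturated to 32 bits; the high half is the result */
    static int16_t vqdmlsdh_lane16(int16_t a, int16_t b, int16_t c, int16_t d)
    {
        int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2;
        if (r > INT32_MAX) {
            r = INT32_MAX;
        } else if (r < INT32_MIN) {
            r = INT32_MIN;
        }
        return r >> 16;
    }

    /* e.g. a = b = 0x4000, c = d = 0x2000:
     * (0x10000000 - 0x04000000) * 2 = 0x18000000, so the lane result is 0x1800.
     */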
2
4
3
Add some helper macros and functions related to the virtualization
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
extensions to gic_internal.h.
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210617121628.20116-38-peter.maydell@linaro.org
8
---
9
target/arm/helper-mve.h | 16 ++++++++++++++
10
target/arm/mve.decode | 5 +++++
11
target/arm/mve_helper.c | 44 ++++++++++++++++++++++++++++++++++++++
12
target/arm/translate-mve.c | 4 ++++
13
4 files changed, 69 insertions(+)
5
14
6
The GICH_LR_* macros help extracting specific fields of a list register
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
7
value. The only tricky one is the priority field, as only the MSBs are
8
stored. The value must be shifted accordingly to obtain the correct
9
priority value.
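
As a sketch of that shift (illustrative only, assuming the architectural
GICH_LR layout where Priority occupies bits [27:23]; the function name is
hypothetical):

    #include <stdint.h>

    /* Only the top 5 bits of the 8-bit priority are stored in the LR, so the
     * stored field has to be shifted back up before it can be compared with
     * ordinary 8-bit priorities elsewhere in the GIC model.
     */
    static uint8_t gich_lr_priority_sketch(uint32_t lr)
    {
        uint32_t stored = (lr >> 23) & 0x1f;
        return stored << 3;           /* e.g. stored 0x10 -> priority 0x80 */
    }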
10
11
gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation
12
to abstract the fact that vCPU ids are in the range
13
[ GIC_NCPU; (GIC_NCPU + num_cpu) [.
14
15
gic_lr_* and gic_virq_is_valid() help with the list registers.
16
gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It
17
is meant to be used in contexts where we know for sure that the entry
18
exists, so we assert that the entry is actually found, and the caller can
19
avoid the NULL check on the returned pointer.
20
21
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
22
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
23
Message-id: 20180727095421.386-8-luc.michel@greensocs.com
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
26
hw/intc/gic_internal.h | 74 ++++++++++++++++++++++++++++++++++++++++++
27
hw/intc/arm_gic.c | 5 +++
28
2 files changed, 79 insertions(+)
29
30
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
31
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/intc/gic_internal.h
17
--- a/target/arm/helper-mve.h
33
+++ b/hw/intc/gic_internal.h
18
+++ b/target/arm/helper-mve.h
34
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
35
R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
20
DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
21
DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
22
38
+#define GICH_LR_STATE_INVALID 0
23
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
39
+#define GICH_LR_STATE_PENDING 1
24
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
40
+#define GICH_LR_STATE_ACTIVE 2
25
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
41
+#define GICH_LR_STATE_ACTIVE_PENDING 3
42
+
26
+
43
+#define GICH_LR_VIRT_ID(entry) (FIELD_EX32(entry, GICH_LR0, VirtualID))
27
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
44
+#define GICH_LR_PHYS_ID(entry) (FIELD_EX32(entry, GICH_LR0, PhysicalID))
28
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
+#define GICH_LR_CPUID(entry) (FIELD_EX32(entry, GICH_LR0, CPUID))
29
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
46
+#define GICH_LR_EOI(entry) (FIELD_EX32(entry, GICH_LR0, EOI))
47
+#define GICH_LR_PRIORITY(entry) (FIELD_EX32(entry, GICH_LR0, Priority) << 3)
48
+#define GICH_LR_STATE(entry) (FIELD_EX32(entry, GICH_LR0, State))
49
+#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
50
+#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
51
+
30
+
52
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
31
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
53
* GICv2 and GICv2 with security extensions:
32
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
54
*/
33
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
55
@@ -XXX,XX +XXX,XX @@ static inline bool gic_is_vcpu(int cpu)
34
+
56
return cpu >= GIC_NCPU;
35
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
38
+
39
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
42
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/mve.decode
45
+++ b/target/arm/mve.decode
46
@@ -XXX,XX +XXX,XX @@ VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
47
VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
48
VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
49
50
+VQDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
51
+VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
52
+VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
53
+VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
54
+
55
# Vector miscellaneous
56
57
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
58
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mve_helper.c
61
+++ b/target/arm/mve_helper.c
62
@@ -XXX,XX +XXX,XX @@ static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
63
return r >> 32;
57
}
64
}
58
65
59
+static inline int gic_get_vcpu_real_id(int cpu)
66
+static int8_t do_vqdmlsdh_b(int8_t a, int8_t b, int8_t c, int8_t d,
67
+ int round, bool *sat)
60
+{
68
+{
61
+ return (cpu >= GIC_NCPU) ? (cpu - GIC_NCPU) : cpu;
69
+ int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 7);
70
+ return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
62
+}
71
+}
63
+
72
+
64
+/* Return true if the given vIRQ state exists in a LR and is either active or
73
+static int16_t do_vqdmlsdh_h(int16_t a, int16_t b, int16_t c, int16_t d,
65
+ * pending and active.
74
+ int round, bool *sat)
66
+ *
67
+ * This function is used to check that a guest's `end of interrupt' or
68
+ * `interrupt deactivation' request is valid, and matches an LR of an
69
+ * already acknowledged vIRQ (i.e. has the active bit set in its state).
70
+ */
71
+static inline bool gic_virq_is_valid(GICState *s, int irq, int vcpu)
72
+{
75
+{
73
+ int cpu = gic_get_vcpu_real_id(vcpu);
76
+ int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 15);
74
+ int lr_idx;
77
+ return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
75
+
76
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
77
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
78
+
79
+ if ((GICH_LR_VIRT_ID(*entry) == irq) &&
80
+ (GICH_LR_STATE(*entry) & GICH_LR_STATE_ACTIVE)) {
81
+ return true;
82
+ }
83
+ }
84
+
85
+ return false;
86
+}
78
+}
87
+
79
+
88
+/* Return a pointer to the LR entry matching the given vIRQ.
80
+static int32_t do_vqdmlsdh_w(int32_t a, int32_t b, int32_t c, int32_t d,
89
+ *
81
+ int round, bool *sat)
90
+ * This function is used to retrieve an LR for which we know for sure that the
91
+ * corresponding vIRQ exists in the current context (i.e. its current state is
92
+ * not `invalid'):
93
+ * - Either the corresponding vIRQ has been validated with gic_virq_is_valid()
94
+ * so it is `active' or `active and pending',
95
+ * - Or it was pending and has been selected by gic_get_best_virq(). It is now
96
+ * `pending', `active' or `active and pending', depending on what the guest
97
+ * already did with this vIRQ.
98
+ *
99
+ * Having multiple LRs with the same VirtualID leads to UNPREDICTABLE
100
+ * behaviour in the GIC. We choose to return the first one that matches.
101
+ */
102
+static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
103
+{
82
+{
104
+ int cpu = gic_get_vcpu_real_id(vcpu);
83
+ int64_t m1 = (int64_t)a * b;
105
+ int lr_idx;
84
+ int64_t m2 = (int64_t)c * d;
106
+
85
+ int64_t r;
107
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
86
+ /* The same ordering issue as in do_vqdmladh_w applies here too */
108
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
87
+ if (ssub64_overflow(m1, m2, &r) ||
109
+
88
+ sadd64_overflow(r, (round << 30), &r) ||
110
+ if ((GICH_LR_VIRT_ID(*entry) == irq) &&
89
+ sadd64_overflow(r, r, &r)) {
111
+ (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID)) {
90
+ *sat = true;
112
+ return entry;
91
+ return r < 0 ? INT32_MAX : INT32_MIN;
113
+ }
114
+ }
92
+ }
115
+
93
+ return r >> 32;
116
+ g_assert_not_reached();
117
+}
94
+}
118
+
95
+
119
#endif /* QEMU_ARM_GIC_INTERNAL_H */
96
DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
120
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
97
DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
98
DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
99
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
100
DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
101
DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)
102
103
+DO_VQDMLADH_OP(vqdmlsdhb, 1, int8_t, 0, 0, do_vqdmlsdh_b)
104
+DO_VQDMLADH_OP(vqdmlsdhh, 2, int16_t, 0, 0, do_vqdmlsdh_h)
105
+DO_VQDMLADH_OP(vqdmlsdhw, 4, int32_t, 0, 0, do_vqdmlsdh_w)
106
+DO_VQDMLADH_OP(vqdmlsdhxb, 1, int8_t, 1, 0, do_vqdmlsdh_b)
107
+DO_VQDMLADH_OP(vqdmlsdhxh, 2, int16_t, 1, 0, do_vqdmlsdh_h)
108
+DO_VQDMLADH_OP(vqdmlsdhxw, 4, int32_t, 1, 0, do_vqdmlsdh_w)
109
+
110
+DO_VQDMLADH_OP(vqrdmlsdhb, 1, int8_t, 0, 1, do_vqdmlsdh_b)
111
+DO_VQDMLADH_OP(vqrdmlsdhh, 2, int16_t, 0, 1, do_vqdmlsdh_h)
112
+DO_VQDMLADH_OP(vqrdmlsdhw, 4, int32_t, 0, 1, do_vqdmlsdh_w)
113
+DO_VQDMLADH_OP(vqrdmlsdhxb, 1, int8_t, 1, 1, do_vqdmlsdh_b)
114
+DO_VQDMLADH_OP(vqrdmlsdhxh, 2, int16_t, 1, 1, do_vqdmlsdh_h)
115
+DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
116
+
117
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
118
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
119
uint32_t rm) \
120
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
121
index XXXXXXX..XXXXXXX 100644
121
index XXXXXXX..XXXXXXX 100644
122
--- a/hw/intc/arm_gic.c
122
--- a/target/arm/translate-mve.c
123
+++ b/hw/intc/arm_gic.c
123
+++ b/target/arm/translate-mve.c
124
@@ -XXX,XX +XXX,XX @@ static inline int gic_get_current_cpu(GICState *s)
124
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLADH, vqdmladh)
125
return 0;
125
DO_2OP(VQDMLADHX, vqdmladhx)
126
}
126
DO_2OP(VQRDMLADH, vqrdmladh)
127
127
DO_2OP(VQRDMLADHX, vqrdmladhx)
128
+static inline int gic_get_current_vcpu(GICState *s)
128
+DO_2OP(VQDMLSDH, vqdmlsdh)
129
+{
129
+DO_2OP(VQDMLSDHX, vqdmlsdhx)
130
+ return gic_get_current_cpu(s) + GIC_NCPU;
130
+DO_2OP(VQRDMLSDH, vqrdmlsdh)
131
+}
131
+DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
132
+
132
133
/* Return true if this GIC config has interrupt groups, which is
133
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
134
* true if we're a GICv2, or a GICv1 with the security extensions.
134
MVEGenTwoOpScalarFn fn)
135
*/
136
--
2.18.0

--
2.20.1
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the vector form of the MVE VQDMULL insn.
2
2
3
An access to the CPU interface is non-secure if the current GIC instance
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
implements the security extensions, and the memory access is actually
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
non-secure. Until now, this was checked with tests such as
5
Message-id: 20210617121628.20116-39-peter.maydell@linaro.org
6
if (s->security_extn && !attrs.secure) { ... }
6
---
7
in various places of the CPU interface code.
7
target/arm/helper-mve.h | 5 +++++
8
target/arm/mve.decode | 5 +++++
9
target/arm/mve_helper.c | 30 ++++++++++++++++++++++++++++++
10
target/arm/translate-mve.c | 30 ++++++++++++++++++++++++++++++
11
4 files changed, 70 insertions(+)
8
12
9
With the implementation of the virtualization extensions, those tests
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
10
must be updated to take into account whether we are in a vCPU interface
11
or not. This is because the exposed vCPU interface does not implement
12
security extensions.
13
14
This commit replaces all those tests with a call to the
15
gic_cpu_ns_access() function to check if the current access to the CPU
16
interface is non-secure. This function takes into account whether the
17
current CPU is a vCPU or not.
18
19
Note that this function is used only in the (v)CPU interface code path.
20
The distributor code path is left unchanged, as the distributor is not
21
exposed to vCPUs at all.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
26
Message-id: 20180727095421.386-9-luc.michel@greensocs.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
29
hw/intc/arm_gic.c | 39 ++++++++++++++++++++++-----------------
30
1 file changed, 22 insertions(+), 17 deletions(-)
31
32
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
33
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/intc/arm_gic.c
15
--- a/target/arm/helper-mve.h
35
+++ b/hw/intc/arm_gic.c
16
+++ b/target/arm/helper-mve.h
36
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
return s->revision == 2 || s->security_extn;
18
DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
38
}
19
DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
39
20
40
+static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
21
+DEF_HELPER_FLAGS_4(mve_vqdmullbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
+DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
+DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+
26
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@
34
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
35
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
36
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
37
+@2op_sz28 .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn \
38
+ size=%size_28
39
40
# The _rev suffix indicates that Vn and Vm are reversed. This is
41
# the case for shifts. In the Arm ARM these insns are documented
42
@@ -XXX,XX +XXX,XX @@ VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
43
VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
44
VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
45
46
+VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
47
+VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
48
+
49
# Vector miscellaneous
50
51
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/mve_helper.c
55
+++ b/target/arm/mve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
57
DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
58
do_qdmullw, SATMASK32)
59
60
+/*
61
+ * Long saturating ops
62
+ */
63
+#define DO_2OP_SAT_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK) \
64
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
65
+ void *vm) \
66
+ { \
67
+ LTYPE *d = vd; \
68
+ TYPE *n = vn, *m = vm; \
69
+ uint16_t mask = mve_element_mask(env); \
70
+ unsigned le; \
71
+ bool qc = false; \
72
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
73
+ bool sat = false; \
74
+ LTYPE op1 = n[H##ESIZE(le * 2 + TOP)]; \
75
+ LTYPE op2 = m[H##ESIZE(le * 2 + TOP)]; \
76
+ mergemask(&d[H##LESIZE(le)], FN(op1, op2, &sat), mask); \
77
+ qc |= sat && (mask & SATMASK); \
78
+ } \
79
+ if (qc) { \
80
+ env->vfp.qc[0] = qc; \
81
+ } \
82
+ mve_advance_vpt(env); \
83
+ }
84
+
85
+DO_2OP_SAT_L(vqdmullbh, 0, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16B)
86
+DO_2OP_SAT_L(vqdmullbw, 0, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
87
+DO_2OP_SAT_L(vqdmullth, 1, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16T)
88
+DO_2OP_SAT_L(vqdmulltw, 1, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
89
+
90
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
91
{
92
m &= 0xff;
93
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/translate-mve.c
96
+++ b/target/arm/translate-mve.c
97
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDHX, vqdmlsdhx)
98
DO_2OP(VQRDMLSDH, vqrdmlsdh)
99
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
100
101
+static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
41
+{
102
+{
42
+ return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
103
+ static MVEGenTwoOpFn * const fns[] = {
104
+ NULL,
105
+ gen_helper_mve_vqdmullbh,
106
+ gen_helper_mve_vqdmullbw,
107
+ NULL,
108
+ };
109
+ if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
110
+ /* UNPREDICTABLE; we choose to undef */
111
+ return false;
112
+ }
113
+ return do_2op(s, a, fns[a->size]);
43
+}
114
+}
44
+
115
+
45
/* TODO: Many places that call this routine could be optimized. */
116
+static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
46
/* Update interrupt status after enabled or pending bits have been changed. */
117
+{
47
static void gic_update(GICState *s)
118
+ static MVEGenTwoOpFn * const fns[] = {
48
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
119
+ NULL,
49
/* On a GIC without the security extensions, reading this register
120
+ gen_helper_mve_vqdmullth,
50
* behaves in the same way as a secure access to a GIC with them.
121
+ gen_helper_mve_vqdmulltw,
51
*/
122
+ NULL,
52
- bool secure = !s->security_extn || attrs.secure;
123
+ };
53
+ bool secure = !gic_cpu_ns_access(s, cpu, attrs);
124
+ if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
54
125
+ /* UNPREDICTABLE; we choose to undef */
55
if (group == 0 && !secure) {
126
+ return false;
56
/* Group0 interrupts hidden from Non-secure access */
127
+ }
57
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
128
+ return do_2op(s, a, fns[a->size]);
58
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
129
+}
59
MemTxAttrs attrs)
130
+
131
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
132
MVEGenTwoOpScalarFn fn)
60
{
133
{
61
- if (s->security_extn && !attrs.secure) {
62
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
63
if (s->priority_mask[cpu] & 0x80) {
64
/* Priority Mask in upper half */
65
pmask = 0x80 | (pmask >> 1);
66
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
67
{
68
uint32_t pmask = s->priority_mask[cpu];
69
70
- if (s->security_extn && !attrs.secure) {
71
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
72
if (pmask & 0x80) {
73
/* Priority Mask in upper half, return Non-secure view */
74
pmask = (pmask << 1) & 0xff;
75
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_cpu_control(GICState *s, int cpu, MemTxAttrs attrs)
76
{
77
uint32_t ret = s->cpu_ctlr[cpu];
78
79
- if (s->security_extn && !attrs.secure) {
80
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
81
/* Construct the NS banked view of GICC_CTLR from the correct
82
* bits of the S banked view. We don't need to move the bypass
83
* control bits because we don't implement that (IMPDEF) part
84
@@ -XXX,XX +XXX,XX @@ static void gic_set_cpu_control(GICState *s, int cpu, uint32_t value,
85
{
86
uint32_t mask;
87
88
- if (s->security_extn && !attrs.secure) {
89
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
90
/* The NS view can only write certain bits in the register;
91
* the rest are unchanged
92
*/
93
@@ -XXX,XX +XXX,XX @@ static uint8_t gic_get_running_priority(GICState *s, int cpu, MemTxAttrs attrs)
94
return 0xff;
95
}
96
97
- if (s->security_extn && !attrs.secure) {
98
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
99
if (s->running_priority[cpu] & 0x80) {
100
/* Running priority in upper half of range: return the Non-secure
101
* view of the priority.
102
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
103
/* Before GICv2 prio-drop and deactivate are not separable */
104
return false;
105
}
106
- if (s->security_extn && !attrs.secure) {
107
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
108
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE_NS;
109
}
110
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE;
111
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
112
return;
113
}
114
115
- if (s->security_extn && !attrs.secure && !group) {
116
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
117
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
118
return;
119
}
120
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
121
122
group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
123
124
- if (s->security_extn && !attrs.secure && !group) {
125
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
126
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
127
return;
128
}
129
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
130
*data = gic_get_priority_mask(s, cpu, attrs);
131
break;
132
case 0x08: /* Binary Point */
133
- if (s->security_extn && !attrs.secure) {
134
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
135
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
136
/* NS view of BPR when CBPR is 1 */
137
*data = MIN(s->bpr[cpu] + 1, 7);
138
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
139
* With security extensions, secure access: ABPR (alias of NS BPR)
140
* With security extensions, nonsecure access: RAZ/WI
141
*/
142
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
143
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
144
*data = 0;
145
} else {
146
*data = s->abpr[cpu];
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
148
149
if (regno >= GIC_NR_APRS || s->revision != 2) {
150
*data = 0;
151
- } else if (s->security_extn && !attrs.secure) {
152
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
153
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
154
*data = gic_apr_ns_view(s, regno, cpu);
155
} else {
156
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
157
int regno = (offset - 0xe0) / 4;
158
159
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
160
- (s->security_extn && !attrs.secure)) {
161
+ gic_cpu_ns_access(s, cpu, attrs)) {
162
*data = 0;
163
} else {
164
*data = s->nsapr[regno][cpu];
165
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
166
gic_set_priority_mask(s, cpu, value, attrs);
167
break;
168
case 0x08: /* Binary Point */
169
- if (s->security_extn && !attrs.secure) {
170
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
171
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
172
/* WI when CBPR is 1 */
173
return MEMTX_OK;
174
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
175
gic_complete_irq(s, cpu, value & 0x3ff, attrs);
176
return MEMTX_OK;
177
case 0x1c: /* Aliased Binary Point */
178
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
179
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
180
/* unimplemented, or NS access: RAZ/WI */
181
return MEMTX_OK;
182
} else {
183
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
184
if (regno >= GIC_NR_APRS || s->revision != 2) {
185
return MEMTX_OK;
186
}
187
- if (s->security_extn && !attrs.secure) {
188
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
189
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
190
gic_apr_write_ns_view(s, regno, cpu, value);
191
} else {
192
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
193
if (regno >= GIC_NR_APRS || s->revision != 2) {
194
return MEMTX_OK;
195
}
196
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
197
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
198
return MEMTX_OK;
199
}
200
s->nsapr[regno][cpu] = value;
201
--
2.18.0

--
2.20.1
1
From: Julia Suvorova <jusual@mail.ru>
1
Implement the MVE VRHADD insn, which performs a rounded halving
2
addition.
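
As a quick sketch (not from the patch), the per-lane operation is a rounded
halving add, matching the DO_RHADD_S/DO_RHADD_U macros added below; the
helper name here is hypothetical:

    #include <stdint.h>

    /* Widen before adding so the +1 rounding bias cannot overflow. */
    static int32_t rhadd_s32_sketch(int32_t a, int32_t b)
    {
        return ((int64_t)a + b + 1) >> 1;   /* e.g. rhadd(2, 3) == 3 */
    }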
2
3
3
Forbid stack alignment change. (CCR)
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reserve FAULTMASK, BASEPRI registers.
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Report any fault as a HardFault. Disable MemManage, BusFault and
6
Message-id: 20210617121628.20116-40-peter.maydell@linaro.org
6
UsageFault, so they are always escalated to HardFault. (SHCSR)
7
---
8
target/arm/helper-mve.h | 8 ++++++++
9
target/arm/mve.decode | 3 +++
10
target/arm/mve_helper.c | 6 ++++++
11
target/arm/translate-mve.c | 2 ++
12
4 files changed, 19 insertions(+)
7
13
8
Signed-off-by: Julia Suvorova <jusual@mail.ru>
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Message-id: 20180718095628.26442-1-jusual@mail.ru
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
hw/intc/armv7m_nvic.c | 10 ++++++++++
15
target/arm/cpu.c | 4 ++++
16
target/arm/helper.c | 13 +++++++++++--
17
3 files changed, 25 insertions(+), 2 deletions(-)
18
19
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/intc/armv7m_nvic.c
16
--- a/target/arm/helper-mve.h
22
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
19
DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
return val;
20
DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
case 0xd24: /* System Handler Control and State (SHCSR) */
21
27
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
22
+DEF_HELPER_FLAGS_4(mve_vrhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+ goto bad_offset;
23
+DEF_HELPER_FLAGS_4(mve_vrhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+ }
24
+DEF_HELPER_FLAGS_4(mve_vrhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
30
val = 0;
31
if (attrs.secure) {
32
if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
33
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
34
cpu->env.v7m.scr[attrs.secure] = value;
35
break;
36
case 0xd14: /* Configuration Control. */
37
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
38
+ goto bad_offset;
39
+ }
40
+
25
+
41
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
26
+DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
42
value &= (R_V7M_CCR_STKALIGN_MASK |
27
+DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
43
R_V7M_CCR_BFHFNMIGN_MASK |
28
+DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
44
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
29
+
45
cpu->env.v7m.ccr[attrs.secure] = value;
30
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
break;
31
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
case 0xd24: /* System Handler Control and State (SHCSR) */
32
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
49
+ goto bad_offset;
50
+ }
51
if (attrs.secure) {
52
s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
53
/* Secure HardFault active bit cannot be written */
54
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
55
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/cpu.c
35
--- a/target/arm/mve.decode
57
+++ b/target/arm/cpu.c
36
+++ b/target/arm/mve.decode
58
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
37
@@ -XXX,XX +XXX,XX @@ VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
59
env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
38
VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
60
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;
39
VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
61
}
40
62
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
41
+VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
63
+ env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
42
+VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
64
+ env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
43
+
65
+ }
44
# Vector miscellaneous
66
45
67
/* Unlike A/R profile, M profile defines the reset LR value */
46
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
68
env->regs[14] = 0xffffffff;
47
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
49
--- a/target/arm/mve_helper.c
72
+++ b/target/arm/helper.c
50
+++ b/target/arm/mve_helper.c
73
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
51
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vshlu, DO_VSHLU)
74
env->v7m.primask[M_REG_NS] = val & 1;
52
DO_2OP_S(vrshls, DO_VRSHLS)
75
return;
53
DO_2OP_U(vrshlu, DO_VRSHLU)
76
case 0x91: /* BASEPRI_NS */
54
77
- if (!env->v7m.secure) {
55
+#define DO_RHADD_S(N, M) (((int64_t)(N) + (M) + 1) >> 1)
78
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
56
+#define DO_RHADD_U(N, M) (((uint64_t)(N) + (M) + 1) >> 1)
79
return;
57
+
80
}
58
+DO_2OP_S(vrhadds, DO_RHADD_S)
81
env->v7m.basepri[M_REG_NS] = val & 0xff;
59
+DO_2OP_U(vrhaddu, DO_RHADD_U)
82
return;
60
+
83
case 0x93: /* FAULTMASK_NS */
61
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
84
- if (!env->v7m.secure) {
62
{
85
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
63
if (val > max) {
86
return;
64
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
87
}
65
index XXXXXXX..XXXXXXX 100644
88
env->v7m.faultmask[M_REG_NS] = val & 1;
66
--- a/target/arm/translate-mve.c
89
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
67
+++ b/target/arm/translate-mve.c
90
env->v7m.primask[env->v7m.secure] = val & 1;
68
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDH, vqdmlsdh)
91
break;
69
DO_2OP(VQDMLSDHX, vqdmlsdhx)
92
case 17: /* BASEPRI */
70
DO_2OP(VQRDMLSDH, vqrdmlsdh)
93
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
71
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
94
+ goto bad_reg;
72
+DO_2OP(VRHADD_S, vrhadds)
95
+ }
73
+DO_2OP(VRHADD_U, vrhaddu)
96
env->v7m.basepri[env->v7m.secure] = val & 0xff;
74
97
break;
75
static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
98
case 18: /* BASEPRI_MAX */
76
{
99
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
100
+ goto bad_reg;
101
+ }
102
val &= 0xff;
103
if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
104
|| env->v7m.basepri[env->v7m.secure] == 0)) {
105
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
106
}
107
break;
108
case 19: /* FAULTMASK */
109
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
110
+ goto bad_reg;
111
+ }
112
env->v7m.faultmask[env->v7m.secure] = val & 1;
113
break;
114
case 20: /* CONTROL */
115
--
2.18.0

--
2.20.1
1
From: Luc Michel <luc.michel@greensocs.com>
1
Implement the MVE VADC and VSBC insns. These perform an
2
add-with-carry or subtract-with-carry of the 32-bit elements in each
3
lane of the input vectors, where the carry-out of each add is the
4
carry-in of the next. The initial carry input is either 1 or is from
5
FPSCR.C; the carry out at the end is written back to FPSCR.C.
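
A scalar sketch of that carry chain (illustrative only; the real do_vadc()
helper added below also has to handle predication via the element mask):

    #include <stdint.h>

    /* Add the four 32-bit lanes of n and m, propagating the carry from one
     * lane into the next; the returned carry is what ends up in FPSCR.C.
     */
    static uint32_t vadc_sketch(uint32_t d[4], const uint32_t n[4],
                                const uint32_t m[4], uint32_t carry_in)
    {
        for (int e = 0; e < 4; e++) {
            uint64_t r = (uint64_t)n[e] + m[e] + carry_in;
            d[e] = (uint32_t)r;
            carry_in = (uint32_t)(r >> 32);
        }
        return carry_in;
    }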
2
6
3
Implement the read and write functions for the virtual interface of the
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
virtualization extensions in the GICv2.
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210617121628.20116-41-peter.maydell@linaro.org
10
---
11
target/arm/helper-mve.h | 5 ++++
12
target/arm/mve.decode | 5 ++++
13
target/arm/mve_helper.c | 52 ++++++++++++++++++++++++++++++++++++++
14
target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++
15
4 files changed, 99 insertions(+)
5
16
6
One mirror region per CPU is also created, which maps to that specific
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
7
CPU id. This is required by the GIC architecture specification.
8
9
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20180727095421.386-16-luc.michel@greensocs.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
hw/intc/arm_gic.c | 235 +++++++++++++++++++++++++++++++++++++++++++++-
15
1 file changed, 233 insertions(+), 2 deletions(-)
16
17
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gic.c
19
--- a/target/arm/helper-mve.h
20
+++ b/hw/intc/arm_gic.c
20
+++ b/target/arm/helper-mve.h
21
@@ -XXX,XX +XXX,XX @@ static void gic_update(GICState *s)
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
}
22
DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
}
23
DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
24
25
+/* Return true if this LR is empty, i.e. the corresponding bit
25
+DEF_HELPER_FLAGS_4(mve_vadc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+ * in ELRSR is set.
26
+DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+ */
27
+DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+static inline bool gic_lr_entry_is_free(uint32_t entry)
28
+DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
+
30
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
38
VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
39
VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
40
41
+VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
42
+VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
43
+VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
44
+VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
45
+
46
# Vector miscellaneous
47
48
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
49
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/mve_helper.c
52
+++ b/target/arm/mve_helper.c
53
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vrshlu, DO_VRSHLU)
54
DO_2OP_S(vrhadds, DO_RHADD_S)
55
DO_2OP_U(vrhaddu, DO_RHADD_U)
56
57
+static void do_vadc(CPUARMState *env, uint32_t *d, uint32_t *n, uint32_t *m,
58
+ uint32_t inv, uint32_t carry_in, bool update_flags)
29
+{
59
+{
30
+ return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
60
+ uint16_t mask = mve_element_mask(env);
31
+ && (GICH_LR_HW(entry) || !GICH_LR_EOI(entry));
61
+ unsigned e;
62
+
63
+ /* If any additions trigger, we will update flags. */
64
+ if (mask & 0x1111) {
65
+ update_flags = true;
66
+ }
67
+
68
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
69
+ uint64_t r = carry_in;
70
+ r += n[H4(e)];
71
+ r += m[H4(e)] ^ inv;
72
+ if (mask & 1) {
73
+ carry_in = r >> 32;
74
+ }
75
+ mergemask(&d[H4(e)], r, mask);
76
+ }
77
+
78
+ if (update_flags) {
79
+ /* Store C, clear NZV. */
80
+ env->vfp.xregs[ARM_VFP_FPSCR] &= ~FPCR_NZCV_MASK;
81
+ env->vfp.xregs[ARM_VFP_FPSCR] |= carry_in * FPCR_C;
82
+ }
83
+ mve_advance_vpt(env);
32
+}
84
+}
33
+
85
+
34
+/* Return true if this LR should trigger an EOI maintenance interrupt, i.e. the
86
+void HELPER(mve_vadc)(CPUARMState *env, void *vd, void *vn, void *vm)
35
+ * corresponding bit in EISR is set.
36
+ */
37
+static inline bool gic_lr_entry_is_eoi(uint32_t entry)
38
+{
87
+{
39
+ return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
88
+ bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
40
+ && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
89
+ do_vadc(env, vd, vn, vm, 0, carry_in, false);
41
+}
90
+}
42
+
91
+
43
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
92
+void HELPER(mve_vsbc)(CPUARMState *env, void *vd, void *vn, void *vm)
44
int cm, int target)
45
{
46
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
47
return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
48
}
49
50
+static uint32_t gic_compute_eisr(GICState *s, int cpu, int lr_start)
51
+{
93
+{
52
+ int lr_idx;
94
+ bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
53
+ uint32_t ret = 0;
95
+ do_vadc(env, vd, vn, vm, -1, carry_in, false);
54
+
55
+ for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
56
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
57
+ ret = deposit32(ret, lr_idx - lr_start, 1,
58
+ gic_lr_entry_is_eoi(*entry));
59
+ }
60
+
61
+ return ret;
62
+}
96
+}
63
+
97
+
64
+static uint32_t gic_compute_elrsr(GICState *s, int cpu, int lr_start)
98
+
99
+void HELPER(mve_vadci)(CPUARMState *env, void *vd, void *vn, void *vm)
65
+{
100
+{
66
+ int lr_idx;
101
+ do_vadc(env, vd, vn, vm, 0, 0, true);
67
+ uint32_t ret = 0;
68
+
69
+ for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
70
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
71
+ ret = deposit32(ret, lr_idx - lr_start, 1,
72
+ gic_lr_entry_is_free(*entry));
73
+ }
74
+
75
+ return ret;
76
+}
102
+}
77
+
103
+
78
+static void gic_vmcr_write(GICState *s, uint32_t value, MemTxAttrs attrs)
104
+void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
79
+{
105
+{
80
+ int vcpu = gic_get_current_vcpu(s);
106
+ do_vadc(env, vd, vn, vm, -1, 1, true);
81
+ uint32_t ctlr;
82
+ uint32_t abpr;
83
+ uint32_t bpr;
84
+ uint32_t prio_mask;
85
+
86
+ ctlr = FIELD_EX32(value, GICH_VMCR, VMCCtlr);
87
+ abpr = FIELD_EX32(value, GICH_VMCR, VMABP);
88
+ bpr = FIELD_EX32(value, GICH_VMCR, VMBP);
89
+ prio_mask = FIELD_EX32(value, GICH_VMCR, VMPriMask) << 3;
90
+
91
+ gic_set_cpu_control(s, vcpu, ctlr, attrs);
92
+ s->abpr[vcpu] = MAX(abpr, GIC_VIRT_MIN_ABPR);
93
+ s->bpr[vcpu] = MAX(bpr, GIC_VIRT_MIN_BPR);
94
+ gic_set_priority_mask(s, vcpu, prio_mask, attrs);
95
+}
107
+}
96
+
108
+
97
+static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
109
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
98
+ uint64_t *data, MemTxAttrs attrs)
110
{
111
if (val > max) {
112
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
113
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/translate-mve.c
115
+++ b/target/arm/translate-mve.c
116
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
117
return do_2op(s, a, fns[a->size]);
118
}
119
120
+/*
121
+ * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
122
+ * of the 32-bit elements in each lane of the input vectors, where the
123
+ * carry-out of each add is the carry-in of the next. The initial carry
124
+ * input is either fixed (0 for VADCI, 1 for VSBCI) or is from FPSCR.C
125
+ * (for VADC and VSBC); the carry out at the end is written back to FPSCR.C.
126
+ * These insns are subject to beat-wise execution. Partial execution
127
+ * of an I=1 (initial carry input fixed) insn which does not
128
+ * execute the first beat must start with the current FPSCR.NZCV
129
+ * value, not the fixed constant input.
130
+ */
131
+static bool trans_VADC(DisasContext *s, arg_2op *a)
99
+{
132
+{
100
+ GICState *s = ARM_GIC(opaque);
133
+ return do_2op(s, a, gen_helper_mve_vadc);
101
+ int vcpu = cpu + GIC_NCPU;
102
+
103
+ switch (addr) {
104
+ case A_GICH_HCR: /* Hypervisor Control */
105
+ *data = s->h_hcr[cpu];
106
+ break;
107
+
108
+ case A_GICH_VTR: /* VGIC Type */
109
+ *data = FIELD_DP32(0, GICH_VTR, ListRegs, s->num_lrs - 1);
110
+ *data = FIELD_DP32(*data, GICH_VTR, PREbits,
111
+ GIC_VIRT_MAX_GROUP_PRIO_BITS - 1);
112
+ *data = FIELD_DP32(*data, GICH_VTR, PRIbits,
113
+ (7 - GIC_VIRT_MIN_BPR) - 1);
114
+ break;
115
+
116
+ case A_GICH_VMCR: /* Virtual Machine Control */
117
+ *data = FIELD_DP32(0, GICH_VMCR, VMCCtlr,
118
+ extract32(s->cpu_ctlr[vcpu], 0, 10));
119
+ *data = FIELD_DP32(*data, GICH_VMCR, VMABP, s->abpr[vcpu]);
120
+ *data = FIELD_DP32(*data, GICH_VMCR, VMBP, s->bpr[vcpu]);
121
+ *data = FIELD_DP32(*data, GICH_VMCR, VMPriMask,
122
+ extract32(s->priority_mask[vcpu], 3, 5));
123
+ break;
124
+
125
+ case A_GICH_MISR: /* Maintenance Interrupt Status */
126
+ *data = s->h_misr[cpu];
127
+ break;
128
+
129
+ case A_GICH_EISR0: /* End of Interrupt Status 0 and 1 */
130
+ case A_GICH_EISR1:
131
+ *data = gic_compute_eisr(s, cpu, (addr - A_GICH_EISR0) * 8);
132
+ break;
133
+
134
+ case A_GICH_ELRSR0: /* Empty List Status 0 and 1 */
135
+ case A_GICH_ELRSR1:
136
+ *data = gic_compute_elrsr(s, cpu, (addr - A_GICH_ELRSR0) * 8);
137
+ break;
138
+
139
+ case A_GICH_APR: /* Active Priorities */
140
+ *data = s->h_apr[cpu];
141
+ break;
142
+
143
+ case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
144
+ {
145
+ int lr_idx = (addr - A_GICH_LR0) / 4;
146
+
147
+ if (lr_idx > s->num_lrs) {
148
+ *data = 0;
149
+ } else {
150
+ *data = s->h_lr[lr_idx][cpu];
151
+ }
152
+ break;
153
+ }
154
+
155
+ default:
156
+ qemu_log_mask(LOG_GUEST_ERROR,
157
+ "gic_hyp_read: Bad offset %" HWADDR_PRIx "\n", addr);
158
+ return MEMTX_OK;
159
+ }
160
+
161
+ return MEMTX_OK;
162
+}
134
+}
163
+
135
+
164
+static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
136
+static bool trans_VADCI(DisasContext *s, arg_2op *a)
165
+ uint64_t value, MemTxAttrs attrs)
166
+{
137
+{
167
+ GICState *s = ARM_GIC(opaque);
138
+ if (mve_skip_first_beat(s)) {
168
+ int vcpu = cpu + GIC_NCPU;
139
+ return trans_VADC(s, a);
169
+
170
+ switch (addr) {
171
+ case A_GICH_HCR: /* Hypervisor Control */
172
+ s->h_hcr[cpu] = value & GICH_HCR_MASK;
173
+ break;
174
+
175
+ case A_GICH_VMCR: /* Virtual Machine Control */
176
+ gic_vmcr_write(s, value, attrs);
177
+ break;
178
+
179
+ case A_GICH_APR: /* Active Priorities */
180
+ s->h_apr[cpu] = value;
181
+ s->running_priority[vcpu] = gic_get_prio_from_apr_bits(s, vcpu);
182
+ break;
183
+
184
+ case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
185
+ {
186
+ int lr_idx = (addr - A_GICH_LR0) / 4;
187
+
188
+ if (lr_idx > s->num_lrs) {
189
+ return MEMTX_OK;
190
+ }
191
+
192
+ s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
193
+ break;
194
+ }
140
+ }
195
+
141
+ return do_2op(s, a, gen_helper_mve_vadci);
196
+ default:
197
+ qemu_log_mask(LOG_GUEST_ERROR,
198
+ "gic_hyp_write: Bad offset %" HWADDR_PRIx "\n", addr);
199
+ return MEMTX_OK;
200
+ }
201
+
202
+ return MEMTX_OK;
203
+}
142
+}
204
+
143
+
205
+static MemTxResult gic_thiscpu_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
144
+static bool trans_VSBC(DisasContext *s, arg_2op *a)
206
+ unsigned size, MemTxAttrs attrs)
207
+{
145
+{
208
+ GICState *s = (GICState *)opaque;
146
+ return do_2op(s, a, gen_helper_mve_vsbc);
209
+
210
+ return gic_hyp_read(s, gic_get_current_cpu(s), addr, data, attrs);
211
+}
147
+}
212
+
148
+
213
+static MemTxResult gic_thiscpu_hyp_write(void *opaque, hwaddr addr,
149
+static bool trans_VSBCI(DisasContext *s, arg_2op *a)
214
+ uint64_t value, unsigned size,
215
+ MemTxAttrs attrs)
216
+{
150
+{
217
+ GICState *s = (GICState *)opaque;
151
+ if (mve_skip_first_beat(s)) {
218
+
152
+ return trans_VSBC(s, a);
219
+ return gic_hyp_write(s, gic_get_current_cpu(s), addr, value, attrs);
153
+ }
154
+ return do_2op(s, a, gen_helper_mve_vsbci);
220
+}
155
+}
221
+
156
+
222
+static MemTxResult gic_do_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
157
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
223
+ unsigned size, MemTxAttrs attrs)
158
MVEGenTwoOpScalarFn fn)
224
+{
225
+ GICState **backref = (GICState **)opaque;
226
+ GICState *s = *backref;
227
+ int id = (backref - s->backref);
228
+
229
+ return gic_hyp_read(s, id, addr, data, attrs);
230
+}
231
+
232
+static MemTxResult gic_do_hyp_write(void *opaque, hwaddr addr,
233
+ uint64_t value, unsigned size,
234
+ MemTxAttrs attrs)
235
+{
236
+ GICState **backref = (GICState **)opaque;
237
+ GICState *s = *backref;
238
+ int id = (backref - s->backref);
239
+
240
+ return gic_hyp_write(s, id + GIC_NCPU, addr, value, attrs);
241
+
242
+}
243
+
244
static const MemoryRegionOps gic_ops[2] = {
245
{
246
.read_with_attrs = gic_dist_read,
247
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
248
249
static const MemoryRegionOps gic_virt_ops[2] = {
250
{
251
- .read_with_attrs = NULL,
252
- .write_with_attrs = NULL,
253
+ .read_with_attrs = gic_thiscpu_hyp_read,
254
+ .write_with_attrs = gic_thiscpu_hyp_write,
255
.endianness = DEVICE_NATIVE_ENDIAN,
256
},
257
{
258
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_virt_ops[2] = {
259
}
260
};
261
262
+static const MemoryRegionOps gic_viface_ops = {
263
+ .read_with_attrs = gic_do_hyp_read,
264
+ .write_with_attrs = gic_do_hyp_write,
265
+ .endianness = DEVICE_NATIVE_ENDIAN,
266
+};
267
+
268
static void arm_gic_realize(DeviceState *dev, Error **errp)
269
{
159
{
270
/* Device instance realize function for the GIC sysbus device */
271
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
272
&s->backref[i], "gic_cpu", 0x100);
273
sysbus_init_mmio(sbd, &s->cpuiomem[i+1]);
274
}
275
+
276
+ /* Extra core-specific regions for virtual interfaces. This is required by
277
+ * the GICv2 specification.
278
+ */
279
+ if (s->virt_extn) {
280
+ for (i = 0; i < s->num_cpu; i++) {
281
+ memory_region_init_io(&s->vifaceiomem[i + 1], OBJECT(s),
282
+ &gic_viface_ops, &s->backref[i],
283
+ "gic_viface", 0x1000);
284
+ sysbus_init_mmio(sbd, &s->vifaceiomem[i + 1]);
285
+ }
286
+ }
287
+
288
}
289
290
static void arm_gic_class_init(ObjectClass *klass, void *data)
291
--
2.18.0

--
2.20.1
1
From: Adam Lackorzynski <adam@l4re.org>
1
Implement the MVE VCADD insn, which performs a complex add with
2
rotate. Note that the size=0b11 encoding is VSBC.
2
3
3
Use an int64_t as a return type to restore
4
The architecture grants some leeway for the "destination and Vm
4
the negative check for arm_load_as.
5
source overlap" case at size MO_32, but we choose not to
6
make use of it, instead always calculating all 16 bytes worth of
7
results before setting the destination register.
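
For one complex (even, odd) lane pair, the rotate-90 form works out as below
(an illustrative sketch matching the DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
expansion added in this patch; rotate-270 simply swaps the add and subtract):

    #include <stdint.h>

    static void vcadd90_pair_sketch(int32_t d[2], const int32_t n[2],
                                    const int32_t m[2])
    {
        /* Compute both results before writing, in case d overlaps m. */
        int32_t re = n[0] - m[1];   /* even lane: real minus rotated imaginary */
        int32_t im = n[1] + m[0];   /* odd lane: imaginary plus rotated real */
        d[0] = re;
        d[1] = im;
    }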
5
8
6
Signed-off-by: Adam Lackorzynski <adam@l4re.org>
7
Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210617121628.20116-42-peter.maydell@linaro.org
10
---
12
---
11
hw/arm/boot.c | 8 ++++----
13
target/arm/helper-mve.h | 8 ++++++++
12
1 file changed, 4 insertions(+), 4 deletions(-)
14
target/arm/mve.decode | 9 +++++++--
15
target/arm/mve_helper.c | 29 +++++++++++++++++++++++++++++
16
target/arm/translate-mve.c | 7 +++++++
17
4 files changed, 51 insertions(+), 2 deletions(-)
13
18
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
19
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
     return 0;
 }
 
-static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
-                             uint64_t *lowaddr, uint64_t *highaddr,
-                             int elf_machine, AddressSpace *as)
+static int64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
+                            uint64_t *lowaddr, uint64_t *highaddr,
+                            int elf_machine, AddressSpace *as)
 {
     bool elf_is64;
     union {
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
     } elf_header;
     int data_swab = 0;
     bool big_endian;
-    uint64_t ret = -1;
+    int64_t ret = -1;
     Error *err = NULL;
 
--
2.18.0


diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 
 VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
 VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
-VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+
+{
+  VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+  VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+  VCADD90 1111 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+  VCADD270 1111 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}
 
 # Vector miscellaneous
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
     do_vadc(env, vd, vn, vm, -1, 1, true);
 }
 
+#define DO_VCADD(OP, ESIZE, TYPE, FN0, FN1)                             \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    {                                                                   \
+        TYPE *d = vd, *n = vn, *m = vm;                                 \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        TYPE r[16 / ESIZE];                                             \
+        /* Calculate all results first to avoid overwriting inputs */   \
+        for (e = 0; e < 16 / ESIZE; e++) {                              \
+            if (!(e & 1)) {                                             \
+                r[e] = FN0(n[H##ESIZE(e)], m[H##ESIZE(e + 1)]);         \
+            } else {                                                    \
+                r[e] = FN1(n[H##ESIZE(e)], m[H##ESIZE(e - 1)]);         \
+            }                                                           \
+        }                                                               \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            mergemask(&d[H##ESIZE(e)], r[e], mask);                     \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_VCADD_ALL(OP, FN0, FN1)          \
+    DO_VCADD(OP##b, 1, int8_t, FN0, FN1)    \
+    DO_VCADD(OP##h, 2, int16_t, FN0, FN1)   \
+    DO_VCADD(OP##w, 4, int32_t, FN0, FN1)
+
+DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
+DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
     if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQRDMLSDH, vqrdmlsdh)
 DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
 DO_2OP(VRHADD_S, vrhadds)
 DO_2OP(VRHADD_U, vrhaddu)
 
+/*
+ * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
+ * so we can reuse the DO_2OP macro. (Our implementation calculates the
+ * "expected" results in this case.)
+ */
+DO_2OP(VCADD90, vcadd90)
+DO_2OP(VCADD270, vcadd270)
 
 static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
 {
--
2.20.1

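As a standalone illustration of the even/odd lane pairing that the DO_VCADD
macro above expands to for VCADD #90 (example code only, not part of QEMU;
the function name and the input values are invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Even result lanes get a subtract of the odd partner lane,
     * odd result lanes get an add of the even partner lane. */
    static void vcadd90_sketch(int32_t *d, const int32_t *n,
                               const int32_t *m, int lanes)
    {
        int32_t r[lanes];
        for (int e = 0; e < lanes; e++) {
            if (!(e & 1)) {
                r[e] = n[e] - m[e + 1];
            } else {
                r[e] = n[e] + m[e - 1];
            }
        }
        for (int e = 0; e < lanes; e++) {
            d[e] = r[e];   /* results written only after all are computed */
        }
    }

    int main(void)
    {
        int32_t n[4] = {1, 2, 3, 4}, m[4] = {10, 20, 30, 40}, d[4];
        vcadd90_sketch(d, n, m, 4);
        printf("%d %d %d %d\n", d[0], d[1], d[2], d[3]); /* -19 12 -37 34 */
        return 0;
    }

VCADD #270 is the same pattern with the add and subtract swapped.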
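The hw/arm/boot.c change above switches arm_load_elf() to a signed return
type; a minimal standalone demonstration of why that matters (example code
only, not QEMU) is that a -1 error sentinel cannot be detected with a
"< 0" test when the return type is unsigned:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t load_unsigned(void) { return -1; } /* error sentinel */
    static int64_t load_signed(void) { return -1; }

    int main(void)
    {
        /* Only the signed variant lets the caller see the failure. */
        printf("unsigned < 0: %d\n", load_unsigned() < 0 ? 1 : 0); /* 0 */
        printf("signed   < 0: %d\n", load_signed() < 0 ? 1 : 0);   /* 1 */
        return 0;
    }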
From: Julia Suvorova <jusual@mail.ru>

Handle SCS reserved registers listed in ARMv6-M ARM D3.6.1.
All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the
checks, because these registers are reserved in ARMv8-M Baseline too.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 51 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.scr[attrs.secure];
     case 0xd14: /* Configuration Control. */
         /* The BFHFNMIGN bit is the only non-banked bit; we
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.hfsr;
     case 0xd30: /* Debug Fault Status. */
         return cpu->env.v7m.dfsr;
     case 0xd34: /* MMFAR MemManage Fault Address */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.mmfar[attrs.secure];
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.bfar;
     case 0xd3c: /* Aux Fault Status. */
         /* TODO: Implement fault status registers. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         /* We don't implement deep-sleep so these bits are RAZ/WI.
          * The other bits in the register are banked.
          * QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         nvic_irq_update(s);
         break;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.hfsr &= ~value; /* W1C */
         break;
     case 0xd30: /* Debug Fault Status. */
         cpu->env.v7m.dfsr &= ~value; /* W1C */
         break;
     case 0xd34: /* Mem Manage Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.mmfar[attrs.secure] = value;
         return;
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.bfar = value;
         return;
     case 0xd3c: /* Aux Fault Status. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     case 0xf00: /* Software Triggered Interrupt Register */
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
+
         if (excnum < s->num_irq) {
             armv7m_nvic_set_pending(s, excnum, false);
         }
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
             }
         }
         break;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         val = 0;
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
         }
         break;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        };
         /* The BFSR bits [15:8] are shared between security states
          * and we store them in the NS copy
          */
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         }
         nvic_irq_update(s);
         return MEMTX_OK;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
             int newprio = extract32(value, i * 8, 8);
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         nvic_irq_update(s);
         return MEMTX_OK;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
         /* All bits are W1C, so construct 32 bit value with 0s in
          * the parts not written by the access size
          */
--
2.18.0


Implement the MVE VHCADD insn, which is similar to VCADD
but performs a halving step. This one overlaps with VADC.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-43-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 8 ++++++--
 target/arm/mve_helper.c    | 2 ++
 target/arm/translate-mve.c | 4 +++-
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vhcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
 VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 
-VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+{
+  VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+  VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+  VHCADD90 1110 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+  VHCADD270 1110 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}
 
 {
   VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
 
 DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
 DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+DO_VCADD_ALL(vhcadd90, do_vhsub_s, do_vhadd_s)
+DO_VCADD_ALL(vhcadd270, do_vhadd_s, do_vhsub_s)
 
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VRHADD_U, vrhaddu)
 /*
  * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
  * so we can reuse the DO_2OP macro. (Our implementation calculates the
- * "expected" results in this case.)
+ * "expected" results in this case.) Similarly for VHCADD.
  */
 DO_2OP(VCADD90, vcadd90)
 DO_2OP(VCADD270, vcadd270)
+DO_2OP(VHCADD90, vhcadd90)
+DO_2OP(VHCADD270, vhcadd270)
 
 static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
 {
--
2.20.1

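A small standalone sketch of the halving step that distinguishes VHCADD
from VCADD, using the same lane pairing as above (example code only, not
QEMU; the values and helper names are invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Halving add/sub on int32 lanes, widened to 64 bits to avoid overflow;
     * the arithmetic right shift rounds towards minus infinity. */
    static int32_t hadd32(int32_t a, int32_t b) { return (int32_t)(((int64_t)a + b) >> 1); }
    static int32_t hsub32(int32_t a, int32_t b) { return (int32_t)(((int64_t)a - b) >> 1); }

    int main(void)
    {
        /* VHCADD #90 pairs lanes like VCADD #90 but halves each result. */
        int32_t n[4] = {1, 2, 3, 4}, m[4] = {10, 20, 30, 40}, d[4];
        for (int e = 0; e < 4; e++) {
            d[e] = (e & 1) ? hadd32(n[e], m[e - 1]) : hsub32(n[e], m[e + 1]);
        }
        printf("%d %d %d %d\n", d[0], d[1], d[2], d[3]); /* -10 6 -19 17 */
        return 0;
    }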
If get_page_addr_code() returns -1, this indicates that there is no RAM
page we can read a full TB from. Instead we must create a TB which
contains a single instruction and which we do not cache, so it is
executed only once.

Since this means we can now have TBs which are not in any page list,
we also need to make tb_phys_invalidate() handle them (by not trying
to remove them from a nonexistent page list).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
---
 accel/tcg/translate-all.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
  */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
-    if (page_addr == -1) {
+    if (page_addr == -1 && tb->page_addr[0] != -1) {
         page_lock_tb(tb);
         do_tb_phys_invalidate(tb, true);
         page_unlock_tb(tb);
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 
     assert_memory_lock();
 
+    if (phys_pc == -1) {
+        /*
+         * If the TB is not associated with a physical RAM page then
+         * it must be a temporary one-insn TB, and we have nothing to do
+         * except fill in the page_addr[] fields.
+         */
+        assert(tb->cflags & CF_NOCACHE);
+        tb->page_addr[0] = tb->page_addr[1] = -1;
+        return tb;
+    }
+
     /*
      * Add the TB to the page list, acquiring first the pages's locks.
      * We keep the locks held until after inserting the TB in the hash table,
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
 
     phys_pc = get_page_addr_code(env, pc);
 
+    if (phys_pc == -1) {
+        /* Generate a temporary TB with 1 insn in it */
+        cflags &= ~CF_COUNT_MASK;
+        cflags |= CF_NOCACHE | 1;
+    }
+
 buffer_overflow:
     tb = tb_alloc(pc);
     if (unlikely(!tb)) {
--
2.18.0


Implement the MVE VADDV insn, which performs an addition
across vector lanes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-44-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  2 ++
 target/arm/mve_helper.c    | 24 +++++++++++++++++++++
 target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 
 DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 
+# Vector add across vector
+VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
+
 # Predicate operations
 %mask_22_13 22:1 13:3
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64
 
 DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
 DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
+
+/* Vector add across vector */
+#define DO_VADDV(OP, ESIZE, TYPE)                               \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
+                                    uint32_t ra)                \
+    {                                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        TYPE *m = vm;                                           \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            if (mask & 1) {                                     \
+                ra += m[H##ESIZE(e)];                           \
+            }                                                   \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return ra;                                              \
+    }                                                           \
+
+DO_VADDV(vaddvsb, 1, uint8_t)
+DO_VADDV(vaddvsh, 2, uint16_t)
+DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvub, 1, uint8_t)
+DO_VADDV(vaddvuh, 2, uint16_t)
+DO_VADDV(vaddvuw, 4, uint32_t)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
     mve_update_and_store_eci(s);
     return true;
 }
+
+static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
+{
+    /* VADDV: vector add across vector */
+    static MVEGenVADDVFn * const fns[4][2] = {
+        { gen_helper_mve_vaddvsb, gen_helper_mve_vaddvub },
+        { gen_helper_mve_vaddvsh, gen_helper_mve_vaddvuh },
+        { gen_helper_mve_vaddvsw, gen_helper_mve_vaddvuw },
+        { NULL, NULL }
+    };
+    TCGv_ptr qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        a->size == 3) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current value of Rda, not zero.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        /* Accumulate input from Rda */
+        rda = load_reg(s, a->rda);
+    } else {
+        /* Accumulate starting at zero */
+        rda = tcg_const_i32(0);
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    fns[a->size][a->u](rda, cpu_env, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+
+    mve_update_eci(s);
+    return true;
+}
--
2.20.1

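A standalone sketch of the predicated accumulation pattern that DO_VADDV
above expands to (example code only, not QEMU; the mask value and inputs
are invented). The 16-bit mask carries one bit per byte of the 16-byte
vector, and an element contributes only when the low bit of its slice of
the mask is set:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t vaddv_u32_sketch(const uint32_t *m, uint16_t mask, uint32_t ra)
    {
        for (unsigned e = 0; e < 16 / 4; e++, mask >>= 4) {
            if (mask & 1) {
                ra += m[e];
            }
        }
        return ra;
    }

    int main(void)
    {
        uint32_t v[4] = {1, 2, 3, 4};
        printf("%u\n", vaddv_u32_sketch(v, 0xffff, 0)); /* 10: all beats enabled */
        printf("%u\n", vaddv_u32_sketch(v, 0x00ff, 5)); /* 8: lanes 0-1 only, Rda = 5 */
        return 0;
    }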
From: Luc Michel <luc.michel@greensocs.com>

Add the necessary parts of the virtualization extensions state to the
GIC state. We choose to increase the size of the CPU interfaces state to
add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way,
we'll be able to reuse most of the CPU interface code for the vCPUs.

The only exception is the APR value, which is stored in h_apr in the
virtual interface state for vCPUs. This is due to some complications
with the GIC VMState, for which we don't want to break backward
compatibility. APRs being stored in 2D arrays, increasing the second
dimension would lead to some ugly VMState description. To avoid
that, we keep it in h_apr for vCPUs.

The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The
`gic_is_vcpu` function helps to determine whether a given CPU id
corresponds to a physical CPU or a virtual one.

For the in-kernel KVM VGIC, since the exposed VGIC does not implement
the virtualization extensions, we report an error if the corresponding
property is set to true.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-6-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h           |   5 ++
 include/hw/intc/arm_gic_common.h |  43 +++++++--
 hw/intc/arm_gic.c                |   2 +-
 hw/intc/arm_gic_common.c         | 148 ++++++++++++++++++++++-----
 hw/intc/arm_gic_kvm.c            |   8 +-
 5 files changed, 173 insertions(+), 33 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
     }
 }
 
+static inline bool gic_is_vcpu(int cpu)
+{
+    return cpu >= GIC_NCPU;
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -XXX,XX +XXX,XX @@
 #define GIC_NR_SGIS 16
 /* Maximum number of possible CPU interfaces, determined by GIC architecture */
 #define GIC_NCPU 8
+/* Maximum number of possible CPU interfaces with their respective vCPU */
+#define GIC_NCPU_VCPU (GIC_NCPU * 2)
 
 #define MAX_NR_GROUP_PRIO 128
 #define GIC_NR_APRS (MAX_NR_GROUP_PRIO / 32)
@@ -XXX,XX +XXX,XX @@
 #define GIC_MIN_BPR 0
 #define GIC_MIN_ABPR (GIC_MIN_BPR + 1)
 
+/* Architectural maximum number of list registers in the virtual interface */
+#define GIC_MAX_LR 64
+
+/* Only 32 priority levels and 32 preemption levels in the vCPU interfaces */
+#define GIC_VIRT_MAX_GROUP_PRIO_BITS 5
+#define GIC_VIRT_MAX_NR_GROUP_PRIO (1 << GIC_VIRT_MAX_GROUP_PRIO_BITS)
+#define GIC_VIRT_NR_APRS (GIC_VIRT_MAX_NR_GROUP_PRIO / 32)
+
+#define GIC_VIRT_MIN_BPR 2
+#define GIC_VIRT_MIN_ABPR (GIC_VIRT_MIN_BPR + 1)
+
 typedef struct gic_irq_state {
     /* The enable bits are only banked for per-cpu interrupts. */
     uint8_t enabled;
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     qemu_irq parent_fiq[GIC_NCPU];
     qemu_irq parent_virq[GIC_NCPU];
     qemu_irq parent_vfiq[GIC_NCPU];
+    qemu_irq maintenance_irq[GIC_NCPU];
+
     /* GICD_CTLR; for a GIC with the security extensions the NS banked version
      * of this register is just an alias of bit 1 of the S banked version.
      */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     /* GICC_CTLR; again, the NS banked version is just aliases of bits of
      * the S banked register, so our state only needs to store the S version.
      */
-    uint32_t cpu_ctlr[GIC_NCPU];
+    uint32_t cpu_ctlr[GIC_NCPU_VCPU];
 
     gic_irq_state irq_state[GIC_MAXIRQ];
     uint8_t irq_target[GIC_MAXIRQ];
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     uint8_t sgi_pending[GIC_NR_SGIS][GIC_NCPU];
 
-    uint16_t priority_mask[GIC_NCPU];
-    uint16_t running_priority[GIC_NCPU];
-    uint16_t current_pending[GIC_NCPU];
+    uint16_t priority_mask[GIC_NCPU_VCPU];
+    uint16_t running_priority[GIC_NCPU_VCPU];
+    uint16_t current_pending[GIC_NCPU_VCPU];
 
     /* If we present the GICv2 without security extensions to a guest,
      * the guest can configure the GICC_CTLR to configure group 1 binary point
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      * For a GIC with Security Extensions we use use bpr for the
      * secure copy and abpr as storage for the non-secure copy of the register.
      */
-    uint8_t bpr[GIC_NCPU];
-    uint8_t abpr[GIC_NCPU];
+    uint8_t bpr[GIC_NCPU_VCPU];
+    uint8_t abpr[GIC_NCPU_VCPU];
 
     /* The APR is implementation defined, so we choose a layout identical to
      * the KVM ABI layout for QEMU's implementation of the gic:
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     uint32_t apr[GIC_NR_APRS][GIC_NCPU];
     uint32_t nsapr[GIC_NR_APRS][GIC_NCPU];
 
+    /* Virtual interface control registers */
+    uint32_t h_hcr[GIC_NCPU];
+    uint32_t h_misr[GIC_NCPU];
+    uint32_t h_lr[GIC_MAX_LR][GIC_NCPU];
+    uint32_t h_apr[GIC_NCPU];
+
+    /* Number of LRs implemented in this GIC instance */
+    uint32_t num_lrs;
+
     uint32_t num_cpu;
 
     MemoryRegion iomem; /* Distributor */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     struct GICState *backref[GIC_NCPU];
     MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
+    MemoryRegion vifaceiomem[GIC_NCPU + 1]; /* Virtual interfaces */
+    MemoryRegion vcpuiomem; /* vCPU interface */
+
     uint32_t num_irq;
     uint32_t revision;
     bool security_extn;
+    bool virt_extn;
     bool irq_reset_nonsecure; /* configure IRQs as group 1 (NS) on reset? */
     int dev_fd; /* kvm device fd if backed by kvm vgic support */
     Error *migration_blocker;
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGICCommonClass {
 } ARMGICCommonClass;
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops);
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops);
 
 #endif
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
     }
 
     /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
-    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
+    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
 
     /* Extra core-specific regions for the CPU interfaces. This is
      * necessary for "franken-GIC" implementations, for example on
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static int gic_post_load(void *opaque, int version_id)
     return 0;
 }
 
+static bool gic_virt_state_needed(void *opaque)
+{
+    GICState *s = (GICState *)opaque;
+
+    return s->virt_extn;
+}
+
 static const VMStateDescription vmstate_gic_irq_state = {
     .name = "arm_gic_irq_state",
     .version_id = 1,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic_irq_state = {
     }
 };
 
+static const VMStateDescription vmstate_gic_virt_state = {
+    .name = "arm_gic_virt_state",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = gic_virt_state_needed,
+    .fields = (VMStateField[]) {
+        /* Virtual interface */
+        VMSTATE_UINT32_ARRAY(h_hcr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_misr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_2DARRAY(h_lr, GICState, GIC_MAX_LR, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_apr, GICState, GIC_NCPU),
+
+        /* Virtual CPU interfaces */
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, GIC_NCPU, GIC_NCPU),
+
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_gic = {
     .name = "arm_gic",
     .version_id = 12,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic = {
     .post_load = gic_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(ctlr, GICState),
-        VMSTATE_UINT32_ARRAY(cpu_ctlr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, 0, GIC_NCPU),
         VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
                              vmstate_gic_irq_state, gic_irq_state),
         VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
         VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, GIC_NCPU),
         VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
         VMSTATE_UINT8_2DARRAY(sgi_pending, GICState, GIC_NR_SGIS, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(priority_mask, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(running_priority, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(current_pending, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(bpr, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(abpr, GICState, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, 0, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(apr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(nsapr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_gic_virt_state,
+        NULL
     }
 };
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops)
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops)
 {
     SysBusDevice *sbd = SYS_BUS_DEVICE(s);
     int i = s->num_irq - GIC_INTERNAL;
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     for (i = 0; i < s->num_cpu; i++) {
         sysbus_init_irq(sbd, &s->parent_vfiq[i]);
     }
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_cpu; i++) {
+            sysbus_init_irq(sbd, &s->maintenance_irq[i]);
+        }
+    }
 
     /* Distributor */
     memory_region_init_io(&s->iomem, OBJECT(s), ops, s, "gic_dist", 0x1000);
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     memory_region_init_io(&s->cpuiomem[0], OBJECT(s), ops ? &ops[1] : NULL,
                           s, "gic_cpu", s->revision == 2 ? 0x2000 : 0x100);
     sysbus_init_mmio(sbd, &s->cpuiomem[0]);
+
+    if (s->virt_extn) {
+        memory_region_init_io(&s->vifaceiomem[0], OBJECT(s), virt_ops,
+                              s, "gic_viface", 0x1000);
+        sysbus_init_mmio(sbd, &s->vifaceiomem[0]);
+
+        memory_region_init_io(&s->vcpuiomem, OBJECT(s),
+                              virt_ops ? &virt_ops[1] : NULL,
+                              s, "gic_vcpu", 0x2000);
+        sysbus_init_mmio(sbd, &s->vcpuiomem);
+    }
 }
 
 static void arm_gic_common_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
                    "the security extensions");
         return;
     }
+
+    if (s->virt_extn) {
+        if (s->revision != 2) {
+            error_setg(errp, "GIC virtualization extensions are only "
+                       "supported by revision 2");
+            return;
+        }
+
+        /* For now, set the number of implemented LRs to 4, as found in most
+         * real GICv2. This could be promoted as a QOM property if we need to
+         * emulate a variant with another num_lrs.
+         */
+        s->num_lrs = 4;
+    }
+}
+
+static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
+                                                  int resetprio)
+{
+    int i, j;
+
+    for (i = first_cpu; i < first_cpu + s->num_cpu; i++) {
+        if (s->revision == REV_11MPCORE) {
+            s->priority_mask[i] = 0xf0;
+        } else {
+            s->priority_mask[i] = resetprio;
+        }
+        s->current_pending[i] = 1023;
+        s->running_priority[i] = 0x100;
+        s->cpu_ctlr[i] = 0;
+        s->bpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+        s->abpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_ABPR : GIC_MIN_ABPR;
+
+        if (!gic_is_vcpu(i)) {
+            for (j = 0; j < GIC_INTERNAL; j++) {
+                s->priority1[j][i] = resetprio;
+            }
+            for (j = 0; j < GIC_NR_SGIS; j++) {
+                s->sgi_pending[j][i] = 0;
+            }
+        }
+    }
 }
 
 static void arm_gic_common_reset(DeviceState *dev)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
     }
 
     memset(s->irq_state, 0, GIC_MAXIRQ * sizeof(gic_irq_state));
-    for (i = 0 ; i < s->num_cpu; i++) {
-        if (s->revision == REV_11MPCORE) {
-            s->priority_mask[i] = 0xf0;
-        } else {
-            s->priority_mask[i] = resetprio;
-        }
-        s->current_pending[i] = 1023;
-        s->running_priority[i] = 0x100;
-        s->cpu_ctlr[i] = 0;
-        s->bpr[i] = GIC_MIN_BPR;
-        s->abpr[i] = GIC_MIN_ABPR;
-        for (j = 0; j < GIC_INTERNAL; j++) {
-            s->priority1[j][i] = resetprio;
-        }
-        for (j = 0; j < GIC_NR_SGIS; j++) {
-            s->sgi_pending[j][i] = 0;
-        }
+    arm_gic_common_reset_irq_state(s, 0, resetprio);
+
+    if (s->virt_extn) {
+        /* vCPU states are stored at indexes GIC_NCPU .. GIC_NCPU+num_cpu.
+         * The exposed vCPU interface does not have security extensions.
+         */
+        arm_gic_common_reset_irq_state(s, GIC_NCPU, 0);
     }
+
     for (i = 0; i < GIC_NR_SGIS; i++) {
         GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
         GIC_DIST_SET_EDGE_TRIGGER(i);
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
         }
     }
 
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_lrs; i++) {
+            for (j = 0; j < s->num_cpu; j++) {
+                s->h_lr[i][j] = 0;
+            }
+        }
+
+        for (i = 0; i < s->num_cpu; i++) {
+            s->h_hcr[i] = 0;
+            s->h_misr[i] = 0;
+        }
+    }
+
     s->ctlr = 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
     DEFINE_PROP_UINT32("revision", GICState, revision, 1),
     /* True if the GIC should implement the security extensions */
     DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
+    /* True if the GIC should implement the virtualization extensions */
+    DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->virt_extn) {
+        error_setg(errp, "the in-kernel VGIC does not implement the "
+                   "virtualization extensions");
+        return;
+    }
+
     if (!kvm_arm_gic_can_save_restore(s)) {
         error_setg(&s->migration_blocker, "This operating system kernel does "
                                           "not support vGICv2 migration");
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         }
     }
 
-    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL);
+    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL, NULL);
 
     for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
         qemu_irq irq = qdev_get_gpio_in(dev, i);
--
2.18.0


In a CPU with MVE, the VMOV (vector lane to general-purpose register)
and VMOV (general-purpose register to vector lane) insns are not
predicated, but they are subject to beatwise execution if they
are not in an IT block.

Since our implementation always executes all 4 beats in one tick,
this means only that we need to handle PSR.ECI:
* we must do the usual check for bad ECI state
* we must advance ECI state if the insn succeeds
* if ECI says we should not be executing the beat corresponding
  to the lane of the vector register being accessed then we
  should skip performing the move

Note that if PSR.ECI is non-zero then we cannot be in an IT block.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-45-peter.maydell@linaro.org
---
 target/arm/translate-a32.h |  2 +
 target/arm/translate-mve.c |  4 +-
 target/arm/translate-vfp.c | 77 +++++++++++++++++++++++++++++++++++---
 3 files changed, 75 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ long neon_full_reg_offset(unsigned reg);
 long neon_element_offset(int reg, int element, MemOp memop);
 void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
 void clear_eci_state(DisasContext *s);
+bool mve_eci_check(DisasContext *s);
+void mve_update_and_store_eci(DisasContext *s);
 
 static inline TCGv_i32 load_cpu_offset(int offset)
 {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool mve_check_qreg_bank(DisasContext *s, int qmask)
     return qmask < 8;
 }
 
-static bool mve_eci_check(DisasContext *s)
+bool mve_eci_check(DisasContext *s)
 {
     /*
      * This is a beatwise insn: check that ECI is valid (not a
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
     }
 }
 
-static void mve_update_and_store_eci(DisasContext *s)
+void mve_update_and_store_eci(DisasContext *s)
 {
     /*
      * For insns which don't call a helper function that will call
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
     return true;
 }
 
+static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+{
+    /*
+     * In a CPU with MVE, the VMOV (vector lane to general-purpose register)
+     * and VMOV (general-purpose register to vector lane) insns are not
+     * predicated, but they are subject to beatwise execution if they are
+     * not in an IT block.
+     *
+     * Since our implementation always executes all 4 beats in one tick,
+     * this means only that if PSR.ECI says we should not be executing
+     * the beat corresponding to the lane of the vector register being
+     * accessed then we should skip performing the move, and that we need
+     * to do the usual check for bad ECI state and advance of ECI state.
+     *
+     * Note that if PSR.ECI is non-zero then we cannot be in an IT block.
+     *
+     * Return true if this VMOV scalar <-> gpreg should be skipped because
+     * the MVE PSR.ECI state says we skip the beat where the store happens.
+     */
+
+    /* Calculate the byte offset into Qn which we're going to access */
+    int ofs = (index << size) + ((vn & 1) * 8);
+
+    if (!dc_isar_feature(aa32_mve, s)) {
+        return false;
+    }
+
+    switch (s->eci) {
+    case ECI_NONE:
+        return false;
+    case ECI_A0:
+        return ofs < 4;
+    case ECI_A0A1:
+        return ofs < 8;
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return ofs < 12;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
 {
     /* VMOV scalar to general purpose register */
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
         return false;
     }
 
+    if (dc_isar_feature(aa32_mve, s)) {
+        if (!mve_eci_check(s)) {
+            return true;
+        }
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
 
-    tmp = tcg_temp_new_i32();
-    read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
-    store_reg(s, a->rt, tmp);
+    if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
+        tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, a->vn, a->index,
+                            a->size | (a->u ? 0 : MO_SIGN));
+        store_reg(s, a->rt, tmp);
+    }
 
+    if (dc_isar_feature(aa32_mve, s)) {
+        mve_update_and_store_eci(s);
+    }
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
         return false;
     }
 
+    if (dc_isar_feature(aa32_mve, s)) {
+        if (!mve_eci_check(s)) {
+            return true;
+        }
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
 
-    tmp = load_reg(s, a->rt);
-    write_neon_element32(tmp, a->vn, a->index, a->size);
-    tcg_temp_free_i32(tmp);
+    if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
+        tmp = load_reg(s, a->rt);
+        write_neon_element32(tmp, a->vn, a->index, a->size);
+        tcg_temp_free_i32(tmp);
+    }
 
+    if (dc_isar_feature(aa32_mve, s)) {
+        mve_update_and_store_eci(s);
+    }
     return true;
 }
 
--
2.20.1

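A standalone sketch of the lane-skip decision that mve_skip_vmov() in the
VMOV patch above makes (example code only, not QEMU; the enum values,
function names and inputs are invented):

    #include <stdio.h>

    /* Completed-beat counts for the PSR.ECI encodings handled above. */
    enum { ECI_NONE = 0, ECI_A0, ECI_A0A1, ECI_A0A1A2, ECI_A0A1A2B0 };

    static int beats_done(int eci)
    {
        switch (eci) {
        case ECI_A0:       return 1;
        case ECI_A0A1:     return 2;
        case ECI_A0A1A2:
        case ECI_A0A1A2B0: return 3; /* treated alike for the lane-skip test */
        default:           return 0;
        }
    }

    /* Skip the move if the accessed byte offset lies in a completed beat. */
    static int skip_vmov(int eci, int vn, int index, int size)
    {
        int ofs = (index << size) + ((vn & 1) * 8); /* byte offset into Qn */
        return ofs < beats_done(eci) * 4;           /* each beat covers 4 bytes */
    }

    int main(void)
    {
        printf("%d\n", skip_vmov(ECI_A0, 0, 1, 2));   /* 0: beat 1 not yet done */
        printf("%d\n", skip_vmov(ECI_A0A1, 0, 1, 2)); /* 1: beat 1 already done */
        return 0;
    }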
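The next patch adds FEAT_MTE3 support. As a plain-C illustration of the
asymmetric mode it implements (loads checked synchronously, stores checked
asynchronously), with invented names and an enum that is not QEMU code:

    #include <stdio.h>

    enum check { CHECK_NONE, CHECK_SYNC, CHECK_ASYNC };

    /* TCF field: 0 = no reporting, 1 = synchronous, 2 = asynchronous,
     * 3 = asymmetric (MTE3): stores asynchronous, loads synchronous. */
    static enum check tag_check_kind(int tcf, int is_store)
    {
        switch (tcf) {
        case 1:  return CHECK_SYNC;
        case 2:  return CHECK_ASYNC;
        case 3:  return is_store ? CHECK_ASYNC : CHECK_SYNC;
        default: return CHECK_NONE;
        }
    }

    int main(void)
    {
        printf("%d %d\n", tag_check_kind(3, 0), tag_check_kind(3, 1)); /* 1 2 */
        return 0;
    }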
From: Peter Collingbourne <pcc@google.com>

MTE3 introduces an asymmetric tag checking mode, in which loads are
checked synchronously and stores are checked asynchronously. Add
support for it.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210616195614.11785-1-pcc@google.com
[PMM: Add line to emulation.rst]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst |  1 +
 target/arm/cpu64.c            |  2 +-
 target/arm/mte_helper.c       | 82 ++++++++++++++++++++++-------------
 3 files changed, 53 insertions(+), 32 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_LSE (Large System Extensions)
 - FEAT_MTE (Memory Tagging Extension)
 - FEAT_MTE2 (Memory Tagging Extension)
+- FEAT_MTE3 (MTE Asymmetric Fault Handling)
 - FEAT_PAN (Privileged access never)
 - FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
 - FEAT_PAuth (Pointer authentication)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * during realize if the board provides no tag memory, much like
          * we do for EL2 with the virtualization=on property.
          */
-        t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2);
+        t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
         cpu->isar.id_aa64pfr1 = t;
 
         t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
     }
 }
 
+static void mte_sync_check_fail(CPUARMState *env, uint32_t desc,
+                                uint64_t dirty_ptr, uintptr_t ra)
+{
+    int is_write, syn;
+
+    env->exception.vaddress = dirty_ptr;
+
+    is_write = FIELD_EX32(desc, MTEDESC, WRITE);
+    syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0, is_write,
+                                0x11);
+    raise_exception_ra(env, EXCP_DATA_ABORT, syn, exception_target_el(env), ra);
+    g_assert_not_reached();
+}
+
+static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
+                                 uintptr_t ra, ARMMMUIdx arm_mmu_idx, int el)
+{
+    int select;
+
+    if (regime_has_2_ranges(arm_mmu_idx)) {
+        select = extract64(dirty_ptr, 55, 1);
+    } else {
+        select = 0;
+    }
+    env->cp15.tfsr_el[el] |= 1 << select;
+#ifdef CONFIG_USER_ONLY
+    /*
+     * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
+     * which then sends a SIGSEGV when the thread is next scheduled.
+     * This cpu will return to the main loop at the end of the TB,
+     * which is rather sooner than "normal". But the alternative
+     * is waiting until the next syscall.
+     */
+    qemu_cpu_kick(env_cpu(env));
+#endif
+}
+
 /* Record a tag check failure. */
 static void mte_check_fail(CPUARMState *env, uint32_t desc,
                            uint64_t dirty_ptr, uintptr_t ra)
 {
     int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
-    int el, reg_el, tcf, select, is_write, syn;
+    int el, reg_el, tcf;
     uint64_t sctlr;
 
     reg_el = regime_el(env, arm_mmu_idx);
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
     switch (tcf) {
     case 1:
         /* Tag check fail causes a synchronous exception. */
-        env->exception.vaddress = dirty_ptr;
-
-        is_write = FIELD_EX32(desc, MTEDESC, WRITE);
-        syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
-                                    is_write, 0x11);
-        raise_exception_ra(env, EXCP_DATA_ABORT, syn,
-                           exception_target_el(env), ra);
-        /* noreturn, but fall through to the assert anyway */
+        mte_sync_check_fail(env, desc, dirty_ptr, ra);
+        break;
 
     case 0:
         /*
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
 
     case 2:
         /* Tag check fail causes asynchronous flag set. */
-        if (regime_has_2_ranges(arm_mmu_idx)) {
-            select = extract64(dirty_ptr, 55, 1);
-        } else {
-            select = 0;
-        }
-        env->cp15.tfsr_el[el] |= 1 << select;
-#ifdef CONFIG_USER_ONLY
-        /*
-         * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
-         * which then sends a SIGSEGV when the thread is next scheduled.
-         * This cpu will return to the main loop at the end of the TB,
-         * which is rather sooner than "normal". But the alternative
-         * is waiting until the next syscall.
-         */
-        qemu_cpu_kick(env_cpu(env));
-#endif
+        mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
         break;
 
-    default:
-        /* Case 3: Reserved. */
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "Tag check failure with SCTLR_EL%d.TCF%s "
-                      "set to reserved value %d\n",
-                      reg_el, el ? "" : "0", tcf);
+    case 3:
+        /*
+         * Tag check fail causes asynchronous flag set for stores, or
+         * a synchronous exception for loads.
+         */
+        if (FIELD_EX32(desc, MTEDESC, WRITE)) {
+            mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
+        } else {
+            mte_sync_check_fail(env, desc, dirty_ptr, ra);
+        }
         break;
     }
 }
--
2.20.1


From: Luc Michel <luc.michel@greensocs.com>

In preparation for the virtualization extensions implementation,
refactor the name of the functions and macros that act on the GIC
distributor to make that fact explicit. It will be useful to
differentiate them from the ones that will act on the virtual
interfaces.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-2-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h   |  51 ++++++------
 hw/intc/arm_gic.c        | 163 +++++++++++++++++++++------------------
 hw/intc/arm_gic_common.c |   6 +-
 hw/intc/arm_gic_kvm.c    |  23 +++---
 4 files changed, 127 insertions(+), 116 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@
 
 #define GIC_BASE_IRQ 0
 
-#define GIC_SET_ENABLED(irq, cm) s->irq_state[irq].enabled |= (cm)
-#define GIC_CLEAR_ENABLED(irq, cm) s->irq_state[irq].enabled &= ~(cm)
-#define GIC_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
-#define GIC_SET_PENDING(irq, cm) s->irq_state[irq].pending |= (cm)
-#define GIC_CLEAR_PENDING(irq, cm) s->irq_state[irq].pending &= ~(cm)
-#define GIC_SET_ACTIVE(irq, cm) s->irq_state[irq].active |= (cm)
-#define GIC_CLEAR_ACTIVE(irq, cm) s->irq_state[irq].active &= ~(cm)
-#define GIC_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
-#define GIC_SET_MODEL(irq) s->irq_state[irq].model = true
-#define GIC_CLEAR_MODEL(irq) s->irq_state[irq].model = false
-#define GIC_TEST_MODEL(irq) s->irq_state[irq].model
-#define GIC_SET_LEVEL(irq, cm) s->irq_state[irq].level |= (cm)
-#define GIC_CLEAR_LEVEL(irq, cm) s->irq_state[irq].level &= ~(cm)
-#define GIC_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
-#define GIC_SET_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = true
-#define GIC_CLEAR_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = false
-#define GIC_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
-#define GIC_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ?            \
+#define GIC_DIST_SET_ENABLED(irq, cm) (s->irq_state[irq].enabled |= (cm))
+#define GIC_DIST_CLEAR_ENABLED(irq, cm) (s->irq_state[irq].enabled &= ~(cm))
+#define GIC_DIST_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
+#define GIC_DIST_SET_PENDING(irq, cm) (s->irq_state[irq].pending |= (cm))
+#define GIC_DIST_CLEAR_PENDING(irq, cm) (s->irq_state[irq].pending &= ~(cm))
+#define GIC_DIST_SET_ACTIVE(irq, cm) (s->irq_state[irq].active |= (cm))
+#define GIC_DIST_CLEAR_ACTIVE(irq, cm) (s->irq_state[irq].active &= ~(cm))
+#define GIC_DIST_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
+#define GIC_DIST_SET_MODEL(irq) (s->irq_state[irq].model = true)
+#define GIC_DIST_CLEAR_MODEL(irq) (s->irq_state[irq].model = false)
+#define GIC_DIST_TEST_MODEL(irq) (s->irq_state[irq].model)
+#define GIC_DIST_SET_LEVEL(irq, cm) (s->irq_state[irq].level |= (cm))
+#define GIC_DIST_CLEAR_LEVEL(irq, cm) (s->irq_state[irq].level &= ~(cm))
+#define GIC_DIST_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
+#define GIC_DIST_SET_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger = true)
+#define GIC_DIST_CLEAR_EDGE_TRIGGER(irq) \
+    (s->irq_state[irq].edge_trigger = false)
+#define GIC_DIST_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
+#define GIC_DIST_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ?       \
                                     s->priority1[irq][cpu] :            \
                                     s->priority2[(irq) - GIC_INTERNAL])
-#define GIC_TARGET(irq) s->irq_target[irq]
-#define GIC_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
-#define GIC_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
-#define GIC_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
+#define GIC_DIST_TARGET(irq) (s->irq_target[irq])
+#define GIC_DIST_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
+#define GIC_DIST_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
+#define GIC_DIST_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
 
 #define GICD_CTLR_EN_GRP0 (1U << 0)
 #define GICD_CTLR_EN_GRP1 (1U << 1)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
 void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
 void gic_update(GICState *s);
 void gic_init_irqs_and_distributor(GICState *s);
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
-                      MemTxAttrs attrs);
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
+                           MemTxAttrs attrs);
 
 static inline bool gic_test_pending(GICState *s, int irq, int cm)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
      * GICD_ISPENDR to set the state pending.
      */
     return (s->irq_state[irq].pending & cm) ||
-        (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_LEVEL(irq, cm));
+        (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_LEVEL(irq, cm));
 }
 
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
         best_prio = 0x100;
         best_irq = 1023;
         for (irq = 0; irq < s->num_irq; irq++) {
-            if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
-                (!GIC_TEST_ACTIVE(irq, cm)) &&
-                (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
-                if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
-                    best_prio = GIC_GET_PRIORITY(irq, cpu);
+            if (GIC_DIST_TEST_ENABLED(irq, cm) &&
+                gic_test_pending(s, irq, cm) &&
+                (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
+                (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
+                if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
+                    best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
                     best_irq = irq;
                 }
             }
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
         if (best_prio < s->priority_mask[cpu]) {
             s->current_pending[cpu] = best_irq;
             if (best_prio < s->running_priority[cpu]) {
-                int group = GIC_TEST_GROUP(best_irq, cm);
+                int group = GIC_DIST_TEST_GROUP(best_irq, cm);
 
                 if (extract32(s->ctlr, group, 1) &&
                     extract32(s->cpu_ctlr[cpu], group, 1)) {
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
     }
 
     DPRINTF("Set %d pending cpu %d\n", irq, cpu);
-    GIC_SET_PENDING(irq, cm);
+    GIC_DIST_SET_PENDING(irq, cm);
     gic_update(s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
     if (level) {
-        GIC_SET_LEVEL(irq, cm);
-        if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
+        GIC_DIST_SET_LEVEL(irq, cm);
+        if (GIC_DIST_TEST_EDGE_TRIGGER(irq) || GIC_DIST_TEST_ENABLED(irq, cm)) {
             DPRINTF("Set %d pending mask %x\n", irq, target);
-            GIC_SET_PENDING(irq, target);
+            GIC_DIST_SET_PENDING(irq, target);
         }
     } else {
-        GIC_CLEAR_LEVEL(irq, cm);
+        GIC_DIST_CLEAR_LEVEL(irq, cm);
     }
 }
}
354
gic_update(s);
355
}
356
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
goto bad_reg;
}
for (i = 0; i < 8; i++) {
- if (GIC_TEST_GROUP(irq + i, cm)) {
+ if (GIC_DIST_TEST_GROUP(irq + i, cm)) {
res |= (1 << i);
}
}
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
res = 0;
for (i = 0; i < 8; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- if (GIC_TEST_ENABLED(irq + i, cm)) {
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
res |= (1 << i);
}
}
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
for (i = 0; i < 8; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
for (i = 0; i < 8; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- if (GIC_TEST_ACTIVE(irq + i, mask)) {
+ if (GIC_DIST_TEST_ACTIVE(irq + i, mask)) {
res |= (1 << i);
}
}
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
irq = (offset - 0x400) + GIC_BASE_IRQ;
if (irq >= s->num_irq)
goto bad_reg;
- res = gic_get_priority(s, cpu, irq, attrs);
+ res = gic_dist_get_priority(s, cpu, irq, attrs);
} else if (offset < 0xc00) {
/* Interrupt CPU Target. */
if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
} else if (irq < GIC_INTERNAL) {
res = cm;
} else {
- res = GIC_TARGET(irq);
+ res = GIC_DIST_TARGET(irq);
}
}
} else if (offset < 0xf00) {
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
res = 0;
for (i = 0; i < 4; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- if (GIC_TEST_MODEL(irq + i))
+ if (GIC_DIST_TEST_MODEL(irq + i)) {
res |= (1 << (i * 2));
- if (GIC_TEST_EDGE_TRIGGER(irq + i))
+ }
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
res |= (2 << (i * 2));
+ }
}
} else if (offset < 0xf10) {
goto bad_reg;
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
}
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
res = 0; /* Ignore Non-secure access of Group0 IRQ */
} else {
res = s->sgi_pending[irq][cpu];
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (value & (1 << i)) {
/* Group1 (Non-secure) */
- GIC_SET_GROUP(irq + i, cm);
+ GIC_DIST_SET_GROUP(irq + i, cm);
} else {
/* Group0 (Secure) */
- GIC_CLEAR_GROUP(irq + i, cm);
+ GIC_DIST_CLEAR_GROUP(irq + i, cm);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
for (i = 0; i < 8; i++) {
if (value & (1 << i)) {
int mask =
- (irq < GIC_INTERNAL) ? (1 << cpu) : GIC_TARGET(irq + i);
+ (irq < GIC_INTERNAL) ? (1 << cpu)
+ : GIC_DIST_TARGET(irq + i);
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- if (!GIC_TEST_ENABLED(irq + i, cm)) {
+ if (!GIC_DIST_TEST_ENABLED(irq + i, cm)) {
DPRINTF("Enabled IRQ %d\n", irq + i);
trace_gic_enable_irq(irq + i);
}
- GIC_SET_ENABLED(irq + i, cm);
+ GIC_DIST_SET_ENABLED(irq + i, cm);
/* If a raised level triggered IRQ enabled then mark
is as pending. */
- if (GIC_TEST_LEVEL(irq + i, mask)
- && !GIC_TEST_EDGE_TRIGGER(irq + i)) {
+ if (GIC_DIST_TEST_LEVEL(irq + i, mask)
+ && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
DPRINTF("Set %d pending mask %x\n", irq + i, mask);
- GIC_SET_PENDING(irq + i, mask);
+ GIC_DIST_SET_PENDING(irq + i, mask);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- if (GIC_TEST_ENABLED(irq + i, cm)) {
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
DPRINTF("Disabled IRQ %d\n", irq + i);
trace_gic_disable_irq(irq + i);
}
- GIC_CLEAR_ENABLED(irq + i, cm);
+ GIC_DIST_CLEAR_ENABLED(irq + i, cm);
}
}
} else if (offset < 0x280) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
for (i = 0; i < 8; i++) {
if (value & (1 << i)) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
- GIC_SET_PENDING(irq + i, GIC_TARGET(irq + i));
+ GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
}
}
} else if (offset < 0x300) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
for (i = 0; i < 8; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
for per-CPU interrupts. It's unclear whether this is the
corect behavior. */
if (value & (1 << i)) {
- GIC_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
+ GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
}
}
} else if (offset < 0x400) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
irq = (offset - 0x400) + GIC_BASE_IRQ;
if (irq >= s->num_irq)
goto bad_reg;
- gic_set_priority(s, cpu, irq, value, attrs);
+ gic_dist_set_priority(s, cpu, irq, value, attrs);
} else if (offset < 0xc00) {
/* Interrupt CPU Target. RAZ/WI on uniprocessor GICs, with the
* annoying exception of the 11MPCore's GIC.
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
value |= 0xaa;
for (i = 0; i < 4; i++) {
if (s->security_extn && !attrs.secure &&
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
if (s->revision == REV_11MPCORE) {
if (value & (1 << (i * 2))) {
- GIC_SET_MODEL(irq + i);
+ GIC_DIST_SET_MODEL(irq + i);
} else {
- GIC_CLEAR_MODEL(irq + i);
+ GIC_DIST_CLEAR_MODEL(irq + i);
}
}
if (value & (2 << (i * 2))) {
- GIC_SET_EDGE_TRIGGER(irq + i);
+ GIC_DIST_SET_EDGE_TRIGGER(irq + i);
} else {
- GIC_CLEAR_EDGE_TRIGGER(irq + i);
+ GIC_DIST_CLEAR_EDGE_TRIGGER(irq + i);
}
}
} else if (offset < 0xf10) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
irq = (offset - 0xf10);
if (!s->security_extn || attrs.secure ||
- GIC_TEST_GROUP(irq, 1 << cpu)) {
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
s->sgi_pending[irq][cpu] &= ~value;
if (s->sgi_pending[irq][cpu] == 0) {
- GIC_CLEAR_PENDING(irq, 1 << cpu);
+ GIC_DIST_CLEAR_PENDING(irq, 1 << cpu);
}
}
} else if (offset < 0xf30) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
irq = (offset - 0xf20);
if (!s->security_extn || attrs.secure ||
- GIC_TEST_GROUP(irq, 1 << cpu)) {
- GIC_SET_PENDING(irq, 1 << cpu);
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
+ GIC_DIST_SET_PENDING(irq, 1 << cpu);
s->sgi_pending[irq][cpu] |= value;
}
} else {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
mask = ALL_CPU_MASK;
break;
}
- GIC_SET_PENDING(irq, mask);
+ GIC_DIST_SET_PENDING(irq, mask);
target_cpu = ctz32(mask);
while (target_cpu < GIC_NCPU) {
s->sgi_pending[irq][target_cpu] |= (1 << cpu);
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
}
}
for (i = 0; i < GIC_NR_SGIS; i++) {
- GIC_SET_ENABLED(i, ALL_CPU_MASK);
- GIC_SET_EDGE_TRIGGER(i);
+ GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
+ GIC_DIST_SET_EDGE_TRIGGER(i);
}
for (i = 0; i < ARRAY_SIZE(s->priority2); i++) {
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
}
if (s->security_extn && s->irq_reset_nonsecure) {
for (i = 0; i < GIC_MAXIRQ; i++) {
- GIC_SET_GROUP(i, ALL_CPU_MASK);
+ GIC_DIST_SET_GROUP(i, ALL_CPU_MASK);
}
}
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void translate_group(GICState *s, int irq, int cpu,
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (to_kernel) {
- *field = GIC_TEST_GROUP(irq, cm);
+ *field = GIC_DIST_TEST_GROUP(irq, cm);
} else {
if (*field & 1) {
- GIC_SET_GROUP(irq, cm);
+ GIC_DIST_SET_GROUP(irq, cm);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void translate_enabled(GICState *s, int irq, int cpu,
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (to_kernel) {
- *field = GIC_TEST_ENABLED(irq, cm);
+ *field = GIC_DIST_TEST_ENABLED(irq, cm);
} else {
if (*field & 1) {
- GIC_SET_ENABLED(irq, cm);
+ GIC_DIST_SET_ENABLED(irq, cm);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void translate_pending(GICState *s, int irq, int cpu,
*field = gic_test_pending(s, irq, cm);
} else {
if (*field & 1) {
- GIC_SET_PENDING(irq, cm);
+ GIC_DIST_SET_PENDING(irq, cm);
/* TODO: Capture is level-line is held high in the kernel */
}
}
@@ -XXX,XX +XXX,XX @@ static void translate_active(GICState *s, int irq, int cpu,
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
if (to_kernel) {
- *field = GIC_TEST_ACTIVE(irq, cm);
+ *field = GIC_DIST_TEST_ACTIVE(irq, cm);
} else {
if (*field & 1) {
- GIC_SET_ACTIVE(irq, cm);
+ GIC_DIST_SET_ACTIVE(irq, cm);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void translate_trigger(GICState *s, int irq, int cpu,
uint32_t *field, bool to_kernel)
{
if (to_kernel) {
- *field = (GIC_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
+ *field = (GIC_DIST_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
} else {
if (*field & 0x2) {
- GIC_SET_EDGE_TRIGGER(irq);
+ GIC_DIST_SET_EDGE_TRIGGER(irq);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void translate_priority(GICState *s, int irq, int cpu,
uint32_t *field, bool to_kernel)
{
if (to_kernel) {
- *field = GIC_GET_PRIORITY(irq, cpu) & 0xff;
+ *field = GIC_DIST_GET_PRIORITY(irq, cpu) & 0xff;
} else {
- gic_set_priority(s, cpu, irq, *field & 0xff, MEMTXATTRS_UNSPECIFIED);
+ gic_dist_set_priority(s, cpu, irq,
+ *field & 0xff, MEMTXATTRS_UNSPECIFIED);
}
}
--
2.18.0
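
For readers skimming the rename above: the GIC_DIST_* names make explicit that these helpers act on the distributor's per-interrupt state, ahead of the GICv2 virtualization work elsewhere in this series, which adds separate state for the virtual interface. Below is a rough, self-contained sketch of the pattern those helpers follow -- a per-IRQ bitmap of CPU bits tested or updated under a CPU mask. The struct, constant and function names are illustrative only, not QEMU's actual gic_internal.h definitions.

    /* Illustrative sketch only -- not the actual QEMU gic_internal.h code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define EX_GIC_MAXIRQ 1020

    typedef struct ExampleDistState {
        uint8_t enabled[EX_GIC_MAXIRQ];   /* one CPU bitmap per interrupt */
    } ExampleDistState;

    /* Is the interrupt enabled for any CPU in the mask "cm"? */
    static inline bool ex_dist_test_enabled(ExampleDistState *s, int irq, uint8_t cm)
    {
        return s->enabled[irq] & cm;
    }

    /* Enable the interrupt for every CPU in the mask "cm". */
    static inline void ex_dist_set_enabled(ExampleDistState *s, int irq, uint8_t cm)
    {
        s->enabled[irq] |= cm;
    }
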
From: Julia Suvorova <jusual@mail.ru>

MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
write_v7m_control_spsel_for_secstate(env,
val & R_V7M_CONTROL_SPSEL_MASK,
M_REG_NS);
- env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
- env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+ env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ }
return;
case 0x98: /* SP_NS */
{
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
!arm_v7m_is_handler_mode(env)) {
write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
}
- env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
- env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+ env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ }
break;
default:
bad_reg:
--
2.18.0

From: Alexandre Iooss <erdnaxe@crans.org>

This adds the target guide for BBC Micro:bit.

Information is taken from https://wiki.qemu.org/Features/MicroBit
and from hw/arm/nrf51_soc.c.

Signed-off-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20210621075625.540471-1-erdnaxe@crans.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/nrf.rst | 51 ++++++++++++++++++++++++++++++++++++++
docs/system/target-arm.rst | 1 +
MAINTAINERS | 1 +
3 files changed, 53 insertions(+)
create mode 100644 docs/system/arm/nrf.rst

diff --git a/docs/system/arm/nrf.rst b/docs/system/arm/nrf.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/nrf.rst
@@ -XXX,XX +XXX,XX @@
+Nordic nRF boards (``microbit``)
+================================
+
+The `Nordic nRF`_ chips are a family of ARM-based System-on-Chip that
+are designed to be used for low-power and short-range wireless solutions.
+
+.. _Nordic nRF: https://www.nordicsemi.com/Products
+
+The nRF51 series is the first series for short range wireless applications.
+It is superseded by the nRF52 series.
+The following machines are based on this chip :
+
+- ``microbit`` BBC micro:bit board with nRF51822 SoC
+
+There are other series such as nRF52, nRF53 and nRF91 which are currently not
+supported by QEMU.
+
+Supported devices
+-----------------
+
+ * ARM Cortex-M0 (ARMv6-M)
+ * Serial ports (UART)
+ * Clock controller
+ * Timers
+ * Random Number Generator (RNG)
+ * GPIO controller
+ * NVMC
+ * SWI
+
+Missing devices
+---------------
+
+ * Watchdog
+ * Real-Time Clock (RTC) controller
+ * TWI (i2c)
+ * SPI controller
+ * Analog to Digital Converter (ADC)
+ * Quadrature decoder
+ * Radio
+
+Boot options
+------------
+
+The Micro:bit machine can be started using the ``-device`` option to load a
+firmware in `ihex format`_. Example:
+
+.. _ihex format: https://en.wikipedia.org/wiki/Intel_HEX
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M microbit -device loader,file=test.hex
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
arm/digic
arm/musicpal
arm/gumstix
+ arm/nrf
arm/nseries
arm/nuvoton
arm/orangepi
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/*/microbit*.c
F: include/hw/*/nrf51*.h
F: include/hw/*/microbit*.h
F: tests/qtest/microbit-test.c
+F: docs/system/arm/nrf.rst

AVR Machines
-------------
--
2.20.1
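
One further aside, on the target/arm/helper.c patch above: both hunks make the CONTROL.nPRIV update conditional on arm_feature(env, ARM_FEATURE_M_MAIN), so the bit only changes on cores with the Main Extension and is left untouched otherwise. A rough standalone sketch of that rule follows; the helper name and constant are illustrative and are not QEMU's actual code.

    /* Illustrative sketch of the guard added by the patch; not QEMU code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define EX_CONTROL_NPRIV 1u   /* CONTROL.nPRIV is bit 0 */

    static uint32_t ex_write_control_npriv(uint32_t control, uint32_t val,
                                           bool has_main_extension)
    {
        if (has_main_extension) {
            /* Mainline profiles: nPRIV is writable. */
            control &= ~EX_CONTROL_NPRIV;
            control |= val & EX_CONTROL_NPRIV;
        }
        /* Otherwise the write is ignored and CONTROL.nPRIV is unchanged. */
        return control;
    }
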