As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 784c2e4f232adf5ef47a84a262ec72a07d068d6a:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h            |  227 ++++-
 target/arm/internals.h      |   45 +-
 target/arm/kvm_arm.h        |   24 +
 target/arm/translate.h      |   21 +
 hw/arm/boot.c               |   18 +
 hw/intc/armv7m_nvic.c       |   12 +-
 hw/net/cadence_gem.c        |    9 +-
 hw/sd/ssi-sd.c              |    2 +
 linux-user/aarch64/signal.c |    4 +-
 linux-user/elfload.c        |   60 +-
 linux-user/syscall.c        |   10 +-
 target/arm/cpu.c            |  242 ++++----
 target/arm/cpu64.c          |  148 +++--
 target/arm/helper.c         |  397 ++++++++----
 target/arm/kvm.c            |   60 ++
 target/arm/kvm32.c          |   13 +
 target/arm/kvm64.c          |   15 +-
 target/arm/machine.c        |   28 +-
 target/arm/op_helper.c      |    2 +-
 target/arm/translate-a64.c  |  715 ++++-----------
 target/arm/translate.c      | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)

From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends. They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead. This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface. Since the SSI bus doesn't support hotplug, only -device
can be affected. Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo ssi_sd_info = {
-- 
2.19.1

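As an illustration of the property-based pattern the commit message asks for, here is a minimal sketch with invented names (foo-sd, FooSDState, FOO_SD are hypothetical; this is not code from this series, and dc->props reflects the QEMU API of this era): the device declares a "drive" property and leaves it to board code or the user to supply the backend, instead of calling drive_get_next() itself.

    #include "qemu/osdep.h"
    #include "hw/ssi/ssi.h"
    #include "hw/qdev-properties.h"
    #include "sysemu/block-backend.h"
    #include "qapi/error.h"

    /* Hypothetical device state; names are illustrative only. */
    typedef struct FooSDState {
        SSISlave ssidev;
        BlockBackend *blk;          /* backend handed in from outside */
    } FooSDState;

    #define TYPE_FOO_SD "foo-sd"
    #define FOO_SD(obj) OBJECT_CHECK(FooSDState, (obj), TYPE_FOO_SD)

    /* Expose the backend as a property rather than fishing for it. */
    static Property foo_sd_properties[] = {
        DEFINE_PROP_DRIVE("drive", FooSDState, blk),
        DEFINE_PROP_END_OF_LIST(),
    };

    static void foo_sd_realize(SSISlave *dev, Error **errp)
    {
        FooSDState *s = FOO_SD(dev);

        if (!s->blk) {
            /* Board code (or -device foo-sd,drive=...) must set it first. */
            error_setg(errp, "foo-sd: 'drive' property not set");
            return;
        }
        /* ... hand s->blk to the SD card model here ... */
    }

    static void foo_sd_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        SSISlaveClass *k = SSI_SLAVE_CLASS(klass);

        k->realize = foo_sd_realize;
        dc->props = foo_sd_properties;   /* visible to boards and -device */
    }

With a property like this, a board can pass whatever backend it obtained itself, and user creation via -device works without any global drive lookup hidden inside the device's init path.
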
From: Dongjiu Geng <gengdongjiu@huawei.com>

This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing
SError exception state, and adds support for migrating that state.

The SError exception state consists of a pending flag and an ESR
value; kvm_put/get_vcpu_events() are called when system registers are
set or read. On migration, if the source machine has an SError
pending, QEMU migrates it regardless of whether the target machine
supports specifying a guest ESR value: a target that does not support
that can still inject the SError with a zero ESR value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm_arm.h | 24 ++++++++++++++++++
 target/arm/kvm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/kvm32.c   | 13 ++++++++++
 target/arm/kvm64.c   | 13 ++++++++++
 target/arm/machine.c | 22 ++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
          */
     } exception;
 
+    /* Information associated with an SError */
+    struct {
+        uint8_t pending;
+        uint8_t has_esr;
+        uint64_t esr;
+    } serror;
+
     /* Thumb-2 EE state.  */
     uint32_t teecr;
     uint32_t teehbr;
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
  */
 void kvm_arm_reset_vcpu(ARMCPU *cpu);
 
+/**
+ * kvm_arm_init_serror_injection:
+ * @cs: CPUState
+ *
+ * Check whether KVM can set guest SError syndrome.
+ */
+void kvm_arm_init_serror_injection(CPUState *cs);
+
+/**
+ * kvm_get_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Get VCPU related state from kvm.
+ */
+int kvm_get_vcpu_events(ARMCPU *cpu);
+
+/**
+ * kvm_put_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Put VCPU related state to kvm.
+ */
+int kvm_put_vcpu_events(ARMCPU *cpu);
+
 #ifdef CONFIG_KVM
 /**
  * kvm_arm_create_scratch_host_vcpu:
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
 };
 
 static bool cap_has_mp_state;
+static bool cap_has_inject_serror_esr;
 
 static ARMHostCPUFeatures arm_host_cpu_features;
 
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
     return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
 }
 
+void kvm_arm_init_serror_injection(CPUState *cs)
+{
+    cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
+                                    KVM_CAP_ARM_INJECT_SERROR_ESR);
+}
+
 bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
                                       int *fdarray,
                                       struct kvm_vcpu_init *init)
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
     return 0;
 }
 
+int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (cap_has_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 {
 }
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
     cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
 
+    /* Check whether userspace can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     /* Note that we do not call write_cpustate_to_list()
      * here, so we are only writing the tuple list back to
      * KVM. This is safe because nothing can change the
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpscr(env, fpscr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
 
     kvm_arm_init_debug(cs);
 
+    /* Check whether user space can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
     }
 };
 #endif /* AARCH64 */
 
+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
         &vmstate_sve,
 #endif
+        &vmstate_serror,
         NULL
     }
 };
-- 
2.19.1

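For reference when reading kvm_get/put_vcpu_events() above: the fields it copies live in the kernel's own kvm_vcpu_events structure for arm64. The sketch below shows that layout roughly as the uapi headers of this era define it; the serror_* field names are certain (the patch uses them), but the padding and reserved sizes here are an assumption from memory, so consult linux-headers/asm-arm64/kvm.h in the QEMU tree for the authoritative definition.

    #include <linux/types.h>

    /* Approximate arm64 KVM_{GET,SET}_VCPU_EVENTS payload; pad/reserved
     * sizes are an assumption, check the real uapi header before use. */
    struct kvm_vcpu_events {
        struct {
            __u8  serror_pending;   /* is an SError pending for the vcpu? */
            __u8  serror_has_esr;   /* does serror_esr carry a syndrome?  */
            __u8  pad[6];
            __u64 serror_esr;       /* syndrome value, if supported */
        } exception;
        __u32 reserved[12];
    };

This also explains the capability check in the patch: serror_esr is only honoured by hosts that advertise KVM_CAP_ARM_INJECT_SERROR_ESR, which is why kvm_put_vcpu_events() copies the ESR only when cap_has_inject_serror_esr is set.
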
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
      * is used for reset values of non-constant registers; no reset_
      * prefix means a constant register.
+     * Some of these registers are split out into a substructure that
+     * is shared with the translators to control the ISA.
      */
+    struct ARMISARegisters {
+        uint32_t id_isar0;
+        uint32_t id_isar1;
+        uint32_t id_isar2;
+        uint32_t id_isar3;
+        uint32_t id_isar4;
+        uint32_t id_isar5;
+        uint32_t id_isar6;
+        uint32_t mvfr0;
+        uint32_t mvfr1;
+        uint32_t mvfr2;
+        uint64_t id_aa64isar0;
+        uint64_t id_aa64isar1;
+        uint64_t id_aa64pfr0;
+        uint64_t id_aa64pfr1;
+    } isar;
     uint32_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
-    uint32_t mvfr0;
-    uint32_t mvfr1;
-    uint32_t mvfr2;
     uint32_t ctr;
     uint32_t reset_sctlr;
     uint32_t id_pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_mmfr2;
     uint32_t id_mmfr3;
     uint32_t id_mmfr4;
-    uint32_t id_isar0;
-    uint32_t id_isar1;
-    uint32_t id_isar2;
-    uint32_t id_isar3;
-    uint32_t id_isar4;
-    uint32_t id_isar5;
-    uint32_t id_isar6;
-    uint64_t id_aa64pfr0;
-    uint64_t id_aa64pfr1;
     uint64_t id_aa64dfr0;
     uint64_t id_aa64dfr1;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
-    uint64_t id_aa64isar0;
-    uint64_t id_aa64isar1;
     uint64_t id_aa64mmfr0;
     uint64_t id_aa64mmfr1;
     uint32_t dbgdidr;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd5c: /* MMFR3. */
         return cpu->id_mmfr3;
     case 0xd60: /* ISAR0. */
-        return cpu->id_isar0;
+        return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1. */
-        return cpu->id_isar1;
+        return cpu->isar.id_isar1;
     case 0xd68: /* ISAR2. */
-        return cpu->id_isar2;
+        return cpu->isar.id_isar2;
     case 0xd6c: /* ISAR3. */
-        return cpu->id_isar3;
+        return cpu->isar.id_isar3;
     case 0xd70: /* ISAR4. */
-        return cpu->id_isar4;
+        return cpu->isar.id_isar4;
     case 0xd74: /* ISAR5. */
-        return cpu->id_isar5;
+        return cpu->isar.id_isar5;
     case 0xd78: /* CLIDR */
         return cpu->clidr;
     case 0xd7c: /* CTR */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
     g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
 
     env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
-    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
-    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
-    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
+    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
+    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
+    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
     s->halted = cpu->start_powered_off;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
          */
         cpu->id_pfr1 &= ~0xf0;
-        cpu->id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 &= ~0xf000;
     }
 
     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers if we don't have EL2. These are id_pfr1[15:12] and
          * id_aa64pfr0_el1[11:8].
          */
-        cpu->id_aa64pfr0 &= ~0xf00;
+        cpu->isar.id_aa64pfr0 &= ~0xf00;
         cpu->id_pfr1 &= ~0xf000;
     }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4107b362;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4117b363;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fb767;
     cpu->reset_fpsid = 0x410120b5;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222100;
-    cpu->id_isar0 = 0x0140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231121;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x01141;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     cpu->midr = 0x410fb022;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
     cpu->id_pfr0 = 0x111;
     cpu->id_pfr1 = 0x1;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01100103;
     cpu->id_mmfr1 = 0x10020302;
     cpu->id_mmfr2 = 0x01222000;
-    cpu->id_isar0 = 0x00100011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11221011;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 1;
 }
 
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01101110;
-    cpu->id_isar1 = 0x02212000;
-    cpu->id_isar2 = 0x20232232;
-    cpu->id_isar3 = 0x01111131;
-    cpu->id_isar4 = 0x01310132;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
     cpu->clidr = 0x00000000;
     cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01200000;
     cpu->id_mmfr3 = 0x0211;
-    cpu->id_isar0 = 0x02101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232141;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x0010142;
-    cpu->id_isar5 = 0x0;
-    cpu->id_isar6 = 0x0;
+    cpu->isar.id_isar0 = 0x02101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232141;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x0010142;
+    cpu->isar.id_isar5 = 0x0;
+    cpu->isar.id_isar6 = 0x0;
     cpu->mp_is_up = true;
     cpu->pmsav7_dregion = 16;
     define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
     cpu->reset_fpsid = 0x410330c0;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x00011111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x00011111;
     cpu->ctr = 0x82048004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01202000;
     cpu->id_mmfr3 = 0x11;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x12112111;
-    cpu->id_isar2 = 0x21232031;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x12112111;
+    cpu->isar.id_isar2 = 0x21232031;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x15141000;
     cpu->clidr = (1 << 27) | (2 << 24) | 3;
     cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CBAR);
     cpu->midr = 0x410fc090;
     cpu->reset_fpsid = 0x41033090;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x01111111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x01111111;
     cpu->ctr = 0x80038003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01230000;
     cpu->id_mmfr3 = 0x00002111;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x35141000;
     cpu->clidr = (1 << 27) | (1 << 24) | 3;
     cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
     cpu->midr = 0x410fc075;
     cpu->reset_fpsid = 0x41023075;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x84448003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f005;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
     cpu->midr = 0x412fc0f1;
     cpu->reset_fpsid = 0x410430f0;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01240000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f021;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->midr = 0x411fd070;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->midr = 0x410fd034;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x84448004; /* L1Ip = VIPT */
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
527
cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
528
cpu->dbgdidr = 0x3516d000;
529
cpu->clidr = 0x0a200023;
530
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
531
cpu->midr = 0x410fd083;
532
cpu->revidr = 0x00000000;
533
cpu->reset_fpsid = 0x41034080;
534
- cpu->mvfr0 = 0x10110222;
535
- cpu->mvfr1 = 0x12111111;
536
- cpu->mvfr2 = 0x00000043;
537
+ cpu->isar.mvfr0 = 0x10110222;
538
+ cpu->isar.mvfr1 = 0x12111111;
539
+ cpu->isar.mvfr2 = 0x00000043;
540
cpu->ctr = 0x8444c004;
541
cpu->reset_sctlr = 0x00c50838;
542
cpu->id_pfr0 = 0x00000131;
543
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
544
cpu->id_mmfr1 = 0x40000000;
545
cpu->id_mmfr2 = 0x01260000;
546
cpu->id_mmfr3 = 0x02102211;
547
- cpu->id_isar0 = 0x02101110;
548
- cpu->id_isar1 = 0x13112111;
549
- cpu->id_isar2 = 0x21232042;
550
- cpu->id_isar3 = 0x01112131;
551
- cpu->id_isar4 = 0x00011142;
552
- cpu->id_isar5 = 0x00011121;
553
- cpu->id_aa64pfr0 = 0x00002222;
554
+ cpu->isar.id_isar0 = 0x02101110;
555
+ cpu->isar.id_isar1 = 0x13112111;
556
+ cpu->isar.id_isar2 = 0x21232042;
557
+ cpu->isar.id_isar3 = 0x01112131;
558
+ cpu->isar.id_isar4 = 0x00011142;
559
+ cpu->isar.id_isar5 = 0x00011121;
560
+ cpu->isar.id_aa64pfr0 = 0x00002222;
561
cpu->id_aa64dfr0 = 0x10305106;
562
cpu->pmceid0 = 0x00000000;
563
cpu->pmceid1 = 0x00000000;
564
- cpu->id_aa64isar0 = 0x00011120;
565
+ cpu->isar.id_aa64isar0 = 0x00011120;
566
cpu->id_aa64mmfr0 = 0x00001124;
567
cpu->dbgdidr = 0x3516d000;
568
cpu->clidr = 0x0a200023;
569
diff --git a/target/arm/helper.c b/target/arm/helper.c
570
index XXXXXXX..XXXXXXX 100644
571
--- a/target/arm/helper.c
572
+++ b/target/arm/helper.c
573
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
574
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
575
{
576
ARMCPU *cpu = arm_env_get_cpu(env);
577
- uint64_t pfr0 = cpu->id_aa64pfr0;
578
+ uint64_t pfr0 = cpu->isar.id_aa64pfr0;
579
580
if (env->gicv3state) {
581
pfr0 |= 1 << 24;
582
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
583
{ .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
584
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
585
.access = PL1_R, .type = ARM_CP_CONST,
586
- .resetvalue = cpu->id_isar0 },
587
+ .resetvalue = cpu->isar.id_isar0 },
588
{ .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
589
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
590
.access = PL1_R, .type = ARM_CP_CONST,
591
- .resetvalue = cpu->id_isar1 },
592
+ .resetvalue = cpu->isar.id_isar1 },
593
{ .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
594
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
595
.access = PL1_R, .type = ARM_CP_CONST,
596
- .resetvalue = cpu->id_isar2 },
597
+ .resetvalue = cpu->isar.id_isar2 },
598
{ .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
599
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
600
.access = PL1_R, .type = ARM_CP_CONST,
601
- .resetvalue = cpu->id_isar3 },
602
+ .resetvalue = cpu->isar.id_isar3 },
603
{ .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
604
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
605
.access = PL1_R, .type = ARM_CP_CONST,
606
- .resetvalue = cpu->id_isar4 },
607
+ .resetvalue = cpu->isar.id_isar4 },
608
{ .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
609
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
610
.access = PL1_R, .type = ARM_CP_CONST,
611
- .resetvalue = cpu->id_isar5 },
612
+ .resetvalue = cpu->isar.id_isar5 },
613
{ .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
614
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
615
.access = PL1_R, .type = ARM_CP_CONST,
616
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
617
{ .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
618
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
619
.access = PL1_R, .type = ARM_CP_CONST,
620
- .resetvalue = cpu->id_isar6 },
621
+ .resetvalue = cpu->isar.id_isar6 },
622
REGINFO_SENTINEL
623
};
624
define_arm_cp_regs(cpu, v6_idregs);
625
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
626
{ .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
627
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
628
.access = PL1_R, .type = ARM_CP_CONST,
629
- .resetvalue = cpu->id_aa64pfr1},
630
+ .resetvalue = cpu->isar.id_aa64pfr1},
631
{ .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
632
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
633
.access = PL1_R, .type = ARM_CP_CONST,
634
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
635
{ .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
636
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
637
.access = PL1_R, .type = ARM_CP_CONST,
638
- .resetvalue = cpu->id_aa64isar0 },
639
+ .resetvalue = cpu->isar.id_aa64isar0 },
640
{ .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
641
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
642
.access = PL1_R, .type = ARM_CP_CONST,
643
- .resetvalue = cpu->id_aa64isar1 },
644
+ .resetvalue = cpu->isar.id_aa64isar1 },
645
{ .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
646
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
647
.access = PL1_R, .type = ARM_CP_CONST,
648
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
649
{ .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
650
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
651
.access = PL1_R, .type = ARM_CP_CONST,
652
- .resetvalue = cpu->mvfr0 },
653
+ .resetvalue = cpu->isar.mvfr0 },
654
{ .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
655
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
656
.access = PL1_R, .type = ARM_CP_CONST,
657
- .resetvalue = cpu->mvfr1 },
658
+ .resetvalue = cpu->isar.mvfr1 },
659
{ .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
660
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
661
.access = PL1_R, .type = ARM_CP_CONST,
662
- .resetvalue = cpu->mvfr2 },
663
+ .resetvalue = cpu->isar.mvfr2 },
664
{ .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
665
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
666
.access = PL1_R, .type = ARM_CP_CONST,
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Enable it for the "any" CPU used by *-linux-user.
3
Instantiating mps2-an505 (cortex-m33) will fail make check when
4
V7VE asserts that ID_ISAR0.Divide includes ARM division. It is
5
also wrong to include ARM_FEATURE_LPAE.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180228193125.20577-10-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/cpu.c | 1 +
12
target/arm/cpu.c | 6 +++++-
11
target/arm/cpu64.c | 1 +
13
1 file changed, 5 insertions(+), 1 deletion(-)
12
2 files changed, 2 insertions(+)
13
14
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.c
17
--- a/target/arm/cpu.c
17
+++ b/target/arm/cpu.c
18
+++ b/target/arm/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_any_initfn(Object *obj)
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
19
set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
20
20
set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
21
/* Some features automatically imply others: */
21
set_feature(&cpu->env, ARM_FEATURE_CRC);
22
if (arm_feature(env, ARM_FEATURE_V8)) {
22
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
23
- set_feature(env, ARM_FEATURE_V7VE);
23
cpu->midr = 0xffffffff;
24
+ if (arm_feature(env, ARM_FEATURE_M)) {
24
}
25
+ set_feature(env, ARM_FEATURE_V7);
25
#endif
26
+ } else {
26
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
27
+ set_feature(env, ARM_FEATURE_V7VE);
27
index XXXXXXX..XXXXXXX 100644
28
+ }
28
--- a/target/arm/cpu64.c
29
}
29
+++ b/target/arm/cpu64.c
30
if (arm_feature(env, ARM_FEATURE_V7VE)) {
30
@@ -XXX,XX +XXX,XX @@ static void aarch64_any_initfn(Object *obj)
31
/* v7 Virtualization Extensions. In real hardware this implies
31
set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
32
set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
33
set_feature(&cpu->env, ARM_FEATURE_CRC);
34
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
35
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
36
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
37
cpu->dcz_blocksize = 7; /* 512 bytes */
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Most of the v8 extensions are self-contained within the ISAR
4
registers and are not implied by other feature bits, which
5
makes them the easiest to convert.
6
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-12-richard.henderson@linaro.org
9
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
12
---
8
target/arm/helper.h | 7 ++++
13
target/arm/cpu.h | 131 +++++++++++++++++++++++++++++++++----
9
target/arm/translate-a64.c | 48 ++++++++++++++++++++++-
14
target/arm/translate.h | 7 ++
10
target/arm/vec_helper.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++
15
linux-user/elfload.c | 46 ++++++++-----
11
3 files changed, 151 insertions(+), 1 deletion(-)
16
target/arm/cpu.c | 27 +++++---
17
target/arm/cpu64.c | 57 +++++++++-------
18
target/arm/translate-a64.c | 101 ++++++++++++++--------------
19
target/arm/translate.c | 36 +++++-----
20
7 files changed, 273 insertions(+), 132 deletions(-)
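
A quick illustration of the conversion pattern this patch introduces, as a sketch only: the enclosing function below is hypothetical, while dc_isar_feature() and the aa64_sha1 test are the names added by this patch.

    /* Sketch, not part of the patch: feature checks now read the ID
     * register fields instead of testing a parallel feature bit.
     */
    static void example_sha1_check(DisasContext *s)
    {
        /* old style: if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) */
        if (!dc_isar_feature(aa64_sha1, s)) {
            unallocated_encoding(s);
            return;
        }
        /* ... emit code for the SHA1 instruction ... */
    }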
12
21
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
24
--- a/target/arm/cpu.h
16
+++ b/target/arm/helper.h
25
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
26
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
18
DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
27
PSCI_ON_PENDING = 2
19
void, ptr, ptr, ptr, ptr, i32)
28
} ARMPSCIState;
20
29
21
+DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
30
+typedef struct ARMISARegisters ARMISARegisters;
22
+ void, ptr, ptr, ptr, ptr, i32)
31
+
23
+DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
32
/**
24
+ void, ptr, ptr, ptr, ptr, i32)
33
* ARMCPU:
25
+DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
34
* @env: #CPUARMState
26
+ void, ptr, ptr, ptr, ptr, i32)
35
@@ -XXX,XX +XXX,XX @@ enum arm_features {
27
+
36
ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
28
#ifdef TARGET_AARCH64
37
ARM_FEATURE_V8,
29
#include "helper-a64.h"
38
ARM_FEATURE_AARCH64, /* supports 64 bit mode */
39
- ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
40
ARM_FEATURE_CBAR, /* has cp15 CBAR */
41
ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
42
ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
43
ARM_FEATURE_EL2, /* has EL2 Virtualization support */
44
ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
45
- ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
46
- ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
47
- ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
48
ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
49
ARM_FEATURE_PMU, /* has PMU support */
50
ARM_FEATURE_VBAR, /* has cp15 VBAR */
51
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
52
ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
53
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
54
- ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
55
- ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
56
- ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
57
- ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
58
- ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
59
- ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
60
- ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
61
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
62
- ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
63
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
64
};
65
66
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
67
/* Shared between translate-sve.c and sve_helper.c. */
68
extern const uint64_t pred_esz_masks[4];
69
70
+/*
71
+ * 32-bit feature tests via id registers.
72
+ */
73
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
74
+{
75
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
76
+}
77
+
78
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
79
+{
80
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
81
+}
82
+
83
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
84
+{
85
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
86
+}
87
+
88
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
89
+{
90
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
91
+}
92
+
93
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
94
+{
95
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
96
+}
97
+
98
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
99
+{
100
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
101
+}
102
+
103
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
104
+{
105
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
106
+}
107
+
108
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
109
+{
110
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
111
+}
112
+
113
+/*
114
+ * 64-bit feature tests via id registers.
115
+ */
116
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
117
+{
118
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
119
+}
120
+
121
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
122
+{
123
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
124
+}
125
+
126
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
127
+{
128
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
129
+}
130
+
131
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
132
+{
133
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
134
+}
135
+
136
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
137
+{
138
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
139
+}
140
+
141
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
142
+{
143
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
144
+}
145
+
146
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
147
+{
148
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
149
+}
150
+
151
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
152
+{
153
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
154
+}
155
+
156
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
157
+{
158
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
159
+}
160
+
161
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
162
+{
163
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
164
+}
165
+
166
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
167
+{
168
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
169
+}
170
+
171
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
172
+{
173
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
174
+}
175
+
176
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
177
+{
178
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
179
+}
180
+
181
+/*
182
+ * Forward to the above feature tests given an ARMCPU pointer.
183
+ */
184
+#define cpu_isar_feature(name, cpu) \
185
+ ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
186
+
30
#endif
187
#endif
188
diff --git a/target/arm/translate.h b/target/arm/translate.h
189
index XXXXXXX..XXXXXXX 100644
190
--- a/target/arm/translate.h
191
+++ b/target/arm/translate.h
192
@@ -XXX,XX +XXX,XX @@
193
/* internal defines */
194
typedef struct DisasContext {
195
DisasContextBase base;
196
+ const ARMISARegisters *isar;
197
198
target_ulong pc;
199
target_ulong page_start;
200
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
201
return ret;
202
}
203
204
+/*
205
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
206
+ */
207
+#define dc_isar_feature(name, ctx) \
208
+ ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
209
+
210
#endif /* TARGET_ARM_TRANSLATE_H */
211
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/linux-user/elfload.c
214
+++ b/linux-user/elfload.c
215
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
216
/* probe for the extra features */
217
#define GET_FEATURE(feat, hwcap) \
218
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
219
+
220
+#define GET_FEATURE_ID(feat, hwcap) \
221
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
222
+
223
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
224
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
225
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
226
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
227
ARMCPU *cpu = ARM_CPU(thread_cpu);
228
uint32_t hwcaps = 0;
229
230
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
231
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
232
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
233
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
234
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
235
+ GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
236
+ GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
237
+ GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
238
+ GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
239
+ GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
240
return hwcaps;
241
}
242
243
#undef GET_FEATURE
244
+#undef GET_FEATURE_ID
245
246
#else
247
/* 64 bit ARM definitions */
248
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
249
/* probe for the extra features */
250
#define GET_FEATURE(feat, hwcap) \
251
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
252
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
253
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
254
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
255
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
256
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
257
- GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
258
- GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
259
- GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
260
- GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
261
+#define GET_FEATURE_ID(feat, hwcap) \
262
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
263
+
264
+ GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
265
+ GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
266
+ GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
267
+ GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
268
+ GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
269
+ GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
270
+ GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
271
+ GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
272
+ GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
273
GET_FEATURE(ARM_FEATURE_V8_FP16,
274
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
275
- GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
276
- GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
277
- GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
278
- GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
279
+ GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
280
+ GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
281
+ GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
282
+ GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
283
GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
284
+
285
#undef GET_FEATURE
286
+#undef GET_FEATURE_ID
287
288
return hwcaps;
289
}
290
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/cpu.c
293
+++ b/target/arm/cpu.c
294
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
295
cortex_a15_initfn(obj);
296
#ifdef CONFIG_USER_ONLY
297
/* We don't set these in system emulation mode for the moment,
298
- * since we don't correctly set the ID registers to advertise them,
299
+ * since we don't correctly set (all of) the ID registers to
300
+ * advertise them.
301
*/
302
set_feature(&cpu->env, ARM_FEATURE_V8);
303
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
304
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
305
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
306
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
307
- set_feature(&cpu->env, ARM_FEATURE_CRC);
308
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
309
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
310
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
311
+ {
312
+ uint32_t t;
313
+
314
+ t = cpu->isar.id_isar5;
315
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
316
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
317
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
318
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
319
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
320
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
321
+ cpu->isar.id_isar5 = t;
322
+
323
+ t = cpu->isar.id_isar6;
324
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
325
+ cpu->isar.id_isar6 = t;
326
+ }
327
#endif
328
}
329
}
330
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
331
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/cpu64.c
333
+++ b/target/arm/cpu64.c
334
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
336
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
337
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
338
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
339
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
340
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
341
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
342
- set_feature(&cpu->env, ARM_FEATURE_CRC);
343
set_feature(&cpu->env, ARM_FEATURE_EL2);
344
set_feature(&cpu->env, ARM_FEATURE_EL3);
345
set_feature(&cpu->env, ARM_FEATURE_PMU);
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
347
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
348
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
349
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
350
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
351
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
352
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
353
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
354
- set_feature(&cpu->env, ARM_FEATURE_CRC);
355
set_feature(&cpu->env, ARM_FEATURE_EL2);
356
set_feature(&cpu->env, ARM_FEATURE_EL3);
357
set_feature(&cpu->env, ARM_FEATURE_PMU);
358
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
359
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
360
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
361
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
362
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
363
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
364
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
365
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
366
- set_feature(&cpu->env, ARM_FEATURE_CRC);
367
set_feature(&cpu->env, ARM_FEATURE_EL2);
368
set_feature(&cpu->env, ARM_FEATURE_EL3);
369
set_feature(&cpu->env, ARM_FEATURE_PMU);
370
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
371
if (kvm_enabled()) {
372
kvm_arm_set_cpu_features_from_host(cpu);
373
} else {
374
+ uint64_t t;
375
+ uint32_t u;
376
aarch64_a57_initfn(obj);
377
+
378
+ t = cpu->isar.id_aa64isar0;
379
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
380
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
381
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
382
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
383
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
384
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
385
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
386
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
387
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
388
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
389
+ cpu->isar.id_aa64isar0 = t;
390
+
391
+ t = cpu->isar.id_aa64isar1;
392
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
393
+ cpu->isar.id_aa64isar1 = t;
394
+
395
+ /* Replicate the same data to the 32-bit id registers. */
396
+ u = cpu->isar.id_isar5;
397
+ u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
398
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
399
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
400
+ u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
401
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
402
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
403
+ cpu->isar.id_isar5 = u;
404
+
405
+ u = cpu->isar.id_isar6;
406
+ u = FIELD_DP32(u, ID_ISAR6, DP, 1);
407
+ cpu->isar.id_isar6 = u;
408
+
409
#ifdef CONFIG_USER_ONLY
410
/* We don't set these in system emulation mode for the moment,
411
* since we don't correctly set the ID registers to advertise them,
412
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
413
* whereas the architecture requires them to be present in both if
414
* present in either.
415
*/
416
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
417
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
418
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
419
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
420
- set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
421
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
422
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
423
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
424
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
425
set_feature(&cpu->env, ARM_FEATURE_SVE);
426
/* For usermode -cpu max we can use a larger and more efficient DCZ
427
* blocksize since we don't have to follow what the hardware does.
31
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
428
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
429
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-a64.c
430
--- a/target/arm/translate-a64.c
34
+++ b/target/arm/translate-a64.c
431
+++ b/target/arm/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_env(DisasContext *s, bool is_q, int rd,
432
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
36
is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
433
}
37
}
434
if (rt2 == 31
38
435
&& ((rt | rs) & 1) == 0
39
+/* Expand a 3-operand + fpstatus pointer + simd data value operation using
436
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
40
+ * an out-of-line helper.
437
+ && dc_isar_feature(aa64_atomics, s)) {
41
+ */
438
/* CASP / CASPL */
42
+static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
439
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
43
+ int rm, bool is_fp16, int data,
440
return;
44
+ gen_helper_gvec_3_ptr *fn)
441
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
45
+{
442
}
46
+ TCGv_ptr fpst = get_fpstatus_ptr(is_fp16);
443
if (rt2 == 31
47
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
444
&& ((rt | rs) & 1) == 0
48
+ vec_full_reg_offset(s, rn),
445
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
49
+ vec_full_reg_offset(s, rm), fpst,
446
+ && dc_isar_feature(aa64_atomics, s)) {
50
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
447
/* CASPA / CASPAL */
51
+ tcg_temp_free_ptr(fpst);
448
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
52
+}
449
return;
53
+
450
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
54
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
451
case 0xb: /* CASL */
55
* than the 32 bit equivalent.
452
case 0xe: /* CASA */
56
*/
453
case 0xf: /* CASAL */
454
- if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
455
+ if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
456
gen_compare_and_swap(s, rs, rt, rn, size);
457
return;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
460
int rs = extract32(insn, 16, 5);
461
int rn = extract32(insn, 5, 5);
462
int o3_opc = extract32(insn, 12, 4);
463
- int feature = ARM_FEATURE_V8_ATOMICS;
464
TCGv_i64 tcg_rn, tcg_rs;
465
AtomicThreeOpFn *fn;
466
467
- if (is_vector) {
468
+ if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
469
unallocated_encoding(s);
470
return;
471
}
472
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
473
unallocated_encoding(s);
474
return;
475
}
476
- if (!arm_dc_feature(s, feature)) {
477
- unallocated_encoding(s);
478
- return;
479
- }
480
481
if (rn == 31) {
482
gen_check_sp_alignment(s);
483
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
484
TCGv_i64 tcg_acc, tcg_val;
485
TCGv_i32 tcg_bytes;
486
487
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)
488
+ if (!dc_isar_feature(aa64_crc32, s)
489
|| (sf == 1 && sz != 3)
490
|| (sf == 0 && sz == 3)) {
491
unallocated_encoding(s);
492
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
493
bool u = extract32(insn, 29, 1);
494
TCGv_i32 ele1, ele2, ele3;
495
TCGv_i64 res;
496
- int feature;
497
+ bool feature;
498
499
switch (u * 16 + opcode) {
500
case 0x10: /* SQRDMLAH (vector) */
501
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
502
unallocated_encoding(s);
503
return;
504
}
505
- feature = ARM_FEATURE_V8_RDM;
506
+ feature = dc_isar_feature(aa64_rdm, s);
507
break;
508
default:
509
unallocated_encoding(s);
510
return;
511
}
512
- if (!arm_dc_feature(s, feature)) {
513
+ if (!feature) {
514
unallocated_encoding(s);
515
return;
516
}
517
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
518
return;
519
}
520
if (size == 3) {
521
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
522
+ if (!dc_isar_feature(aa64_pmull, s)) {
523
unallocated_encoding(s);
524
return;
525
}
57
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
526
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
58
int size = extract32(insn, 22, 2);
527
int size = extract32(insn, 22, 2);
59
bool u = extract32(insn, 29, 1);
528
bool u = extract32(insn, 29, 1);
60
bool is_q = extract32(insn, 30, 1);
529
bool is_q = extract32(insn, 30, 1);
61
- int feature;
530
- int feature, rot;
62
+ int feature, rot;
531
+ bool feature;
532
+ int rot;
63
533
64
switch (u * 16 + opcode) {
534
switch (u * 16 + opcode) {
65
case 0x10: /* SQRDMLAH (vector) */
535
case 0x10: /* SQRDMLAH (vector) */
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
536
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
}
537
unallocated_encoding(s);
68
feature = ARM_FEATURE_V8_RDM;
538
return;
69
break;
539
}
70
+ case 0xc: /* FCADD, #90 */
540
- feature = ARM_FEATURE_V8_RDM;
71
+ case 0xe: /* FCADD, #270 */
541
+ feature = dc_isar_feature(aa64_rdm, s);
72
+ if (size == 0
542
break;
73
+ || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
543
case 0x02: /* SDOT (vector) */
74
+ || (size == 3 && !is_q)) {
544
case 0x12: /* UDOT (vector) */
75
+ unallocated_encoding(s);
545
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
76
+ return;
546
unallocated_encoding(s);
77
+ }
547
return;
78
+ feature = ARM_FEATURE_V8_FCMA;
548
}
79
+ break;
549
- feature = ARM_FEATURE_V8_DOTPROD;
550
+ feature = dc_isar_feature(aa64_dp, s);
551
break;
552
case 0x18: /* FCMLA, #0 */
553
case 0x19: /* FCMLA, #90 */
554
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
555
unallocated_encoding(s);
556
return;
557
}
558
- feature = ARM_FEATURE_V8_FCMA;
559
+ feature = dc_isar_feature(aa64_fcma, s);
560
break;
80
default:
561
default:
81
unallocated_encoding(s);
562
unallocated_encoding(s);
82
return;
563
return;
83
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
564
}
84
}
565
- if (!arm_dc_feature(s, feature)) {
85
return;
566
+ if (!feature) {
86
567
unallocated_encoding(s);
87
+ case 0xc: /* FCADD, #90 */
568
return;
88
+ case 0xe: /* FCADD, #270 */
569
}
89
+ rot = extract32(opcode, 1, 1);
570
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
90
+ switch (size) {
571
break;
91
+ case 1:
572
case 0x1d: /* SQRDMLAH */
92
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
573
case 0x1f: /* SQRDMLSH */
93
+ gen_helper_gvec_fcaddh);
574
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
94
+ break;
575
+ if (!dc_isar_feature(aa64_rdm, s)) {
95
+ case 2:
576
unallocated_encoding(s);
96
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
577
return;
97
+ gen_helper_gvec_fcadds);
578
}
98
+ break;
579
break;
99
+ case 3:
580
case 0x0e: /* SDOT */
100
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
581
case 0x1e: /* UDOT */
101
+ gen_helper_gvec_fcaddd);
582
- if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
102
+ break;
583
+ if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
103
+ default:
584
unallocated_encoding(s);
104
+ g_assert_not_reached();
585
return;
105
+ }
586
}
106
+ return;
587
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
107
+
588
case 0x13: /* FCMLA #90 */
589
case 0x15: /* FCMLA #180 */
590
case 0x17: /* FCMLA #270 */
591
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
592
+ if (!dc_isar_feature(aa64_fcma, s)) {
593
unallocated_encoding(s);
594
return;
595
}
596
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
597
TCGv_i32 tcg_decrypt;
598
CryptoThreeOpIntFn *genfn;
599
600
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
601
- || size != 0) {
602
+ if (!dc_isar_feature(aa64_aes, s) || size != 0) {
603
unallocated_encoding(s);
604
return;
605
}
606
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
607
int rd = extract32(insn, 0, 5);
608
CryptoThreeOpFn *genfn;
609
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
610
- int feature = ARM_FEATURE_V8_SHA256;
611
+ bool feature;
612
613
if (size != 0) {
614
unallocated_encoding(s);
615
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
616
case 2: /* SHA1M */
617
case 3: /* SHA1SU0 */
618
genfn = NULL;
619
- feature = ARM_FEATURE_V8_SHA1;
620
+ feature = dc_isar_feature(aa64_sha1, s);
621
break;
622
case 4: /* SHA256H */
623
genfn = gen_helper_crypto_sha256h;
624
+ feature = dc_isar_feature(aa64_sha256, s);
625
break;
626
case 5: /* SHA256H2 */
627
genfn = gen_helper_crypto_sha256h2;
628
+ feature = dc_isar_feature(aa64_sha256, s);
629
break;
630
case 6: /* SHA256SU1 */
631
genfn = gen_helper_crypto_sha256su1;
632
+ feature = dc_isar_feature(aa64_sha256, s);
633
break;
108
default:
634
default:
109
g_assert_not_reached();
635
unallocated_encoding(s);
110
}
636
return;
111
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
637
}
638
639
- if (!arm_dc_feature(s, feature)) {
640
+ if (!feature) {
641
unallocated_encoding(s);
642
return;
643
}
644
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
645
int rn = extract32(insn, 5, 5);
646
int rd = extract32(insn, 0, 5);
647
CryptoTwoOpFn *genfn;
648
- int feature;
649
+ bool feature;
650
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
651
652
if (size != 0) {
653
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
654
655
switch (opcode) {
656
case 0: /* SHA1H */
657
- feature = ARM_FEATURE_V8_SHA1;
658
+ feature = dc_isar_feature(aa64_sha1, s);
659
genfn = gen_helper_crypto_sha1h;
660
break;
661
case 1: /* SHA1SU1 */
662
- feature = ARM_FEATURE_V8_SHA1;
663
+ feature = dc_isar_feature(aa64_sha1, s);
664
genfn = gen_helper_crypto_sha1su1;
665
break;
666
case 2: /* SHA256SU0 */
667
- feature = ARM_FEATURE_V8_SHA256;
668
+ feature = dc_isar_feature(aa64_sha256, s);
669
genfn = gen_helper_crypto_sha256su0;
670
break;
671
default:
672
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
673
return;
674
}
675
676
- if (!arm_dc_feature(s, feature)) {
677
+ if (!feature) {
678
unallocated_encoding(s);
679
return;
680
}
681
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
682
int rm = extract32(insn, 16, 5);
683
int rn = extract32(insn, 5, 5);
684
int rd = extract32(insn, 0, 5);
685
- int feature;
686
+ bool feature;
687
CryptoThreeOpFn *genfn;
688
689
if (o == 0) {
690
switch (opcode) {
691
case 0: /* SHA512H */
692
- feature = ARM_FEATURE_V8_SHA512;
693
+ feature = dc_isar_feature(aa64_sha512, s);
694
genfn = gen_helper_crypto_sha512h;
695
break;
696
case 1: /* SHA512H2 */
697
- feature = ARM_FEATURE_V8_SHA512;
698
+ feature = dc_isar_feature(aa64_sha512, s);
699
genfn = gen_helper_crypto_sha512h2;
700
break;
701
case 2: /* SHA512SU1 */
702
- feature = ARM_FEATURE_V8_SHA512;
703
+ feature = dc_isar_feature(aa64_sha512, s);
704
genfn = gen_helper_crypto_sha512su1;
705
break;
706
case 3: /* RAX1 */
707
- feature = ARM_FEATURE_V8_SHA3;
708
+ feature = dc_isar_feature(aa64_sha3, s);
709
genfn = NULL;
710
break;
711
}
712
} else {
713
switch (opcode) {
714
case 0: /* SM3PARTW1 */
715
- feature = ARM_FEATURE_V8_SM3;
716
+ feature = dc_isar_feature(aa64_sm3, s);
717
genfn = gen_helper_crypto_sm3partw1;
718
break;
719
case 1: /* SM3PARTW2 */
720
- feature = ARM_FEATURE_V8_SM3;
721
+ feature = dc_isar_feature(aa64_sm3, s);
722
genfn = gen_helper_crypto_sm3partw2;
723
break;
724
case 2: /* SM4EKEY */
725
- feature = ARM_FEATURE_V8_SM4;
726
+ feature = dc_isar_feature(aa64_sm4, s);
727
genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
112
index XXXXXXX..XXXXXXX 100644
822
index XXXXXXX..XXXXXXX 100644
113
--- a/target/arm/vec_helper.c
823
--- a/target/arm/translate.c
114
+++ b/target/arm/vec_helper.c
824
+++ b/target/arm/translate.c
115
@@ -XXX,XX +XXX,XX @@
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
116
#include "exec/exec-all.h"
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
117
#include "exec/helper-proto.h"
827
int q, int rd, int rn, int rm)
118
#include "tcg/tcg-gvec-desc.h"
828
{
119
+#include "fpu/softfloat.h"
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
120
830
+ if (dc_isar_feature(aa32_rdm, s)) {
121
831
int opr_sz = (1 + q) * 8;
122
+/* Note that vector data is stored in host-endian 64-bit chunks,
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
123
+ so addressing units smaller than that needs a host-endian fixup. */
833
vfp_reg_offset(1, rn),
124
+#ifdef HOST_WORDS_BIGENDIAN
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
125
+#define H1(x) ((x) ^ 7)
835
return 1;
126
+#define H2(x) ((x) ^ 3)
836
}
127
+#define H4(x) ((x) ^ 1)
837
if (!u) { /* SHA-1 */
128
+#else
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
129
+#define H1(x) (x)
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
130
+#define H2(x) (x)
840
return 1;
131
+#define H4(x) (x)
841
}
132
+#endif
842
ptr1 = vfp_reg_ptr(true, rd);
133
+
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
134
#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
135
845
tcg_temp_free_i32(tmp4);
136
static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
846
} else { /* SHA-256 */
137
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
138
}
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
139
clear_tail(d, opr_sz, simd_maxsz(desc));
849
return 1;
140
}
850
}
141
+
851
ptr1 = vfp_reg_ptr(true, rd);
142
+void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
143
+ void *vfpst, uint32_t desc)
853
if (op == 14 && size == 2) {
144
+{
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
145
+ uintptr_t opr_sz = simd_oprsz(desc);
855
146
+ float16 *d = vd;
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
147
+ float16 *n = vn;
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
148
+ float16 *m = vm;
858
return 1;
149
+ float_status *fpst = vfpst;
859
}
150
+ uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
860
tcg_rn = tcg_temp_new_i64();
151
+ uint32_t neg_imag = neg_real ^ 1;
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
152
+ uintptr_t i;
862
{
153
+
863
NeonGenThreeOpEnvFn *fn;
154
+ /* Shift boolean to the sign bit so we can xor to negate. */
864
155
+ neg_real <<= 15;
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
156
+ neg_imag <<= 15;
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
157
+
867
return 1;
158
+ for (i = 0; i < opr_sz / 2; i += 2) {
868
}
159
+ float16 e0 = n[H2(i)];
869
if (u && ((rd | rn) & 1)) {
160
+ float16 e1 = m[H2(i + 1)] ^ neg_imag;
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
161
+ float16 e2 = n[H2(i + 1)];
871
break;
162
+ float16 e3 = m[H2(i)] ^ neg_real;
872
}
163
+
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
164
+ d[H2(i)] = float16_add(e0, e1, fpst);
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
165
+ d[H2(i + 1)] = float16_add(e2, e3, fpst);
875
- || ((rm | rd) & 1)) {
166
+ }
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
167
+ clear_tail(d, opr_sz, simd_maxsz(desc));
877
return 1;
168
+}
878
}
169
+
879
ptr1 = vfp_reg_ptr(true, rd);
170
+void HELPER(gvec_fcadds)(void *vd, void *vn, void *vm,
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
171
+ void *vfpst, uint32_t desc)
881
tcg_temp_free_i32(tmp3);
172
+{
882
break;
173
+ uintptr_t opr_sz = simd_oprsz(desc);
883
case NEON_2RM_SHA1H:
174
+ float32 *d = vd;
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
175
+ float32 *n = vn;
885
- || ((rm | rd) & 1)) {
176
+ float32 *m = vm;
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
177
+ float_status *fpst = vfpst;
887
return 1;
178
+ uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
888
}
179
+ uint32_t neg_imag = neg_real ^ 1;
889
ptr1 = vfp_reg_ptr(true, rd);
180
+ uintptr_t i;
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
181
+
891
}
182
+ /* Shift boolean to the sign bit so we can xor to negate. */
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
183
+ neg_real <<= 31;
893
if (q) {
184
+ neg_imag <<= 31;
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
185
+
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
186
+ for (i = 0; i < opr_sz / 4; i += 2) {
896
return 1;
187
+ float32 e0 = n[H4(i)];
897
}
188
+ float32 e1 = m[H4(i + 1)] ^ neg_imag;
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
189
+ float32 e2 = n[H4(i + 1)];
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
190
+ float32 e3 = m[H4(i)] ^ neg_real;
900
return 1;
191
+
901
}
192
+ d[H4(i)] = float32_add(e0, e1, fpst);
902
ptr1 = vfp_reg_ptr(true, rd);
193
+ d[H4(i + 1)] = float32_add(e2, e3, fpst);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
194
+ }
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
195
+ clear_tail(d, opr_sz, simd_maxsz(desc));
905
int size = extract32(insn, 20, 1);
196
+}
906
data = extract32(insn, 23, 2); /* rot */
197
+
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
198
+void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
908
+ if (!dc_isar_feature(aa32_vcma, s)
199
+ void *vfpst, uint32_t desc)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
200
+{
910
return 1;
201
+ uintptr_t opr_sz = simd_oprsz(desc);
911
}
202
+ float64 *d = vd;
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
203
+ float64 *n = vn;
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
204
+ float64 *m = vm;
914
int size = extract32(insn, 20, 1);
205
+ float_status *fpst = vfpst;
915
data = extract32(insn, 24, 1); /* rot */
206
+ uint64_t neg_real = extract64(desc, SIMD_DATA_SHIFT, 1);
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
207
+ uint64_t neg_imag = neg_real ^ 1;
917
+ if (!dc_isar_feature(aa32_vcma, s)
208
+ uintptr_t i;
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
209
+
919
return 1;
210
+ /* Shift boolean to the sign bit so we can xor to negate. */
920
}
211
+ neg_real <<= 63;
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
212
+ neg_imag <<= 63;
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
213
+
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
214
+ for (i = 0; i < opr_sz / 8; i += 2) {
924
bool u = extract32(insn, 4, 1);
215
+ float64 e0 = n[i];
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
216
+ float64 e1 = m[i + 1] ^ neg_imag;
926
+ if (!dc_isar_feature(aa32_dp, s)) {
217
+ float64 e2 = n[i + 1];
927
return 1;
218
+ float64 e3 = m[i] ^ neg_real;
928
}
219
+
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
220
+ d[i] = float64_add(e0, e1, fpst);
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
221
+ d[i + 1] = float64_add(e2, e3, fpst);
931
int size = extract32(insn, 23, 1);
222
+ }
932
int index;
223
+ clear_tail(d, opr_sz, simd_maxsz(desc));
933
224
+}
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Not enabled anywhere yet.
3
Both arm and thumb2 division are controlled by the same ISAR field,
4
which takes care of the arm implies thumb case. Having M imply
5
thumb2 division was wrong for cortex-m0, which is v6m and does not
6
have thumb2 at all, much less thumb2 division.
4
7
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180228193125.20577-11-richard.henderson@linaro.org
10
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
13
---
10
target/arm/cpu.h | 1 +
14
target/arm/cpu.h | 12 ++++++++++--
11
linux-user/elfload.c | 1 +
15
linux-user/elfload.c | 4 ++--
12
2 files changed, 2 insertions(+)
16
target/arm/cpu.c | 10 +---------
17
target/arm/translate.c | 4 ++--
18
4 files changed, 15 insertions(+), 15 deletions(-)
13
19
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
22
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ enum arm_features {
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
19
ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
25
ARM_FEATURE_VFP3,
20
ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
26
ARM_FEATURE_VFP_FP16,
21
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
27
ARM_FEATURE_NEON,
22
+ ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
28
- ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
23
};
29
ARM_FEATURE_M, /* Microcontroller profile. */
24
30
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
25
static inline int arm_feature(CPUARMState *env, int feature)
31
ARM_FEATURE_THUMB2EE,
32
@@ -XXX,XX +XXX,XX @@ enum arm_features {
33
ARM_FEATURE_V5,
34
ARM_FEATURE_STRONGARM,
35
ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
36
- ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
37
ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
38
ARM_FEATURE_GENERIC_TIMER,
39
ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
40
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
41
/*
42
* 32-bit feature tests via id registers.
43
*/
44
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
45
+{
46
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
47
+}
48
+
49
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
50
+{
51
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
52
+}
53
+
54
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
55
{
56
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
26
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
27
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
28
--- a/linux-user/elfload.c
59
--- a/linux-user/elfload.c
29
+++ b/linux-user/elfload.c
60
+++ b/linux-user/elfload.c
30
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
61
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
31
GET_FEATURE(ARM_FEATURE_V8_FP16,
62
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
32
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
63
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
33
GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
64
GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
34
+ GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
65
- GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
35
#undef GET_FEATURE
66
- GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
36
67
+ GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
37
return hwcaps;
68
+ GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
69
/* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
70
* Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
71
* ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
72
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/cpu.c
75
+++ b/target/arm/cpu.c
76
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
77
* Presence of EL2 itself is ARM_FEATURE_EL2, and of the
78
* Security Extensions is ARM_FEATURE_EL3.
79
*/
80
- set_feature(env, ARM_FEATURE_ARM_DIV);
81
+ assert(cpu_isar_feature(arm_div, cpu));
82
set_feature(env, ARM_FEATURE_LPAE);
83
set_feature(env, ARM_FEATURE_V7);
84
}
85
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
86
if (arm_feature(env, ARM_FEATURE_V5)) {
87
set_feature(env, ARM_FEATURE_V4T);
88
}
89
- if (arm_feature(env, ARM_FEATURE_M)) {
90
- set_feature(env, ARM_FEATURE_THUMB_DIV);
91
- }
92
- if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
93
- set_feature(env, ARM_FEATURE_THUMB_DIV);
94
- }
95
if (arm_feature(env, ARM_FEATURE_VFP4)) {
96
set_feature(env, ARM_FEATURE_VFP3);
97
set_feature(env, ARM_FEATURE_VFP_FP16);
98
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
99
ARMCPU *cpu = ARM_CPU(obj);
100
101
set_feature(&cpu->env, ARM_FEATURE_V7);
102
- set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
103
- set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
104
set_feature(&cpu->env, ARM_FEATURE_V7MP);
105
set_feature(&cpu->env, ARM_FEATURE_PMSA);
106
cpu->midr = 0x411fc153; /* r1p3 */
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
112
case 1:
113
case 3:
114
/* SDIV, UDIV */
115
- if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
116
+ if (!dc_isar_feature(arm_div, s)) {
117
goto illegal_op;
118
}
119
if (((insn >> 5) & 7) || (rd != 15)) {
120
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
121
tmp2 = load_reg(s, rm);
122
if ((op & 0x50) == 0x10) {
123
/* sdiv, udiv */
124
- if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
125
+ if (!dc_isar_feature(thumb_div, s)) {
126
goto illegal_op;
127
}
128
if (op & 0x20)
38
--
129
--
39
2.16.2
130
2.19.1
40
131
41
132
diff view generated by jsdifflib
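The divide conversion above works because the architecture encodes both divide
variants in a single field: ID_ISAR0.Divide (bits [27:24]) is 0 for no divide,
1 for SDIV/UDIV in the Thumb encoding only, and 2 when the A32 encoding is
present as well, so "arm implies thumb" falls out of the comparison rather than
needing two feature bits. A standalone sketch of the two tests follows; the
register value is made up for illustration.

/* Standalone illustration of the ID_ISAR0.Divide encoding that the
 * isar_feature_thumb_div()/isar_feature_arm_div() helpers above rely on.
 * Not QEMU code; the register value below is invented.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (0xffffffffu >> (32 - length));
}

static bool thumb_div(uint32_t id_isar0)
{
    return extract32(id_isar0, 24, 4) != 0;   /* 1 or 2: T32 SDIV/UDIV present */
}

static bool arm_div(uint32_t id_isar0)
{
    return extract32(id_isar0, 24, 4) > 1;    /* 2: the A32 encoding as well */
}

int main(void)
{
    uint32_t id_isar0 = 0x02101111;           /* invented value with Divide == 2 */

    printf("thumb div: %d  arm div: %d\n", thumb_div(id_isar0), arm_div(id_isar0));
    return 0;
}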
1
The Cortex-M33 allows the system to specify the reset value of the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
secure Vector Table Offset Register (VTOR) by asserting config
3
signals. In particular, guest images for the MPS2 AN505 board rely
4
on the MPS2's initial VTOR being correct for that board.
5
Implement a QEMU property so board and SoC code can set the reset
6
value to the correct value.
7
2
3
Having V6 alone imply jazelle was wrong for cortex-m0.
4
Change to an assertion for V6 & !M.
5
6
This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
7
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
8
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-7-peter.maydell@linaro.org
11
---
14
---
12
target/arm/cpu.h | 3 +++
15
target/arm/cpu.h | 6 +++++-
13
target/arm/cpu.c | 18 ++++++++++++++----
16
target/arm/cpu.c | 17 ++++++++++++++---
14
2 files changed, 17 insertions(+), 4 deletions(-)
17
target/arm/translate.c | 2 +-
18
3 files changed, 20 insertions(+), 5 deletions(-)
15
19
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
22
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
21
*/
25
ARM_FEATURE_PMU, /* has PMU support */
22
uint32_t psci_conduit;
26
ARM_FEATURE_VBAR, /* has cp15 VBAR */
23
27
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
24
+ /* For v8M, initial value of the Secure VTOR */
28
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
25
+ uint32_t init_svtor;
29
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
30
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
31
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
34
}
35
36
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
37
+{
38
+ return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
39
+}
26
+
40
+
27
/* [QEMU_]KVM_ARM_TARGET_* constant for this CPU, or
41
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
28
* QEMU_KVM_ARM_TARGET_NONE if the kernel doesn't support this CPU type.
42
{
29
*/
43
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
44
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
46
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
47
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
48
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
35
uint32_t initial_msp; /* Loaded from 0x0 */
49
}
36
uint32_t initial_pc; /* Loaded from 0x4 */
50
if (arm_feature(env, ARM_FEATURE_V6)) {
37
uint8_t *rom;
51
set_feature(env, ARM_FEATURE_V5);
38
+ uint32_t vecbase;
52
- set_feature(env, ARM_FEATURE_JAZELLE);
39
53
if (!arm_feature(env, ARM_FEATURE_M)) {
40
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
54
+ assert(cpu_isar_feature(jazelle, cpu));
41
env->v7m.secure = true;
55
set_feature(env, ARM_FEATURE_AUXCR);
42
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
56
}
43
/* Unlike A/R profile, M profile defines the reset LR value */
57
}
44
env->regs[14] = 0xffffffff;
58
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
45
59
set_feature(&cpu->env, ARM_FEATURE_VFP);
46
- /* Load the initial SP and PC from the vector table at address 0 */
60
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
47
- rom = rom_ptr(0);
61
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
48
+ env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
62
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
63
cpu->midr = 0x41069265;
64
cpu->reset_fpsid = 0x41011090;
65
cpu->ctr = 0x1dd20d2;
66
cpu->reset_sctlr = 0x00090078;
49
+
67
+
50
+ /* Load the initial SP and PC from offset 0 and 4 in the vector table */
68
+ /*
51
+ vecbase = env->v7m.vecbase[env->v7m.secure];
69
+ * ARMv5 does not have the ID_ISAR registers, but we can still
52
+ rom = rom_ptr(vecbase);
70
+ * set the field to indicate Jazelle support within QEMU.
53
if (rom) {
71
+ */
54
/* Address zero is covered by ROM which hasn't yet been
72
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
55
* copied into physical memory.
73
}
56
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
74
57
* it got copied into memory. In the latter case, rom_ptr
75
static void arm946_initfn(Object *obj)
58
* will return a NULL pointer and we should use ldl_phys instead.
76
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
59
*/
77
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
60
- initial_msp = ldl_phys(s->as, 0);
78
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
61
- initial_pc = ldl_phys(s->as, 4);
79
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
62
+ initial_msp = ldl_phys(s->as, vecbase);
80
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
63
+ initial_pc = ldl_phys(s->as, vecbase + 4);
81
cpu->midr = 0x4106a262;
64
}
82
cpu->reset_fpsid = 0x410110a0;
65
83
cpu->ctr = 0x1dd20d2;
66
env->regs[13] = initial_msp & 0xFFFFFFFC;
84
cpu->reset_sctlr = 0x00090078;
67
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_pmsav7_dregion_property =
85
cpu->reset_auxcr = 1;
68
pmsav7_dregion,
69
qdev_prop_uint32, uint32_t);
70
71
+/* M profile: initial value of the Secure VTOR */
72
+static Property arm_cpu_initsvtor_property =
73
+ DEFINE_PROP_UINT32("init-svtor", ARMCPU, init_svtor, 0);
74
+
86
+
75
static void arm_cpu_post_init(Object *obj)
87
+ /*
76
{
88
+ * ARMv5 does not have the ID_ISAR registers, but we can still
77
ARMCPU *cpu = ARM_CPU(obj);
89
+ * set the field to indicate Jazelle support within QEMU.
78
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
90
+ */
79
qdev_prop_allow_set_link_before_realize,
91
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
80
OBJ_PROP_LINK_UNREF_ON_RELEASE,
92
+
81
&error_abort);
93
{
82
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_initsvtor_property,
94
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
83
+ &error_abort);
95
ARMCPRegInfo ifar = {
84
}
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
85
97
index XXXXXXX..XXXXXXX 100644
86
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property,
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@
101
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
102
/* currently all emulated v5 cores are also v5TE, so don't bother */
103
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
104
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
105
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
106
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
107
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
108
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
87
--
109
--
88
2.16.2
110
2.19.1
89
111
90
112
diff view generated by jsdifflib
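For the init-svtor patch shown above, the behavioural change is that M-profile
reset now reads the initial MSP from offset 0 and the reset PC from offset 4 of
whatever vector table the reset Secure VTOR points at, instead of hardwiring
address 0. The toy below models just that lookup; the vector table contents and
the addresses are invented, and real hardware (and QEMU) of course read guest
memory rather than a local array.

/* Toy model of the reset sequence changed by the init-svtor patch: read the
 * initial MSP from init_svtor + 0 and the reset PC from init_svtor + 4,
 * aligning the VTOR value down to 0x80 as the patch does.  All values here
 * are invented.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for guest memory: a vector table a board placed at 0x10000000. */
static uint32_t fake_ldl_phys(uint32_t addr)
{
    static const uint32_t vector_table[2] = {
        0x20008000,     /* word 0: initial main stack pointer */
        0x100003c1,     /* word 1: reset vector, low bit is the Thumb bit */
    };
    return vector_table[(addr - 0x10000000) / 4];
}

int main(void)
{
    uint32_t init_svtor = 0x10000000;            /* the board's init-svtor value */
    uint32_t vecbase = init_svtor & 0xffffff80;  /* VTOR bits [6:0] are reserved */
    uint32_t msp = fake_ldl_phys(vecbase);
    uint32_t pc = fake_ldl_phys(vecbase + 4);

    printf("reset MSP=0x%08" PRIx32 " PC=0x%08" PRIx32 "\n",
           msp & 0xfffffffcu, pc & ~1u);
    return 0;
}

A board or SoC model would then just set the property before realizing the CPU,
with something like qdev_prop_set_uint32(DEVICE(cpu), "init-svtor", 0x10000000);
the address here is again invented.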
1
In v8M, the Implementation Defined Attribution Unit (IDAU) is
1
From: Richard Henderson <richard.henderson@linaro.org>
2
a small piece of hardware typically implemented in the SoC
3
which provides board or SoC specific security attribution
4
information for each address that the CPU performs MPU/SAU
5
checks on. For QEMU, we model this with a QOM interface which
6
is implemented by the board or SoC object and connected to
7
the CPU using a link property.
8
2
9
This commit defines the new interface class, adds the link
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
property to the CPU object, and makes the SAU checking
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
code call the IDAU interface if one is present.
5
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
12
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20180220180325.29818-5-peter.maydell@linaro.org
16
---
8
---
17
target/arm/cpu.h | 3 +++
9
target/arm/cpu.h | 6 +++++-
18
target/arm/idau.h | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++
10
linux-user/elfload.c | 2 +-
19
target/arm/cpu.c | 15 +++++++++++++
11
target/arm/cpu.c | 4 ----
20
target/arm/helper.c | 28 +++++++++++++++++++++---
12
target/arm/helper.c | 2 +-
21
4 files changed, 104 insertions(+), 3 deletions(-)
13
target/arm/machine.c | 3 +--
22
create mode 100644 target/arm/idau.h
14
5 files changed, 8 insertions(+), 9 deletions(-)
23
15
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
18
--- a/target/arm/cpu.h
27
+++ b/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
20
@@ -XXX,XX +XXX,XX @@ enum arm_features {
29
/* MemoryRegion to use for secure physical accesses */
21
ARM_FEATURE_NEON,
30
MemoryRegion *secure_memory;
22
ARM_FEATURE_M, /* Microcontroller profile. */
31
23
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
32
+ /* For v8M, pointer to the IDAU interface provided by board/SoC */
24
- ARM_FEATURE_THUMB2EE,
33
+ Object *idau;
25
ARM_FEATURE_V7MP, /* v7 Multiprocessing Extensions */
26
ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
27
ARM_FEATURE_V4T,
28
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
29
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
30
}
31
32
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
33
+{
34
+ return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
35
+}
34
+
36
+
35
/* 'compatible' string for this CPU for Linux device trees */
37
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
36
const char *dtb_compatible;
38
{
37
39
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
38
diff --git a/target/arm/idau.h b/target/arm/idau.h
40
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
39
new file mode 100644
41
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX
42
--- a/linux-user/elfload.c
41
--- /dev/null
43
+++ b/linux-user/elfload.c
42
+++ b/target/arm/idau.h
44
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
43
@@ -XXX,XX +XXX,XX @@
45
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
44
+/*
46
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
45
+ * QEMU ARM CPU -- interface for the Arm v8M IDAU
47
GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
46
+ *
48
- GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
47
+ * Copyright (c) 2018 Linaro Ltd
49
+ GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
48
+ *
50
GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
49
+ * This program is free software; you can redistribute it and/or
51
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
50
+ * modify it under the terms of the GNU General Public License
52
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
51
+ * as published by the Free Software Foundation; either version 2
52
+ * of the License, or (at your option) any later version.
53
+ *
54
+ * This program is distributed in the hope that it will be useful,
55
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
56
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
57
+ * GNU General Public License for more details.
58
+ *
59
+ * You should have received a copy of the GNU General Public License
60
+ * along with this program; if not, see
61
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
62
+ *
63
+ * In the v8M architecture, the IDAU is a small piece of hardware
64
+ * typically implemented in the SoC which provides board or SoC
65
+ * specific security attribution information for each address that
66
+ * the CPU performs MPU/SAU checks on. For QEMU, we model this with a
67
+ * QOM interface which is implemented by the board or SoC object and
68
+ * connected to the CPU using a link property.
69
+ */
70
+
71
+#ifndef TARGET_ARM_IDAU_H
72
+#define TARGET_ARM_IDAU_H
73
+
74
+#include "qom/object.h"
75
+
76
+#define TYPE_IDAU_INTERFACE "idau-interface"
77
+#define IDAU_INTERFACE(obj) \
78
+ INTERFACE_CHECK(IDAUInterface, (obj), TYPE_IDAU_INTERFACE)
79
+#define IDAU_INTERFACE_CLASS(class) \
80
+ OBJECT_CLASS_CHECK(IDAUInterfaceClass, (class), TYPE_IDAU_INTERFACE)
81
+#define IDAU_INTERFACE_GET_CLASS(obj) \
82
+ OBJECT_GET_CLASS(IDAUInterfaceClass, (obj), TYPE_IDAU_INTERFACE)
83
+
84
+typedef struct IDAUInterface {
85
+ Object parent;
86
+} IDAUInterface;
87
+
88
+#define IREGION_NOTVALID -1
89
+
90
+typedef struct IDAUInterfaceClass {
91
+ InterfaceClass parent;
92
+
93
+ /* Check the specified address and return the IDAU security information
94
+ * for it by filling in iregion, exempt, ns and nsc:
95
+ * iregion: IDAU region number, or IREGION_NOTVALID if not valid
96
+ * exempt: true if address is exempt from security attribution
97
+ * ns: true if the address is NonSecure
98
+ * nsc: true if the address is NonSecure-callable
99
+ */
100
+ void (*check)(IDAUInterface *ii, uint32_t address, int *iregion,
101
+ bool *exempt, bool *ns, bool *nsc);
102
+} IDAUInterfaceClass;
103
+
104
+#endif
105
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
53
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
106
index XXXXXXX..XXXXXXX 100644
54
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/cpu.c
55
--- a/target/arm/cpu.c
108
+++ b/target/arm/cpu.c
56
+++ b/target/arm/cpu.c
109
@@ -XXX,XX +XXX,XX @@
57
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
110
*/
58
set_feature(&cpu->env, ARM_FEATURE_V7);
111
59
set_feature(&cpu->env, ARM_FEATURE_VFP3);
112
#include "qemu/osdep.h"
60
set_feature(&cpu->env, ARM_FEATURE_NEON);
113
+#include "target/arm/idau.h"
61
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
114
#include "qemu/error-report.h"
62
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
115
#include "qapi/error.h"
63
set_feature(&cpu->env, ARM_FEATURE_EL3);
116
#include "cpu.h"
64
cpu->midr = 0x410fc080;
117
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
65
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
118
}
66
set_feature(&cpu->env, ARM_FEATURE_VFP3);
119
}
67
set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
120
68
set_feature(&cpu->env, ARM_FEATURE_NEON);
121
+ if (arm_feature(&cpu->env, ARM_FEATURE_M_SECURITY)) {
69
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
122
+ object_property_add_link(obj, "idau", TYPE_IDAU_INTERFACE, &cpu->idau,
70
set_feature(&cpu->env, ARM_FEATURE_EL3);
123
+ qdev_prop_allow_set_link_before_realize,
71
/* Note that A9 supports the MP extensions even for
124
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
72
* A9UP and single-core A9MP (which are both different
125
+ &error_abort);
73
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
126
+ }
74
set_feature(&cpu->env, ARM_FEATURE_V7VE);
127
+
75
set_feature(&cpu->env, ARM_FEATURE_VFP4);
128
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property,
76
set_feature(&cpu->env, ARM_FEATURE_NEON);
129
&error_abort);
77
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
130
}
78
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
131
@@ -XXX,XX +XXX,XX @@ static const TypeInfo arm_cpu_type_info = {
79
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
132
.class_init = arm_cpu_class_init,
80
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
133
};
81
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
134
82
set_feature(&cpu->env, ARM_FEATURE_V7VE);
135
+static const TypeInfo idau_interface_type_info = {
83
set_feature(&cpu->env, ARM_FEATURE_VFP4);
136
+ .name = TYPE_IDAU_INTERFACE,
84
set_feature(&cpu->env, ARM_FEATURE_NEON);
137
+ .parent = TYPE_INTERFACE,
85
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
138
+ .class_size = sizeof(IDAUInterfaceClass),
86
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
139
+};
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
140
+
88
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
141
static void arm_cpu_register_types(void)
142
{
143
const ARMCPUInfo *info = arm_cpus;
144
145
type_register_static(&arm_cpu_type_info);
146
+ type_register_static(&idau_interface_type_info);
147
148
while (info->name) {
149
cpu_register(info);
150
diff --git a/target/arm/helper.c b/target/arm/helper.c
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
151
index XXXXXXX..XXXXXXX 100644
90
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/helper.c
91
--- a/target/arm/helper.c
153
+++ b/target/arm/helper.c
92
+++ b/target/arm/helper.c
154
@@ -XXX,XX +XXX,XX @@
93
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
155
#include "qemu/osdep.h"
94
define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
156
+#include "target/arm/idau.h"
95
define_arm_cp_regs(cpu, vmsa_cp_reginfo);
157
#include "trace.h"
158
#include "cpu.h"
159
#include "internals.h"
160
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
161
*/
162
ARMCPU *cpu = arm_env_get_cpu(env);
163
int r;
164
+ bool idau_exempt = false, idau_ns = true, idau_nsc = true;
165
+ int idau_region = IREGION_NOTVALID;
166
167
- /* TODO: implement IDAU */
168
+ if (cpu->idau) {
169
+ IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
170
+ IDAUInterface *ii = IDAU_INTERFACE(cpu->idau);
171
+
172
+ iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns,
173
+ &idau_nsc);
174
+ }
175
176
if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
177
/* 0xf0000000..0xffffffff is always S for insn fetches */
178
return;
179
}
96
}
180
97
- if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
181
- if (v8m_is_sau_exempt(env, address, access_type)) {
98
+ if (cpu_isar_feature(t32ee, cpu)) {
182
+ if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) {
99
define_arm_cp_regs(cpu, t2ee_cp_reginfo);
183
sattrs->ns = !regime_is_secure(env, mmu_idx);
184
return;
185
}
100
}
186
101
if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
187
+ if (idau_region != IREGION_NOTVALID) {
102
diff --git a/target/arm/machine.c b/target/arm/machine.c
188
+ sattrs->irvalid = true;
103
index XXXXXXX..XXXXXXX 100644
189
+ sattrs->iregion = idau_region;
104
--- a/target/arm/machine.c
190
+ }
105
+++ b/target/arm/machine.c
191
+
106
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
192
switch (env->sau.ctrl & 3) {
107
static bool thumb2ee_needed(void *opaque)
193
case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
108
{
194
break;
109
ARMCPU *cpu = opaque;
195
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
110
- CPUARMState *env = &cpu->env;
196
}
111
197
}
112
- return arm_feature(env, ARM_FEATURE_THUMB2EE);
198
113
+ return cpu_isar_feature(t32ee, cpu);
199
- /* TODO when we support the IDAU then it may override the result here */
200
+ /* The IDAU will override the SAU lookup results if it specifies
201
+ * higher security than the SAU does.
202
+ */
203
+ if (!idau_ns) {
204
+ if (sattrs->ns || (!idau_nsc && sattrs->nsc)) {
205
+ sattrs->ns = false;
206
+ sattrs->nsc = idau_nsc;
207
+ }
208
+ }
209
break;
210
}
211
}
114
}
115
116
static const VMStateDescription vmstate_thumb2ee = {
212
--
117
--
213
2.16.2
118
2.19.1
214
119
215
120
diff view generated by jsdifflib
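The interesting rule in the IDAU patch above is how the board-supplied
attribution combines with the SAU: the IDAU may only make an address more
secure than the SAU said, never less. The toy below reproduces that combination
step from the v8m_security_lookup() hunk above; the struct and the test values
are invented for illustration.

/* Toy version of the SAU/IDAU combination in the patch above: the IDAU
 * result can only tighten the SAU's answer.  Names and values are invented.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct Attrs {
    bool ns;     /* NonSecure */
    bool nsc;    /* NonSecure-callable */
} Attrs;

static Attrs combine(Attrs sau, bool idau_ns, bool idau_nsc)
{
    Attrs r = sau;

    if (!idau_ns) {
        /* IDAU says Secure: downgrade an NS result, or an NSC result if the
         * IDAU also clears NSC, as in the v8m_security_lookup() hunk above.
         */
        if (r.ns || (!idau_nsc && r.nsc)) {
            r.ns = false;
            r.nsc = idau_nsc;
        }
    }
    return r;
}

int main(void)
{
    Attrs sau = { .ns = true, .nsc = false };   /* SAU said: NonSecure */
    Attrs res = combine(sau, false, false);     /* IDAU said: Secure */

    printf("ns=%d nsc=%d\n", res.ns, res.nsc);  /* ends up Secure */
    return 0;
}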
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/cpu.h | 16 +++++++++++++++-
10
linux-user/aarch64/signal.c | 4 ++--
11
linux-user/elfload.c | 2 +-
12
linux-user/syscall.c | 10 ++++++----
13
target/arm/cpu64.c | 5 ++++-
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
26
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
36
+
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
38
39
/* If adding a feature bit which corresponds to a Linux ELF
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
41
ARM_FEATURE_PMU, /* has PMU support */
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
47
};
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
50
}
51
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
53
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
55
+}
56
+
57
/*
58
* Forward to the above feature tests given an ARMCPU pointer.
59
*/
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/linux-user/aarch64/signal.c
63
+++ b/linux-user/aarch64/signal.c
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
65
break;
66
67
case TARGET_SVE_MAGIC:
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
72
if (!sve && size == sve_size) {
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
74
&layout);
75
76
/* SVE state needs saving only if it exists. */
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/linux-user/elfload.c
85
+++ b/linux-user/elfload.c
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
92
93
#undef GET_FEATURE
94
#undef GET_FEATURE_ID
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/linux-user/syscall.c
98
+++ b/linux-user/syscall.c
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
100
* even though the current architectural maximum is VQ=16.
101
*/
102
ret = -TARGET_EINVAL;
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
106
CPUARMState *env = cpu_env;
107
ARMCPU *cpu = arm_env_get_cpu(env);
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
109
return ret;
110
case TARGET_PR_SVE_GET_VL:
111
ret = -TARGET_EINVAL;
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
113
- CPUARMState *env = cpu_env;
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
115
+ {
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
119
+ }
120
}
121
return ret;
122
#endif /* AARCH64 */
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/cpu64.c
126
+++ b/target/arm/cpu64.c
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
129
cpu->isar.id_aa64isar1 = t;
130
131
+ t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_one_arm_cp_reg(cpu, &sctlr);
152
}
153
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
160
uint32_t flags;
161
162
if (is_a64(env)) {
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
164
+
165
*pc = env->pc;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
167
/* Get control bits for tagged addresses */
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
170
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
173
int sve_el = sve_exception_el(env, current_el);
174
uint32_t zcr_len;
175
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
178
int new_el, bool el0_a64)
179
{
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
181
int old_len, new_len;
182
bool old_a64, new_a64;
183
184
/* Nothing to do if no SVE. */
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
187
return;
188
}
189
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
191
index XXXXXXX..XXXXXXX 100644
192
--- a/target/arm/machine.c
193
+++ b/target/arm/machine.c
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
195
static bool sve_needed(void *opaque)
196
{
197
ARMCPU *cpu = opaque;
198
- CPUARMState *env = &cpu->env;
199
200
- return arm_feature(env, ARM_FEATURE_SVE);
201
+ return cpu_isar_feature(aa64_sve, cpu);
202
}
203
204
/* The first two words of each Zreg is stored in VFP state. */
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
206
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate-a64.c
208
+++ b/target/arm/translate-a64.c
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
212
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
216
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
219
unallocated_encoding(s);
220
break;
221
case 0x2:
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
224
unallocated_encoding(s);
225
}
226
break;
227
--
228
2.19.1
229
230
diff view generated by jsdifflib
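One detail worth calling out in the SVE patch above is the PR_SVE_GET_VL hunk:
the vector length reported to the guest is derived from ZCR_EL1.LEN as
(LEN + 1) * 128 bits, i.e. (LEN + 1) * 16 bytes. In reality the requested LEN
is also capped by what the CPU actually implements; the sketch below ignores
that, and the ZCR value is invented.

/* Standalone illustration of the ZCR_EL1.LEN -> SVE vector length mapping
 * used by the PR_SVE_GET_VL hunk above.  The ZCR value is invented and the
 * implementation-defined cap is ignored.
 */
#include <stdint.h>
#include <stdio.h>

static unsigned sve_vl_bytes(uint64_t zcr_el1)
{
    return ((zcr_el1 & 0xf) + 1) * 16;
}

int main(void)
{
    uint64_t zcr_el1 = 3;   /* invented: LEN == 3 */

    printf("SVE VL = %u bytes (%u bits)\n",
           sve_vl_bytes(zcr_el1), sve_vl_bytes(zcr_el1) * 8);
    return 0;
}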
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Not enabled anywhere yet.
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180228193125.20577-2-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
8
---
11
target/arm/cpu.h | 1 +
9
target/arm/cpu.h | 17 +++++++++++++++-
12
linux-user/elfload.c | 1 +
10
linux-user/elfload.c | 6 +-----
13
2 files changed, 2 insertions(+)
11
target/arm/cpu64.c | 16 ++++++++-------
12
target/arm/helper.c | 2 +-
13
target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
14
target/arm/translate.c | 6 +++---
15
6 files changed, 50 insertions(+), 37 deletions(-)
14
16
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ enum arm_features {
21
@@ -XXX,XX +XXX,XX @@ enum arm_features {
20
ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
22
ARM_FEATURE_PMU, /* has PMU support */
21
ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
23
ARM_FEATURE_VBAR, /* has cp15 VBAR */
22
ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
24
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
23
+ ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
25
- ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
24
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
26
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
25
};
27
};
26
28
29
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
30
return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
31
}
32
33
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
34
+{
35
+ /*
36
+ * This is a placeholder for use by VCMA until the rest of
37
+ * the ARMv8.2-FP16 extension is implemented for aa32 mode.
38
+ * At which point we can properly set and check MVFR1.FPHP.
39
+ */
40
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
41
+}
42
+
43
/*
44
* 64-bit feature tests via id registers.
45
*/
46
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
47
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
48
}
49
50
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
51
+{
52
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
53
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
54
+}
55
+
56
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
57
{
58
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
27
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
59
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
28
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
29
--- a/linux-user/elfload.c
61
--- a/linux-user/elfload.c
30
+++ b/linux-user/elfload.c
62
+++ b/linux-user/elfload.c
31
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
63
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
32
GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
64
hwcaps |= ARM_HWCAP_A64_ASIMD;
33
GET_FEATURE(ARM_FEATURE_V8_FP16,
65
34
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
66
/* probe for the extra features */
35
+ GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
67
-#define GET_FEATURE(feat, hwcap) \
36
#undef GET_FEATURE
68
- do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
69
#define GET_FEATURE_ID(feat, hwcap) \
70
do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
71
72
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
73
GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
74
GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
75
GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
76
- GET_FEATURE(ARM_FEATURE_V8_FP16,
77
- ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
78
+ GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
79
GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
80
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
81
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
82
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
83
GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
84
85
-#undef GET_FEATURE
86
#undef GET_FEATURE_ID
37
87
38
return hwcaps;
88
return hwcaps;
89
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/cpu64.c
92
+++ b/target/arm/cpu64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
94
95
t = cpu->isar.id_aa64pfr0;
96
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
97
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
98
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
99
cpu->isar.id_aa64pfr0 = t;
100
101
/* Replicate the same data to the 32-bit id registers. */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
103
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
104
cpu->isar.id_isar6 = u;
105
106
-#ifdef CONFIG_USER_ONLY
107
- /* We don't set these in system emulation mode for the moment,
108
- * since we don't correctly set the ID registers to advertise them,
109
- * and in some cases they're only available in AArch64 and not AArch32,
110
- * whereas the architecture requires them to be present in both if
111
- * present in either.
112
+ /*
113
+ * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
114
+ * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
115
+ * but it is also not legal to enable SVE without support for FP16,
116
+ * and enabling SVE in system mode is more useful in the short term.
117
*/
118
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
119
+
120
+#ifdef CONFIG_USER_ONLY
121
/* For usermode -cpu max we can use a larger and more efficient DCZ
122
* blocksize since we don't have to follow what the hardware does.
123
*/
124
diff --git a/target/arm/helper.c b/target/arm/helper.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/helper.c
127
+++ b/target/arm/helper.c
128
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
129
uint32_t changed;
130
131
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
132
- if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
133
+ if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
134
val &= ~FPCR_FZ16;
135
}
136
137
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
138
index XXXXXXX..XXXXXXX 100644
139
--- a/target/arm/translate-a64.c
140
+++ b/target/arm/translate-a64.c
141
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
142
break;
143
case 3:
144
size = MO_16;
145
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
146
+ if (dc_isar_feature(aa64_fp16, s)) {
147
break;
148
}
149
/* fallthru */
150
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
151
break;
152
case 3:
153
size = MO_16;
154
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
155
+ if (dc_isar_feature(aa64_fp16, s)) {
156
break;
157
}
158
/* fallthru */
159
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
160
break;
161
case 3:
162
sz = MO_16;
163
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
164
+ if (dc_isar_feature(aa64_fp16, s)) {
165
break;
166
}
167
/* fallthru */
168
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
169
handle_fp_1src_double(s, opcode, rd, rn);
170
break;
171
case 3:
172
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
173
+ if (!dc_isar_feature(aa64_fp16, s)) {
174
unallocated_encoding(s);
175
return;
176
}
177
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
178
handle_fp_2src_double(s, opcode, rd, rn, rm);
179
break;
180
case 3:
181
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
182
+ if (!dc_isar_feature(aa64_fp16, s)) {
183
unallocated_encoding(s);
184
return;
185
}
186
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
187
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
188
break;
189
case 3:
190
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
191
+ if (!dc_isar_feature(aa64_fp16, s)) {
192
unallocated_encoding(s);
193
return;
194
}
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
196
break;
197
case 3:
198
sz = MO_16;
199
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
200
+ if (dc_isar_feature(aa64_fp16, s)) {
201
break;
202
}
203
/* fallthru */
204
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
205
case 1: /* float64 */
206
break;
207
case 3: /* float16 */
208
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
209
+ if (dc_isar_feature(aa64_fp16, s)) {
210
break;
211
}
212
/* fallthru */
213
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
214
break;
215
case 0x6: /* 16-bit float, 32-bit int */
216
case 0xe: /* 16-bit float, 64-bit int */
217
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
218
+ if (dc_isar_feature(aa64_fp16, s)) {
219
break;
220
}
221
/* fallthru */
222
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
223
case 1: /* float64 */
224
break;
225
case 3: /* float16 */
226
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
227
+ if (dc_isar_feature(aa64_fp16, s)) {
228
break;
229
}
230
/* fallthru */
231
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
232
*/
233
is_min = extract32(size, 1, 1);
234
is_fp = true;
235
- if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
236
+ if (!is_u && dc_isar_feature(aa64_fp16, s)) {
237
size = 1;
238
} else if (!is_u || !is_q || extract32(size, 0, 1)) {
239
unallocated_encoding(s);
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
241
242
if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
243
/* Check for FMOV (vector, immediate) - half-precision */
244
- if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
245
+ if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
246
unallocated_encoding(s);
247
return;
248
}
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
250
case 0x2f: /* FMINP */
251
/* FP op, size[0] is 32 or 64 bit*/
252
if (!u) {
253
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
254
+ if (!dc_isar_feature(aa64_fp16, s)) {
255
unallocated_encoding(s);
256
return;
257
} else {
258
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
259
size = MO_32;
260
} else if (immh & 2) {
261
size = MO_16;
262
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
263
+ if (!dc_isar_feature(aa64_fp16, s)) {
264
unallocated_encoding(s);
265
return;
266
}
267
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
268
size = MO_32;
269
} else if (immh & 0x2) {
270
size = MO_16;
271
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
272
+ if (!dc_isar_feature(aa64_fp16, s)) {
273
unallocated_encoding(s);
274
return;
275
}
276
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
277
return;
278
}
279
280
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
281
+ if (!dc_isar_feature(aa64_fp16, s)) {
282
unallocated_encoding(s);
283
}
284
285
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
286
TCGv_ptr fpst;
287
bool pairwise = false;
288
289
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
290
+ if (!dc_isar_feature(aa64_fp16, s)) {
291
unallocated_encoding(s);
292
return;
293
}
294
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
295
case 0x1c: /* FCADD, #90 */
296
case 0x1e: /* FCADD, #270 */
297
if (size == 0
298
- || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
299
+ || (size == 1 && !dc_isar_feature(aa64_fp16, s))
300
|| (size == 3 && !is_q)) {
301
unallocated_encoding(s);
302
return;
303
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
304
bool need_fpst = true;
305
int rmode;
306
307
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
308
+ if (!dc_isar_feature(aa64_fp16, s)) {
309
unallocated_encoding(s);
310
return;
311
}
312
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
313
}
314
break;
315
}
316
- if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
317
+ if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
318
unallocated_encoding(s);
319
return;
320
}
321
diff --git a/target/arm/translate.c b/target/arm/translate.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/translate.c
324
+++ b/target/arm/translate.c
325
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
326
int size = extract32(insn, 20, 1);
327
data = extract32(insn, 23, 2); /* rot */
328
if (!dc_isar_feature(aa32_vcma, s)
329
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
330
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
331
return 1;
332
}
333
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
334
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
335
int size = extract32(insn, 20, 1);
336
data = extract32(insn, 24, 1); /* rot */
337
if (!dc_isar_feature(aa32_vcma, s)
338
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
340
return 1;
341
}
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
344
return 1;
345
}
346
if (size == 0) {
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
349
return 1;
350
}
351
/* For fp16, rm is just Vm, and index is M. */
39
--
352
--
40
2.16.2
353
2.19.1
41
354
42
355
diff view generated by jsdifflib
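The reason the new aa64_fp16 and aa32_fp16_arith tests above compare the
ID_AA64PFR0.FP field against 1 rather than just against 0 is the field's
encoding: 0 means FP is implemented without half-precision, 1 means FP plus the
v8.2 half-precision extension, and 0xf (-1 read as a signed field) means no FP
at all. A small standalone illustration, with invented register values:

/* Standalone illustration of the ID_AA64PFR0.FP encoding behind the "== 1"
 * tests above: 0x0 = FP without FP16, 0x1 = FP with FP16, 0xf = no FP.
 * The register values are invented.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static unsigned pfr0_fp(uint64_t id_aa64pfr0)
{
    return (id_aa64pfr0 >> 16) & 0xf;           /* FP is bits [19:16] */
}

static bool has_fp(uint64_t pfr0)   { return pfr0_fp(pfr0) != 0xf; }
static bool has_fp16(uint64_t pfr0) { return pfr0_fp(pfr0) == 1; }

int main(void)
{
    uint64_t plain_fp = (uint64_t)0x0 << 16;    /* invented: FP, no FP16 */
    uint64_t v82_fp16 = (uint64_t)0x1 << 16;    /* invented: FP + FP16 */
    uint64_t no_fp    = (uint64_t)0xf << 16;    /* invented: no FP at all */

    printf("plain: fp=%d fp16=%d\n", has_fp(plain_fp), has_fp16(plain_fp));
    printf("v8.2:  fp=%d fp16=%d\n", has_fp(v82_fp16), has_fp16(v82_fp16));
    printf("none:  fp=%d fp16=%d\n", has_fp(no_fp), has_fp16(no_fp));
    return 0;
}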
1
Add a Cortex-M33 definition. The M33 is an M profile CPU
1
For AArch32, exception return happens through certain kinds
2
which implements the ARM v8M architecture, including the
2
of CPSR write. We don't currently have any CPU_LOG_INT logging
3
M profile Security Extension.
3
of these events (unlike AArch64, where we log in the ERET
4
instruction). Add some suitable logging.
5
6
This will log exception returns like this:
7
Exception return from AArch32 hyp to usr PC 0x80100374
8
9
paralleling the existing logging in the exception_return
10
helper for AArch64 exception returns:
11
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
12
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c
13
14
(Note that an AArch32 exception return can only be
15
AArch32->AArch32, never to AArch64.)
4
16
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180220180325.29818-9-peter.maydell@linaro.org
19
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
8
---
20
---
9
target/arm/cpu.c | 31 +++++++++++++++++++++++++++++++
21
target/arm/internals.h | 18 ++++++++++++++++++
10
1 file changed, 31 insertions(+)
22
target/arm/helper.c | 10 ++++++++++
23
target/arm/translate.c | 7 +------
24
3 files changed, 29 insertions(+), 6 deletions(-)
11
25
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
26
diff --git a/target/arm/internals.h b/target/arm/internals.h
13
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
28
--- a/target/arm/internals.h
15
+++ b/target/arm/cpu.c
29
+++ b/target/arm/internals.h
16
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
30
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
17
cpu->id_isar5 = 0x00000000;
31
}
18
}
32
}
19
33
20
+static void cortex_m33_initfn(Object *obj)
34
+/**
35
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
36
+ * @psr: Program Status Register indicating CPU mode
37
+ *
38
+ * Returns, for debug logging purposes, a printable representation
39
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
40
+ * the low bits of the specified PSR.
41
+ */
42
+static inline const char *aarch32_mode_name(uint32_t psr)
21
+{
43
+{
22
+ ARMCPU *cpu = ARM_CPU(obj);
44
+ static const char cpu_mode_names[16][4] = {
45
+ "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
46
+ "???", "???", "hyp", "und", "???", "???", "???", "sys"
47
+ };
23
+
48
+
24
+ set_feature(&cpu->env, ARM_FEATURE_V8);
49
+ return cpu_mode_names[psr & 0xf];
25
+ set_feature(&cpu->env, ARM_FEATURE_M);
26
+ set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
27
+ set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
28
+ cpu->midr = 0x410fd213; /* r0p3 */
29
+ cpu->pmsav7_dregion = 16;
30
+ cpu->sau_sregion = 8;
31
+ cpu->id_pfr0 = 0x00000030;
32
+ cpu->id_pfr1 = 0x00000210;
33
+ cpu->id_dfr0 = 0x00200000;
34
+ cpu->id_afr0 = 0x00000000;
35
+ cpu->id_mmfr0 = 0x00101F40;
36
+ cpu->id_mmfr1 = 0x00000000;
37
+ cpu->id_mmfr2 = 0x01000000;
38
+ cpu->id_mmfr3 = 0x00000000;
39
+ cpu->id_isar0 = 0x01101110;
40
+ cpu->id_isar1 = 0x02212000;
41
+ cpu->id_isar2 = 0x20232232;
42
+ cpu->id_isar3 = 0x01111131;
43
+ cpu->id_isar4 = 0x01310132;
44
+ cpu->id_isar5 = 0x00000000;
45
+ cpu->clidr = 0x00000000;
46
+ cpu->ctr = 0x8000c000;
47
+}
50
+}
48
+
51
+
49
static void arm_v7m_class_init(ObjectClass *oc, void *data)
52
#endif
53
diff --git a/target/arm/helper.c b/target/arm/helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/helper.c
56
+++ b/target/arm/helper.c
57
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
58
mask |= CPSR_IL;
59
val |= CPSR_IL;
60
}
61
+ qemu_log_mask(LOG_GUEST_ERROR,
62
+ "Illegal AArch32 mode switch attempt from %s to %s\n",
63
+ aarch32_mode_name(env->uncached_cpsr),
64
+ aarch32_mode_name(val));
65
} else {
66
+ qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
67
+ write_type == CPSRWriteExceptionReturn ?
68
+ "Exception return from AArch32" :
69
+ "AArch32 mode switch from",
70
+ aarch32_mode_name(env->uncached_cpsr),
71
+ aarch32_mode_name(val), env->regs[15]);
72
switch_mode(env, val & CPSR_M);
73
}
74
}
75
diff --git a/target/arm/translate.c b/target/arm/translate.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/translate.c
78
+++ b/target/arm/translate.c
79
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
80
translator_loop(ops, &dc.base, cpu, tb);
81
}
82
83
-static const char *cpu_mode_names[16] = {
84
- "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
85
- "???", "???", "hyp", "und", "???", "???", "???", "sys"
86
-};
87
-
88
void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
89
int flags)
50
{
90
{
51
CPUClass *cc = CPU_CLASS(oc);
91
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
92
psr & CPSR_V ? 'V' : '-',
53
.class_init = arm_v7m_class_init },
93
psr & CPSR_T ? 'T' : 'A',
54
{ .name = "cortex-m4", .initfn = cortex_m4_initfn,
94
ns_status,
55
.class_init = arm_v7m_class_init },
95
- cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
56
+ { .name = "cortex-m33", .initfn = cortex_m33_initfn,
96
+ aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
57
+ .class_init = arm_v7m_class_init },
97
}
58
{ .name = "cortex-r5", .initfn = cortex_r5_initfn },
98
59
{ .name = "cortex-a7", .initfn = cortex_a7_initfn },
99
if (flags & CPU_DUMP_FPU) {
60
{ .name = "cortex-a8", .initfn = cortex_a8_initfn },
61
--
100
--
62
2.16.2
101
2.19.1
63
102
64
103
diff view generated by jsdifflib
1
Define a new board model for the MPS2 with an AN505 FPGA image
1
The switch_mode() function is defined in target/arm/helper.c and used
2
containing a Cortex-M33. Since the FPGA images for TrustZone
2
only used within that file, so we can make it file-local
3
cores (AN505, and the similar AN519 for Cortex-M23) have a
3
rather than global.
4
significantly different layout of devices to the non-TrustZone
5
images, we use a new source file rather than shoehorning them
6
into the existing mps2.c.
7
4
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-20-peter.maydell@linaro.org
7
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
11
---
8
---
12
hw/arm/Makefile.objs | 1 +
9
target/arm/internals.h | 1 -
13
hw/arm/mps2-tz.c | 503 +++++++++++++++++++++++++++++++++++++++++++++++++++
10
target/arm/helper.c | 6 ++++--
14
2 files changed, 504 insertions(+)
11
2 files changed, 4 insertions(+), 3 deletions(-)
15
create mode 100644 hw/arm/mps2-tz.c
16
12
17
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/Makefile.objs
15
--- a/target/arm/internals.h
20
+++ b/hw/arm/Makefile.objs
16
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX31) += fsl-imx31.o kzm.o
17
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
22
obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
18
g_assert_not_reached();
23
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
19
}
24
obj-$(CONFIG_MPS2) += mps2.o
20
25
+obj-$(CONFIG_MPS2) += mps2-tz.o
21
-void switch_mode(CPUARMState *, int);
26
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
22
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
27
obj-$(CONFIG_IOTKIT) += iotkit.o
23
void arm_translate_init(void);
28
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
24
29
new file mode 100644
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
30
index XXXXXXX..XXXXXXX
26
index XXXXXXX..XXXXXXX 100644
31
--- /dev/null
27
--- a/target/arm/helper.c
32
+++ b/hw/arm/mps2-tz.c
28
+++ b/target/arm/helper.c
33
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
34
+/*
30
V8M_SAttributes *sattrs);
35
+ * ARM V2M MPS2 board emulation, trustzone aware FPGA images
31
#endif
36
+ *
32
37
+ * Copyright (c) 2017 Linaro Limited
33
+static void switch_mode(CPUARMState *env, int mode);
38
+ * Written by Peter Maydell
39
+ *
40
+ * This program is free software; you can redistribute it and/or modify
41
+ * it under the terms of the GNU General Public License version 2 or
42
+ * (at your option) any later version.
43
+ */
44
+
34
+
45
+/* The MPS2 and MPS2+ dev boards are FPGA based (the 2+ has a bigger
35
static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
46
+ * FPGA but is otherwise the same as the 2). Since the CPU itself
36
{
47
+ * and most of the devices are in the FPGA, the details of the board
37
int nregs;
48
+ * as seen by the guest depend significantly on the FPGA image.
38
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
49
+ * This source file covers the following FPGA images, for TrustZone cores:
39
return 0;
50
+ * "mps2-an505" -- Cortex-M33 as documented in ARM Application Note AN505
40
}
51
+ *
41
52
+ * Links to the TRM for the board itself and to the various Application
42
-void switch_mode(CPUARMState *env, int mode)
53
+ * Notes which document the FPGA images can be found here:
43
+static void switch_mode(CPUARMState *env, int mode)
54
+ * https://developer.arm.com/products/system-design/development-boards/fpga-prototyping-boards/mps2
44
{
55
+ *
45
ARMCPU *cpu = arm_env_get_cpu(env);
56
+ * Board TRM:
46
57
+ * http://infocenter.arm.com/help/topic/com.arm.doc.100112_0200_06_en/versatile_express_cortex_m_prototyping_systems_v2m_mps2_and_v2m_mps2plus_technical_reference_100112_0200_06_en.pdf
47
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
58
+ * Application Note AN505:
48
59
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
49
#else
60
+ *
50
61
+ * The AN505 defers to the Cortex-M33 processor ARMv8M IoT Kit FVP User Guide
51
-void switch_mode(CPUARMState *env, int mode)
62
+ * (ARM ECM0601256) for the details of some of the device layout:
52
+static void switch_mode(CPUARMState *env, int mode)
63
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
53
{
64
+ */
54
int old_mode;
65
+
55
int i;
66
+#include "qemu/osdep.h"
67
+#include "qapi/error.h"
68
+#include "qemu/error-report.h"
69
+#include "hw/arm/arm.h"
70
+#include "hw/arm/armv7m.h"
71
+#include "hw/or-irq.h"
72
+#include "hw/boards.h"
73
+#include "exec/address-spaces.h"
74
+#include "sysemu/sysemu.h"
75
+#include "hw/misc/unimp.h"
76
+#include "hw/char/cmsdk-apb-uart.h"
77
+#include "hw/timer/cmsdk-apb-timer.h"
78
+#include "hw/misc/mps2-scc.h"
79
+#include "hw/misc/mps2-fpgaio.h"
80
+#include "hw/arm/iotkit.h"
81
+#include "hw/devices.h"
82
+#include "net/net.h"
83
+#include "hw/core/split-irq.h"
84
+
85
+typedef enum MPS2TZFPGAType {
86
+ FPGA_AN505,
87
+} MPS2TZFPGAType;
88
+
89
+typedef struct {
90
+ MachineClass parent;
91
+ MPS2TZFPGAType fpga_type;
92
+ uint32_t scc_id;
93
+} MPS2TZMachineClass;
94
+
95
+typedef struct {
96
+ MachineState parent;
97
+
98
+ IoTKit iotkit;
99
+ MemoryRegion psram;
100
+ MemoryRegion ssram1;
101
+ MemoryRegion ssram1_m;
102
+ MemoryRegion ssram23;
103
+ MPS2SCC scc;
104
+ MPS2FPGAIO fpgaio;
105
+ TZPPC ppc[5];
106
+ UnimplementedDeviceState ssram_mpc[3];
107
+ UnimplementedDeviceState spi[5];
108
+ UnimplementedDeviceState i2c[4];
109
+ UnimplementedDeviceState i2s_audio;
110
+ UnimplementedDeviceState gpio[5];
111
+ UnimplementedDeviceState dma[4];
112
+ UnimplementedDeviceState gfx;
113
+ CMSDKAPBUART uart[5];
114
+ SplitIRQ sec_resp_splitter;
115
+ qemu_or_irq uart_irq_orgate;
116
+} MPS2TZMachineState;
117
+
118
+#define TYPE_MPS2TZ_MACHINE "mps2tz"
119
+#define TYPE_MPS2TZ_AN505_MACHINE MACHINE_TYPE_NAME("mps2-an505")
120
+
121
+#define MPS2TZ_MACHINE(obj) \
122
+ OBJECT_CHECK(MPS2TZMachineState, obj, TYPE_MPS2TZ_MACHINE)
123
+#define MPS2TZ_MACHINE_GET_CLASS(obj) \
124
+ OBJECT_GET_CLASS(MPS2TZMachineClass, obj, TYPE_MPS2TZ_MACHINE)
125
+#define MPS2TZ_MACHINE_CLASS(klass) \
126
+ OBJECT_CLASS_CHECK(MPS2TZMachineClass, klass, TYPE_MPS2TZ_MACHINE)
127
+
128
+/* Main SYSCLK frequency in Hz */
129
+#define SYSCLK_FRQ 20000000
130
+
131
+/* Initialize the auxiliary RAM region @mr and map it into
132
+ * the memory map at @base.
133
+ */
134
+static void make_ram(MemoryRegion *mr, const char *name,
135
+ hwaddr base, hwaddr size)
136
+{
137
+ memory_region_init_ram(mr, NULL, name, size, &error_fatal);
138
+ memory_region_add_subregion(get_system_memory(), base, mr);
139
+}
140
+
141
+/* Create an alias of an entire original MemoryRegion @orig
142
+ * located at @base in the memory map.
143
+ */
144
+static void make_ram_alias(MemoryRegion *mr, const char *name,
145
+ MemoryRegion *orig, hwaddr base)
146
+{
147
+ memory_region_init_alias(mr, NULL, name, orig, 0,
148
+ memory_region_size(orig));
149
+ memory_region_add_subregion(get_system_memory(), base, mr);
150
+}
151
+
152
+static void init_sysbus_child(Object *parent, const char *childname,
153
+ void *child, size_t childsize,
154
+ const char *childtype)
155
+{
156
+ object_initialize(child, childsize, childtype);
157
+ object_property_add_child(parent, childname, OBJECT(child), &error_abort);
158
+ qdev_set_parent_bus(DEVICE(child), sysbus_get_default());
159
+
160
+}
161
+
162
+/* Most of the devices in the AN505 FPGA image sit behind
163
+ * Peripheral Protection Controllers. These data structures
164
+ * define the layout of which devices sit behind which PPCs.
165
+ * The devfn for each port is a function which creates, configures
166
+ * and initializes the device, returning the MemoryRegion which
167
+ * needs to be plugged into the downstream end of the PPC port.
168
+ */
169
+typedef MemoryRegion *MakeDevFn(MPS2TZMachineState *mms, void *opaque,
170
+ const char *name, hwaddr size);
171
+
172
+typedef struct PPCPortInfo {
173
+ const char *name;
174
+ MakeDevFn *devfn;
175
+ void *opaque;
176
+ hwaddr addr;
177
+ hwaddr size;
178
+} PPCPortInfo;
179
+
180
+typedef struct PPCInfo {
181
+ const char *name;
182
+ PPCPortInfo ports[TZ_NUM_PORTS];
183
+} PPCInfo;
184
+
185
+static MemoryRegion *make_unimp_dev(MPS2TZMachineState *mms,
186
+ void *opaque,
187
+ const char *name, hwaddr size)
188
+{
189
+ /* Initialize, configure and realize a TYPE_UNIMPLEMENTED_DEVICE,
190
+ * and return a pointer to its MemoryRegion.
191
+ */
192
+ UnimplementedDeviceState *uds = opaque;
193
+
194
+ init_sysbus_child(OBJECT(mms), name, uds,
195
+ sizeof(UnimplementedDeviceState),
196
+ TYPE_UNIMPLEMENTED_DEVICE);
197
+ qdev_prop_set_string(DEVICE(uds), "name", name);
198
+ qdev_prop_set_uint64(DEVICE(uds), "size", size);
199
+ object_property_set_bool(OBJECT(uds), true, "realized", &error_fatal);
200
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(uds), 0);
201
+}
202
+
203
+static MemoryRegion *make_uart(MPS2TZMachineState *mms, void *opaque,
204
+ const char *name, hwaddr size)
205
+{
206
+ CMSDKAPBUART *uart = opaque;
207
+ int i = uart - &mms->uart[0];
208
+ Chardev *uartchr = i < MAX_SERIAL_PORTS ? serial_hds[i] : NULL;
209
+ int rxirqno = i * 2;
210
+ int txirqno = i * 2 + 1;
211
+ int combirqno = i + 10;
212
+ SysBusDevice *s;
213
+ DeviceState *iotkitdev = DEVICE(&mms->iotkit);
214
+ DeviceState *orgate_dev = DEVICE(&mms->uart_irq_orgate);
215
+
216
+ init_sysbus_child(OBJECT(mms), name, uart,
217
+ sizeof(mms->uart[0]), TYPE_CMSDK_APB_UART);
218
+ qdev_prop_set_chr(DEVICE(uart), "chardev", uartchr);
219
+ qdev_prop_set_uint32(DEVICE(uart), "pclk-frq", SYSCLK_FRQ);
220
+ object_property_set_bool(OBJECT(uart), true, "realized", &error_fatal);
221
+ s = SYS_BUS_DEVICE(uart);
222
+ sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev,
223
+ "EXP_IRQ", txirqno));
224
+ sysbus_connect_irq(s, 1, qdev_get_gpio_in_named(iotkitdev,
225
+ "EXP_IRQ", rxirqno));
226
+ sysbus_connect_irq(s, 2, qdev_get_gpio_in(orgate_dev, i * 2));
227
+ sysbus_connect_irq(s, 3, qdev_get_gpio_in(orgate_dev, i * 2 + 1));
228
+ sysbus_connect_irq(s, 4, qdev_get_gpio_in_named(iotkitdev,
229
+ "EXP_IRQ", combirqno));
230
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(uart), 0);
231
+}
232
+
233
+static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
234
+ const char *name, hwaddr size)
235
+{
236
+ MPS2SCC *scc = opaque;
237
+ DeviceState *sccdev;
238
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
239
+
240
+ object_initialize(scc, sizeof(mms->scc), TYPE_MPS2_SCC);
241
+ sccdev = DEVICE(scc);
242
+ qdev_set_parent_bus(sccdev, sysbus_get_default());
243
+ qdev_prop_set_uint32(sccdev, "scc-cfg4", 0x2);
244
+ qdev_prop_set_uint32(sccdev, "scc-aid", 0x02000008);
245
+ qdev_prop_set_uint32(sccdev, "scc-id", mmc->scc_id);
246
+ object_property_set_bool(OBJECT(scc), true, "realized", &error_fatal);
247
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(sccdev), 0);
248
+}
249
+
250
+static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
251
+ const char *name, hwaddr size)
252
+{
253
+ MPS2FPGAIO *fpgaio = opaque;
254
+
255
+ object_initialize(fpgaio, sizeof(mms->fpgaio), TYPE_MPS2_FPGAIO);
256
+ qdev_set_parent_bus(DEVICE(fpgaio), sysbus_get_default());
257
+ object_property_set_bool(OBJECT(fpgaio), true, "realized", &error_fatal);
258
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
259
+}
260
+
261
+static void mps2tz_common_init(MachineState *machine)
262
+{
263
+ MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
264
+ MachineClass *mc = MACHINE_GET_CLASS(machine);
265
+ MemoryRegion *system_memory = get_system_memory();
266
+ DeviceState *iotkitdev;
267
+ DeviceState *dev_splitter;
268
+ int i;
269
+
270
+ if (strcmp(machine->cpu_type, mc->default_cpu_type) != 0) {
271
+ error_report("This board can only be used with CPU %s",
272
+ mc->default_cpu_type);
273
+ exit(1);
274
+ }
275
+
276
+ init_sysbus_child(OBJECT(machine), "iotkit", &mms->iotkit,
277
+ sizeof(mms->iotkit), TYPE_IOTKIT);
278
+ iotkitdev = DEVICE(&mms->iotkit);
279
+ object_property_set_link(OBJECT(&mms->iotkit), OBJECT(system_memory),
280
+ "memory", &error_abort);
281
+ qdev_prop_set_uint32(iotkitdev, "EXP_NUMIRQ", 92);
282
+ qdev_prop_set_uint32(iotkitdev, "MAINCLK", SYSCLK_FRQ);
283
+ object_property_set_bool(OBJECT(&mms->iotkit), true, "realized",
284
+ &error_fatal);
285
+
286
+ /* The sec_resp_cfg output from the IoTKit must be split into multiple
287
+ * lines, one for each of the PPCs we create here.
288
+ */
289
+ object_initialize(&mms->sec_resp_splitter, sizeof(mms->sec_resp_splitter),
290
+ TYPE_SPLIT_IRQ);
291
+ object_property_add_child(OBJECT(machine), "sec-resp-splitter",
292
+ OBJECT(&mms->sec_resp_splitter), &error_abort);
293
+ object_property_set_int(OBJECT(&mms->sec_resp_splitter), 5,
294
+ "num-lines", &error_fatal);
295
+ object_property_set_bool(OBJECT(&mms->sec_resp_splitter), true,
296
+ "realized", &error_fatal);
297
+ dev_splitter = DEVICE(&mms->sec_resp_splitter);
298
+ qdev_connect_gpio_out_named(iotkitdev, "sec_resp_cfg", 0,
299
+ qdev_get_gpio_in(dev_splitter, 0));
300
+
301
+ /* The IoTKit sets up much of the memory layout, including
302
+ * the aliases between secure and non-secure regions in the
303
+ * address space. The FPGA itself contains:
304
+ *
305
+ * 0x00000000..0x003fffff SSRAM1
306
+ * 0x00400000..0x007fffff alias of SSRAM1
307
+ * 0x28000000..0x283fffff 4MB SSRAM2 + SSRAM3
308
+ * 0x40100000..0x4fffffff AHB Master Expansion 1 interface devices
309
+ * 0x80000000..0x80ffffff 16MB PSRAM
310
+ */
311
+
312
+ /* The FPGA images have an odd combination of different RAMs,
313
+ * because in hardware they are different implementations and
314
+ * connected to different buses, giving varying performance/size
315
+ * tradeoffs. For QEMU they're all just RAM, though. We arbitrarily
316
+ * call the 16MB our "system memory", as it's the largest lump.
317
+ */
318
+ memory_region_allocate_system_memory(&mms->psram,
319
+ NULL, "mps.ram", 0x01000000);
320
+ memory_region_add_subregion(system_memory, 0x80000000, &mms->psram);
321
+
322
+ /* The SSRAM memories should all be behind Memory Protection Controllers,
323
+ * but we don't implement that yet.
324
+ */
325
+ make_ram(&mms->ssram1, "mps.ssram1", 0x00000000, 0x00400000);
326
+ make_ram_alias(&mms->ssram1_m, "mps.ssram1_m", &mms->ssram1, 0x00400000);
327
+
328
+ make_ram(&mms->ssram23, "mps.ssram23", 0x28000000, 0x00400000);
329
+
330
+ /* The overflow IRQs for all UARTs are ORed together.
331
+ * Tx, Rx and "combined" IRQs are sent to the NVIC separately.
332
+ * Create the OR gate for this.
333
+ */
334
+ object_initialize(&mms->uart_irq_orgate, sizeof(mms->uart_irq_orgate),
335
+ TYPE_OR_IRQ);
336
+ object_property_add_child(OBJECT(mms), "uart-irq-orgate",
337
+ OBJECT(&mms->uart_irq_orgate), &error_abort);
338
+ object_property_set_int(OBJECT(&mms->uart_irq_orgate), 10, "num-lines",
339
+ &error_fatal);
340
+ object_property_set_bool(OBJECT(&mms->uart_irq_orgate), true,
341
+ "realized", &error_fatal);
342
+ qdev_connect_gpio_out(DEVICE(&mms->uart_irq_orgate), 0,
343
+ qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 15));
344
+
345
+ /* Most of the devices in the FPGA are behind Peripheral Protection
346
+ * Controllers. The required order for initializing things is:
347
+ * + initialize the PPC
348
+ * + initialize, configure and realize downstream devices
349
+ * + connect downstream device MemoryRegions to the PPC
350
+ * + realize the PPC
351
+ * + map the PPC's MemoryRegions to the places in the address map
352
+ * where the downstream devices should appear
353
+ * + wire up the PPC's control lines to the IoTKit object
354
+ */
355
+
356
+ const PPCInfo ppcs[] = { {
357
+ .name = "apb_ppcexp0",
358
+ .ports = {
359
+ { "ssram-mpc0", make_unimp_dev, &mms->ssram_mpc[0],
360
+ 0x58007000, 0x1000 },
361
+ { "ssram-mpc1", make_unimp_dev, &mms->ssram_mpc[1],
362
+ 0x58008000, 0x1000 },
363
+ { "ssram-mpc2", make_unimp_dev, &mms->ssram_mpc[2],
364
+ 0x58009000, 0x1000 },
365
+ },
366
+ }, {
367
+ .name = "apb_ppcexp1",
368
+ .ports = {
369
+ { "spi0", make_unimp_dev, &mms->spi[0], 0x40205000, 0x1000 },
370
+ { "spi1", make_unimp_dev, &mms->spi[1], 0x40206000, 0x1000 },
371
+ { "spi2", make_unimp_dev, &mms->spi[2], 0x40209000, 0x1000 },
372
+ { "spi3", make_unimp_dev, &mms->spi[3], 0x4020a000, 0x1000 },
373
+ { "spi4", make_unimp_dev, &mms->spi[4], 0x4020b000, 0x1000 },
374
+ { "uart0", make_uart, &mms->uart[0], 0x40200000, 0x1000 },
375
+ { "uart1", make_uart, &mms->uart[1], 0x40201000, 0x1000 },
376
+ { "uart2", make_uart, &mms->uart[2], 0x40202000, 0x1000 },
377
+ { "uart3", make_uart, &mms->uart[3], 0x40203000, 0x1000 },
378
+ { "uart4", make_uart, &mms->uart[4], 0x40204000, 0x1000 },
379
+ { "i2c0", make_unimp_dev, &mms->i2c[0], 0x40207000, 0x1000 },
380
+ { "i2c1", make_unimp_dev, &mms->i2c[1], 0x40208000, 0x1000 },
381
+ { "i2c2", make_unimp_dev, &mms->i2c[2], 0x4020c000, 0x1000 },
382
+ { "i2c3", make_unimp_dev, &mms->i2c[3], 0x4020d000, 0x1000 },
383
+ },
384
+ }, {
385
+ .name = "apb_ppcexp2",
386
+ .ports = {
387
+ { "scc", make_scc, &mms->scc, 0x40300000, 0x1000 },
388
+ { "i2s-audio", make_unimp_dev, &mms->i2s_audio,
389
+ 0x40301000, 0x1000 },
390
+ { "fpgaio", make_fpgaio, &mms->fpgaio, 0x40302000, 0x1000 },
391
+ },
392
+ }, {
393
+ .name = "ahb_ppcexp0",
394
+ .ports = {
395
+ { "gfx", make_unimp_dev, &mms->gfx, 0x41000000, 0x140000 },
396
+ { "gpio0", make_unimp_dev, &mms->gpio[0], 0x40100000, 0x1000 },
397
+ { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
398
+ { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
399
+ { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
400
+ { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
401
+ },
402
+ }, {
403
+ .name = "ahb_ppcexp1",
404
+ .ports = {
405
+ { "dma0", make_unimp_dev, &mms->dma[0], 0x40110000, 0x1000 },
406
+ { "dma1", make_unimp_dev, &mms->dma[1], 0x40111000, 0x1000 },
407
+ { "dma2", make_unimp_dev, &mms->dma[2], 0x40112000, 0x1000 },
408
+ { "dma3", make_unimp_dev, &mms->dma[3], 0x40113000, 0x1000 },
409
+ },
410
+ },
411
+ };
412
+
413
+ for (i = 0; i < ARRAY_SIZE(ppcs); i++) {
414
+ const PPCInfo *ppcinfo = &ppcs[i];
415
+ TZPPC *ppc = &mms->ppc[i];
416
+ DeviceState *ppcdev;
417
+ int port;
418
+ char *gpioname;
419
+
420
+ init_sysbus_child(OBJECT(machine), ppcinfo->name, ppc,
421
+ sizeof(TZPPC), TYPE_TZ_PPC);
422
+ ppcdev = DEVICE(ppc);
423
+
424
+ for (port = 0; port < TZ_NUM_PORTS; port++) {
425
+ const PPCPortInfo *pinfo = &ppcinfo->ports[port];
426
+ MemoryRegion *mr;
427
+ char *portname;
428
+
429
+ if (!pinfo->devfn) {
430
+ continue;
431
+ }
432
+
433
+ mr = pinfo->devfn(mms, pinfo->opaque, pinfo->name, pinfo->size);
434
+ portname = g_strdup_printf("port[%d]", port);
435
+ object_property_set_link(OBJECT(ppc), OBJECT(mr),
436
+ portname, &error_fatal);
437
+ g_free(portname);
438
+ }
439
+
440
+ object_property_set_bool(OBJECT(ppc), true, "realized", &error_fatal);
441
+
442
+ for (port = 0; port < TZ_NUM_PORTS; port++) {
443
+ const PPCPortInfo *pinfo = &ppcinfo->ports[port];
444
+
445
+ if (!pinfo->devfn) {
446
+ continue;
447
+ }
448
+ sysbus_mmio_map(SYS_BUS_DEVICE(ppc), port, pinfo->addr);
449
+
450
+ gpioname = g_strdup_printf("%s_nonsec", ppcinfo->name);
451
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, port,
452
+ qdev_get_gpio_in_named(ppcdev,
453
+ "cfg_nonsec",
454
+ port));
455
+ g_free(gpioname);
456
+ gpioname = g_strdup_printf("%s_ap", ppcinfo->name);
457
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, port,
458
+ qdev_get_gpio_in_named(ppcdev,
459
+ "cfg_ap", port));
460
+ g_free(gpioname);
461
+ }
462
+
463
+ gpioname = g_strdup_printf("%s_irq_enable", ppcinfo->name);
464
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, 0,
465
+ qdev_get_gpio_in_named(ppcdev,
466
+ "irq_enable", 0));
467
+ g_free(gpioname);
468
+ gpioname = g_strdup_printf("%s_irq_clear", ppcinfo->name);
469
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, 0,
470
+ qdev_get_gpio_in_named(ppcdev,
471
+ "irq_clear", 0));
472
+ g_free(gpioname);
473
+ gpioname = g_strdup_printf("%s_irq_status", ppcinfo->name);
474
+ qdev_connect_gpio_out_named(ppcdev, "irq", 0,
475
+ qdev_get_gpio_in_named(iotkitdev,
476
+ gpioname, 0));
477
+ g_free(gpioname);
478
+
479
+ qdev_connect_gpio_out(dev_splitter, i,
480
+ qdev_get_gpio_in_named(ppcdev,
481
+ "cfg_sec_resp", 0));
482
+ }
483
+
484
+ /* In hardware this is a LAN9220; the LAN9118 is software compatible
485
+ * except that it doesn't support the checksum-offload feature.
486
+ * The ethernet controller is not behind a PPC.
487
+ */
488
+ lan9118_init(&nd_table[0], 0x42000000,
489
+ qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
490
+
491
+ create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
492
+
493
+ armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
494
+}
495
+
496
+static void mps2tz_class_init(ObjectClass *oc, void *data)
497
+{
498
+ MachineClass *mc = MACHINE_CLASS(oc);
499
+
500
+ mc->init = mps2tz_common_init;
501
+ mc->max_cpus = 1;
502
+}
503
+
504
+static void mps2tz_an505_class_init(ObjectClass *oc, void *data)
505
+{
506
+ MachineClass *mc = MACHINE_CLASS(oc);
507
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_CLASS(oc);
508
+
509
+ mc->desc = "ARM MPS2 with AN505 FPGA image for Cortex-M33";
510
+ mmc->fpga_type = FPGA_AN505;
511
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-m33");
512
+ mmc->scc_id = 0x41040000 | (505 << 4);
513
+}
514
+
515
+static const TypeInfo mps2tz_info = {
516
+ .name = TYPE_MPS2TZ_MACHINE,
517
+ .parent = TYPE_MACHINE,
518
+ .abstract = true,
519
+ .instance_size = sizeof(MPS2TZMachineState),
520
+ .class_size = sizeof(MPS2TZMachineClass),
521
+ .class_init = mps2tz_class_init,
522
+};
523
+
524
+static const TypeInfo mps2tz_an505_info = {
525
+ .name = TYPE_MPS2TZ_AN505_MACHINE,
526
+ .parent = TYPE_MPS2TZ_MACHINE,
527
+ .class_init = mps2tz_an505_class_init,
528
+};
529
+
530
+static void mps2tz_machine_init(void)
531
+{
532
+ type_register_static(&mps2tz_info);
533
+ type_register_static(&mps2tz_an505_info);
534
+}
535
+
536
+type_init(mps2tz_machine_init);
537
--
56
--
538
2.16.2
57
2.19.1
539
58
540
59
New patch
1
1
The HCR.FB virtualization configuration register bit requests that
2
TLB maintenance, branch predictor invalidate-all and icache
3
invalidate-all operations performed in NS EL1 should be upgraded
4
from "local CPU only to "broadcast within Inner Shareable domain".
5
For QEMU we NOP the branch predictor and icache operations, so
6
we only need to upgrade the TLB invalidates:
7
AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
8
ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
9
AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
10
TLBI VALE1, TLBI VAALE1
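
Distilled from the patch below, the change applied to each of these operations follows one pattern: the non-IS write handler first asks the new tlb_force_broadcast() helper whether NS EL1 with HCR_EL2.FB set requires broadcast, and if so defers to the existing Inner Shareable handler. Sketch only, with TLBIALL shown; the other handlers are analogous:

    static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
    {
        /* Invalidate all (TLBIALL) */
        ARMCPU *cpu = arm_env_get_cpu(env);

        if (tlb_force_broadcast(env)) {
            /* Upgrade to the Inner Shareable (broadcast) variant */
            tlbiall_is_write(env, NULL, value);
            return;
        }

        tlb_flush(CPU(cpu));    /* local CPU only */
    }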
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
15
---
16
target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
17
1 file changed, 116 insertions(+), 75 deletions(-)
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
raw_write(env, ri, value);
25
}
26
27
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
- uint64_t value)
29
-{
30
- /* Invalidate all (TLBIALL) */
31
- ARMCPU *cpu = arm_env_get_cpu(env);
32
-
33
- tlb_flush(CPU(cpu));
34
-}
35
-
36
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
37
- uint64_t value)
38
-{
39
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
40
- ARMCPU *cpu = arm_env_get_cpu(env);
41
-
42
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
43
-}
44
-
45
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
- uint64_t value)
47
-{
48
- /* Invalidate by ASID (TLBIASID) */
49
- ARMCPU *cpu = arm_env_get_cpu(env);
50
-
51
- tlb_flush(CPU(cpu));
52
-}
53
-
54
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
- uint64_t value)
56
-{
57
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
58
- ARMCPU *cpu = arm_env_get_cpu(env);
59
-
60
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
61
-}
62
-
63
/* IS variants of TLB operations must affect all cores */
64
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
65
uint64_t value)
66
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
67
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
68
}
69
70
+/*
71
+ * Non-IS variants of TLB operations are upgraded to
72
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
73
+ * force broadcast of these operations.
74
+ */
75
+static bool tlb_force_broadcast(CPUARMState *env)
76
+{
77
+ return (env->cp15.hcr_el2 & HCR_FB) &&
78
+ arm_current_el(env) == 1 && !arm_is_secure_below_el3(env);
79
+}
80
+
81
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
82
+ uint64_t value)
83
+{
84
+ /* Invalidate all (TLBIALL) */
85
+ ARMCPU *cpu = arm_env_get_cpu(env);
86
+
87
+ if (tlb_force_broadcast(env)) {
88
+ tlbiall_is_write(env, NULL, value);
89
+ return;
90
+ }
91
+
92
+ tlb_flush(CPU(cpu));
93
+}
94
+
95
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
96
+ uint64_t value)
97
+{
98
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
99
+ ARMCPU *cpu = arm_env_get_cpu(env);
100
+
101
+ if (tlb_force_broadcast(env)) {
102
+ tlbimva_is_write(env, NULL, value);
103
+ return;
104
+ }
105
+
106
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
107
+}
108
+
109
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
+ uint64_t value)
111
+{
112
+ /* Invalidate by ASID (TLBIASID) */
113
+ ARMCPU *cpu = arm_env_get_cpu(env);
114
+
115
+ if (tlb_force_broadcast(env)) {
116
+ tlbiasid_is_write(env, NULL, value);
117
+ return;
118
+ }
119
+
120
+ tlb_flush(CPU(cpu));
121
+}
122
+
123
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+ uint64_t value)
125
+{
126
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
127
+ ARMCPU *cpu = arm_env_get_cpu(env);
128
+
129
+ if (tlb_force_broadcast(env)) {
130
+ tlbimvaa_is_write(env, NULL, value);
131
+ return;
132
+ }
133
+
134
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
135
+}
136
+
137
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
uint64_t value)
139
{
140
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
141
* Page D4-1736 (DDI0487A.b)
142
*/
143
144
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
- uint64_t value)
146
-{
147
- CPUState *cs = ENV_GET_CPU(env);
148
-
149
- if (arm_is_secure_below_el3(env)) {
150
- tlb_flush_by_mmuidx(cs,
151
- ARMMMUIdxBit_S1SE1 |
152
- ARMMMUIdxBit_S1SE0);
153
- } else {
154
- tlb_flush_by_mmuidx(cs,
155
- ARMMMUIdxBit_S12NSE1 |
156
- ARMMMUIdxBit_S12NSE0);
157
- }
158
-}
159
-
160
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
}
165
}
166
167
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
168
+ uint64_t value)
169
+{
170
+ CPUState *cs = ENV_GET_CPU(env);
171
+
172
+ if (tlb_force_broadcast(env)) {
173
+ tlbi_aa64_vmalle1is_write(env, NULL, value);
174
+ return;
175
+ }
176
+
177
+ if (arm_is_secure_below_el3(env)) {
178
+ tlb_flush_by_mmuidx(cs,
179
+ ARMMMUIdxBit_S1SE1 |
180
+ ARMMMUIdxBit_S1SE0);
181
+ } else {
182
+ tlb_flush_by_mmuidx(cs,
183
+ ARMMMUIdxBit_S12NSE1 |
184
+ ARMMMUIdxBit_S12NSE0);
185
+ }
186
+}
187
+
188
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
uint64_t value)
190
{
191
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
192
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
193
}
194
195
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
- uint64_t value)
197
-{
198
- /* Invalidate by VA, EL1&0 (AArch64 version).
199
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
200
- * since we don't support flush-for-specific-ASID-only or
201
- * flush-last-level-only.
202
- */
203
- ARMCPU *cpu = arm_env_get_cpu(env);
204
- CPUState *cs = CPU(cpu);
205
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
206
-
207
- if (arm_is_secure_below_el3(env)) {
208
- tlb_flush_page_by_mmuidx(cs, pageaddr,
209
- ARMMMUIdxBit_S1SE1 |
210
- ARMMMUIdxBit_S1SE0);
211
- } else {
212
- tlb_flush_page_by_mmuidx(cs, pageaddr,
213
- ARMMMUIdxBit_S12NSE1 |
214
- ARMMMUIdxBit_S12NSE0);
215
- }
216
-}
217
-
218
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
uint64_t value)
220
{
221
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
222
}
223
}
224
225
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
226
+ uint64_t value)
227
+{
228
+ /* Invalidate by VA, EL1&0 (AArch64 version).
229
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
230
+ * since we don't support flush-for-specific-ASID-only or
231
+ * flush-last-level-only.
232
+ */
233
+ ARMCPU *cpu = arm_env_get_cpu(env);
234
+ CPUState *cs = CPU(cpu);
235
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
236
+
237
+ if (tlb_force_broadcast(env)) {
238
+ tlbi_aa64_vae1is_write(env, NULL, value);
239
+ return;
240
+ }
241
+
242
+ if (arm_is_secure_below_el3(env)) {
243
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
244
+ ARMMMUIdxBit_S1SE1 |
245
+ ARMMMUIdxBit_S1SE0);
246
+ } else {
247
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
248
+ ARMMMUIdxBit_S12NSE1 |
249
+ ARMMMUIdxBit_S12NSE0);
250
+ }
251
+}
252
+
253
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
uint64_t value)
255
{
256
--
257
2.19.1
258
259
1
Create an "init-svtor" property on the armv7m container
1
The HCR.DC virtualization configuration register bit has the
2
object which we can forward to the CPU object.
2
following effects:
3
* SCTLR.M behaves as if it is 0 for all purposes except
4
direct reads of the bit
5
* HCR.VM behaves as if it is 1 for all purposes except
6
direct reads of the bit
7
* the memory type produced by the first stage of the EL1&EL0
8
translation regime is Normal Non-Shareable,
9
Inner Write-Back Read-Allocate Write-Allocate,
10
Outer Write-Back Read-Allocate Write-Allocate.
11
12
Implement this behaviour.
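
In code terms this boils down to two extra checks in regime_translation_disabled() plus a cache-attribute override in get_phys_addr(). Roughly, as fragments condensed from the patch below:

    /* In regime_translation_disabled(): HCR.DC makes HCR.VM behave as 1
     * for stage 2, and makes SCTLR_EL1.M behave as 0 for stage 1.
     */
    if (mmu_idx == ARMMMUIdx_S2NS) {
        return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
    }
    if ((env->cp15.hcr_el2 & HCR_DC) &&
        (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
        return true;    /* stage 1 translation disabled */
    }

    /* In get_phys_addr(): force the stage 1 memory attributes */
    cacheattrs->attrs = 0xff;       /* Normal, Inner/Outer WB RA WA */
    cacheattrs->shareability = 0;   /* Non-Shareable */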
3
13
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180220180325.29818-8-peter.maydell@linaro.org
16
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
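
As a usage note for the init-svtor property added above: an SoC model containing the armv7m object can now set the reset Secure VTOR before realizing it, which is what the IoT Kit model later in this pull request does:

    /* e.g. in the containing SoC's realize function */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", 0x10000000);
    object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);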
7
---
17
---
8
include/hw/arm/armv7m.h | 2 ++
18
target/arm/helper.c | 23 +++++++++++++++++++++--
9
hw/arm/armv7m.c | 9 +++++++++
19
1 file changed, 21 insertions(+), 2 deletions(-)
10
2 files changed, 11 insertions(+)
11
20
12
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/arm/armv7m.h
23
--- a/target/arm/helper.c
15
+++ b/include/hw/arm/armv7m.h
24
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ typedef struct {
25
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
17
* that CPU accesses see. (The NVIC, bitbanding and other CPU-internal
26
* * The Non-secure TTBCR.EAE bit is set to 1
18
* devices will be automatically layered on top of this view.)
27
* * The implementation includes EL2, and the value of HCR.VM is 1
19
* + Property "idau": IDAU interface (forwarded to CPU object)
28
*
20
+ * + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
29
+ * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
21
*/
30
+ *
22
typedef struct ARMv7MState {
31
* ATS1Hx always uses the 64bit format (not supported yet).
23
/*< private >*/
32
*/
24
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
33
format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
25
/* MemoryRegion the board provides to us (with its devices, RAM, etc) */
34
26
MemoryRegion *board_memory;
35
if (arm_feature(env, ARM_FEATURE_EL2)) {
27
Object *idau;
36
if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
28
+ uint32_t init_svtor;
37
- format64 |= env->cp15.hcr_el2 & HCR_VM;
29
} ARMv7MState;
38
+ format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
30
39
} else {
31
#endif
40
format64 |= arm_current_el(env) == 2;
32
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
41
}
33
index XXXXXXX..XXXXXXX 100644
42
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
34
--- a/hw/arm/armv7m.c
43
}
35
+++ b/hw/arm/armv7m.c
44
36
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
45
if (mmu_idx == ARMMMUIdx_S2NS) {
37
return;
46
- return (env->cp15.hcr_el2 & HCR_VM) == 0;
47
+ /* HCR.DC means HCR.VM behaves as 1 */
48
+ return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
49
}
50
51
if (env->cp15.hcr_el2 & HCR_TGE) {
52
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
38
}
53
}
39
}
54
}
40
+ if (object_property_find(OBJECT(s->cpu), "init-svtor", NULL)) {
55
41
+ object_property_set_uint(OBJECT(s->cpu), s->init_svtor,
56
+ if ((env->cp15.hcr_el2 & HCR_DC) &&
42
+ "init-svtor", &err);
57
+ (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
43
+ if (err != NULL) {
58
+ /* HCR.DC means SCTLR_EL1.M behaves as 0 */
44
+ error_propagate(errp, err);
59
+ return true;
45
+ return;
46
+ }
47
+ }
60
+ }
48
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
61
+
49
if (err != NULL) {
62
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
50
error_propagate(errp, err);
63
}
51
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
64
52
DEFINE_PROP_LINK("memory", ARMv7MState, board_memory, TYPE_MEMORY_REGION,
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
53
MemoryRegion *),
66
54
DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
67
/* Combine the S1 and S2 cache attributes, if needed */
55
+ DEFINE_PROP_UINT32("init-svtor", ARMv7MState, init_svtor, 0),
68
if (!ret && cacheattrs != NULL) {
56
DEFINE_PROP_END_OF_LIST(),
69
+ if (env->cp15.hcr_el2 & HCR_DC) {
57
};
70
+ /*
71
+ * HCR.DC forces the first stage attributes to
72
+ * Normal Non-Shareable,
73
+ * Inner Write-Back Read-Allocate Write-Allocate,
74
+ * Outer Write-Back Read-Allocate Write-Allocate.
75
+ */
76
+ cacheattrs->attrs = 0xff;
77
+ cacheattrs->shareability = 0;
78
+ }
79
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
80
}
58
81
59
--
82
--
60
2.16.2
83
2.19.1
61
84
62
85
1
Model the Arm IoT Kit documented in
1
The A/I/F bits in ISR_EL1 should track the virtual interrupt
2
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
2
status, not the physical interrupt status, if the associated
3
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
4
always showing the physical interrupt status.
3
5
4
The Arm IoT Kit is a subsystem which includes a CPU and some devices,
6
We don't currently implement anything to do with external
5
and is intended to be extended by adding extra devices to form a
7
aborts, so this applies only to the I and F bits (though it
6
complete system. It is used in the MPS2 board's AN505 image for the
8
ought to be possible for the outer guest to present a virtual
7
Cortex-M33.
9
external abort to the inner guest, even if QEMU doesn't
10
emulate physical external aborts, so there is missing
11
functionality in this area).
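
Distilled from the isr_read() change below: the I bit now follows the virtual IRQ line whenever HCR_EL2.IMO is set and the physical line otherwise; the F bit is handled the same way using FMO and the virtual FIQ line:

    if (arm_hcr_el2_imo(env)) {
        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
            ret |= CPSR_I;
        }
    } else {
        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
            ret |= CPSR_I;
        }
    }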
8
12
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20180220180325.29818-19-peter.maydell@linaro.org
15
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
12
---
16
---
13
hw/arm/Makefile.objs | 1 +
17
target/arm/helper.c | 22 ++++++++++++++++++----
14
include/hw/arm/iotkit.h | 109 ++++++++
18
1 file changed, 18 insertions(+), 4 deletions(-)
15
hw/arm/iotkit.c | 598 ++++++++++++++++++++++++++++++++++++++++
16
default-configs/arm-softmmu.mak | 1 +
17
4 files changed, 709 insertions(+)
18
create mode 100644 include/hw/arm/iotkit.h
19
create mode 100644 hw/arm/iotkit.c
20
19
21
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/Makefile.objs
22
--- a/target/arm/helper.c
24
+++ b/hw/arm/Makefile.objs
23
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
24
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
26
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
25
CPUState *cs = ENV_GET_CPU(env);
27
obj-$(CONFIG_MPS2) += mps2.o
26
uint64_t ret = 0;
28
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
27
29
+obj-$(CONFIG_IOTKIT) += iotkit.o
28
- if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
30
diff --git a/include/hw/arm/iotkit.h b/include/hw/arm/iotkit.h
29
- ret |= CPSR_I;
31
new file mode 100644
30
+ if (arm_hcr_el2_imo(env)) {
32
index XXXXXXX..XXXXXXX
31
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
33
--- /dev/null
32
+ ret |= CPSR_I;
34
+++ b/include/hw/arm/iotkit.h
33
+ }
35
@@ -XXX,XX +XXX,XX @@
34
+ } else {
36
+/*
35
+ if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
37
+ * ARM IoT Kit
36
+ ret |= CPSR_I;
38
+ *
37
+ }
39
+ * Copyright (c) 2018 Linaro Limited
38
}
40
+ * Written by Peter Maydell
39
- if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
41
+ *
40
- ret |= CPSR_F;
42
+ * This program is free software; you can redistribute it and/or modify
43
+ * it under the terms of the GNU General Public License version 2 or
44
+ * (at your option) any later version.
45
+ */
46
+
41
+
47
+/* This is a model of the Arm IoT Kit which is documented in
42
+ if (arm_hcr_el2_fmo(env)) {
48
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
43
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
49
+ * It contains:
44
+ ret |= CPSR_F;
50
+ * a Cortex-M33
45
+ }
51
+ * the IDAU
46
+ } else {
52
+ * some timers and watchdogs
47
+ if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
53
+ * two peripheral protection controllers
48
+ ret |= CPSR_F;
54
+ * a memory protection controller
49
+ }
55
+ * a security controller
50
}
56
+ * a bus fabric which arranges that some parts of the address
57
+ * space are secure and non-secure aliases of each other
58
+ *
59
+ * QEMU interface:
60
+ * + QOM property "memory" is a MemoryRegion containing the devices provided
61
+ * by the board model.
62
+ * + QOM property "MAINCLK" is the frequency of the main system clock
63
+ * + QOM property "EXP_NUMIRQ" sets the number of expansion interrupts
64
+ * + Named GPIO inputs "EXP_IRQ" 0..n are the expansion interrupts, which
65
+ * are wired to the NVIC lines 32 .. n+32
66
+ * Controlling up to 4 AHB expansion PPBs which a system using the IoTKit
67
+ * might provide:
68
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_nonsec[0..15]
69
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_ap[0..15]
70
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_enable
71
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_clear
72
+ * + named GPIO inputs apb_ppcexp{0,1,2,3}_irq_status
73
+ * Controlling each of the 4 expansion AHB PPCs which a system using the IoTKit
74
+ * might provide:
75
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_nonsec[0..15]
76
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_ap[0..15]
77
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_enable
78
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_clear
79
+ * + named GPIO inputs ahb_ppcexp{0,1,2,3}_irq_status
80
+ */
81
+
51
+
82
+#ifndef IOTKIT_H
52
/* External aborts are not possible in QEMU so A bit is always clear */
83
+#define IOTKIT_H
53
return ret;
84
+
54
}
85
+#include "hw/sysbus.h"
86
+#include "hw/arm/armv7m.h"
87
+#include "hw/misc/iotkit-secctl.h"
88
+#include "hw/misc/tz-ppc.h"
89
+#include "hw/timer/cmsdk-apb-timer.h"
90
+#include "hw/misc/unimp.h"
91
+#include "hw/or-irq.h"
92
+#include "hw/core/split-irq.h"
93
+
94
+#define TYPE_IOTKIT "iotkit"
95
+#define IOTKIT(obj) OBJECT_CHECK(IoTKit, (obj), TYPE_IOTKIT)
96
+
97
+/* We have an IRQ splitter and an OR gate input for each external PPC
98
+ * and the 2 internal PPCs
99
+ */
100
+#define NUM_EXTERNAL_PPCS (IOTS_NUM_AHB_EXP_PPC + IOTS_NUM_APB_EXP_PPC)
101
+#define NUM_PPCS (NUM_EXTERNAL_PPCS + 2)
102
+
103
+typedef struct IoTKit {
104
+ /*< private >*/
105
+ SysBusDevice parent_obj;
106
+
107
+ /*< public >*/
108
+ ARMv7MState armv7m;
109
+ IoTKitSecCtl secctl;
110
+ TZPPC apb_ppc0;
111
+ TZPPC apb_ppc1;
112
+ CMSDKAPBTIMER timer0;
113
+ CMSDKAPBTIMER timer1;
114
+ qemu_or_irq ppc_irq_orgate;
115
+ SplitIRQ sec_resp_splitter;
116
+ SplitIRQ ppc_irq_splitter[NUM_PPCS];
117
+
118
+ UnimplementedDeviceState dualtimer;
119
+ UnimplementedDeviceState s32ktimer;
120
+
121
+ MemoryRegion container;
122
+ MemoryRegion alias1;
123
+ MemoryRegion alias2;
124
+ MemoryRegion alias3;
125
+ MemoryRegion sram0;
126
+
127
+ qemu_irq *exp_irqs;
128
+ qemu_irq ppc0_irq;
129
+ qemu_irq ppc1_irq;
130
+ qemu_irq sec_resp_cfg;
131
+ qemu_irq sec_resp_cfg_in;
132
+ qemu_irq nsc_cfg_in;
133
+
134
+ qemu_irq irq_status_in[NUM_EXTERNAL_PPCS];
135
+
136
+ uint32_t nsccfg;
137
+
138
+ /* Properties */
139
+ MemoryRegion *board_memory;
140
+ uint32_t exp_numirq;
141
+ uint32_t mainclk_frq;
142
+} IoTKit;
143
+
144
+#endif
145
diff --git a/hw/arm/iotkit.c b/hw/arm/iotkit.c
146
new file mode 100644
147
index XXXXXXX..XXXXXXX
148
--- /dev/null
149
+++ b/hw/arm/iotkit.c
150
@@ -XXX,XX +XXX,XX @@
151
+/*
152
+ * Arm IoT Kit
153
+ *
154
+ * Copyright (c) 2018 Linaro Limited
155
+ * Written by Peter Maydell
156
+ *
157
+ * This program is free software; you can redistribute it and/or modify
158
+ * it under the terms of the GNU General Public License version 2 or
159
+ * (at your option) any later version.
160
+ */
161
+
162
+#include "qemu/osdep.h"
163
+#include "qemu/log.h"
164
+#include "qapi/error.h"
165
+#include "trace.h"
166
+#include "hw/sysbus.h"
167
+#include "hw/registerfields.h"
168
+#include "hw/arm/iotkit.h"
169
+#include "hw/misc/unimp.h"
170
+#include "hw/arm/arm.h"
171
+
172
+/* Create an alias region of @size bytes starting at @base
173
+ * which mirrors the memory starting at @orig.
174
+ */
175
+static void make_alias(IoTKit *s, MemoryRegion *mr, const char *name,
176
+ hwaddr base, hwaddr size, hwaddr orig)
177
+{
178
+ memory_region_init_alias(mr, NULL, name, &s->container, orig, size);
179
+ /* The alias is even lower priority than unimplemented_device regions */
180
+ memory_region_add_subregion_overlap(&s->container, base, mr, -1500);
181
+}
182
+
183
+static void init_sysbus_child(Object *parent, const char *childname,
184
+ void *child, size_t childsize,
185
+ const char *childtype)
186
+{
187
+ object_initialize(child, childsize, childtype);
188
+ object_property_add_child(parent, childname, OBJECT(child), &error_abort);
189
+ qdev_set_parent_bus(DEVICE(child), sysbus_get_default());
190
+}
191
+
192
+static void irq_status_forwarder(void *opaque, int n, int level)
193
+{
194
+ qemu_irq destirq = opaque;
195
+
196
+ qemu_set_irq(destirq, level);
197
+}
198
+
199
+static void nsccfg_handler(void *opaque, int n, int level)
200
+{
201
+ IoTKit *s = IOTKIT(opaque);
202
+
203
+ s->nsccfg = level;
204
+}
205
+
206
+static void iotkit_forward_ppc(IoTKit *s, const char *ppcname, int ppcnum)
207
+{
208
+ /* Each of the 4 AHB and 4 APB PPCs that might be present in a
209
+ * system using the IoTKit has a collection of control lines which
210
+ * are provided by the security controller and which we want to
211
+ * expose as control lines on the IoTKit device itself, so the
212
+ * code using the IoTKit can wire them up to the PPCs.
213
+ */
214
+ SplitIRQ *splitter = &s->ppc_irq_splitter[ppcnum];
215
+ DeviceState *iotkitdev = DEVICE(s);
216
+ DeviceState *dev_secctl = DEVICE(&s->secctl);
217
+ DeviceState *dev_splitter = DEVICE(splitter);
218
+ char *name;
219
+
220
+ name = g_strdup_printf("%s_nonsec", ppcname);
221
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
222
+ g_free(name);
223
+ name = g_strdup_printf("%s_ap", ppcname);
224
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
225
+ g_free(name);
226
+ name = g_strdup_printf("%s_irq_enable", ppcname);
227
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
228
+ g_free(name);
229
+ name = g_strdup_printf("%s_irq_clear", ppcname);
230
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
231
+ g_free(name);
232
+
233
+ /* irq_status is a little more tricky, because we need to
234
+ * split it so we can send it both to the security controller
235
+ * and to our OR gate for the NVIC interrupt line.
236
+ * Connect up the splitter's outputs, and create a GPIO input
237
+ * which will pass the line state to the input splitter.
238
+ */
239
+ name = g_strdup_printf("%s_irq_status", ppcname);
240
+ qdev_connect_gpio_out(dev_splitter, 0,
241
+ qdev_get_gpio_in_named(dev_secctl,
242
+ name, 0));
243
+ qdev_connect_gpio_out(dev_splitter, 1,
244
+ qdev_get_gpio_in(DEVICE(&s->ppc_irq_orgate), ppcnum));
245
+ s->irq_status_in[ppcnum] = qdev_get_gpio_in(dev_splitter, 0);
246
+ qdev_init_gpio_in_named_with_opaque(iotkitdev, irq_status_forwarder,
247
+ s->irq_status_in[ppcnum], name, 1);
248
+ g_free(name);
249
+}
250
+
251
+static void iotkit_forward_sec_resp_cfg(IoTKit *s)
252
+{
253
+ /* Forward the 3rd output from the splitter device as a
254
+ * named GPIO output of the iotkit object.
255
+ */
256
+ DeviceState *dev = DEVICE(s);
257
+ DeviceState *dev_splitter = DEVICE(&s->sec_resp_splitter);
258
+
259
+ qdev_init_gpio_out_named(dev, &s->sec_resp_cfg, "sec_resp_cfg", 1);
260
+ s->sec_resp_cfg_in = qemu_allocate_irq(irq_status_forwarder,
261
+ s->sec_resp_cfg, 1);
262
+ qdev_connect_gpio_out(dev_splitter, 2, s->sec_resp_cfg_in);
263
+}
264
+
265
+static void iotkit_init(Object *obj)
266
+{
267
+ IoTKit *s = IOTKIT(obj);
268
+ int i;
269
+
270
+ memory_region_init(&s->container, obj, "iotkit-container", UINT64_MAX);
271
+
272
+ init_sysbus_child(obj, "armv7m", &s->armv7m, sizeof(s->armv7m),
273
+ TYPE_ARMV7M);
274
+ qdev_prop_set_string(DEVICE(&s->armv7m), "cpu-type",
275
+ ARM_CPU_TYPE_NAME("cortex-m33"));
276
+
277
+ init_sysbus_child(obj, "secctl", &s->secctl, sizeof(s->secctl),
278
+ TYPE_IOTKIT_SECCTL);
279
+ init_sysbus_child(obj, "apb-ppc0", &s->apb_ppc0, sizeof(s->apb_ppc0),
280
+ TYPE_TZ_PPC);
281
+ init_sysbus_child(obj, "apb-ppc1", &s->apb_ppc1, sizeof(s->apb_ppc1),
282
+ TYPE_TZ_PPC);
283
+ init_sysbus_child(obj, "timer0", &s->timer0, sizeof(s->timer0),
284
+ TYPE_CMSDK_APB_TIMER);
285
+ init_sysbus_child(obj, "timer1", &s->timer1, sizeof(s->timer1),
286
+ TYPE_CMSDK_APB_TIMER);
287
+ init_sysbus_child(obj, "dualtimer", &s->dualtimer, sizeof(s->dualtimer),
288
+ TYPE_UNIMPLEMENTED_DEVICE);
289
+ object_initialize(&s->ppc_irq_orgate, sizeof(s->ppc_irq_orgate),
290
+ TYPE_OR_IRQ);
291
+ object_property_add_child(obj, "ppc-irq-orgate",
292
+ OBJECT(&s->ppc_irq_orgate), &error_abort);
293
+ object_initialize(&s->sec_resp_splitter, sizeof(s->sec_resp_splitter),
294
+ TYPE_SPLIT_IRQ);
295
+ object_property_add_child(obj, "sec-resp-splitter",
296
+ OBJECT(&s->sec_resp_splitter), &error_abort);
297
+ for (i = 0; i < ARRAY_SIZE(s->ppc_irq_splitter); i++) {
298
+ char *name = g_strdup_printf("ppc-irq-splitter-%d", i);
299
+ SplitIRQ *splitter = &s->ppc_irq_splitter[i];
300
+
301
+ object_initialize(splitter, sizeof(*splitter), TYPE_SPLIT_IRQ);
302
+ object_property_add_child(obj, name, OBJECT(splitter), &error_abort);
303
+ }
304
+ init_sysbus_child(obj, "s32ktimer", &s->s32ktimer, sizeof(s->s32ktimer),
305
+ TYPE_UNIMPLEMENTED_DEVICE);
306
+}
307
+
308
+static void iotkit_exp_irq(void *opaque, int n, int level)
309
+{
310
+ IoTKit *s = IOTKIT(opaque);
311
+
312
+ qemu_set_irq(s->exp_irqs[n], level);
313
+}
314
+
315
+static void iotkit_realize(DeviceState *dev, Error **errp)
316
+{
317
+ IoTKit *s = IOTKIT(dev);
318
+ int i;
319
+ MemoryRegion *mr;
320
+ Error *err = NULL;
321
+ SysBusDevice *sbd_apb_ppc0;
322
+ SysBusDevice *sbd_secctl;
323
+ DeviceState *dev_apb_ppc0;
324
+ DeviceState *dev_apb_ppc1;
325
+ DeviceState *dev_secctl;
326
+ DeviceState *dev_splitter;
327
+
328
+ if (!s->board_memory) {
329
+ error_setg(errp, "memory property was not set");
330
+ return;
331
+ }
332
+
333
+ if (!s->mainclk_frq) {
334
+ error_setg(errp, "MAINCLK property was not set");
335
+ return;
336
+ }
337
+
338
+ /* Handling of which devices should be available only to secure
339
+ * code is usually done differently for M profile than for A profile.
340
+ * Instead of putting some devices only into the secure address space,
341
+ * devices exist in both address spaces but with hard-wired security
342
+ * permissions that will cause the CPU to fault for non-secure accesses.
343
+ *
344
+ * The IoTKit has an IDAU (Implementation Defined Access Unit),
345
+ * which specifies hard-wired security permissions for different
346
+ * areas of the physical address space. For the IoTKit IDAU, the
347
+ * top 4 bits of the physical address are the IDAU region ID, and
348
+ * if bit 28 (ie the lowest bit of the ID) is 0 then this is an NS
349
+ * region, otherwise it is an S region.
350
+ *
351
+ * The various devices and RAMs are generally all mapped twice,
352
+ * once into a region that the IDAU defines as secure and once
353
+ * into a non-secure region. They sit behind either a Memory
354
+ * Protection Controller (for RAM) or a Peripheral Protection
355
+ * Controller (for devices), which allow a more fine grained
356
+ * configuration of whether non-secure accesses are permitted.
357
+ *
358
+ * (The other place that guest software can configure security
359
+ * permissions is in the architected SAU (Security Attribution
360
+ * Unit), which is entirely inside the CPU. The IDAU can upgrade
361
+ * the security attributes for a region to more restrictive than
362
+ * the SAU specifies, but cannot downgrade them.)
363
+ *
364
+ * 0x10000000..0x1fffffff alias of 0x00000000..0x0fffffff
365
+ * 0x20000000..0x2007ffff 32KB FPGA block RAM
366
+ * 0x30000000..0x3fffffff alias of 0x20000000..0x2fffffff
367
+ * 0x40000000..0x4000ffff base peripheral region 1
368
+ * 0x40010000..0x4001ffff CPU peripherals (none for IoTKit)
369
+ * 0x40020000..0x4002ffff system control element peripherals
370
+ * 0x40080000..0x400fffff base peripheral region 2
371
+ * 0x50000000..0x5fffffff alias of 0x40000000..0x4fffffff
372
+ */
373
+
374
+ memory_region_add_subregion_overlap(&s->container, 0, s->board_memory, -1);
375
+
376
+ qdev_prop_set_uint32(DEVICE(&s->armv7m), "num-irq", s->exp_numirq + 32);
377
+ /* In real hardware the initial Secure VTOR is set from the INITSVTOR0
378
+ * register in the IoT Kit System Control Register block, and the
379
+ * initial value of that is in turn specifiable by the FPGA that
380
+ * instantiates the IoT Kit. In QEMU we don't implement this wrinkle,
381
+ * and simply set the CPU's init-svtor to the IoT Kit default value.
382
+ */
383
+ qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", 0x10000000);
384
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(&s->container),
385
+ "memory", &err);
386
+ if (err) {
387
+ error_propagate(errp, err);
388
+ return;
389
+ }
390
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(s), "idau", &err);
391
+ if (err) {
392
+ error_propagate(errp, err);
393
+ return;
394
+ }
395
+ object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
396
+ if (err) {
397
+ error_propagate(errp, err);
398
+ return;
399
+ }
400
+
401
+ /* Connect our EXP_IRQ GPIOs to the NVIC's lines 32 and up. */
402
+ s->exp_irqs = g_new(qemu_irq, s->exp_numirq);
403
+ for (i = 0; i < s->exp_numirq; i++) {
404
+ s->exp_irqs[i] = qdev_get_gpio_in(DEVICE(&s->armv7m), i + 32);
405
+ }
406
+ qdev_init_gpio_in_named(dev, iotkit_exp_irq, "EXP_IRQ", s->exp_numirq);
407
+
408
+ /* Set up the big aliases first */
409
+ make_alias(s, &s->alias1, "alias 1", 0x10000000, 0x10000000, 0x00000000);
410
+ make_alias(s, &s->alias2, "alias 2", 0x30000000, 0x10000000, 0x20000000);
411
+ /* The 0x50000000..0x5fffffff region is not a pure alias: it has
412
+ * a few extra devices that only appear there (generally the
413
+ * control interfaces for the protection controllers).
414
+ * We implement this by mapping those devices over the top of this
415
+ * alias MR at a higher priority.
416
+ */
417
+ make_alias(s, &s->alias3, "alias 3", 0x50000000, 0x10000000, 0x40000000);
418
+
419
+ /* This RAM should be behind a Memory Protection Controller, but we
420
+ * don't implement that yet.
421
+ */
422
+ memory_region_init_ram(&s->sram0, NULL, "iotkit.sram0", 0x00008000, &err);
423
+ if (err) {
424
+ error_propagate(errp, err);
425
+ return;
426
+ }
427
+ memory_region_add_subregion(&s->container, 0x20000000, &s->sram0);
428
+
429
+ /* Security controller */
430
+ object_property_set_bool(OBJECT(&s->secctl), true, "realized", &err);
431
+ if (err) {
432
+ error_propagate(errp, err);
433
+ return;
434
+ }
435
+ sbd_secctl = SYS_BUS_DEVICE(&s->secctl);
436
+ dev_secctl = DEVICE(&s->secctl);
437
+ sysbus_mmio_map(sbd_secctl, 0, 0x50080000);
438
+ sysbus_mmio_map(sbd_secctl, 1, 0x40080000);
439
+
440
+ s->nsc_cfg_in = qemu_allocate_irq(nsccfg_handler, s, 1);
441
+ qdev_connect_gpio_out_named(dev_secctl, "nsc_cfg", 0, s->nsc_cfg_in);
442
+
443
+ /* The sec_resp_cfg output from the security controller must be split into
444
+ * multiple lines, one for each of the PPCs within the IoTKit and one
445
+ * that will be an output from the IoTKit to the system.
446
+ */
447
+ object_property_set_int(OBJECT(&s->sec_resp_splitter), 3,
448
+ "num-lines", &err);
449
+ if (err) {
450
+ error_propagate(errp, err);
451
+ return;
452
+ }
453
+ object_property_set_bool(OBJECT(&s->sec_resp_splitter), true,
454
+ "realized", &err);
455
+ if (err) {
456
+ error_propagate(errp, err);
457
+ return;
458
+ }
459
+ dev_splitter = DEVICE(&s->sec_resp_splitter);
460
+ qdev_connect_gpio_out_named(dev_secctl, "sec_resp_cfg", 0,
461
+ qdev_get_gpio_in(dev_splitter, 0));
462
+
463
+ /* Devices behind APB PPC0:
464
+ * 0x40000000: timer0
465
+ * 0x40001000: timer1
466
+ * 0x40002000: dual timer
467
+ * We must configure and realize each downstream device and connect
468
+ * it to the appropriate PPC port; then we can realize the PPC and
469
+ * map its upstream ends to the right place in the container.
470
+ */
471
+ qdev_prop_set_uint32(DEVICE(&s->timer0), "pclk-frq", s->mainclk_frq);
472
+ object_property_set_bool(OBJECT(&s->timer0), true, "realized", &err);
473
+ if (err) {
474
+ error_propagate(errp, err);
475
+ return;
476
+ }
477
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer0), 0,
478
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 3));
479
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->timer0), 0);
480
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[0]", &err);
481
+ if (err) {
482
+ error_propagate(errp, err);
483
+ return;
484
+ }
485
+
486
+ qdev_prop_set_uint32(DEVICE(&s->timer1), "pclk-frq", s->mainclk_frq);
487
+ object_property_set_bool(OBJECT(&s->timer1), true, "realized", &err);
488
+ if (err) {
489
+ error_propagate(errp, err);
490
+ return;
491
+ }
492
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer1), 0,
493
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 3));
494
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->timer1), 0);
495
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[1]", &err);
496
+ if (err) {
497
+ error_propagate(errp, err);
498
+ return;
499
+ }
500
+
501
+ qdev_prop_set_string(DEVICE(&s->dualtimer), "name", "Dual timer");
502
+ qdev_prop_set_uint64(DEVICE(&s->dualtimer), "size", 0x1000);
503
+ object_property_set_bool(OBJECT(&s->dualtimer), true, "realized", &err);
504
+ if (err) {
505
+ error_propagate(errp, err);
506
+ return;
507
+ }
508
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dualtimer), 0);
509
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[2]", &err);
510
+ if (err) {
511
+ error_propagate(errp, err);
512
+ return;
513
+ }
514
+
515
+ object_property_set_bool(OBJECT(&s->apb_ppc0), true, "realized", &err);
516
+ if (err) {
517
+ error_propagate(errp, err);
518
+ return;
519
+ }
520
+
521
+ sbd_apb_ppc0 = SYS_BUS_DEVICE(&s->apb_ppc0);
522
+ dev_apb_ppc0 = DEVICE(&s->apb_ppc0);
523
+
524
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 0);
525
+ memory_region_add_subregion(&s->container, 0x40000000, mr);
526
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 1);
527
+ memory_region_add_subregion(&s->container, 0x40001000, mr);
528
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 2);
529
+ memory_region_add_subregion(&s->container, 0x40002000, mr);
530
+ for (i = 0; i < IOTS_APB_PPC0_NUM_PORTS; i++) {
531
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_nonsec", i,
532
+ qdev_get_gpio_in_named(dev_apb_ppc0,
533
+ "cfg_nonsec", i));
534
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_ap", i,
535
+ qdev_get_gpio_in_named(dev_apb_ppc0,
536
+ "cfg_ap", i));
537
+ }
538
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_irq_enable", 0,
539
+ qdev_get_gpio_in_named(dev_apb_ppc0,
540
+ "irq_enable", 0));
541
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_irq_clear", 0,
542
+ qdev_get_gpio_in_named(dev_apb_ppc0,
543
+ "irq_clear", 0));
544
+ qdev_connect_gpio_out(dev_splitter, 0,
545
+ qdev_get_gpio_in_named(dev_apb_ppc0,
546
+ "cfg_sec_resp", 0));
547
+
548
+ /* All the PPC irq lines (from the 2 internal PPCs and the 8 external
549
+ * ones) are sent individually to the security controller, and also
550
+ * ORed together to give a single combined PPC interrupt to the NVIC.
551
+ */
552
+ object_property_set_int(OBJECT(&s->ppc_irq_orgate),
553
+ NUM_PPCS, "num-lines", &err);
554
+ if (err) {
555
+ error_propagate(errp, err);
556
+ return;
557
+ }
558
+ object_property_set_bool(OBJECT(&s->ppc_irq_orgate), true,
559
+ "realized", &err);
560
+ if (err) {
561
+ error_propagate(errp, err);
562
+ return;
563
+ }
564
+ qdev_connect_gpio_out(DEVICE(&s->ppc_irq_orgate), 0,
565
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 10));
566
+
567
+ /* 0x40010000 .. 0x4001ffff: private CPU region: unused in IoTKit */
568
+
569
+ /* 0x40020000 .. 0x4002ffff : IoTKit system control peripheral region */
570
+ /* Devices behind APB PPC1:
571
+ * 0x4002f000: S32K timer
572
+ */
573
+ qdev_prop_set_string(DEVICE(&s->s32ktimer), "name", "S32KTIMER");
574
+ qdev_prop_set_uint64(DEVICE(&s->s32ktimer), "size", 0x1000);
575
+ object_property_set_bool(OBJECT(&s->s32ktimer), true, "realized", &err);
576
+ if (err) {
577
+ error_propagate(errp, err);
578
+ return;
579
+ }
580
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->s32ktimer), 0);
581
+ object_property_set_link(OBJECT(&s->apb_ppc1), OBJECT(mr), "port[0]", &err);
582
+ if (err) {
583
+ error_propagate(errp, err);
584
+ return;
585
+ }
586
+
587
+ object_property_set_bool(OBJECT(&s->apb_ppc1), true, "realized", &err);
588
+ if (err) {
589
+ error_propagate(errp, err);
590
+ return;
591
+ }
592
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->apb_ppc1), 0);
593
+ memory_region_add_subregion(&s->container, 0x4002f000, mr);
594
+
595
+ dev_apb_ppc1 = DEVICE(&s->apb_ppc1);
596
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_nonsec", 0,
597
+ qdev_get_gpio_in_named(dev_apb_ppc1,
598
+ "cfg_nonsec", 0));
599
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_ap", 0,
600
+ qdev_get_gpio_in_named(dev_apb_ppc1,
601
+ "cfg_ap", 0));
602
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_irq_enable", 0,
603
+ qdev_get_gpio_in_named(dev_apb_ppc1,
604
+ "irq_enable", 0));
605
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_irq_clear", 0,
606
+ qdev_get_gpio_in_named(dev_apb_ppc1,
607
+ "irq_clear", 0));
608
+ qdev_connect_gpio_out(dev_splitter, 1,
609
+ qdev_get_gpio_in_named(dev_apb_ppc1,
610
+ "cfg_sec_resp", 0));
611
+
612
+ /* Using create_unimplemented_device() maps the stub into the
613
+ * system address space rather than into our container, but the
614
+ * overall effect to the guest is the same.
615
+ */
616
+ create_unimplemented_device("SYSINFO", 0x40020000, 0x1000);
617
+
618
+ create_unimplemented_device("SYSCONTROL", 0x50021000, 0x1000);
619
+ create_unimplemented_device("S32KWATCHDOG", 0x5002e000, 0x1000);
620
+
621
+ /* 0x40080000 .. 0x4008ffff : IoTKit second Base peripheral region */
622
+
623
+ create_unimplemented_device("NS watchdog", 0x40081000, 0x1000);
624
+ create_unimplemented_device("S watchdog", 0x50081000, 0x1000);
625
+
626
+ create_unimplemented_device("SRAM0 MPC", 0x50083000, 0x1000);
627
+
628
+ for (i = 0; i < ARRAY_SIZE(s->ppc_irq_splitter); i++) {
629
+ Object *splitter = OBJECT(&s->ppc_irq_splitter[i]);
630
+
631
+ object_property_set_int(splitter, 2, "num-lines", &err);
632
+ if (err) {
633
+ error_propagate(errp, err);
634
+ return;
635
+ }
636
+ object_property_set_bool(splitter, true, "realized", &err);
637
+ if (err) {
638
+ error_propagate(errp, err);
639
+ return;
640
+ }
641
+ }
642
+
643
+ for (i = 0; i < IOTS_NUM_AHB_EXP_PPC; i++) {
644
+ char *ppcname = g_strdup_printf("ahb_ppcexp%d", i);
645
+
646
+ iotkit_forward_ppc(s, ppcname, i);
647
+ g_free(ppcname);
648
+ }
649
+
650
+ for (i = 0; i < IOTS_NUM_APB_EXP_PPC; i++) {
651
+ char *ppcname = g_strdup_printf("apb_ppcexp%d", i);
652
+
653
+ iotkit_forward_ppc(s, ppcname, i + IOTS_NUM_AHB_EXP_PPC);
654
+ g_free(ppcname);
655
+ }
656
+
657
+ for (i = NUM_EXTERNAL_PPCS; i < NUM_PPCS; i++) {
658
+ /* Wire up IRQ splitter for internal PPCs */
659
+ DeviceState *devs = DEVICE(&s->ppc_irq_splitter[i]);
660
+ char *gpioname = g_strdup_printf("apb_ppc%d_irq_status",
661
+ i - NUM_EXTERNAL_PPCS);
662
+ TZPPC *ppc = (i == NUM_EXTERNAL_PPCS) ? &s->apb_ppc0 : &s->apb_ppc1;
663
+
664
+ qdev_connect_gpio_out(devs, 0,
665
+ qdev_get_gpio_in_named(dev_secctl, gpioname, 0));
666
+ qdev_connect_gpio_out(devs, 1,
667
+ qdev_get_gpio_in(DEVICE(&s->ppc_irq_orgate), i));
668
+ qdev_connect_gpio_out_named(DEVICE(ppc), "irq", 0,
669
+ qdev_get_gpio_in(devs, 0));
670
+ }
671
+
672
+ iotkit_forward_sec_resp_cfg(s);
673
+
674
+ system_clock_scale = NANOSECONDS_PER_SECOND / s->mainclk_frq;
675
+}
676
+
677
+static void iotkit_idau_check(IDAUInterface *ii, uint32_t address,
678
+ int *iregion, bool *exempt, bool *ns, bool *nsc)
679
+{
680
+ /* For IoTKit systems the IDAU responses are simple logical functions
681
+ * of the address bits. The NSC attribute is guest-adjustable via the
682
+ * NSCCFG register in the security controller.
683
+ */
684
+ IoTKit *s = IOTKIT(ii);
685
+ int region = extract32(address, 28, 4);
686
+
687
+ *ns = !(region & 1);
688
+ *nsc = (region == 1 && (s->nsccfg & 1)) || (region == 3 && (s->nsccfg & 2));
689
+ /* 0xe0000000..0xe00fffff and 0xf0000000..0xf00fffff are exempt */
690
+ *exempt = (address & 0xeff00000) == 0xe0000000;
691
+ *iregion = region;
692
+}
693
+
694
+static const VMStateDescription iotkit_vmstate = {
695
+ .name = "iotkit",
696
+ .version_id = 1,
697
+ .minimum_version_id = 1,
698
+ .fields = (VMStateField[]) {
699
+ VMSTATE_UINT32(nsccfg, IoTKit),
700
+ VMSTATE_END_OF_LIST()
701
+ }
702
+};
703
+
704
+static Property iotkit_properties[] = {
705
+ DEFINE_PROP_LINK("memory", IoTKit, board_memory, TYPE_MEMORY_REGION,
706
+ MemoryRegion *),
707
+ DEFINE_PROP_UINT32("EXP_NUMIRQ", IoTKit, exp_numirq, 64),
708
+ DEFINE_PROP_UINT32("MAINCLK", IoTKit, mainclk_frq, 0),
709
+ DEFINE_PROP_END_OF_LIST()
710
+};
711
+
712
+static void iotkit_reset(DeviceState *dev)
713
+{
714
+ IoTKit *s = IOTKIT(dev);
715
+
716
+ s->nsccfg = 0;
717
+}
718
+
719
+static void iotkit_class_init(ObjectClass *klass, void *data)
720
+{
721
+ DeviceClass *dc = DEVICE_CLASS(klass);
722
+ IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(klass);
723
+
724
+ dc->realize = iotkit_realize;
725
+ dc->vmsd = &iotkit_vmstate;
726
+ dc->props = iotkit_properties;
727
+ dc->reset = iotkit_reset;
728
+ iic->check = iotkit_idau_check;
729
+}
730
+
731
+static const TypeInfo iotkit_info = {
732
+ .name = TYPE_IOTKIT,
733
+ .parent = TYPE_SYS_BUS_DEVICE,
734
+ .instance_size = sizeof(IoTKit),
735
+ .instance_init = iotkit_init,
736
+ .class_init = iotkit_class_init,
737
+ .interfaces = (InterfaceInfo[]) {
738
+ { TYPE_IDAU_INTERFACE },
739
+ { }
740
+ }
741
+};
742
+
743
+static void iotkit_register_types(void)
744
+{
745
+ type_register_static(&iotkit_info);
746
+}
747
+
748
+type_init(iotkit_register_types);
749
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
750
index XXXXXXX..XXXXXXX 100644
751
--- a/default-configs/arm-softmmu.mak
752
+++ b/default-configs/arm-softmmu.mak
753
@@ -XXX,XX +XXX,XX @@ CONFIG_MPS2_FPGAIO=y
754
CONFIG_MPS2_SCC=y
755
756
CONFIG_TZ_PPC=y
757
+CONFIG_IOTKIT=y
758
CONFIG_IOTKIT_SECCTL=y
759
760
CONFIG_VERSATILE_PCI=y
761
--
55
--
762
2.16.2
56
2.19.1
763
57
764
58
1
The Arm IoT Kit includes a "security controller" which is largely a
1
The HCR_EL2 VI and VF bits are supposed to track whether there is
2
collection of registers for controlling the PPCs and other bits of
2
a pending virtual IRQ or virtual FIQ. For QEMU we store the
3
glue in the system. This commit provides the initial skeleton of the
3
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
4
device, implementing just the ID registers, and a couple of read-only
4
* if the register is read we must get these bit values from
5
read-as-zero registers.
5
cs->interrupt_request
6
* if the register is written then we must write the bit
7
values back into cs->interrupt_request
6
8
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180220180325.29818-16-peter.maydell@linaro.org
11
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
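In code terms, the read side of this, condensed from the helper.c hunk below (the write side mirrors it, setting or clearing CPU_INTERRUPT_VIRQ/VFIQ according to the value written):

static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
    /* The VI and VF bits live in cs->interrupt_request rather than in
     * the stored cp15.hcr_el2 value, so fold them back in on a read.
     */
    uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
    CPUState *cs = ENV_GET_CPU(env);

    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
        ret |= HCR_VI;
    }
    if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
        ret |= HCR_VF;
    }
    return ret;
}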
10
---
12
---
11
hw/misc/Makefile.objs | 1 +
13
target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
12
include/hw/misc/iotkit-secctl.h | 39 ++++
14
1 file changed, 43 insertions(+), 4 deletions(-)
13
hw/misc/iotkit-secctl.c | 448 ++++++++++++++++++++++++++++++++++++++++
14
default-configs/arm-softmmu.mak | 1 +
15
hw/misc/trace-events | 7 +
16
5 files changed, 496 insertions(+)
17
create mode 100644 include/hw/misc/iotkit-secctl.h
18
create mode 100644 hw/misc/iotkit-secctl.c
19
15
20
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/misc/Makefile.objs
18
--- a/target/arm/helper.c
23
+++ b/hw/misc/Makefile.objs
19
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
20
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
25
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
21
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
26
22
{
27
obj-$(CONFIG_TZ_PPC) += tz-ppc.o
23
ARMCPU *cpu = arm_env_get_cpu(env);
28
+obj-$(CONFIG_IOTKIT_SECCTL) += iotkit-secctl.o
24
+ CPUState *cs = ENV_GET_CPU(env);
29
25
uint64_t valid_mask = HCR_MASK;
30
obj-$(CONFIG_PVPANIC) += pvpanic.o
26
31
obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
27
if (arm_feature(env, ARM_FEATURE_EL3)) {
32
diff --git a/include/hw/misc/iotkit-secctl.h b/include/hw/misc/iotkit-secctl.h
28
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
33
new file mode 100644
29
/* Clear RES0 bits. */
34
index XXXXXXX..XXXXXXX
30
value &= valid_mask;
35
--- /dev/null
31
36
+++ b/include/hw/misc/iotkit-secctl.h
32
+ /*
37
@@ -XXX,XX +XXX,XX @@
33
+ * VI and VF are kept in cs->interrupt_request. Modifying that
38
+/*
34
+ * requires that we have the iothread lock, which is done by
39
+ * ARM IoT Kit security controller
35
+ * marking the reginfo structs as ARM_CP_IO.
40
+ *
36
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
41
+ * Copyright (c) 2018 Linaro Limited
37
+ * possible for it to be taken immediately, because VIRQ and
42
+ * Written by Peter Maydell
38
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
43
+ *
39
+ * can only be written at EL2.
44
+ * This program is free software; you can redistribute it and/or modify
40
+ */
45
+ * it under the terms of the GNU General Public License version 2 or
41
+ g_assert(qemu_mutex_iothread_locked());
46
+ * (at your option) any later version.
42
+ if (value & HCR_VI) {
47
+ */
43
+ cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
44
+ } else {
45
+ cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
46
+ }
47
+ if (value & HCR_VF) {
48
+ cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
49
+ } else {
50
+ cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
51
+ }
52
+ value &= ~(HCR_VI | HCR_VF);
48
+
53
+
49
+/* This is a model of the security controller which is part of the
54
/* These bits change the MMU setup:
50
+ * Arm IoT Kit and documented in
55
* HCR_VM enables stage 2 translation
51
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
56
* HCR_PTW forbids certain page-table setups
52
+ *
57
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
53
+ * QEMU interface:
58
hcr_write(env, NULL, value);
54
+ * + sysbus MMIO region 0 is the "secure privilege control block" registers
59
}
55
+ * + sysbus MMIO region 1 is the "non-secure privilege control block" registers
60
56
+ */
61
+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
62
+{
63
+ /* The VI and VF bits live in cs->interrupt_request */
64
+ uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
65
+ CPUState *cs = ENV_GET_CPU(env);
57
+
66
+
58
+#ifndef IOTKIT_SECCTL_H
67
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
59
+#define IOTKIT_SECCTL_H
68
+ ret |= HCR_VI;
60
+
61
+#include "hw/sysbus.h"
62
+
63
+#define TYPE_IOTKIT_SECCTL "iotkit-secctl"
64
+#define IOTKIT_SECCTL(obj) OBJECT_CHECK(IoTKitSecCtl, (obj), TYPE_IOTKIT_SECCTL)
65
+
66
+typedef struct IoTKitSecCtl {
67
+ /*< private >*/
68
+ SysBusDevice parent_obj;
69
+
70
+ /*< public >*/
71
+
72
+ MemoryRegion s_regs;
73
+ MemoryRegion ns_regs;
74
+} IoTKitSecCtl;
75
+
76
+#endif
77
diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
78
new file mode 100644
79
index XXXXXXX..XXXXXXX
80
--- /dev/null
81
+++ b/hw/misc/iotkit-secctl.c
82
@@ -XXX,XX +XXX,XX @@
83
+/*
84
+ * Arm IoT Kit security controller
85
+ *
86
+ * Copyright (c) 2018 Linaro Limited
87
+ * Written by Peter Maydell
88
+ *
89
+ * This program is free software; you can redistribute it and/or modify
90
+ * it under the terms of the GNU General Public License version 2 or
91
+ * (at your option) any later version.
92
+ */
93
+
94
+#include "qemu/osdep.h"
95
+#include "qemu/log.h"
96
+#include "qapi/error.h"
97
+#include "trace.h"
98
+#include "hw/sysbus.h"
99
+#include "hw/registerfields.h"
100
+#include "hw/misc/iotkit-secctl.h"
101
+
102
+/* Registers in the secure privilege control block */
103
+REG32(SECRESPCFG, 0x10)
104
+REG32(NSCCFG, 0x14)
105
+REG32(SECMPCINTSTATUS, 0x1c)
106
+REG32(SECPPCINTSTAT, 0x20)
107
+REG32(SECPPCINTCLR, 0x24)
108
+REG32(SECPPCINTEN, 0x28)
109
+REG32(SECMSCINTSTAT, 0x30)
110
+REG32(SECMSCINTCLR, 0x34)
111
+REG32(SECMSCINTEN, 0x38)
112
+REG32(BRGINTSTAT, 0x40)
113
+REG32(BRGINTCLR, 0x44)
114
+REG32(BRGINTEN, 0x48)
115
+REG32(AHBNSPPC0, 0x50)
116
+REG32(AHBNSPPCEXP0, 0x60)
117
+REG32(AHBNSPPCEXP1, 0x64)
118
+REG32(AHBNSPPCEXP2, 0x68)
119
+REG32(AHBNSPPCEXP3, 0x6c)
120
+REG32(APBNSPPC0, 0x70)
121
+REG32(APBNSPPC1, 0x74)
122
+REG32(APBNSPPCEXP0, 0x80)
123
+REG32(APBNSPPCEXP1, 0x84)
124
+REG32(APBNSPPCEXP2, 0x88)
125
+REG32(APBNSPPCEXP3, 0x8c)
126
+REG32(AHBSPPPC0, 0x90)
127
+REG32(AHBSPPPCEXP0, 0xa0)
128
+REG32(AHBSPPPCEXP1, 0xa4)
129
+REG32(AHBSPPPCEXP2, 0xa8)
130
+REG32(AHBSPPPCEXP3, 0xac)
131
+REG32(APBSPPPC0, 0xb0)
132
+REG32(APBSPPPC1, 0xb4)
133
+REG32(APBSPPPCEXP0, 0xc0)
134
+REG32(APBSPPPCEXP1, 0xc4)
135
+REG32(APBSPPPCEXP2, 0xc8)
136
+REG32(APBSPPPCEXP3, 0xcc)
137
+REG32(NSMSCEXP, 0xd0)
138
+REG32(PID4, 0xfd0)
139
+REG32(PID5, 0xfd4)
140
+REG32(PID6, 0xfd8)
141
+REG32(PID7, 0xfdc)
142
+REG32(PID0, 0xfe0)
143
+REG32(PID1, 0xfe4)
144
+REG32(PID2, 0xfe8)
145
+REG32(PID3, 0xfec)
146
+REG32(CID0, 0xff0)
147
+REG32(CID1, 0xff4)
148
+REG32(CID2, 0xff8)
149
+REG32(CID3, 0xffc)
150
+
151
+/* Registers in the non-secure privilege control block */
152
+REG32(AHBNSPPPC0, 0x90)
153
+REG32(AHBNSPPPCEXP0, 0xa0)
154
+REG32(AHBNSPPPCEXP1, 0xa4)
155
+REG32(AHBNSPPPCEXP2, 0xa8)
156
+REG32(AHBNSPPPCEXP3, 0xac)
157
+REG32(APBNSPPPC0, 0xb0)
158
+REG32(APBNSPPPC1, 0xb4)
159
+REG32(APBNSPPPCEXP0, 0xc0)
160
+REG32(APBNSPPPCEXP1, 0xc4)
161
+REG32(APBNSPPPCEXP2, 0xc8)
162
+REG32(APBNSPPPCEXP3, 0xcc)
163
+/* PID and CID registers are also present in the NS block */
164
+
165
+static const uint8_t iotkit_secctl_s_idregs[] = {
166
+ 0x04, 0x00, 0x00, 0x00,
167
+ 0x52, 0xb8, 0x0b, 0x00,
168
+ 0x0d, 0xf0, 0x05, 0xb1,
169
+};
170
+
171
+static const uint8_t iotkit_secctl_ns_idregs[] = {
172
+ 0x04, 0x00, 0x00, 0x00,
173
+ 0x53, 0xb8, 0x0b, 0x00,
174
+ 0x0d, 0xf0, 0x05, 0xb1,
175
+};
176
+
177
+static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
178
+ uint64_t *pdata,
179
+ unsigned size, MemTxAttrs attrs)
180
+{
181
+ uint64_t r;
182
+ uint32_t offset = addr & ~0x3;
183
+
184
+ switch (offset) {
185
+ case A_AHBNSPPC0:
186
+ case A_AHBSPPPC0:
187
+ r = 0;
188
+ break;
189
+ case A_SECRESPCFG:
190
+ case A_NSCCFG:
191
+ case A_SECMPCINTSTATUS:
192
+ case A_SECPPCINTSTAT:
193
+ case A_SECPPCINTEN:
194
+ case A_SECMSCINTSTAT:
195
+ case A_SECMSCINTEN:
196
+ case A_BRGINTSTAT:
197
+ case A_BRGINTEN:
198
+ case A_AHBNSPPCEXP0:
199
+ case A_AHBNSPPCEXP1:
200
+ case A_AHBNSPPCEXP2:
201
+ case A_AHBNSPPCEXP3:
202
+ case A_APBNSPPC0:
203
+ case A_APBNSPPC1:
204
+ case A_APBNSPPCEXP0:
205
+ case A_APBNSPPCEXP1:
206
+ case A_APBNSPPCEXP2:
207
+ case A_APBNSPPCEXP3:
208
+ case A_AHBSPPPCEXP0:
209
+ case A_AHBSPPPCEXP1:
210
+ case A_AHBSPPPCEXP2:
211
+ case A_AHBSPPPCEXP3:
212
+ case A_APBSPPPC0:
213
+ case A_APBSPPPC1:
214
+ case A_APBSPPPCEXP0:
215
+ case A_APBSPPPCEXP1:
216
+ case A_APBSPPPCEXP2:
217
+ case A_APBSPPPCEXP3:
218
+ case A_NSMSCEXP:
219
+ qemu_log_mask(LOG_UNIMP,
220
+ "IoTKit SecCtl S block read: "
221
+ "unimplemented offset 0x%x\n", offset);
222
+ r = 0;
223
+ break;
224
+ case A_PID4:
225
+ case A_PID5:
226
+ case A_PID6:
227
+ case A_PID7:
228
+ case A_PID0:
229
+ case A_PID1:
230
+ case A_PID2:
231
+ case A_PID3:
232
+ case A_CID0:
233
+ case A_CID1:
234
+ case A_CID2:
235
+ case A_CID3:
236
+ r = iotkit_secctl_s_idregs[(offset - A_PID4) / 4];
237
+ break;
238
+ case A_SECPPCINTCLR:
239
+ case A_SECMSCINTCLR:
240
+ case A_BRGINTCLR:
241
+ qemu_log_mask(LOG_GUEST_ERROR,
242
+ "IotKit SecCtl S block read: write-only offset 0x%x\n",
243
+ offset);
244
+ r = 0;
245
+ break;
246
+ default:
247
+ qemu_log_mask(LOG_GUEST_ERROR,
248
+ "IotKit SecCtl S block read: bad offset 0x%x\n", offset);
249
+ r = 0;
250
+ break;
251
+ }
69
+ }
252
+
70
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
253
+ if (size != 4) {
71
+ ret |= HCR_VF;
254
+ /* None of our registers are access-sensitive, so just pull the right
255
+ * byte out of the word read result.
256
+ */
257
+ r = extract32(r, (addr & 3) * 8, size * 8);
258
+ }
72
+ }
259
+
73
+ return ret;
260
+ trace_iotkit_secctl_s_read(offset, r, size);
261
+ *pdata = r;
262
+ return MEMTX_OK;
263
+}
74
+}
264
+
75
+
265
+static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
76
static const ARMCPRegInfo el2_cp_reginfo[] = {
266
+ uint64_t value,
77
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
267
+ unsigned size, MemTxAttrs attrs)
78
+ .type = ARM_CP_IO,
268
+{
79
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
269
+ uint32_t offset = addr;
80
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
270
+
81
- .writefn = hcr_write },
271
+ trace_iotkit_secctl_s_write(offset, value, size);
82
+ .writefn = hcr_write, .readfn = hcr_read },
272
+
83
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
273
+ if (size != 4) {
84
- .type = ARM_CP_ALIAS,
274
+ /* Byte and halfword writes are ignored */
85
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
275
+ qemu_log_mask(LOG_GUEST_ERROR,
86
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
276
+ "IotKit SecCtl S block write: bad size, ignored\n");
87
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
277
+ return MEMTX_OK;
88
- .writefn = hcr_writelow },
278
+ }
89
+ .writefn = hcr_writelow, .readfn = hcr_read },
279
+
90
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
280
+ switch (offset) {
91
.type = ARM_CP_ALIAS,
281
+ case A_SECRESPCFG:
92
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
282
+ case A_NSCCFG:
93
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
283
+ case A_SECPPCINTCLR:
94
284
+ case A_SECPPCINTEN:
95
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
285
+ case A_SECMSCINTCLR:
96
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
286
+ case A_SECMSCINTEN:
97
- .type = ARM_CP_ALIAS,
287
+ case A_BRGINTCLR:
98
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
288
+ case A_BRGINTEN:
99
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
289
+ case A_AHBNSPPCEXP0:
100
.access = PL2_RW,
290
+ case A_AHBNSPPCEXP1:
101
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
291
+ case A_AHBNSPPCEXP2:
292
+ case A_AHBNSPPCEXP3:
293
+ case A_APBNSPPC0:
294
+ case A_APBNSPPC1:
295
+ case A_APBNSPPCEXP0:
296
+ case A_APBNSPPCEXP1:
297
+ case A_APBNSPPCEXP2:
298
+ case A_APBNSPPCEXP3:
299
+ case A_AHBSPPPCEXP0:
300
+ case A_AHBSPPPCEXP1:
301
+ case A_AHBSPPPCEXP2:
302
+ case A_AHBSPPPCEXP3:
303
+ case A_APBSPPPC0:
304
+ case A_APBSPPPC1:
305
+ case A_APBSPPPCEXP0:
306
+ case A_APBSPPPCEXP1:
307
+ case A_APBSPPPCEXP2:
308
+ case A_APBSPPPCEXP3:
309
+ qemu_log_mask(LOG_UNIMP,
310
+ "IoTKit SecCtl S block write: "
311
+ "unimplemented offset 0x%x\n", offset);
312
+ break;
313
+ case A_SECMPCINTSTATUS:
314
+ case A_SECPPCINTSTAT:
315
+ case A_SECMSCINTSTAT:
316
+ case A_BRGINTSTAT:
317
+ case A_AHBNSPPC0:
318
+ case A_AHBSPPPC0:
319
+ case A_NSMSCEXP:
320
+ case A_PID4:
321
+ case A_PID5:
322
+ case A_PID6:
323
+ case A_PID7:
324
+ case A_PID0:
325
+ case A_PID1:
326
+ case A_PID2:
327
+ case A_PID3:
328
+ case A_CID0:
329
+ case A_CID1:
330
+ case A_CID2:
331
+ case A_CID3:
332
+ qemu_log_mask(LOG_GUEST_ERROR,
333
+ "IoTKit SecCtl S block write: "
334
+ "read-only offset 0x%x\n", offset);
335
+ break;
336
+ default:
337
+ qemu_log_mask(LOG_GUEST_ERROR,
338
+ "IotKit SecCtl S block write: bad offset 0x%x\n",
339
+ offset);
340
+ break;
341
+ }
342
+
343
+ return MEMTX_OK;
344
+}
345
+
346
+static MemTxResult iotkit_secctl_ns_read(void *opaque, hwaddr addr,
347
+ uint64_t *pdata,
348
+ unsigned size, MemTxAttrs attrs)
349
+{
350
+ uint64_t r;
351
+ uint32_t offset = addr & ~0x3;
352
+
353
+ switch (offset) {
354
+ case A_AHBNSPPPC0:
355
+ r = 0;
356
+ break;
357
+ case A_AHBNSPPPCEXP0:
358
+ case A_AHBNSPPPCEXP1:
359
+ case A_AHBNSPPPCEXP2:
360
+ case A_AHBNSPPPCEXP3:
361
+ case A_APBNSPPPC0:
362
+ case A_APBNSPPPC1:
363
+ case A_APBNSPPPCEXP0:
364
+ case A_APBNSPPPCEXP1:
365
+ case A_APBNSPPPCEXP2:
366
+ case A_APBNSPPPCEXP3:
367
+ qemu_log_mask(LOG_UNIMP,
368
+ "IoTKit SecCtl NS block read: "
369
+ "unimplemented offset 0x%x\n", offset);
370
+ break;
371
+ case A_PID4:
372
+ case A_PID5:
373
+ case A_PID6:
374
+ case A_PID7:
375
+ case A_PID0:
376
+ case A_PID1:
377
+ case A_PID2:
378
+ case A_PID3:
379
+ case A_CID0:
380
+ case A_CID1:
381
+ case A_CID2:
382
+ case A_CID3:
383
+ r = iotkit_secctl_ns_idregs[(offset - A_PID4) / 4];
384
+ break;
385
+ default:
386
+ qemu_log_mask(LOG_GUEST_ERROR,
387
+ "IotKit SecCtl NS block write: bad offset 0x%x\n",
388
+ offset);
389
+ r = 0;
390
+ break;
391
+ }
392
+
393
+ if (size != 4) {
394
+ /* None of our registers are access-sensitive, so just pull the right
395
+ * byte out of the word read result.
396
+ */
397
+ r = extract32(r, (addr & 3) * 8, size * 8);
398
+ }
399
+
400
+ trace_iotkit_secctl_ns_read(offset, r, size);
401
+ *pdata = r;
402
+ return MEMTX_OK;
403
+}
404
+
405
+static MemTxResult iotkit_secctl_ns_write(void *opaque, hwaddr addr,
406
+ uint64_t value,
407
+ unsigned size, MemTxAttrs attrs)
408
+{
409
+ uint32_t offset = addr;
410
+
411
+ trace_iotkit_secctl_ns_write(offset, value, size);
412
+
413
+ if (size != 4) {
414
+ /* Byte and halfword writes are ignored */
415
+ qemu_log_mask(LOG_GUEST_ERROR,
416
+ "IotKit SecCtl NS block write: bad size, ignored\n");
417
+ return MEMTX_OK;
418
+ }
419
+
420
+ switch (offset) {
421
+ case A_AHBNSPPPCEXP0:
422
+ case A_AHBNSPPPCEXP1:
423
+ case A_AHBNSPPPCEXP2:
424
+ case A_AHBNSPPPCEXP3:
425
+ case A_APBNSPPPC0:
426
+ case A_APBNSPPPC1:
427
+ case A_APBNSPPPCEXP0:
428
+ case A_APBNSPPPCEXP1:
429
+ case A_APBNSPPPCEXP2:
430
+ case A_APBNSPPPCEXP3:
431
+ qemu_log_mask(LOG_UNIMP,
432
+ "IoTKit SecCtl NS block write: "
433
+ "unimplemented offset 0x%x\n", offset);
434
+ break;
435
+ case A_AHBNSPPPC0:
436
+ case A_PID4:
437
+ case A_PID5:
438
+ case A_PID6:
439
+ case A_PID7:
440
+ case A_PID0:
441
+ case A_PID1:
442
+ case A_PID2:
443
+ case A_PID3:
444
+ case A_CID0:
445
+ case A_CID1:
446
+ case A_CID2:
447
+ case A_CID3:
448
+ qemu_log_mask(LOG_GUEST_ERROR,
449
+ "IoTKit SecCtl NS block write: "
450
+ "read-only offset 0x%x\n", offset);
451
+ break;
452
+ default:
453
+ qemu_log_mask(LOG_GUEST_ERROR,
454
+ "IotKit SecCtl NS block write: bad offset 0x%x\n",
455
+ offset);
456
+ break;
457
+ }
458
+
459
+ return MEMTX_OK;
460
+}
461
+
462
+static const MemoryRegionOps iotkit_secctl_s_ops = {
463
+ .read_with_attrs = iotkit_secctl_s_read,
464
+ .write_with_attrs = iotkit_secctl_s_write,
465
+ .endianness = DEVICE_LITTLE_ENDIAN,
466
+ .valid.min_access_size = 1,
467
+ .valid.max_access_size = 4,
468
+ .impl.min_access_size = 1,
469
+ .impl.max_access_size = 4,
470
+};
471
+
472
+static const MemoryRegionOps iotkit_secctl_ns_ops = {
473
+ .read_with_attrs = iotkit_secctl_ns_read,
474
+ .write_with_attrs = iotkit_secctl_ns_write,
475
+ .endianness = DEVICE_LITTLE_ENDIAN,
476
+ .valid.min_access_size = 1,
477
+ .valid.max_access_size = 4,
478
+ .impl.min_access_size = 1,
479
+ .impl.max_access_size = 4,
480
+};
481
+
482
+static void iotkit_secctl_reset(DeviceState *dev)
483
+{
484
+
485
+}
486
+
487
+static void iotkit_secctl_init(Object *obj)
488
+{
489
+ IoTKitSecCtl *s = IOTKIT_SECCTL(obj);
490
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
491
+
492
+ memory_region_init_io(&s->s_regs, obj, &iotkit_secctl_s_ops,
493
+ s, "iotkit-secctl-s-regs", 0x1000);
494
+ memory_region_init_io(&s->ns_regs, obj, &iotkit_secctl_ns_ops,
495
+ s, "iotkit-secctl-ns-regs", 0x1000);
496
+ sysbus_init_mmio(sbd, &s->s_regs);
497
+ sysbus_init_mmio(sbd, &s->ns_regs);
498
+}
499
+
500
+static const VMStateDescription iotkit_secctl_vmstate = {
501
+ .name = "iotkit-secctl",
502
+ .version_id = 1,
503
+ .minimum_version_id = 1,
504
+ .fields = (VMStateField[]) {
505
+ VMSTATE_END_OF_LIST()
506
+ }
507
+};
508
+
509
+static void iotkit_secctl_class_init(ObjectClass *klass, void *data)
510
+{
511
+ DeviceClass *dc = DEVICE_CLASS(klass);
512
+
513
+ dc->vmsd = &iotkit_secctl_vmstate;
514
+ dc->reset = iotkit_secctl_reset;
515
+}
516
+
517
+static const TypeInfo iotkit_secctl_info = {
518
+ .name = TYPE_IOTKIT_SECCTL,
519
+ .parent = TYPE_SYS_BUS_DEVICE,
520
+ .instance_size = sizeof(IoTKitSecCtl),
521
+ .instance_init = iotkit_secctl_init,
522
+ .class_init = iotkit_secctl_class_init,
523
+};
524
+
525
+static void iotkit_secctl_register_types(void)
526
+{
527
+ type_register_static(&iotkit_secctl_info);
528
+}
529
+
530
+type_init(iotkit_secctl_register_types);
531
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
532
index XXXXXXX..XXXXXXX 100644
533
--- a/default-configs/arm-softmmu.mak
534
+++ b/default-configs/arm-softmmu.mak
535
@@ -XXX,XX +XXX,XX @@ CONFIG_MPS2_FPGAIO=y
536
CONFIG_MPS2_SCC=y
537
538
CONFIG_TZ_PPC=y
539
+CONFIG_IOTKIT_SECCTL=y
540
541
CONFIG_VERSATILE_PCI=y
542
CONFIG_VERSATILE_I2C=y
543
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
544
index XXXXXXX..XXXXXXX 100644
545
--- a/hw/misc/trace-events
546
+++ b/hw/misc/trace-events
547
@@ -XXX,XX +XXX,XX @@ tz_ppc_irq_clear(int level) "TZ PPC: int_clear = %d"
548
tz_ppc_update_irq(int level) "TZ PPC: setting irq line to %d"
549
tz_ppc_read_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " read (secure %d user %d) blocked"
550
tz_ppc_write_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " write (secure %d user %d) blocked"
551
+
552
+# hw/misc/iotkit-secctl.c
553
+iotkit_secctl_s_read(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl S regs read: offset 0x%x data 0x%" PRIx64 " size %u"
554
+iotkit_secctl_s_write(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl S regs write: offset 0x%x data 0x%" PRIx64 " size %u"
555
+iotkit_secctl_ns_read(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl NS regs read: offset 0x%x data 0x%" PRIx64 " size %u"
556
+iotkit_secctl_ns_write(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl NS regs write: offset 0x%x data 0x%" PRIx64 " size %u"
557
+iotkit_secctl_reset(void) "IoTKit SecCtl: reset"
558
--
102
--
559
2.16.2
103
2.19.1
560
104
561
105
1
Create an "idau" property on the armv7m container object which
1
If the HCR_EL2 PTW virtualization configuration register bit
2
we can forward to the CPU object. Annoyingly, we can't use
2
is set, then this means that a stage 2 Permission fault must
3
object_property_add_alias() because the CPU object we want to
3
be generated if a stage 1 translation table access is made
4
forward to doesn't exist until the armv7m container is realized.
4
to an address that is mapped as Device memory in stage 2.
5
Implement this.
5
6
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-6-peter.maydell@linaro.org
9
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
9
---
10
---
10
include/hw/arm/armv7m.h | 3 +++
11
target/arm/helper.c | 21 ++++++++++++++++++++-
11
hw/arm/armv7m.c | 9 +++++++++
12
1 file changed, 20 insertions(+), 1 deletion(-)
12
2 files changed, 12 insertions(+)
13
13
14
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/armv7m.h
16
--- a/target/arm/helper.c
17
+++ b/include/hw/arm/armv7m.h
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
19
19
hwaddr s2pa;
20
#include "hw/sysbus.h"
20
int s2prot;
21
#include "hw/intc/armv7m_nvic.h"
21
int ret;
22
+#include "target/arm/idau.h"
22
+ ARMCacheAttrs cacheattrs = {};
23
23
+ ARMCacheAttrs *pcacheattrs = NULL;
24
#define TYPE_BITBAND "ARM,bitband-memory"
24
+
25
#define BITBAND(obj) OBJECT_CHECK(BitBandState, (obj), TYPE_BITBAND)
25
+ if (env->cp15.hcr_el2 & HCR_PTW) {
26
@@ -XXX,XX +XXX,XX @@ typedef struct {
26
+ /*
27
* + Property "memory": MemoryRegion defining the physical address space
27
+ * PTW means we must fault if this S1 walk touches S2 Device
28
* that CPU accesses see. (The NVIC, bitbanding and other CPU-internal
28
+ * memory; otherwise we don't care about the attributes and can
29
* devices will be automatically layered on top of this view.)
29
+ * save the S2 translation the effort of computing them.
30
+ * + Property "idau": IDAU interface (forwarded to CPU object)
30
+ */
31
*/
31
+ pcacheattrs = &cacheattrs;
32
typedef struct ARMv7MState {
33
/*< private >*/
34
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
35
char *cpu_type;
36
/* MemoryRegion the board provides to us (with its devices, RAM, etc) */
37
MemoryRegion *board_memory;
38
+ Object *idau;
39
} ARMv7MState;
40
41
#endif
42
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/armv7m.c
45
+++ b/hw/arm/armv7m.c
46
@@ -XXX,XX +XXX,XX @@
47
#include "sysemu/qtest.h"
48
#include "qemu/error-report.h"
49
#include "exec/address-spaces.h"
50
+#include "target/arm/idau.h"
51
52
/* Bitbanded IO. Each word corresponds to a single bit. */
53
54
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
55
56
object_property_set_link(OBJECT(s->cpu), OBJECT(&s->container), "memory",
57
&error_abort);
58
+ if (object_property_find(OBJECT(s->cpu), "idau", NULL)) {
59
+ object_property_set_link(OBJECT(s->cpu), s->idau, "idau", &err);
60
+ if (err != NULL) {
61
+ error_propagate(errp, err);
62
+ return;
63
+ }
32
+ }
64
+ }
33
65
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
34
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
66
if (err != NULL) {
35
- &txattrs, &s2prot, &s2size, fi, NULL);
67
error_propagate(errp, err);
36
+ &txattrs, &s2prot, &s2size, fi, pcacheattrs);
68
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
37
if (ret) {
69
DEFINE_PROP_STRING("cpu-type", ARMv7MState, cpu_type),
38
assert(fi->type != ARMFault_None);
70
DEFINE_PROP_LINK("memory", ARMv7MState, board_memory, TYPE_MEMORY_REGION,
39
fi->s2addr = addr;
71
MemoryRegion *),
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
72
+ DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
41
fi->s1ptw = true;
73
DEFINE_PROP_END_OF_LIST(),
42
return ~0;
74
};
43
}
75
44
+ if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
45
+ /* Access was to Device memory: generate Permission fault */
46
+ fi->type = ARMFault_Permission;
47
+ fi->s2addr = addr;
48
+ fi->stage2 = true;
49
+ fi->s1ptw = true;
50
+ return ~0;
51
+ }
52
addr = s2pa;
53
}
54
return addr;
76
--
55
--
77
2.16.2
56
2.19.1
78
57
79
58
1
The MPS2 AN505 FPGA image includes a "FPGA control block"
1
Create and use a utility function to extract the EC field
2
which is a small set of registers handling LEDs, buttons
2
from a syndrome, rather than open-coding the shift.
3
and some counters.
4
3
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180220180325.29818-14-peter.maydell@linaro.org
6
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
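The helper is a one-liner in internals.h, and the call sites drop the open-coded shift; for example (taken from the kvm64.c hunk below):

static inline uint32_t syn_get_ec(uint32_t syn)
{
    return syn >> ARM_EL_EC_SHIFT;
}

    /* at a call site, instead of debug_exit->hsr >> ARM_EL_EC_SHIFT: */
    int hsr_ec = syn_get_ec(debug_exit->hsr);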
8
---
7
---
9
hw/misc/Makefile.objs | 1 +
8
target/arm/internals.h | 5 +++++
10
include/hw/misc/mps2-fpgaio.h | 43 ++++++++++
9
target/arm/helper.c | 4 ++--
11
hw/misc/mps2-fpgaio.c | 176 ++++++++++++++++++++++++++++++++++++++++
10
target/arm/kvm64.c | 2 +-
12
default-configs/arm-softmmu.mak | 1 +
11
target/arm/op_helper.c | 2 +-
13
hw/misc/trace-events | 6 ++
12
4 files changed, 9 insertions(+), 4 deletions(-)
14
5 files changed, 227 insertions(+)
15
create mode 100644 include/hw/misc/mps2-fpgaio.h
16
create mode 100644 hw/misc/mps2-fpgaio.c
17
13
18
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
14
diff --git a/target/arm/internals.h b/target/arm/internals.h
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/misc/Makefile.objs
16
--- a/target/arm/internals.h
21
+++ b/hw/misc/Makefile.objs
17
+++ b/target/arm/internals.h
22
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o
18
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
23
obj-$(CONFIG_MIPS_CPS) += mips_cmgcr.o
19
#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
24
obj-$(CONFIG_MIPS_CPS) += mips_cpc.o
20
#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
25
obj-$(CONFIG_MIPS_ITU) += mips_itu.o
21
26
+obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
22
+static inline uint32_t syn_get_ec(uint32_t syn)
27
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
28
29
obj-$(CONFIG_PVPANIC) += pvpanic.o
30
diff --git a/include/hw/misc/mps2-fpgaio.h b/include/hw/misc/mps2-fpgaio.h
31
new file mode 100644
32
index XXXXXXX..XXXXXXX
33
--- /dev/null
34
+++ b/include/hw/misc/mps2-fpgaio.h
35
@@ -XXX,XX +XXX,XX @@
36
+/*
37
+ * ARM MPS2 FPGAIO emulation
38
+ *
39
+ * Copyright (c) 2018 Linaro Limited
40
+ * Written by Peter Maydell
41
+ *
42
+ * This program is free software; you can redistribute it and/or modify
43
+ * it under the terms of the GNU General Public License version 2 or
44
+ * (at your option) any later version.
45
+ */
46
+
47
+/* This is a model of the FPGAIO register block in the AN505
48
+ * FPGA image for the MPS2 dev board; it is documented in the
49
+ * application note:
50
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
51
+ *
52
+ * QEMU interface:
53
+ * + sysbus MMIO region 0: the register bank
54
+ */
55
+
56
+#ifndef MPS2_FPGAIO_H
57
+#define MPS2_FPGAIO_H
58
+
59
+#include "hw/sysbus.h"
60
+
61
+#define TYPE_MPS2_FPGAIO "mps2-fpgaio"
62
+#define MPS2_FPGAIO(obj) OBJECT_CHECK(MPS2FPGAIO, (obj), TYPE_MPS2_FPGAIO)
63
+
64
+typedef struct {
65
+ /*< private >*/
66
+ SysBusDevice parent_obj;
67
+
68
+ /*< public >*/
69
+ MemoryRegion iomem;
70
+
71
+ uint32_t led0;
72
+ uint32_t prescale;
73
+ uint32_t misc;
74
+
75
+ uint32_t prescale_clk;
76
+} MPS2FPGAIO;
77
+
78
+#endif
79
diff --git a/hw/misc/mps2-fpgaio.c b/hw/misc/mps2-fpgaio.c
80
new file mode 100644
81
index XXXXXXX..XXXXXXX
82
--- /dev/null
83
+++ b/hw/misc/mps2-fpgaio.c
84
@@ -XXX,XX +XXX,XX @@
85
+/*
86
+ * ARM MPS2 AN505 FPGAIO emulation
87
+ *
88
+ * Copyright (c) 2018 Linaro Limited
89
+ * Written by Peter Maydell
90
+ *
91
+ * This program is free software; you can redistribute it and/or modify
92
+ * it under the terms of the GNU General Public License version 2 or
93
+ * (at your option) any later version.
94
+ */
95
+
96
+/* This is a model of the "FPGA system control and I/O" block found
97
+ * in the AN505 FPGA image for the MPS2 devboard.
98
+ * It is documented in AN505:
99
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
100
+ */
101
+
102
+#include "qemu/osdep.h"
103
+#include "qemu/log.h"
104
+#include "qapi/error.h"
105
+#include "trace.h"
106
+#include "hw/sysbus.h"
107
+#include "hw/registerfields.h"
108
+#include "hw/misc/mps2-fpgaio.h"
109
+
110
+REG32(LED0, 0)
111
+REG32(BUTTON, 8)
112
+REG32(CLK1HZ, 0x10)
113
+REG32(CLK100HZ, 0x14)
114
+REG32(COUNTER, 0x18)
115
+REG32(PRESCALE, 0x1c)
116
+REG32(PSCNTR, 0x20)
117
+REG32(MISC, 0x4c)
118
+
119
+static uint64_t mps2_fpgaio_read(void *opaque, hwaddr offset, unsigned size)
120
+{
23
+{
121
+ MPS2FPGAIO *s = MPS2_FPGAIO(opaque);
24
+ return syn >> ARM_EL_EC_SHIFT;
122
+ uint64_t r;
123
+
124
+ switch (offset) {
125
+ case A_LED0:
126
+ r = s->led0;
127
+ break;
128
+ case A_BUTTON:
129
+ /* User-pressable board buttons. We don't model that, so just return
130
+ * zeroes.
131
+ */
132
+ r = 0;
133
+ break;
134
+ case A_PRESCALE:
135
+ r = s->prescale;
136
+ break;
137
+ case A_MISC:
138
+ r = s->misc;
139
+ break;
140
+ case A_CLK1HZ:
141
+ case A_CLK100HZ:
142
+ case A_COUNTER:
143
+ case A_PSCNTR:
144
+ /* These are all upcounters of various frequencies. */
145
+ qemu_log_mask(LOG_UNIMP, "MPS2 FPGAIO: counters unimplemented\n");
146
+ r = 0;
147
+ break;
148
+ default:
149
+ qemu_log_mask(LOG_GUEST_ERROR,
150
+ "MPS2 FPGAIO read: bad offset %x\n", (int) offset);
151
+ r = 0;
152
+ break;
153
+ }
154
+
155
+ trace_mps2_fpgaio_read(offset, r, size);
156
+ return r;
157
+}
25
+}
158
+
26
+
159
+static void mps2_fpgaio_write(void *opaque, hwaddr offset, uint64_t value,
27
/* Utility functions for constructing various kinds of syndrome value.
160
+ unsigned size)
28
* Note that in general we follow the AArch64 syndrome values; in a
161
+{
29
* few cases the value in HSR for exceptions taken to AArch32 Hyp
162
+ MPS2FPGAIO *s = MPS2_FPGAIO(opaque);
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
163
+
164
+ trace_mps2_fpgaio_write(offset, value, size);
165
+
166
+ switch (offset) {
167
+ case A_LED0:
168
+ /* LED bits [1:0] control board LEDs. We don't currently have
169
+ * a mechanism for displaying this graphically, so use a trace event.
170
+ */
171
+ trace_mps2_fpgaio_leds(value & 0x02 ? '*' : '.',
172
+ value & 0x01 ? '*' : '.');
173
+ s->led0 = value & 0x3;
174
+ break;
175
+ case A_PRESCALE:
176
+ s->prescale = value;
177
+ break;
178
+ case A_MISC:
179
+ /* These are control bits for some of the other devices on the
180
+ * board (SPI, CLCD, etc). We don't implement that yet, so just
181
+ * make the bits read as written.
182
+ */
183
+ qemu_log_mask(LOG_UNIMP,
184
+ "MPS2 FPGAIO: MISC control bits unimplemented\n");
185
+ s->misc = value;
186
+ break;
187
+ default:
188
+ qemu_log_mask(LOG_GUEST_ERROR,
189
+ "MPS2 FPGAIO write: bad offset 0x%x\n", (int) offset);
190
+ break;
191
+ }
192
+}
193
+
194
+static const MemoryRegionOps mps2_fpgaio_ops = {
195
+ .read = mps2_fpgaio_read,
196
+ .write = mps2_fpgaio_write,
197
+ .endianness = DEVICE_LITTLE_ENDIAN,
198
+};
199
+
200
+static void mps2_fpgaio_reset(DeviceState *dev)
201
+{
202
+ MPS2FPGAIO *s = MPS2_FPGAIO(dev);
203
+
204
+ trace_mps2_fpgaio_reset();
205
+ s->led0 = 0;
206
+ s->prescale = 0;
207
+ s->misc = 0;
208
+}
209
+
210
+static void mps2_fpgaio_init(Object *obj)
211
+{
212
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
213
+ MPS2FPGAIO *s = MPS2_FPGAIO(obj);
214
+
215
+ memory_region_init_io(&s->iomem, obj, &mps2_fpgaio_ops, s,
216
+ "mps2-fpgaio", 0x1000);
217
+ sysbus_init_mmio(sbd, &s->iomem);
218
+}
219
+
220
+static const VMStateDescription mps2_fpgaio_vmstate = {
221
+ .name = "mps2-fpgaio",
222
+ .version_id = 1,
223
+ .minimum_version_id = 1,
224
+ .fields = (VMStateField[]) {
225
+ VMSTATE_UINT32(led0, MPS2FPGAIO),
226
+ VMSTATE_UINT32(prescale, MPS2FPGAIO),
227
+ VMSTATE_UINT32(misc, MPS2FPGAIO),
228
+ VMSTATE_END_OF_LIST()
229
+ }
230
+};
231
+
232
+static Property mps2_fpgaio_properties[] = {
233
+ /* Frequency of the prescale counter */
234
+ DEFINE_PROP_UINT32("prescale-clk", MPS2FPGAIO, prescale_clk, 20000000),
235
+ DEFINE_PROP_END_OF_LIST(),
236
+};
237
+
238
+static void mps2_fpgaio_class_init(ObjectClass *klass, void *data)
239
+{
240
+ DeviceClass *dc = DEVICE_CLASS(klass);
241
+
242
+ dc->vmsd = &mps2_fpgaio_vmstate;
243
+ dc->reset = mps2_fpgaio_reset;
244
+ dc->props = mps2_fpgaio_properties;
245
+}
246
+
247
+static const TypeInfo mps2_fpgaio_info = {
248
+ .name = TYPE_MPS2_FPGAIO,
249
+ .parent = TYPE_SYS_BUS_DEVICE,
250
+ .instance_size = sizeof(MPS2FPGAIO),
251
+ .instance_init = mps2_fpgaio_init,
252
+ .class_init = mps2_fpgaio_class_init,
253
+};
254
+
255
+static void mps2_fpgaio_register_types(void)
256
+{
257
+ type_register_static(&mps2_fpgaio_info);
258
+}
259
+
260
+type_init(mps2_fpgaio_register_types);
261
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
262
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
263
--- a/default-configs/arm-softmmu.mak
32
--- a/target/arm/helper.c
264
+++ b/default-configs/arm-softmmu.mak
33
+++ b/target/arm/helper.c
265
@@ -XXX,XX +XXX,XX @@ CONFIG_STM32F205_SOC=y
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
266
CONFIG_CMSDK_APB_TIMER=y
35
uint32_t moe;
267
CONFIG_CMSDK_APB_UART=y
36
268
37
/* If this is a debug exception we must update the DBGDSCR.MOE bits */
269
+CONFIG_MPS2_FPGAIO=y
38
- switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
270
CONFIG_MPS2_SCC=y
39
+ switch (syn_get_ec(env->exception.syndrome)) {
271
40
case EC_BREAKPOINT:
272
CONFIG_VERSATILE_PCI=y
41
case EC_BREAKPOINT_SAME_EL:
273
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
42
moe = 1;
43
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
44
if (qemu_loglevel_mask(CPU_LOG_INT)
45
&& !excp_is_internal(cs->exception_index)) {
46
qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
47
- env->exception.syndrome >> ARM_EL_EC_SHIFT,
48
+ syn_get_ec(env->exception.syndrome),
49
env->exception.syndrome);
50
}
51
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
274
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
275
--- a/hw/misc/trace-events
54
--- a/target/arm/kvm64.c
276
+++ b/hw/misc/trace-events
55
+++ b/target/arm/kvm64.c
277
@@ -XXX,XX +XXX,XX @@ mps2_scc_leds(char led7, char led6, char led5, char led4, char led3, char led2,
56
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
278
mps2_scc_cfg_write(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config write: function %d device %d data 0x%" PRIx32
57
279
mps2_scc_cfg_read(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config read: function %d device %d data 0x%" PRIx32
58
bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
280
59
{
281
+# hw/misc/mps2_fpgaio.c
60
- int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
282
+mps2_fpgaio_read(uint64_t offset, uint64_t data, unsigned size) "MPS2 FPGAIO read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
61
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
283
+mps2_fpgaio_write(uint64_t offset, uint64_t data, unsigned size) "MPS2 FPGAIO write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
62
ARMCPU *cpu = ARM_CPU(cs);
284
+mps2_fpgaio_reset(void) "MPS2 FPGAIO: reset"
63
CPUClass *cc = CPU_GET_CLASS(cs);
285
+mps2_fpgaio_leds(char led1, char led0) "MPS2 FPGAIO LEDs: %c%c"
64
CPUARMState *env = &cpu->env;
286
+
65
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
287
# hw/misc/msf2-sysreg.c
66
index XXXXXXX..XXXXXXX 100644
288
msf2_sysreg_write(uint64_t offset, uint32_t val, uint32_t prev) "msf2-sysreg write: addr 0x%08" HWADDR_PRIx " data 0x%" PRIx32 " prev 0x%" PRIx32
67
--- a/target/arm/op_helper.c
289
msf2_sysreg_read(uint64_t offset, uint32_t val) "msf2-sysreg read: addr 0x%08" HWADDR_PRIx " data 0x%08" PRIx32
68
+++ b/target/arm/op_helper.c
69
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
70
* (see DDI0478C.a D1.10.4)
71
*/
72
target_el = 2;
73
- if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
74
+ if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
75
syndrome = syn_uncategorized();
76
}
77
}
290
--
78
--
291
2.16.2
79
2.19.1
292
80
293
81
1
Add remaining easy registers to iotkit-secctl:
1
For the v7 version of the Arm architecture, the IL bit in
2
* NSCCFG just routes its two bits out to external GPIO lines
2
syndrome register values where the field is not valid was
3
* BRGINTSTAT/BRGINTCLR/BRGINTEN can be dummies, because QEMU's
3
defined to be UNK/SBZP. In v8 this is RES1, which is what
4
bus fabric can never report errors
4
QEMU currently implements. Handle the desired v7 behaviour
5
by squashing the IL bit for the affected cases:
6
* EC == EC_UNCATEGORIZED
7
* prefetch aborts
8
* data aborts where ISV is 0
9
10
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
11
section G7.2.70, "illegal state exception", can't happen
12
on a v7 CPU.)
13
14
This deals with a corner case noted in a comment.
5
15
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180220180325.29818-18-peter.maydell@linaro.org
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
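Condensed from the helper.c hunk below, the fixup applied in arm_cpu_do_interrupt_aarch32_hyp() when the CPU is not v8:

    if (!arm_feature(env, ARM_FEATURE_V8)) {
        /* QEMU syndromes are v8-style with IL as RES1; on v7 the bit is
         * UNK/SBZP for these cases, so clear it before writing HSR.
         */
        if (cs->exception_index == EXCP_PREFETCH_ABORT ||
            (cs->exception_index == EXCP_DATA_ABORT &&
             !(env->exception.syndrome & ARM_EL_ISV)) ||
            syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
            env->exception.syndrome &= ~ARM_EL_IL;
        }
    }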
8
---
19
---
9
include/hw/misc/iotkit-secctl.h | 4 ++++
20
target/arm/internals.h | 7 ++-----
10
hw/misc/iotkit-secctl.c | 32 ++++++++++++++++++++++++++------
21
target/arm/helper.c | 13 +++++++++++++
11
2 files changed, 30 insertions(+), 6 deletions(-)
22
2 files changed, 15 insertions(+), 5 deletions(-)
12
23
13
diff --git a/include/hw/misc/iotkit-secctl.h b/include/hw/misc/iotkit-secctl.h
24
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/misc/iotkit-secctl.h
26
--- a/target/arm/internals.h
16
+++ b/include/hw/misc/iotkit-secctl.h
27
+++ b/target/arm/internals.h
17
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
18
* + sysbus MMIO region 1 is the "non-secure privilege control block" registers
29
/* Utility functions for constructing various kinds of syndrome value.
19
* + named GPIO output "sec_resp_cfg" indicating whether blocked accesses
30
* Note that in general we follow the AArch64 syndrome values; in a
20
* should RAZ/WI or bus error
31
* few cases the value in HSR for exceptions taken to AArch32 Hyp
21
+ * + named GPIO output "nsc_cfg" whose value tracks the NSCCFG register value
32
- * mode differs slightly, so if we ever implemented Hyp mode then the
22
* Controlling the 2 APB PPCs in the IoTKit:
33
- * syndrome value would need some massaging on exception entry.
23
* + named GPIO outputs apb_ppc0_nonsec[0..2] and apb_ppc1_nonsec
34
- * (One example of this is that AArch64 defaults to IL bit set for
24
* + named GPIO outputs apb_ppc0_ap[0..2] and apb_ppc1_ap
35
- * exceptions which don't specifically indicate information about the
25
@@ -XXX,XX +XXX,XX @@ struct IoTKitSecCtl {
36
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
26
37
+ * mode differs slightly, and we fix this up when populating HSR in
27
/*< public >*/
38
+ * arm_cpu_do_interrupt_aarch32_hyp().
28
qemu_irq sec_resp_cfg;
39
*/
29
+ qemu_irq nsc_cfg_irq;
40
static inline uint32_t syn_uncategorized(void)
30
41
{
31
MemoryRegion s_regs;
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
MemoryRegion ns_regs;
33
@@ -XXX,XX +XXX,XX @@ struct IoTKitSecCtl {
34
uint32_t secppcintstat;
35
uint32_t secppcinten;
36
uint32_t secrespcfg;
37
+ uint32_t nsccfg;
38
+ uint32_t brginten;
39
40
IoTKitSecCtlPPC apb[IOTS_NUM_APB_PPC];
41
IoTKitSecCtlPPC apbexp[IOTS_NUM_APB_EXP_PPC];
42
diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
43
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/misc/iotkit-secctl.c
44
--- a/target/arm/helper.c
45
+++ b/hw/misc/iotkit-secctl.c
45
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
46
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
47
case A_SECRESPCFG:
48
r = s->secrespcfg;
49
break;
50
+ case A_NSCCFG:
51
+ r = s->nsccfg;
52
+ break;
53
case A_SECPPCINTSTAT:
54
r = s->secppcintstat;
55
break;
56
case A_SECPPCINTEN:
57
r = s->secppcinten;
58
break;
59
+ case A_BRGINTSTAT:
60
+ /* QEMU's bus fabric can never report errors as it doesn't buffer
61
+ * writes, so we never report bridge interrupts.
62
+ */
63
+ r = 0;
64
+ break;
65
+ case A_BRGINTEN:
66
+ r = s->brginten;
67
+ break;
68
case A_AHBNSPPCEXP0:
69
case A_AHBNSPPCEXP1:
70
case A_AHBNSPPCEXP2:
71
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
72
case A_APBSPPPCEXP3:
73
r = s->apbexp[offset_to_ppc_idx(offset)].sp;
74
break;
75
- case A_NSCCFG:
76
case A_SECMPCINTSTATUS:
77
case A_SECMSCINTSTAT:
78
case A_SECMSCINTEN:
79
- case A_BRGINTSTAT:
80
- case A_BRGINTEN:
81
case A_NSMSCEXP:
82
qemu_log_mask(LOG_UNIMP,
83
"IoTKit SecCtl S block read: "
84
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
85
}
47
}
86
48
87
switch (offset) {
49
if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
88
+ case A_NSCCFG:
50
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
89
+ s->nsccfg = value & 3;
51
+ /*
90
+ qemu_set_irq(s->nsc_cfg_irq, s->nsccfg);
52
+ * QEMU syndrome values are v8-style. v7 has the IL bit
91
+ break;
53
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
92
case A_SECRESPCFG:
54
+ * If this is a v7 CPU, squash the IL bit in those cases.
93
value &= 1;
55
+ */
94
s->secrespcfg = value;
56
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
95
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
57
+ (cs->exception_index == EXCP_DATA_ABORT &&
96
s->secppcinten = value & 0x00f000f3;
58
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
97
foreach_ppc(s, iotkit_secctl_ppc_update_irq_enable);
59
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
98
break;
60
+ env->exception.syndrome &= ~ARM_EL_IL;
99
+ case A_BRGINTCLR:
61
+ }
100
+ break;
62
+ }
101
+ case A_BRGINTEN:
63
env->cp15.esr_el[2] = env->exception.syndrome;
102
+ s->brginten = value & 0xffff0000;
103
+ break;
104
case A_AHBNSPPCEXP0:
105
case A_AHBNSPPCEXP1:
106
case A_AHBNSPPCEXP2:
107
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
108
ppc = &s->apbexp[offset_to_ppc_idx(offset)];
109
iotkit_secctl_ppc_sp_write(ppc, value);
110
break;
111
- case A_NSCCFG:
112
case A_SECMSCINTCLR:
113
case A_SECMSCINTEN:
114
- case A_BRGINTCLR:
115
- case A_BRGINTEN:
116
qemu_log_mask(LOG_UNIMP,
117
"IoTKit SecCtl S block write: "
118
"unimplemented offset 0x%x\n", offset);
119
@@ -XXX,XX +XXX,XX @@ static void iotkit_secctl_reset(DeviceState *dev)
120
s->secppcintstat = 0;
121
s->secppcinten = 0;
122
s->secrespcfg = 0;
123
+ s->nsccfg = 0;
124
+ s->brginten = 0;
125
126
foreach_ppc(s, iotkit_secctl_reset_ppc);
127
}
128
@@ -XXX,XX +XXX,XX @@ static void iotkit_secctl_init(Object *obj)
129
}
64
}
130
65
131
qdev_init_gpio_out_named(dev, &s->sec_resp_cfg, "sec_resp_cfg", 1);
132
+ qdev_init_gpio_out_named(dev, &s->nsc_cfg_irq, "nsc_cfg", 1);
133
134
memory_region_init_io(&s->s_regs, obj, &iotkit_secctl_s_ops,
135
s, "iotkit-secctl-s-regs", 0x1000);
136
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription iotkit_secctl_vmstate = {
137
VMSTATE_UINT32(secppcintstat, IoTKitSecCtl),
138
VMSTATE_UINT32(secppcinten, IoTKitSecCtl),
139
VMSTATE_UINT32(secrespcfg, IoTKitSecCtl),
140
+ VMSTATE_UINT32(nsccfg, IoTKitSecCtl),
141
+ VMSTATE_UINT32(brginten, IoTKitSecCtl),
142
VMSTATE_STRUCT_ARRAY(apb, IoTKitSecCtl, IOTS_NUM_APB_PPC, 1,
143
iotkit_secctl_ppc_vmstate, IoTKitSecCtlPPC),
144
VMSTATE_STRUCT_ARRAY(apbexp, IoTKitSecCtl, IOTS_NUM_APB_EXP_PPC, 1,
145
--
66
--
146
2.16.2
67
2.19.1
147
68
148
69
1
Add a function load_ramdisk_as() which behaves like the existing
1
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
2
load_ramdisk() but allows the caller to specify the AddressSpace
2
provided in HSR has more information than is reported to AArch64.
3
to use. This matches the pattern we have already for various
3
Specifically, there are extra fields TA and coproc which indicate
4
other loader functions.
4
whether the trapped instruction was FP or SIMD. Add this extra
5
information to the syndromes we construct, and mask it out when
6
taking the exception to AArch64.
5
7
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180220180325.29818-2-peter.maydell@linaro.org
10
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
10
---
11
---
11
include/hw/loader.h | 12 +++++++++++-
12
target/arm/internals.h | 14 +++++++++++++-
12
hw/core/loader.c | 8 +++++++-
13
target/arm/helper.c | 9 +++++++++
13
2 files changed, 18 insertions(+), 2 deletions(-)
14
target/arm/translate.c | 8 ++++----
15
3 files changed, 26 insertions(+), 5 deletions(-)
14
16
15
diff --git a/include/hw/loader.h b/include/hw/loader.h
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/loader.h
19
--- a/target/arm/internals.h
18
+++ b/include/hw/loader.h
20
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ int load_uimage(const char *filename, hwaddr *ep,
21
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
20
void *translate_opaque);
22
* few cases the value in HSR for exceptions taken to AArch32 Hyp
21
23
* mode differs slightly, and we fix this up when populating HSR in
22
/**
24
* arm_cpu_do_interrupt_aarch32_hyp().
23
- * load_ramdisk:
25
+ * The exception is FP/SIMD access traps -- these report extra information
24
+ * load_ramdisk_as:
26
+ * when taking an exception to AArch32. For those we include the extra coproc
25
* @filename: Path to the ramdisk image
27
+ * and TA fields, and mask them out when taking the exception to AArch64.
26
* @addr: Memory address to load the ramdisk to
27
* @max_sz: Maximum allowed ramdisk size (for non-u-boot ramdisks)
28
+ * @as: The AddressSpace to load the ELF to. The value of address_space_memory
29
+ * is used if nothing is supplied here.
30
*
31
* Load a ramdisk image with U-Boot header to the specified memory
32
* address.
33
*
34
* Returns the size of the loaded image on success, -1 otherwise.
35
*/
28
*/
36
+int load_ramdisk_as(const char *filename, hwaddr addr, uint64_t max_sz,
29
static inline uint32_t syn_uncategorized(void)
37
+ AddressSpace *as);
30
{
38
+
31
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
39
+/**
32
40
+ * load_ramdisk:
33
static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
41
+ * Same as load_ramdisk_as(), but doesn't allow the caller to specify
34
{
42
+ * an AddressSpace.
35
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
43
+ */
36
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
44
int load_ramdisk(const char *filename, hwaddr addr, uint64_t max_sz);
37
| (is_16bit ? 0 : ARM_EL_IL)
45
38
- | (cv << 24) | (cond << 20);
46
ssize_t gunzip(void *dst, size_t dstlen, uint8_t *src, size_t srclen);
39
+ | (cv << 24) | (cond << 20) | 0xa;
47
diff --git a/hw/core/loader.c b/hw/core/loader.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/core/loader.c
50
+++ b/hw/core/loader.c
51
@@ -XXX,XX +XXX,XX @@ int load_uimage_as(const char *filename, hwaddr *ep, hwaddr *loadaddr,
52
53
/* Load a ramdisk. */
54
int load_ramdisk(const char *filename, hwaddr addr, uint64_t max_sz)
55
+{
56
+ return load_ramdisk_as(filename, addr, max_sz, NULL);
57
+}
40
+}
58
+
41
+
59
+int load_ramdisk_as(const char *filename, hwaddr addr, uint64_t max_sz,
42
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
60
+ AddressSpace *as)
43
+{
61
{
44
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
62
return load_uboot_image(filename, NULL, &addr, NULL, IH_TYPE_RAMDISK,
45
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
63
- NULL, NULL, NULL);
46
+ | (is_16bit ? 0 : ARM_EL_IL)
64
+ NULL, NULL, as);
47
+ | (cv << 24) | (cond << 20) | (1 << 5);
65
}
48
}
66
49
67
/* Load a gzip-compressed kernel to a dynamically allocated buffer. */
50
static inline uint32_t syn_sve_access_trap(void)
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/helper.c
54
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
56
case EXCP_HVC:
57
case EXCP_HYP_TRAP:
58
case EXCP_SMC:
59
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
60
+ /*
61
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
62
+ * TA and coproc fields which are only exposed if the exception
63
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
64
+ * AArch64 format syndrome.
65
+ */
66
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
67
+ }
68
env->cp15.esr_el[new_el] = env->exception.syndrome;
69
break;
70
case EXCP_IRQ:
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
76
*/
77
if (s->fp_excp_el) {
78
gen_exception_insn(s, 4, EXCP_UDEF,
79
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
80
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
81
return 0;
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
85
*/
86
if (s->fp_excp_el) {
87
gen_exception_insn(s, 4, EXCP_UDEF,
88
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
89
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
90
return 0;
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
94
95
if (s->fp_excp_el) {
96
gen_exception_insn(s, 4, EXCP_UDEF,
97
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
98
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
99
return 0;
100
}
101
if (!s->vfp_enabled) {
102
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
103
104
if (s->fp_excp_el) {
105
gen_exception_insn(s, 4, EXCP_UDEF,
106
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
107
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
108
return 0;
109
}
110
if (!s->vfp_enabled) {
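
As a quick cross-check of the two encodings above (an illustrative sketch only, assuming the usual constant values EC_ADVSIMDFPACCESSTRAP == 0x7, ARM_EL_EC_SHIFT == 26 and ARM_EL_IL == (1 << 25)):

    /* AArch32 VFP access trap: coproc == 0xa, TA == 0 */
    assert(syn_fp_access_trap(1, 0xe, false)   == 0x1fe0000a);
    /* AArch32 Neon access trap: coproc == 0, TA == 1 (bit 5) */
    assert(syn_simd_access_trap(1, 0xe, false) == 0x1fe00020);
    /* Clearing bits [19:0] before writing ESR_ELx turns either value
     * into 0x1fe00000, i.e. a valid AArch64-format syndrome. */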
--
2.16.2

--
2.19.1

1
Instead of loading kernels, device trees, and the like to
1
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>
2
the system address space, use the CPU's address space. This
3
is important if we're trying to load the file to memory or
4
via an alias memory region that is provided by an SoC
5
object and thus not mapped into the system address space.
6
2
3
"The Image must be placed text_offset bytes from a 2MB aligned base
4
address anywhere in usable system RAM and called there."
5
6
For the virt board, we write our startup bootloader at the very
7
bottom of RAM, so that bit can't be used for the image. To avoid
8
overlap in case the image requests to be loaded at an offset
9
smaller than our bootloader, we increment the load offset to the
10
next 2MB.
11
12
This fixes a boot failure for Xen AArch64.
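
For illustration, the adjustment amounts to the following (a minimal sketch of the hw/arm/boot.c change below; the helper name is invented for this example):

    static hwaddr adjust_kernel_load_offset(hwaddr kernel_load_offset)
    {
        /* Our little bootloader blob sits at the very bottom of RAM, so if
         * the Image asked for an offset inside that region, start from the
         * next 2MB boundary instead: 2MB + text_offset is still
         * "text_offset bytes from a 2MB aligned base".
         */
        if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
            kernel_load_offset += 2 * MiB;
        }
        return kernel_load_offset;
    }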
13
14
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
15
Tested-by: Andre Przywara <andre.przywara@arm.com>
16
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
17
[PMM: Rephrased a comment a bit]
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-3-peter.maydell@linaro.org
11
---
20
---
12
hw/arm/boot.c | 119 +++++++++++++++++++++++++++++++++++++---------------------
21
hw/arm/boot.c | 18 ++++++++++++++++++
13
1 file changed, 76 insertions(+), 43 deletions(-)
22
1 file changed, 18 insertions(+)
14
23
15
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
24
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
16
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/boot.c
26
--- a/hw/arm/boot.c
18
+++ b/hw/arm/boot.c
27
+++ b/hw/arm/boot.c
19
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
29
#include "qemu/config-file.h"
30
#include "qemu/option.h"
31
#include "exec/address-spaces.h"
32
+#include "qemu/units.h"
33
34
/* Kernel boot protocol is specified in the kernel docs
35
* Documentation/arm/Booting and Documentation/arm64/booting.txt
36
@@ -XXX,XX +XXX,XX @@
20
#define ARM64_TEXT_OFFSET_OFFSET 8
37
#define ARM64_TEXT_OFFSET_OFFSET 8
21
#define ARM64_MAGIC_OFFSET 56
38
#define ARM64_MAGIC_OFFSET 56
22
39
23
+static AddressSpace *arm_boot_address_space(ARMCPU *cpu,
40
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
24
+ const struct arm_boot_info *info)
25
+{
26
+ /* Return the address space to use for bootloader reads and writes.
27
+ * We prefer the secure address space if the CPU has it and we're
28
+ * going to boot the guest into it.
29
+ */
30
+ int asidx;
31
+ CPUState *cs = CPU(cpu);
32
+
41
+
33
+ if (arm_feature(&cpu->env, ARM_FEATURE_EL3) && info->secure_boot) {
42
AddressSpace *arm_boot_address_space(ARMCPU *cpu,
34
+ asidx = ARMASIdx_S;
43
const struct arm_boot_info *info)
35
+ } else {
36
+ asidx = ARMASIdx_NS;
37
+ }
38
+
39
+ return cpu_get_address_space(cs, asidx);
40
+}
41
+
42
typedef enum {
43
FIXUP_NONE = 0, /* do nothing */
44
FIXUP_TERMINATOR, /* end of insns */
45
@@ -XXX,XX +XXX,XX @@ static const ARMInsnFixup smpboot[] = {
46
};
47
48
static void write_bootloader(const char *name, hwaddr addr,
49
- const ARMInsnFixup *insns, uint32_t *fixupcontext)
50
+ const ARMInsnFixup *insns, uint32_t *fixupcontext,
51
+ AddressSpace *as)
52
{
44
{
53
/* Fix up the specified bootloader fragment and write it into
54
* guest memory using rom_add_blob_fixed(). fixupcontext is
55
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
45
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
56
code[i] = tswap32(insn);
46
code[i] = tswap32(insn);
57
}
47
}
58
48
59
- rom_add_blob_fixed(name, code, len * sizeof(uint32_t), addr);
49
+ assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
60
+ rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
50
+
51
rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
61
52
62
g_free(code);
53
g_free(code);
63
}
54
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
64
@@ -XXX,XX +XXX,XX @@ static void default_write_secondary(ARMCPU *cpu,
55
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
65
const struct arm_boot_info *info)
56
if (hdrvals[1] != 0) {
66
{
57
kernel_load_offset = le64_to_cpu(hdrvals[0]);
67
uint32_t fixupcontext[FIXUP_MAX];
68
+ AddressSpace *as = arm_boot_address_space(cpu, info);
69
70
fixupcontext[FIXUP_GIC_CPU_IF] = info->gic_cpu_if_addr;
71
fixupcontext[FIXUP_BOOTREG] = info->smp_bootreg_addr;
72
@@ -XXX,XX +XXX,XX @@ static void default_write_secondary(ARMCPU *cpu,
73
}
74
75
write_bootloader("smpboot", info->smp_loader_start,
76
- smpboot, fixupcontext);
77
+ smpboot, fixupcontext, as);
78
}
79
80
void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
81
const struct arm_boot_info *info,
82
hwaddr mvbar_addr)
83
{
84
+ AddressSpace *as = arm_boot_address_space(cpu, info);
85
int n;
86
uint32_t mvbar_blob[] = {
87
/* mvbar_addr: secure monitor vectors
88
@@ -XXX,XX +XXX,XX @@ void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
89
for (n = 0; n < ARRAY_SIZE(mvbar_blob); n++) {
90
mvbar_blob[n] = tswap32(mvbar_blob[n]);
91
}
92
- rom_add_blob_fixed("board-setup-mvbar", mvbar_blob, sizeof(mvbar_blob),
93
- mvbar_addr);
94
+ rom_add_blob_fixed_as("board-setup-mvbar", mvbar_blob, sizeof(mvbar_blob),
95
+ mvbar_addr, as);
96
97
for (n = 0; n < ARRAY_SIZE(board_setup_blob); n++) {
98
board_setup_blob[n] = tswap32(board_setup_blob[n]);
99
}
100
- rom_add_blob_fixed("board-setup", board_setup_blob,
101
- sizeof(board_setup_blob), info->board_setup_addr);
102
+ rom_add_blob_fixed_as("board-setup", board_setup_blob,
103
+ sizeof(board_setup_blob), info->board_setup_addr, as);
104
}
105
106
static void default_reset_secondary(ARMCPU *cpu,
107
const struct arm_boot_info *info)
108
{
109
+ AddressSpace *as = arm_boot_address_space(cpu, info);
110
CPUState *cs = CPU(cpu);
111
112
- address_space_stl_notdirty(&address_space_memory, info->smp_bootreg_addr,
113
+ address_space_stl_notdirty(as, info->smp_bootreg_addr,
114
0, MEMTXATTRS_UNSPECIFIED, NULL);
115
cpu_set_pc(cs, info->smp_loader_start);
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool have_dtb(const struct arm_boot_info *info)
118
}
119
120
#define WRITE_WORD(p, value) do { \
121
- address_space_stl_notdirty(&address_space_memory, p, value, \
122
+ address_space_stl_notdirty(as, p, value, \
123
MEMTXATTRS_UNSPECIFIED, NULL); \
124
p += 4; \
125
} while (0)
126
127
-static void set_kernel_args(const struct arm_boot_info *info)
128
+static void set_kernel_args(const struct arm_boot_info *info, AddressSpace *as)
129
{
130
int initrd_size = info->initrd_size;
131
hwaddr base = info->loader_start;
132
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
133
int cmdline_size;
134
135
cmdline_size = strlen(info->kernel_cmdline);
136
- cpu_physical_memory_write(p + 8, info->kernel_cmdline,
137
- cmdline_size + 1);
138
+ address_space_write(as, p + 8, MEMTXATTRS_UNSPECIFIED,
139
+ (const uint8_t *)info->kernel_cmdline,
140
+ cmdline_size + 1);
141
cmdline_size = (cmdline_size >> 2) + 1;
142
WRITE_WORD(p, cmdline_size + 2);
143
WRITE_WORD(p, 0x54410009);
144
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
145
atag_board_len = (info->atag_board(info, atag_board_buf) + 3) & ~3;
146
WRITE_WORD(p, (atag_board_len + 8) >> 2);
147
WRITE_WORD(p, 0x414f4d50);
148
- cpu_physical_memory_write(p, atag_board_buf, atag_board_len);
149
+ address_space_write(as, p, MEMTXATTRS_UNSPECIFIED,
150
+ atag_board_buf, atag_board_len);
151
p += atag_board_len;
152
}
153
/* ATAG_END */
154
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
155
WRITE_WORD(p, 0);
156
}
157
158
-static void set_kernel_args_old(const struct arm_boot_info *info)
159
+static void set_kernel_args_old(const struct arm_boot_info *info,
160
+ AddressSpace *as)
161
{
162
hwaddr p;
163
const char *s;
164
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args_old(const struct arm_boot_info *info)
165
}
166
s = info->kernel_cmdline;
167
if (s) {
168
- cpu_physical_memory_write(p, s, strlen(s) + 1);
169
+ address_space_write(as, p, MEMTXATTRS_UNSPECIFIED,
170
+ (const uint8_t *)s, strlen(s) + 1);
171
} else {
172
WRITE_WORD(p, 0);
173
}
174
@@ -XXX,XX +XXX,XX @@ static void fdt_add_psci_node(void *fdt)
175
* @addr: the address to load the image at
176
* @binfo: struct describing the boot environment
177
* @addr_limit: upper limit of the available memory area at @addr
178
+ * @as: address space to load image to
179
*
180
* Load a device tree supplied by the machine or by the user with the
181
* '-dtb' command line option, and put it at offset @addr in target
182
@@ -XXX,XX +XXX,XX @@ static void fdt_add_psci_node(void *fdt)
183
* Note: Must not be called unless have_dtb(binfo) is true.
184
*/
185
static int load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
186
- hwaddr addr_limit)
187
+ hwaddr addr_limit, AddressSpace *as)
188
{
189
void *fdt = NULL;
190
int size, rc;
191
@@ -XXX,XX +XXX,XX @@ static int load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
192
/* Put the DTB into the memory map as a ROM image: this will ensure
193
* the DTB is copied again upon reset, even if addr points into RAM.
194
*/
195
- rom_add_blob_fixed("dtb", fdt, size, addr);
196
+ rom_add_blob_fixed_as("dtb", fdt, size, addr, as);
197
198
g_free(fdt);
199
200
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
201
}
202
203
if (cs == first_cpu) {
204
+ AddressSpace *as = arm_boot_address_space(cpu, info);
205
+
58
+
206
cpu_set_pc(cs, info->loader_start);
59
+ /*
207
60
+ * We write our startup "bootloader" at the very bottom of RAM,
208
if (!have_dtb(info)) {
61
+ * so that bit can't be used for the image. Luckily the Image
209
if (old_param) {
62
+ * format specification is that the image requests only an offset
210
- set_kernel_args_old(info);
63
+ * from a 2MB boundary, not an absolute load address. So if the
211
+ set_kernel_args_old(info, as);
64
+ * image requests an offset that might mean it overlaps with the
212
} else {
65
+ * bootloader, we can just load it starting at 2MB+offset rather
213
- set_kernel_args(info);
66
+ * than 0MB + offset.
214
+ set_kernel_args(info, as);
67
+ */
215
}
68
+ if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
216
}
69
+ kernel_load_offset += 2 * MiB;
217
} else {
70
+ }
218
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
219
220
static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
221
uint64_t *lowaddr, uint64_t *highaddr,
222
- int elf_machine)
223
+ int elf_machine, AddressSpace *as)
224
{
225
bool elf_is64;
226
union {
227
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
228
}
71
}
229
}
72
}
230
73
231
- ret = load_elf(info->kernel_filename, NULL, NULL,
232
- pentry, lowaddr, highaddr, big_endian, elf_machine,
233
- 1, data_swab);
234
+ ret = load_elf_as(info->kernel_filename, NULL, NULL,
235
+ pentry, lowaddr, highaddr, big_endian, elf_machine,
236
+ 1, data_swab, as);
237
if (ret <= 0) {
238
/* The header loaded but the image didn't */
239
exit(1);
240
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
241
}
242
243
static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
244
- hwaddr *entry)
245
+ hwaddr *entry, AddressSpace *as)
246
{
247
hwaddr kernel_load_offset = KERNEL64_LOAD_ADDR;
248
uint8_t *buffer;
249
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
250
}
251
252
*entry = mem_base + kernel_load_offset;
253
- rom_add_blob_fixed(filename, buffer, size, *entry);
254
+ rom_add_blob_fixed_as(filename, buffer, size, *entry, as);
255
256
g_free(buffer);
257
258
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
259
ARMCPU *cpu = n->cpu;
260
struct arm_boot_info *info =
261
container_of(n, struct arm_boot_info, load_kernel_notifier);
262
+ AddressSpace *as = arm_boot_address_space(cpu, info);
263
264
/* The board code is not supposed to set secure_board_setup unless
265
* running its code in secure mode is actually possible, and KVM
266
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
267
* the kernel is supposed to be loaded by the bootloader), copy the
268
* DTB to the base of RAM for the bootloader to pick up.
269
*/
270
- if (load_dtb(info->loader_start, info, 0) < 0) {
271
+ if (load_dtb(info->loader_start, info, 0, as) < 0) {
272
exit(1);
273
}
274
}
275
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
276
277
/* Assume that raw images are linux kernels, and ELF images are not. */
278
kernel_size = arm_load_elf(info, &elf_entry, &elf_low_addr,
279
- &elf_high_addr, elf_machine);
280
+ &elf_high_addr, elf_machine, as);
281
if (kernel_size > 0 && have_dtb(info)) {
282
/* If there is still some room left at the base of RAM, try and put
283
* the DTB there like we do for images loaded with -bios or -pflash.
284
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
285
if (elf_low_addr < info->loader_start) {
286
elf_low_addr = 0;
287
}
288
- if (load_dtb(info->loader_start, info, elf_low_addr) < 0) {
289
+ if (load_dtb(info->loader_start, info, elf_low_addr, as) < 0) {
290
exit(1);
291
}
292
}
293
}
294
entry = elf_entry;
295
if (kernel_size < 0) {
296
- kernel_size = load_uimage(info->kernel_filename, &entry, NULL,
297
- &is_linux, NULL, NULL);
298
+ kernel_size = load_uimage_as(info->kernel_filename, &entry, NULL,
299
+ &is_linux, NULL, NULL, as);
300
}
301
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64) && kernel_size < 0) {
302
kernel_size = load_aarch64_image(info->kernel_filename,
303
- info->loader_start, &entry);
304
+ info->loader_start, &entry, as);
305
is_linux = 1;
306
} else if (kernel_size < 0) {
307
/* 32-bit ARM */
308
entry = info->loader_start + KERNEL_LOAD_ADDR;
309
- kernel_size = load_image_targphys(info->kernel_filename, entry,
310
- info->ram_size - KERNEL_LOAD_ADDR);
311
+ kernel_size = load_image_targphys_as(info->kernel_filename, entry,
312
+ info->ram_size - KERNEL_LOAD_ADDR,
313
+ as);
314
is_linux = 1;
315
}
316
if (kernel_size < 0) {
317
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
318
uint32_t fixupcontext[FIXUP_MAX];
319
320
if (info->initrd_filename) {
321
- initrd_size = load_ramdisk(info->initrd_filename,
322
- info->initrd_start,
323
- info->ram_size -
324
- info->initrd_start);
325
+ initrd_size = load_ramdisk_as(info->initrd_filename,
326
+ info->initrd_start,
327
+ info->ram_size - info->initrd_start,
328
+ as);
329
if (initrd_size < 0) {
330
- initrd_size = load_image_targphys(info->initrd_filename,
331
- info->initrd_start,
332
- info->ram_size -
333
- info->initrd_start);
334
+ initrd_size = load_image_targphys_as(info->initrd_filename,
335
+ info->initrd_start,
336
+ info->ram_size -
337
+ info->initrd_start,
338
+ as);
339
}
340
if (initrd_size < 0) {
341
error_report("could not load initrd '%s'",
342
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
343
344
/* Place the DTB after the initrd in memory with alignment. */
345
dtb_start = QEMU_ALIGN_UP(info->initrd_start + initrd_size, align);
346
- if (load_dtb(dtb_start, info, 0) < 0) {
347
+ if (load_dtb(dtb_start, info, 0, as) < 0) {
348
exit(1);
349
}
350
fixupcontext[FIXUP_ARGPTR] = dtb_start;
351
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
352
fixupcontext[FIXUP_ENTRYPOINT] = entry;
353
354
write_bootloader("bootloader", info->loader_start,
355
- primary_loader, fixupcontext);
356
+ primary_loader, fixupcontext, as);
357
358
if (info->nb_cpus > 1) {
359
info->write_secondary_boot(cpu, info);
360
--
74
--
361
2.16.2
75
2.19.1
362
76
363
77
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Richard Henderson <rth@twiddle.net>
2
2
3
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
3
This can reduce the number of opcodes required for certain
4
complex forms of load-multiple (e.g. ld4.16b).
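
Roughly, the trick is to materialise the element size in a TCG temporary once, so the per-element address update becomes a plain register-register add (illustrative fragment, not the literal patch; the loop body is elided):

    TCGv_i64 tcg_ebytes = tcg_const_i64(ebytes);    /* constant built once */
    for (e = 0; e < elements; e++) {
        /* ...emit the load or store of one element at tcg_addr... */
        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
    }
    tcg_temp_free_i64(tcg_ebytes);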
5
6
Signed-off-by: Richard Henderson <rth@twiddle.net>
7
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
include/hw/arm/xlnx-zynqmp.h | 2 ++
11
target/arm/translate-a64.c | 12 ++++++++----
9
hw/arm/xlnx-zynqmp.c | 14 ++++++++++++++
12
1 file changed, 8 insertions(+), 4 deletions(-)
10
2 files changed, 16 insertions(+)
11
13
12
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/arm/xlnx-zynqmp.h
16
--- a/target/arm/translate-a64.c
15
+++ b/include/hw/arm/xlnx-zynqmp.h
17
+++ b/target/arm/translate-a64.c
16
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
17
#include "hw/dma/xlnx_dpdma.h"
19
bool is_store = !extract32(insn, 22, 1);
18
#include "hw/display/xlnx_dp.h"
20
bool is_postidx = extract32(insn, 23, 1);
19
#include "hw/intc/xlnx-zynqmp-ipi.h"
21
bool is_q = extract32(insn, 30, 1);
20
+#include "hw/timer/xlnx-zynqmp-rtc.h"
22
- TCGv_i64 tcg_addr, tcg_rn;
21
23
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
22
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
24
23
#define XLNX_ZYNQMP(obj) OBJECT_CHECK(XlnxZynqMPState, (obj), \
25
int ebytes = 1 << size;
24
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPState {
26
int elements = (is_q ? 128 : 64) / (8 << size);
25
XlnxDPState dp;
27
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
26
XlnxDPDMAState dpdma;
28
tcg_rn = cpu_reg_sp(s, rn);
27
XlnxZynqMPIPI ipi;
29
tcg_addr = tcg_temp_new_i64();
28
+ XlnxZynqMPRTC rtc;
30
tcg_gen_mov_i64(tcg_addr, tcg_rn);
29
31
+ tcg_ebytes = tcg_const_i64(ebytes);
30
char *boot_cpu;
32
31
ARMCPU *boot_cpu_ptr;
33
for (r = 0; r < rpt; r++) {
32
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
34
int e;
33
index XXXXXXX..XXXXXXX 100644
35
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
34
--- a/hw/arm/xlnx-zynqmp.c
36
clear_vec_high(s, is_q, tt);
35
+++ b/hw/arm/xlnx-zynqmp.c
37
}
36
@@ -XXX,XX +XXX,XX @@
38
}
37
#define IPI_ADDR 0xFF300000
39
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
38
#define IPI_IRQ 64
40
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
39
41
tt = (tt + 1) % 32;
40
+#define RTC_ADDR 0xffa60000
42
}
41
+#define RTC_IRQ 26
43
}
42
+
44
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
43
#define SDHCI_CAPABILITIES 0x280737ec6481 /* Datasheet: UG1085 (v1.7) */
45
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
44
46
}
45
static const uint64_t gem_addr[XLNX_ZYNQMP_NUM_GEMS] = {
47
}
46
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
48
+ tcg_temp_free_i64(tcg_ebytes);
47
49
tcg_temp_free_i64(tcg_addr);
48
object_initialize(&s->ipi, sizeof(s->ipi), TYPE_XLNX_ZYNQMP_IPI);
49
qdev_set_parent_bus(DEVICE(&s->ipi), sysbus_get_default());
50
+
51
+ object_initialize(&s->rtc, sizeof(s->rtc), TYPE_XLNX_ZYNQMP_RTC);
52
+ qdev_set_parent_bus(DEVICE(&s->rtc), sysbus_get_default());
53
}
50
}
54
51
55
static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
52
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
56
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
53
bool replicate = false;
54
int index = is_q << 3 | S << 2 | size;
55
int ebytes, xs;
56
- TCGv_i64 tcg_addr, tcg_rn;
57
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
58
59
switch (scale) {
60
case 3:
61
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
62
tcg_rn = cpu_reg_sp(s, rn);
63
tcg_addr = tcg_temp_new_i64();
64
tcg_gen_mov_i64(tcg_addr, tcg_rn);
65
+ tcg_ebytes = tcg_const_i64(ebytes);
66
67
for (xs = 0; xs < selem; xs++) {
68
if (replicate) {
69
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
70
do_vec_st(s, rt, index, tcg_addr, scale);
71
}
72
}
73
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
74
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
75
rt = (rt + 1) % 32;
57
}
76
}
58
sysbus_mmio_map(SYS_BUS_DEVICE(&s->ipi), 0, IPI_ADDR);
77
59
sysbus_connect_irq(SYS_BUS_DEVICE(&s->ipi), 0, gic_spi[IPI_IRQ]);
78
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
60
+
79
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
61
+ object_property_set_bool(OBJECT(&s->rtc), true, "realized", &err);
80
}
62
+ if (err) {
81
}
63
+ error_propagate(errp, err);
82
+ tcg_temp_free_i64(tcg_ebytes);
64
+ return;
83
tcg_temp_free_i64(tcg_addr);
65
+ }
66
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->rtc), 0, RTC_ADDR);
67
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->rtc), 0, gic_spi[RTC_IRQ]);
68
}
84
}
69
85
70
static Property xlnx_zynqmp_props[] = {
71
--
86
--
72
2.16.2
87
2.19.1
73
88
74
89
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
This is done generically in translator_loop.
4
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-7-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/translate-a64.c | 29 +++++++++++++++++++++++++++++
12
target/arm/translate-a64.c | 1 -
9
1 file changed, 29 insertions(+)
13
target/arm/translate.c | 1 -
14
2 files changed, 2 deletions(-)
10
15
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
18
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
19
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
16
case 0x19: /* FMULX */
21
17
is_fp = true;
22
static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
18
break;
23
{
19
+ case 0x1d: /* SQRDMLAH */
24
- tcg_clear_temp_count();
20
+ case 0x1f: /* SQRDMLSH */
25
}
21
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
26
22
+ unallocated_encoding(s);
27
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
23
+ return;
28
diff --git a/target/arm/translate.c b/target/arm/translate.c
24
+ }
29
index XXXXXXX..XXXXXXX 100644
25
+ break;
30
--- a/target/arm/translate.c
26
default:
31
+++ b/target/arm/translate.c
27
unallocated_encoding(s);
32
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
28
return;
33
tcg_gen_movi_i32(tmp, 0);
29
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
34
store_cpu_field(tmp, condexec_bits);
30
tcg_op, tcg_idx);
35
}
31
}
36
- tcg_clear_temp_count();
32
break;
37
}
33
+ case 0x1d: /* SQRDMLAH */
38
34
+ read_vec_element_i32(s, tcg_res, rd, pass,
39
static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
35
+ is_scalar ? size : MO_32);
36
+ if (size == 1) {
37
+ gen_helper_neon_qrdmlah_s16(tcg_res, cpu_env,
38
+ tcg_op, tcg_idx, tcg_res);
39
+ } else {
40
+ gen_helper_neon_qrdmlah_s32(tcg_res, cpu_env,
41
+ tcg_op, tcg_idx, tcg_res);
42
+ }
43
+ break;
44
+ case 0x1f: /* SQRDMLSH */
45
+ read_vec_element_i32(s, tcg_res, rd, pass,
46
+ is_scalar ? size : MO_32);
47
+ if (size == 1) {
48
+ gen_helper_neon_qrdmlsh_s16(tcg_res, cpu_env,
49
+ tcg_op, tcg_idx, tcg_res);
50
+ } else {
51
+ gen_helper_neon_qrdmlsh_s32(tcg_res, cpu_env,
52
+ tcg_op, tcg_idx, tcg_res);
53
+ }
54
+ break;
55
default:
56
g_assert_not_reached();
57
}
58
--
40
--
59
2.16.2
41
2.19.1
60
42
61
43
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20180228193125.20577-13-richard.henderson@linaro.org
4
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
[PMM: renamed e1/e2/e3/e4 to use the same naming as the version
7
of the pseudocode in the Arm ARM]
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
---
7
---
10
target/arm/helper.h | 11 ++++
8
target/arm/translate-a64.c | 28 +++-------------------------
11
target/arm/translate-a64.c | 94 +++++++++++++++++++++++++---
9
1 file changed, 3 insertions(+), 25 deletions(-)
12
target/arm/vec_helper.c | 149 +++++++++++++++++++++++++++++++++++++++++++++
13
3 files changed, 246 insertions(+), 8 deletions(-)
14
10
15
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
18
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
20
DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
23
+DEF_HELPER_FLAGS_5(gvec_fcmlah, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(gvec_fcmlas, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+
34
#ifdef TARGET_AARCH64
35
#include "helper-a64.h"
36
#endif
37
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-a64.c
13
--- a/target/arm/translate-a64.c
40
+++ b/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
15
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
42
}
16
for (xs = 0; xs < selem; xs++) {
43
feature = ARM_FEATURE_V8_RDM;
17
if (replicate) {
44
break;
18
/* Load and replicate to all elements */
45
+ case 0x8: /* FCMLA, #0 */
19
- uint64_t mulconst;
46
+ case 0x9: /* FCMLA, #90 */
20
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
47
+ case 0xa: /* FCMLA, #180 */
21
48
+ case 0xb: /* FCMLA, #270 */
22
tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
49
case 0xc: /* FCADD, #90 */
23
get_mem_index(s), s->be_data + scale);
50
case 0xe: /* FCADD, #270 */
24
- switch (scale) {
51
if (size == 0
25
- case 0:
52
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
26
- mulconst = 0x0101010101010101ULL;
53
}
27
- break;
54
return;
28
- case 1:
55
29
- mulconst = 0x0001000100010001ULL;
56
+ case 0x8: /* FCMLA, #0 */
30
- break;
57
+ case 0x9: /* FCMLA, #90 */
31
- case 2:
58
+ case 0xa: /* FCMLA, #180 */
32
- mulconst = 0x0000000100000001ULL;
59
+ case 0xb: /* FCMLA, #270 */
33
- break;
60
+ rot = extract32(opcode, 0, 2);
34
- case 3:
61
+ switch (size) {
35
- mulconst = 0;
62
+ case 1:
36
- break;
63
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, true, rot,
37
- default:
64
+ gen_helper_gvec_fcmlah);
38
- g_assert_not_reached();
65
+ break;
66
+ case 2:
67
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
68
+ gen_helper_gvec_fcmlas);
69
+ break;
70
+ case 3:
71
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
72
+ gen_helper_gvec_fcmlad);
73
+ break;
74
+ default:
75
+ g_assert_not_reached();
76
+ }
77
+ return;
78
+
79
case 0xc: /* FCADD, #90 */
80
case 0xe: /* FCADD, #270 */
81
rot = extract32(opcode, 1, 1);
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
83
int rn = extract32(insn, 5, 5);
84
int rd = extract32(insn, 0, 5);
85
bool is_long = false;
86
- bool is_fp = false;
87
+ int is_fp = 0;
88
bool is_fp16 = false;
89
int index;
90
TCGv_ptr fpst;
91
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
92
case 0x05: /* FMLS */
93
case 0x09: /* FMUL */
94
case 0x19: /* FMULX */
95
- is_fp = true;
96
+ is_fp = 1;
97
break;
98
case 0x1d: /* SQRDMLAH */
99
case 0x1f: /* SQRDMLSH */
100
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
101
return;
102
}
103
break;
104
+ case 0x11: /* FCMLA #0 */
105
+ case 0x13: /* FCMLA #90 */
106
+ case 0x15: /* FCMLA #180 */
107
+ case 0x17: /* FCMLA #270 */
108
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
109
+ unallocated_encoding(s);
110
+ return;
111
+ }
112
+ is_fp = 2;
113
+ break;
114
default:
115
unallocated_encoding(s);
116
return;
117
}
118
119
- if (is_fp) {
120
+ switch (is_fp) {
121
+ case 1: /* normal fp */
122
/* convert insn encoded size to TCGMemOp size */
123
switch (size) {
124
case 0: /* half-precision */
125
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
126
- unallocated_encoding(s);
127
- return;
128
- }
39
- }
129
size = MO_16;
40
- if (mulconst) {
130
+ is_fp16 = true;
41
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
131
break;
42
- }
132
case MO_32: /* single precision */
43
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
133
case MO_64: /* double precision */
44
- if (is_q) {
134
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
45
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
135
unallocated_encoding(s);
46
- }
136
return;
47
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
137
}
48
+ (is_q + 1) * 8, vec_full_reg_size(s),
138
- } else {
49
+ tcg_tmp);
139
+ break;
50
tcg_temp_free_i64(tcg_tmp);
140
+
51
- clear_vec_high(s, is_q, rt);
141
+ case 2: /* complex fp */
52
} else {
142
+ /* Each indexable element is a complex pair. */
53
/* Load/store one element per register */
143
+ size <<= 1;
54
if (is_load) {
144
+ switch (size) {
145
+ case MO_32:
146
+ if (h && !is_q) {
147
+ unallocated_encoding(s);
148
+ return;
149
+ }
150
+ is_fp16 = true;
151
+ break;
152
+ case MO_64:
153
+ break;
154
+ default:
155
+ unallocated_encoding(s);
156
+ return;
157
+ }
158
+ break;
159
+
160
+ default: /* integer */
161
switch (size) {
162
case MO_8:
163
case MO_64:
164
unallocated_encoding(s);
165
return;
166
}
167
+ break;
168
+ }
169
+ if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
170
+ unallocated_encoding(s);
171
+ return;
172
}
173
174
/* Given TCGMemOp size, adjust register and indexing. */
175
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
176
fpst = NULL;
177
}
178
179
+ switch (16 * u + opcode) {
180
+ case 0x11: /* FCMLA #0 */
181
+ case 0x13: /* FCMLA #90 */
182
+ case 0x15: /* FCMLA #180 */
183
+ case 0x17: /* FCMLA #270 */
184
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
185
+ vec_full_reg_offset(s, rn),
186
+ vec_reg_offset(s, rm, index, size), fpst,
187
+ is_q ? 16 : 8, vec_full_reg_size(s),
188
+ extract32(insn, 13, 2), /* rot */
189
+ size == MO_64
190
+ ? gen_helper_gvec_fcmlas_idx
191
+ : gen_helper_gvec_fcmlah_idx);
192
+ tcg_temp_free_ptr(fpst);
193
+ return;
194
+ }
195
+
196
if (size == 3) {
197
TCGv_i64 tcg_idx = tcg_temp_new_i64();
198
int pass;
199
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/target/arm/vec_helper.c
202
+++ b/target/arm/vec_helper.c
203
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
204
}
205
clear_tail(d, opr_sz, simd_maxsz(desc));
206
}
207
+
208
+void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm,
209
+ void *vfpst, uint32_t desc)
210
+{
211
+ uintptr_t opr_sz = simd_oprsz(desc);
212
+ float16 *d = vd;
213
+ float16 *n = vn;
214
+ float16 *m = vm;
215
+ float_status *fpst = vfpst;
216
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
217
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
218
+ uint32_t neg_real = flip ^ neg_imag;
219
+ uintptr_t i;
220
+
221
+ /* Shift boolean to the sign bit so we can xor to negate. */
222
+ neg_real <<= 15;
223
+ neg_imag <<= 15;
224
+
225
+ for (i = 0; i < opr_sz / 2; i += 2) {
226
+ float16 e2 = n[H2(i + flip)];
227
+ float16 e1 = m[H2(i + flip)] ^ neg_real;
228
+ float16 e4 = e2;
229
+ float16 e3 = m[H2(i + 1 - flip)] ^ neg_imag;
230
+
231
+ d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
232
+ d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
233
+ }
234
+ clear_tail(d, opr_sz, simd_maxsz(desc));
235
+}
236
+
237
+void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
238
+ void *vfpst, uint32_t desc)
239
+{
240
+ uintptr_t opr_sz = simd_oprsz(desc);
241
+ float16 *d = vd;
242
+ float16 *n = vn;
243
+ float16 *m = vm;
244
+ float_status *fpst = vfpst;
245
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
246
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
247
+ uint32_t neg_real = flip ^ neg_imag;
248
+ uintptr_t i;
249
+ float16 e1 = m[H2(flip)];
250
+ float16 e3 = m[H2(1 - flip)];
251
+
252
+ /* Shift boolean to the sign bit so we can xor to negate. */
253
+ neg_real <<= 15;
254
+ neg_imag <<= 15;
255
+ e1 ^= neg_real;
256
+ e3 ^= neg_imag;
257
+
258
+ for (i = 0; i < opr_sz / 2; i += 2) {
259
+ float16 e2 = n[H2(i + flip)];
260
+ float16 e4 = e2;
261
+
262
+ d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
263
+ d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
264
+ }
265
+ clear_tail(d, opr_sz, simd_maxsz(desc));
266
+}
267
+
268
+void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm,
269
+ void *vfpst, uint32_t desc)
270
+{
271
+ uintptr_t opr_sz = simd_oprsz(desc);
272
+ float32 *d = vd;
273
+ float32 *n = vn;
274
+ float32 *m = vm;
275
+ float_status *fpst = vfpst;
276
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
277
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
278
+ uint32_t neg_real = flip ^ neg_imag;
279
+ uintptr_t i;
280
+
281
+ /* Shift boolean to the sign bit so we can xor to negate. */
282
+ neg_real <<= 31;
283
+ neg_imag <<= 31;
284
+
285
+ for (i = 0; i < opr_sz / 4; i += 2) {
286
+ float32 e2 = n[H4(i + flip)];
287
+ float32 e1 = m[H4(i + flip)] ^ neg_real;
288
+ float32 e4 = e2;
289
+ float32 e3 = m[H4(i + 1 - flip)] ^ neg_imag;
290
+
291
+ d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
292
+ d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
293
+ }
294
+ clear_tail(d, opr_sz, simd_maxsz(desc));
295
+}
296
+
297
+void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
298
+ void *vfpst, uint32_t desc)
299
+{
300
+ uintptr_t opr_sz = simd_oprsz(desc);
301
+ float32 *d = vd;
302
+ float32 *n = vn;
303
+ float32 *m = vm;
304
+ float_status *fpst = vfpst;
305
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
306
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
307
+ uint32_t neg_real = flip ^ neg_imag;
308
+ uintptr_t i;
309
+ float32 e1 = m[H4(flip)];
310
+ float32 e3 = m[H4(1 - flip)];
311
+
312
+ /* Shift boolean to the sign bit so we can xor to negate. */
313
+ neg_real <<= 31;
314
+ neg_imag <<= 31;
315
+ e1 ^= neg_real;
316
+ e3 ^= neg_imag;
317
+
318
+ for (i = 0; i < opr_sz / 4; i += 2) {
319
+ float32 e2 = n[H4(i + flip)];
320
+ float32 e4 = e2;
321
+
322
+ d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
323
+ d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
324
+ }
325
+ clear_tail(d, opr_sz, simd_maxsz(desc));
326
+}
327
+
328
+void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
329
+ void *vfpst, uint32_t desc)
330
+{
331
+ uintptr_t opr_sz = simd_oprsz(desc);
332
+ float64 *d = vd;
333
+ float64 *n = vn;
334
+ float64 *m = vm;
335
+ float_status *fpst = vfpst;
336
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
337
+ uint64_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
338
+ uint64_t neg_real = flip ^ neg_imag;
339
+ uintptr_t i;
340
+
341
+ /* Shift boolean to the sign bit so we can xor to negate. */
342
+ neg_real <<= 63;
343
+ neg_imag <<= 63;
344
+
345
+ for (i = 0; i < opr_sz / 8; i += 2) {
346
+ float64 e2 = n[i + flip];
347
+ float64 e1 = m[i + flip] ^ neg_real;
348
+ float64 e4 = e2;
349
+ float64 e3 = m[i + 1 - flip] ^ neg_imag;
350
+
351
+ d[i] = float64_muladd(e2, e1, d[i], 0, fpst);
352
+ d[i + 1] = float64_muladd(e4, e3, d[i + 1], 0, fpst);
353
+ }
354
+ clear_tail(d, opr_sz, simd_maxsz(desc));
355
+}
356
--
55
--
357
2.16.2
56
2.19.1
358
57
359
58
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The integer size check was already outside of the opcode switch;
3
For a sequence of loads or stores from a single register,
4
move the floating-point size check outside as well. Unify the
4
little-endian operations can be promoted to an 8-byte op.
5
size vs index adjustment between fp and integer paths.
5
This can reduce the number of operations by a factor of 8.
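
Worked example (assuming the promotion added below):

    /* LD1 {v0.16b}, [x0] on a little-endian guest: selem == 1, size == 0,
     * is_q == 1, so:
     *   size     -> 3              (promoted)
     *   ebytes   -> 1 << 3 == 8
     *   elements -> 16 / 8 == 2
     * i.e. two 8-byte loads replace sixteen 1-byte loads.
     */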
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180228193125.20577-4-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/translate-a64.c | 65 +++++++++++++++++++++++-----------------------
12
target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
13
1 file changed, 32 insertions(+), 33 deletions(-)
13
1 file changed, 40 insertions(+), 26 deletions(-)
14
14
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
17
--- a/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
19
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
19
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
20
case 0x05: /* FMLS */
20
21
case 0x09: /* FMUL */
21
/* Store from vector register to memory */
22
case 0x19: /* FMULX */
22
static void do_vec_st(DisasContext *s, int srcidx, int element,
23
- if (size == 1) {
23
- TCGv_i64 tcg_addr, int size)
24
- unallocated_encoding(s);
24
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
25
- return;
25
{
26
- }
26
- TCGMemOp memop = s->be_data + size;
27
is_fp = true;
27
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
28
break;
28
29
default:
29
read_vec_element(s, tcg_tmp, srcidx, element, size);
30
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
30
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
31
if (is_fp) {
31
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
32
/* convert insn encoded size to TCGMemOp size */
32
33
switch (size) {
33
tcg_temp_free_i64(tcg_tmp);
34
- case 2: /* single precision */
34
}
35
- size = MO_32;
35
36
- index = h << 1 | l;
36
/* Load from memory to vector register */
37
- rm |= (m << 4);
37
static void do_vec_ld(DisasContext *s, int destidx, int element,
38
- break;
38
- TCGv_i64 tcg_addr, int size)
39
- case 3: /* double precision */
39
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
40
- size = MO_64;
40
{
41
- if (l || !is_q) {
41
- TCGMemOp memop = s->be_data + size;
42
+ case 0: /* half-precision */
42
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
43
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
43
44
unallocated_encoding(s);
44
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
45
return;
45
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
46
write_vec_element(s, tcg_tmp, destidx, element, size);
47
48
tcg_temp_free_i64(tcg_tmp);
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
50
bool is_postidx = extract32(insn, 23, 1);
51
bool is_q = extract32(insn, 30, 1);
52
TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
53
+ TCGMemOp endian = s->be_data;
54
55
- int ebytes = 1 << size;
56
- int elements = (is_q ? 128 : 64) / (8 << size);
57
+ int ebytes; /* bytes per element */
58
+ int elements; /* elements per vector */
59
int rpt; /* num iterations */
60
int selem; /* structure elements */
61
int r;
62
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
63
gen_check_sp_alignment(s);
64
}
65
66
+ /* For our purposes, bytes are always little-endian. */
67
+ if (size == 0) {
68
+ endian = MO_LE;
69
+ }
70
+
71
+ /* Consecutive little-endian elements from a single register
72
+ * can be promoted to a larger little-endian operation.
73
+ */
74
+ if (selem == 1 && endian == MO_LE) {
75
+ size = 3;
76
+ }
77
+ ebytes = 1 << size;
78
+ elements = (is_q ? 16 : 8) / ebytes;
79
+
80
tcg_rn = cpu_reg_sp(s, rn);
81
tcg_addr = tcg_temp_new_i64();
82
tcg_gen_mov_i64(tcg_addr, tcg_rn);
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
84
for (r = 0; r < rpt; r++) {
85
int e;
86
for (e = 0; e < elements; e++) {
87
- int tt = (rt + r) % 32;
88
int xs;
89
for (xs = 0; xs < selem; xs++) {
90
+ int tt = (rt + r + xs) % 32;
91
if (is_store) {
92
- do_vec_st(s, tt, e, tcg_addr, size);
93
+ do_vec_st(s, tt, e, tcg_addr, size, endian);
94
} else {
95
- do_vec_ld(s, tt, e, tcg_addr, size);
96
-
97
- /* For non-quad operations, setting a slice of the low
98
- * 64 bits of the register clears the high 64 bits (in
99
- * the ARM ARM pseudocode this is implicit in the fact
100
- * that 'rval' is a 64 bit wide variable).
101
- * For quad operations, we might still need to zero the
102
- * high bits of SVE. We optimize by noticing that we only
103
- * need to do this the first time we touch a register.
104
- */
105
- if (e == 0 && (r == 0 || xs == selem - 1)) {
106
- clear_vec_high(s, is_q, tt);
107
- }
108
+ do_vec_ld(s, tt, e, tcg_addr, size, endian);
109
}
110
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
111
- tt = (tt + 1) % 32;
46
}
112
}
47
- index = h;
48
- rm |= (m << 4);
49
- break;
50
- case 0: /* half precision */
51
size = MO_16;
52
- index = h << 2 | l << 1 | m;
53
- is_fp16 = true;
54
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
55
- break;
56
- }
57
- /* fallthru */
58
- default: /* unallocated */
59
- unallocated_encoding(s);
60
- return;
61
- }
62
- } else {
63
- switch (size) {
64
- case 1:
65
- index = h << 2 | l << 1 | m;
66
break;
67
- case 2:
68
- index = h << 1 | l;
69
- rm |= (m << 4);
70
+ case MO_32: /* single precision */
71
+ case MO_64: /* double precision */
72
break;
73
default:
74
unallocated_encoding(s);
75
return;
76
}
113
}
77
+ } else {
114
}
78
+ switch (size) {
115
79
+ case MO_8:
116
+ if (!is_store) {
80
+ case MO_64:
117
+ /* For non-quad operations, setting a slice of the low
81
+ unallocated_encoding(s);
118
+ * 64 bits of the register clears the high 64 bits (in
82
+ return;
119
+ * the ARM ARM pseudocode this is implicit in the fact
120
+ * that 'rval' is a 64 bit wide variable).
121
+ * For quad operations, we might still need to zero the
122
+ * high bits of SVE.
123
+ */
124
+ for (r = 0; r < rpt * selem; r++) {
125
+ int tt = (rt + r) % 32;
126
+ clear_vec_high(s, is_q, tt);
83
+ }
127
+ }
84
+ }
128
+ }
85
+
129
+
86
+ /* Given TCGMemOp size, adjust register and indexing. */
130
if (is_postidx) {
87
+ switch (size) {
131
int rm = extract32(insn, 16, 5);
88
+ case MO_16:
132
if (rm == 31) {
89
+ index = h << 2 | l << 1 | m;
133
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
90
+ break;
134
} else {
91
+ case MO_32:
135
/* Load/store one element per register */
92
+ index = h << 1 | l;
136
if (is_load) {
93
+ rm |= m << 4;
137
- do_vec_ld(s, rt, index, tcg_addr, scale);
94
+ break;
138
+ do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
95
+ case MO_64:
139
} else {
96
+ if (l || !is_q) {
140
- do_vec_st(s, rt, index, tcg_addr, scale);
97
+ unallocated_encoding(s);
141
+ do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
98
+ return;
142
}
99
+ }
143
}
100
+ index = h;
144
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
101
+ rm |= m << 4;
102
+ break;
103
+ default:
104
+ g_assert_not_reached();
105
}
106
107
if (!fp_access_check(s)) {
108
--
145
--
109
2.16.2
146
2.19.1
110
147
111
148
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-9-richard.henderson@linaro.org
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
6
[PMM: drop change to now-deleted cpu_mode_names array]
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate.c | 46 ++++++++++++++++++++++++++++++++++++++++++----
10
target/arm/translate.c | 4 ++--
9
1 file changed, 42 insertions(+), 4 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
10
12
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
15
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
16
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static const char *regnames[] =
17
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
18
19
#include "exec/gen-icount.h"
20
21
-static const char *regnames[] =
22
+static const char * const regnames[] =
16
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
23
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
17
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
24
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
18
25
19
+/* Function prototypes for gen_ functions calling Neon helpers. */
26
@@ -XXX,XX +XXX,XX @@ static struct {
20
+typedef void NeonGenThreeOpEnvFn(TCGv_i32, TCGv_env, TCGv_i32,
27
int nregs;
21
+ TCGv_i32, TCGv_i32);
28
int interleave;
22
+
29
int spacing;
23
/* initialize TCG globals. */
30
-} neon_ls_element_type[11] = {
24
void arm_translate_init(void)
31
+} const neon_ls_element_type[11] = {
25
{
32
{4, 4, 1},
26
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
33
{4, 4, 2},
27
}
34
{4, 1, 1},
28
neon_store_reg64(cpu_V0, rd + pass);
29
}
30
-
31
-
32
break;
33
- default: /* 14 and 15 are RESERVED */
34
- return 1;
35
+ case 14: /* VQRDMLAH scalar */
36
+ case 15: /* VQRDMLSH scalar */
37
+ {
38
+ NeonGenThreeOpEnvFn *fn;
39
+
40
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
41
+ return 1;
42
+ }
43
+ if (u && ((rd | rn) & 1)) {
44
+ return 1;
45
+ }
46
+ if (op == 14) {
47
+ if (size == 1) {
48
+ fn = gen_helper_neon_qrdmlah_s16;
49
+ } else {
50
+ fn = gen_helper_neon_qrdmlah_s32;
51
+ }
52
+ } else {
53
+ if (size == 1) {
54
+ fn = gen_helper_neon_qrdmlsh_s16;
55
+ } else {
56
+ fn = gen_helper_neon_qrdmlsh_s32;
57
+ }
58
+ }
59
+
60
+ tmp2 = neon_get_scalar(size, rm);
61
+ for (pass = 0; pass < (u ? 4 : 2); pass++) {
62
+ tmp = neon_load_reg(rn, pass);
63
+ tmp3 = neon_load_reg(rd, pass);
64
+ fn(tmp, cpu_env, tmp, tmp2, tmp3);
65
+ tcg_temp_free_i32(tmp3);
66
+ neon_store_reg(rd, pass, tmp);
67
+ }
68
+ tcg_temp_free_i32(tmp2);
69
+ }
70
+ break;
71
+ default:
72
+ g_assert_not_reached();
73
}
74
}
75
} else { /* size == 3 */
76
--
35
--
77
2.16.2
36
2.19.1
78
37
79
38
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Allow the guest to determine the time set from the QEMU command line.
3
Also introduces neon_element_offset to find the env offset
4
of a specific element within a neon register.
4
5
5
This includes adding a trace event to debug the new time.
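
The bookkeeping added below boils down to keeping a seconds offset between the guest's wall-clock time and rtc_clock (sketch only; the helper name is invented for illustration):

    /* tick_offset == wall-clock seconds at init minus rtc_clock seconds at init */
    static uint32_t example_rtc_count(XlnxZynqMPRTC *s)
    {
        int64_t now = qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
        return s->tick_offset + now;
    }

pre_save adds the current rtc_clock seconds to tick_offset and post_load subtracts them again, so the time the guest spent stopped (for example across a migration) is not counted.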
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
7
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
7
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
include/hw/timer/xlnx-zynqmp-rtc.h | 2 ++
11
target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
13
hw/timer/xlnx-zynqmp-rtc.c | 58 ++++++++++++++++++++++++++++++++++++++
12
1 file changed, 36 insertions(+), 27 deletions(-)
14
hw/timer/trace-events | 3 ++
15
3 files changed, 63 insertions(+)
16
13
17
diff --git a/include/hw/timer/xlnx-zynqmp-rtc.h b/include/hw/timer/xlnx-zynqmp-rtc.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/timer/xlnx-zynqmp-rtc.h
16
--- a/target/arm/translate.c
20
+++ b/include/hw/timer/xlnx-zynqmp-rtc.h
17
+++ b/target/arm/translate.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPRTC {
18
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
22
qemu_irq irq_rtc_int;
qemu_irq irq_addr_error_int;

+ uint32_t tick_offset;
+
uint32_t regs[XLNX_ZYNQMP_RTC_R_MAX];
RegisterInfo regs_info[XLNX_ZYNQMP_RTC_R_MAX];
} XlnxZynqMPRTC;
diff --git a/hw/timer/xlnx-zynqmp-rtc.c b/hw/timer/xlnx-zynqmp-rtc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/xlnx-zynqmp-rtc.c
+++ b/hw/timer/xlnx-zynqmp-rtc.c
@@ -XXX,XX +XXX,XX @@
#include "hw/register.h"
#include "qemu/bitops.h"
#include "qemu/log.h"
+#include "hw/ptimer.h"
+#include "qemu/cutils.h"
+#include "sysemu/sysemu.h"
+#include "trace.h"
#include "hw/timer/xlnx-zynqmp-rtc.h"

#ifndef XLNX_ZYNQMP_RTC_ERR_DEBUG
@@ -XXX,XX +XXX,XX @@ static void addr_error_int_update_irq(XlnxZynqMPRTC *s)
qemu_set_irq(s->irq_addr_error_int, pending);
}

+static uint32_t rtc_get_count(XlnxZynqMPRTC *s)
+{
+ int64_t now = qemu_clock_get_ns(rtc_clock);
+ return s->tick_offset + now / NANOSECONDS_PER_SECOND;
+}
+
+static uint64_t current_time_postr(RegisterInfo *reg, uint64_t val64)
+{
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
+
+ return rtc_get_count(s);
+}
+
static void rtc_int_status_postw(RegisterInfo *reg, uint64_t val64)
{
XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
@@ -XXX,XX +XXX,XX @@ static uint64_t addr_error_int_dis_prew(RegisterInfo *reg, uint64_t val64)

static const RegisterAccessInfo rtc_regs_info[] = {
{ .name = "SET_TIME_WRITE", .addr = A_SET_TIME_WRITE,
+ .unimp = MAKE_64BIT_MASK(0, 32),
},{ .name = "SET_TIME_READ", .addr = A_SET_TIME_READ,
.ro = 0xffffffff,
+ .post_read = current_time_postr,
},{ .name = "CALIB_WRITE", .addr = A_CALIB_WRITE,
+ .unimp = MAKE_64BIT_MASK(0, 32),
},{ .name = "CALIB_READ", .addr = A_CALIB_READ,
.ro = 0x1fffff,
},{ .name = "CURRENT_TIME", .addr = A_CURRENT_TIME,
.ro = 0xffffffff,
+ .post_read = current_time_postr,
},{ .name = "CURRENT_TICK", .addr = A_CURRENT_TICK,
.ro = 0xffff,
},{ .name = "ALARM", .addr = A_ALARM,
@@ -XXX,XX +XXX,XX @@ static void rtc_init(Object *obj)
XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(obj);
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
RegisterInfoArray *reg_array;
+ struct tm current_tm;

memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_RTC,
XLNX_ZYNQMP_RTC_R_MAX * 4);
@@ -XXX,XX +XXX,XX @@ static void rtc_init(Object *obj)
sysbus_init_mmio(sbd, &s->iomem);
sysbus_init_irq(sbd, &s->irq_rtc_int);
sysbus_init_irq(sbd, &s->irq_addr_error_int);
+
+ qemu_get_timedate(&current_tm, 0);
+ s->tick_offset = mktimegm(&current_tm) -
+ qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
+
+ trace_xlnx_zynqmp_rtc_gettime(current_tm.tm_year, current_tm.tm_mon,
+ current_tm.tm_mday, current_tm.tm_hour,
+ current_tm.tm_min, current_tm.tm_sec);
+}
+
+static int rtc_pre_save(void *opaque)
+{
+ XlnxZynqMPRTC *s = opaque;
+ int64_t now = qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
+
+ /* Add the time at migration */
+ s->tick_offset = s->tick_offset + now;
+
+ return 0;
+}
+
+static int rtc_post_load(void *opaque, int version_id)
+{
+ XlnxZynqMPRTC *s = opaque;
+ int64_t now = qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
+
+ /* Subtract the time after migration. This combined with the pre_save
+ * action results in us having subtracted the time that the guest was
+ * stopped to the offset.
+ */
+ s->tick_offset = s->tick_offset - now;
+
+ return 0;
}

static const VMStateDescription vmstate_rtc = {
.name = TYPE_XLNX_ZYNQMP_RTC,
.version_id = 1,
.minimum_version_id = 1,
+ .pre_save = rtc_pre_save,
+ .post_load = rtc_post_load,
.fields = (VMStateField[]) {
VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPRTC, XLNX_ZYNQMP_RTC_R_MAX),
+ VMSTATE_UINT32(tick_offset, XlnxZynqMPRTC),
VMSTATE_END_OF_LIST(),
}
};
diff --git a/hw/timer/trace-events b/hw/timer/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/trace-events
+++ b/hw/timer/trace-events
@@ -XXX,XX +XXX,XX @@ systick_write(uint64_t addr, uint32_t value, unsigned size) "systick write addr
cmsdk_apb_timer_read(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB timer read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
cmsdk_apb_timer_write(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB timer write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
cmsdk_apb_timer_reset(void) "CMSDK APB timer: reset"
+
+# hw/timer/xlnx-zynqmp-rtc.c
+xlnx_zynqmp_rtc_gettime(int year, int month, int day, int hour, int min, int sec) "Get time from host: %d-%d-%d %2d:%02d:%02d"
--
2.16.2


return vfp_reg_offset(0, sreg);
}

+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static inline long
+neon_element_offset(int reg, int element, TCGMemOp size)
+{
+ int element_size = 1 << size;
+ int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+ /* Calculate the offset assuming fully little-endian,
+ * then XOR to account for the order of the 8-byte units.
+ */
+ if (element_size < 8) {
+ ofs ^= 8 - element_size;
+ }
+#endif
+ return neon_reg_offset(reg, 0) + ofs;
+}
+
static TCGv_i32 neon_load_reg(int reg, int pass)
{
TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
tmp = load_reg(s, rd);
if (insn & (1 << 23)) {
/* VDUP */
- if (size == 0) {
- gen_neon_dup_u8(tmp, 0);
- } else if (size == 1) {
- gen_neon_dup_low16(tmp);
- }
- for (n = 0; n <= pass * 2; n++) {
- tmp2 = tcg_temp_new_i32();
- tcg_gen_mov_i32(tmp2, tmp);
- neon_store_reg(rn, n, tmp2);
- }
- neon_store_reg(rn, n, tmp);
+ int vec_size = pass ? 16 : 8;
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
+ vec_size, vec_size, tmp);
+ tcg_temp_free_i32(tmp);
} else {
/* VMOV */
switch (size) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tcg_temp_free_i32(tmp);
} else if ((insn & 0x380) == 0) {
/* VDUP */
+ int element;
+ TCGMemOp size;
+
if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
return 1;
}
- if (insn & (1 << 19)) {
- tmp = neon_load_reg(rm, 1);
- } else {
- tmp = neon_load_reg(rm, 0);
- }
if (insn & (1 << 16)) {
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
+ size = MO_8;
+ element = (insn >> 17) & 7;
} else if (insn & (1 << 17)) {
- if ((insn >> 18) & 1)
- gen_neon_dup_high16(tmp);
- else
- gen_neon_dup_low16(tmp);
+ size = MO_16;
+ element = (insn >> 18) & 3;
+ } else {
+ size = MO_32;
+ element = (insn >> 19) & 1;
}
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
- tmp2 = tcg_temp_new_i32();
- tcg_gen_mov_i32(tmp2, tmp);
- neon_store_reg(rd, pass, tmp2);
- }
- tcg_temp_free_i32(tmp);
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
+ neon_element_offset(rm, element, size),
+ q ? 16 : 8, q ? 16 : 8);
} else {
return 1;
}
--
2.19.1

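A short worked example of the host-endianness adjustment in neon_element_offset() above may help; this is an illustration of the arithmetic only, not part of the patch:

    /* Illustration: element 1 of a D register viewed as 16-bit units.
     * little-endian host: element_size = 2, ofs = 1 * 2 = 2
     * big-endian host:    ofs = 2 ^ (8 - 2) = 4
     * On a big-endian host the bytes of each 8-byte unit are stored in
     * reverse order, so halfword 1 of the value lives 4 bytes into the
     * slot, which is exactly what the XOR produces.
     */
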
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 1;
}
} else { /* (insn & 0x00380080) == 0 */
- int invert;
+ int invert, reg_ofs, vec_size;
+
if (q && (rd & 1)) {
return 1;
}
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
break;
case 14:
imm |= (imm << 8) | (imm << 16) | (imm << 24);
- if (invert)
+ if (invert) {
imm = ~imm;
+ }
break;
case 15:
if (invert) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
break;
}
- if (invert)
+ if (invert) {
imm = ~imm;
+ }

- for (pass = 0; pass < (q ? 4 : 2); pass++) {
- if (op & 1 && op < 12) {
- tmp = neon_load_reg(rd, pass);
- if (invert) {
- /* The immediate value has already been inverted, so
- BIC becomes AND. */
- tcg_gen_andi_i32(tmp, tmp, imm);
- } else {
- tcg_gen_ori_i32(tmp, tmp, imm);
- }
+ reg_ofs = neon_reg_offset(rd, 0);
+ vec_size = q ? 16 : 8;
+
+ if (op & 1 && op < 12) {
+ if (invert) {
+ /* The immediate value has already been inverted,
+ * so BIC becomes AND.
+ */
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
} else {
- /* VMOV, VMVN. */
- tmp = tcg_temp_new_i32();
- if (op == 14 && invert) {
- int n;
- uint32_t val;
- val = 0;
- for (n = 0; n < 4; n++) {
- if (imm & (1 << (n + (pass & 1) * 4)))
- val |= 0xff << (n * 8);
- }
- tcg_gen_movi_i32(tmp, val);
- } else {
- tcg_gen_movi_i32(tmp, imm);
- }
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
+ }
+ } else {
+ /* VMOV, VMVN. */
+ if (op == 14 && invert) {
+ TCGv_i64 t64 = tcg_temp_new_i64();
+
+ for (pass = 0; pass <= q; ++pass) {
+ uint64_t val = 0;
+ int n;
+
+ for (n = 0; n < 8; n++) {
+ if (imm & (1 << (n + pass * 8))) {
+ val |= 0xffull << (n * 8);
+ }
+ }
+ tcg_gen_movi_i64(t64, val);
+ neon_store_reg64(t64, rd + pass);
+ }
+ tcg_temp_free_i64(t64);
+ } else {
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
}
- neon_store_reg(rd, pass, tmp);
}
}
} else { /* (insn & 0x00800010 == 0x00800000) */
--
2.19.1

1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Initial commit of the ZynqMP RTC device.
3
Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.
4
4
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
---
9
hw/timer/Makefile.objs | 1 +
10
target/arm/translate.h | 6 ++
10
include/hw/timer/xlnx-zynqmp-rtc.h | 84 +++++++++++++++
11
target/arm/translate-a64.c | 61 --------------
11
hw/timer/xlnx-zynqmp-rtc.c | 214 +++++++++++++++++++++++++++++++++++++
12
target/arm/translate.c | 162 +++++++++++++++++++++++++++----------
12
3 files changed, 299 insertions(+)
13
3 files changed, 124 insertions(+), 105 deletions(-)
13
create mode 100644 include/hw/timer/xlnx-zynqmp-rtc.h
14
14
create mode 100644 hw/timer/xlnx-zynqmp-rtc.c
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
15
16
diff --git a/hw/timer/Makefile.objs b/hw/timer/Makefile.objs
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/timer/Makefile.objs
17
--- a/target/arm/translate.h
19
+++ b/hw/timer/Makefile.objs
18
+++ b/target/arm/translate.h
20
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_IMX) += imx_epit.o
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
21
common-obj-$(CONFIG_IMX) += imx_gpt.o
20
return ret;
22
common-obj-$(CONFIG_LM32) += lm32_timer.o
21
}
23
common-obj-$(CONFIG_MILKYMIST) += milkymist-sysctl.o
22
24
+common-obj-$(CONFIG_XLNX_ZYNQMP) += xlnx-zynqmp-rtc.o
23
+
25
24
+/* Vector operations shared between ARM and AArch64. */
26
obj-$(CONFIG_ALTERA_TIMER) += altera_timer.o
25
+extern const GVecGen3 bsl_op;
27
obj-$(CONFIG_EXYNOS4) += exynos4210_mct.o
26
+extern const GVecGen3 bit_op;
28
diff --git a/include/hw/timer/xlnx-zynqmp-rtc.h b/include/hw/timer/xlnx-zynqmp-rtc.h
27
+extern const GVecGen3 bif_op;
29
new file mode 100644
28
+
30
index XXXXXXX..XXXXXXX
29
/*
31
--- /dev/null
30
* Forward to the isar_feature_* tests given a DisasContext pointer.
32
+++ b/include/hw/timer/xlnx-zynqmp-rtc.h
31
*/
33
@@ -XXX,XX +XXX,XX @@
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-a64.c
35
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
37
}
38
}
39
40
-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
41
-{
42
- tcg_gen_xor_i64(rn, rn, rm);
43
- tcg_gen_and_i64(rn, rn, rd);
44
- tcg_gen_xor_i64(rd, rm, rn);
45
-}
46
-
47
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
48
-{
49
- tcg_gen_xor_i64(rn, rn, rd);
50
- tcg_gen_and_i64(rn, rn, rm);
51
- tcg_gen_xor_i64(rd, rd, rn);
52
-}
53
-
54
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
55
-{
56
- tcg_gen_xor_i64(rn, rn, rd);
57
- tcg_gen_andc_i64(rn, rn, rm);
58
- tcg_gen_xor_i64(rd, rd, rn);
59
-}
60
-
61
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
62
-{
63
- tcg_gen_xor_vec(vece, rn, rn, rm);
64
- tcg_gen_and_vec(vece, rn, rn, rd);
65
- tcg_gen_xor_vec(vece, rd, rm, rn);
66
-}
67
-
68
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
69
-{
70
- tcg_gen_xor_vec(vece, rn, rn, rd);
71
- tcg_gen_and_vec(vece, rn, rn, rm);
72
- tcg_gen_xor_vec(vece, rd, rd, rn);
73
-}
74
-
75
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
76
-{
77
- tcg_gen_xor_vec(vece, rn, rn, rd);
78
- tcg_gen_andc_vec(vece, rn, rn, rm);
79
- tcg_gen_xor_vec(vece, rd, rd, rn);
80
-}
81
-
82
/* Logic op (opcode == 3) subgroup of C3.6.16. */
83
static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
84
{
85
- static const GVecGen3 bsl_op = {
86
- .fni8 = gen_bsl_i64,
87
- .fniv = gen_bsl_vec,
88
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
89
- .load_dest = true
90
- };
91
- static const GVecGen3 bit_op = {
92
- .fni8 = gen_bit_i64,
93
- .fniv = gen_bit_vec,
94
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
95
- .load_dest = true
96
- };
97
- static const GVecGen3 bif_op = {
98
- .fni8 = gen_bif_i64,
99
- .fniv = gen_bif_vec,
100
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
- .load_dest = true
102
- };
103
-
104
int rd = extract32(insn, 0, 5);
105
int rn = extract32(insn, 5, 5);
106
int rm = extract32(insn, 16, 5);
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
112
return 0;
113
}
114
115
-/* Bitwise select. dest = c ? t : f. Clobbers T and F. */
116
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
117
-{
118
- tcg_gen_and_i32(t, t, c);
119
- tcg_gen_andc_i32(f, f, c);
120
- tcg_gen_or_i32(dest, t, f);
121
-}
122
-
123
static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
124
{
125
switch (size) {
126
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
127
return 1;
128
}
129
34
+/*
130
+/*
35
+ * QEMU model of the Xilinx ZynqMP Real Time Clock (RTC).
131
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
36
+ *
37
+ * Copyright (c) 2017 Xilinx Inc.
38
+ *
39
+ * Written-by: Alistair Francis <alistair.francis@xilinx.com>
40
+ *
41
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
42
+ * of this software and associated documentation files (the "Software"), to deal
43
+ * in the Software without restriction, including without limitation the rights
44
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
45
+ * copies of the Software, and to permit persons to whom the Software is
46
+ * furnished to do so, subject to the following conditions:
47
+ *
48
+ * The above copyright notice and this permission notice shall be included in
49
+ * all copies or substantial portions of the Software.
50
+ *
51
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
52
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
53
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
54
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
55
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
56
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
57
+ * THE SOFTWARE.
58
+ */
132
+ */
59
+
133
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
60
+#include "hw/register.h"
134
+{
61
+
135
+ tcg_gen_xor_i64(rn, rn, rm);
62
+#define TYPE_XLNX_ZYNQMP_RTC "xlnx-zynmp.rtc"
136
+ tcg_gen_and_i64(rn, rn, rd);
63
+
137
+ tcg_gen_xor_i64(rd, rm, rn);
64
+#define XLNX_ZYNQMP_RTC(obj) \
138
+}
65
+ OBJECT_CHECK(XlnxZynqMPRTC, (obj), TYPE_XLNX_ZYNQMP_RTC)
139
+
66
+
140
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
67
+REG32(SET_TIME_WRITE, 0x0)
141
+{
68
+REG32(SET_TIME_READ, 0x4)
142
+ tcg_gen_xor_i64(rn, rn, rd);
69
+REG32(CALIB_WRITE, 0x8)
143
+ tcg_gen_and_i64(rn, rn, rm);
70
+ FIELD(CALIB_WRITE, FRACTION_EN, 20, 1)
144
+ tcg_gen_xor_i64(rd, rd, rn);
71
+ FIELD(CALIB_WRITE, FRACTION_DATA, 16, 4)
145
+}
72
+ FIELD(CALIB_WRITE, MAX_TICK, 0, 16)
146
+
73
+REG32(CALIB_READ, 0xc)
147
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
74
+ FIELD(CALIB_READ, FRACTION_EN, 20, 1)
148
+{
75
+ FIELD(CALIB_READ, FRACTION_DATA, 16, 4)
149
+ tcg_gen_xor_i64(rn, rn, rd);
76
+ FIELD(CALIB_READ, MAX_TICK, 0, 16)
150
+ tcg_gen_andc_i64(rn, rn, rm);
77
+REG32(CURRENT_TIME, 0x10)
151
+ tcg_gen_xor_i64(rd, rd, rn);
78
+REG32(CURRENT_TICK, 0x14)
152
+}
79
+ FIELD(CURRENT_TICK, VALUE, 0, 16)
153
+
80
+REG32(ALARM, 0x18)
154
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
81
+REG32(RTC_INT_STATUS, 0x20)
155
+{
82
+ FIELD(RTC_INT_STATUS, ALARM, 1, 1)
156
+ tcg_gen_xor_vec(vece, rn, rn, rm);
83
+ FIELD(RTC_INT_STATUS, SECONDS, 0, 1)
157
+ tcg_gen_and_vec(vece, rn, rn, rd);
84
+REG32(RTC_INT_MASK, 0x24)
158
+ tcg_gen_xor_vec(vece, rd, rm, rn);
85
+ FIELD(RTC_INT_MASK, ALARM, 1, 1)
159
+}
86
+ FIELD(RTC_INT_MASK, SECONDS, 0, 1)
160
+
87
+REG32(RTC_INT_EN, 0x28)
161
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
88
+ FIELD(RTC_INT_EN, ALARM, 1, 1)
162
+{
89
+ FIELD(RTC_INT_EN, SECONDS, 0, 1)
163
+ tcg_gen_xor_vec(vece, rn, rn, rd);
90
+REG32(RTC_INT_DIS, 0x2c)
164
+ tcg_gen_and_vec(vece, rn, rn, rm);
91
+ FIELD(RTC_INT_DIS, ALARM, 1, 1)
165
+ tcg_gen_xor_vec(vece, rd, rd, rn);
92
+ FIELD(RTC_INT_DIS, SECONDS, 0, 1)
166
+}
93
+REG32(ADDR_ERROR, 0x30)
167
+
94
+ FIELD(ADDR_ERROR, STATUS, 0, 1)
168
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
95
+REG32(ADDR_ERROR_INT_MASK, 0x34)
169
+{
96
+ FIELD(ADDR_ERROR_INT_MASK, MASK, 0, 1)
170
+ tcg_gen_xor_vec(vece, rn, rn, rd);
97
+REG32(ADDR_ERROR_INT_EN, 0x38)
171
+ tcg_gen_andc_vec(vece, rn, rn, rm);
98
+ FIELD(ADDR_ERROR_INT_EN, MASK, 0, 1)
172
+ tcg_gen_xor_vec(vece, rd, rd, rn);
99
+REG32(ADDR_ERROR_INT_DIS, 0x3c)
173
+}
100
+ FIELD(ADDR_ERROR_INT_DIS, MASK, 0, 1)
174
+
101
+REG32(CONTROL, 0x40)
175
+const GVecGen3 bsl_op = {
102
+ FIELD(CONTROL, BATTERY_DISABLE, 31, 1)
176
+ .fni8 = gen_bsl_i64,
103
+ FIELD(CONTROL, OSC_CNTRL, 24, 4)
177
+ .fniv = gen_bsl_vec,
104
+ FIELD(CONTROL, SLVERR_ENABLE, 0, 1)
178
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
105
+REG32(SAFETY_CHK, 0x50)
179
+ .load_dest = true
106
+
107
+#define XLNX_ZYNQMP_RTC_R_MAX (R_SAFETY_CHK + 1)
108
+
109
+typedef struct XlnxZynqMPRTC {
110
+ SysBusDevice parent_obj;
111
+ MemoryRegion iomem;
112
+ qemu_irq irq_rtc_int;
113
+ qemu_irq irq_addr_error_int;
114
+
115
+ uint32_t regs[XLNX_ZYNQMP_RTC_R_MAX];
116
+ RegisterInfo regs_info[XLNX_ZYNQMP_RTC_R_MAX];
117
+} XlnxZynqMPRTC;
118
diff --git a/hw/timer/xlnx-zynqmp-rtc.c b/hw/timer/xlnx-zynqmp-rtc.c
119
new file mode 100644
120
index XXXXXXX..XXXXXXX
121
--- /dev/null
122
+++ b/hw/timer/xlnx-zynqmp-rtc.c
123
@@ -XXX,XX +XXX,XX @@
124
+/*
125
+ * QEMU model of the Xilinx ZynqMP Real Time Clock (RTC).
126
+ *
127
+ * Copyright (c) 2017 Xilinx Inc.
128
+ *
129
+ * Written-by: Alistair Francis <alistair.francis@xilinx.com>
130
+ *
131
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
132
+ * of this software and associated documentation files (the "Software"), to deal
133
+ * in the Software without restriction, including without limitation the rights
134
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
135
+ * copies of the Software, and to permit persons to whom the Software is
136
+ * furnished to do so, subject to the following conditions:
137
+ *
138
+ * The above copyright notice and this permission notice shall be included in
139
+ * all copies or substantial portions of the Software.
140
+ *
141
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
142
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
143
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
144
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
145
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
146
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
147
+ * THE SOFTWARE.
148
+ */
149
+
150
+#include "qemu/osdep.h"
151
+#include "hw/sysbus.h"
152
+#include "hw/register.h"
153
+#include "qemu/bitops.h"
154
+#include "qemu/log.h"
155
+#include "hw/timer/xlnx-zynqmp-rtc.h"
156
+
157
+#ifndef XLNX_ZYNQMP_RTC_ERR_DEBUG
158
+#define XLNX_ZYNQMP_RTC_ERR_DEBUG 0
159
+#endif
160
+
161
+static void rtc_int_update_irq(XlnxZynqMPRTC *s)
162
+{
163
+ bool pending = s->regs[R_RTC_INT_STATUS] & ~s->regs[R_RTC_INT_MASK];
164
+ qemu_set_irq(s->irq_rtc_int, pending);
165
+}
166
+
167
+static void addr_error_int_update_irq(XlnxZynqMPRTC *s)
168
+{
169
+ bool pending = s->regs[R_ADDR_ERROR] & ~s->regs[R_ADDR_ERROR_INT_MASK];
170
+ qemu_set_irq(s->irq_addr_error_int, pending);
171
+}
172
+
173
+static void rtc_int_status_postw(RegisterInfo *reg, uint64_t val64)
174
+{
175
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
176
+ rtc_int_update_irq(s);
177
+}
178
+
179
+static uint64_t rtc_int_en_prew(RegisterInfo *reg, uint64_t val64)
180
+{
181
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
182
+
183
+ s->regs[R_RTC_INT_MASK] &= (uint32_t) ~val64;
184
+ rtc_int_update_irq(s);
185
+ return 0;
186
+}
187
+
188
+static uint64_t rtc_int_dis_prew(RegisterInfo *reg, uint64_t val64)
189
+{
190
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
191
+
192
+ s->regs[R_RTC_INT_MASK] |= (uint32_t) val64;
193
+ rtc_int_update_irq(s);
194
+ return 0;
195
+}
196
+
197
+static void addr_error_postw(RegisterInfo *reg, uint64_t val64)
198
+{
199
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
200
+ addr_error_int_update_irq(s);
201
+}
202
+
203
+static uint64_t addr_error_int_en_prew(RegisterInfo *reg, uint64_t val64)
204
+{
205
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
206
+
207
+ s->regs[R_ADDR_ERROR_INT_MASK] &= (uint32_t) ~val64;
208
+ addr_error_int_update_irq(s);
209
+ return 0;
210
+}
211
+
212
+static uint64_t addr_error_int_dis_prew(RegisterInfo *reg, uint64_t val64)
213
+{
214
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
215
+
216
+ s->regs[R_ADDR_ERROR_INT_MASK] |= (uint32_t) val64;
217
+ addr_error_int_update_irq(s);
218
+ return 0;
219
+}
220
+
221
+static const RegisterAccessInfo rtc_regs_info[] = {
222
+ { .name = "SET_TIME_WRITE", .addr = A_SET_TIME_WRITE,
223
+ },{ .name = "SET_TIME_READ", .addr = A_SET_TIME_READ,
224
+ .ro = 0xffffffff,
225
+ },{ .name = "CALIB_WRITE", .addr = A_CALIB_WRITE,
226
+ },{ .name = "CALIB_READ", .addr = A_CALIB_READ,
227
+ .ro = 0x1fffff,
228
+ },{ .name = "CURRENT_TIME", .addr = A_CURRENT_TIME,
229
+ .ro = 0xffffffff,
230
+ },{ .name = "CURRENT_TICK", .addr = A_CURRENT_TICK,
231
+ .ro = 0xffff,
232
+ },{ .name = "ALARM", .addr = A_ALARM,
233
+ },{ .name = "RTC_INT_STATUS", .addr = A_RTC_INT_STATUS,
234
+ .w1c = 0x3,
235
+ .post_write = rtc_int_status_postw,
236
+ },{ .name = "RTC_INT_MASK", .addr = A_RTC_INT_MASK,
237
+ .reset = 0x3,
238
+ .ro = 0x3,
239
+ },{ .name = "RTC_INT_EN", .addr = A_RTC_INT_EN,
240
+ .pre_write = rtc_int_en_prew,
241
+ },{ .name = "RTC_INT_DIS", .addr = A_RTC_INT_DIS,
242
+ .pre_write = rtc_int_dis_prew,
243
+ },{ .name = "ADDR_ERROR", .addr = A_ADDR_ERROR,
244
+ .w1c = 0x1,
245
+ .post_write = addr_error_postw,
246
+ },{ .name = "ADDR_ERROR_INT_MASK", .addr = A_ADDR_ERROR_INT_MASK,
247
+ .reset = 0x1,
248
+ .ro = 0x1,
249
+ },{ .name = "ADDR_ERROR_INT_EN", .addr = A_ADDR_ERROR_INT_EN,
250
+ .pre_write = addr_error_int_en_prew,
251
+ },{ .name = "ADDR_ERROR_INT_DIS", .addr = A_ADDR_ERROR_INT_DIS,
252
+ .pre_write = addr_error_int_dis_prew,
253
+ },{ .name = "CONTROL", .addr = A_CONTROL,
254
+ .reset = 0x1000000,
255
+ .rsvd = 0x70fffffe,
256
+ },{ .name = "SAFETY_CHK", .addr = A_SAFETY_CHK,
257
+ }
258
+};
180
+};
259
+
181
+
260
+static void rtc_reset(DeviceState *dev)
182
+const GVecGen3 bit_op = {
261
+{
183
+ .fni8 = gen_bit_i64,
262
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(dev);
184
+ .fniv = gen_bit_vec,
263
+ unsigned int i;
185
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
264
+
186
+ .load_dest = true
265
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
266
+ register_reset(&s->regs_info[i]);
267
+ }
268
+
269
+ rtc_int_update_irq(s);
270
+ addr_error_int_update_irq(s);
271
+}
272
+
273
+static const MemoryRegionOps rtc_ops = {
274
+ .read = register_read_memory,
275
+ .write = register_write_memory,
276
+ .endianness = DEVICE_LITTLE_ENDIAN,
277
+ .valid = {
278
+ .min_access_size = 4,
279
+ .max_access_size = 4,
280
+ },
281
+};
187
+};
282
+
188
+
283
+static void rtc_init(Object *obj)
189
+const GVecGen3 bif_op = {
284
+{
190
+ .fni8 = gen_bif_i64,
285
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(obj);
191
+ .fniv = gen_bif_vec,
286
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
192
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
287
+ RegisterInfoArray *reg_array;
193
+ .load_dest = true
288
+
289
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_RTC,
290
+ XLNX_ZYNQMP_RTC_R_MAX * 4);
291
+ reg_array =
292
+ register_init_block32(DEVICE(obj), rtc_regs_info,
293
+ ARRAY_SIZE(rtc_regs_info),
294
+ s->regs_info, s->regs,
295
+ &rtc_ops,
296
+ XLNX_ZYNQMP_RTC_ERR_DEBUG,
297
+ XLNX_ZYNQMP_RTC_R_MAX * 4);
298
+ memory_region_add_subregion(&s->iomem,
299
+ 0x0,
300
+ &reg_array->mem);
301
+ sysbus_init_mmio(sbd, &s->iomem);
302
+ sysbus_init_irq(sbd, &s->irq_rtc_int);
303
+ sysbus_init_irq(sbd, &s->irq_addr_error_int);
304
+}
305
+
306
+static const VMStateDescription vmstate_rtc = {
307
+ .name = TYPE_XLNX_ZYNQMP_RTC,
308
+ .version_id = 1,
309
+ .minimum_version_id = 1,
310
+ .fields = (VMStateField[]) {
311
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPRTC, XLNX_ZYNQMP_RTC_R_MAX),
312
+ VMSTATE_END_OF_LIST(),
313
+ }
314
+};
194
+};
315
+
195
+
316
+static void rtc_class_init(ObjectClass *klass, void *data)
196
+
317
+{
197
/* Translate a NEON data processing instruction. Return nonzero if the
318
+ DeviceClass *dc = DEVICE_CLASS(klass);
198
instruction is invalid.
319
+
199
We process data in a mixture of 32-bit and 64-bit chunks.
320
+ dc->reset = rtc_reset;
200
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
321
+ dc->vmsd = &vmstate_rtc;
201
{
322
+}
202
int op;
323
+
203
int q;
324
+static const TypeInfo rtc_info = {
204
- int rd, rn, rm;
325
+ .name = TYPE_XLNX_ZYNQMP_RTC,
205
+ int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
326
+ .parent = TYPE_SYS_BUS_DEVICE,
206
int size;
327
+ .instance_size = sizeof(XlnxZynqMPRTC),
207
int shift;
328
+ .class_init = rtc_class_init,
208
int pass;
329
+ .instance_init = rtc_init,
209
int count;
330
+};
210
int pairwise;
331
+
211
int u;
332
+static void rtc_register_types(void)
212
+ int vec_size;
333
+{
213
uint32_t imm, mask;
334
+ type_register_static(&rtc_info);
214
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
335
+}
215
TCGv_ptr ptr1, ptr2, ptr3;
336
+
216
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
337
+type_init(rtc_register_types)
217
VFP_DREG_N(rn, insn);
218
VFP_DREG_M(rm, insn);
219
size = (insn >> 20) & 3;
220
+ vec_size = q ? 16 : 8;
221
+ rd_ofs = neon_reg_offset(rd, 0);
222
+ rn_ofs = neon_reg_offset(rn, 0);
223
+ rm_ofs = neon_reg_offset(rm, 0);
224
+
225
if ((insn & (1 << 23)) == 0) {
226
/* Three register same length. */
227
op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
228
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
229
q, rd, rn, rm);
230
}
231
return 1;
232
+
233
+ case NEON_3R_LOGIC: /* Logic ops. */
234
+ switch ((u << 2) | size) {
235
+ case 0: /* VAND */
236
+ tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
237
+ vec_size, vec_size);
238
+ break;
239
+ case 1: /* VBIC */
240
+ tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
241
+ vec_size, vec_size);
242
+ break;
243
+ case 2:
244
+ if (rn == rm) {
245
+ /* VMOV */
246
+ tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
247
+ } else {
248
+ /* VORR */
249
+ tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
250
+ vec_size, vec_size);
251
+ }
252
+ break;
253
+ case 3: /* VORN */
254
+ tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
255
+ vec_size, vec_size);
256
+ break;
257
+ case 4: /* VEOR */
258
+ tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
259
+ vec_size, vec_size);
260
+ break;
261
+ case 5: /* VBSL */
262
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
263
+ vec_size, vec_size, &bsl_op);
264
+ break;
265
+ case 6: /* VBIT */
266
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
267
+ vec_size, vec_size, &bit_op);
268
+ break;
269
+ case 7: /* VBIF */
270
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
271
+ vec_size, vec_size, &bif_op);
272
+ break;
273
+ }
274
+ return 0;
275
}
276
- if (size == 3 && op != NEON_3R_LOGIC) {
277
+ if (size == 3) {
278
/* 64-bit element instructions. */
279
for (pass = 0; pass < (q ? 2 : 1); pass++) {
280
neon_load_reg64(cpu_V0, rn + pass);
281
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
282
case NEON_3R_VRHADD:
283
GEN_NEON_INTEGER_OP(rhadd);
284
break;
285
- case NEON_3R_LOGIC: /* Logic ops. */
286
- switch ((u << 2) | size) {
287
- case 0: /* VAND */
288
- tcg_gen_and_i32(tmp, tmp, tmp2);
289
- break;
290
- case 1: /* BIC */
291
- tcg_gen_andc_i32(tmp, tmp, tmp2);
292
- break;
293
- case 2: /* VORR */
294
- tcg_gen_or_i32(tmp, tmp, tmp2);
295
- break;
296
- case 3: /* VORN */
297
- tcg_gen_orc_i32(tmp, tmp, tmp2);
298
- break;
299
- case 4: /* VEOR */
300
- tcg_gen_xor_i32(tmp, tmp, tmp2);
301
- break;
302
- case 5: /* VBSL */
303
- tmp3 = neon_load_reg(rd, pass);
304
- gen_neon_bsl(tmp, tmp, tmp2, tmp3);
305
- tcg_temp_free_i32(tmp3);
306
- break;
307
- case 6: /* VBIT */
308
- tmp3 = neon_load_reg(rd, pass);
309
- gen_neon_bsl(tmp, tmp, tmp3, tmp2);
310
- tcg_temp_free_i32(tmp3);
311
- break;
312
- case 7: /* VBIF */
313
- tmp3 = neon_load_reg(rd, pass);
314
- gen_neon_bsl(tmp, tmp3, tmp, tmp2);
315
- tcg_temp_free_i32(tmp3);
316
- break;
317
- }
318
- break;
319
case NEON_3R_VHSUB:
320
GEN_NEON_INTEGER_OP(hsub);
321
break;
338
--
322
--
339
2.16.2
323
2.19.1
340
324
341
325
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 29 ++++++++++-------------------
1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
break;
}
return 0;
+
+ case NEON_3R_VADD_VSUB:
+ if (u) {
+ tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ } else {
+ tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ }
+ return 0;
}
if (size == 3) {
/* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
cpu_V1, cpu_V0);
}
break;
- case NEON_3R_VADD_VSUB:
- if (u) {
- tcg_gen_sub_i64(CPU_V001);
- } else {
- tcg_gen_add_i64(CPU_V001);
- }
- break;
default:
abort();
}
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tmp2 = neon_load_reg(rd, pass);
gen_neon_add(size, tmp, tmp2);
break;
- case NEON_3R_VADD_VSUB:
- if (!u) { /* VADD */
- gen_neon_add(size, tmp, tmp2);
- } else { /* VSUB */
- switch (size) {
- case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
- break;
case NEON_3R_VTST_VCEQ:
if (!u) { /* VTST */
switch (size) {
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tcg_temp_free_ptr(ptr1);
tcg_temp_free_ptr(ptr2);
break;
+
+ case NEON_2RM_VMVN:
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+ case NEON_2RM_VNEG:
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+
default:
elementwise:
for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
case NEON_2RM_VCNT:
gen_helper_neon_cnt_u8(tmp, tmp);
break;
- case NEON_2RM_VMVN:
- tcg_gen_not_i32(tmp, tmp);
- break;
case NEON_2RM_VQABS:
switch (size) {
case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
default: abort();
}
break;
- case NEON_2RM_VNEG:
- tmp2 = tcg_const_i32(0);
- gen_neon_rsb(size, tmp, tmp2);
- tcg_temp_free_i32(tmp2);
- break;
case NEON_2RM_VCGT0_F:
{
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.19.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180228193125.20577-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/translate.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++
8
target/arm/translate.c | 31 +++++++++++++++----------------
9
1 file changed, 68 insertions(+)
9
1 file changed, 15 insertions(+), 16 deletions(-)
10
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
return 0;
16
vec_size, vec_size);
17
}
17
}
18
18
return 0;
19
+/* Advanced SIMD three registers of the same length extension.
20
+ * 31 25 23 22 20 16 12 11 10 9 8 3 0
21
+ * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
22
+ * | 1 1 1 1 1 1 0 | op1 | D | op2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
23
+ * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
24
+ */
25
+static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
26
+{
27
+ gen_helper_gvec_3_ptr *fn_gvec_ptr;
28
+ int rd, rn, rm, rot, size, opr_sz;
29
+ TCGv_ptr fpst;
30
+ bool q;
31
+
19
+
32
+ q = extract32(insn, 6, 1);
20
+ case NEON_3R_VMUL: /* VMUL */
33
+ VFP_DREG_D(rd, insn);
21
+ if (u) {
34
+ VFP_DREG_N(rn, insn);
22
+ /* Polynomial case allows only P8 and is handled below. */
35
+ VFP_DREG_M(rm, insn);
23
+ if (size != 0) {
36
+ if ((rd | rn | rm) & q) {
24
+ return 1;
37
+ return 1;
25
+ }
38
+ }
26
+ } else {
39
+
27
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
40
+ if ((insn & 0xfe200f10) == 0xfc200800) {
28
+ vec_size, vec_size);
41
+ /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
29
+ return 0;
42
+ size = extract32(insn, 20, 1);
30
+ }
43
+ rot = extract32(insn, 23, 2);
31
+ break;
44
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
32
}
45
+ || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
33
if (size == 3) {
46
+ return 1;
34
/* 64-bit element instructions. */
47
+ }
35
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
48
+ fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
36
return 1;
49
+ } else if ((insn & 0xfea00f10) == 0xfc800800) {
50
+ /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
51
+ size = extract32(insn, 20, 1);
52
+ rot = extract32(insn, 24, 1);
53
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
54
+ || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
55
+ return 1;
56
+ }
57
+ fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
58
+ } else {
59
+ return 1;
60
+ }
61
+
62
+ if (s->fp_excp_el) {
63
+ gen_exception_insn(s, 4, EXCP_UDEF,
64
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
65
+ return 0;
66
+ }
67
+ if (!s->vfp_enabled) {
68
+ return 1;
69
+ }
70
+
71
+ opr_sz = (1 + q) * 8;
72
+ fpst = get_fpstatus_ptr(1);
73
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
74
+ vfp_reg_offset(1, rn),
75
+ vfp_reg_offset(1, rm), fpst,
76
+ opr_sz, opr_sz, rot, fn_gvec_ptr);
77
+ tcg_temp_free_ptr(fpst);
78
+ return 0;
79
+}
80
+
81
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
82
{
83
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
84
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
85
}
86
}
87
}
37
}
88
+ } else if ((insn & 0x0e000a00) == 0x0c000800
38
break;
89
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
39
- case NEON_3R_VMUL:
90
+ if (disas_neon_insn_3same_ext(s, insn)) {
40
- if (u && (size != 0)) {
91
+ goto illegal_op;
41
- /* UNDEF on invalid size for polynomial subcase */
92
+ }
42
- return 1;
93
+ return;
43
- }
94
} else if ((insn & 0x0fe00000) == 0x0c400000) {
44
- break;
95
/* Coprocessor double register transfer. */
45
case NEON_3R_VFM_VQRDMLSH:
96
ARCH(5TE);
46
if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
47
return 1;
48
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
49
}
50
break;
51
case NEON_3R_VMUL:
52
- if (u) { /* polynomial */
53
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
54
- } else { /* Integer */
55
- switch (size) {
56
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
57
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
58
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
59
- default: abort();
60
- }
61
- }
62
+ /* VMUL.P8; other cases already eliminated. */
63
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
64
break;
65
case NEON_3R_VPMAX:
66
GEN_NEON_INTEGER_OP(pmax);
97
--
67
--
98
2.16.2
68
2.19.1
99
69
100
70
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Allow the translate subroutines to return false for invalid insns.
4
5
At present we can of course invoke an invalid insn exception from within
6
the translate subroutine, but in the short term this consolidates code.
7
In the long term it would allow the decodetree language to support
8
overlapping patterns for ISA extensions.
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20180227232618.2908-1-richard.henderson@linaro.org
4
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
7
---
15
scripts/decodetree.py | 5 ++---
8
target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
16
1 file changed, 2 insertions(+), 3 deletions(-)
9
1 file changed, 48 insertions(+), 22 deletions(-)
17
10
18
diff --git a/scripts/decodetree.py b/scripts/decodetree.py
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100755
12
index XXXXXXX..XXXXXXX 100644
20
--- a/scripts/decodetree.py
13
--- a/target/arm/translate.c
21
+++ b/scripts/decodetree.py
14
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@ class Pattern(General):
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
23
global translate_prefix
16
size--;
24
output('typedef ', self.base.base.struct_name(),
17
}
25
' arg_', self.name, ';\n')
18
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
26
- output(translate_scope, 'void ', translate_prefix, '_', self.name,
19
- /* To avoid excessive duplication of ops we implement shift
27
+ output(translate_scope, 'bool ', translate_prefix, '_', self.name,
20
- by immediate using the variable shift operations. */
28
'(DisasContext *ctx, arg_', self.name,
21
if (op < 8) {
29
' *a, ', insntype, ' insn);\n')
22
/* Shift by immediate:
30
23
VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
31
@@ -XXX,XX +XXX,XX @@ class Pattern(General):
24
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
32
output(ind, self.base.extract_name(), '(&u.f_', arg, ', insn);\n')
25
}
33
for n, f in self.fields.items():
26
/* Right shifts are encoded as N - shift, where N is the
34
output(ind, 'u.f_', arg, '.', n, ' = ', f.str_extract(), ';\n')
27
element size in bits. */
35
- output(ind, translate_prefix, '_', self.name,
28
- if (op <= 4)
36
+ output(ind, 'return ', translate_prefix, '_', self.name,
29
+ if (op <= 4) {
37
'(ctx, &u.f_', arg, ', insn);\n')
30
shift = shift - (1 << (size + 3));
38
- output(ind, 'return true;\n')
31
+ }
39
# end Pattern
32
+
40
33
+ switch (op) {
34
+ case 0: /* VSHR */
35
+ /* Right shift comes here negative. */
36
+ shift = -shift;
37
+ /* Shifts larger than the element size are architecturally
38
+ * valid. Unsigned results in all zeros; signed results
39
+ * in all sign bits.
40
+ */
41
+ if (!u) {
42
+ tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
43
+ MIN(shift, (8 << size) - 1),
44
+ vec_size, vec_size);
45
+ } else if (shift >= 8 << size) {
46
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
47
+ } else {
48
+ tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
49
+ vec_size, vec_size);
50
+ }
51
+ return 0;
52
+
53
+ case 5: /* VSHL, VSLI */
54
+ if (!u) { /* VSHL */
55
+ /* Shifts larger than the element size are
56
+ * architecturally valid and results in zero.
57
+ */
58
+ if (shift >= 8 << size) {
59
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
60
+ } else {
61
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
62
+ vec_size, vec_size);
63
+ }
64
+ return 0;
65
+ }
66
+ break;
67
+ }
68
+
69
if (size == 3) {
70
count = q + 1;
71
} else {
72
count = q ? 4: 2;
73
}
74
- switch (size) {
75
- case 0:
76
- imm = (uint8_t) shift;
77
- imm |= imm << 8;
78
- imm |= imm << 16;
79
- break;
80
- case 1:
81
- imm = (uint16_t) shift;
82
- imm |= imm << 16;
83
- break;
84
- case 2:
85
- case 3:
86
- imm = shift;
87
- break;
88
- default:
89
- abort();
90
- }
91
+
92
+ /* To avoid excessive duplication of ops we implement shift
93
+ * by immediate using the variable shift operations.
94
+ */
95
+ imm = dup_const(size, shift);
96
97
for (pass = 0; pass < count; pass++) {
98
if (size == 3) {
99
neon_load_reg64(cpu_V0, rm + pass);
100
tcg_gen_movi_i64(cpu_V1, imm);
101
switch (op) {
102
- case 0: /* VSHR */
103
case 1: /* VSRA */
104
if (u)
105
gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
106
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
107
cpu_V0, cpu_V1);
108
}
109
break;
110
+ default:
111
+ g_assert_not_reached();
112
}
113
if (op == 1 || op == 3) {
114
/* Accumulate. */
115
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
116
tmp2 = tcg_temp_new_i32();
117
tcg_gen_movi_i32(tmp2, imm);
118
switch (op) {
119
- case 0: /* VSHR */
120
case 1: /* VSRA */
121
GEN_NEON_INTEGER_OP(shl);
122
break;
123
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
124
case 7: /* VQSHL */
125
GEN_NEON_INTEGER_OP_ENV(qshl);
126
break;
127
+ default:
128
+ g_assert_not_reached();
129
}
130
tcg_temp_free_i32(tmp2);
41
131
42
--
132
--
43
2.16.2
133
2.19.1
44
134
45
135
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Include the U bit in the switches rather than testing separately.
3
Move ssra_op and usra_op expanders from translate-a64.c.
4
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180228193125.20577-3-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
9
---
10
target/arm/translate-a64.c | 129 +++++++++++++++++++++------------------------
10
target/arm/translate.h | 2 +
11
1 file changed, 61 insertions(+), 68 deletions(-)
11
target/arm/translate-a64.c | 106 ----------------------------
12
12
target/arm/translate.c | 139 ++++++++++++++++++++++++++++++++++---
13
3 files changed, 130 insertions(+), 117 deletions(-)
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
extern const GVecGen3 bsl_op;
21
extern const GVecGen3 bit_op;
22
extern const GVecGen3 bif_op;
23
+extern const GVecGen2i ssra_op[4];
24
+extern const GVecGen2i usra_op[4];
25
26
/*
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-a64.c
30
--- a/target/arm/translate-a64.c
16
+++ b/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
18
int index;
33
}
19
TCGv_ptr fpst;
34
}
20
35
21
- switch (opcode) {
36
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
22
- case 0x0: /* MLA */
37
-{
23
- case 0x4: /* MLS */
38
- tcg_gen_vec_sar8i_i64(a, a, shift);
24
- if (!u || is_scalar) {
39
- tcg_gen_vec_add8_i64(d, d, a);
25
+ switch (16 * u + opcode) {
40
-}
26
+ case 0x08: /* MUL */
41
-
27
+ case 0x10: /* MLA */
42
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
28
+ case 0x14: /* MLS */
43
-{
29
+ if (is_scalar) {
44
- tcg_gen_vec_sar16i_i64(a, a, shift);
30
unallocated_encoding(s);
45
- tcg_gen_vec_add16_i64(d, d, a);
31
return;
46
-}
32
}
47
-
33
break;
48
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
34
- case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
49
-{
35
- case 0x6: /* SMLSL, SMLSL2, UMLSL, UMLSL2 */
50
- tcg_gen_sari_i32(a, a, shift);
36
- case 0xa: /* SMULL, SMULL2, UMULL, UMULL2 */
51
- tcg_gen_add_i32(d, d, a);
37
+ case 0x02: /* SMLAL, SMLAL2 */
52
-}
38
+ case 0x12: /* UMLAL, UMLAL2 */
53
-
39
+ case 0x06: /* SMLSL, SMLSL2 */
54
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
40
+ case 0x16: /* UMLSL, UMLSL2 */
55
-{
41
+ case 0x0a: /* SMULL, SMULL2 */
56
- tcg_gen_sari_i64(a, a, shift);
42
+ case 0x1a: /* UMULL, UMULL2 */
57
- tcg_gen_add_i64(d, d, a);
43
if (is_scalar) {
58
-}
44
unallocated_encoding(s);
59
-
45
return;
60
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
46
}
61
-{
47
is_long = true;
62
- tcg_gen_sari_vec(vece, a, a, sh);
48
break;
63
- tcg_gen_add_vec(vece, d, d, a);
49
- case 0x3: /* SQDMLAL, SQDMLAL2 */
64
-}
50
- case 0x7: /* SQDMLSL, SQDMLSL2 */
65
-
51
- case 0xb: /* SQDMULL, SQDMULL2 */
66
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
52
+ case 0x03: /* SQDMLAL, SQDMLAL2 */
67
-{
53
+ case 0x07: /* SQDMLSL, SQDMLSL2 */
68
- tcg_gen_vec_shr8i_i64(a, a, shift);
54
+ case 0x0b: /* SQDMULL, SQDMULL2 */
69
- tcg_gen_vec_add8_i64(d, d, a);
55
is_long = true;
70
-}
56
- /* fall through */
71
-
57
- case 0xc: /* SQDMULH */
72
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
58
- case 0xd: /* SQRDMULH */
73
-{
59
- if (u) {
74
- tcg_gen_vec_shr16i_i64(a, a, shift);
60
- unallocated_encoding(s);
75
- tcg_gen_vec_add16_i64(d, d, a);
61
- return;
76
-}
62
- }
77
-
63
break;
78
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
64
- case 0x8: /* MUL */
79
-{
65
- if (u || is_scalar) {
80
- tcg_gen_shri_i32(a, a, shift);
66
- unallocated_encoding(s);
81
- tcg_gen_add_i32(d, d, a);
67
- return;
82
-}
68
- }
83
-
69
+ case 0x0c: /* SQDMULH */
84
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
70
+ case 0x0d: /* SQRDMULH */
85
-{
71
break;
86
- tcg_gen_shri_i64(a, a, shift);
72
- case 0x1: /* FMLA */
87
- tcg_gen_add_i64(d, d, a);
73
- case 0x5: /* FMLS */
88
-}
74
- if (u) {
89
-
75
- unallocated_encoding(s);
90
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
76
- return;
91
-{
77
- }
92
- tcg_gen_shri_vec(vece, a, a, sh);
78
- /* fall through */
93
- tcg_gen_add_vec(vece, d, d, a);
79
- case 0x9: /* FMUL, FMULX */
94
-}
80
+ case 0x01: /* FMLA */
95
-
81
+ case 0x05: /* FMLS */
96
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
82
+ case 0x09: /* FMUL */
97
{
83
+ case 0x19: /* FMULX */
98
uint64_t mask = dup_const(MO_8, 0xff >> shift);
84
if (size == 1) {
99
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
85
unallocated_encoding(s);
100
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
86
return;
101
int immh, int immb, int opcode, int rn, int rd)
87
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
102
{
88
103
- static const GVecGen2i ssra_op[4] = {
89
read_vec_element(s, tcg_op, rn, pass, MO_64);
104
- { .fni8 = gen_ssra8_i64,
90
105
- .fniv = gen_ssra_vec,
91
- switch (opcode) {
106
- .load_dest = true,
92
- case 0x5: /* FMLS */
107
- .opc = INDEX_op_sari_vec,
93
+ switch (16 * u + opcode) {
108
- .vece = MO_8 },
94
+ case 0x05: /* FMLS */
109
- { .fni8 = gen_ssra16_i64,
95
/* As usual for ARM, separate negation for fused multiply-add */
110
- .fniv = gen_ssra_vec,
96
gen_helper_vfp_negd(tcg_op, tcg_op);
111
- .load_dest = true,
97
/* fall through */
112
- .opc = INDEX_op_sari_vec,
98
- case 0x1: /* FMLA */
113
- .vece = MO_16 },
99
+ case 0x01: /* FMLA */
114
- { .fni4 = gen_ssra32_i32,
100
read_vec_element(s, tcg_res, rd, pass, MO_64);
115
- .fniv = gen_ssra_vec,
101
gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
116
- .load_dest = true,
102
break;
117
- .opc = INDEX_op_sari_vec,
103
- case 0x9: /* FMUL, FMULX */
118
- .vece = MO_32 },
104
- if (u) {
119
- { .fni8 = gen_ssra64_i64,
105
- gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
120
- .fniv = gen_ssra_vec,
106
- } else {
121
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
107
- gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
122
- .load_dest = true,
108
- }
123
- .opc = INDEX_op_sari_vec,
109
+ case 0x09: /* FMUL */
124
- .vece = MO_64 },
110
+ gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
125
- };
111
+ break;
126
- static const GVecGen2i usra_op[4] = {
112
+ case 0x19: /* FMULX */
127
- { .fni8 = gen_usra8_i64,
113
+ gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
128
- .fniv = gen_usra_vec,
114
break;
129
- .load_dest = true,
115
default:
130
- .opc = INDEX_op_shri_vec,
116
g_assert_not_reached();
131
- .vece = MO_8, },
117
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
132
- { .fni8 = gen_usra16_i64,
118
133
- .fniv = gen_usra_vec,
119
read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
134
- .load_dest = true,
120
135
- .opc = INDEX_op_shri_vec,
121
- switch (opcode) {
136
- .vece = MO_16, },
122
- case 0x0: /* MLA */
137
- { .fni4 = gen_usra32_i32,
123
- case 0x4: /* MLS */
138
- .fniv = gen_usra_vec,
124
- case 0x8: /* MUL */
139
- .load_dest = true,
125
+ switch (16 * u + opcode) {
140
- .opc = INDEX_op_shri_vec,
126
+ case 0x08: /* MUL */
141
- .vece = MO_32, },
127
+ case 0x10: /* MLA */
142
- { .fni8 = gen_usra64_i64,
128
+ case 0x14: /* MLS */
143
- .fniv = gen_usra_vec,
129
{
144
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
130
static NeonGenTwoOpFn * const fns[2][2] = {
145
- .load_dest = true,
131
{ gen_helper_neon_add_u16, gen_helper_neon_sub_u16 },
146
- .opc = INDEX_op_shri_vec,
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
147
- .vece = MO_64, },
133
genfn(tcg_res, tcg_op, tcg_res);
148
- };
134
break;
149
static const GVecGen2i sri_op[4] = {
135
}
150
{ .fni8 = gen_shr8_ins_i64,
136
- case 0x5: /* FMLS */
151
.fniv = gen_shr_ins_vec,
137
- case 0x1: /* FMLA */
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
138
+ case 0x05: /* FMLS */
153
index XXXXXXX..XXXXXXX 100644
139
+ case 0x01: /* FMLA */
154
--- a/target/arm/translate.c
140
read_vec_element_i32(s, tcg_res, rd, pass,
155
+++ b/target/arm/translate.c
141
is_scalar ? size : MO_32);
156
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
142
switch (size) {
157
.load_dest = true
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
158
};
144
g_assert_not_reached();
159
145
}
160
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
146
break;
161
+{
147
- case 0x9: /* FMUL, FMULX */
162
+ tcg_gen_vec_sar8i_i64(a, a, shift);
148
+ case 0x09: /* FMUL */
163
+ tcg_gen_vec_add8_i64(d, d, a);
149
switch (size) {
164
+}
150
case 1:
165
+
151
- if (u) {
166
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
- if (is_scalar) {
167
+{
153
- gen_helper_advsimd_mulxh(tcg_res, tcg_op,
168
+ tcg_gen_vec_sar16i_i64(a, a, shift);
154
- tcg_idx, fpst);
169
+ tcg_gen_vec_add16_i64(d, d, a);
155
- } else {
170
+}
156
- gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
171
+
157
- tcg_idx, fpst);
172
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
158
- }
173
+{
159
+ if (is_scalar) {
174
+ tcg_gen_sari_i32(a, a, shift);
160
+ gen_helper_advsimd_mulh(tcg_res, tcg_op,
175
+ tcg_gen_add_i32(d, d, a);
161
+ tcg_idx, fpst);
176
+}
162
} else {
177
+
163
- if (is_scalar) {
178
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
164
- gen_helper_advsimd_mulh(tcg_res, tcg_op,
179
+{
165
- tcg_idx, fpst);
180
+ tcg_gen_sari_i64(a, a, shift);
166
- } else {
181
+ tcg_gen_add_i64(d, d, a);
167
- gen_helper_advsimd_mul2h(tcg_res, tcg_op,
182
+}
168
- tcg_idx, fpst);
183
+
169
- }
184
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
170
+ gen_helper_advsimd_mul2h(tcg_res, tcg_op,
185
+{
171
+ tcg_idx, fpst);
186
+ tcg_gen_sari_vec(vece, a, a, sh);
187
+ tcg_gen_add_vec(vece, d, d, a);
188
+}
189
+
190
+const GVecGen2i ssra_op[4] = {
191
+ { .fni8 = gen_ssra8_i64,
192
+ .fniv = gen_ssra_vec,
193
+ .load_dest = true,
194
+ .opc = INDEX_op_sari_vec,
195
+ .vece = MO_8 },
196
+ { .fni8 = gen_ssra16_i64,
197
+ .fniv = gen_ssra_vec,
198
+ .load_dest = true,
199
+ .opc = INDEX_op_sari_vec,
200
+ .vece = MO_16 },
201
+ { .fni4 = gen_ssra32_i32,
202
+ .fniv = gen_ssra_vec,
203
+ .load_dest = true,
204
+ .opc = INDEX_op_sari_vec,
205
+ .vece = MO_32 },
206
+ { .fni8 = gen_ssra64_i64,
207
+ .fniv = gen_ssra_vec,
208
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
209
+ .load_dest = true,
210
+ .opc = INDEX_op_sari_vec,
211
+ .vece = MO_64 },
212
+};
213
+
214
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
215
+{
216
+ tcg_gen_vec_shr8i_i64(a, a, shift);
217
+ tcg_gen_vec_add8_i64(d, d, a);
218
+}
219
+
220
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
221
+{
222
+ tcg_gen_vec_shr16i_i64(a, a, shift);
223
+ tcg_gen_vec_add16_i64(d, d, a);
224
+}
225
+
226
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
227
+{
228
+ tcg_gen_shri_i32(a, a, shift);
229
+ tcg_gen_add_i32(d, d, a);
230
+}
231
+
232
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
233
+{
234
+ tcg_gen_shri_i64(a, a, shift);
235
+ tcg_gen_add_i64(d, d, a);
236
+}
237
+
238
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
239
+{
240
+ tcg_gen_shri_vec(vece, a, a, sh);
241
+ tcg_gen_add_vec(vece, d, d, a);
242
+}
243
+
244
+const GVecGen2i usra_op[4] = {
245
+ { .fni8 = gen_usra8_i64,
246
+ .fniv = gen_usra_vec,
247
+ .load_dest = true,
248
+ .opc = INDEX_op_shri_vec,
249
+ .vece = MO_8, },
250
+ { .fni8 = gen_usra16_i64,
251
+ .fniv = gen_usra_vec,
252
+ .load_dest = true,
253
+ .opc = INDEX_op_shri_vec,
254
+ .vece = MO_16, },
255
+ { .fni4 = gen_usra32_i32,
256
+ .fniv = gen_usra_vec,
257
+ .load_dest = true,
258
+ .opc = INDEX_op_shri_vec,
259
+ .vece = MO_32, },
260
+ { .fni8 = gen_usra64_i64,
261
+ .fniv = gen_usra_vec,
262
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
263
+ .load_dest = true,
264
+ .opc = INDEX_op_shri_vec,
265
+ .vece = MO_64, },
266
+};
267
268
/* Translate a NEON data processing instruction. Return nonzero if the
269
instruction is invalid.
270
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
172
}
271
}
173
break;
272
return 0;
174
case 2:
273
175
- if (u) {
274
+ case 1: /* VSRA */
176
- gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
275
+ /* Right shift comes here negative. */
177
- } else {
276
+ shift = -shift;
178
- gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
277
+ /* Shifts larger than the element size are architecturally
179
- }
278
+ * valid. Unsigned results in all zeros; signed results
180
+ gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
279
+ * in all sign bits.
181
break;
280
+ */
182
default:
281
+ if (!u) {
183
g_assert_not_reached();
282
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
184
}
283
+ MIN(shift, (8 << size) - 1),
185
break;
284
+ &ssra_op[size]);
186
- case 0xc: /* SQDMULH */
285
+ } else if (shift >= 8 << size) {
187
+ case 0x19: /* FMULX */
286
+ /* rd += 0 */
188
+ switch (size) {
189
+ case 1:
190
+ if (is_scalar) {
191
+ gen_helper_advsimd_mulxh(tcg_res, tcg_op,
192
+ tcg_idx, fpst);
193
+ } else {
287
+ } else {
194
+ gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
288
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
195
+ tcg_idx, fpst);
289
+ shift, &usra_op[size]);
196
+ }
290
+ }
197
+ break;
291
+ return 0;
198
+ case 2:
292
+
199
+ gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
293
case 5: /* VSHL, VSLI */
200
+ break;
294
if (!u) { /* VSHL */
201
+ default:
295
/* Shifts larger than the element size are
202
+ g_assert_not_reached();
296
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
203
+ }
297
neon_load_reg64(cpu_V0, rm + pass);
204
+ break;
298
tcg_gen_movi_i64(cpu_V1, imm);
205
+ case 0x0c: /* SQDMULH */
299
switch (op) {
206
if (size == 1) {
300
- case 1: /* VSRA */
207
gen_helper_neon_qdmulh_s16(tcg_res, cpu_env,
301
- if (u)
208
tcg_op, tcg_idx);
302
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
209
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
303
- else
210
tcg_op, tcg_idx);
304
- gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
211
}
305
- break;
212
break;
306
case 2: /* VRSHR */
213
- case 0xd: /* SQRDMULH */
307
case 3: /* VRSRA */
214
+ case 0x0d: /* SQRDMULH */
308
if (u)
215
if (size == 1) {
309
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
216
gen_helper_neon_qrdmulh_s16(tcg_res, cpu_env,
310
default:
217
tcg_op, tcg_idx);
311
g_assert_not_reached();
312
}
313
- if (op == 1 || op == 3) {
314
+ if (op == 3) {
315
/* Accumulate. */
316
neon_load_reg64(cpu_V1, rd + pass);
317
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
318
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
319
tmp2 = tcg_temp_new_i32();
320
tcg_gen_movi_i32(tmp2, imm);
321
switch (op) {
322
- case 1: /* VSRA */
323
- GEN_NEON_INTEGER_OP(shl);
324
- break;
325
case 2: /* VRSHR */
326
case 3: /* VRSRA */
327
GEN_NEON_INTEGER_OP(rshl);
328
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
329
}
330
tcg_temp_free_i32(tmp2);
331
332
- if (op == 1 || op == 3) {
333
+ if (op == 3) {
334
/* Accumulate. */
335
tmp2 = neon_load_reg(rd, pass);
336
gen_neon_add(size, tmp, tmp2);
218
--
337
--
219
2.16.2
338
2.19.1
220
339
221
340
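For reference, the ssra_op/usra_op expanders in the patch above boil down to a per-lane "shift right, then accumulate into the destination" step, with out-of-range signed shifts clamped to element_size - 1 (all sign bits) and out-of-range unsigned shifts adding zero, as the new VSRA decode notes. A minimal standalone C model of the 8-bit lane case, illustrative only and not QEMU code:

    /* Per-lane reference model of SSRA/USRA: shift right, then accumulate.
     * Arithmetic shift of a negative value is assumed to sign-extend, as it
     * does on the hosts QEMU targets.
     */
    #include <stdint.h>
    #include <stdio.h>

    static int8_t ssra8(int8_t d, int8_t a, int shift)
    {
        return d + (a >> shift);            /* arithmetic shift: sign bits fill */
    }

    static uint8_t usra8(uint8_t d, uint8_t a, int shift)
    {
        return d + (a >> shift);            /* logical shift: zeros fill */
    }

    int main(void)
    {
        printf("%d\n", ssra8(10, -64, 3));   /* 10 + (-8) = 2  */
        printf("%u\n", usra8(10, 0xC0, 3));  /* 10 + 24  = 34 */
        return 0;
    }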
1
In some board or SoC models it is necessary to split a qemu_irq line
1
From: Richard Henderson <richard.henderson@linaro.org>
2
so that one input can feed multiple outputs. We currently have
3
qemu_irq_split() for this, but that has several deficiencies:
4
* it can only handle splitting a line into two
5
* it unavoidably leaks memory, so it can't be used
6
in a device that can be deleted
7
2
8
Implement a qdev device that encapsulates splitting of IRQs, with a
3
Move shi_op and sli_op expanders from translate-a64.c.
9
configurable number of outputs. (This is in some ways the inverse of
10
the TYPE_OR_IRQ device.)
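As a rough illustration of how a board model might use the new device (not part of this patch; the qdev helper names used here, qdev_create()/qdev_init_nofail() and friends, are simply the ones current at the time and dev_a/dev_b are placeholders):

    /* Fan one IRQ line out to two consumers via TYPE_SPLIT_IRQ. Sketch only. */
    DeviceState *splitter = qdev_create(NULL, TYPE_SPLIT_IRQ);

    qdev_prop_set_uint16(splitter, "num-lines", 2);
    qdev_init_nofail(splitter);

    /* Wire the two outputs to the devices being fed */
    qdev_connect_gpio_out(splitter, 0, qdev_get_gpio_in(dev_a, 0));
    qdev_connect_gpio_out(splitter, 1, qdev_get_gpio_in(dev_b, 0));

    /* ...and this single input replaces the old qemu_irq_split() result */
    qemu_irq in = qdev_get_gpio_in(splitter, 0);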
11
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20180220180325.29818-13-peter.maydell@linaro.org
15
---
9
---
16
hw/core/Makefile.objs | 1 +
10
target/arm/translate.h | 2 +
17
include/hw/core/split-irq.h | 57 +++++++++++++++++++++++++++++
11
target/arm/translate-a64.c | 152 +----------------------
18
include/hw/irq.h | 4 +-
12
target/arm/translate.c | 244 ++++++++++++++++++++++++++-----------
19
hw/core/split-irq.c | 89 +++++++++++++++++++++++++++++++++++++++++++++
13
3 files changed, 179 insertions(+), 219 deletions(-)
20
4 files changed, 150 insertions(+), 1 deletion(-)
21
create mode 100644 include/hw/core/split-irq.h
22
create mode 100644 hw/core/split-irq.c
23
14
24
diff --git a/hw/core/Makefile.objs b/hw/core/Makefile.objs
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
25
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/core/Makefile.objs
17
--- a/target/arm/translate.h
27
+++ b/hw/core/Makefile.objs
18
+++ b/target/arm/translate.h
28
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_FITLOADER) += loader-fit.o
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
29
common-obj-$(CONFIG_SOFTMMU) += qdev-properties-system.o
20
extern const GVecGen3 bif_op;
30
common-obj-$(CONFIG_SOFTMMU) += register.o
21
extern const GVecGen2i ssra_op[4];
31
common-obj-$(CONFIG_SOFTMMU) += or-irq.o
22
extern const GVecGen2i usra_op[4];
32
+common-obj-$(CONFIG_SOFTMMU) += split-irq.o
23
+extern const GVecGen2i sri_op[4];
33
common-obj-$(CONFIG_PLATFORM_BUS) += platform-bus.o
24
+extern const GVecGen2i sli_op[4];
34
25
35
obj-$(CONFIG_SOFTMMU) += generic-loader.o
26
/*
36
diff --git a/include/hw/core/split-irq.h b/include/hw/core/split-irq.h
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
37
new file mode 100644
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
38
index XXXXXXX..XXXXXXX
29
index XXXXXXX..XXXXXXX 100644
39
--- /dev/null
30
--- a/target/arm/translate-a64.c
40
+++ b/include/hw/core/split-irq.h
31
+++ b/target/arm/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
42
+/*
33
}
43
+ * IRQ splitter device.
34
}
44
+ *
35
45
+ * Copyright (c) 2018 Linaro Limited.
36
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
46
+ * Written by Peter Maydell
37
-{
47
+ *
38
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
48
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
39
- TCGv_i64 t = tcg_temp_new_i64();
49
+ * of this software and associated documentation files (the "Software"), to deal
40
-
50
+ * in the Software without restriction, including without limitation the rights
41
- tcg_gen_shri_i64(t, a, shift);
51
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
42
- tcg_gen_andi_i64(t, t, mask);
52
+ * copies of the Software, and to permit persons to whom the Software is
43
- tcg_gen_andi_i64(d, d, ~mask);
53
+ * furnished to do so, subject to the following conditions:
44
- tcg_gen_or_i64(d, d, t);
54
+ *
45
- tcg_temp_free_i64(t);
55
+ * The above copyright notice and this permission notice shall be included in
46
-}
56
+ * all copies or substantial portions of the Software.
47
-
57
+ *
48
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
58
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
49
-{
59
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
50
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
60
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
51
- TCGv_i64 t = tcg_temp_new_i64();
61
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
52
-
62
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
53
- tcg_gen_shri_i64(t, a, shift);
63
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
54
- tcg_gen_andi_i64(t, t, mask);
64
+ * THE SOFTWARE.
55
- tcg_gen_andi_i64(d, d, ~mask);
65
+ */
56
- tcg_gen_or_i64(d, d, t);
66
+
57
- tcg_temp_free_i64(t);
67
+/* This is a simple device which has one GPIO input line and multiple
58
-}
68
+ * GPIO output lines. Any change on the input line is forwarded to all
59
-
69
+ * of the outputs.
60
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
70
+ *
61
-{
71
+ * QEMU interface:
62
- tcg_gen_shri_i32(a, a, shift);
72
+ * + one unnamed GPIO input: the input line
63
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
73
+ * + N unnamed GPIO outputs: the output lines
64
-}
74
+ * + QOM property "num-lines": sets the number of output lines
65
-
75
+ */
66
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
76
+#ifndef HW_SPLIT_IRQ_H
67
-{
77
+#define HW_SPLIT_IRQ_H
68
- tcg_gen_shri_i64(a, a, shift);
78
+
69
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
79
+#include "hw/irq.h"
70
-}
80
+#include "hw/sysbus.h"
71
-
81
+#include "qom/object.h"
72
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
82
+
73
-{
83
+#define TYPE_SPLIT_IRQ "split-irq"
74
- uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
84
+
75
- TCGv_vec t = tcg_temp_new_vec_matching(d);
85
+#define MAX_SPLIT_LINES 16
76
- TCGv_vec m = tcg_temp_new_vec_matching(d);
86
+
77
-
87
+typedef struct SplitIRQ SplitIRQ;
78
- tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
88
+
79
- tcg_gen_shri_vec(vece, t, a, sh);
89
+#define SPLIT_IRQ(obj) OBJECT_CHECK(SplitIRQ, (obj), TYPE_SPLIT_IRQ)
80
- tcg_gen_and_vec(vece, d, d, m);
90
+
81
- tcg_gen_or_vec(vece, d, d, t);
91
+struct SplitIRQ {
82
-
92
+ DeviceState parent_obj;
83
- tcg_temp_free_vec(t);
93
+
84
- tcg_temp_free_vec(m);
94
+ qemu_irq out_irq[MAX_SPLIT_LINES];
85
-}
95
+ uint16_t num_lines;
86
-
87
/* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
88
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
89
int immh, int immb, int opcode, int rn, int rd)
90
{
91
- static const GVecGen2i sri_op[4] = {
92
- { .fni8 = gen_shr8_ins_i64,
93
- .fniv = gen_shr_ins_vec,
94
- .load_dest = true,
95
- .opc = INDEX_op_shri_vec,
96
- .vece = MO_8 },
97
- { .fni8 = gen_shr16_ins_i64,
98
- .fniv = gen_shr_ins_vec,
99
- .load_dest = true,
100
- .opc = INDEX_op_shri_vec,
101
- .vece = MO_16 },
102
- { .fni4 = gen_shr32_ins_i32,
103
- .fniv = gen_shr_ins_vec,
104
- .load_dest = true,
105
- .opc = INDEX_op_shri_vec,
106
- .vece = MO_32 },
107
- { .fni8 = gen_shr64_ins_i64,
108
- .fniv = gen_shr_ins_vec,
109
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
110
- .load_dest = true,
111
- .opc = INDEX_op_shri_vec,
112
- .vece = MO_64 },
113
- };
114
-
115
int size = 32 - clz32(immh) - 1;
116
int immhb = immh << 3 | immb;
117
int shift = 2 * (8 << size) - immhb;
118
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
119
clear_vec_high(s, is_q, rd);
120
}
121
122
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
123
-{
124
- uint64_t mask = dup_const(MO_8, 0xff << shift);
125
- TCGv_i64 t = tcg_temp_new_i64();
126
-
127
- tcg_gen_shli_i64(t, a, shift);
128
- tcg_gen_andi_i64(t, t, mask);
129
- tcg_gen_andi_i64(d, d, ~mask);
130
- tcg_gen_or_i64(d, d, t);
131
- tcg_temp_free_i64(t);
132
-}
133
-
134
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
135
-{
136
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
137
- TCGv_i64 t = tcg_temp_new_i64();
138
-
139
- tcg_gen_shli_i64(t, a, shift);
140
- tcg_gen_andi_i64(t, t, mask);
141
- tcg_gen_andi_i64(d, d, ~mask);
142
- tcg_gen_or_i64(d, d, t);
143
- tcg_temp_free_i64(t);
144
-}
145
-
146
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
147
-{
148
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
149
-}
150
-
151
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
-{
153
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
154
-}
155
-
156
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
157
-{
158
- uint64_t mask = (1ull << sh) - 1;
159
- TCGv_vec t = tcg_temp_new_vec_matching(d);
160
- TCGv_vec m = tcg_temp_new_vec_matching(d);
161
-
162
- tcg_gen_dupi_vec(vece, m, mask);
163
- tcg_gen_shli_vec(vece, t, a, sh);
164
- tcg_gen_and_vec(vece, d, d, m);
165
- tcg_gen_or_vec(vece, d, d, t);
166
-
167
- tcg_temp_free_vec(t);
168
- tcg_temp_free_vec(m);
169
-}
170
-
171
/* SHL/SLI - Vector shift left */
172
static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
173
int immh, int immb, int opcode, int rn, int rd)
174
{
175
- static const GVecGen2i shi_op[4] = {
176
- { .fni8 = gen_shl8_ins_i64,
177
- .fniv = gen_shl_ins_vec,
178
- .opc = INDEX_op_shli_vec,
179
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
180
- .load_dest = true,
181
- .vece = MO_8 },
182
- { .fni8 = gen_shl16_ins_i64,
183
- .fniv = gen_shl_ins_vec,
184
- .opc = INDEX_op_shli_vec,
185
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
- .load_dest = true,
187
- .vece = MO_16 },
188
- { .fni4 = gen_shl32_ins_i32,
189
- .fniv = gen_shl_ins_vec,
190
- .opc = INDEX_op_shli_vec,
191
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
192
- .load_dest = true,
193
- .vece = MO_32 },
194
- { .fni8 = gen_shl64_ins_i64,
195
- .fniv = gen_shl_ins_vec,
196
- .opc = INDEX_op_shli_vec,
197
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
198
- .load_dest = true,
199
- .vece = MO_64 },
200
- };
201
int size = 32 - clz32(immh) - 1;
202
int immhb = immh << 3 | immb;
203
int shift = immhb - (8 << size);
204
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
205
}
206
207
if (insert) {
208
- gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
209
+ gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
210
} else {
211
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
212
}
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
218
.vece = MO_64, },
219
};
220
221
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
222
+{
223
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
224
+ TCGv_i64 t = tcg_temp_new_i64();
225
+
226
+ tcg_gen_shri_i64(t, a, shift);
227
+ tcg_gen_andi_i64(t, t, mask);
228
+ tcg_gen_andi_i64(d, d, ~mask);
229
+ tcg_gen_or_i64(d, d, t);
230
+ tcg_temp_free_i64(t);
231
+}
232
+
233
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
234
+{
235
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
236
+ TCGv_i64 t = tcg_temp_new_i64();
237
+
238
+ tcg_gen_shri_i64(t, a, shift);
239
+ tcg_gen_andi_i64(t, t, mask);
240
+ tcg_gen_andi_i64(d, d, ~mask);
241
+ tcg_gen_or_i64(d, d, t);
242
+ tcg_temp_free_i64(t);
243
+}
244
+
245
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
246
+{
247
+ tcg_gen_shri_i32(a, a, shift);
248
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
249
+}
250
+
251
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
252
+{
253
+ tcg_gen_shri_i64(a, a, shift);
254
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
255
+}
256
+
257
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
258
+{
259
+ if (sh == 0) {
260
+ tcg_gen_mov_vec(d, a);
261
+ } else {
262
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
263
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
264
+
265
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
266
+ tcg_gen_shri_vec(vece, t, a, sh);
267
+ tcg_gen_and_vec(vece, d, d, m);
268
+ tcg_gen_or_vec(vece, d, d, t);
269
+
270
+ tcg_temp_free_vec(t);
271
+ tcg_temp_free_vec(m);
272
+ }
273
+}
274
+
275
+const GVecGen2i sri_op[4] = {
276
+ { .fni8 = gen_shr8_ins_i64,
277
+ .fniv = gen_shr_ins_vec,
278
+ .load_dest = true,
279
+ .opc = INDEX_op_shri_vec,
280
+ .vece = MO_8 },
281
+ { .fni8 = gen_shr16_ins_i64,
282
+ .fniv = gen_shr_ins_vec,
283
+ .load_dest = true,
284
+ .opc = INDEX_op_shri_vec,
285
+ .vece = MO_16 },
286
+ { .fni4 = gen_shr32_ins_i32,
287
+ .fniv = gen_shr_ins_vec,
288
+ .load_dest = true,
289
+ .opc = INDEX_op_shri_vec,
290
+ .vece = MO_32 },
291
+ { .fni8 = gen_shr64_ins_i64,
292
+ .fniv = gen_shr_ins_vec,
293
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
294
+ .load_dest = true,
295
+ .opc = INDEX_op_shri_vec,
296
+ .vece = MO_64 },
96
+};
297
+};
97
+
298
+
98
+#endif
299
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
99
diff --git a/include/hw/irq.h b/include/hw/irq.h
300
+{
100
index XXXXXXX..XXXXXXX 100644
301
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
101
--- a/include/hw/irq.h
302
+ TCGv_i64 t = tcg_temp_new_i64();
102
+++ b/include/hw/irq.h
303
+
103
@@ -XXX,XX +XXX,XX @@ void qemu_free_irq(qemu_irq irq);
304
+ tcg_gen_shli_i64(t, a, shift);
104
/* Returns a new IRQ with opposite polarity. */
305
+ tcg_gen_andi_i64(t, t, mask);
105
qemu_irq qemu_irq_invert(qemu_irq irq);
306
+ tcg_gen_andi_i64(d, d, ~mask);
106
307
+ tcg_gen_or_i64(d, d, t);
107
-/* Returns a new IRQ which feeds into both the passed IRQs */
308
+ tcg_temp_free_i64(t);
108
+/* Returns a new IRQ which feeds into both the passed IRQs.
309
+}
109
+ * It's probably better to use the TYPE_SPLIT_IRQ device instead.
310
+
110
+ */
311
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
111
qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2);
312
+{
112
313
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
113
/* Returns a new IRQ set which connects 1:1 to another IRQ set, which
314
+ TCGv_i64 t = tcg_temp_new_i64();
114
diff --git a/hw/core/split-irq.c b/hw/core/split-irq.c
315
+
115
new file mode 100644
316
+ tcg_gen_shli_i64(t, a, shift);
116
index XXXXXXX..XXXXXXX
317
+ tcg_gen_andi_i64(t, t, mask);
117
--- /dev/null
318
+ tcg_gen_andi_i64(d, d, ~mask);
118
+++ b/hw/core/split-irq.c
319
+ tcg_gen_or_i64(d, d, t);
119
@@ -XXX,XX +XXX,XX @@
320
+ tcg_temp_free_i64(t);
120
+/*
321
+}
121
+ * IRQ splitter device.
322
+
122
+ *
323
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
123
+ * Copyright (c) 2018 Linaro Limited.
324
+{
124
+ * Written by Peter Maydell
325
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
125
+ *
326
+}
126
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
327
+
127
+ * of this software and associated documentation files (the "Software"), to deal
328
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
128
+ * in the Software without restriction, including without limitation the rights
329
+{
129
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
330
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
130
+ * copies of the Software, and to permit persons to whom the Software is
331
+}
131
+ * furnished to do so, subject to the following conditions:
332
+
132
+ *
333
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
133
+ * The above copyright notice and this permission notice shall be included in
334
+{
134
+ * all copies or substantial portions of the Software.
335
+ if (sh == 0) {
135
+ *
336
+ tcg_gen_mov_vec(d, a);
136
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
337
+ } else {
137
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
338
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
138
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
339
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
139
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
340
+
140
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
341
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
141
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
342
+ tcg_gen_shli_vec(vece, t, a, sh);
142
+ * THE SOFTWARE.
343
+ tcg_gen_and_vec(vece, d, d, m);
143
+ */
344
+ tcg_gen_or_vec(vece, d, d, t);
144
+
345
+
145
+#include "qemu/osdep.h"
346
+ tcg_temp_free_vec(t);
146
+#include "hw/core/split-irq.h"
347
+ tcg_temp_free_vec(m);
147
+#include "qapi/error.h"
148
+
149
+static void split_irq_handler(void *opaque, int n, int level)
150
+{
151
+ SplitIRQ *s = SPLIT_IRQ(opaque);
152
+ int i;
153
+
154
+ for (i = 0; i < s->num_lines; i++) {
155
+ qemu_set_irq(s->out_irq[i], level);
156
+ }
348
+ }
157
+}
349
+}
158
+
350
+
159
+static void split_irq_init(Object *obj)
351
+const GVecGen2i sli_op[4] = {
160
+{
352
+ { .fni8 = gen_shl8_ins_i64,
161
+ qdev_init_gpio_in(DEVICE(obj), split_irq_handler, 1);
353
+ .fniv = gen_shl_ins_vec,
162
+}
354
+ .load_dest = true,
163
+
355
+ .opc = INDEX_op_shli_vec,
164
+static void split_irq_realize(DeviceState *dev, Error **errp)
356
+ .vece = MO_8 },
165
+{
357
+ { .fni8 = gen_shl16_ins_i64,
166
+ SplitIRQ *s = SPLIT_IRQ(dev);
358
+ .fniv = gen_shl_ins_vec,
167
+
359
+ .load_dest = true,
168
+ if (s->num_lines < 1 || s->num_lines >= MAX_SPLIT_LINES) {
360
+ .opc = INDEX_op_shli_vec,
169
+ error_setg(errp,
361
+ .vece = MO_16 },
170
+ "IRQ splitter number of lines %d is not between 1 and %d",
362
+ { .fni4 = gen_shl32_ins_i32,
171
+ s->num_lines, MAX_SPLIT_LINES);
363
+ .fniv = gen_shl_ins_vec,
172
+ return;
364
+ .load_dest = true,
173
+ }
365
+ .opc = INDEX_op_shli_vec,
174
+
366
+ .vece = MO_32 },
175
+ qdev_init_gpio_out(dev, s->out_irq, s->num_lines);
367
+ { .fni8 = gen_shl64_ins_i64,
176
+}
368
+ .fniv = gen_shl_ins_vec,
177
+
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
178
+static Property split_irq_properties[] = {
370
+ .load_dest = true,
179
+ DEFINE_PROP_UINT16("num-lines", SplitIRQ, num_lines, 1),
371
+ .opc = INDEX_op_shli_vec,
180
+ DEFINE_PROP_END_OF_LIST(),
372
+ .vece = MO_64 },
181
+};
373
+};
182
+
374
+
183
+static void split_irq_class_init(ObjectClass *klass, void *data)
375
/* Translate a NEON data processing instruction. Return nonzero if the
184
+{
376
instruction is invalid.
185
+ DeviceClass *dc = DEVICE_CLASS(klass);
377
We process data in a mixture of 32-bit and 64-bit chunks.
186
+
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
187
+ /* No state to reset or migrate */
379
int pairwise;
188
+ dc->props = split_irq_properties;
380
int u;
189
+ dc->realize = split_irq_realize;
381
int vec_size;
190
+
382
- uint32_t imm, mask;
191
+ /* Reason: Needs to be wired up to work */
383
+ uint32_t imm;
192
+ dc->user_creatable = false;
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
193
+}
385
TCGv_ptr ptr1, ptr2, ptr3;
194
+
386
TCGv_i64 tmp64;
195
+static const TypeInfo split_irq_type_info = {
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
196
+ .name = TYPE_SPLIT_IRQ,
388
}
197
+ .parent = TYPE_DEVICE,
389
return 0;
198
+ .instance_size = sizeof(SplitIRQ),
390
199
+ .instance_init = split_irq_init,
391
+ case 4: /* VSRI */
200
+ .class_init = split_irq_class_init,
392
+ if (!u) {
201
+};
393
+ return 1;
202
+
394
+ }
203
+static void split_irq_register_types(void)
395
+ /* Right shift comes here negative. */
204
+{
396
+ shift = -shift;
205
+ type_register_static(&split_irq_type_info);
397
+ /* Shift out of range leaves destination unchanged. */
206
+}
398
+ if (shift < 8 << size) {
207
+
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
208
+type_init(split_irq_register_types)
400
+ shift, &sri_op[size]);
401
+ }
402
+ return 0;
403
+
404
case 5: /* VSHL, VSLI */
405
- if (!u) { /* VSHL */
406
+ if (u) { /* VSLI */
407
+ /* Shift out of range leaves destination unchanged. */
408
+ if (shift < 8 << size) {
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
410
+ vec_size, shift, &sli_op[size]);
411
+ }
412
+ } else { /* VSHL */
413
/* Shifts larger than the element size are
414
* architecturally valid and results in zero.
415
*/
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
418
vec_size, vec_size);
419
}
420
- return 0;
421
}
422
- break;
423
+ return 0;
424
}
425
426
if (size == 3) {
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
428
else
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
430
break;
431
- case 4: /* VSRI */
432
- case 5: /* VSHL, VSLI */
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
434
- break;
435
case 6: /* VQSHLU */
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
437
cpu_V0, cpu_V1);
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
439
/* Accumulate. */
440
neon_load_reg64(cpu_V1, rd + pass);
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
442
- } else if (op == 4 || (op == 5 && u)) {
443
- /* Insert */
444
- neon_load_reg64(cpu_V1, rd + pass);
445
- uint64_t mask;
446
- if (shift < -63 || shift > 63) {
447
- mask = 0;
448
- } else {
449
- if (op == 4) {
450
- mask = 0xffffffffffffffffull >> -shift;
451
- } else {
452
- mask = 0xffffffffffffffffull << shift;
453
- }
454
- }
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
457
}
458
neon_store_reg64(cpu_V0, rd + pass);
459
} else { /* size < 3 */
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
461
case 3: /* VRSRA */
462
GEN_NEON_INTEGER_OP(rshl);
463
break;
464
- case 4: /* VSRI */
465
- case 5: /* VSHL, VSLI */
466
- switch (size) {
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
470
- default: abort();
471
- }
472
- break;
473
case 6: /* VQSHLU */
474
switch (size) {
475
case 0:
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
477
tmp2 = neon_load_reg(rd, pass);
478
gen_neon_add(size, tmp, tmp2);
479
tcg_temp_free_i32(tmp2);
480
- } else if (op == 4 || (op == 5 && u)) {
481
- /* Insert */
482
- switch (size) {
483
- case 0:
484
- if (op == 4)
485
- mask = 0xff >> -shift;
486
- else
487
- mask = (uint8_t)(0xff << shift);
488
- mask |= mask << 8;
489
- mask |= mask << 16;
490
- break;
491
- case 1:
492
- if (op == 4)
493
- mask = 0xffff >> -shift;
494
- else
495
- mask = (uint16_t)(0xffff << shift);
496
- mask |= mask << 16;
497
- break;
498
- case 2:
499
- if (shift < -31 || shift > 31) {
500
- mask = 0;
501
- } else {
502
- if (op == 4)
503
- mask = 0xffffffffu >> -shift;
504
- else
505
- mask = 0xffffffffu << shift;
506
- }
507
- break;
508
- default:
509
- abort();
510
- }
511
- tmp2 = neon_load_reg(rd, pass);
512
- tcg_gen_andi_i32(tmp, tmp, mask);
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
515
- tcg_temp_free_i32(tmp2);
516
}
517
neon_store_reg(rd, pass, tmp);
518
}
209
--
519
--
210
2.16.2
520
2.19.1
211
521
212
522
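For reference, the sri_op/sli_op expanders moved above implement "shift and insert": the shifted source overwrites only the bit positions it lands on, and the remaining destination bits are preserved, which is what the mask arithmetic in gen_shr_ins_vec()/gen_shl_ins_vec() computes. A standalone C model of one 8-bit lane, illustrative only and not QEMU code:

    /* Per-lane reference model of SRI (shift right insert) and SLI
     * (shift left insert) for an 8-bit element.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t sri8(uint8_t d, uint8_t a, int sh)
    {
        uint8_t mask = 0xff >> sh;               /* bits written by the insert */
        return (d & ~mask) | ((a >> sh) & mask); /* top sh bits of d survive */
    }

    static uint8_t sli8(uint8_t d, uint8_t a, int sh)
    {
        uint8_t mask = (uint8_t)(0xff << sh);
        return (d & ~mask) | ((a << sh) & mask); /* low sh bits of d survive */
    }

    int main(void)
    {
        printf("%02x\n", sri8(0xF0, 0xAB, 4));   /* 0xF0 | 0x0A -> 0xFA */
        printf("%02x\n", sli8(0x0F, 0xAB, 4));   /* 0x0F | 0xB0 -> 0xBF */
        return 0;
    }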
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move mla_op and mls_op expanders from translate-a64.c.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-8-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate.c | 86 +++++++++++++++++++++++++++++++++++++++-----------
10
target/arm/translate.h | 2 +
9
1 file changed, 67 insertions(+), 19 deletions(-)
11
target/arm/translate-a64.c | 106 -----------------------------
10
12
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
13
3 files changed, 120 insertions(+), 122 deletions(-)
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
extern const GVecGen3 bsl_op;
21
extern const GVecGen3 bit_op;
22
extern const GVecGen3 bif_op;
23
+extern const GVecGen3 mla_op[4];
24
+extern const GVecGen3 mls_op[4];
25
extern const GVecGen2i ssra_op[4];
26
extern const GVecGen2i usra_op[4];
27
extern const GVecGen2i sri_op[4];
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
33
}
34
}
35
36
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
37
-{
38
- gen_helper_neon_mul_u8(a, a, b);
39
- gen_helper_neon_add_u8(d, d, a);
40
-}
41
-
42
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
43
-{
44
- gen_helper_neon_mul_u16(a, a, b);
45
- gen_helper_neon_add_u16(d, d, a);
46
-}
47
-
48
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
49
-{
50
- tcg_gen_mul_i32(a, a, b);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
55
-{
56
- tcg_gen_mul_i64(a, a, b);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
61
-{
62
- tcg_gen_mul_vec(vece, a, a, b);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
67
-{
68
- gen_helper_neon_mul_u8(a, a, b);
69
- gen_helper_neon_sub_u8(d, d, a);
70
-}
71
-
72
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
73
-{
74
- gen_helper_neon_mul_u16(a, a, b);
75
- gen_helper_neon_sub_u16(d, d, a);
76
-}
77
-
78
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
79
-{
80
- tcg_gen_mul_i32(a, a, b);
81
- tcg_gen_sub_i32(d, d, a);
82
-}
83
-
84
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
85
-{
86
- tcg_gen_mul_i64(a, a, b);
87
- tcg_gen_sub_i64(d, d, a);
88
-}
89
-
90
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
-{
92
- tcg_gen_mul_vec(vece, a, a, b);
93
- tcg_gen_sub_vec(vece, d, d, a);
94
-}
95
-
96
/* Integer op subgroup of C3.6.16. */
97
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
98
{
99
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
100
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
.vece = MO_64 },
102
};
103
- static const GVecGen3 mla_op[4] = {
104
- { .fni4 = gen_mla8_i32,
105
- .fniv = gen_mla_vec,
106
- .opc = INDEX_op_mul_vec,
107
- .load_dest = true,
108
- .vece = MO_8 },
109
- { .fni4 = gen_mla16_i32,
110
- .fniv = gen_mla_vec,
111
- .opc = INDEX_op_mul_vec,
112
- .load_dest = true,
113
- .vece = MO_16 },
114
- { .fni4 = gen_mla32_i32,
115
- .fniv = gen_mla_vec,
116
- .opc = INDEX_op_mul_vec,
117
- .load_dest = true,
118
- .vece = MO_32 },
119
- { .fni8 = gen_mla64_i64,
120
- .fniv = gen_mla_vec,
121
- .opc = INDEX_op_mul_vec,
122
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
123
- .load_dest = true,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen3 mls_op[4] = {
127
- { .fni4 = gen_mls8_i32,
128
- .fniv = gen_mls_vec,
129
- .opc = INDEX_op_mul_vec,
130
- .load_dest = true,
131
- .vece = MO_8 },
132
- { .fni4 = gen_mls16_i32,
133
- .fniv = gen_mls_vec,
134
- .opc = INDEX_op_mul_vec,
135
- .load_dest = true,
136
- .vece = MO_16 },
137
- { .fni4 = gen_mls32_i32,
138
- .fniv = gen_mls_vec,
139
- .opc = INDEX_op_mul_vec,
140
- .load_dest = true,
141
- .vece = MO_32 },
142
- { .fni8 = gen_mls64_i64,
143
- .fniv = gen_mls_vec,
144
- .opc = INDEX_op_mul_vec,
145
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
146
- .load_dest = true,
147
- .vece = MO_64 },
148
- };
149
150
int is_q = extract32(insn, 30, 1);
151
int u = extract32(insn, 29, 1);
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
153
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
154
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
155
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@
16
#include "disas/disas.h"
17
#include "exec/exec-all.h"
18
#include "tcg-op.h"
19
+#include "tcg-op-gvec.h"
20
#include "qemu/log.h"
21
#include "qemu/bitops.h"
22
#include "arm_ldst.h"
23
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
156
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
157
#define NEON_3R_VABA 15
158
#define NEON_3R_VADD_VSUB 16
159
#define NEON_3R_VTST_VCEQ 17
160
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
161
+#define NEON_3R_VML 18 /* VMLA, VMLS */
162
#define NEON_3R_VMUL 19
24
#define NEON_3R_VPMAX 20
163
#define NEON_3R_VPMAX 20
25
#define NEON_3R_VPMIN 21
164
#define NEON_3R_VPMIN 21
26
#define NEON_3R_VQDMULH_VQRDMULH 22
165
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
27
-#define NEON_3R_VPADD 23
166
.vece = MO_64 },
28
+#define NEON_3R_VPADD_VQRDMLAH 23
29
#define NEON_3R_SHA 24 /* SHA1C,SHA1P,SHA1M,SHA1SU0,SHA256H{2},SHA256SU1 */
30
-#define NEON_3R_VFM 25 /* VFMA, VFMS : float fused multiply-add */
31
+#define NEON_3R_VFM_VQRDMLSH 25 /* VFMA, VFMS, VQRDMLSH */
32
#define NEON_3R_FLOAT_ARITH 26 /* float VADD, VSUB, VPADD, VABD */
33
#define NEON_3R_FLOAT_MULTIPLY 27 /* float VMLA, VMLS, VMUL */
34
#define NEON_3R_FLOAT_CMP 28 /* float VCEQ, VCGE, VCGT */
35
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_3r_sizes[] = {
36
[NEON_3R_VPMAX] = 0x7,
37
[NEON_3R_VPMIN] = 0x7,
38
[NEON_3R_VQDMULH_VQRDMULH] = 0x6,
39
- [NEON_3R_VPADD] = 0x7,
40
+ [NEON_3R_VPADD_VQRDMLAH] = 0x7,
41
[NEON_3R_SHA] = 0xf, /* size field encodes op type */
42
- [NEON_3R_VFM] = 0x5, /* size bit 1 encodes op */
43
+ [NEON_3R_VFM_VQRDMLSH] = 0x7, /* For VFM, size bit 1 encodes op */
44
[NEON_3R_FLOAT_ARITH] = 0x5, /* size bit 1 encodes op */
45
[NEON_3R_FLOAT_MULTIPLY] = 0x5, /* size bit 1 encodes op */
46
[NEON_3R_FLOAT_CMP] = 0x5, /* size bit 1 encodes op */
47
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
48
[NEON_2RM_VCVT_UF] = 0x4,
49
};
167
};
50
168
51
+
169
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
52
+/* Expand v8.1 simd helper. */
170
+{
53
+static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
171
+ gen_helper_neon_mul_u8(a, a, b);
54
+ int q, int rd, int rn, int rm)
172
+ gen_helper_neon_add_u8(d, d, a);
55
+{
173
+}
56
+ if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
174
+
57
+ int opr_sz = (1 + q) * 8;
175
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
58
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
176
+{
59
+ vfp_reg_offset(1, rn),
177
+ gen_helper_neon_mul_u8(a, a, b);
60
+ vfp_reg_offset(1, rm), cpu_env,
178
+ gen_helper_neon_sub_u8(d, d, a);
61
+ opr_sz, opr_sz, 0, fn);
179
+}
62
+ return 0;
180
+
63
+ }
181
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
64
+ return 1;
182
+{
65
+}
183
+ gen_helper_neon_mul_u16(a, a, b);
184
+ gen_helper_neon_add_u16(d, d, a);
185
+}
186
+
187
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
188
+{
189
+ gen_helper_neon_mul_u16(a, a, b);
190
+ gen_helper_neon_sub_u16(d, d, a);
191
+}
192
+
193
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
194
+{
195
+ tcg_gen_mul_i32(a, a, b);
196
+ tcg_gen_add_i32(d, d, a);
197
+}
198
+
199
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
200
+{
201
+ tcg_gen_mul_i32(a, a, b);
202
+ tcg_gen_sub_i32(d, d, a);
203
+}
204
+
205
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
206
+{
207
+ tcg_gen_mul_i64(a, a, b);
208
+ tcg_gen_add_i64(d, d, a);
209
+}
210
+
211
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
212
+{
213
+ tcg_gen_mul_i64(a, a, b);
214
+ tcg_gen_sub_i64(d, d, a);
215
+}
216
+
217
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
218
+{
219
+ tcg_gen_mul_vec(vece, a, a, b);
220
+ tcg_gen_add_vec(vece, d, d, a);
221
+}
222
+
223
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
224
+{
225
+ tcg_gen_mul_vec(vece, a, a, b);
226
+ tcg_gen_sub_vec(vece, d, d, a);
227
+}
228
+
229
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
230
+ * these tables are shared with AArch64 which does support them.
231
+ */
232
+const GVecGen3 mla_op[4] = {
233
+ { .fni4 = gen_mla8_i32,
234
+ .fniv = gen_mla_vec,
235
+ .opc = INDEX_op_mul_vec,
236
+ .load_dest = true,
237
+ .vece = MO_8 },
238
+ { .fni4 = gen_mla16_i32,
239
+ .fniv = gen_mla_vec,
240
+ .opc = INDEX_op_mul_vec,
241
+ .load_dest = true,
242
+ .vece = MO_16 },
243
+ { .fni4 = gen_mla32_i32,
244
+ .fniv = gen_mla_vec,
245
+ .opc = INDEX_op_mul_vec,
246
+ .load_dest = true,
247
+ .vece = MO_32 },
248
+ { .fni8 = gen_mla64_i64,
249
+ .fniv = gen_mla_vec,
250
+ .opc = INDEX_op_mul_vec,
251
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
252
+ .load_dest = true,
253
+ .vece = MO_64 },
254
+};
255
+
256
+const GVecGen3 mls_op[4] = {
257
+ { .fni4 = gen_mls8_i32,
258
+ .fniv = gen_mls_vec,
259
+ .opc = INDEX_op_mul_vec,
260
+ .load_dest = true,
261
+ .vece = MO_8 },
262
+ { .fni4 = gen_mls16_i32,
263
+ .fniv = gen_mls_vec,
264
+ .opc = INDEX_op_mul_vec,
265
+ .load_dest = true,
266
+ .vece = MO_16 },
267
+ { .fni4 = gen_mls32_i32,
268
+ .fniv = gen_mls_vec,
269
+ .opc = INDEX_op_mul_vec,
270
+ .load_dest = true,
271
+ .vece = MO_32 },
272
+ { .fni8 = gen_mls64_i64,
273
+ .fniv = gen_mls_vec,
274
+ .opc = INDEX_op_mul_vec,
275
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
276
+ .load_dest = true,
277
+ .vece = MO_64 },
278
+};
66
+
279
+
67
/* Translate a NEON data processing instruction. Return nonzero if the
280
/* Translate a NEON data processing instruction. Return nonzero if the
68
instruction is invalid.
281
instruction is invalid.
69
We process data in a mixture of 32-bit and 64-bit chunks.
282
We process data in a mixture of 32-bit and 64-bit chunks.
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
283
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
71
if (q && ((rd | rn | rm) & 1)) {
284
return 0;
72
return 1;
73
}
74
- /*
75
- * The SHA-1/SHA-256 3-register instructions require special treatment
76
- * here, as their size field is overloaded as an op type selector, and
77
- * they all consume their input in a single pass.
78
- */
79
- if (op == NEON_3R_SHA) {
80
+ switch (op) {
81
+ case NEON_3R_SHA:
82
+ /* The SHA-1/SHA-256 3-register instructions require special
83
+ * treatment here, as their size field is overloaded as an
84
+ * op type selector, and they all consume their input in a
85
+ * single pass.
86
+ */
87
if (!q) {
88
return 1;
89
}
90
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
91
tcg_temp_free_ptr(ptr2);
92
tcg_temp_free_ptr(ptr3);
93
return 0;
94
+
95
+ case NEON_3R_VPADD_VQRDMLAH:
96
+ if (!u) {
97
+ break; /* VPADD */
98
+ }
99
+ /* VQRDMLAH */
100
+ switch (size) {
101
+ case 1:
102
+ return do_v81_helper(s, gen_helper_gvec_qrdmlah_s16,
103
+ q, rd, rn, rm);
104
+ case 2:
105
+ return do_v81_helper(s, gen_helper_gvec_qrdmlah_s32,
106
+ q, rd, rn, rm);
107
+ }
108
+ return 1;
109
+
110
+ case NEON_3R_VFM_VQRDMLSH:
111
+ if (!u) {
112
+ /* VFM, VFMS */
113
+ if (size == 1) {
114
+ return 1;
115
+ }
116
+ break;
117
+ }
118
+ /* VQRDMLSH */
119
+ switch (size) {
120
+ case 1:
121
+ return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s16,
122
+ q, rd, rn, rm);
123
+ case 2:
124
+ return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s32,
125
+ q, rd, rn, rm);
126
+ }
127
+ return 1;
128
}
129
if (size == 3 && op != NEON_3R_LOGIC) {
130
/* 64-bit element instructions. */
131
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
132
rm = rtmp;
133
}
285
}
134
break;
286
break;
135
- case NEON_3R_VPADD:
287
+
136
- if (u) {
288
+ case NEON_3R_VML: /* VMLA, VMLS */
137
- return 1;
289
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
138
- }
290
+ u ? &mls_op[size] : &mla_op[size]);
139
- /* Fall through */
291
+ return 0;
140
+ case NEON_3R_VPADD_VQRDMLAH:
292
}
141
case NEON_3R_VPMAX:
293
+
142
case NEON_3R_VPMIN:
294
if (size == 3) {
143
pairwise = 1;
295
/* 64-bit element instructions. */
144
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
296
for (pass = 0; pass < (q ? 2 : 1); pass++) {
145
return 1;
146
}
147
break;
148
- case NEON_3R_VFM:
149
- if (!arm_dc_feature(s, ARM_FEATURE_VFP4) || u) {
150
+ case NEON_3R_VFM_VQRDMLSH:
151
+ if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
152
return 1;
153
}
154
break;
155
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
297
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
156
}
298
}
157
}
299
}
158
break;
300
break;
159
- case NEON_3R_VPADD:
301
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
160
+ case NEON_3R_VPADD_VQRDMLAH:
302
- switch (size) {
161
switch (size) {
303
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
162
case 0: gen_helper_neon_padd_u8(tmp, tmp, tmp2); break;
304
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
163
case 1: gen_helper_neon_padd_u16(tmp, tmp, tmp2); break;
305
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
306
- default: abort();
165
}
307
- }
166
}
308
- tcg_temp_free_i32(tmp2);
167
break;
309
- tmp2 = neon_load_reg(rd, pass);
168
- case NEON_3R_VFM:
310
- if (u) { /* VMLS */
169
+ case NEON_3R_VFM_VQRDMLSH:
311
- gen_neon_rsb(size, tmp, tmp2);
170
{
312
- } else { /* VMLA */
171
/* VFMA, VFMS: fused multiply-add */
313
- gen_neon_add(size, tmp, tmp2);
172
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
314
- }
315
- break;
316
case NEON_3R_VMUL:
317
/* VMUL.P8; other cases already eliminated. */
318
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
173
--
319
--
174
2.16.2
320
2.19.1
175
321
176
322
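For reference, the mla_op/mls_op tables moved above expand to a plain per-lane integer multiply-accumulate (VMLA) or multiply-subtract (VMLS), wrapping modulo the element width as Neon integer ops do. A standalone C model, illustrative only and not QEMU code:

    /* Per-lane reference model of VMLA on 8-bit elements; VMLS subtracts. */
    #include <stdint.h>
    #include <stdio.h>

    static void mla_u8(uint8_t *d, const uint8_t *a, const uint8_t *b, int n)
    {
        for (int i = 0; i < n; i++) {
            d[i] += a[i] * b[i];   /* wraps modulo 256, matching the insn */
        }
    }

    int main(void)
    {
        uint8_t d[4] = {1, 2, 3, 4};
        uint8_t a[4] = {10, 10, 10, 10};
        uint8_t b[4] = {0, 1, 2, 3};

        mla_u8(d, a, b, 4);
        printf("%u %u %u %u\n", d[0], d[1], d[2], d[3]);  /* 1 12 23 34 */
        return 0;
    }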
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Move cmtst_op expanders from translate-a64.c.
4
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-5-richard.henderson@linaro.org
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/Makefile.objs | 2 +-
10
target/arm/translate.h | 2 +
9
target/arm/helper.h | 4 ++
11
target/arm/translate-a64.c | 38 ------------------
10
target/arm/translate-a64.c | 84 ++++++++++++++++++++++++++++++++++
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
11
target/arm/vec_helper.c | 109 +++++++++++++++++++++++++++++++++++++++++++++
13
3 files changed, 60 insertions(+), 61 deletions(-)
12
4 files changed, 198 insertions(+), 1 deletion(-)
14
13
create mode 100644 target/arm/vec_helper.c
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
15
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/Makefile.objs
17
--- a/target/arm/translate.h
18
+++ b/target/arm/Makefile.objs
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ obj-$(call land,$(CONFIG_KVM),$(call lnot,$(TARGET_AARCH64))) += kvm32.o
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
20
obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
20
extern const GVecGen3 bif_op;
21
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
21
extern const GVecGen3 mla_op[4];
22
obj-y += translate.o op_helper.o helper.o cpu.o
22
extern const GVecGen3 mls_op[4];
23
-obj-y += neon_helper.o iwmmxt_helper.o
23
+extern const GVecGen3 cmtst_op[4];
24
+obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
24
extern const GVecGen2i ssra_op[4];
25
obj-y += gdbstub.o
25
extern const GVecGen2i usra_op[4];
26
obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
26
extern const GVecGen2i sri_op[4];
27
obj-y += crypto_helper.o
27
extern const GVecGen2i sli_op[4];
28
diff --git a/target/arm/helper.h b/target/arm/helper.h
28
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
29
index XXXXXXX..XXXXXXX 100644
29
30
--- a/target/arm/helper.h
30
/*
31
+++ b/target/arm/helper.h
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
32
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(neon_rbit_u8, TCG_CALL_NO_RWG_SE, i32, i32)
33
34
DEF_HELPER_3(neon_qdmulh_s16, i32, env, i32, i32)
35
DEF_HELPER_3(neon_qrdmulh_s16, i32, env, i32, i32)
36
+DEF_HELPER_4(neon_qrdmlah_s16, i32, env, i32, i32, i32)
37
+DEF_HELPER_4(neon_qrdmlsh_s16, i32, env, i32, i32, i32)
38
DEF_HELPER_3(neon_qdmulh_s32, i32, env, i32, i32)
39
DEF_HELPER_3(neon_qrdmulh_s32, i32, env, i32, i32)
40
+DEF_HELPER_4(neon_qrdmlah_s32, i32, env, s32, s32, s32)
41
+DEF_HELPER_4(neon_qrdmlsh_s32, i32, env, s32, s32, s32)
42
43
DEF_HELPER_1(neon_narrow_u8, i32, i64)
44
DEF_HELPER_1(neon_narrow_u16, i32, i64)
45
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
46
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-a64.c
34
--- a/target/arm/translate-a64.c
48
+++ b/target/arm/translate-a64.c
35
+++ b/target/arm/translate-a64.c
49
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
50
tcg_temp_free_ptr(fpst);
37
}
51
}
38
}
52
39
53
+/* AdvSIMD scalar three same extra
40
-/* CMTST : test is "if (X & Y != 0)". */
54
+ * 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
41
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
55
+ * +-----+---+-----------+------+---+------+---+--------+---+----+----+
42
-{
56
+ * | 0 1 | U | 1 1 1 1 0 | size | 0 | Rm | 1 | opcode | 1 | Rn | Rd |
43
- tcg_gen_and_i32(d, a, b);
57
+ * +-----+---+-----------+------+---+------+---+--------+---+----+----+
44
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
58
+ */
45
- tcg_gen_neg_i32(d, d);
59
+static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
46
-}
60
+ uint32_t insn)
47
-
48
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
49
-{
50
- tcg_gen_and_i64(d, a, b);
51
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
52
- tcg_gen_neg_i64(d, d);
53
-}
54
-
55
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
56
-{
57
- tcg_gen_and_vec(vece, d, a, b);
58
- tcg_gen_dupi_vec(vece, a, 0);
59
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
60
-}
61
-
62
static void handle_3same_64(DisasContext *s, int opcode, bool u,
63
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
64
{
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
66
/* Integer op subgroup of C3.6.16. */
67
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
68
{
69
- static const GVecGen3 cmtst_op[4] = {
70
- { .fni4 = gen_helper_neon_tst_u8,
71
- .fniv = gen_cmtst_vec,
72
- .vece = MO_8 },
73
- { .fni4 = gen_helper_neon_tst_u16,
74
- .fniv = gen_cmtst_vec,
75
- .vece = MO_16 },
76
- { .fni4 = gen_cmtst_i32,
77
- .fniv = gen_cmtst_vec,
78
- .vece = MO_32 },
79
- { .fni8 = gen_cmtst_i64,
80
- .fniv = gen_cmtst_vec,
81
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
82
- .vece = MO_64 },
83
- };
84
-
85
int is_q = extract32(insn, 30, 1);
86
int u = extract32(insn, 29, 1);
87
int size = extract32(insn, 22, 2);
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate.c
91
+++ b/target/arm/translate.c
92
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
93
.vece = MO_64 },
94
};
95
96
+/* CMTST : test is "if (X & Y != 0)". */
97
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
61
+{
98
+{
62
+ int rd = extract32(insn, 0, 5);
99
+ tcg_gen_and_i32(d, a, b);
63
+ int rn = extract32(insn, 5, 5);
100
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
64
+ int opcode = extract32(insn, 11, 4);
101
+ tcg_gen_neg_i32(d, d);
65
+ int rm = extract32(insn, 16, 5);
66
+ int size = extract32(insn, 22, 2);
67
+ bool u = extract32(insn, 29, 1);
68
+ TCGv_i32 ele1, ele2, ele3;
69
+ TCGv_i64 res;
70
+ int feature;
71
+
72
+ switch (u * 16 + opcode) {
73
+ case 0x10: /* SQRDMLAH (vector) */
74
+ case 0x11: /* SQRDMLSH (vector) */
75
+ if (size != 1 && size != 2) {
76
+ unallocated_encoding(s);
77
+ return;
78
+ }
79
+ feature = ARM_FEATURE_V8_RDM;
80
+ break;
81
+ default:
82
+ unallocated_encoding(s);
83
+ return;
84
+ }
85
+ if (!arm_dc_feature(s, feature)) {
86
+ unallocated_encoding(s);
87
+ return;
88
+ }
89
+ if (!fp_access_check(s)) {
90
+ return;
91
+ }
92
+
93
+ /* Do a single operation on the lowest element in the vector.
94
+ * We use the standard Neon helpers and rely on 0 OP 0 == 0
95
+ * with no side effects for all these operations.
96
+ * OPTME: special-purpose helpers would avoid doing some
97
+ * unnecessary work in the helper for the 16 bit cases.
98
+ */
99
+ ele1 = tcg_temp_new_i32();
100
+ ele2 = tcg_temp_new_i32();
101
+ ele3 = tcg_temp_new_i32();
102
+
103
+ read_vec_element_i32(s, ele1, rn, 0, size);
104
+ read_vec_element_i32(s, ele2, rm, 0, size);
105
+ read_vec_element_i32(s, ele3, rd, 0, size);
106
+
107
+ switch (opcode) {
108
+ case 0x0: /* SQRDMLAH */
109
+ if (size == 1) {
110
+ gen_helper_neon_qrdmlah_s16(ele3, cpu_env, ele1, ele2, ele3);
111
+ } else {
112
+ gen_helper_neon_qrdmlah_s32(ele3, cpu_env, ele1, ele2, ele3);
113
+ }
114
+ break;
115
+ case 0x1: /* SQRDMLSH */
116
+ if (size == 1) {
117
+ gen_helper_neon_qrdmlsh_s16(ele3, cpu_env, ele1, ele2, ele3);
118
+ } else {
119
+ gen_helper_neon_qrdmlsh_s32(ele3, cpu_env, ele1, ele2, ele3);
120
+ }
121
+ break;
122
+ default:
123
+ g_assert_not_reached();
124
+ }
125
+ tcg_temp_free_i32(ele1);
126
+ tcg_temp_free_i32(ele2);
127
+
128
+ res = tcg_temp_new_i64();
129
+ tcg_gen_extu_i32_i64(res, ele3);
130
+ tcg_temp_free_i32(ele3);
131
+
132
+ write_fp_dreg(s, rd, res);
133
+ tcg_temp_free_i64(res);
134
+}
102
+}
135
+
103
+
136
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
104
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
137
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
138
TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
139
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
140
{ 0x0e000800, 0xbf208c00, disas_simd_zip_trn },
141
{ 0x2e000000, 0xbf208400, disas_simd_ext },
142
{ 0x5e200400, 0xdf200400, disas_simd_scalar_three_reg_same },
143
+ { 0x5e008400, 0xdf208400, disas_simd_scalar_three_reg_same_extra },
144
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
145
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
146
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
147
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
148
new file mode 100644
149
index XXXXXXX..XXXXXXX
150
--- /dev/null
151
+++ b/target/arm/vec_helper.c
152
@@ -XXX,XX +XXX,XX @@
153
+/*
154
+ * ARM AdvSIMD / SVE Vector Operations
155
+ *
156
+ * Copyright (c) 2018 Linaro
157
+ *
158
+ * This library is free software; you can redistribute it and/or
159
+ * modify it under the terms of the GNU Lesser General Public
160
+ * License as published by the Free Software Foundation; either
161
+ * version 2 of the License, or (at your option) any later version.
162
+ *
163
+ * This library is distributed in the hope that it will be useful,
164
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
165
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
166
+ * Lesser General Public License for more details.
167
+ *
168
+ * You should have received a copy of the GNU Lesser General Public
169
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
170
+ */
171
+
172
+#include "qemu/osdep.h"
173
+#include "cpu.h"
174
+#include "exec/exec-all.h"
175
+#include "exec/helper-proto.h"
176
+#include "tcg/tcg-gvec-desc.h"
177
+
178
+
179
+#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
180
+
181
+/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
182
+static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
183
+ int16_t src2, int16_t src3)
184
+{
105
+{
185
+ /* Simplify:
106
+ tcg_gen_and_i64(d, a, b);
186
+ * = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
107
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
187
+ * = ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15
108
+ tcg_gen_neg_i64(d, d);
188
+ */
189
+ int32_t ret = (int32_t)src1 * src2;
190
+ ret = ((int32_t)src3 << 15) + ret + (1 << 14);
191
+ ret >>= 15;
192
+ if (ret != (int16_t)ret) {
193
+ SET_QC();
194
+ ret = (ret < 0 ? -0x8000 : 0x7fff);
195
+ }
196
+ return ret;
197
+}
109
+}
198
+
110
+
199
+uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
111
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
200
+ uint32_t src2, uint32_t src3)
201
+{
112
+{
202
+ uint16_t e1 = inl_qrdmlah_s16(env, src1, src2, src3);
113
+ tcg_gen_and_vec(vece, d, a, b);
203
+ uint16_t e2 = inl_qrdmlah_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
114
+ tcg_gen_dupi_vec(vece, a, 0);
204
+ return deposit32(e1, 16, 16, e2);
115
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
205
+}
116
+}
206
+
117
+
207
+/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
118
+const GVecGen3 cmtst_op[4] = {
208
+static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
119
+ { .fni4 = gen_helper_neon_tst_u8,
209
+ int16_t src2, int16_t src3)
120
+ .fniv = gen_cmtst_vec,
210
+{
121
+ .vece = MO_8 },
211
+ /* Similarly, using subtraction:
122
+ { .fni4 = gen_helper_neon_tst_u16,
212
+ * = ((a3 << 16) - ((e1 * e2) << 1) + (1 << 15)) >> 16
123
+ .fniv = gen_cmtst_vec,
213
+ * = ((a3 << 15) - (e1 * e2) + (1 << 14)) >> 15
124
+ .vece = MO_16 },
214
+ */
125
+ { .fni4 = gen_cmtst_i32,
215
+ int32_t ret = (int32_t)src1 * src2;
126
+ .fniv = gen_cmtst_vec,
216
+ ret = ((int32_t)src3 << 15) - ret + (1 << 14);
127
+ .vece = MO_32 },
217
+ ret >>= 15;
128
+ { .fni8 = gen_cmtst_i64,
218
+ if (ret != (int16_t)ret) {
129
+ .fniv = gen_cmtst_vec,
219
+ SET_QC();
130
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
220
+ ret = (ret < 0 ? -0x8000 : 0x7fff);
131
+ .vece = MO_64 },
221
+ }
132
+};
222
+ return ret;
133
+
223
+}
134
/* Translate a NEON data processing instruction. Return nonzero if the
224
+
135
instruction is invalid.
225
+uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
136
We process data in a mixture of 32-bit and 64-bit chunks.
226
+ uint32_t src2, uint32_t src3)
137
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
227
+{
138
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
228
+ uint16_t e1 = inl_qrdmlsh_s16(env, src1, src2, src3);
139
u ? &mls_op[size] : &mla_op[size]);
229
+ uint16_t e2 = inl_qrdmlsh_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
140
return 0;
230
+ return deposit32(e1, 16, 16, e2);
141
+
231
+}
142
+ case NEON_3R_VTST_VCEQ:
232
+
143
+ if (u) { /* VCEQ */
233
+/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
144
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
234
+uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
145
+ vec_size, vec_size);
235
+ int32_t src2, int32_t src3)
146
+ } else { /* VTST */
236
+{
147
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
237
+ /* Simplify similarly to int_qrdmlah_s16 above. */
148
+ vec_size, vec_size, &cmtst_op[size]);
238
+ int64_t ret = (int64_t)src1 * src2;
149
+ }
239
+ ret = ((int64_t)src3 << 31) + ret + (1 << 30);
150
+ return 0;
240
+ ret >>= 31;
151
+
241
+ if (ret != (int32_t)ret) {
152
+ case NEON_3R_VCGT:
242
+ SET_QC();
153
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
243
+ ret = (ret < 0 ? INT32_MIN : INT32_MAX);
154
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
244
+ }
155
+ return 0;
245
+ return ret;
156
+
246
+}
157
+ case NEON_3R_VCGE:
247
+
158
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
248
+/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
159
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
249
+uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
160
+ return 0;
250
+ int32_t src2, int32_t src3)
161
}
251
+{
162
252
+ /* Simplify similarly to int_qrdmlsh_s16 above. */
163
if (size == 3) {
253
+ int64_t ret = (int64_t)src1 * src2;
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
254
+ ret = ((int64_t)src3 << 31) - ret + (1 << 30);
165
case NEON_3R_VQSUB:
255
+ ret >>= 31;
166
GEN_NEON_INTEGER_OP_ENV(qsub);
256
+ if (ret != (int32_t)ret) {
167
break;
257
+ SET_QC();
168
- case NEON_3R_VCGT:
258
+ ret = (ret < 0 ? INT32_MIN : INT32_MAX);
169
- GEN_NEON_INTEGER_OP(cgt);
259
+ }
170
- break;
260
+ return ret;
171
- case NEON_3R_VCGE:
261
+}
172
- GEN_NEON_INTEGER_OP(cge);
173
- break;
174
case NEON_3R_VSHL:
175
GEN_NEON_INTEGER_OP(shl);
176
break;
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
178
tmp2 = neon_load_reg(rd, pass);
179
gen_neon_add(size, tmp, tmp2);
180
break;
181
- case NEON_3R_VTST_VCEQ:
182
- if (!u) { /* VTST */
183
- switch (size) {
184
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
185
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
186
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
187
- default: abort();
188
- }
189
- } else { /* VCEQ */
190
- switch (size) {
191
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
192
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
193
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
194
- default: abort();
195
- }
196
- }
197
- break;
198
case NEON_3R_VMUL:
199
/* VMUL.P8; other cases already eliminated. */
200
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
262
--
201
--
263
2.16.2
202
2.19.1
264
203
265
204
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20180228193125.20577-15-richard.henderson@linaro.org
4
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
5
[PMM: added parens in ?: expression]
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
---
8
target/arm/translate.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++
9
target/arm/translate.c | 81 ++++++++++++++----------------------------
9
1 file changed, 61 insertions(+)
10
1 file changed, 26 insertions(+), 55 deletions(-)
10
11
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
16
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
16
return 0;
17
tcg_temp_free_i32(tmp);
17
}
18
}
18
19
19
+/* Advanced SIMD two registers and a scalar extension.
20
-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
20
+ * 31 24 23 22 20 16 12 11 10 9 8 3 0
21
-{
21
+ * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
22
- TCGv_i32 tmp = tcg_temp_new_i32();
22
+ * | 1 1 1 1 1 1 1 0 | o1 | D | o2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
23
- if (shift)
23
+ * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
24
- tcg_gen_shri_i32(var, var, shift);
24
+ *
25
- tcg_gen_ext8u_i32(var, var);
25
+ */
26
- tcg_gen_shli_i32(tmp, var, 8);
27
- tcg_gen_or_i32(var, var, tmp);
28
- tcg_gen_shli_i32(tmp, var, 16);
29
- tcg_gen_or_i32(var, var, tmp);
30
- tcg_temp_free_i32(tmp);
31
-}
32
-
33
static void gen_neon_dup_low16(TCGv_i32 var)
34
{
35
TCGv_i32 tmp = tcg_temp_new_i32();
36
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
37
tcg_temp_free_i32(tmp);
38
}
39
40
-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
41
-{
42
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
43
- TCGv_i32 tmp = tcg_temp_new_i32();
44
- switch (size) {
45
- case 0:
46
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
47
- gen_neon_dup_u8(tmp, 0);
48
- break;
49
- case 1:
50
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
51
- gen_neon_dup_low16(tmp);
52
- break;
53
- case 2:
54
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
55
- break;
56
- default: /* Avoid compiler warnings. */
57
- abort();
58
- }
59
- return tmp;
60
-}
61
-
62
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
63
uint32_t dp)
64
{
65
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
66
int load;
67
int shift;
68
int n;
69
+ int vec_size;
70
TCGv_i32 addr;
71
TCGv_i32 tmp;
72
TCGv_i32 tmp2;
73
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
74
}
75
addr = tcg_temp_new_i32();
76
load_reg_var(s, addr, rn);
77
- if (nregs == 1) {
78
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
79
- tmp = gen_load_and_replicate(s, addr, size);
80
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
81
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
82
- if (insn & (1 << 5)) {
83
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
84
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
85
- }
86
- tcg_temp_free_i32(tmp);
87
- } else {
88
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
89
- stride = (insn & (1 << 5)) ? 2 : 1;
90
- for (reg = 0; reg < nregs; reg++) {
91
- tmp = gen_load_and_replicate(s, addr, size);
92
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
93
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
94
- tcg_temp_free_i32(tmp);
95
- tcg_gen_addi_i32(addr, addr, 1 << size);
96
- rd += stride;
26
+
97
+
27
+static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
98
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
28
+{
99
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
29
+ int rd, rn, rm, rot, size, opr_sz;
100
+ */
30
+ TCGv_ptr fpst;
101
+ stride = (insn & (1 << 5)) ? 2 : 1;
31
+ bool q;
102
+ vec_size = nregs == 1 ? stride * 8 : 8;
32
+
103
+
33
+ q = extract32(insn, 6, 1);
104
+ tmp = tcg_temp_new_i32();
34
+ VFP_DREG_D(rd, insn);
105
+ for (reg = 0; reg < nregs; reg++) {
35
+ VFP_DREG_N(rn, insn);
106
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
36
+ VFP_DREG_M(rm, insn);
107
+ s->be_data | size);
37
+ if ((rd | rn) & q) {
108
+ if ((rd & 1) && vec_size == 16) {
38
+ return 1;
109
+ /* We cannot write 16 bytes at once because the
39
+ }
110
+ * destination is unaligned.
40
+
111
+ */
41
+ if ((insn & 0xff000f10) == 0xfe000800) {
112
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
42
+ /* VCMLA (indexed) -- 1111 1110 S.RR .... .... 1000 ...0 .... */
113
+ 8, 8, tmp);
43
+ rot = extract32(insn, 20, 2);
114
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
44
+ size = extract32(insn, 23, 1);
115
+ neon_reg_offset(rd, 0), 8, 8);
45
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
116
+ } else {
46
+ || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
117
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
47
+ return 1;
118
+ vec_size, vec_size, tmp);
48
+ }
119
}
49
+ } else {
120
+ tcg_gen_addi_i32(addr, addr, 1 << size);
50
+ return 1;
121
+ rd += stride;
51
+ }
52
+
53
+ if (s->fp_excp_el) {
54
+ gen_exception_insn(s, 4, EXCP_UDEF,
55
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
56
+ return 0;
57
+ }
58
+ if (!s->vfp_enabled) {
59
+ return 1;
60
+ }
61
+
62
+ opr_sz = (1 + q) * 8;
63
+ fpst = get_fpstatus_ptr(1);
64
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
65
+ vfp_reg_offset(1, rn),
66
+ vfp_reg_offset(1, rm), fpst,
67
+ opr_sz, opr_sz, rot,
68
+ size ? gen_helper_gvec_fcmlas_idx
69
+ : gen_helper_gvec_fcmlah_idx);
70
+ tcg_temp_free_ptr(fpst);
71
+ return 0;
72
+}
73
+
74
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
75
{
76
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
77
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
78
goto illegal_op;
79
}
122
}
80
return;
123
+ tcg_temp_free_i32(tmp);
81
+ } else if ((insn & 0x0f000a00) == 0x0e000800
124
tcg_temp_free_i32(addr);
82
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
125
stride = (1 << size) * nregs;
83
+ if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
126
} else {
84
+ goto illegal_op;
85
+ }
86
+ return;
87
} else if ((insn & 0x0fe00000) == 0x0c400000) {
88
/* Coprocessor double register transfer. */
89
ARCH(5TE);
90
--
127
--
91
2.16.2
128
2.19.1
92
129
93
130
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from the neon
4
register file. Mirror the iteration structure of the ARM pseudocode
5
more closely. Correct the parameters of the VLD2 A2 insn.
6
7
Note that this includes a bugfix for handling of the insn
8
"VLD2 (multiple 2-element structures)" -- we were using an
9
incorrect stride value.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-6-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
15
---
8
target/arm/helper.h | 9 +++++
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
9
target/arm/translate-a64.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++
17
1 file changed, 74 insertions(+), 96 deletions(-)
10
target/arm/vec_helper.c | 74 +++++++++++++++++++++++++++++++++++++++++
18
11
3 files changed, 166 insertions(+)
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
21
--- a/target/arm/translate.c
16
+++ b/target/arm/helper.h
22
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(dc_zva, void, env, i64)
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
18
DEF_HELPER_FLAGS_2(neon_pmull_64_lo, TCG_CALL_NO_RWG_SE, i64, i64, i64)
24
return tmp;
19
DEF_HELPER_FLAGS_2(neon_pmull_64_hi, TCG_CALL_NO_RWG_SE, i64, i64, i64)
20
21
+DEF_HELPER_FLAGS_5(gvec_qrdmlah_s16, TCG_CALL_NO_RWG,
22
+ void, ptr, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s16, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
#ifdef TARGET_AARCH64
31
#include "helper-a64.h"
32
#endif
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-a64.c
36
+++ b/target/arm/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3(DisasContext *s, bool is_q, int rd,
38
vec_full_reg_size(s), gvec_op);
39
}
25
}
40
26
41
+/* Expand a 3-operand + env pointer operation using
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
42
+ * an out-of-line helper.
43
+ */
44
+static void gen_gvec_op3_env(DisasContext *s, bool is_q, int rd,
45
+ int rn, int rm, gen_helper_gvec_3_ptr *fn)
46
+{
28
+{
47
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
48
+ vec_full_reg_offset(s, rn),
30
+
49
+ vec_full_reg_offset(s, rm), cpu_env,
31
+ switch (mop) {
50
+ is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
32
+ case MO_UB:
51
+}
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
52
+
34
+ break;
53
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
35
+ case MO_UW:
54
* than the 32 bit equivalent.
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
55
*/
37
+ break;
56
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
38
+ case MO_UL:
57
clear_vec_high(s, is_q, rd);
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
58
}
40
+ break;
59
41
+ case MO_Q:
60
+/* AdvSIMD three same extra
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
61
+ * 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
43
+ break;
62
+ * +---+---+---+-----------+------+---+------+---+--------+---+----+----+
63
+ * | 0 | Q | U | 0 1 1 1 0 | size | 0 | Rm | 1 | opcode | 1 | Rn | Rd |
64
+ * +---+---+---+-----------+------+---+------+---+--------+---+----+----+
65
+ */
66
+static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
+{
68
+ int rd = extract32(insn, 0, 5);
69
+ int rn = extract32(insn, 5, 5);
70
+ int opcode = extract32(insn, 11, 4);
71
+ int rm = extract32(insn, 16, 5);
72
+ int size = extract32(insn, 22, 2);
73
+ bool u = extract32(insn, 29, 1);
74
+ bool is_q = extract32(insn, 30, 1);
75
+ int feature;
76
+
77
+ switch (u * 16 + opcode) {
78
+ case 0x10: /* SQRDMLAH (vector) */
79
+ case 0x11: /* SQRDMLSH (vector) */
80
+ if (size != 1 && size != 2) {
81
+ unallocated_encoding(s);
82
+ return;
83
+ }
84
+ feature = ARM_FEATURE_V8_RDM;
85
+ break;
86
+ default:
87
+ unallocated_encoding(s);
88
+ return;
89
+ }
90
+ if (!arm_dc_feature(s, feature)) {
91
+ unallocated_encoding(s);
92
+ return;
93
+ }
94
+ if (!fp_access_check(s)) {
95
+ return;
96
+ }
97
+
98
+ switch (opcode) {
99
+ case 0x0: /* SQRDMLAH (vector) */
100
+ switch (size) {
101
+ case 1:
102
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s16);
103
+ break;
104
+ case 2:
105
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s32);
106
+ break;
107
+ default:
108
+ g_assert_not_reached();
109
+ }
110
+ return;
111
+
112
+ case 0x1: /* SQRDMLSH (vector) */
113
+ switch (size) {
114
+ case 1:
115
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s16);
116
+ break;
117
+ case 2:
118
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s32);
119
+ break;
120
+ default:
121
+ g_assert_not_reached();
122
+ }
123
+ return;
124
+
125
+ default:
44
+ default:
126
+ g_assert_not_reached();
45
+ g_assert_not_reached();
127
+ }
46
+ }
128
+}
47
+}
129
+
48
+
130
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
131
int size, int rn, int rd)
132
{
50
{
133
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
134
static const AArch64DecodeTable data_proc_simd[] = {
52
tcg_temp_free_i32(var);
135
/* pattern , mask , fn */
53
}
136
{ 0x0e200400, 0x9f200400, disas_simd_three_reg_same },
54
137
+ { 0x0e008400, 0x9f208400, disas_simd_three_reg_same_extra },
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
138
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
139
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
140
{ 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
141
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
142
index XXXXXXX..XXXXXXX 100644
143
--- a/target/arm/vec_helper.c
144
+++ b/target/arm/vec_helper.c
145
@@ -XXX,XX +XXX,XX @@
146
147
#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
148
149
+static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
150
+{
56
+{
151
+ uint64_t *d = vd + opr_sz;
57
+ long offset = neon_element_offset(reg, ele, size);
152
+ uintptr_t i;
58
+
153
+
59
+ switch (size) {
154
+ for (i = opr_sz; i < max_sz; i += 8) {
60
+ case MO_8:
155
+ *d++ = 0;
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
62
+ break;
63
+ case MO_16:
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
65
+ break;
66
+ case MO_32:
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
68
+ break;
69
+ case MO_64:
70
+ tcg_gen_st_i64(var, cpu_env, offset);
71
+ break;
72
+ default:
73
+ g_assert_not_reached();
156
+ }
74
+ }
157
+}
75
+}
158
+
76
+
159
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
160
static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
78
{
161
int16_t src2, int16_t src3)
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
162
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
80
@@ -XXX,XX +XXX,XX @@ static struct {
163
return deposit32(e1, 16, 16, e2);
81
int interleave;
164
}
82
int spacing;
165
83
} const neon_ls_element_type[11] = {
166
+void HELPER(gvec_qrdmlah_s16)(void *vd, void *vn, void *vm,
84
- {4, 4, 1},
167
+ void *ve, uint32_t desc)
85
- {4, 4, 2},
168
+{
86
+ {1, 4, 1},
169
+ uintptr_t opr_sz = simd_oprsz(desc);
87
+ {1, 4, 2},
170
+ int16_t *d = vd;
88
{4, 1, 1},
171
+ int16_t *n = vn;
89
- {4, 2, 1},
172
+ int16_t *m = vm;
90
- {3, 3, 1},
173
+ CPUARMState *env = ve;
91
- {3, 3, 2},
174
+ uintptr_t i;
92
+ {2, 2, 2},
175
+
93
+ {1, 3, 1},
176
+ for (i = 0; i < opr_sz / 2; ++i) {
94
+ {1, 3, 2},
177
+ d[i] = inl_qrdmlah_s16(env, n[i], m[i], d[i]);
95
{3, 1, 1},
178
+ }
96
{1, 1, 1},
179
+ clear_tail(d, opr_sz, simd_maxsz(desc));
97
- {2, 2, 1},
180
+}
98
- {2, 2, 2},
181
+
99
+ {1, 2, 1},
182
/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
100
+ {1, 2, 2},
183
static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
101
{2, 1, 1}
184
int16_t src2, int16_t src3)
102
};
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
103
186
return deposit32(e1, 16, 16, e2);
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
187
}
105
int shift;
188
106
int n;
189
+void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
107
int vec_size;
190
+ void *ve, uint32_t desc)
108
+ int mmu_idx;
191
+{
109
+ TCGMemOp endian;
192
+ uintptr_t opr_sz = simd_oprsz(desc);
110
TCGv_i32 addr;
193
+ int16_t *d = vd;
111
TCGv_i32 tmp;
194
+ int16_t *n = vn;
112
TCGv_i32 tmp2;
195
+ int16_t *m = vm;
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
196
+ CPUARMState *env = ve;
114
rn = (insn >> 16) & 0xf;
197
+ uintptr_t i;
115
rm = insn & 0xf;
198
+
116
load = (insn & (1 << 21)) != 0;
199
+ for (i = 0; i < opr_sz / 2; ++i) {
117
+ endian = s->be_data;
200
+ d[i] = inl_qrdmlsh_s16(env, n[i], m[i], d[i]);
118
+ mmu_idx = get_mem_index(s);
201
+ }
119
if ((insn & (1 << 23)) == 0) {
202
+ clear_tail(d, opr_sz, simd_maxsz(desc));
120
/* Load store all elements. */
203
+}
121
op = (insn >> 8) & 0xf;
204
+
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
205
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
123
nregs = neon_ls_element_type[op].nregs;
206
uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
124
interleave = neon_ls_element_type[op].interleave;
207
int32_t src2, int32_t src3)
125
spacing = neon_ls_element_type[op].spacing;
208
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
126
- if (size == 3 && (interleave | spacing) != 1)
209
return ret;
127
+ if (size == 3 && (interleave | spacing) != 1) {
210
}
128
return 1;
211
129
+ }
212
+void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
130
+ tmp64 = tcg_temp_new_i64();
213
+ void *ve, uint32_t desc)
131
addr = tcg_temp_new_i32();
214
+{
132
+ tmp2 = tcg_const_i32(1 << size);
215
+ uintptr_t opr_sz = simd_oprsz(desc);
133
load_reg_var(s, addr, rn);
216
+ int32_t *d = vd;
134
- stride = (1 << size) * interleave;
217
+ int32_t *n = vn;
135
for (reg = 0; reg < nregs; reg++) {
218
+ int32_t *m = vm;
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
219
+ CPUARMState *env = ve;
137
- load_reg_var(s, addr, rn);
220
+ uintptr_t i;
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
221
+
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
222
+ for (i = 0; i < opr_sz / 4; ++i) {
140
- load_reg_var(s, addr, rn);
223
+ d[i] = helper_neon_qrdmlah_s32(env, n[i], m[i], d[i]);
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
224
+ }
142
- }
225
+ clear_tail(d, opr_sz, simd_maxsz(desc));
143
- if (size == 3) {
226
+}
144
- tmp64 = tcg_temp_new_i64();
227
+
145
- if (load) {
228
/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
229
uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
147
- neon_store_reg64(tmp64, rd);
230
int32_t src2, int32_t src3)
148
- } else {
231
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
149
- neon_load_reg64(tmp64, rd);
232
}
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
233
return ret;
151
- }
234
}
152
- tcg_temp_free_i64(tmp64);
235
+
153
- tcg_gen_addi_i32(addr, addr, stride);
236
+void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
154
- } else {
237
+ void *ve, uint32_t desc)
155
- for (pass = 0; pass < 2; pass++) {
238
+{
156
- if (size == 2) {
239
+ uintptr_t opr_sz = simd_oprsz(desc);
157
- if (load) {
240
+ int32_t *d = vd;
158
- tmp = tcg_temp_new_i32();
241
+ int32_t *n = vn;
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
242
+ int32_t *m = vm;
160
- neon_store_reg(rd, pass, tmp);
243
+ CPUARMState *env = ve;
161
- } else {
244
+ uintptr_t i;
162
- tmp = neon_load_reg(rd, pass);
245
+
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
246
+ for (i = 0; i < opr_sz / 4; ++i) {
164
- tcg_temp_free_i32(tmp);
247
+ d[i] = helper_neon_qrdmlsh_s32(env, n[i], m[i], d[i]);
165
- }
248
+ }
166
- tcg_gen_addi_i32(addr, addr, stride);
249
+ clear_tail(d, opr_sz, simd_maxsz(desc));
167
- } else if (size == 1) {
250
+}
168
- if (load) {
169
- tmp = tcg_temp_new_i32();
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
171
- tcg_gen_addi_i32(addr, addr, stride);
172
- tmp2 = tcg_temp_new_i32();
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
174
- tcg_gen_addi_i32(addr, addr, stride);
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
177
- tcg_temp_free_i32(tmp2);
178
- neon_store_reg(rd, pass, tmp);
179
- } else {
180
- tmp = neon_load_reg(rd, pass);
181
- tmp2 = tcg_temp_new_i32();
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
184
- tcg_temp_free_i32(tmp);
185
- tcg_gen_addi_i32(addr, addr, stride);
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
187
- tcg_temp_free_i32(tmp2);
188
- tcg_gen_addi_i32(addr, addr, stride);
189
- }
190
- } else /* size == 0 */ {
191
- if (load) {
192
- tmp2 = NULL;
193
- for (n = 0; n < 4; n++) {
194
- tmp = tcg_temp_new_i32();
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
196
- tcg_gen_addi_i32(addr, addr, stride);
197
- if (n == 0) {
198
- tmp2 = tmp;
199
- } else {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
202
- tcg_temp_free_i32(tmp);
203
- }
204
- }
205
- neon_store_reg(rd, pass, tmp2);
206
- } else {
207
- tmp2 = neon_load_reg(rd, pass);
208
- for (n = 0; n < 4; n++) {
209
- tmp = tcg_temp_new_i32();
210
- if (n == 0) {
211
- tcg_gen_mov_i32(tmp, tmp2);
212
- } else {
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
214
- }
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
216
- tcg_temp_free_i32(tmp);
217
- tcg_gen_addi_i32(addr, addr, stride);
218
- }
219
- tcg_temp_free_i32(tmp2);
220
- }
221
+ for (n = 0; n < 8 >> size; n++) {
222
+ int xs;
223
+ for (xs = 0; xs < interleave; xs++) {
224
+ int tt = rd + reg + spacing * xs;
225
+
226
+ if (load) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
228
+ neon_store_element64(tt, n, size, tmp64);
229
+ } else {
230
+ neon_load_element64(tmp64, tt, n, size);
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
232
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
234
}
235
}
236
- rd += spacing;
237
}
238
tcg_temp_free_i32(addr);
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
251
--
246
--
252
2.16.2
247
2.19.1
253
248
254
249
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Happily, the bits are in the same places compared to a32.
3
For a sequence of loads or stores from a single register,
4
little-endian operations can be promoted to an 8-byte op.
5
This can reduce the number of operations by a factor of 8.
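(A quick standalone sketch of the equivalence this relies on, not QEMU code:
eight consecutive little-endian byte loads assemble to the same 64-bit value
as a single 8-byte little-endian load. The single-load variant below assumes
a little-endian host.)

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    static uint64_t load_le64_bytewise(const uint8_t *p)
    {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++) {
            v |= (uint64_t)p[i] << (8 * i);  /* byte i -> bits [8i+7:8i] */
        }
        return v;
    }

    static uint64_t load_le64_single(const uint8_t *p)
    {
        uint64_t v;
        memcpy(&v, p, 8);                    /* one 8-byte access */
        return v;
    }

    int main(void)
    {
        uint8_t buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        assert(load_le64_bytewise(buf) == load_le64_single(buf));
        return 0;
    }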
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180228193125.20577-16-richard.henderson@linaro.org
8
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
12
---
10
target/arm/translate.c | 14 +++++++++++++-
13
target/arm/translate.c | 10 ++++++++++
11
1 file changed, 13 insertions(+), 1 deletion(-)
14
1 file changed, 10 insertions(+)
12
15
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
18
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
19
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
18
default_exception_el(s));
21
if (size == 3 && (interleave | spacing) != 1) {
19
break;
22
return 1;
20
}
23
}
21
- if (((insn >> 24) & 3) == 3) {
24
+ /* For our purposes, bytes are always little-endian. */
22
+ if ((insn & 0xfe000a00) == 0xfc000800
25
+ if (size == 0) {
23
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
26
+ endian = MO_LE;
24
+ /* The Thumb2 and ARM encodings are identical. */
27
+ }
25
+ if (disas_neon_insn_3same_ext(s, insn)) {
28
+ /* Consecutive little-endian elements from a single register
26
+ goto illegal_op;
29
+ * can be promoted to a larger little-endian operation.
27
+ }
30
+ */
28
+ } else if ((insn & 0xff000a00) == 0xfe000800
31
+ if (interleave == 1 && endian == MO_LE) {
29
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
32
+ size = 3;
30
+ /* The Thumb2 and ARM encodings are identical. */
33
+ }
31
+ if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
34
tmp64 = tcg_temp_new_i64();
32
+ goto illegal_op;
35
addr = tcg_temp_new_i32();
33
+ }
36
tmp2 = tcg_const_i32(1 << size);
34
+ } else if (((insn >> 24) & 3) == 3) {
35
/* Translate into the equivalent ARM encoding. */
36
insn = (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
37
if (disas_neon_data_insn(s, insn)) {
38
--
37
--
39
2.16.2
38
2.19.1
40
39
41
40
1
Add a model of the TrustZone peripheral protection controller (PPC),
1
From: Richard Henderson <richard.henderson@linaro.org>
2
which is used to gate transactions to non-TZ-aware peripherals so
3
that secure software can configure them to not be accessible to
4
non-secure software.
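(A condensed standalone sketch of the gating decision the model applies per
port; the struct layout and function name below are illustrative and do not
match the device's actual state, which keeps per-port arrays.)

    #include <stdbool.h>

    struct ppc_port_cfg {
        bool cfg_nonsec;   /* 1: port assigned to the Non-secure world */
        bool cfg_ap;       /* 1: unprivileged (user) accesses allowed */
        bool nonsec_mask;  /* 1: skip the security-domain check */
    };

    static bool ppc_port_allows(const struct ppc_port_cfg *p,
                                bool txn_secure, bool txn_user)
    {
        /* Block when the transaction's security world does not match the
         * port's configured world, unless the mask suppresses the check.
         */
        if (!p->nonsec_mask && txn_secure == p->cfg_nonsec) {
            return false;
        }
        /* Block unprivileged accesses unless cfg_ap permits them. */
        if (txn_user && !p->cfg_ap) {
            return false;
        }
        return true;
    }

A blocked transaction is then either aborted or treated as RAZ/WI depending
on cfg_sec_resp, as described in the model below.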
5
2
3
Instead of shifts and masks, use direct loads and stores from
4
the neon register file.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-15-peter.maydell@linaro.org
9
---
10
---
10
hw/misc/Makefile.objs | 2 +
11
target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
11
include/hw/misc/tz-ppc.h | 101 ++++++++++++++
12
1 file changed, 50 insertions(+), 42 deletions(-)
12
hw/misc/tz-ppc.c | 302 ++++++++++++++++++++++++++++++++++++++++
13
default-configs/arm-softmmu.mak | 2 +
14
hw/misc/trace-events | 11 ++
15
5 files changed, 418 insertions(+)
16
create mode 100644 include/hw/misc/tz-ppc.h
17
create mode 100644 hw/misc/tz-ppc.c
18
13
19
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/misc/Makefile.objs
16
--- a/target/arm/translate.c
22
+++ b/hw/misc/Makefile.objs
17
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MIPS_ITU) += mips_itu.o
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
24
obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
19
return tmp;
25
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
20
}
26
21
27
+obj-$(CONFIG_TZ_PPC) += tz-ppc.o
22
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
23
+{
24
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
28
+
25
+
29
obj-$(CONFIG_PVPANIC) += pvpanic.o
26
+ switch (mop) {
30
obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
27
+ case MO_UB:
31
obj-$(CONFIG_AUX) += auxbus.o
28
+ tcg_gen_ld8u_i32(var, cpu_env, offset);
32
diff --git a/include/hw/misc/tz-ppc.h b/include/hw/misc/tz-ppc.h
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/include/hw/misc/tz-ppc.h
37
@@ -XXX,XX +XXX,XX @@
38
+/*
39
+ * ARM TrustZone peripheral protection controller emulation
40
+ *
41
+ * Copyright (c) 2018 Linaro Limited
42
+ * Written by Peter Maydell
43
+ *
44
+ * This program is free software; you can redistribute it and/or modify
45
+ * it under the terms of the GNU General Public License version 2 or
46
+ * (at your option) any later version.
47
+ */
48
+
49
+/* This is a model of the TrustZone peripheral protection controller (PPC).
50
+ * It is documented in the ARM CoreLink SIE-200 System IP for Embedded TRM
51
+ * (DDI 0571G):
52
+ * https://developer.arm.com/products/architecture/m-profile/docs/ddi0571/g
53
+ *
54
+ * The PPC sits in front of peripherals and allows secure software to
55
+ * configure it to either pass through or reject transactions.
56
+ * Rejected transactions may be configured to either be aborted, or to
57
+ * behave as RAZ/WI. An interrupt can be signalled for a rejected transaction.
58
+ *
59
+ * The PPC has no register interface -- it is configured purely by a
60
+ * collection of input signals from other hardware in the system. Typically
61
+ * they are either hardwired or exposed in an ad-hoc register interface by
62
+ * the SoC that uses the PPC.
63
+ *
64
+ * This QEMU model can be used to model either the AHB5 or APB4 TZ PPC,
65
+ * since the only difference between them is that the AHB version has a
66
+ * "default" port which has no security checks applied. In QEMU the default
67
+ * port can be emulated simply by wiring its downstream devices directly
68
+ * into the parent address space, since the PPC does not need to intercept
69
+ * transactions there.
70
+ *
71
+ * In the hardware, selection of which downstream port to use is done by
72
+ * the user's decode logic asserting one of the hsel[] signals. In QEMU,
73
+ * we provide 16 MMIO regions, one per port, and the user maps these into
74
+ * the desired addresses to implement the address decode.
75
+ *
76
+ * QEMU interface:
77
+ * + sysbus MMIO regions 0..15: MemoryRegions defining the upstream end
78
+ * of each of the 16 ports of the PPC
79
+ * + Property "port[0..15]": MemoryRegion defining the downstream device(s)
80
+ * for each of the 16 ports of the PPC
81
+ * + Named GPIO inputs "cfg_nonsec[0..15]": set to 1 if the port should be
82
+ * accessible to NonSecure transactions
83
+ * + Named GPIO inputs "cfg_ap[0..15]": set to 1 if the port should be
84
+ * accessible to non-privileged transactions
85
+ * + Named GPIO input "cfg_sec_resp": set to 1 if a rejected transaction should
86
+ * result in a transaction error, or 0 for the transaction to RAZ/WI
87
+ * + Named GPIO input "irq_enable": set to 1 to enable interrupts
88
+ * + Named GPIO input "irq_clear": set to 1 to clear a pending interrupt
89
+ * + Named GPIO output "irq": set for a transaction-failed interrupt
90
+ * + Property "NONSEC_MASK": if a bit is set in this mask then accesses to
91
+ * the associated port do not have the TZ security check performed. (This
92
+ * corresponds to the hardware allowing this to be set as a Verilog
93
+ * parameter.)
94
+ */
95
+
96
+#ifndef TZ_PPC_H
97
+#define TZ_PPC_H
98
+
99
+#include "hw/sysbus.h"
100
+
101
+#define TYPE_TZ_PPC "tz-ppc"
102
+#define TZ_PPC(obj) OBJECT_CHECK(TZPPC, (obj), TYPE_TZ_PPC)
103
+
104
+#define TZ_NUM_PORTS 16
105
+
106
+typedef struct TZPPC TZPPC;
107
+
108
+typedef struct TZPPCPort {
109
+ TZPPC *ppc;
110
+ MemoryRegion upstream;
111
+ AddressSpace downstream_as;
112
+ MemoryRegion *downstream;
113
+} TZPPCPort;
114
+
115
+struct TZPPC {
116
+ /*< private >*/
117
+ SysBusDevice parent_obj;
118
+
119
+ /*< public >*/
120
+
121
+ /* State: these just track the values of our input signals */
122
+ bool cfg_nonsec[TZ_NUM_PORTS];
123
+ bool cfg_ap[TZ_NUM_PORTS];
124
+ bool cfg_sec_resp;
125
+ bool irq_enable;
126
+ bool irq_clear;
127
+ /* State: are we asserting irq ? */
128
+ bool irq_status;
129
+
130
+ qemu_irq irq;
131
+
132
+ /* Properties */
133
+ uint32_t nonsec_mask;
134
+
135
+ TZPPCPort port[TZ_NUM_PORTS];
136
+};
137
+
138
+#endif
139
diff --git a/hw/misc/tz-ppc.c b/hw/misc/tz-ppc.c
140
new file mode 100644
141
index XXXXXXX..XXXXXXX
142
--- /dev/null
143
+++ b/hw/misc/tz-ppc.c
144
@@ -XXX,XX +XXX,XX @@
145
+/*
146
+ * ARM TrustZone peripheral protection controller emulation
147
+ *
148
+ * Copyright (c) 2018 Linaro Limited
149
+ * Written by Peter Maydell
150
+ *
151
+ * This program is free software; you can redistribute it and/or modify
152
+ * it under the terms of the GNU General Public License version 2 or
153
+ * (at your option) any later version.
154
+ */
155
+
156
+#include "qemu/osdep.h"
157
+#include "qemu/log.h"
158
+#include "qapi/error.h"
159
+#include "trace.h"
160
+#include "hw/sysbus.h"
161
+#include "hw/registerfields.h"
162
+#include "hw/misc/tz-ppc.h"
163
+
164
+static void tz_ppc_update_irq(TZPPC *s)
165
+{
166
+ bool level = s->irq_status && s->irq_enable;
167
+
168
+ trace_tz_ppc_update_irq(level);
169
+ qemu_set_irq(s->irq, level);
170
+}
171
+
172
+static void tz_ppc_cfg_nonsec(void *opaque, int n, int level)
173
+{
174
+ TZPPC *s = TZ_PPC(opaque);
175
+
176
+ assert(n < TZ_NUM_PORTS);
177
+ trace_tz_ppc_cfg_nonsec(n, level);
178
+ s->cfg_nonsec[n] = level;
179
+}
180
+
181
+static void tz_ppc_cfg_ap(void *opaque, int n, int level)
182
+{
183
+ TZPPC *s = TZ_PPC(opaque);
184
+
185
+ assert(n < TZ_NUM_PORTS);
186
+ trace_tz_ppc_cfg_ap(n, level);
187
+ s->cfg_ap[n] = level;
188
+}
189
+
190
+static void tz_ppc_cfg_sec_resp(void *opaque, int n, int level)
191
+{
192
+ TZPPC *s = TZ_PPC(opaque);
193
+
194
+ trace_tz_ppc_cfg_sec_resp(level);
195
+ s->cfg_sec_resp = level;
196
+}
197
+
198
+static void tz_ppc_irq_enable(void *opaque, int n, int level)
199
+{
200
+ TZPPC *s = TZ_PPC(opaque);
201
+
202
+ trace_tz_ppc_irq_enable(level);
203
+ s->irq_enable = level;
204
+ tz_ppc_update_irq(s);
205
+}
206
+
207
+static void tz_ppc_irq_clear(void *opaque, int n, int level)
208
+{
209
+ TZPPC *s = TZ_PPC(opaque);
210
+
211
+ trace_tz_ppc_irq_clear(level);
212
+
213
+ s->irq_clear = level;
214
+ if (level) {
215
+ s->irq_status = false;
216
+ tz_ppc_update_irq(s);
217
+ }
218
+}
219
+
220
+static bool tz_ppc_check(TZPPC *s, int n, MemTxAttrs attrs)
221
+{
222
+ /* Check whether to allow an access to port n; return true if
223
+ * the check passes, and false if the transaction must be blocked.
224
+ * If the latter, the caller must check cfg_sec_resp to determine
225
+ * whether to abort or RAZ/WI the transaction.
226
+ * The checks are:
227
+ * + nonsec_mask suppresses any check of the secure attribute
228
+ * + otherwise, block if cfg_nonsec is 1 and transaction is secure,
229
+ * or if cfg_nonsec is 0 and transaction is non-secure
230
+ * + block if transaction is usermode and cfg_ap is 0
231
+ */
232
+ if ((attrs.secure == s->cfg_nonsec[n] && !(s->nonsec_mask & (1 << n))) ||
233
+ (attrs.user && !s->cfg_ap[n])) {
234
+ /* Block the transaction. */
235
+ if (!s->irq_clear) {
236
+ /* Note that holding irq_clear high suppresses interrupts */
237
+ s->irq_status = true;
238
+ tz_ppc_update_irq(s);
239
+ }
240
+ return false;
241
+ }
242
+ return true;
243
+}
244
+
245
+static MemTxResult tz_ppc_read(void *opaque, hwaddr addr, uint64_t *pdata,
246
+ unsigned size, MemTxAttrs attrs)
247
+{
248
+ TZPPCPort *p = opaque;
249
+ TZPPC *s = p->ppc;
250
+ int n = p - s->port;
251
+ AddressSpace *as = &p->downstream_as;
252
+ uint64_t data;
253
+ MemTxResult res;
254
+
255
+ if (!tz_ppc_check(s, n, attrs)) {
256
+ trace_tz_ppc_read_blocked(n, addr, attrs.secure, attrs.user);
257
+ if (s->cfg_sec_resp) {
258
+ return MEMTX_ERROR;
259
+ } else {
260
+ *pdata = 0;
261
+ return MEMTX_OK;
262
+ }
263
+ }
264
+
265
+ switch (size) {
266
+ case 1:
267
+ data = address_space_ldub(as, addr, attrs, &res);
268
+ break;
29
+ break;
269
+ case 2:
30
+ case MO_UW:
270
+ data = address_space_lduw_le(as, addr, attrs, &res);
31
+ tcg_gen_ld16u_i32(var, cpu_env, offset);
271
+ break;
32
+ break;
272
+ case 4:
33
+ case MO_UL:
273
+ data = address_space_ldl_le(as, addr, attrs, &res);
34
+ tcg_gen_ld_i32(var, cpu_env, offset);
274
+ break;
275
+ case 8:
276
+ data = address_space_ldq_le(as, addr, attrs, &res);
277
+ break;
35
+ break;
278
+ default:
36
+ default:
279
+ g_assert_not_reached();
37
+ g_assert_not_reached();
280
+ }
38
+ }
281
+ *pdata = data;
282
+ return res;
283
+}
39
+}
284
+
40
+
285
+static MemTxResult tz_ppc_write(void *opaque, hwaddr addr, uint64_t val,
41
static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
286
+ unsigned size, MemTxAttrs attrs)
42
{
43
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
44
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
45
tcg_temp_free_i32(var);
46
}
47
48
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
287
+{
49
+{
288
+ TZPPCPort *p = opaque;
50
+ long offset = neon_element_offset(reg, ele, size);
289
+ TZPPC *s = p->ppc;
290
+ AddressSpace *as = &p->downstream_as;
291
+ int n = p - s->port;
292
+ MemTxResult res;
293
+
294
+ if (!tz_ppc_check(s, n, attrs)) {
295
+ trace_tz_ppc_write_blocked(n, addr, attrs.secure, attrs.user);
296
+ if (s->cfg_sec_resp) {
297
+ return MEMTX_ERROR;
298
+ } else {
299
+ return MEMTX_OK;
300
+ }
301
+ }
302
+
51
+
303
+ switch (size) {
52
+ switch (size) {
304
+ case 1:
53
+ case MO_8:
305
+ address_space_stb(as, addr, val, attrs, &res);
54
+ tcg_gen_st8_i32(var, cpu_env, offset);
306
+ break;
55
+ break;
307
+ case 2:
56
+ case MO_16:
308
+ address_space_stw_le(as, addr, val, attrs, &res);
57
+ tcg_gen_st16_i32(var, cpu_env, offset);
309
+ break;
58
+ break;
310
+ case 4:
59
+ case MO_32:
311
+ address_space_stl_le(as, addr, val, attrs, &res);
60
+ tcg_gen_st_i32(var, cpu_env, offset);
312
+ break;
313
+ case 8:
314
+ address_space_stq_le(as, addr, val, attrs, &res);
315
+ break;
61
+ break;
316
+ default:
62
+ default:
317
+ g_assert_not_reached();
63
+ g_assert_not_reached();
318
+ }
64
+ }
319
+ return res;
320
+}
65
+}
321
+
66
+
322
+static const MemoryRegionOps tz_ppc_ops = {
67
static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
323
+ .read_with_attrs = tz_ppc_read,
68
{
324
+ .write_with_attrs = tz_ppc_write,
69
long offset = neon_element_offset(reg, ele, size);
325
+ .endianness = DEVICE_LITTLE_ENDIAN,
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
326
+};
71
int stride;
327
+
72
int size;
328
+static void tz_ppc_reset(DeviceState *dev)
73
int reg;
329
+{
74
- int pass;
330
+ TZPPC *s = TZ_PPC(dev);
75
int load;
331
+
76
- int shift;
332
+ trace_tz_ppc_reset();
77
int n;
333
+ s->cfg_sec_resp = false;
78
int vec_size;
334
+ memset(s->cfg_nonsec, 0, sizeof(s->cfg_nonsec));
79
int mmu_idx;
335
+ memset(s->cfg_ap, 0, sizeof(s->cfg_ap));
80
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
336
+}
81
} else {
337
+
82
/* Single element. */
338
+static void tz_ppc_init(Object *obj)
83
int idx = (insn >> 4) & 0xf;
339
+{
84
- pass = (insn >> 7) & 1;
340
+ DeviceState *dev = DEVICE(obj);
85
+ int reg_idx;
341
+ TZPPC *s = TZ_PPC(obj);
86
switch (size) {
342
+
87
case 0:
343
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_nonsec, "cfg_nonsec", TZ_NUM_PORTS);
88
- shift = ((insn >> 5) & 3) * 8;
344
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_ap, "cfg_ap", TZ_NUM_PORTS);
89
+ reg_idx = (insn >> 5) & 7;
345
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_sec_resp, "cfg_sec_resp", 1);
90
stride = 1;
346
+ qdev_init_gpio_in_named(dev, tz_ppc_irq_enable, "irq_enable", 1);
91
break;
347
+ qdev_init_gpio_in_named(dev, tz_ppc_irq_clear, "irq_clear", 1);
92
case 1:
348
+ qdev_init_gpio_out_named(dev, &s->irq, "irq", 1);
93
- shift = ((insn >> 6) & 1) * 16;
349
+}
94
+ reg_idx = (insn >> 6) & 3;
350
+
95
stride = (insn & (1 << 5)) ? 2 : 1;
351
+static void tz_ppc_realize(DeviceState *dev, Error **errp)
96
break;
352
+{
97
case 2:
353
+ Object *obj = OBJECT(dev);
98
- shift = 0;
354
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
99
+ reg_idx = (insn >> 7) & 1;
355
+ TZPPC *s = TZ_PPC(dev);
100
stride = (insn & (1 << 6)) ? 2 : 1;
356
+ int i;
101
break;
357
+
102
default:
358
+ /* We can't create the upstream end of the port until realize,
103
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
359
+ * as we don't know the size of the MR used as the downstream until then.
104
*/
360
+ */
105
return 1;
361
+ for (i = 0; i < TZ_NUM_PORTS; i++) {
106
}
362
+ TZPPCPort *port = &s->port[i];
107
+ tmp = tcg_temp_new_i32();
363
+ char *name;
108
addr = tcg_temp_new_i32();
364
+ uint64_t size;
109
load_reg_var(s, addr, rn);
365
+
110
for (reg = 0; reg < nregs; reg++) {
366
+ if (!port->downstream) {
111
if (load) {
367
+ continue;
112
- tmp = tcg_temp_new_i32();
368
+ }
113
- switch (size) {
369
+
114
- case 0:
370
+ name = g_strdup_printf("tz-ppc-port[%d]", i);
115
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
371
+
116
- break;
372
+ port->ppc = s;
117
- case 1:
373
+ address_space_init(&port->downstream_as, port->downstream, name);
118
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
374
+
119
- break;
375
+ size = memory_region_size(port->downstream);
120
- case 2:
376
+ memory_region_init_io(&port->upstream, obj, &tz_ppc_ops,
121
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
377
+ port, name, size);
122
- break;
378
+ sysbus_init_mmio(sbd, &port->upstream);
123
- default: /* Avoid compiler warnings. */
379
+ g_free(name);
124
- abort();
380
+ }
125
- }
381
+}
126
- if (size != 2) {
382
+
127
- tmp2 = neon_load_reg(rd, pass);
383
+static const VMStateDescription tz_ppc_vmstate = {
128
- tcg_gen_deposit_i32(tmp, tmp2, tmp,
384
+ .name = "tz-ppc",
129
- shift, size ? 16 : 8);
385
+ .version_id = 1,
130
- tcg_temp_free_i32(tmp2);
386
+ .minimum_version_id = 1,
131
- }
387
+ .fields = (VMStateField[]) {
132
- neon_store_reg(rd, pass, tmp);
388
+ VMSTATE_BOOL_ARRAY(cfg_nonsec, TZPPC, 16),
133
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
389
+ VMSTATE_BOOL_ARRAY(cfg_ap, TZPPC, 16),
134
+ s->be_data | size);
390
+ VMSTATE_BOOL(cfg_sec_resp, TZPPC),
135
+ neon_store_element(rd, reg_idx, size, tmp);
391
+ VMSTATE_BOOL(irq_enable, TZPPC),
136
} else { /* Store */
392
+ VMSTATE_BOOL(irq_clear, TZPPC),
137
- tmp = neon_load_reg(rd, pass);
393
+ VMSTATE_BOOL(irq_status, TZPPC),
138
- if (shift)
394
+ VMSTATE_END_OF_LIST()
139
- tcg_gen_shri_i32(tmp, tmp, shift);
395
+ }
140
- switch (size) {
396
+};
141
- case 0:
397
+
142
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
398
+#define DEFINE_PORT(N) \
143
- break;
399
+ DEFINE_PROP_LINK("port[" #N "]", TZPPC, port[N].downstream, \
144
- case 1:
400
+ TYPE_MEMORY_REGION, MemoryRegion *)
145
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
401
+
146
- break;
402
+static Property tz_ppc_properties[] = {
147
- case 2:
403
+ DEFINE_PROP_UINT32("NONSEC_MASK", TZPPC, nonsec_mask, 0),
148
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
404
+ DEFINE_PORT(0),
149
- break;
405
+ DEFINE_PORT(1),
150
- }
406
+ DEFINE_PORT(2),
151
- tcg_temp_free_i32(tmp);
407
+ DEFINE_PORT(3),
152
+ neon_load_element(tmp, rd, reg_idx, size);
408
+ DEFINE_PORT(4),
153
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
409
+ DEFINE_PORT(5),
154
+ s->be_data | size);
410
+ DEFINE_PORT(6),
155
}
411
+ DEFINE_PORT(7),
156
rd += stride;
412
+ DEFINE_PORT(8),
157
tcg_gen_addi_i32(addr, addr, 1 << size);
413
+ DEFINE_PORT(9),
158
}
414
+ DEFINE_PORT(10),
159
tcg_temp_free_i32(addr);
415
+ DEFINE_PORT(11),
160
+ tcg_temp_free_i32(tmp);
416
+ DEFINE_PORT(12),
161
stride = nregs * (1 << size);
417
+ DEFINE_PORT(13),
162
}
418
+ DEFINE_PORT(14),
163
}
419
+ DEFINE_PORT(15),
420
+ DEFINE_PROP_END_OF_LIST(),
421
+};
422
+
423
+static void tz_ppc_class_init(ObjectClass *klass, void *data)
424
+{
425
+ DeviceClass *dc = DEVICE_CLASS(klass);
426
+
427
+ dc->realize = tz_ppc_realize;
428
+ dc->vmsd = &tz_ppc_vmstate;
429
+ dc->reset = tz_ppc_reset;
430
+ dc->props = tz_ppc_properties;
431
+}
432
+
433
+static const TypeInfo tz_ppc_info = {
434
+ .name = TYPE_TZ_PPC,
435
+ .parent = TYPE_SYS_BUS_DEVICE,
436
+ .instance_size = sizeof(TZPPC),
437
+ .instance_init = tz_ppc_init,
438
+ .class_init = tz_ppc_class_init,
439
+};
440
+
441
+static void tz_ppc_register_types(void)
442
+{
443
+ type_register_static(&tz_ppc_info);
444
+}
445
+
446
+type_init(tz_ppc_register_types);
447
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
448
index XXXXXXX..XXXXXXX 100644
449
--- a/default-configs/arm-softmmu.mak
450
+++ b/default-configs/arm-softmmu.mak
451
@@ -XXX,XX +XXX,XX @@ CONFIG_CMSDK_APB_UART=y
452
CONFIG_MPS2_FPGAIO=y
453
CONFIG_MPS2_SCC=y
454
455
+CONFIG_TZ_PPC=y
456
+
457
CONFIG_VERSATILE_PCI=y
458
CONFIG_VERSATILE_I2C=y
459
460
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
461
index XXXXXXX..XXXXXXX 100644
462
--- a/hw/misc/trace-events
463
+++ b/hw/misc/trace-events
464
@@ -XXX,XX +XXX,XX @@ mos6522_get_next_irq_time(uint16_t latch, int64_t d, int64_t delta) "latch=%d co
465
mos6522_set_sr_int(void) "set sr_int"
466
mos6522_write(uint64_t addr, uint64_t val) "reg=0x%"PRIx64 " val=0x%"PRIx64
467
mos6522_read(uint64_t addr, unsigned val) "reg=0x%"PRIx64 " val=0x%x"
468
+
469
+# hw/misc/tz-ppc.c
470
+tz_ppc_reset(void) "TZ PPC: reset"
471
+tz_ppc_cfg_nonsec(int n, int level) "TZ PPC: cfg_nonsec[%d] = %d"
472
+tz_ppc_cfg_ap(int n, int level) "TZ PPC: cfg_ap[%d] = %d"
473
+tz_ppc_cfg_sec_resp(int level) "TZ PPC: cfg_sec_resp = %d"
474
+tz_ppc_irq_enable(int level) "TZ PPC: int_enable = %d"
475
+tz_ppc_irq_clear(int level) "TZ PPC: int_clear = %d"
476
+tz_ppc_update_irq(int level) "TZ PPC: setting irq line to %d"
477
+tz_ppc_read_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " read (secure %d user %d) blocked"
478
+tz_ppc_write_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " write (secure %d user %d) blocked"
479
--
164
--
480
2.16.2
165
2.19.1
481
166
482
167
New patch
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
2
3
Announce the availability of the various priority queues.
4
This fixes an issue where guest kernels would fail to
5
configure secondary queues due to improper feature bits.
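(A standalone sketch of how the advertised-queues mask can be derived from
the queue count; the helper name is illustrative. The patch itself uses
MAKE_64BIT_MASK(1, s->num_priority_queues - 1), which yields the same bits:
queue 0 always exists, so only bits 1..N-1 are announced.)

    #include <stdio.h>

    static unsigned gem_queue_mask(unsigned num_priority_queues)
    {
        if (num_priority_queues <= 1) {
            return 0;   /* only queue 0: no extra queues to advertise */
        }
        /* Set (num_priority_queues - 1) bits, starting at bit 1. */
        return ((1u << (num_priority_queues - 1)) - 1) << 1;
    }

    int main(void)
    {
        printf("2 queues -> 0x%x\n", gem_queue_mask(2));  /* 0x2  */
        printf("8 queues -> 0x%x\n", gem_queue_mask(8));  /* 0xfe */
        return 0;
    }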
6
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/net/cadence_gem.c | 8 +++++++-
13
1 file changed, 7 insertions(+), 1 deletion(-)
14
15
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/net/cadence_gem.c
18
+++ b/hw/net/cadence_gem.c
19
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
20
int i;
21
CadenceGEMState *s = CADENCE_GEM(d);
22
const uint8_t *a;
23
+ uint32_t queues_mask = 0;
24
25
DB_PRINT("\n");
26
27
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
28
s->regs[GEM_DESCONF] = 0x02500111;
29
s->regs[GEM_DESCONF2] = 0x2ab13fff;
30
s->regs[GEM_DESCONF5] = 0x002f2045;
31
- s->regs[GEM_DESCONF6] = 0x00000200;
32
+ s->regs[GEM_DESCONF6] = 0x0;
33
+
34
+ if (s->num_priority_queues > 1) {
35
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
36
+ s->regs[GEM_DESCONF6] |= queues_mask;
37
+ }
38
39
/* Set MAC address */
40
a = &s->conf.macaddr.a[0];
41
--
42
2.19.1
43
44
1
The or-irq.h header file is missing the customary guard against
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
multiple inclusion, which means compilation fails if it gets
3
included twice. Fix the omission.
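(For reference, the usual include-guard pattern being added; the guard macro
name below is illustrative, the patch uses HW_OR_IRQ_H.)

    #ifndef SOME_HEADER_H
    #define SOME_HEADER_H

    /* ... declarations ... */

    #endif /* SOME_HEADER_H */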
4
2
3
Announce 64bit addressing support.
4
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-11-peter.maydell@linaro.org
9
---
10
---
10
include/hw/or-irq.h | 5 +++++
11
hw/net/cadence_gem.c | 3 ++-
11
1 file changed, 5 insertions(+)
12
1 file changed, 2 insertions(+), 1 deletion(-)
12
13
13
diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/or-irq.h
16
--- a/hw/net/cadence_gem.c
16
+++ b/include/hw/or-irq.h
17
+++ b/hw/net/cadence_gem.c
17
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@
18
* THE SOFTWARE.
19
#define GEM_DESCONF4 (0x0000028C/4)
19
*/
20
#define GEM_DESCONF5 (0x00000290/4)
20
21
#define GEM_DESCONF6 (0x00000294/4)
21
+#ifndef HW_OR_IRQ_H
22
+#define GEM_DESCONF6_64B_MASK (1U << 23)
22
+#define HW_OR_IRQ_H
23
#define GEM_DESCONF7 (0x00000298/4)
23
+
24
24
#include "hw/irq.h"
25
#define GEM_INT_Q1_STATUS (0x00000400 / 4)
25
#include "hw/sysbus.h"
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
26
#include "qom/object.h"
27
s->regs[GEM_DESCONF] = 0x02500111;
27
@@ -XXX,XX +XXX,XX @@ struct OrIRQState {
28
s->regs[GEM_DESCONF2] = 0x2ab13fff;
28
bool levels[MAX_OR_LINES];
29
s->regs[GEM_DESCONF5] = 0x002f2045;
29
uint16_t num_lines;
30
- s->regs[GEM_DESCONF6] = 0x0;
30
};
31
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
31
+
32
32
+#endif
33
if (s->num_priority_queues > 1) {
34
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
33
--
35
--
34
2.16.2
36
2.19.1
35
37
36
38
1
Move the definition of the struct for the unimplemented-device
1
From: Richard Henderson <richard.henderson@linaro.org>
2
from unimp.c to unimp.h, so that users can embed the struct
3
in their own device structs if they prefer.
4
2
3
The EL3 version of this register does not include an ASID,
4
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
5
6
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-10-peter.maydell@linaro.org
9
---
11
---
10
include/hw/misc/unimp.h | 10 ++++++++++
12
target/arm/helper.c | 2 +-
11
hw/misc/unimp.c | 10 ----------
13
1 file changed, 1 insertion(+), 1 deletion(-)
12
2 files changed, 10 insertions(+), 10 deletions(-)
13
14
14
diff --git a/include/hw/misc/unimp.h b/include/hw/misc/unimp.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/misc/unimp.h
17
--- a/target/arm/helper.c
17
+++ b/include/hw/misc/unimp.h
18
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
19
20
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
20
#define TYPE_UNIMPLEMENTED_DEVICE "unimplemented-device"
21
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
21
22
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
22
+#define UNIMPLEMENTED_DEVICE(obj) \
23
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
23
+ OBJECT_CHECK(UnimplementedDeviceState, (obj), TYPE_UNIMPLEMENTED_DEVICE)
24
+ .access = PL3_RW, .resetvalue = 0,
24
+
25
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
25
+typedef struct {
26
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
26
+ SysBusDevice parent_obj;
27
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
27
+ MemoryRegion iomem;
28
+ char *name;
29
+ uint64_t size;
30
+} UnimplementedDeviceState;
31
+
32
/**
33
* create_unimplemented_device: create and map a dummy device
34
* @name: name of the device for debug logging
35
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/misc/unimp.c
38
+++ b/hw/misc/unimp.c
39
@@ -XXX,XX +XXX,XX @@
40
#include "qemu/log.h"
41
#include "qapi/error.h"
42
43
-#define UNIMPLEMENTED_DEVICE(obj) \
44
- OBJECT_CHECK(UnimplementedDeviceState, (obj), TYPE_UNIMPLEMENTED_DEVICE)
45
-
46
-typedef struct {
47
- SysBusDevice parent_obj;
48
- MemoryRegion iomem;
49
- char *name;
50
- uint64_t size;
51
-} UnimplementedDeviceState;
52
-
53
static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
54
{
55
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
56
--
28
--
57
2.16.2
29
2.19.1
58
30
59
31
1
Instead of loading guest images to the system address space, use the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
CPU's address space. This is important if we're trying to load the
3
file to memory or via an alias memory region that is provided by an
4
SoC object and thus not mapped into the system address space.
5
2
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
4
TLB. However, if the ASID does not change, there is no reason to flush.
5
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
7
the number of flushes by 30%, or nearly 600k instances.
8
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180220180325.29818-4-peter.maydell@linaro.org
10
---
15
---
11
hw/arm/armv7m.c | 17 ++++++++++++++---
16
target/arm/helper.c | 8 +++-----
12
1 file changed, 14 insertions(+), 3 deletions(-)
17
1 file changed, 3 insertions(+), 5 deletions(-)
13
18
14
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/armv7m.c
21
--- a/target/arm/helper.c
17
+++ b/hw/arm/armv7m.c
22
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
19
uint64_t entry;
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
20
uint64_t lowaddr;
25
uint64_t value)
21
int big_endian;
26
{
22
+ AddressSpace *as;
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
23
+ int asidx;
28
- * must flush the TLB.
24
+ CPUState *cs = CPU(cpu);
29
- */
25
30
- if (cpreg_field_is_64bit(ri)) {
26
#ifdef TARGET_WORDS_BIGENDIAN
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
27
big_endian = 1;
32
+ if (cpreg_field_is_64bit(ri) &&
28
@@ -XXX,XX +XXX,XX @@ void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
29
exit(1);
34
ARMCPU *cpu = arm_env_get_cpu(env);
35
-
36
tlb_flush(CPU(cpu));
30
}
37
}
31
38
raw_write(env, ri, value);
32
+ if (arm_feature(&cpu->env, ARM_FEATURE_EL3)) {
33
+ asidx = ARMASIdx_S;
34
+ } else {
35
+ asidx = ARMASIdx_NS;
36
+ }
37
+ as = cpu_get_address_space(cs, asidx);
38
+
39
if (kernel_filename) {
40
- image_size = load_elf(kernel_filename, NULL, NULL, &entry, &lowaddr,
41
- NULL, big_endian, EM_ARM, 1, 0);
42
+ image_size = load_elf_as(kernel_filename, NULL, NULL, &entry, &lowaddr,
43
+ NULL, big_endian, EM_ARM, 1, 0, as);
44
if (image_size < 0) {
45
- image_size = load_image_targphys(kernel_filename, 0, mem_size);
46
+ image_size = load_image_targphys_as(kernel_filename, 0,
47
+ mem_size, as);
48
lowaddr = 0;
49
}
50
if (image_size < 0) {
51
--
39
--
52
2.16.2
40
2.19.1
53
41
54
42