First pullreq for 6.0: mostly my v8.1M work, plus some other
bits and pieces. (I still have a lot of stuff in my to-review
folder, which I may or may not get to before the Christmas break...)

thanks
-- PMM

The following changes since commit 5e7b204dbfae9a562fc73684986f936b97f63877:

  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2020-12-09 20:08:54 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201210

for you to fetch changes up to 71f916be1c7e9ede0e37d9cabc781b5a9e8638ff:

  hw/arm/armv7m: Correct typo in QOM object name (2020-12-10 11:44:56 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Fix up L1STD_SPAN decoding
 * xlnx-zynqmp: Support Xilinx ZynqMP CAN controllers
 * sbsa-ref: allow to use Cortex-A53/57/72 cpus
 * Various minor code cleanups
 * hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
 * Implement more pieces of ARMv8.1M support

----------------------------------------------------------------
Alex Chen (4):
      i.MX25: Fix bad printf format specifiers
      i.MX31: Fix bad printf format specifiers
      i.MX6: Fix bad printf format specifiers
      i.MX6ul: Fix bad printf format specifiers

Havard Skinnemoen (1):
      tests/qtest/npcm7xx_rng-test: dump random data on failure

Kunkun Jiang (1):
      hw/arm/smmuv3: Fix up L1STD_SPAN decoding

Marcin Juszkiewicz (1):
      sbsa-ref: allow to use Cortex-A53/57/72 cpus

Peter Maydell (25):
      hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
      target/arm: Implement v8.1M PXN extension
      target/arm: Don't clobber ID_PFR1.Security on M-profile cores
      target/arm: Implement VSCCLRM insn
      target/arm: Implement CLRM instruction
      target/arm: Enforce M-profile VMRS/VMSR register restrictions
      target/arm: Refactor M-profile VMSR/VMRS handling
      target/arm: Move general-use constant expanders up in translate.c
      target/arm: Implement VLDR/VSTR system register
      target/arm: Implement M-profile FPSCR_nzcvqc
      target/arm: Use new FPCR_NZCV_MASK constant
      target/arm: Factor out preserve-fp-state from full_vfp_access_check()
      target/arm: Implement FPCXT_S fp system register
      hw/intc/armv7m_nvic: Update FPDSCR masking for v8.1M
      target/arm: For v8.1M, always clear R0-R3, R12, APSR, EPSR on exception entry
      target/arm: In v8.1M, don't set HFSR.FORCED on vector table fetch failures
      target/arm: Implement v8.1M REVIDR register
      target/arm: Implement new v8.1M NOCP check for exception return
      target/arm: Implement new v8.1M VLLDM and VLSTM encodings
      hw/intc/armv7m_nvic: Support v8.1M CCR.TRD bit
      target/arm: Implement CCR_S.TRD behaviour for SG insns
      hw/intc/armv7m_nvic: Fix "return from inactive handler" check
      target/arm: Implement M-profile "minimal RAS implementation"
      hw/intc/armv7m_nvic: Implement read/write for RAS register block
      hw/arm/armv7m: Correct typo in QOM object name

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx ZynqMP CAN controller
      xlnx-zynqmp: Connect Xilinx ZynqMP CAN controllers
      tests/qtest: Introduce tests for Xilinx ZynqMP CAN controller
      MAINTAINERS: Add maintainer entry for Xilinx ZynqMP CAN controller

 meson.build | 1 +
 hw/arm/smmuv3-internal.h | 2 +-
 hw/net/can/trace.h | 1 +
 include/hw/arm/xlnx-zynqmp.h | 8 +
 include/hw/intc/armv7m_nvic.h | 2 +
 include/hw/net/xlnx-zynqmp-can.h | 78 +++
 target/arm/cpu.h | 46 ++
 target/arm/m-nocp.decode | 10 +-
 target/arm/t32.decode | 10 +-
 target/arm/vfp.decode | 14 +
 hw/arm/armv7m.c | 4 +-
 hw/arm/sbsa-ref.c | 23 +-
 hw/arm/xlnx-zcu102.c | 20 +
 hw/arm/xlnx-zynqmp.c | 34 ++
 hw/intc/armv7m_nvic.c | 246 ++++--
 hw/misc/imx25_ccm.c | 12 +-
 hw/misc/imx31_ccm.c | 14 +-
 hw/misc/imx6_ccm.c | 20 +-
 hw/misc/imx6_src.c | 2 +-
 hw/misc/imx6ul_ccm.c | 4 +-
 hw/misc/imx_ccm.c | 4 +-
 hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c | 5 +-
 target/arm/helper.c | 7 +-
 target/arm/m_helper.c | 130 ++++-
 target/arm/translate.c | 105 +++-
 tests/qtest/npcm7xx_rng-test.c | 12 +
 tests/qtest/xlnx-can-test.c | 360 ++++++++++++
 MAINTAINERS | 8 +
 hw/Kconfig | 1 +
 hw/net/can/meson.build | 1 +
 hw/net/can/trace-events | 9 +
 target/arm/translate-vfp.c.inc | 511 ++++++++++++++++-
 tests/qtest/meson.build | 1 +
 34 files changed, 2713 insertions(+), 153 deletions(-)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 tests/qtest/xlnx-can-test.c
 create mode 100644 hw/net/can/trace-events

From: Kunkun Jiang <jiangkunkun@huawei.com>

According to the SMMUv3 spec, the SPAN field of the Level1 Stream Table
Descriptor is 5 bits ([4:0]).

Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
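
For background, here is a minimal standalone sketch (not part of the patch)
of what the extra bit means. The extract32() helper below is a local
stand-in for QEMU's bitops helper of the same name; with only four bits
extracted, any SPAN value of 16 or more silently loses its top bit, so the
level-2 stream table size derived from it is wrong:

    #include <stdint.h>
    #include <stdio.h>

    /* Local stand-in for QEMU's extract32() from include/qemu/bitops.h. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t l1std_word0 = 18;  /* descriptor word 0 with SPAN = 18 (0b10010) */

        printf("4-bit decode: SPAN=%u\n", extract32(l1std_word0, 0, 4)); /* 2  */
        printf("5-bit decode: SPAN=%u\n", extract32(l1std_word0, 0, 5)); /* 18 */
        return 0;
    }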
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
     return hi << 32 | lo;
 }
 
-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))
 
 #endif
-- 
2.20.1

From: Vikram Garhwal <fnu.vikram@xilinx.com>

The Xilinx ZynqMP CAN controller model is based on QEMU's CAN bus
implementation and SocketCAN support. The bus connection and SocketCAN
connection for each CAN module can be configured on the command line.

Example of using a single CAN controller:
-object can-bus,id=canbus0 \
-machine xlnx-zcu102.canbus0=canbus0 \
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0

Example of connecting both CAN controllers to the same virtual CAN
interface on the host machine:
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
-machine xlnx-zcu102.canbus0=canbus0 \
-machine xlnx-zcu102.canbus1=canbus1 \
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1

To create a virtual CAN interface on the host machine, see the QEMU CAN docs:
https://github.com/qemu/qemu/blob/master/docs/can.txt

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
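
As a usage illustration, here is a rough qtest-style smoke test of the
loopback path. It is not part of this series (the real tests arrive in the
xlnx-can-test patch later in the queue); the 0xFF060000 CAN0 base address
and the test scaffolding are assumptions, while the register offsets and
bit positions are taken from the model below. In loopback mode no host CAN
backend is needed, since transmitted frames are reflected straight into the
RX FIFO.

    /* Hypothetical smoke test for the ZynqMP CAN model in loopback mode. */
    #include "qemu/osdep.h"
    #include "libqtest.h"

    #define CAN0_BASE       0xFF060000  /* assumed ZynqMP CAN0 MMIO base */
    #define R_SRR           0x00        /* SOFTWARE_RESET_REGISTER, CEN is bit 1 */
    #define R_MSR           0x04        /* MODE_SELECT_REGISTER, LBACK is bit 1 */
    #define R_TXFIFO_ID     0x30
    #define R_TXFIFO_DLC    0x34        /* DLC lives in bits [31:28] */
    #define R_TXFIFO_DATA1  0x38
    #define R_TXFIFO_DATA2  0x3c
    #define R_RXFIFO_ID     0x50

    static void test_can_loopback(void)
    {
        QTestState *qts = qtest_init("-machine xlnx-zcu102");
        uint32_t id = 0x123u << 21;     /* standard ID in TXFIFO_ID bits [31:21] */

        /* Select loopback while in configuration mode, then enable the core. */
        qtest_writel(qts, CAN0_BASE + R_MSR, 1 << 1);
        qtest_writel(qts, CAN0_BASE + R_SRR, 1 << 1);

        /* Queue one frame; the write to DATA2 kicks off the transfer. */
        qtest_writel(qts, CAN0_BASE + R_TXFIFO_ID, id);
        qtest_writel(qts, CAN0_BASE + R_TXFIFO_DLC, 8u << 28);
        qtest_writel(qts, CAN0_BASE + R_TXFIFO_DATA1, 0x11223344);
        qtest_writel(qts, CAN0_BASE + R_TXFIFO_DATA2, 0x55667788);

        /* In loopback the frame lands in the RX FIFO; the first word is the ID. */
        g_assert_cmphex(qtest_readl(qts, CAN0_BASE + R_RXFIFO_ID), ==, id);

        qtest_quit(qts);
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        qtest_add_func("/net/can/xlnx-zynqmp/loopback", test_can_loopback);
        return g_test_run();
    }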
---
 meson.build | 1 +
 hw/net/can/trace.h | 1 +
 include/hw/net/xlnx-zynqmp-can.h | 78 ++
 hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
 hw/Kconfig | 1 +
 hw/net/can/meson.build | 1 +
 hw/net/can/trace-events | 9 +
 7 files changed, 1252 insertions(+)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 hw/net/can/trace-events

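One detail that helps when reading the model below: frames move through the
TX/RX FIFOs as four 32-bit words (ID, DLC, DATA1, DATA2), with the standard
identifier in bits [31:21] of the ID word and the DLC in bits [31:28] of the
DLC word. The helper here is illustrative only and not part of the patch;
it simply mirrors that register layout:

    #include <stdint.h>

    /* Illustrative packing of one CAN frame into the four FIFO words used
     * by the model (TXFIFO_ID, TXFIFO_DLC, TXFIFO_DATA1, TXFIFO_DATA2). */
    struct xlnx_can_fifo_words {
        uint32_t id;     /* standard ID in bits [31:21] (TXFIFO_ID.IDH) */
        uint32_t dlc;    /* DLC in bits [31:28] (TXFIFO_DLC.DLC) */
        uint32_t data1;  /* data bytes DB0..DB3, DB0 in bits [31:24] */
        uint32_t data2;  /* data bytes DB4..DB7, DB4 in bits [31:24] */
    };

    static struct xlnx_can_fifo_words pack_std_frame(uint16_t std_id, uint8_t dlc,
                                                     uint32_t data1, uint32_t data2)
    {
        struct xlnx_can_fifo_words w = {
            .id    = (uint32_t)(std_id & 0x7ff) << 21,
            .dlc   = (uint32_t)(dlc & 0xf) << 28,
            .data1 = data1,
            .data2 = data2,
        };
        return w;
    }
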
diff --git a/meson.build b/meson.build
41
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
43
+++ b/meson.build
44
@@ -XXX,XX +XXX,XX @@ if have_system
45
'hw/misc',
46
'hw/misc/macio',
47
'hw/net',
48
+ 'hw/net/can',
49
'hw/nvram',
50
'hw/pci',
51
'hw/pci-host',
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
53
new file mode 100644
54
index XXXXXXX..XXXXXXX
55
--- /dev/null
56
+++ b/hw/net/can/trace.h
57
@@ -0,0 +1 @@
58
+#include "trace/trace-hw_net_can.h"
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
60
new file mode 100644
61
index XXXXXXX..XXXXXXX
62
--- /dev/null
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
64
@@ -XXX,XX +XXX,XX @@
65
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
67
+ *
68
+ * Copyright (c) 2020 Xilinx Inc.
69
+ *
70
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
71
+ *
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
73
+ * Pavel Pisa.
74
+ *
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
76
+ * of this software and associated documentation files (the "Software"), to deal
77
+ * in the Software without restriction, including without limitation the rights
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
79
+ * copies of the Software, and to permit persons to whom the Software is
80
+ * furnished to do so, subject to the following conditions:
81
+ *
82
+ * The above copyright notice and this permission notice shall be included in
83
+ * all copies or substantial portions of the Software.
84
+ *
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
91
+ * THE SOFTWARE.
92
+ */
93
+
94
+#ifndef XLNX_ZYNQMP_CAN_H
95
+#define XLNX_ZYNQMP_CAN_H
96
+
97
+#include "hw/register.h"
98
+#include "net/can_emu.h"
99
+#include "net/can_host.h"
100
+#include "qemu/fifo32.h"
101
+#include "hw/ptimer.h"
102
+#include "hw/qdev-clock.h"
103
+
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
105
+
106
+#define XLNX_ZYNQMP_CAN(obj) \
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
108
+
109
+#define MAX_CAN_CTRLS 2
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
111
+#define MAILBOX_CAPACITY 64
112
+#define CAN_TIMER_MAX 0XFFFFUL
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
114
+
115
+/* Each CAN_FRAME will have 4 * 32bit size. */
116
+#define CAN_FRAME_SIZE 4
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
118
+
119
+typedef struct XlnxZynqMPCANState {
120
+ SysBusDevice parent_obj;
121
+ MemoryRegion iomem;
122
+
123
+ qemu_irq irq;
124
+
125
+ CanBusClientState bus_client;
126
+ CanBusState *canbus;
127
+
128
+ struct {
129
+ uint32_t ext_clk_freq;
130
+ } cfg;
131
+
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
134
+
135
+ Fifo32 rx_fifo;
136
+ Fifo32 tx_fifo;
137
+ Fifo32 txhpb_fifo;
138
+
139
+ ptimer_state *can_timer;
140
+} XlnxZynqMPCANState;
141
+
142
+#endif
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
148
@@ -XXX,XX +XXX,XX @@
149
+/*
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
151
+ * This implementation is based on the following datasheet:
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
153
+ *
154
+ * Copyright (c) 2020 Xilinx Inc.
155
+ *
156
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
411
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
415
+ }
416
+
417
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
584
+}
585
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
587
+{
588
+ qemu_can_frame frame;
589
+ uint32_t data[CAN_FRAME_SIZE];
590
+ int i;
591
+ bool can_tx = tx_ready_check(s);
592
+
593
+ if (!can_tx) {
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
595
+
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
597
+ " transfer.\n", path);
598
+ can_update_irq(s);
599
+ return;
600
+ }
601
+
602
+ while (!fifo32_is_empty(fifo)) {
603
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
604
+ data[i] = fifo32_pop(fifo);
605
+ }
606
+
607
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
608
+ /*
609
+ * Controller is in loopback. In Loopback mode, the CAN core
610
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
611
+ * Any message transmitted is looped back to the RX line and
612
+ * acknowledged. The XlnxZynqMPCAN core receives any message
613
+ * that it transmits.
614
+ */
615
+ if (fifo32_is_full(&s->rx_fifo)) {
616
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
617
+ } else {
618
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
619
+ fifo32_push(&s->rx_fifo, data[i]);
620
+ }
621
+
622
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
623
+ }
624
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, the core does a software reset, then enters configuration mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to config"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled, the message will be stored in the FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* modify an acceptance filter, the corresponding UAF bit should be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
1094
+ .min_access_size = 4,
1095
+ .max_access_size = 4,
1096
+ },
1097
+};
1098
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
1100
+{
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+ * Reset FIFOs when CAN model is reset. This will clear the fifo writes
1124
+ * done by post_write which gets called from register_reset function,
1125
+ * post_write handle will not be able to trigger tx because CAN will be
1126
+ * disabled when software_reset_register is cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
1155
+}
1156
+
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+ /* Snoop mode: just keep the data; no response is sent back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+ * XlnxZynqMPCAN is in sleep mode. Any data on bus will bring it to wake
1177
+ * up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
1198
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
1200
+ CanBusState *bus)
1201
+{
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
1203
+
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
1205
+ return -1;
1206
+ }
1207
+ return 0;
1208
+}
1209
+
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
1211
+{
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
1213
+
1214
+ if (s->canbus) {
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1217
+
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
1219
+ " failed.", path);
1220
+ return;
1221
+ }
1222
+ }
1223
+
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
1228
+
1229
+ /* Allocate a new timer. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
1231
+ PTIMER_POLICY_DEFAULT);
1232
+
1233
+ ptimer_transaction_begin(s->can_timer);
1234
+
1235
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
1236
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
1237
+ ptimer_run(s->can_timer, 0);
1238
+ ptimer_transaction_commit(s->can_timer);
1239
+}
1240
+
1241
+static void xlnx_zynqmp_can_init(Object *obj)
1242
+{
1243
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1244
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1245
+
1246
+ RegisterInfoArray *reg_array;
1247
+
1248
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
1249
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1250
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
1251
+ ARRAY_SIZE(can_regs_info),
1252
+ s->reg_info, s->regs,
1253
+ &can_ops,
1254
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
1255
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1256
+
1257
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
1258
+ sysbus_init_mmio(sbd, &s->iomem);
1259
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
1260
+}
1261
+
1262
+static const VMStateDescription vmstate_can = {
1263
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1264
+ .version_id = 1,
1265
+ .minimum_version_id = 1,
1266
+ .fields = (VMStateField[]) {
1267
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
1268
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
1269
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
1270
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
1271
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
1272
+ VMSTATE_END_OF_LIST(),
1273
+ }
1274
+};
1275
+
1276
+static Property xlnx_zynqmp_can_properties[] = {
1277
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
1278
+ CAN_DEFAULT_CLOCK),
1279
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
1280
+ CanBusState *),
1281
+ DEFINE_PROP_END_OF_LIST(),
1282
+};
1283
+
1284
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
1285
+{
1286
+ DeviceClass *dc = DEVICE_CLASS(klass);
1287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1288
+
1289
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
1290
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
1291
+ dc->realize = xlnx_zynqmp_can_realize;
1292
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
1293
+ dc->vmsd = &vmstate_can;
1294
+}
1295
+
1296
+static const TypeInfo can_info = {
1297
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1298
+ .parent = TYPE_SYS_BUS_DEVICE,
1299
+ .instance_size = sizeof(XlnxZynqMPCANState),
1300
+ .class_init = xlnx_zynqmp_can_class_init,
1301
+ .instance_init = xlnx_zynqmp_can_init,
1302
+};
1303
+
1304
+static void can_register_types(void)
1305
+{
1306
+ type_register_static(&can_info);
1307
+}
1308
+
1309
+type_init(can_register_types)
1310
diff --git a/hw/Kconfig b/hw/Kconfig
1311
index XXXXXXX..XXXXXXX 100644
1312
--- a/hw/Kconfig
1313
+++ b/hw/Kconfig
1314
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
1315
config XLNX_ZYNQMP
1316
bool
1317
select REGISTER
1318
+ select CAN_BUS
1319
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
1320
index XXXXXXX..XXXXXXX 100644
1321
--- a/hw/net/can/meson.build
1322
+++ b/hw/net/can/meson.build
1323
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
1324
softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
1325
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
1326
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
1327
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
1328
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
1329
new file mode 100644
1330
index XXXXXXX..XXXXXXX
1331
--- /dev/null
1332
+++ b/hw/net/can/trace-events
1333
@@ -XXX,XX +XXX,XX @@
1334
+# xlnx-zynqmp-can.c
1335
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
1336
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
1337
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
1338
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
1339
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
1340
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
1341
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
1342
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
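These trace points can typically be enabled at run time with QEMU's -trace option, e.g. "-trace xlnx_can_*"; the exact invocation depends on the trace backend the binary was built with.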
1343
--
1344
2.20.1
1345
1346
1
From: Michel Heily <michelheily@gmail.com>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
Implement the watchdog timer for the stellaris boards.
3
Connect CAN0 and CAN1 on the ZynqMP.
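On the command line the two controllers can then be attached to a bus created with -object can-bus; for example, the qtest added later in this series passes "-object can-bus,id=canbus0 -machine xlnx-zcu102.canbus0=canbus0 -machine xlnx-zcu102.canbus1=canbus0" to put both controllers on the same bus.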
4
This device is a close variant of the CMSDK APB watchdog
5
device, so we can model it by subclassing that device and
6
tweaking the behaviour of some of its registers.
7
4
8
Signed-off-by: Michel Heily <michelheily@gmail.com>
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
[PMM: rewrote commit message, fixed a few checkpatch nits,
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
11
added comment giving the URL of the spec for the Stellaris
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
12
variant of the watchdog device]
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
include/hw/watchdog/cmsdk-apb-watchdog.h | 8 +++
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
16
hw/arm/stellaris.c | 22 ++++++-
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
17
hw/watchdog/cmsdk-apb-watchdog.c | 74 +++++++++++++++++++++++-
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
18
3 files changed, 100 insertions(+), 4 deletions(-)
14
3 files changed, 62 insertions(+)
19
15
20
diff --git a/include/hw/watchdog/cmsdk-apb-watchdog.h b/include/hw/watchdog/cmsdk-apb-watchdog.h
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/include/hw/watchdog/cmsdk-apb-watchdog.h
18
--- a/include/hw/arm/xlnx-zynqmp.h
23
+++ b/include/hw/watchdog/cmsdk-apb-watchdog.h
19
+++ b/include/hw/arm/xlnx-zynqmp.h
24
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
25
#define CMSDK_APB_WATCHDOG(obj) OBJECT_CHECK(CMSDKAPBWatchdog, (obj), \
21
#include "hw/intc/arm_gic.h"
26
TYPE_CMSDK_APB_WATCHDOG)
22
#include "hw/net/cadence_gem.h"
27
23
#include "hw/char/cadence_uart.h"
28
+/*
24
+#include "hw/net/xlnx-zynqmp-can.h"
29
+ * This shares the same struct (and cast macro) as the base
25
#include "hw/ide/ahci.h"
30
+ * cmsdk-apb-watchdog device.
26
#include "hw/sd/sdhci.h"
31
+ */
27
#include "hw/ssi/xilinx_spips.h"
32
+#define TYPE_LUMINARY_WATCHDOG "luminary-watchdog"
28
@@ -XXX,XX +XXX,XX @@
29
#include "hw/cpu/cluster.h"
30
#include "target/arm/cpu.h"
31
#include "qom/object.h"
32
+#include "net/can_emu.h"
33
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
38
#define XLNX_ZYNQMP_NUM_GEMS 4
39
#define XLNX_ZYNQMP_NUM_UARTS 2
40
+#define XLNX_ZYNQMP_NUM_CAN 2
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
33
+
57
+
34
typedef struct CMSDKAPBWatchdog {
58
+ /* CAN bus. */
35
/*< private >*/
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
36
SysBusDevice parent_obj;
60
};
37
@@ -XXX,XX +XXX,XX @@ typedef struct CMSDKAPBWatchdog {
38
MemoryRegion iomem;
39
qemu_irq wdogint;
40
uint32_t wdogclk_frq;
41
+ bool is_luminary;
42
struct ptimer_state *timer;
43
44
uint32_t control;
45
@@ -XXX,XX +XXX,XX @@ typedef struct CMSDKAPBWatchdog {
46
uint32_t itcr;
47
uint32_t itop;
48
uint32_t resetstatus;
49
+ const uint32_t *id;
50
} CMSDKAPBWatchdog;
51
61
52
#endif
62
#endif
53
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
54
index XXXXXXX..XXXXXXX 100644
64
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/stellaris.c
65
--- a/hw/arm/xlnx-zcu102.c
56
+++ b/hw/arm/stellaris.c
66
+++ b/hw/arm/xlnx-zcu102.c
57
@@ -XXX,XX +XXX,XX @@
67
@@ -XXX,XX +XXX,XX @@
58
#include "sysemu/sysemu.h"
68
#include "sysemu/qtest.h"
59
#include "hw/arm/armv7m.h"
69
#include "sysemu/device_tree.h"
60
#include "hw/char/pl011.h"
70
#include "qom/object.h"
61
+#include "hw/watchdog/cmsdk-apb-watchdog.h"
71
+#include "net/can_emu.h"
62
#include "hw/misc/unimp.h"
72
63
#include "cpu.h"
73
struct XlnxZCU102 {
64
74
MachineState parent_obj;
65
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
66
* Stellaris LM3S6965 Microcontroller Data Sheet (rev I)
76
bool secure;
67
* http://www.ti.com/lit/ds/symlink/lm3s6965.pdf
77
bool virt;
68
*
78
69
- * 40000000 wdtimer (unimplemented)
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
70
+ * 40000000 wdtimer
71
* 40002000 i2c (unimplemented)
72
* 40004000 GPIO
73
* 40005000 GPIO
74
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
75
stellaris_sys_init(0x400fe000, qdev_get_gpio_in(nvic, 28),
76
board, nd_table[0].macaddr.a);
77
78
+
80
+
79
+ if (board->dc1 & (1 << 3)) { /* watchdog present */
81
struct arm_boot_info binfo;
80
+ dev = qdev_create(NULL, TYPE_LUMINARY_WATCHDOG);
82
};
83
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
86
&error_fatal);
87
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
81
+
90
+
82
+ /* system_clock_scale is valid now */
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
83
+ uint32_t mainclk = NANOSECONDS_PER_SECOND / system_clock_scale;
92
+ OBJECT(s->canbus[i]), &error_fatal);
84
+ qdev_prop_set_uint32(dev, "wdogclk-frq", mainclk);
93
+ g_free(bus_name);
85
+
86
+ qdev_init_nofail(dev);
87
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev),
88
+ 0,
89
+ 0x40000000u);
90
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev),
91
+ 0,
92
+ qdev_get_gpio_in(nvic, 18));
93
+ }
94
+ }
94
+
95
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
95
+
107
+
96
for (i = 0; i < 7; i++) {
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
97
if (board->dc4 & (1 << i)) {
109
+ (Object **)&s->canbus[1],
98
gpio_dev[i] = sysbus_create_simple("pl061_luminary", gpio_addr[i],
110
+ object_property_allow_set_link,
99
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
111
+ 0);
100
/* Add dummy regions for the devices we don't implement yet,
112
}
101
* so guest accesses don't cause unlogged crashes.
113
102
*/
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
103
- create_unimplemented_device("wdtimer", 0x40000000, 0x1000);
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
104
create_unimplemented_device("i2c-0", 0x40002000, 0x1000);
105
create_unimplemented_device("i2c-2", 0x40021000, 0x1000);
106
create_unimplemented_device("PWM", 0x40028000, 0x1000);
107
diff --git a/hw/watchdog/cmsdk-apb-watchdog.c b/hw/watchdog/cmsdk-apb-watchdog.c
108
index XXXXXXX..XXXXXXX 100644
116
index XXXXXXX..XXXXXXX 100644
109
--- a/hw/watchdog/cmsdk-apb-watchdog.c
117
--- a/hw/arm/xlnx-zynqmp.c
110
+++ b/hw/watchdog/cmsdk-apb-watchdog.c
118
+++ b/hw/arm/xlnx-zynqmp.c
111
@@ -XXX,XX +XXX,XX @@
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
112
* System Design Kit (CMSDK) and documented in the Cortex-M System
120
21, 22,
113
* Design Kit Technical Reference Manual (ARM DDI0479C):
114
* https://developer.arm.com/products/system-design/system-design-kits/cortex-m-system-design-kit
115
+ *
116
+ * We also support the variant of this device found in the TI
117
+ * Stellaris/Luminary boards and documented in:
118
+ * http://www.ti.com/lit/ds/symlink/lm3s6965.pdf
119
*/
120
121
#include "qemu/osdep.h"
122
@@ -XXX,XX +XXX,XX @@ REG32(WDOGINTCLR, 0xc)
123
REG32(WDOGRIS, 0x10)
124
FIELD(WDOGRIS, INT, 0, 1)
125
REG32(WDOGMIS, 0x14)
126
+REG32(WDOGTEST, 0x418) /* only in Stellaris/Luminary version of the device */
127
REG32(WDOGLOCK, 0xc00)
128
#define WDOG_UNLOCK_VALUE 0x1ACCE551
129
REG32(WDOGITCR, 0xf00)
130
@@ -XXX,XX +XXX,XX @@ REG32(CID2, 0xff8)
131
REG32(CID3, 0xffc)
132
133
/* PID/CID values */
134
-static const int watchdog_id[] = {
135
+static const uint32_t cmsdk_apb_watchdog_id[] = {
136
0x04, 0x00, 0x00, 0x00, /* PID4..PID7 */
137
0x24, 0xb8, 0x1b, 0x00, /* PID0..PID3 */
138
0x0d, 0xf0, 0x05, 0xb1, /* CID0..CID3 */
139
};
121
};
140
122
141
+static const uint32_t luminary_watchdog_id[] = {
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
142
+ 0x00, 0x00, 0x00, 0x00, /* PID4..PID7 */
124
+ 0xFF060000, 0xFF070000,
143
+ 0x05, 0x18, 0x18, 0x01, /* PID0..PID3 */
144
+ 0x0d, 0xf0, 0x05, 0xb1, /* CID0..CID3 */
145
+};
125
+};
146
+
126
+
147
static bool cmsdk_apb_watchdog_intstatus(CMSDKAPBWatchdog *s)
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
148
{
128
+ 23, 24,
149
/* Return masked interrupt status */
150
@@ -XXX,XX +XXX,XX @@ static void cmsdk_apb_watchdog_update(CMSDKAPBWatchdog *s)
151
bool wdogres;
152
153
if (s->itcr) {
154
+ /*
155
+ * Not checking that !s->is_luminary since s->itcr can't be written
156
+ * when s->is_luminary in the first place.
157
+ */
158
wdogint = s->itop & R_WDOGITOP_WDOGINT_MASK;
159
wdogres = s->itop & R_WDOGITOP_WDOGRES_MASK;
160
} else {
161
@@ -XXX,XX +XXX,XX @@ static uint64_t cmsdk_apb_watchdog_read(void *opaque, hwaddr offset,
162
r = s->lock;
163
break;
164
case A_WDOGITCR:
165
+ if (s->is_luminary) {
166
+ goto bad_offset;
167
+ }
168
r = s->itcr;
169
break;
170
case A_PID4 ... A_CID3:
171
- r = watchdog_id[(offset - A_PID4) / 4];
172
+ r = s->id[(offset - A_PID4) / 4];
173
break;
174
case A_WDOGINTCLR:
175
case A_WDOGITOP:
176
+ if (s->is_luminary) {
177
+ goto bad_offset;
178
+ }
179
qemu_log_mask(LOG_GUEST_ERROR,
180
"CMSDK APB watchdog read: read of WO offset %x\n",
181
(int)offset);
182
r = 0;
183
break;
184
+ case A_WDOGTEST:
185
+ if (!s->is_luminary) {
186
+ goto bad_offset;
187
+ }
188
+ qemu_log_mask(LOG_UNIMP,
189
+ "Luminary watchdog read: stall not implemented\n");
190
+ r = 0;
191
+ break;
192
default:
193
+bad_offset:
194
qemu_log_mask(LOG_GUEST_ERROR,
195
"CMSDK APB watchdog read: bad offset %x\n", (int)offset);
196
r = 0;
197
@@ -XXX,XX +XXX,XX @@ static void cmsdk_apb_watchdog_write(void *opaque, hwaddr offset,
198
ptimer_run(s->timer, 0);
199
break;
200
case A_WDOGCONTROL:
201
+ if (s->is_luminary && 0 != (R_WDOGCONTROL_INTEN_MASK & s->control)) {
202
+ /*
203
+ * The Luminary version of this device ignores writes to
204
+ * this register after the guest has enabled interrupts
205
+ * (so they can only be disabled again via reset).
206
+ */
207
+ break;
208
+ }
209
s->control = value & R_WDOGCONTROL_VALID_MASK;
210
cmsdk_apb_watchdog_update(s);
211
break;
212
@@ -XXX,XX +XXX,XX @@ static void cmsdk_apb_watchdog_write(void *opaque, hwaddr offset,
213
s->lock = (value != WDOG_UNLOCK_VALUE);
214
break;
215
case A_WDOGITCR:
216
+ if (s->is_luminary) {
217
+ goto bad_offset;
218
+ }
219
s->itcr = value & R_WDOGITCR_VALID_MASK;
220
cmsdk_apb_watchdog_update(s);
221
break;
222
case A_WDOGITOP:
223
+ if (s->is_luminary) {
224
+ goto bad_offset;
225
+ }
226
s->itop = value & R_WDOGITOP_VALID_MASK;
227
cmsdk_apb_watchdog_update(s);
228
break;
229
@@ -XXX,XX +XXX,XX @@ static void cmsdk_apb_watchdog_write(void *opaque, hwaddr offset,
230
"CMSDK APB watchdog write: write to RO offset 0x%x\n",
231
(int)offset);
232
break;
233
+ case A_WDOGTEST:
234
+ if (!s->is_luminary) {
235
+ goto bad_offset;
236
+ }
237
+ qemu_log_mask(LOG_UNIMP,
238
+ "Luminary watchdog write: stall not implemented\n");
239
+ break;
240
default:
241
+bad_offset:
242
qemu_log_mask(LOG_GUEST_ERROR,
243
"CMSDK APB watchdog write: bad offset 0x%x\n",
244
(int)offset);
245
@@ -XXX,XX +XXX,XX @@ static void cmsdk_apb_watchdog_init(Object *obj)
246
s, "cmsdk-apb-watchdog", 0x1000);
247
sysbus_init_mmio(sbd, &s->iomem);
248
sysbus_init_irq(sbd, &s->wdogint);
249
+
250
+ s->is_luminary = false;
251
+ s->id = cmsdk_apb_watchdog_id;
252
}
253
254
static void cmsdk_apb_watchdog_realize(DeviceState *dev, Error **errp)
255
@@ -XXX,XX +XXX,XX @@ static const TypeInfo cmsdk_apb_watchdog_info = {
256
.class_init = cmsdk_apb_watchdog_class_init,
257
};
258
259
+static void luminary_watchdog_init(Object *obj)
260
+{
261
+ CMSDKAPBWatchdog *s = CMSDK_APB_WATCHDOG(obj);
262
+
263
+ s->is_luminary = true;
264
+ s->id = luminary_watchdog_id;
265
+}
266
+
267
+static const TypeInfo luminary_watchdog_info = {
268
+ .name = TYPE_LUMINARY_WATCHDOG,
269
+ .parent = TYPE_CMSDK_APB_WATCHDOG,
270
+ .instance_init = luminary_watchdog_init
271
+};
129
+};
272
+
130
+
273
static void cmsdk_apb_watchdog_register_types(void)
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
274
{
132
0xFF160000, 0xFF170000,
275
type_register_static(&cmsdk_apb_watchdog_info);
133
};
276
+ type_register_static(&luminary_watchdog_info);
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
277
}
135
TYPE_CADENCE_UART);
278
136
}
279
type_init(cmsdk_apb_watchdog_register_types);
137
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
140
+ TYPE_XLNX_ZYNQMP_CAN);
141
+ }
142
+
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
144
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
147
gic_spi[uart_intr[i]]);
148
}
149
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
153
+
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
155
+ OBJECT(s->canbus[i]), &error_fatal);
156
+
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
158
+ if (err) {
159
+ error_propagate(errp, err);
160
+ return;
161
+ }
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
164
+ gic_spi[can_intr[i]]);
165
+ }
166
+
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
168
&error_abort);
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
173
MemoryRegion *),
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
175
+ CanBusState *),
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
177
+ CanBusState *),
178
DEFINE_PROP_END_OF_LIST()
179
};
180
280
--
181
--
281
2.20.1
182
2.20.1
282
183
283
184
New patch
1
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
3
The QTests perform five tests on the Xilinx ZynqMP CAN controller:
4
Tests the CAN controller in loopback, sleep, and snoop modes.
5
Tests filtering of incoming CAN messages.
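Once built, the test binary can usually be run on its own by pointing it at the aarch64 system emulator, e.g. "QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/xlnx-can-test"; the binary paths here are assumptions that depend on the build-tree layout.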
6
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
10
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
14
tests/qtest/meson.build | 1 +
15
2 files changed, 361 insertions(+)
16
create mode 100644 tests/qtest/xlnx-can-test.c
17
18
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
19
new file mode 100644
20
index XXXXXXX..XXXXXXX
21
--- /dev/null
22
+++ b/tests/qtest/xlnx-can-test.c
23
@@ -XXX,XX +XXX,XX @@
24
+/*
25
+ * QTests for the Xilinx ZynqMP CAN controller.
26
+ *
27
+ * Copyright (c) 2020 Xilinx Inc.
28
+ *
29
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
30
+ *
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
49
+
50
+#include "qemu/osdep.h"
51
+#include "libqos/libqtest.h"
52
+
53
+/* Base address. */
54
+#define CAN0_BASE_ADDR 0xFF060000
55
+#define CAN1_BASE_ADDR 0xFF070000
56
+
57
+/* Register addresses. */
58
+#define R_SRR_OFFSET 0x00
59
+#define R_MSR_OFFSET 0x04
60
+#define R_SR_OFFSET 0x18
61
+#define R_ISR_OFFSET 0x1C
62
+#define R_ICR_OFFSET 0x24
63
+#define R_TXID_OFFSET 0x30
64
+#define R_TXDLC_OFFSET 0x34
65
+#define R_TXDATA1_OFFSET 0x38
66
+#define R_TXDATA2_OFFSET 0x3C
67
+#define R_RXID_OFFSET 0x50
68
+#define R_RXDLC_OFFSET 0x54
69
+#define R_RXDATA1_OFFSET 0x58
70
+#define R_RXDATA2_OFFSET 0x5C
71
+#define R_AFR 0x60
72
+#define R_AFMR1 0x64
73
+#define R_AFIR1 0x68
74
+#define R_AFMR2 0x6C
75
+#define R_AFIR2 0x70
76
+#define R_AFMR3 0x74
77
+#define R_AFIR3 0x78
78
+#define R_AFMR4 0x7C
79
+#define R_AFIR4 0x80
80
+
81
+/* CAN modes. */
82
+#define CONFIG_MODE 0x00
83
+#define NORMAL_MODE 0x00
84
+#define LOOPBACK_MODE 0x02
85
+#define SNOOP_MODE 0x04
86
+#define SLEEP_MODE 0x01
87
+#define ENABLE_CAN (1 << 1)
88
+#define STATUS_NORMAL_MODE (1 << 3)
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
90
+#define STATUS_SNOOP_MODE (1 << 12)
91
+#define STATUS_SLEEP_MODE (1 << 2)
92
+#define ISR_TXOK (1 << 1)
93
+#define ISR_RXOK (1 << 4)
94
+
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
96
+ uint8_t can_timestamp)
97
+{
98
+ uint16_t size = 0;
99
+ uint8_t len = 4;
100
+
101
+ while (size < len) {
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
104
+ } else {
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
106
+ }
107
+
108
+ size++;
109
+ }
110
+}
111
+
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
113
+{
114
+ uint32_t int_status;
115
+
116
+ /* Read the interrupt on CAN rx. */
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
118
+
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
120
+
121
+ /* Read the RX register data for CAN. */
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
126
+
127
+ /* Clear the RX interrupt. */
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
129
+}
130
+
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
132
+ const uint32_t *buf_tx)
133
+{
134
+ uint32_t int_status;
135
+
136
+ /* Write the TX register data for CAN. */
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
141
+
142
+ /* Read the interrupt on CAN for tx. */
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
144
+
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
146
+
147
+ /* Clear the interrupt for tx. */
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
149
+}
150
+
151
+/*
152
+ * This test transfers data between CAN0 and CAN1 over the CAN bus. CAN0
153
+ * initiates the transfer and CAN1 receives the data. The test compares
154
+ * the data sent from CAN0 with the data received on CAN1.
155
+ */
156
+static void test_can_bus(void)
157
+{
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
160
+ uint32_t status = 0;
161
+ uint8_t can_timestamp = 1;
162
+
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
164
+ " -object can-bus,id=canbus0"
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
167
+ );
168
+
169
+ /* Configure the CAN0 and CAN1. */
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
174
+
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
178
+
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
181
+
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
183
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
186
+
187
+ qtest_quit(qts);
188
+}
189
+
190
+/*
191
+ * This test performs loopback mode on CAN0 and CAN1. Data sent from the TX of
192
+ * each controller is compared with the RX register data of the same controller.
193
+ */
194
+static void test_can_loopback(void)
195
+{
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
198
+ uint32_t status = 0;
199
+
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
201
+ " -object can-bus,id=canbus0"
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
204
+ );
205
+
206
+ /* Configure the CAN0 in loopback mode. */
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
210
+
211
+ /* Check here if CAN0 is set in loopback mode. */
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
213
+
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
215
+
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
219
+
220
+ /* Configure the CAN1 in loopback mode. */
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
224
+
225
+ /* Check here if CAN1 is set in loopback mode. */
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
227
+
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
229
+
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
234
+ qtest_quit(qts);
235
+}
236
+
237
+/*
238
+ * Enable filters for CAN1. These will filter incoming messages by ID. In this
239
+ * test the message will pass through filter 2.
240
+ */
241
+static void test_can_filter(void)
242
+{
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
245
+ uint32_t status = 0;
246
+ uint8_t can_timestamp = 1;
247
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
285
+ qtest_quit(qts);
286
+}
287
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
289
+static void test_can_sleepmode(void)
290
+{
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
293
+ uint32_t status = 0;
294
+ uint8_t can_timestamp = 1;
295
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
297
+ " -object can-bus,id=canbus0"
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
300
+ );
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+ * Once CAN1 sends data on the can-bus, CAN0 should exit sleep mode.
321
+ * Check the CAN0 status now. It should have exited sleep mode and received the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
331
+ qtest_quit(qts);
332
+}
333
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
373
+{
374
+ g_test_init(&argc, &argv, NULL);
375
+
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
381
+
382
+ return g_test_run();
383
+}
384
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
385
index XXXXXXX..XXXXXXX 100644
386
--- a/tests/qtest/meson.build
387
+++ b/tests/qtest/meson.build
388
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
389
['arm-cpu-features',
390
'numa-test',
391
'boot-serial-test',
392
+ 'xlnx-can-test',
393
'migration-test']
394
395
qtests_s390x = \
396
--
397
2.20.1
398
399
New patch
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
2
3
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
4
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
5
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
6
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
MAINTAINERS | 8 ++++++++
10
1 file changed, 8 insertions(+)
11
12
diff --git a/MAINTAINERS b/MAINTAINERS
13
index XXXXXXX..XXXXXXX 100644
14
--- a/MAINTAINERS
15
+++ b/MAINTAINERS
16
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c
17
18
Devices
19
-------
20
+Xilinx CAN
21
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
22
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
23
+S: Maintained
24
+F: hw/net/can/xlnx-*
25
+F: include/hw/net/xlnx-*
26
+F: tests/qtest/xlnx-can-test*
27
+
28
EDU
29
M: Jiri Slaby <jslaby@suse.cz>
30
S: Maintained
31
--
32
2.20.1
33
34
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
2
3
In preparation for introducing an extended memory map supporting more
3
Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
4
RAM, let's split the memory map array into two parts:
4
it for QEMU as well. A53 was already enabled there.
5
5
6
- the former a15memmap, renamed base_memmap, contains regions below
6
1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117
7
and including the RAM. MemMapEntries initialized in this array
8
have a static size and base address.
9
- extended_memmap, only initialized with entries located after the
10
RAM. MemMapEntries initialized in this array only get their size
11
initialized. Their base address is dynamically computed depending
12
on the top of the RAM, with the same alignment as their size.
13
7
14
Eventually base_memmap entries are copied into the extended_memmap
8
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
15
array. Using two separate arrays, however, clarifies which entries
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
are statically allocated and which are dynamically allocated.
10
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
17
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
This new split will allow the RAM size to grow without changing the
19
description of the high IO entries.
20
21
We introduce a new virt_set_memmap() helper function which
22
"freezes" the memory map. We call it in machvirt_init as
23
memory attributes of the machine are not yet set when
24
virt_instance_init() gets called.
25
26
The memory map is unchanged (the top of the initial RAM still is
27
256GiB). Then come the high IO regions, with the same layout as before.
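To make the alignment rule concrete, here is a small standalone sketch (not QEMU code; the generic ROUND_UP macro and the hard-coded sizes are assumptions that simply mirror the hunk below) of how the floating bases are computed when the RAM tops out at the legacy 256GiB mark:

    #include <stdio.h>
    #include <inttypes.h>

    /* Round n up to the next multiple of d (d > 0); stand-in for QEMU's ROUND_UP. */
    #define ROUND_UP(n, d) ((((n) + (d) - 1) / (d)) * (d))

    int main(void)
    {
        uint64_t base = 256ULL << 30;   /* top of the legacy 256GiB RAM region */
        uint64_t sizes[] = {
            64ULL << 20,                /* high GIC redistributor region */
            256ULL << 20,               /* high PCIe ECAM */
            512ULL << 30,               /* second PCIe window */
        };

        for (int i = 0; i < 3; i++) {
            base = ROUND_UP(base, sizes[i]);    /* base gets the alignment of its size */
            printf("region %d: base 0x%" PRIx64 ", size 0x%" PRIx64 "\n",
                   i, base, sizes[i]);
            base += sizes[i];
        }
        return 0;
    }

On this input it prints bases 0x4000000000, 0x4010000000 and 0x8000000000, i.e. the same addresses that were hard-coded in a15memmap before this change.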
28
29
Signed-off-by: Eric Auger <eric.auger@redhat.com>
30
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
31
Message-id: 20190304101339.25970-4-eric.auger@redhat.com
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
33
---
13
---
34
include/hw/arm/virt.h | 13 +++++++----
14
hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
35
hw/arm/virt.c | 50 +++++++++++++++++++++++++++++++++++++------
15
1 file changed, 20 insertions(+), 3 deletions(-)
36
2 files changed, 53 insertions(+), 10 deletions(-)
37
16
38
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
39
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
40
--- a/include/hw/arm/virt.h
19
--- a/hw/arm/sbsa-ref.c
41
+++ b/include/hw/arm/virt.h
20
+++ b/hw/arm/sbsa-ref.c
42
@@ -XXX,XX +XXX,XX @@ enum {
21
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
43
VIRT_GIC_VCPU,
22
[SBSA_GWDT] = 16,
44
VIRT_GIC_ITS,
23
};
45
VIRT_GIC_REDIST,
24
46
- VIRT_HIGH_GIC_REDIST2,
25
+static const char * const valid_cpus[] = {
47
VIRT_SMMU,
26
+ ARM_CPU_TYPE_NAME("cortex-a53"),
48
VIRT_UART,
27
+ ARM_CPU_TYPE_NAME("cortex-a57"),
49
VIRT_MMIO,
28
+ ARM_CPU_TYPE_NAME("cortex-a72"),
50
@@ -XXX,XX +XXX,XX @@ enum {
51
VIRT_PCIE_MMIO,
52
VIRT_PCIE_PIO,
53
VIRT_PCIE_ECAM,
54
- VIRT_HIGH_PCIE_ECAM,
55
VIRT_PLATFORM_BUS,
56
- VIRT_HIGH_PCIE_MMIO,
57
VIRT_GPIO,
58
VIRT_SECURE_UART,
59
VIRT_SECURE_MEM,
60
+ VIRT_LOWMEMMAP_LAST,
61
+};
29
+};
62
+
30
+
63
+/* indices of IO regions located after the RAM */
31
+static bool cpu_type_valid(const char *cpu)
64
+enum {
65
+ VIRT_HIGH_GIC_REDIST2 = VIRT_LOWMEMMAP_LAST,
66
+ VIRT_HIGH_PCIE_ECAM,
67
+ VIRT_HIGH_PCIE_MMIO,
68
};
69
70
typedef enum VirtIOMMUType {
71
@@ -XXX,XX +XXX,XX @@ typedef struct {
72
int32_t gic_version;
73
VirtIOMMUType iommu;
74
struct arm_boot_info bootinfo;
75
- const MemMapEntry *memmap;
76
+ MemMapEntry *memmap;
77
const int *irqmap;
78
int smp_cpus;
79
void *fdt;
80
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/hw/arm/virt.c
83
+++ b/hw/arm/virt.c
84
@@ -XXX,XX +XXX,XX @@
85
*/
86
87
#include "qemu/osdep.h"
88
+#include "qemu/units.h"
89
#include "qapi/error.h"
90
#include "hw/sysbus.h"
91
#include "hw/arm/arm.h"
92
@@ -XXX,XX +XXX,XX @@
93
* Note that devices should generally be placed at multiples of 0x10000,
94
* to accommodate guests using 64K pages.
95
*/
96
-static const MemMapEntry a15memmap[] = {
97
+static const MemMapEntry base_memmap[] = {
98
/* Space up to 0x8000000 is reserved for a boot ROM */
99
[VIRT_FLASH] = { 0, 0x08000000 },
100
[VIRT_CPUPERIPHS] = { 0x08000000, 0x00020000 },
101
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
102
[VIRT_PCIE_PIO] = { 0x3eff0000, 0x00010000 },
103
[VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
104
[VIRT_MEM] = { 0x40000000, RAMLIMIT_BYTES },
105
+};
106
+
107
+/*
108
+ * Highmem IO Regions: This memory map is floating, located after the RAM.
109
+ * Each MemMapEntry base (GPA) will be dynamically computed, depending on the
110
+ * top of the RAM, so that its base get the same alignment as the size,
111
+ * ie. a 512GiB entry will be aligned on a 512GiB boundary. If there is
112
+ * less than 256GiB of RAM, the floating area starts at the 256GiB mark.
113
+ * Note the extended_memmap is sized so that it eventually also includes the
114
+ * base_memmap entries (VIRT_HIGH_GIC_REDIST2 index is greater than the last
115
+ * index of base_memmap).
116
+ */
117
+static MemMapEntry extended_memmap[] = {
118
/* Additional 64 MB redist region (can contain up to 512 redistributors) */
119
- [VIRT_HIGH_GIC_REDIST2] = { 0x4000000000ULL, 0x4000000 },
120
- [VIRT_HIGH_PCIE_ECAM] = { 0x4010000000ULL, 0x10000000 },
121
- /* Second PCIe window, 512GB wide at the 512GB boundary */
122
- [VIRT_HIGH_PCIE_MMIO] = { 0x8000000000ULL, 0x8000000000ULL },
123
+ [VIRT_HIGH_GIC_REDIST2] = { 0x0, 64 * MiB },
124
+ [VIRT_HIGH_PCIE_ECAM] = { 0x0, 256 * MiB },
125
+ /* Second PCIe window */
126
+ [VIRT_HIGH_PCIE_MMIO] = { 0x0, 512 * GiB },
127
};
128
129
static const int a15irqmap[] = {
130
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
131
return arm_cpu_mp_affinity(idx, clustersz);
132
}
133
134
+static void virt_set_memmap(VirtMachineState *vms)
135
+{
32
+{
136
+ hwaddr base;
137
+ int i;
33
+ int i;
138
+
34
+
139
+ vms->memmap = extended_memmap;
35
+ for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
140
+
36
+ if (strcmp(cpu, valid_cpus[i]) == 0) {
141
+ for (i = 0; i < ARRAY_SIZE(base_memmap); i++) {
37
+ return true;
142
+ vms->memmap[i] = base_memmap[i];
38
+ }
143
+ }
39
+ }
144
+
40
+ return false;
145
+ base = 256 * GiB; /* Top of the legacy initial RAM region */
146
+
147
+ for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
148
+ hwaddr size = extended_memmap[i].size;
149
+
150
+ base = ROUND_UP(base, size);
151
+ vms->memmap[i].base = base;
152
+ vms->memmap[i].size = size;
153
+ base += size;
154
+ }
155
+}
41
+}
156
+
42
+
157
static void machvirt_init(MachineState *machine)
43
static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
158
{
44
{
159
VirtMachineState *vms = VIRT_MACHINE(machine);
45
uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
160
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
46
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
161
bool firmware_loaded = bios_name || drive_get(IF_PFLASH, 0, 0);
47
const CPUArchIdList *possible_cpus;
162
bool aarch64 = true;
48
int n, sbsa_max_cpus;
163
49
164
+ virt_set_memmap(vms);
50
- if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
165
+
51
- error_report("sbsa-ref: CPU type other than the built-in "
166
/* We can probe only here because during property set
52
- "cortex-a57 not supported");
167
* KVM is not available yet
53
+ if (!cpu_type_valid(machine->cpu_type)) {
168
*/
54
+ error_report("mach-virt: CPU type %s not supported", machine->cpu_type);
169
@@ -XXX,XX +XXX,XX @@ static void virt_instance_init(Object *obj)
55
exit(1);
170
"Valid values are none and smmuv3",
56
}
171
NULL);
172
173
- vms->memmap = a15memmap;
174
vms->irqmap = a15irqmap;
175
}
176
57
177
--
58
--
178
2.20.1
59
2.20.1
179
60
180
61
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Havard Skinnemoen <hskinnemoen@google.com>
2
2
3
Up to now the memory map has been static and the high IO region
3
Dump the collected random data after a randomness test failure.
4
base has always been 256GiB.
5
4
6
This patch modifies the virt_set_memmap() function, which freezes
5
Note that this relies on the test having called
7
the memory map, so that the high IO range base becomes floating,
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
8
located after the initial RAM and the device memory.
7
assertion failure.
9
8
10
The function computes
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
11
- the base of the device memory,
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
- the size of the device memory,
11
[PMM: minor commit message tweak]
13
- the high IO region base
14
- the highest GPA used in the memory map.
15
16
Entries of the high IO region are assigned a base address. The
17
device memory is initialized.
18
19
The highest GPA used in the memory map will be used at VM creation
20
to choose the requested IPA size.
21
22
Setting all the existing highmem IO regions beyond the RAM
23
allows us to have a single contiguous RAM region (initial RAM and
24
possible hotpluggable device memory). That way we do not need
25
to make invasive changes in the EDK2 FW to support a dynamic
26
RAM base.
27
28
Still the user cannot request an initial RAM size greater than 255GB.
29
30
Signed-off-by: Eric Auger <eric.auger@redhat.com>
31
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
32
Message-id: 20190304101339.25970-8-eric.auger@redhat.com
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
---
13
---
35
include/hw/arm/virt.h | 1 +
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
36
hw/arm/virt.c | 52 ++++++++++++++++++++++++++++++++++++++-----
15
1 file changed, 12 insertions(+)
37
2 files changed, 47 insertions(+), 6 deletions(-)
38
16
39
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
40
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
41
--- a/include/hw/arm/virt.h
19
--- a/tests/qtest/npcm7xx_rng-test.c
42
+++ b/include/hw/arm/virt.h
20
+++ b/tests/qtest/npcm7xx_rng-test.c
43
@@ -XXX,XX +XXX,XX @@ typedef struct {
44
uint32_t msi_phandle;
45
uint32_t iommu_phandle;
46
int psci_conduit;
47
+ hwaddr highest_gpa;
48
} VirtMachineState;
49
50
#define VIRT_ECAM_ID(high) (high ? VIRT_HIGH_PCIE_ECAM : VIRT_PCIE_ECAM)
51
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/arm/virt.c
54
+++ b/hw/arm/virt.c
55
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
56
#include "qapi/visitor.h"
22
57
#include "standard-headers/linux/input.h"
23
#include "libqtest-single.h"
58
#include "hw/arm/smmuv3.h"
24
#include "qemu/bitops.h"
59
+#include "hw/acpi/acpi.h"
25
+#include "qemu-common.h"
60
26
61
#define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
27
#define RNG_BASE_ADDR 0xf000b000
62
static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
28
63
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
64
* of a terabyte of RAM will be doing it on a host with more than a
30
/* Number of bits to collect for randomness tests. */
65
* terabyte of physical address space.)
31
#define TEST_INPUT_BITS (128)
66
*/
32
67
-#define RAMLIMIT_GB 255
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
68
-#define RAMLIMIT_BYTES (RAMLIMIT_GB * 1024ULL * 1024 * 1024)
34
+{
69
+#define LEGACY_RAMLIMIT_GB 255
35
+ if (g_test_failed()) {
70
+#define LEGACY_RAMLIMIT_BYTES (LEGACY_RAMLIMIT_GB * GiB)
36
+ qemu_hexdump(stderr, "", buf, size);
71
37
+ }
72
/* Addresses and sizes of our components.
38
+}
73
* 0..128MB is space for a flash device so we can run bootrom code such as UEFI.
39
+
74
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry base_memmap[] = {
40
static void rng_writeb(unsigned int offset, uint8_t value)
75
[VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
41
{
76
[VIRT_PCIE_PIO] = { 0x3eff0000, 0x00010000 },
42
writeb(RNG_BASE_ADDR + offset, value);
77
[VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
78
- [VIRT_MEM] = { 0x40000000, RAMLIMIT_BYTES },
44
}
79
+ /* Actual RAM size depends on initial RAM and device memory settings */
45
80
+ [VIRT_MEM] = { GiB, LEGACY_RAMLIMIT_BYTES },
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
81
};
47
+ dump_buf_if_failed(buf, sizeof(buf));
48
}
82
49
83
/*
50
/*
84
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
85
86
static void virt_set_memmap(VirtMachineState *vms)
87
{
88
- hwaddr base;
89
+ MachineState *ms = MACHINE(vms);
90
+ hwaddr base, device_memory_base, device_memory_size;
91
int i;
92
93
vms->memmap = extended_memmap;
94
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms)
95
vms->memmap[i] = base_memmap[i];
96
}
52
}
97
53
98
- base = 256 * GiB; /* Top of the legacy initial RAM region */
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
99
+ if (ms->ram_slots > ACPI_MAX_RAM_SLOTS) {
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
100
+ error_report("unsupported number of memory slots: %"PRIu64,
56
}
101
+ ms->ram_slots);
57
102
+ exit(EXIT_FAILURE);
58
/*
103
+ }
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
104
+
105
+ /*
106
+ * We compute the base of the high IO region depending on the
107
+ * amount of initial and device memory. The device memory start/size
108
+ * is aligned on 1GiB. We never put the high IO region below 256GiB
109
+ * so that if maxram_size is < 255GiB we keep the legacy memory map.
110
+ * The device region size assumes 1GiB page max alignment per slot.
111
+ */
112
+ device_memory_base =
113
+ ROUND_UP(vms->memmap[VIRT_MEM].base + ms->ram_size, GiB);
114
+ device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
115
+
116
+ /* Base address of the high IO region */
117
+ base = device_memory_base + ROUND_UP(device_memory_size, GiB);
118
+ if (base < device_memory_base) {
119
+ error_report("maxmem/slots too huge");
120
+ exit(EXIT_FAILURE);
121
+ }
122
+ if (base < vms->memmap[VIRT_MEM].base + LEGACY_RAMLIMIT_BYTES) {
123
+ base = vms->memmap[VIRT_MEM].base + LEGACY_RAMLIMIT_BYTES;
124
+ }
125
126
for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
127
hwaddr size = extended_memmap[i].size;
128
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms)
129
vms->memmap[i].size = size;
130
base += size;
131
}
60
}
132
+ vms->highest_gpa = base - 1;
61
133
+ if (device_memory_size > 0) {
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
134
+ ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
63
+ dump_buf_if_failed(buf, sizeof(buf));
135
+ ms->device_memory->base = device_memory_base;
136
+ memory_region_init(&ms->device_memory->mr, OBJECT(vms),
137
+ "device-memory", device_memory_size);
138
+ }
139
}
64
}
140
65
141
static void machvirt_init(MachineState *machine)
66
/*
142
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
143
vms->smp_cpus = smp_cpus;
144
145
if (machine->ram_size > vms->memmap[VIRT_MEM].size) {
146
- error_report("mach-virt: cannot model more than %dGB RAM", RAMLIMIT_GB);
147
+ error_report("mach-virt: cannot model more than %dGB RAM",
148
+ LEGACY_RAMLIMIT_GB);
149
exit(1);
150
}
68
}
151
69
152
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
153
memory_region_allocate_system_memory(ram, NULL, "mach-virt.ram",
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
154
machine->ram_size);
72
}
155
memory_region_add_subregion(sysmem, vms->memmap[VIRT_MEM].base, ram);
73
156
+ if (machine->device_memory) {
74
int main(int argc, char **argv)
157
+ memory_region_add_subregion(sysmem, machine->device_memory->base,
158
+ &machine->device_memory->mr);
159
+ }
160
161
create_flash(vms, sysmem, secure_sysmem ? secure_sysmem : sysmem);
162
163
--
75
--
164
2.20.1
76
2.20.1
165
77
166
78
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
The machine RAM attributes will need to be analyzed during the
3
We should use printf format specifier "%u" instead of "%d" for
4
configure_accelerator() process. In particular, the arm64 kvm_type()
4
arguments of type "unsigned int".
5
machine callback will use them to know how many IPA/GPA bits are
6
needed to model the whole RAM range. So let's assign those machine
7
state fields before calling configure_accelerator.
8
5
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
12
Message-id: 20190304101339.25970-7-eric.auger@redhat.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
vl.c | 6 +++---
12
hw/misc/imx25_ccm.c | 12 ++++++------
16
1 file changed, 3 insertions(+), 3 deletions(-)
13
1 file changed, 6 insertions(+), 6 deletions(-)
17
14
18
diff --git a/vl.c b/vl.c
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/vl.c
17
--- a/hw/misc/imx25_ccm.c
21
+++ b/vl.c
18
+++ b/hw/misc/imx25_ccm.c
22
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
23
machine_opts = qemu_get_machine_opts();
20
case IMX25_CCM_LPIMR1_REG:
24
qemu_opt_foreach(machine_opts, machine_set_property, current_machine,
21
return "lpimr1";
25
&error_fatal);
22
default:
26
+ current_machine->ram_size = ram_size;
23
- sprintf(unknown, "[%d ?]", reg);
27
+ current_machine->maxram_size = maxram_size;
24
+ sprintf(unknown, "[%u ?]", reg);
28
+ current_machine->ram_slots = ram_slots;
25
return unknown;
29
26
}
30
configure_accelerator(current_machine, argv[0]);
27
}
31
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
32
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
33
replay_checkpoint(CHECKPOINT_INIT);
30
}
34
qdev_machine_init();
31
35
32
- DPRINTF("freq = %d\n", freq);
36
- current_machine->ram_size = ram_size;
33
+ DPRINTF("freq = %u\n", freq);
37
- current_machine->maxram_size = maxram_size;
34
38
- current_machine->ram_slots = ram_slots;
35
return freq;
39
current_machine->boot_order = boot_order;
36
}
40
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
41
/* parse features once if machine provides default cpu_type */
38
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
40
41
- DPRINTF("freq = %d\n", freq);
42
+ DPRINTF("freq = %u\n", freq);
43
44
return freq;
45
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
47
freq = imx25_ccm_get_mcu_clk(dev)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
49
50
- DPRINTF("freq = %d\n", freq);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
56
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
65
break;
66
}
67
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
42
--
73
--
43
2.20.1
74
2.20.1
44
75
45
76
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
This will allow sharing code that adjusts rmode beyond
3
We should use printf format specifier "%u" instead of "%d" for
4
the existing users.
4
argument of type "unsigned int".
5
5
6
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20190301200501.16533-10-richard.henderson@linaro.org
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/translate-a64.c | 90 +++++++++++++++++++++-----------------
12
hw/misc/imx31_ccm.c | 14 +++++++-------
13
1 file changed, 49 insertions(+), 41 deletions(-)
13
hw/misc/imx_ccm.c | 4 ++--
14
2 files changed, 9 insertions(+), 9 deletions(-)
14
15
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
18
--- a/hw/misc/imx31_ccm.c
18
+++ b/target/arm/translate-a64.c
19
+++ b/hw/misc/imx31_ccm.c
19
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
20
/* Floating-point data-processing (1 source) - single precision */
21
case IMX31_CCM_PDR2_REG:
21
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
22
return "PDR2";
22
{
23
default:
23
+ void (*gen_fpst)(TCGv_i32, TCGv_i32, TCGv_ptr);
24
- sprintf(unknown, "[%d ?]", reg);
24
+ TCGv_i32 tcg_op, tcg_res;
25
+ sprintf(unknown, "[%u ?]", reg);
25
TCGv_ptr fpst;
26
return unknown;
26
- TCGv_i32 tcg_op;
27
}
27
- TCGv_i32 tcg_res;
28
}
28
+ int rmode = -1;
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
29
30
freq = CKIH_FREQ;
30
- fpst = get_fpstatus_ptr(false);
31
}
31
tcg_op = read_fp_sreg(s, rn);
32
32
tcg_res = tcg_temp_new_i32();
33
- DPRINTF("freq = %d\n", freq);
33
34
+ DPRINTF("freq = %u\n", freq);
34
switch (opcode) {
35
35
case 0x0: /* FMOV */
36
return freq;
36
tcg_gen_mov_i32(tcg_res, tcg_op);
37
}
37
- break;
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
38
+ goto done;
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
39
case 0x1: /* FABS */
40
imx31_ccm_get_pll_ref_clk(dev));
40
gen_helper_vfp_abss(tcg_res, tcg_op);
41
41
- break;
42
- DPRINTF("freq = %d\n", freq);
42
+ goto done;
43
+ DPRINTF("freq = %u\n", freq);
43
case 0x2: /* FNEG */
44
44
gen_helper_vfp_negs(tcg_res, tcg_op);
45
return freq;
45
- break;
46
}
46
+ goto done;
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
47
case 0x3: /* FSQRT */
48
freq = imx31_ccm_get_mpll_clk(dev);
48
gen_helper_vfp_sqrts(tcg_res, tcg_op, cpu_env);
49
}
49
- break;
50
50
+ goto done;
51
- DPRINTF("freq = %d\n", freq);
51
case 0x8: /* FRINTN */
52
+ DPRINTF("freq = %u\n", freq);
52
case 0x9: /* FRINTP */
53
53
case 0xa: /* FRINTM */
54
return freq;
54
case 0xb: /* FRINTZ */
55
}
55
case 0xc: /* FRINTA */
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
56
- {
57
freq = imx31_ccm_get_mcu_main_clk(dev)
57
- TCGv_i32 tcg_rmode = tcg_const_i32(arm_rmode_to_sf(opcode & 7));
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
58
-
59
59
- gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
60
- DPRINTF("freq = %d\n", freq);
60
- gen_helper_rints(tcg_res, tcg_op, fpst);
61
+ DPRINTF("freq = %u\n", freq);
61
-
62
62
- gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
63
return freq;
63
- tcg_temp_free_i32(tcg_rmode);
64
}
64
+ rmode = arm_rmode_to_sf(opcode & 7);
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
65
+ gen_fpst = gen_helper_rints;
66
freq = imx31_ccm_get_hclk_clk(dev)
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
68
69
- DPRINTF("freq = %d\n", freq);
70
+ DPRINTF("freq = %u\n", freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
66
break;
75
break;
67
- }
68
case 0xe: /* FRINTX */
69
- gen_helper_rints_exact(tcg_res, tcg_op, fpst);
70
+ gen_fpst = gen_helper_rints_exact;
71
break;
72
case 0xf: /* FRINTI */
73
- gen_helper_rints(tcg_res, tcg_op, fpst);
74
+ gen_fpst = gen_helper_rints;
75
break;
76
default:
77
- abort();
78
+ g_assert_not_reached();
79
}
76
}
80
77
81
- write_fp_sreg(s, rd, tcg_res);
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
82
-
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
83
+ fpst = get_fpstatus_ptr(false);
80
84
+ if (rmode >= 0) {
81
return freq;
85
+ TCGv_i32 tcg_rmode = tcg_const_i32(rmode);
86
+ gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
87
+ gen_fpst(tcg_res, tcg_op, fpst);
88
+ gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
89
+ tcg_temp_free_i32(tcg_rmode);
90
+ } else {
91
+ gen_fpst(tcg_res, tcg_op, fpst);
92
+ }
93
tcg_temp_free_ptr(fpst);
94
+
95
+ done:
96
+ write_fp_sreg(s, rd, tcg_res);
97
tcg_temp_free_i32(tcg_op);
98
tcg_temp_free_i32(tcg_res);
99
}
82
}
100
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
101
/* Floating-point data-processing (1 source) - double precision */
84
index XXXXXXX..XXXXXXX 100644
102
static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
85
--- a/hw/misc/imx_ccm.c
103
{
86
+++ b/hw/misc/imx_ccm.c
104
+ void (*gen_fpst)(TCGv_i64, TCGv_i64, TCGv_ptr);
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
105
+ TCGv_i64 tcg_op, tcg_res;
88
freq = klass->get_clock_frequency(dev, clock);
106
TCGv_ptr fpst;
107
- TCGv_i64 tcg_op;
108
- TCGv_i64 tcg_res;
109
+ int rmode = -1;
110
111
switch (opcode) {
112
case 0x0: /* FMOV */
113
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
114
return;
115
}
89
}
116
90
117
- fpst = get_fpstatus_ptr(false);
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
118
tcg_op = read_fp_dreg(s, rn);
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
119
tcg_res = tcg_temp_new_i64();
93
120
94
return freq;
121
switch (opcode) {
122
case 0x1: /* FABS */
123
gen_helper_vfp_absd(tcg_res, tcg_op);
124
- break;
125
+ goto done;
126
case 0x2: /* FNEG */
127
gen_helper_vfp_negd(tcg_res, tcg_op);
128
- break;
129
+ goto done;
130
case 0x3: /* FSQRT */
131
gen_helper_vfp_sqrtd(tcg_res, tcg_op, cpu_env);
132
- break;
133
+ goto done;
134
case 0x8: /* FRINTN */
135
case 0x9: /* FRINTP */
136
case 0xa: /* FRINTM */
137
case 0xb: /* FRINTZ */
138
case 0xc: /* FRINTA */
139
- {
140
- TCGv_i32 tcg_rmode = tcg_const_i32(arm_rmode_to_sf(opcode & 7));
141
-
142
- gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
143
- gen_helper_rintd(tcg_res, tcg_op, fpst);
144
-
145
- gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
146
- tcg_temp_free_i32(tcg_rmode);
147
+ rmode = arm_rmode_to_sf(opcode & 7);
148
+ gen_fpst = gen_helper_rintd;
149
break;
150
- }
151
case 0xe: /* FRINTX */
152
- gen_helper_rintd_exact(tcg_res, tcg_op, fpst);
153
+ gen_fpst = gen_helper_rintd_exact;
154
break;
155
case 0xf: /* FRINTI */
156
- gen_helper_rintd(tcg_res, tcg_op, fpst);
157
+ gen_fpst = gen_helper_rintd;
158
break;
159
default:
160
- abort();
161
+ g_assert_not_reached();
162
}
163
164
- write_fp_dreg(s, rd, tcg_res);
165
-
166
+ fpst = get_fpstatus_ptr(false);
167
+ if (rmode >= 0) {
168
+ TCGv_i32 tcg_rmode = tcg_const_i32(rmode);
169
+ gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
170
+ gen_fpst(tcg_res, tcg_op, fpst);
171
+ gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
172
+ tcg_temp_free_i32(tcg_rmode);
173
+ } else {
174
+ gen_fpst(tcg_res, tcg_op, fpst);
175
+ }
176
tcg_temp_free_ptr(fpst);
177
+
178
+ done:
179
+ write_fp_dreg(s, rd, tcg_res);
180
tcg_temp_free_i64(tcg_op);
181
tcg_temp_free_i64(tcg_res);
182
}
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
183
--
105
--
184
2.20.1
106
2.20.1
185
107
186
108
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
This decoding more closely matches the ARMv8.4 Table C4-6,
3
We should use printf format specifier "%u" instead of "%d" for
4
Encoding table for Data Processing - Register Group.
4
argument of type "unsigned int".
5
5
6
In particular, op2 == 0 is now more than just Add/sub (with carry).
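
As a rough standalone illustration of the field split used in the new decode (the helper and the example instruction word below are illustrative only; the real code uses QEMU's extract32()):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for QEMU's extract32(value, start, length). */
static uint32_t get_bits(uint32_t insn, int start, int len)
{
    return (insn >> start) & ((1u << len) - 1);
}

int main(void)
{
    uint32_t insn = 0x9a020020;            /* example A64 ADC X0, X1, X2 word */
    uint32_t op0 = get_bits(insn, 30, 1);  /* bit 30; later picks 1-source vs 2-source */
    uint32_t op1 = get_bits(insn, 28, 1);  /* bit 28; 0 = add/sub/logical, 1 = op2 table */
    uint32_t op2 = get_bits(insn, 21, 4);  /* selects adc/sbc, ccmp, csel, ... */
    uint32_t op3 = get_bits(insn, 10, 6);

    printf("op0=%" PRIu32 " op1=%" PRIu32 " op2=%" PRIu32 " op3=%" PRIu32 "\n",
           op0, op1, op2, op3);             /* op0=0 op1=1 op2=0 op3=0 -> adc/sbc */
    return 0;
}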
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
9
Message-id: 20190301200501.16533-7-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
target/arm/translate-a64.c | 98 ++++++++++++++++++++++----------------
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
14
1 file changed, 57 insertions(+), 41 deletions(-)
13
hw/misc/imx6_src.c | 2 +-
14
2 files changed, 11 insertions(+), 11 deletions(-)
15
15
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
18
--- a/hw/misc/imx6_ccm.c
19
+++ b/target/arm/translate-a64.c
19
+++ b/hw/misc/imx6_ccm.c
20
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
21
}
21
case CCM_CMEOR:
22
22
return "CMEOR";
23
/* Add/subtract (with carry)
23
default:
24
- * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
24
- sprintf(unknown, "%d ?", reg);
25
- * +--+--+--+------------------------+------+---------+------+-----+
25
+ sprintf(unknown, "%u ?", reg);
26
- * |sf|op| S| 1 1 0 1 0 0 0 0 | rm | opcode2 | Rn | Rd |
26
return unknown;
27
- * +--+--+--+------------------------+------+---------+------+-----+
28
- * [000000]
29
+ * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
30
+ * +--+--+--+------------------------+------+-------------+------+-----+
31
+ * |sf|op| S| 1 1 0 1 0 0 0 0 | rm | 0 0 0 0 0 0 | Rn | Rd |
32
+ * +--+--+--+------------------------+------+-------------+------+-----+
33
*/
34
35
static void disas_adc_sbc(DisasContext *s, uint32_t insn)
36
@@ -XXX,XX +XXX,XX @@ static void disas_adc_sbc(DisasContext *s, uint32_t insn)
37
unsigned int sf, op, setflags, rm, rn, rd;
38
TCGv_i64 tcg_y, tcg_rn, tcg_rd;
39
40
- if (extract32(insn, 10, 6) != 0) {
41
- unallocated_encoding(s);
42
- return;
43
- }
44
-
45
sf = extract32(insn, 31, 1);
46
op = extract32(insn, 30, 1);
47
setflags = extract32(insn, 29, 1);
48
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
49
}
27
}
50
}
28
}
51
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
52
-/* Data processing - register */
30
case USB_ANALOG_DIGPROG:
53
+/*
31
return "USB_ANALOG_DIGPROG";
54
+ * Data processing - register
55
+ * 31 30 29 28 25 21 20 16 10 0
56
+ * +--+---+--+---+-------+-----+-------+-------+---------+
57
+ * | |op0| |op1| 1 0 1 | op2 | | op3 | |
58
+ * +--+---+--+---+-------+-----+-------+-------+---------+
59
+ */
60
static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
61
{
62
- switch (extract32(insn, 24, 5)) {
63
- case 0x0a: /* Logical (shifted register) */
64
- disas_logic_reg(s, insn);
65
- break;
66
- case 0x0b: /* Add/subtract */
67
- if (insn & (1 << 21)) { /* (extended register) */
68
- disas_add_sub_ext_reg(s, insn);
69
+ int op0 = extract32(insn, 30, 1);
70
+ int op1 = extract32(insn, 28, 1);
71
+ int op2 = extract32(insn, 21, 4);
72
+ int op3 = extract32(insn, 10, 6);
73
+
74
+ if (!op1) {
75
+ if (op2 & 8) {
76
+ if (op2 & 1) {
77
+ /* Add/sub (extended register) */
78
+ disas_add_sub_ext_reg(s, insn);
79
+ } else {
80
+ /* Add/sub (shifted register) */
81
+ disas_add_sub_reg(s, insn);
82
+ }
83
} else {
84
- disas_add_sub_reg(s, insn);
85
+ /* Logical (shifted register) */
86
+ disas_logic_reg(s, insn);
87
}
88
- break;
89
- case 0x1b: /* Data-processing (3 source) */
90
- disas_data_proc_3src(s, insn);
91
- break;
92
- case 0x1a:
93
- switch (extract32(insn, 21, 3)) {
94
- case 0x0: /* Add/subtract (with carry) */
95
+ return;
96
+ }
97
+
98
+ switch (op2) {
99
+ case 0x0:
100
+ switch (op3) {
101
+ case 0x00: /* Add/subtract (with carry) */
102
disas_adc_sbc(s, insn);
103
break;
104
- case 0x2: /* Conditional compare */
105
- disas_cc(s, insn); /* both imm and reg forms */
106
- break;
107
- case 0x4: /* Conditional select */
108
- disas_cond_select(s, insn);
109
- break;
110
- case 0x6: /* Data-processing */
111
- if (insn & (1 << 30)) { /* (1 source) */
112
- disas_data_proc_1src(s, insn);
113
- } else { /* (2 source) */
114
- disas_data_proc_2src(s, insn);
115
- }
116
- break;
117
+
118
default:
119
- unallocated_encoding(s);
120
- break;
121
+ goto do_unallocated;
122
}
123
break;
124
+
125
+ case 0x2: /* Conditional compare */
126
+ disas_cc(s, insn); /* both imm and reg forms */
127
+ break;
128
+
129
+ case 0x4: /* Conditional select */
130
+ disas_cond_select(s, insn);
131
+ break;
132
+
133
+ case 0x6: /* Data-processing */
134
+ if (op0) { /* (1 source) */
135
+ disas_data_proc_1src(s, insn);
136
+ } else { /* (2 source) */
137
+ disas_data_proc_2src(s, insn);
138
+ }
139
+ break;
140
+ case 0x8 ... 0xf: /* (3 source) */
141
+ disas_data_proc_3src(s, insn);
142
+ break;
143
+
144
default:
32
default:
145
+ do_unallocated:
33
- sprintf(unknown, "%d ?", reg);
146
unallocated_encoding(s);
34
+ sprintf(unknown, "%u ?", reg);
35
return unknown;
36
}
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
39
freq *= 20;
40
}
41
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
48
freq = imx6_analog_get_pll2_clk(dev) * 18
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
50
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
53
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
57
freq = imx6_analog_get_pll2_clk(dev) * 18
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
59
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
147
break;
66
break;
148
}
67
}
68
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
75
freq = imx6_analog_get_periph_clk(dev)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
77
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
80
81
return freq;
82
}
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
84
freq = imx6_ccm_get_ahb_clk(dev)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
86
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
89
90
return freq;
91
}
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
93
freq = imx6_ccm_get_ipg_clk(dev)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
95
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
98
99
return freq;
100
}
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
102
break;
103
}
104
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
107
108
return freq;
109
}
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/hw/misc/imx6_src.c
113
+++ b/hw/misc/imx6_src.c
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
115
case SRC_GPR10:
116
return "SRC_GPR10";
117
default:
118
- sprintf(unknown, "%d ?", reg);
119
+ sprintf(unknown, "%u ?", reg);
120
return unknown;
121
}
122
}
149
--
123
--
150
2.20.1
124
2.20.1
151
125
152
126
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
On ARM, the kvm_type will be resolved by querying the KVMState.
3
We should use printf format specifier "%u" instead of "%d" for
4
Let's add the MachineState handle to the callback so that we
4
argument of type "unsigned int".
5
can retrieve the KVMState handle. In kvm_init, when the callback
6
is called, the kvm_state variable is not yet set.
7
5
8
Signed-off-by: Eric Auger <eric.auger@redhat.com>
6
Reported-by: Euler Robot <euler.robot@huawei.com>
9
Acked-by: David Gibson <david@gibson.dropbear.id.au>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
12
Message-id: 20190304101339.25970-5-eric.auger@redhat.com
13
[ppc parts]
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
11
---
18
include/hw/boards.h | 5 ++++-
12
hw/misc/imx6ul_ccm.c | 4 ++--
19
accel/kvm/kvm-all.c | 2 +-
13
1 file changed, 2 insertions(+), 2 deletions(-)
20
hw/ppc/mac_newworld.c | 3 +--
21
hw/ppc/mac_oldworld.c | 2 +-
22
hw/ppc/spapr.c | 2 +-
23
5 files changed, 8 insertions(+), 6 deletions(-)
24
14
25
diff --git a/include/hw/boards.h b/include/hw/boards.h
15
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
26
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
27
--- a/include/hw/boards.h
17
--- a/hw/misc/imx6ul_ccm.c
28
+++ b/include/hw/boards.h
18
+++ b/hw/misc/imx6ul_ccm.c
29
@@ -XXX,XX +XXX,XX @@ typedef struct {
19
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
30
* should instead use "unimplemented-device" for all memory ranges where
20
case CCM_CMEOR:
31
* the guest will attempt to probe for a device that QEMU doesn't
21
return "CMEOR";
32
* implement and a stub device is required.
22
default:
33
+ * @kvm_type:
23
- sprintf(unknown, "%d ?", reg);
34
+ * Return the type of KVM corresponding to the kvm-type string option or
24
+ sprintf(unknown, "%u ?", reg);
35
+ * computed based on other criteria such as the host kernel capabilities.
25
return unknown;
36
*/
37
struct MachineClass {
38
/*< private >*/
39
@@ -XXX,XX +XXX,XX @@ struct MachineClass {
40
void (*init)(MachineState *state);
41
void (*reset)(void);
42
void (*hot_add_cpu)(const int64_t id, Error **errp);
43
- int (*kvm_type)(const char *arg);
44
+ int (*kvm_type)(MachineState *machine, const char *arg);
45
46
BlockInterfaceType block_default_type;
47
int units_per_default_bus;
48
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/accel/kvm/kvm-all.c
51
+++ b/accel/kvm/kvm-all.c
52
@@ -XXX,XX +XXX,XX @@ static int kvm_init(MachineState *ms)
53
54
kvm_type = qemu_opt_get(qemu_get_machine_opts(), "kvm-type");
55
if (mc->kvm_type) {
56
- type = mc->kvm_type(kvm_type);
57
+ type = mc->kvm_type(ms, kvm_type);
58
} else if (kvm_type) {
59
ret = -EINVAL;
60
fprintf(stderr, "Invalid argument kvm-type=%s\n", kvm_type);
61
diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/hw/ppc/mac_newworld.c
64
+++ b/hw/ppc/mac_newworld.c
65
@@ -XXX,XX +XXX,XX @@ static char *core99_fw_dev_path(FWPathProvider *p, BusState *bus,
66
67
return NULL;
68
}
69
-
70
-static int core99_kvm_type(const char *arg)
71
+static int core99_kvm_type(MachineState *machine, const char *arg)
72
{
73
/* Always force PR KVM */
74
return 2;
75
diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/hw/ppc/mac_oldworld.c
78
+++ b/hw/ppc/mac_oldworld.c
79
@@ -XXX,XX +XXX,XX @@ static char *heathrow_fw_dev_path(FWPathProvider *p, BusState *bus,
80
return NULL;
81
}
82
83
-static int heathrow_kvm_type(const char *arg)
84
+static int heathrow_kvm_type(MachineState *machine, const char *arg)
85
{
86
/* Always force PR KVM */
87
return 2;
88
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/hw/ppc/spapr.c
91
+++ b/hw/ppc/spapr.c
92
@@ -XXX,XX +XXX,XX @@ static void spapr_machine_init(MachineState *machine)
93
}
26
}
94
}
27
}
95
28
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
96
-static int spapr_kvm_type(const char *vm_type)
29
case USB_ANALOG_DIGPROG:
97
+static int spapr_kvm_type(MachineState *machine, const char *vm_type)
30
return "USB_ANALOG_DIGPROG";
98
{
31
default:
99
if (!vm_type) {
32
- sprintf(unknown, "%d ?", reg);
100
return 0;
33
+ sprintf(unknown, "%u ?", reg);
34
return unknown;
35
}
36
}
101
--
37
--
102
2.20.1
38
2.20.1
103
39
104
40
1
From: Eric Auger <eric.auger@redhat.com>
1
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
2
Private Peripheral Bus range, which includes all of the memory mapped
3
devices and registers that are part of the CPU itself, including the
4
NVIC, systick timer, and debug and trace components like the Data
5
Watchpoint and Trace unit (DWT). Within this large region, the range
6
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
7
registers, systick) and 0xe002e000 to 0xe002efff is its Non-secure
8
alias.
2
9
3
Add the kvm_arm_get_max_vm_ipa_size() helper that returns the
10
The architecture is clear that within the SCS unimplemented registers
4
number of bits in the IPA address space supported by KVM.
11
should be RES0 for privileged accesses and generate BusFault for
12
unprivileged accesses, and we currently implement this.
5
13
6
This capability needs to be known to create the VM with a
14
It is less clear about how to handle accesses to unimplemented
7
specific IPA max size (kvm_type passed along the KVM_CREATE_VM ioctl).
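
A rough standalone sketch of how a machine's kvm_type callback might combine this helper with the board's RAM requirements; the types, numbers and the fls64() helper below are stand-ins for illustration, not the actual hw/arm/virt.c code:

#include <stdio.h>

typedef struct MachineState { unsigned long long highest_gpa; } MachineState;

/* Stand-in for kvm_arm_get_max_vm_ipa_size(): pretend the host allows 44 bits. */
static int max_vm_ipa_size_stub(MachineState *ms)
{
    (void)ms;
    return 44;
}

static int fls64(unsigned long long v)     /* highest set bit, 1-based */
{
    int n = 0;
    while (v) { n++; v >>= 1; }
    return n;
}

static int example_kvm_type(MachineState *ms, const char *arg)
{
    (void)arg;
    int requested = fls64(ms->highest_gpa);        /* IPA bits needed for the RAM map */
    int max_ipa = max_vm_ipa_size_stub(ms);

    if (requested > max_ipa) {
        return -1;                                  /* the board would refuse to start */
    }
    return requested > 40 ? requested : 0;          /* 0 keeps the legacy 40-bit default */
}

int main(void)
{
    MachineState ms = { .highest_gpa = (1ULL << 42) - 1 };
    printf("kvm_type = %d\n", example_kvm_type(&ms, NULL));   /* prints 42 */
    return 0;
}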
15
regions of the wider PPB. Unprivileged accesses should definitely
16
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
17
not given as a general rule. However, the register definitions of
18
individual registers for components like the DWT all state that they
19
are RES0 if the relevant component is not implemented, so the
20
simplest way to provide that is to provide RAZ/WI for the whole range
21
for privileged accesses. (The v7M Arm ARM does say that reserved
22
registers should be UNK/SBZP.)
8
23
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
24
Expand the container MemoryRegion that the NVIC exposes so that
10
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
25
it covers the whole PPB space. This means:
11
Message-id: 20190304101339.25970-6-eric.auger@redhat.com
26
* moving the address that the ARMV7M device maps it to down by
27
0xe000 bytes
28
* moving the offsets within the container of all the
29
subregions forward by 0xe000 bytes
30
* adding a new default MemoryRegion that covers the whole container
31
at a lower priority than anything else and which provides the
32
RAZWI/BusFault behaviour
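
To make the priority layering concrete, here is a tiny standalone model (not QEMU's MemoryRegion code; the offsets and priorities are taken from the patch below) of how overlapping subregions resolve an access: the highest-priority region covering the address wins, so the priority -1 default only catches what nothing else claims.

#include <stdio.h>

typedef struct { const char *name; unsigned base, size; int prio; } Region;

static const Region ppb[] = {
    { "nvic-default", 0x00000, 0x100000, -1 },  /* RAZ/WI or BusFault backstop */
    { "nvic_sysregs", 0x0e000, 0x01000,   0 },
    { "nvic_systick", 0x0e010, 0x000e0,   1 },  /* sits inside the sysregs range */
};

static const Region *lookup(unsigned offset)
{
    const Region *best = NULL;
    for (unsigned i = 0; i < sizeof(ppb) / sizeof(ppb[0]); i++) {
        const Region *r = &ppb[i];
        if (offset >= r->base && offset < r->base + r->size &&
            (!best || r->prio > best->prio)) {
            best = r;
        }
    }
    return best;
}

int main(void)
{
    printf("0x1000 -> %s\n", lookup(0x1000)->name);   /* DWT area: default region */
    printf("0xe004 -> %s\n", lookup(0xe004)->name);   /* ICTR: system registers   */
    printf("0xe010 -> %s\n", lookup(0xe010)->name);   /* systick CSR: systick     */
    return 0;
}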
33
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
13
---
37
---
14
target/arm/kvm_arm.h | 13 +++++++++++++
38
include/hw/intc/armv7m_nvic.h | 1 +
15
target/arm/kvm.c | 10 ++++++++++
39
hw/arm/armv7m.c | 2 +-
16
2 files changed, 23 insertions(+)
40
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
41
3 files changed, 69 insertions(+), 12 deletions(-)
17
42
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
19
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
45
--- a/include/hw/intc/armv7m_nvic.h
21
+++ b/target/arm/kvm_arm.h
46
+++ b/include/hw/intc/armv7m_nvic.h
22
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
23
*/
48
MemoryRegion systickmem;
24
void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
49
MemoryRegion systick_ns_mem;
25
50
MemoryRegion container;
26
+/**
51
+ MemoryRegion defaultmem;
27
+ * kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
52
28
+ * IPA address space supported by KVM
53
uint32_t num_irq;
29
+ *
54
qemu_irq excpout;
30
+ * @ms: Machine state handle
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/armv7m.c
58
+++ b/hw/arm/armv7m.c
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
60
sysbus_connect_irq(sbd, 0,
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
62
63
- memory_region_add_subregion(&s->container, 0xe000e000,
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
65
sysbus_mmio_get_region(sbd, 0));
66
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
73
.endianness = DEVICE_NATIVE_ENDIAN,
74
};
75
76
+/*
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
31
+ */
79
+ */
32
+int kvm_arm_get_max_vm_ipa_size(MachineState *ms);
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
33
+
81
+ uint64_t *data, unsigned size,
34
/**
82
+ MemTxAttrs attrs)
35
* kvm_arm_sync_mpstate_to_kvm
36
* @cpu: ARMCPU
37
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
38
cpu->host_cpu_probe_failed = true;
39
}
40
41
+static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
42
+{
83
+{
43
+ return -ENOENT;
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
85
+ (uint32_t)addr);
86
+ if (attrs.user) {
87
+ return MEMTX_ERROR;
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
44
+}
91
+}
45
+
92
+
46
static inline int kvm_arm_vgic_probe(void)
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
47
{
94
+ uint64_t value, unsigned size,
48
return 0;
95
+ MemTxAttrs attrs)
49
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/kvm.c
52
+++ b/target/arm/kvm.c
53
@@ -XXX,XX +XXX,XX @@
54
#include "qemu/error-report.h"
55
#include "sysemu/sysemu.h"
56
#include "sysemu/kvm.h"
57
+#include "sysemu/kvm_int.h"
58
#include "kvm_arm.h"
59
#include "cpu.h"
60
#include "trace.h"
61
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
62
env->features = arm_host_cpu_features.features;
63
}
64
65
+int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
66
+{
96
+{
67
+ KVMState *s = KVM_STATE(ms->accelerator);
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
68
+ int ret;
98
+ (uint32_t)addr);
69
+
99
+ if (attrs.user) {
70
+ ret = kvm_check_extension(s, KVM_CAP_ARM_VM_IPA_SIZE);
100
+ return MEMTX_ERROR;
71
+ return ret > 0 ? ret : 40;
101
+ }
102
+ return MEMTX_OK;
72
+}
103
+}
73
+
104
+
74
int kvm_arch_init(MachineState *ms, KVMState *s)
105
+static const MemoryRegionOps ppb_default_ops = {
106
+ .read_with_attrs = ppb_default_read,
107
+ .write_with_attrs = ppb_default_write,
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
109
+ .valid.min_access_size = 1,
110
+ .valid.max_access_size = 8,
111
+};
112
+
113
static int nvic_post_load(void *opaque, int version_id)
75
{
114
{
76
/* For ARM interrupt delivery is always asynchronous,
115
NVICState *s = opaque;
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
118
{
119
NVICState *s = NVIC(dev);
120
- int regionlen;
121
122
/* The armv7m container object will have set our CPU pointer */
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
125
M_REG_S));
126
}
127
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
129
+ /*
130
+ * This device provides a single sysbus memory region which
131
+ * represents the whole of the "System PPB" space. This is the
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
133
+ * the System Control Space (system registers), the systick timer,
134
+ * and for CPUs with the Security extension an NS banked version
135
+ * of all of these.
136
+ *
137
+ * The default behaviour for unimplemented registers/ranges
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
140
+ * access.
141
+ *
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
143
* and looks like this:
144
* 0x004 - ICTR
145
* 0x010 - 0xff - systick
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
147
* generally code determining which banked register to use should
148
* use attrs.secure; code determining actual behaviour of the system
149
* should use env->v7m.secure.
150
+ *
151
+ * The container covers the whole PPB space. Within it the priority
152
+ * of overlapping regions is:
153
+ * - default region (for RAZ/WI and BusFault) : -1
154
+ * - system register regions : 0
155
+ * - systick : 1
156
+ * This is because the systick device is a small block of registers
157
+ * in the middle of the other system control registers.
158
*/
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
161
- /* The system register region goes at the bottom of the priority
162
- * stack as it covers the whole page.
163
- */
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
166
+ "nvic-default", 0x100000);
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
169
"nvic_sysregs", 0x1000);
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
172
173
memory_region_init_io(&s->systickmem, OBJECT(s),
174
&nvic_systick_ops, s,
175
"nvic_systick", 0xe0);
176
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
179
&s->systickmem, 1);
180
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
183
&nvic_sysreg_ns_ops, &s->sysregmem,
184
"nvic_sysregs_ns", 0x1000);
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
188
&nvic_sysreg_ns_ops, &s->systickmem,
189
"nvic_systick_ns", 0xe0);
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
192
&s->systick_ns_mem, 1);
193
}
194
77
--
195
--
78
2.20.1
196
2.20.1
79
197
80
198
In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.

This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
     } else {
         uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
         uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
+        bool pxn = false;
+
+        if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+            pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
+        }
 
         if (m_is_system_region(env, address)) {
             /* System space is always execute never */
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
         }
 
         *prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
-        if (*prot && !xn) {
+        if (*prot && !xn && !(pxn && !is_user)) {
             *prot |= PAGE_EXEC;
         }
         /* We don't need to look the attribute up in the MAIR0/MAIR1
--
2.20.1
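
Restating the resulting execute-permission rule outside the diff context (a standalone sketch, not the QEMU code itself): XN blocks execution for everyone, while PXN blocks it only for privileged (non-user) accesses.

#include <stdbool.h>
#include <stdio.h>

static bool region_executable(bool xn, bool pxn, bool is_user)
{
    return !xn && !(pxn && !is_user);
}

int main(void)
{
    printf("xn=0 pxn=1 user=1 -> %d\n", region_executable(false, true, true));   /* 1 */
    printf("xn=0 pxn=1 user=0 -> %d\n", region_executable(false, true, false));  /* 0 */
    printf("xn=1 pxn=0 user=0 -> %d\n", region_executable(true, false, false));  /* 0 */
    return 0;
}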
In arm_cpu_realizefn() we check whether the board code disabled EL3
via the has_el3 CPU object property, which we create if the CPU
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
the ID_PFR1 and ID_AA64PFR0 registers.

This codepath was incorrectly being taken for M-profile CPUs, which
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
the M-profile Security extension and so should have non-zero values
in the ID_PFR1.Security field.

Restrict the handling of the feature flag to A/R-profile cores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
---
 target/arm/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         }
     }
 
-    if (!cpu->has_el3) {
+    if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
         /* If the has_el3 CPU property is disabled then we need to disable the
          * feature.
          */
--
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the v8.1M VSCCLRM insn, which zeros floating point
2
2
registers if there is an active floating point context.
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
This requires support in write_neon_element32() for the MO_32
4
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
4
element size, so add it.
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
6
Because we want to use arm_gen_condlabel(), we need to move
7
the definition of that function up in translate.c so it is
8
before the #include of translate-vfp.c.inc.
9
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
7
---
13
---
8
target/arm/cpu.h | 10 ++++++++++
14
target/arm/cpu.h | 9 ++++
9
linux-user/elfload.c | 1 +
15
target/arm/m-nocp.decode | 8 +++-
10
target/arm/cpu.c | 1 +
16
target/arm/translate.c | 21 +++++----
11
target/arm/cpu64.c | 2 ++
17
target/arm/translate-vfp.c.inc | 84 ++++++++++++++++++++++++++++++++++
12
target/arm/translate-a64.c | 14 ++++++++++++++
18
4 files changed, 111 insertions(+), 11 deletions(-)
13
target/arm/translate.c | 22 ++++++++++++++++++++++
14
6 files changed, 50 insertions(+)
15
19
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
22
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fhm(const ARMISARegisters *id)
24
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
21
return FIELD_EX32(id->id_isar6, ID_ISAR6, FHM) != 0;
25
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
22
}
26
}
23
27
24
+static inline bool isar_feature_aa32_sb(const ARMISARegisters *id)
28
+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
25
+{
29
+{
26
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, SB) != 0;
30
+ /*
31
+ * Return true if M-profile state handling insns
32
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
33
+ */
34
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
27
+}
35
+}
28
+
36
+
29
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
30
{
38
{
31
/*
39
/* Sadly this is encoded differently for A-profile and M-profile */
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
40
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
33
FIELD_DP64(0, ID_AA64ISAR1, GPI, 0xf))) != 0;
41
index XXXXXXX..XXXXXXX 100644
34
}
42
--- a/target/arm/m-nocp.decode
35
43
+++ b/target/arm/m-nocp.decode
36
+static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
44
@@ -XXX,XX +XXX,XX @@
37
+{
45
# If the coprocessor is not present or disabled then we will generate
38
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
46
# the NOCP exception; otherwise we let the insn through to the main decode.
39
+}
47
40
+
48
+%vd_dp 22:1 12:4
41
static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
49
+%vd_sp 12:4 22:1
42
{
50
+
43
/* We always set the AdvSIMD and FP fields identically wrt FP16. */
51
&nocp cp
44
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
52
45
index XXXXXXX..XXXXXXX 100644
53
{
46
--- a/linux-user/elfload.c
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
47
+++ b/linux-user/elfload.c
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
48
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
49
GET_FEATURE_ID(aa64_pauth, ARM_HWCAP_A64_PACA | ARM_HWCAP_A64_PACG);
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
50
GET_FEATURE_ID(aa64_fhm, ARM_HWCAP_A64_ASIMDFHM);
58
+ # VSCCLRM (new in v8.1M) is similar:
51
GET_FEATURE_ID(aa64_jscvt, ARM_HWCAP_A64_JSCVT);
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
52
+ GET_FEATURE_ID(aa64_sb, ARM_HWCAP_A64_SB);
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
53
61
54
#undef GET_FEATURE_ID
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
55
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
56
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/cpu.c
59
+++ b/target/arm/cpu.c
60
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
61
t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
62
t = FIELD_DP32(t, ID_ISAR6, DP, 1);
63
t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
64
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1);
65
cpu->isar.id_isar6 = t;
66
67
t = cpu->id_mmfr4;
68
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/cpu64.c
71
+++ b/target/arm/cpu64.c
72
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
73
t = FIELD_DP64(t, ID_AA64ISAR1, API, 0);
74
t = FIELD_DP64(t, ID_AA64ISAR1, GPA, 1);
75
t = FIELD_DP64(t, ID_AA64ISAR1, GPI, 0);
76
+ t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
77
cpu->isar.id_aa64isar1 = t;
78
79
t = cpu->isar.id_aa64pfr0;
80
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
81
u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
82
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
83
u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
84
+ u = FIELD_DP32(u, ID_ISAR6, SB, 1);
85
cpu->isar.id_isar6 = u;
86
87
/*
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-a64.c
91
+++ b/target/arm/translate-a64.c
92
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
93
reset_btype(s);
94
gen_goto_tb(s, 0, s->pc);
95
return;
96
+
97
+ case 7: /* SB */
98
+ if (crm != 0 || !dc_isar_feature(aa64_sb, s)) {
99
+ goto do_unallocated;
100
+ }
101
+ /*
102
+ * TODO: There is no speculation barrier opcode for TCG;
103
+ * MB and end the TB instead.
104
+ */
105
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
106
+ gen_goto_tb(s, 0, s->pc);
107
+ return;
108
+
109
default:
110
+ do_unallocated:
111
unallocated_encoding(s);
112
return;
113
}
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
66
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
67
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
119
*/
69
a64_translate_init();
120
gen_goto_tb(s, 0, s->pc & ~1);
70
}
121
return;
71
122
+ case 7: /* sb */
72
+/* Generate a label used for skipping this instruction */
123
+ if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
73
+static void arm_gen_condlabel(DisasContext *s)
124
+ goto illegal_op;
74
+{
125
+ }
75
+ if (!s->condjmp) {
126
+ /*
76
+ s->condlabel = gen_new_label();
127
+ * TODO: There is no speculation barrier opcode
77
+ s->condjmp = 1;
128
+ * for TCG; MB and end the TB instead.
78
+ }
129
+ */
79
+}
130
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
80
+
131
+ gen_goto_tb(s, 0, s->pc & ~1);
81
/* Flags for the disas_set_da_iss info argument:
132
+ return;
82
* lower bits hold the Rt register number, higher bits are flags.
133
default:
83
*/
134
goto illegal_op;
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
135
}
85
long off = neon_element_offset(reg, ele, memop);
136
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
86
137
*/
87
switch (memop) {
138
gen_goto_tb(s, 0, s->pc & ~1);
88
+ case MO_32:
139
break;
89
+ tcg_gen_st32_i64(src, cpu_env, off);
140
+ case 7: /* sb */
90
+ break;
141
+ if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
91
case MO_64:
142
+ goto illegal_op;
92
tcg_gen_st_i64(src, cpu_env, off);
143
+ }
93
break;
144
+ /*
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
145
+ * TODO: There is no speculation barrier opcode
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
146
+ * for TCG; MB and end the TB instead.
96
}
147
+ */
97
148
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
98
-/* Generate a label used for skipping this instruction */
149
+ gen_goto_tb(s, 0, s->pc & ~1);
99
-static void arm_gen_condlabel(DisasContext *s)
150
+ break;
100
-{
151
default:
101
- if (!s->condjmp) {
152
goto illegal_op;
102
- s->condlabel = gen_new_label();
153
}
103
- s->condjmp = 1;
104
- }
105
-}
106
-
107
/* Skip this instruction if the ARM condition is false */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
109
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-vfp.c.inc
113
+++ b/target/arm/translate-vfp.c.inc
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
115
return true;
116
}
117
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
119
+{
120
+ int btmreg, topreg;
121
+ TCGv_i64 zero;
122
+ TCGv_i32 aspen, sfpa;
123
+
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
125
+ /* Before v8.1M, fall through in decode to NOCP check */
126
+ return false;
127
+ }
128
+
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
131
+ unallocated_encoding(s);
132
+ return true;
133
+ }
134
+
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
136
+ /* NOP if we have neither FP nor MVE */
137
+ return true;
138
+ }
139
+
140
+ /*
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
142
+ * active floating point context so we must NOP (without doing
143
+ * any lazy state preservation or the NOCP check).
144
+ */
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
179
+
180
+ if (!vfp_access_check(s)) {
181
+ return true;
182
+ }
183
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
185
+ zero = tcg_const_i64(0);
186
+ if (btmreg & 1) {
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
188
+ btmreg++;
189
+ }
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
192
+ }
193
+ if (btmreg == topreg) {
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
199
+ return true;
200
+}
201
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
203
{
204
/*
154
--
205
--
155
2.20.1
206
2.20.1
156
207
157
208
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
2
the general-purpose registers and APSR. Implement this.
2
3
3
Found by inspection: Rn is the base register against which the
4
The encoding is a subset of the LDMIA T2 encoding, using what would
4
load began; I is the register within the mask being processed.
5
be Rn=0b1111 (which UNDEFs for LDMIA).
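
A standalone sketch of what the CLRM register list means (not the QEMU translator; see trans_CLRM in the patch below for the real implementation): bits 0..12 and 14 name general-purpose registers to zero, bit 15 requests clearing the APSR flags, and bit 13 must be zero.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool clrm(uint32_t *regs, uint32_t *apsr, uint16_t list)
{
    if ((list & (1u << 13)) || list == 0) {
        return false;                          /* reserved bit set / empty list */
    }
    for (int i = 0; i < 15; i++) {
        if (list & (1u << i)) {
            regs[i] = 0;                       /* clear R[i] */
        }
    }
    if (list & (1u << 15)) {
        *apsr &= ~(0xf8000000u | 0x000f0000u); /* clear N,Z,C,V,Q and GE */
    }
    return true;
}

int main(void)
{
    uint32_t regs[15] = { [0] = 0x1234, [4] = 0x5678 };
    uint32_t apsr = 0x90000000;
    clrm(regs, &apsr, (1u << 0) | (1u << 4) | (1u << 15));
    printf("r0=%u r4=%u apsr=0x%08x\n", regs[0], regs[4], apsr);
    return 0;
}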
5
The exception return should of course be processed from the loaded PC.
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190301202921.21209-1-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
11
---
10
---
12
target/arm/translate.c | 2 +-
11
target/arm/t32.decode | 6 +++++-
13
1 file changed, 1 insertion(+), 1 deletion(-)
12
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
13
2 files changed, 43 insertions(+), 1 deletion(-)
14
14
15
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/t32.decode
18
+++ b/target/arm/t32.decode
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
20
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
24
+{
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
26
+ CLRM 1110 1000 1001 1111 list:16
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
28
+}
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
30
31
&rfe !extern rn w pu
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
34
--- a/target/arm/translate.c
18
+++ b/target/arm/translate.c
35
+++ b/target/arm/translate.c
19
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
20
} else if (i == rn) {
37
return do_ldm(s, a, 1);
21
loaded_var = tmp;
38
}
22
loaded_base = 1;
39
23
- } else if (rn == 15 && exc_return) {
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
24
+ } else if (i == 15 && exc_return) {
41
+{
25
store_pc_exc_ret(s, tmp);
42
+ int i;
26
} else {
43
+ TCGv_i32 zero;
27
store_reg_from_load(s, i, tmp);
44
+
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
46
+ return false;
47
+ }
48
+
49
+ if (extract32(a->list, 13, 1)) {
50
+ return false;
51
+ }
52
+
53
+ if (!a->list) {
54
+ /* UNPREDICTABLE; we choose to UNDEF */
55
+ return false;
56
+ }
57
+
58
+ zero = tcg_const_i32(0);
59
+ for (i = 0; i < 15; i++) {
60
+ if (extract32(a->list, i, 1)) {
61
+ /* Clear R[i] */
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
63
+ }
64
+ }
65
+ if (extract32(a->list, 15, 1)) {
66
+ /*
67
+ * Clear APSR (by calling the MSR helper with the same argument
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
75
+ return true;
76
+}
77
+
78
/*
79
* Branch, branch with link
80
*/
28
--
81
--
29
2.20.1
82
2.20.1
30
83
31
84
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c.inc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
          * Accesses to R15 are UNPREDICTABLE; we choose to undef.
          * (FPSCR -> r15 is a special case which writes to the PSR flags.)
          */
-        if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
+        if (a->reg != ARM_VFP_FPSCR) {
+            return false;
+        }
+        if (a->rt == 15 && !a->l) {
             return false;
         }
     }
--
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Currently M-profile borrows the A-profile code for VMSR and VMRS
2
2
(access to the FP system registers), because all it needs to support
3
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
3
is the FPSCR. In v8.1M things become significantly more complicated
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
in two ways:
5
Message-id: 20190301200501.16533-9-richard.henderson@linaro.org
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
* there are several new FP system registers; some have side effects
7
on read, and one (FPCXT_NS) needs to avoid the usual
8
vfp_access_check() and the "only if FPU implemented" check
9
10
* all sysregs are now accessible both by VMRS/VMSR (which
11
reads/writes a general purpose register) and also by VLDR/VSTR
12
(which reads/writes them directly to memory)
13
14
Refactor the structure of how we handle VMSR/VMRS to cope with this:
15
16
* keep the M-profile code entirely separate from the A-profile code
17
18
* abstract out the "read or write the general purpose register" part
19
of the code into a loadfn or storefn function pointer, so we can
20
reuse it for VLDR/VSTR.
21
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
8
---
25
---
9
target/arm/cpu.h | 5 ++++
26
target/arm/cpu.h | 3 +
10
target/arm/cpu64.c | 2 +-
27
target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
11
target/arm/translate-a64.c | 58 ++++++++++++++++++++++++++++++++++++++
28
2 files changed, 171 insertions(+), 14 deletions(-)
12
3 files changed, 64 insertions(+), 1 deletion(-)
13
29
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
32
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
33
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_condm_4(const ARMISARegisters *id)
34
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
19
return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) != 0;
35
#define ARM_VFP_FPINST 9
36
#define ARM_VFP_FPINST2 10
37
38
+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
39
+#define QEMU_VFP_FPSCR_NZCV 0xffff
40
+
41
/* iwMMXt coprocessor control registers. */
42
#define ARM_IWMMXT_wCID 0
43
#define ARM_IWMMXT_wCon 1
44
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/translate-vfp.c.inc
47
+++ b/target/arm/translate-vfp.c.inc
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
49
return true;
20
}
50
}
21
51
22
+static inline bool isar_feature_aa64_condm_5(const ARMISARegisters *id)
52
+/*
23
+{
53
+ * M-profile provides two different sets of instructions that can
24
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) >= 2;
54
+ * access floating point system registers: VMSR/VMRS (which move
25
+}
55
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
26
+
56
+ * move directly to/from memory). In some cases there are also side
27
static inline bool isar_feature_aa64_jscvt(const ARMISARegisters *id)
57
+ * effects which must happen after any write to memory (which could
58
+ * cause an exception). So we implement the common logic for the
59
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
60
+ * which take pointers to callback functions which will perform the
61
+ * actual "read/write general purpose register" and "read/write
62
+ * memory" operations.
63
+ */
64
+
65
+/*
66
+ * Emit code to store the sysreg to its final destination; frees the
67
+ * TCG temp 'value' it is passed.
68
+ */
69
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
70
+/*
71
+ * Emit code to load the value to be copied to the sysreg; returns
72
+ * a new TCG temporary
73
+ */
74
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
75
+
76
+/* Common decode/access checks for fp sysreg read/write */
77
+typedef enum FPSysRegCheckResult {
78
+ FPSysRegCheckFailed, /* caller should return false */
79
+ FPSysRegCheckDone, /* caller should return true */
80
+ FPSysRegCheckContinue, /* caller should continue generating code */
81
+} FPSysRegCheckResult;
82
+
83
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
84
+{
85
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
86
+ return FPSysRegCheckFailed;
87
+ }
88
+
89
+ switch (regno) {
90
+ case ARM_VFP_FPSCR:
91
+ case QEMU_VFP_FPSCR_NZCV:
92
+ break;
93
+ default:
94
+ return FPSysRegCheckFailed;
95
+ }
96
+
97
+ if (!vfp_access_check(s)) {
98
+ return FPSysRegCheckDone;
99
+ }
100
+
101
+ return FPSysRegCheckContinue;
102
+}
103
+
104
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
105
+
106
+ fp_sysreg_loadfn *loadfn,
107
+ void *opaque)
108
+{
109
+ /* Do a write to an M-profile floating point system register */
110
+ TCGv_i32 tmp;
111
+
112
+ switch (fp_sysreg_checks(s, regno)) {
113
+ case FPSysRegCheckFailed:
114
+ return false;
115
+ case FPSysRegCheckDone:
116
+ return true;
117
+ case FPSysRegCheckContinue:
118
+ break;
119
+ }
120
+
121
+ switch (regno) {
122
+ case ARM_VFP_FPSCR:
123
+ tmp = loadfn(s, opaque);
124
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
125
+ tcg_temp_free_i32(tmp);
126
+ gen_lookup_tb(s);
127
+ break;
128
+ default:
129
+ g_assert_not_reached();
130
+ }
131
+ return true;
132
+}
133
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
135
+ fp_sysreg_storefn *storefn,
136
+ void *opaque)
137
+{
138
+ /* Do a read from an M-profile floating point system register */
139
+ TCGv_i32 tmp;
140
+
141
+ switch (fp_sysreg_checks(s, regno)) {
142
+ case FPSysRegCheckFailed:
143
+ return false;
144
+ case FPSysRegCheckDone:
145
+ return true;
146
+ case FPSysRegCheckContinue:
147
+ break;
148
+ }
149
+
150
+ switch (regno) {
151
+ case ARM_VFP_FPSCR:
152
+ tmp = tcg_temp_new_i32();
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
154
+ storefn(s, opaque, tmp);
155
+ break;
156
+ case QEMU_VFP_FPSCR_NZCV:
157
+ /*
158
+ * Read just NZCV; this is a special case to avoid the
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
160
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
163
+ storefn(s, opaque, tmp);
164
+ break;
165
+ default:
166
+ g_assert_not_reached();
167
+ }
168
+ return true;
169
+}
170
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
172
+{
173
+ arg_VMSR_VMRS *a = opaque;
174
+
175
+ if (a->rt == 15) {
176
+ /* Set the 4 flag bits in the CPSR */
177
+ gen_set_nzcv(value);
178
+ tcg_temp_free_i32(value);
179
+ } else {
180
+ store_reg(s, a->rt, value);
181
+ }
182
+}
183
+
184
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
185
+{
186
+ arg_VMSR_VMRS *a = opaque;
187
+
188
+ return load_reg(s, a->rt);
189
+}
190
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
193
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
197
+ * we only care about the top 4 bits of FPSCR there.
198
+ */
199
+ if (a->rt == 15) {
200
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
201
+ a->reg = QEMU_VFP_FPSCR_NZCV;
202
+ } else {
203
+ return false;
204
+ }
205
+ }
206
+
207
+ if (a->l) {
208
+ /* VMRS, move FP system register to gp register */
209
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
210
+ } else {
211
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
214
+}
215
+
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
28
{
217
{
29
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, JSCVT) != 0;
218
TCGv_i32 tmp;
30
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
219
bool ignore_vfp_enabled = false;
31
index XXXXXXX..XXXXXXX 100644
220
32
--- a/target/arm/cpu64.c
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
33
+++ b/target/arm/cpu64.c
222
- return false;
34
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
35
t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
224
+ return gen_M_VMSR_VMRS(s, a);
36
t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
37
t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
38
- t = FIELD_DP64(t, ID_AA64ISAR0, TS, 1);
39
+ t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
40
cpu->isar.id_aa64isar0 = t;
41
42
t = cpu->isar.id_aa64isar1;
43
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/translate-a64.c
46
+++ b/target/arm/translate-a64.c
47
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
48
}
225
}
49
}
226
50
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
51
+static void gen_xaflag(void)
228
- /*
52
+{
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
53
+ TCGv_i32 z = tcg_temp_new_i32();
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
54
+
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
55
+ tcg_gen_setcondi_i32(TCG_COND_EQ, z, cpu_ZF, 0);
232
- */
56
+
233
- if (a->reg != ARM_VFP_FPSCR) {
57
+ /*
234
- return false;
58
+ * (!C & !Z) << 31
235
- }
59
+ * (!(C | Z)) << 31
236
- if (a->rt == 15 && !a->l) {
60
+ * ~((C | Z) << 31)
237
- return false;
61
+ * ~-(C | Z)
238
- }
62
+ * (C | Z) - 1
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
63
+ */
240
+ return false;
64
+ tcg_gen_or_i32(cpu_NF, cpu_CF, z);
241
}
65
+ tcg_gen_subi_i32(cpu_NF, cpu_NF, 1);
242
66
+
243
switch (a->reg) {
67
+ /* !(Z & C) */
68
+ tcg_gen_and_i32(cpu_ZF, z, cpu_CF);
69
+ tcg_gen_xori_i32(cpu_ZF, cpu_ZF, 1);
70
+
71
+ /* (!C & Z) << 31 -> -(Z & ~C) */
72
+ tcg_gen_andc_i32(cpu_VF, z, cpu_CF);
73
+ tcg_gen_neg_i32(cpu_VF, cpu_VF);
74
+
75
+ /* C | Z */
76
+ tcg_gen_or_i32(cpu_CF, cpu_CF, z);
77
+
78
+ tcg_temp_free_i32(z);
79
+}
80
+
81
+static void gen_axflag(void)
82
+{
83
+ tcg_gen_sari_i32(cpu_VF, cpu_VF, 31); /* V ? -1 : 0 */
84
+ tcg_gen_andc_i32(cpu_CF, cpu_CF, cpu_VF); /* C & !V */
85
+
86
+ /* !(Z | V) -> !(!ZF | V) -> ZF & !V -> ZF & ~VF */
87
+ tcg_gen_andc_i32(cpu_ZF, cpu_ZF, cpu_VF);
88
+
89
+ tcg_gen_movi_i32(cpu_NF, 0);
90
+ tcg_gen_movi_i32(cpu_VF, 0);
91
+}
92
+
93
/* MSR (immediate) - move immediate to processor state field */
94
static void handle_msr_i(DisasContext *s, uint32_t insn,
95
unsigned int op1, unsigned int op2, unsigned int crm)
96
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
97
s->base.is_jmp = DISAS_NEXT;
98
break;
99
100
+ case 0x01: /* XAFlag */
101
+ if (crm != 0 || !dc_isar_feature(aa64_condm_5, s)) {
102
+ goto do_unallocated;
103
+ }
104
+ gen_xaflag();
105
+ s->base.is_jmp = DISAS_NEXT;
106
+ break;
107
+
108
+ case 0x02: /* AXFlag */
109
+ if (crm != 0 || !dc_isar_feature(aa64_condm_5, s)) {
110
+ goto do_unallocated;
111
+ }
112
+ gen_axflag();
113
+ s->base.is_jmp = DISAS_NEXT;
114
+ break;
115
+
116
case 0x05: /* SPSel */
117
if (s->current_el == 0) {
118
goto do_unallocated;
119
--
2.20.1

The constant-expander functions like negate, plus_2, etc, are
generally useful; move them up in translate.c so we can use them in
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
---
target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 21 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 --
target/arm/translate.h | 34 ++++++++++++++++++++++++++++++++++
target/arm/op_helper.c | 5 -----
target/arm/translate-a64.c | 11 -----------
target/arm/translate.c | 11 -----------
5 files changed, 34 insertions(+), 29 deletions(-)
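The constant expanders mentioned above are tiny helpers that map an encoded immediate field onto the value the instruction actually uses, so the decode patterns can stay declarative. A standalone sketch of the idea (not the decodetree machinery itself):

    #include <stdio.h>

    /* An offset field that the encoding stores in units of 4 bytes */
    static int times_4(int x)
    {
        return x * 4;
    }

    int main(void)
    {
        int imm7 = 0x1f;                               /* raw bits from the insn */
        printf("byte offset = %d\n", times_4(imm7));   /* prints 124 */
        return 0;
    }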
17
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(get_cp_reg, i32, env, ptr)
23
DEF_HELPER_3(set_cp_reg64, void, env, ptr, i64)
24
DEF_HELPER_2(get_cp_reg64, i64, env, ptr)
25
26
-DEF_HELPER_1(clear_pstate_ss, void, env)
27
-
28
DEF_HELPER_2(get_r13_banked, i32, env, i32)
29
DEF_HELPER_3(set_r13_banked, void, env, i32, i32)
30
31
diff --git a/target/arm/translate.h b/target/arm/translate.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate.h
34
+++ b/target/arm/translate.h
35
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
36
return ret;
37
}
38
39
+/* Set bits within PSTATE. */
40
+static inline void set_pstate_bits(uint32_t bits)
41
+{
42
+ TCGv_i32 p = tcg_temp_new_i32();
43
+
44
+ tcg_debug_assert(!(bits & CACHED_PSTATE_BITS));
45
+
46
+ tcg_gen_ld_i32(p, cpu_env, offsetof(CPUARMState, pstate));
47
+ tcg_gen_ori_i32(p, p, bits);
48
+ tcg_gen_st_i32(p, cpu_env, offsetof(CPUARMState, pstate));
49
+ tcg_temp_free_i32(p);
50
+}
51
+
52
+/* Clear bits within PSTATE. */
53
+static inline void clear_pstate_bits(uint32_t bits)
54
+{
55
+ TCGv_i32 p = tcg_temp_new_i32();
56
+
57
+ tcg_debug_assert(!(bits & CACHED_PSTATE_BITS));
58
+
59
+ tcg_gen_ld_i32(p, cpu_env, offsetof(CPUARMState, pstate));
60
+ tcg_gen_andi_i32(p, p, ~bits);
61
+ tcg_gen_st_i32(p, cpu_env, offsetof(CPUARMState, pstate));
62
+ tcg_temp_free_i32(p);
63
+}
64
+
65
+/* If the singlestep state is Active-not-pending, advance to Active-pending. */
66
+static inline void gen_ss_advance(DisasContext *s)
67
+{
68
+ if (s->ss_active) {
69
+ s->pstate_ss = 0;
70
+ clear_pstate_bits(PSTATE_SS);
71
+ }
72
+}
73
74
/* Vector operations shared between ARM and AArch64. */
75
extern const GVecGen3 bsl_op;
76
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/op_helper.c
79
+++ b/target/arm/op_helper.c
80
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(get_cp_reg64)(CPUARMState *env, void *rip)
81
return res;
82
}
83
84
-void HELPER(clear_pstate_ss)(CPUARMState *env)
85
-{
86
- env->pstate &= ~PSTATE_SS;
87
-}
88
-
89
void HELPER(pre_hvc)(CPUARMState *env)
90
{
91
ARMCPU *cpu = arm_env_get_cpu(env);
92
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate-a64.c
95
+++ b/target/arm/translate-a64.c
96
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, int offset,
97
s->base.is_jmp = DISAS_NORETURN;
98
}
99
100
-static void gen_ss_advance(DisasContext *s)
101
-{
102
- /* If the singlestep state is Active-not-pending, advance to
103
- * Active-pending.
104
- */
105
- if (s->ss_active) {
106
- s->pstate_ss = 0;
107
- gen_helper_clear_pstate_ss(cpu_env);
108
- }
109
-}
110
-
111
static void gen_step_complete_exception(DisasContext *s)
112
{
113
/* We just completed step of an insn. Move from Active-not-pending
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
14
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
15
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ static void gen_exception(int excp, uint32_t syndrome, uint32_t target_el)
16
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
119
tcg_temp_free_i32(tcg_excp);
17
}
120
}
18
}
121
19
122
-static void gen_ss_advance(DisasContext *s)
20
+/*
21
+ * Constant expanders for the decoders.
22
+ */
23
+
24
+static int negate(DisasContext *s, int x)
25
+{
26
+ return -x;
27
+}
28
+
29
+static int plus_2(DisasContext *s, int x)
30
+{
31
+ return x + 2;
32
+}
33
+
34
+static int times_2(DisasContext *s, int x)
35
+{
36
+ return x * 2;
37
+}
38
+
39
+static int times_4(DisasContext *s, int x)
40
+{
41
+ return x * 4;
42
+}
43
+
44
/* Flags for the disas_set_da_iss info argument:
45
* lower bits hold the Rt register number, higher bits are flags.
46
*/
47
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)
48
49
50
/*
51
- * Constant expanders for the decoders.
52
+ * Constant expanders used by T16/T32 decode
53
*/
54
55
-static int negate(DisasContext *s, int x)
123
-{
56
-{
124
- /* If the singlestep state is Active-not-pending, advance to
57
- return -x;
125
- * Active-pending.
126
- */
127
- if (s->ss_active) {
128
- s->pstate_ss = 0;
129
- gen_helper_clear_pstate_ss(cpu_env);
130
- }
131
-}
58
-}
132
-
59
-
133
static void gen_step_complete_exception(DisasContext *s)
60
-static int plus_2(DisasContext *s, int x)
61
-{
62
- return x + 2;
63
-}
64
-
65
-static int times_2(DisasContext *s, int x)
66
-{
67
- return x * 2;
68
-}
69
-
70
-static int times_4(DisasContext *s, int x)
71
-{
72
- return x * 4;
73
-}
74
-
75
/* Return only the rotation part of T32ExpandImm. */
76
static int t32_expandimm_rot(DisasContext *s, int x)
134
{
77
{
135
/* We just completed step of an insn. Move from Active-not-pending
136
--
2.20.1

Implement the new-in-v8.1M VLDR/VSTR variants which directly
read or write FP system registers to memory.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
---
target/arm/vfp.decode | 14 ++++++
target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++++++
2 files changed, 105 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed up block comment style]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 ++
linux-user/elfload.c | 1 +
target/arm/cpu64.c | 1 +
target/arm/translate-a64.c | 99 +++++++++++++++++++++++++++++++++++++-
4 files changed, 105 insertions(+), 1 deletion(-)
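The new VLDR/VSTR sysreg forms use the usual T32 immediate addressing options: a 7-bit offset scaled by 4, added or subtracted, applied before or after the access, with optional base-register writeback. A rough standalone model of that address arithmetic (function and flag names invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Compute the access address and the value written back to the base */
    static void sysreg_ldst_address(uint32_t rn, uint32_t imm7, bool add,
                                    bool pre_indexed, bool writeback,
                                    uint32_t *addr, uint32_t *new_rn)
    {
        int32_t offset = (int32_t)(imm7 * 4);   /* offset field scaled by 4 */

        if (!add) {
            offset = -offset;
        }
        *addr = pre_indexed ? rn + offset : rn; /* P=1: offset applied first */
        *new_rn = writeback ? rn + offset : rn; /* W=1: base register updated */
    }

    int main(void)
    {
        uint32_t addr, new_rn;

        /* post-indexed access, positive offset, writeback */
        sysreg_ldst_address(0x20001000u, 3, true, false, true, &addr, &new_rn);
        printf("access at %#x, base becomes %#x\n", addr, new_rn);
        return 0;
    }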
15
11
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
14
--- a/target/arm/vfp.decode
19
+++ b/target/arm/cpu.h
15
+++ b/target/arm/vfp.decode
20
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fhm(const ARMISARegisters *id)
16
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
21
return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, FHM) != 0;
17
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
18
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
19
20
+# M-profile VLDR/VSTR to sysreg
21
+%vldr_sysreg 22:1 13:3
22
+%imm7_0x4 0:7 !function=times_4
23
+
24
+&vldr_sysreg rn reg imm a w p
25
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
26
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
27
+
28
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
29
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
30
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
31
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
32
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
33
+
34
# We split the load/store multiple up into two patterns to avoid
35
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
36
# grouping:
37
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-vfp.c.inc
40
+++ b/target/arm/translate-vfp.c.inc
41
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
42
return true;
22
}
43
}
23
44
24
+static inline bool isar_feature_aa64_condm_4(const ARMISARegisters *id)
45
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
25
+{
46
+{
26
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) != 0;
47
+ arg_vldr_sysreg *a = opaque;
48
+ uint32_t offset = a->imm;
49
+ TCGv_i32 addr;
50
+
51
+ if (!a->a) {
52
+ offset = - offset;
53
+ }
54
+
55
+ addr = load_reg(s, a->rn);
56
+ if (a->p) {
57
+ tcg_gen_addi_i32(addr, addr, offset);
58
+ }
59
+
60
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
61
+ gen_helper_v8m_stackcheck(cpu_env, addr);
62
+ }
63
+
64
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
65
+ MO_UL | MO_ALIGN | s->be_data);
66
+ tcg_temp_free_i32(value);
67
+
68
+ if (a->w) {
69
+ /* writeback */
70
+ if (!a->p) {
71
+ tcg_gen_addi_i32(addr, addr, offset);
72
+ }
73
+ store_reg(s, a->rn, addr);
74
+ } else {
75
+ tcg_temp_free_i32(addr);
76
+ }
27
+}
77
+}
28
+
78
+
29
static inline bool isar_feature_aa64_jscvt(const ARMISARegisters *id)
79
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
30
{
80
+{
31
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, JSCVT) != 0;
81
+ arg_vldr_sysreg *a = opaque;
32
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
82
+ uint32_t offset = a->imm;
33
index XXXXXXX..XXXXXXX 100644
83
+ TCGv_i32 addr;
34
--- a/linux-user/elfload.c
84
+ TCGv_i32 value = tcg_temp_new_i32();
35
+++ b/linux-user/elfload.c
36
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
37
GET_FEATURE_ID(aa64_fhm, ARM_HWCAP_A64_ASIMDFHM);
38
GET_FEATURE_ID(aa64_jscvt, ARM_HWCAP_A64_JSCVT);
39
GET_FEATURE_ID(aa64_sb, ARM_HWCAP_A64_SB);
40
+ GET_FEATURE_ID(aa64_condm_4, ARM_HWCAP_A64_FLAGM);
41
42
#undef GET_FEATURE_ID
43
44
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/cpu64.c
47
+++ b/target/arm/cpu64.c
48
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
49
t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
50
t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
51
t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
52
+ t = FIELD_DP64(t, ID_AA64ISAR0, TS, 1);
53
cpu->isar.id_aa64isar0 = t;
54
55
t = cpu->isar.id_aa64isar1;
56
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/translate-a64.c
59
+++ b/target/arm/translate-a64.c
60
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
61
s->base.is_jmp = DISAS_TOO_MANY;
62
63
switch (op) {
64
+ case 0x00: /* CFINV */
65
+ if (crm != 0 || !dc_isar_feature(aa64_condm_4, s)) {
66
+ goto do_unallocated;
67
+ }
68
+ tcg_gen_xori_i32(cpu_CF, cpu_CF, 1);
69
+ s->base.is_jmp = DISAS_NEXT;
70
+ break;
71
+
85
+
72
case 0x05: /* SPSel */
86
+ if (!a->a) {
73
if (s->current_el == 0) {
87
+ offset = - offset;
74
goto do_unallocated;
75
@@ -XXX,XX +XXX,XX @@ static void gen_get_nzcv(TCGv_i64 tcg_rt)
76
}
77
78
static void gen_set_nzcv(TCGv_i64 tcg_rt)
79
-
80
{
81
TCGv_i32 nzcv = tcg_temp_new_i32();
82
83
@@ -XXX,XX +XXX,XX @@ static void disas_adc_sbc(DisasContext *s, uint32_t insn)
84
}
85
}
86
87
+/*
88
+ * Rotate right into flags
89
+ * 31 30 29 21 15 10 5 4 0
90
+ * +--+--+--+-----------------+--------+-----------+------+--+------+
91
+ * |sf|op| S| 1 1 0 1 0 0 0 0 | imm6 | 0 0 0 0 1 | Rn |o2| mask |
92
+ * +--+--+--+-----------------+--------+-----------+------+--+------+
93
+ */
94
+static void disas_rotate_right_into_flags(DisasContext *s, uint32_t insn)
95
+{
96
+ int mask = extract32(insn, 0, 4);
97
+ int o2 = extract32(insn, 4, 1);
98
+ int rn = extract32(insn, 5, 5);
99
+ int imm6 = extract32(insn, 15, 6);
100
+ int sf_op_s = extract32(insn, 29, 3);
101
+ TCGv_i64 tcg_rn;
102
+ TCGv_i32 nzcv;
103
+
104
+ if (sf_op_s != 5 || o2 != 0 || !dc_isar_feature(aa64_condm_4, s)) {
105
+ unallocated_encoding(s);
106
+ return;
107
+ }
88
+ }
108
+
89
+
109
+ tcg_rn = read_cpu_reg(s, rn, 1);
90
+ addr = load_reg(s, a->rn);
110
+ tcg_gen_rotri_i64(tcg_rn, tcg_rn, imm6);
91
+ if (a->p) {
111
+
92
+ tcg_gen_addi_i32(addr, addr, offset);
112
+ nzcv = tcg_temp_new_i32();
113
+ tcg_gen_extrl_i64_i32(nzcv, tcg_rn);
114
+
115
+ if (mask & 8) { /* N */
116
+ tcg_gen_shli_i32(cpu_NF, nzcv, 31 - 3);
117
+ }
118
+ if (mask & 4) { /* Z */
119
+ tcg_gen_not_i32(cpu_ZF, nzcv);
120
+ tcg_gen_andi_i32(cpu_ZF, cpu_ZF, 4);
121
+ }
122
+ if (mask & 2) { /* C */
123
+ tcg_gen_extract_i32(cpu_CF, nzcv, 1, 1);
124
+ }
125
+ if (mask & 1) { /* V */
126
+ tcg_gen_shli_i32(cpu_VF, nzcv, 31 - 0);
127
+ }
93
+ }
128
+
94
+
129
+ tcg_temp_free_i32(nzcv);
95
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
96
+ gen_helper_v8m_stackcheck(cpu_env, addr);
97
+ }
98
+
99
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
100
+ MO_UL | MO_ALIGN | s->be_data);
101
+
102
+ if (a->w) {
103
+ /* writeback */
104
+ if (!a->p) {
105
+ tcg_gen_addi_i32(addr, addr, offset);
106
+ }
107
+ store_reg(s, a->rn, addr);
108
+ } else {
109
+ tcg_temp_free_i32(addr);
110
+ }
111
+ return value;
130
+}
112
+}
131
+
113
+
132
+/*
114
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
133
+ * Evaluate into flags
134
+ * 31 30 29 21 15 14 10 5 4 0
135
+ * +--+--+--+-----------------+---------+----+---------+------+--+------+
136
+ * |sf|op| S| 1 1 0 1 0 0 0 0 | opcode2 | sz | 0 0 1 0 | Rn |o3| mask |
137
+ * +--+--+--+-----------------+---------+----+---------+------+--+------+
138
+ */
139
+static void disas_evaluate_into_flags(DisasContext *s, uint32_t insn)
140
+{
115
+{
141
+ int o3_mask = extract32(insn, 0, 5);
116
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
142
+ int rn = extract32(insn, 5, 5);
117
+ return false;
143
+ int o2 = extract32(insn, 15, 6);
144
+ int sz = extract32(insn, 14, 1);
145
+ int sf_op_s = extract32(insn, 29, 3);
146
+ TCGv_i32 tmp;
147
+ int shift;
148
+
149
+ if (sf_op_s != 1 || o2 != 0 || o3_mask != 0xd ||
150
+ !dc_isar_feature(aa64_condm_4, s)) {
151
+ unallocated_encoding(s);
152
+ return;
153
+ }
118
+ }
154
+ shift = sz ? 16 : 24; /* SETF16 or SETF8 */
119
+ if (a->rn == 15) {
155
+
120
+ return false;
156
+ tmp = tcg_temp_new_i32();
121
+ }
157
+ tcg_gen_extrl_i64_i32(tmp, cpu_reg(s, rn));
122
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
158
+ tcg_gen_shli_i32(cpu_NF, tmp, shift);
159
+ tcg_gen_shli_i32(cpu_VF, tmp, shift - 1);
160
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
161
+ tcg_gen_xor_i32(cpu_VF, cpu_VF, cpu_NF);
162
+ tcg_temp_free_i32(tmp);
163
+}
123
+}
164
+
124
+
165
/* Conditional compare (immediate / register)
125
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
166
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
126
+{
167
* +--+--+--+------------------------+--------+------+----+--+------+--+-----+
127
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
168
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
128
+ return false;
169
disas_adc_sbc(s, insn);
129
+ }
170
break;
130
+ if (a->rn == 15) {
171
131
+ return false;
172
+ case 0x01: /* Rotate right into flags */
132
+ }
173
+ case 0x21:
133
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
174
+ disas_rotate_right_into_flags(s, insn);
134
+}
175
+ break;
176
+
135
+
177
+ case 0x02: /* Evaluate into flags */
136
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
178
+ case 0x12:
137
{
179
+ case 0x22:
138
TCGv_i32 tmp;
180
+ case 0x32:
181
+ disas_evaluate_into_flags(s, insn);
182
+ break;
183
+
184
default:
185
goto do_unallocated;
186
}
187
--
2.20.1

v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)

Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with todo comments for where we will need to add
support later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
---
target/arm/cpu.h | 13 +++++++++++++
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
2 files changed, 40 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 ++
target/arm/helper.h | 5 ++
target/arm/cpu64.c | 1 +
target/arm/translate-a64.c | 71 ++++++++++++++++++++++++++--
target/arm/vfp_helper.c | 96 ++++++++++++++++++++++++++++++++++++++
5 files changed, 173 insertions(+), 5 deletions(-)
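A standalone sketch of the FPSCR_nzcvqc behaviour described above, where only bits [31:27] are visible through the register (names invented for illustration; note QEMU itself treats the QC bit as RES0 until MVE lands):

    #include <stdint.h>
    #include <stdio.h>

    #define FPSCR_NZCVQC_MASK 0xf8000000u   /* N, Z, C, V, QC: bits [31:27] */

    static uint32_t read_fpscr_nzcvqc(uint32_t fpscr)
    {
        return fpscr & FPSCR_NZCVQC_MASK;   /* everything else reads as zero */
    }

    static uint32_t write_fpscr_nzcvqc(uint32_t fpscr, uint32_t value)
    {
        /* Only the flag bits change; the rest of the FPSCR is preserved */
        return (fpscr & ~FPSCR_NZCVQC_MASK) | (value & FPSCR_NZCVQC_MASK);
    }

    int main(void)
    {
        uint32_t fpscr = 0x00c80014u;       /* arbitrary FPSCR contents */

        fpscr = write_fpscr_nzcvqc(fpscr, 0xffffffffu);
        printf("fpscr = %#x, nzcvqc = %#x\n", fpscr, read_fpscr_nzcvqc(fpscr));
        return 0;
    }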
15
18
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
21
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_predinv(const ARMISARegisters *id)
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
21
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SPECRES) != 0;
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
22
}
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
23
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
24
+static inline bool isar_feature_aa64_frint(const ARMISARegisters *id)
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
25
+{
28
+#define FPCR_C (1 << 29) /* FP carry flag */
26
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FRINTTS) != 0;
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
27
+}
30
+#define FPCR_N (1 << 31) /* FP negative flag */
28
+
31
+
29
static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
34
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
30
{
36
{
31
/* We always set the AdvSIMD and FP fields identically wrt FP16. */
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
32
diff --git a/target/arm/helper.h b/target/arm/helper.h
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
33
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/helper.h
52
--- a/target/arm/translate-vfp.c.inc
35
+++ b/target/arm/helper.h
53
+++ b/target/arm/translate-vfp.c.inc
36
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a32, TCG_CALL_NO_RWG,
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
37
DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a64, TCG_CALL_NO_RWG,
55
case ARM_VFP_FPSCR:
38
void, ptr, ptr, ptr, ptr, i32)
56
case QEMU_VFP_FPSCR_NZCV:
39
40
+DEF_HELPER_FLAGS_2(frint32_s, TCG_CALL_NO_RWG, f32, f32, ptr)
41
+DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, ptr)
42
+DEF_HELPER_FLAGS_2(frint32_d, TCG_CALL_NO_RWG, f64, f64, ptr)
43
+DEF_HELPER_FLAGS_2(frint64_d, TCG_CALL_NO_RWG, f64, f64, ptr)
44
+
45
#ifdef TARGET_AARCH64
46
#include "helper-a64.h"
47
#include "helper-sve.h"
48
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/cpu64.c
51
+++ b/target/arm/cpu64.c
52
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
53
t = FIELD_DP64(t, ID_AA64ISAR1, GPI, 0);
54
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
55
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
56
+ t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
57
cpu->isar.id_aa64isar1 = t;
58
59
t = cpu->isar.id_aa64pfr0;
60
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate-a64.c
63
+++ b/target/arm/translate-a64.c
64
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
65
case 0xf: /* FRINTI */
66
gen_fpst = gen_helper_rints;
67
break;
57
break;
68
+ case 0x10: /* FRINT32Z */
58
+ case ARM_VFP_FPSCR_NZCVQC:
69
+ rmode = float_round_to_zero;
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
70
+ gen_fpst = gen_helper_frint32_s;
60
+ return false;
61
+ }
71
+ break;
62
+ break;
72
+ case 0x11: /* FRINT32X */
63
default:
73
+ gen_fpst = gen_helper_frint32_s;
64
return FPSysRegCheckFailed;
65
}
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
67
tcg_temp_free_i32(tmp);
68
gen_lookup_tb(s);
69
break;
70
+ case ARM_VFP_FPSCR_NZCVQC:
71
+ {
72
+ TCGv_i32 fpscr;
73
+ tmp = loadfn(s, opaque);
74
+ /*
75
+ * TODO: when we implement MVE, write the QC bit.
76
+ * For non-MVE, QC is RES0.
77
+ */
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
83
+ tcg_temp_free_i32(tmp);
74
+ break;
84
+ break;
75
+ case 0x12: /* FRINT64Z */
85
+ }
76
+ rmode = float_round_to_zero;
77
+ gen_fpst = gen_helper_frint64_s;
78
+ break;
79
+ case 0x13: /* FRINT64X */
80
+ gen_fpst = gen_helper_frint64_s;
81
+ break;
82
default:
86
default:
83
g_assert_not_reached();
87
g_assert_not_reached();
84
}
88
}
85
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
86
case 0xf: /* FRINTI */
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
87
gen_fpst = gen_helper_rintd;
91
storefn(s, opaque, tmp);
88
break;
92
break;
89
+ case 0x10: /* FRINT32Z */
93
+ case ARM_VFP_FPSCR_NZCVQC:
90
+ rmode = float_round_to_zero;
94
+ /*
91
+ gen_fpst = gen_helper_frint32_d;
95
+ * TODO: MVE has a QC bit, which we probably won't store
92
+ break;
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
93
+ case 0x11: /* FRINT32X */
97
+ * we can just fall through to the FPSCR_NZCV case.
94
+ gen_fpst = gen_helper_frint32_d;
98
+ */
95
+ break;
99
case QEMU_VFP_FPSCR_NZCV:
96
+ case 0x12: /* FRINT64Z */
100
/*
97
+ rmode = float_round_to_zero;
101
* Read just NZCV; this is a special case to avoid the
98
+ gen_fpst = gen_helper_frint64_d;
99
+ break;
100
+ case 0x13: /* FRINT64X */
101
+ gen_fpst = gen_helper_frint64_d;
102
+ break;
103
default:
104
g_assert_not_reached();
105
}
106
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
107
handle_fp_fcvt(s, opcode, rd, rn, dtype, type);
108
break;
109
}
110
+
111
+ case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
112
+ if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
113
+ unallocated_encoding(s);
114
+ return;
115
+ }
116
+ /* fall through */
117
case 0x0 ... 0x3:
118
case 0x8 ... 0xc:
119
case 0xe ... 0xf:
120
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
121
if (!fp_access_check(s)) {
122
return;
123
}
124
-
125
handle_fp_1src_single(s, opcode, rd, rn);
126
break;
127
case 1:
128
if (!fp_access_check(s)) {
129
return;
130
}
131
-
132
handle_fp_1src_double(s, opcode, rd, rn);
133
break;
134
case 3:
135
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
136
if (!fp_access_check(s)) {
137
return;
138
}
139
-
140
handle_fp_1src_half(s, opcode, rd, rn);
141
break;
142
default:
143
unallocated_encoding(s);
144
}
145
break;
146
+
147
default:
148
unallocated_encoding(s);
149
break;
150
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
151
case 0x59: /* FRINTX */
152
gen_helper_rintd_exact(tcg_rd, tcg_rn, tcg_fpstatus);
153
break;
154
+ case 0x1e: /* FRINT32Z */
155
+ case 0x5e: /* FRINT32X */
156
+ gen_helper_frint32_d(tcg_rd, tcg_rn, tcg_fpstatus);
157
+ break;
158
+ case 0x1f: /* FRINT64Z */
159
+ case 0x5f: /* FRINT64X */
160
+ gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
161
+ break;
162
default:
163
g_assert_not_reached();
164
}
165
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
166
}
167
break;
168
case 0xc ... 0xf:
169
- case 0x16 ... 0x1d:
170
- case 0x1f:
171
+ case 0x16 ... 0x1f:
172
{
173
/* Floating point: U, size[1] and opcode indicate operation;
174
* size[0] indicates single or double precision.
175
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
176
}
177
need_fpstatus = true;
178
break;
179
+ case 0x1e: /* FRINT32Z */
180
+ case 0x1f: /* FRINT64Z */
181
+ need_rmode = true;
182
+ rmode = FPROUNDING_ZERO;
183
+ /* fall through */
184
+ case 0x5e: /* FRINT32X */
185
+ case 0x5f: /* FRINT64X */
186
+ need_fpstatus = true;
187
+ if ((size == 3 && !is_q) || !dc_isar_feature(aa64_frint, s)) {
188
+ unallocated_encoding(s);
189
+ return;
190
+ }
191
+ break;
192
default:
193
unallocated_encoding(s);
194
return;
195
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
196
case 0x7c: /* URSQRTE */
197
gen_helper_rsqrte_u32(tcg_res, tcg_op, tcg_fpstatus);
198
break;
199
+ case 0x1e: /* FRINT32Z */
200
+ case 0x5e: /* FRINT32X */
201
+ gen_helper_frint32_s(tcg_res, tcg_op, tcg_fpstatus);
202
+ break;
203
+ case 0x1f: /* FRINT64Z */
204
+ case 0x5f: /* FRINT64X */
205
+ gen_helper_frint64_s(tcg_res, tcg_op, tcg_fpstatus);
206
+ break;
207
default:
208
g_assert_not_reached();
209
}
210
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
211
index XXXXXXX..XXXXXXX 100644
212
--- a/target/arm/vfp_helper.c
213
+++ b/target/arm/vfp_helper.c
214
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vjcvt)(float64 value, CPUARMState *env)
215
216
return result;
217
}
218
+
219
+/* Round a float32 to an integer that fits in int32_t or int64_t. */
220
+static float32 frint_s(float32 f, float_status *fpst, int intsize)
221
+{
222
+ int old_flags = get_float_exception_flags(fpst);
223
+ uint32_t exp = extract32(f, 23, 8);
224
+
225
+ if (unlikely(exp == 0xff)) {
226
+ /* NaN or Inf. */
227
+ goto overflow;
228
+ }
229
+
230
+ /* Round and re-extract the exponent. */
231
+ f = float32_round_to_int(f, fpst);
232
+ exp = extract32(f, 23, 8);
233
+
234
+ /* Validate the range of the result. */
235
+ if (exp < 126 + intsize) {
236
+ /* abs(F) <= INT{N}_MAX */
237
+ return f;
238
+ }
239
+ if (exp == 126 + intsize) {
240
+ uint32_t sign = extract32(f, 31, 1);
241
+ uint32_t frac = extract32(f, 0, 23);
242
+ if (sign && frac == 0) {
243
+ /* F == INT{N}_MIN */
244
+ return f;
245
+ }
246
+ }
247
+
248
+ overflow:
249
+ /*
250
+ * Raise Invalid and return INT{N}_MIN as a float. Revert any
251
+ * inexact exception float32_round_to_int may have raised.
252
+ */
253
+ set_float_exception_flags(old_flags | float_flag_invalid, fpst);
254
+ return (0x100u + 126u + intsize) << 23;
255
+}
256
+
257
+float32 HELPER(frint32_s)(float32 f, void *fpst)
258
+{
259
+ return frint_s(f, fpst, 32);
260
+}
261
+
262
+float32 HELPER(frint64_s)(float32 f, void *fpst)
263
+{
264
+ return frint_s(f, fpst, 64);
265
+}
266
+
267
+/* Round a float64 to an integer that fits in int32_t or int64_t. */
268
+static float64 frint_d(float64 f, float_status *fpst, int intsize)
269
+{
270
+ int old_flags = get_float_exception_flags(fpst);
271
+ uint32_t exp = extract64(f, 52, 11);
272
+
273
+ if (unlikely(exp == 0x7ff)) {
274
+ /* NaN or Inf. */
275
+ goto overflow;
276
+ }
277
+
278
+ /* Round and re-extract the exponent. */
279
+ f = float64_round_to_int(f, fpst);
280
+ exp = extract64(f, 52, 11);
281
+
282
+ /* Validate the range of the result. */
283
+ if (exp < 1022 + intsize) {
284
+ /* abs(F) <= INT{N}_MAX */
285
+ return f;
286
+ }
287
+ if (exp == 1022 + intsize) {
288
+ uint64_t sign = extract64(f, 63, 1);
289
+ uint64_t frac = extract64(f, 0, 52);
290
+ if (sign && frac == 0) {
291
+ /* F == INT{N}_MIN */
292
+ return f;
293
+ }
294
+ }
295
+
296
+ overflow:
297
+ /*
298
+ * Raise Invalid and return INT{N}_MIN as a float. Revert any
299
+ * inexact exception float64_round_to_int may have raised.
300
+ */
301
+ set_float_exception_flags(old_flags | float_flag_invalid, fpst);
302
+ return (uint64_t)(0x800 + 1022 + intsize) << 52;
303
+}
304
+
305
+float64 HELPER(frint32_d)(float64 f, void *fpst)
306
+{
307
+ return frint_d(f, fpst, 32);
308
+}
309
+
310
+float64 HELPER(frint64_d)(float64 f, void *fpst)
311
+{
312
+ return frint_d(f, fpst, 64);
313
+}
314
--
2.20.1

We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
in the previous commit; use it in a couple of places in existing code,
where we're masking out everything except NZCV for the "load to Rt=15
sets CPSR.NZCV" special case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
---
target/arm/translate-vfp.c.inc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
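As a quick standalone check that the named mask is the same value as the old 0xf0000000 literal (a self-contained sketch, not code from the patch):

    #include <stdint.h>
    #include <stdio.h>

    #define FPCR_V (1u << 28)   /* FP overflow flag */
    #define FPCR_C (1u << 29)   /* FP carry flag */
    #define FPCR_Z (1u << 30)   /* FP zero flag */
    #define FPCR_N (1u << 31)   /* FP negative flag */
    #define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)

    int main(void)
    {
        uint32_t fpscr = 0xdeadbeefu;

        /* Keep only the condition flags, as the Rt=15 special case does */
        printf("nzcv = %#x, mask = %#x\n", fpscr & FPCR_NZCV_MASK, FPCR_NZCV_MASK);
        return FPCR_NZCV_MASK == 0xf0000000u ? 0 : 1;
    }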
12
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
* helper call for the "VMRS to CPSR.NZCV" insn.
*/
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
storefn(s, opaque, tmp);
break;
default:
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
case ARM_VFP_FPSCR:
if (a->rt == 15) {
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
} else {
tmp = tcg_temp_new_i32();
gen_helper_vfp_get_fpscr(tmp, cpu_env);
--
2.20.1

Factor out the code which handles M-profile lazy FP state preservation
from full_vfp_access_check(); accesses to the FPCXT_NS register are
a special case which need to do just this part (corresponding in the
pseudocode to the PreserveFPState() function), and not the full
set of actions matching the pseudocode ExecuteFPCheck() which
normal FP instructions need to do.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
---
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
return offs;
}

+/*
+ * Generate code for M-profile lazy FP state preservation if needed;
+ * this corresponds to the pseudocode PreserveFPState() function.
+ */
+static void gen_preserve_fp_state(DisasContext *s)
+{
+ if (s->v7m_lspact) {
+ /*
+ * Lazy state saving affects external memory and also the NVIC,
+ * so we must mark it as an IO operation for icount (and cause
+ * this to be the last insn in the TB).
+ */
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
+ gen_io_start();
+ }
+ gen_helper_v7m_preserve_fp_state(cpu_env);
+ /*
+ * If the preserve_fp_state helper doesn't throw an exception
+ * then it will clear LSPACT; we don't need to repeat this for
+ * any further FP insns in this TB.
+ */
+ s->v7m_lspact = false;
+ }
+}
+
/*
* Check that VFP access is enabled. If it is, do the necessary
* M-profile lazy-FP handling and then return true.
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
/* Handle M-profile lazy FP state mechanics */

/* Trigger lazy-state preservation if necessary */
- if (s->v7m_lspact) {
- /*
- * Lazy state saving affects external memory and also the NVIC,
- * so we must mark it as an IO operation for icount (and cause
- * this to be the last insn in the TB).
- */
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
- s->base.is_jmp = DISAS_UPDATE_EXIT;
- gen_io_start();
- }
- gen_helper_v7m_preserve_fp_state(cpu_env);
- /*
- * If the preserve_fp_state helper doesn't throw an exception
- * then it will clear LSPACT; we don't need to repeat this for
- * any further FP insns in this TB.
- */
- s->v7m_lspact = false;
- }
+ gen_preserve_fp_state(s);

/* Update ownership of FP context: set FPCCR.S to match current state */
if (s->v8m_fpccr_s_wrong) {
--
2.20.1

Implement the new-in-v8.1M FPCXT_S floating point system register.
This is for saving and restoring the secure floating point context,
and it reads and writes bits [27:0] from the FPSCR and the
CONTROL.SFPA bit in bit [31].

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
---
target/arm/translate-vfp.c.inc | 58 ++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
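A standalone model of how the FPCXT_S value is composed from FPSCR[27:0] and CONTROL.SFPA, per the description above (helper names invented for illustration; this sketches the architectural packing, not the QEMU implementation):

    #include <stdint.h>
    #include <stdio.h>

    /* FPCXT_S packs CONTROL.SFPA into bit [31] and FPSCR[27:0] below it */
    static uint32_t fpcxt_s_read(uint32_t fpscr, uint32_t sfpa)
    {
        return ((sfpa & 1u) << 31) | (fpscr & 0x0fffffffu);
    }

    static void fpcxt_s_write(uint32_t value, uint32_t *fpscr, uint32_t *sfpa)
    {
        *sfpa = value >> 31;
        /* Only FPSCR[27:0] is taken from the written value */
        *fpscr = (*fpscr & ~0x0fffffffu) | (value & 0x0fffffffu);
    }

    int main(void)
    {
        uint32_t fpscr = 0x00000093u, sfpa = 1;

        printf("FPCXT_S reads as %#x\n", fpcxt_s_read(fpscr, sfpa));
        fpcxt_s_write(0x80000001u, &fpscr, &sfpa);
        printf("after write: fpscr = %#x, sfpa = %u\n", fpscr, sfpa);
        return 0;
    }

On top of this packing, a read of FPCXT_S also resets the FPSCR from FPDSCR_NS and clears SFPA, which is why the generated code below ends the TB after the read.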
12
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
return false;
}
break;
+ case ARM_VFP_FPCXT_S:
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+ return false;
+ }
+ if (!s->v8m_secure) {
+ return false;
+ }
+ break;
default:
return FPSysRegCheckFailed;
}
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
tcg_temp_free_i32(tmp);
break;
}
+ case ARM_VFP_FPCXT_S:
+ {
+ TCGv_i32 sfpa, control, fpscr;
+ /* Set FPSCR[27:0] and CONTROL.SFPA from value */
+ tmp = loadfn(s, opaque);
+ sfpa = tcg_temp_new_i32();
+ tcg_gen_shri_i32(sfpa, tmp, 31);
+ control = load_cpu_field(v7m.control[M_REG_S]);
+ tcg_gen_deposit_i32(control, control, sfpa,
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
+ store_cpu_field(control, v7m.control[M_REG_S]);
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+ tcg_gen_andi_i32(fpscr, fpscr, FPCR_NZCV_MASK);
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
+ tcg_temp_free_i32(tmp);
+ tcg_temp_free_i32(sfpa);
+ break;
+ }
default:
g_assert_not_reached();
}
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
storefn(s, opaque, tmp);
break;
+ case ARM_VFP_FPCXT_S:
+ {
+ TCGv_i32 control, sfpa, fpscr;
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
+ tmp = tcg_temp_new_i32();
+ sfpa = tcg_temp_new_i32();
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
+ control = load_cpu_field(v7m.control[M_REG_S]);
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
+ tcg_gen_or_i32(tmp, tmp, sfpa);
+ tcg_temp_free_i32(sfpa);
+ /*
+ * Store result before updating FPSCR etc, in case
+ * it is a memory write which causes an exception.
+ */
+ storefn(s, opaque, tmp);
+ /*
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
+ * CONTROL.SFPA; so we'll end the TB here.
+ */
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
+ store_cpu_field(control, v7m.control[M_REG_S]);
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
+ tcg_temp_free_i32(fpscr);
+ gen_lookup_tb(s);
+ break;
+ }
default:
g_assert_not_reached();
}
--
2.20.1

The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
gains new fields FZ16 (if half-precision floating point is supported)
and LTPSIZE (always reads as 4). Update the reset value and the code
that handles writes to this register accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
---
target/arm/cpu.h | 5 +++++
hw/intc/armv7m_nvic.c | 9 ++++++++-
target/arm/cpu.c | 3 +++
3 files changed, 16 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 13 ++++++++++-
target/arm/cpu.c | 1 +
target/arm/cpu64.c | 2 ++
target/arm/helper.c | 55 +++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 70 insertions(+), 1 deletion(-)
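A rough standalone sketch of the FPDSCR write behaviour described above (the helper and its boolean arguments are invented for illustration; the FPCR_* constants mirror the cpu.h definitions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FPCR_LTPSIZE_SHIFT 16
    #define FPCR_FZ16          (1u << 19)
    #define FPCR_RMODE_MASK    (3u << 22)
    #define FPCR_FZ            (1u << 24)
    #define FPCR_DN            (1u << 25)
    #define FPCR_AHP           (1u << 26)

    static uint32_t fpdscr_write(uint32_t value, bool have_fp16, bool have_lob)
    {
        uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;

        if (have_fp16) {
            mask |= FPCR_FZ16;        /* FZ16 only exists with half-precision */
        }
        value &= mask;
        if (have_lob) {
            value |= 4u << FPCR_LTPSIZE_SHIFT;   /* LTPSIZE always reads as 4 */
        }
        return value;
    }

    int main(void)
    {
        printf("stored FPDSCR = %#x\n", fpdscr_write(0xffffffffu, true, true));
        return 0;
    }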
13
14
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
19
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
19
#define SCTLR_R (1U << 9) /* up to v6; RAZ in v7 */
20
#define FPCR_IXE (1 << 12) /* Inexact exception trap enable */
20
#define SCTLR_UMA (1U << 9) /* v8 onward, AArch64 only */
21
#define FPCR_IDE (1 << 15) /* Input Denormal exception trap enable */
21
#define SCTLR_F (1U << 10) /* up to v6 */
22
#define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
22
-#define SCTLR_SW (1U << 10) /* v7, RES0 in v8 */
23
+#define FPCR_RMODE_MASK (3 << 22) /* Rounding mode */
23
+#define SCTLR_SW (1U << 10) /* v7 */
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
24
+#define SCTLR_EnRCTX (1U << 10) /* in v8.0-PredInv */
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
25
#define SCTLR_Z (1U << 11) /* in v7, RES1 in v8 */
26
+#define FPCR_AHP (1 << 26) /* Alternative half-precision */
26
#define SCTLR_EOS (1U << 11) /* v8.5-ExS */
27
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
27
#define SCTLR_I (1U << 12)
28
#define FPCR_V (1 << 28) /* FP overflow flag */
28
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_sb(const ARMISARegisters *id)
29
#define FPCR_C (1 << 29) /* FP carry flag */
29
return FIELD_EX32(id->id_isar6, ID_ISAR6, SB) != 0;
30
#define FPCR_Z (1 << 30) /* FP zero flag */
30
}
31
#define FPCR_N (1 << 31) /* FP negative flag */
31
32
32
+static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
33
+#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
33
+{
34
+#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
34
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
35
+}
36
+
35
+
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
36
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
38
{
37
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
39
/*
38
40
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
41
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
40
index XXXXXXX..XXXXXXX 100644
42
}
41
--- a/hw/intc/armv7m_nvic.c
43
42
+++ b/hw/intc/armv7m_nvic.c
44
+static inline bool isar_feature_aa64_predinv(const ARMISARegisters *id)
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
45
+{
44
break;
46
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SPECRES) != 0;
45
case 0xf3c: /* FPDSCR */
47
+}
46
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
48
+
47
- value &= 0x07c00000;
49
static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
48
+ uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;
50
{
49
+ if (cpu_isar_feature(any_fp16, cpu)) {
51
/* We always set the AdvSIMD and FP fields identically wrt FP16. */
50
+ mask |= FPCR_FZ16;
51
+ }
52
+ value &= mask;
53
+ if (cpu_isar_feature(aa32_lob, cpu)) {
54
+ value |= 4 << FPCR_LTPSIZE_SHIFT;
55
+ }
56
cpu->env.v7m.fpdscr[attrs.secure] = value;
57
}
58
break;
52
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
59
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
53
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/cpu.c
61
--- a/target/arm/cpu.c
55
+++ b/target/arm/cpu.c
62
+++ b/target/arm/cpu.c
56
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
63
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
57
t = FIELD_DP32(t, ID_ISAR6, DP, 1);
64
* always reset to 4.
58
t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
65
*/
59
t = FIELD_DP32(t, ID_ISAR6, SB, 1);
66
env->v7m.ltpsize = 4;
60
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
67
+ /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
61
cpu->isar.id_isar6 = t;
68
+ env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
62
69
+ env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
63
t = cpu->id_mmfr4;
70
}
64
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
71
65
index XXXXXXX..XXXXXXX 100644
72
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
66
--- a/target/arm/cpu64.c
67
+++ b/target/arm/cpu64.c
68
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
69
t = FIELD_DP64(t, ID_AA64ISAR1, GPA, 1);
70
t = FIELD_DP64(t, ID_AA64ISAR1, GPI, 0);
71
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
72
+ t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
73
cpu->isar.id_aa64isar1 = t;
74
75
t = cpu->isar.id_aa64pfr0;
76
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
77
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
78
u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
79
u = FIELD_DP32(u, ID_ISAR6, SB, 1);
80
+ u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
81
cpu->isar.id_isar6 = u;
82
83
/*
84
diff --git a/target/arm/helper.c b/target/arm/helper.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/target/arm/helper.c
87
+++ b/target/arm/helper.c
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
89
};
90
#endif
91
92
+static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
93
+ bool isread)
94
+{
95
+ int el = arm_current_el(env);
96
+
97
+ if (el == 0) {
98
+ uint64_t sctlr = arm_sctlr(env, el);
99
+ if (!(sctlr & SCTLR_EnRCTX)) {
100
+ return CP_ACCESS_TRAP;
101
+ }
102
+ } else if (el == 1) {
103
+ uint64_t hcr = arm_hcr_el2_eff(env);
104
+ if (hcr & HCR_NV) {
105
+ return CP_ACCESS_TRAP_EL2;
106
+ }
107
+ }
108
+ return CP_ACCESS_OK;
109
+}
110
+
111
+static const ARMCPRegInfo predinv_reginfo[] = {
112
+ { .name = "CFP_RCTX", .state = ARM_CP_STATE_AA64,
113
+ .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 4,
114
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
115
+ { .name = "DVP_RCTX", .state = ARM_CP_STATE_AA64,
116
+ .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 5,
117
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
118
+ { .name = "CPP_RCTX", .state = ARM_CP_STATE_AA64,
119
+ .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 7,
120
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
121
+ /*
122
+ * Note the AArch32 opcodes have a different OPC1.
123
+ */
124
+ { .name = "CFPRCTX", .state = ARM_CP_STATE_AA32,
125
+ .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 4,
126
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
127
+ { .name = "DVPRCTX", .state = ARM_CP_STATE_AA32,
128
+ .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 5,
129
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
130
+ { .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
131
+ .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
132
+ .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
133
+ REGINFO_SENTINEL
134
+};
135
+
136
void register_cp_regs_for_features(ARMCPU *cpu)
137
{
138
/* Register all the coprocessor registers based on feature bits */
139
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
140
define_arm_cp_regs(cpu, pauth_reginfo);
141
}
142
#endif
143
+
144
+ /*
145
+ * While all v8.0 cpus support aarch64, QEMU does have configurations
146
+ * that do not set ID_AA64ISAR1, e.g. user-only qemu-arm -cpu max,
147
+ * which will set ID_ISAR6.
148
+ */
149
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)
150
+ ? cpu_isar_feature(aa64_predinv, cpu)
151
+ : cpu_isar_feature(aa32_predinv, cpu)) {
152
+ define_arm_cp_regs(cpu, predinv_reginfo);
153
+ }
154
}
155
156
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
157
--
2.20.1

In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
are zeroed for an exception taken to Non-secure state; for an
exception taken to Secure state they become UNKNOWN, and we chose to
leave them at their previous values.

In v8.1M the behaviour is specified more tightly and these registers
are always zeroed regardless of the security state that the exception
targets (see rule R_KPZV). Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
---
target/arm/m_helper.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
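The v8.0M versus v8.1M rules described above boil down to two booleans; a minimal standalone sketch (flag names invented for illustration):

    #include <stdbool.h>
    #include <stdio.h>

    /* Which registers get zeroed on exception entry with the Security Extension */
    static void clear_rules(bool v8_1m, bool targets_secure,
                            bool background_secure,
                            bool *zero_caller_saves, bool *zero_callee_saves)
    {
        /* r0-r3, r12, APSR, EPSR:
         * v8.0M: only when the exception targets Non-secure state;
         * v8.1M: always (rule R_KPZV).
         */
        *zero_caller_saves = v8_1m || !targets_secure;
        /* r4-r11: only when Secure background state takes a Non-secure exception */
        *zero_callee_saves = background_secure && !targets_secure;
    }

    int main(void)
    {
        bool caller, callee;

        clear_rules(true, true, true, &caller, &callee);
        printf("v8.1M, Secure->Secure: caller=%d callee=%d\n", caller, callee);
        return 0;
    }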
16
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/m_helper.c
20
+++ b/target/arm/m_helper.c
21
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
22
* Clear registers if necessary to prevent non-secure exception
23
* code being able to see register values from secure code.
24
* Where register values become architecturally UNKNOWN we leave
25
- * them with their previous values.
26
+ * them with their previous values. v8.1M is tighter than v8.0M
27
+ * here and always zeroes the caller-saved registers regardless
28
+ * of the security state the exception is targeting.
29
*/
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
31
- if (!targets_secure) {
32
+ if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
33
/*
34
* Always clear the caller-saved registers (they have been
35
* pushed to the stack earlier in v7m_push_stack()).
36
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
37
* v7m_push_callee_stack()).
38
*/
39
int i;
40
+ /*
41
+ * r4..r11 are callee-saves, zero only if background
42
+ * state was Secure (EXCRET.S == 1) and exception
43
+ * targets Non-secure state
44
+ */
45
+ bool zero_callee_saves = !targets_secure &&
46
+ (lr & R_V7M_EXCRET_S_MASK);
47
48
for (i = 0; i < 13; i++) {
49
- /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
50
- if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
51
+ if (i < 4 || i > 11 || zero_callee_saves) {
52
env->regs[i] = 0;
53
}
54
}
55
--
56
2.20.1
57
58
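
To make the rule easier to see in isolation, here is a small self-contained
C sketch of the decision implemented above. The function and parameter names
are invented for illustration (this is not the QEMU helper), and
background_was_secure stands in for EXCRET.S:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Which of r0-r12 are zeroed on exception entry:
     * v8.0M clears them only for exceptions targeting Non-secure state,
     * v8.1M clears them whatever state the exception targets (R_KPZV).
     */
    static void zero_gprs_on_entry(uint32_t regs[13], bool have_v8_1m,
                                   bool targets_secure,
                                   bool background_was_secure)
    {
        if (!targets_secure || have_v8_1m) {
            /* r4..r11 (callee-saves) are zeroed only when a Secure
             * background state takes an exception to Non-secure state. */
            bool zero_callee_saves = !targets_secure && background_was_secure;
            int i;

            for (i = 0; i < 13; i++) {
                if (i < 4 || i > 11 || zero_callee_saves) {
                    regs[i] = 0;
                }
            }
        }
    }
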
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
R_LLRP). (In previous versions of the architecture this was either
required or IMPDEF.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
---
target/arm/m_helper.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ load_fail:
* The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
* secure); otherwise it targets the same security state as the
* underlying exception.
+ * In v8.1M HardFaults from vector table fetch fails don't set FORCED.
*/
if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
exc_secure = true;
}
- env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
+ env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK;
+ if (!arm_feature(env, ARM_FEATURE_V8_1M)) {
+ env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
+ }
armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
return false;
}
--
2.20.1
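
The same change expressed as a standalone C snippet, in case the split of the
two status bits is easier to follow outside the diff. The helper is invented;
the bit positions follow the architectural HFSR layout (VECTTBL at bit 1,
FORCED at bit 30):

    #include <stdbool.h>
    #include <stdint.h>

    #define HFSR_VECTTBL (1u << 1)   /* vector table read fault */
    #define HFSR_FORCED  (1u << 30)  /* escalated ("forced") HardFault */

    /* HFSR update for a vector table fetch failure. */
    static uint32_t hfsr_after_vecttbl_fail(uint32_t hfsr, bool have_v8_1m)
    {
        hfsr |= HFSR_VECTTBL;
        if (!have_v8_1m) {
            /* Before v8.1M this HardFault is also reported as FORCED. */
            hfsr |= HFSR_FORCED;
        }
        return hfsr;
    }
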
In v8.1M a REVIDR register is defined, which is at address 0xe000ecfc
and is a read-only IMPDEF register providing implementation specific
minor revision information, like the v8A REVIDR_EL1. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
---
hw/intc/armv7m_nvic.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
}
return val;
}
+ case 0xcfc:
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
+ goto bad_offset;
+ }
+ return cpu->revidr;
case 0xd00: /* CPUID Base. */
return cpu->midr;
case 0xd04: /* Interrupt Control State (ICSR) */
--
2.20.1
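
The pattern used here, gating a new register on the feature flag and
otherwise falling back to the reserved-offset handling, looks roughly like
this in isolation (illustrative sketch only, not the real nvic_readl()):

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified register-read dispatch for two of the offsets above. */
    static uint32_t scs_read(uint32_t offset, bool have_v8_1m,
                             uint32_t revidr, uint32_t midr)
    {
        switch (offset) {
        case 0xcfc:            /* REVIDR, only present from v8.1M on */
            if (!have_v8_1m) {
                break;         /* fall back to the reserved handling */
            }
            return revidr;
        case 0xd00:            /* CPUID */
            return midr;
        default:
            break;
        }
        return 0;              /* reserved offsets read as zero here */
    }
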
In v8.1M a new exception return check is added which may cause a NOCP
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
we must check whether access to CP10 from the Security state of the
returning exception is disabled; if it is then we must take a fault.

(Note that for our implementation CPPWR is always RAZ/WI and so can
never cause CP10 accesses to fail.)

The other v8.1M change to this register-clearing code is that if MVE
is implemented VPR must also be cleared, so add a TODO comment to
that effect.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
---
target/arm/m_helper.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
v7m_exception_taken(cpu, excret, true, false);
return;
} else {
- /* Clear s0..s15 and FPSCR */
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+ /* v8.1M adds this NOCP check */
+ bool nsacr_pass = exc_secure ||
+ extract32(env->v7m.nsacr, 10, 1);
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
+ if (!nsacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+ "stackframe: NSACR prevents clearing FPU registers\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ } else if (!cpacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ exc_secure);
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+ "stackframe: CPACR prevents clearing FPU registers\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ }
+ }
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
int i;

for (i = 0; i < 16; i += 2) {
--
2.20.1
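
A compact model of the new check: access to CP10 from the returning
exception's security state must pass both NSACR (only consulted when that
state is Non-secure) and CPACR, and the first of the two that fails decides
which UsageFault is pended. A sketch with invented names (not the QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum {
        NOCP_OK,
        NOCP_FAULT_SECURE,   /* NSACR blocks CP10: Secure UsageFault */
        NOCP_FAULT_TARGET,   /* CPACR blocks CP10: UsageFault in exc_secure */
    } NocpResult;

    static NocpResult v8_1m_return_nocp_check(bool exc_secure, uint32_t nsacr,
                                              bool cpacr_allows_cp10)
    {
        /* NSACR.CP10 (bit 10) only restricts Non-secure accesses. */
        bool nsacr_pass = exc_secure || ((nsacr >> 10) & 1);

        if (!nsacr_pass) {
            return NOCP_FAULT_SECURE;
        }
        if (!cpacr_allows_cp10) {
            return NOCP_FAULT_TARGET;
        }
        return NOCP_OK;
    }
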
1
From: Eric Auger <eric.auger@redhat.com>
1
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
2
The only difference is that:
3
* the old T1 encodings UNDEF if the implementation implements 32
4
Dregs (this is currently architecturally impossible for M-profile)
5
* the new T2 encodings have the implementation-defined option to
6
read from memory (discarding the data) or write UNKNOWN values to
7
memory for the stack slots that would be D16-D31
2
8
3
Now we have the extended memory map (high IO regions beyond the
9
We choose not to make those accesses, so for us the two
4
scalable RAM) and dynamic IPA range support at KVM/ARM level
10
instructions behave identically assuming they don't UNDEF.
5
we can bump the legacy 255GB initial RAM limit. The actual maximum
6
RAM size now depends on the physical CPU and host kernel, in
7
accelerated mode. In TCG mode, it depends on the VCPU
8
AA64MMFR0.PARANGE.
9
11
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
12
Message-id: 20190304101339.25970-11-eric.auger@redhat.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
14
---
15
---
15
hw/arm/virt.c | 21 +--------------------
16
target/arm/m-nocp.decode | 2 +-
16
1 file changed, 1 insertion(+), 20 deletions(-)
17
target/arm/translate-vfp.c.inc | 25 +++++++++++++++++++++++++
18
2 files changed, 26 insertions(+), 1 deletion(-)
17
19
18
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
20
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
19
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/virt.c
22
--- a/target/arm/m-nocp.decode
21
+++ b/hw/arm/virt.c
23
+++ b/target/arm/m-nocp.decode
22
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@
23
25
24
#define PLATFORM_BUS_NUM_IRQS 64
26
{
25
27
# Special cases which do not take an early NOCP: VLLDM and VLSTM
26
-/* RAM limit in GB. Since VIRT_MEM starts at the 1GB mark, this means
28
- VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
27
- * RAM can go up to the 256GB mark, leaving 256GB of the physical
29
+ VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
28
- * address space unallocated and free for future use between 256G and 512G.
30
# VSCCLRM (new in v8.1M) is similar:
29
- * If we need to provide more RAM to VMs in the future then we need to:
31
VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
30
- * * allocate a second bank of RAM starting at 2TB and working up
32
VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
31
- * * fix the DT and ACPI table generation code in QEMU to correctly
33
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
32
- * report two split lumps of RAM to the guest
34
index XXXXXXX..XXXXXXX 100644
33
- * * fix KVM in the host kernel to allow guests with >40 bit address spaces
35
--- a/target/arm/translate-vfp.c.inc
34
- * (We don't want to fill all the way up to 512GB with RAM because
36
+++ b/target/arm/translate-vfp.c.inc
35
- * we might want it for non-RAM purposes later. Conversely it seems
37
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
36
- * reasonable to assume that anybody configuring a VM with a quarter
38
!arm_dc_feature(s, ARM_FEATURE_V8)) {
37
- * of a terabyte of RAM will be doing it on a host with more than a
39
return false;
38
- * terabyte of physical address space.)
40
}
39
- */
41
+
40
+/* Legacy RAM limit in GB (< version 4.0) */
42
+ if (a->op) {
41
#define LEGACY_RAMLIMIT_GB 255
43
+ /*
42
#define LEGACY_RAMLIMIT_BYTES (LEGACY_RAMLIMIT_GB * GiB)
44
+ * T2 encoding ({D0-D31} reglist): v8.1M and up. We choose not
43
45
+ * to take the IMPDEF option to make memory accesses to the stack
44
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
46
+ * slots that correspond to the D16-D31 registers (discarding
45
47
+ * read data and writing UNKNOWN values), so for us the T2
46
vms->smp_cpus = smp_cpus;
48
+ * encoding behaves identically to the T1 encoding.
47
49
+ */
48
- if (machine->ram_size > vms->memmap[VIRT_MEM].size) {
50
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
49
- error_report("mach-virt: cannot model more than %dGB RAM",
51
+ return false;
50
- LEGACY_RAMLIMIT_GB);
52
+ }
51
- exit(1);
53
+ } else {
52
- }
54
+ /*
53
-
55
+ * T1 encoding ({D0-D15} reglist); undef if we have 32 Dregs.
54
if (vms->virt && kvm_enabled()) {
56
+ * This is currently architecturally impossible, but we add the
55
error_report("mach-virt: KVM does not support providing "
57
+ * check to stay in line with the pseudocode. Note that we must
56
"Virtualization extensions to the guest CPU");
58
+ * emit code for the UNDEF so it takes precedence over the NOCP.
59
+ */
60
+ if (dc_isar_feature(aa32_simd_r32, s)) {
61
+ unallocated_encoding(s);
62
+ return true;
63
+ }
64
+ }
65
+
66
/*
67
* If not secure, UNDEF. We must emit code for this
68
* rather than returning false so that this takes
57
--
69
--
58
2.20.1
70
2.20.1
59
71
60
72
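
For the VLLDM/VLSTM change just above, the decode-time decision reduces to a
small predicate; here it is as a hedged standalone sketch (invented helper,
not the trans_VLLDM_VLSTM() function):

    #include <stdbool.h>

    /* Is this VLLDM/VLSTM encoding valid on the current core? */
    static bool vlldm_vlstm_encoding_valid(bool op_is_t2, bool have_v8_1m,
                                           bool have_32_dregs)
    {
        if (op_is_t2) {
            /* T2 ({D0-D31} register list) only exists from v8.1M on. */
            return have_v8_1m;
        }
        /* T1 UNDEFs if the implementation has 32 Dregs (not currently
         * possible for M-profile, but checked to match the pseudocode). */
        return !have_32_dregs;
    }
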
1
From: Eric Auger <eric.auger@redhat.com>
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
checking for stack frame integrity signatures on SG instructions.
3
This bit is not banked, and is always RAZ/WI to Non-secure code.
4
Adjust the code for handling CCR reads and writes to handle this.
2
5
3
In preparation for a split of the memory map into a static
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
part and a dynamic part floating after the RAM, let's rename the
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
regions located after the RAM
8
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
9
---
10
target/arm/cpu.h | 2 ++
11
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
12
2 files changed, 20 insertions(+), 8 deletions(-)
6
13
7
Signed-off-by: Eric Auger <eric.auger@redhat.com>
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
10
Message-id: 20190304101339.25970-3-eric.auger@redhat.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/hw/arm/virt.h | 8 ++++----
14
hw/arm/virt-acpi-build.c | 10 ++++++----
15
hw/arm/virt.c | 33 ++++++++++++++++++---------------
16
3 files changed, 28 insertions(+), 23 deletions(-)
17
18
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/virt.h
16
--- a/target/arm/cpu.h
21
+++ b/include/hw/arm/virt.h
17
+++ b/target/arm/cpu.h
22
@@ -XXX,XX +XXX,XX @@ enum {
18
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
23
VIRT_GIC_VCPU,
19
FIELD(V7M_CCR, DC, 16, 1)
24
VIRT_GIC_ITS,
20
FIELD(V7M_CCR, IC, 17, 1)
25
VIRT_GIC_REDIST,
21
FIELD(V7M_CCR, BP, 18, 1)
26
- VIRT_GIC_REDIST2,
22
+FIELD(V7M_CCR, LOB, 19, 1)
27
+ VIRT_HIGH_GIC_REDIST2,
23
+FIELD(V7M_CCR, TRD, 20, 1)
28
VIRT_SMMU,
24
29
VIRT_UART,
25
/* V7M SCR bits */
30
VIRT_MMIO,
26
FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
31
@@ -XXX,XX +XXX,XX @@ enum {
27
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
32
VIRT_PCIE_MMIO,
33
VIRT_PCIE_PIO,
34
VIRT_PCIE_ECAM,
35
- VIRT_PCIE_ECAM_HIGH,
36
+ VIRT_HIGH_PCIE_ECAM,
37
VIRT_PLATFORM_BUS,
38
- VIRT_PCIE_MMIO_HIGH,
39
+ VIRT_HIGH_PCIE_MMIO,
40
VIRT_GPIO,
41
VIRT_SECURE_UART,
42
VIRT_SECURE_MEM,
43
@@ -XXX,XX +XXX,XX @@ typedef struct {
44
int psci_conduit;
45
} VirtMachineState;
46
47
-#define VIRT_ECAM_ID(high) (high ? VIRT_PCIE_ECAM_HIGH : VIRT_PCIE_ECAM)
48
+#define VIRT_ECAM_ID(high) (high ? VIRT_HIGH_PCIE_ECAM : VIRT_PCIE_ECAM)
49
50
#define TYPE_VIRT_MACHINE MACHINE_TYPE_NAME("virt")
51
#define VIRT_MACHINE(obj) \
52
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
53
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/arm/virt-acpi-build.c
29
--- a/hw/intc/armv7m_nvic.c
55
+++ b/hw/arm/virt-acpi-build.c
30
+++ b/hw/intc/armv7m_nvic.c
56
@@ -XXX,XX +XXX,XX @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
31
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
57
size_pio));
58
59
if (use_highmem) {
60
- hwaddr base_mmio_high = memmap[VIRT_PCIE_MMIO_HIGH].base;
61
- hwaddr size_mmio_high = memmap[VIRT_PCIE_MMIO_HIGH].size;
62
+ hwaddr base_mmio_high = memmap[VIRT_HIGH_PCIE_MMIO].base;
63
+ hwaddr size_mmio_high = memmap[VIRT_HIGH_PCIE_MMIO].size;
64
65
aml_append(rbuf,
66
aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
67
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
68
gicr = acpi_data_push(table_data, sizeof(*gicr));
69
gicr->type = ACPI_APIC_GENERIC_REDISTRIBUTOR;
70
gicr->length = sizeof(*gicr);
71
- gicr->base_address = cpu_to_le64(memmap[VIRT_GIC_REDIST2].base);
72
- gicr->range_length = cpu_to_le32(memmap[VIRT_GIC_REDIST2].size);
73
+ gicr->base_address =
74
+ cpu_to_le64(memmap[VIRT_HIGH_GIC_REDIST2].base);
75
+ gicr->range_length =
76
+ cpu_to_le32(memmap[VIRT_HIGH_GIC_REDIST2].size);
77
}
32
}
78
33
return cpu->env.v7m.scr[attrs.secure];
79
if (its_class_name() && !vmc->no_its) {
34
case 0xd14: /* Configuration Control. */
80
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
35
- /* The BFHFNMIGN bit is the only non-banked bit; we
81
index XXXXXXX..XXXXXXX 100644
36
- * keep it in the non-secure copy of the register.
82
--- a/hw/arm/virt.c
37
+ /*
83
+++ b/hw/arm/virt.c
38
+ * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
84
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
39
+ * and TRD (stored in the S copy of the register)
85
[VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
40
*/
86
[VIRT_MEM] = { 0x40000000, RAMLIMIT_BYTES },
41
val = cpu->env.v7m.ccr[attrs.secure];
87
/* Additional 64 MB redist region (can contain up to 512 redistributors) */
42
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
88
- [VIRT_GIC_REDIST2] = { 0x4000000000ULL, 0x4000000 },
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
89
- [VIRT_PCIE_ECAM_HIGH] = { 0x4010000000ULL, 0x10000000 },
44
cpu->env.v7m.scr[attrs.secure] = value;
90
+ [VIRT_HIGH_GIC_REDIST2] = { 0x4000000000ULL, 0x4000000 },
45
break;
91
+ [VIRT_HIGH_PCIE_ECAM] = { 0x4010000000ULL, 0x10000000 },
46
case 0xd14: /* Configuration Control. */
92
/* Second PCIe window, 512GB wide at the 512GB boundary */
47
+ {
93
- [VIRT_PCIE_MMIO_HIGH] = { 0x8000000000ULL, 0x8000000000ULL },
48
+ uint32_t mask;
94
+ [VIRT_HIGH_PCIE_MMIO] = { 0x8000000000ULL, 0x8000000000ULL },
49
+
95
};
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
96
51
goto bad_offset;
97
static const int a15irqmap[] = {
98
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)
99
2, vms->memmap[VIRT_GIC_REDIST].size);
100
} else {
101
qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
102
- 2, vms->memmap[VIRT_GIC_DIST].base,
103
- 2, vms->memmap[VIRT_GIC_DIST].size,
104
- 2, vms->memmap[VIRT_GIC_REDIST].base,
105
- 2, vms->memmap[VIRT_GIC_REDIST].size,
106
- 2, vms->memmap[VIRT_GIC_REDIST2].base,
107
- 2, vms->memmap[VIRT_GIC_REDIST2].size);
108
+ 2, vms->memmap[VIRT_GIC_DIST].base,
109
+ 2, vms->memmap[VIRT_GIC_DIST].size,
110
+ 2, vms->memmap[VIRT_GIC_REDIST].base,
111
+ 2, vms->memmap[VIRT_GIC_REDIST].size,
112
+ 2, vms->memmap[VIRT_HIGH_GIC_REDIST2].base,
113
+ 2, vms->memmap[VIRT_HIGH_GIC_REDIST2].size);
114
}
52
}
115
53
116
if (vms->virt) {
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
117
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
118
56
- R_V7M_CCR_BFHFNMIGN_MASK |
119
if (nb_redist_regions == 2) {
57
- R_V7M_CCR_DIV_0_TRP_MASK |
120
uint32_t redist1_capacity =
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
121
- vms->memmap[VIRT_GIC_REDIST2].size / GICV3_REDIST_SIZE;
59
- R_V7M_CCR_USERSETMPEND_MASK |
122
+ vms->memmap[VIRT_HIGH_GIC_REDIST2].size / GICV3_REDIST_SIZE;
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
123
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
124
qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
125
MIN(smp_cpus - redist0_count, redist1_capacity));
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
126
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
127
if (type == 3) {
65
+ R_V7M_CCR_USERSETMPEND_MASK |
128
sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_REDIST].base);
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
129
if (nb_redist_regions == 2) {
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
130
- sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_REDIST2].base);
68
+ /* TRD is always RAZ/WI from NS */
131
+ sysbus_mmio_map(gicbusdev, 2,
69
+ mask |= R_V7M_CCR_TRD_MASK;
132
+ vms->memmap[VIRT_HIGH_GIC_REDIST2].base);
70
+ }
133
}
71
+ value &= mask;
134
} else {
72
135
sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
136
@@ -XXX,XX +XXX,XX @@ static void create_pcie(VirtMachineState *vms, qemu_irq *pic)
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
137
{
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
138
hwaddr base_mmio = vms->memmap[VIRT_PCIE_MMIO].base;
76
139
hwaddr size_mmio = vms->memmap[VIRT_PCIE_MMIO].size;
77
cpu->env.v7m.ccr[attrs.secure] = value;
140
- hwaddr base_mmio_high = vms->memmap[VIRT_PCIE_MMIO_HIGH].base;
78
break;
141
- hwaddr size_mmio_high = vms->memmap[VIRT_PCIE_MMIO_HIGH].size;
79
+ }
142
+ hwaddr base_mmio_high = vms->memmap[VIRT_HIGH_PCIE_MMIO].base;
80
case 0xd24: /* System Handler Control and State (SHCSR) */
143
+ hwaddr size_mmio_high = vms->memmap[VIRT_HIGH_PCIE_MMIO].size;
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
144
hwaddr base_pio = vms->memmap[VIRT_PCIE_PIO].base;
82
goto bad_offset;
145
hwaddr size_pio = vms->memmap[VIRT_PCIE_PIO].size;
146
hwaddr base_ecam, size_ecam;
147
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
148
* many redistributors we can fit into the memory map.
149
*/
150
if (vms->gic_version == 3) {
151
- virt_max_cpus = vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE;
152
- virt_max_cpus += vms->memmap[VIRT_GIC_REDIST2].size / GICV3_REDIST_SIZE;
153
+ virt_max_cpus =
154
+ vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE;
155
+ virt_max_cpus +=
156
+ vms->memmap[VIRT_HIGH_GIC_REDIST2].size / GICV3_REDIST_SIZE;
157
} else {
158
virt_max_cpus = GIC_NCPU;
159
}
160
--
83
--
161
2.20.1
84
2.20.1
162
85
163
86
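
For the CCR.TRD patch above, the interesting part is the writable-bits mask:
TRD joins the mask only for Secure accesses on a v8.1M core, so the
Non-secure view stays RAZ/WI. A minimal sketch, with bit positions taken from
the patch and base_mask standing in for the other always-writable CCR bits:

    #include <stdbool.h>
    #include <stdint.h>

    #define CCR_BFHFNMIGN (1u << 8)
    #define CCR_TRD       (1u << 20)

    static uint32_t ccr_writable_mask(bool have_v8_1m, bool secure_access,
                                      uint32_t base_mask)
    {
        uint32_t mask = base_mask | CCR_BFHFNMIGN;

        if (have_v8_1m && secure_access) {
            mask |= CCR_TRD;   /* TRD is RAZ/WI from the Non-secure view */
        }
        return mask;
    }
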
1
From: Eric Auger <eric.auger@redhat.com>
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
checking for stack frame integrity signatures on SG instructions.
3
Add the code in the SG insn implementation for the new behaviour.
2
4
3
We are about to allow the memory map to grow beyond 1TB and
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
potentially overshoot the VCPU AA64MMFR0.PARANGE.
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
8
---
9
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
10
1 file changed, 86 insertions(+)
5
11
6
In aarch64 mode and when highmem is set, let's check the VCPU
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
7
PA range is sufficient to address the highest GPA of the memory
8
map.
9
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
12
Message-id: 20190304101339.25970-10-eric.auger@redhat.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
hw/arm/virt.c | 17 +++++++++++++++++
16
1 file changed, 17 insertions(+)
17
18
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
19
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/virt.c
14
--- a/target/arm/m_helper.c
21
+++ b/hw/arm/virt.c
15
+++ b/target/arm/m_helper.c
22
@@ -XXX,XX +XXX,XX @@
16
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
23
#include "standard-headers/linux/input.h"
17
return true;
24
#include "hw/arm/smmuv3.h"
18
}
25
#include "hw/acpi/acpi.h"
19
26
+#include "target/arm/internals.h"
20
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
27
21
+ uint32_t addr, uint32_t *spdata)
28
#define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
22
+{
29
static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
23
+ /*
30
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
24
+ * Read a word of data from the stack for the SG instruction,
31
fdt_add_timer_nodes(vms);
25
+ * writing the value into *spdata. If the load succeeds, return
32
fdt_add_cpu_nodes(vms);
26
+ * true; otherwise pend an appropriate exception and return false.
33
27
+ * (We can't use data load helpers here that throw an exception
34
+ if (!kvm_enabled()) {
28
+ * because of the context we're called in, which is halfway through
35
+ ARMCPU *cpu = ARM_CPU(first_cpu);
29
+ * arm_v7m_cpu_do_interrupt().)
36
+ bool aarch64 = object_property_get_bool(OBJECT(cpu), "aarch64", NULL);
30
+ */
31
+ CPUState *cs = CPU(cpu);
32
+ CPUARMState *env = &cpu->env;
33
+ MemTxAttrs attrs = {};
34
+ MemTxResult txres;
35
+ target_ulong page_size;
36
+ hwaddr physaddr;
37
+ int prot;
38
+ ARMMMUFaultInfo fi = {};
39
+ ARMCacheAttrs cacheattrs = {};
40
+ uint32_t value;
37
+
41
+
38
+ if (aarch64 && vms->highmem) {
42
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
39
+ int requested_pa_size, pamax = arm_pamax(cpu);
43
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
44
+ /* MPU/SAU lookup failed */
45
+ if (fi.type == ARMFault_QEMU_SFault) {
46
+ qemu_log_mask(CPU_LOG_INT,
47
+ "...SecureFault during stack word read\n");
48
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
49
+ env->v7m.sfar = addr;
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+ } else {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...MemManageFault during stack word read\n");
54
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
55
+ R_V7M_CFSR_MMARVALID_MASK;
56
+ env->v7m.mmfar[M_REG_S] = addr;
57
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
58
+ }
59
+ return false;
60
+ }
61
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
62
+ attrs, &txres);
63
+ if (txres != MEMTX_OK) {
64
+ /* BusFault trying to read the data */
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...BusFault during stack word read\n");
67
+ env->v7m.cfsr[M_REG_NS] |=
68
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
69
+ env->v7m.bfar = addr;
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
71
+ return false;
72
+ }
40
+
73
+
41
+ requested_pa_size = 64 - clz64(vms->highest_gpa);
74
+ *spdata = value;
42
+ if (pamax < requested_pa_size) {
75
+ return true;
43
+ error_report("VCPU supports less PA bits (%d) than requested "
76
+}
44
+ "by the memory map (%d)", pamax, requested_pa_size);
77
+
45
+ exit(1);
78
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
79
{
80
/*
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
82
*/
83
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
84
", executing it\n", env->regs[15]);
85
+
86
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
87
+ !arm_v7m_is_handler_mode(env)) {
88
+ /*
89
+ * v8.1M exception stack frame integrity check. Note that we
90
+ * must perform the memory access even if CCR_S.TRD is zero
91
+ * and we aren't going to check what the data loaded is.
92
+ */
93
+ uint32_t spdata, sp;
94
+
95
+ /*
96
+ * We know we are currently NS, so the S stack pointers must be
97
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
98
+ */
99
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
100
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
101
+ /* Stack access failed and an exception has been pended */
102
+ return false;
103
+ }
104
+
105
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
106
+ if (((spdata & ~1) == 0xfefa125a) ||
107
+ !(env->v7m.control[M_REG_S] & 1)) {
108
+ goto gen_invep;
46
+ }
109
+ }
47
+ }
110
+ }
48
+ }
111
+ }
49
+
112
+
50
memory_region_allocate_system_memory(ram, NULL, "mach-virt.ram",
113
env->regs[14] &= ~1;
51
machine->ram_size);
114
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
52
memory_region_add_subregion(sysmem, vms->memmap[VIRT_MEM].base, ram);
115
switch_v7m_security_state(env, true);
53
--
116
--
54
2.20.1
117
2.20.1
55
118
56
119
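
For the SG patch above, once the stack word has been read the check itself is
a single predicate; the sketch below mirrors the condition in the hunk
literally (invented function name; 0xfefa125a is the integrity-signature
value used there):

    #include <stdbool.h>
    #include <stdint.h>

    /* true if the SG must be treated as an invalid entry point */
    static bool sg_entry_blocked(bool ccr_s_trd, uint32_t spdata,
                                 uint32_t control_s)
    {
        if (!ccr_s_trd) {
            return false;                 /* CCR_S.TRD clear: no check */
        }
        /* Blocked when the stack word carries the integrity signature,
         * or when bit 0 of CONTROL_S is clear, as in the hunk above. */
        return ((spdata & ~1u) == 0xfefa125a) || !(control_s & 1u);
    }
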
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In commit 077d7449100d824a4 we added code to handle the v8M
2
requirement that returns from NMI or HardFault forcibly deactivate
3
those exceptions regardless of what interrupt the guest is trying to
4
deactivate. Unfortunately this broke the handling of the "illegal
5
exception return because the returning exception number is not
6
active" check for those cases. In the pseudocode this test is done
7
on the exception the guest asks to return from, but because our
8
implementation was doing this in armv7m_nvic_complete_irq() after the
9
new "deactivate NMI/HardFault regardless" code we ended up doing the
10
test on the VecInfo for that exception instead, which usually meant
11
failing to raise the illegal exception return fault.
2
12
3
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
13
In the case for "configurable exception targeting the opposite
4
check by nature of already checking EL >= 1, the other post v8.0
14
security state" we detected the illegal-return case but went ahead
5
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
15
and deactivated the VecInfo anyway, which is wrong because that is
6
the unconditional write to pc and use raise_exception_ra to unwind.
16
the VecInfo for the other security state.
7
17
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
18
Rearrange the code so that we first identify the illegal return
9
Message-id: 20190301200501.16533-5-richard.henderson@linaro.org
19
cases, then see if we really need to deactivate NMI or HardFault
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
instead, and finally do the deactivation.
21
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
12
---
25
---
13
target/arm/helper-a64.h | 3 +++
26
hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
14
target/arm/helper.h | 1 -
27
1 file changed, 32 insertions(+), 27 deletions(-)
15
target/arm/internals.h | 15 ++++++++++++++
16
target/arm/helper-a64.c | 30 +++++++++++++++++++++++++++
17
target/arm/op_helper.c | 42 --------------------------------------
18
target/arm/translate-a64.c | 41 ++++++++++++++++++++++---------------
19
6 files changed, 73 insertions(+), 59 deletions(-)
20
28
21
diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
22
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper-a64.h
31
--- a/hw/intc/armv7m_nvic.c
24
+++ b/target/arm/helper-a64.h
32
+++ b/hw/intc/armv7m_nvic.c
25
@@ -XXX,XX +XXX,XX @@
33
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
26
DEF_HELPER_FLAGS_2(udiv64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
34
{
27
DEF_HELPER_FLAGS_2(sdiv64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
35
NVICState *s = (NVICState *)opaque;
28
DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
36
VecInfo *vec = NULL;
29
+DEF_HELPER_2(msr_i_spsel, void, env, i32)
37
- int ret;
30
+DEF_HELPER_2(msr_i_daifset, void, env, i32)
38
+ int ret = 0;
31
+DEF_HELPER_2(msr_i_daifclear, void, env, i32)
39
32
DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
40
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
33
DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
41
34
DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
42
+ trace_nvic_complete_irq(irq, secure);
35
diff --git a/target/arm/helper.h b/target/arm/helper.h
43
+
36
index XXXXXXX..XXXXXXX 100644
44
+ if (secure && exc_is_banked(irq)) {
37
--- a/target/arm/helper.h
45
+ vec = &s->sec_vectors[irq];
38
+++ b/target/arm/helper.h
46
+ } else {
39
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(get_cp_reg, i32, env, ptr)
47
+ vec = &s->vectors[irq];
40
DEF_HELPER_3(set_cp_reg64, void, env, ptr, i64)
48
+ }
41
DEF_HELPER_2(get_cp_reg64, i64, env, ptr)
42
43
-DEF_HELPER_3(msr_i_pstate, void, env, i32, i32)
44
DEF_HELPER_1(clear_pstate_ss, void, env)
45
46
DEF_HELPER_2(get_r13_banked, i32, env, i32)
47
diff --git a/target/arm/internals.h b/target/arm/internals.h
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/internals.h
50
+++ b/target/arm/internals.h
51
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
52
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
53
ARMMMUIdx mmu_idx, bool data);
54
55
+static inline int exception_target_el(CPUARMState *env)
56
+{
57
+ int target_el = MAX(1, arm_current_el(env));
58
+
49
+
59
+ /*
50
+ /*
60
+ * No such thing as secure EL1 if EL3 is aarch32,
51
+ * Identify illegal exception return cases. We can't immediately
61
+ * so update the target EL to EL3 in this case.
52
+ * return at this point because we still need to deactivate
53
+ * (either this exception or NMI/HardFault) first.
62
+ */
54
+ */
63
+ if (arm_is_secure(env) && !arm_el_is_aa64(env, 3) && target_el == 1) {
55
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
64
+ target_el = 3;
56
+ /*
57
+ * Return from a configurable exception targeting the opposite
58
+ * security state from the one we're trying to complete it for.
59
+ * Clear vec because it's not really the VecInfo for this
60
+ * (irq, secstate) so we mustn't deactivate it.
61
+ */
62
+ ret = -1;
63
+ vec = NULL;
64
+ } else if (!vec->active) {
65
+ /* Return from an inactive interrupt */
66
+ ret = -1;
67
+ } else {
68
+ /* Legal return, we will return the RETTOBASE bit value to the caller */
69
+ ret = nvic_rettobase(s);
65
+ }
70
+ }
66
+
71
+
67
+ return target_el;
72
/*
68
+}
73
* For negative priorities, v8M will forcibly deactivate the appropriate
69
+
74
* NMI or HardFault regardless of what interrupt we're being asked to
70
#endif
75
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
71
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
76
}
72
index XXXXXXX..XXXXXXX 100644
77
73
--- a/target/arm/helper-a64.c
78
if (!vec) {
74
+++ b/target/arm/helper-a64.c
79
- if (secure && exc_is_banked(irq)) {
75
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(rbit64)(uint64_t x)
80
- vec = &s->sec_vectors[irq];
76
return revbit64(x);
81
- } else {
77
}
82
- vec = &s->vectors[irq];
78
83
- }
79
+void HELPER(msr_i_spsel)(CPUARMState *env, uint32_t imm)
80
+{
81
+ update_spsel(env, imm);
82
+}
83
+
84
+static void daif_check(CPUARMState *env, uint32_t op,
85
+ uint32_t imm, uintptr_t ra)
86
+{
87
+ /* DAIF update to PSTATE. This is OK from EL0 only if UMA is set. */
88
+ if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
89
+ raise_exception_ra(env, EXCP_UDEF,
90
+ syn_aa64_sysregtrap(0, extract32(op, 0, 3),
91
+ extract32(op, 3, 3), 4,
92
+ imm, 0x1f, 0),
93
+ exception_target_el(env), ra);
94
+ }
95
+}
96
+
97
+void HELPER(msr_i_daifset)(CPUARMState *env, uint32_t imm)
98
+{
99
+ daif_check(env, 0x1e, imm, GETPC());
100
+ env->daif |= (imm << 6) & PSTATE_DAIF;
101
+}
102
+
103
+void HELPER(msr_i_daifclear)(CPUARMState *env, uint32_t imm)
104
+{
105
+ daif_check(env, 0x1f, imm, GETPC());
106
+ env->daif &= ~((imm << 6) & PSTATE_DAIF);
107
+}
108
+
109
/* Convert a softfloat float_relation_ (as returned by
110
* the float*_compare functions) to the correct ARM
111
* NZCV flag state.
112
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
113
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/op_helper.c
115
+++ b/target/arm/op_helper.c
116
@@ -XXX,XX +XXX,XX @@ void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
117
cpu_loop_exit_restore(cs, ra);
118
}
119
120
-static int exception_target_el(CPUARMState *env)
121
-{
122
- int target_el = MAX(1, arm_current_el(env));
123
-
124
- /* No such thing as secure EL1 if EL3 is aarch32, so update the target EL
125
- * to EL3 in this case.
126
- */
127
- if (arm_is_secure(env) && !arm_el_is_aa64(env, 3) && target_el == 1) {
128
- target_el = 3;
129
- }
84
- }
130
-
85
-
131
- return target_el;
86
- trace_nvic_complete_irq(irq, secure);
132
-}
133
-
87
-
134
uint32_t HELPER(neon_tbl)(uint32_t ireg, uint32_t def, void *vn,
88
- if (!vec->active) {
135
uint32_t maxindex)
89
- /* Tell the caller this was an illegal exception return */
136
{
90
- return -1;
137
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(get_cp_reg64)(CPUARMState *env, void *rip)
138
return res;
139
}
140
141
-void HELPER(msr_i_pstate)(CPUARMState *env, uint32_t op, uint32_t imm)
142
-{
143
- /* MSR_i to update PSTATE. This is OK from EL0 only if UMA is set.
144
- * Note that SPSel is never OK from EL0; we rely on handle_msr_i()
145
- * to catch that case at translate time.
146
- */
147
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
148
- uint32_t syndrome = syn_aa64_sysregtrap(0, extract32(op, 0, 3),
149
- extract32(op, 3, 3), 4,
150
- imm, 0x1f, 0);
151
- raise_exception(env, EXCP_UDEF, syndrome, exception_target_el(env));
152
- }
91
- }
153
-
92
-
154
- switch (op) {
93
- /*
155
- case 0x05: /* SPSel */
94
- * If this is a configurable exception and it is currently
156
- update_spsel(env, imm);
95
- * targeting the opposite security state from the one we're trying
157
- break;
96
- * to complete it for, this counts as an illegal exception return.
158
- case 0x1e: /* DAIFSet */
97
- * We still need to deactivate whatever vector the logic above has
159
- env->daif |= (imm << 6) & PSTATE_DAIF;
98
- * selected, though, as it might not be the same as the one for the
160
- break;
99
- * requested exception number.
161
- case 0x1f: /* DAIFClear */
100
- */
162
- env->daif &= ~((imm << 6) & PSTATE_DAIF);
101
- if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
163
- break;
102
- ret = -1;
164
- default:
103
- } else {
165
- g_assert_not_reached();
104
- ret = nvic_rettobase(s);
166
- }
105
+ return ret;
167
-}
168
-
169
void HELPER(clear_pstate_ss)(CPUARMState *env)
170
{
171
env->pstate &= ~PSTATE_SS;
172
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/target/arm/translate-a64.c
175
+++ b/target/arm/translate-a64.c
176
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
177
static void handle_msr_i(DisasContext *s, uint32_t insn,
178
unsigned int op1, unsigned int op2, unsigned int crm)
179
{
180
+ TCGv_i32 t1;
181
int op = op1 << 3 | op2;
182
+
183
+ /* End the TB by default, chaining is ok. */
184
+ s->base.is_jmp = DISAS_TOO_MANY;
185
+
186
switch (op) {
187
case 0x05: /* SPSel */
188
if (s->current_el == 0) {
189
- unallocated_encoding(s);
190
- return;
191
+ goto do_unallocated;
192
}
193
- /* fall through */
194
- case 0x1e: /* DAIFSet */
195
- case 0x1f: /* DAIFClear */
196
- {
197
- TCGv_i32 tcg_imm = tcg_const_i32(crm);
198
- TCGv_i32 tcg_op = tcg_const_i32(op);
199
- gen_a64_set_pc_im(s->pc - 4);
200
- gen_helper_msr_i_pstate(cpu_env, tcg_op, tcg_imm);
201
- tcg_temp_free_i32(tcg_imm);
202
- tcg_temp_free_i32(tcg_op);
203
- /* For DAIFClear, exit the cpu loop to re-evaluate pending IRQs. */
204
- gen_a64_set_pc_im(s->pc);
205
- s->base.is_jmp = (op == 0x1f ? DISAS_EXIT : DISAS_JUMP);
206
+ t1 = tcg_const_i32(crm & PSTATE_SP);
207
+ gen_helper_msr_i_spsel(cpu_env, t1);
208
+ tcg_temp_free_i32(t1);
209
break;
210
- }
211
+
212
+ case 0x1e: /* DAIFSet */
213
+ t1 = tcg_const_i32(crm);
214
+ gen_helper_msr_i_daifset(cpu_env, t1);
215
+ tcg_temp_free_i32(t1);
216
+ break;
217
+
218
+ case 0x1f: /* DAIFClear */
219
+ t1 = tcg_const_i32(crm);
220
+ gen_helper_msr_i_daifclear(cpu_env, t1);
221
+ tcg_temp_free_i32(t1);
222
+ /* For DAIFClear, exit the cpu loop to re-evaluate pending IRQs. */
223
+ s->base.is_jmp = DISAS_UPDATE;
224
+ break;
225
+
226
default:
227
+ do_unallocated:
228
unallocated_encoding(s);
229
return;
230
}
106
}
107
108
vec->active = 0;
231
--
109
--
232
2.20.1
110
2.20.1
233
111
234
112
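
The armv7m_nvic_complete_irq() fix above is essentially an ordering change;
reduced to its control flow it looks like the sketch below (types and names
invented, and the NMI/HardFault redirection and the actual deactivation are
deliberately omitted):

    #include <stdbool.h>

    /*
     * Classify an exception return before anything is deactivated.
     * Returns -1 for an illegal return; *may_deactivate tells the caller
     * whether the VecInfo it looked up is safe to deactivate at all.
     */
    static int classify_exception_return(bool banked, bool targets_this_state,
                                         bool vec_active, bool rettobase,
                                         bool *may_deactivate)
    {
        *may_deactivate = true;

        if (!banked && !targets_this_state) {
            /* Configurable exception for the other security state:
             * illegal return, and not our VecInfo to deactivate. */
            *may_deactivate = false;
            return -1;
        }
        if (!vec_active) {
            return -1;   /* return from an inactive exception */
        }
        return rettobase ? 1 : 0;
    }
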
1
From: Richard Henderson <richard.henderson@linaro.org>
1
For v8.1M the architecture mandates that CPUs must provide at
2
least the "minimal RAS implementation" from the Reliability,
3
Availability and Serviceability extension. This consists of:
4
* an ESB instruction which is a NOP
5
-- since it is in the HINT space we need only add a comment
6
* an RFSR register which will RAZ/WI
7
* a RAZ/WI AIRCR.IESB bit
8
-- the code which handles writes to AIRCR does not allow setting
9
of RES0 bits, so we already treat this as RAZ/WI; add a comment
10
noting that this is deliberate
11
* minimal implementation of the RAS register block at 0xe0005000
12
-- this will be in a subsequent commit
13
* setting the ID_PFR0.RAS field to 0b0010
14
-- we will do this when we add the Cortex-M55 CPU model
2
15
3
Minimize the number of places that will need updating when
4
the virtual host extensions are added.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190301200501.16533-2-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
10
---
19
---
11
target/arm/cpu.h | 26 ++++++++++++++++----------
20
target/arm/cpu.h | 14 ++++++++++++++
12
target/arm/helper.c | 8 ++------
21
target/arm/t32.decode | 4 ++++
13
2 files changed, 18 insertions(+), 16 deletions(-)
22
hw/intc/armv7m_nvic.c | 13 +++++++++++++
23
3 files changed, 31 insertions(+)
14
24
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
27
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool arm_sctlr_b(CPUARMState *env)
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
20
(env->cp15.sctlr_el[1] & SCTLR_B) != 0;
30
FIELD(ID_MMFR4, CCIDX, 24, 4)
31
FIELD(ID_MMFR4, EVT, 28, 4)
32
33
+FIELD(ID_PFR0, STATE0, 0, 4)
34
+FIELD(ID_PFR0, STATE1, 4, 4)
35
+FIELD(ID_PFR0, STATE2, 8, 4)
36
+FIELD(ID_PFR0, STATE3, 12, 4)
37
+FIELD(ID_PFR0, CSV2, 16, 4)
38
+FIELD(ID_PFR0, AMU, 20, 4)
39
+FIELD(ID_PFR0, DIT, 24, 4)
40
+FIELD(ID_PFR0, RAS, 28, 4)
41
+
42
FIELD(ID_PFR1, PROGMOD, 0, 4)
43
FIELD(ID_PFR1, SECURITY, 4, 4)
44
FIELD(ID_PFR1, MPROGMOD, 8, 4)
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
46
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
21
}
47
}
22
48
23
+static inline uint64_t arm_sctlr(CPUARMState *env, int el)
49
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
24
+{
50
+{
25
+ if (el == 0) {
51
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
26
+ /* FIXME: ARMv8.1-VHE S2 translation regime. */
27
+ return env->cp15.sctlr_el[1];
28
+ } else {
29
+ return env->cp15.sctlr_el[el];
30
+ }
31
+}
52
+}
32
+
53
+
54
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
55
{
56
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
57
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/t32.decode
60
+++ b/target/arm/t32.decode
61
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
62
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
63
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
64
65
+ # For M-profile minimal-RAS ESB can be a NOP, which is the
66
+ # default behaviour since it is in the hint space.
67
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
33
+
68
+
34
/* Return true if the processor is in big-endian mode. */
69
# The canonical nop ends in 0000 0000, but the whole rest
35
static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
70
# of the space is "reserved hint, behaves as nop".
36
{
71
NOP 1111 0011 1010 1111 1000 0000 ---- ----
37
- int cur_el;
72
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
38
-
73
index XXXXXXX..XXXXXXX 100644
39
/* In 32bit endianness is determined by looking at CPSR's E bit */
74
--- a/hw/intc/armv7m_nvic.c
40
if (!is_a64(env)) {
75
+++ b/hw/intc/armv7m_nvic.c
41
return
76
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
42
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
77
return 0;
43
arm_sctlr_b(env) ||
78
}
44
#endif
79
return cpu->env.v7m.sfar;
45
((env->uncached_cpsr & CPSR_E) ? 1 : 0);
80
+ case 0xf04: /* RFSR */
46
+ } else {
81
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
47
+ int cur_el = arm_current_el(env);
82
+ goto bad_offset;
48
+ uint64_t sctlr = arm_sctlr(env, cur_el);
83
+ }
49
+
84
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
50
+ return (sctlr & (cur_el ? SCTLR_EE : SCTLR_E0E)) != 0;
85
+ return 0;
86
case 0xf34: /* FPCCR */
87
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
88
return 0;
89
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
90
R_V7M_AIRCR_PRIGROUP_SHIFT,
91
R_V7M_AIRCR_PRIGROUP_LENGTH);
92
}
93
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
94
if (attrs.secure) {
95
/* These bits are only writable by secure */
96
cpu->env.v7m.aircr = value &
97
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
98
}
99
break;
51
}
100
}
52
-
101
+ case 0xf04: /* RFSR */
53
- cur_el = arm_current_el(env);
102
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
54
-
103
+ goto bad_offset;
55
- if (cur_el == 0) {
104
+ }
56
- return (env->cp15.sctlr_el[1] & SCTLR_E0E) != 0;
105
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
57
- }
106
+ break;
58
-
107
case 0xf34: /* FPCCR */
59
- return (env->cp15.sctlr_el[cur_el] & SCTLR_EE) != 0;
108
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
60
}
109
/* Not all bits here are banked. */
61
62
#include "exec/cpu-all.h"
63
diff --git a/target/arm/helper.c b/target/arm/helper.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/helper.c
66
+++ b/target/arm/helper.c
67
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
68
flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
69
}
70
71
- if (current_el == 0) {
72
- /* FIXME: ARMv8.1-VHE S2 translation regime. */
73
- sctlr = env->cp15.sctlr_el[1];
74
- } else {
75
- sctlr = env->cp15.sctlr_el[current_el];
76
- }
77
+ sctlr = arm_sctlr(env, current_el);
78
+
79
if (cpu_isar_feature(aa64_pauth, cpu)) {
80
/*
81
* In order to save space in flags, we record only whether
82
--
110
--
83
2.20.1
111
2.20.1
84
112
85
113
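
The "minimal RAS" pieces above mostly reduce to feature-gated RAZ/WI
behaviour; here is a standalone sketch of the RFSR handling (invented
helpers, not the NVIC read/write functions):

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns false if the offset should be treated as reserved. */
    static bool rfsr_read(bool have_ras, uint32_t *val)
    {
        if (!have_ras) {
            return false;
        }
        *val = 0;          /* RAZ: minimal RAS has nothing to report */
        return true;
    }

    static bool rfsr_write(bool have_ras, uint32_t val)
    {
        (void)val;         /* WI: the written value is discarded */
        return have_ras;
    }
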
1
From: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
1
The RAS feature has a block of memory-mapped registers at offset
2
0x5000 within the PPB. For a "minimal RAS" implementation we provide
3
no error records and so the only registers that exist in the block
4
are ERRIIDR and ERRDEVID.
2
5
3
We introduce an helper to create a memory node.
6
The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
7
of the "nvic-default" region is actually valid for minimal-RAS,
8
so the main benefit of providing an explicit implementation of
9
the register block is more accurate LOG_UNIMP messages, and a
10
framework for where we could add a real RAS implementation later
11
if necessary.
4
12
5
Signed-off-by: Eric Auger <eric.auger@redhat.com>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
7
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190304101339.25970-2-eric.auger@redhat.com
15
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
16
---
12
hw/arm/boot.c | 54 ++++++++++++++++++++++++++++++++-------------------
17
include/hw/intc/armv7m_nvic.h | 1 +
13
1 file changed, 34 insertions(+), 20 deletions(-)
18
hw/intc/armv7m_nvic.c | 56 +++++++++++++++++++++++++++++++++++
19
2 files changed, 57 insertions(+)
14
20
15
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
21
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
16
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/boot.c
23
--- a/include/hw/intc/armv7m_nvic.h
18
+++ b/hw/arm/boot.c
24
+++ b/include/hw/intc/armv7m_nvic.h
19
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args_old(const struct arm_boot_info *info,
25
@@ -XXX,XX +XXX,XX @@ struct NVICState {
20
}
26
MemoryRegion sysreg_ns_mem;
21
}
27
MemoryRegion systickmem;
22
28
MemoryRegion systick_ns_mem;
23
+static int fdt_add_memory_node(void *fdt, uint32_t acells, hwaddr mem_base,
29
+ MemoryRegion ras_mem;
24
+ uint32_t scells, hwaddr mem_len,
30
MemoryRegion container;
25
+ int numa_node_id)
31
MemoryRegion defaultmem;
32
33
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/armv7m_nvic.c
36
+++ b/hw/intc/armv7m_nvic.c
37
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
38
.endianness = DEVICE_NATIVE_ENDIAN,
39
};
40
41
+
42
+static MemTxResult ras_read(void *opaque, hwaddr addr,
43
+ uint64_t *data, unsigned size,
44
+ MemTxAttrs attrs)
26
+{
45
+{
27
+ char *nodename;
46
+ if (attrs.user) {
28
+ int ret;
47
+ return MEMTX_ERROR;
29
+
30
+ nodename = g_strdup_printf("/memory@%" PRIx64, mem_base);
31
+ qemu_fdt_add_subnode(fdt, nodename);
32
+ qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
33
+ ret = qemu_fdt_setprop_sized_cells(fdt, nodename, "reg", acells, mem_base,
34
+ scells, mem_len);
35
+ if (ret < 0) {
36
+ goto out;
37
+ }
48
+ }
38
+
49
+
39
+ /* only set the NUMA ID if it is specified */
50
+ switch (addr) {
40
+ if (numa_node_id >= 0) {
51
+ case 0xe10: /* ERRIIDR */
41
+ ret = qemu_fdt_setprop_cell(fdt, nodename,
52
+ /* architect field = Arm; product/variant/revision 0 */
42
+ "numa-node-id", numa_node_id);
53
+ *data = 0x43b;
54
+ break;
55
+ case 0xfc8: /* ERRDEVID */
56
+ /* Minimal RAS: we implement 0 error record indexes */
57
+ *data = 0;
58
+ break;
59
+ default:
60
+ qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
61
+ (uint32_t)addr);
62
+ *data = 0;
63
+ break;
43
+ }
64
+ }
44
+out:
65
+ return MEMTX_OK;
45
+ g_free(nodename);
46
+ return ret;
47
+}
66
+}
48
+
67
+
49
static void fdt_add_psci_node(void *fdt)
68
+static MemTxResult ras_write(void *opaque, hwaddr addr,
50
{
69
+ uint64_t value, unsigned size,
51
uint32_t cpu_suspend_fn;
70
+ MemTxAttrs attrs)
52
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
71
+{
53
void *fdt = NULL;
72
+ if (attrs.user) {
54
int size, rc, n = 0;
73
+ return MEMTX_ERROR;
55
uint32_t acells, scells;
74
+ }
56
- char *nodename;
75
+
57
unsigned int i;
76
+ switch (addr) {
58
hwaddr mem_base, mem_len;
77
+ default:
59
char **node_path;
78
+ qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
60
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
79
+ (uint32_t)addr);
61
mem_base = binfo->loader_start;
80
+ break;
62
for (i = 0; i < nb_numa_nodes; i++) {
81
+ }
63
mem_len = numa_info[i].node_mem;
82
+ return MEMTX_OK;
64
- nodename = g_strdup_printf("/memory@%" PRIx64, mem_base);
83
+}
65
- qemu_fdt_add_subnode(fdt, nodename);
84
+
66
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
85
+static const MemoryRegionOps ras_ops = {
67
- rc = qemu_fdt_setprop_sized_cells(fdt, nodename, "reg",
86
+ .read_with_attrs = ras_read,
68
- acells, mem_base,
87
+ .write_with_attrs = ras_write,
69
- scells, mem_len);
88
+ .endianness = DEVICE_NATIVE_ENDIAN,
70
+ rc = fdt_add_memory_node(fdt, acells, mem_base,
89
+};
71
+ scells, mem_len, i);
90
+
72
if (rc < 0) {
91
/*
73
- fprintf(stderr, "couldn't set %s/reg for node %d\n", nodename,
92
* Unassigned portions of the PPB space are RAZ/WI for privileged
74
- i);
93
* accesses, and fault for non-privileged accesses.
75
+ fprintf(stderr, "couldn't add /memory@%"PRIx64" node\n",
94
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
76
+ mem_base);
95
&s->systick_ns_mem, 1);
77
goto fail;
78
}
79
80
- qemu_fdt_setprop_cell(fdt, nodename, "numa-node-id", i);
81
mem_base += mem_len;
82
- g_free(nodename);
83
}
84
} else {
85
- nodename = g_strdup_printf("/memory@%" PRIx64, binfo->loader_start);
86
- qemu_fdt_add_subnode(fdt, nodename);
87
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
88
-
89
- rc = qemu_fdt_setprop_sized_cells(fdt, nodename, "reg",
90
- acells, binfo->loader_start,
91
- scells, binfo->ram_size);
92
+ rc = fdt_add_memory_node(fdt, acells, binfo->loader_start,
93
+ scells, binfo->ram_size, -1);
94
if (rc < 0) {
95
- fprintf(stderr, "couldn't set %s reg\n", nodename);
96
+ fprintf(stderr, "couldn't add /memory@%"PRIx64" node\n",
97
+ binfo->loader_start);
98
goto fail;
99
}
100
- g_free(nodename);
101
}
96
}
102
97
103
rc = fdt_path_offset(fdt, "/chosen");
98
+ if (cpu_isar_feature(aa32_ras, s->cpu)) {
99
+ memory_region_init_io(&s->ras_mem, OBJECT(s),
100
+ &ras_ops, s, "nvic_ras", 0x1000);
101
+ memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
102
+ }
103
+
104
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
105
}
106
104
--
107
--
105
2.20.1
108
2.20.1
106
109
107
110
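
For the RAS register block above, the read side can be modelled as below:
privileged reads of the two implemented registers, RAZ for the rest of the
block, and an error for unprivileged accesses (standalone sketch, not the
MemoryRegionOps callback):

    #include <stdbool.h>
    #include <stdint.h>

    static bool ras_block_read(uint32_t offset, bool privileged,
                               uint32_t *data)
    {
        if (!privileged) {
            return false;          /* unprivileged: report a bus error */
        }
        switch (offset) {
        case 0xe10:                /* ERRIIDR: architect = Arm, rest zero */
            *data = 0x43b;
            break;
        case 0xfc8:                /* ERRDEVID: zero error record indexes */
            *data = 0;
            break;
        default:                   /* RAZ elsewhere in the block */
            *data = 0;
            break;
        }
        return true;
    }
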
Correct a typo in the name we give the NVIC object.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
---
hw/arm/armv7m.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)

memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);

- object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
+ object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
object_property_add_alias(obj, "num-irq",
OBJECT(&s->nvic), "num-irq");

--
2.20.1