Hi; this mostly contains the first slice of A64 decodetree
patches, plus some other minor pieces. It also has the
enablement of MTE for KVM guests.

thanks
-- PMM

The following changes since commit d27e7c359330ba7020bdbed7ed2316cb4cf6ffc1:

  qapi/parser: Drop two bad type hints for now (2023-05-17 10:18:33 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230518

for you to fetch changes up to 91608e2a44f36e79cb83f863b8a7bb57d2c98061:

  docs: Convert u2f.txt to rST (2023-05-18 11:40:32 +0100)

----------------------------------------------------------------
target-arm queue:
 * Fix vd == vm overlap in sve_ldff1_z
 * Add support for MTE with KVM guests
 * Add RAZ/WI handling for DBGDTR[TX|RX]
 * Start of conversion of A64 decoder to decodetree
 * Saturate L2CTLR_EL1 core count field rather than overflowing
 * vexpress: Avoid trivial memory leak of 'flashalias'
 * sbsa-ref: switch default cpu core to Neoverse-N1
 * sbsa-ref: use Bochs graphics card instead of VGA
 * MAINTAINERS: Add Marcin Juszkiewicz to sbsa-ref reviewer list
 * docs: Convert u2f.txt to rST

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: add RAZ/WI handling for DBGDTR[TX|RX]

Cornelia Huck (1):
      arm/kvm: add support for MTE

Marcin Juszkiewicz (3):
      sbsa-ref: switch default cpu core to Neoverse-N1
      Maintainers: add myself as reviewer for sbsa-ref
      sbsa-ref: use Bochs graphics card instead of VGA

Peter Maydell (14):
      target/arm: Create decodetree skeleton for A64
      target/arm: Pull calls to disas_sve() and disas_sme() out of legacy decoder
      target/arm: Convert Extract instructions to decodetree
      target/arm: Convert unconditional branch immediate to decodetree
      target/arm: Convert CBZ, CBNZ to decodetree
      target/arm: Convert TBZ, TBNZ to decodetree
      target/arm: Convert conditional branch insns to decodetree
      target/arm: Convert BR, BLR, RET to decodetree
      target/arm: Convert BRA[AB]Z, BLR[AB]Z, RETA[AB] to decodetree
      target/arm: Convert BRAA, BRAB, BLRAA, BLRAB to decodetree
      target/arm: Convert ERET, ERETAA, ERETAB to decodetree
      target/arm: Saturate L2CTLR_EL1 core count field rather than overflowing
      hw/arm/vexpress: Avoid trivial memory leak of 'flashalias'
      docs: Convert u2f.txt to rST

Richard Henderson (10):
      target/arm: Fix vd == vm overlap in sve_ldff1_z
      target/arm: Split out disas_a64_legacy
      target/arm: Convert PC-rel addressing to decodetree
      target/arm: Split gen_add_CC and gen_sub_CC
      target/arm: Convert Add/subtract (immediate) to decodetree
      target/arm: Convert Add/subtract (immediate with tags) to decodetree
      target/arm: Replace bitmask64 with MAKE_64BIT_MASK
      target/arm: Convert Logical (immediate) to decodetree
      target/arm: Convert Move wide (immediate) to decodetree
      target/arm: Convert Bitfield to decodetree

 MAINTAINERS                      |    1 +
 docs/system/device-emulation.rst |    1 +
 docs/system/devices/usb-u2f.rst  |   93 +++
 docs/system/devices/usb.rst      |    2 +-
 docs/u2f.txt                     |  110 ----
 target/arm/cpu.h                 |    4 +
 target/arm/kvm_arm.h             |   19 +
 target/arm/tcg/translate.h       |    5 +
 target/arm/tcg/a64.decode        |  152 +++++
 hw/arm/sbsa-ref.c                |    4 +-
 hw/arm/vexpress.c                |   40 +-
 hw/arm/virt.c                    |   73 ++-
 target/arm/cortex-regs.c         |   11 +-
 target/arm/cpu.c                 |    9 +-
 target/arm/debug_helper.c        |   11 +-
 target/arm/kvm.c                 |   35 +
 target/arm/kvm64.c               |    5 +
 target/arm/tcg/sve_helper.c      |    6 +
 target/arm/tcg/translate-a64.c   | 1321 ++++++++++++++++----------------------
 target/arm/tcg/meson.build       |    1 +
 20 files changed, 979 insertions(+), 924 deletions(-)
 create mode 100644 docs/system/devices/usb-u2f.rst
 delete mode 100644 docs/u2f.txt
 create mode 100644 target/arm/tcg/a64.decode
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

The world outside moves to newer and newer CPU cores. Let's move the
SBSA Reference Platform to something newer as well.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20230506183417.1360427-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_class_init(ObjectClass *oc, void *data)
 
     mc->init = sbsa_ref_init;
     mc->desc = "QEMU 'SBSA Reference' ARM Virtual Machine";
-    mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a57");
+    mc->default_cpu_type = ARM_CPU_TYPE_NAME("neoverse-n1");
     mc->max_cpus = 512;
     mc->pci_allow_0_address = true;
     mc->minimum_page_bits = 12;
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

If vd == vm, copy vm to scratch, so that we can pre-zero
the output and still access the gather indices.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1612
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230504104232.1877774-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/sve_helper.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
     intptr_t reg_off;
     SVEHostPage info;
     target_ulong addr, in_page;
+    ARMVectorReg scratch;
 
     /* Skip to the first true predicate. */
     reg_off = find_next_active(vg, 0, reg_max, esz);
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
         return;
     }
 
+    /* Protect against overlap between vd and vm. */
+    if (unlikely(vd == vm)) {
+        vm = memcpy(&scratch, vm, reg_max);
+    }
+
     /*
      * Probe the first element, allowing faults.
      */
--
2.34.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

At Linaro I work on sbsa-ref and know the direction it is going.

I may not get into the code details each time.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230515143753.365591-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
 R: Leif Lindholm <quic_llindhol@quicinc.com>
+R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/sbsa-ref.c
--
2.34.1

diff view generated by jsdifflib
1
The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
1
From: Cornelia Huck <cohuck@redhat.com>
2
gains new fields FZ16 (if half-precision floating point is supported)
2
3
and LTPSIZE (always reads as 4). Update the reset value and the code
3
Extend the 'mte' property for the virt machine to cover KVM as
4
that handles writes to this register accordingly.
4
well. For KVM, we don't allocate tag memory, but instead enable the
5
5
capability.
6
7
If MTE has been enabled, we need to disable migration, as we do not
8
yet have a way to migrate the tags as well. Therefore, MTE will stay
9
off with KVM unless requested explicitly.
10
11
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20230428095533.21747-2-cohuck@redhat.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
9
---
16
---
10
target/arm/cpu.h | 5 +++++
17
target/arm/cpu.h | 4 +++
11
hw/intc/armv7m_nvic.c | 9 ++++++++-
18
target/arm/kvm_arm.h | 19 ++++++++++++
12
target/arm/cpu.c | 3 +++
19
hw/arm/virt.c | 73 +++++++++++++++++++++++++-------------------
13
3 files changed, 16 insertions(+), 1 deletion(-)
20
target/arm/cpu.c | 9 +++---
21
target/arm/kvm.c | 35 +++++++++++++++++++++
22
target/arm/kvm64.c | 5 +++
23
6 files changed, 109 insertions(+), 36 deletions(-)
14
24
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
27
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
29
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
20
#define FPCR_IXE (1 << 12) /* Inexact exception trap enable */
30
*/
21
#define FPCR_IDE (1 << 15) /* Input Denormal exception trap enable */
31
uint32_t psci_conduit;
22
#define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
32
23
+#define FPCR_RMODE_MASK (3 << 22) /* Rounding mode */
33
+ /* CPU has Memory Tag Extension */
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
34
+ bool has_mte;
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
35
+
26
+#define FPCR_AHP (1 << 26) /* Alternative half-precision */
36
/* For v8M, initial value of the Secure VTOR */
27
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
37
uint32_t init_svtor;
28
#define FPCR_V (1 << 28) /* FP overflow flag */
38
/* For v8M, initial value of the Non-secure VTOR */
29
#define FPCR_C (1 << 29) /* FP carry flag */
39
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
30
#define FPCR_Z (1 << 30) /* FP zero flag */
40
bool prop_pauth;
31
#define FPCR_N (1 << 31) /* FP negative flag */
41
bool prop_pauth_impdef;
32
42
bool prop_lpa2;
33
+#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
43
+ OnOffAuto prop_mte;
34
+#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
44
35
+
45
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
36
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
46
uint32_t dcz_blocksize;
37
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
47
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
38
48
index XXXXXXX..XXXXXXX 100644
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
49
--- a/target/arm/kvm_arm.h
40
index XXXXXXX..XXXXXXX 100644
50
+++ b/target/arm/kvm_arm.h
41
--- a/hw/intc/armv7m_nvic.c
51
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_pmu_supported(void);
42
+++ b/hw/intc/armv7m_nvic.c
52
*/
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
53
bool kvm_arm_sve_supported(void);
44
break;
54
45
case 0xf3c: /* FPDSCR */
55
+/**
46
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
56
+ * kvm_arm_mte_supported:
47
- value &= 0x07c00000;
57
+ *
48
+ uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;
58
+ * Returns: true if KVM can enable MTE, and false otherwise.
49
+ if (cpu_isar_feature(any_fp16, cpu)) {
59
+ */
50
+ mask |= FPCR_FZ16;
60
+bool kvm_arm_mte_supported(void);
51
+ }
61
+
52
+ value &= mask;
62
/**
53
+ if (cpu_isar_feature(aa32_lob, cpu)) {
63
* kvm_arm_get_max_vm_ipa_size:
54
+ value |= 4 << FPCR_LTPSIZE_SHIFT;
64
* @ms: Machine state handle
55
+ }
65
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa);
56
cpu->env.v7m.fpdscr[attrs.secure] = value;
66
57
}
67
int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level);
58
break;
68
69
+void kvm_arm_enable_mte(Object *cpuobj, Error **errp);
70
+
71
#else
72
73
/*
74
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_steal_time_supported(void)
75
return false;
76
}
77
78
+static inline bool kvm_arm_mte_supported(void)
79
+{
80
+ return false;
81
+}
82
+
83
/*
84
* These functions should never actually be called without KVM support.
85
*/
86
@@ -XXX,XX +XXX,XX @@ static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
87
g_assert_not_reached();
88
}
89
90
+static inline void kvm_arm_enable_mte(Object *cpuobj, Error **errp)
91
+{
92
+ g_assert_not_reached();
93
+}
94
+
95
#endif
96
97
static inline const char *gic_class_name(void)
98
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/hw/arm/virt.c
101
+++ b/hw/arm/virt.c
102
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
103
exit(1);
104
}
105
106
- if (vms->mte && (kvm_enabled() || hvf_enabled())) {
107
+ if (vms->mte && hvf_enabled()) {
108
error_report("mach-virt: %s does not support providing "
109
"MTE to the guest CPU",
110
current_accel_name());
111
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
112
}
113
114
if (vms->mte) {
115
- /* Create the memory region only once, but link to all cpus. */
116
- if (!tag_sysmem) {
117
- /*
118
- * The property exists only if MemTag is supported.
119
- * If it is, we must allocate the ram to back that up.
120
- */
121
- if (!object_property_find(cpuobj, "tag-memory")) {
122
- error_report("MTE requested, but not supported "
123
- "by the guest CPU");
124
+ if (tcg_enabled()) {
125
+ /* Create the memory region only once, but link to all cpus. */
126
+ if (!tag_sysmem) {
127
+ /*
128
+ * The property exists only if MemTag is supported.
129
+ * If it is, we must allocate the ram to back that up.
130
+ */
131
+ if (!object_property_find(cpuobj, "tag-memory")) {
132
+ error_report("MTE requested, but not supported "
133
+ "by the guest CPU");
134
+ exit(1);
135
+ }
136
+
137
+ tag_sysmem = g_new(MemoryRegion, 1);
138
+ memory_region_init(tag_sysmem, OBJECT(machine),
139
+ "tag-memory", UINT64_MAX / 32);
140
+
141
+ if (vms->secure) {
142
+ secure_tag_sysmem = g_new(MemoryRegion, 1);
143
+ memory_region_init(secure_tag_sysmem, OBJECT(machine),
144
+ "secure-tag-memory",
145
+ UINT64_MAX / 32);
146
+
147
+ /* As with ram, secure-tag takes precedence over tag. */
148
+ memory_region_add_subregion_overlap(secure_tag_sysmem,
149
+ 0, tag_sysmem, -1);
150
+ }
151
+ }
152
+
153
+ object_property_set_link(cpuobj, "tag-memory",
154
+ OBJECT(tag_sysmem), &error_abort);
155
+ if (vms->secure) {
156
+ object_property_set_link(cpuobj, "secure-tag-memory",
157
+ OBJECT(secure_tag_sysmem),
158
+ &error_abort);
159
+ }
160
+ } else if (kvm_enabled()) {
161
+ if (!kvm_arm_mte_supported()) {
162
+ error_report("MTE requested, but not supported by KVM");
163
exit(1);
164
}
165
-
166
- tag_sysmem = g_new(MemoryRegion, 1);
167
- memory_region_init(tag_sysmem, OBJECT(machine),
168
- "tag-memory", UINT64_MAX / 32);
169
-
170
- if (vms->secure) {
171
- secure_tag_sysmem = g_new(MemoryRegion, 1);
172
- memory_region_init(secure_tag_sysmem, OBJECT(machine),
173
- "secure-tag-memory", UINT64_MAX / 32);
174
-
175
- /* As with ram, secure-tag takes precedence over tag. */
176
- memory_region_add_subregion_overlap(secure_tag_sysmem, 0,
177
- tag_sysmem, -1);
178
- }
179
- }
180
-
181
- object_property_set_link(cpuobj, "tag-memory", OBJECT(tag_sysmem),
182
- &error_abort);
183
- if (vms->secure) {
184
- object_property_set_link(cpuobj, "secure-tag-memory",
185
- OBJECT(secure_tag_sysmem),
186
- &error_abort);
187
+ kvm_arm_enable_mte(cpuobj, &error_abort);
188
}
189
}
190
59
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
191
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
192
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/cpu.c
193
--- a/target/arm/cpu.c
62
+++ b/target/arm/cpu.c
194
+++ b/target/arm/cpu.c
63
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
195
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
64
* always reset to 4.
196
qdev_prop_allow_set_link_before_realize,
65
*/
197
OBJ_PROP_LINK_STRONG);
66
env->v7m.ltpsize = 4;
198
}
67
+ /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
199
+ cpu->has_mte = true;
68
+ env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
200
}
69
+ env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
201
#endif
70
}
202
}
71
203
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
72
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
204
}
205
if (cpu->tag_memory) {
206
error_setg(errp,
207
- "Cannot enable %s when guest CPUs has MTE enabled",
208
+ "Cannot enable %s when guest CPUs has tag memory enabled",
209
current_accel_name());
210
return;
211
}
212
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
213
}
214
215
#ifndef CONFIG_USER_ONLY
216
- if (cpu->tag_memory == NULL && cpu_isar_feature(aa64_mte, cpu)) {
217
+ if (!cpu->has_mte && cpu_isar_feature(aa64_mte, cpu)) {
218
/*
219
- * Disable the MTE feature bits if we do not have tag-memory
220
- * provided by the machine.
221
+ * Disable the MTE feature bits if we do not have the feature
222
+ * setup by the machine.
223
*/
224
cpu->isar.id_aa64pfr1 =
225
FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
226
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
227
index XXXXXXX..XXXXXXX 100644
228
--- a/target/arm/kvm.c
229
+++ b/target/arm/kvm.c
230
@@ -XXX,XX +XXX,XX @@
231
#include "hw/boards.h"
232
#include "hw/irq.h"
233
#include "qemu/log.h"
234
+#include "migration/blocker.h"
235
236
const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
237
KVM_CAP_LAST_INFO
238
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_cpu_check_are_resettable(void)
239
void kvm_arch_accel_class_init(ObjectClass *oc)
240
{
241
}
242
+
243
+void kvm_arm_enable_mte(Object *cpuobj, Error **errp)
244
+{
245
+ static bool tried_to_enable;
246
+ static bool succeeded_to_enable;
247
+ Error *mte_migration_blocker = NULL;
248
+ int ret;
249
+
250
+ if (!tried_to_enable) {
251
+ /*
252
+ * MTE on KVM is enabled on a per-VM basis (and retrying doesn't make
253
+ * sense), and we only want a single migration blocker as well.
254
+ */
255
+ tried_to_enable = true;
256
+
257
+ ret = kvm_vm_enable_cap(kvm_state, KVM_CAP_ARM_MTE, 0);
258
+ if (ret) {
259
+ error_setg_errno(errp, -ret, "Failed to enable KVM_CAP_ARM_MTE");
260
+ return;
261
+ }
262
+
263
+ /* TODO: add proper migration support with MTE enabled */
264
+ error_setg(&mte_migration_blocker,
265
+ "Live migration disabled due to MTE enabled");
266
+ if (migrate_add_blocker(mte_migration_blocker, errp)) {
267
+ error_free(mte_migration_blocker);
268
+ return;
269
+ }
270
+ succeeded_to_enable = true;
271
+ }
272
+ if (succeeded_to_enable) {
273
+ object_property_set_bool(cpuobj, "has_mte", true, NULL);
274
+ }
275
+}
276
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
277
index XXXXXXX..XXXXXXX 100644
278
--- a/target/arm/kvm64.c
279
+++ b/target/arm/kvm64.c
280
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_steal_time_supported(void)
281
return kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
282
}
283
284
+bool kvm_arm_mte_supported(void)
285
+{
286
+ return kvm_check_extension(kvm_state, KVM_CAP_ARM_MTE);
287
+}
288
+
289
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
290
291
uint32_t kvm_arm_sve_get_vls(CPUState *cs)
73
--
292
--
74
2.20.1
293
2.34.1
75
76
From: Alex Bennée <alex.bennee@linaro.org>

The commit b3aa2f2128 (target/arm: provide stubs for more external
debug registers) was added to handle HyperV's unconditional usage of
the Debug Communications Channel. It turns out that Linux will
similarly break if you enable CONFIG_HVC_DCC "ARM JTAG DCC console".

Extend the set of registers we RAZ/WI to avoid this.

Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230516104420.407912-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/debug_helper.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .access = PL0_R, .accessfn = access_tdcc,
       .type = ARM_CP_CONST, .resetvalue = 0 },
     /*
-     * OSDTRRX_EL1/OSDTRTX_EL1 are used for save and restore of DBGDTRRX_EL0.
-     * It is a component of the Debug Communications Channel, which is not implemented.
+     * These registers belong to the Debug Communications Channel,
+     * which is not implemented. However we implement RAZ/WI behaviour
+     * with trapping to prevent spurious SIGILLs if the guest OS does
+     * access them as the support cannot be probed for.
      */
     { .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
       .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
       .access = PL1_RW, .accessfn = access_tdcc,
       .type = ARM_CP_CONST, .resetvalue = 0 },
+    /* DBGDTRTX_EL0/DBGDTRRX_EL0 depend on direction */
+    { .name = "DBGDTR_EL0", .state = ARM_CP_STATE_BOTH, .cp = 14,
+      .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 5, .opc2 = 0,
+      .access = PL0_RW, .accessfn = access_tdcc,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
     /*
      * OSECCR_EL1 provides a mechanism for an operating system
      * to access the contents of EDECCR. EDECCR is not implemented though,
--
2.34.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

The Bochs card is a normal PCI Express card, so it fits better in a
system with a PCI Express bus. VGA is a simple legacy PCI card.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20230505120936.1097060-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_pcie(SBSAMachineState *sms)
         }
     }
 
-    pci_create_simple(pci->bus, -1, "VGA");
+    pci_create_simple(pci->bus, -1, "bochs-display");
 
     create_smmu(sms, pci->bus);
 }
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Split out all of the decode stuff from aarch64_tr_translate_insn.
Call it disas_a64_legacy to indicate it will be replaced.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-2-peter.maydell@linaro.org
[PMM: Rebased]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 82 ++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 38 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
     return false;
 }
 
+/* C3.1 A64 instruction index by encoding */
+static void disas_a64_legacy(DisasContext *s, uint32_t insn)
+{
+    switch (extract32(insn, 25, 4)) {
+    case 0x0:
+        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
+            unallocated_encoding(s);
+        }
+        break;
+    case 0x1: case 0x3: /* UNALLOCATED */
+        unallocated_encoding(s);
+        break;
+    case 0x2:
+        if (!disas_sve(s, insn)) {
+            unallocated_encoding(s);
+        }
+        break;
+    case 0x8: case 0x9: /* Data processing - immediate */
+        disas_data_proc_imm(s, insn);
+        break;
+    case 0xa: case 0xb: /* Branch, exception generation and system insns */
+        disas_b_exc_sys(s, insn);
+        break;
+    case 0x4:
+    case 0x6:
+    case 0xc:
+    case 0xe: /* Loads and stores */
+        disas_ldst(s, insn);
+        break;
+    case 0x5:
+    case 0xd: /* Data processing - register */
+        disas_data_proc_reg(s, insn);
+        break;
+    case 0x7:
+    case 0xf: /* Data processing - SIMD and floating point */
+        disas_data_proc_simd_fp(s, insn);
+        break;
+    default:
+        assert(FALSE); /* all 15 cases should be handled above */
+        break;
+    }
+}
+
 static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
                                           CPUState *cpu)
 {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         disas_sme_fa64(s, insn);
     }
 
-    switch (extract32(insn, 25, 4)) {
-    case 0x0:
-        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
-    case 0x1: case 0x3: /* UNALLOCATED */
-        unallocated_encoding(s);
-        break;
-    case 0x2:
-        if (!disas_sve(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
-    case 0x8: case 0x9: /* Data processing - immediate */
-        disas_data_proc_imm(s, insn);
-        break;
-    case 0xa: case 0xb: /* Branch, exception generation and system insns */
-        disas_b_exc_sys(s, insn);
-        break;
-    case 0x4:
-    case 0x6:
-    case 0xc:
-    case 0xe: /* Loads and stores */
-        disas_ldst(s, insn);
-        break;
-    case 0x5:
-    case 0xd: /* Data processing - register */
-        disas_data_proc_reg(s, insn);
-        break;
-    case 0x7:
-    case 0xf: /* Data processing - SIMD and floating point */
-        disas_data_proc_simd_fp(s, insn);
-        break;
-    default:
-        assert(FALSE); /* all 15 cases should be handled above */
-        break;
-    }
+    disas_a64_legacy(s, insn);
 
     /*
      * After execution of most insns, btype is reset to 0.
--
2.34.1

The A64 translator uses a hand-written decoder for everything except
SVE or SME. It's fairly well structured, but it's becoming obvious
that it's still more painful to add instructions to than the A32
translator, because putting a new instruction into the right place in
a hand-written decoder is much harder than adding new instruction
patterns to a decodetree file.

As the first step in conversion to decodetree, create the skeleton of
the decodetree decoder; where it does not handle instructions we will
fall back to the legacy decoder (which will be for everything at the
moment, since there are no patterns in a64.decode).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-3-peter.maydell@linaro.org
---
 target/arm/tcg/a64.decode      | 20 ++++++++++++++++++++
 target/arm/tcg/translate-a64.c | 18 +++++++++++-------
 target/arm/tcg/meson.build     |  1 +
 3 files changed, 32 insertions(+), 7 deletions(-)
 create mode 100644 target/arm/tcg/a64.decode

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
+# AArch64 A64 allowed instruction decoding
+#
+# Copyright (c) 2023 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ enum a64_shift_type {
     A64_SHIFT_TYPE_ROR = 3
 };
 
+/*
+ * Include the generated decoders.
+ */
+
+#include "decode-sme-fa64.c.inc"
+#include "decode-a64.c.inc"
+
 /* Table based decoder typedefs - used when the relevant bits for decode
  * are too awkwardly scattered across the instruction (eg SIMD).
  */
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
     }
 }
 
-/*
- * Include the generated SME FA64 decoder.
- */
-
-#include "decode-sme-fa64.c.inc"
-
 static bool trans_OK(DisasContext *s, arg_OK *a)
 {
     return true;
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         disas_sme_fa64(s, insn);
     }
 
-    disas_a64_legacy(s, insn);
+
+    if (!disas_a64(s, insn)) {
+        disas_a64_legacy(s, insn);
+    }
 
     /*
      * After execution of most insns, btype is reset to 0.
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -XXX,XX +XXX,XX @@ gen = [
   decodetree.process('a32-uncond.decode', extra_args: '--static-decode=disas_a32_uncond'),
   decodetree.process('t32.decode', extra_args: '--static-decode=disas_t32'),
   decodetree.process('t16.decode', extra_args: ['-w', '16', '--static-decode=disas_t16']),
+  decodetree.process('a64.decode', extra_args: ['--static-decode=disas_a64']),
 ]
 
 arm_ss.add(gen)
--
2.34.1

The SVE and SME decode is already done by decodetree. Pull the calls
to these decoders out of the legacy decoder. This doesn't change
behaviour because all the patterns in sve.decode and sme.decode
already require the bits that the legacy decoder is decoding to have
the correct values.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-4-peter.maydell@linaro.org
---
 target/arm/tcg/translate-a64.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
 static void disas_a64_legacy(DisasContext *s, uint32_t insn)
 {
     switch (extract32(insn, 25, 4)) {
-    case 0x0:
-        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
-    case 0x1: case 0x3: /* UNALLOCATED */
-        unallocated_encoding(s);
-        break;
-    case 0x2:
-        if (!disas_sve(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
     case 0x8: case 0x9: /* Data processing - immediate */
         disas_data_proc_imm(s, insn);
         break;
@@ -XXX,XX +XXX,XX @@ static void disas_a64_legacy(DisasContext *s, uint32_t insn)
         disas_data_proc_simd_fp(s, insn);
         break;
     default:
-        assert(FALSE); /* all 15 cases should be handled above */
+        unallocated_encoding(s);
         break;
     }
 }
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         disas_sme_fa64(s, insn);
     }
 
-
-    if (!disas_a64(s, insn)) {
+    if (!disas_a64(s, insn) &&
+        !disas_sme(s, insn) &&
+        !disas_sve(s, insn)) {
         disas_a64_legacy(s, insn);
     }
 
--
2.34.1

diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
Convert the ADR and ADRP instructions.
4
argument of type "unsigned int".
5
4
6
Reported-by: Euler Robot <euler.robot@huawei.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-5-peter.maydell@linaro.org
9
[PMM: Rebased]
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
13
target/arm/tcg/a64.decode | 13 ++++++++++++
13
hw/misc/imx6_src.c | 2 +-
14
target/arm/tcg/translate-a64.c | 38 +++++++++++++---------------------
14
2 files changed, 11 insertions(+), 11 deletions(-)
15
2 files changed, 27 insertions(+), 24 deletions(-)
15
16
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx6_ccm.c
19
--- a/target/arm/tcg/a64.decode
19
+++ b/hw/misc/imx6_ccm.c
20
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
21
@@ -XXX,XX +XXX,XX @@
21
case CCM_CMEOR:
22
#
22
return "CMEOR";
23
# This file is processed by scripts/decodetree.py
23
default:
24
#
24
- sprintf(unknown, "%d ?", reg);
25
+
25
+ sprintf(unknown, "%u ?", reg);
26
+&ri rd imm
26
return unknown;
27
+
28
+
29
+### Data Processing - Immediate
30
+
31
+# PC-rel addressing
32
+
33
+%imm_pcrel 5:s19 29:2
34
+@pcrel . .. ..... ................... rd:5 &ri imm=%imm_pcrel
35
+
36
+ADR 0 .. 10000 ................... ..... @pcrel
37
+ADRP 1 .. 10000 ................... ..... @pcrel
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
27
}
43
}
28
}
44
}
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
45
30
case USB_ANALOG_DIGPROG:
46
-/* PC-rel. addressing
31
return "USB_ANALOG_DIGPROG";
47
- * 31 30 29 28 24 23 5 4 0
32
default:
48
- * +----+-------+-----------+-------------------+------+
33
- sprintf(unknown, "%d ?", reg);
49
- * | op | immlo | 1 0 0 0 0 | immhi | Rd |
34
+ sprintf(unknown, "%u ?", reg);
50
- * +----+-------+-----------+-------------------+------+
35
return unknown;
51
+/*
36
}
52
+ * PC-rel. addressing
53
*/
54
-static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
55
+
56
+static bool trans_ADR(DisasContext *s, arg_ri *a)
57
{
58
- unsigned int page, rd;
59
- int64_t offset;
60
+ gen_pc_plus_diff(s, cpu_reg(s, a->rd), a->imm);
61
+ return true;
62
+}
63
64
- page = extract32(insn, 31, 1);
65
- /* SignExtend(immhi:immlo) -> offset */
66
- offset = sextract64(insn, 5, 19);
67
- offset = offset << 2 | extract32(insn, 29, 2);
68
- rd = extract32(insn, 0, 5);
69
+static bool trans_ADRP(DisasContext *s, arg_ri *a)
70
+{
71
+ int64_t offset = (int64_t)a->imm << 12;
72
73
- if (page) {
74
- /* ADRP (page based) */
75
- offset <<= 12;
76
- /* The page offset is ok for CF_PCREL. */
77
- offset -= s->pc_curr & 0xfff;
78
- }
79
-
80
- gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
81
+ /* The page offset is ok for CF_PCREL. */
82
+ offset -= s->pc_curr & 0xfff;
83
+ gen_pc_plus_diff(s, cpu_reg(s, a->rd), offset);
84
+ return true;
37
}
85
}
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
86
39
freq *= 20;
87
/*
40
}
88
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
41
89
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
90
{
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
91
switch (extract32(insn, 23, 6)) {
44
92
- case 0x20: case 0x21: /* PC-rel. addressing */
45
return freq;
93
- disas_pc_rel_adr(s, insn);
46
}
94
- break;
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
95
case 0x22: /* Add/subtract (immediate) */
48
freq = imx6_analog_get_pll2_clk(dev) * 18
96
disas_add_sub_imm(s, insn);
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
50
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
53
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
57
freq = imx6_analog_get_pll2_clk(dev) * 18
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
59
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
66
break;
97
break;
67
}
68
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
75
freq = imx6_analog_get_periph_clk(dev)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
77
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
80
81
return freq;
82
}
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
84
freq = imx6_ccm_get_ahb_clk(dev)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
86
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
89
90
return freq;
91
}
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
93
freq = imx6_ccm_get_ipg_clk(dev)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
95
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
98
99
return freq;
100
}
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
102
break;
103
}
104
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
107
108
return freq;
109
}
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/hw/misc/imx6_src.c
113
+++ b/hw/misc/imx6_src.c
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
115
case SRC_GPR10:
116
return "SRC_GPR10";
117
default:
118
- sprintf(unknown, "%d ?", reg);
119
+ sprintf(unknown, "%u ?", reg);
120
return unknown;
121
}
122
}
123
--
98
--
124
2.20.1
99
2.34.1
125
126
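A short aside on the decodetree conversion above: the %imm_pcrel field in a64.decode concatenates the signed 19-bit immhi field (insn[23:5]) with the 2-bit immlo field (insn[30:29]), exactly as the removed manual decode did. Below is a minimal C sketch of that extraction, assuming QEMU's extract32()/sextract32() helpers from qemu/bitops.h; the function name itself is hypothetical and only for illustration.

    /* Sketch: what "%imm_pcrel 5:s19 29:2" evaluates to for an A64 insn word. */
    static int64_t pcrel_imm_example(uint32_t insn)
    {
        int64_t imm = sextract32(insn, 5, 19);       /* immhi, sign-extended */
        imm = (imm << 2) | extract32(insn, 29, 2);   /* immlo in bits [1:0] */
        return imm;                                  /* trans_ADRP() later shifts this left by 12 */
    }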
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The QTests perform five tests on the Xilinx ZynqMP CAN controller:
3
Split out specific 32-bit and 64-bit functions.
4
Tests the CAN controller in loopback, sleep and snoop mode.
4
These carry the same signature as tcg_gen_add_i64,
5
Tests filtering of incoming CAN messages.
5
and so will be easier to pass as callbacks.
6
6
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Retain gen_add_CC and gen_sub_CC during conversion.
8
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
8
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20230512144106.3608981-6-peter.maydell@linaro.org
13
[PMM: rebased]
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
16
---
13
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
17
target/arm/tcg/translate-a64.c | 149 +++++++++++++++++++--------------
14
tests/qtest/meson.build | 1 +
18
1 file changed, 84 insertions(+), 65 deletions(-)
15
2 files changed, 361 insertions(+)
16
create mode 100644 tests/qtest/xlnx-can-test.c
17
19
18
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
20
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
19
new file mode 100644
21
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX
22
--- a/target/arm/tcg/translate-a64.c
21
--- /dev/null
23
+++ b/target/arm/tcg/translate-a64.c
22
+++ b/tests/qtest/xlnx-can-test.c
24
@@ -XXX,XX +XXX,XX @@ static inline void gen_logic_CC(int sf, TCGv_i64 result)
23
@@ -XXX,XX +XXX,XX @@
25
}
24
+/*
26
25
+ * QTests for the Xilinx ZynqMP CAN controller.
27
/* dest = T0 + T1; compute C, N, V and Z flags */
26
+ *
28
+static void gen_add64_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
27
+ * Copyright (c) 2020 Xilinx Inc.
29
+{
28
+ *
30
+ TCGv_i64 result, flag, tmp;
29
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
31
+ result = tcg_temp_new_i64();
30
+ *
32
+ flag = tcg_temp_new_i64();
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
33
+ tmp = tcg_temp_new_i64();
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
49
+
34
+
50
+#include "qemu/osdep.h"
35
+ tcg_gen_movi_i64(tmp, 0);
51
+#include "libqos/libqtest.h"
36
+ tcg_gen_add2_i64(result, flag, t0, tmp, t1, tmp);
52
+
37
+
53
+/* Base address. */
38
+ tcg_gen_extrl_i64_i32(cpu_CF, flag);
54
+#define CAN0_BASE_ADDR 0xFF060000
55
+#define CAN1_BASE_ADDR 0xFF070000
56
+
39
+
57
+/* Register addresses. */
40
+ gen_set_NZ64(result);
58
+#define R_SRR_OFFSET 0x00
59
+#define R_MSR_OFFSET 0x04
60
+#define R_SR_OFFSET 0x18
61
+#define R_ISR_OFFSET 0x1C
62
+#define R_ICR_OFFSET 0x24
63
+#define R_TXID_OFFSET 0x30
64
+#define R_TXDLC_OFFSET 0x34
65
+#define R_TXDATA1_OFFSET 0x38
66
+#define R_TXDATA2_OFFSET 0x3C
67
+#define R_RXID_OFFSET 0x50
68
+#define R_RXDLC_OFFSET 0x54
69
+#define R_RXDATA1_OFFSET 0x58
70
+#define R_RXDATA2_OFFSET 0x5C
71
+#define R_AFR 0x60
72
+#define R_AFMR1 0x64
73
+#define R_AFIR1 0x68
74
+#define R_AFMR2 0x6C
75
+#define R_AFIR2 0x70
76
+#define R_AFMR3 0x74
77
+#define R_AFIR3 0x78
78
+#define R_AFMR4 0x7C
79
+#define R_AFIR4 0x80
80
+
41
+
81
+/* CAN modes. */
42
+ tcg_gen_xor_i64(flag, result, t0);
82
+#define CONFIG_MODE 0x00
43
+ tcg_gen_xor_i64(tmp, t0, t1);
83
+#define NORMAL_MODE 0x00
44
+ tcg_gen_andc_i64(flag, flag, tmp);
84
+#define LOOPBACK_MODE 0x02
45
+ tcg_gen_extrh_i64_i32(cpu_VF, flag);
85
+#define SNOOP_MODE 0x04
86
+#define SLEEP_MODE 0x01
87
+#define ENABLE_CAN (1 << 1)
88
+#define STATUS_NORMAL_MODE (1 << 3)
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
90
+#define STATUS_SNOOP_MODE (1 << 12)
91
+#define STATUS_SLEEP_MODE (1 << 2)
92
+#define ISR_TXOK (1 << 1)
93
+#define ISR_RXOK (1 << 4)
94
+
46
+
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
47
+ tcg_gen_mov_i64(dest, result);
96
+ uint8_t can_timestamp)
97
+{
98
+ uint16_t size = 0;
99
+ uint8_t len = 4;
100
+
101
+ while (size < len) {
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
104
+ } else {
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
106
+ }
107
+
108
+ size++;
109
+ }
110
+}
48
+}
111
+
49
+
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
50
+static void gen_add32_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
113
+{
51
+{
114
+ uint32_t int_status;
52
+ TCGv_i32 t0_32 = tcg_temp_new_i32();
53
+ TCGv_i32 t1_32 = tcg_temp_new_i32();
54
+ TCGv_i32 tmp = tcg_temp_new_i32();
115
+
55
+
116
+ /* Read the interrupt on CAN rx. */
56
+ tcg_gen_movi_i32(tmp, 0);
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
57
+ tcg_gen_extrl_i64_i32(t0_32, t0);
118
+
58
+ tcg_gen_extrl_i64_i32(t1_32, t1);
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
59
+ tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, tmp, t1_32, tmp);
120
+
60
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
121
+ /* Read the RX register data for CAN. */
61
+ tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
62
+ tcg_gen_xor_i32(tmp, t0_32, t1_32);
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
63
+ tcg_gen_andc_i32(cpu_VF, cpu_VF, tmp);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
64
+ tcg_gen_extu_i32_i64(dest, cpu_NF);
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
126
+
127
+ /* Clear the RX interrupt. */
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
129
+}
65
+}
130
+
66
+
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
67
static void gen_add_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
132
+ const uint32_t *buf_tx)
68
{
69
if (sf) {
70
- TCGv_i64 result, flag, tmp;
71
- result = tcg_temp_new_i64();
72
- flag = tcg_temp_new_i64();
73
- tmp = tcg_temp_new_i64();
74
-
75
- tcg_gen_movi_i64(tmp, 0);
76
- tcg_gen_add2_i64(result, flag, t0, tmp, t1, tmp);
77
-
78
- tcg_gen_extrl_i64_i32(cpu_CF, flag);
79
-
80
- gen_set_NZ64(result);
81
-
82
- tcg_gen_xor_i64(flag, result, t0);
83
- tcg_gen_xor_i64(tmp, t0, t1);
84
- tcg_gen_andc_i64(flag, flag, tmp);
85
- tcg_gen_extrh_i64_i32(cpu_VF, flag);
86
-
87
- tcg_gen_mov_i64(dest, result);
88
+ gen_add64_CC(dest, t0, t1);
89
} else {
90
- /* 32 bit arithmetic */
91
- TCGv_i32 t0_32 = tcg_temp_new_i32();
92
- TCGv_i32 t1_32 = tcg_temp_new_i32();
93
- TCGv_i32 tmp = tcg_temp_new_i32();
94
-
95
- tcg_gen_movi_i32(tmp, 0);
96
- tcg_gen_extrl_i64_i32(t0_32, t0);
97
- tcg_gen_extrl_i64_i32(t1_32, t1);
98
- tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, tmp, t1_32, tmp);
99
- tcg_gen_mov_i32(cpu_ZF, cpu_NF);
100
- tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
101
- tcg_gen_xor_i32(tmp, t0_32, t1_32);
102
- tcg_gen_andc_i32(cpu_VF, cpu_VF, tmp);
103
- tcg_gen_extu_i32_i64(dest, cpu_NF);
104
+ gen_add32_CC(dest, t0, t1);
105
}
106
}
107
108
/* dest = T0 - T1; compute C, N, V and Z flags */
109
+static void gen_sub64_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
133
+{
110
+{
134
+ uint32_t int_status;
111
+ /* 64 bit arithmetic */
112
+ TCGv_i64 result, flag, tmp;
135
+
113
+
136
+ /* Write the TX register data for CAN. */
114
+ result = tcg_temp_new_i64();
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
115
+ flag = tcg_temp_new_i64();
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
116
+ tcg_gen_sub_i64(result, t0, t1);
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
141
+
117
+
142
+ /* Read the interrupt on CAN for tx. */
118
+ gen_set_NZ64(result);
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
144
+
119
+
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
120
+ tcg_gen_setcond_i64(TCG_COND_GEU, flag, t0, t1);
121
+ tcg_gen_extrl_i64_i32(cpu_CF, flag);
146
+
122
+
147
+ /* Clear the interrupt for tx. */
123
+ tcg_gen_xor_i64(flag, result, t0);
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
124
+ tmp = tcg_temp_new_i64();
125
+ tcg_gen_xor_i64(tmp, t0, t1);
126
+ tcg_gen_and_i64(flag, flag, tmp);
127
+ tcg_gen_extrh_i64_i32(cpu_VF, flag);
128
+ tcg_gen_mov_i64(dest, result);
149
+}
129
+}
150
+
130
+
151
+/*
131
+static void gen_sub32_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
152
+ * This test will be transferring data from CAN0 and CAN1 through canbus. CAN0
153
+ * initiate the data transfer to can-bus, CAN1 receives the data. Test compares
154
+ * the data sent from CAN0 with received on CAN1.
155
+ */
156
+static void test_can_bus(void)
157
+{
132
+{
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
133
+ /* 32 bit arithmetic */
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
134
+ TCGv_i32 t0_32 = tcg_temp_new_i32();
160
+ uint32_t status = 0;
135
+ TCGv_i32 t1_32 = tcg_temp_new_i32();
161
+ uint8_t can_timestamp = 1;
136
+ TCGv_i32 tmp;
162
+
137
+
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
138
+ tcg_gen_extrl_i64_i32(t0_32, t0);
164
+ " -object can-bus,id=canbus0"
139
+ tcg_gen_extrl_i64_i32(t1_32, t1);
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
140
+ tcg_gen_sub_i32(cpu_NF, t0_32, t1_32);
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
141
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
167
+ );
142
+ tcg_gen_setcond_i32(TCG_COND_GEU, cpu_CF, t0_32, t1_32);
168
+
143
+ tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
169
+ /* Configure the CAN0 and CAN1. */
144
+ tmp = tcg_temp_new_i32();
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
145
+ tcg_gen_xor_i32(tmp, t0_32, t1_32);
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
146
+ tcg_gen_and_i32(cpu_VF, cpu_VF, tmp);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
147
+ tcg_gen_extu_i32_i64(dest, cpu_NF);
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
174
+
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
178
+
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
181
+
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
183
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
186
+
187
+ qtest_quit(qts);
188
+}
148
+}
189
+
149
+
190
+/*
150
static void gen_sub_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
191
+ * This test is performing loopback mode on CAN0 and CAN1. Data sent from TX of
151
{
192
+ * each CAN0 and CAN1 are compared with RX register data for respective CAN.
152
if (sf) {
193
+ */
153
- /* 64 bit arithmetic */
194
+static void test_can_loopback(void)
154
- TCGv_i64 result, flag, tmp;
195
+{
155
-
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
156
- result = tcg_temp_new_i64();
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
157
- flag = tcg_temp_new_i64();
198
+ uint32_t status = 0;
158
- tcg_gen_sub_i64(result, t0, t1);
199
+
159
-
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
160
- gen_set_NZ64(result);
201
+ " -object can-bus,id=canbus0"
161
-
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
162
- tcg_gen_setcond_i64(TCG_COND_GEU, flag, t0, t1);
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
163
- tcg_gen_extrl_i64_i32(cpu_CF, flag);
204
+ );
164
-
205
+
165
- tcg_gen_xor_i64(flag, result, t0);
206
+ /* Configure the CAN0 in loopback mode. */
166
- tmp = tcg_temp_new_i64();
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
167
- tcg_gen_xor_i64(tmp, t0, t1);
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
168
- tcg_gen_and_i64(flag, flag, tmp);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
169
- tcg_gen_extrh_i64_i32(cpu_VF, flag);
210
+
170
- tcg_gen_mov_i64(dest, result);
211
+ /* Check here if CAN0 is set in loopback mode. */
171
+ gen_sub64_CC(dest, t0, t1);
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
172
} else {
213
+
173
- /* 32 bit arithmetic */
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
174
- TCGv_i32 t0_32 = tcg_temp_new_i32();
215
+
175
- TCGv_i32 t1_32 = tcg_temp_new_i32();
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
176
- TCGv_i32 tmp;
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
177
-
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
178
- tcg_gen_extrl_i64_i32(t0_32, t0);
219
+
179
- tcg_gen_extrl_i64_i32(t1_32, t1);
220
+ /* Configure the CAN1 in loopback mode. */
180
- tcg_gen_sub_i32(cpu_NF, t0_32, t1_32);
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
181
- tcg_gen_mov_i32(cpu_ZF, cpu_NF);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
182
- tcg_gen_setcond_i32(TCG_COND_GEU, cpu_CF, t0_32, t1_32);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
183
- tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
224
+
184
- tmp = tcg_temp_new_i32();
225
+ /* Check here if CAN1 is set in loopback mode. */
185
- tcg_gen_xor_i32(tmp, t0_32, t1_32);
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
186
- tcg_gen_and_i32(cpu_VF, cpu_VF, tmp);
227
+
187
- tcg_gen_extu_i32_i64(dest, cpu_NF);
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
188
+ gen_sub32_CC(dest, t0, t1);
229
+
189
}
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
190
}
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
191
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
234
+ qtest_quit(qts);
235
+}
236
+
237
+/*
238
+ * Enable filters for CAN1. This will filter incoming messages with ID. In this
239
+ * test message will pass through filter 2.
240
+ */
241
+static void test_can_filter(void)
242
+{
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
245
+ uint32_t status = 0;
246
+ uint8_t can_timestamp = 1;
247
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
285
+ qtest_quit(qts);
286
+}
287
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
289
+static void test_can_sleepmode(void)
290
+{
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
293
+ uint32_t status = 0;
294
+ uint8_t can_timestamp = 1;
295
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
297
+ " -object can-bus,id=canbus0"
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
300
+ );
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+ * Once CAN1 sends data on can-bus. CAN0 should exit sleep mode.
321
+ * Check the CAN0 status now. It should exit the sleep mode and receive the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
331
+ qtest_quit(qts);
332
+}
333
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
373
+{
374
+ g_test_init(&argc, &argv, NULL);
375
+
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
381
+
382
+ return g_test_run();
383
+}
384
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
385
index XXXXXXX..XXXXXXX 100644
386
--- a/tests/qtest/meson.build
387
+++ b/tests/qtest/meson.build
388
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
389
['arm-cpu-features',
390
'numa-test',
391
'boot-serial-test',
392
+ 'xlnx-can-test',
393
'migration-test']
394
395
qtests_s390x = \
396
--
192
--
397
2.20.1
193
2.34.1
398
399
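For reference, here is a plain C model of the architectural NZCV results that the gen_add64_CC() sequence above computes with TCG ops (QEMU itself keeps the flags in cpu_NF/ZF/CF/VF in its own internal format; this sketch and its helper name are only illustrative).

    /* C model of the flags for dest = t0 + t1, 64-bit add. */
    static void add64_flags_example(uint64_t t0, uint64_t t1,
                                    int *nf, int *zf, int *cf, int *vf)
    {
        uint64_t result = t0 + t1;
        *nf = result >> 63;                           /* negative */
        *zf = (result == 0);                          /* zero */
        *cf = (result < t0);                          /* unsigned carry-out */
        *vf = ((result ^ t0) & ~(t0 ^ t1)) >> 63;     /* signed overflow, as in the xor/andc sequence */
    }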
1
From: Havard Skinnemoen <hskinnemoen@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Dump the collected random data after a randomness test failure.
3
Convert the ADD and SUB (immediate) instructions.
4
4
5
Note that this relies on the test having called
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
assertion failure.
8
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
[PMM: minor commit message tweak]
8
Message-id: 20230512144106.3608981-7-peter.maydell@linaro.org
9
[PMM: Rebased; adjusted to use translate.h's TRANS macro]
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
12
---
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
13
target/arm/tcg/translate.h | 5 +++
15
1 file changed, 12 insertions(+)
14
target/arm/tcg/a64.decode | 17 ++++++++
15
target/arm/tcg/translate-a64.c | 73 ++++++++++------------------------
16
3 files changed, 42 insertions(+), 53 deletions(-)
16
17
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
18
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/qtest/npcm7xx_rng-test.c
20
--- a/target/arm/tcg/translate.h
20
+++ b/tests/qtest/npcm7xx_rng-test.c
21
+++ b/target/arm/tcg/translate.h
21
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ static inline int rsub_8(DisasContext *s, int x)
22
23
return 8 - x;
23
#include "libqtest-single.h"
24
}
24
#include "qemu/bitops.h"
25
25
+#include "qemu-common.h"
26
+static inline int shl_12(DisasContext *s, int x)
26
27
#define RNG_BASE_ADDR 0xf000b000
28
29
@@ -XXX,XX +XXX,XX @@
30
/* Number of bits to collect for randomness tests. */
31
#define TEST_INPUT_BITS (128)
32
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
34
+{
27
+{
35
+ if (g_test_failed()) {
28
+ return x << 12;
36
+ qemu_hexdump(stderr, "", buf, size);
37
+ }
38
+}
29
+}
39
+
30
+
40
static void rng_writeb(unsigned int offset, uint8_t value)
31
static inline int neon_3same_fp_size(DisasContext *s, int x)
41
{
32
{
42
writeb(RNG_BASE_ADDR + offset, value);
33
/* Convert 0==fp32, 1==fp16 into a MO_* value */
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
34
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/a64.decode
37
+++ b/target/arm/tcg/a64.decode
38
@@ -XXX,XX +XXX,XX @@
39
#
40
41
&ri rd imm
42
+&rri_sf rd rn imm sf
43
44
45
### Data Processing - Immediate
46
@@ -XXX,XX +XXX,XX @@
47
48
ADR 0 .. 10000 ................... ..... @pcrel
49
ADRP 1 .. 10000 ................... ..... @pcrel
50
+
51
+# Add/subtract (immediate)
52
+
53
+%imm12_sh12 10:12 !function=shl_12
54
+@addsub_imm sf:1 .. ...... . imm:12 rn:5 rd:5
55
+@addsub_imm12 sf:1 .. ...... . ............ rn:5 rd:5 imm=%imm12_sh12
56
+
57
+ADD_i . 00 100010 0 ............ ..... ..... @addsub_imm
58
+ADD_i . 00 100010 1 ............ ..... ..... @addsub_imm12
59
+ADDS_i . 01 100010 0 ............ ..... ..... @addsub_imm
60
+ADDS_i . 01 100010 1 ............ ..... ..... @addsub_imm12
61
+
62
+SUB_i . 10 100010 0 ............ ..... ..... @addsub_imm
63
+SUB_i . 10 100010 1 ............ ..... ..... @addsub_imm12
64
+SUBS_i . 11 100010 0 ............ ..... ..... @addsub_imm
65
+SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
66
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/tcg/translate-a64.c
69
+++ b/target/arm/tcg/translate-a64.c
70
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
44
}
71
}
45
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
47
+ dump_buf_if_failed(buf, sizeof(buf));
48
}
72
}
49
73
74
+typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
75
+
76
+static bool gen_rri(DisasContext *s, arg_rri_sf *a,
77
+ bool rd_sp, bool rn_sp, ArithTwoOp *fn)
78
+{
79
+ TCGv_i64 tcg_rn = rn_sp ? cpu_reg_sp(s, a->rn) : cpu_reg(s, a->rn);
80
+ TCGv_i64 tcg_rd = rd_sp ? cpu_reg_sp(s, a->rd) : cpu_reg(s, a->rd);
81
+ TCGv_i64 tcg_imm = tcg_constant_i64(a->imm);
82
+
83
+ fn(tcg_rd, tcg_rn, tcg_imm);
84
+ if (!a->sf) {
85
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
86
+ }
87
+ return true;
88
+}
89
+
50
/*
90
/*
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
91
* PC-rel. addressing
52
}
92
*/
53
93
@@ -XXX,XX +XXX,XX @@ static bool trans_ADRP(DisasContext *s, arg_ri *a)
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
56
}
57
94
58
/*
95
/*
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
96
* Add/subtract (immediate)
60
}
97
- *
61
98
- * 31 30 29 28 23 22 21 10 9 5 4 0
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
99
- * +--+--+--+-------------+--+-------------+-----+-----+
63
+ dump_buf_if_failed(buf, sizeof(buf));
100
- * |sf|op| S| 1 0 0 0 1 0 |sh| imm12 | Rn | Rd |
64
}
101
- * +--+--+--+-------------+--+-------------+-----+-----+
102
- *
103
- * sf: 0 -> 32bit, 1 -> 64bit
104
- * op: 0 -> add , 1 -> sub
105
- * S: 1 -> set flags
106
- * sh: 1 -> LSL imm by 12
107
*/
108
-static void disas_add_sub_imm(DisasContext *s, uint32_t insn)
109
-{
110
- int rd = extract32(insn, 0, 5);
111
- int rn = extract32(insn, 5, 5);
112
- uint64_t imm = extract32(insn, 10, 12);
113
- bool shift = extract32(insn, 22, 1);
114
- bool setflags = extract32(insn, 29, 1);
115
- bool sub_op = extract32(insn, 30, 1);
116
- bool is_64bit = extract32(insn, 31, 1);
117
-
118
- TCGv_i64 tcg_rn = cpu_reg_sp(s, rn);
119
- TCGv_i64 tcg_rd = setflags ? cpu_reg(s, rd) : cpu_reg_sp(s, rd);
120
- TCGv_i64 tcg_result;
121
-
122
- if (shift) {
123
- imm <<= 12;
124
- }
125
-
126
- tcg_result = tcg_temp_new_i64();
127
- if (!setflags) {
128
- if (sub_op) {
129
- tcg_gen_subi_i64(tcg_result, tcg_rn, imm);
130
- } else {
131
- tcg_gen_addi_i64(tcg_result, tcg_rn, imm);
132
- }
133
- } else {
134
- TCGv_i64 tcg_imm = tcg_constant_i64(imm);
135
- if (sub_op) {
136
- gen_sub_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
137
- } else {
138
- gen_add_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
139
- }
140
- }
141
-
142
- if (is_64bit) {
143
- tcg_gen_mov_i64(tcg_rd, tcg_result);
144
- } else {
145
- tcg_gen_ext32u_i64(tcg_rd, tcg_result);
146
- }
147
-}
148
+TRANS(ADD_i, gen_rri, a, 1, 1, tcg_gen_add_i64)
149
+TRANS(SUB_i, gen_rri, a, 1, 1, tcg_gen_sub_i64)
150
+TRANS(ADDS_i, gen_rri, a, 0, 1, a->sf ? gen_add64_CC : gen_add32_CC)
151
+TRANS(SUBS_i, gen_rri, a, 0, 1, a->sf ? gen_sub64_CC : gen_sub32_CC)
65
152
66
/*
153
/*
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
154
* Add/subtract (immediate, with tags)
68
}
155
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
69
156
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
157
{
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
158
switch (extract32(insn, 23, 6)) {
72
}
159
- case 0x22: /* Add/subtract (immediate) */
73
160
- disas_add_sub_imm(s, insn);
74
int main(int argc, char **argv)
161
- break;
162
case 0x23: /* Add/subtract (immediate, with tags) */
163
disas_add_sub_imm_with_tags(s, insn);
164
break;
75
--
165
--
76
2.20.1
166
2.34.1
77
78
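Regarding the "[PMM: ... adjusted to use translate.h's TRANS macro]" note above: TRANS() is the glue that turns a one-line table entry into the trans_*() function that the generated decoder calls. The expansion below is a from-memory sketch of the macro in target/arm/tcg/translate.h, so treat the exact spelling as an assumption rather than a quotation.

    /* Approximate definition (target/arm/tcg/translate.h): */
    #define TRANS(NAME, FUNC, ...) \
        static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
        { return FUNC(s, __VA_ARGS__); }

    /* So TRANS(ADD_i, gen_rri, a, 1, 1, tcg_gen_add_i64) expands to roughly: */
    static bool trans_ADD_i(DisasContext *s, arg_ADD_i *a)
    {
        return gen_rri(s, a, 1, 1, tcg_gen_add_i64);
    }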
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
Convert the ADDG and SUBG (immediate) instructions.
4
argument of type "unsigned int".
5
4
6
Reported-by: Euler Robot <euler.robot@huawei.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-8-peter.maydell@linaro.org
9
[PMM: Rebased; use TRANS_FEAT()]
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
hw/misc/imx25_ccm.c | 12 ++++++------
13
target/arm/tcg/a64.decode | 8 +++++++
13
1 file changed, 6 insertions(+), 6 deletions(-)
14
target/arm/tcg/translate-a64.c | 38 ++++++++++------------------------
15
2 files changed, 19 insertions(+), 27 deletions(-)
14
16
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx25_ccm.c
19
--- a/target/arm/tcg/a64.decode
18
+++ b/hw/misc/imx25_ccm.c
20
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
21
@@ -XXX,XX +XXX,XX @@ SUB_i . 10 100010 0 ............ ..... ..... @addsub_imm
20
case IMX25_CCM_LPIMR1_REG:
22
SUB_i . 10 100010 1 ............ ..... ..... @addsub_imm12
21
return "lpimr1";
23
SUBS_i . 11 100010 0 ............ ..... ..... @addsub_imm
22
default:
24
SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
23
- sprintf(unknown, "[%d ?]", reg);
25
+
24
+ sprintf(unknown, "[%u ?]", reg);
26
+# Add/subtract (immediate with tags)
25
return unknown;
27
+
28
+&rri_tag rd rn uimm6 uimm4
29
+@addsub_imm_tag . .. ...... . uimm6:6 .. uimm4:4 rn:5 rd:5 &rri_tag
30
+
31
+ADDG_i 1 00 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
32
+SUBG_i 1 10 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ TRANS(SUBS_i, gen_rri, a, 0, 1, a->sf ? gen_sub64_CC : gen_sub32_CC)
38
39
/*
40
* Add/subtract (immediate, with tags)
41
- *
42
- * 31 30 29 28 23 22 21 16 14 10 9 5 4 0
43
- * +--+--+--+-------------+--+---------+--+-------+-----+-----+
44
- * |sf|op| S| 1 0 0 0 1 1 |o2| uimm6 |o3| uimm4 | Rn | Rd |
45
- * +--+--+--+-------------+--+---------+--+-------+-----+-----+
46
- *
47
- * op: 0 -> add, 1 -> sub
48
*/
49
-static void disas_add_sub_imm_with_tags(DisasContext *s, uint32_t insn)
50
+
51
+static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a,
52
+ bool sub_op)
53
{
54
- int rd = extract32(insn, 0, 5);
55
- int rn = extract32(insn, 5, 5);
56
- int uimm4 = extract32(insn, 10, 4);
57
- int uimm6 = extract32(insn, 16, 6);
58
- bool sub_op = extract32(insn, 30, 1);
59
TCGv_i64 tcg_rn, tcg_rd;
60
int imm;
61
62
- /* Test all of sf=1, S=0, o2=0, o3=0. */
63
- if ((insn & 0xa040c000u) != 0x80000000u ||
64
- !dc_isar_feature(aa64_mte_insn_reg, s)) {
65
- unallocated_encoding(s);
66
- return;
67
- }
68
-
69
- imm = uimm6 << LOG2_TAG_GRANULE;
70
+ imm = a->uimm6 << LOG2_TAG_GRANULE;
71
if (sub_op) {
72
imm = -imm;
26
}
73
}
74
75
- tcg_rn = cpu_reg_sp(s, rn);
76
- tcg_rd = cpu_reg_sp(s, rd);
77
+ tcg_rn = cpu_reg_sp(s, a->rn);
78
+ tcg_rd = cpu_reg_sp(s, a->rd);
79
80
if (s->ata) {
81
gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
82
tcg_constant_i32(imm),
83
- tcg_constant_i32(uimm4));
84
+ tcg_constant_i32(a->uimm4));
85
} else {
86
tcg_gen_addi_i64(tcg_rd, tcg_rn, imm);
87
gen_address_with_allocation_tag0(tcg_rd, tcg_rd);
88
}
89
+ return true;
27
}
90
}
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
91
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
92
+TRANS_FEAT(ADDG_i, aa64_mte_insn_reg, gen_add_sub_imm_with_tags, a, false)
30
}
93
+TRANS_FEAT(SUBG_i, aa64_mte_insn_reg, gen_add_sub_imm_with_tags, a, true)
31
94
+
32
- DPRINTF("freq = %d\n", freq);
95
/* The input should be a value in the bottom e bits (with higher
33
+ DPRINTF("freq = %u\n", freq);
96
* bits zero); returns that value replicated into every element
34
97
* of size e in a 64 bit integer.
35
return freq;
98
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
36
}
99
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
100
{
38
101
switch (extract32(insn, 23, 6)) {
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
102
- case 0x23: /* Add/subtract (immediate, with tags) */
40
103
- disas_add_sub_imm_with_tags(s, insn);
41
- DPRINTF("freq = %d\n", freq);
104
- break;
42
+ DPRINTF("freq = %u\n", freq);
105
case 0x24: /* Logical (immediate) */
43
106
disas_logic_imm(s, insn);
44
return freq;
45
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
47
freq = imx25_ccm_get_mcu_clk(dev)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
49
50
- DPRINTF("freq = %d\n", freq);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
56
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
65
break;
107
break;
66
}
67
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
73
--
108
--
74
2.20.1
109
2.34.1
75
76
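One detail worth spelling out for the ADDG/SUBG conversion above: uimm6 counts MTE tag granules, and LOG2_TAG_GRANULE is 4 in QEMU's MTE code (a 16-byte granule), so the byte offset is uimm6 << 4, negated for SUBG. A minimal sketch follows; the helper name is hypothetical.

    /* Sketch: immediate scaling done by gen_add_sub_imm_with_tags(). */
    static int64_t addsubg_offset_example(int uimm6, bool sub_op)
    {
        int64_t imm = (int64_t)uimm6 << 4;   /* LOG2_TAG_GRANULE == 4, i.e. 16-byte granules */
        return sub_op ? -imm : imm;
    }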
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
Use the bitops.h macro rather than rolling our own here.
4
argument of type "unsigned int".
5
4
6
Reported-by: Euler Robot <euler.robot@huawei.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-9-peter.maydell@linaro.org
11
---
9
---
12
hw/misc/imx31_ccm.c | 14 +++++++-------
10
target/arm/tcg/translate-a64.c | 11 ++---------
13
hw/misc/imx_ccm.c | 4 ++--
11
1 file changed, 2 insertions(+), 9 deletions(-)
14
2 files changed, 9 insertions(+), 9 deletions(-)
15
12
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
13
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx31_ccm.c
15
--- a/target/arm/tcg/translate-a64.c
19
+++ b/hw/misc/imx31_ccm.c
16
+++ b/target/arm/tcg/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
17
@@ -XXX,XX +XXX,XX @@ static uint64_t bitfield_replicate(uint64_t mask, unsigned int e)
21
case IMX31_CCM_PDR2_REG:
18
return mask;
22
return "PDR2";
19
}
23
default:
20
24
- sprintf(unknown, "[%d ?]", reg);
21
-/* Return a value with the bottom len bits set (where 0 < len <= 64) */
25
+ sprintf(unknown, "[%u ?]", reg);
22
-static inline uint64_t bitmask64(unsigned int length)
26
return unknown;
23
-{
24
- assert(length > 0 && length <= 64);
25
- return ~0ULL >> (64 - length);
26
-}
27
-
28
/* Simplified variant of pseudocode DecodeBitMasks() for the case where we
29
* only require the wmask. Returns false if the imms/immr/immn are a reserved
30
* value (ie should cause a guest UNDEF exception), and true if they are
31
@@ -XXX,XX +XXX,XX @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
32
/* Create the value of one element: s+1 set bits rotated
33
* by r within the element (which is e bits wide)...
34
*/
35
- mask = bitmask64(s + 1);
36
+ mask = MAKE_64BIT_MASK(0, s + 1);
37
if (r) {
38
mask = (mask >> r) | (mask << (e - r));
39
- mask &= bitmask64(e);
40
+ mask &= MAKE_64BIT_MASK(0, e);
27
}
41
}
28
}
42
/* ...then replicate the element over the whole 64 bit value */
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
43
mask = bitfield_replicate(mask, e);
30
freq = CKIH_FREQ;
31
}
32
33
- DPRINTF("freq = %d\n", freq);
34
+ DPRINTF("freq = %u\n", freq);
35
36
return freq;
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
40
imx31_ccm_get_pll_ref_clk(dev));
41
42
- DPRINTF("freq = %d\n", freq);
43
+ DPRINTF("freq = %u\n", freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
48
freq = imx31_ccm_get_mpll_clk(dev);
49
}
50
51
- DPRINTF("freq = %d\n", freq);
52
+ DPRINTF("freq = %u\n", freq);
53
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
57
freq = imx31_ccm_get_mcu_main_clk(dev)
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
59
60
- DPRINTF("freq = %d\n", freq);
61
+ DPRINTF("freq = %u\n", freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
66
freq = imx31_ccm_get_hclk_clk(dev)
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
68
69
- DPRINTF("freq = %d\n", freq);
70
+ DPRINTF("freq = %u\n", freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
75
break;
76
}
77
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
80
81
return freq;
82
}
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/misc/imx_ccm.c
86
+++ b/hw/misc/imx_ccm.c
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
88
freq = klass->get_clock_frequency(dev, clock);
89
}
90
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
93
94
return freq;
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
105
--
44
--
106
2.20.1
45
2.34.1
107
108
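For context on the bitops.h macro used above: MAKE_64BIT_MASK(shift, length) builds a mask of 'length' set bits starting at bit 'shift', so MAKE_64BIT_MASK(0, len) is a drop-in replacement for the removed bitmask64(len). The definition below is quoted from memory from include/qemu/bitops.h, so treat it as approximate.

    /* Approximate definition (include/qemu/bitops.h): */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    /* For 0 < len <= 64 this matches the helper the patch deletes:          */
    /*     bitmask64(len) == MAKE_64BIT_MASK(0, len) == ~0ULL >> (64 - len)  */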
1
In commit 077d7449100d824a4 we added code to handle the v8M
1
From: Richard Henderson <richard.henderson@linaro.org>
2
requirement that returns from NMI or HardFault forcibly deactivate
3
those exceptions regardless of what interrupt the guest is trying to
4
deactivate. Unfortunately this broke the handling of the "illegal
5
exception return because the returning exception number is not
6
active" check for those cases. In the pseudocode this test is done
7
on the exception the guest asks to return from, but because our
8
implementation was doing this in armv7m_nvic_complete_irq() after the
9
new "deactivate NMI/HardFault regardless" code we ended up doing the
10
test on the VecInfo for that exception instead, which usually meant
11
failing to raise the illegal exception return fault.
12
2
13
In the case for "configurable exception targeting the opposite
3
Convert the ADD, ORR, EOR, ANDS (immediate) instructions.
14
security state" we detected the illegal-return case but went ahead
15
and deactivated the VecInfo anyway, which is wrong because that is
16
the VecInfo for the other security state.
17
4
18
Rearrange the code so that we first identify the illegal return
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
19
cases, then see if we really need to deactivate NMI or HardFault
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
instead, and finally do the deactivation.
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-10-peter.maydell@linaro.org
9
[PMM: rebased]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/tcg/a64.decode | 15 ++++++
13
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
14
2 files changed, 44 insertions(+), 65 deletions(-)
21
15
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
25
---
26
hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
27
1 file changed, 32 insertions(+), 27 deletions(-)
28
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
30
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/intc/armv7m_nvic.c
18
--- a/target/arm/tcg/a64.decode
32
+++ b/hw/intc/armv7m_nvic.c
19
+++ b/target/arm/tcg/a64.decode
33
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
20
@@ -XXX,XX +XXX,XX @@ SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
21
22
ADDG_i 1 00 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
23
SUBG_i 1 10 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
24
+
25
+# Logical (immediate)
26
+
27
+&rri_log rd rn sf dbm
28
+@logic_imm_64 1 .. ...... dbm:13 rn:5 rd:5 &rri_log sf=1
29
+@logic_imm_32 0 .. ...... 0 dbm:12 rn:5 rd:5 &rri_log sf=0
30
+
31
+AND_i . 00 100100 . ...... ...... ..... ..... @logic_imm_64
32
+AND_i . 00 100100 . ...... ...... ..... ..... @logic_imm_32
33
+ORR_i . 01 100100 . ...... ...... ..... ..... @logic_imm_64
34
+ORR_i . 01 100100 . ...... ...... ..... ..... @logic_imm_32
35
+EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_64
36
+EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_32
37
+ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_64
38
+ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_32
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ static uint64_t bitfield_replicate(uint64_t mask, unsigned int e)
44
return mask;
45
}
46
47
-/* Simplified variant of pseudocode DecodeBitMasks() for the case where we
48
+/*
49
+ * Logical (immediate)
50
+ */
51
+
52
+/*
53
+ * Simplified variant of pseudocode DecodeBitMasks() for the case where we
54
* only require the wmask. Returns false if the imms/immr/immn are a reserved
55
* value (ie should cause a guest UNDEF exception), and true if they are
56
* valid, in which case the decoded bit pattern is written to result.
57
@@ -XXX,XX +XXX,XX @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
58
return true;
59
}
60
61
-/* Logical (immediate)
62
- * 31 30 29 28 23 22 21 16 15 10 9 5 4 0
63
- * +----+-----+-------------+---+------+------+------+------+
64
- * | sf | opc | 1 0 0 1 0 0 | N | immr | imms | Rn | Rd |
65
- * +----+-----+-------------+---+------+------+------+------+
66
- */
67
-static void disas_logic_imm(DisasContext *s, uint32_t insn)
68
+static bool gen_rri_log(DisasContext *s, arg_rri_log *a, bool set_cc,
69
+ void (*fn)(TCGv_i64, TCGv_i64, int64_t))
34
{
70
{
35
NVICState *s = (NVICState *)opaque;
71
- unsigned int sf, opc, is_n, immr, imms, rn, rd;
36
VecInfo *vec = NULL;
72
TCGv_i64 tcg_rd, tcg_rn;
37
- int ret;
73
- uint64_t wmask;
38
+ int ret = 0;
74
- bool is_and = false;
39
75
+ uint64_t imm;
40
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
76
41
77
- sf = extract32(insn, 31, 1);
42
+ trace_nvic_complete_irq(irq, secure);
78
- opc = extract32(insn, 29, 2);
43
+
79
- is_n = extract32(insn, 22, 1);
44
+ if (secure && exc_is_banked(irq)) {
80
- immr = extract32(insn, 16, 6);
45
+ vec = &s->sec_vectors[irq];
81
- imms = extract32(insn, 10, 6);
46
+ } else {
82
- rn = extract32(insn, 5, 5);
47
+ vec = &s->vectors[irq];
83
- rd = extract32(insn, 0, 5);
84
-
85
- if (!sf && is_n) {
86
- unallocated_encoding(s);
87
- return;
88
+ /* Some immediate field values are reserved. */
89
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
90
+ extract32(a->dbm, 0, 6),
91
+ extract32(a->dbm, 6, 6))) {
92
+ return false;
48
+ }
93
+ }
49
+
94
+ if (!a->sf) {
50
+ /*
95
+ imm &= 0xffffffffull;
51
+ * Identify illegal exception return cases. We can't immediately
52
+ * return at this point because we still need to deactivate
53
+ * (either this exception or NMI/HardFault) first.
54
+ */
55
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
56
+ /*
57
+ * Return from a configurable exception targeting the opposite
58
+ * security state from the one we're trying to complete it for.
59
+ * Clear vec because it's not really the VecInfo for this
60
+ * (irq, secstate) so we mustn't deactivate it.
61
+ */
62
+ ret = -1;
63
+ vec = NULL;
64
+ } else if (!vec->active) {
65
+ /* Return from an inactive interrupt */
66
+ ret = -1;
67
+ } else {
68
+ /* Legal return, we will return the RETTOBASE bit value to the caller */
69
+ ret = nvic_rettobase(s);
70
+ }
71
+
72
/*
73
* For negative priorities, v8M will forcibly deactivate the appropriate
74
* NMI or HardFault regardless of what interrupt we're being asked to
75
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
76
}
96
}
77
97
78
if (!vec) {
98
- if (opc == 0x3) { /* ANDS */
79
- if (secure && exc_is_banked(irq)) {
99
- tcg_rd = cpu_reg(s, rd);
80
- vec = &s->sec_vectors[irq];
100
- } else {
81
- } else {
101
- tcg_rd = cpu_reg_sp(s, rd);
82
- vec = &s->vectors[irq];
102
- }
83
- }
103
- tcg_rn = cpu_reg(s, rn);
104
+ tcg_rd = set_cc ? cpu_reg(s, a->rd) : cpu_reg_sp(s, a->rd);
105
+ tcg_rn = cpu_reg(s, a->rn);
106
107
- if (!logic_imm_decode_wmask(&wmask, is_n, imms, immr)) {
108
- /* some immediate field values are reserved */
109
- unallocated_encoding(s);
110
- return;
111
+ fn(tcg_rd, tcg_rn, imm);
112
+ if (set_cc) {
113
+ gen_logic_CC(a->sf, tcg_rd);
114
}
115
-
116
- if (!sf) {
117
- wmask &= 0xffffffff;
84
- }
118
- }
85
-
119
-
86
- trace_nvic_complete_irq(irq, secure);
120
- switch (opc) {
87
-
121
- case 0x3: /* ANDS */
88
- if (!vec->active) {
122
- case 0x0: /* AND */
89
- /* Tell the caller this was an illegal exception return */
123
- tcg_gen_andi_i64(tcg_rd, tcg_rn, wmask);
90
- return -1;
124
- is_and = true;
125
- break;
126
- case 0x1: /* ORR */
127
- tcg_gen_ori_i64(tcg_rd, tcg_rn, wmask);
128
- break;
129
- case 0x2: /* EOR */
130
- tcg_gen_xori_i64(tcg_rd, tcg_rn, wmask);
131
- break;
132
- default:
133
- assert(FALSE); /* must handle all above */
134
- break;
91
- }
135
- }
92
-
136
-
93
- /*
137
- if (!sf && !is_and) {
94
- * If this is a configurable exception and it is currently
138
- /* zero extend final result; we know we can skip this for AND
95
- * targeting the opposite security state from the one we're trying
139
- * since the immediate had the high 32 bits clear.
96
- * to complete it for, this counts as an illegal exception return.
140
- */
97
- * We still need to deactivate whatever vector the logic above has
141
+ if (!a->sf) {
98
- * selected, though, as it might not be the same as the one for the
142
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
99
- * requested exception number.
100
- */
101
- if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
102
- ret = -1;
103
- } else {
104
- ret = nvic_rettobase(s);
105
+ return ret;
106
}
143
}
107
144
-
108
vec->active = 0;
145
- if (opc == 3) { /* ANDS */
146
- gen_logic_CC(sf, tcg_rd);
147
- }
148
+ return true;
149
}
150
151
+TRANS(AND_i, gen_rri_log, a, false, tcg_gen_andi_i64)
152
+TRANS(ORR_i, gen_rri_log, a, false, tcg_gen_ori_i64)
153
+TRANS(EOR_i, gen_rri_log, a, false, tcg_gen_xori_i64)
154
+TRANS(ANDS_i, gen_rri_log, a, true, tcg_gen_andi_i64)
155
+
156
/*
157
* Move wide (immediate)
158
*
159
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
160
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
161
{
162
switch (extract32(insn, 23, 6)) {
163
- case 0x24: /* Logical (immediate) */
164
- disas_logic_imm(s, insn);
165
- break;
166
case 0x25: /* Move wide (immediate) */
167
disas_movw_imm(s, insn);
168
break;
109
--
169
--
110
2.20.1
170
2.34.1
111
112
1
From: Kunkun Jiang <jiangkunkun@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
According to the SMMUv3 spec, the SPAN field of the Level 1 Stream Table
3
Convert the MOVN, MOVZ, MOVK instructions.
4
Descriptor is 5 bits ([4:0]).
5
4
6
Fixes: 9bde7f0674f(hw/arm/smmuv3: Implement translate callback)
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Acked-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20230512144106.3608981-11-peter.maydell@linaro.org
9
[PMM: Rebased]
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
hw/arm/smmuv3-internal.h | 2 +-
13
target/arm/tcg/a64.decode | 13 ++++++
14
1 file changed, 1 insertion(+), 1 deletion(-)
14
target/arm/tcg/translate-a64.c | 73 ++++++++++++++--------------------
15
2 files changed, 42 insertions(+), 44 deletions(-)
15
16
16
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/smmuv3-internal.h
19
--- a/target/arm/tcg/a64.decode
19
+++ b/hw/arm/smmuv3-internal.h
20
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
21
@@ -XXX,XX +XXX,XX @@ EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_64
21
return hi << 32 | lo;
22
EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_32
23
ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_64
24
ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_32
25
+
26
+# Move wide (immediate)
27
+
28
+&movw rd sf imm hw
29
+@movw_64 1 .. ...... hw:2 imm:16 rd:5 &movw sf=1
30
+@movw_32 0 .. ...... 0 hw:1 imm:16 rd:5 &movw sf=0
31
+
32
+MOVN . 00 100101 .. ................ ..... @movw_64
33
+MOVN . 00 100101 .. ................ ..... @movw_32
34
+MOVZ . 10 100101 .. ................ ..... @movw_64
35
+MOVZ . 10 100101 .. ................ ..... @movw_32
36
+MOVK . 11 100101 .. ................ ..... @movw_64
37
+MOVK . 11 100101 .. ................ ..... @movw_32
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ TRANS(ANDS_i, gen_rri_log, a, true, tcg_gen_andi_i64)
43
44
/*
45
* Move wide (immediate)
46
- *
47
- * 31 30 29 28 23 22 21 20 5 4 0
48
- * +--+-----+-------------+-----+----------------+------+
49
- * |sf| opc | 1 0 0 1 0 1 | hw | imm16 | Rd |
50
- * +--+-----+-------------+-----+----------------+------+
51
- *
52
- * sf: 0 -> 32 bit, 1 -> 64 bit
53
- * opc: 00 -> N, 10 -> Z, 11 -> K
54
- * hw: shift/16 (0,16, and sf only 32, 48)
55
*/
56
-static void disas_movw_imm(DisasContext *s, uint32_t insn)
57
+
58
+static bool trans_MOVZ(DisasContext *s, arg_movw *a)
59
{
60
- int rd = extract32(insn, 0, 5);
61
- uint64_t imm = extract32(insn, 5, 16);
62
- int sf = extract32(insn, 31, 1);
63
- int opc = extract32(insn, 29, 2);
64
- int pos = extract32(insn, 21, 2) << 4;
65
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
66
+ int pos = a->hw << 4;
67
+ tcg_gen_movi_i64(cpu_reg(s, a->rd), (uint64_t)a->imm << pos);
68
+ return true;
69
+}
70
71
- if (!sf && (pos >= 32)) {
72
- unallocated_encoding(s);
73
- return;
74
- }
75
+static bool trans_MOVN(DisasContext *s, arg_movw *a)
76
+{
77
+ int pos = a->hw << 4;
78
+ uint64_t imm = a->imm;
79
80
- switch (opc) {
81
- case 0: /* MOVN */
82
- case 2: /* MOVZ */
83
- imm <<= pos;
84
- if (opc == 0) {
85
- imm = ~imm;
86
- }
87
- if (!sf) {
88
- imm &= 0xffffffffu;
89
- }
90
- tcg_gen_movi_i64(tcg_rd, imm);
91
- break;
92
- case 3: /* MOVK */
93
- tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_constant_i64(imm), pos, 16);
94
- if (!sf) {
95
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
96
- }
97
- break;
98
- default:
99
- unallocated_encoding(s);
100
- break;
101
+ imm = ~(imm << pos);
102
+ if (!a->sf) {
103
+ imm = (uint32_t)imm;
104
}
105
+ tcg_gen_movi_i64(cpu_reg(s, a->rd), imm);
106
+ return true;
107
+}
108
+
109
+static bool trans_MOVK(DisasContext *s, arg_movw *a)
110
+{
111
+ int pos = a->hw << 4;
112
+ TCGv_i64 tcg_rd, tcg_im;
113
+
114
+ tcg_rd = cpu_reg(s, a->rd);
115
+ tcg_im = tcg_constant_i64(a->imm);
116
+ tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_im, pos, 16);
117
+ if (!a->sf) {
118
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
119
+ }
120
+ return true;
22
}
121
}
23
122
24
-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
123
/* Bitfield
25
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))
124
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
26
125
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
27
#endif
126
{
127
switch (extract32(insn, 23, 6)) {
128
- case 0x25: /* Move wide (immediate) */
129
- disas_movw_imm(s, insn);
130
- break;
131
case 0x26: /* Bitfield */
132
disas_bitfield(s, insn);
133
break;
28
--
134
--
29
2.20.1
135
2.34.1
30
31
diff view generated by jsdifflib
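To put concrete numbers on the SMMUv3 SPAN fix above: extracting only 4 bits
silently truncates any SPAN value of 16 or more, while the 5-bit extraction
preserves the full architectural range 0..31. A minimal standalone sketch
(extract32() re-implemented locally just for the example, and a hypothetical
descriptor word):

    #include <stdint.h>
    #include <stdio.h>

    /* Local re-implementation of the usual extract32() semantics:
     * take 'length' bits of 'value' starting at bit 'start'. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t word0 = 0x12;  /* hypothetical L1 STE descriptor word, SPAN = 18 */

        printf("4-bit SPAN: %u\n", extract32(word0, 0, 4));  /* 2  - truncated */
        printf("5-bit SPAN: %u\n", extract32(word0, 0, 5));  /* 18 - correct   */
        return 0;
    }

With the one-line change to L1STD_SPAN() the model and the spec agree again.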
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The Xilinx ZynqMP CAN controller model is based on SocketCAN and the QEMU CAN bus
3
Convert the BFM, SBFM, UBFM instructions.
4
implementation. The bus connection and SocketCAN connection for each CAN module
5
can be configured on the command line.
6
4
7
Example of using a single CAN controller:
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
-object can-bus,id=canbus0 \
9
-machine xlnx-zcu102.canbus0=canbus0 \
10
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0
11
12
Example of connecting both CAN controllers to the same virtual CAN on the host machine:
13
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
14
-machine xlnx-zcu102.canbus0=canbus0 \
15
-machine xlnx-zcu102.canbus1=canbus1 \
16
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
17
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1
18
19
To create a virtual CAN interface on the host machine, please check the QEMU CAN docs:
20
https://github.com/qemu/qemu/blob/master/docs/can.txt
21
22
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
23
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-12-peter.maydell@linaro.org
9
[PMM: Rebased]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
11
---
27
meson.build | 1 +
12
target/arm/tcg/a64.decode | 13 +++
28
hw/net/can/trace.h | 1 +
13
target/arm/tcg/translate-a64.c | 144 ++++++++++++++++++---------------
29
include/hw/net/xlnx-zynqmp-can.h | 78 ++
14
2 files changed, 94 insertions(+), 63 deletions(-)
30
hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
31
hw/Kconfig | 1 +
32
hw/net/can/meson.build | 1 +
33
hw/net/can/trace-events | 9 +
34
7 files changed, 1252 insertions(+)
35
create mode 100644 hw/net/can/trace.h
36
create mode 100644 include/hw/net/xlnx-zynqmp-can.h
37
create mode 100644 hw/net/can/xlnx-zynqmp-can.c
38
create mode 100644 hw/net/can/trace-events
39
15
40
diff --git a/meson.build b/meson.build
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
41
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
18
--- a/target/arm/tcg/a64.decode
43
+++ b/meson.build
19
+++ b/target/arm/tcg/a64.decode
44
@@ -XXX,XX +XXX,XX @@ if have_system
20
@@ -XXX,XX +XXX,XX @@ MOVZ . 10 100101 .. ................ ..... @movw_64
45
'hw/misc',
21
MOVZ . 10 100101 .. ................ ..... @movw_32
46
'hw/misc/macio',
22
MOVK . 11 100101 .. ................ ..... @movw_64
47
'hw/net',
23
MOVK . 11 100101 .. ................ ..... @movw_32
48
+ 'hw/net/can',
24
+
49
'hw/nvram',
25
+# Bitfield
50
'hw/pci',
26
+
51
'hw/pci-host',
27
+&bitfield rd rn sf immr imms
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
28
+@bitfield_64 1 .. ...... 1 immr:6 imms:6 rn:5 rd:5 &bitfield sf=1
53
new file mode 100644
29
+@bitfield_32 0 .. ...... 0 0 immr:5 0 imms:5 rn:5 rd:5 &bitfield sf=0
54
index XXXXXXX..XXXXXXX
30
+
55
--- /dev/null
31
+SBFM . 00 100110 . ...... ...... ..... ..... @bitfield_64
56
+++ b/hw/net/can/trace.h
32
+SBFM . 00 100110 . ...... ...... ..... ..... @bitfield_32
57
@@ -0,0 +1 @@
33
+BFM . 01 100110 . ...... ...... ..... ..... @bitfield_64
58
+#include "trace/trace-hw_net_can.h"
34
+BFM . 01 100110 . ...... ...... ..... ..... @bitfield_32
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
35
+UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_64
60
new file mode 100644
36
+UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
61
index XXXXXXX..XXXXXXX
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
62
--- /dev/null
38
index XXXXXXX..XXXXXXX 100644
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
39
--- a/target/arm/tcg/translate-a64.c
64
@@ -XXX,XX +XXX,XX @@
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVK(DisasContext *s, arg_movw *a)
42
return true;
43
}
44
45
-/* Bitfield
46
- * 31 30 29 28 23 22 21 16 15 10 9 5 4 0
47
- * +----+-----+-------------+---+------+------+------+------+
48
- * | sf | opc | 1 0 0 1 1 0 | N | immr | imms | Rn | Rd |
49
- * +----+-----+-------------+---+------+------+------+------+
65
+/*
50
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
51
+ * Bitfield
67
+ *
52
*/
68
+ * Copyright (c) 2020 Xilinx Inc.
53
-static void disas_bitfield(DisasContext *s, uint32_t insn)
69
+ *
54
+
70
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
55
+static bool trans_SBFM(DisasContext *s, arg_SBFM *a)
71
+ *
56
{
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
57
- unsigned int sf, n, opc, ri, si, rn, rd, bitsize, pos, len;
73
+ * Pavel Pisa.
58
- TCGv_i64 tcg_rd, tcg_tmp;
74
+ *
59
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
60
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
76
+ * of this software and associated documentation files (the "Software"), to deal
61
+ unsigned int bitsize = a->sf ? 64 : 32;
77
+ * in the Software without restriction, including without limitation the rights
62
+ unsigned int ri = a->immr;
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
63
+ unsigned int si = a->imms;
79
+ * copies of the Software, and to permit persons to whom the Software is
64
+ unsigned int pos, len;
80
+ * furnished to do so, subject to the following conditions:
65
81
+ *
66
- sf = extract32(insn, 31, 1);
82
+ * The above copyright notice and this permission notice shall be included in
67
- opc = extract32(insn, 29, 2);
83
+ * all copies or substantial portions of the Software.
68
- n = extract32(insn, 22, 1);
84
+ *
69
- ri = extract32(insn, 16, 6);
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
70
- si = extract32(insn, 10, 6);
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
71
- rn = extract32(insn, 5, 5);
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
72
- rd = extract32(insn, 0, 5);
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
73
- bitsize = sf ? 64 : 32;
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
74
-
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
75
- if (sf != n || ri >= bitsize || si >= bitsize || opc > 2) {
91
+ * THE SOFTWARE.
76
- unallocated_encoding(s);
92
+ */
77
- return;
93
+
78
- }
94
+#ifndef XLNX_ZYNQMP_CAN_H
79
-
95
+#define XLNX_ZYNQMP_CAN_H
80
- tcg_rd = cpu_reg(s, rd);
96
+
81
-
97
+#include "hw/register.h"
82
- /* Suppress the zero-extend for !sf. Since RI and SI are constrained
98
+#include "net/can_emu.h"
83
- to be smaller than bitsize, we'll never reference data outside the
99
+#include "net/can_host.h"
84
- low 32-bits anyway. */
100
+#include "qemu/fifo32.h"
85
- tcg_tmp = read_cpu_reg(s, rn, 1);
101
+#include "hw/ptimer.h"
86
-
102
+#include "hw/qdev-clock.h"
87
- /* Recognize simple(r) extractions. */
103
+
88
if (si >= ri) {
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
89
/* Wd<s-r:0> = Wn<s:r> */
105
+
90
len = (si - ri) + 1;
106
+#define XLNX_ZYNQMP_CAN(obj) \
91
- if (opc == 0) { /* SBFM: ASR, SBFX, SXTB, SXTH, SXTW */
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
92
- tcg_gen_sextract_i64(tcg_rd, tcg_tmp, ri, len);
108
+
93
- goto done;
109
+#define MAX_CAN_CTRLS 2
94
- } else if (opc == 2) { /* UBFM: UBFX, LSR, UXTB, UXTH */
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
95
- tcg_gen_extract_i64(tcg_rd, tcg_tmp, ri, len);
111
+#define MAILBOX_CAPACITY 64
96
- return;
112
+#define CAN_TIMER_MAX 0XFFFFUL
97
+ tcg_gen_sextract_i64(tcg_rd, tcg_tmp, ri, len);
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
98
+ if (!a->sf) {
114
+
99
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
115
+/* Each CAN_FRAME will have 4 * 32bit size. */
100
}
116
+#define CAN_FRAME_SIZE 4
101
- /* opc == 1, BFXIL fall through to deposit */
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
102
+ } else {
118
+
103
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
119
+typedef struct XlnxZynqMPCANState {
104
+ len = si + 1;
120
+ SysBusDevice parent_obj;
105
+ pos = (bitsize - ri) & (bitsize - 1);
121
+ MemoryRegion iomem;
106
+
122
+
107
+ if (len < ri) {
123
+ qemu_irq irq;
108
+ /*
124
+
109
+ * Sign extend the destination field from len to fill the
125
+ CanBusClientState bus_client;
110
+ * balance of the word. Let the deposit below insert all
126
+ CanBusState *canbus;
111
+ * of those sign bits.
127
+
112
+ */
128
+ struct {
113
+ tcg_gen_sextract_i64(tcg_tmp, tcg_tmp, 0, len);
129
+ uint32_t ext_clk_freq;
114
+ len = ri;
130
+ } cfg;
115
+ }
131
+
116
+
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
117
+ /*
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
118
+ * We start with zero, and we haven't modified any bits outside
134
+
119
+ * bitsize, therefore no final zero-extension is needed for !sf.
135
+ Fifo32 rx_fifo;
120
+ */
136
+ Fifo32 tx_fifo;
121
+ tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
137
+ Fifo32 txhpb_fifo;
138
+
139
+ ptimer_state *can_timer;
140
+} XlnxZynqMPCANState;
141
+
142
+#endif
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
148
@@ -XXX,XX +XXX,XX @@
149
+/*
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
151
+ * This implementation is based on the following datasheet:
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
153
+ *
154
+ * Copyright (c) 2020 Xilinx Inc.
155
+ *
156
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
122
+ }
411
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
415
+ }
416
+
417
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
123
+ return true;
584
+}
124
+}
585
+
125
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
126
+static bool trans_UBFM(DisasContext *s, arg_UBFM *a)
587
+{
127
+{
588
+ qemu_can_frame frame;
128
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
589
+ uint32_t data[CAN_FRAME_SIZE];
129
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
590
+ int i;
130
+ unsigned int bitsize = a->sf ? 64 : 32;
591
+ bool can_tx = tx_ready_check(s);
131
+ unsigned int ri = a->immr;
592
+
132
+ unsigned int si = a->imms;
593
+ if (!can_tx) {
133
+ unsigned int pos, len;
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
134
+
595
+
135
+ tcg_rd = cpu_reg(s, a->rd);
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
136
+ tcg_tmp = read_cpu_reg(s, a->rn, 1);
597
+ " transfer.\n", path);
137
+
598
+ can_update_irq(s);
138
+ if (si >= ri) {
599
+ return;
139
+ /* Wd<s-r:0> = Wn<s:r> */
140
+ len = (si - ri) + 1;
141
+ tcg_gen_extract_i64(tcg_rd, tcg_tmp, ri, len);
142
+ } else {
143
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
144
+ len = si + 1;
145
+ pos = (bitsize - ri) & (bitsize - 1);
146
+ tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
600
+ }
147
+ }
601
+
602
+ while (!fifo32_is_empty(fifo)) {
603
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
604
+ data[i] = fifo32_pop(fifo);
605
+ }
606
+
607
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
608
+ /*
609
+ * Controller is in loopback. In Loopback mode, the CAN core
610
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
611
+ * Any message transmitted is looped back to the RX line and
612
+ * acknowledged. The XlnxZynqMPCAN core receives any message
613
+ * that it transmits.
614
+ */
615
+ if (fifo32_is_full(&s->rx_fifo)) {
616
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
617
+ } else {
618
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
619
+ fifo32_push(&s->rx_fifo, data[i]);
620
+ }
621
+
622
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
623
+ }
624
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, core will do software reset then will enter in config mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to config"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled. Message will be stored in FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* modify an acceptance filter, the corresponding UAF bit should be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
1094
+ .min_access_size = 4,
1095
+ .max_access_size = 4,
1096
+ },
1097
+};
1098
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
1100
+{
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+ * Reset FIFOs when CAN model is reset. This will clear the fifo writes
1124
+ * done by post_write which gets called from register_reset function,
1125
+ * post_write handle will not be able to trigger tx because CAN will be
1126
+ * disabled when software_reset_register is cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
148
+ return true;
1155
+}
149
+}
1156
+
150
+
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
151
+static bool trans_BFM(DisasContext *s, arg_BFM *a)
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+ /* Snoop Mode: Just keep the data. no response back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+ * XlnxZynqMPCAN is in sleep mode. Any data on bus will bring it to wake
1177
+ * up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
1198
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
1200
+ CanBusState *bus)
1201
+{
152
+{
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
153
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
1203
+
154
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
155
+ unsigned int bitsize = a->sf ? 64 : 32;
1205
+ return -1;
156
+ unsigned int ri = a->immr;
1206
+ }
157
+ unsigned int si = a->imms;
1207
+ return 0;
158
+ unsigned int pos, len;
1208
+}
159
+
1209
+
160
+ tcg_rd = cpu_reg(s, a->rd);
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
161
+ tcg_tmp = read_cpu_reg(s, a->rn, 1);
1211
+{
162
+
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
163
+ if (si >= ri) {
1213
+
164
+ /* Wd<s-r:0> = Wn<s:r> */
1214
+ if (s->canbus) {
165
tcg_gen_shri_i64(tcg_tmp, tcg_tmp, ri);
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
166
+ len = (si - ri) + 1;
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
167
pos = 0;
1217
+
168
} else {
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
169
- /* Handle the ri > si case with a deposit
1219
+ " failed.", path);
170
- * Wd<32+s-r,32-r> = Wn<s:0>
1220
+ return;
171
- */
1221
+ }
172
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
1222
+ }
173
len = si + 1;
1223
+
174
pos = (bitsize - ri) & (bitsize - 1);
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
175
}
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
176
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
177
- if (opc == 0 && len < ri) {
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
178
- /* SBFM: sign extend the destination field from len to fill
1228
+
179
- the balance of the word. Let the deposit below insert all
1229
+ /* Allocate a new timer. */
180
- of those sign bits. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
+ PTIMER_POLICY_DEFAULT);
+
+ ptimer_transaction_begin(s->can_timer);
+
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
+ ptimer_run(s->can_timer, 0);
+ ptimer_transaction_commit(s->can_timer);
+}
+
+static void xlnx_zynqmp_can_init(Object *obj)
+{
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+
+ RegisterInfoArray *reg_array;
+
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
+ ARRAY_SIZE(can_regs_info),
+ s->reg_info, s->regs,
+ &can_ops,
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
+
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
+ sysbus_init_mmio(sbd, &s->iomem);
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
+}
+
+static const VMStateDescription vmstate_can = {
+ .name = TYPE_XLNX_ZYNQMP_CAN,
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
+ VMSTATE_END_OF_LIST(),
+ }
+};
+
+static Property xlnx_zynqmp_can_properties[] = {
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
+ CAN_DEFAULT_CLOCK),
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
+ CanBusState *),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
+
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
+ dc->realize = xlnx_zynqmp_can_realize;
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
+ dc->vmsd = &vmstate_can;
+}
+
+static const TypeInfo can_info = {
+ .name = TYPE_XLNX_ZYNQMP_CAN,
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_size = sizeof(XlnxZynqMPCANState),
+ .class_init = xlnx_zynqmp_can_class_init,
+ .instance_init = xlnx_zynqmp_can_init,
+};
+
+static void can_register_types(void)
+{
+ type_register_static(&can_info);
+}
+
+type_init(can_register_types)
diff --git a/hw/Kconfig b/hw/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
config XLNX_ZYNQMP
bool
select REGISTER
+ select CAN_BUS
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/meson.build
+++ b/hw/net/can/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace-events
@@ -XXX,XX +XXX,XX @@
+# xlnx-zynqmp-can.c
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
--
2.20.1

- tcg_gen_sextract_i64(tcg_tmp, tcg_tmp, 0, len);
- len = ri;
- }
-
- if (opc == 1) { /* BFM, BFXIL */
- tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_tmp, pos, len);
- } else {
- /* SBFM or UBFM: We start with zero, and we haven't modified
- any bits outside bitsize, therefore the zero-extension
- below is unneeded. */
- tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
- return;
- }
-
- done:
- if (!sf) { /* zero extend final result */
+ tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_tmp, pos, len);
+ if (!a->sf) {
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
}
+ return true;
}

/* Extract
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 23, 6)) {
- case 0x26: /* Bitfield */
- disas_bitfield(s, insn);
- break;
case 0x27: /* Extract */
disas_extract(s, insn);
break;
--
2.34.1
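The hw/net/can/trace-events lines added above each become a trace_<name>()
helper generated by tracetool, which the device model can call directly.
As a hypothetical fragment (the caller below is assumed, not part of the
patch):

  #include "trace.h"

  static void can_log_reset(uint32_t val)
  {
      /* emits "Resetting controller with value = 0x%08x" when the
       * xlnx_can_reset trace event is enabled */
      trace_xlnx_can_reset(val);
  }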
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Convert the EXTR instruction to decodetree (this is the
only one in the 'Extract' class). This is the last of
the dp-immediate insns in the legacy decoder, so we
can now remove disas_data_proc_imm().
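As a rough illustration of what scripts/decodetree.py does with the new
EXTR patterns (the struct layout and prototype below are only indicative,
not copied from the generated decoder):

  /* a64.decode:
   *   &extract rd rn rm imm sf
   *   EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
   * The '&extract' arg set becomes a C struct; the generated decoder
   * extracts the named fields and calls a trans_EXTR() hook that the
   * translator provides.
   */
  typedef struct arg_extract {
      int rd, rn, rm, imm, sf;
  } arg_extract;

  static bool trans_EXTR(DisasContext *s, arg_extract *a);
  /* Returning false means "not decoded here", so the caller can fall
   * back to the legacy decoder or treat the insn as unallocated. */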
5
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
8
Message-id: 20230512144106.3608981-13-peter.maydell@linaro.org
9
---
9
---
10
target/arm/translate-vfp.c.inc | 5 ++++-
10
target/arm/tcg/a64.decode | 7 +++
11
1 file changed, 4 insertions(+), 1 deletion(-)
11
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
12
2 files changed, 36 insertions(+), 65 deletions(-)
12
13
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
16
--- a/target/arm/tcg/a64.decode
16
+++ b/target/arm/translate-vfp.c.inc
17
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
18
@@ -XXX,XX +XXX,XX @@ BFM . 01 100110 . ...... ...... ..... ..... @bitfield_64
18
* Accesses to R15 are UNPREDICTABLE; we choose to undef.
19
BFM . 01 100110 . ...... ...... ..... ..... @bitfield_32
19
* (FPSCR -> r15 is a special case which writes to the PSR flags.)
20
UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_64
20
*/
21
UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
21
- if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
22
+
22
+ if (a->reg != ARM_VFP_FPSCR) {
23
+# Extract
23
+ return false;
24
+
25
+&extract rd rn rm imm sf
26
+
27
+EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
28
+EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool trans_BFM(DisasContext *s, arg_BFM *a)
34
return true;
35
}
36
37
-/* Extract
38
- * 31 30 29 28 23 22 21 20 16 15 10 9 5 4 0
39
- * +----+------+-------------+---+----+------+--------+------+------+
40
- * | sf | op21 | 1 0 0 1 1 1 | N | o0 | Rm | imms | Rn | Rd |
41
- * +----+------+-------------+---+----+------+--------+------+------+
42
- */
43
-static void disas_extract(DisasContext *s, uint32_t insn)
44
+static bool trans_EXTR(DisasContext *s, arg_extract *a)
45
{
46
- unsigned int sf, n, rm, imm, rn, rd, bitsize, op21, op0;
47
+ TCGv_i64 tcg_rd, tcg_rm, tcg_rn;
48
49
- sf = extract32(insn, 31, 1);
50
- n = extract32(insn, 22, 1);
51
- rm = extract32(insn, 16, 5);
52
- imm = extract32(insn, 10, 6);
53
- rn = extract32(insn, 5, 5);
54
- rd = extract32(insn, 0, 5);
55
- op21 = extract32(insn, 29, 2);
56
- op0 = extract32(insn, 21, 1);
57
- bitsize = sf ? 64 : 32;
58
+ tcg_rd = cpu_reg(s, a->rd);
59
60
- if (sf != n || op21 || op0 || imm >= bitsize) {
61
- unallocated_encoding(s);
62
- } else {
63
- TCGv_i64 tcg_rd, tcg_rm, tcg_rn;
64
-
65
- tcg_rd = cpu_reg(s, rd);
66
-
67
- if (unlikely(imm == 0)) {
68
- /* tcg shl_i32/shl_i64 is undefined for 32/64 bit shifts,
69
- * so an extract from bit 0 is a special case.
70
- */
71
- if (sf) {
72
- tcg_gen_mov_i64(tcg_rd, cpu_reg(s, rm));
73
- } else {
74
- tcg_gen_ext32u_i64(tcg_rd, cpu_reg(s, rm));
75
- }
76
+ if (unlikely(a->imm == 0)) {
77
+ /*
78
+ * tcg shl_i32/shl_i64 is undefined for 32/64 bit shifts,
79
+ * so an extract from bit 0 is a special case.
80
+ */
81
+ if (a->sf) {
82
+ tcg_gen_mov_i64(tcg_rd, cpu_reg(s, a->rm));
83
} else {
84
- tcg_rm = cpu_reg(s, rm);
85
- tcg_rn = cpu_reg(s, rn);
86
+ tcg_gen_ext32u_i64(tcg_rd, cpu_reg(s, a->rm));
24
+ }
87
+ }
25
+ if (a->rt == 15 && !a->l) {
88
+ } else {
26
return false;
89
+ tcg_rm = cpu_reg(s, a->rm);
90
+ tcg_rn = cpu_reg(s, a->rn);
91
92
- if (sf) {
93
- /* Specialization to ROR happens in EXTRACT2. */
94
- tcg_gen_extract2_i64(tcg_rd, tcg_rm, tcg_rn, imm);
95
+ if (a->sf) {
96
+ /* Specialization to ROR happens in EXTRACT2. */
97
+ tcg_gen_extract2_i64(tcg_rd, tcg_rm, tcg_rn, a->imm);
98
+ } else {
99
+ TCGv_i32 t0 = tcg_temp_new_i32();
100
+
101
+ tcg_gen_extrl_i64_i32(t0, tcg_rm);
102
+ if (a->rm == a->rn) {
103
+ tcg_gen_rotri_i32(t0, t0, a->imm);
104
} else {
105
- TCGv_i32 t0 = tcg_temp_new_i32();
106
-
107
- tcg_gen_extrl_i64_i32(t0, tcg_rm);
108
- if (rm == rn) {
109
- tcg_gen_rotri_i32(t0, t0, imm);
110
- } else {
111
- TCGv_i32 t1 = tcg_temp_new_i32();
112
- tcg_gen_extrl_i64_i32(t1, tcg_rn);
113
- tcg_gen_extract2_i32(t0, t0, t1, imm);
114
- }
115
- tcg_gen_extu_i32_i64(tcg_rd, t0);
116
+ TCGv_i32 t1 = tcg_temp_new_i32();
117
+ tcg_gen_extrl_i64_i32(t1, tcg_rn);
118
+ tcg_gen_extract2_i32(t0, t0, t1, a->imm);
119
}
120
+ tcg_gen_extu_i32_i64(tcg_rd, t0);
27
}
121
}
28
}
122
}
123
-}
124
-
125
-/* Data processing - immediate */
126
-static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
127
-{
128
- switch (extract32(insn, 23, 6)) {
129
- case 0x27: /* Extract */
130
- disas_extract(s, insn);
131
- break;
132
- default:
133
- unallocated_encoding(s);
134
- break;
135
- }
136
+ return true;
137
}
138
139
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
140
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
141
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
142
{
143
switch (extract32(insn, 25, 4)) {
144
- case 0x8: case 0x9: /* Data processing - immediate */
145
- disas_data_proc_imm(s, insn);
146
- break;
147
case 0xa: case 0xb: /* Branch, exception generation and system insns */
148
disas_b_exc_sys(s, insn);
149
break;
29
--
150
--
30
2.20.1
151
2.34.1
31
32
1
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
the general-purpose registers and APSR. Implement this.

The encoding is a subset of the LDMIA T2 encoding, using what would
be Rn=0b1111 (which UNDEFs for LDMIA).

Convert the unconditional branch immediate insns B and BL to
decodetree.
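The branch patterns added below scale the immediate with a decodetree
field function; as an indicative sketch only (the helper is assumed
from the "!function=times_4" modifier, not quoted from the tree):

  /* "%imm26 0:s26 !function=times_4" extracts insn[25:0] as a signed
   * value and passes it through a helper of this shape, so trans_B()
   * and trans_BL() already see a byte offset, matching the
   * architectural SignExtend(imm26:'00').
   */
  static int times_4(DisasContext *s, int x)
  {
      return x * 4;
  }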
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
6
Message-id: 20230512144106.3608981-14-peter.maydell@linaro.org
10
---
7
---
11
target/arm/t32.decode | 6 +++++-
8
target/arm/tcg/a64.decode | 9 +++++++++
12
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
9
target/arm/tcg/translate-a64.c | 31 +++++++++++--------------------
13
2 files changed, 43 insertions(+), 1 deletion(-)
10
2 files changed, 20 insertions(+), 20 deletions(-)
14
11
15
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/t32.decode
14
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/t32.decode
15
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
16
@@ -XXX,XX +XXX,XX @@
20
17
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
18
&ri rd imm
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
19
&rri_sf rd rn imm sf
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
20
+&i imm
24
+{
21
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
22
26
+ CLRM 1110 1000 1001 1111 list:16
23
### Data Processing - Immediate
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
24
@@ -XXX,XX +XXX,XX @@ UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
28
+}
25
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
26
EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
30
27
EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
31
&rfe !extern rn w pu
28
+
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
29
+# Branches
30
+
31
+%imm26 0:s26 !function=times_4
32
+@branch . ..... .......................... &i imm=%imm26
33
+
34
+B 0 00101 .......................... @branch
35
+BL 1 00101 .......................... @branch
36
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
38
--- a/target/arm/tcg/translate-a64.c
35
+++ b/target/arm/translate.c
39
+++ b/target/arm/tcg/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
40
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
37
return do_ldm(s, a, 1);
41
* match up with those in the manual.
38
}
42
*/
39
43
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
44
-/* Unconditional branch (immediate)
41
+{
45
- * 31 30 26 25 0
42
+ int i;
46
- * +----+-----------+-------------------------------------+
43
+ TCGv_i32 zero;
47
- * | op | 0 0 1 0 1 | imm26 |
44
+
48
- * +----+-----------+-------------------------------------+
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
49
- */
46
+ return false;
50
-static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
47
+ }
51
+static bool trans_B(DisasContext *s, arg_i *a)
48
+
52
{
49
+ if (extract32(a->list, 13, 1)) {
53
- int64_t diff = sextract32(insn, 0, 26) * 4;
50
+ return false;
54
-
51
+ }
55
- if (insn & (1U << 31)) {
52
+
56
- /* BL Branch with link */
53
+ if (!a->list) {
57
- gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
54
+ /* UNPREDICTABLE; we choose to UNDEF */
58
- }
55
+ return false;
59
-
56
+ }
60
- /* B Branch / BL Branch with link */
57
+
61
reset_btype(s);
58
+ zero = tcg_const_i32(0);
62
- gen_goto_tb(s, 0, diff);
59
+ for (i = 0; i < 15; i++) {
63
+ gen_goto_tb(s, 0, a->imm);
60
+ if (extract32(a->list, i, 1)) {
61
+ /* Clear R[i] */
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
63
+ }
64
+ }
65
+ if (extract32(a->list, 15, 1)) {
66
+ /*
67
+ * Clear APSR (by calling the MSR helper with the same argument
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
75
+ return true;
64
+ return true;
76
+}
65
+}
77
+
66
+
78
/*
67
+static bool trans_BL(DisasContext *s, arg_i *a)
79
* Branch, branch with link
68
+{
80
*/
69
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
70
+ reset_btype(s);
71
+ gen_goto_tb(s, 0, a->imm);
72
+ return true;
73
}
74
75
/* Compare and branch (immediate)
76
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
77
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
78
{
79
switch (extract32(insn, 25, 7)) {
80
- case 0x0a: case 0x0b:
81
- case 0x4a: case 0x4b: /* Unconditional branch (immediate) */
82
- disas_uncond_b_imm(s, insn);
83
- break;
84
case 0x1a: case 0x5a: /* Compare & branch (immediate) */
85
disas_comp_b_imm(s, insn);
86
break;
81
--
87
--
82
2.20.1
88
2.34.1
83
84
1
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
This bit is not banked, and is always RAZ/WI to Non-secure code.
Adjust the code for handling CCR reads and writes to handle this.

Convert the compare-and-branch-immediate insns CBZ and CBNZ
to decodetree.
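For background on the CCR.TRD handling described above: the FIELD()
macro from hw/registerfields.h generates R_V7M_CCR_TRD_* constants, so
the bit can be tested with FIELD_EX32(). An indicative fragment, not
taken from the patch:

  static bool ccr_trd_is_set(uint32_t ccr)
  {
      /* FIELD_EX32() extracts the 1-bit field defined at bit 20 */
      return FIELD_EX32(ccr, V7M_CCR, TRD);
  }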
5
3
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
6
Message-id: 20230512144106.3608981-15-peter.maydell@linaro.org
9
---
7
---
10
target/arm/cpu.h | 2 ++
8
target/arm/tcg/a64.decode | 5 +++++
11
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
9
target/arm/tcg/translate-a64.c | 26 ++++++--------------------
12
2 files changed, 20 insertions(+), 8 deletions(-)
10
2 files changed, 11 insertions(+), 20 deletions(-)
13
11
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
16
@@ -XXX,XX +XXX,XX @@ EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
19
FIELD(V7M_CCR, DC, 16, 1)
17
20
FIELD(V7M_CCR, IC, 17, 1)
18
B 0 00101 .......................... @branch
21
FIELD(V7M_CCR, BP, 18, 1)
19
BL 1 00101 .......................... @branch
22
+FIELD(V7M_CCR, LOB, 19, 1)
20
+
23
+FIELD(V7M_CCR, TRD, 20, 1)
21
+%imm19 5:s19 !function=times_4
24
22
+&cbz rt imm sf nz
25
/* V7M SCR bits */
23
+
26
FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
24
+CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
27
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
25
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/intc/armv7m_nvic.c
27
--- a/target/arm/tcg/translate-a64.c
30
+++ b/hw/intc/armv7m_nvic.c
28
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
29
@@ -XXX,XX +XXX,XX @@ static bool trans_BL(DisasContext *s, arg_i *a)
32
}
30
return true;
33
return cpu->env.v7m.scr[attrs.secure];
31
}
34
case 0xd14: /* Configuration Control. */
32
35
- /* The BFHFNMIGN bit is the only non-banked bit; we
33
-/* Compare and branch (immediate)
36
- * keep it in the non-secure copy of the register.
34
- * 31 30 25 24 23 5 4 0
37
+ /*
35
- * +----+-------------+----+---------------------+--------+
38
+ * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
36
- * | sf | 0 1 1 0 1 0 | op | imm19 | Rt |
39
+ * and TRD (stored in the S copy of the register)
37
- * +----+-------------+----+---------------------+--------+
40
*/
38
- */
41
val = cpu->env.v7m.ccr[attrs.secure];
39
-static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
42
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
40
+
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
41
+static bool trans_CBZ(DisasContext *s, arg_cbz *a)
44
cpu->env.v7m.scr[attrs.secure] = value;
42
{
43
- unsigned int sf, op, rt;
44
- int64_t diff;
45
DisasLabel match;
46
TCGv_i64 tcg_cmp;
47
48
- sf = extract32(insn, 31, 1);
49
- op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
50
- rt = extract32(insn, 0, 5);
51
- diff = sextract32(insn, 5, 19) * 4;
52
-
53
- tcg_cmp = read_cpu_reg(s, rt, sf);
54
+ tcg_cmp = read_cpu_reg(s, a->rt, a->sf);
55
reset_btype(s);
56
57
match = gen_disas_label(s);
58
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
59
+ tcg_gen_brcondi_i64(a->nz ? TCG_COND_NE : TCG_COND_EQ,
60
tcg_cmp, 0, match.label);
61
gen_goto_tb(s, 0, 4);
62
set_disas_label(s, match);
63
- gen_goto_tb(s, 1, diff);
64
+ gen_goto_tb(s, 1, a->imm);
65
+ return true;
66
}
67
68
/* Test and branch (immediate)
69
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
70
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
71
{
72
switch (extract32(insn, 25, 7)) {
73
- case 0x1a: case 0x5a: /* Compare & branch (immediate) */
74
- disas_comp_b_imm(s, insn);
75
- break;
76
case 0x1b: case 0x5b: /* Test & branch (immediate) */
77
disas_test_b_imm(s, insn);
45
break;
78
break;
46
case 0xd14: /* Configuration Control. */
47
+ {
48
+ uint32_t mask;
49
+
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
51
goto bad_offset;
52
}
53
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
56
- R_V7M_CCR_BFHFNMIGN_MASK |
57
- R_V7M_CCR_DIV_0_TRP_MASK |
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
59
- R_V7M_CCR_USERSETMPEND_MASK |
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
65
+ R_V7M_CCR_USERSETMPEND_MASK |
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
68
+ /* TRD is always RAZ/WI from NS */
69
+ mask |= R_V7M_CCR_TRD_MASK;
70
+ }
71
+ value &= mask;
72
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
77
cpu->env.v7m.ccr[attrs.secure] = value;
78
break;
79
+ }
80
case 0xd24: /* System Handler Control and State (SHCSR) */
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
82
goto bad_offset;
83
--
79
--
84
2.20.1
80
2.34.1
85
86
1
For v8.1M the architecture mandates that CPUs must provide at
least the "minimal RAS implementation" from the Reliability,
Availability and Serviceability extension. This consists of:
* an ESB instruction which is a NOP
-- since it is in the HINT space we need only add a comment
* an RFSR register which will RAZ/WI
* a RAZ/WI AIRCR.IESB bit
-- the code which handles writes to AIRCR does not allow setting
of RES0 bits, so we already treat this as RAZ/WI; add a comment
noting that this is deliberate
* minimal implementation of the RAS register block at 0xe0005000
-- this will be in a subsequent commit
* setting the ID_PFR0.RAS field to 0b0010
-- we will do this when we add the Cortex-M55 CPU model

Convert the test-and-branch-immediate insns TBZ and TBNZ
to decodetree.
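The TBZ/TBNZ bit number is assembled from two separate fields of the
instruction; decodetree concatenates the sub-fields of a composite
field most significant first, so "%imm31_19 31:1 19:5" corresponds to
the calculation the legacy decoder used. Indicative fragment only:

  static unsigned tbz_bitpos(uint32_t insn)
  {
      return (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
  }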
15
3
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
6
Message-id: 20230512144106.3608981-16-peter.maydell@linaro.org
19
---
7
---
20
target/arm/cpu.h | 14 ++++++++++++++
8
target/arm/tcg/a64.decode | 6 ++++++
21
target/arm/t32.decode | 4 ++++
9
target/arm/tcg/translate-a64.c | 25 +++++--------------------
22
hw/intc/armv7m_nvic.c | 13 +++++++++++++
10
2 files changed, 11 insertions(+), 20 deletions(-)
23
3 files changed, 31 insertions(+)
24
11
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
26
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
28
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
16
@@ -XXX,XX +XXX,XX @@ BL 1 00101 .......................... @branch
30
FIELD(ID_MMFR4, CCIDX, 24, 4)
17
&cbz rt imm sf nz
31
FIELD(ID_MMFR4, EVT, 28, 4)
18
32
19
CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
33
+FIELD(ID_PFR0, STATE0, 0, 4)
34
+FIELD(ID_PFR0, STATE1, 4, 4)
35
+FIELD(ID_PFR0, STATE2, 8, 4)
36
+FIELD(ID_PFR0, STATE3, 12, 4)
37
+FIELD(ID_PFR0, CSV2, 16, 4)
38
+FIELD(ID_PFR0, AMU, 20, 4)
39
+FIELD(ID_PFR0, DIT, 24, 4)
40
+FIELD(ID_PFR0, RAS, 28, 4)
41
+
20
+
42
FIELD(ID_PFR1, PROGMOD, 0, 4)
21
+%imm14 5:s14 !function=times_4
43
FIELD(ID_PFR1, SECURITY, 4, 4)
22
+%imm31_19 31:1 19:5
44
FIELD(ID_PFR1, MPROGMOD, 8, 4)
23
+&tbz rt imm nz bitpos
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
24
+
46
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
25
+TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_cbz *a)
31
return true;
47
}
32
}
48
33
49
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
34
-/* Test and branch (immediate)
50
+{
35
- * 31 30 25 24 23 19 18 5 4 0
51
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
36
- * +----+-------------+----+-------+-------------+------+
52
+}
37
- * | b5 | 0 1 1 0 1 1 | op | b40 | imm14 | Rt |
53
+
38
- * +----+-------------+----+-------+-------------+------+
54
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
39
- */
40
-static void disas_test_b_imm(DisasContext *s, uint32_t insn)
41
+static bool trans_TBZ(DisasContext *s, arg_tbz *a)
55
{
42
{
56
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
43
- unsigned int bit_pos, op, rt;
57
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
44
- int64_t diff;
58
index XXXXXXX..XXXXXXX 100644
45
DisasLabel match;
59
--- a/target/arm/t32.decode
46
TCGv_i64 tcg_cmp;
60
+++ b/target/arm/t32.decode
47
61
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
48
- bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
62
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
49
- op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
63
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
50
- diff = sextract32(insn, 5, 14) * 4;
64
51
- rt = extract32(insn, 0, 5);
65
+ # For M-profile minimal-RAS ESB can be a NOP, which is the
52
-
66
+ # default behaviour since it is in the hint space.
53
tcg_cmp = tcg_temp_new_i64();
67
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
54
- tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
68
+
55
+ tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, a->rt), 1ULL << a->bitpos);
69
# The canonical nop ends in 0000 0000, but the whole rest
56
70
# of the space is "reserved hint, behaves as nop".
57
reset_btype(s);
71
NOP 1111 0011 1010 1111 1000 0000 ---- ----
58
72
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
59
match = gen_disas_label(s);
73
index XXXXXXX..XXXXXXX 100644
60
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
74
--- a/hw/intc/armv7m_nvic.c
61
+ tcg_gen_brcondi_i64(a->nz ? TCG_COND_NE : TCG_COND_EQ,
75
+++ b/hw/intc/armv7m_nvic.c
62
tcg_cmp, 0, match.label);
76
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
63
gen_goto_tb(s, 0, 4);
77
return 0;
64
set_disas_label(s, match);
78
}
65
- gen_goto_tb(s, 1, diff);
79
return cpu->env.v7m.sfar;
66
+ gen_goto_tb(s, 1, a->imm);
80
+ case 0xf04: /* RFSR */
67
+ return true;
81
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
68
}
82
+ goto bad_offset;
69
83
+ }
70
/* Conditional branch (immediate)
84
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
71
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
85
+ return 0;
72
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
86
case 0xf34: /* FPCCR */
73
{
87
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
74
switch (extract32(insn, 25, 7)) {
88
return 0;
75
- case 0x1b: case 0x5b: /* Test & branch (immediate) */
89
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
- disas_test_b_imm(s, insn);
90
R_V7M_AIRCR_PRIGROUP_SHIFT,
77
- break;
91
R_V7M_AIRCR_PRIGROUP_LENGTH);
78
case 0x2a: /* Conditional branch (immediate) */
92
}
79
disas_cond_b_imm(s, insn);
93
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
94
if (attrs.secure) {
95
/* These bits are only writable by secure */
96
cpu->env.v7m.aircr = value &
97
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
98
}
99
break;
80
break;
100
}
101
+ case 0xf04: /* RFSR */
102
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
103
+ goto bad_offset;
104
+ }
105
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
106
+ break;
107
case 0xf34: /* FPCCR */
108
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
109
/* Not all bits here are banked. */
110
--
81
--
111
2.20.1
82
2.34.1
112
113
diff view generated by jsdifflib
1
In v8.1M a new exception return check is added which may cause a NOCP
1
Convert the immediate conditional branch insn B.cond to
2
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
2
decodetree.
3
we must check whether access to CP10 from the Security state of the
4
returning exception is disabled; if it is then we must take a fault.
5
6
(Note that for our implementation CPPWR is always RAZ/WI and so can
7
never cause CP10 accesses to fail.)
8
9
The other v8.1M change to this register-clearing code is that if MVE
10
is implemented VPR must also be cleared, so add a TODO comment to
11
that effect.
12
3
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
6
Message-id: 20230512144106.3608981-17-peter.maydell@linaro.org
16
---
7
---
17
target/arm/m_helper.c | 22 +++++++++++++++++++++-
8
target/arm/tcg/a64.decode | 2 ++
18
1 file changed, 21 insertions(+), 1 deletion(-)
9
target/arm/tcg/translate-a64.c | 30 ++++++------------------------
10
2 files changed, 8 insertions(+), 24 deletions(-)
19
11
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/m_helper.c
14
--- a/target/arm/tcg/a64.decode
23
+++ b/target/arm/m_helper.c
15
+++ b/target/arm/tcg/a64.decode
24
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
16
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
25
v7m_exception_taken(cpu, excret, true, false);
17
&tbz rt imm nz bitpos
26
return;
18
27
} else {
19
TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
28
- /* Clear s0..s15 and FPSCR */
20
+
29
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
21
+B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
30
+ /* v8.1M adds this NOCP check */
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
31
+ bool nsacr_pass = exc_secure ||
23
index XXXXXXX..XXXXXXX 100644
32
+ extract32(env->v7m.nsacr, 10, 1);
24
--- a/target/arm/tcg/translate-a64.c
33
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
25
+++ b/target/arm/tcg/translate-a64.c
34
+ if (!nsacr_pass) {
26
@@ -XXX,XX +XXX,XX @@ static bool trans_TBZ(DisasContext *s, arg_tbz *a)
35
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
27
return true;
36
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
28
}
37
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
29
38
+ "stackframe: NSACR prevents clearing FPU registers\n");
30
-/* Conditional branch (immediate)
39
+ v7m_exception_taken(cpu, excret, true, false);
31
- * 31 25 24 23 5 4 3 0
40
+ } else if (!cpacr_pass) {
32
- * +---------------+----+---------------------+----+------+
41
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
33
- * | 0 1 0 1 0 1 0 | o1 | imm19 | o0 | cond |
42
+ exc_secure);
34
- * +---------------+----+---------------------+----+------+
43
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
35
- */
44
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
36
-static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
45
+ "stackframe: CPACR prevents clearing FPU registers\n");
37
+static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
46
+ v7m_exception_taken(cpu, excret, true, false);
38
{
47
+ }
39
- unsigned int cond;
48
+ }
40
- int64_t diff;
49
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
41
-
50
int i;
42
- if ((insn & (1 << 4)) || (insn & (1 << 24))) {
51
43
- unallocated_encoding(s);
52
for (i = 0; i < 16; i += 2) {
44
- return;
45
- }
46
- diff = sextract32(insn, 5, 19) * 4;
47
- cond = extract32(insn, 0, 4);
48
-
49
reset_btype(s);
50
- if (cond < 0x0e) {
51
+ if (a->cond < 0x0e) {
52
/* genuinely conditional branches */
53
DisasLabel match = gen_disas_label(s);
54
- arm_gen_test_cc(cond, match.label);
55
+ arm_gen_test_cc(a->cond, match.label);
56
gen_goto_tb(s, 0, 4);
57
set_disas_label(s, match);
58
- gen_goto_tb(s, 1, diff);
59
+ gen_goto_tb(s, 1, a->imm);
60
} else {
61
/* 0xe and 0xf are both "always" conditions */
62
- gen_goto_tb(s, 0, diff);
63
+ gen_goto_tb(s, 0, a->imm);
64
}
65
+ return true;
66
}
67
68
/* HINT instruction group, including various allocated HINTs */
69
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
70
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
71
{
72
switch (extract32(insn, 25, 7)) {
73
- case 0x2a: /* Conditional branch (immediate) */
74
- disas_cond_b_imm(s, insn);
75
- break;
76
case 0x6a: /* Exception generation / System */
77
if (insn & (1 << 24)) {
78
if (extract32(insn, 22, 2) == 0) {
53
--
79
--
54
2.20.1
80
2.34.1
55
56
1
Implement the new-in-v8.1M VLDR/VSTR variants which directly
1
Convert the simple (non-pointer-auth) BR, BLR and RET insns
2
read or write FP system registers to memory.
2
to decodetree.
3
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
6
Message-id: 20230512144106.3608981-18-peter.maydell@linaro.org
7
---
7
---
8
target/arm/vfp.decode | 14 ++++++
8
target/arm/tcg/a64.decode | 5 ++++
9
target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++++++
9
target/arm/tcg/translate-a64.c | 55 ++++++++++++++++++++++++++++++----
10
2 files changed, 105 insertions(+)
10
2 files changed, 54 insertions(+), 6 deletions(-)
11
11
12
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/vfp.decode
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/vfp.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
16
@@ -XXX,XX +XXX,XX @@
17
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
17
# This file is processed by scripts/decodetree.py
18
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
18
#
19
19
20
+# M-profile VLDR/VSTR to sysreg
20
+&r rn
21
+%vldr_sysreg 22:1 13:3
21
&ri rd imm
22
+%imm7_0x4 0:7 !function=times_4
22
&rri_sf rd rn imm sf
23
&i imm
24
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
25
TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
26
27
B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
23
+
28
+
24
+&vldr_sysreg rn reg imm a w p
29
+BR 1101011 0000 11111 000000 rn:5 00000 &r
25
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
30
+BLR 1101011 0001 11111 000000 rn:5 00000 &r
26
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
31
+RET 1101011 0010 11111 000000 rn:5 00000 &r
27
+
32
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
29
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
30
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
31
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
32
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
33
+
34
# We split the load/store multiple up into two patterns to avoid
35
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
36
# grouping:
37
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
38
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-vfp.c.inc
34
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/translate-vfp.c.inc
35
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
42
return true;
37
return true;
43
}
38
}
44
39
45
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
40
+static void set_btype_for_br(DisasContext *s, int rn)
46
+{
41
+{
47
+ arg_vldr_sysreg *a = opaque;
42
+ if (dc_isar_feature(aa64_bti, s)) {
48
+ uint32_t offset = a->imm;
43
+ /* BR to {x16,x17} or !guard -> 1, else 3. */
49
+ TCGv_i32 addr;
44
+ set_btype(s, rn == 16 || rn == 17 || !s->guarded_page ? 1 : 3);
50
+
51
+ if (!a->a) {
52
+ offset = - offset;
53
+ }
54
+
55
+ addr = load_reg(s, a->rn);
56
+ if (a->p) {
57
+ tcg_gen_addi_i32(addr, addr, offset);
58
+ }
59
+
60
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
61
+ gen_helper_v8m_stackcheck(cpu_env, addr);
62
+ }
63
+
64
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
65
+ MO_UL | MO_ALIGN | s->be_data);
66
+ tcg_temp_free_i32(value);
67
+
68
+ if (a->w) {
69
+ /* writeback */
70
+ if (!a->p) {
71
+ tcg_gen_addi_i32(addr, addr, offset);
72
+ }
73
+ store_reg(s, a->rn, addr);
74
+ } else {
75
+ tcg_temp_free_i32(addr);
76
+ }
45
+ }
77
+}
46
+}
78
+
47
+
79
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
48
+static void set_btype_for_blr(DisasContext *s)
80
+{
49
+{
81
+ arg_vldr_sysreg *a = opaque;
50
+ if (dc_isar_feature(aa64_bti, s)) {
82
+ uint32_t offset = a->imm;
51
+ /* BLR sets BTYPE to 2, regardless of source guarded page. */
83
+ TCGv_i32 addr;
52
+ set_btype(s, 2);
84
+ TCGv_i32 value = tcg_temp_new_i32();
85
+
86
+ if (!a->a) {
87
+ offset = - offset;
88
+ }
53
+ }
89
+
90
+ addr = load_reg(s, a->rn);
91
+ if (a->p) {
92
+ tcg_gen_addi_i32(addr, addr, offset);
93
+ }
94
+
95
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
96
+ gen_helper_v8m_stackcheck(cpu_env, addr);
97
+ }
98
+
99
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
100
+ MO_UL | MO_ALIGN | s->be_data);
101
+
102
+ if (a->w) {
103
+ /* writeback */
104
+ if (!a->p) {
105
+ tcg_gen_addi_i32(addr, addr, offset);
106
+ }
107
+ store_reg(s, a->rn, addr);
108
+ } else {
109
+ tcg_temp_free_i32(addr);
110
+ }
111
+ return value;
112
+}
54
+}
113
+
55
+
114
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
56
+static bool trans_BR(DisasContext *s, arg_r *a)
115
+{
57
+{
116
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
58
+ gen_a64_set_pc(s, cpu_reg(s, a->rn));
117
+ return false;
59
+ set_btype_for_br(s, a->rn);
118
+ }
60
+ s->base.is_jmp = DISAS_JUMP;
119
+ if (a->rn == 15) {
61
+ return true;
120
+ return false;
121
+ }
122
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
123
+}
62
+}
124
+
63
+
125
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
64
+static bool trans_BLR(DisasContext *s, arg_r *a)
126
+{
65
+{
127
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
66
+ TCGv_i64 dst = cpu_reg(s, a->rn);
128
+ return false;
67
+ TCGv_i64 lr = cpu_reg(s, 30);
68
+ if (dst == lr) {
69
+ TCGv_i64 tmp = tcg_temp_new_i64();
70
+ tcg_gen_mov_i64(tmp, dst);
71
+ dst = tmp;
129
+ }
72
+ }
130
+ if (a->rn == 15) {
73
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
131
+ return false;
74
+ gen_a64_set_pc(s, dst);
132
+ }
75
+ set_btype_for_blr(s);
133
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
76
+ s->base.is_jmp = DISAS_JUMP;
77
+ return true;
134
+}
78
+}
135
+
79
+
136
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
80
+static bool trans_RET(DisasContext *s, arg_r *a)
137
{
81
+{
138
TCGv_i32 tmp;
82
+ gen_a64_set_pc(s, cpu_reg(s, a->rn));
83
+ s->base.is_jmp = DISAS_JUMP;
84
+ return true;
85
+}
86
+
87
/* HINT instruction group, including various allocated HINTs */
88
static void handle_hint(DisasContext *s, uint32_t insn,
89
unsigned int op1, unsigned int op2, unsigned int crm)
90
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
91
btype_mod = opc;
92
switch (op3) {
93
case 0:
94
- /* BR, BLR, RET */
95
- if (op4 != 0) {
96
- goto do_unallocated;
97
- }
98
- dst = cpu_reg(s, rn);
99
- break;
100
+ /* BR, BLR, RET : handled in decodetree */
101
+ goto do_unallocated;
102
103
case 2:
104
case 3:
139
--
105
--
140
2.20.1
106
2.34.1
141
142
1
Currently M-profile borrows the A-profile code for VMSR and VMRS
1
Convert the single-register pointer-authentication variants of BR,
2
(access to the FP system registers), because all it needs to support
2
BLR, RET to decodetree. (BRAA/BLRAA are in a different branch of
3
is the FPSCR. In v8.1M things become significantly more complicated
3
the legacy decoder and will be dealt with in the next commit.)
4
in two ways:
5
6
* there are several new FP system registers; some have side effects
7
on read, and one (FPCXT_NS) needs to avoid the usual
8
vfp_access_check() and the "only if FPU implemented" check
9
10
* all sysregs are now accessible both by VMRS/VMSR (which
11
reads/writes a general purpose register) and also by VLDR/VSTR
12
(which reads/writes them directly to memory)
13
14
Refactor the structure of how we handle VMSR/VMRS to cope with this:
15
16
* keep the M-profile code entirely separate from the A-profile code
17
18
* abstract out the "read or write the general purpose register" part
19
of the code into a loadfn or storefn function pointer, so we can
20
reuse it for VLDR/VSTR.
21
4
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
7
Message-id: 20230512144106.3608981-19-peter.maydell@linaro.org
25
---
8
---
26
target/arm/cpu.h | 3 +
9
target/arm/tcg/a64.decode | 7 ++
27
target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
10
target/arm/tcg/translate-a64.c | 132 +++++++++++++++++++--------------
28
2 files changed, 171 insertions(+), 14 deletions(-)
11
2 files changed, 84 insertions(+), 55 deletions(-)
29
12
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
31
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.h
15
--- a/target/arm/tcg/a64.decode
33
+++ b/target/arm/cpu.h
16
+++ b/target/arm/tcg/a64.decode
34
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
17
@@ -XXX,XX +XXX,XX @@ B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
35
#define ARM_VFP_FPINST 9
18
BR 1101011 0000 11111 000000 rn:5 00000 &r
36
#define ARM_VFP_FPINST2 10
19
BLR 1101011 0001 11111 000000 rn:5 00000 &r
37
20
RET 1101011 0010 11111 000000 rn:5 00000 &r
38
+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
39
+#define QEMU_VFP_FPSCR_NZCV 0xffff
40
+
21
+
41
/* iwMMXt coprocessor control registers. */
22
+&braz rn m
42
#define ARM_IWMMXT_wCID 0
23
+BRAZ 1101011 0000 11111 00001 m:1 rn:5 11111 &braz # BRAAZ, BRABZ
43
#define ARM_IWMMXT_wCon 1
24
+BLRAZ 1101011 0001 11111 00001 m:1 rn:5 11111 &braz # BLRAAZ, BLRABZ
44
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
25
+
26
+&reta m
27
+RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
28
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
45
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/translate-vfp.c.inc
30
--- a/target/arm/tcg/translate-a64.c
47
+++ b/target/arm/translate-vfp.c.inc
31
+++ b/target/arm/tcg/translate-a64.c
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
32
@@ -XXX,XX +XXX,XX @@ static bool trans_RET(DisasContext *s, arg_r *a)
49
return true;
33
return true;
50
}
34
}
51
35
52
+/*
36
+static TCGv_i64 auth_branch_target(DisasContext *s, TCGv_i64 dst,
53
+ * M-profile provides two different sets of instructions that can
37
+ TCGv_i64 modifier, bool use_key_a)
54
+ * access floating point system registers: VMSR/VMRS (which move
55
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
56
+ * move directly to/from memory). In some cases there are also side
57
+ * effects which must happen after any write to memory (which could
58
+ * cause an exception). So we implement the common logic for the
59
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
60
+ * which take pointers to callback functions which will perform the
61
+ * actual "read/write general purpose register" and "read/write
62
+ * memory" operations.
63
+ */
64
+
65
+/*
66
+ * Emit code to store the sysreg to its final destination; frees the
67
+ * TCG temp 'value' it is passed.
68
+ */
69
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
70
+/*
71
+ * Emit code to load the value to be copied to the sysreg; returns
72
+ * a new TCG temporary
73
+ */
74
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
75
+
76
+/* Common decode/access checks for fp sysreg read/write */
77
+typedef enum FPSysRegCheckResult {
78
+ FPSysRegCheckFailed, /* caller should return false */
79
+ FPSysRegCheckDone, /* caller should return true */
80
+ FPSysRegCheckContinue, /* caller should continue generating code */
81
+} FPSysRegCheckResult;
82
+
83
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
84
+{
38
+{
85
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
39
+ TCGv_i64 truedst;
86
+ return FPSysRegCheckFailed;
40
+ /*
41
+ * Return the branch target for a BRAA/RETA/etc, which is either
42
+ * just the destination dst, or that value with the pauth check
43
+ * done and the code removed from the high bits.
44
+ */
45
+ if (!s->pauth_active) {
46
+ return dst;
87
+ }
47
+ }
88
+
48
+
89
+ switch (regno) {
49
+ truedst = tcg_temp_new_i64();
90
+ case ARM_VFP_FPSCR:
50
+ if (use_key_a) {
91
+ case QEMU_VFP_FPSCR_NZCV:
51
+ gen_helper_autia(truedst, cpu_env, dst, modifier);
92
+ break;
52
+ } else {
93
+ default:
53
+ gen_helper_autib(truedst, cpu_env, dst, modifier);
94
+ return FPSysRegCheckFailed;
54
+ }
55
+ return truedst;
56
+}
57
+
58
+static bool trans_BRAZ(DisasContext *s, arg_braz *a)
59
+{
60
+ TCGv_i64 dst;
61
+
62
+ if (!dc_isar_feature(aa64_pauth, s)) {
63
+ return false;
95
+ }
64
+ }
96
+
65
+
97
+ if (!vfp_access_check(s)) {
66
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), tcg_constant_i64(0), !a->m);
98
+ return FPSysRegCheckDone;
67
+ gen_a64_set_pc(s, dst);
99
+ }
68
+ set_btype_for_br(s, a->rn);
100
+
69
+ s->base.is_jmp = DISAS_JUMP;
101
+ return FPSysRegCheckContinue;
102
+}
103
+
104
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
105
+
106
+ fp_sysreg_loadfn *loadfn,
107
+ void *opaque)
108
+{
109
+ /* Do a write to an M-profile floating point system register */
110
+ TCGv_i32 tmp;
111
+
112
+ switch (fp_sysreg_checks(s, regno)) {
113
+ case FPSysRegCheckFailed:
114
+ return false;
115
+ case FPSysRegCheckDone:
116
+ return true;
117
+ case FPSysRegCheckContinue:
118
+ break;
119
+ }
120
+
121
+ switch (regno) {
122
+ case ARM_VFP_FPSCR:
123
+ tmp = loadfn(s, opaque);
124
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
125
+ tcg_temp_free_i32(tmp);
126
+ gen_lookup_tb(s);
127
+ break;
128
+ default:
129
+ g_assert_not_reached();
130
+ }
131
+ return true;
70
+ return true;
132
+}
71
+}
133
+
72
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
73
+static bool trans_BLRAZ(DisasContext *s, arg_braz *a)
135
+ fp_sysreg_storefn *storefn,
136
+ void *opaque)
137
+{
74
+{
138
+ /* Do a read from an M-profile floating point system register */
75
+ TCGv_i64 dst, lr;
139
+ TCGv_i32 tmp;
140
+
76
+
141
+ switch (fp_sysreg_checks(s, regno)) {
77
+ if (!dc_isar_feature(aa64_pauth, s)) {
142
+ case FPSysRegCheckFailed:
143
+ return false;
78
+ return false;
144
+ case FPSysRegCheckDone:
145
+ return true;
146
+ case FPSysRegCheckContinue:
147
+ break;
148
+ }
79
+ }
149
+
80
+
150
+ switch (regno) {
81
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), tcg_constant_i64(0), !a->m);
151
+ case ARM_VFP_FPSCR:
82
+ lr = cpu_reg(s, 30);
152
+ tmp = tcg_temp_new_i32();
83
+ if (dst == lr) {
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
84
+ TCGv_i64 tmp = tcg_temp_new_i64();
154
+ storefn(s, opaque, tmp);
85
+ tcg_gen_mov_i64(tmp, dst);
155
+ break;
86
+ dst = tmp;
156
+ case QEMU_VFP_FPSCR_NZCV:
157
+ /*
158
+ * Read just NZCV; this is a special case to avoid the
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
160
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
163
+ storefn(s, opaque, tmp);
164
+ break;
165
+ default:
166
+ g_assert_not_reached();
167
+ }
87
+ }
88
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
89
+ gen_a64_set_pc(s, dst);
90
+ set_btype_for_blr(s);
91
+ s->base.is_jmp = DISAS_JUMP;
168
+ return true;
92
+ return true;
169
+}
93
+}
170
+
94
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
95
+static bool trans_RETA(DisasContext *s, arg_reta *a)
172
+{
96
+{
173
+ arg_VMSR_VMRS *a = opaque;
97
+ TCGv_i64 dst;
174
+
98
+
175
+ if (a->rt == 15) {
99
+ dst = auth_branch_target(s, cpu_reg(s, 30), cpu_X[31], !a->m);
176
+ /* Set the 4 flag bits in the CPSR */
100
+ gen_a64_set_pc(s, dst);
177
+ gen_set_nzcv(value);
101
+ s->base.is_jmp = DISAS_JUMP;
178
+ tcg_temp_free_i32(value);
102
+ return true;
179
+ } else {
180
+ store_reg(s, a->rt, value);
181
+ }
182
+}
103
+}
183
+
104
+
184
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
105
/* HINT instruction group, including various allocated HINTs */
185
+{
106
static void handle_hint(DisasContext *s, uint32_t insn,
186
+ arg_VMSR_VMRS *a = opaque;
107
unsigned int op1, unsigned int op2, unsigned int crm)
187
+
108
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
188
+ return load_reg(s, a->rt);
189
+}
190
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
193
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
197
+ * we only care about the top 4 bits of FPSCR there.
198
+ */
199
+ if (a->rt == 15) {
200
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
201
+ a->reg = QEMU_VFP_FPSCR_NZCV;
202
+ } else {
203
+ return false;
204
+ }
205
+ }
206
+
207
+ if (a->l) {
208
+ /* VMRS, move FP system register to gp register */
209
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
210
+ } else {
211
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
214
+}
215
+
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
217
{
218
TCGv_i32 tmp;
219
bool ignore_vfp_enabled = false;
220
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
222
- return false;
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
224
+ return gen_M_VMSR_VMRS(s, a);
225
}
109
}
226
110
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
111
switch (opc) {
228
- /*
112
- case 0: /* BR */
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
113
- case 1: /* BLR */
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
114
- case 2: /* RET */
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
115
- btype_mod = opc;
232
- */
116
- switch (op3) {
233
- if (a->reg != ARM_VFP_FPSCR) {
117
- case 0:
234
- return false;
118
- /* BR, BLR, RET : handled in decodetree */
119
- goto do_unallocated;
120
-
121
- case 2:
122
- case 3:
123
- if (!dc_isar_feature(aa64_pauth, s)) {
124
- goto do_unallocated;
125
- }
126
- if (opc == 2) {
127
- /* RETAA, RETAB */
128
- if (rn != 0x1f || op4 != 0x1f) {
129
- goto do_unallocated;
130
- }
131
- rn = 30;
132
- modifier = cpu_X[31];
133
- } else {
134
- /* BRAAZ, BRABZ, BLRAAZ, BLRABZ */
135
- if (op4 != 0x1f) {
136
- goto do_unallocated;
137
- }
138
- modifier = tcg_constant_i64(0);
139
- }
140
- if (s->pauth_active) {
141
- dst = tcg_temp_new_i64();
142
- if (op3 == 2) {
143
- gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
144
- } else {
145
- gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
146
- }
147
- } else {
148
- dst = cpu_reg(s, rn);
149
- }
150
- break;
151
-
152
- default:
153
- goto do_unallocated;
235
- }
154
- }
236
- if (a->rt == 15 && !a->l) {
155
- /* BLR also needs to load return address */
237
- return false;
156
- if (opc == 1) {
157
- TCGv_i64 lr = cpu_reg(s, 30);
158
- if (dst == lr) {
159
- TCGv_i64 tmp = tcg_temp_new_i64();
160
- tcg_gen_mov_i64(tmp, dst);
161
- dst = tmp;
162
- }
163
- gen_pc_plus_diff(s, lr, curr_insn_len(s));
238
- }
164
- }
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
165
- gen_a64_set_pc(s, dst);
240
+ return false;
166
- break;
241
}
167
+ case 0:
242
168
+ case 1:
243
switch (a->reg) {
169
+ case 2:
170
+ /*
171
+ * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ:
172
+ * handled in decodetree
173
+ */
174
+ goto do_unallocated;
175
176
case 8: /* BRAA */
177
case 9: /* BLRAA */
244
--
178
--
245
2.20.1
179
2.34.1
246
247
1
v8.1M introduces a new TRD flag in the CCR register, which enables
1
Convert the last four BR-with-pointer-auth insns to decodetree.
2
checking for stack frame integrity signatures on SG instructions.
2
The remaining cases in the outer switch in disas_uncond_b_reg()
3
Add the code in the SG insn implementation for the new behaviour.
3
all return early rather than leaving the case statement, so we
4
can delete the now-unused code at the end of that function.
4
5
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
8
Message-id: 20230512144106.3608981-20-peter.maydell@linaro.org
8
---
9
---
9
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
10
target/arm/tcg/a64.decode | 4 ++
10
1 file changed, 86 insertions(+)
11
target/arm/tcg/translate-a64.c | 97 ++++++++++++++--------------------
12
2 files changed, 43 insertions(+), 58 deletions(-)
11
13
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/m_helper.c
16
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/m_helper.c
17
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
18
@@ -XXX,XX +XXX,XX @@ BLRAZ 1101011 0001 11111 00001 m:1 rn:5 11111 &braz # BLRAAZ, BLRABZ
19
20
&reta m
21
RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
22
+
23
+&bra rn rm m
24
+BRA 1101011 1000 11111 00001 m:1 rn:5 rm:5 &bra # BRAA, BRAB
25
+BLRA 1101011 1001 11111 00001 m:1 rn:5 rm:5 &bra # BLRAA, BLRAB
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_RETA(DisasContext *s, arg_reta *a)
17
return true;
31
return true;
18
}
32
}
19
33
20
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
34
+static bool trans_BRA(DisasContext *s, arg_bra *a)
21
+ uint32_t addr, uint32_t *spdata)
22
+{
35
+{
23
+ /*
36
+ TCGv_i64 dst;
24
+ * Read a word of data from the stack for the SG instruction,
25
+ * writing the value into *spdata. If the load succeeds, return
26
+ * true; otherwise pend an appropriate exception and return false.
27
+ * (We can't use data load helpers here that throw an exception
28
+ * because of the context we're called in, which is halfway through
29
+ * arm_v7m_cpu_do_interrupt().)
30
+ */
31
+ CPUState *cs = CPU(cpu);
32
+ CPUARMState *env = &cpu->env;
33
+ MemTxAttrs attrs = {};
34
+ MemTxResult txres;
35
+ target_ulong page_size;
36
+ hwaddr physaddr;
37
+ int prot;
38
+ ARMMMUFaultInfo fi = {};
39
+ ARMCacheAttrs cacheattrs = {};
40
+ uint32_t value;
41
+
37
+
42
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
38
+ if (!dc_isar_feature(aa64_pauth, s)) {
43
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
44
+ /* MPU/SAU lookup failed */
45
+ if (fi.type == ARMFault_QEMU_SFault) {
46
+ qemu_log_mask(CPU_LOG_INT,
47
+ "...SecureFault during stack word read\n");
48
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
49
+ env->v7m.sfar = addr;
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+ } else {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...MemManageFault during stack word read\n");
54
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
55
+ R_V7M_CFSR_MMARVALID_MASK;
56
+ env->v7m.mmfar[M_REG_S] = addr;
57
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
58
+ }
59
+ return false;
39
+ return false;
60
+ }
40
+ }
61
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
41
+ dst = auth_branch_target(s, cpu_reg(s,a->rn), cpu_reg_sp(s, a->rm), !a->m);
62
+ attrs, &txres);
42
+ gen_a64_set_pc(s, dst);
63
+ if (txres != MEMTX_OK) {
43
+ set_btype_for_br(s, a->rn);
64
+ /* BusFault trying to read the data */
44
+ s->base.is_jmp = DISAS_JUMP;
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...BusFault during stack word read\n");
67
+ env->v7m.cfsr[M_REG_NS] |=
68
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
69
+ env->v7m.bfar = addr;
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
71
+ return false;
72
+ }
73
+
74
+ *spdata = value;
75
+ return true;
45
+ return true;
76
+}
46
+}
77
+
47
+
78
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
48
+static bool trans_BLRA(DisasContext *s, arg_bra *a)
49
+{
50
+ TCGv_i64 dst, lr;
51
+
52
+ if (!dc_isar_feature(aa64_pauth, s)) {
53
+ return false;
54
+ }
55
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), cpu_reg_sp(s, a->rm), !a->m);
56
+ lr = cpu_reg(s, 30);
57
+ if (dst == lr) {
58
+ TCGv_i64 tmp = tcg_temp_new_i64();
59
+ tcg_gen_mov_i64(tmp, dst);
60
+ dst = tmp;
61
+ }
62
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
63
+ gen_a64_set_pc(s, dst);
64
+ set_btype_for_blr(s);
65
+ s->base.is_jmp = DISAS_JUMP;
66
+ return true;
67
+}
68
+
69
/* HINT instruction group, including various allocated HINTs */
70
static void handle_hint(DisasContext *s, uint32_t insn,
71
unsigned int op1, unsigned int op2, unsigned int crm)
72
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
73
static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
79
{
74
{
80
/*
75
unsigned int opc, op2, op3, rn, op4;
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
76
- unsigned btype_mod = 2; /* 0: BR, 1: BLR, 2: other */
82
*/
77
TCGv_i64 dst;
83
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
78
TCGv_i64 modifier;
84
", executing it\n", env->regs[15]);
79
85
+
80
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
86
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
81
case 0:
87
+ !arm_v7m_is_handler_mode(env)) {
82
case 1:
88
+ /*
83
case 2:
89
+ * v8.1M exception stack frame integrity check. Note that we
84
+ case 8:
90
+ * must perform the memory access even if CCR_S.TRD is zero
85
+ case 9:
91
+ * and we aren't going to check what the data loaded is.
86
/*
92
+ */
87
- * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ:
93
+ uint32_t spdata, sp;
88
- * handled in decodetree
94
+
89
+ * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ,
95
+ /*
90
+ * BRAA, BLRAA: handled in decodetree
96
+ * We know we are currently NS, so the S stack pointers must be
91
*/
97
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
92
goto do_unallocated;
98
+ */
93
99
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
94
- case 8: /* BRAA */
100
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
95
- case 9: /* BLRAA */
101
+ /* Stack access failed and an exception has been pended */
96
- if (!dc_isar_feature(aa64_pauth, s)) {
102
+ return false;
97
- goto do_unallocated;
103
+ }
98
- }
104
+
99
- if ((op3 & ~1) != 2) {
105
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
100
- goto do_unallocated;
106
+ if (((spdata & ~1) == 0xfefa125a) ||
101
- }
107
+ !(env->v7m.control[M_REG_S] & 1)) {
102
- btype_mod = opc & 1;
108
+ goto gen_invep;
103
- if (s->pauth_active) {
109
+ }
104
- dst = tcg_temp_new_i64();
110
+ }
105
- modifier = cpu_reg_sp(s, op4);
111
+ }
106
- if (op3 == 2) {
112
+
107
- gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
113
env->regs[14] &= ~1;
108
- } else {
114
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
109
- gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
115
switch_v7m_security_state(env, true);
110
- }
111
- } else {
112
- dst = cpu_reg(s, rn);
113
- }
114
- /* BLRAA also needs to load return address */
115
- if (opc == 9) {
116
- TCGv_i64 lr = cpu_reg(s, 30);
117
- if (dst == lr) {
118
- TCGv_i64 tmp = tcg_temp_new_i64();
119
- tcg_gen_mov_i64(tmp, dst);
120
- dst = tmp;
121
- }
122
- gen_pc_plus_diff(s, lr, curr_insn_len(s));
123
- }
124
- gen_a64_set_pc(s, dst);
125
- break;
126
-
127
case 4: /* ERET */
128
if (s->current_el == 0) {
129
goto do_unallocated;
130
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
131
unallocated_encoding(s);
132
return;
133
}
134
-
135
- switch (btype_mod) {
136
- case 0: /* BR */
137
- if (dc_isar_feature(aa64_bti, s)) {
138
- /* BR to {x16,x17} or !guard -> 1, else 3. */
139
- set_btype(s, rn == 16 || rn == 17 || !s->guarded_page ? 1 : 3);
140
- }
141
- break;
142
-
143
- case 1: /* BLR */
144
- if (dc_isar_feature(aa64_bti, s)) {
145
- /* BLR sets BTYPE to 2, regardless of source guarded page. */
146
- set_btype(s, 2);
147
- }
148
- break;
149
-
150
- default: /* RET or none of the above. */
151
- /* BTYPE will be set to 0 by normal end-of-insn processing. */
152
- break;
153
- }
154
-
155
- s->base.is_jmp = DISAS_JUMP;
156
}
157
158
/* Branches, exception generating and system instructions */
116
--
159
--
117
2.20.1
160
2.34.1
118
119
diff view generated by jsdifflib
1
Implement the v8.1M VSCCLRM insn, which zeros floating point
1
Convert the exception-return insns ERET, ERETA and ERETB to
2
registers if there is an active floating point context.
2
decodetree. These were the last insns left in the legacy
3
This requires support in write_neon_element64() for the MO_32
3
decoder function disas_uncond_b_reg(), which allows us to
4
element size, so add it.
4
remove it.
5
5
6
Because we want to use arm_gen_condlabel(), we need to move
6
The old decoder explicitly decoded the DRPS instruction,
7
the definition of that function up in translate.c so it is
7
only in order to call unallocated_encoding() on it, exactly
8
before the #include of translate-vfp.c.inc.
8
as would have happened if it hadn't decoded it. This is
9
because this insn always UNDEFs unless the CPU is in
10
halting-debug state, which we don't emulate. So we list
11
the pattern in a comment in a64.decode, but don't actively
12
decode it.
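
(Purely as an illustration of the decodetree notation used in this series,
and not part of the patch: the ERETA pattern added below slices the 32-bit
instruction word into fixed bits and named fields roughly as follows.)

    # ERETA  1101011 0100 11111 00001 m:1 11111 11111  &reta  # ERETAA, ERETAB
    #
    # bits:  [31:25] [24:21] [20:16] [15:11] [10] [9:5] [4:0]
    #        1101011  0100    11111   00001   m   11111 11111
    #
    # 'm:1' extracts the single key-select bit (0 selects ERETAA, 1 ERETAB)
    # and '&reta' packs it into the arg_reta structure that the generated
    # decoder hands to trans_ERETA().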
9
13
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
16
Message-id: 20230512144106.3608981-21-peter.maydell@linaro.org
13
---
17
---
14
target/arm/cpu.h | 9 ++++
18
target/arm/tcg/a64.decode | 8 ++
15
target/arm/m-nocp.decode | 8 +++-
19
target/arm/tcg/translate-a64.c | 163 +++++++++++----------------------
16
target/arm/translate.c | 21 +++++----
20
2 files changed, 63 insertions(+), 108 deletions(-)
17
target/arm/translate-vfp.c.inc | 84 ++++++++++++++++++++++++++++++++++
18
4 files changed, 111 insertions(+), 11 deletions(-)
19
21
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
21
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
24
--- a/target/arm/tcg/a64.decode
23
+++ b/target/arm/cpu.h
25
+++ b/target/arm/tcg/a64.decode
24
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
26
@@ -XXX,XX +XXX,XX @@ RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
25
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
27
&bra rn rm m
28
BRA 1101011 1000 11111 00001 m:1 rn:5 rm:5 &bra # BRAA, BRAB
29
BLRA 1101011 1001 11111 00001 m:1 rn:5 rm:5 &bra # BLRAA, BLRAB
30
+
31
+ERET 1101011 0100 11111 000000 11111 00000
32
+ERETA 1101011 0100 11111 00001 m:1 11111 11111 &reta # ERETAA, ERETAB
33
+
34
+# We don't need to decode DRPS because it always UNDEFs except when
35
+# the processor is in halting debug state (which we don't implement).
36
+# The pattern is listed here as documentation.
37
+# DRPS 1101011 0101 11111 000000 11111 00000
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static bool trans_BLRA(DisasContext *s, arg_bra *a)
43
return true;
26
}
44
}
27
45
28
+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
46
+static bool trans_ERET(DisasContext *s, arg_ERET *a)
29
+{
47
+{
30
+ /*
48
+ TCGv_i64 dst;
31
+ * Return true if M-profile state handling insns
49
+
32
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
50
+ if (s->current_el == 0) {
33
+ */
51
+ return false;
34
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
52
+ }
53
+ if (s->fgt_eret) {
54
+ gen_exception_insn_el(s, 0, EXCP_UDEF, 0, 2);
55
+ return true;
56
+ }
57
+ dst = tcg_temp_new_i64();
58
+ tcg_gen_ld_i64(dst, cpu_env,
59
+ offsetof(CPUARMState, elr_el[s->current_el]));
60
+
61
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
62
+ gen_io_start();
63
+ }
64
+
65
+ gen_helper_exception_return(cpu_env, dst);
66
+ /* Must exit loop to check un-masked IRQs */
67
+ s->base.is_jmp = DISAS_EXIT;
68
+ return true;
35
+}
69
+}
36
+
70
+
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
71
+static bool trans_ERETA(DisasContext *s, arg_reta *a)
38
{
72
+{
39
/* Sadly this is encoded differently for A-profile and M-profile */
73
+ TCGv_i64 dst;
40
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
74
+
41
index XXXXXXX..XXXXXXX 100644
75
+ if (!dc_isar_feature(aa64_pauth, s)) {
42
--- a/target/arm/m-nocp.decode
76
+ return false;
43
+++ b/target/arm/m-nocp.decode
77
+ }
44
@@ -XXX,XX +XXX,XX @@
78
+ if (s->current_el == 0) {
45
# If the coprocessor is not present or disabled then we will generate
79
+ return false;
46
# the NOCP exception; otherwise we let the insn through to the main decode.
80
+ }
47
81
+ /* The FGT trap takes precedence over an auth trap. */
48
+%vd_dp 22:1 12:4
82
+ if (s->fgt_eret) {
49
+%vd_sp 12:4 22:1
83
+ gen_exception_insn_el(s, 0, EXCP_UDEF, a->m ? 3 : 2, 2);
50
+
84
+ return true;
51
&nocp cp
85
+ }
52
86
+ dst = tcg_temp_new_i64();
53
{
87
+ tcg_gen_ld_i64(dst, cpu_env,
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
88
+ offsetof(CPUARMState, elr_el[s->current_el]));
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
89
+
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
90
+ dst = auth_branch_target(s, dst, cpu_X[31], !a->m);
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
91
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
58
+ # VSCCLRM (new in v8.1M) is similar:
92
+ gen_io_start();
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
93
+ }
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
94
+
61
95
+ gen_helper_exception_return(cpu_env, dst);
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
96
+ /* Must exit loop to check un-masked IRQs */
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
97
+ s->base.is_jmp = DISAS_EXIT;
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
98
+ return true;
65
index XXXXXXX..XXXXXXX 100644
99
+}
66
--- a/target/arm/translate.c
100
+
67
+++ b/target/arm/translate.c
101
/* HINT instruction group, including various allocated HINTs */
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
102
static void handle_hint(DisasContext *s, uint32_t insn,
69
a64_translate_init();
103
unsigned int op1, unsigned int op2, unsigned int crm)
104
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
105
}
70
}
106
}
71
107
72
+/* Generate a label used for skipping this instruction */
108
-/* Unconditional branch (register)
73
+static void arm_gen_condlabel(DisasContext *s)
109
- * 31 25 24 21 20 16 15 10 9 5 4 0
74
+{
110
- * +---------------+-------+-------+-------+------+-------+
75
+ if (!s->condjmp) {
111
- * | 1 1 0 1 0 1 1 | opc | op2 | op3 | Rn | op4 |
76
+ s->condlabel = gen_new_label();
112
- * +---------------+-------+-------+-------+------+-------+
77
+ s->condjmp = 1;
113
- */
78
+ }
114
-static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
79
+}
80
+
81
/* Flags for the disas_set_da_iss info argument:
82
* lower bits hold the Rt register number, higher bits are flags.
83
*/
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
85
long off = neon_element_offset(reg, ele, memop);
86
87
switch (memop) {
88
+ case MO_32:
89
+ tcg_gen_st32_i64(src, cpu_env, off);
90
+ break;
91
case MO_64:
92
tcg_gen_st_i64(src, cpu_env, off);
93
break;
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
96
}
97
98
-/* Generate a label used for skipping this instruction */
99
-static void arm_gen_condlabel(DisasContext *s)
100
-{
115
-{
101
- if (!s->condjmp) {
116
- unsigned int opc, op2, op3, rn, op4;
102
- s->condlabel = gen_new_label();
117
- TCGv_i64 dst;
103
- s->condjmp = 1;
118
- TCGv_i64 modifier;
119
-
120
- opc = extract32(insn, 21, 4);
121
- op2 = extract32(insn, 16, 5);
122
- op3 = extract32(insn, 10, 6);
123
- rn = extract32(insn, 5, 5);
124
- op4 = extract32(insn, 0, 5);
125
-
126
- if (op2 != 0x1f) {
127
- goto do_unallocated;
128
- }
129
-
130
- switch (opc) {
131
- case 0:
132
- case 1:
133
- case 2:
134
- case 8:
135
- case 9:
136
- /*
137
- * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ,
138
- * BRAA, BLRAA: handled in decodetree
139
- */
140
- goto do_unallocated;
141
-
142
- case 4: /* ERET */
143
- if (s->current_el == 0) {
144
- goto do_unallocated;
145
- }
146
- switch (op3) {
147
- case 0: /* ERET */
148
- if (op4 != 0) {
149
- goto do_unallocated;
150
- }
151
- if (s->fgt_eret) {
152
- gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
153
- return;
154
- }
155
- dst = tcg_temp_new_i64();
156
- tcg_gen_ld_i64(dst, cpu_env,
157
- offsetof(CPUARMState, elr_el[s->current_el]));
158
- break;
159
-
160
- case 2: /* ERETAA */
161
- case 3: /* ERETAB */
162
- if (!dc_isar_feature(aa64_pauth, s)) {
163
- goto do_unallocated;
164
- }
165
- if (rn != 0x1f || op4 != 0x1f) {
166
- goto do_unallocated;
167
- }
168
- /* The FGT trap takes precedence over an auth trap. */
169
- if (s->fgt_eret) {
170
- gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
171
- return;
172
- }
173
- dst = tcg_temp_new_i64();
174
- tcg_gen_ld_i64(dst, cpu_env,
175
- offsetof(CPUARMState, elr_el[s->current_el]));
176
- if (s->pauth_active) {
177
- modifier = cpu_X[31];
178
- if (op3 == 2) {
179
- gen_helper_autia(dst, cpu_env, dst, modifier);
180
- } else {
181
- gen_helper_autib(dst, cpu_env, dst, modifier);
182
- }
183
- }
184
- break;
185
-
186
- default:
187
- goto do_unallocated;
188
- }
189
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
190
- gen_io_start();
191
- }
192
-
193
- gen_helper_exception_return(cpu_env, dst);
194
- /* Must exit loop to check un-masked IRQs */
195
- s->base.is_jmp = DISAS_EXIT;
196
- return;
197
-
198
- case 5: /* DRPS */
199
- if (op3 != 0 || op4 != 0 || rn != 0x1f) {
200
- goto do_unallocated;
201
- } else {
202
- unallocated_encoding(s);
203
- }
204
- return;
205
-
206
- default:
207
- do_unallocated:
208
- unallocated_encoding(s);
209
- return;
104
- }
210
- }
105
-}
211
-}
106
-
212
-
107
/* Skip this instruction if the ARM condition is false */
213
/* Branches, exception generating and system instructions */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
214
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
109
{
215
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
216
@@ -XXX,XX +XXX,XX @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
111
index XXXXXXX..XXXXXXX 100644
217
disas_exc(s, insn);
112
--- a/target/arm/translate-vfp.c.inc
218
}
113
+++ b/target/arm/translate-vfp.c.inc
219
break;
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
220
- case 0x6b: /* Unconditional branch (register) */
115
return true;
221
- disas_uncond_b_reg(s, insn);
116
}
222
- break;
117
223
default:
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
224
unallocated_encoding(s);
119
+{
225
break;
120
+ int btmreg, topreg;
121
+ TCGv_i64 zero;
122
+ TCGv_i32 aspen, sfpa;
123
+
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
125
+ /* Before v8.1M, fall through in decode to NOCP check */
126
+ return false;
127
+ }
128
+
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
131
+ unallocated_encoding(s);
132
+ return true;
133
+ }
134
+
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
136
+ /* NOP if we have neither FP nor MVE */
137
+ return true;
138
+ }
139
+
140
+ /*
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
142
+ * active floating point context so we must NOP (without doing
143
+ * any lazy state preservation or the NOCP check).
144
+ */
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
179
+
180
+ if (!vfp_access_check(s)) {
181
+ return true;
182
+ }
183
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
185
+ zero = tcg_const_i64(0);
186
+ if (btmreg & 1) {
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
188
+ btmreg++;
189
+ }
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
192
+ }
193
+ if (btmreg == topreg) {
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
199
+ return true;
200
+}
201
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
203
{
204
/*
205
--
226
--
206
2.20.1
227
2.34.1
207
208
diff view generated by jsdifflib
1
Factor out the code which handles M-profile lazy FP state preservation
1
The IMPDEF sysreg L2CTLR_EL1 found on the Cortex-A35, A53, A57, A72
2
from full_vfp_access_check(); accesses to the FPCXT_NS register are
2
and which we (arguably dubiously) also provide in '-cpu max' has a
3
a special case which need to do just this part (corresponding in the
3
2 bit field for the number of processors in the cluster. On real
4
pseudocode to the PreserveFPState() function), and not the full
4
hardware this must be sufficient because it can only be configured
5
set of actions matching the pseudocode ExecuteFPCheck() which
5
with up to 4 CPUs in the cluster. However on QEMU if the board code
6
normal FP instructions need to do.
6
does not explicitly configure the code into clusters with the right
7
CPU count we default to "give the value assuming that all CPUs in
8
the system are in a single cluster", which might be too big to fit
9
in the field.
10
11
Instead of just overflowing this 2-bit field, saturate to 3 (meaning
12
"4 CPUs", so at least we don't overwrite other fields in the register.
13
It's unlikely that any guest code really cares about the value in
14
this field; at least, if it does it probably also wants the system
15
to be more closely matching real hardware, i.e. not to have more
16
than 4 CPUs.
17
18
This issue has been present since the L2CTLR was first added in
19
commit 377a44ec8f2fac5b back in 2014. It was only noticed because
20
Coverity complains (CID 1509227) that the shift might overflow 32 bits
21
and inadvertently sign extend into the top half of the 64 bit value.
7
22
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
25
Message-id: 20230512170223.3801643-2-peter.maydell@linaro.org
11
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
12
---
26
---
13
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
27
target/arm/cortex-regs.c | 11 +++++++++--
14
1 file changed, 27 insertions(+), 18 deletions(-)
28
1 file changed, 9 insertions(+), 2 deletions(-)
15
29
16
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
30
diff --git a/target/arm/cortex-regs.c b/target/arm/cortex-regs.c
17
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.c.inc
32
--- a/target/arm/cortex-regs.c
19
+++ b/target/arm/translate-vfp.c.inc
33
+++ b/target/arm/cortex-regs.c
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
34
@@ -XXX,XX +XXX,XX @@ static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
21
return offs;
35
{
36
ARMCPU *cpu = env_archcpu(env);
37
38
- /* Number of cores is in [25:24]; otherwise we RAZ */
39
- return (cpu->core_count - 1) << 24;
40
+ /*
41
+ * Number of cores is in [25:24]; otherwise we RAZ.
42
+ * If the board didn't configure the CPUs into clusters,
43
+ * we default to "all CPUs in one cluster", which might be
44
+ * more than the 4 that the hardware permits and which is
45
+ * all you can report in this two-bit field. Saturate to
46
+ * 0b11 (== 4 CPUs) rather than overflowing the field.
47
+ */
48
+ return MIN(cpu->core_count - 1, 3) << 24;
22
}
49
}
23
50
24
+/*
51
static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
25
+ * Generate code for M-profile lazy FP state preservation if needed;
26
+ * this corresponds to the pseudocode PreserveFPState() function.
27
+ */
28
+static void gen_preserve_fp_state(DisasContext *s)
29
+{
30
+ if (s->v7m_lspact) {
31
+ /*
32
+ * Lazy state saving affects external memory and also the NVIC,
33
+ * so we must mark it as an IO operation for icount (and cause
34
+ * this to be the last insn in the TB).
35
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
38
+ gen_io_start();
39
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
41
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
43
+ * then it will clear LSPACT; we don't need to repeat this for
44
+ * any further FP insns in this TB.
45
+ */
46
+ s->v7m_lspact = false;
47
+ }
48
+}
49
+
50
/*
51
* Check that VFP access is enabled. If it is, do the necessary
52
* M-profile lazy-FP handling and then return true.
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
54
/* Handle M-profile lazy FP state mechanics */
55
56
/* Trigger lazy-state preservation if necessary */
57
- if (s->v7m_lspact) {
58
- /*
59
- * Lazy state saving affects external memory and also the NVIC,
60
- * so we must mark it as an IO operation for icount (and cause
61
- * this to be the last insn in the TB).
62
- */
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
65
- gen_io_start();
66
- }
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
68
- /*
69
- * If the preserve_fp_state helper doesn't throw an exception
70
- * then it will clear LSPACT; we don't need to repeat this for
71
- * any further FP insns in this TB.
72
- */
73
- s->v7m_lspact = false;
74
- }
75
+ gen_preserve_fp_state(s);
76
77
/* Update ownership of FP context: set FPCCR.S to match current state */
78
if (s->v8m_fpccr_s_wrong) {
79
--
52
--
80
2.20.1
53
2.34.1
81
82
diff view generated by jsdifflib
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
In the vexpress board code, we allocate a new MemoryRegion at the top
2
of vexpress_common_init() but only set it up and use it inside the
3
"if (map[VE_NORFLASHALIAS] != -1)" conditional, so we leak it if not.
4
This isn't a very interesting leak as it's a tiny amount of memory
5
once at startup, but it's easy to fix.
2
6
3
Connect CAN0 and CAN1 on the ZynqMP.
7
We could silence Coverity simply by moving the g_new() into the
8
if() block, but this use of g_new(MemoryRegion, 1) is a legacy from
9
when this board model was originally written; we wouldn't do that
10
if we wrote it today. The MemoryRegions are conceptually a part of
11
the board and must not go away until the whole board is done with
12
(at the end of the simulation), so they belong in its state struct.
4
13
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
14
This machine already has a VexpressMachineState struct that extends
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
15
MachineState, so statically put the MemoryRegions in there instead of
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
16
dynamically allocating them separately at runtime.
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
17
18
Spotted by Coverity (CID 1509083).
19
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
23
Message-id: 20230512170223.3801643-3-peter.maydell@linaro.org
10
---
24
---
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
25
hw/arm/vexpress.c | 40 ++++++++++++++++++++--------------------
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
26
1 file changed, 20 insertions(+), 20 deletions(-)
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
14
3 files changed, 62 insertions(+)
15
27
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
28
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
17
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/arm/xlnx-zynqmp.h
30
--- a/hw/arm/vexpress.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
31
+++ b/hw/arm/vexpress.c
20
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ struct VexpressMachineClass {
21
#include "hw/intc/arm_gic.h"
33
22
#include "hw/net/cadence_gem.h"
34
struct VexpressMachineState {
23
#include "hw/char/cadence_uart.h"
35
MachineState parent;
24
+#include "hw/net/xlnx-zynqmp-can.h"
36
+ MemoryRegion vram;
25
#include "hw/ide/ahci.h"
37
+ MemoryRegion sram;
26
#include "hw/sd/sdhci.h"
38
+ MemoryRegion flashalias;
27
#include "hw/ssi/xilinx_spips.h"
39
+ MemoryRegion lowram;
28
@@ -XXX,XX +XXX,XX @@
40
+ MemoryRegion a15sram;
29
#include "hw/cpu/cluster.h"
30
#include "target/arm/cpu.h"
31
#include "qom/object.h"
32
+#include "net/can_emu.h"
33
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
38
#define XLNX_ZYNQMP_NUM_GEMS 4
39
#define XLNX_ZYNQMP_NUM_UARTS 2
40
+#define XLNX_ZYNQMP_NUM_CAN 2
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
61
62
#endif
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/xlnx-zcu102.c
66
+++ b/hw/arm/xlnx-zcu102.c
67
@@ -XXX,XX +XXX,XX @@
68
#include "sysemu/qtest.h"
69
#include "sysemu/device_tree.h"
70
#include "qom/object.h"
71
+#include "net/can_emu.h"
72
73
struct XlnxZCU102 {
74
MachineState parent_obj;
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
76
bool secure;
41
bool secure;
77
bool virt;
42
bool virt;
78
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
80
+
81
struct arm_boot_info binfo;
82
};
43
};
83
44
@@ -XXX,XX +XXX,XX @@ struct VexpressMachineState {
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
45
#define TYPE_VEXPRESS_A15_MACHINE MACHINE_TYPE_NAME("vexpress-a15")
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
46
OBJECT_DECLARE_TYPE(VexpressMachineState, VexpressMachineClass, VEXPRESS_MACHINE)
86
&error_fatal);
47
87
48
-typedef void DBoardInitFn(const VexpressMachineState *machine,
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
49
+typedef void DBoardInitFn(VexpressMachineState *machine,
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
50
ram_addr_t ram_size,
90
+
51
const char *cpu_type,
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
52
qemu_irq *pic);
92
+ OBJECT(s->canbus[i]), &error_fatal);
53
@@ -XXX,XX +XXX,XX @@ static void init_cpus(MachineState *ms, const char *cpu_type,
93
+ g_free(bus_name);
54
}
94
+ }
95
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
107
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
112
}
55
}
113
56
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
57
-static void a9_daughterboard_init(const VexpressMachineState *vms,
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
58
+static void a9_daughterboard_init(VexpressMachineState *vms,
116
index XXXXXXX..XXXXXXX 100644
59
ram_addr_t ram_size,
117
--- a/hw/arm/xlnx-zynqmp.c
60
const char *cpu_type,
118
+++ b/hw/arm/xlnx-zynqmp.c
61
qemu_irq *pic)
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
62
{
120
21, 22,
63
MachineState *machine = MACHINE(vms);
64
MemoryRegion *sysmem = get_system_memory();
65
- MemoryRegion *lowram = g_new(MemoryRegion, 1);
66
ram_addr_t low_ram_size;
67
68
if (ram_size > 0x40000000) {
69
@@ -XXX,XX +XXX,XX @@ static void a9_daughterboard_init(const VexpressMachineState *vms,
70
* address space should in theory be remappable to various
71
* things including ROM or RAM; we always map the RAM there.
72
*/
73
- memory_region_init_alias(lowram, NULL, "vexpress.lowmem", machine->ram,
74
- 0, low_ram_size);
75
- memory_region_add_subregion(sysmem, 0x0, lowram);
76
+ memory_region_init_alias(&vms->lowram, NULL, "vexpress.lowmem",
77
+ machine->ram, 0, low_ram_size);
78
+ memory_region_add_subregion(sysmem, 0x0, &vms->lowram);
79
memory_region_add_subregion(sysmem, 0x60000000, machine->ram);
80
81
/* 0x1e000000 A9MPCore (SCU) private memory region */
82
@@ -XXX,XX +XXX,XX @@ static VEDBoardInfo a9_daughterboard = {
83
.init = a9_daughterboard_init,
121
};
84
};
122
85
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
86
-static void a15_daughterboard_init(const VexpressMachineState *vms,
124
+ 0xFF060000, 0xFF070000,
87
+static void a15_daughterboard_init(VexpressMachineState *vms,
125
+};
88
ram_addr_t ram_size,
126
+
89
const char *cpu_type,
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
90
qemu_irq *pic)
128
+ 23, 24,
91
{
129
+};
92
MachineState *machine = MACHINE(vms);
130
+
93
MemoryRegion *sysmem = get_system_memory();
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
94
- MemoryRegion *sram = g_new(MemoryRegion, 1);
132
0xFF160000, 0xFF170000,
95
133
};
96
{
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
97
/* We have to use a separate 64 bit variable here to avoid the gcc
135
TYPE_CADENCE_UART);
98
@@ -XXX,XX +XXX,XX @@ static void a15_daughterboard_init(const VexpressMachineState *vms,
99
/* 0x2b060000: SP805 watchdog: not modelled */
100
/* 0x2b0a0000: PL341 dynamic memory controller: not modelled */
101
/* 0x2e000000: system SRAM */
102
- memory_region_init_ram(sram, NULL, "vexpress.a15sram", 0x10000,
103
+ memory_region_init_ram(&vms->a15sram, NULL, "vexpress.a15sram", 0x10000,
104
&error_fatal);
105
- memory_region_add_subregion(sysmem, 0x2e000000, sram);
106
+ memory_region_add_subregion(sysmem, 0x2e000000, &vms->a15sram);
107
108
/* 0x7ffb0000: DMA330 DMA controller: not modelled */
109
/* 0x7ffd0000: PL354 static memory controller: not modelled */
110
@@ -XXX,XX +XXX,XX @@ static void vexpress_common_init(MachineState *machine)
111
I2CBus *i2c;
112
ram_addr_t vram_size, sram_size;
113
MemoryRegion *sysmem = get_system_memory();
114
- MemoryRegion *vram = g_new(MemoryRegion, 1);
115
- MemoryRegion *sram = g_new(MemoryRegion, 1);
116
- MemoryRegion *flashalias = g_new(MemoryRegion, 1);
117
- MemoryRegion *flash0mem;
118
const hwaddr *map = daughterboard->motherboard_map;
119
int i;
120
121
@@ -XXX,XX +XXX,XX @@ static void vexpress_common_init(MachineState *machine)
122
123
if (map[VE_NORFLASHALIAS] != -1) {
124
/* Map flash 0 as an alias into low memory */
125
+ MemoryRegion *flash0mem;
126
flash0mem = sysbus_mmio_get_region(SYS_BUS_DEVICE(pflash0), 0);
127
- memory_region_init_alias(flashalias, NULL, "vexpress.flashalias",
128
+ memory_region_init_alias(&vms->flashalias, NULL, "vexpress.flashalias",
129
flash0mem, 0, VEXPRESS_FLASH_SIZE);
130
- memory_region_add_subregion(sysmem, map[VE_NORFLASHALIAS], flashalias);
131
+ memory_region_add_subregion(sysmem, map[VE_NORFLASHALIAS], &vms->flashalias);
136
}
132
}
137
133
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
134
dinfo = drive_get(IF_PFLASH, 0, 1);
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
135
ve_pflash_cfi01_register(map[VE_NORFLASH1], "vexpress.flash1", dinfo);
140
+ TYPE_XLNX_ZYNQMP_CAN);
136
141
+ }
137
sram_size = 0x2000000;
142
+
138
- memory_region_init_ram(sram, NULL, "vexpress.sram", sram_size,
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
139
+ memory_region_init_ram(&vms->sram, NULL, "vexpress.sram", sram_size,
144
140
&error_fatal);
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
141
- memory_region_add_subregion(sysmem, map[VE_SRAM], sram);
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
142
+ memory_region_add_subregion(sysmem, map[VE_SRAM], &vms->sram);
147
gic_spi[uart_intr[i]]);
143
148
}
144
vram_size = 0x800000;
149
145
- memory_region_init_ram(vram, NULL, "vexpress.vram", vram_size,
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
146
+ memory_region_init_ram(&vms->vram, NULL, "vexpress.vram", vram_size,
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
147
&error_fatal);
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
148
- memory_region_add_subregion(sysmem, map[VE_VIDEORAM], vram);
153
+
149
+ memory_region_add_subregion(sysmem, map[VE_VIDEORAM], &vms->vram);
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
150
155
+ OBJECT(s->canbus[i]), &error_fatal);
151
/* 0x4e000000 LAN9118 Ethernet */
156
+
152
if (nd_table[0].used) {
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
158
+ if (err) {
159
+ error_propagate(errp, err);
160
+ return;
161
+ }
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
164
+ gic_spi[can_intr[i]]);
165
+ }
166
+
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
168
&error_abort);
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
173
MemoryRegion *),
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
175
+ CanBusState *),
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
177
+ CanBusState *),
178
DEFINE_PROP_END_OF_LIST()
179
};
180
181
--
153
--
182
2.20.1
154
2.34.1
183
155
184
156
diff view generated by jsdifflib
Deleted patch
1
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
2
Private Peripheral Bus range, which includes all of the memory mapped
3
devices and registers that are part of the CPU itself, including the
4
NVIC, systick timer, and debug and trace components like the Data
5
Watchpoint and Trace unit (DWT). Within this large region, the range
6
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
7
registers, systick) and 0xe002e000 to 0exe002efff is its Non-secure
8
alias.
9
1
10
The architecture is clear that within the SCS unimplemented registers
11
should be RES0 for privileged accesses and generate BusFault for
12
unprivileged accesses, and we currently implement this.
13
14
It is less clear about how to handle accesses to unimplemented
15
regions of the wider PPB. Unprivileged accesses should definitely
16
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
17
not given as a general rule. However, the register definitions of
18
individual registers for components like the DWT all state that they
19
are RES0 if the relevant component is not implemented, so the
20
simplest way to provide that is to provide RAZ/WI for the whole range
21
for privileged accesses. (The v7M Arm ARM does say that reserved
22
registers should be UNK/SBZP.)
23
24
Expand the container MemoryRegion that the NVIC exposes so that
25
it covers the whole PPB space. This means:
26
* moving the address that the ARMV7M device maps it to down by
27
0xe000 bytes
28
* moving the offsets within the container of all the
29
subregions forward by 0xe000 bytes
30
* adding a new default MemoryRegion that covers the whole container
31
at a lower priority than anything else and which provides the
32
RAZWI/BusFault behaviour
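
(A worked view of the resulting layout, for illustration; the offsets and
priorities are the ones used in this patch:)

    nvic container (the PPB, mapped at 0xe0000000, size 0x100000)
      +0x00000  nvic-default      priority -1  (RAZ/WI priv, BusFault unpriv)
      +0x0e000  nvic_sysregs      priority  0
      +0x0e010  nvic_systick      priority  1
      +0x2e000  nvic_sysregs_ns   priority  0
      +0x2e010  nvic_systick_ns   priority  1

So a privileged read of 0xe0001004 (a DWT register we don't implement) hits
only nvic-default and reads as zero, while a read of 0xe000e010 resolves to
nvic_systick, because it overlaps nvic_sysregs at higher priority.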
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
37
---
38
include/hw/intc/armv7m_nvic.h | 1 +
39
hw/arm/armv7m.c | 2 +-
40
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
41
3 files changed, 69 insertions(+), 12 deletions(-)
42
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/include/hw/intc/armv7m_nvic.h
46
+++ b/include/hw/intc/armv7m_nvic.h
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
48
MemoryRegion systickmem;
49
MemoryRegion systick_ns_mem;
50
MemoryRegion container;
51
+ MemoryRegion defaultmem;
52
53
uint32_t num_irq;
54
qemu_irq excpout;
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/armv7m.c
58
+++ b/hw/arm/armv7m.c
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
60
sysbus_connect_irq(sbd, 0,
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
62
63
- memory_region_add_subregion(&s->container, 0xe000e000,
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
65
sysbus_mmio_get_region(sbd, 0));
66
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
73
.endianness = DEVICE_NATIVE_ENDIAN,
74
};
75
76
+/*
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
79
+ */
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
81
+ uint64_t *data, unsigned size,
82
+ MemTxAttrs attrs)
83
+{
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
85
+ (uint32_t)addr);
86
+ if (attrs.user) {
87
+ return MEMTX_ERROR;
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
91
+}
92
+
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
94
+ uint64_t value, unsigned size,
95
+ MemTxAttrs attrs)
96
+{
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
98
+ (uint32_t)addr);
99
+ if (attrs.user) {
100
+ return MEMTX_ERROR;
101
+ }
102
+ return MEMTX_OK;
103
+}
104
+
105
+static const MemoryRegionOps ppb_default_ops = {
106
+ .read_with_attrs = ppb_default_read,
107
+ .write_with_attrs = ppb_default_write,
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
109
+ .valid.min_access_size = 1,
110
+ .valid.max_access_size = 8,
111
+};
112
+
113
static int nvic_post_load(void *opaque, int version_id)
114
{
115
NVICState *s = opaque;
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
118
{
119
NVICState *s = NVIC(dev);
120
- int regionlen;
121
122
/* The armv7m container object will have set our CPU pointer */
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
125
M_REG_S));
126
}
127
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
129
+ /*
130
+ * This device provides a single sysbus memory region which
131
+ * represents the whole of the "System PPB" space. This is the
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
133
+ * the System Control Space (system registers), the systick timer,
134
+ * and for CPUs with the Security extension an NS banked version
135
+ * of all of these.
136
+ *
137
+ * The default behaviour for unimplemented registers/ranges
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
140
+ * access.
141
+ *
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
143
* and looks like this:
144
* 0x004 - ICTR
145
* 0x010 - 0xff - systick
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
147
* generally code determining which banked register to use should
148
* use attrs.secure; code determining actual behaviour of the system
149
* should use env->v7m.secure.
150
+ *
151
+ * The container covers the whole PPB space. Within it the priority
152
+ * of overlapping regions is:
153
+ * - default region (for RAZ/WI and BusFault) : -1
154
+ * - system register regions : 0
155
+ * - systick : 1
156
+ * This is because the systick device is a small block of registers
157
+ * in the middle of the other system control registers.
158
*/
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
161
- /* The system register region goes at the bottom of the priority
162
- * stack as it covers the whole page.
163
- */
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
166
+ "nvic-default", 0x100000);
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
169
"nvic_sysregs", 0x1000);
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
172
173
memory_region_init_io(&s->systickmem, OBJECT(s),
174
&nvic_systick_ops, s,
175
"nvic_systick", 0xe0);
176
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
179
&s->systickmem, 1);
180
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
183
&nvic_sysreg_ns_ops, &s->sysregmem,
184
"nvic_sysregs_ns", 0x1000);
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
188
&nvic_sysreg_ns_ops, &s->systickmem,
189
"nvic_systick_ns", 0xe0);
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
192
&s->systick_ns_mem, 1);
193
}
194
195
--
196
2.20.1
197
198
diff view generated by jsdifflib
Deleted patch
1
In v8.1M the PXN architecture extension adds a new PXN bit to the
2
MPU_RLAR registers, which forbids execution of code in the region
3
from a privileged mode.
4
1
5
This is another feature which is just in the generic "in v8.1M" set
6
and has no ID register field indicating its presence.
7
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
11
---
12
target/arm/helper.c | 7 ++++++-
13
1 file changed, 6 insertions(+), 1 deletion(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
20
} else {
21
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
22
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
23
+ bool pxn = false;
24
+
25
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
26
+ pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
27
+ }
28
29
if (m_is_system_region(env, address)) {
30
/* System space is always execute never */
31
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
32
}
33
34
*prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
35
- if (*prot && !xn) {
36
+ if (*prot && !xn && !(pxn && !is_user)) {
37
*prot |= PAGE_EXEC;
38
}
39
/* We don't need to look the attribute up in the MAIR0/MAIR1
40
--
41
2.20.1
42
43
diff view generated by jsdifflib
Deleted patch
1
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
2
like the existing FPSCR, except that it reads and writes only bits
3
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
4
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
5
permitted.)
6
1
7
Implement the register. Since we don't yet implement MVE, we handle
8
the QC bit as RES0, with todo comments for where we will need to add
9
support later.
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
14
---
15
target/arm/cpu.h | 13 +++++++++++++
16
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
17
2 files changed, 40 insertions(+)
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
28
+#define FPCR_C (1 << 29) /* FP carry flag */
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
30
+#define FPCR_N (1 << 31) /* FP negative flag */
31
+
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
34
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
36
{
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-vfp.c.inc
53
+++ b/target/arm/translate-vfp.c.inc
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
55
case ARM_VFP_FPSCR:
56
case QEMU_VFP_FPSCR_NZCV:
57
break;
58
+ case ARM_VFP_FPSCR_NZCVQC:
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
60
+ return false;
61
+ }
62
+ break;
63
default:
64
return FPSysRegCheckFailed;
65
}
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
67
tcg_temp_free_i32(tmp);
68
gen_lookup_tb(s);
69
break;
70
+ case ARM_VFP_FPSCR_NZCVQC:
71
+ {
72
+ TCGv_i32 fpscr;
73
+ tmp = loadfn(s, opaque);
74
+ /*
75
+ * TODO: when we implement MVE, write the QC bit.
76
+ * For non-MVE, QC is RES0.
77
+ */
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
83
+ tcg_temp_free_i32(tmp);
84
+ break;
85
+ }
86
default:
87
g_assert_not_reached();
88
}
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
91
storefn(s, opaque, tmp);
92
break;
93
+ case ARM_VFP_FPSCR_NZCVQC:
94
+ /*
95
+ * TODO: MVE has a QC bit, which we probably won't store
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
97
+ * we can just fall through to the FPSCR_NZCV case.
98
+ */
99
case QEMU_VFP_FPSCR_NZCV:
100
/*
101
* Read just NZCV; this is a special case to avoid the
102
--
103
2.20.1
104
105
diff view generated by jsdifflib
Deleted patch
1
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
2
in the previous commit; use it in a couple of places in existing code,
3
where we're masking out everything except NZCV for the "load to Rt=15
4
sets CPSR.NZCV" special case.
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
9
---
10
target/arm/translate-vfp.c.inc | 4 ++--
11
1 file changed, 2 insertions(+), 2 deletions(-)
12
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
16
+++ b/target/arm/translate-vfp.c.inc
17
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
18
* helper call for the "VMRS to CPSR.NZCV" insn.
19
*/
20
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
21
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
22
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
23
storefn(s, opaque, tmp);
24
break;
25
default:
26
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
27
case ARM_VFP_FPSCR:
28
if (a->rt == 15) {
29
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
30
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
31
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
32
} else {
33
tmp = tcg_temp_new_i32();
34
gen_helper_vfp_get_fpscr(tmp, cpu_env);
35
--
36
2.20.1
37
38
diff view generated by jsdifflib
Deleted patch
1
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
2
are zeroed for an exception taken to Non-secure state; for an
3
exception taken to Secure state they become UNKNOWN, and we chose to
4
leave them at their previous values.
5
1
6
In v8.1M the behaviour is specified more tightly and these registers
7
are always zeroed regardless of the security state that the exception
8
targets (see rule R_KPZV). Implement this.
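
(For illustration, the resulting behaviour for a CPU with the Security
extension, as implemented below; "background Secure" means EXCRET.S == 1:)

    r0-r3, r12   zeroed on every exception entry in v8.1M (previously only
                 when the exception targeted Non-secure state)
    r4-r11       zeroed only when the background state was Secure and the
                 exception targets Non-secure state (unchanged)
    APSR, EPSR   cleared together with the caller-saved registers, per the
                 description above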
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
13
---
14
target/arm/m_helper.c | 16 ++++++++++++----
15
1 file changed, 12 insertions(+), 4 deletions(-)
16
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/m_helper.c
20
+++ b/target/arm/m_helper.c
21
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
22
* Clear registers if necessary to prevent non-secure exception
23
* code being able to see register values from secure code.
24
* Where register values become architecturally UNKNOWN we leave
25
- * them with their previous values.
26
+ * them with their previous values. v8.1M is tighter than v8.0M
27
+ * here and always zeroes the caller-saved registers regardless
28
+ * of the security state the exception is targeting.
29
*/
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
31
- if (!targets_secure) {
32
+ if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
33
/*
34
* Always clear the caller-saved registers (they have been
35
* pushed to the stack earlier in v7m_push_stack()).
36
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
37
* v7m_push_callee_stack()).
38
*/
39
int i;
40
+ /*
41
+ * r4..r11 are callee-saves, zero only if background
42
+ * state was Secure (EXCRET.S == 1) and exception
43
+ * targets Non-secure state
44
+ */
45
+ bool zero_callee_saves = !targets_secure &&
46
+ (lr & R_V7M_EXCRET_S_MASK);
47
48
for (i = 0; i < 13; i++) {
49
- /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
50
- if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
51
+ if (i < 4 || i > 11 || zero_callee_saves) {
52
env->regs[i] = 0;
53
}
54
}
55
--
56
2.20.1
57
58
diff view generated by jsdifflib
Deleted patch
1
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
2
R_LLRP). (In previous versions of the architecture this was either
3
required or IMPDEF.)
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
8
---
9
target/arm/m_helper.c | 6 +++++-
10
1 file changed, 5 insertions(+), 1 deletion(-)
11
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/m_helper.c
15
+++ b/target/arm/m_helper.c
16
@@ -XXX,XX +XXX,XX @@ load_fail:
17
* The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
18
* secure); otherwise it targets the same security state as the
19
* underlying exception.
20
+ * In v8.1M HardFaults from vector table fetch fails don't set FORCED.
21
*/
22
if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
23
exc_secure = true;
24
}
25
- env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
26
+ env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK;
27
+ if (!arm_feature(env, ARM_FEATURE_V8_1M)) {
28
+ env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
29
+ }
30
armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
31
return false;
32
}
33
--
34
2.20.1
35
36
diff view generated by jsdifflib
Deleted patch
1
In v8.1M a REVIDR register is defined, which is at address 0xe000ecfc
2
and is a read-only IMPDEF register providing implementation specific
3
minor revision information, like the v8A REVIDR_EL1. Implement this.
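
(A hypothetical guest-side illustration, not part of the patch: the register
sits in the System Control Space immediately before the CPUID word, so
bare-metal M-profile code could read it as:)

    #include <stdint.h>

    #define REVIDR  (*(volatile uint32_t *)0xE000ECFCu)
    #define CPUID   (*(volatile uint32_t *)0xE000ED00u)

    /*
     * On QEMU's v8.1M CPUs this returns the IMPDEF revidr value
     * (typically 0); on pre-v8.1M CPUs the offset is treated as a
     * bad offset instead, as the code below shows.
     */
    uint32_t read_revidr(void)
    {
        return REVIDR;
    }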
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
8
---
9
hw/intc/armv7m_nvic.c | 5 +++++
10
1 file changed, 5 insertions(+)
11
12
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/intc/armv7m_nvic.c
15
+++ b/hw/intc/armv7m_nvic.c
16
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
17
}
18
return val;
19
}
20
+ case 0xcfc:
21
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
22
+ goto bad_offset;
23
+ }
24
+ return cpu->revidr;
25
case 0xd00: /* CPUID Base. */
26
return cpu->midr;
27
case 0xd04: /* Interrupt Control State (ICSR) */
28
--
29
2.20.1
30
31
diff view generated by jsdifflib
Correct a typo in the name we give the NVIC object.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
---
hw/arm/armv7m.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)

memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);

- object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
+ object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
object_property_add_alias(obj, "num-irq",
OBJECT(&s->nvic), "num-irq");

--
2.20.1

Convert the u2f.txt file to rST, and place it in the right place
in our manual layout. The old text didn't fit very well into our
manual style, so the new version ends up looking like a rewrite,
although some of the original text is preserved:

* the 'building' section of the old file is removed, since we
  generally assume that users have already built QEMU
* some rather verbose text has been cut back
* document the passthrough device first, on the assumption
  that's most likely to be of interest to users
* cut back on the duplication of text between sections
* format example command lines etc with rST

As it's a short document it seemed simplest to do this all
in one go rather than try to do a minimal syntactic conversion
and then clean up the wording and layout.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20230421163734.1152076-1-peter.maydell@linaro.org
---
docs/system/device-emulation.rst | 1 +
docs/system/devices/usb-u2f.rst | 93 ++++++++++++++++++++++++++
docs/system/devices/usb.rst | 2 +-
docs/u2f.txt | 110 -------------------------------
4 files changed, 95 insertions(+), 111 deletions(-)
create mode 100644 docs/system/devices/usb-u2f.rst
delete mode 100644 docs/u2f.txt

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -XXX,XX +XXX,XX @@ Emulated Devices
devices/virtio-pmem.rst
devices/vhost-user-rng.rst
devices/canokey.rst
+ devices/usb-u2f.rst
devices/igb.rst
diff --git a/docs/system/devices/usb-u2f.rst b/docs/system/devices/usb-u2f.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/devices/usb-u2f.rst
@@ -XXX,XX +XXX,XX @@
+Universal Second Factor (U2F) USB Key Device
+============================================
+
+U2F is an open authentication standard that enables relying parties
+exposed to the internet to offer a strong second factor option for end
+user authentication.
+
+The second factor is provided by a device implementing the U2F
+protocol. In case of a USB U2F security key, it is a USB HID device
+that implements the U2F protocol.
+
+QEMU supports both pass-through of a host U2F key device to a VM,
+and software emulation of a U2F key.
+
+``u2f-passthru``
+----------------
+
+The ``u2f-passthru`` device allows you to connect a real hardware
+U2F key on your host to a guest VM. All requests made from the guest
+are passed through to the physical security key connected to the
+host machine and vice versa.
+
+In addition, the dedicated pass-through allows you to share a single
+U2F security key with several guest VMs, which is not possible with a
+simple host device assignment pass-through.
+
+You can specify the host U2F key to use with the ``hidraw``
+option, which takes the host path to a Linux ``/dev/hidrawN`` device:
+
+.. parsed-literal::
+ |qemu_system| -usb -device u2f-passthru,hidraw=/dev/hidraw0
+
+If you don't specify the device, the ``u2f-passthru`` device will
+autoscan to take the first U2F device it finds on the host (this
+requires a working libudev):
+
+.. parsed-literal::
+ |qemu_system| -usb -device u2f-passthru
+
+``u2f-emulated``
+----------------
+
+``u2f-emulated`` is a completely software emulated U2F device.
+It uses `libu2f-emu <https://github.com/MattGorko/libu2f-emu>`__
+for the U2F key emulation. libu2f-emu
+provides a complete implementation of the U2F protocol device part for
+all specified transports given by the FIDO Alliance.
+
+To work, an emulated U2F device must have four elements:
+
+ * ec x509 certificate
+ * ec private key
+ * counter (four bytes value)
+ * 48 bytes of entropy (random bits)
+
+To use this type of device, these have to be configured, and these
+four elements must be passed one way or another.
+
+Assuming that you have a working libu2f-emu installed on the host,
+there are three possible ways to configure the ``u2f-emulated`` device:
+
+ * ephemeral
+ * setup directory
+ * manual
+
+Ephemeral is the simplest way to configure; it lets the device generate
+all the elements it needs for a single use of the lifetime of the device.
+It is the default if you do not pass any other options to the device.
+
+.. parsed-literal::
+ |qemu_system| -usb -device u2f-emulated
+
+You can pass the device the path of a setup directory on the host
+using the ``dir`` option; the directory must contain these four files:
+
+ * ``certificate.pem``: ec x509 certificate
+ * ``private-key.pem``: ec private key
+ * ``counter``: counter value
+ * ``entropy``: 48 bytes of entropy
+
+.. parsed-literal::
+ |qemu_system| -usb -device u2f-emulated,dir=$dir
+
+You can also manually pass the device the paths to each of these files,
+if you don't want them all to be in the same directory, using the options
+
+ * ``cert``
+ * ``priv``
+ * ``counter``
+ * ``entropy``
+
+.. parsed-literal::
+ |qemu_system| -usb -device u2f-emulated,cert=$DIR1/$FILE1,priv=$DIR2/$FILE2,counter=$DIR3/$FILE3,entropy=$DIR4/$FILE4
diff --git a/docs/system/devices/usb.rst b/docs/system/devices/usb.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/devices/usb.rst
+++ b/docs/system/devices/usb.rst
@@ -XXX,XX +XXX,XX @@ option or the ``device_add`` monitor command. Available devices are:
USB audio device

``u2f-{emulated,passthru}``
- Universal Second Factor device
+ :doc:`usb-u2f`

``canokey``
An Open-source Secure Key implementing FIDO2, OpenPGP, PIV and more.
diff --git a/docs/u2f.txt b/docs/u2f.txt
deleted file mode 100644
index XXXXXXX..XXXXXXX
--- a/docs/u2f.txt
+++ /dev/null
@@ -XXX,XX +XXX,XX @@
-QEMU U2F Key Device Documentation.
-
-Contents
-1. USB U2F key device
-2. Building
-3. Using u2f-emulated
-4. Using u2f-passthru
-5. Libu2f-emu
-
-1. USB U2F key device
-
-U2F is an open authentication standard that enables relying parties
-exposed to the internet to offer a strong second factor option for end
-user authentication.
-
-The standard brings many advantages to both parties, client and server,
-allowing to reduce over-reliance on passwords, it increases authentication
-security and simplifies passwords.
-
-The second factor is materialized by a device implementing the U2F
-protocol. In case of a USB U2F security key, it is a USB HID device
-that implements the U2F protocol.
-
-In QEMU, the USB U2F key device offers a dedicated support of U2F, allowing
-guest USB FIDO/U2F security keys operating in two possible modes:
-pass-through and emulated.
-
-The pass-through mode consists of passing all requests made from the guest
-to the physical security key connected to the host machine and vice versa.
-In addition, the dedicated pass-through allows to have a U2F security key
-shared on several guests which is not possible with a simple host device
-assignment pass-through.
-
-The emulated mode consists of completely emulating the behavior of an
-U2F device through software part. Libu2f-emu is used for that.
-
-
-2. Building
-
-To ensure the build of the u2f-emulated device variant which depends
-on libu2f-emu: configuring and building:
-
- ./configure --enable-u2f && make
-
-The pass-through mode is built by default on Linux. To take advantage
-of the autoscan option it provides, make sure you have a working libudev
-installed on the host.
-
-
-3. Using u2f-emulated
-
-To work, an emulated U2F device must have four elements:
- * ec x509 certificate
- * ec private key
- * counter (four bytes value)
- * 48 bytes of entropy (random bits)
-
-To use this type of device, this one has to be configured, and these
-four elements must be passed one way or another.
-
-Assuming that you have a working libu2f-emu installed on the host.
-There are three possible ways of configurations:
- * ephemeral
- * setup directory
- * manual
-
-Ephemeral is the simplest way to configure, it lets the device generate
-all the elements it needs for a single use of the lifetime of the device.
-
- qemu -usb -device u2f-emulated
-
-Setup directory allows to configure the device from a directory containing
-four files:
- * certificate.pem: ec x509 certificate
- * private-key.pem: ec private key
- * counter: counter value
- * entropy: 48 bytes of entropy
-
- qemu -usb -device u2f-emulated,dir=$dir
-
-Manual allows to configure the device more finely by specifying each
-of the elements necessary for the device:
- * cert
- * priv
- * counter
- * entropy
-
- qemu -usb -device u2f-emulated,cert=$DIR1/$FILE1,priv=$DIR2/$FILE2,counter=$DIR3/$FILE3,entropy=$DIR4/$FILE4
-
-
-4. Using u2f-passthru
-
-On the host specify the u2f-passthru device with a suitable hidraw:
-
- qemu -usb -device u2f-passthru,hidraw=/dev/hidraw0
-
-Alternately, the u2f-passthru device can autoscan to take the first
-U2F device it finds on the host (this requires a working libudev):
-
- qemu -usb -device u2f-passthru
-
-
-5. Libu2f-emu
-
-The u2f-emulated device uses libu2f-emu for the U2F key emulation. Libu2f-emu
-implements completely the U2F protocol device part for all specified
-transport given by the FIDO Alliance.
-
-For more information about libu2f-emu see this page:
-https://github.com/MattGorko/libu2f-emu.
--
2.34.1