First pullreq for 6.0: mostly my v8.1M work, plus some other
bits and pieces. (I still have a lot of stuff in my to-review
folder, which I may or may not get to before the Christmas break...)

thanks
-- PMM

The following changes since commit 5e7b204dbfae9a562fc73684986f936b97f63877:

  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2020-12-09 20:08:54 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201210

for you to fetch changes up to 71f916be1c7e9ede0e37d9cabc781b5a9e8638ff:

  hw/arm/armv7m: Correct typo in QOM object name (2020-12-10 11:44:56 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Fix up L1STD_SPAN decoding
 * xlnx-zynqmp: Support Xilinx ZynqMP CAN controllers
 * sbsa-ref: allow to use Cortex-A53/57/72 cpus
 * Various minor code cleanups
 * hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
 * Implement more pieces of ARMv8.1M support

----------------------------------------------------------------
Alex Chen (4):
      i.MX25: Fix bad printf format specifiers
      i.MX31: Fix bad printf format specifiers
      i.MX6: Fix bad printf format specifiers
      i.MX6ul: Fix bad printf format specifiers

Havard Skinnemoen (1):
      tests/qtest/npcm7xx_rng-test: dump random data on failure

Kunkun Jiang (1):
      hw/arm/smmuv3: Fix up L1STD_SPAN decoding

Marcin Juszkiewicz (1):
      sbsa-ref: allow to use Cortex-A53/57/72 cpus

Peter Maydell (25):
      hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
      target/arm: Implement v8.1M PXN extension
      target/arm: Don't clobber ID_PFR1.Security on M-profile cores
      target/arm: Implement VSCCLRM insn
      target/arm: Implement CLRM instruction
      target/arm: Enforce M-profile VMRS/VMSR register restrictions
      target/arm: Refactor M-profile VMSR/VMRS handling
      target/arm: Move general-use constant expanders up in translate.c
      target/arm: Implement VLDR/VSTR system register
      target/arm: Implement M-profile FPSCR_nzcvqc
      target/arm: Use new FPCR_NZCV_MASK constant
      target/arm: Factor out preserve-fp-state from full_vfp_access_check()
      target/arm: Implement FPCXT_S fp system register
      hw/intc/armv7m_nvic: Update FPDSCR masking for v8.1M
      target/arm: For v8.1M, always clear R0-R3, R12, APSR, EPSR on exception entry
      target/arm: In v8.1M, don't set HFSR.FORCED on vector table fetch failures
      target/arm: Implement v8.1M REVIDR register
      target/arm: Implement new v8.1M NOCP check for exception return
      target/arm: Implement new v8.1M VLLDM and VLSTM encodings
      hw/intc/armv7m_nvic: Support v8.1M CCR.TRD bit
      target/arm: Implement CCR_S.TRD behaviour for SG insns
      hw/intc/armv7m_nvic: Fix "return from inactive handler" check
      target/arm: Implement M-profile "minimal RAS implementation"
      hw/intc/armv7m_nvic: Implement read/write for RAS register block
      hw/arm/armv7m: Correct typo in QOM object name

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx ZynqMP CAN controller
      xlnx-zynqmp: Connect Xilinx ZynqMP CAN controllers
      tests/qtest: Introduce tests for Xilinx ZynqMP CAN controller
      MAINTAINERS: Add maintainer entry for Xilinx ZynqMP CAN controller

 meson.build                      |    1 +
 hw/arm/smmuv3-internal.h         |    2 +-
 hw/net/can/trace.h               |    1 +
 include/hw/arm/xlnx-zynqmp.h     |    8 +
 include/hw/intc/armv7m_nvic.h    |    2 +
 include/hw/net/xlnx-zynqmp-can.h |   78 +++
 target/arm/cpu.h                 |   46 ++
 target/arm/m-nocp.decode         |   10 +-
 target/arm/t32.decode            |   10 +-
 target/arm/vfp.decode            |   14 +
 hw/arm/armv7m.c                  |    4 +-
 hw/arm/sbsa-ref.c                |   23 +-
 hw/arm/xlnx-zcu102.c             |   20 +
 hw/arm/xlnx-zynqmp.c             |   34 ++
 hw/intc/armv7m_nvic.c            |  246 ++++++--
 hw/misc/imx25_ccm.c              |   12 +-
 hw/misc/imx31_ccm.c              |   14 +-
 hw/misc/imx6_ccm.c               |   20 +-
 hw/misc/imx6_src.c               |    2 +-
 hw/misc/imx6ul_ccm.c             |    4 +-
 hw/misc/imx_ccm.c                |    4 +-
 hw/net/can/xlnx-zynqmp-can.c     | 1161 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c                 |    5 +-
 target/arm/helper.c              |    7 +-
 target/arm/m_helper.c            |  130 ++++-
 target/arm/translate.c           |  105 +++-
 tests/qtest/npcm7xx_rng-test.c   |   12 +
 tests/qtest/xlnx-can-test.c      |  360 ++++++++++++
 MAINTAINERS                      |    8 +
 hw/Kconfig                       |    1 +
 hw/net/can/meson.build           |    1 +
 hw/net/can/trace-events          |    9 +
 target/arm/translate-vfp.c.inc   |  511 ++++++++++++++++-
 tests/qtest/meson.build          |    1 +
 34 files changed, 2713 insertions(+), 153 deletions(-)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 tests/qtest/xlnx-can-test.c
 create mode 100644 hw/net/can/trace-events
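The first patch below is a one-character decode fix. As a rough standalone sketch of why the field width matters (this is not QEMU code, and the descriptor value is invented), extracting only four bits of the five-bit SPAN field silently truncates any SPAN value of 16 or more:

    /* Standalone sketch; extract_bits() mirrors the semantics of extract32(). */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t extract_bits(uint32_t value, int start, int length)
    {
        return (value >> start) & ((1u << length) - 1);
    }

    int main(void)
    {
        uint32_t word0 = 0x12;   /* invented descriptor word with SPAN = 18 */

        printf("4-bit extract: %u\n", extract_bits(word0, 0, 4));  /* 2  - truncated */
        printf("5-bit extract: %u\n", extract_bits(word0, 0, 5));  /* 18 - correct   */
        return 0;
    }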
From: Kunkun Jiang <jiangkunkun@huawei.com>

According to the SMMUv3 spec, the SPAN field of the Level1 Stream Table
Descriptor is 5 bits ([4:0]).

Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
     return hi << 32 | lo;
 }

-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))

 #endif
--
2.20.1

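In the controller model added by the next patch, a CAN frame occupies four 32-bit words in the TX/RX FIFOs (ID, DLC, DATA1, DATA2), and writing TXFIFO_DATA2 is what queues a frame for transmission. The following hypothetical guest-side sketch shows that layout; the register offsets and field positions come from the REG32()/FIELD() definitions in the patch, while the write_reg() helper and everything else here are illustrative only and not part of the patch:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a real MMIO write from a guest or qtest harness. */
    static void write_reg(uint32_t offset, uint32_t value)
    {
        printf("write 0x%02x <- 0x%08x\n", offset, value);
    }

    static void send_std_frame(uint16_t can_id, uint8_t dlc)
    {
        write_reg(0x30, (uint32_t)(can_id & 0x7ffu) << 21); /* TXFIFO_ID: 11-bit ID in IDH [31:21] */
        write_reg(0x34, (uint32_t)(dlc & 0xfu) << 28);      /* TXFIFO_DLC: DLC in [31:28]           */
        write_reg(0x38, 0);                                 /* TXFIFO_DATA1 (no data bytes here)    */
        write_reg(0x3c, 0);                                 /* TXFIFO_DATA2: queues the frame       */
    }

    int main(void)
    {
        send_std_frame(0x123, 0);
        return 0;
    }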
From: Vikram Garhwal <fnu.vikram@xilinx.com>

The Xilinx ZynqMP CAN controller is developed based on SocketCAN, QEMU CAN bus
implementation. Bus connection and socketCAN connection for each CAN module
can be set through command lines.

Example for using single CAN:
    -object can-bus,id=canbus0 \
    -machine xlnx-zcu102.canbus0=canbus0 \
    -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0

Example for connecting both CAN to same virtual CAN on host machine:
    -object can-bus,id=canbus0 -object can-bus,id=canbus1 \
    -machine xlnx-zcu102.canbus0=canbus0 \
    -machine xlnx-zcu102.canbus1=canbus1 \
    -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
    -object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1

To create virtual CAN on the host machine, please check the QEMU CAN docs:
https://github.com/qemu/qemu/blob/master/docs/can.txt

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 meson.build                      |    1 +
 hw/net/can/trace.h               |    1 +
 include/hw/net/xlnx-zynqmp-can.h |   78 ++
 hw/net/can/xlnx-zynqmp-can.c     | 1161 ++++++++++++++++++++++++++++++
 hw/Kconfig                       |    1 +
 hw/net/can/meson.build           |    1 +
 hw/net/can/trace-events          |    9 +
 7 files changed, 1252 insertions(+)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 hw/net/can/trace-events

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ if have_system
   'hw/misc',
   'hw/misc/macio',
   'hw/net',
+  'hw/net/can',
   'hw/nvram',
   'hw/pci',
   'hw/pci-host',
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace.h
@@ -0,0 +1 @@
+#include "trace/trace-hw_net_can.h"
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/xlnx-zynqmp-can.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx ZynqMP CAN controller.
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
+ *
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
+ * Pavel Pisa.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef XLNX_ZYNQMP_CAN_H
+#define XLNX_ZYNQMP_CAN_H
+
+#include "hw/register.h"
+#include "net/can_emu.h"
+#include "net/can_host.h"
+#include "qemu/fifo32.h"
+#include "hw/ptimer.h"
+#include "hw/qdev-clock.h"
+
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
+
+#define XLNX_ZYNQMP_CAN(obj) \
+     OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
+
+#define MAX_CAN_CTRLS 2
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
+#define MAILBOX_CAPACITY 64
+#define CAN_TIMER_MAX 0XFFFFUL
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
+
+/* Each CAN_FRAME will have 4 * 32bit size. */
+#define CAN_FRAME_SIZE 4
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
+
+typedef struct XlnxZynqMPCANState {
+    SysBusDevice        parent_obj;
+    MemoryRegion        iomem;
+
+    qemu_irq            irq;
+
+    CanBusClientState   bus_client;
+    CanBusState         *canbus;
+
+    struct {
+        uint32_t        ext_clk_freq;
+    } cfg;
+
+    RegisterInfo        reg_info[XLNX_ZYNQMP_CAN_R_MAX];
+    uint32_t            regs[XLNX_ZYNQMP_CAN_R_MAX];
+
+    Fifo32              rx_fifo;
+    Fifo32              tx_fifo;
+    Fifo32              txhpb_fifo;
+
+    ptimer_state        *can_timer;
+} XlnxZynqMPCANState;
+
+#endif
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/xlnx-zynqmp-can.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx ZynqMP CAN controller.
+ * This implementation is based on the following datasheet:
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
+ *
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
+ * Pavel Pisa
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
411
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
415
+ }
416
+
417
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
584
+}
585
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
587
+{
588
+ qemu_can_frame frame;
589
+ uint32_t data[CAN_FRAME_SIZE];
590
+ int i;
591
+ bool can_tx = tx_ready_check(s);
592
+
593
+ if (!can_tx) {
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
595
+
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
597
+ " transfer.\n", path);
598
+ can_update_irq(s);
+        return;
+    }
+
+    while (!fifo32_is_empty(fifo)) {
+        for (i = 0; i < CAN_FRAME_SIZE; i++) {
+            data[i] = fifo32_pop(fifo);
+        }
+
+        if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
+            /*
+             * Controller is in loopback. In Loopback mode, the CAN core
+             * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
+             * Any message transmitted is looped back to the RX line and
+             * acknowledged. The XlnxZynqMPCAN core receives any message
+             * that it transmits.
+             */
+            if (fifo32_is_full(&s->rx_fifo)) {
+                ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
+            } else {
+                for (i = 0; i < CAN_FRAME_SIZE; i++) {
+                    fifo32_push(&s->rx_fifo, data[i]);
+                }
+
+                ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, core will do software reset then will enter in config mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to config"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled. Message will be stored in FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* modify an acceptance filter, the corresponding UAF bit should be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
1094
+ .min_access_size = 4,
1095
+ .max_access_size = 4,
1096
+ },
1097
+};
1098
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
1100
+{
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+     * Reset the FIFOs when the CAN model is reset. This clears any FIFO writes
1124
+     * done by the post_write handlers that register_reset() invokes; those
1125
+     * handlers cannot trigger a transmission because the controller is
1126
+     * disabled once SOFTWARE_RESET_REGISTER has been cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
1155
+}
1156
+
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+        /* Snoop mode: just keep the data, no response back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+     * XlnxZynqMPCAN is in sleep mode. Any data on the bus will bring it to the
1177
+     * wake-up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
1198
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
1200
+ CanBusState *bus)
1201
+{
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
1203
+
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
1205
+ return -1;
1206
+ }
1207
+ return 0;
1208
+}
1209
+
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
1211
+{
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
1213
+
1214
+ if (s->canbus) {
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1217
+
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
1219
+ " failed.", path);
1220
+ return;
1221
+ }
1222
+ }
1223
+
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
1228
+
1229
+ /* Allocate a new timer. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
1231
+ PTIMER_POLICY_DEFAULT);
1232
+
1233
+ ptimer_transaction_begin(s->can_timer);
1234
+
1235
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
1236
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
1237
+ ptimer_run(s->can_timer, 0);
1238
+ ptimer_transaction_commit(s->can_timer);
1239
+}
1240
+
1241
+static void xlnx_zynqmp_can_init(Object *obj)
1242
+{
1243
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1244
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1245
+
1246
+ RegisterInfoArray *reg_array;
1247
+
1248
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
1249
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1250
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
1251
+ ARRAY_SIZE(can_regs_info),
1252
+ s->reg_info, s->regs,
1253
+ &can_ops,
1254
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
1255
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1256
+
1257
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
1258
+ sysbus_init_mmio(sbd, &s->iomem);
1259
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
1260
+}
1261
+
1262
+static const VMStateDescription vmstate_can = {
1263
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1264
+ .version_id = 1,
1265
+ .minimum_version_id = 1,
1266
+ .fields = (VMStateField[]) {
1267
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
1268
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
1269
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
1270
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
1271
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
1272
+ VMSTATE_END_OF_LIST(),
1273
+ }
1274
+};
1275
+
1276
+static Property xlnx_zynqmp_can_properties[] = {
1277
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
1278
+ CAN_DEFAULT_CLOCK),
1279
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
1280
+ CanBusState *),
1281
+ DEFINE_PROP_END_OF_LIST(),
1282
+};
1283
+
1284
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
1285
+{
1286
+ DeviceClass *dc = DEVICE_CLASS(klass);
1287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1288
+
1289
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
1290
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
1291
+ dc->realize = xlnx_zynqmp_can_realize;
1292
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
1293
+ dc->vmsd = &vmstate_can;
1294
+}
1295
+
1296
+static const TypeInfo can_info = {
1297
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1298
+ .parent = TYPE_SYS_BUS_DEVICE,
1299
+ .instance_size = sizeof(XlnxZynqMPCANState),
1300
+ .class_init = xlnx_zynqmp_can_class_init,
1301
+ .instance_init = xlnx_zynqmp_can_init,
1302
+};
1303
+
1304
+static void can_register_types(void)
1305
+{
1306
+ type_register_static(&can_info);
1307
+}
1308
+
1309
+type_init(can_register_types)
1310
diff --git a/hw/Kconfig b/hw/Kconfig
1311
index XXXXXXX..XXXXXXX 100644
1312
--- a/hw/Kconfig
1313
+++ b/hw/Kconfig
1314
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
1315
config XLNX_ZYNQMP
1316
bool
1317
select REGISTER
1318
+ select CAN_BUS
1319
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
1320
index XXXXXXX..XXXXXXX 100644
1321
--- a/hw/net/can/meson.build
1322
+++ b/hw/net/can/meson.build
1323
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
1324
softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
1325
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
1326
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
1327
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
1328
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
1329
new file mode 100644
1330
index XXXXXXX..XXXXXXX
1331
--- /dev/null
1332
+++ b/hw/net/can/trace-events
1333
@@ -XXX,XX +XXX,XX @@
1334
+# xlnx-zynqmp-can.c
1335
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
1336
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
1337
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
1338
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
1339
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
1340
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
1341
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
1342
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
163
--
1343
--
164
2.20.1
1344
2.20.1
165
1345
166
1346
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
The gvec operation was added after the initial implementation
3
Connect CAN0 and CAN1 on the ZynqMP.
4
of the SEL instruction and was missed in the conversion.
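(Usage sketch for the CAN buses wired up here: both controllers are exposed as
machine properties, so they can be attached to a single external bus object
from the command line. The property names below match what the qtests later in
this series use; the rest of the invocation is illustrative and firmware/disk
options are omitted.)

  qemu-system-aarch64 -machine xlnx-zcu102 \
      -object can-bus,id=canbus0 \
      -machine xlnx-zcu102.canbus0=canbus0 \
      -machine xlnx-zcu102.canbus1=canbus0

A controller whose canbus link is left unset simply stays disconnected, as the
realize code for the CAN model only joins a bus when the link property is set.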
5
4
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20200815013145.539409-8-richard.henderson@linaro.org
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/translate-sve.c | 31 ++++++++-----------------------
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
12
1 file changed, 8 insertions(+), 23 deletions(-)
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
14
3 files changed, 62 insertions(+)
13
15
14
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-sve.c
18
--- a/include/hw/arm/xlnx-zynqmp.h
17
+++ b/target/arm/translate-sve.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
18
@@ -XXX,XX +XXX,XX @@ static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a)
20
@@ -XXX,XX +XXX,XX @@
19
return do_pppp_flags(s, a, &op);
21
#include "hw/intc/arm_gic.h"
22
#include "hw/net/cadence_gem.h"
23
#include "hw/char/cadence_uart.h"
24
+#include "hw/net/xlnx-zynqmp-can.h"
25
#include "hw/ide/ahci.h"
26
#include "hw/sd/sdhci.h"
27
#include "hw/ssi/xilinx_spips.h"
28
@@ -XXX,XX +XXX,XX @@
29
#include "hw/cpu/cluster.h"
30
#include "target/arm/cpu.h"
31
#include "qom/object.h"
32
+#include "net/can_emu.h"
33
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
38
#define XLNX_ZYNQMP_NUM_GEMS 4
39
#define XLNX_ZYNQMP_NUM_UARTS 2
40
+#define XLNX_ZYNQMP_NUM_CAN 2
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
61
62
#endif
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/xlnx-zcu102.c
66
+++ b/hw/arm/xlnx-zcu102.c
67
@@ -XXX,XX +XXX,XX @@
68
#include "sysemu/qtest.h"
69
#include "sysemu/device_tree.h"
70
#include "qom/object.h"
71
+#include "net/can_emu.h"
72
73
struct XlnxZCU102 {
74
MachineState parent_obj;
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
76
bool secure;
77
bool virt;
78
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
80
+
81
struct arm_boot_info binfo;
82
};
83
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
86
&error_fatal);
87
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
90
+
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
92
+ OBJECT(s->canbus[i]), &error_fatal);
93
+ g_free(bus_name);
94
+ }
95
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
107
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
20
}
112
}
21
113
22
-static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
23
-{
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
24
- tcg_gen_and_i64(pn, pn, pg);
116
index XXXXXXX..XXXXXXX 100644
25
- tcg_gen_andc_i64(pm, pm, pg);
117
--- a/hw/arm/xlnx-zynqmp.c
26
- tcg_gen_or_i64(pd, pn, pm);
118
+++ b/hw/arm/xlnx-zynqmp.c
27
-}
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
28
-
120
21, 22,
29
-static void gen_sel_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
121
};
30
- TCGv_vec pm, TCGv_vec pg)
122
31
-{
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
32
- tcg_gen_and_vec(vece, pn, pn, pg);
124
+ 0xFF060000, 0xFF070000,
33
- tcg_gen_andc_vec(vece, pm, pm, pg);
125
+};
34
- tcg_gen_or_vec(vece, pd, pn, pm);
126
+
35
-}
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
36
-
128
+ 23, 24,
37
static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a)
129
+};
38
{
130
+
39
- static const GVecGen4 op = {
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
40
- .fni8 = gen_sel_pg_i64,
132
0xFF160000, 0xFF170000,
41
- .fniv = gen_sel_pg_vec,
133
};
42
- .fno = gen_helper_sve_sel_pppp,
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
43
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
135
TYPE_CADENCE_UART);
44
- };
45
-
46
if (a->s) {
47
return false;
48
}
136
}
49
- return do_pppp_flags(s, a, &op);
137
50
+ if (sve_access_check(s)) {
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
51
+ unsigned psz = pred_gvec_reg_size(s);
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
52
+ tcg_gen_gvec_bitsel(MO_8, pred_full_reg_offset(s, a->rd),
140
+ TYPE_XLNX_ZYNQMP_CAN);
53
+ pred_full_reg_offset(s, a->pg),
54
+ pred_full_reg_offset(s, a->rn),
55
+ pred_full_reg_offset(s, a->rm), psz, psz);
56
+ }
141
+ }
57
+ return true;
142
+
58
}
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
59
144
60
static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
147
gic_spi[uart_intr[i]]);
148
}
149
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
153
+
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
155
+ OBJECT(s->canbus[i]), &error_fatal);
156
+
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
158
+ if (err) {
159
+ error_propagate(errp, err);
160
+ return;
161
+ }
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
164
+ gic_spi[can_intr[i]]);
165
+ }
166
+
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
168
&error_abort);
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
173
MemoryRegion *),
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
175
+ CanBusState *),
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
177
+ CanBusState *),
178
DEFINE_PROP_END_OF_LIST()
179
};
180
61
--
181
--
62
2.20.1
182
2.20.1
63
183
64
184
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
Unify add/sub helpers and add a parameter for rounding.
3
The QTests perform five tests on the Xilinx ZynqMP CAN controller:
4
This will allow saturating non-rounding to reuse this code.
4
they test the CAN controller in loopback, sleep and snoop modes,
5
5
and the filtering of incoming CAN messages.
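For reference, the tests are wired into the aarch64 qtest group by the
meson.build hunk at the end of this patch. After a build they would normally
be run with something like the commands below (illustrative only; the exact
binary paths depend on the build tree):

  make check-qtest-aarch64
  QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/xlnx-can-test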
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
7
[PMM: fixed accidental use of '=' rather than '+=' in do_sqrdmlah_s]
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
9
Message-id: 20200815013145.539409-15-richard.henderson@linaro.org
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
10
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
target/arm/vec_helper.c | 80 +++++++++++++++--------------------------
13
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
13
1 file changed, 29 insertions(+), 51 deletions(-)
14
tests/qtest/meson.build | 1 +
14
15
2 files changed, 361 insertions(+)
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
16
create mode 100644 tests/qtest/xlnx-can-test.c
17
18
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
19
new file mode 100644
20
index XXXXXXX..XXXXXXX
21
--- /dev/null
22
+++ b/tests/qtest/xlnx-can-test.c
23
@@ -XXX,XX +XXX,XX @@
24
+/*
25
+ * QTests for the Xilinx ZynqMP CAN controller.
26
+ *
27
+ * Copyright (c) 2020 Xilinx Inc.
28
+ *
29
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
30
+ *
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
49
+
50
+#include "qemu/osdep.h"
51
+#include "libqos/libqtest.h"
52
+
53
+/* Base address. */
54
+#define CAN0_BASE_ADDR 0xFF060000
55
+#define CAN1_BASE_ADDR 0xFF070000
56
+
57
+/* Register addresses. */
58
+#define R_SRR_OFFSET 0x00
59
+#define R_MSR_OFFSET 0x04
60
+#define R_SR_OFFSET 0x18
61
+#define R_ISR_OFFSET 0x1C
62
+#define R_ICR_OFFSET 0x24
63
+#define R_TXID_OFFSET 0x30
64
+#define R_TXDLC_OFFSET 0x34
65
+#define R_TXDATA1_OFFSET 0x38
66
+#define R_TXDATA2_OFFSET 0x3C
67
+#define R_RXID_OFFSET 0x50
68
+#define R_RXDLC_OFFSET 0x54
69
+#define R_RXDATA1_OFFSET 0x58
70
+#define R_RXDATA2_OFFSET 0x5C
71
+#define R_AFR 0x60
72
+#define R_AFMR1 0x64
73
+#define R_AFIR1 0x68
74
+#define R_AFMR2 0x6C
75
+#define R_AFIR2 0x70
76
+#define R_AFMR3 0x74
77
+#define R_AFIR3 0x78
78
+#define R_AFMR4 0x7C
79
+#define R_AFIR4 0x80
80
+
81
+/* CAN modes. */
82
+#define CONFIG_MODE 0x00
83
+#define NORMAL_MODE 0x00
84
+#define LOOPBACK_MODE 0x02
85
+#define SNOOP_MODE 0x04
86
+#define SLEEP_MODE 0x01
87
+#define ENABLE_CAN (1 << 1)
88
+#define STATUS_NORMAL_MODE (1 << 3)
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
90
+#define STATUS_SNOOP_MODE (1 << 12)
91
+#define STATUS_SLEEP_MODE (1 << 2)
92
+#define ISR_TXOK (1 << 1)
93
+#define ISR_RXOK (1 << 4)
94
+
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
96
+ uint8_t can_timestamp)
97
+{
98
+ uint16_t size = 0;
99
+ uint8_t len = 4;
100
+
101
+ while (size < len) {
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
104
+ } else {
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
106
+ }
107
+
108
+ size++;
109
+ }
110
+}
111
+
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
113
+{
114
+ uint32_t int_status;
115
+
116
+ /* Read the interrupt on CAN rx. */
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
118
+
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
120
+
121
+ /* Read the RX register data for CAN. */
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
126
+
127
+ /* Clear the RX interrupt. */
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
129
+}
130
+
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
132
+ const uint32_t *buf_tx)
133
+{
134
+ uint32_t int_status;
135
+
136
+ /* Write the TX register data for CAN. */
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
141
+
142
+ /* Read the interrupt on CAN for tx. */
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
144
+
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
146
+
147
+ /* Clear the interrupt for tx. */
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
149
+}
150
+
151
+/*
152
+ * This test transfers data between CAN0 and CAN1 through the CAN bus. CAN0
152
+ * initiates the transfer on the bus and CAN1 receives the data. The test then
153
+ * compares the data sent from CAN0 with the data received on CAN1.
155
+ */
156
+static void test_can_bus(void)
157
+{
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
160
+ uint32_t status = 0;
161
+ uint8_t can_timestamp = 1;
162
+
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
164
+ " -object can-bus,id=canbus0"
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
167
+ );
168
+
169
+ /* Configure the CAN0 and CAN1. */
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
174
+
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
178
+
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
181
+
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
183
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
186
+
187
+ qtest_quit(qts);
188
+}
189
+
190
+/*
191
+ * This test is performing loopback mode on CAN0 and CAN1. Data sent from TX of
192
+ * each CAN0 and CAN1 are compared with RX register data for respective CAN.
193
+ */
194
+static void test_can_loopback(void)
195
+{
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
198
+ uint32_t status = 0;
199
+
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
201
+ " -object can-bus,id=canbus0"
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
204
+ );
205
+
206
+ /* Configure the CAN0 in loopback mode. */
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
210
+
211
+ /* Check here if CAN0 is set in loopback mode. */
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
213
+
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
215
+
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
219
+
220
+ /* Configure the CAN1 in loopback mode. */
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
224
+
225
+ /* Check here if CAN1 is set in loopback mode. */
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
227
+
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
229
+
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
234
+ qtest_quit(qts);
235
+}
236
+
237
+/*
238
+ * Enable filters for CAN1. This filters incoming messages by ID. In this
238
+ * test the message passes through filter 2.
240
+ */
241
+static void test_can_filter(void)
242
+{
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
245
+ uint32_t status = 0;
246
+ uint8_t can_timestamp = 1;
247
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
285
+ qtest_quit(qts);
286
+}
287
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
289
+static void test_can_sleepmode(void)
290
+{
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
293
+ uint32_t status = 0;
294
+ uint8_t can_timestamp = 1;
295
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
297
+ " -object can-bus,id=canbus0"
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
300
+ );
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+     * Once CAN1 sends data on the bus, CAN0 should exit sleep mode.
320
+     * Check the CAN0 status now: it should have left sleep mode and received the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
331
+ qtest_quit(qts);
332
+}
333
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
373
+{
374
+ g_test_init(&argc, &argv, NULL);
375
+
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
381
+
382
+ return g_test_run();
383
+}
384
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
16
index XXXXXXX..XXXXXXX 100644
385
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_helper.c
386
--- a/tests/qtest/meson.build
18
+++ b/target/arm/vec_helper.c
387
+++ b/tests/qtest/meson.build
19
@@ -XXX,XX +XXX,XX @@
388
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
20
#endif
389
['arm-cpu-features',
21
390
'numa-test',
22
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
391
'boot-serial-test',
23
-static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
392
+ 'xlnx-can-test',
24
- int16_t src3, uint32_t *sat)
393
'migration-test']
25
+static int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
394
26
+ bool neg, bool round, uint32_t *sat)
395
qtests_s390x = \
27
{
28
- /* Simplify:
29
+ /*
30
+ * Simplify:
31
* = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
32
* = ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15
33
*/
34
int32_t ret = (int32_t)src1 * src2;
35
- ret = ((int32_t)src3 << 15) + ret + (1 << 14);
36
+ if (neg) {
37
+ ret = -ret;
38
+ }
39
+ ret += ((int32_t)src3 << 15) + (round << 14);
40
ret >>= 15;
41
+
42
if (ret != (int16_t)ret) {
43
*sat = 1;
44
- ret = (ret < 0 ? -0x8000 : 0x7fff);
45
+ ret = (ret < 0 ? INT16_MIN : INT16_MAX);
46
}
47
return ret;
48
}
49
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
50
uint32_t src2, uint32_t src3)
51
{
52
uint32_t *sat = &env->vfp.qc[0];
53
- uint16_t e1 = inl_qrdmlah_s16(src1, src2, src3, sat);
54
- uint16_t e2 = inl_qrdmlah_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
55
+ uint16_t e1 = do_sqrdmlah_h(src1, src2, src3, false, true, sat);
56
+ uint16_t e2 = do_sqrdmlah_h(src1 >> 16, src2 >> 16, src3 >> 16,
57
+ false, true, sat);
58
return deposit32(e1, 16, 16, e2);
59
}
60
61
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlah_s16)(void *vd, void *vn, void *vm,
62
uintptr_t i;
63
64
for (i = 0; i < opr_sz / 2; ++i) {
65
- d[i] = inl_qrdmlah_s16(n[i], m[i], d[i], vq);
66
+ d[i] = do_sqrdmlah_h(n[i], m[i], d[i], false, true, vq);
67
}
68
clear_tail(d, opr_sz, simd_maxsz(desc));
69
}
70
71
-/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
72
-static int16_t inl_qrdmlsh_s16(int16_t src1, int16_t src2,
73
- int16_t src3, uint32_t *sat)
74
-{
75
- /* Similarly, using subtraction:
76
- * = ((a3 << 16) - ((e1 * e2) << 1) + (1 << 15)) >> 16
77
- * = ((a3 << 15) - (e1 * e2) + (1 << 14)) >> 15
78
- */
79
- int32_t ret = (int32_t)src1 * src2;
80
- ret = ((int32_t)src3 << 15) - ret + (1 << 14);
81
- ret >>= 15;
82
- if (ret != (int16_t)ret) {
83
- *sat = 1;
84
- ret = (ret < 0 ? -0x8000 : 0x7fff);
85
- }
86
- return ret;
87
-}
88
-
89
uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
90
uint32_t src2, uint32_t src3)
91
{
92
uint32_t *sat = &env->vfp.qc[0];
93
- uint16_t e1 = inl_qrdmlsh_s16(src1, src2, src3, sat);
94
- uint16_t e2 = inl_qrdmlsh_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
95
+ uint16_t e1 = do_sqrdmlah_h(src1, src2, src3, true, true, sat);
96
+ uint16_t e2 = do_sqrdmlah_h(src1 >> 16, src2 >> 16, src3 >> 16,
97
+ true, true, sat);
98
return deposit32(e1, 16, 16, e2);
99
}
100
101
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
102
uintptr_t i;
103
104
for (i = 0; i < opr_sz / 2; ++i) {
105
- d[i] = inl_qrdmlsh_s16(n[i], m[i], d[i], vq);
106
+ d[i] = do_sqrdmlah_h(n[i], m[i], d[i], true, true, vq);
107
}
108
clear_tail(d, opr_sz, simd_maxsz(desc));
109
}
110
111
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
112
-static int32_t inl_qrdmlah_s32(int32_t src1, int32_t src2,
113
- int32_t src3, uint32_t *sat)
114
+static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
115
+ bool neg, bool round, uint32_t *sat)
116
{
117
/* Simplify similarly to int_qrdmlah_s16 above. */
118
int64_t ret = (int64_t)src1 * src2;
119
- ret = ((int64_t)src3 << 31) + ret + (1 << 30);
120
+ if (neg) {
121
+ ret = -ret;
122
+ }
123
+ ret += ((int64_t)src3 << 31) + (round << 30);
124
ret >>= 31;
125
+
126
if (ret != (int32_t)ret) {
127
*sat = 1;
128
ret = (ret < 0 ? INT32_MIN : INT32_MAX);
129
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
130
int32_t src2, int32_t src3)
131
{
132
uint32_t *sat = &env->vfp.qc[0];
133
- return inl_qrdmlah_s32(src1, src2, src3, sat);
134
+ return do_sqrdmlah_s(src1, src2, src3, false, true, sat);
135
}
136
137
void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
138
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
139
uintptr_t i;
140
141
for (i = 0; i < opr_sz / 4; ++i) {
142
- d[i] = inl_qrdmlah_s32(n[i], m[i], d[i], vq);
143
+ d[i] = do_sqrdmlah_s(n[i], m[i], d[i], false, true, vq);
144
}
145
clear_tail(d, opr_sz, simd_maxsz(desc));
146
}
147
148
-/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
149
-static int32_t inl_qrdmlsh_s32(int32_t src1, int32_t src2,
150
- int32_t src3, uint32_t *sat)
151
-{
152
- /* Simplify similarly to int_qrdmlsh_s16 above. */
153
- int64_t ret = (int64_t)src1 * src2;
154
- ret = ((int64_t)src3 << 31) - ret + (1 << 30);
155
- ret >>= 31;
156
- if (ret != (int32_t)ret) {
157
- *sat = 1;
158
- ret = (ret < 0 ? INT32_MIN : INT32_MAX);
159
- }
160
- return ret;
161
-}
162
-
163
uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
164
int32_t src2, int32_t src3)
165
{
166
uint32_t *sat = &env->vfp.qc[0];
167
- return inl_qrdmlsh_s32(src1, src2, src3, sat);
168
+ return do_sqrdmlah_s(src1, src2, src3, true, true, sat);
169
}
170
171
void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
172
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
173
uintptr_t i;
174
175
for (i = 0; i < opr_sz / 4; ++i) {
176
- d[i] = inl_qrdmlsh_s32(n[i], m[i], d[i], vq);
177
+ d[i] = do_sqrdmlah_s(n[i], m[i], d[i], true, true, vq);
178
}
179
clear_tail(d, opr_sz, simd_maxsz(desc));
180
}
181
--
396
--
182
2.20.1
397
2.20.1
183
398
184
399
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
Rather than require the user to fill in the immediate (shl or shr),
3
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
4
create full formats that include the immediate.
4
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
5
5
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200815013145.539409-14-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
8
---
11
target/arm/sve.decode | 35 ++++++++++++++++-------------------
9
MAINTAINERS | 8 ++++++++
12
1 file changed, 16 insertions(+), 19 deletions(-)
10
1 file changed, 8 insertions(+)
13
11
14
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
12
diff --git a/MAINTAINERS b/MAINTAINERS
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/sve.decode
14
--- a/MAINTAINERS
17
+++ b/target/arm/sve.decode
15
+++ b/MAINTAINERS
18
@@ -XXX,XX +XXX,XX @@
16
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c
19
@rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
17
20
18
Devices
21
# Two register operand, one immediate operand, with predicate,
19
-------
22
-# element size encoded as TSZHL. User must fill in imm.
20
+Xilinx CAN
23
-@rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
21
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
24
- &rpri_esz rn=%reg_movprfx esz=%tszimm_esz
22
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
25
+# element size encoded as TSZHL.
23
+S: Maintained
26
+@rdn_pg_tszimm_shl ........ .. ... ... ... pg:3 ..... rd:5 \
24
+F: hw/net/can/xlnx-*
27
+ &rpri_esz rn=%reg_movprfx esz=%tszimm_esz imm=%tszimm_shl
25
+F: include/hw/net/xlnx-*
28
+@rdn_pg_tszimm_shr ........ .. ... ... ... pg:3 ..... rd:5 \
26
+F: tests/qtest/xlnx-can-test*
29
+ &rpri_esz rn=%reg_movprfx esz=%tszimm_esz imm=%tszimm_shr
27
+
30
28
EDU
31
# Similarly without predicate.
29
M: Jiri Slaby <jslaby@suse.cz>
32
-@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
30
S: Maintained
33
- &rri_esz esz=%tszimm16_esz
34
+@rd_rn_tszimm_shl ........ .. ... ... ...... rn:5 rd:5 \
35
+ &rri_esz esz=%tszimm16_esz imm=%tszimm16_shl
36
+@rd_rn_tszimm_shr ........ .. ... ... ...... rn:5 rd:5 \
37
+ &rri_esz esz=%tszimm16_esz imm=%tszimm16_shr
38
39
# Two register operand, one immediate operand, with 4-bit predicate.
40
# User must fill in imm.
41
@@ -XXX,XX +XXX,XX @@ UMINV 00000100 .. 001 011 001 ... ..... ..... @rd_pg_rn
42
### SVE Shift by Immediate - Predicated Group
43
44
# SVE bitwise shift by immediate (predicated)
45
-ASR_zpzi 00000100 .. 000 000 100 ... .. ... ..... \
46
- @rdn_pg_tszimm imm=%tszimm_shr
47
-LSR_zpzi 00000100 .. 000 001 100 ... .. ... ..... \
48
- @rdn_pg_tszimm imm=%tszimm_shr
49
-LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... \
50
- @rdn_pg_tszimm imm=%tszimm_shl
51
-ASRD 00000100 .. 000 100 100 ... .. ... ..... \
52
- @rdn_pg_tszimm imm=%tszimm_shr
53
+ASR_zpzi 00000100 .. 000 000 100 ... .. ... ..... @rdn_pg_tszimm_shr
54
+LSR_zpzi 00000100 .. 000 001 100 ... .. ... ..... @rdn_pg_tszimm_shr
55
+LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... @rdn_pg_tszimm_shl
56
+ASRD 00000100 .. 000 100 100 ... .. ... ..... @rdn_pg_tszimm_shr
57
58
# SVE bitwise shift by vector (predicated)
59
ASR_zpzz 00000100 .. 010 000 100 ... ..... ..... @rdn_pg_rm
60
@@ -XXX,XX +XXX,XX @@ RDVL 00000100 101 11111 01010 imm:s6 rd:5
61
### SVE Bitwise Shift - Unpredicated Group
62
63
# SVE bitwise shift by immediate (unpredicated)
64
-ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... \
65
- @rd_rn_tszimm imm=%tszimm16_shr
66
-LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... \
67
- @rd_rn_tszimm imm=%tszimm16_shr
68
-LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... \
69
- @rd_rn_tszimm imm=%tszimm16_shl
70
+ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... @rd_rn_tszimm_shr
71
+LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... @rd_rn_tszimm_shr
72
+LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... @rd_rn_tszimm_shl
73
74
# SVE bitwise shift by wide elements (unpredicated)
75
# Note esz != 3
76
--
31
--
77
2.20.1
32
2.20.1
78
33
79
34
1
From: Graeme Gregory <graeme@nuviainc.com>
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
2
3
Fixing a typo in a previous patch that translated an "i" to a 1
3
Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
4
and therefore breaking the allocation of PCIe interrupts. This was
4
it for QEMU as well. A53 was already enabled there.
5
discovered when virtio-net-pci devices ceased to function correctly.
6
5
7
Cc: qemu-stable@nongnu.org
6
1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117
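In practice this just means the board now accepts, for example (illustrative
invocation, firmware and disk arguments omitted):

  qemu-system-aarch64 -machine sbsa-ref -cpu cortex-a72 ...

with cortex-a53 and cortex-a57 equally valid; any other CPU type is still
rejected by the check added below.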
8
Fixes: 48ba18e6d3f3 ("hw/arm/sbsa-ref: Simplify by moving the gic in the machine state")
7
9
Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
8
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20200821083853.356490-1-graeme@nuviainc.com
10
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
13
---
14
hw/arm/sbsa-ref.c | 2 +-
14
hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
15
1 file changed, 1 insertion(+), 1 deletion(-)
15
1 file changed, 20 insertions(+), 3 deletions(-)
16
16
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/sbsa-ref.c
19
--- a/hw/arm/sbsa-ref.c
20
+++ b/hw/arm/sbsa-ref.c
20
+++ b/hw/arm/sbsa-ref.c
21
@@ -XXX,XX +XXX,XX @@ static void create_pcie(SBSAMachineState *sms)
21
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
22
22
[SBSA_GWDT] = 16,
23
for (i = 0; i < GPEX_NUM_IRQS; i++) {
23
};
24
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
24
25
- qdev_get_gpio_in(sms->gic, irq + 1));
25
+static const char * const valid_cpus[] = {
26
+ qdev_get_gpio_in(sms->gic, irq + i));
26
+ ARM_CPU_TYPE_NAME("cortex-a53"),
27
gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);
27
+ ARM_CPU_TYPE_NAME("cortex-a57"),
28
+ ARM_CPU_TYPE_NAME("cortex-a72"),
29
+};
30
+
31
+static bool cpu_type_valid(const char *cpu)
32
+{
33
+ int i;
34
+
35
+ for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
36
+ if (strcmp(cpu, valid_cpus[i]) == 0) {
37
+ return true;
38
+ }
39
+ }
40
+ return false;
41
+}
42
+
43
static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
44
{
45
uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
46
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
47
const CPUArchIdList *possible_cpus;
48
int n, sbsa_max_cpus;
49
50
- if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
51
- error_report("sbsa-ref: CPU type other than the built-in "
52
- "cortex-a57 not supported");
53
+ if (!cpu_type_valid(machine->cpu_type)) {
54
+ error_report("sbsa-ref: CPU type %s not supported", machine->cpu_type);
55
exit(1);
28
}
56
}
29
57
30
--
58
--
31
2.20.1
59
2.20.1
32
60
33
61
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Havard Skinnemoen <hskinnemoen@google.com>
2
2
3
Model gen_gvec_fn_zzz on gen_gvec_fn3 in translate-a64.c, but
3
Dump the collected random data after a randomness test failure.
4
indicating which kind of register and in which order.
5
4
6
Model do_zzz_fn on the other do_foo functions that take an
5
Note that this relies on the test having called
7
argument set and verify sve enabled.
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
7
assertion failure.
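The resulting pattern, pulled out of the diff below for clarity (a minimal
sketch assuming GLib's GTest framework and the qemu_hexdump() helper exactly
as used in this file):

  /* In main(), before running the tests: record failures, don't abort. */
  g_test_set_nonfatal_assertions();

  /* In a test: a failed assertion marks the test as failed but execution
   * continues, so the offending buffer can still be dumped for analysis.
   */
  g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
  if (g_test_failed()) {
      qemu_hexdump(stderr, "", buf, sizeof(buf));
  }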
8
8
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20200815013145.539409-4-richard.henderson@linaro.org
11
[PMM: minor commit message tweak]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
13
---
14
target/arm/translate-sve.c | 43 +++++++++++++++++++++-----------------
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
15
1 file changed, 24 insertions(+), 19 deletions(-)
15
1 file changed, 12 insertions(+)
16
16
17
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate-sve.c
19
--- a/tests/qtest/npcm7xx_rng-test.c
20
+++ b/target/arm/translate-sve.c
20
+++ b/tests/qtest/npcm7xx_rng-test.c
21
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn_zz(DisasContext *s, GVecGen2Fn *gvec_fn,
21
@@ -XXX,XX +XXX,XX @@
22
}
22
23
23
#include "libqtest-single.h"
24
/* Invoke a vector expander on three Zregs. */
24
#include "qemu/bitops.h"
25
-static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
25
+#include "qemu-common.h"
26
- int esz, int rd, int rn, int rm)
26
27
+static void gen_gvec_fn_zzz(DisasContext *s, GVecGen3Fn *gvec_fn,
27
#define RNG_BASE_ADDR 0xf000b000
28
+ int esz, int rd, int rn, int rm)
28
29
{
29
@@ -XXX,XX +XXX,XX @@
30
- if (sve_access_check(s)) {
30
/* Number of bits to collect for randomness tests. */
31
- unsigned vsz = vec_full_reg_size(s);
31
#define TEST_INPUT_BITS (128)
32
- gvec_fn(esz, vec_full_reg_offset(s, rd),
32
33
- vec_full_reg_offset(s, rn),
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
34
- vec_full_reg_offset(s, rm), vsz, vsz);
35
- }
36
- return true;
37
+ unsigned vsz = vec_full_reg_size(s);
38
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
39
+ vec_full_reg_offset(s, rn),
40
+ vec_full_reg_offset(s, rm), vsz, vsz);
41
}
42
43
/* Invoke a vector move on two Zregs. */
44
@@ -XXX,XX +XXX,XX @@ const uint64_t pred_esz_masks[4] = {
45
*** SVE Logical - Unpredicated Group
46
*/
47
48
+static bool do_zzz_fn(DisasContext *s, arg_rrr_esz *a, GVecGen3Fn *gvec_fn)
49
+{
34
+{
50
+ if (sve_access_check(s)) {
35
+ if (g_test_failed()) {
51
+ gen_gvec_fn_zzz(s, gvec_fn, a->esz, a->rd, a->rn, a->rm);
36
+ qemu_hexdump(stderr, "", buf, size);
52
+ }
37
+ }
53
+ return true;
54
+}
38
+}
55
+
39
+
56
static bool trans_AND_zzz(DisasContext *s, arg_rrr_esz *a)
40
static void rng_writeb(unsigned int offset, uint8_t value)
57
{
41
{
58
- return do_vector3_z(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
42
writeb(RNG_BASE_ADDR + offset, value);
59
+ return do_zzz_fn(s, a, tcg_gen_gvec_and);
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
44
}
45
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
47
+ dump_buf_if_failed(buf, sizeof(buf));
60
}
48
}
61
49
62
static bool trans_ORR_zzz(DisasContext *s, arg_rrr_esz *a)
50
/*
63
{
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
64
- return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
52
}
65
+ return do_zzz_fn(s, a, tcg_gen_gvec_or);
53
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
66
}
56
}
67
57
68
static bool trans_EOR_zzz(DisasContext *s, arg_rrr_esz *a)
58
/*
69
{
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
70
- return do_vector3_z(s, tcg_gen_gvec_xor, 0, a->rd, a->rn, a->rm);
60
}
71
+ return do_zzz_fn(s, a, tcg_gen_gvec_xor);
61
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
63
+ dump_buf_if_failed(buf, sizeof(buf));
72
}
64
}
73
65
74
static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
66
/*
75
{
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
76
- return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
68
}
77
+ return do_zzz_fn(s, a, tcg_gen_gvec_andc);
69
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
78
}
72
}
79
73
80
/*
74
int main(int argc, char **argv)
81
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
82
83
static bool trans_ADD_zzz(DisasContext *s, arg_rrr_esz *a)
84
{
85
- return do_vector3_z(s, tcg_gen_gvec_add, a->esz, a->rd, a->rn, a->rm);
86
+ return do_zzz_fn(s, a, tcg_gen_gvec_add);
87
}
88
89
static bool trans_SUB_zzz(DisasContext *s, arg_rrr_esz *a)
90
{
91
- return do_vector3_z(s, tcg_gen_gvec_sub, a->esz, a->rd, a->rn, a->rm);
92
+ return do_zzz_fn(s, a, tcg_gen_gvec_sub);
93
}
94
95
static bool trans_SQADD_zzz(DisasContext *s, arg_rrr_esz *a)
96
{
97
- return do_vector3_z(s, tcg_gen_gvec_ssadd, a->esz, a->rd, a->rn, a->rm);
98
+ return do_zzz_fn(s, a, tcg_gen_gvec_ssadd);
99
}
100
101
static bool trans_SQSUB_zzz(DisasContext *s, arg_rrr_esz *a)
102
{
103
- return do_vector3_z(s, tcg_gen_gvec_sssub, a->esz, a->rd, a->rn, a->rm);
104
+ return do_zzz_fn(s, a, tcg_gen_gvec_sssub);
105
}
106
107
static bool trans_UQADD_zzz(DisasContext *s, arg_rrr_esz *a)
108
{
109
- return do_vector3_z(s, tcg_gen_gvec_usadd, a->esz, a->rd, a->rn, a->rm);
110
+ return do_zzz_fn(s, a, tcg_gen_gvec_usadd);
111
}
112
113
static bool trans_UQSUB_zzz(DisasContext *s, arg_rrr_esz *a)
114
{
115
- return do_vector3_z(s, tcg_gen_gvec_ussub, a->esz, a->rd, a->rn, a->rm);
116
+ return do_zzz_fn(s, a, tcg_gen_gvec_ussub);
117
}
118
119
/*
120
--
75
--
121
2.20.1
76
2.20.1
122
77
123
78
1
From: Eduardo Habkost <ehabkost@redhat.com>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
TYPE_ARM_SSE is a TYPE_SYS_BUS_DEVICE subclass, but
3
We should use printf format specifier "%u" instead of "%d" for
4
ARMSSEClass::parent_class is declared as DeviceClass.
4
arguments of type "unsigned int".
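A quick standalone illustration of the problem (not taken from the QEMU
sources):

  #include <stdio.h>

  int main(void)
  {
      unsigned int freq = 3000000000u;  /* larger than INT_MAX */
      printf("freq = %d\n", freq);      /* wrong specifier: undefined, usually prints a negative value */
      printf("freq = %u\n", freq);      /* prints 3000000000 */
      return 0;
  }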
5
5
6
It never caused any problems by pure luck:
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
We were not setting class_size for TYPE_ARM_SSE, so class_size of
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
9
TYPE_SYS_BUS_DEVICE was being used (sizeof(SysBusDeviceClass)).
10
This made the system allocate enough memory for TYPE_ARM_SSE
11
devices even though ARMSSEClass was too small for a sysbus
12
device.
13
14
Additionally, the ARMSSEClass::info field ended up at the same
15
offset as SysBusDeviceClass::explicit_ofw_unit_address. This
16
would make sysbus_get_fw_dev_path() crash for the device.
17
Luckily, sysbus_get_fw_dev_path() never gets called for
18
TYPE_ARM_SSE devices, because qdev_get_fw_dev_path() is only used
19
by the boot device code, and TYPE_ARM_SSE devices don't appear at
20
the fw_boot_order list.
21
22
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
23
Message-id: 20200826181006.4097163-1-ehabkost@redhat.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
11
---
27
include/hw/arm/armsse.h | 2 +-
12
hw/misc/imx25_ccm.c | 12 ++++++------
28
hw/arm/armsse.c | 1 +
13
1 file changed, 6 insertions(+), 6 deletions(-)
29
2 files changed, 2 insertions(+), 1 deletion(-)
30
14
31
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
32
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
33
--- a/include/hw/arm/armsse.h
17
--- a/hw/misc/imx25_ccm.c
34
+++ b/include/hw/arm/armsse.h
18
+++ b/hw/misc/imx25_ccm.c
35
@@ -XXX,XX +XXX,XX @@ typedef struct ARMSSE {
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
36
typedef struct ARMSSEInfo ARMSSEInfo;
20
case IMX25_CCM_LPIMR1_REG:
37
21
return "lpimr1";
38
typedef struct ARMSSEClass {
22
default:
39
- DeviceClass parent_class;
23
- sprintf(unknown, "[%d ?]", reg);
40
+ SysBusDeviceClass parent_class;
24
+ sprintf(unknown, "[%u ?]", reg);
41
const ARMSSEInfo *info;
25
return unknown;
42
} ARMSSEClass;
26
}
43
27
}
44
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
45
index XXXXXXX..XXXXXXX 100644
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
46
--- a/hw/arm/armsse.c
30
}
47
+++ b/hw/arm/armsse.c
31
48
@@ -XXX,XX +XXX,XX @@ static const TypeInfo armsse_info = {
32
- DPRINTF("freq = %d\n", freq);
49
.name = TYPE_ARMSSE,
33
+ DPRINTF("freq = %u\n", freq);
50
.parent = TYPE_SYS_BUS_DEVICE,
34
51
.instance_size = sizeof(ARMSSE),
35
return freq;
52
+ .class_size = sizeof(ARMSSEClass),
36
}
53
.instance_init = armsse_init,
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
54
.abstract = true,
38
55
.interfaces = (InterfaceInfo[]) {
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
40
41
- DPRINTF("freq = %d\n", freq);
42
+ DPRINTF("freq = %u\n", freq);
43
44
return freq;
45
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
47
freq = imx25_ccm_get_mcu_clk(dev)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
49
50
- DPRINTF("freq = %d\n", freq);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
56
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
65
break;
66
}
67
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
56
--
73
--
57
2.20.1
74
2.20.1
58
75
59
76
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
We need more information than just the mmu_idx in order
3
We should use printf format specifier "%u" instead of "%d" for
4
to create the proper exception syndrome. Only change the
4
argument of type "unsigned int".
5
function signature so far.
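The extra state travels in the 32-bit 'desc' word via the usual
registerfields helpers; roughly (the real MTEDESC field layout is
defined elsewhere in target/arm and is not reproduced here):

    uint32_t desc = FIELD_DP32(0, MTEDESC, MIDX, mmu_idx);  /* pack */
    ...
    int midx = FIELD_EX32(desc, MTEDESC, MIDX);  /* unpack, as below */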
6
5
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reported-by: Euler Robot <euler.robot@huawei.com>
8
Message-id: 20200813200816.3037186-2-richard.henderson@linaro.org
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/mte_helper.c | 10 +++++-----
12
hw/misc/imx31_ccm.c | 14 +++++++-------
13
1 file changed, 5 insertions(+), 5 deletions(-)
13
hw/misc/imx_ccm.c | 4 ++--
14
2 files changed, 9 insertions(+), 9 deletions(-)
14
15
15
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/mte_helper.c
18
--- a/hw/misc/imx31_ccm.c
18
+++ b/target/arm/mte_helper.c
19
+++ b/hw/misc/imx31_ccm.c
19
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
21
case IMX31_CCM_PDR2_REG:
22
return "PDR2";
23
default:
24
- sprintf(unknown, "[%d ?]", reg);
25
+ sprintf(unknown, "[%u ?]", reg);
26
return unknown;
27
}
20
}
28
}
21
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
22
/* Record a tag check failure. */
30
freq = CKIH_FREQ;
23
-static void mte_check_fail(CPUARMState *env, int mmu_idx,
24
+static void mte_check_fail(CPUARMState *env, uint32_t desc,
25
uint64_t dirty_ptr, uintptr_t ra)
26
{
27
+ int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
28
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
29
int el, reg_el, tcf, select;
30
uint64_t sctlr;
31
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check1(CPUARMState *env, uint32_t desc,
32
}
31
}
33
32
34
if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
33
- DPRINTF("freq = %d\n", freq);
35
- int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
34
+ DPRINTF("freq = %u\n", freq);
36
- mte_check_fail(env, mmu_idx, ptr, ra);
35
37
+ mte_check_fail(env, desc, ptr, ra);
36
return freq;
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
40
imx31_ccm_get_pll_ref_clk(dev));
41
42
- DPRINTF("freq = %d\n", freq);
43
+ DPRINTF("freq = %u\n", freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
48
freq = imx31_ccm_get_mpll_clk(dev);
38
}
49
}
39
50
40
return useronly_clean_ptr(ptr);
51
- DPRINTF("freq = %d\n", freq);
41
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
52
+ DPRINTF("freq = %u\n", freq);
42
53
43
fail_ofs = tag_first + n * TAG_GRANULE - ptr;
54
return freq;
44
fail_ofs = ROUND_UP(fail_ofs, esize);
55
}
45
- mte_check_fail(env, mmu_idx, ptr + fail_ofs, ra);
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
46
+ mte_check_fail(env, desc, ptr + fail_ofs, ra);
57
freq = imx31_ccm_get_mcu_main_clk(dev)
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
59
60
- DPRINTF("freq = %d\n", freq);
61
+ DPRINTF("freq = %u\n", freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
66
freq = imx31_ccm_get_hclk_clk(dev)
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
68
69
- DPRINTF("freq = %d\n", freq);
70
+ DPRINTF("freq = %u\n", freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
75
break;
47
}
76
}
48
77
49
done:
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
50
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
51
fail:
80
52
/* Locate the first nibble that differs. */
81
return freq;
53
i = ctz64(mem_tag ^ ptr_tag) >> 4;
82
}
54
- mte_check_fail(env, mmu_idx, align_ptr + i * TAG_GRANULE, ra);
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
55
+ mte_check_fail(env, desc, align_ptr + i * TAG_GRANULE, ra);
84
index XXXXXXX..XXXXXXX 100644
56
85
--- a/hw/misc/imx_ccm.c
57
done:
86
+++ b/hw/misc/imx_ccm.c
58
return useronly_clean_ptr(ptr);
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
88
freq = klass->get_clock_frequency(dev, clock);
89
}
90
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
93
94
return freq;
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
59
--
105
--
60
2.20.1
106
2.20.1
61
107
62
108
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
Allow the device to execute the DMA transfers in a different
3
We should use printf format specifier "%u" instead of "%d" for
4
AddressSpace.
4
argument of type "unsigned int".
5
5
6
The H3 SoC keeps using the system_memory address space,
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
but via the proper dma_memory_access() API.
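The device-side pattern, reduced to its two key steps (identifiers as
in the hunks below; 'len' stands in for the actual transfer size):

    /* realize(), once the board has set the "dma-memory" link property: */
    address_space_init(&s->dma_as, s->dma_mr, "emac-dma");

    /* descriptor and buffer accesses then go through that address space
     * instead of cpu_physical_memory_read()/write(): */
    dma_memory_read(&s->dma_as, desc_addr, &desc, sizeof(desc));
    dma_memory_write(&s->dma_as, desc.addr, buf, len);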
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
12
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
13
Message-id: 20200814122907.27732-1-f4bug@amsat.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
include/hw/net/allwinner-sun8i-emac.h | 6 ++++
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
17
hw/arm/allwinner-h3.c | 2 ++
13
hw/misc/imx6_src.c | 2 +-
18
hw/net/allwinner-sun8i-emac.c | 46 +++++++++++++++++----------
14
2 files changed, 11 insertions(+), 11 deletions(-)
19
3 files changed, 38 insertions(+), 16 deletions(-)
20
15
21
diff --git a/include/hw/net/allwinner-sun8i-emac.h b/include/hw/net/allwinner-sun8i-emac.h
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/net/allwinner-sun8i-emac.h
18
--- a/hw/misc/imx6_ccm.c
24
+++ b/include/hw/net/allwinner-sun8i-emac.h
19
+++ b/hw/misc/imx6_ccm.c
25
@@ -XXX,XX +XXX,XX @@ typedef struct AwSun8iEmacState {
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
26
/** Interrupt output signal to notify CPU */
21
case CCM_CMEOR:
27
qemu_irq irq;
22
return "CMEOR";
28
23
default:
29
+ /** Memory region where DMA transfers are done */
24
- sprintf(unknown, "%d ?", reg);
30
+ MemoryRegion *dma_mr;
25
+ sprintf(unknown, "%u ?", reg);
31
+
26
return unknown;
32
+ /** Address space used internally for DMA transfers */
33
+ AddressSpace dma_as;
34
+
35
/** Generic Network Interface Controller (NIC) for networking API */
36
NICState *nic;
37
38
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/arm/allwinner-h3.c
41
+++ b/hw/arm/allwinner-h3.c
42
@@ -XXX,XX +XXX,XX @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
43
qemu_check_nic_model(&nd_table[0], TYPE_AW_SUN8I_EMAC);
44
qdev_set_nic_properties(DEVICE(&s->emac), &nd_table[0]);
45
}
46
+ object_property_set_link(OBJECT(&s->emac), "dma-memory",
47
+ OBJECT(get_system_memory()), &error_fatal);
48
sysbus_realize(SYS_BUS_DEVICE(&s->emac), &error_fatal);
49
sysbus_mmio_map(SYS_BUS_DEVICE(&s->emac), 0, s->memmap[AW_H3_EMAC]);
50
sysbus_connect_irq(SYS_BUS_DEVICE(&s->emac), 0,
51
diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/net/allwinner-sun8i-emac.c
54
+++ b/hw/net/allwinner-sun8i-emac.c
55
@@ -XXX,XX +XXX,XX @@
56
57
#include "qemu/osdep.h"
58
#include "qemu/units.h"
59
+#include "qapi/error.h"
60
#include "hw/sysbus.h"
61
#include "migration/vmstate.h"
62
#include "net/net.h"
63
@@ -XXX,XX +XXX,XX @@
64
#include "net/checksum.h"
65
#include "qemu/module.h"
66
#include "exec/cpu-common.h"
67
+#include "sysemu/dma.h"
68
#include "hw/net/allwinner-sun8i-emac.h"
69
70
/* EMAC register offsets */
71
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_update_irq(AwSun8iEmacState *s)
72
qemu_set_irq(s->irq, (s->int_sta & s->int_en) != 0);
73
}
74
75
-static uint32_t allwinner_sun8i_emac_next_desc(FrameDescriptor *desc,
76
+static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
77
+ FrameDescriptor *desc,
78
size_t min_size)
79
{
80
uint32_t paddr = desc->next;
81
82
- cpu_physical_memory_read(paddr, desc, sizeof(*desc));
83
+ dma_memory_read(&s->dma_as, paddr, desc, sizeof(*desc));
84
85
if ((desc->status & DESC_STATUS_CTL) &&
86
(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
87
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_next_desc(FrameDescriptor *desc,
88
}
27
}
89
}
28
}
90
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
91
-static uint32_t allwinner_sun8i_emac_get_desc(FrameDescriptor *desc,
30
case USB_ANALOG_DIGPROG:
92
+static uint32_t allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
31
return "USB_ANALOG_DIGPROG";
93
+ FrameDescriptor *desc,
32
default:
94
uint32_t start_addr,
33
- sprintf(unknown, "%d ?", reg);
95
size_t min_size)
34
+ sprintf(unknown, "%u ?", reg);
96
{
35
return unknown;
97
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_get_desc(FrameDescriptor *desc,
36
}
98
99
/* Note that the list is a cycle. Last entry points back to the head. */
100
while (desc_addr != 0) {
101
- cpu_physical_memory_read(desc_addr, desc, sizeof(*desc));
102
+ dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
103
104
if ((desc->status & DESC_STATUS_CTL) &&
105
(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
106
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_rx_desc(AwSun8iEmacState *s,
107
FrameDescriptor *desc,
108
size_t min_size)
109
{
110
- return allwinner_sun8i_emac_get_desc(desc, s->rx_desc_curr, min_size);
111
+ return allwinner_sun8i_emac_get_desc(s, desc, s->rx_desc_curr, min_size);
112
}
37
}
113
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
114
static uint32_t allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
39
freq *= 20;
115
FrameDescriptor *desc,
40
}
116
size_t min_size)
41
117
{
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
118
- return allwinner_sun8i_emac_get_desc(desc, s->tx_desc_head, min_size);
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
119
+ return allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_head, min_size);
44
45
return freq;
120
}
46
}
121
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
122
-static void allwinner_sun8i_emac_flush_desc(FrameDescriptor *desc,
48
freq = imx6_analog_get_pll2_clk(dev) * 18
123
+static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
124
+ FrameDescriptor *desc,
50
125
uint32_t phys_addr)
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
126
{
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
127
- cpu_physical_memory_write(phys_addr, desc, sizeof(*desc));
53
128
+ dma_memory_write(&s->dma_as, phys_addr, desc, sizeof(*desc));
54
return freq;
129
}
55
}
130
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
131
static bool allwinner_sun8i_emac_can_receive(NetClientState *nc)
57
freq = imx6_analog_get_pll2_clk(dev) * 18
132
@@ -XXX,XX +XXX,XX @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState *nc,
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
133
<< RX_DESC_STATUS_FRM_LEN_SHIFT;
59
134
}
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
135
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
136
- cpu_physical_memory_write(desc.addr, buf, desc_bytes);
62
137
- allwinner_sun8i_emac_flush_desc(&desc, s->rx_desc_curr);
63
return freq;
138
+ dma_memory_write(&s->dma_as, desc.addr, buf, desc_bytes);
64
}
139
+ allwinner_sun8i_emac_flush_desc(s, &desc, s->rx_desc_curr);
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
140
trace_allwinner_sun8i_emac_receive(s->rx_desc_curr, desc.addr,
66
break;
141
desc_bytes);
142
143
@@ -XXX,XX +XXX,XX @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState *nc,
144
bytes_left -= desc_bytes;
145
146
/* Move to the next descriptor */
147
- s->rx_desc_curr = allwinner_sun8i_emac_next_desc(&desc, 64);
148
+ s->rx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 64);
149
if (!s->rx_desc_curr) {
150
/* Not enough buffer space available */
151
s->int_sta |= INT_STA_RX_BUF_UA;
152
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
153
desc.status |= TX_DESC_STATUS_LENGTH_ERR;
154
break;
155
}
156
- cpu_physical_memory_read(desc.addr, packet_buf + packet_bytes, bytes);
157
+ dma_memory_read(&s->dma_as, desc.addr, packet_buf + packet_bytes, bytes);
158
packet_bytes += bytes;
159
desc.status &= ~DESC_STATUS_CTL;
160
- allwinner_sun8i_emac_flush_desc(&desc, s->tx_desc_curr);
161
+ allwinner_sun8i_emac_flush_desc(s, &desc, s->tx_desc_curr);
162
163
/* After the last descriptor, send the packet */
164
if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
165
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
166
packet_bytes = 0;
167
transmitted++;
168
}
169
- s->tx_desc_curr = allwinner_sun8i_emac_next_desc(&desc, 0);
170
+ s->tx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 0);
171
}
67
}
172
68
173
/* Raise transmit completed interrupt */
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
174
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
75
freq = imx6_analog_get_periph_clk(dev)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
77
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
80
81
return freq;
82
}
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
84
freq = imx6_ccm_get_ahb_clk(dev)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
86
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
89
90
return freq;
91
}
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
93
freq = imx6_ccm_get_ipg_clk(dev)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
95
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
98
99
return freq;
100
}
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
175
break;
102
break;
176
case REG_TX_CUR_BUF: /* Transmit Current Buffer */
103
}
177
if (s->tx_desc_curr != 0) {
104
178
- cpu_physical_memory_read(s->tx_desc_curr, &desc, sizeof(desc));
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
179
+ dma_memory_read(&s->dma_as, s->tx_desc_curr, &desc, sizeof(desc));
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
180
value = desc.addr;
107
181
} else {
108
return freq;
182
value = 0;
109
}
183
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
184
break;
111
index XXXXXXX..XXXXXXX 100644
185
case REG_RX_CUR_BUF: /* Receive Current Buffer */
112
--- a/hw/misc/imx6_src.c
186
if (s->rx_desc_curr != 0) {
113
+++ b/hw/misc/imx6_src.c
187
- cpu_physical_memory_read(s->rx_desc_curr, &desc, sizeof(desc));
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
188
+ dma_memory_read(&s->dma_as, s->rx_desc_curr, &desc, sizeof(desc));
115
case SRC_GPR10:
189
value = desc.addr;
116
return "SRC_GPR10";
190
} else {
117
default:
191
value = 0;
118
- sprintf(unknown, "%d ?", reg);
192
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_realize(DeviceState *dev, Error **errp)
119
+ sprintf(unknown, "%u ?", reg);
193
{
120
return unknown;
194
AwSun8iEmacState *s = AW_SUN8I_EMAC(dev);
121
}
195
122
}
196
+ if (!s->dma_mr) {
197
+ error_setg(errp, TYPE_AW_SUN8I_EMAC " 'dma-memory' link not set");
198
+ return;
199
+ }
200
+
201
+ address_space_init(&s->dma_as, s->dma_mr, "emac-dma");
202
+
203
qemu_macaddr_default_if_unset(&s->conf.macaddr);
204
s->nic = qemu_new_nic(&net_allwinner_sun8i_emac_info, &s->conf,
205
object_get_typename(OBJECT(dev)), dev->id, s);
206
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_realize(DeviceState *dev, Error **errp)
207
static Property allwinner_sun8i_emac_properties[] = {
208
DEFINE_NIC_PROPERTIES(AwSun8iEmacState, conf),
209
DEFINE_PROP_UINT8("phy-addr", AwSun8iEmacState, mii_phy_addr, 0),
210
+ DEFINE_PROP_LINK("dma-memory", AwSun8iEmacState, dma_mr,
211
+ TYPE_MEMORY_REGION, MemoryRegion *),
212
DEFINE_PROP_END_OF_LIST(),
213
};
214
215
--
123
--
216
2.20.1
124
2.20.1
217
125
218
126
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
According to AArch64.TagCheckFault, none of the other ISS values are
3
We should use printf format specifier "%u" instead of "%d" for
4
provided, so we do not need to go so far as merge_syn_data_abort.
4
argument of type "unsigned int".
5
But we were missing the WnR bit.
6
5
7
Tested-by: Andrey Konovalov <andreyknvl@google.com>
6
Reported-by: Euler Robot <euler.robot@huawei.com>
8
Reported-by: Andrey Konovalov <andreyknvl@google.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
10
Message-id: 20200813200816.3037186-3-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
target/arm/mte_helper.c | 9 +++++----
12
hw/misc/imx6ul_ccm.c | 4 ++--
15
1 file changed, 5 insertions(+), 4 deletions(-)
13
1 file changed, 2 insertions(+), 2 deletions(-)
16
14
17
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
15
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/mte_helper.c
17
--- a/hw/misc/imx6ul_ccm.c
20
+++ b/target/arm/mte_helper.c
18
+++ b/hw/misc/imx6ul_ccm.c
21
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
19
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
22
{
20
case CCM_CMEOR:
23
int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
21
return "CMEOR";
24
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
22
default:
25
- int el, reg_el, tcf, select;
23
- sprintf(unknown, "%d ?", reg);
26
+ int el, reg_el, tcf, select, is_write, syn;
24
+ sprintf(unknown, "%u ?", reg);
27
uint64_t sctlr;
25
return unknown;
28
26
}
29
reg_el = regime_el(env, arm_mmu_idx);
27
}
30
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
28
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
31
*/
29
case USB_ANALOG_DIGPROG:
32
cpu_restore_state(env_cpu(env), ra, true);
30
return "USB_ANALOG_DIGPROG";
33
env->exception.vaddress = dirty_ptr;
31
default:
34
- raise_exception(env, EXCP_DATA_ABORT,
32
- sprintf(unknown, "%d ?", reg);
35
- syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, 0, 0x11),
33
+ sprintf(unknown, "%u ?", reg);
36
- exception_target_el(env));
34
return unknown;
37
+
35
}
38
+ is_write = FIELD_EX32(desc, MTEDESC, WRITE);
36
}
39
+ syn = syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, is_write, 0x11);
40
+ raise_exception(env, EXCP_DATA_ABORT, syn, exception_target_el(env));
41
/* noreturn, but fall through to the assert anyway */
42
43
case 0:
44
--
37
--
45
2.20.1
38
2.20.1
46
39
47
40
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
2
Private Peripheral Bus range, which includes all of the memory mapped
3
devices and registers that are part of the CPU itself, including the
4
NVIC, systick timer, and debug and trace components like the Data
5
Watchpoint and Trace unit (DWT). Within this large region, the range
6
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
7
registers, systick) and 0xe002e000 to 0xe002efff is its Non-secure
8
alias.
2
9
3
Add left-shift to match the existing right-shift.
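A quick check of the n >= 64 branch added in the hunk below: with
a.hi = 0, a.lo = 1 and n = 64, l = a.lo << (n & 63) = 1, and
int128_make128(0, l) yields the value with low half 0 and high half 1,
i.e. 1 << 64 as expected; n = 0 falls through to the final "return a".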
10
The architecture is clear that within the SCS unimplemented registers
11
should be RES0 for privileged accesses and generate BusFault for
12
unprivileged accesses, and we currently implement this.
4
13
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
It is less clear how to handle accesses to unimplemented
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
regions of the wider PPB. Unprivileged accesses should definitely
7
Message-id: 20200815013145.539409-2-richard.henderson@linaro.org
16
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
17
not given as a general rule. However, the register definitions of
18
individual registers for components like the DWT all state that they
19
are RES0 if the relevant component is not implemented, so the
20
simplest way to provide that is to provide RAZ/WI for the whole range
21
for privileged accesses. (The v7M Arm ARM does say that reserved
22
registers should be UNK/SBZP.)
23
24
Expand the container MemoryRegion that the NVIC exposes so that
25
it covers the whole PPB space. This means:
26
* moving the address that the ARMV7M device maps it to down by
27
0xe000 bytes
28
* moving the offsets within the container of all the
29
subregions forward by 0xe000 bytes
30
* adding a new default MemoryRegion that covers the whole container
31
at a lower priority than anything else and which provides the
32
RAZWI/BusFault behaviour
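A quick sanity check of the new layout, assuming the container is
mapped by the ARMV7M device at 0xe0000000 as in the armv7m.c hunk:

    0xe0000000 + 0x0e000 = 0xe000e000   (Secure SCS, as before)
    0xe0000000 + 0x0e010 = 0xe000e010   (systick)
    0xe0000000 + 0x2e000 = 0xe002e000   (Non-secure alias of the SCS)

so the guest-visible addresses are unchanged by the re-layout.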
33
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
9
---
37
---
10
include/qemu/int128.h | 16 ++++++++++++++++
38
include/hw/intc/armv7m_nvic.h | 1 +
11
1 file changed, 16 insertions(+)
39
hw/arm/armv7m.c | 2 +-
40
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
41
3 files changed, 69 insertions(+), 12 deletions(-)
12
42
13
diff --git a/include/qemu/int128.h b/include/qemu/int128.h
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
15
--- a/include/qemu/int128.h
45
--- a/include/hw/intc/armv7m_nvic.h
16
+++ b/include/qemu/int128.h
46
+++ b/include/hw/intc/armv7m_nvic.h
17
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_rshift(Int128 a, int n)
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
18
return a >> n;
48
MemoryRegion systickmem;
19
}
49
MemoryRegion systick_ns_mem;
20
50
MemoryRegion container;
21
+static inline Int128 int128_lshift(Int128 a, int n)
51
+ MemoryRegion defaultmem;
52
53
uint32_t num_irq;
54
qemu_irq excpout;
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/armv7m.c
58
+++ b/hw/arm/armv7m.c
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
60
sysbus_connect_irq(sbd, 0,
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
62
63
- memory_region_add_subregion(&s->container, 0xe000e000,
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
65
sysbus_mmio_get_region(sbd, 0));
66
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
73
.endianness = DEVICE_NATIVE_ENDIAN,
74
};
75
76
+/*
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
79
+ */
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
81
+ uint64_t *data, unsigned size,
82
+ MemTxAttrs attrs)
22
+{
83
+{
23
+ return a << n;
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
85
+ (uint32_t)addr);
86
+ if (attrs.user) {
87
+ return MEMTX_ERROR;
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
24
+}
91
+}
25
+
92
+
26
static inline Int128 int128_add(Int128 a, Int128 b)
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
27
{
94
+ uint64_t value, unsigned size,
28
return a + b;
95
+ MemTxAttrs attrs)
29
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_rshift(Int128 a, int n)
30
}
31
}
32
33
+static inline Int128 int128_lshift(Int128 a, int n)
34
+{
96
+{
35
+ uint64_t l = a.lo << (n & 63);
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
36
+ if (n >= 64) {
98
+ (uint32_t)addr);
37
+ return int128_make128(0, l);
99
+ if (attrs.user) {
38
+ } else if (n > 0) {
100
+ return MEMTX_ERROR;
39
+ return int128_make128(l, (a.hi << n) | (a.lo >> (64 - n)));
40
+ }
101
+ }
41
+ return a;
102
+ return MEMTX_OK;
42
+}
103
+}
43
+
104
+
44
static inline Int128 int128_add(Int128 a, Int128 b)
105
+static const MemoryRegionOps ppb_default_ops = {
106
+ .read_with_attrs = ppb_default_read,
107
+ .write_with_attrs = ppb_default_write,
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
109
+ .valid.min_access_size = 1,
110
+ .valid.max_access_size = 8,
111
+};
112
+
113
static int nvic_post_load(void *opaque, int version_id)
45
{
114
{
46
uint64_t lo = a.lo + b.lo;
115
NVICState *s = opaque;
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
118
{
119
NVICState *s = NVIC(dev);
120
- int regionlen;
121
122
/* The armv7m container object will have set our CPU pointer */
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
125
M_REG_S));
126
}
127
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
129
+ /*
130
+ * This device provides a single sysbus memory region which
131
+ * represents the whole of the "System PPB" space. This is the
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
133
+ * the System Control Space (system registers), the systick timer,
134
+ * and for CPUs with the Security extension an NS banked version
135
+ * of all of these.
136
+ *
137
+ * The default behaviour for unimplemented registers/ranges
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
140
+ * access.
141
+ *
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
143
* and looks like this:
144
* 0x004 - ICTR
145
* 0x010 - 0xff - systick
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
147
* generally code determining which banked register to use should
148
* use attrs.secure; code determining actual behaviour of the system
149
* should use env->v7m.secure.
150
+ *
151
+ * The container covers the whole PPB space. Within it the priority
152
+ * of overlapping regions is:
153
+ * - default region (for RAZ/WI and BusFault) : -1
154
+ * - system register regions : 0
155
+ * - systick : 1
156
+ * This is because the systick device is a small block of registers
157
+ * in the middle of the other system control registers.
158
*/
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
161
- /* The system register region goes at the bottom of the priority
162
- * stack as it covers the whole page.
163
- */
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
166
+ "nvic-default", 0x100000);
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
169
"nvic_sysregs", 0x1000);
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
172
173
memory_region_init_io(&s->systickmem, OBJECT(s),
174
&nvic_systick_ops, s,
175
"nvic_systick", 0xe0);
176
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
179
&s->systickmem, 1);
180
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
183
&nvic_sysreg_ns_ops, &s->sysregmem,
184
"nvic_sysregs_ns", 0x1000);
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
188
&nvic_sysreg_ns_ops, &s->systickmem,
189
"nvic_systick_ns", 0xe0);
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
192
&s->systick_ns_mem, 1);
193
}
194
47
--
195
--
48
2.20.1
196
2.20.1
49
197
50
198
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
In v8.1M the PXN architecture extension adds a new PXN bit to the
2
MPU_RLAR registers, which forbids execution of code in the region
3
from a privileged mode.
2
4
3
In commit ce4afed839 ("target/arm: Implement AArch32 HCR and HCR2")
5
This is another feature which is just in the generic "in v8.1M" set
4
the HCR_EL2 register has been changed from type NO_RAW (no underlying
6
and has no ID register field indicating its presence.
5
state and does not support raw access for state saving/loading) to
6
type CONST (TCG can assume the value to be constant), removing the
7
read/write accessors.
8
We forgot to remove the previous type ARM_CP_NO_RAW. This is not
9
really a problem since the field is overwritten. However it makes
10
code review confusing, so remove it.
11
7
12
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20200812111223.7787-1-f4bug@amsat.org
10
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
11
---
18
target/arm/helper.c | 1 -
12
target/arm/helper.c | 7 ++++++-
19
1 file changed, 1 deletion(-)
13
1 file changed, 6 insertions(+), 1 deletion(-)
20
14
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
19
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
26
.access = PL2_RW,
20
} else {
27
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
21
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
28
{ .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
22
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
29
- .type = ARM_CP_NO_RAW,
23
+ bool pxn = false;
30
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
24
+
31
.access = PL2_RW,
25
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
32
.type = ARM_CP_CONST, .resetvalue = 0 },
26
+ pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
27
+ }
28
29
if (m_is_system_region(env, address)) {
30
/* System space is always execute never */
31
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
32
}
33
34
*prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
35
- if (*prot && !xn) {
36
+ if (*prot && !xn && !(pxn && !is_user)) {
37
*prot |= PAGE_EXEC;
38
}
39
/* We don't need to look the attribute up in the MAIR0/MAIR1
33
--
40
--
34
2.20.1
41
2.20.1
35
42
36
43
diff view generated by jsdifflib
New patch
1
In arm_cpu_realizefn() we check whether the board code disabled EL3
2
via the has_el3 CPU object property, which we create if the CPU
3
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
4
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
5
the ID_PFR1 and ID_AA64PFR0 registers.
1
6
7
This codepath was incorrectly being taken for M-profile CPUs, which
8
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
9
the M-profile Security extension and so should have non-zero values
10
in the ID_PFR1.Security field.
11
12
Restrict the handling of the feature flag to A/R-profile cores.
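For reference, the A/R-profile handling guarded by this check
presumably looks something like the sketch below (field names per the
architecture; not copied verbatim from cpu.c):

    unset_feature(env, ARM_FEATURE_EL3);
    /* Zero the EL3/Security ID fields so the guest cannot see EL3: */
    cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
                                   ID_PFR1, SECURITY, 0);
    cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                       ID_AA64PFR0, EL3, 0);

Skipping it for M-profile keeps the non-zero ID_PFR1.Security value
described above.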
13
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
17
---
18
target/arm/cpu.c | 2 +-
19
1 file changed, 1 insertion(+), 1 deletion(-)
20
21
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.c
24
+++ b/target/arm/cpu.c
25
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
26
}
27
}
28
29
- if (!cpu->has_el3) {
30
+ if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
31
/* If the has_el3 CPU property is disabled then we need to disable the
32
* feature.
33
*/
34
--
35
2.20.1
36
37
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the v8.1M VSCCLRM insn, which zeros floating point
2
2
registers if there is an active floating point context.
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
This requires support in write_neon_element64() for the MO_32
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
element size, so add it.
5
Message-id: 20200815013145.539409-13-richard.henderson@linaro.org
5
6
Because we want to use arm_gen_condlabel(), we need to move
7
the definition of that function up in translate.c so it is
8
before the #include of translate-vfp.c.inc.
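A worked example of the Dreg-to-Sreg conversion in the new
trans_VSCCLRM() below: for the double-precision form with vd = 2 and
imm = 3, topreg starts as 2 + 3 - 1 = 4; after the conversion
btmreg = 2 * 2 = 4 and topreg = 4 * 2 + 1 = 9, so the zeroing loop
covers S4..S9, i.e. exactly D2..D4.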
9
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
7
---
13
---
8
target/arm/translate-sve.c | 20 ++++++++++++--------
14
target/arm/cpu.h | 9 ++++
9
1 file changed, 12 insertions(+), 8 deletions(-)
15
target/arm/m-nocp.decode | 8 +++-
10
16
target/arm/translate.c | 21 +++++----
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
17
target/arm/translate-vfp.c.inc | 84 ++++++++++++++++++++++++++++++++++
12
index XXXXXXX..XXXXXXX 100644
18
4 files changed, 111 insertions(+), 11 deletions(-)
13
--- a/target/arm/translate-sve.c
19
14
+++ b/target/arm/translate-sve.c
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
21
index XXXXXXX..XXXXXXX 100644
16
return size_for_gvec(pred_full_reg_size(s));
22
--- a/target/arm/cpu.h
17
}
23
+++ b/target/arm/cpu.h
18
24
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
19
+/* Invoke an out-of-line helper on 2 Zregs. */
25
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
20
+static void gen_gvec_ool_zz(DisasContext *s, gen_helper_gvec_2 *fn,
26
}
21
+ int rd, int rn, int data)
27
28
+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
22
+{
29
+{
23
+ unsigned vsz = vec_full_reg_size(s);
30
+ /*
24
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
31
+ * Return true if M-profile state handling insns
25
+ vec_full_reg_offset(s, rn),
32
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
26
+ vsz, vsz, data, fn);
33
+ */
34
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
27
+}
35
+}
28
+
36
+
29
/* Invoke an out-of-line helper on 3 Zregs. */
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
30
static void gen_gvec_ool_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
38
{
31
int rd, int rn, int rm, int data)
39
/* Sadly this is encoded differently for A-profile and M-profile */
32
@@ -XXX,XX +XXX,XX @@ static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a)
40
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
33
return false;
41
index XXXXXXX..XXXXXXX 100644
34
}
42
--- a/target/arm/m-nocp.decode
35
if (sve_access_check(s)) {
43
+++ b/target/arm/m-nocp.decode
36
- unsigned vsz = vec_full_reg_size(s);
44
@@ -XXX,XX +XXX,XX @@
37
- tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
45
# If the coprocessor is not present or disabled then we will generate
38
- vec_full_reg_offset(s, a->rn),
46
# the NOCP exception; otherwise we let the insn through to the main decode.
39
- vsz, vsz, 0, fns[a->esz]);
47
40
+ gen_gvec_ool_zz(s, fns[a->esz], a->rd, a->rn, 0);
48
+%vd_dp 22:1 12:4
41
}
49
+%vd_sp 12:4 22:1
50
+
51
&nocp cp
52
53
{
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
58
+ # VSCCLRM (new in v8.1M) is similar:
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
61
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate.c
67
+++ b/target/arm/translate.c
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
69
a64_translate_init();
70
}
71
72
+/* Generate a label used for skipping this instruction */
73
+static void arm_gen_condlabel(DisasContext *s)
74
+{
75
+ if (!s->condjmp) {
76
+ s->condlabel = gen_new_label();
77
+ s->condjmp = 1;
78
+ }
79
+}
80
+
81
/* Flags for the disas_set_da_iss info argument:
82
* lower bits hold the Rt register number, higher bits are flags.
83
*/
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
85
long off = neon_element_offset(reg, ele, memop);
86
87
switch (memop) {
88
+ case MO_32:
89
+ tcg_gen_st32_i64(src, cpu_env, off);
90
+ break;
91
case MO_64:
92
tcg_gen_st_i64(src, cpu_env, off);
93
break;
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
96
}
97
98
-/* Generate a label used for skipping this instruction */
99
-static void arm_gen_condlabel(DisasContext *s)
100
-{
101
- if (!s->condjmp) {
102
- s->condlabel = gen_new_label();
103
- s->condjmp = 1;
104
- }
105
-}
106
-
107
/* Skip this instruction if the ARM condition is false */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
109
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-vfp.c.inc
113
+++ b/target/arm/translate-vfp.c.inc
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
42
return true;
115
return true;
43
}
116
}
44
@@ -XXX,XX +XXX,XX @@ static bool trans_REV_v(DisasContext *s, arg_rr_esz *a)
117
45
};
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
46
119
+{
47
if (sve_access_check(s)) {
120
+ int btmreg, topreg;
48
- unsigned vsz = vec_full_reg_size(s);
121
+ TCGv_i64 zero;
49
- tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
122
+ TCGv_i32 aspen, sfpa;
50
- vec_full_reg_offset(s, a->rn),
123
+
51
- vsz, vsz, 0, fns[a->esz]);
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
52
+ gen_gvec_ool_zz(s, fns[a->esz], a->rd, a->rn, 0);
125
+ /* Before v8.1M, fall through in decode to NOCP check */
53
}
126
+ return false;
54
return true;
127
+ }
55
}
128
+
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
131
+ unallocated_encoding(s);
132
+ return true;
133
+ }
134
+
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
136
+ /* NOP if we have neither FP nor MVE */
137
+ return true;
138
+ }
139
+
140
+ /*
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
142
+ * active floating point context so we must NOP (without doing
143
+ * any lazy state preservation or the NOCP check).
144
+ */
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
179
+
180
+ if (!vfp_access_check(s)) {
181
+ return true;
182
+ }
183
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
185
+ zero = tcg_const_i64(0);
186
+ if (btmreg & 1) {
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
188
+ btmreg++;
189
+ }
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
192
+ }
193
+ if (btmreg == topreg) {
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
199
+ return true;
200
+}
201
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
203
{
204
/*
56
--
205
--
57
2.20.1
206
2.20.1
58
207
59
208
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
2
the general-purpose registers and APSR. Implement this.
2
3
3
This is the only user of the function.
4
The encoding is a subset of the LDMIA T2 encoding, using what would
5
be Rn=0b1111 (which UNDEFs for LDMIA).
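A worked example of the list handling below: list = 0x8006 has bits 1,
2 and 15 set, so R1 and R2 are cleared and the v7m_msr helper is then
invoked with mask = 0b1100, SYSM = 0 to clear APSR. A set bit 13 makes
the translator reject the pattern, and an all-zero list is treated as
UNPREDICTABLE and UNDEFed.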
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20200815013145.539409-6-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
9
---
10
---
10
target/arm/translate-sve.c | 19 ++++++-------------
11
target/arm/t32.decode | 6 +++++-
11
1 file changed, 6 insertions(+), 13 deletions(-)
12
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
13
2 files changed, 43 insertions(+), 1 deletion(-)
12
14
13
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-sve.c
17
--- a/target/arm/t32.decode
16
+++ b/target/arm/translate-sve.c
18
+++ b/target/arm/t32.decode
17
@@ -XXX,XX +XXX,XX @@ static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
18
tcg_gen_gvec_dup_imm(MO_64, vec_full_reg_offset(s, rd), vsz, vsz, word);
20
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
24
+{
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
26
+ CLRM 1110 1000 1001 1111 list:16
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
28
+}
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
30
31
&rfe !extern rn w pu
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
35
+++ b/target/arm/translate.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
37
return do_ldm(s, a, 1);
19
}
38
}
20
39
21
-/* Invoke a vector expander on two Pregs. */
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
22
-static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
41
+{
23
- int esz, int rd, int rn)
42
+ int i;
24
-{
43
+ TCGv_i32 zero;
25
- if (sve_access_check(s)) {
44
+
26
- unsigned psz = pred_gvec_reg_size(s);
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
27
- gvec_fn(esz, pred_full_reg_offset(s, rd),
46
+ return false;
28
- pred_full_reg_offset(s, rn), psz, psz);
29
- }
30
- return true;
31
-}
32
-
33
/* Invoke a vector expander on three Pregs. */
34
static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
35
int esz, int rd, int rn, int rm)
36
@@ -XXX,XX +XXX,XX @@ static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
37
/* Invoke a vector move on two Pregs. */
38
static bool do_mov_p(DisasContext *s, int rd, int rn)
39
{
40
- return do_vector2_p(s, tcg_gen_gvec_mov, 0, rd, rn);
41
+ if (sve_access_check(s)) {
42
+ unsigned psz = pred_gvec_reg_size(s);
43
+ tcg_gen_gvec_mov(MO_8, pred_full_reg_offset(s, rd),
44
+ pred_full_reg_offset(s, rn), psz, psz);
45
+ }
47
+ }
48
+
49
+ if (extract32(a->list, 13, 1)) {
50
+ return false;
51
+ }
52
+
53
+ if (!a->list) {
54
+ /* UNPREDICTABLE; we choose to UNDEF */
55
+ return false;
56
+ }
57
+
58
+ zero = tcg_const_i32(0);
59
+ for (i = 0; i < 15; i++) {
60
+ if (extract32(a->list, i, 1)) {
61
+ /* Clear R[i] */
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
63
+ }
64
+ }
65
+ if (extract32(a->list, 15, 1)) {
66
+ /*
67
+ * Clear APSR (by calling the MSR helper with the same argument
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
46
+ return true;
75
+ return true;
47
}
76
+}
48
77
+
49
/* Set the cpu flags as per a return from an SVE helper. */
78
/*
79
* Branch, branch with link
80
*/
50
--
81
--
51
2.20.1
82
2.20.1
52
83
53
84
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
2
the FPSCR. We have a comment that states this, but the actual logic
3
to forbid accesses for any other register value is missing, so we
4
would end up with A-profile style behaviour. Add the missing check.
2
5
3
Model the new function on gen_gvec_fn2 in translate-a64.c, but
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
indicating which kind of register and in which order. Since there
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
is only one user of do_vector2_z, fold it into do_mov_z.
8
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
9
---
10
target/arm/translate-vfp.c.inc | 5 ++++-
11
1 file changed, 4 insertions(+), 1 deletion(-)
6
12
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20200815013145.539409-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-sve.c | 19 ++++++++++---------
13
1 file changed, 10 insertions(+), 9 deletions(-)
14
15
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-sve.c
15
--- a/target/arm/translate-vfp.c.inc
18
+++ b/target/arm/translate-sve.c
16
+++ b/target/arm/translate-vfp.c.inc
19
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
17
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
20
}
18
* Accesses to R15 are UNPREDICTABLE; we choose to undef.
21
19
* (FPSCR -> r15 is a special case which writes to the PSR flags.)
22
/* Invoke a vector expander on two Zregs. */
20
*/
23
-static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
21
- if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
24
- int esz, int rd, int rn)
22
+ if (a->reg != ARM_VFP_FPSCR) {
25
+
23
+ return false;
26
+static void gen_gvec_fn_zz(DisasContext *s, GVecGen2Fn *gvec_fn,
24
+ }
27
+ int esz, int rd, int rn)
25
+ if (a->rt == 15 && !a->l) {
28
{
26
return false;
29
- if (sve_access_check(s)) {
27
}
30
- unsigned vsz = vec_full_reg_size(s);
28
}
31
- gvec_fn(esz, vec_full_reg_offset(s, rd),
32
- vec_full_reg_offset(s, rn), vsz, vsz);
33
- }
34
- return true;
35
+ unsigned vsz = vec_full_reg_size(s);
36
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
37
+ vec_full_reg_offset(s, rn), vsz, vsz);
38
}
39
40
/* Invoke a vector expander on three Zregs. */
41
@@ -XXX,XX +XXX,XX @@ static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
42
/* Invoke a vector move on two Zregs. */
43
static bool do_mov_z(DisasContext *s, int rd, int rn)
44
{
45
- return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
46
+ if (sve_access_check(s)) {
47
+ gen_gvec_fn_zz(s, tcg_gen_gvec_mov, MO_8, rd, rn);
48
+ }
49
+ return true;
50
}
51
52
/* Initialize a Zreg with replications of a 64-bit immediate. */
53
--
29
--
54
2.20.1
30
2.20.1
55
31
56
32
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
Currently M-profile borrows the A-profile code for VMSR and VMRS
2
2
(access to the FP system registers), because all it needs to support
3
To have a better idea of how big is the region where the offset
3
is the FPSCR. In v8.1M things become significantly more complicated
4
belongs, display the value with the width of the region size
4
in two ways:
5
(i.e. a region of 0x1000 bytes uses 0x000 format).
5
6
6
* there are several new FP system registers; some have side effects
7
on read, and one (FPCXT_NS) needs to avoid the usual
8
vfp_access_check() and the "only if FPU implemented" check
9
10
* all sysregs are now accessible both by VMRS/VMSR (which
11
reads/writes a general purpose register) and also by VLDR/VSTR
12
(which reads/writes them directly to memory)
13
14
Refactor the structure of how we handle VMSR/VMRS to cope with this:
15
16
* keep the M-profile code entirely separate from the A-profile code
17
18
* abstract out the "read or write the general purpose register" part
19
of the code into a loadfn or storefn function pointer, so we can
20
reuse it for VLDR/VSTR.
21
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
24
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
9
Message-id: 20200812190206.31595-4-f4bug@amsat.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
25
---
12
include/hw/misc/unimp.h | 1 +
26
target/arm/cpu.h | 3 +
13
hw/misc/unimp.c | 10 ++++++----
27
target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
14
2 files changed, 7 insertions(+), 4 deletions(-)
28
2 files changed, 171 insertions(+), 14 deletions(-)
15
29
16
diff --git a/include/hw/misc/unimp.h b/include/hw/misc/unimp.h
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/misc/unimp.h
32
--- a/target/arm/cpu.h
19
+++ b/include/hw/misc/unimp.h
33
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@
34
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
21
typedef struct {
35
#define ARM_VFP_FPINST 9
22
SysBusDevice parent_obj;
36
#define ARM_VFP_FPINST2 10
23
MemoryRegion iomem;
37
24
+ unsigned offset_fmt_width;
38
+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
25
char *name;
39
+#define QEMU_VFP_FPSCR_NZCV 0xffff
26
uint64_t size;
40
+
27
} UnimplementedDeviceState;
41
/* iwMMXt coprocessor control registers. */
28
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
42
#define ARM_IWMMXT_wCID 0
43
#define ARM_IWMMXT_wCon 1
44
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
29
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/misc/unimp.c
46
--- a/target/arm/translate-vfp.c.inc
31
+++ b/hw/misc/unimp.c
47
+++ b/target/arm/translate-vfp.c.inc
32
@@ -XXX,XX +XXX,XX @@ static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
33
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
49
return true;
34
35
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
36
- "(size %d, offset 0x%" HWADDR_PRIx ")\n",
37
- s->name, size, offset);
38
+ "(size %d, offset 0x%0*" HWADDR_PRIx ")\n",
39
+ s->name, size, s->offset_fmt_width, offset);
40
return 0;
41
}
50
}
42
51
43
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
52
+/*
44
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
53
+ * M-profile provides two different sets of instructions that can
45
54
+ * access floating point system registers: VMSR/VMRS (which move
46
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
55
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
47
- "(size %d, offset 0x%" HWADDR_PRIx
56
+ * move directly to/from memory). In some cases there are also side
48
+ "(size %d, offset 0x%0*" HWADDR_PRIx
57
+ * effects which must happen after any write to memory (which could
49
", value 0x%0*" PRIx64 ")\n",
58
+ * cause an exception). So we implement the common logic for the
50
- s->name, size, offset, size << 1, value);
59
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
51
+ s->name, size, s->offset_fmt_width, offset, size << 1, value);
60
+ * which take pointers to callback functions which will perform the
52
}
61
+ * actual "read/write general purpose register" and "read/write
53
62
+ * memory" operations.
54
static const MemoryRegionOps unimp_ops = {
63
+ */
55
@@ -XXX,XX +XXX,XX @@ static void unimp_realize(DeviceState *dev, Error **errp)
64
+
56
return;
65
+/*
66
+ * Emit code to store the sysreg to its final destination; frees the
67
+ * TCG temp 'value' it is passed.
68
+ */
69
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
70
+/*
71
+ * Emit code to load the value to be copied to the sysreg; returns
72
+ * a new TCG temporary
73
+ */
74
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
75
+
76
+/* Common decode/access checks for fp sysreg read/write */
77
+typedef enum FPSysRegCheckResult {
78
+ FPSysRegCheckFailed, /* caller should return false */
79
+ FPSysRegCheckDone, /* caller should return true */
80
+ FPSysRegCheckContinue, /* caller should continue generating code */
81
+} FPSysRegCheckResult;
82
+
83
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
84
+{
85
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
86
+ return FPSysRegCheckFailed;
87
+ }
88
+
89
+ switch (regno) {
90
+ case ARM_VFP_FPSCR:
91
+ case QEMU_VFP_FPSCR_NZCV:
92
+ break;
93
+ default:
94
+ return FPSysRegCheckFailed;
95
+ }
96
+
97
+ if (!vfp_access_check(s)) {
98
+ return FPSysRegCheckDone;
99
+ }
100
+
101
+ return FPSysRegCheckContinue;
102
+}
103
+
104
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
105
+
106
+ fp_sysreg_loadfn *loadfn,
107
+ void *opaque)
108
+{
109
+ /* Do a write to an M-profile floating point system register */
110
+ TCGv_i32 tmp;
111
+
112
+ switch (fp_sysreg_checks(s, regno)) {
113
+ case FPSysRegCheckFailed:
114
+ return false;
115
+ case FPSysRegCheckDone:
116
+ return true;
117
+ case FPSysRegCheckContinue:
118
+ break;
119
+ }
120
+
121
+ switch (regno) {
122
+ case ARM_VFP_FPSCR:
123
+ tmp = loadfn(s, opaque);
124
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
125
+ tcg_temp_free_i32(tmp);
126
+ gen_lookup_tb(s);
127
+ break;
128
+ default:
129
+ g_assert_not_reached();
130
+ }
131
+ return true;
132
+}
133
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
135
+ fp_sysreg_storefn *storefn,
136
+ void *opaque)
137
+{
138
+ /* Do a read from an M-profile floating point system register */
139
+ TCGv_i32 tmp;
140
+
141
+ switch (fp_sysreg_checks(s, regno)) {
142
+ case FPSysRegCheckFailed:
143
+ return false;
144
+ case FPSysRegCheckDone:
145
+ return true;
146
+ case FPSysRegCheckContinue:
147
+ break;
148
+ }
149
+
150
+ switch (regno) {
151
+ case ARM_VFP_FPSCR:
152
+ tmp = tcg_temp_new_i32();
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
154
+ storefn(s, opaque, tmp);
155
+ break;
156
+ case QEMU_VFP_FPSCR_NZCV:
157
+ /*
158
+ * Read just NZCV; this is a special case to avoid the
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
160
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
163
+ storefn(s, opaque, tmp);
164
+ break;
165
+ default:
166
+ g_assert_not_reached();
167
+ }
168
+ return true;
169
+}
170
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
172
+{
173
+ arg_VMSR_VMRS *a = opaque;
174
+
175
+ if (a->rt == 15) {
176
+ /* Set the 4 flag bits in the CPSR */
177
+ gen_set_nzcv(value);
178
+ tcg_temp_free_i32(value);
179
+ } else {
180
+ store_reg(s, a->rt, value);
181
+ }
182
+}
183
+
184
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
185
+{
186
+ arg_VMSR_VMRS *a = opaque;
187
+
188
+ return load_reg(s, a->rt);
189
+}
190
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
193
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
197
+ * we only care about the top 4 bits of FPSCR there.
198
+ */
199
+ if (a->rt == 15) {
200
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
201
+ a->reg = QEMU_VFP_FPSCR_NZCV;
202
+ } else {
203
+ return false;
204
+ }
205
+ }
206
+
207
+ if (a->l) {
208
+ /* VMRS, move FP system register to gp register */
209
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
210
+ } else {
211
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
214
+}
215
+
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
217
{
218
TCGv_i32 tmp;
219
bool ignore_vfp_enabled = false;
220
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
222
- return false;
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
224
+ return gen_M_VMSR_VMRS(s, a);
57
}
225
}
58
226
59
+ s->offset_fmt_width = DIV_ROUND_UP(64 - clz64(s->size - 1), 4);
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
60
+
228
- /*
61
memory_region_init_io(&s->iomem, OBJECT(s), &unimp_ops, s,
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
62
s->name, s->size);
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
63
sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem);
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
232
- */
233
- if (a->reg != ARM_VFP_FPSCR) {
234
- return false;
235
- }
236
- if (a->rt == 15 && !a->l) {
237
- return false;
238
- }
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
240
+ return false;
241
}
242
243
switch (a->reg) {
64
--
244
--
65
2.20.1
245
2.20.1
66
246
67
247
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The constant-expander functions like negate, plus_2, etc, are
2
generally useful; move them up in translate.c so we can use them in
3
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.
2
4
3
The existing clr functions have only one vector argument, and so
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
can only clear in place. The existing movz functions have two
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
vector arguments, and so can clear while moving. Merge them, with
7
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
6
a flag that controls the sense of active vs inactive elements
8
---
7
being cleared.
9
target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
10
1 file changed, 25 insertions(+), 21 deletions(-)
8
11
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20200815013145.539409-10-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/helper-sve.h | 5 ---
15
target/arm/sve_helper.c | 70 ++++++++------------------------------
16
target/arm/translate-sve.c | 53 +++++++++++------------------
17
3 files changed, 34 insertions(+), 94 deletions(-)
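
The merged semantics are easier to see outside the gvec machinery; a small
standalone sketch (illustrative only, not the sve_helper.c code) of "move
with zeroing" plus the invert flag that lets the old clr behaviour be
expressed as a movz with rd == rn:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void movz(uint8_t *d, const uint8_t *n, const bool *pred,
                 size_t len, bool invert)
{
    for (size_t i = 0; i < len; i++) {
        /* pred[i] ^ invert selects which elements survive; the rest are zeroed */
        d[i] = (pred[i] ^ invert) ? n[i] : 0;
    }
}

int main(void)
{
    uint8_t n[4] = {1, 2, 3, 4}, d[4];
    bool pred[4] = {true, false, true, false};

    movz(d, n, pred, 4, false);  /* keep active elements:        1 0 3 0 */
    printf("%d %d %d %d\n", d[0], d[1], d[2], d[3]);

    movz(d, n, pred, 4, true);   /* old clr sense (zero active): 0 2 0 4 */
    printf("%d %d %d %d\n", d[0], d[1], d[2], d[3]);
    return 0;
}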
18
19
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
20
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper-sve.h
14
--- a/target/arm/translate.c
22
+++ b/target/arm/helper-sve.h
15
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
16
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
24
DEF_HELPER_FLAGS_3(sve_uminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
17
}
25
DEF_HELPER_FLAGS_3(sve_uminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
26
27
-DEF_HELPER_FLAGS_3(sve_clr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
-DEF_HELPER_FLAGS_3(sve_clr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
-DEF_HELPER_FLAGS_3(sve_clr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
-DEF_HELPER_FLAGS_3(sve_clr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
-
32
DEF_HELPER_FLAGS_4(sve_movz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_4(sve_movz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_4(sve_movz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/sve_helper.c
38
+++ b/target/arm/sve_helper.c
39
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
40
return flags;
41
}
18
}
42
19
43
-/* Store zero into every active element of Zd. We will use this for two
44
- * and three-operand predicated instructions for which logic dictates a
45
- * zero result. In particular, logical shift by element size, which is
46
- * otherwise undefined on the host.
47
- *
48
- * For element sizes smaller than uint64_t, we use tables to expand
49
- * the N bits of the controlling predicate to a byte mask, and clear
50
- * those bytes.
51
+/*
20
+/*
52
+ * Copy Zn into Zd, and store zero into inactive elements.
21
+ * Constant expanders for the decoders.
53
+ * If inv, store zeros into the active elements.
22
+ */
23
+
24
+static int negate(DisasContext *s, int x)
25
+{
26
+ return -x;
27
+}
28
+
29
+static int plus_2(DisasContext *s, int x)
30
+{
31
+ return x + 2;
32
+}
33
+
34
+static int times_2(DisasContext *s, int x)
35
+{
36
+ return x * 2;
37
+}
38
+
39
+static int times_4(DisasContext *s, int x)
40
+{
41
+ return x * 4;
42
+}
43
+
44
/* Flags for the disas_set_da_iss info argument:
45
* lower bits hold the Rt register number, higher bits are flags.
54
*/
46
*/
55
-void HELPER(sve_clr_b)(void *vd, void *vg, uint32_t desc)
47
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)
48
49
50
/*
51
- * Constant expanders for the decoders.
52
+ * Constant expanders used by T16/T32 decode
53
*/
54
55
-static int negate(DisasContext *s, int x)
56
-{
56
-{
57
- intptr_t i, opr_sz = simd_oprsz(desc) / 8;
57
- return -x;
58
- uint64_t *d = vd;
59
- uint8_t *pg = vg;
60
- for (i = 0; i < opr_sz; i += 1) {
61
- d[i] &= ~expand_pred_b(pg[H1(i)]);
62
- }
63
-}
58
-}
64
-
59
-
65
-void HELPER(sve_clr_h)(void *vd, void *vg, uint32_t desc)
60
-static int plus_2(DisasContext *s, int x)
66
-{
61
-{
67
- intptr_t i, opr_sz = simd_oprsz(desc) / 8;
62
- return x + 2;
68
- uint64_t *d = vd;
69
- uint8_t *pg = vg;
70
- for (i = 0; i < opr_sz; i += 1) {
71
- d[i] &= ~expand_pred_h(pg[H1(i)]);
72
- }
73
-}
63
-}
74
-
64
-
75
-void HELPER(sve_clr_s)(void *vd, void *vg, uint32_t desc)
65
-static int times_2(DisasContext *s, int x)
76
-{
66
-{
77
- intptr_t i, opr_sz = simd_oprsz(desc) / 8;
67
- return x * 2;
78
- uint64_t *d = vd;
79
- uint8_t *pg = vg;
80
- for (i = 0; i < opr_sz; i += 1) {
81
- d[i] &= ~expand_pred_s(pg[H1(i)]);
82
- }
83
-}
68
-}
84
-
69
-
85
-void HELPER(sve_clr_d)(void *vd, void *vg, uint32_t desc)
70
-static int times_4(DisasContext *s, int x)
86
-{
71
-{
87
- intptr_t i, opr_sz = simd_oprsz(desc) / 8;
72
- return x * 4;
88
- uint64_t *d = vd;
89
- uint8_t *pg = vg;
90
- for (i = 0; i < opr_sz; i += 1) {
91
- if (pg[H1(i)] & 1) {
92
- d[i] = 0;
93
- }
94
- }
95
-}
73
-}
96
-
74
-
97
-/* Copy Zn into Zd, and store zero into inactive elements. */
75
/* Return only the rotation part of T32ExpandImm. */
98
void HELPER(sve_movz_b)(void *vd, void *vn, void *vg, uint32_t desc)
76
static int t32_expandimm_rot(DisasContext *s, int x)
99
{
77
{
100
intptr_t i, opr_sz = simd_oprsz(desc) / 8;
101
+ uint64_t inv = -(uint64_t)(simd_data(desc) & 1);
102
uint64_t *d = vd, *n = vn;
103
uint8_t *pg = vg;
104
+
105
for (i = 0; i < opr_sz; i += 1) {
106
- d[i] = n[i] & expand_pred_b(pg[H1(i)]);
107
+ d[i] = n[i] & (expand_pred_b(pg[H1(i)]) ^ inv);
108
}
109
}
110
111
void HELPER(sve_movz_h)(void *vd, void *vn, void *vg, uint32_t desc)
112
{
113
intptr_t i, opr_sz = simd_oprsz(desc) / 8;
114
+ uint64_t inv = -(uint64_t)(simd_data(desc) & 1);
115
uint64_t *d = vd, *n = vn;
116
uint8_t *pg = vg;
117
+
118
for (i = 0; i < opr_sz; i += 1) {
119
- d[i] = n[i] & expand_pred_h(pg[H1(i)]);
120
+ d[i] = n[i] & (expand_pred_h(pg[H1(i)]) ^ inv);
121
}
122
}
123
124
void HELPER(sve_movz_s)(void *vd, void *vn, void *vg, uint32_t desc)
125
{
126
intptr_t i, opr_sz = simd_oprsz(desc) / 8;
127
+ uint64_t inv = -(uint64_t)(simd_data(desc) & 1);
128
uint64_t *d = vd, *n = vn;
129
uint8_t *pg = vg;
130
+
131
for (i = 0; i < opr_sz; i += 1) {
132
- d[i] = n[i] & expand_pred_s(pg[H1(i)]);
133
+ d[i] = n[i] & (expand_pred_s(pg[H1(i)]) ^ inv);
134
}
135
}
136
137
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
138
intptr_t i, opr_sz = simd_oprsz(desc) / 8;
139
uint64_t *d = vd, *n = vn;
140
uint8_t *pg = vg;
141
+ uint8_t inv = simd_data(desc);
142
+
143
for (i = 0; i < opr_sz; i += 1) {
144
- d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
145
+ d[i] = n[i] & -(uint64_t)((pg[H1(i)] ^ inv) & 1);
146
}
147
}
148
149
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
150
index XXXXXXX..XXXXXXX 100644
151
--- a/target/arm/translate-sve.c
152
+++ b/target/arm/translate-sve.c
153
@@ -XXX,XX +XXX,XX @@ static bool trans_SADDV(DisasContext *s, arg_rpr_esz *a)
154
*** SVE Shift by Immediate - Predicated Group
155
*/
156
157
-/* Store zero into every active element of Zd. We will use this for two
158
- * and three-operand predicated instructions for which logic dictates a
159
- * zero result.
160
+/*
161
+ * Copy Zn into Zd, storing zeros into inactive elements.
162
+ * If invert, store zeros into the active elements.
163
*/
164
-static bool do_clr_zp(DisasContext *s, int rd, int pg, int esz)
165
-{
166
- static gen_helper_gvec_2 * const fns[4] = {
167
- gen_helper_sve_clr_b, gen_helper_sve_clr_h,
168
- gen_helper_sve_clr_s, gen_helper_sve_clr_d,
169
- };
170
- if (sve_access_check(s)) {
171
- unsigned vsz = vec_full_reg_size(s);
172
- tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
173
- pred_full_reg_offset(s, pg),
174
- vsz, vsz, 0, fns[esz]);
175
- }
176
- return true;
177
-}
178
-
179
-/* Copy Zn into Zd, storing zeros into inactive elements. */
180
-static void do_movz_zpz(DisasContext *s, int rd, int rn, int pg, int esz)
181
+static bool do_movz_zpz(DisasContext *s, int rd, int rn, int pg,
182
+ int esz, bool invert)
183
{
184
static gen_helper_gvec_3 * const fns[4] = {
185
gen_helper_sve_movz_b, gen_helper_sve_movz_h,
186
gen_helper_sve_movz_s, gen_helper_sve_movz_d,
187
};
188
- unsigned vsz = vec_full_reg_size(s);
189
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
190
- vec_full_reg_offset(s, rn),
191
- pred_full_reg_offset(s, pg),
192
- vsz, vsz, 0, fns[esz]);
193
+
194
+ if (sve_access_check(s)) {
195
+ unsigned vsz = vec_full_reg_size(s);
196
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
197
+ vec_full_reg_offset(s, rn),
198
+ pred_full_reg_offset(s, pg),
199
+ vsz, vsz, invert, fns[esz]);
200
+ }
201
+ return true;
202
}
203
204
static bool do_zpzi_ool(DisasContext *s, arg_rpri_esz *a,
205
@@ -XXX,XX +XXX,XX @@ static bool trans_LSR_zpzi(DisasContext *s, arg_rpri_esz *a)
206
/* Shift by element size is architecturally valid.
207
For logical shifts, it is a zeroing operation. */
208
if (a->imm >= (8 << a->esz)) {
209
- return do_clr_zp(s, a->rd, a->pg, a->esz);
210
+ return do_movz_zpz(s, a->rd, a->rd, a->pg, a->esz, true);
211
} else {
212
return do_zpzi_ool(s, a, fns[a->esz]);
213
}
214
@@ -XXX,XX +XXX,XX @@ static bool trans_LSL_zpzi(DisasContext *s, arg_rpri_esz *a)
215
/* Shift by element size is architecturally valid.
216
For logical shifts, it is a zeroing operation. */
217
if (a->imm >= (8 << a->esz)) {
218
- return do_clr_zp(s, a->rd, a->pg, a->esz);
219
+ return do_movz_zpz(s, a->rd, a->rd, a->pg, a->esz, true);
220
} else {
221
return do_zpzi_ool(s, a, fns[a->esz]);
222
}
223
@@ -XXX,XX +XXX,XX @@ static bool trans_ASRD(DisasContext *s, arg_rpri_esz *a)
224
/* Shift by element size is architecturally valid. For arithmetic
225
right shift for division, it is a zeroing operation. */
226
if (a->imm >= (8 << a->esz)) {
227
- return do_clr_zp(s, a->rd, a->pg, a->esz);
228
+ return do_movz_zpz(s, a->rd, a->rd, a->pg, a->esz, true);
229
} else {
230
return do_zpzi_ool(s, a, fns[a->esz]);
231
}
232
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
233
234
/* Zero the inactive elements. */
235
gen_set_label(over);
236
- do_movz_zpz(s, a->rd, a->rd, a->pg, esz);
237
- return true;
238
+ return do_movz_zpz(s, a->rd, a->rd, a->pg, esz, false);
239
}
240
241
static void do_st_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
242
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVPRFX_m(DisasContext *s, arg_rpr_esz *a)
243
244
static bool trans_MOVPRFX_z(DisasContext *s, arg_rpr_esz *a)
245
{
246
- if (sve_access_check(s)) {
247
- do_movz_zpz(s, a->rd, a->rn, a->pg, a->esz);
248
- }
249
- return true;
250
+ return do_movz_zpz(s, a->rd, a->rn, a->pg, a->esz, false);
251
}
252
--
78
--
253
2.20.1
79
2.20.1
254
80
255
81
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the new-in-v8.1M VLDR/VSTR variants which directly
2
read or write FP system registers to memory.
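
The interesting part of the new encodings is the addressing-mode handling;
a rough standalone sketch (illustrative names, not the translate-vfp code)
of how the a/p/w fields in the decode pattern turn into pre/post-indexing
and base-register writeback:

#include <stdint.h>
#include <stdio.h>

static uint32_t access_addr(uint32_t *rn, uint32_t imm, int a, int p, int w)
{
    int32_t offset = a ? (int32_t)imm : -(int32_t)imm;
    uint32_t addr = *rn;

    if (p) {
        addr += offset;          /* pre-index: apply offset before the access */
    }
    uint32_t access_at = addr;   /* ...load/store of the sysreg happens here... */
    if (w) {
        if (!p) {
            addr += offset;      /* post-index: apply offset after the access */
        }
        *rn = addr;              /* write the updated base back */
    }
    return access_at;
}

int main(void)
{
    uint32_t rn = 0x1000;
    printf("0x%x\n", access_addr(&rn, 8, 1, 1, 0)); /* pre-index, no wb: 0x1008 */
    printf("0x%x\n", access_addr(&rn, 8, 0, 0, 1)); /* post-index + wb:  0x1000 */
    printf("rn = 0x%x\n", rn);                      /* base updated to   0xff8  */
    return 0;
}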
2
3
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20200815013145.539409-21-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
7
---
7
---
8
target/arm/helper.h | 10 ++++++++
8
target/arm/vfp.decode | 14 ++++++
9
target/arm/translate-a64.c | 33 ++++++++++++++++++--------
9
target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++++++
10
target/arm/vec_helper.c | 48 ++++++++++++++++++++++++++++++++++++++
10
2 files changed, 105 insertions(+)
11
3 files changed, 81 insertions(+), 10 deletions(-)
12
11
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
12
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
14
--- a/target/arm/vfp.decode
16
+++ b/target/arm/helper.h
15
+++ b/target/arm/vfp.decode
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_mls_idx_s, TCG_CALL_NO_RWG,
16
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
18
DEF_HELPER_FLAGS_5(gvec_mls_idx_d, TCG_CALL_NO_RWG,
17
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
19
void, ptr, ptr, ptr, ptr, i32)
18
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
20
19
21
+DEF_HELPER_FLAGS_5(neon_sqdmulh_h, TCG_CALL_NO_RWG,
20
+# M-profile VLDR/VSTR to sysreg
22
+ void, ptr, ptr, ptr, ptr, i32)
21
+%vldr_sysreg 22:1 13:3
23
+DEF_HELPER_FLAGS_5(neon_sqdmulh_s, TCG_CALL_NO_RWG,
22
+%imm7_0x4 0:7 !function=times_4
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+
23
+
26
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
24
+&vldr_sysreg rn reg imm a w p
27
+ void, ptr, ptr, ptr, ptr, i32)
25
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
28
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
26
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
27
+
31
#ifdef TARGET_AARCH64
28
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
32
#include "helper-a64.h"
29
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
33
#include "helper-sve.h"
30
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
34
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
31
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
32
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
33
+
34
# We split the load/store multiple up into two patterns to avoid
35
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
36
# grouping:
37
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
35
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-a64.c
39
--- a/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/translate-a64.c
40
+++ b/target/arm/translate-vfp.c.inc
38
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
41
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
39
tcg_temp_free_ptr(fpst);
42
return true;
40
}
43
}
41
44
42
+/* Expand a 3-operand + qc + operation using an out-of-line helper. */
45
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
43
+static void gen_gvec_op3_qc(DisasContext *s, bool is_q, int rd, int rn,
44
+ int rm, gen_helper_gvec_3_ptr *fn)
45
+{
46
+{
46
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
47
+ arg_vldr_sysreg *a = opaque;
48
+ uint32_t offset = a->imm;
49
+ TCGv_i32 addr;
47
+
50
+
48
+ tcg_gen_addi_ptr(qc_ptr, cpu_env, offsetof(CPUARMState, vfp.qc));
51
+ if (!a->a) {
49
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
52
+ offset = - offset;
50
+ vec_full_reg_offset(s, rn),
53
+ }
51
+ vec_full_reg_offset(s, rm), qc_ptr,
54
+
52
+ is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
55
+ addr = load_reg(s, a->rn);
53
+ tcg_temp_free_ptr(qc_ptr);
56
+ if (a->p) {
57
+ tcg_gen_addi_i32(addr, addr, offset);
58
+ }
59
+
60
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
61
+ gen_helper_v8m_stackcheck(cpu_env, addr);
62
+ }
63
+
64
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
65
+ MO_UL | MO_ALIGN | s->be_data);
66
+ tcg_temp_free_i32(value);
67
+
68
+ if (a->w) {
69
+ /* writeback */
70
+ if (!a->p) {
71
+ tcg_gen_addi_i32(addr, addr, offset);
72
+ }
73
+ store_reg(s, a->rn, addr);
74
+ } else {
75
+ tcg_temp_free_i32(addr);
76
+ }
54
+}
77
+}
55
+
78
+
56
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
79
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
57
* than the 32 bit equivalent.
80
+{
58
*/
81
+ arg_vldr_sysreg *a = opaque;
59
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
82
+ uint32_t offset = a->imm;
60
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mla, size);
83
+ TCGv_i32 addr;
61
}
84
+ TCGv_i32 value = tcg_temp_new_i32();
62
return;
85
+
63
+ case 0x16: /* SQDMULH, SQRDMULH */
86
+ if (!a->a) {
64
+ {
87
+ offset = - offset;
65
+ static gen_helper_gvec_3_ptr * const fns[2][2] = {
88
+ }
66
+ { gen_helper_neon_sqdmulh_h, gen_helper_neon_sqrdmulh_h },
89
+
67
+ { gen_helper_neon_sqdmulh_s, gen_helper_neon_sqrdmulh_s },
90
+ addr = load_reg(s, a->rn);
68
+ };
91
+ if (a->p) {
69
+ gen_gvec_op3_qc(s, is_q, rd, rn, rm, fns[size - 1][u]);
92
+ tcg_gen_addi_i32(addr, addr, offset);
93
+ }
94
+
95
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
96
+ gen_helper_v8m_stackcheck(cpu_env, addr);
97
+ }
98
+
99
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
100
+ MO_UL | MO_ALIGN | s->be_data);
101
+
102
+ if (a->w) {
103
+ /* writeback */
104
+ if (!a->p) {
105
+ tcg_gen_addi_i32(addr, addr, offset);
70
+ }
106
+ }
71
+ return;
107
+ store_reg(s, a->rn, addr);
72
case 0x11:
108
+ } else {
73
if (!u) { /* CMTST */
109
+ tcg_temp_free_i32(addr);
74
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_cmtst, size);
75
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
76
genenvfn = fns[size][u];
77
break;
78
}
79
- case 0x16: /* SQDMULH, SQRDMULH */
80
- {
81
- static NeonGenTwoOpEnvFn * const fns[2][2] = {
82
- { gen_helper_neon_qdmulh_s16, gen_helper_neon_qrdmulh_s16 },
83
- { gen_helper_neon_qdmulh_s32, gen_helper_neon_qrdmulh_s32 },
84
- };
85
- assert(size == 1 || size == 2);
86
- genenvfn = fns[size - 1][u];
87
- break;
88
- }
89
default:
90
g_assert_not_reached();
91
}
92
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/vec_helper.c
95
+++ b/target/arm/vec_helper.c
96
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
97
clear_tail(d, opr_sz, simd_maxsz(desc));
98
}
99
100
+void HELPER(neon_sqdmulh_h)(void *vd, void *vn, void *vm,
101
+ void *vq, uint32_t desc)
102
+{
103
+ intptr_t i, opr_sz = simd_oprsz(desc);
104
+ int16_t *d = vd, *n = vn, *m = vm;
105
+
106
+ for (i = 0; i < opr_sz / 2; ++i) {
107
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, false, vq);
108
+ }
110
+ }
109
+ clear_tail(d, opr_sz, simd_maxsz(desc));
111
+ return value;
110
+}
112
+}
111
+
113
+
112
+void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
114
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
113
+ void *vq, uint32_t desc)
114
+{
115
+{
115
+ intptr_t i, opr_sz = simd_oprsz(desc);
116
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
116
+ int16_t *d = vd, *n = vn, *m = vm;
117
+ return false;
117
+
118
+ for (i = 0; i < opr_sz / 2; ++i) {
119
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, true, vq);
120
+ }
118
+ }
121
+ clear_tail(d, opr_sz, simd_maxsz(desc));
119
+ if (a->rn == 15) {
120
+ return false;
121
+ }
122
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
122
+}
123
+}
123
+
124
+
124
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
125
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
125
static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
126
bool neg, bool round, uint32_t *sat)
127
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
128
clear_tail(d, opr_sz, simd_maxsz(desc));
129
}
130
131
+void HELPER(neon_sqdmulh_s)(void *vd, void *vn, void *vm,
132
+ void *vq, uint32_t desc)
133
+{
126
+{
134
+ intptr_t i, opr_sz = simd_oprsz(desc);
127
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
135
+ int32_t *d = vd, *n = vn, *m = vm;
128
+ return false;
136
+
137
+ for (i = 0; i < opr_sz / 4; ++i) {
138
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, false, vq);
139
+ }
129
+ }
140
+ clear_tail(d, opr_sz, simd_maxsz(desc));
130
+ if (a->rn == 15) {
131
+ return false;
132
+ }
133
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
141
+}
134
+}
142
+
135
+
143
+void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
136
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
144
+ void *vq, uint32_t desc)
137
{
145
+{
138
TCGv_i32 tmp;
146
+ intptr_t i, opr_sz = simd_oprsz(desc);
147
+ int32_t *d = vd, *n = vn, *m = vm;
148
+
149
+ for (i = 0; i < opr_sz / 4; ++i) {
150
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, true, vq);
151
+ }
152
+ clear_tail(d, opr_sz, simd_maxsz(desc));
153
+}
154
+
155
/* Integer 8 and 16-bit dot-product.
156
*
157
* Note that for the loops herein, host endianness does not matter
158
--
139
--
159
2.20.1
140
2.20.1
160
141
161
142
1
From: Richard Henderson <richard.henderson@linaro.org>
1
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
2
like the existing FPSCR, except that it reads and writes only bits
3
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
4
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
5
permitted.)
2
6
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Implement the register. Since we don't yet implement MVE, we handle
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
the QC bit as RES0, with todo comments for where we will need to add
5
Message-id: 20200815013145.539409-19-richard.henderson@linaro.org
9
support later.
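
A standalone sketch of the read-modify-write this implies for the
underlying FPSCR state (illustrative only; the generated TCG code below
does the same thing with and/or against the NZCV mask):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define FPCR_NZCV_MASK  0xf0000000u   /* N, Z, C, V; QC treated as RES0 here */

static uint32_t fpscr;                /* stand-in for the real FPSCR state */

static void write_fpscr_nzcvqc(uint32_t value)
{
    value &= FPCR_NZCV_MASK;                    /* only the flag bits move */
    fpscr = (fpscr & ~FPCR_NZCV_MASK) | value;  /* everything else is kept */
}

int main(void)
{
    fpscr = 0x0000009f;               /* pretend some trap-enable bits are set */
    write_fpscr_nzcvqc(0xffffffffu);
    printf("0x%08" PRIx32 "\n", fpscr);  /* prints 0xf000009f */
    return 0;
}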
10
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
7
---
14
---
8
target/arm/helper.h | 4 ++++
15
target/arm/cpu.h | 13 +++++++++++++
9
target/arm/translate-a64.c | 16 ++++++++++++++++
16
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
10
target/arm/vec_helper.c | 29 +++++++++++++++++++++++++----
17
2 files changed, 40 insertions(+)
11
3 files changed, 45 insertions(+), 4 deletions(-)
12
18
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
21
--- a/target/arm/cpu.h
16
+++ b/target/arm/helper.h
22
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_uaba_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
18
DEF_HELPER_FLAGS_4(gvec_uaba_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
19
DEF_HELPER_FLAGS_4(gvec_uaba_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
20
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
21
+DEF_HELPER_FLAGS_4(gvec_mul_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
22
+DEF_HELPER_FLAGS_4(gvec_mul_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+#define FPCR_C (1 << 29) /* FP carry flag */
23
+DEF_HELPER_FLAGS_4(gvec_mul_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
30
+#define FPCR_N (1 << 31) /* FP negative flag */
24
+
31
+
25
#ifdef TARGET_AARCH64
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
26
#include "helper-a64.h"
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
27
#include "helper-sve.h"
34
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
36
{
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
29
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
52
--- a/target/arm/translate-vfp.c.inc
31
+++ b/target/arm/translate-a64.c
53
+++ b/target/arm/translate-vfp.c.inc
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
33
data, gen_helper_gvec_fmlal_idx_a64);
55
case ARM_VFP_FPSCR:
34
}
56
case QEMU_VFP_FPSCR_NZCV:
35
return;
57
break;
36
+
58
+ case ARM_VFP_FPSCR_NZCVQC:
37
+ case 0x08: /* MUL */
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
38
+ if (!is_long && !is_scalar) {
60
+ return false;
39
+ static gen_helper_gvec_3 * const fns[3] = {
40
+ gen_helper_gvec_mul_idx_h,
41
+ gen_helper_gvec_mul_idx_s,
42
+ gen_helper_gvec_mul_idx_d,
43
+ };
44
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
45
+ vec_full_reg_offset(s, rn),
46
+ vec_full_reg_offset(s, rm),
47
+ is_q ? 16 : 8, vec_full_reg_size(s),
48
+ index, fns[size - 1]);
49
+ return;
50
+ }
61
+ }
51
+ break;
62
+ break;
63
default:
64
return FPSysRegCheckFailed;
52
}
65
}
53
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
54
if (size == 3) {
67
tcg_temp_free_i32(tmp);
55
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
68
gen_lookup_tb(s);
56
index XXXXXXX..XXXXXXX 100644
69
break;
57
--- a/target/arm/vec_helper.c
70
+ case ARM_VFP_FPSCR_NZCVQC:
58
+++ b/target/arm/vec_helper.c
71
+ {
59
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
72
+ TCGv_i32 fpscr;
60
*/
73
+ tmp = loadfn(s, opaque);
61
74
+ /*
62
#define DO_MUL_IDX(NAME, TYPE, H) \
75
+ * TODO: when we implement MVE, write the QC bit.
63
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
76
+ * For non-MVE, QC is RES0.
64
+{ \
77
+ */
65
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
66
+ intptr_t idx = simd_data(desc); \
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
67
+ TYPE *d = vd, *n = vn, *m = vm; \
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
68
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
69
+ TYPE mm = m[H(i + idx)]; \
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
70
+ for (j = 0; j < segment; j++) { \
83
+ tcg_temp_free_i32(tmp);
71
+ d[i + j] = n[i + j] * mm; \
84
+ break;
72
+ } \
85
+ }
73
+ } \
86
default:
74
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
87
g_assert_not_reached();
75
+}
88
}
76
+
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
77
+DO_MUL_IDX(gvec_mul_idx_h, uint16_t, H2)
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
78
+DO_MUL_IDX(gvec_mul_idx_s, uint32_t, H4)
91
storefn(s, opaque, tmp);
79
+DO_MUL_IDX(gvec_mul_idx_d, uint64_t, )
92
break;
80
+
93
+ case ARM_VFP_FPSCR_NZCVQC:
81
+#undef DO_MUL_IDX
94
+ /*
82
+
95
+ * TODO: MVE has a QC bit, which we probably won't store
83
+#define DO_FMUL_IDX(NAME, TYPE, H) \
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
84
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
97
+ * we can just fall through to the FPSCR_NZCV case.
85
{ \
98
+ */
86
intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
99
case QEMU_VFP_FPSCR_NZCV:
87
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
100
/*
88
clear_tail(d, oprsz, simd_maxsz(desc)); \
101
* Read just NZCV; this is a special case to avoid the
89
}
90
91
-DO_MUL_IDX(gvec_fmul_idx_h, float16, H2)
92
-DO_MUL_IDX(gvec_fmul_idx_s, float32, H4)
93
-DO_MUL_IDX(gvec_fmul_idx_d, float64, )
94
+DO_FMUL_IDX(gvec_fmul_idx_h, float16, H2)
95
+DO_FMUL_IDX(gvec_fmul_idx_s, float32, H4)
96
+DO_FMUL_IDX(gvec_fmul_idx_d, float64, )
97
98
-#undef DO_MUL_IDX
99
+#undef DO_FMUL_IDX
100
101
#define DO_FMLA_IDX(NAME, TYPE, H) \
102
void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
103
--
102
--
104
2.20.1
103
2.20.1
105
104
106
105
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
2
in the previous commit; use it in a couple of places in existing code,
3
where we're masking out everything except NZCV for the "load to Rt=15
4
sets CPSR.NZCV" special case.
2
5
3
To quickly notice the access size, display the value with the
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
width of the access (i.e. 16-bit access is displayed 0x0000,
5
while 8-bit access 0x00).
6
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
9
Message-id: 20200812190206.31595-3-f4bug@amsat.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
hw/misc/unimp.c | 4 ++--
10
target/arm/translate-vfp.c.inc | 4 ++--
13
1 file changed, 2 insertions(+), 2 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
14
12
15
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/unimp.c
15
--- a/target/arm/translate-vfp.c.inc
18
+++ b/hw/misc/unimp.c
16
+++ b/target/arm/translate-vfp.c.inc
19
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
17
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
20
18
* helper call for the "VMRS to CPSR.NZCV" insn.
21
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
19
*/
22
"(size %d, offset 0x%" HWADDR_PRIx
20
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
23
- ", value 0x%" PRIx64 ")\n",
21
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
24
- s->name, size, offset, value);
22
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
25
+ ", value 0x%0*" PRIx64 ")\n",
23
storefn(s, opaque, tmp);
26
+ s->name, size, offset, size << 1, value);
24
break;
27
}
25
default:
28
26
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
29
static const MemoryRegionOps unimp_ops = {
27
case ARM_VFP_FPSCR:
28
if (a->rt == 15) {
29
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
30
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
31
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
32
} else {
33
tmp = tcg_temp_new_i32();
34
gen_helper_vfp_get_fpscr(tmp, cpu_env);
30
--
35
--
31
2.20.1
36
2.20.1
32
37
33
38
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Factor out the code which handles M-profile lazy FP state preservation
2
from full_vfp_access_check(); accesses to the FPCXT_NS register are
3
a special case which need to do just this part (corresponding in the
4
pseudocode to the PreserveFPState() function), and not the full
5
set of actions matching the pseudocode ExecuteFPCheck() which
6
normal FP instructions need to do.
2
7
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20200815013145.539409-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
7
---
12
---
8
target/arm/translate-sve.c | 53 +++++++++++++-------------------------
13
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
9
1 file changed, 18 insertions(+), 35 deletions(-)
14
1 file changed, 27 insertions(+), 18 deletions(-)
10
15
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
18
--- a/target/arm/translate-vfp.c.inc
14
+++ b/target/arm/translate-sve.c
19
+++ b/target/arm/translate-vfp.c.inc
15
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
16
return size_for_gvec(pred_full_reg_size(s));
21
return offs;
17
}
22
}
18
23
19
+/* Invoke an out-of-line helper on 3 Zregs. */
24
+/*
20
+static void gen_gvec_ool_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
25
+ * Generate code for M-profile lazy FP state preservation if needed;
21
+ int rd, int rn, int rm, int data)
26
+ * this corresponds to the pseudocode PreserveFPState() function.
27
+ */
28
+static void gen_preserve_fp_state(DisasContext *s)
22
+{
29
+{
23
+ unsigned vsz = vec_full_reg_size(s);
30
+ if (s->v7m_lspact) {
24
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
31
+ /*
25
+ vec_full_reg_offset(s, rn),
32
+ * Lazy state saving affects external memory and also the NVIC,
26
+ vec_full_reg_offset(s, rm),
33
+ * so we must mark it as an IO operation for icount (and cause
27
+ vsz, vsz, data, fn);
34
+ * this to be the last insn in the TB).
35
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
38
+ gen_io_start();
39
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
41
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
43
+ * then it will clear LSPACT; we don't need to repeat this for
44
+ * any further FP insns in this TB.
45
+ */
46
+ s->v7m_lspact = false;
47
+ }
28
+}
48
+}
29
+
49
+
30
/* Invoke an out-of-line helper on 2 Zregs and a predicate. */
50
/*
31
static void gen_gvec_ool_zzp(DisasContext *s, gen_helper_gvec_3 *fn,
51
* Check that VFP access is enabled. If it is, do the necessary
32
int rd, int rn, int pg, int data)
52
* M-profile lazy-FP handling and then return true.
33
@@ -XXX,XX +XXX,XX @@ static bool do_zzw_ool(DisasContext *s, arg_rrr_esz *a, gen_helper_gvec_3 *fn)
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
34
return false;
54
/* Handle M-profile lazy FP state mechanics */
35
}
55
36
if (sve_access_check(s)) {
56
/* Trigger lazy-state preservation if necessary */
37
- unsigned vsz = vec_full_reg_size(s);
57
- if (s->v7m_lspact) {
38
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
58
- /*
39
- vec_full_reg_offset(s, a->rn),
59
- * Lazy state saving affects external memory and also the NVIC,
40
- vec_full_reg_offset(s, a->rm),
60
- * so we must mark it as an IO operation for icount (and cause
41
- vsz, vsz, 0, fn);
61
- * this to be the last insn in the TB).
42
+ gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, 0);
62
- */
43
}
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
44
return true;
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
45
}
65
- gen_io_start();
46
@@ -XXX,XX +XXX,XX @@ static bool trans_RDVL(DisasContext *s, arg_RDVL *a)
66
- }
47
static bool do_adr(DisasContext *s, arg_rrri *a, gen_helper_gvec_3 *fn)
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
48
{
68
- /*
49
if (sve_access_check(s)) {
69
- * If the preserve_fp_state helper doesn't throw an exception
50
- unsigned vsz = vec_full_reg_size(s);
70
- * then it will clear LSPACT; we don't need to repeat this for
51
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
71
- * any further FP insns in this TB.
52
- vec_full_reg_offset(s, a->rn),
72
- */
53
- vec_full_reg_offset(s, a->rm),
73
- s->v7m_lspact = false;
54
- vsz, vsz, a->imm, fn);
74
- }
55
+ gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, a->imm);
75
+ gen_preserve_fp_state(s);
56
}
76
57
return true;
77
/* Update ownership of FP context: set FPCCR.S to match current state */
58
}
78
if (s->v8m_fpccr_s_wrong) {
59
@@ -XXX,XX +XXX,XX @@ static bool trans_FTSSEL(DisasContext *s, arg_rrr_esz *a)
60
return false;
61
}
62
if (sve_access_check(s)) {
63
- unsigned vsz = vec_full_reg_size(s);
64
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
65
- vec_full_reg_offset(s, a->rn),
66
- vec_full_reg_offset(s, a->rm),
67
- vsz, vsz, 0, fns[a->esz]);
68
+ gen_gvec_ool_zzz(s, fns[a->esz], a->rd, a->rn, a->rm, 0);
69
}
70
return true;
71
}
72
@@ -XXX,XX +XXX,XX @@ static bool trans_TBL(DisasContext *s, arg_rrr_esz *a)
73
};
74
75
if (sve_access_check(s)) {
76
- unsigned vsz = vec_full_reg_size(s);
77
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
78
- vec_full_reg_offset(s, a->rn),
79
- vec_full_reg_offset(s, a->rm),
80
- vsz, vsz, 0, fns[a->esz]);
81
+ gen_gvec_ool_zzz(s, fns[a->esz], a->rd, a->rn, a->rm, 0);
82
}
83
return true;
84
}
85
@@ -XXX,XX +XXX,XX @@ static bool do_zzz_data_ool(DisasContext *s, arg_rrr_esz *a, int data,
86
gen_helper_gvec_3 *fn)
87
{
88
if (sve_access_check(s)) {
89
- unsigned vsz = vec_full_reg_size(s);
90
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
91
- vec_full_reg_offset(s, a->rn),
92
- vec_full_reg_offset(s, a->rm),
93
- vsz, vsz, data, fn);
94
+ gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, data);
95
}
96
return true;
97
}
98
@@ -XXX,XX +XXX,XX @@ static bool trans_DOT_zzz(DisasContext *s, arg_DOT_zzz *a)
99
};
100
101
if (sve_access_check(s)) {
102
- unsigned vsz = vec_full_reg_size(s);
103
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
104
- vec_full_reg_offset(s, a->rn),
105
- vec_full_reg_offset(s, a->rm),
106
- vsz, vsz, 0, fns[a->u][a->sz]);
107
+ gen_gvec_ool_zzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm, 0);
108
}
109
return true;
110
}
111
@@ -XXX,XX +XXX,XX @@ static bool trans_DOT_zzx(DisasContext *s, arg_DOT_zzx *a)
112
};
113
114
if (sve_access_check(s)) {
115
- unsigned vsz = vec_full_reg_size(s);
116
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
117
- vec_full_reg_offset(s, a->rn),
118
- vec_full_reg_offset(s, a->rm),
119
- vsz, vsz, a->index, fns[a->u][a->sz]);
120
+ gen_gvec_ool_zzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm, a->index);
121
}
122
return true;
123
}
124
--
79
--
125
2.20.1
80
2.20.1
126
81
127
82
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the new-in-v8.1M FPCXT_S floating point system register.
2
This is for saving and restoring the secure floating point context,
3
and it reads and writes bits [27:0] from the FPSCR and the
4
CONTROL.SFPA bit in bit [31].
2
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20200815013145.539409-20-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
7
---
9
---
8
target/arm/helper.h | 14 ++++++++++++++
10
target/arm/translate-vfp.c.inc | 58 ++++++++++++++++++++++++++++++++++
9
target/arm/translate-a64.c | 34 ++++++++++++++++++++++++++++++++++
11
1 file changed, 58 insertions(+)
10
target/arm/vec_helper.c | 25 +++++++++++++++++++++++++
11
3 files changed, 73 insertions(+)
12
12
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
15
--- a/target/arm/translate-vfp.c.inc
16
+++ b/target/arm/helper.h
16
+++ b/target/arm/translate-vfp.c.inc
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_mul_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
17
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
18
DEF_HELPER_FLAGS_4(gvec_mul_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
return false;
19
DEF_HELPER_FLAGS_4(gvec_mul_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
21
+DEF_HELPER_FLAGS_5(gvec_mla_idx_h, TCG_CALL_NO_RWG,
22
+ void, ptr, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_5(gvec_mla_idx_s, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(gvec_mla_idx_d, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+
28
+DEF_HELPER_FLAGS_5(gvec_mls_idx_h, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(gvec_mls_idx_s, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(gvec_mls_idx_d, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+
35
#ifdef TARGET_AARCH64
36
#include "helper-a64.h"
37
#include "helper-sve.h"
38
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-a64.c
41
+++ b/target/arm/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
43
return;
44
}
19
}
45
break;
20
break;
46
+
21
+ case ARM_VFP_FPCXT_S:
47
+ case 0x10: /* MLA */
22
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
48
+ if (!is_long && !is_scalar) {
23
+ return false;
49
+ static gen_helper_gvec_4 * const fns[3] = {
24
+ }
50
+ gen_helper_gvec_mla_idx_h,
25
+ if (!s->v8m_secure) {
51
+ gen_helper_gvec_mla_idx_s,
26
+ return false;
52
+ gen_helper_gvec_mla_idx_d,
53
+ };
54
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
55
+ vec_full_reg_offset(s, rn),
56
+ vec_full_reg_offset(s, rm),
57
+ vec_full_reg_offset(s, rd),
58
+ is_q ? 16 : 8, vec_full_reg_size(s),
59
+ index, fns[size - 1]);
60
+ return;
61
+ }
27
+ }
62
+ break;
28
+ break;
63
+
29
default:
64
+ case 0x14: /* MLS */
30
return FPSysRegCheckFailed;
65
+ if (!is_long && !is_scalar) {
31
}
66
+ static gen_helper_gvec_4 * const fns[3] = {
32
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
67
+ gen_helper_gvec_mls_idx_h,
33
tcg_temp_free_i32(tmp);
68
+ gen_helper_gvec_mls_idx_s,
34
break;
69
+ gen_helper_gvec_mls_idx_d,
35
}
70
+ };
36
+ case ARM_VFP_FPCXT_S:
71
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
37
+ {
72
+ vec_full_reg_offset(s, rn),
38
+ TCGv_i32 sfpa, control, fpscr;
73
+ vec_full_reg_offset(s, rm),
39
+ /* Set FPSCR[27:0] and CONTROL.SFPA from value */
74
+ vec_full_reg_offset(s, rd),
40
+ tmp = loadfn(s, opaque);
75
+ is_q ? 16 : 8, vec_full_reg_size(s),
41
+ sfpa = tcg_temp_new_i32();
76
+ index, fns[size - 1]);
42
+ tcg_gen_shri_i32(sfpa, tmp, 31);
77
+ return;
43
+ control = load_cpu_field(v7m.control[M_REG_S]);
78
+ }
44
+ tcg_gen_deposit_i32(control, control, sfpa,
45
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
46
+ store_cpu_field(control, v7m.control[M_REG_S]);
47
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
48
+ tcg_gen_andi_i32(fpscr, fpscr, FPCR_NZCV_MASK);
49
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
50
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
51
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
52
+ tcg_temp_free_i32(tmp);
53
+ tcg_temp_free_i32(sfpa);
79
+ break;
54
+ break;
55
+ }
56
default:
57
g_assert_not_reached();
80
}
58
}
81
59
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
82
if (size == 3) {
60
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
83
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
61
storefn(s, opaque, tmp);
84
index XXXXXXX..XXXXXXX 100644
62
break;
85
--- a/target/arm/vec_helper.c
63
+ case ARM_VFP_FPCXT_S:
86
+++ b/target/arm/vec_helper.c
64
+ {
87
@@ -XXX,XX +XXX,XX @@ DO_MUL_IDX(gvec_mul_idx_d, uint64_t, )
65
+ TCGv_i32 control, sfpa, fpscr;
88
66
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
89
#undef DO_MUL_IDX
67
+ tmp = tcg_temp_new_i32();
90
68
+ sfpa = tcg_temp_new_i32();
91
+#define DO_MLA_IDX(NAME, TYPE, OP, H) \
69
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
92
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
70
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
93
+{ \
71
+ control = load_cpu_field(v7m.control[M_REG_S]);
94
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
72
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
95
+ intptr_t idx = simd_data(desc); \
73
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
96
+ TYPE *d = vd, *n = vn, *m = vm, *a = va; \
74
+ tcg_gen_or_i32(tmp, tmp, sfpa);
97
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
75
+ tcg_temp_free_i32(sfpa);
98
+ TYPE mm = m[H(i + idx)]; \
76
+ /*
99
+ for (j = 0; j < segment; j++) { \
77
+ * Store result before updating FPSCR etc, in case
100
+ d[i + j] = a[i + j] OP n[i + j] * mm; \
78
+ * it is a memory write which causes an exception.
101
+ } \
79
+ */
102
+ } \
80
+ storefn(s, opaque, tmp);
103
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
81
+ /*
104
+}
82
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
105
+
83
+ * CONTROL.SFPA; so we'll end the TB here.
106
+DO_MLA_IDX(gvec_mla_idx_h, uint16_t, +, H2)
84
+ */
107
+DO_MLA_IDX(gvec_mla_idx_s, uint32_t, +, H4)
85
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
108
+DO_MLA_IDX(gvec_mla_idx_d, uint64_t, +, )
86
+ store_cpu_field(control, v7m.control[M_REG_S]);
109
+
87
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
110
+DO_MLA_IDX(gvec_mls_idx_h, uint16_t, -, H2)
88
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
111
+DO_MLA_IDX(gvec_mls_idx_s, uint32_t, -, H4)
89
+ tcg_temp_free_i32(fpscr);
112
+DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, )
90
+ gen_lookup_tb(s);
113
+
91
+ break;
114
+#undef DO_MLA_IDX
92
+ }
115
+
93
default:
116
#define DO_FMUL_IDX(NAME, TYPE, H) \
94
g_assert_not_reached();
117
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
95
}
118
{ \
119
--
96
--
120
2.20.1
97
2.20.1
121
98
122
99
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
2
gains new fields FZ16 (if half-precision floating point is supported)
3
and LTPSIZE (always reads as 4). Update the reset value and the code
4
that handles writes to this register accordingly.
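
In plain C the new write behaviour amounts to masking the value down to the
writable fields and pinning LTPSIZE; a standalone sketch (the constants
follow the FPCR_* definitions used in the patch, but the function itself is
illustrative, not the armv7m_nvic.c code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define FPCR_RMODE_MASK    (3u << 22)   /* rounding mode */
#define FPCR_FZ            (1u << 24)
#define FPCR_DN            (1u << 25)
#define FPCR_AHP           (1u << 26)
#define FPCR_FZ16          (1u << 19)   /* only writable if FP16 is implemented */
#define FPCR_LTPSIZE_SHIFT 16

static uint32_t fpdscr_write(uint32_t value, int have_fp16, int have_lob)
{
    uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;

    if (have_fp16) {
        mask |= FPCR_FZ16;
    }
    value &= mask;                          /* drop non-writable bits */
    if (have_lob) {
        value |= 4u << FPCR_LTPSIZE_SHIFT;  /* LTPSIZE always reads as 4 */
    }
    return value;
}

int main(void)
{
    printf("0x%08" PRIx32 "\n", fpdscr_write(0xffffffffu, 1, 1)); /* 0x07cc0000 */
    return 0;
}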
2
5
3
To better align the read/write accesses, display the value after
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
the offset (read accesses only display the offset).
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
9
---
10
target/arm/cpu.h | 5 +++++
11
hw/intc/armv7m_nvic.c | 9 ++++++++-
12
target/arm/cpu.c | 3 +++
13
3 files changed, 16 insertions(+), 1 deletion(-)
5
14
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20200812190206.31595-2-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/misc/unimp.c | 8 ++++----
12
1 file changed, 4 insertions(+), 4 deletions(-)
13
14
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/misc/unimp.c
17
--- a/target/arm/cpu.h
17
+++ b/hw/misc/unimp.c
18
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
19
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
19
{
20
#define FPCR_IXE (1 << 12) /* Inexact exception trap enable */
20
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
21
#define FPCR_IDE (1 << 15) /* Input Denormal exception trap enable */
21
22
#define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
22
- qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
23
+#define FPCR_RMODE_MASK (3 << 22) /* Rounding mode */
23
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
24
"(size %d, offset 0x%" HWADDR_PRIx ")\n",
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
25
s->name, size, offset);
26
+#define FPCR_AHP (1 << 26) /* Alternative half-precision */
26
return 0;
27
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
27
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
28
#define FPCR_V (1 << 28) /* FP overflow flag */
28
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
29
#define FPCR_C (1 << 29) /* FP carry flag */
29
30
#define FPCR_Z (1 << 30) /* FP zero flag */
30
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
31
#define FPCR_N (1 << 31) /* FP negative flag */
31
- "(size %d, value 0x%" PRIx64
32
32
- ", offset 0x%" HWADDR_PRIx ")\n",
33
+#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
33
- s->name, size, value, offset);
34
+#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
34
+ "(size %d, offset 0x%" HWADDR_PRIx
35
+
35
+ ", value 0x%" PRIx64 ")\n",
36
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
36
+ s->name, size, offset, value);
37
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
37
}
38
38
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
39
static const MemoryRegionOps unimp_ops = {
40
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/intc/armv7m_nvic.c
42
+++ b/hw/intc/armv7m_nvic.c
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
44
break;
45
case 0xf3c: /* FPDSCR */
46
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
47
- value &= 0x07c00000;
48
+ uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;
49
+ if (cpu_isar_feature(any_fp16, cpu)) {
50
+ mask |= FPCR_FZ16;
51
+ }
52
+ value &= mask;
53
+ if (cpu_isar_feature(aa32_lob, cpu)) {
54
+ value |= 4 << FPCR_LTPSIZE_SHIFT;
55
+ }
56
cpu->env.v7m.fpdscr[attrs.secure] = value;
57
}
58
break;
59
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/cpu.c
62
+++ b/target/arm/cpu.c
63
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
64
* always reset to 4.
65
*/
66
env->v7m.ltpsize = 4;
67
+ /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
68
+ env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
69
+ env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
70
}
71
72
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
40
--
73
--
41
2.20.1
74
2.20.1
42
75
43
76
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
2
are zeroed for an exception taken to Non-secure state; for an
3
exception taken to Secure state they become UNKNOWN, and we chose to
4
leave them at their previous values.
2
5
3
We want to assert the device is not realized. To avoid overloading
6
In v8.1M the behaviour is specified more tightly and these registers
4
this header including "hw/qdev-core.h", uninline the function first.
7
are always zeroed regardless of the security state that the exception
8
targets (see rule R_KPZV). Implement this.
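
(In outline -- a condensed restatement of the v7m_exception_taken() hunks below, not a drop-in replacement -- the entry-time clearing becomes:)

  if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
      /* v8.1M: always clear the caller-saved registers; r4..r11 are
       * cleared only when a Secure background state takes a
       * Non-secure exception. */
      bool zero_callee_saves = !targets_secure && (lr & R_V7M_EXCRET_S_MASK);
      for (i = 0; i < 13; i++) {
          if (i < 4 || i > 11 || zero_callee_saves) {
              env->regs[i] = 0;
          }
      }
  }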
5
9
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Message-id: 20200803105647.22223-4-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
10
---
13
---
11
include/hw/qdev-clock.h | 6 +-----
14
target/arm/m_helper.c | 16 ++++++++++++----
12
hw/core/qdev-clock.c | 5 +++++
15
1 file changed, 12 insertions(+), 4 deletions(-)
13
2 files changed, 6 insertions(+), 5 deletions(-)
14
16
15
diff --git a/include/hw/qdev-clock.h b/include/hw/qdev-clock.h
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/qdev-clock.h
19
--- a/target/arm/m_helper.c
18
+++ b/include/hw/qdev-clock.h
20
+++ b/target/arm/m_helper.c
19
@@ -XXX,XX +XXX,XX @@ Clock *qdev_get_clock_out(DeviceState *dev, const char *name);
21
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
20
* Set the source clock of input clock @name of device @dev to @source.
22
* Clear registers if necessary to prevent non-secure exception
21
* @source period update will be propagated to @name clock.
23
* code being able to see register values from secure code.
22
*/
24
* Where register values become architecturally UNKNOWN we leave
23
-static inline void qdev_connect_clock_in(DeviceState *dev, const char *name,
25
- * them with their previous values.
24
- Clock *source)
26
+ * them with their previous values. v8.1M is tighter than v8.0M
25
-{
27
+ * here and always zeroes the caller-saved registers regardless
26
- clock_set_source(qdev_get_clock_in(dev, name), source);
28
+ * of the security state the exception is targeting.
27
-}
29
*/
28
+void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source);
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
29
31
- if (!targets_secure) {
30
/**
32
+ if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
31
* qdev_alias_clock:
33
/*
32
diff --git a/hw/core/qdev-clock.c b/hw/core/qdev-clock.c
34
* Always clear the caller-saved registers (they have been
33
index XXXXXXX..XXXXXXX 100644
35
* pushed to the stack earlier in v7m_push_stack()).
34
--- a/hw/core/qdev-clock.c
36
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
35
+++ b/hw/core/qdev-clock.c
37
* v7m_push_callee_stack()).
36
@@ -XXX,XX +XXX,XX @@ Clock *qdev_alias_clock(DeviceState *dev, const char *name,
38
*/
37
39
int i;
38
return ncl->clock;
40
+ /*
39
}
41
+ * r4..r11 are callee-saves, zero only if background
40
+
42
+ * state was Secure (EXCRET.S == 1) and exception
41
+void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source)
43
+ * targets Non-secure state
42
+{
44
+ */
43
+ clock_set_source(qdev_get_clock_in(dev, name), source);
45
+ bool zero_callee_saves = !targets_secure &&
44
+}
46
+ (lr & R_V7M_EXCRET_S_MASK);
47
48
for (i = 0; i < 13; i++) {
49
- /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
50
- if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
51
+ if (i < 4 || i > 11 || zero_callee_saves) {
52
env->regs[i] = 0;
53
}
54
}
45
--
55
--
46
2.20.1
56
2.20.1
47
57
48
58
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
2
R_LLRP). (In previous versions of the architecture this was either
3
required or IMPDEF.)
2
4
3
Clock canonical name is set in device_set_realized (see the block
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
added to hw/core/qdev.c in commit 0e6934f264).
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
If we connect a clock after the device is realized, this code is
7
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
6
not executed. This is currently not a problem as this name is only
8
---
7
used for trace events, however this disrupts tracing.
9
target/arm/m_helper.c | 6 +++++-
10
1 file changed, 5 insertions(+), 1 deletion(-)
8
11
9
Add a comment documenting that qdev_connect_clock_in() must be called
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
10
before the device is realized, and assert this condition.
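
(A usage sketch, with made-up device, clock, and variable names; the point is simply the ordering that the new assertion enforces:)

  dev = qdev_new("some-device");                       /* hypothetical device */
  qdev_connect_clock_in(dev, "clk-in", source_clock);  /* before realize */
  sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);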
11
12
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Message-id: 20200803105647.22223-5-f4bug@amsat.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
include/hw/qdev-clock.h | 2 ++
18
hw/core/qdev-clock.c | 1 +
19
2 files changed, 3 insertions(+)
20
21
diff --git a/include/hw/qdev-clock.h b/include/hw/qdev-clock.h
22
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/qdev-clock.h
14
--- a/target/arm/m_helper.c
24
+++ b/include/hw/qdev-clock.h
15
+++ b/target/arm/m_helper.c
25
@@ -XXX,XX +XXX,XX @@ Clock *qdev_get_clock_out(DeviceState *dev, const char *name);
16
@@ -XXX,XX +XXX,XX @@ load_fail:
26
*
17
* The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
27
* Set the source clock of input clock @name of device @dev to @source.
18
* secure); otherwise it targets the same security state as the
28
* @source period update will be propagated to @name clock.
19
* underlying exception.
29
+ *
20
+ * In v8.1M HardFaults from vector table fetch fails don't set FORCED.
30
+ * Must be called before @dev is realized.
21
*/
31
*/
22
if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
32
void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source);
23
exc_secure = true;
33
24
}
34
diff --git a/hw/core/qdev-clock.c b/hw/core/qdev-clock.c
25
- env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
35
index XXXXXXX..XXXXXXX 100644
26
+ env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK;
36
--- a/hw/core/qdev-clock.c
27
+ if (!arm_feature(env, ARM_FEATURE_V8_1M)) {
37
+++ b/hw/core/qdev-clock.c
28
+ env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
38
@@ -XXX,XX +XXX,XX @@ Clock *qdev_alias_clock(DeviceState *dev, const char *name,
29
+ }
39
30
armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
40
void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source)
31
return false;
41
{
42
+ assert(!dev->realized);
43
clock_set_source(qdev_get_clock_in(dev, name), source);
44
}
32
}
45
--
33
--
46
2.20.1
34
2.20.1
47
35
48
36
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
In v8.1M a REVIDR register is defined, which is at address 0xe000ecfc
2
and is a read-only IMPDEF register providing implementation specific
3
minor revision information, like the v8A REVIDR_EL1. Implement this.
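
(Illustration only, guest-side, with a hypothetical macro; the address is the architectural one corresponding to the 0xcfc offset handled below:)

  #define REVIDR (*(volatile uint32_t *)0xE000ECFCUL)
  uint32_t rev = REVIDR;   /* returns the CPU's IMPDEF cpu->revidr value */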
2
4
3
Clock canonical name is set in device_set_realized (see the block
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
added to hw/core/qdev.c in commit 0e6934f264).
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
If we connect a clock after the device is realized, this code is
7
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
6
not executed. This is currently not a problem as this name is only
8
---
7
used for trace events, however this disrupts tracing.
9
hw/intc/armv7m_nvic.c | 5 +++++
10
1 file changed, 5 insertions(+)
8
11
9
Fix by calling qdev_connect_clock_in() before realizing.
12
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
10
11
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
13
Message-id: 20200803105647.22223-3-f4bug@amsat.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
hw/arm/xilinx_zynq.c | 18 +++++++++---------
17
1 file changed, 9 insertions(+), 9 deletions(-)
18
19
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
20
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/xilinx_zynq.c
14
--- a/hw/intc/armv7m_nvic.c
22
+++ b/hw/arm/xilinx_zynq.c
15
+++ b/hw/intc/armv7m_nvic.c
23
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
16
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
24
1, 0x0066, 0x0022, 0x0000, 0x0000, 0x0555, 0x2aa,
17
}
25
0);
18
return val;
26
19
}
27
- /* Create slcr, keep a pointer to connect clocks */
20
+ case 0xcfc:
28
- slcr = qdev_new("xilinx,zynq_slcr");
21
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
29
- sysbus_realize_and_unref(SYS_BUS_DEVICE(slcr), &error_fatal);
22
+ goto bad_offset;
30
- sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
23
+ }
31
-
24
+ return cpu->revidr;
32
/* Create the main clock source, and feed slcr with it */
25
case 0xd00: /* CPUID Base. */
33
zynq_machine->ps_clk = CLOCK(object_new(TYPE_CLOCK));
26
return cpu->midr;
34
object_property_add_child(OBJECT(zynq_machine), "ps_clk",
27
case 0xd04: /* Interrupt Control State (ICSR) */
35
OBJECT(zynq_machine->ps_clk));
36
object_unref(OBJECT(zynq_machine->ps_clk));
37
clock_set_hz(zynq_machine->ps_clk, PS_CLK_FREQUENCY);
38
+
39
+ /* Create slcr, keep a pointer to connect clocks */
40
+ slcr = qdev_new("xilinx,zynq_slcr");
41
qdev_connect_clock_in(slcr, "ps_clk", zynq_machine->ps_clk);
42
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(slcr), &error_fatal);
43
+ sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
44
45
dev = qdev_new(TYPE_A9MPCORE_PRIV);
46
qdev_prop_set_uint32(dev, "num-cpu", 1);
47
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
48
dev = qdev_new(TYPE_CADENCE_UART);
49
busdev = SYS_BUS_DEVICE(dev);
50
qdev_prop_set_chr(dev, "chardev", serial_hd(0));
51
+ qdev_connect_clock_in(dev, "refclk",
52
+ qdev_get_clock_out(slcr, "uart0_ref_clk"));
53
sysbus_realize_and_unref(busdev, &error_fatal);
54
sysbus_mmio_map(busdev, 0, 0xE0000000);
55
sysbus_connect_irq(busdev, 0, pic[59 - IRQ_OFFSET]);
56
- qdev_connect_clock_in(dev, "refclk",
57
- qdev_get_clock_out(slcr, "uart0_ref_clk"));
58
dev = qdev_new(TYPE_CADENCE_UART);
59
busdev = SYS_BUS_DEVICE(dev);
60
qdev_prop_set_chr(dev, "chardev", serial_hd(1));
61
+ qdev_connect_clock_in(dev, "refclk",
62
+ qdev_get_clock_out(slcr, "uart1_ref_clk"));
63
sysbus_realize_and_unref(busdev, &error_fatal);
64
sysbus_mmio_map(busdev, 0, 0xE0001000);
65
sysbus_connect_irq(busdev, 0, pic[82 - IRQ_OFFSET]);
66
- qdev_connect_clock_in(dev, "refclk",
67
- qdev_get_clock_out(slcr, "uart1_ref_clk"));
68
69
sysbus_create_varargs("cadence_ttc", 0xF8001000,
70
pic[42-IRQ_OFFSET], pic[43-IRQ_OFFSET], pic[44-IRQ_OFFSET], NULL);
71
--
28
--
72
2.20.1
29
2.20.1
73
30
74
31
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
In v8.1M a new exception return check is added which may cause a NOCP
2
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
3
we must check whether access to CP10 from the Security state of the
4
returning exception is disabled; if it is then we must take a fault.
2
5
3
Avoid propagating the clock change when the clock does not change.
6
(Note that for our implementation CPPWR is always RAZ/WI and so can
7
never cause CP10 accesses to fail.)
4
8
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
The other v8.1M change to this register-clearing code is that if MVE
10
is implemented VPR must also be cleared, so add a TODO comment to
11
that effect.
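
(Condensed sketch of the new check in do_v7m_exception_exit(), using the names from the m_helper.c hunk below:)

  if (arm_feature(env, ARM_FEATURE_V8_1M)) {
      bool nsacr_pass = exc_secure || extract32(env->v7m.nsacr, 10, 1);
      bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
      if (!nsacr_pass || !cpacr_pass) {
          /* pend a NOCP UsageFault on the existing stack frame */
      }
  }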
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200806123858.30058-4-f4bug@amsat.org
15
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
16
---
10
include/hw/clock.h | 5 +++--
17
target/arm/m_helper.c | 22 +++++++++++++++++++++-
11
1 file changed, 3 insertions(+), 2 deletions(-)
18
1 file changed, 21 insertions(+), 1 deletion(-)
12
19
13
diff --git a/include/hw/clock.h b/include/hw/clock.h
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
14
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/clock.h
22
--- a/target/arm/m_helper.c
16
+++ b/include/hw/clock.h
23
+++ b/target/arm/m_helper.c
17
@@ -XXX,XX +XXX,XX @@ void clock_propagate(Clock *clk);
24
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
18
*/
25
v7m_exception_taken(cpu, excret, true, false);
19
static inline void clock_update(Clock *clk, uint64_t value)
26
return;
20
{
27
} else {
21
- clock_set(clk, value);
28
- /* Clear s0..s15 and FPSCR */
22
- clock_propagate(clk);
29
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
23
+ if (clock_set(clk, value)) {
30
+ /* v8.1M adds this NOCP check */
24
+ clock_propagate(clk);
31
+ bool nsacr_pass = exc_secure ||
25
+ }
32
+ extract32(env->v7m.nsacr, 10, 1);
26
}
33
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
27
34
+ if (!nsacr_pass) {
28
static inline void clock_update_hz(Clock *clk, unsigned hz)
35
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
36
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
37
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
38
+ "stackframe: NSACR prevents clearing FPU registers\n");
39
+ v7m_exception_taken(cpu, excret, true, false);
40
+ } else if (!cpacr_pass) {
41
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
42
+ exc_secure);
43
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
44
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
45
+ "stackframe: CPACR prevents clearing FPU registers\n");
46
+ v7m_exception_taken(cpu, excret, true, false);
47
+ }
48
+ }
49
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
50
int i;
51
52
for (i = 0; i < 16; i += 2) {
29
--
53
--
30
2.20.1
54
2.20.1
31
55
32
56
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
2
The only difference is that:
3
* the old T1 encodings UNDEF if the implementation implements 32
4
Dregs (this is currently architecturally impossible for M-profile)
5
* the new T2 encodings have the implementation-defined option to
6
read from memory (discarding the data) or write UNKNOWN values to
7
memory for the stack slots that would be D16-D31
2
8
3
We want to ensure that access is checked by the time we ask
9
We choose not to make those accesses, so for us the two
4
for a specific fp/vector register. We want to ensure that
10
instructions behave identically assuming they don't UNDEF.
5
we do not emit two lots of code to raise an exception.
6
11
7
But sometimes it's difficult to cleanly organize the code
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
such that we pass through sve_access_check exactly once.
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Allow multiple calls so long as the result is true, that is,
14
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
10
no exception to be raised.
15
---
16
target/arm/m-nocp.decode | 2 +-
17
target/arm/translate-vfp.c.inc | 25 +++++++++++++++++++++++++
18
2 files changed, 26 insertions(+), 1 deletion(-)
11
19
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 20200815013145.539409-5-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
target/arm/translate.h | 1 +
18
target/arm/translate-a64.c | 27 ++++++++++++++++-----------
19
2 files changed, 17 insertions(+), 11 deletions(-)
20
21
diff --git a/target/arm/translate.h b/target/arm/translate.h
22
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/translate.h
22
--- a/target/arm/m-nocp.decode
24
+++ b/target/arm/translate.h
23
+++ b/target/arm/m-nocp.decode
25
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
24
@@ -XXX,XX +XXX,XX @@
26
* that it is set at the point where we actually touch the FP regs.
25
27
*/
26
{
28
bool fp_access_checked;
27
# Special cases which do not take an early NOCP: VLLDM and VLSTM
29
+ bool sve_access_checked;
28
- VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
30
/* ARMv8 single-step state (this is distinct from the QEMU gdbstub
29
+ VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
31
* single-step support).
30
# VSCCLRM (new in v8.1M) is similar:
32
*/
31
VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
33
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
34
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-a64.c
35
--- a/target/arm/translate-vfp.c.inc
36
+++ b/target/arm/translate-a64.c
36
+++ b/target/arm/translate-vfp.c.inc
37
@@ -XXX,XX +XXX,XX @@ static void do_vec_ld(DisasContext *s, int destidx, int element,
37
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
38
* unallocated-encoding checks (otherwise the syndrome information
38
!arm_dc_feature(s, ARM_FEATURE_V8)) {
39
* for the resulting exception will be incorrect).
40
*/
41
-static inline bool fp_access_check(DisasContext *s)
42
+static bool fp_access_check(DisasContext *s)
43
{
44
- assert(!s->fp_access_checked);
45
- s->fp_access_checked = true;
46
+ if (s->fp_excp_el) {
47
+ assert(!s->fp_access_checked);
48
+ s->fp_access_checked = true;
49
50
- if (!s->fp_excp_el) {
51
- return true;
52
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
53
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
54
+ return false;
55
}
56
-
57
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
58
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
59
- return false;
60
+ s->fp_access_checked = true;
61
+ return true;
62
}
63
64
/* Check that SVE access is enabled. If it is, return true.
65
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
66
bool sve_access_check(DisasContext *s)
67
{
68
if (s->sve_excp_el) {
69
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_sve_access_trap(),
70
- s->sve_excp_el);
71
+ assert(!s->sve_access_checked);
72
+ s->sve_access_checked = true;
73
+
74
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
75
+ syn_sve_access_trap(), s->sve_excp_el);
76
return false;
39
return false;
77
}
40
}
78
+ s->sve_access_checked = true;
41
+
79
return fp_access_check(s);
42
+ if (a->op) {
80
}
43
+ /*
81
44
+ * T2 encoding ({D0-D31} reglist): v8.1M and up. We choose not
82
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
45
+ * to take the IMPDEF option to make memory accesses to the stack
83
s->base.pc_next += 4;
46
+ * slots that correspond to the D16-D31 registers (discarding
84
47
+ * read data and writing UNKNOWN values), so for us the T2
85
s->fp_access_checked = false;
48
+ * encoding behaves identically to the T1 encoding.
86
+ s->sve_access_checked = false;
49
+ */
87
50
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
88
if (dc_isar_feature(aa64_bti, s)) {
51
+ return false;
89
if (s->base.num_insns == 1) {
52
+ }
53
+ } else {
54
+ /*
55
+ * T1 encoding ({D0-D15} reglist); undef if we have 32 Dregs.
56
+ * This is currently architecturally impossible, but we add the
57
+ * check to stay in line with the pseudocode. Note that we must
58
+ * emit code for the UNDEF so it takes precedence over the NOCP.
59
+ */
60
+ if (dc_isar_feature(aa32_simd_r32, s)) {
61
+ unallocated_encoding(s);
62
+ return true;
63
+ }
64
+ }
65
+
66
/*
67
* If not secure, UNDEF. We must emit code for this
68
* rather than returning false so that this takes
90
--
69
--
91
2.20.1
70
2.20.1
92
71
93
72
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
checking for stack frame integrity signatures on SG instructions.
3
This bit is not banked, and is always RAZ/WI to Non-secure code.
4
Adjust the code for handling CCR reads and writes to handle this.
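
(Condensed sketch of the write-side change in nvic_writel() below; TRD only becomes writable via the Secure view of CCR:)

  mask = R_V7M_CCR_STKALIGN_MASK | /* ...other existing CCR bits... */
         R_V7M_CCR_NONBASETHRDENA_MASK;
  if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
      mask |= R_V7M_CCR_TRD_MASK;   /* RAZ/WI from the Non-secure view */
  }
  value &= mask;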
2
5
3
clock_init*() inline functions are simple wrappers around
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
clock_set*() and are not used. Remove them in favor of clock_set*().
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
9
---
10
target/arm/cpu.h | 2 ++
11
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
12
2 files changed, 20 insertions(+), 8 deletions(-)
5
13
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200806123858.30058-2-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/clock.h | 13 -------------
12
1 file changed, 13 deletions(-)
13
14
diff --git a/include/hw/clock.h b/include/hw/clock.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/clock.h
16
--- a/target/arm/cpu.h
17
+++ b/include/hw/clock.h
17
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static inline bool clock_is_enabled(const Clock *clk)
18
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
19
return clock_get(clk) != 0;
19
FIELD(V7M_CCR, DC, 16, 1)
20
}
20
FIELD(V7M_CCR, IC, 17, 1)
21
21
FIELD(V7M_CCR, BP, 18, 1)
22
-static inline void clock_init(Clock *clk, uint64_t value)
22
+FIELD(V7M_CCR, LOB, 19, 1)
23
-{
23
+FIELD(V7M_CCR, TRD, 20, 1)
24
- clock_set(clk, value);
24
25
-}
25
/* V7M SCR bits */
26
-static inline void clock_init_hz(Clock *clk, uint64_t value)
26
FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
27
-{
27
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
28
- clock_set_hz(clk, value);
28
index XXXXXXX..XXXXXXX 100644
29
-}
29
--- a/hw/intc/armv7m_nvic.c
30
-static inline void clock_init_ns(Clock *clk, uint64_t value)
30
+++ b/hw/intc/armv7m_nvic.c
31
-{
31
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
32
- clock_set_ns(clk, value);
32
}
33
-}
33
return cpu->env.v7m.scr[attrs.secure];
34
-
34
case 0xd14: /* Configuration Control. */
35
#endif /* QEMU_HW_CLOCK_H */
35
- /* The BFHFNMIGN bit is the only non-banked bit; we
36
- * keep it in the non-secure copy of the register.
37
+ /*
38
+ * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
39
+ * and TRD (stored in the S copy of the register)
40
*/
41
val = cpu->env.v7m.ccr[attrs.secure];
42
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
44
cpu->env.v7m.scr[attrs.secure] = value;
45
break;
46
case 0xd14: /* Configuration Control. */
47
+ {
48
+ uint32_t mask;
49
+
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
51
goto bad_offset;
52
}
53
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
56
- R_V7M_CCR_BFHFNMIGN_MASK |
57
- R_V7M_CCR_DIV_0_TRP_MASK |
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
59
- R_V7M_CCR_USERSETMPEND_MASK |
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
65
+ R_V7M_CCR_USERSETMPEND_MASK |
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
68
+ /* TRD is always RAZ/WI from NS */
69
+ mask |= R_V7M_CCR_TRD_MASK;
70
+ }
71
+ value &= mask;
72
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
77
cpu->env.v7m.ccr[attrs.secure] = value;
78
break;
79
+ }
80
case 0xd24: /* System Handler Control and State (SHCSR) */
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
82
goto bad_offset;
36
--
83
--
37
2.20.1
84
2.20.1
38
85
39
86
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
checking for stack frame integrity signatures on SG instructions.
3
Add the code in the SG insn implementation for the new behaviour.
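
(Condensed sketch of the new SG-time check in v7m_handle_execute_nsc(), using the names from the m_helper.c hunk below:)

  sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
  if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
      return false;                       /* a fault has already been pended */
  }
  if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
      if (((spdata & ~1) == 0xfefa125a) ||
          !(env->v7m.control[M_REG_S] & 1)) {
          goto gen_invep;                 /* integrity signature check failed */
      }
  }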
2
4
3
Let clock_set() return a boolean value indicating whether the clock
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
has been updated or not.
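
(Usage sketch: a caller can now skip propagation when nothing changed, which is what clock_update() does later in this series:)

  if (clock_set_ns(clk, period_ns)) {
      clock_propagate(clk);
  }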
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
8
---
9
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
10
1 file changed, 86 insertions(+)
5
11
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200806123858.30058-3-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/clock.h | 12 +++++++-----
12
hw/core/clock.c | 7 ++++++-
13
2 files changed, 13 insertions(+), 6 deletions(-)
14
15
diff --git a/include/hw/clock.h b/include/hw/clock.h
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/clock.h
14
--- a/target/arm/m_helper.c
18
+++ b/include/hw/clock.h
15
+++ b/target/arm/m_helper.c
19
@@ -XXX,XX +XXX,XX @@ void clock_set_source(Clock *clk, Clock *src);
16
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
20
* @value: the clock's value, 0 means unclocked
17
return true;
21
*
22
* Set the local cached period value of @clk to @value.
23
+ *
24
+ * @return: true if the clock is changed.
25
*/
26
-void clock_set(Clock *clk, uint64_t value);
27
+bool clock_set(Clock *clk, uint64_t value);
28
29
-static inline void clock_set_hz(Clock *clk, unsigned hz)
30
+static inline bool clock_set_hz(Clock *clk, unsigned hz)
31
{
32
- clock_set(clk, CLOCK_PERIOD_FROM_HZ(hz));
33
+ return clock_set(clk, CLOCK_PERIOD_FROM_HZ(hz));
34
}
18
}
35
19
36
-static inline void clock_set_ns(Clock *clk, unsigned ns)
20
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
37
+static inline bool clock_set_ns(Clock *clk, unsigned ns)
21
+ uint32_t addr, uint32_t *spdata)
38
{
22
+{
39
- clock_set(clk, CLOCK_PERIOD_FROM_NS(ns));
23
+ /*
40
+ return clock_set(clk, CLOCK_PERIOD_FROM_NS(ns));
24
+ * Read a word of data from the stack for the SG instruction,
41
}
25
+ * writing the value into *spdata. If the load succeeds, return
42
26
+ * true; otherwise pend an appropriate exception and return false.
43
/**
27
+ * (We can't use data load helpers here that throw an exception
44
diff --git a/hw/core/clock.c b/hw/core/clock.c
28
+ * because of the context we're called in, which is halfway through
45
index XXXXXXX..XXXXXXX 100644
29
+ * arm_v7m_cpu_do_interrupt().)
46
--- a/hw/core/clock.c
30
+ */
47
+++ b/hw/core/clock.c
31
+ CPUState *cs = CPU(cpu);
48
@@ -XXX,XX +XXX,XX @@ void clock_clear_callback(Clock *clk)
32
+ CPUARMState *env = &cpu->env;
49
clock_set_callback(clk, NULL, NULL);
33
+ MemTxAttrs attrs = {};
50
}
34
+ MemTxResult txres;
51
35
+ target_ulong page_size;
52
-void clock_set(Clock *clk, uint64_t period)
36
+ hwaddr physaddr;
53
+bool clock_set(Clock *clk, uint64_t period)
37
+ int prot;
54
{
38
+ ARMMMUFaultInfo fi = {};
55
+ if (clk->period == period) {
39
+ ARMCacheAttrs cacheattrs = {};
40
+ uint32_t value;
41
+
42
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
43
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
44
+ /* MPU/SAU lookup failed */
45
+ if (fi.type == ARMFault_QEMU_SFault) {
46
+ qemu_log_mask(CPU_LOG_INT,
47
+ "...SecureFault during stack word read\n");
48
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
49
+ env->v7m.sfar = addr;
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+ } else {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...MemManageFault during stack word read\n");
54
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
55
+ R_V7M_CFSR_MMARVALID_MASK;
56
+ env->v7m.mmfar[M_REG_S] = addr;
57
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
58
+ }
56
+ return false;
59
+ return false;
57
+ }
60
+ }
58
trace_clock_set(CLOCK_PATH(clk), CLOCK_PERIOD_TO_NS(clk->period),
61
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
59
CLOCK_PERIOD_TO_NS(period));
62
+ attrs, &txres);
60
clk->period = period;
63
+ if (txres != MEMTX_OK) {
64
+ /* BusFault trying to read the data */
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...BusFault during stack word read\n");
67
+ env->v7m.cfsr[M_REG_NS] |=
68
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
69
+ env->v7m.bfar = addr;
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
71
+ return false;
72
+ }
61
+
73
+
74
+ *spdata = value;
62
+ return true;
75
+ return true;
63
}
76
+}
64
77
+
65
static void clock_propagate_period(Clock *clk, bool call_callbacks)
78
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
79
{
80
/*
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
82
*/
83
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
84
", executing it\n", env->regs[15]);
85
+
86
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
87
+ !arm_v7m_is_handler_mode(env)) {
88
+ /*
89
+ * v8.1M exception stack frame integrity check. Note that we
90
+ * must perform the memory access even if CCR_S.TRD is zero
91
+ * and we aren't going to check what the data loaded is.
92
+ */
93
+ uint32_t spdata, sp;
94
+
95
+ /*
96
+ * We know we are currently NS, so the S stack pointers must be
97
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
98
+ */
99
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
100
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
101
+ /* Stack access failed and an exception has been pended */
102
+ return false;
103
+ }
104
+
105
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
106
+ if (((spdata & ~1) == 0xfefa125a) ||
107
+ !(env->v7m.control[M_REG_S] & 1)) {
108
+ goto gen_invep;
109
+ }
110
+ }
111
+ }
112
+
113
env->regs[14] &= ~1;
114
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
115
switch_v7m_security_state(env, true);
66
--
116
--
67
2.20.1
117
2.20.1
68
118
69
119
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In commit 077d7449100d824a4 we added code to handle the v8M
2
requirement that returns from NMI or HardFault forcibly deactivate
3
those exceptions regardless of what interrupt the guest is trying to
4
deactivate. Unfortunately this broke the handling of the "illegal
5
exception return because the returning exception number is not
6
active" check for those cases. In the pseudocode this test is done
7
on the exception the guest asks to return from, but because our
8
implementation was doing this in armv7m_nvic_complete_irq() after the
9
new "deactivate NMI/HardFault regardless" code we ended up doing the
10
test on the VecInfo for that exception instead, which usually meant
11
failing to raise the illegal exception return fault.
2
12
3
Move the check for !S into do_pppp_flags, which allows us to merge in
13
In the case for "configurable exception targeting the opposite
4
do_vecop4_p. Split out gen_gvec_fn_ppp without sve_access_check,
14
security state" we detected the illegal-return case but went ahead
5
to mirror gen_gvec_fn_zzz.
15
and deactivated the VecInfo anyway, which is wrong because that is
16
the VecInfo for the other security state.
6
17
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
18
Rearrange the code so that we first identify the illegal return
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
cases, then see if we really need to deactivate NMI or HardFault
9
Message-id: 20200815013145.539409-7-richard.henderson@linaro.org
20
instead, and finally do the deactivation.
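
(Sketch of the reordered armv7m_nvic_complete_irq() logic, as in the hunk below:)

  vec = (secure && exc_is_banked(irq)) ? &s->sec_vectors[irq] : &s->vectors[irq];
  if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
      ret = -1;      /* illegal: returning from the other security state's exception */
      vec = NULL;    /* so don't deactivate that VecInfo below */
  } else if (!vec->active) {
      ret = -1;      /* illegal: exception is not active */
  } else {
      ret = nvic_rettobase(s);
  }
  /* ...then the existing NMI/HardFault override and the deactivation... */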
21
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
11
---
25
---
12
target/arm/translate-sve.c | 111 ++++++++++++++-----------------------
26
hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
13
1 file changed, 43 insertions(+), 68 deletions(-)
27
1 file changed, 32 insertions(+), 27 deletions(-)
14
28
15
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
16
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-sve.c
31
--- a/hw/intc/armv7m_nvic.c
18
+++ b/target/arm/translate-sve.c
32
+++ b/hw/intc/armv7m_nvic.c
19
@@ -XXX,XX +XXX,XX @@ static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
33
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
20
}
21
22
/* Invoke a vector expander on three Pregs. */
23
-static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
24
- int esz, int rd, int rn, int rm)
25
+static void gen_gvec_fn_ppp(DisasContext *s, GVecGen3Fn *gvec_fn,
26
+ int rd, int rn, int rm)
27
{
34
{
28
- if (sve_access_check(s)) {
35
NVICState *s = (NVICState *)opaque;
29
- unsigned psz = pred_gvec_reg_size(s);
36
VecInfo *vec = NULL;
30
- gvec_fn(esz, pred_full_reg_offset(s, rd),
37
- int ret;
31
- pred_full_reg_offset(s, rn),
38
+ int ret = 0;
32
- pred_full_reg_offset(s, rm), psz, psz);
39
33
- }
40
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
34
- return true;
41
35
-}
42
+ trace_nvic_complete_irq(irq, secure);
36
-
43
+
37
-/* Invoke a vector operation on four Pregs. */
44
+ if (secure && exc_is_banked(irq)) {
38
-static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
45
+ vec = &s->sec_vectors[irq];
39
- int rd, int rn, int rm, int rg)
46
+ } else {
40
-{
47
+ vec = &s->vectors[irq];
41
- if (sve_access_check(s)) {
42
- unsigned psz = pred_gvec_reg_size(s);
43
- tcg_gen_gvec_4(pred_full_reg_offset(s, rd),
44
- pred_full_reg_offset(s, rn),
45
- pred_full_reg_offset(s, rm),
46
- pred_full_reg_offset(s, rg),
47
- psz, psz, gvec_op);
48
- }
49
- return true;
50
+ unsigned psz = pred_gvec_reg_size(s);
51
+ gvec_fn(MO_64, pred_full_reg_offset(s, rd),
52
+ pred_full_reg_offset(s, rn),
53
+ pred_full_reg_offset(s, rm), psz, psz);
54
}
55
56
/* Invoke a vector move on two Pregs. */
57
@@ -XXX,XX +XXX,XX @@ static bool do_pppp_flags(DisasContext *s, arg_rprr_s *a,
58
int mofs = pred_full_reg_offset(s, a->rm);
59
int gofs = pred_full_reg_offset(s, a->pg);
60
61
+ if (!a->s) {
62
+ tcg_gen_gvec_4(dofs, nofs, mofs, gofs, psz, psz, gvec_op);
63
+ return true;
64
+ }
48
+ }
65
+
49
+
66
if (psz == 8) {
50
+ /*
67
/* Do the operation and the flags generation in temps. */
51
+ * Identify illegal exception return cases. We can't immediately
68
TCGv_i64 pd = tcg_temp_new_i64();
52
+ * return at this point because we still need to deactivate
69
@@ -XXX,XX +XXX,XX @@ static bool trans_AND_pppp(DisasContext *s, arg_rprr_s *a)
53
+ * (either this exception or NMI/HardFault) first.
70
.fno = gen_helper_sve_and_pppp,
54
+ */
71
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
55
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
72
};
56
+ /*
73
- if (a->s) {
57
+ * Return from a configurable exception targeting the opposite
74
- return do_pppp_flags(s, a, &op);
58
+ * security state from the one we're trying to complete it for.
75
- } else if (a->rn == a->rm) {
59
+ * Clear vec because it's not really the VecInfo for this
76
- if (a->pg == a->rn) {
60
+ * (irq, secstate) so we mustn't deactivate it.
77
- return do_mov_p(s, a->rd, a->rn);
61
+ */
62
+ ret = -1;
63
+ vec = NULL;
64
+ } else if (!vec->active) {
65
+ /* Return from an inactive interrupt */
66
+ ret = -1;
67
+ } else {
68
+ /* Legal return, we will return the RETTOBASE bit value to the caller */
69
+ ret = nvic_rettobase(s);
70
+ }
71
+
72
/*
73
* For negative priorities, v8M will forcibly deactivate the appropriate
74
* NMI or HardFault regardless of what interrupt we're being asked to
75
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
76
}
77
78
if (!vec) {
79
- if (secure && exc_is_banked(irq)) {
80
- vec = &s->sec_vectors[irq];
78
- } else {
81
- } else {
79
- return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->pg);
82
- vec = &s->vectors[irq];
80
+
83
- }
81
+ if (!a->s) {
84
- }
82
+ if (!sve_access_check(s)) {
85
-
83
+ return true;
86
- trace_nvic_complete_irq(irq, secure);
84
+ }
87
-
85
+ if (a->rn == a->rm) {
88
- if (!vec->active) {
86
+ if (a->pg == a->rn) {
89
- /* Tell the caller this was an illegal exception return */
87
+ do_mov_p(s, a->rd, a->rn);
90
- return -1;
88
+ } else {
91
- }
89
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_and, a->rd, a->rn, a->pg);
92
-
90
+ }
93
- /*
91
+ return true;
94
- * If this is a configurable exception and it is currently
92
+ } else if (a->pg == a->rn || a->pg == a->rm) {
95
- * targeting the opposite security state from the one we're trying
93
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_and, a->rd, a->rn, a->rm);
96
- * to complete it for, this counts as an illegal exception return.
94
+ return true;
97
- * We still need to deactivate whatever vector the logic above has
95
}
98
- * selected, though, as it might not be the same as the one for the
96
- } else if (a->pg == a->rn || a->pg == a->rm) {
99
- * requested exception number.
97
- return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
100
- */
101
- if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
102
- ret = -1;
98
- } else {
103
- } else {
99
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
104
- ret = nvic_rettobase(s);
105
+ return ret;
100
}
106
}
101
+ return do_pppp_flags(s, a, &op);
107
102
}
108
vec->active = 0;
103
104
static void gen_bic_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
105
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_pppp(DisasContext *s, arg_rprr_s *a)
106
.fno = gen_helper_sve_bic_pppp,
107
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
108
};
109
- if (a->s) {
110
- return do_pppp_flags(s, a, &op);
111
- } else if (a->pg == a->rn) {
112
- return do_vector3_p(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
113
- } else {
114
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
115
+
116
+ if (!a->s && a->pg == a->rn) {
117
+ if (sve_access_check(s)) {
118
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_andc, a->rd, a->rn, a->rm);
119
+ }
120
+ return true;
121
}
122
+ return do_pppp_flags(s, a, &op);
123
}
124
125
static void gen_eor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
126
@@ -XXX,XX +XXX,XX @@ static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a)
127
.fno = gen_helper_sve_eor_pppp,
128
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
129
};
130
- if (a->s) {
131
- return do_pppp_flags(s, a, &op);
132
- } else {
133
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
134
- }
135
+ return do_pppp_flags(s, a, &op);
136
}
137
138
static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
139
@@ -XXX,XX +XXX,XX @@ static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a)
140
.fno = gen_helper_sve_sel_pppp,
141
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
142
};
143
+
144
if (a->s) {
145
return false;
146
- } else {
147
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
148
}
149
+ return do_pppp_flags(s, a, &op);
150
}
151
152
static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
153
@@ -XXX,XX +XXX,XX @@ static bool trans_ORR_pppp(DisasContext *s, arg_rprr_s *a)
154
.fno = gen_helper_sve_orr_pppp,
155
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
156
};
157
- if (a->s) {
158
- return do_pppp_flags(s, a, &op);
159
- } else if (a->pg == a->rn && a->rn == a->rm) {
160
+
161
+ if (!a->s && a->pg == a->rn && a->rn == a->rm) {
162
return do_mov_p(s, a->rd, a->rn);
163
- } else {
164
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
165
}
166
+ return do_pppp_flags(s, a, &op);
167
}
168
169
static void gen_orn_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
170
@@ -XXX,XX +XXX,XX @@ static bool trans_ORN_pppp(DisasContext *s, arg_rprr_s *a)
171
.fno = gen_helper_sve_orn_pppp,
172
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
173
};
174
- if (a->s) {
175
- return do_pppp_flags(s, a, &op);
176
- } else {
177
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
178
- }
179
+ return do_pppp_flags(s, a, &op);
180
}
181
182
static void gen_nor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
183
@@ -XXX,XX +XXX,XX @@ static bool trans_NOR_pppp(DisasContext *s, arg_rprr_s *a)
184
.fno = gen_helper_sve_nor_pppp,
185
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
};
187
- if (a->s) {
188
- return do_pppp_flags(s, a, &op);
189
- } else {
190
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
191
- }
192
+ return do_pppp_flags(s, a, &op);
193
}
194
195
static void gen_nand_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
196
@@ -XXX,XX +XXX,XX @@ static bool trans_NAND_pppp(DisasContext *s, arg_rprr_s *a)
197
.fno = gen_helper_sve_nand_pppp,
198
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
199
};
200
- if (a->s) {
201
- return do_pppp_flags(s, a, &op);
202
- } else {
203
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
204
- }
205
+ return do_pppp_flags(s, a, &op);
206
}
207
208
/*
209
--
109
--
210
2.20.1
110
2.20.1
211
111
212
112
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
For v8.1M the architecture mandates that CPUs must provide at
2
least the "minimal RAS implementation" from the Reliability,
3
Availability and Serviceability extension. This consists of:
4
* an ESB instruction which is a NOP
5
-- since it is in the HINT space we need only add a comment
6
* an RFSR register which will RAZ/WI
7
* a RAZ/WI AIRCR.IESB bit
8
-- the code which handles writes to AIRCR does not allow setting
9
of RES0 bits, so we already treat this as RAZ/WI; add a comment
10
noting that this is deliberate
11
* minimal implementation of the RAS register block at 0xe0005000
12
-- this will be in a subsequent commit
13
* setting the ID_PFR0.RAS field to 0b0010
14
-- we will do this when we add the Cortex-M55 CPU model
2
15
3
Model after gen_gvec_fn_zzz et al.
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
19
---
20
target/arm/cpu.h | 14 ++++++++++++++
21
target/arm/t32.decode | 4 ++++
22
hw/intc/armv7m_nvic.c | 13 +++++++++++++
23
3 files changed, 31 insertions(+)
4
24
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20200815013145.539409-11-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate-sve.c | 29 ++++++++++++++---------------
11
1 file changed, 14 insertions(+), 15 deletions(-)
12
13
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
14
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-sve.c
27
--- a/target/arm/cpu.h
16
+++ b/target/arm/translate-sve.c
28
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
18
return size_for_gvec(pred_full_reg_size(s));
30
FIELD(ID_MMFR4, CCIDX, 24, 4)
31
FIELD(ID_MMFR4, EVT, 28, 4)
32
33
+FIELD(ID_PFR0, STATE0, 0, 4)
34
+FIELD(ID_PFR0, STATE1, 4, 4)
35
+FIELD(ID_PFR0, STATE2, 8, 4)
36
+FIELD(ID_PFR0, STATE3, 12, 4)
37
+FIELD(ID_PFR0, CSV2, 16, 4)
38
+FIELD(ID_PFR0, AMU, 20, 4)
39
+FIELD(ID_PFR0, DIT, 24, 4)
40
+FIELD(ID_PFR0, RAS, 28, 4)
41
+
42
FIELD(ID_PFR1, PROGMOD, 0, 4)
43
FIELD(ID_PFR1, SECURITY, 4, 4)
44
FIELD(ID_PFR1, MPROGMOD, 8, 4)
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
46
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
19
}
47
}
20
48
21
+/* Invoke an out-of-line helper on 2 Zregs and a predicate. */
49
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
22
+static void gen_gvec_ool_zzp(DisasContext *s, gen_helper_gvec_3 *fn,
23
+ int rd, int rn, int pg, int data)
24
+{
50
+{
25
+ unsigned vsz = vec_full_reg_size(s);
51
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
26
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
27
+ vec_full_reg_offset(s, rn),
28
+ pred_full_reg_offset(s, pg),
29
+ vsz, vsz, data, fn);
30
+}
52
+}
31
+
53
+
32
/* Invoke an out-of-line helper on 3 Zregs and a predicate. */
54
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
33
static void gen_gvec_ool_zzzp(DisasContext *s, gen_helper_gvec_4 *fn,
55
{
34
int rd, int rn, int rm, int pg, int data)
56
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
35
@@ -XXX,XX +XXX,XX @@ static bool do_zpz_ool(DisasContext *s, arg_rpr_esz *a, gen_helper_gvec_3 *fn)
57
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
36
return false;
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/t32.decode
60
+++ b/target/arm/t32.decode
61
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
62
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
63
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
64
65
+ # For M-profile minimal-RAS ESB can be a NOP, which is the
66
+ # default behaviour since it is in the hint space.
67
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
68
+
69
# The canonical nop ends in 0000 0000, but the whole rest
70
# of the space is "reserved hint, behaves as nop".
71
NOP 1111 0011 1010 1111 1000 0000 ---- ----
72
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/intc/armv7m_nvic.c
75
+++ b/hw/intc/armv7m_nvic.c
76
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
77
return 0;
78
}
79
return cpu->env.v7m.sfar;
80
+ case 0xf04: /* RFSR */
81
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
82
+ goto bad_offset;
83
+ }
84
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
85
+ return 0;
86
case 0xf34: /* FPCCR */
87
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
88
return 0;
89
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
90
R_V7M_AIRCR_PRIGROUP_SHIFT,
91
R_V7M_AIRCR_PRIGROUP_LENGTH);
92
}
93
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
94
if (attrs.secure) {
95
/* These bits are only writable by secure */
96
cpu->env.v7m.aircr = value &
97
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
98
}
99
break;
37
}
100
}
38
if (sve_access_check(s)) {
101
+ case 0xf04: /* RFSR */
39
- unsigned vsz = vec_full_reg_size(s);
102
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
40
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
103
+ goto bad_offset;
41
- vec_full_reg_offset(s, a->rn),
104
+ }
42
- pred_full_reg_offset(s, a->pg),
105
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
43
- vsz, vsz, 0, fn);
106
+ break;
44
+ gen_gvec_ool_zzp(s, fn, a->rd, a->rn, a->pg, 0);
107
case 0xf34: /* FPCCR */
45
}
108
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
46
return true;
109
/* Not all bits here are banked. */
47
}
48
@@ -XXX,XX +XXX,XX @@ static bool do_movz_zpz(DisasContext *s, int rd, int rn, int pg,
49
};
50
51
if (sve_access_check(s)) {
52
- unsigned vsz = vec_full_reg_size(s);
53
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
54
- vec_full_reg_offset(s, rn),
55
- pred_full_reg_offset(s, pg),
56
- vsz, vsz, invert, fns[esz]);
57
+ gen_gvec_ool_zzp(s, fns[esz], rd, rn, pg, invert);
58
}
59
return true;
60
}
61
@@ -XXX,XX +XXX,XX @@ static bool do_zpzi_ool(DisasContext *s, arg_rpri_esz *a,
62
gen_helper_gvec_3 *fn)
63
{
64
if (sve_access_check(s)) {
65
- unsigned vsz = vec_full_reg_size(s);
66
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
67
- vec_full_reg_offset(s, a->rn),
68
- pred_full_reg_offset(s, a->pg),
69
- vsz, vsz, a->imm, fn);
70
+ gen_gvec_ool_zzp(s, fn, a->rd, a->rn, a->pg, a->imm);
71
}
72
return true;
73
}
74
--
110
--
75
2.20.1
111
2.20.1
76
112
77
113
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
The RAS feature has a block of memory-mapped registers at offset
2
0x5000 within the PPB. For a "minimal RAS" implementation we provide
3
no error records and so the only registers that exist in the block
4
are ERRIIDR and ERRDEVID.
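
(Illustration only, guest-side, with hypothetical macros; the addresses follow from the 0x5000 block offset and the 0xe10/0xfc8 register offsets implemented below:)

  #define ERRIIDR  (*(volatile uint32_t *)0xE0005E10UL)  /* reads 0x43b (Arm) */
  #define ERRDEVID (*(volatile uint32_t *)0xE0005FC8UL)  /* reads 0: no error records */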
2
5
3
Allow the device to execute the DMA transfers in a different
6
The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
4
AddressSpace.
7
of the "nvic-default" region is actually valid for minimal-RAS,
8
so the main benefit of providing an explicit implementation of
9
the register block is more accurate LOG_UNIMP messages, and a
10
framework for where we could add a real RAS implementation later
11
if necessary.
5
12
6
The A10 and H3 SoCs keep using the system_memory address space,
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
but via the proper dma_memory_access() API.
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
16
---
17
include/hw/intc/armv7m_nvic.h | 1 +
18
hw/intc/armv7m_nvic.c | 56 +++++++++++++++++++++++++++++++++++
19
2 files changed, 57 insertions(+)
8
20
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
10
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
11
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
12
Message-id: 20200814110057.307-1-f4bug@amsat.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/hw/sd/allwinner-sdhost.h | 6 ++++++
16
hw/arm/allwinner-a10.c | 2 ++
17
hw/arm/allwinner-h3.c | 2 ++
18
hw/sd/allwinner-sdhost.c | 37 ++++++++++++++++++++++++++------
19
4 files changed, 41 insertions(+), 6 deletions(-)
20
21
diff --git a/include/hw/sd/allwinner-sdhost.h b/include/hw/sd/allwinner-sdhost.h
22
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/sd/allwinner-sdhost.h
23
--- a/include/hw/intc/armv7m_nvic.h
24
+++ b/include/hw/sd/allwinner-sdhost.h
24
+++ b/include/hw/intc/armv7m_nvic.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct AwSdHostState {
25
@@ -XXX,XX +XXX,XX @@ struct NVICState {
26
/** Interrupt output signal to notify CPU */
26
MemoryRegion sysreg_ns_mem;
27
qemu_irq irq;
27
MemoryRegion systickmem;
28
28
MemoryRegion systick_ns_mem;
29
+ /** Memory region where DMA transfers are done */
29
+ MemoryRegion ras_mem;
30
+ MemoryRegion *dma_mr;
30
MemoryRegion container;
31
MemoryRegion defaultmem;
32
33
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/armv7m_nvic.c
36
+++ b/hw/intc/armv7m_nvic.c
37
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
+
+static MemTxResult ras_read(void *opaque, hwaddr addr,
+                            uint64_t *data, unsigned size,
+                            MemTxAttrs attrs)
+{
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+
+    switch (addr) {
+    case 0xe10: /* ERRIIDR */
+        /* architect field = Arm; product/variant/revision 0 */
+        *data = 0x43b;
+        break;
+    case 0xfc8: /* ERRDEVID */
+        /* Minimal RAS: we implement 0 error record indexes */
+        *data = 0;
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
+                      (uint32_t)addr);
+        *data = 0;
+        break;
+    }
+    return MEMTX_OK;
+}
+
+static MemTxResult ras_write(void *opaque, hwaddr addr,
+                             uint64_t value, unsigned size,
+                             MemTxAttrs attrs)
+{
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+
+    switch (addr) {
+    default:
+        qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
+                      (uint32_t)addr);
+        break;
+    }
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps ras_ops = {
+    .read_with_attrs = ras_read,
+    .write_with_attrs = ras_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
 /*
  * Unassigned portions of the PPB space are RAZ/WI for privileged
  * accesses, and fault for non-privileged accesses.
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
                                 &s->systick_ns_mem, 1);
     }
 
+    if (cpu_isar_feature(aa32_ras, s->cpu)) {
+        memory_region_init_io(&s->ras_mem, OBJECT(s),
+                              &ras_ops, s, "nvic_ras", 0x1000);
+        memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
+    }
+
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
 }
 
--
2.20.1


+
+    /** Address space used internally for DMA transfers */
+    AddressSpace dma_as;
+
     /** Number of bytes left in current DMA transfer */
     uint32_t transfer_cnt;
 
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-a10.c
+++ b/hw/arm/allwinner-a10.c
@@ -XXX,XX +XXX,XX @@ static void aw_a10_realize(DeviceState *dev, Error **errp)
     }
 
     /* SD/MMC */
+    object_property_set_link(OBJECT(&s->mmc0), "dma-memory",
+                             OBJECT(get_system_memory()), &error_fatal);
     sysbus_realize(SYS_BUS_DEVICE(&s->mmc0), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc0), 0, AW_A10_MMC0_BASE);
     sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc0), 0, qdev_get_gpio_in(dev, 32));
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-h3.c
+++ b/hw/arm/allwinner-h3.c
@@ -XXX,XX +XXX,XX @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
     sysbus_mmio_map(SYS_BUS_DEVICE(&s->sid), 0, s->memmap[AW_H3_SID]);
 
     /* SD/MMC */
+    object_property_set_link(OBJECT(&s->mmc0), "dma-memory",
+                             OBJECT(get_system_memory()), &error_fatal);
     sysbus_realize(SYS_BUS_DEVICE(&s->mmc0), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc0), 0, s->memmap[AW_H3_MMC0]);
     sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc0), 0,
diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/allwinner-sdhost.c
+++ b/hw/sd/allwinner-sdhost.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "qemu/units.h"
+#include "qapi/error.h"
 #include "sysemu/blockdev.h"
+#include "sysemu/dma.h"
+#include "hw/qdev-properties.h"
 #include "hw/irq.h"
 #include "hw/sd/allwinner-sdhost.h"
 #include "migration/vmstate.h"
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
     uint8_t buf[1024];
 
     /* Read descriptor */
-    cpu_physical_memory_read(desc_addr, desc, sizeof(*desc));
+    dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
     if (desc->size == 0) {
         desc->size = klass->max_desc_size;
     } else if (desc->size > klass->max_desc_size) {
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
 
         /* Write to SD bus */
         if (is_write) {
-            cpu_physical_memory_read((desc->addr & DESC_SIZE_MASK) + num_done,
-                                     buf, buf_bytes);
+            dma_memory_read(&s->dma_as,
+                            (desc->addr & DESC_SIZE_MASK) + num_done,
+                            buf, buf_bytes);
             sdbus_write_data(&s->sdbus, buf, buf_bytes);
 
         /* Read from SD bus */
         } else {
             sdbus_read_data(&s->sdbus, buf, buf_bytes);
-            cpu_physical_memory_write((desc->addr & DESC_SIZE_MASK) + num_done,
-                                      buf, buf_bytes);
+            dma_memory_write(&s->dma_as,
+                             (desc->addr & DESC_SIZE_MASK) + num_done,
+                             buf, buf_bytes);
         }
         num_done += buf_bytes;
     }
 
     /* Clear hold flag and flush descriptor */
     desc->status &= ~DESC_STATUS_HOLD;
-    cpu_physical_memory_write(desc_addr, desc, sizeof(*desc));
+    dma_memory_write(&s->dma_as, desc_addr, desc, sizeof(*desc));
 
     return num_done;
 }
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_allwinner_sdhost = {
     }
 };
 
+static Property allwinner_sdhost_properties[] = {
+    DEFINE_PROP_LINK("dma-memory", AwSdHostState, dma_mr,
+                     TYPE_MEMORY_REGION, MemoryRegion *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
 static void allwinner_sdhost_init(Object *obj)
 {
     AwSdHostState *s = AW_SDHOST(obj);
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_init(Object *obj)
     sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq);
 }
 
+static void allwinner_sdhost_realize(DeviceState *dev, Error **errp)
+{
+    AwSdHostState *s = AW_SDHOST(dev);
+
+    if (!s->dma_mr) {
+        error_setg(errp, TYPE_AW_SDHOST " 'dma-memory' link not set");
+        return;
+    }
+
+    address_space_init(&s->dma_as, s->dma_mr, "sdhost-dma");
+}
+
 static void allwinner_sdhost_reset(DeviceState *dev)
 {
     AwSdHostState *s = AW_SDHOST(dev);
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_class_init(ObjectClass *klass, void *data)
 
     dc->reset = allwinner_sdhost_reset;
     dc->vmsd = &vmstate_allwinner_sdhost;
+    dc->realize = allwinner_sdhost_realize;
+    device_class_set_props(dc, allwinner_sdhost_properties);
 }
 
 static void allwinner_sdhost_sun4i_class_init(ObjectClass *klass, void *data)
--
2.20.1
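Two notes for reviewers on the hunks above, since the diffs are terse.

The allwinner-sdhost change stops the SD controller from doing DMA through the
global address space: the device now exposes a "dma-memory" link property,
builds its own AddressSpace from it at realize time, and fails realize if the
board never set the link. Any machine that embeds the controller therefore has
to wire the link before realizing it, roughly like this (a sketch following
the allwinner-a10/h3 hunks; only the "dma-memory" property name is taken from
the patch, the surrounding board code is illustrative):

    /* board realize: give the SD host a view of memory to DMA into */
    object_property_set_link(OBJECT(&s->mmc0), "dma-memory",
                             OBJECT(get_system_memory()), &error_fatal);
    sysbus_realize(SYS_BUS_DEVICE(&s->mmc0), &error_fatal);

On the armv7m_nvic RAS block earlier in this mail: the region uses the
read_with_attrs/write_with_attrs hooks so the handlers can look at
MemTxAttrs.user and return MEMTX_ERROR, making non-privileged accesses fault,
while privileged reads of unimplemented registers are RAZ and logged as
LOG_UNIMP. A guest probing the two implemented registers would see something
like the sketch below; the 0xe0005000 base is my assumption from the v8-M PPB
layout (the block is mapped at container offset 0x5000), not something this
hunk states:

    /* guest-side sketch, base address assumed */
    volatile uint32_t *ras = (volatile uint32_t *)0xe0005000;
    uint32_t erriidr  = ras[0xe10 / 4];  /* 0x43b: architect field = Arm */
    uint32_t errdevid = ras[0xfc8 / 4];  /* 0: no error record indexes */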
diff view generated by jsdifflib
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

As we want to call qdev_connect_clock_in() before the device
is realized, we need to uninline cadence_uart_create() first.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20200803105647.22223-2-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/char/cadence_uart.h | 17 -----------------
 hw/arm/xilinx_zynq.c           | 14 ++++++++++++--
 2 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/include/hw/char/cadence_uart.h b/include/hw/char/cadence_uart.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/char/cadence_uart.h
+++ b/include/hw/char/cadence_uart.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
     Clock *refclk;
 } CadenceUARTState;
 
-static inline DeviceState *cadence_uart_create(hwaddr addr,
-                                               qemu_irq irq,
-                                               Chardev *chr)
-{
-    DeviceState *dev;
-    SysBusDevice *s;
-
-    dev = qdev_new(TYPE_CADENCE_UART);
-    s = SYS_BUS_DEVICE(dev);
-    qdev_prop_set_chr(dev, "chardev", chr);
-    sysbus_realize_and_unref(s, &error_fatal);
-    sysbus_mmio_map(s, 0, addr);
-    sysbus_connect_irq(s, 0, irq);
-
-    return dev;
-}
-
 #endif
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
     sysbus_create_simple(TYPE_CHIPIDEA, 0xE0002000, pic[53 - IRQ_OFFSET]);
     sysbus_create_simple(TYPE_CHIPIDEA, 0xE0003000, pic[76 - IRQ_OFFSET]);
 
-    dev = cadence_uart_create(0xE0000000, pic[59 - IRQ_OFFSET], serial_hd(0));
+    dev = qdev_new(TYPE_CADENCE_UART);
+    busdev = SYS_BUS_DEVICE(dev);
+    qdev_prop_set_chr(dev, "chardev", serial_hd(0));
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, 0xE0000000);
+    sysbus_connect_irq(busdev, 0, pic[59 - IRQ_OFFSET]);
     qdev_connect_clock_in(dev, "refclk",
                           qdev_get_clock_out(slcr, "uart0_ref_clk"));
-    dev = cadence_uart_create(0xE0001000, pic[82 - IRQ_OFFSET], serial_hd(1));
+    dev = qdev_new(TYPE_CADENCE_UART);
+    busdev = SYS_BUS_DEVICE(dev);
+    qdev_prop_set_chr(dev, "chardev", serial_hd(1));
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, 0xE0001000);
+    sysbus_connect_irq(busdev, 0, pic[82 - IRQ_OFFSET]);
     qdev_connect_clock_in(dev, "refclk",
                           qdev_get_clock_out(slcr, "uart1_ref_clk"));
 
--
2.20.1


Correct a typo in the name we give the NVIC object.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
---
 hw/arm/armv7m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)
 
     memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);
 
-    object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
+    object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
     object_property_add_alias(obj, "num-irq",
                               OBJECT(&s->nvic), "num-irq");
 
--
2.20.1
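One note on the xilinx_zynq hunk above: open-coding cadence_uart_create() is
purely preparatory. The point, per the commit message, is to be able to
connect the UART's "refclk" input while the device is still unrealized, so a
follow-up can reorder the calls roughly as sketched below (this is where the
clock work is heading, not what this particular patch does yet):

    dev = qdev_new(TYPE_CADENCE_UART);
    busdev = SYS_BUS_DEVICE(dev);
    qdev_prop_set_chr(dev, "chardev", serial_hd(0));
    /* clock wired up before realize */
    qdev_connect_clock_in(dev, "refclk",
                          qdev_get_clock_out(slcr, "uart0_ref_clk"));
    sysbus_realize_and_unref(busdev, &error_fatal);
    sysbus_mmio_map(busdev, 0, 0xE0000000);
    sysbus_connect_irq(busdev, 0, pic[59 - IRQ_OFFSET]);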
diff view generated by jsdifflib