Hi; here's this week's arm pullreq. Mostly this is my
work on FEAT_MOPS and FEAT_HBC, but there are some
other bits and pieces in there too, including a recent
set of elf2dmp patches.

thanks
-- PMM

The following changes since commit 55394dcbec8f0c29c30e792c102a0edd50a52bf4:

  Merge tag 'pull-loongarch-20230920' of https://gitlab.com/gaosong/qemu into staging (2023-09-20 13:56:18 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230921

for you to fetch changes up to 231f6a7d66254a58bedbee458591b780e0a507b1:

  elf2dmp: rework PDB_STREAM_INDEXES::segments obtaining (2023-09-21 16:13:54 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/m68k: Add URL to semihosting spec
 * docs/devel/loads-stores: Fix git grep regexes
 * hw/arm/boot: Set SCR_EL3.FGTEn when booting kernel
 * linux-user: Correct SME feature names reported in cpuinfo
 * linux-user: Add missing arm32 hwcaps
 * Don't skip MTE checks for LDRT/STRT at EL0
 * Implement FEAT_HBC
 * Implement FEAT_MOPS
 * audio/jackaudio: Avoid dynamic stack allocation
 * sbsa-ref: add non-secure EL2 virtual timer
 * elf2dmp: improve Win2022, Win11 and large dumps

----------------------------------------------------------------
Fabian Vogt (1):
      hw/arm/boot: Set SCR_EL3.FGTEn when booting kernel

Marcin Juszkiewicz (1):
      sbsa-ref: add non-secure EL2 virtual timer

Peter Maydell (23):
      target/m68k: Add URL to semihosting spec
      docs/devel/loads-stores: Fix git grep regexes
      linux-user/elfload.c: Correct SME feature names reported in cpuinfo
      linux-user/elfload.c: Add missing arm and arm64 hwcap values
      linux-user/elfload.c: Report previously missing arm32 hwcaps
      target/arm: Update AArch64 ID register field definitions
      target/arm: Update user-mode ID reg mask values
      target/arm: Implement FEAT_HBC
      target/arm: Remove unused allocation_tag_mem() argument
      target/arm: Don't skip MTE checks for LDRT/STRT at EL0
      target/arm: Implement FEAT_MOPS enable bits
      target/arm: Pass unpriv bool to get_a64_user_mem_index()
      target/arm: Define syndrome function for MOPS exceptions
      target/arm: New function allocation_tag_mem_probe()
      target/arm: Implement MTE tag-checking functions for FEAT_MOPS
      target/arm: Implement the SET* instructions
      target/arm: Define new TB flag for ATA0
      target/arm: Implement the SETG* instructions
      target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies
      target/arm: Implement the CPY* instructions
      target/arm: Enable FEAT_MOPS for CPU 'max'
      audio/jackaudio: Avoid dynamic stack allocation in qjack_client_init
      audio/jackaudio: Avoid dynamic stack allocation in qjack_process()

Viktor Prutyanov (5):
      elf2dmp: replace PE export name check with PDB name check
      elf2dmp: introduce physical block alignment
      elf2dmp: introduce merging of physical memory runs
      elf2dmp: use Linux mmap with MAP_NORESERVE when possible
      elf2dmp: rework PDB_STREAM_INDEXES::segments obtaining

 docs/devel/loads-stores.rst    |  40 +-
 docs/system/arm/emulation.rst  |   2 +
 contrib/elf2dmp/addrspace.h    |   1 +
 contrib/elf2dmp/pdb.h          |   2 +-
 contrib/elf2dmp/qemu_elf.h     |   2 +
 target/arm/cpu.h               |  35 ++
 target/arm/internals.h         |  55 +++
 target/arm/syndrome.h          |  12 +
 target/arm/tcg/helper-a64.h    |  14 +
 target/arm/tcg/translate.h     |   4 +-
 target/arm/tcg/a64.decode      |  38 +-
 audio/jackaudio.c              |  21 +-
 contrib/elf2dmp/addrspace.c    |  31 +-
 contrib/elf2dmp/main.c         | 154 ++++----
 contrib/elf2dmp/pdb.c          |  15 +-
 contrib/elf2dmp/qemu_elf.c     |  68 +++-
 hw/arm/boot.c                  |   4 +
 hw/arm/sbsa-ref.c              |   2 +
 linux-user/elfload.c           |  72 +++-
 target/arm/helper.c            |  39 +-
 target/arm/tcg/cpu64.c         |   5 +
 target/arm/tcg/helper-a64.c    | 878 +++++++++++++++++++++++++++++++++++++++++
 target/arm/tcg/hflags.c        |  21 +
 target/arm/tcg/mte_helper.c    | 281 +++++++++++--
 target/arm/tcg/translate-a64.c | 164 +++++++-
 target/m68k/m68k-semi.c        |   4 +
 tests/tcg/aarch64/sysregs.c    |   4 +-
 27 files changed, 1768 insertions(+), 200 deletions(-)

The spec for m68k semihosting is documented in the libgloss
sources. Add a comment with the URL for it, as we already
have for nios2 semihosting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230801154451.3505492-1-peter.maydell@linaro.org
---
 target/m68k/m68k-semi.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/m68k/m68k-semi.c b/target/m68k/m68k-semi.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/m68k-semi.c
+++ b/target/m68k/m68k-semi.c
@@ -XXX,XX +XXX,XX @@
  *
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * The semihosting protocol implemented here is described in the
+ * libgloss sources:
+ * https://sourceware.org/git/?p=newlib-cygwin.git;a=blob;f=libgloss/m68k/m68k-semi.txt;hb=HEAD
  */
 
 #include "qemu/osdep.h"
-- 
2.34.1

The loads-and-stores documentation includes git grep regexes to find
occurrences of the various functions. Some of these regexes have
errors, typically failing to escape the '?', '(' and ')' when they
should be metacharacters (since these are POSIX basic REs). We also
weren't consistent about whether to have a ':' on the end of the
line introducing the list of regexes in each section.

Fix the errors.

The following shell rune will complain about any REs in the
file which don't have any matches in the codebase:
 for re in $(sed -ne 's/ - ``\(\\<.*\)``/\1/p' docs/devel/loads-stores.rst); do git grep -q "$re" || echo "no matches for re $re"; done

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230904161703.3996734-1-peter.maydell@linaro.org
---
 docs/devel/loads-stores.rst | 40 ++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ which stores ``val`` to ``ptr`` as an ``{endian}`` order value
 of size ``sz`` bytes.
 
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<ld[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<st[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<st24\(_[hbl]e\)\?_p\>``
- - ``\<ldn_\([hbl]e\)?_p\>``
- - ``\<stn_\([hbl]e\)?_p\>``
+ - ``\<ldn_\([hbl]e\)\?_p\>``
+ - ``\<stn_\([hbl]e\)\?_p\>``
 
 ``cpu_{ld,st}*_mmu``
 ~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmu(env, ptr, val, oi, retaddr)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[bwlq](_[bl]e)\?_mmu\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmu\>``
+ - ``\<cpu_ld[bwlq]\(_[bl]e\)\?_mmu\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmu\>``
 
 
 ``cpu_{ld,st}*_mmuidx_ra``
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_mmuidx_ra\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmuidx_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
 
 ``cpu_{ld,st}*_data_ra``
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data_ra\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_data_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data_ra\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data_ra\>``
 
 ``cpu_{ld,st}*_data``
 ~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data(env, ptr, val)``
 - ``_be`` : big endian
 - ``_le`` : little endian
 
-Regexes for git grep
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_data\+\>``
+Regexes for git grep:
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data\+\>``
 
 ``cpu_ld*_code``
 ~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``
 - ``l`` : 32 bits
 - ``q`` : 64 bits
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
 
 ``helper_{ld,st}*_mmu``
@@ -XXX,XX +XXX,XX @@ store: ``helper_{size}_mmu(env, addr, val, opindex, retaddr)``
 - ``l`` : 32 bits
 - ``q`` : 64 bits
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<helper_ld[us]\?[bwlq]_mmu\>``
  - ``\<helper_st[bwlq]_mmu\>``
 
@@ -XXX,XX +XXX,XX @@ succeeded using a MemTxResult return code.
 
 The ``_{endian}`` suffix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<address_space_\(read\|write\|rw\)\>``
  - ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
  - ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
@@ -XXX,XX +XXX,XX @@ Note that portions of the write which attempt to write data to a
 device will be silently ignored -- only real RAM and ROM will
 be written to.
 
-Regexes for git grep
+Regexes for git grep:
  - ``address_space_write_rom``
 
 ``{ld,st}*_phys``
@@ -XXX,XX +XXX,XX @@ device doing the access has no way to report such an error.
 
 The ``_{endian}_`` infix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
  - ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
 
@@ -XXX,XX +XXX,XX @@ For new code they are better avoided:
 
 ``cpu_physical_memory_rw``
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
 
 ``cpu_memory_rw_debug``
@@ -XXX,XX +XXX,XX @@ make sure our existing code is doing things correctly.
 
 ``dma_memory_rw``
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<dma_memory_\(read\|write\|rw\)\>``
  - ``\<ldu\?[bwlq]\(_[bl]e\)\?_dma\>``
  - ``\<st[bwlq]\(_[bl]e\)\?_dma\>``
@@ -XXX,XX +XXX,XX @@ correct address space for that device.
 
 The ``_{endian}_`` infix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<pci_dma_\(read\|write\|rw\)\>``
  - ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
  - ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``
-- 
2.34.1

From: Fabian Vogt <fvogt@suse.de>

Just like d7ef5e16a17c sets SCR_EL3.HXEn for FEAT_HCX, this commit
handles SCR_EL3.FGTEn for FEAT_FGT:

When we direct boot a kernel on a CPU which emulates EL3, we need to
set up the EL3 system registers as the Linux kernel documentation
specifies:
https://www.kernel.org/doc/Documentation/arm64/booting.rst

> For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:
> - If EL3 is present and the kernel is entered at EL2:
>   - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

Cc: qemu-stable@nongnu.org
Signed-off-by: Fabian Vogt <fvogt@suse.de>
Message-id: 4831384.GXAFRqVoOG@linux-e202.suse.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             if (cpu_isar_feature(aa64_hcx, cpu)) {
                 env->cp15.scr_el3 |= SCR_HXEN;
             }
+            if (cpu_isar_feature(aa64_fgt, cpu)) {
+                env->cp15.scr_el3 |= SCR_FGTEN;
+            }
+
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
             /* This hook is only supported for AArch32 currently:
-- 
2.34.1

Some of the names we use for CPU features in linux-user's dummy
/proc/cpuinfo don't match the strings in the real kernel in
arch/arm64/kernel/cpuinfo.c. Specifically, the SME related
features have an underscore in the HWCAP_FOO define name,
but (like the SVE ones) they do not have an underscore in the
string in cpuinfo. Correct the errors.

Fixes: a55b9e7226708 ("linux-user: Emulate /proc/cpuinfo on aarch64 and arm")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
         [__builtin_ctz(ARM_HWCAP2_A64_RPRES )] = "rpres",
         [__builtin_ctz(ARM_HWCAP2_A64_MTE3 )] = "mte3",
         [__builtin_ctz(ARM_HWCAP2_A64_SME )] = "sme",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "sme_i16i64",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "sme_f64f64",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32 )] = "sme_i8i32",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "sme_f16f32",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "sme_b16f32",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "sme_f32f32",
-        [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "sme_fa64",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "smei16i64",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "smef64f64",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32 )] = "smei8i32",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "smef16f32",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
+        [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "smefa64",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
-- 
2.34.1

Our lists of Arm 32 and 64 bit hwcap values have lagged behind
the Linux kernel. Update them to include all the bits defined
as of upstream Linux git commit a48fa7efaf1161c1 (in the middle
of the kernel 6.6 dev cycle).

For 64-bit, we don't yet implement any of the features reported via
these hwcap bits. For 32-bit we do in fact already implement them
all; we'll add the code to set them in a subsequent commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ enum
     ARM_HWCAP_ARM_VFPD32 = 1 << 19,
     ARM_HWCAP_ARM_LPAE = 1 << 20,
     ARM_HWCAP_ARM_EVTSTRM = 1 << 21,
+    ARM_HWCAP_ARM_FPHP = 1 << 22,
+    ARM_HWCAP_ARM_ASIMDHP = 1 << 23,
+    ARM_HWCAP_ARM_ASIMDDP = 1 << 24,
+    ARM_HWCAP_ARM_ASIMDFHM = 1 << 25,
+    ARM_HWCAP_ARM_ASIMDBF16 = 1 << 26,
+    ARM_HWCAP_ARM_I8MM = 1 << 27,
 };
 
 enum {
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_HWCAP2_ARM_SHA1 = 1 << 2,
     ARM_HWCAP2_ARM_SHA2 = 1 << 3,
     ARM_HWCAP2_ARM_CRC32 = 1 << 4,
+    ARM_HWCAP2_ARM_SB = 1 << 5,
+    ARM_HWCAP2_ARM_SSBS = 1 << 6,
 };
 
 /* The commpage only exists for 32 bit kernels */
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap_str(uint32_t bit)
         [__builtin_ctz(ARM_HWCAP_ARM_VFPD32 )] = "vfpd32",
         [__builtin_ctz(ARM_HWCAP_ARM_LPAE )] = "lpae",
         [__builtin_ctz(ARM_HWCAP_ARM_EVTSTRM )] = "evtstrm",
+        [__builtin_ctz(ARM_HWCAP_ARM_FPHP )] = "fphp",
+        [__builtin_ctz(ARM_HWCAP_ARM_ASIMDHP )] = "asimdhp",
+        [__builtin_ctz(ARM_HWCAP_ARM_ASIMDDP )] = "asimddp",
+        [__builtin_ctz(ARM_HWCAP_ARM_ASIMDFHM )] = "asimdfhm",
+        [__builtin_ctz(ARM_HWCAP_ARM_ASIMDBF16)] = "asimdbf16",
+        [__builtin_ctz(ARM_HWCAP_ARM_I8MM )] = "i8mm",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
         [__builtin_ctz(ARM_HWCAP2_ARM_SHA1 )] = "sha1",
         [__builtin_ctz(ARM_HWCAP2_ARM_SHA2 )] = "sha2",
         [__builtin_ctz(ARM_HWCAP2_ARM_CRC32)] = "crc32",
+        [__builtin_ctz(ARM_HWCAP2_ARM_SB )] = "sb",
+        [__builtin_ctz(ARM_HWCAP2_ARM_SSBS )] = "ssbs",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_HWCAP2_A64_SME_B16F32 = 1 << 28,
     ARM_HWCAP2_A64_SME_F32F32 = 1 << 29,
     ARM_HWCAP2_A64_SME_FA64 = 1 << 30,
+    ARM_HWCAP2_A64_WFXT = 1ULL << 31,
+    ARM_HWCAP2_A64_EBF16 = 1ULL << 32,
+    ARM_HWCAP2_A64_SVE_EBF16 = 1ULL << 33,
+    ARM_HWCAP2_A64_CSSC = 1ULL << 34,
+    ARM_HWCAP2_A64_RPRFM = 1ULL << 35,
+    ARM_HWCAP2_A64_SVE2P1 = 1ULL << 36,
+    ARM_HWCAP2_A64_SME2 = 1ULL << 37,
+    ARM_HWCAP2_A64_SME2P1 = 1ULL << 38,
+    ARM_HWCAP2_A64_SME_I16I32 = 1ULL << 39,
+    ARM_HWCAP2_A64_SME_BI32I32 = 1ULL << 40,
+    ARM_HWCAP2_A64_SME_B16B16 = 1ULL << 41,
+    ARM_HWCAP2_A64_SME_F16F16 = 1ULL << 42,
+    ARM_HWCAP2_A64_MOPS = 1ULL << 43,
+    ARM_HWCAP2_A64_HBC = 1ULL << 44,
 };
 
 #define ELF_HWCAP get_elf_hwcap()
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
         [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
         [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
         [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "smefa64",
+        [__builtin_ctz(ARM_HWCAP2_A64_WFXT )] = "wfxt",
+        [__builtin_ctzll(ARM_HWCAP2_A64_EBF16 )] = "ebf16",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SVE_EBF16 )] = "sveebf16",
+        [__builtin_ctzll(ARM_HWCAP2_A64_CSSC )] = "cssc",
+        [__builtin_ctzll(ARM_HWCAP2_A64_RPRFM )] = "rprfm",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SVE2P1 )] = "sve2p1",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME2 )] = "sme2",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME2P1 )] = "sme2p1",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME_I16I32 )] = "smei16i32",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME_BI32I32)] = "smebi32i32",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME_B16B16 )] = "smeb16b16",
+        [__builtin_ctzll(ARM_HWCAP2_A64_SME_F16F16 )] = "smef16f16",
+        [__builtin_ctzll(ARM_HWCAP2_A64_MOPS )] = "mops",
+        [__builtin_ctzll(ARM_HWCAP2_A64_HBC )] = "hbc",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
-- 
2.34.1

Add the code to report the arm32 hwcaps we were previously missing:
 sb, ssbs, fphp, asimdhp, asimddp, asimdfhm, asimdbf16, i8mm

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap(void)
         }
     }
     GET_FEATURE_ID(aa32_simdfmac, ARM_HWCAP_ARM_VFPv4);
+    /*
+     * MVFR1.FPHP and .SIMDHP must be in sync, and QEMU uses the same
+     * isar_feature function for both. The kernel reports them as two hwcaps.
+     */
+    GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_FPHP);
+    GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_ASIMDHP);
+    GET_FEATURE_ID(aa32_dp, ARM_HWCAP_ARM_ASIMDDP);
+    GET_FEATURE_ID(aa32_fhm, ARM_HWCAP_ARM_ASIMDFHM);
+    GET_FEATURE_ID(aa32_bf16, ARM_HWCAP_ARM_ASIMDBF16);
+    GET_FEATURE_ID(aa32_i8mm, ARM_HWCAP_ARM_I8MM);
 
     return hwcaps;
 }
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
     GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
     GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
+    GET_FEATURE_ID(aa32_sb, ARM_HWCAP2_ARM_SB);
+    GET_FEATURE_ID(aa32_ssbs, ARM_HWCAP2_ARM_SSBS);
     return hwcaps;
 }
-- 
2.34.1

Update our AArch64 ID register field definitions from the 2023-06
system register XML release:
https://developer.arm.com/documentation/ddi0601/2023-06/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR0, SHA1, 8, 4)
FIELD(ID_AA64ISAR0, SHA2, 12, 4)
FIELD(ID_AA64ISAR0, CRC32, 16, 4)
FIELD(ID_AA64ISAR0, ATOMIC, 20, 4)
+FIELD(ID_AA64ISAR0, TME, 24, 4)
FIELD(ID_AA64ISAR0, RDM, 28, 4)
FIELD(ID_AA64ISAR0, SHA3, 32, 4)
FIELD(ID_AA64ISAR0, SM3, 36, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR2, APA3, 12, 4)
FIELD(ID_AA64ISAR2, MOPS, 16, 4)
FIELD(ID_AA64ISAR2, BC, 20, 4)
FIELD(ID_AA64ISAR2, PAC_FRAC, 24, 4)
+FIELD(ID_AA64ISAR2, CLRBHB, 28, 4)
+FIELD(ID_AA64ISAR2, SYSREG_128, 32, 4)
+FIELD(ID_AA64ISAR2, SYSINSTR_128, 36, 4)
+FIELD(ID_AA64ISAR2, PRFMSLC, 40, 4)
+FIELD(ID_AA64ISAR2, RPRFM, 48, 4)
+FIELD(ID_AA64ISAR2, CSSC, 52, 4)
+FIELD(ID_AA64ISAR2, ATS1A, 60, 4)

FIELD(ID_AA64PFR0, EL0, 0, 4)
FIELD(ID_AA64PFR0, EL1, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64PFR1, SME, 24, 4)
FIELD(ID_AA64PFR1, RNDR_TRAP, 28, 4)
FIELD(ID_AA64PFR1, CSV2_FRAC, 32, 4)
FIELD(ID_AA64PFR1, NMI, 36, 4)
+FIELD(ID_AA64PFR1, MTE_FRAC, 40, 4)
+FIELD(ID_AA64PFR1, GCS, 44, 4)
+FIELD(ID_AA64PFR1, THE, 48, 4)
+FIELD(ID_AA64PFR1, MTEX, 52, 4)
+FIELD(ID_AA64PFR1, DF2, 56, 4)
+FIELD(ID_AA64PFR1, PFAR, 60, 4)

FIELD(ID_AA64MMFR0, PARANGE, 0, 4)
FIELD(ID_AA64MMFR0, ASIDBITS, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, AFP, 44, 4)
FIELD(ID_AA64MMFR1, NTLBPA, 48, 4)
FIELD(ID_AA64MMFR1, TIDCP1, 52, 4)
FIELD(ID_AA64MMFR1, CMOW, 56, 4)
+FIELD(ID_AA64MMFR1, ECBHB, 60, 4)

FIELD(ID_AA64MMFR2, CNP, 0, 4)
FIELD(ID_AA64MMFR2, UAO, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
FIELD(ID_AA64DFR0, PMUVER, 8, 4)
FIELD(ID_AA64DFR0, BRPS, 12, 4)
+FIELD(ID_AA64DFR0, PMSS, 16, 4)
FIELD(ID_AA64DFR0, WRPS, 20, 4)
+FIELD(ID_AA64DFR0, SEBEP, 24, 4)
FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
FIELD(ID_AA64DFR0, PMSVER, 32, 4)
FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)
FIELD(ID_AA64DFR0, TRACEBUFFER, 44, 4)
FIELD(ID_AA64DFR0, MTPMU, 48, 4)
FIELD(ID_AA64DFR0, BRBE, 52, 4)
+FIELD(ID_AA64DFR0, EXTTRCBUFF, 56, 4)
FIELD(ID_AA64DFR0, HPMN0, 60, 4)

FIELD(ID_AA64ZFR0, SVEVER, 0, 4)
FIELD(ID_AA64ZFR0, AES, 4, 4)
FIELD(ID_AA64ZFR0, BITPERM, 16, 4)
FIELD(ID_AA64ZFR0, BFLOAT16, 20, 4)
+FIELD(ID_AA64ZFR0, B16B16, 24, 4)
FIELD(ID_AA64ZFR0, SHA3, 32, 4)
FIELD(ID_AA64ZFR0, SM4, 40, 4)
FIELD(ID_AA64ZFR0, I8MM, 44, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ZFR0, F32MM, 52, 4)
FIELD(ID_AA64ZFR0, F64MM, 56, 4)

FIELD(ID_AA64SMFR0, F32F32, 32, 1)
+FIELD(ID_AA64SMFR0, BI32I32, 33, 1)
FIELD(ID_AA64SMFR0, B16F32, 34, 1)
FIELD(ID_AA64SMFR0, F16F32, 35, 1)
FIELD(ID_AA64SMFR0, I8I32, 36, 4)
+FIELD(ID_AA64SMFR0, F16F16, 42, 1)
+FIELD(ID_AA64SMFR0, B16B16, 43, 1)
+FIELD(ID_AA64SMFR0, I16I32, 44, 4)
FIELD(ID_AA64SMFR0, F64F64, 48, 1)
FIELD(ID_AA64SMFR0, I16I64, 52, 4)
FIELD(ID_AA64SMFR0, SMEVER, 56, 4)
--
2.34.1

For user-only mode we reveal a subset of the AArch64 ID registers
to the guest, to emulate the kernel's trap-and-emulate-ID-regs
handling. Update the feature bit masks to match upstream kernel
commit a48fa7efaf1161c1c.

None of these features are yet implemented by QEMU, so this
doesn't yet have a behavioural change, but implementation of
FEAT_MOPS and FEAT_HBC is imminent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.c | 11 ++++++++++-
tests/tcg/aarch64/sysregs.c | 4 ++--
2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
R_ID_AA64ZFR0_F64MM_MASK },
{ .name = "ID_AA64SMFR0_EL1",
.exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
+ R_ID_AA64SMFR0_BI32I32_MASK |
R_ID_AA64SMFR0_B16F32_MASK |
R_ID_AA64SMFR0_F16F32_MASK |
R_ID_AA64SMFR0_I8I32_MASK |
+ R_ID_AA64SMFR0_F16F16_MASK |
+ R_ID_AA64SMFR0_B16B16_MASK |
+ R_ID_AA64SMFR0_I16I32_MASK |
R_ID_AA64SMFR0_F64F64_MASK |
R_ID_AA64SMFR0_I16I64_MASK |
+ R_ID_AA64SMFR0_SMEVER_MASK |
R_ID_AA64SMFR0_FA64_MASK },
{ .name = "ID_AA64MMFR0_EL1",
.exported_bits = R_ID_AA64MMFR0_ECV_MASK,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
R_ID_AA64ISAR2_RPRES_MASK |
R_ID_AA64ISAR2_GPA3_MASK |
- R_ID_AA64ISAR2_APA3_MASK },
+ R_ID_AA64ISAR2_APA3_MASK |
+ R_ID_AA64ISAR2_MOPS_MASK |
+ R_ID_AA64ISAR2_BC_MASK |
+ R_ID_AA64ISAR2_RPRFM_MASK |
+ R_ID_AA64ISAR2_CSSC_MASK },
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
.is_glob = true },
};
diff --git a/tests/tcg/aarch64/sysregs.c b/tests/tcg/aarch64/sysregs.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/sysregs.c
+++ b/tests/tcg/aarch64/sysregs.c
@@ -XXX,XX +XXX,XX @@ int main(void)
*/
get_cpu_reg_check_mask(id_aa64isar0_el1, _m(f0ff,ffff,f0ff,fff0));
get_cpu_reg_check_mask(id_aa64isar1_el1, _m(00ff,f0ff,ffff,ffff));
- get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(0000,0000,0000,ffff));
+ get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(00ff,0000,00ff,ffff));
/* TGran4 & TGran64 as pegged to -1 */
get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(f000,0000,ff00,0000));
get_cpu_reg_check_mask(id_aa64mmfr1_el1, _m(0000,f000,0000,0000));
@@ -XXX,XX +XXX,XX @@ int main(void)
get_cpu_reg_check_mask(id_aa64dfr0_el1, _m(0000,0000,0000,0006));
get_cpu_reg_check_zero(id_aa64dfr1_el1);
get_cpu_reg_check_mask(SYS_ID_AA64ZFR0_EL1, _m(0ff0,ff0f,00ff,00ff));
- get_cpu_reg_check_mask(SYS_ID_AA64SMFR0_EL1, _m(80f1,00fd,0000,0000));
+ get_cpu_reg_check_mask(SYS_ID_AA64SMFR0_EL1, _m(8ff1,fcff,0000,0000));

get_cpu_reg_check_zero(id_aa64afr0_el1);
get_cpu_reg_check_zero(id_aa64afr1_el1);
--
2.34.1

FEAT_HBC (Hinted conditional branches) provides a new instruction
BC.cond, which behaves exactly like the existing B.cond except
that it provides a hint to the branch predictor about the
likely behaviour of the branch.

Since QEMU does not implement branch prediction, we can treat
this identically to B.cond.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu.h | 5 +++++
target/arm/tcg/a64.decode | 3 ++-
linux-user/elfload.c | 1 +
target/arm/tcg/cpu64.c | 4 ++++
target/arm/tcg/translate-a64.c | 4 ++++
6 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
- FEAT_GTG (Guest translation granule size)
- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
+- FEAT_HBC (Hinted conditional branches)
- FEAT_HCX (Support for the HCRX_EL2 register)
- FEAT_HPDS (Hierarchical permission disables)
- FEAT_HPDS2 (Translation table page-based hardware attributes)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
}

+static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
+}
+
static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
{
return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19

TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19

-B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
+# B.cond and BC.cond
+B_cond 0101010 0 ................... c:1 cond:4 imm=%imm19

BR 1101011 0000 11111 000000 rn:5 00000 &r
BLR 1101011 0001 11111 000000 rn:5 00000 &r
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
GET_FEATURE_ID(aa64_sme_f64f64, ARM_HWCAP2_A64_SME_F64F64);
GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
+ GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC);

return hwcaps;
}
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
cpu->isar.id_aa64isar1 = t;

+ t = cpu->isar.id_aa64isar2;
+ t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1); /* FEAT_HBC */
+ cpu->isar.id_aa64isar2 = t;
+
t = cpu->isar.id_aa64pfr0;
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_TBZ(DisasContext *s, arg_tbz *a)

static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
{
+ /* BC.cond is only present with FEAT_HBC */
+ if (a->c && !dc_isar_feature(aa64_hbc, s)) {
+ return false;
+ }
reset_btype(s);
if (a->cond < 0x0e) {
/* genuinely conditional branches */
--
2.34.1

The allocation_tag_mem() function takes an argument tag_size,
but it never uses it. Remove the argument. In mte_probe_int()
in particular this also lets us delete the code computing
the value we were passing in.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
target/arm/tcg/mte_helper.c | 42 +++++++++++++------------------------
1 file changed, 14 insertions(+), 28 deletions(-)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
* @ptr_access: the access to use for the virtual address
* @ptr_size: the number of bytes in the normal memory access
* @tag_access: the access to use for the tag memory
- * @tag_size: the number of bytes in the tag memory access
* @ra: the return address for exception handling
*
* Our tag memory is formatted as a sequence of little-endian nibbles.
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
* a pointer to the corresponding tag byte. Exit with exception if the
* virtual address is not accessible for @ptr_access.
*
- * The @ptr_size and @tag_size values may not have an obvious relation
- * due to the alignment of @ptr, and the number of tag checks required.
- *
* If there is no tag storage corresponding to @ptr, return NULL.
*/
static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
uint64_t ptr, MMUAccessType ptr_access,
int ptr_size, MMUAccessType tag_access,
- int tag_size, uintptr_t ra)
+ uintptr_t ra)
{
#ifdef CONFIG_USER_ONLY
uint64_t clean_ptr = useronly_clean_ptr(ptr);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldg)(CPUARMState *env, uint64_t ptr, uint64_t xt)

/* Trap if accessing an invalid page. */
mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD, 1,
- MMU_DATA_LOAD, 1, GETPC());
+ MMU_DATA_LOAD, GETPC());

/* Load if page supports tags. */
if (mem) {
@@ -XXX,XX +XXX,XX @@ static inline void do_stg(CPUARMState *env, uint64_t ptr, uint64_t xt,

/* Trap if accessing an invalid page. */
mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE, TAG_GRANULE,
- MMU_DATA_STORE, 1, ra);
+ MMU_DATA_STORE, ra);

/* Store if page supports tags. */
if (mem) {
@@ -XXX,XX +XXX,XX @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
if (ptr & TAG_GRANULE) {
/* Two stores unaligned mod TAG_GRANULE*2 -- modify two bytes. */
mem1 = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
- TAG_GRANULE, MMU_DATA_STORE, 1, ra);
+ TAG_GRANULE, MMU_DATA_STORE, ra);
mem2 = allocation_tag_mem(env, mmu_idx, ptr + TAG_GRANULE,
MMU_DATA_STORE, TAG_GRANULE,
- MMU_DATA_STORE, 1, ra);
+ MMU_DATA_STORE, ra);

/* Store if page(s) support tags. */
if (mem1) {
@@ -XXX,XX +XXX,XX @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
} else {
/* Two stores aligned mod TAG_GRANULE*2 -- modify one byte. */
mem1 = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
- 2 * TAG_GRANULE, MMU_DATA_STORE, 1, ra);
+ 2 * TAG_GRANULE, MMU_DATA_STORE, ra);
if (mem1) {
tag |= tag << 4;
qatomic_set(mem1, tag);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)

/* Trap if accessing an invalid page. */
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD,
- gm_bs_bytes, MMU_DATA_LOAD,
- gm_bs_bytes / (2 * TAG_GRANULE), ra);
+ gm_bs_bytes, MMU_DATA_LOAD, ra);

/* The tag is squashed to zero if the page does not support tags. */
if (!tag_mem) {
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)

/* Trap if accessing an invalid page. */
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
- gm_bs_bytes, MMU_DATA_LOAD,
- gm_bs_bytes / (2 * TAG_GRANULE), ra);
+ gm_bs_bytes, MMU_DATA_LOAD, ra);

/*
* Tag store only happens if the page support tags,
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
ptr &= -dcz_bytes;

mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE, dcz_bytes,
- MMU_DATA_STORE, tag_bytes, ra);
+ MMU_DATA_STORE, ra);
if (mem) {
int tag_pair = (val & 0xf) * 0x11;
memset(mem, tag_pair, tag_bytes);
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
int mmu_idx, ptr_tag, bit55;
uint64_t ptr_last, prev_page, next_page;
uint64_t tag_first, tag_last;
- uint64_t tag_byte_first, tag_byte_last;
- uint32_t sizem1, tag_count, tag_size, n, c;
+ uint32_t sizem1, tag_count, n, c;
uint8_t *mem1, *mem2;
MMUAccessType type;

@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
tag_last = QEMU_ALIGN_DOWN(ptr_last, TAG_GRANULE);
tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;

- /* Round the bounds to twice the tag granule, and compute the bytes. */
- tag_byte_first = QEMU_ALIGN_DOWN(ptr, 2 * TAG_GRANULE);
- tag_byte_last = QEMU_ALIGN_DOWN(ptr_last, 2 * TAG_GRANULE);
-
/* Locate the page boundaries. */
prev_page = ptr & TARGET_PAGE_MASK;
next_page = prev_page + TARGET_PAGE_SIZE;

if (likely(tag_last - prev_page < TARGET_PAGE_SIZE)) {
/* Memory access stays on one page. */
- tag_size = ((tag_byte_last - tag_byte_first) / (2 * TAG_GRANULE)) + 1;
mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
- MMU_DATA_LOAD, tag_size, ra);
+ MMU_DATA_LOAD, ra);
if (!mem1) {
return 1;
}
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
n = checkN(mem1, ptr & TAG_GRANULE, ptr_tag, tag_count);
} else {
/* Memory access crosses to next page. */
- tag_size = (next_page - tag_byte_first) / (2 * TAG_GRANULE);
mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, next_page - ptr,
- MMU_DATA_LOAD, tag_size, ra);
+ MMU_DATA_LOAD, ra);

- tag_size = ((tag_byte_last - next_page) / (2 * TAG_GRANULE)) + 1;
mem2 = allocation_tag_mem(env, mmu_idx, next_page, type,
ptr_last - next_page + 1,
- MMU_DATA_LOAD, tag_size, ra);
+ MMU_DATA_LOAD, ra);

/*
* Perform all of the comparisons.
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
(void) probe_write(env, ptr, 1, mmu_idx, ra);
mem = allocation_tag_mem(env, mmu_idx, align_ptr, MMU_DATA_STORE,
- dcz_bytes, MMU_DATA_LOAD, tag_bytes, ra);
+ dcz_bytes, MMU_DATA_LOAD, ra);
if (!mem) {
goto done;
}
--
2.34.1

The LDRT/STRT "unprivileged load/store" instructions behave like
normal ones if executed at EL0. We handle this correctly for
the load/store semantics, but get the MTE checking wrong.

We always look at s->mte_active[is_unpriv] to see whether we should
be doing MTE checks, but in hflags.c when we set the TB flags that
will be used to fill the mte_active[] array we only set the
MTE0_ACTIVE bit if UNPRIV is true (i.e. we are not at EL0).

This means that an LDRT at EL0 will see s->mte_active[1] as 0,
and will not do MTE checks even when MTE is enabled.

To avoid the translate-time code having to do an explicit check on
s->unpriv to see if it is OK to index into the mte_active[] array,
duplicate MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false.

(This isn't a very serious bug because generally nobody executes
LDRT/STRT at EL0, because they have no use there.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-2-peter.maydell@linaro.org
---
target/arm/tcg/hflags.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
&& !(env->pstate & PSTATE_TCO)
&& (sctlr & (el == 0 ? SCTLR_TCF0 : SCTLR_TCF))) {
DP_TBFLAG_A64(flags, MTE_ACTIVE, 1);
+ if (!EX_TBFLAG_A64(flags, UNPRIV)) {
+ /*
+ * In non-unpriv contexts (eg EL0), unpriv load/stores
+ * act like normal ones; duplicate the MTE info to
+ * avoid translate-a64.c having to check UNPRIV to see
+ * whether it is OK to index into MTE_ACTIVE[].
+ */
+ DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
+ }
}
}
/* And again for unprivileged accesses, if required. */
--
2.34.1

FEAT_MOPS defines a handful of new enable bits:
* HCRX_EL2.MSCEn, SCTLR_EL1.MSCEn, SCTLR_EL2.MSCEn:
define whether the new insns should UNDEF or not
* HCRX_EL2.MCE2: defines whether memops exceptions from
EL1 should be taken to EL1 or EL2

Since we don't sanitise what bits can be written for the SCTLR
registers, we only need to handle the new bits in HCRX_EL2, and
define SCTLR_MSCEN for the new SCTLR bit value.

The precedence of "HCRX bits act as 0 if SCR_EL3.HXEn is 0" versus
"bit acts as 1 if EL2 disabled" is not clear from the register
definition text, but it is clear in the CheckMOPSEnabled()
pseudocode, so we follow that. We'll have to check whether other
bits we need to implement in future follow the same logic or not.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-3-peter.maydell@linaro.org
---
target/arm/cpu.h | 6 ++++++
target/arm/helper.c | 28 +++++++++++++++++++++-------
2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
#define SCTLR_EnIB (1U << 30) /* v8.3, AArch64 only */
#define SCTLR_EnIA (1U << 31) /* v8.3, AArch64 only */
#define SCTLR_DSSBS_32 (1U << 31) /* v8.5, AArch32 only */
+#define SCTLR_MSCEN (1ULL << 33) /* FEAT_MOPS */
#define SCTLR_BT0 (1ULL << 35) /* v8.5-BTI */
#define SCTLR_BT1 (1ULL << 36) /* v8.5-BTI */
#define SCTLR_ITFSB (1ULL << 37) /* v8.5-MemTag */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
}

+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
+}
+
/*
* Feature tests for "does this exist in either 32-bit or 64-bit?"
*/
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
uint64_t valid_mask = 0;

- /* No features adding bits to HCRX are implemented. */
+ /* FEAT_MOPS adds MSCEn and MCE2 */
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+ valid_mask |= HCRX_MSCEN | HCRX_MCE2;
+ }

/* Clear RES0 bits. */
env->cp15.hcrx_el2 = value & valid_mask;
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcrx_el2_eff(CPUARMState *env)
{
/*
* The bits in this register behave as 0 for all purposes other than
- * direct reads of the register if:
- * - EL2 is not enabled in the current security state,
- * - SCR_EL3.HXEn is 0.
+ * direct reads of the register if SCR_EL3.HXEn is 0.
+ * If EL2 is not enabled in the current security state, then the
+ * bit may behave as if 0, or as if 1, depending on the bit.
+ * For the moment, we treat the EL2-disabled case as taking
+ * priority over the HXEn-disabled case. This is true for the only
+ * bit for a feature which we implement where the answer is different
+ * for the two cases (MSCEn for FEAT_MOPS).
+ * This may need to be revisited for future bits.
*/
- if (!arm_is_el2_enabled(env)
- || (arm_feature(env, ARM_FEATURE_EL3)
- && !(env->cp15.scr_el3 & SCR_HXEN))) {
+ if (!arm_is_el2_enabled(env)) {
+ uint64_t hcrx = 0;
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+ /* MSCEn behaves as 1 if EL2 is not enabled */
+ hcrx |= HCRX_MSCEN;
+ }
+ return hcrx;
+ }
+ if (arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_HXEN)) {
return 0;
}
return env->cp15.hcrx_el2;
--
2.34.1

In every place that we call the get_a64_user_mem_index() function
we do it like this:
memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
Refactor so the caller passes in the bool that says whether they
want the 'unpriv' or 'normal' mem_index rather than having to
do the ?: themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230912140434.1333369-4-peter.maydell@linaro.org
---
target/arm/tcg/translate-a64.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void a64_translate_init(void)
}

/*
- * Return the core mmu_idx to use for A64 "unprivileged load/store" insns
+ * Return the core mmu_idx to use for A64 load/store insns which
+ * have an "unprivileged load/store" variant. Those insns access
+ * EL0 if executed from an EL which has control over EL0 (usually
+ * EL1) but behave like normal loads and stores if executed from
+ * elsewhere (eg EL3).
+ *
+ * @unpriv : true for the unprivileged encoding; false for the
+ * normal encoding (in which case we will return the same
+ * thing as get_mem_index()).
 */
-static int get_a64_user_mem_index(DisasContext *s)
+static int get_a64_user_mem_index(DisasContext *s, bool unpriv)
{
/*
* If AccType_UNPRIV is not used, the insn uses AccType_NORMAL,
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
*/
ARMMMUIdx useridx = s->mmu_idx;

- if (s->unpriv) {
+ if (unpriv && s->unpriv) {
/*
* We have pre-computed the condition for AccType_UNPRIV.
* Therefore we should never get here with a mmu_idx for
@@ -XXX,XX +XXX,XX @@ static void op_addr_ldst_imm_pre(DisasContext *s, arg_ldst_imm *a,
if (!a->p) {
tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
}
- memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ memidx = get_a64_user_mem_index(s, a->unpriv);
*clean_addr = gen_mte_check1_mmuidx(s, *dirty_addr, is_store,
a->w || a->rn != 31,
mop, a->unpriv, memidx);
@@ -XXX,XX +XXX,XX @@ static bool trans_STR_i(DisasContext *s, arg_ldst_imm *a)
{
bool iss_sf, iss_valid = !a->w;
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);

op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_i(DisasContext *s, arg_ldst_imm *a)
{
bool iss_sf, iss_valid = !a->w;
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);

op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
--
2.34.1

The FEAT_MOPS memory operations can raise a Memory Copy or Memory Set
exception if a copy or set instruction is executed when the CPU
register state is not correct for that instruction. Define the
usual syn_* function that constructs the syndrome register value
for these exceptions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-5-peter.maydell@linaro.org
---
 target/arm/syndrome.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
     EC_DATAABORT              = 0x24,
     EC_DATAABORT_SAME_EL      = 0x25,
     EC_SPALIGNMENT            = 0x26,
+    EC_MOP                    = 0x27,
     EC_AA32_FPTRAP            = 0x28,
     EC_AA64_FPTRAP            = 0x2c,
     EC_SERROR                 = 0x2f,
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_serror(uint32_t extra)
     return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
 }
 
+static inline uint32_t syn_mop(bool is_set, bool is_setg, int options,
+                               bool epilogue, bool wrong_option, bool option_a,
+                               int destreg, int srcreg, int sizereg)
+{
+    return (EC_MOP << ARM_EL_EC_SHIFT) | ARM_EL_IL |
+        (is_set << 24) | (is_setg << 23) | (options << 19) |
+        (epilogue << 18) | (wrong_option << 17) | (option_a << 16) |
+        (destreg << 10) | (srcreg << 5) | sizereg;
+}
+
+
 #endif /* TARGET_ARM_SYNDROME_H */
-- 
2.34.1

For the FEAT_MOPS operations, the existing allocation_tag_mem()
function almost does what we want, but it will take a watchpoint
exception even for an ra == 0 probe request, and it requires that the
caller guarantee that the memory is accessible. For FEAT_MOPS we
want a function that will not take any kind of exception, and will
return NULL for the not-accessible case.

Rename allocation_tag_mem() to allocation_tag_mem_probe() and add an
extra 'probe' argument that lets us distinguish these cases;
allocation_tag_mem() is now a wrapper that always passes 'false'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-6-peter.maydell@linaro.org
---
 target/arm/tcg/mte_helper.c | 48 ++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 11 deletions(-)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
 }
 
 /**
- * allocation_tag_mem:
+ * allocation_tag_mem_probe:
  * @env: the cpu environment
  * @ptr_mmu_idx: the addressing regime to use for the virtual address
  * @ptr: the virtual address for which to look up tag memory
  * @ptr_access: the access to use for the virtual address
  * @ptr_size: the number of bytes in the normal memory access
  * @tag_access: the access to use for the tag memory
+ * @probe: true to merely probe, never taking an exception
  * @ra: the return address for exception handling
  *
  * Our tag memory is formatted as a sequence of little-endian nibbles.
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
  * for the higher addr.
  *
  * Here, resolve the physical address from the virtual address, and return
- * a pointer to the corresponding tag byte.  Exit with exception if the
- * virtual address is not accessible for @ptr_access.
+ * a pointer to the corresponding tag byte.
  *
  * If there is no tag storage corresponding to @ptr, return NULL.
+ *
+ * If the page is inaccessible for @ptr_access, or has a watchpoint, there are
+ * three options:
+ * (1) probe = true, ra = 0 : pure probe -- we return NULL if the page is not
+ *     accessible, and do not take watchpoint traps. The calling code must
+ *     handle those cases in the right priority compared to MTE traps.
+ * (2) probe = false, ra = 0 : probe, no fault expected -- the caller guarantees
+ *     that the page is going to be accessible. We will take watchpoint traps.
+ * (3) probe = false, ra != 0 : non-probe -- we will take both memory access
+ *     traps and watchpoint traps.
+ * (probe = true, ra != 0 is invalid and will assert.)
  */
-static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
-                                   uint64_t ptr, MMUAccessType ptr_access,
-                                   int ptr_size, MMUAccessType tag_access,
-                                   uintptr_t ra)
+static uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
+                                         uint64_t ptr, MMUAccessType ptr_access,
+                                         int ptr_size, MMUAccessType tag_access,
+                                         bool probe, uintptr_t ra)
 {
 #ifdef CONFIG_USER_ONLY
     uint64_t clean_ptr = useronly_clean_ptr(ptr);
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     uint8_t *tags;
     uintptr_t index;
 
+    assert(!(probe && ra));
+
     if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) {
         cpu_loop_exit_sigsegv(env_cpu(env), ptr, ptr_access,
                               !(flags & PAGE_VALID), ra);
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * exception for inaccessible pages, and resolves the virtual address
      * into the softmmu tlb.
      *
-     * When RA == 0, this is for mte_probe.  The page is expected to be
-     * valid.  Indicate to probe_access_flags no-fault, then assert that
-     * we received a valid page.
+     * When RA == 0, this is either a pure probe or a no-fault-expected probe.
+     * Indicate to probe_access_flags no-fault, then either return NULL
+     * for the pure probe, or assert that we received a valid page for the
+     * no-fault-expected probe.
      */
     flags = probe_access_full(env, ptr, 0, ptr_access, ptr_mmu_idx,
                               ra == 0, &host, &full, ra);
+    if (probe && (flags & TLB_INVALID_MASK)) {
+        return NULL;
+    }
     assert(!(flags & TLB_INVALID_MASK));
 
     /* If the virtual page MemAttr != Tagged, access unchecked. */
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     }
 
     /* Any debug exception has priority over a tag check exception. */
-    if (unlikely(flags & TLB_WATCHPOINT)) {
+    if (!probe && unlikely(flags & TLB_WATCHPOINT)) {
         int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
         assert(ra != 0);
         cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
 #endif
 }
 
+static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
+                                   uint64_t ptr, MMUAccessType ptr_access,
+                                   int ptr_size, MMUAccessType tag_access,
+                                   uintptr_t ra)
+{
+    return allocation_tag_mem_probe(env, ptr_mmu_idx, ptr, ptr_access,
+                                    ptr_size, tag_access, false, ra);
+}
+
 uint64_t HELPER(irg)(CPUARMState *env, uint64_t rn, uint64_t rm)
 {
     uint16_t exclude = extract32(rm | env->cp15.gcr_el1, 0, 16);
-- 
2.34.1

The FEAT_MOPS instructions need a couple of helper routines that
check for MTE tag failures:
 * mte_mops_probe() checks whether there is going to be a tag
   error in the next up-to-a-page worth of data
 * mte_check_fail() is an existing function to record the fact
   of a tag failure, which we need to make global so we can
   call it from helper-a64.c

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-7-peter.maydell@linaro.org
---
 target/arm/internals.h      | 28 +++++++++++++++++++
 target/arm/tcg/mte_helper.c | 54 +++++++++++++++++++++++++++++++++++--
 2 files changed, 80 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, SIZEM1, 12, SIMD_DATA_BITS - 12)  /* size - 1 */
 bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
 uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
 
+/**
+ * mte_mops_probe: Check where the next MTE failure is for a FEAT_MOPS operation
+ * @env: CPU env
+ * @ptr: start address of memory region (dirty pointer)
+ * @size: length of region (guaranteed not to cross a page boundary)
+ * @desc: MTEDESC descriptor word (0 means no MTE checks)
+ * Returns: the size of the region that can be copied without hitting
+ *          an MTE tag failure
+ *
+ * Note that we assume that the caller has already checked the TBI
+ * and TCMA bits with mte_checks_needed() and an MTE check is definitely
+ * required.
+ */
+uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
+                        uint32_t desc);
+
+/**
+ * mte_check_fail: Record an MTE tag check failure
+ * @env: CPU env
+ * @desc: MTEDESC descriptor word
+ * @dirty_ptr: Failing dirty address
+ * @ra: TCG retaddr
+ *
+ * This may never return (if the MTE tag checks are configured to fault).
+ */
+void mte_check_fail(CPUARMState *env, uint32_t desc,
+                    uint64_t dirty_ptr, uintptr_t ra);
+
 static inline int allocation_tag_from_addr(uint64_t ptr)
 {
     return extract64(ptr, 56, 4);
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
 }
 
 /* Record a tag check failure. */
-static void mte_check_fail(CPUARMState *env, uint32_t desc,
-                           uint64_t dirty_ptr, uintptr_t ra)
+void mte_check_fail(CPUARMState *env, uint32_t desc,
+                    uint64_t dirty_ptr, uintptr_t ra)
 {
     int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
  done:
     return useronly_clean_ptr(ptr);
 }
+
+uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
+                        uint32_t desc)
+{
+    int mmu_idx, tag_count;
+    uint64_t ptr_tag, tag_first, tag_last;
+    void *mem;
+    bool w = FIELD_EX32(desc, MTEDESC, WRITE);
+    uint32_t n;
+
+    mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
+    /* True probe; this will never fault */
+    mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
+                                   w ? MMU_DATA_STORE : MMU_DATA_LOAD,
+                                   size, MMU_DATA_LOAD, true, 0);
+    if (!mem) {
+        return size;
+    }
+
+    /*
+     * TODO: checkN() is not designed for checks of the size we expect
+     * for FEAT_MOPS operations, so we should implement this differently.
+     * Maybe we should do something like
+     *   if (region start and size are aligned nicely) {
+     *      do direct loads of 64 tag bits at a time;
+     *   } else {
+     *      call checkN()
+     *   }
+     */
+    /* Round the bounds to the tag granule, and compute the number of tags. */
+    ptr_tag = allocation_tag_from_addr(ptr);
+    tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
+    tag_last = QEMU_ALIGN_DOWN(ptr + size - 1, TAG_GRANULE);
+    tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
+    n = checkN(mem, ptr & TAG_GRANULE, ptr_tag, tag_count);
+    if (likely(n == tag_count)) {
+        return size;
+    }
+
+    /*
+     * Failure; for the first granule, it's at @ptr. Otherwise
+     * it's at the first byte of the nth granule. Calculate how
+     * many bytes we can access without hitting that failure.
+     */
+    if (n == 0) {
+        return 0;
+    } else {
+        return n * TAG_GRANULE - (ptr - tag_first);
+    }
+}
-- 
2.34.1

Implement the SET* instructions which collectively implement a
"memset" operation. These come in a set of three, eg SETP
(prologue), SETM (main), SETE (epilogue), and each of those has
different flavours to indicate whether memory accesses should be
unpriv or non-temporal.

This commit does not include the "memset with tag setting"
SETG* instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-8-peter.maydell@linaro.org
---
 target/arm/tcg/helper-a64.h    |   4 +
 target/arm/tcg/a64.decode      |  16 ++
 target/arm/tcg/helper-a64.c    | 344 +++++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  49 +++++
 4 files changed, 413 insertions(+)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64)
 
 DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
                    noreturn, env, i64, i32, i32)
+
+DEF_HELPER_3(setp, void, env, i32, i32)
+DEF_HELPER_3(setm, void, env, i32, i32)
+DEF_HELPER_3(sete, void, env, i32, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ LDGM 11011001 11 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
 STZ2G           11011001 11 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
 STZ2G           11011001 11 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
 STZ2G           11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
+
+# Memory operations (memset, memcpy, memmove)
+# Each of these comes in a set of three, eg SETP (prologue), SETM (main),
+# SETE (epilogue), and each of those has different flavours to
+# indicate whether memory accesses should be unpriv or non-temporal.
+# We don't distinguish temporal and non-temporal accesses, but we
+# do need to report it in syndrome register values.
+
+# Memset
+&set rs rn rd unpriv nontemp
+# op2 bit 1 is nontemporal bit
+@set         .. ......... rs:5 .. nontemp:1 unpriv:1 .. rn:5 rd:5 &set
+
+SETP            00 011001110 ..... 00 . . 01 ..... ..... @set
+SETM            00 011001110 ..... 01 . . 01 ..... ..... @set
+SETE            00 011001110 ..... 10 . . 01 ..... ..... @set
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr,
     arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type,
                                 mmu_idx, GETPC());
 }
+
+/* Memory operations (memset, memmove, memcpy) */
+
+/*
+ * Return true if the CPY* and SET* insns can execute; compare
+ * pseudocode CheckMOPSEnabled(), though we refactor it a little.
+ */
+static bool mops_enabled(CPUARMState *env)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 &&
+        (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) &&
+        !(arm_hcrx_el2_eff(env) & HCRX_MSCEN)) {
+        return false;
+    }
+
+    if (el == 0) {
+        if (!el_is_in_host(env, 0)) {
+            return env->cp15.sctlr_el[1] & SCTLR_MSCEN;
+        } else {
+            return env->cp15.sctlr_el[2] & SCTLR_MSCEN;
+        }
+    }
+
+    return true;
+}
+
+static void check_mops_enabled(CPUARMState *env, uintptr_t ra)
+{
+    if (!mops_enabled(env)) {
+        raise_exception_ra(env, EXCP_UDEF, syn_uncategorized(),
+                           exception_target_el(env), ra);
+    }
+}
+
+/*
+ * Return the target exception level for an exception due
+ * to mismatched arguments in a FEAT_MOPS copy or set.
+ * Compare pseudocode MismatchedCpySetTargetEL()
+ */
+static int mops_mismatch_exception_target_el(CPUARMState *env)
+{
+    int el = arm_current_el(env);
+
+    if (el > 1) {
+        return el;
+    }
+    if (el == 0 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
+        return 2;
+    }
+    if (el == 1 && (arm_hcrx_el2_eff(env) & HCRX_MCE2)) {
+        return 2;
+    }
+    return 1;
+}
+
+/*
+ * Check whether an M or E instruction was executed with a CF value
+ * indicating the wrong option for this implementation.
+ * Assumes we are always Option A.
+ */
+static void check_mops_wrong_option(CPUARMState *env, uint32_t syndrome,
+                                    uintptr_t ra)
+{
+    if (env->CF != 0) {
+        syndrome |= 1 << 17; /* Set the wrong-option bit */
+        raise_exception_ra(env, EXCP_UDEF, syndrome,
+                           mops_mismatch_exception_target_el(env), ra);
+    }
+}
+
+/*
+ * Return the maximum number of bytes we can transfer starting at addr
+ * without crossing a page boundary.
+ */
+static uint64_t page_limit(uint64_t addr)
+{
+    return TARGET_PAGE_ALIGN(addr + 1) - addr;
+}
+
+/*
+ * Perform part of a memory set on an area of guest memory starting at
+ * toaddr (a dirty address) and extending for setsize bytes.
+ *
+ * Returns the number of bytes actually set, which might be less than
+ * setsize; the caller should loop until the whole set has been done.
+ * The caller should ensure that the guest registers are correct
+ * for the possibility that the first byte of the set encounters
+ * an exception or watchpoint. We guarantee not to take any faults
+ * for bytes other than the first.
+ */
+static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
+                         uint64_t setsize, uint32_t data, int memidx,
+                         uint32_t *mtedesc, uintptr_t ra)
+{
+    void *mem;
+
+    setsize = MIN(setsize, page_limit(toaddr));
+    if (*mtedesc) {
+        uint64_t mtesize = mte_mops_probe(env, toaddr, setsize, *mtedesc);
+        if (mtesize == 0) {
+            /* Trap, or not. All CPU state is up to date */
+            mte_check_fail(env, *mtedesc, toaddr, ra);
+            /* Continue, with no further MTE checks required */
+            *mtedesc = 0;
+        } else {
+            /* Advance to the end, or to the tag mismatch */
+            setsize = MIN(setsize, mtesize);
+        }
+    }
+
+    toaddr = useronly_clean_ptr(toaddr);
+    /*
+     * Trapless lookup: returns NULL for invalid page, I/O,
+     * watchpoints, clean pages, etc.
+     */
+    mem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, memidx);
+
+#ifndef CONFIG_USER_ONLY
+    if (unlikely(!mem)) {
+        /*
+         * Slow-path: just do one byte write. This will handle the
+         * watchpoint, invalid page, etc handling correctly.
+         * For clean code pages, the next iteration will see
+         * the page dirty and will use the fast path.
+         */
+        cpu_stb_mmuidx_ra(env, toaddr, data, memidx, ra);
+        return 1;
+    }
+#endif
+    /* Easy case: just memset the host memory */
+    memset(mem, data, setsize);
+    return setsize;
+}
+
+typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
+                        uint64_t setsize, uint32_t data,
+                        int memidx, uint32_t *mtedesc, uintptr_t ra);
+
+/* Extract register numbers from a MOPS exception syndrome value */
+static int mops_destreg(uint32_t syndrome)
+{
+    return extract32(syndrome, 10, 5);
+}
+
+static int mops_srcreg(uint32_t syndrome)
+{
+    return extract32(syndrome, 5, 5);
+}
+
+static int mops_sizereg(uint32_t syndrome)
+{
+    return extract32(syndrome, 0, 5);
+}
+
+/*
+ * Return true if TCMA and TBI bits mean we need to do MTE checks.
+ * We only need to do this once per MOPS insn, not for every page.
+ */
+static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
+{
+    int bit55 = extract64(ptr, 55, 1);
+
+    /*
+     * Note that tbi_check() returns true for "access checked" but
+     * tcma_check() returns true for "access unchecked".
+     */
+    if (!tbi_check(desc, bit55)) {
+        return false;
+    }
+    return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
+}
+
+/*
+ * For the Memory Set operation, our implementation chooses
+ * always to use "option A", where we update Xd to the final
+ * address in the SETP insn, and set Xn to be -(bytes remaining).
+ * On SETM and SETE insns we only need update Xn.
+ *
+ * @env: CPU
+ * @syndrome: syndrome value for mismatch exceptions
+ * (also contains the register numbers we need to use)
+ * @mtedesc: MTE descriptor word
+ * @stepfn: function which does a single part of the set operation
+ * @is_setg: true if this is the tag-setting SETG variant
+ */
+static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
+                    StepFn *stepfn, bool is_setg, uintptr_t ra)
+{
+    /* Prologue: we choose to do up to the next page boundary */
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint8_t data = env->xregs[rs];
+    uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
+    uint64_t toaddr = env->xregs[rd];
+    uint64_t setsize = env->xregs[rn];
+    uint64_t stagesetsize, step;
+
+    check_mops_enabled(env, ra);
+
+    if (setsize > INT64_MAX) {
+        setsize = INT64_MAX;
+    }
+
+    if (!mte_checks_needed(toaddr, mtedesc)) {
+        mtedesc = 0;
+    }
+
+    stagesetsize = MIN(setsize, page_limit(toaddr));
+    while (stagesetsize) {
+        env->xregs[rd] = toaddr;
+        env->xregs[rn] = setsize;
+        step = stepfn(env, toaddr, stagesetsize, data, memidx, &mtedesc, ra);
+        toaddr += step;
+        setsize -= step;
+        stagesetsize -= step;
+    }
+    /* Insn completed, so update registers to the Option A format */
+    env->xregs[rd] = toaddr + setsize;
+    env->xregs[rn] = -setsize;
+
+    /* Set NZCV = 0000 to indicate we are an Option A implementation */
+    env->NF = 0;
+    env->ZF = 1; /* our env->ZF encoding is inverted */
+    env->CF = 0;
+    env->VF = 0;
+    return;
+}
+
+void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+    do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
+}
+
+static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
+                    StepFn *stepfn, bool is_setg, uintptr_t ra)
+{
+    /* Main: we choose to do all the full-page chunks */
+    CPUState *cs = env_cpu(env);
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint8_t data = env->xregs[rs];
+    uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
+    uint64_t setsize = -env->xregs[rn];
+    uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
+    uint64_t step, stagesetsize;
+
+    check_mops_enabled(env, ra);
+
+    /*
+     * We're allowed to NOP out "no data to copy" before the consistency
+     * checks; we choose to do so.
+     */
+    if (env->xregs[rn] == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    /*
+     * Our implementation will work fine even if we have an unaligned
+     * destination address, and because we update Xn every time around
+     * the loop below and the return value from stepfn() may be less
+     * than requested, we might find toaddr is unaligned. So we don't
+     * have an IMPDEF check for alignment here.
+     */
+
+    if (!mte_checks_needed(toaddr, mtedesc)) {
+        mtedesc = 0;
+    }
+
+    /* Do the actual memset: we leave the last partial page to SETE */
+    stagesetsize = setsize & TARGET_PAGE_MASK;
+    while (stagesetsize > 0) {
+        step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
+        toaddr += step;
+        setsize -= step;
+        stagesetsize -= step;
+        env->xregs[rn] = -setsize;
+        if (stagesetsize > 0 && unlikely(cpu_loop_exit_requested(cs))) {
+            cpu_loop_exit_restore(cs, ra);
+        }
+    }
+}
+
+void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+    do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
+}
+
+static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
+                    StepFn *stepfn, bool is_setg, uintptr_t ra)
+{
+    /* Epilogue: do the last partial page */
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint8_t data = env->xregs[rs];
+    uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
+    uint64_t setsize = -env->xregs[rn];
+    uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
+    uint64_t step;
+
+    check_mops_enabled(env, ra);
+
+    /*
+     * We're allowed to NOP out "no data to copy" before the consistency
+     * checks; we choose to do so.
+     */
+    if (setsize == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    /*
+     * Our implementation has no address alignment requirements, but
+     * we do want to enforce the "less than a page" size requirement,
+     * so we don't need to have the "check for interrupts" here.
+     */
+    if (setsize >= TARGET_PAGE_SIZE) {
+        raise_exception_ra(env, EXCP_UDEF, syndrome,
+                           mops_mismatch_exception_target_el(env), ra);
+    }
+
+    if (!mte_checks_needed(toaddr, mtedesc)) {
+        mtedesc = 0;
+    }
+
+    /* Do the actual memset */
+    while (setsize > 0) {
+        step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
+        toaddr += step;
+        setsize -= step;
+        env->xregs[rn] = -setsize;
+    }
+}
+
+void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+    do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZG, aa64_mte_insn_reg, do_STG, a, true, false)
 TRANS_FEAT(ST2G, aa64_mte_insn_reg, do_STG, a, false, true)
 TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)
 
+typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);
+
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
+{
+    int memidx;
+    uint32_t syndrome, desc = 0;
+
+    /*
+     * UNPREDICTABLE cases: we choose to UNDEF, which allows
+     * us to pull this check before the CheckMOPSEnabled() test
+     * (which we do in the helper function)
+     */
+    if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
+        a->rd == 31 || a->rn == 31) {
+        return false;
+    }
+
+    memidx = get_a64_user_mem_index(s, a->unpriv);
+
+    /*
+     * We pass option_a == true, matching our implementation;
+     * we pass wrong_option == false: helper function may set that bit.
438
+ */
439
+ syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
440
+ is_epilogue, false, true, a->rd, a->rs, a->rn);
441
+
442
+ if (s->mte_active[a->unpriv]) {
443
+ /* We may need to do MTE tag checking, so assemble the descriptor */
444
+ desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
445
+ desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
446
+ desc = FIELD_DP32(desc, MTEDESC, WRITE, true);
447
+ /* SIZEM1 and ALIGN we leave 0 (byte write) */
448
+ }
449
+ /* The helper function always needs the memidx even with MTE disabled */
450
+ desc = FIELD_DP32(desc, MTEDESC, MIDX, memidx);
451
+
452
+ /*
453
+ * The helper needs the register numbers, but since they're in
454
+ * the syndrome anyway, we let it extract them from there rather
455
+ * than passing in an extra three integer arguments.
456
+ */
457
+ fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(desc));
458
+ return true;
459
+}
460
+
461
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
462
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
463
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
464
+
465
typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
466
467
static bool gen_rri(DisasContext *s, arg_rri_sf *a,
98
--
468
--
99
2.34.1
469
2.34.1
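As background for reviewers: under "option A" the SET* helpers keep all persistent state in the guest registers themselves — Xn holds minus the remaining length and is updated after every step, so the insn can be restarted at any point after an exception. A stripped-down, host-only sketch of that loop shape (hypothetical names; the real helpers go through the step functions, MTE checks and exception paths shown in the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* One bounded step, like the patch's set_step(): clears at most up to the
 * next page boundary and reports how many bytes it actually handled. */
static uint64_t set_step_sim(uint8_t *base, uint64_t toaddr,
                             uint64_t setsize, uint8_t data)
{
    uint64_t limit = PAGE_SIZE - (toaddr & (PAGE_SIZE - 1));
    uint64_t n = setsize < limit ? setsize : limit;
    memset(base + toaddr, data, n);
    return n;
}

/* "Option A" bookkeeping from do_setm()/do_sete(): *xn holds the negated
 * remaining size, refreshed after each step so the insn is restartable. */
static void do_set_sim(uint8_t *base, uint64_t toaddr, uint64_t *xn,
                       uint8_t data)
{
    uint64_t setsize = -*xn;

    while (setsize > 0) {
        uint64_t step = set_step_sim(base, toaddr, setsize, data);
        toaddr += step;
        setsize -= step;
        *xn = -setsize;
    }
}
```

When the operation completes, Xn has reached zero; an interrupted run simply resumes from the partially advanced state.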
Currently the only tag-setting instructions always do so in the
context of the current EL, and so we only need one ATA bit in the TB
flags. The FEAT_MOPS SETG instructions include ones which set tags
for a non-privileged access, so we now also need the equivalent "are
tags enabled?" information for EL0.

Add the new TB flag, and convert the existing 'bool ata' field in
DisasContext to a 'bool ata[2]' that can be indexed by the is_unpriv
bit in an instruction, similarly to mte[2].

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-9-peter.maydell@linaro.org
---
target/arm/cpu.h | 1 +
target/arm/tcg/translate.h | 4 ++--
target/arm/tcg/hflags.c | 12 ++++++++++++
target/arm/tcg/translate-a64.c | 23 ++++++++++++-----------
4 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SVL, 24, 4)
FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
FIELD(TBFLAG_A64, NAA, 30, 1)
+FIELD(TBFLAG_A64, ATA0, 31, 1)

/*
 * Helpers for using the above.
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool unpriv;
/* True if v8.3-PAuth is active. */
bool pauth_active;
- /* True if v8.5-MTE access to tags is enabled. */
- bool ata;
+ /* True if v8.5-MTE access to tags is enabled; index with is_unpriv. */
+ bool ata[2];
/* True if v8.5-MTE tag checks affect the PE; index with is_unpriv. */
bool mte_active[2];
/* True with v8.5-BTI and SCTLR_ELx.BT* set. */
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
&& allocation_tag_access_enabled(env, 0, sctlr)) {
DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
}
+ /*
+ * For unpriv tag-setting accesses we also need ATA0. Again, in
+ * contexts where unpriv and normal insns are the same we
+ * duplicate the ATA bit to save effort for translate-a64.c.
+ */
+ if (EX_TBFLAG_A64(flags, UNPRIV)) {
+ if (allocation_tag_access_enabled(env, 0, sctlr)) {
+ DP_TBFLAG_A64(flags, ATA0, 1);
+ }
+ } else {
+ DP_TBFLAG_A64(flags, ATA0, EX_TBFLAG_A64(flags, ATA));
+ }
/* Cache TCMA as well as TBI. */
DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx));
}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
clean_addr = clean_data_tbi(s, tcg_rt);
gen_probe_access(s, clean_addr, MMU_DATA_STORE, MO_8);

- if (s->ata) {
+ if (s->ata[0]) {
/* Extract the tag from the register to match STZGM. */
tag = tcg_temp_new_i64();
tcg_gen_shri_i64(tag, tcg_rt, 56);
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
clean_addr = clean_data_tbi(s, tcg_rt);
gen_helper_dc_zva(cpu_env, clean_addr);

- if (s->ata) {
+ if (s->ata[0]) {
/* Extract the tag from the register to match STZGM. */
tag = tcg_temp_new_i64();
tcg_gen_shri_i64(tag, tcg_rt, 56);
@@ -XXX,XX +XXX,XX @@ static bool trans_STGP(DisasContext *s, arg_ldstpair *a)
tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);

/* Perform the tag store, if tag access enabled. */
- if (s->ata) {
+ if (s->ata[0]) {
if (tb_cflags(s->base.tb) & CF_PARALLEL) {
gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr);
} else {
@@ -XXX,XX +XXX,XX @@ static bool trans_STZGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_stzgm_tags(cpu_env, addr, tcg_rt);
}
/*
@@ -XXX,XX +XXX,XX @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_stgm(cpu_env, addr, tcg_rt);
} else {
MMUAccessType acc = MMU_DATA_STORE;
@@ -XXX,XX +XXX,XX @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_ldgm(tcg_rt, cpu_env, addr);
} else {
MMUAccessType acc = MMU_DATA_LOAD;
@@ -XXX,XX +XXX,XX @@ static bool trans_LDG(DisasContext *s, arg_ldst_tag *a)

tcg_gen_andi_i64(addr, addr, -TAG_GRANULE);
tcg_rt = cpu_reg(s, a->rt);
- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
} else {
/*
@@ -XXX,XX +XXX,XX @@ static bool do_STG(DisasContext *s, arg_ldst_tag *a, bool is_zero, bool is_pair)
tcg_gen_addi_i64(addr, addr, a->imm);
}
tcg_rt = cpu_reg_sp(s, a->rt);
- if (!s->ata) {
+ if (!s->ata[0]) {
/*
* For STG and ST2G, we need to check alignment and probe memory.
* TODO: For STZG and STZ2G, we could rely on the stores below,
@@ -XXX,XX +XXX,XX @@ static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a,
tcg_rn = cpu_reg_sp(s, a->rn);
tcg_rd = cpu_reg_sp(s, a->rd);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
tcg_constant_i32(imm),
tcg_constant_i32(a->uimm4));
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
goto do_unallocated;
}
- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_irg(cpu_reg_sp(s, rd), cpu_env,
cpu_reg_sp(s, rn), cpu_reg(s, rm));
} else {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
dc->unpriv = EX_TBFLAG_A64(tb_flags, UNPRIV);
- dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
+ dc->ata[0] = EX_TBFLAG_A64(tb_flags, ATA);
+ dc->ata[1] = EX_TBFLAG_A64(tb_flags, ATA0);
dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
--
2.34.1
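The hflags hunk above follows the same pattern already used for MTE0_ACTIVE: when there is no distinct unprivileged regime, the EL0 copy of the bit is simply a duplicate, so the translator can always index by the instruction's is_unpriv bit without a special case. A minimal illustration of that idea (hypothetical, simplified types; not the real TB-flag encoding):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical distilled version of the rebuild_hflags_a64() logic that
 * feeds DisasContext::ata[2], indexed by an insn's is_unpriv bit. */
struct tbflags {
    bool unpriv; /* true if there is a distinct unprivileged regime */
    bool ata;    /* tags enabled for the current EL */
    bool ata0;   /* tags enabled for the EL0/unpriv view */
};

static void compute_ata(struct tbflags *f, bool el_tags_enabled,
                        bool el0_tags_enabled)
{
    f->ata = el_tags_enabled;
    if (f->unpriv) {
        /* EL1&0 with a distinct unprivileged regime: check EL0 */
        f->ata0 = el0_tags_enabled;
    } else {
        /* unpriv insns behave like normal ones: duplicate the bit */
        f->ata0 = f->ata;
    }
}
```

With this shape, translate-time code can write `f->ata[is_unpriv]`-style lookups unconditionally.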
The FEAT_MOPS SETG* instructions are very similar to the SET*
instructions, but as well as setting memory contents they also
set the MTE tags. They are architecturally required to operate
on tag-granule aligned regions only.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-10-peter.maydell@linaro.org
---
target/arm/internals.h | 10 ++++
target/arm/tcg/helper-a64.h | 3 ++
target/arm/tcg/a64.decode | 5 ++
target/arm/tcg/helper-a64.c | 86 ++++++++++++++++++++++++++++++++--
target/arm/tcg/mte_helper.c | 40 ++++++++++++++++
target/arm/tcg/translate-a64.c | 20 +++++---
6 files changed, 155 insertions(+), 9 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
void mte_check_fail(CPUARMState *env, uint32_t desc,
uint64_t dirty_ptr, uintptr_t ra);

+/**
+ * mte_mops_set_tags: Set MTE tags for a portion of a FEAT_MOPS operation
+ * @env: CPU env
+ * @dirty_ptr: Start address of memory region (dirty pointer)
+ * @size: length of region (guaranteed not to cross page boundary)
+ * @desc: MTEDESC descriptor word
+ */
+void mte_mops_set_tags(CPUARMState *env, uint64_t dirty_ptr, uint64_t size,
+ uint32_t desc);
+
static inline int allocation_tag_from_addr(uint64_t ptr)
{
return extract64(ptr, 56, 4);
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
DEF_HELPER_3(setp, void, env, i32, i32)
DEF_HELPER_3(setm, void, env, i32, i32)
DEF_HELPER_3(sete, void, env, i32, i32)
+DEF_HELPER_3(setgp, void, env, i32, i32)
+DEF_HELPER_3(setgm, void, env, i32, i32)
+DEF_HELPER_3(setge, void, env, i32, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
SETP 00 011001110 ..... 00 . . 01 ..... ..... @set
SETM 00 011001110 ..... 01 . . 01 ..... ..... @set
SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
+
+# Like SET, but also setting MTE tags
+SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
+SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
+SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
return setsize;
}

+/*
+ * Similar, but setting tags. The architecture requires us to do this
+ * in 16-byte chunks. SETP accesses are not tag checked; they set
+ * the tags.
+ */
+static uint64_t set_step_tags(CPUARMState *env, uint64_t toaddr,
+ uint64_t setsize, uint32_t data, int memidx,
+ uint32_t *mtedesc, uintptr_t ra)
+{
+ void *mem;
+ uint64_t cleanaddr;
+
+ setsize = MIN(setsize, page_limit(toaddr));
+
+ cleanaddr = useronly_clean_ptr(toaddr);
+ /*
+ * Trapless lookup: returns NULL for invalid page, I/O,
+ * watchpoints, clean pages, etc.
+ */
+ mem = tlb_vaddr_to_host(env, cleanaddr, MMU_DATA_STORE, memidx);
+
+#ifndef CONFIG_USER_ONLY
+ if (unlikely(!mem)) {
+ /*
+ * Slow-path: just do one write. This will handle the
+ * watchpoint, invalid page, etc handling correctly.
+ * The architecture requires that we do 16 bytes at a time,
+ * and we know both ptr and size are 16 byte aligned.
+ * For clean code pages, the next iteration will see
+ * the page dirty and will use the fast path.
+ */
+ uint64_t repldata = data * 0x0101010101010101ULL;
+ MemOpIdx oi16 = make_memop_idx(MO_TE | MO_128, memidx);
+ cpu_st16_mmu(env, toaddr, int128_make128(repldata, repldata), oi16, ra);
+ mte_mops_set_tags(env, toaddr, 16, *mtedesc);
+ return 16;
+ }
+#endif
+ /* Easy case: just memset the host memory */
+ memset(mem, data, setsize);
+ mte_mops_set_tags(env, toaddr, setsize, *mtedesc);
+ return setsize;
+}
+
typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
uint64_t setsize, uint32_t data,
int memidx, uint32_t *mtedesc, uintptr_t ra);
@@ -XXX,XX +XXX,XX @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
}

+/* Take an exception if the SETG addr/size are not granule aligned */
+static void check_setg_alignment(CPUARMState *env, uint64_t ptr, uint64_t size,
+ uint32_t memidx, uintptr_t ra)
+{
+ if ((size != 0 && !QEMU_IS_ALIGNED(ptr, TAG_GRANULE)) ||
+ !QEMU_IS_ALIGNED(size, TAG_GRANULE)) {
+ arm_cpu_do_unaligned_access(env_cpu(env), ptr, MMU_DATA_STORE,
+ memidx, ra);
+
+ }
+}
+
/*
* For the Memory Set operation, our implementation chooses
* always to use "option A", where we update Xd to the final
@@ -XXX,XX +XXX,XX @@ static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,

if (setsize > INT64_MAX) {
setsize = INT64_MAX;
+ if (is_setg) {
+ setsize &= ~0xf;
+ }
}

- if (!mte_checks_needed(toaddr, mtedesc)) {
+ if (unlikely(is_setg)) {
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
mtedesc = 0;
}

@@ -XXX,XX +XXX,XX @@ void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
}

+void HELPER(setgp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+ do_setp(env, syndrome, mtedesc, set_step_tags, true, GETPC());
+}
+
static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
StepFn *stepfn, bool is_setg, uintptr_t ra)
{
@@ -XXX,XX +XXX,XX @@ static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
* have an IMPDEF check for alignment here.
*/

- if (!mte_checks_needed(toaddr, mtedesc)) {
+ if (unlikely(is_setg)) {
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
mtedesc = 0;
}

@@ -XXX,XX +XXX,XX @@ void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
}

+void HELPER(setgm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+ do_setm(env, syndrome, mtedesc, set_step_tags, true, GETPC());
+}
+
static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
StepFn *stepfn, bool is_setg, uintptr_t ra)
{
@@ -XXX,XX +XXX,XX @@ static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
mops_mismatch_exception_target_el(env), ra);
}

- if (!mte_checks_needed(toaddr, mtedesc)) {
+ if (unlikely(is_setg)) {
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
mtedesc = 0;
}

@@ -XXX,XX +XXX,XX @@ void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
{
do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
}
+
+void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+ do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
+}
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
return n * TAG_GRANULE - (ptr - tag_first);
}
}
+
+void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size,
+ uint32_t desc)
+{
+ int mmu_idx, tag_count;
+ uint64_t ptr_tag;
+ void *mem;
+
+ if (!desc) {
+ /* Tags not actually enabled */
+ return;
+ }
+
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
+ /* True probe: this will never fault */
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE, size,
+ MMU_DATA_STORE, true, 0);
+ if (!mem) {
+ return;
+ }
+
+ /*
+ * We know that ptr and size are both TAG_GRANULE aligned; store
+ * the tag from the pointer value into the tag memory.
+ */
+ ptr_tag = allocation_tag_from_addr(ptr);
+ tag_count = size / TAG_GRANULE;
+ if (ptr & TAG_GRANULE) {
+ /* Not 2*TAG_GRANULE-aligned: store tag to first nibble */
+ store_tag1_parallel(TAG_GRANULE, mem, ptr_tag);
+ mem++;
+ tag_count--;
+ }
+ memset(mem, ptr_tag | (ptr_tag << 4), tag_count / 2);
+ if (tag_count & 1) {
+ /* Final trailing unaligned nibble */
+ mem += tag_count / 2;
+ store_tag1_parallel(0, mem, ptr_tag);
+ }
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)

typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);

-static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue,
+ bool is_setg, SetFn fn)
{
int memidx;
uint32_t syndrome, desc = 0;

+ if (is_setg && !dc_isar_feature(aa64_mte, s)) {
+ return false;
+ }
+
/*
* UNPREDICTABLE cases: we choose to UNDEF, which allows
* us to pull this check before the CheckMOPSEnabled() test
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
* We pass option_a == true, matching our implementation;
* we pass wrong_option == false: helper function may set that bit.
*/
- syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
+ syndrome = syn_mop(true, is_setg, (a->nontemp << 1) | a->unpriv,
is_epilogue, false, true, a->rd, a->rs, a->rn);

- if (s->mte_active[a->unpriv]) {
+ if (is_setg ? s->ata[a->unpriv] : s->mte_active[a->unpriv]) {
/* We may need to do MTE tag checking, so assemble the descriptor */
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
return true;
}

-TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
-TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
-TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, false, gen_helper_setp)
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, false, gen_helper_setm)
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, false, gen_helper_sete)
+TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
+TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
+TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)

typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);

--
2.34.1
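mte_mops_set_tags() above packs one 4-bit tag per 16-byte granule, two tags per byte: an odd starting granule means the first tag goes into a high nibble, then whole bytes are filled with the replicated tag, with a possible trailing low nibble. A self-contained sketch of just that nibble-packing logic (a hypothetical simplified model operating on a plain byte array rather than QEMU's tag memory):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TAG_GRANULE 16

/* Simplified model of the tag-store loop in mte_mops_set_tags():
 * 'mem' points at the byte holding the tag for the first granule of
 * [ptr, ptr + size); even granules use the low nibble, odd granules
 * the high nibble. ptr and size are TAG_GRANULE aligned. */
static void set_tags_sim(uint8_t *mem, uint64_t ptr, uint64_t size, int tag)
{
    int tag_count = size / TAG_GRANULE;

    if (ptr & TAG_GRANULE) {
        /* Not 2*TAG_GRANULE-aligned: first tag lives in the high nibble */
        *mem = (*mem & 0x0f) | (tag << 4);
        mem++;
        tag_count--;
    }
    /* Whole bytes: the tag replicated into both nibbles */
    memset(mem, tag | (tag << 4), tag_count / 2);
    if (tag_count & 1) {
        /* Final trailing granule: only the low nibble of the last byte */
        mem += tag_count / 2;
        *mem = (*mem & 0xf0) | tag;
    }
}
```

The real helper does the partial-byte stores with store_tag1_parallel() so that concurrent tag writers to the neighbouring nibble are not clobbered.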
1
From: Axel Heider <axel.heider@hensoldt.net>
1
The FEAT_MOPS memory copy operations need an extra helper routine
2
for checking for MTE tag checking failures beyond the ones we
3
already added for memory set operations:
4
* mte_mops_probe_rev() does the same job as mte_mops_probe(), but
5
it checks tags starting at the provided address and working
6
backwards, rather than forwards
2
7
3
Fix the limit check. If the limit is less than the compare value,
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
the timer can never reach this value, thus it will never fire.
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20230912140434.1333369-11-peter.maydell@linaro.org
11
---
12
target/arm/internals.h | 17 +++++++
13
target/arm/tcg/mte_helper.c | 99 +++++++++++++++++++++++++++++++++++++
14
2 files changed, 116 insertions(+)
5
15
6
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1491
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
7
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
8
Message-id: 168070611775.20412.2883242077302841473-2@git.sr.ht
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/timer/imx_epit.c | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/timer/imx_epit.c
18
--- a/target/arm/internals.h
18
+++ b/hw/timer/imx_epit.c
19
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ static void imx_epit_update_compare_timer(IMXEPITState *s)
20
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
20
* the compare value. Otherwise it may fire at most once in the
21
uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
21
* current round.
22
uint32_t desc);
22
*/
23
23
- is_oneshot = (limit >= s->cmp);
24
+/**
24
+ is_oneshot = (limit < s->cmp);
25
+ * mte_mops_probe_rev: Check where the next MTE failure is for a FEAT_MOPS
25
if (counter >= s->cmp) {
26
+ * operation going in the reverse direction
26
/* The compare timer fires in the current round. */
27
+ * @env: CPU env
27
counter -= s->cmp;
28
+ * @ptr: *end* address of memory region (dirty pointer)
29
+ * @size: length of region (guaranteed not to cross a page boundary)
30
+ * @desc: MTEDESC descriptor word (0 means no MTE checks)
31
+ * Returns: the size of the region that can be copied without hitting
32
+ * an MTE tag failure
33
+ *
34
+ * Note that we assume that the caller has already checked the TBI
35
+ * and TCMA bits with mte_checks_needed() and an MTE check is definitely
36
+ * required.
37
+ */
38
+uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
39
+ uint32_t desc);
40
+
41
/**
42
* mte_check_fail: Record an MTE tag check failure
43
* @env: CPU env
44
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/mte_helper.c
47
+++ b/target/arm/tcg/mte_helper.c
48
@@ -XXX,XX +XXX,XX @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
49
return n;
50
}
51
52
+/**
53
+ * checkNrev:
54
+ * @tag: tag memory to test
55
+ * @odd: true to begin testing at tags at odd nibble
56
+ * @cmp: the tag to compare against
57
+ * @count: number of tags to test
58
+ *
59
+ * Return the number of successful tests.
60
+ * Thus a return value < @count indicates a failure.
61
+ *
62
+ * This is like checkN, but it runs backwards, checking the
63
+ * tags starting with @tag and then the tags preceding it.
64
+ * This is needed by the backwards-memory-copying operations.
65
+ */
66
+static int checkNrev(uint8_t *mem, int odd, int cmp, int count)
67
+{
68
+ int n = 0, diff;
69
+
70
+ /* Replicate the test tag and compare. */
71
+ cmp *= 0x11;
72
+ diff = *mem-- ^ cmp;
73
+
74
+ if (!odd) {
75
+ goto start_even;
76
+ }
77
+
78
+ while (1) {
79
+ /* Test odd tag. */
80
+ if (unlikely((diff) & 0xf0)) {
81
+ break;
82
+ }
83
+ if (++n == count) {
84
+ break;
85
+ }
86
+
87
+ start_even:
88
+ /* Test even tag. */
89
+ if (unlikely((diff) & 0x0f)) {
90
+ break;
91
+ }
92
+ if (++n == count) {
93
+ break;
94
+ }
95
+
96
+ diff = *mem-- ^ cmp;
97
+ }
98
+ return n;
99
+}
100
+
101
/**
102
* mte_probe_int() - helper for mte_probe and mte_check
103
* @env: CPU environment
104
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
105
}
106
}
107
108
+uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
109
+ uint32_t desc)
110
+{
111
+ int mmu_idx, tag_count;
112
+ uint64_t ptr_tag, tag_first, tag_last;
113
+ void *mem;
114
+ bool w = FIELD_EX32(desc, MTEDESC, WRITE);
115
+ uint32_t n;
116
+
117
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
118
+ /* True probe; this will never fault */
119
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
120
+ w ? MMU_DATA_STORE : MMU_DATA_LOAD,
121
+ size, MMU_DATA_LOAD, true, 0);
122
+ if (!mem) {
123
+ return size;
124
+ }
125
+
126
+ /*
127
+ * TODO: checkNrev() is not designed for checks of the size we expect
128
+ * for FEAT_MOPS operations, so we should implement this differently.
129
+ * Maybe we should do something like
130
+ * if (region start and size are aligned nicely) {
131
+ * do direct loads of 64 tag bits at a time;
132
+ * } else {
133
+ * call checkN()
134
+ * }
135
+ */
136
+ /* Round the bounds to the tag granule, and compute the number of tags. */
137
+ ptr_tag = allocation_tag_from_addr(ptr);
138
+ tag_first = QEMU_ALIGN_DOWN(ptr - (size - 1), TAG_GRANULE);
139
+ tag_last = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
140
+ tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
141
+ n = checkNrev(mem, ptr & TAG_GRANULE, ptr_tag, tag_count);
142
+ if (likely(n == tag_count)) {
143
+ return size;
144
+ }
145
+
146
+ /*
147
+ * Failure; for the first granule, it's at @ptr. Otherwise
148
+ * it's at the last byte of the nth granule. Calculate how
149
+ * many bytes we can access without hitting that failure.
150
+ */
151
+ if (n == 0) {
152
+ return 0;
153
+ } else {
154
+ return (n - 1) * TAG_GRANULE + ((ptr + 1) - tag_last);
155
+ }
156
+}
157
+
158
void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size,
159
uint32_t desc)
160
{
28
--
161
--
29
2.34.1
162
2.34.1
diff view generated by jsdifflib
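As an aside for reviewers: the backwards tag-check loop above can be exercised in isolation. This stand-alone sketch (`checkNrev_model` is a hypothetical model written for this mail, not code from the patch) packs two 4-bit tags per byte the way MTE's tag storage does (even granule in the low nibble, odd granule in the high nibble) and walks backwards until a mismatch or `count` successes:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the patch's checkNrev(): walk backwards from *mem, testing
 * the odd (high) nibble first when starting on an odd granule, and
 * return how many consecutive tags matched cmp (at most count).
 */
static int checkNrev_model(const uint8_t *mem, int odd, int cmp, int count)
{
    int n = 0, diff;

    cmp *= 0x11;               /* replicate the 4-bit tag into both nibbles */
    diff = *mem-- ^ cmp;

    if (!odd) {
        goto start_even;
    }
    while (1) {
        if (diff & 0xf0) {     /* odd (high-nibble) tag mismatch */
            break;
        }
        if (++n == count) {
            break;
        }
 start_even:
        if (diff & 0x0f) {     /* even (low-nibble) tag mismatch */
            break;
        }
        if (++n == count) {
            break;
        }
        diff = *mem-- ^ cmp;   /* step back one byte = two tags */
    }
    return n;
}
```

With all tags equal to the compare value the model returns `count`; a mismatched nibble part-way back stops the walk early, which is exactly the "return value < @count indicates a failure" contract described in the comment.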
The FEAT_MOPS CPY* instructions implement memory copies. These
come in both "always forwards" (memcpy-style) and "overlap OK"
(memmove-style) flavours.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-12-peter.maydell@linaro.org
---
 target/arm/tcg/helper-a64.h    |   7 +
 target/arm/tcg/a64.decode      |  14 +
 target/arm/tcg/helper-a64.c    | 454 +++++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  60 +++++
 4 files changed, 535 insertions(+)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(sete, void, env, i32, i32)
 DEF_HELPER_3(setgp, void, env, i32, i32)
 DEF_HELPER_3(setgm, void, env, i32, i32)
 DEF_HELPER_3(setge, void, env, i32, i32)
+
+DEF_HELPER_4(cpyp, void, env, i32, i32, i32)
+DEF_HELPER_4(cpym, void, env, i32, i32, i32)
+DEF_HELPER_4(cpye, void, env, i32, i32, i32)
+DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
+DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
+DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
 SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
 SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
 SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
+
+# Memmove/Memcopy: the CPY insns allow overlapping src/dest and
+# copy in the correct direction; the CPYF insns always copy forwards.
+#
+# options has the nontemporal and unpriv bits for src and dest
+&cpy rs rn rd options
+@cpy .. ... . ..... rs:5 options:4 .. rn:5 rd:5 &cpy
+
+CPYFP 00 011 0 01000 ..... .... 01 ..... ..... @cpy
+CPYFM 00 011 0 01010 ..... .... 01 ..... ..... @cpy
+CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy
+CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
+CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
+CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static uint64_t page_limit(uint64_t addr)
     return TARGET_PAGE_ALIGN(addr + 1) - addr;
 }
 
+/*
+ * Return the number of bytes we can copy starting from addr and working
+ * backwards without crossing a page boundary.
+ */
+static uint64_t page_limit_rev(uint64_t addr)
+{
+    return (addr & ~TARGET_PAGE_MASK) + 1;
+}
+
 /*
  * Perform part of a memory set on an area of guest memory starting at
  * toaddr (a dirty address) and extending for setsize bytes.
@@ -XXX,XX +XXX,XX @@ void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
 {
     do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
 }
+
+/*
+ * Perform part of a memory copy from the guest memory at fromaddr
+ * and extending for copysize bytes, to the guest memory at
+ * toaddr. Both addresses are dirty.
+ *
+ * Returns the number of bytes actually set, which might be less than
+ * copysize; the caller should loop until the whole copy has been done.
+ * The caller should ensure that the guest registers are correct
+ * for the possibility that the first byte of the copy encounters
+ * an exception or watchpoint. We guarantee not to take any faults
+ * for bytes other than the first.
+ */
+static uint64_t copy_step(CPUARMState *env, uint64_t toaddr, uint64_t fromaddr,
+                          uint64_t copysize, int wmemidx, int rmemidx,
+                          uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
+{
+    void *rmem;
+    void *wmem;
+
+    /* Don't cross a page boundary on either source or destination */
+    copysize = MIN(copysize, page_limit(toaddr));
+    copysize = MIN(copysize, page_limit(fromaddr));
+    /*
+     * Handle MTE tag checks: either handle the tag mismatch for byte 0,
+     * or else copy up to but not including the byte with the mismatch.
+     */
+    if (*rdesc) {
+        uint64_t mtesize = mte_mops_probe(env, fromaddr, copysize, *rdesc);
+        if (mtesize == 0) {
+            mte_check_fail(env, *rdesc, fromaddr, ra);
+            *rdesc = 0;
+        } else {
+            copysize = MIN(copysize, mtesize);
+        }
+    }
+    if (*wdesc) {
+        uint64_t mtesize = mte_mops_probe(env, toaddr, copysize, *wdesc);
+        if (mtesize == 0) {
+            mte_check_fail(env, *wdesc, toaddr, ra);
+            *wdesc = 0;
+        } else {
+            copysize = MIN(copysize, mtesize);
+        }
+    }
+
+    toaddr = useronly_clean_ptr(toaddr);
+    fromaddr = useronly_clean_ptr(fromaddr);
+    /* Trapless lookup of whether we can get a host memory pointer */
+    wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
+    rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
+
+#ifndef CONFIG_USER_ONLY
+    /*
+     * If we don't have host memory for both source and dest then just
+     * do a single byte copy. This will handle watchpoints, invalid pages,
+     * etc correctly. For clean code pages, the next iteration will see
+     * the page dirty and will use the fast path.
+     */
+    if (unlikely(!rmem || !wmem)) {
+        uint8_t byte;
+        if (rmem) {
+            byte = *(uint8_t *)rmem;
+        } else {
+            byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
+        }
+        if (wmem) {
+            *(uint8_t *)wmem = byte;
+        } else {
+            cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
+        }
+        return 1;
+    }
+#endif
+    /* Easy case: just memmove the host memory */
+    memmove(wmem, rmem, copysize);
+    return copysize;
+}
+
+/*
+ * Do part of a backwards memory copy. Here toaddr and fromaddr point
+ * to the *last* byte to be copied.
+ */
+static uint64_t copy_step_rev(CPUARMState *env, uint64_t toaddr,
+                              uint64_t fromaddr,
+                              uint64_t copysize, int wmemidx, int rmemidx,
+                              uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
+{
+    void *rmem;
+    void *wmem;
+
+    /* Don't cross a page boundary on either source or destination */
+    copysize = MIN(copysize, page_limit_rev(toaddr));
+    copysize = MIN(copysize, page_limit_rev(fromaddr));
+
+    /*
+     * Handle MTE tag checks: either handle the tag mismatch for byte 0,
+     * or else copy up to but not including the byte with the mismatch.
+     */
+    if (*rdesc) {
+        uint64_t mtesize = mte_mops_probe_rev(env, fromaddr, copysize, *rdesc);
+        if (mtesize == 0) {
+            mte_check_fail(env, *rdesc, fromaddr, ra);
+            *rdesc = 0;
+        } else {
+            copysize = MIN(copysize, mtesize);
+        }
+    }
+    if (*wdesc) {
+        uint64_t mtesize = mte_mops_probe_rev(env, toaddr, copysize, *wdesc);
+        if (mtesize == 0) {
+            mte_check_fail(env, *wdesc, toaddr, ra);
+            *wdesc = 0;
+        } else {
+            copysize = MIN(copysize, mtesize);
+        }
+    }
+
+    toaddr = useronly_clean_ptr(toaddr);
+    fromaddr = useronly_clean_ptr(fromaddr);
+    /* Trapless lookup of whether we can get a host memory pointer */
+    wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
+    rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
+
+#ifndef CONFIG_USER_ONLY
+    /*
+     * If we don't have host memory for both source and dest then just
+     * do a single byte copy. This will handle watchpoints, invalid pages,
+     * etc correctly. For clean code pages, the next iteration will see
+     * the page dirty and will use the fast path.
+     */
+    if (unlikely(!rmem || !wmem)) {
+        uint8_t byte;
+        if (rmem) {
+            byte = *(uint8_t *)rmem;
+        } else {
+            byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
+        }
+        if (wmem) {
+            *(uint8_t *)wmem = byte;
+        } else {
+            cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
+        }
+        return 1;
+    }
+#endif
+    /*
+     * Easy case: just memmove the host memory. Note that wmem and
+     * rmem here point to the *last* byte to copy.
+     */
+    memmove(wmem - (copysize - 1), rmem - (copysize - 1), copysize);
+    return copysize;
+}
+
+/*
+ * for the Memory Copy operation, our implementation chooses always
+ * to use "option A", where we update Xd and Xs to the final addresses
+ * in the CPYP insn, and then in CPYM and CPYE only need to update Xn.
+ *
+ * @env: CPU
+ * @syndrome: syndrome value for mismatch exceptions
+ * (also contains the register numbers we need to use)
+ * @wdesc: MTE descriptor for the writes (destination)
+ * @rdesc: MTE descriptor for the reads (source)
+ * @move: true if this is CPY (memmove), false for CPYF (memcpy forwards)
+ */
+static void do_cpyp(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                    uint32_t rdesc, uint32_t move, uintptr_t ra)
+{
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
+    uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
+    bool forwards = true;
+    uint64_t toaddr = env->xregs[rd];
+    uint64_t fromaddr = env->xregs[rs];
+    uint64_t copysize = env->xregs[rn];
+    uint64_t stagecopysize, step;
+
+    check_mops_enabled(env, ra);
+
+    if (move) {
+        /*
+         * Copy backwards if necessary. The direction for a non-overlapping
+         * copy is IMPDEF; we choose forwards.
+         */
+        if (copysize > 0x007FFFFFFFFFFFFFULL) {
+            copysize = 0x007FFFFFFFFFFFFFULL;
+        }
+        uint64_t fs = extract64(fromaddr, 0, 56);
+        uint64_t ts = extract64(toaddr, 0, 56);
+        uint64_t fe = extract64(fromaddr + copysize, 0, 56);
+
+        if (fs < ts && fe > ts) {
+            forwards = false;
+        }
+    } else {
+        if (copysize > INT64_MAX) {
+            copysize = INT64_MAX;
+        }
+    }
+
+    if (!mte_checks_needed(fromaddr, rdesc)) {
+        rdesc = 0;
+    }
+    if (!mte_checks_needed(toaddr, wdesc)) {
+        wdesc = 0;
+    }
+
+    if (forwards) {
+        stagecopysize = MIN(copysize, page_limit(toaddr));
+        stagecopysize = MIN(stagecopysize, page_limit(fromaddr));
+        while (stagecopysize) {
+            env->xregs[rd] = toaddr;
+            env->xregs[rs] = fromaddr;
+            env->xregs[rn] = copysize;
+            step = copy_step(env, toaddr, fromaddr, stagecopysize,
+                             wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr += step;
+            fromaddr += step;
+            copysize -= step;
+            stagecopysize -= step;
+        }
+        /* Insn completed, so update registers to the Option A format */
+        env->xregs[rd] = toaddr + copysize;
+        env->xregs[rs] = fromaddr + copysize;
+        env->xregs[rn] = -copysize;
+    } else {
+        /*
+         * In a reverse copy the to and from addrs in Xs and Xd are the start
+         * of the range, but it's more convenient for us to work with pointers
+         * to the last byte being copied.
+         */
+        toaddr += copysize - 1;
+        fromaddr += copysize - 1;
+        stagecopysize = MIN(copysize, page_limit_rev(toaddr));
+        stagecopysize = MIN(stagecopysize, page_limit_rev(fromaddr));
+        while (stagecopysize) {
+            env->xregs[rn] = copysize;
+            step = copy_step_rev(env, toaddr, fromaddr, stagecopysize,
+                                 wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            copysize -= step;
+            stagecopysize -= step;
+            toaddr -= step;
+            fromaddr -= step;
+        }
+        /*
+         * Insn completed, so update registers to the Option A format.
+         * For a reverse copy this is no different to the CPYP input format.
+         */
+        env->xregs[rn] = copysize;
+    }
+
+    /* Set NZCV = 0000 to indicate we are an Option A implementation */
+    env->NF = 0;
+    env->ZF = 1; /* our env->ZF encoding is inverted */
+    env->CF = 0;
+    env->VF = 0;
+    return;
+}
+
+void HELPER(cpyp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                  uint32_t rdesc)
+{
+    do_cpyp(env, syndrome, wdesc, rdesc, true, GETPC());
+}
+
+void HELPER(cpyfp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                   uint32_t rdesc)
+{
+    do_cpyp(env, syndrome, wdesc, rdesc, false, GETPC());
+}
+
+static void do_cpym(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                    uint32_t rdesc, uint32_t move, uintptr_t ra)
+{
+    /* Main: we choose to copy until less than a page remaining */
+    CPUState *cs = env_cpu(env);
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
+    uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
+    bool forwards = true;
+    uint64_t toaddr, fromaddr, copysize, step;
+
+    check_mops_enabled(env, ra);
+
+    /* We choose to NOP out "no data to copy" before consistency checks */
+    if (env->xregs[rn] == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    if (move) {
+        forwards = (int64_t)env->xregs[rn] < 0;
+    }
+
+    if (forwards) {
+        toaddr = env->xregs[rd] + env->xregs[rn];
+        fromaddr = env->xregs[rs] + env->xregs[rn];
+        copysize = -env->xregs[rn];
+    } else {
+        copysize = env->xregs[rn];
+        /* This toaddr and fromaddr point to the *last* byte to copy */
+        toaddr = env->xregs[rd] + copysize - 1;
+        fromaddr = env->xregs[rs] + copysize - 1;
+    }
+
+    if (!mte_checks_needed(fromaddr, rdesc)) {
+        rdesc = 0;
+    }
+    if (!mte_checks_needed(toaddr, wdesc)) {
+        wdesc = 0;
+    }
+
+    /* Our implementation has no particular parameter requirements for CPYM */
+
+    /* Do the actual memmove */
+    if (forwards) {
+        while (copysize >= TARGET_PAGE_SIZE) {
+            step = copy_step(env, toaddr, fromaddr, copysize,
+                             wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr += step;
+            fromaddr += step;
+            copysize -= step;
+            env->xregs[rn] = -copysize;
+            if (copysize >= TARGET_PAGE_SIZE &&
+                unlikely(cpu_loop_exit_requested(cs))) {
+                cpu_loop_exit_restore(cs, ra);
+            }
+        }
+    } else {
+        while (copysize >= TARGET_PAGE_SIZE) {
+            step = copy_step_rev(env, toaddr, fromaddr, copysize,
+                                 wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr -= step;
+            fromaddr -= step;
+            copysize -= step;
+            env->xregs[rn] = copysize;
+            if (copysize >= TARGET_PAGE_SIZE &&
+                unlikely(cpu_loop_exit_requested(cs))) {
+                cpu_loop_exit_restore(cs, ra);
+            }
+        }
+    }
+}
+
+void HELPER(cpym)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                  uint32_t rdesc)
+{
+    do_cpym(env, syndrome, wdesc, rdesc, true, GETPC());
+}
+
+void HELPER(cpyfm)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                   uint32_t rdesc)
+{
+    do_cpym(env, syndrome, wdesc, rdesc, false, GETPC());
+}
+
+static void do_cpye(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                    uint32_t rdesc, uint32_t move, uintptr_t ra)
+{
+    /* Epilogue: do the last partial page */
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
+    uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
+    bool forwards = true;
+    uint64_t toaddr, fromaddr, copysize, step;
+
+    check_mops_enabled(env, ra);
+
+    /* We choose to NOP out "no data to copy" before consistency checks */
+    if (env->xregs[rn] == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    if (move) {
+        forwards = (int64_t)env->xregs[rn] < 0;
+    }
+
+    if (forwards) {
+        toaddr = env->xregs[rd] + env->xregs[rn];
+        fromaddr = env->xregs[rs] + env->xregs[rn];
+        copysize = -env->xregs[rn];
+    } else {
+        copysize = env->xregs[rn];
+        /* This toaddr and fromaddr point to the *last* byte to copy */
+        toaddr = env->xregs[rd] + copysize - 1;
+        fromaddr = env->xregs[rs] + copysize - 1;
+    }
+
+    if (!mte_checks_needed(fromaddr, rdesc)) {
+        rdesc = 0;
+    }
+    if (!mte_checks_needed(toaddr, wdesc)) {
+        wdesc = 0;
+    }
+
+    /* Check the size; we don't want to have to do a check-for-interrupts */
+    if (copysize >= TARGET_PAGE_SIZE) {
+        raise_exception_ra(env, EXCP_UDEF, syndrome,
+                           mops_mismatch_exception_target_el(env), ra);
+    }
+
+    /* Do the actual memmove */
+    if (forwards) {
+        while (copysize > 0) {
+            step = copy_step(env, toaddr, fromaddr, copysize,
+                             wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr += step;
+            fromaddr += step;
+            copysize -= step;
+            env->xregs[rn] = -copysize;
+        }
+    } else {
+        while (copysize > 0) {
+            step = copy_step_rev(env, toaddr, fromaddr, copysize,
+                                 wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr -= step;
+            fromaddr -= step;
+            copysize -= step;
+            env->xregs[rn] = copysize;
+        }
+    }
+}
+
+void HELPER(cpye)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                  uint32_t rdesc)
+{
+    do_cpye(env, syndrome, wdesc, rdesc, true, GETPC());
+}
+
+void HELPER(cpyfe)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                   uint32_t rdesc)
+{
+    do_cpye(env, syndrome, wdesc, rdesc, false, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
 TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
 TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)
 
+typedef void CpyFn(TCGv_env, TCGv_i32, TCGv_i32, TCGv_i32);
+
+static bool do_CPY(DisasContext *s, arg_cpy *a, bool is_epilogue, CpyFn fn)
+{
+    int rmemidx, wmemidx;
+    uint32_t syndrome, rdesc = 0, wdesc = 0;
+    bool wunpriv = extract32(a->options, 0, 1);
+    bool runpriv = extract32(a->options, 1, 1);
+
+    /*
+     * UNPREDICTABLE cases: we choose to UNDEF, which allows
+     * us to pull this check before the CheckMOPSEnabled() test
+     * (which we do in the helper function)
+     */
+    if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
+        a->rd == 31 || a->rs == 31 || a->rn == 31) {
+        return false;
+    }
+
+    rmemidx = get_a64_user_mem_index(s, runpriv);
+    wmemidx = get_a64_user_mem_index(s, wunpriv);
+
+    /*
+     * We pass option_a == true, matching our implementation;
+     * we pass wrong_option == false: helper function may set that bit.
+     */
+    syndrome = syn_mop(false, false, a->options, is_epilogue,
+                       false, true, a->rd, a->rs, a->rn);
+
+    /* If we need to do MTE tag checking, assemble the descriptors */
+    if (s->mte_active[runpriv]) {
+        rdesc = FIELD_DP32(rdesc, MTEDESC, TBI, s->tbid);
+        rdesc = FIELD_DP32(rdesc, MTEDESC, TCMA, s->tcma);
+    }
+    if (s->mte_active[wunpriv]) {
+        wdesc = FIELD_DP32(wdesc, MTEDESC, TBI, s->tbid);
+        wdesc = FIELD_DP32(wdesc, MTEDESC, TCMA, s->tcma);
+        wdesc = FIELD_DP32(wdesc, MTEDESC, WRITE, true);
+    }
+    /* The helper function needs these parts of the descriptor regardless */
+    rdesc = FIELD_DP32(rdesc, MTEDESC, MIDX, rmemidx);
+    wdesc = FIELD_DP32(wdesc, MTEDESC, MIDX, wmemidx);
+
+    /*
+     * The helper needs the register numbers, but since they're in
+     * the syndrome anyway, we let it extract them from there rather
+     * than passing in an extra three integer arguments.
+     */
+    fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(wdesc),
+       tcg_constant_i32(rdesc));
+    return true;
+}
+
+TRANS_FEAT(CPYP, aa64_mops, do_CPY, a, false, gen_helper_cpyp)
+TRANS_FEAT(CPYM, aa64_mops, do_CPY, a, false, gen_helper_cpym)
+TRANS_FEAT(CPYE, aa64_mops, do_CPY, a, true, gen_helper_cpye)
+TRANS_FEAT(CPYFP, aa64_mops, do_CPY, a, false, gen_helper_cpyfp)
+TRANS_FEAT(CPYFM, aa64_mops, do_CPY, a, false, gen_helper_cpyfm)
+TRANS_FEAT(CPYFE, aa64_mops, do_CPY, a, true, gen_helper_cpyfe)
+
 typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
 
 static bool gen_rri(DisasContext *s, arg_rri_sf *a,
--
2.34.1
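A note on the "option A" register convention the helpers above implement: after CPYP, Xd and Xs already hold the *final* addresses and Xn holds the negated remaining size, so CPYM/CPYE only ever update Xn. This stand-alone sketch (written for this mail; `MopsRegs`, `cpyp_model` and `cpym_model` are invented names modelling only the forward-copy register bookkeeping, none of the MTE or paging machinery) shows the convention:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Xd, Xs, Xn as the CPY* insns see them */
typedef struct { uint64_t xd, xs; int64_t xn; } MopsRegs;

/* CPYP (prologue): convert (dest, src, size) into the option-A form */
static void cpyp_model(MopsRegs *r, uint64_t dest, uint64_t src, uint64_t size)
{
    r->xd = dest + size;       /* final destination address */
    r->xs = src + size;        /* final source address */
    r->xn = -(int64_t)size;    /* negative bytes-remaining counter */
}

/* CPYM/CPYE (main/epilogue): copy forwards until Xn reaches zero */
static void cpym_model(MopsRegs *r, uint8_t *mem)
{
    while (r->xn != 0) {
        /* current position is final address plus the (negative) counter */
        mem[r->xd + r->xn] = mem[r->xs + r->xn];
        r->xn++;
    }
}
```

Because only Xn changes during the copy, an interrupted CPYM can simply be restarted with the registers as-is, which is why the helpers write `env->xregs[rn]` back on every step before possibly exiting the cpu loop.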
Enable FEAT_MOPS on the AArch64 'max' CPU, and add it to
the list of features we implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-13-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst | 1 +
 linux-user/elfload.c          | 1 +
 target/arm/tcg/cpu64.c        | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_LSE (Large System Extensions)
 - FEAT_LSE2 (Large System Extensions v2)
 - FEAT_LVA (Large Virtual Address space)
+- FEAT_MOPS (Standardization of memory operations)
 - FEAT_MTE (Memory Tagging Extension)
 - FEAT_MTE2 (Memory Tagging Extension)
 - FEAT_MTE3 (MTE Asymmetric Fault Handling)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
     GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
     GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC);
+    GET_FEATURE_ID(aa64_mops, ARM_HWCAP2_A64_MOPS);
 
     return hwcaps;
 }
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
     cpu->isar.id_aa64isar1 = t;
 
     t = cpu->isar.id_aa64isar2;
+    t = FIELD_DP64(t, ID_AA64ISAR2, MOPS, 1);     /* FEAT_MOPS */
     t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1);      /* FEAT_HBC */
     cpu->isar.id_aa64isar2 = t;
 
--
2.34.1
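For readers unfamiliar with the FIELD_DP64 pattern used above: it deposits a value into a named bitfield of an ID register. A minimal stand-alone model (written for this mail; `deposit64` mirrors QEMU's bitops helper of the same name, and the bit positions used in the test — MOPS at [19:16], BC at [23:20] of ID_AA64ISAR2_EL1 — are taken from the Arm ARM, so treat them as illustrative rather than authoritative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Insert the low 'len' bits of 'field' into 'val' at bit 'start'.
 * Requires 0 < len < 64 and start + len <= 64.
 */
static uint64_t deposit64(uint64_t val, int start, int len, uint64_t field)
{
    uint64_t mask = (~0ULL >> (64 - len)) << start;
    return (val & ~mask) | ((field << start) & mask);
}
```

Guest software (and QEMU's own `cpu_isar_feature()` tests) then reads the field back with the matching shift-and-mask to decide whether the feature is present.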
Avoid a dynamic stack allocation in qjack_client_init(), by using
a g_autofree heap allocation instead.

(We stick with allocate + snprintf() because the JACK API requires
the name to be no more than its maximum size, so g_strdup_printf()
would require an extra truncation step.)

The codebase has very few VLAs, and if we can get rid of them all we
can make the compiler error on new additions. This is a defensive
measure against security bugs where an on-stack dynamic allocation
isn't correctly size-checked (e.g. CVE-2021-3527).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Message-id: 20230818155846.1651287-2-peter.maydell@linaro.org
---
 audio/jackaudio.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/audio/jackaudio.c b/audio/jackaudio.c
index XXXXXXX..XXXXXXX 100644
--- a/audio/jackaudio.c
+++ b/audio/jackaudio.c
@@ -XXX,XX +XXX,XX @@ static void qjack_client_connect_ports(QJackClient *c)
 static int qjack_client_init(QJackClient *c)
 {
     jack_status_t status;
-    char client_name[jack_client_name_size()];
+    int client_name_len = jack_client_name_size(); /* includes NUL */
+    g_autofree char *client_name = g_new(char, client_name_len);
     jack_options_t options = JackNullOption;
 
     if (c->state == QJACK_STATE_RUNNING) {
@@ -XXX,XX +XXX,XX @@ static int qjack_client_init(QJackClient *c)
 
     c->connect_ports = true;
 
-    snprintf(client_name, sizeof(client_name), "%s-%s",
+    snprintf(client_name, client_name_len, "%s-%s",
              c->out ? "out" : "in",
              c->opt->client_name ? c->opt->client_name : audio_application_name());
63
* MCSI2 Comm    fffb2000 - fffb27ff (not mapped on OMAP310)
64
* MCSI1 Bluetooth    fffb2800 - fffb2fff (not mapped on OMAP310)
65
* USB W2FC        fffb4000 - fffb47ff
66
diff --git a/hw/arm/omap2.c b/hw/arm/omap2.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/arm/omap2.c
69
+++ b/hw/arm/omap2.c
70
@@ -XXX,XX +XXX,XX @@ struct omap_mpu_state_s *omap2420_mpu_init(MemoryRegion *sdram,
71
omap_findclk(s, "func_96m_clk"),
72
omap_findclk(s, "core_l4_iclk"));
73
74
- /* All register mappings (includin those not currenlty implemented):
75
+ /* All register mappings (including those not currently implemented):
76
* SystemControlMod    48000000 - 48000fff
77
* SystemControlL4    48001000 - 48001fff
78
* 32kHz Timer Mod    48004000 - 48004fff
79
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/arm/virt-acpi-build.c
82
+++ b/hw/arm/virt-acpi-build.c
83
@@ -XXX,XX +XXX,XX @@ static void build_append_gicr(GArray *table_data, uint64_t base, uint32_t size)
84
build_append_int_noprefix(table_data, 0xE, 1); /* Type */
85
build_append_int_noprefix(table_data, 16, 1); /* Length */
86
build_append_int_noprefix(table_data, 0, 2); /* Reserved */
87
- /* Discovery Range Base Addres */
88
+ /* Discovery Range Base Address */
89
build_append_int_noprefix(table_data, base, 8);
90
build_append_int_noprefix(table_data, size, 4); /* Discovery Range Length */
91
}
92
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/hw/arm/virt.c
95
+++ b/hw/arm/virt.c
96
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
97
int pa_bits;
98
99
/*
100
- * Instanciate a temporary CPU object to find out about what
101
+ * Instantiate a temporary CPU object to find out about what
102
* we are about to deal with. Once this is done, get rid of
103
* the object.
104
*/
105
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/hw/arm/xlnx-versal-virt.c
108
+++ b/hw/arm/xlnx-versal-virt.c
109
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
110
fdt_add_clk_node(s, "/clk25", 25000000, s->phandle.clk_25Mhz);
111
112
/* Make the APU cpu address space visible to virtio and other
113
- * modules unaware of muliple address-spaces. */
114
+ * modules unaware of multiple address-spaces. */
115
memory_region_add_subregion_overlap(get_system_memory(),
116
0, &s->soc.fpd.apu.mr, 0);
117
118
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
119
index XXXXXXX..XXXXXXX 100644
120
--- a/hw/arm/Kconfig
121
+++ b/hw/arm/Kconfig
122
@@ -XXX,XX +XXX,XX @@ config OLIMEX_STM32_H405
123
config NSERIES
124
bool
125
select OMAP
126
- select TMP105 # tempature sensor
127
+ select TMP105 # temperature sensor
128
select BLIZZARD # LCD/TV controller
129
select ONENAND
130
select TSC210X # touchscreen/sensors/audio
131
--
45
--
132
2.34.1
46
2.34.1
133
47
134
48
1
From: Akihiko Odaki <akihiko.odaki@daynix.com>
1
Avoid a dynamic stack allocation in qjack_process(). Since this
2
function is a JACK process callback, we are not permitted to malloc()
3
here, so we allocate a working buffer in qjack_client_init() instead.
2
4
3
kvm_arm_init_debug() used to be called several times on an SMP system as
5
The codebase has very few VLAs, and if we can get rid of them all we
4
kvm_arch_init_vcpu() calls it. Move the call to kvm_arch_init() to make
6
can make the compiler error on new additions. This is a defensive
5
sure it will be called only once; otherwise it will overwrite pointers
7
measure against security bugs where an on-stack dynamic allocation
6
to memory allocated with the previous call and leak it.
8
isn't correctly size-checked (e.g. CVE-2021-3527).
7
9
8
Fixes: e4482ab7e3 ("target-arm: kvm - add support for HW assisted debug")
9
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
11
Message-id: 20230405153644.25300-1-akihiko.odaki@daynix.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
12
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
13
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
14
Message-id: 20230818155846.1651287-3-peter.maydell@linaro.org
14
---
15
---
15
target/arm/kvm_arm.h | 8 ++++++++
16
audio/jackaudio.c | 16 +++++++++++-----
16
target/arm/kvm.c | 2 ++
17
1 file changed, 11 insertions(+), 5 deletions(-)
17
target/arm/kvm64.c | 18 ++++--------------
18
3 files changed, 14 insertions(+), 14 deletions(-)
19
18
20
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
diff --git a/audio/jackaudio.c b/audio/jackaudio.c
21
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/kvm_arm.h
21
--- a/audio/jackaudio.c
23
+++ b/target/arm/kvm_arm.h
22
+++ b/audio/jackaudio.c
24
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ typedef struct QJackClient {
25
#define KVM_ARM_VGIC_V2 (1 << 0)
24
int buffersize;
26
#define KVM_ARM_VGIC_V3 (1 << 1)
25
jack_port_t **port;
27
26
QJackBuffer fifo;
28
+/**
29
+ * kvm_arm_init_debug() - initialize guest debug capabilities
30
+ * @s: KVMState
31
+ *
32
+ * Should be called only once before using guest debug capabilities.
33
+ */
34
+void kvm_arm_init_debug(KVMState *s);
35
+
27
+
36
/**
28
+ /* Used as workspace by qjack_process() */
37
* kvm_arm_vcpu_init:
29
+ float **process_buffers;
38
* @cs: CPUState
30
}
39
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
31
QJackClient;
40
index XXXXXXX..XXXXXXX 100644
32
41
--- a/target/arm/kvm.c
33
@@ -XXX,XX +XXX,XX @@ static int qjack_process(jack_nframes_t nframes, void *arg)
42
+++ b/target/arm/kvm.c
34
}
43
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
35
36
/* get the buffers for the ports */
37
- float *buffers[c->nchannels];
38
for (int i = 0; i < c->nchannels; ++i) {
39
- buffers[i] = jack_port_get_buffer(c->port[i], nframes);
40
+ c->process_buffers[i] = jack_port_get_buffer(c->port[i], nframes);
41
}
42
43
if (c->out) {
44
if (likely(c->enabled)) {
45
- qjack_buffer_read_l(&c->fifo, buffers, nframes);
46
+ qjack_buffer_read_l(&c->fifo, c->process_buffers, nframes);
47
} else {
48
for (int i = 0; i < c->nchannels; ++i) {
49
- memset(buffers[i], 0, nframes * sizeof(float));
50
+ memset(c->process_buffers[i], 0, nframes * sizeof(float));
51
}
52
}
53
} else {
54
if (likely(c->enabled)) {
55
- qjack_buffer_write_l(&c->fifo, buffers, nframes);
56
+ qjack_buffer_write_l(&c->fifo, c->process_buffers, nframes);
44
}
57
}
45
}
58
}
46
59
47
+ kvm_arm_init_debug(s);
60
@@ -XXX,XX +XXX,XX @@ static int qjack_client_init(QJackClient *c)
61
jack_get_client_name(c->client));
62
}
63
64
+ /* Allocate working buffer for process callback */
65
+ c->process_buffers = g_new(float *, c->nchannels);
48
+
66
+
49
return ret;
67
jack_set_process_callback(c->client, qjack_process , c);
50
}
68
jack_set_port_registration_callback(c->client, qjack_port_registration, c);
51
69
jack_set_xrun_callback(c->client, qjack_xrun, c);
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
70
@@ -XXX,XX +XXX,XX @@ static void qjack_client_fini_locked(QJackClient *c)
53
index XXXXXXX..XXXXXXX 100644
71
54
--- a/target/arm/kvm64.c
72
qjack_buffer_free(&c->fifo);
55
+++ b/target/arm/kvm64.c
73
g_free(c->port);
56
@@ -XXX,XX +XXX,XX @@ GArray *hw_breakpoints, *hw_watchpoints;
74
+ g_free(c->process_buffers);
57
#define get_hw_bp(i) (&g_array_index(hw_breakpoints, HWBreakpoint, i))
75
58
#define get_hw_wp(i) (&g_array_index(hw_watchpoints, HWWatchpoint, i))
76
c->state = QJACK_STATE_DISCONNECTED;
59
77
/* fallthrough */
60
-/**
61
- * kvm_arm_init_debug() - check for guest debug capabilities
62
- * @cs: CPUState
63
- *
64
- * kvm_check_extension returns the number of debug registers we have
65
- * or 0 if we have none.
66
- *
67
- */
68
-static void kvm_arm_init_debug(CPUState *cs)
69
+void kvm_arm_init_debug(KVMState *s)
70
{
71
- have_guest_debug = kvm_check_extension(cs->kvm_state,
72
+ have_guest_debug = kvm_check_extension(s,
73
KVM_CAP_SET_GUEST_DEBUG);
74
75
- max_hw_wps = kvm_check_extension(cs->kvm_state, KVM_CAP_GUEST_DEBUG_HW_WPS);
76
+ max_hw_wps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_WPS);
77
hw_watchpoints = g_array_sized_new(true, true,
78
sizeof(HWWatchpoint), max_hw_wps);
79
80
- max_hw_bps = kvm_check_extension(cs->kvm_state, KVM_CAP_GUEST_DEBUG_HW_BPS);
81
+ max_hw_bps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_BPS);
82
hw_breakpoints = g_array_sized_new(true, true,
83
sizeof(HWBreakpoint), max_hw_bps);
84
return;
85
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
86
}
87
cpu->mp_affinity = mpidr & ARM64_AFFINITY_MASK;
88
89
- kvm_arm_init_debug(cs);
90
-
91
/* Check whether user space can specify guest syndrome value */
92
kvm_arm_init_serror_injection(cs);
93
94
--
78
--
95
2.34.1
79
2.34.1
96
80
97
81
1
From: Guenter Roeck <linux@roeck-us.net>
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
2
3
On mcimx6ul-evk, the MDIO bus is connected to the second Ethernet
3
Armv8.1+ CPUs have the Virtual Host Extension (VHE), which added a non-secure
4
interface. Set fec1-phy-connected to false to reflect this.
4
EL2 virtual timer.
5
5
6
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
6
This change adds it to fulfil Arm BSA (Base System Architecture)
7
Message-id: 20230315145248.1639364-4-linux@roeck-us.net
7
requirements.
8
9
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
10
Message-id: 20230913140610.214893-2-marcin.juszkiewicz@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
hw/arm/mcimx6ul-evk.c | 2 ++
14
hw/arm/sbsa-ref.c | 2 ++
12
1 file changed, 2 insertions(+)
15
1 file changed, 2 insertions(+)
13
16
14
diff --git a/hw/arm/mcimx6ul-evk.c b/hw/arm/mcimx6ul-evk.c
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/mcimx6ul-evk.c
19
--- a/hw/arm/sbsa-ref.c
17
+++ b/hw/arm/mcimx6ul-evk.c
20
+++ b/hw/arm/sbsa-ref.c
18
@@ -XXX,XX +XXX,XX @@ static void mcimx6ul_evk_init(MachineState *machine)
21
@@ -XXX,XX +XXX,XX @@
19
object_property_add_child(OBJECT(machine), "soc", OBJECT(s));
22
#define ARCH_TIMER_S_EL1_IRQ 13
20
object_property_set_uint(OBJECT(s), "fec1-phy-num", 2, &error_fatal);
23
#define ARCH_TIMER_NS_EL1_IRQ 14
21
object_property_set_uint(OBJECT(s), "fec2-phy-num", 1, &error_fatal);
24
#define ARCH_TIMER_NS_EL2_IRQ 10
22
+ object_property_set_bool(OBJECT(s), "fec1-phy-connected", false,
25
+#define ARCH_TIMER_NS_EL2_VIRT_IRQ 12
23
+ &error_fatal);
26
24
qdev_realize(DEVICE(s), NULL, &error_fatal);
27
enum {
25
28
SBSA_FLASH,
26
memory_region_add_subregion(get_system_memory(), FSL_IMX6UL_MMDC_ADDR,
29
@@ -XXX,XX +XXX,XX @@ static void create_gic(SBSAMachineState *sms, MemoryRegion *mem)
30
[GTIMER_VIRT] = ARCH_TIMER_VIRT_IRQ,
31
[GTIMER_HYP] = ARCH_TIMER_NS_EL2_IRQ,
32
[GTIMER_SEC] = ARCH_TIMER_S_EL1_IRQ,
33
+ [GTIMER_HYPVIRT] = ARCH_TIMER_NS_EL2_VIRT_IRQ,
34
};
35
36
for (irq = 0; irq < ARRAY_SIZE(timer_irq); irq++) {
27
--
37
--
28
2.34.1
38
2.34.1
1
From: Strahinja Jankovic <strahinjapjankovic@gmail.com>
1
From: Viktor Prutyanov <viktor@daynix.com>
2
2
3
This patch adds the WDT to the Allwinner-A10 and Cubieboard.
3
The PE export name check introduced in d399d6b179 isn't reliable enough,
4
The WDT is added as an overlay to the Timer module memory map.
4
because a page with the export directory may not be present for some
5
reason. On the other hand, elf2dmp retrieves the PDB name in any case.
6
It can also be used to check that a PE image is the kernel image. So,
7
check the PDB name when searching for the Windows kernel image.
5
8
6
Signed-off-by: Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
9
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2165917
7
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
10
8
Message-id: 20230326202256.22980-3-strahinja.p.jankovic@gmail.com
11
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
12
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
13
Message-id: 20230915170153.10959-2-viktor@daynix.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
docs/system/arm/cubieboard.rst | 1 +
16
contrib/elf2dmp/main.c | 93 +++++++++++++++---------------------------
12
include/hw/arm/allwinner-a10.h | 2 ++
17
1 file changed, 33 insertions(+), 60 deletions(-)
13
hw/arm/allwinner-a10.c | 7 +++++++
14
hw/arm/Kconfig | 1 +
15
4 files changed, 11 insertions(+)
16
18
17
diff --git a/docs/system/arm/cubieboard.rst b/docs/system/arm/cubieboard.rst
19
diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/docs/system/arm/cubieboard.rst
21
--- a/contrib/elf2dmp/main.c
20
+++ b/docs/system/arm/cubieboard.rst
22
+++ b/contrib/elf2dmp/main.c
21
@@ -XXX,XX +XXX,XX @@ Emulated devices:
23
@@ -XXX,XX +XXX,XX @@ static int write_dump(struct pa_space *ps,
22
- USB controller
24
return fclose(dmp_file);
23
- SATA controller
24
- TWI (I2C) controller
25
+- Watchdog timer
26
diff --git a/include/hw/arm/allwinner-a10.h b/include/hw/arm/allwinner-a10.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/arm/allwinner-a10.h
29
+++ b/include/hw/arm/allwinner-a10.h
30
@@ -XXX,XX +XXX,XX @@
31
#include "hw/misc/allwinner-a10-ccm.h"
32
#include "hw/misc/allwinner-a10-dramc.h"
33
#include "hw/i2c/allwinner-i2c.h"
34
+#include "hw/watchdog/allwinner-wdt.h"
35
#include "sysemu/block-backend.h"
36
37
#include "target/arm/cpu.h"
38
@@ -XXX,XX +XXX,XX @@ struct AwA10State {
39
AwSdHostState mmc0;
40
AWI2CState i2c0;
41
AwRtcState rtc;
42
+ AwWdtState wdt;
43
MemoryRegion sram_a;
44
EHCISysBusState ehci[AW_A10_NUM_USB];
45
OHCISysBusState ohci[AW_A10_NUM_USB];
46
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/allwinner-a10.c
49
+++ b/hw/arm/allwinner-a10.c
50
@@ -XXX,XX +XXX,XX @@
51
#define AW_A10_EHCI_BASE 0x01c14000
52
#define AW_A10_OHCI_BASE 0x01c14400
53
#define AW_A10_SATA_BASE 0x01c18000
54
+#define AW_A10_WDT_BASE 0x01c20c90
55
#define AW_A10_RTC_BASE 0x01c20d00
56
#define AW_A10_I2C0_BASE 0x01c2ac00
57
58
@@ -XXX,XX +XXX,XX @@ static void aw_a10_init(Object *obj)
59
object_initialize_child(obj, "mmc0", &s->mmc0, TYPE_AW_SDHOST_SUN4I);
60
61
object_initialize_child(obj, "rtc", &s->rtc, TYPE_AW_RTC_SUN4I);
62
+
63
+ object_initialize_child(obj, "wdt", &s->wdt, TYPE_AW_WDT_SUN4I);
64
}
25
}
65
26
66
static void aw_a10_realize(DeviceState *dev, Error **errp)
27
-static bool pe_check_export_name(uint64_t base, void *start_addr,
67
@@ -XXX,XX +XXX,XX @@ static void aw_a10_realize(DeviceState *dev, Error **errp)
28
- struct va_space *vs)
68
sysbus_realize(SYS_BUS_DEVICE(&s->i2c0), &error_fatal);
29
-{
69
sysbus_mmio_map(SYS_BUS_DEVICE(&s->i2c0), 0, AW_A10_I2C0_BASE);
30
- IMAGE_EXPORT_DIRECTORY export_dir;
70
sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c0), 0, qdev_get_gpio_in(dev, 7));
31
- const char *pe_name;
71
+
32
-
72
+ /* WDT */
33
- if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_EXPORT_DIRECTORY,
73
+ sysbus_realize(SYS_BUS_DEVICE(&s->wdt), &error_fatal);
34
- &export_dir, sizeof(export_dir), vs)) {
74
+ sysbus_mmio_map_overlap(SYS_BUS_DEVICE(&s->wdt), 0, AW_A10_WDT_BASE, 1);
35
- return false;
36
- }
37
-
38
- pe_name = va_space_resolve(vs, base + export_dir.Name);
39
- if (!pe_name) {
40
- return false;
41
- }
42
-
43
- return !strcmp(pe_name, PE_NAME);
44
-}
45
-
46
-static int pe_get_pdb_symstore_hash(uint64_t base, void *start_addr,
47
- char *hash, struct va_space *vs)
48
+static bool pe_check_pdb_name(uint64_t base, void *start_addr,
49
+ struct va_space *vs, OMFSignatureRSDS *rsds)
50
{
51
const char sign_rsds[4] = "RSDS";
52
IMAGE_DEBUG_DIRECTORY debug_dir;
53
- OMFSignatureRSDS rsds;
54
- char *pdb_name;
55
- size_t pdb_name_sz;
56
- size_t i;
57
+ char pdb_name[sizeof(PDB_NAME)];
58
59
if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_DEBUG_DIRECTORY,
60
&debug_dir, sizeof(debug_dir), vs)) {
61
eprintf("Failed to get Debug Directory\n");
62
- return 1;
63
+ return false;
64
}
65
66
if (debug_dir.Type != IMAGE_DEBUG_TYPE_CODEVIEW) {
67
- return 1;
68
+ eprintf("Debug Directory type is not CodeView\n");
69
+ return false;
70
}
71
72
if (va_space_rw(vs,
73
base + debug_dir.AddressOfRawData,
74
- &rsds, sizeof(rsds), 0)) {
75
- return 1;
76
+ rsds, sizeof(*rsds), 0)) {
77
+ eprintf("Failed to resolve OMFSignatureRSDS\n");
78
+ return false;
79
}
80
81
- printf("CodeView signature is \'%.4s\'\n", rsds.Signature);
82
-
83
- if (memcmp(&rsds.Signature, sign_rsds, sizeof(sign_rsds))) {
84
- return 1;
85
+ if (memcmp(&rsds->Signature, sign_rsds, sizeof(sign_rsds))) {
86
+ eprintf("CodeView signature is \'%.4s\', \'%s\' expected\n",
87
+ rsds->Signature, sign_rsds);
88
+ return false;
89
}
90
91
- pdb_name_sz = debug_dir.SizeOfData - sizeof(rsds);
92
- pdb_name = malloc(pdb_name_sz);
93
- if (!pdb_name) {
94
- return 1;
95
+ if (debug_dir.SizeOfData - sizeof(*rsds) != sizeof(PDB_NAME)) {
96
+ eprintf("PDB name size doesn't match\n");
97
+ return false;
98
}
99
100
if (va_space_rw(vs, base + debug_dir.AddressOfRawData +
101
- offsetof(OMFSignatureRSDS, name), pdb_name, pdb_name_sz, 0)) {
102
- free(pdb_name);
103
- return 1;
104
+ offsetof(OMFSignatureRSDS, name), pdb_name, sizeof(PDB_NAME),
105
+ 0)) {
106
+ eprintf("Failed to resolve PDB name\n");
107
+ return false;
108
}
109
110
printf("PDB name is \'%s\', \'%s\' expected\n", pdb_name, PDB_NAME);
111
112
- if (strcmp(pdb_name, PDB_NAME)) {
113
- eprintf("Unexpected PDB name, it seems the kernel isn't found\n");
114
- free(pdb_name);
115
- return 1;
116
- }
117
+ return !strcmp(pdb_name, PDB_NAME);
118
+}
119
120
- free(pdb_name);
121
-
122
- sprintf(hash, "%.08x%.04x%.04x%.02x%.02x", rsds.guid.a, rsds.guid.b,
123
- rsds.guid.c, rsds.guid.d[0], rsds.guid.d[1]);
124
+static void pe_get_pdb_symstore_hash(OMFSignatureRSDS *rsds, char *hash)
125
+{
126
+ sprintf(hash, "%.08x%.04x%.04x%.02x%.02x", rsds->guid.a, rsds->guid.b,
127
+ rsds->guid.c, rsds->guid.d[0], rsds->guid.d[1]);
128
hash += 20;
129
- for (i = 0; i < 6; i++, hash += 2) {
130
- sprintf(hash, "%.02x", rsds.guid.e[i]);
131
+ for (unsigned int i = 0; i < 6; i++, hash += 2) {
132
+ sprintf(hash, "%.02x", rsds->guid.e[i]);
133
}
134
135
- sprintf(hash, "%.01x", rsds.age);
136
-
137
- return 0;
138
+ sprintf(hash, "%.01x", rsds->age);
75
}
139
}
76
140
77
static void aw_a10_class_init(ObjectClass *oc, void *data)
141
int main(int argc, char *argv[])
78
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
142
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
79
index XXXXXXX..XXXXXXX 100644
143
KDDEBUGGER_DATA64 *kdbg;
80
--- a/hw/arm/Kconfig
144
uint64_t KdVersionBlock;
81
+++ b/hw/arm/Kconfig
145
bool kernel_found = false;
82
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10
146
+ OMFSignatureRSDS rsds;
83
select ALLWINNER_A10_PIC
147
84
select ALLWINNER_A10_CCM
148
if (argc != 3) {
85
select ALLWINNER_A10_DRAMC
149
eprintf("usage:\n\t%s elf_file dmp_file\n", argv[0]);
86
+ select ALLWINNER_WDT
150
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
87
select ALLWINNER_EMAC
151
}
88
select ALLWINNER_I2C
152
89
select AXP209_PMU
153
if (*(uint16_t *)nt_start_addr == 0x5a4d) { /* MZ */
154
- if (pe_check_export_name(KernBase, nt_start_addr, &vs)) {
155
+ printf("Checking candidate KernBase = 0x%016"PRIx64"\n", KernBase);
156
+ if (pe_check_pdb_name(KernBase, nt_start_addr, &vs, &rsds)) {
157
kernel_found = true;
158
break;
159
}
160
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
161
printf("KernBase = 0x%016"PRIx64", signature is \'%.2s\'\n", KernBase,
162
(char *)nt_start_addr);
163
164
- if (pe_get_pdb_symstore_hash(KernBase, nt_start_addr, pdb_hash, &vs)) {
165
- eprintf("Failed to get PDB symbol store hash\n");
166
- err = 1;
167
- goto out_ps;
168
- }
169
+ pe_get_pdb_symstore_hash(&rsds, pdb_hash);
170
171
sprintf(pdb_url, "%s%s/%s/%s", SYM_URL_BASE, PDB_NAME, pdb_hash, PDB_NAME);
172
printf("PDB URL is %s\n", pdb_url);
90
--
173
--
91
2.34.1
174
2.34.1
1
From: Strahinja Jankovic <strahinjapjankovic@gmail.com>
1
From: Viktor Prutyanov <viktor@daynix.com>
2
2
3
This patch adds basic support for the Allwinner WDT.
3
Physical memory ranges may not be aligned to the page size in the QEMU ELF, but
4
Both sun4i and sun6i variants are supported.
4
DMP can only contain page-aligned runs. So, align them.
5
However, interrupt generation is not supported, so the WDT can be used only to trigger a system reset.
6
5
7
Signed-off-by: Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
6
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
8
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
7
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
9
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
8
Message-id: 20230915170153.10959-3-viktor@daynix.com
10
Message-id: 20230326202256.22980-2-strahinja.p.jankovic@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
include/hw/watchdog/allwinner-wdt.h | 123 ++++++++
11
contrib/elf2dmp/addrspace.h | 1 +
14
hw/watchdog/allwinner-wdt.c | 416 ++++++++++++++++++++++++++++
12
contrib/elf2dmp/addrspace.c | 31 +++++++++++++++++++++++++++++--
15
hw/watchdog/Kconfig | 4 +
13
contrib/elf2dmp/main.c | 5 +++--
16
hw/watchdog/meson.build | 1 +
14
3 files changed, 33 insertions(+), 4 deletions(-)
17
hw/watchdog/trace-events | 7 +
18
5 files changed, 551 insertions(+)
19
create mode 100644 include/hw/watchdog/allwinner-wdt.h
20
create mode 100644 hw/watchdog/allwinner-wdt.c
21
15
22
diff --git a/include/hw/watchdog/allwinner-wdt.h b/include/hw/watchdog/allwinner-wdt.h
16
diff --git a/contrib/elf2dmp/addrspace.h b/contrib/elf2dmp/addrspace.h
23
new file mode 100644
17
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX
18
--- a/contrib/elf2dmp/addrspace.h
25
--- /dev/null
19
+++ b/contrib/elf2dmp/addrspace.h
26
+++ b/include/hw/watchdog/allwinner-wdt.h
27
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
28
+/*
21
29
+ * Allwinner Watchdog emulation
22
#define ELF2DMP_PAGE_BITS 12
30
+ *
23
#define ELF2DMP_PAGE_SIZE (1ULL << ELF2DMP_PAGE_BITS)
31
+ * Copyright (C) 2023 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
24
+#define ELF2DMP_PAGE_MASK (ELF2DMP_PAGE_SIZE - 1)
32
+ *
25
#define ELF2DMP_PFN_MASK (~(ELF2DMP_PAGE_SIZE - 1))
33
+ * This file is derived from Allwinner RTC,
26
34
+ * by Niek Linnenbank.
27
#define INVALID_PA UINT64_MAX
35
+ *
28
diff --git a/contrib/elf2dmp/addrspace.c b/contrib/elf2dmp/addrspace.c
36
+ * This program is free software: you can redistribute it and/or modify
29
index XXXXXXX..XXXXXXX 100644
37
+ * it under the terms of the GNU General Public License as published by
30
--- a/contrib/elf2dmp/addrspace.c
38
+ * the Free Software Foundation, either version 2 of the License, or
31
+++ b/contrib/elf2dmp/addrspace.c
39
+ * (at your option) any later version.
32
@@ -XXX,XX +XXX,XX @@ static struct pa_block *pa_space_find_block(struct pa_space *ps, uint64_t pa)
40
+ *
33
41
+ * This program is distributed in the hope that it will be useful,
34
for (i = 0; i < ps->block_nr; i++) {
42
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
35
if (ps->block[i].paddr <= pa &&
43
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
36
- pa <= ps->block[i].paddr + ps->block[i].size) {
44
+ * GNU General Public License for more details.
37
+ pa < ps->block[i].paddr + ps->block[i].size) {
45
+ *
38
return ps->block + i;
46
+ * You should have received a copy of the GNU General Public License
39
}
47
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
40
}
48
+ */
41
@@ -XXX,XX +XXX,XX @@ static uint8_t *pa_space_resolve(struct pa_space *ps, uint64_t pa)
42
return block->addr + (pa - block->paddr);
43
}
44
45
+static void pa_block_align(struct pa_block *b)
46
+{
47
+ uint64_t low_align = ((b->paddr - 1) | ELF2DMP_PAGE_MASK) + 1 - b->paddr;
48
+ uint64_t high_align = (b->paddr + b->size) & ELF2DMP_PAGE_MASK;
49
+
49
+
50
+#ifndef HW_WATCHDOG_ALLWINNER_WDT_H
50
+ if (low_align == 0 && high_align == 0) {
51
+#define HW_WATCHDOG_ALLWINNER_WDT_H
52
+
53
+#include "qom/object.h"
54
+#include "hw/ptimer.h"
55
+#include "hw/sysbus.h"
56
+
57
+/*
58
+ * This is a model of the Allwinner watchdog.
59
+ * Since watchdog registers belong to the timer module (and are shared with the
60
+ * RTC module), the interrupt line from watchdog is not handled right now.
61
+ * In QEMU, we just wire up the watchdog reset to watchdog_perform_action(),
62
+ * at least for the moment.
63
+ */
64
+
65
+#define TYPE_AW_WDT "allwinner-wdt"
66
+
67
+/** Allwinner WDT sun4i family (A10, A12), also sun7i (A20) */
68
+#define TYPE_AW_WDT_SUN4I TYPE_AW_WDT "-sun4i"
69
+
70
+/** Allwinner WDT sun6i family and newer (A31, H2+, H3, etc) */
71
+#define TYPE_AW_WDT_SUN6I TYPE_AW_WDT "-sun6i"
72
+
73
+/** Number of WDT registers */
74
+#define AW_WDT_REGS_NUM (5)
75
+
76
+OBJECT_DECLARE_TYPE(AwWdtState, AwWdtClass, AW_WDT)
77
+
78
+/**
79
+ * Allwinner WDT object instance state.
80
+ */
81
+struct AwWdtState {
82
+ /*< private >*/
83
+ SysBusDevice parent_obj;
84
+
85
+ /*< public >*/
86
+ MemoryRegion iomem;
87
+ struct ptimer_state *timer;
88
+
89
+ uint32_t regs[AW_WDT_REGS_NUM];
90
+};
91
+
92
+/**
93
+ * Allwinner WDT class-level struct.
94
+ *
95
+ * This struct is filled by each sunxi device specific code
96
+ * such that the generic code can use this struct to support
97
+ * all devices.
98
+ */
99
+struct AwWdtClass {
100
+ /*< private >*/
101
+ SysBusDeviceClass parent_class;
102
+ /*< public >*/
103
+
104
+ /** Defines device specific register map */
105
+ const uint8_t *regmap;
106
+
107
+ /** Size of the regmap in bytes */
108
+ size_t regmap_size;
109
+
110
+ /**
111
+ * Read device specific register
112
+ *
113
+ * @offset: register offset to read
114
+ * @return true if register read successful, false otherwise
115
+ */
116
+ bool (*read)(AwWdtState *s, uint32_t offset);
117
+
118
+ /**
119
+ * Write device specific register
120
+ *
121
+ * @offset: register offset to write
122
+ * @data: value to set in register
123
+ * @return true if register write successful, false otherwise
124
+ */
125
+ bool (*write)(AwWdtState *s, uint32_t offset, uint32_t data);
126
+
127
+ /**
128
+ * Check if watchdog can generate system reset
129
+ *
130
+ * @return true if watchdog can generate system reset
131
+ */
132
+ bool (*can_reset_system)(AwWdtState *s);
133
+
134
+ /**
135
+ * Check if provided key is valid
136
+ *
137
+ * @value: value written to register
138
+ * @return true if key is valid, false otherwise
139
+ */
140
+ bool (*is_key_valid)(AwWdtState *s, uint32_t val);
141
+
142
+ /**
143
+ * Get current INTV_VALUE setting
144
+ *
145
+ * @return current INTV_VALUE (0-15)
146
+ */
147
+ uint8_t (*get_intv_value)(AwWdtState *s);
148
+};
149
+
150
+#endif /* HW_WATCHDOG_ALLWINNER_WDT_H */
151
diff --git a/hw/watchdog/allwinner-wdt.c b/hw/watchdog/allwinner-wdt.c
152
new file mode 100644
153
index XXXXXXX..XXXXXXX
154
--- /dev/null
155
+++ b/hw/watchdog/allwinner-wdt.c
156
@@ -XXX,XX +XXX,XX @@
157
+/*
158
+ * Allwinner Watchdog emulation
159
+ *
160
+ * Copyright (C) 2023 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
161
+ *
162
+ * This file is derived from Allwinner RTC,
163
+ * by Niek Linnenbank.
164
+ *
165
+ * This program is free software: you can redistribute it and/or modify
166
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qemu/units.h"
+#include "qemu/module.h"
+#include "trace.h"
+#include "hw/sysbus.h"
+#include "hw/registerfields.h"
+#include "hw/watchdog/allwinner-wdt.h"
+#include "sysemu/watchdog.h"
+#include "migration/vmstate.h"
+
+/* WDT registers */
+enum {
+ REG_IRQ_EN = 0, /* Watchdog interrupt enable */
+ REG_IRQ_STA, /* Watchdog interrupt status */
+ REG_CTRL, /* Watchdog control register */
+ REG_CFG, /* Watchdog configuration register */
+ REG_MODE, /* Watchdog mode register */
+};
+
+/* Universal WDT register flags */
+#define WDT_RESTART_MASK (1 << 0)
+#define WDT_EN_MASK (1 << 0)
+
+/* sun4i specific WDT register flags */
+#define RST_EN_SUN4I_MASK (1 << 1)
+#define INTV_VALUE_SUN4I_SHIFT (3)
+#define INTV_VALUE_SUN4I_MASK (0xfu << INTV_VALUE_SUN4I_SHIFT)
+
+/* sun6i specific WDT register flags */
+#define RST_EN_SUN6I_MASK (1 << 0)
+#define KEY_FIELD_SUN6I_SHIFT (1)
+#define KEY_FIELD_SUN6I_MASK (0xfffu << KEY_FIELD_SUN6I_SHIFT)
+#define KEY_FIELD_SUN6I (0xA57u)
+#define INTV_VALUE_SUN6I_SHIFT (4)
+#define INTV_VALUE_SUN6I_MASK (0xfu << INTV_VALUE_SUN6I_SHIFT)
+
+/* Map of INTV_VALUE to 0.5s units. */
+static const uint8_t allwinner_wdt_count_map[] = {
+ 1,
+ 2,
+ 4,
+ 6,
+ 8,
+ 10,
+ 12,
+ 16,
+ 20,
+ 24,
+ 28,
+ 32
+};
+
+/* WDT sun4i register map (offset to name) */
+const uint8_t allwinner_wdt_sun4i_regmap[] = {
+ [0x0000] = REG_CTRL,
+ [0x0004] = REG_MODE,
+};
+
+/* WDT sun6i register map (offset to name) */
+const uint8_t allwinner_wdt_sun6i_regmap[] = {
+ [0x0000] = REG_IRQ_EN,
+ [0x0004] = REG_IRQ_STA,
+ [0x0010] = REG_CTRL,
+ [0x0014] = REG_CFG,
+ [0x0018] = REG_MODE,
+};
+
+static bool allwinner_wdt_sun4i_read(AwWdtState *s, uint32_t offset)
+{
+ /* no sun4i specific registers currently implemented */
+ return false;
+}
+
+static bool allwinner_wdt_sun4i_write(AwWdtState *s, uint32_t offset,
+ uint32_t data)
+{
+ /* no sun4i specific registers currently implemented */
+ return false;
+}
+
+static bool allwinner_wdt_sun4i_can_reset_system(AwWdtState *s)
+{
+ if (s->regs[REG_MODE] & RST_EN_SUN4I_MASK) {
+ return true;
+ } else {
+ return false;
+ }
+}
+
+static bool allwinner_wdt_sun4i_is_key_valid(AwWdtState *s, uint32_t val)
+{
+ /* sun4i has no key */
+ return true;
+}
+
+static uint8_t allwinner_wdt_sun4i_get_intv_value(AwWdtState *s)
+{
+ return ((s->regs[REG_MODE] & INTV_VALUE_SUN4I_MASK) >>
+ INTV_VALUE_SUN4I_SHIFT);
+}
+
+static bool allwinner_wdt_sun6i_read(AwWdtState *s, uint32_t offset)
+{
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+
+ switch (c->regmap[offset]) {
+ case REG_IRQ_EN:
+ case REG_IRQ_STA:
+ case REG_CFG:
+ return true;
+ default:
+ break;
+ }
+ return false;
+}
+
+static bool allwinner_wdt_sun6i_write(AwWdtState *s, uint32_t offset,
+ uint32_t data)
+{
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+
+ switch (c->regmap[offset]) {
+ case REG_IRQ_EN:
+ case REG_IRQ_STA:
+ case REG_CFG:
+ return true;
+ default:
+ break;
+ }
+ return false;
+}
+
+static bool allwinner_wdt_sun6i_can_reset_system(AwWdtState *s)
+{
+ if (s->regs[REG_CFG] & RST_EN_SUN6I_MASK) {
+ return true;
+ } else {
+ return false;
+ }
+}
+
+static bool allwinner_wdt_sun6i_is_key_valid(AwWdtState *s, uint32_t val)
+{
+ uint16_t key = (val & KEY_FIELD_SUN6I_MASK) >> KEY_FIELD_SUN6I_SHIFT;
+ return (key == KEY_FIELD_SUN6I);
+}
+
+static uint8_t allwinner_wdt_sun6i_get_intv_value(AwWdtState *s)
+{
+ return ((s->regs[REG_MODE] & INTV_VALUE_SUN6I_MASK) >>
+ INTV_VALUE_SUN6I_SHIFT);
+}
+
+static void allwinner_wdt_update_timer(AwWdtState *s)
+{
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+ uint8_t count = c->get_intv_value(s);
+
+ ptimer_transaction_begin(s->timer);
+ ptimer_stop(s->timer);
+
+ /* Use map to convert. */
+ if (count < sizeof(allwinner_wdt_count_map)) {
+ ptimer_set_count(s->timer, allwinner_wdt_count_map[count]);
+ } else {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: incorrect INTV_VALUE 0x%02x\n",
+ __func__, count);
+ }
+
+ ptimer_run(s->timer, 1);
+ ptimer_transaction_commit(s->timer);
+
+ trace_allwinner_wdt_update_timer(count);
+}
+
+static uint64_t allwinner_wdt_read(void *opaque, hwaddr offset,
+ unsigned size)
+{
+ AwWdtState *s = AW_WDT(opaque);
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+ uint64_t r;
+
+ if (offset >= c->regmap_size) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ return 0;
+ }
+
+ switch (c->regmap[offset]) {
+ case REG_CTRL:
+ case REG_MODE:
+ r = s->regs[c->regmap[offset]];
+ break;
+ default:
+ if (!c->read(s, offset)) {
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented register 0x%04x\n",
+ __func__, (uint32_t)offset);
+ return 0;
+ }
+ r = s->regs[c->regmap[offset]];
+ break;
+ }
+
+ trace_allwinner_wdt_read(offset, r, size);
+
+ return r;
+}
+
+static void allwinner_wdt_write(void *opaque, hwaddr offset,
+ uint64_t val, unsigned size)
+{
+ AwWdtState *s = AW_WDT(opaque);
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+ uint32_t old_val;
+
+ if (offset >= c->regmap_size) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ return;
+ }
+
+ trace_allwinner_wdt_write(offset, val, size);
+
+ switch (c->regmap[offset]) {
+ case REG_CTRL:
+ if (c->is_key_valid(s, val)) {
+ if (val & WDT_RESTART_MASK) {
+ /* Kick timer */
+ allwinner_wdt_update_timer(s);
+ }
+ }
+ break;
+ case REG_MODE:
+ old_val = s->regs[REG_MODE];
+ s->regs[REG_MODE] = (uint32_t)val;
+
+ /* Check for rising edge on WDOG_MODE_EN */
+ if ((s->regs[REG_MODE] & ~old_val) & WDT_EN_MASK) {
+ allwinner_wdt_update_timer(s);
+ }
+ break;
+ default:
+ if (!c->write(s, offset, val)) {
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented register 0x%04x\n",
+ __func__, (uint32_t)offset);
+ }
+ s->regs[c->regmap[offset]] = (uint32_t)val;
+ break;
+ }
+}
+
+static const MemoryRegionOps allwinner_wdt_ops = {
+ .read = allwinner_wdt_read,
+ .write = allwinner_wdt_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {
+ .min_access_size = 4,
+ .max_access_size = 4,
+ },
+ .impl.min_access_size = 4,
+};
+
+static void allwinner_wdt_expired(void *opaque)
+{
+ AwWdtState *s = AW_WDT(opaque);
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+
+ bool enabled = s->regs[REG_MODE] & WDT_EN_MASK;
+ bool reset_enabled = c->can_reset_system(s);
+
+ trace_allwinner_wdt_expired(enabled, reset_enabled);
+
+ /* Perform watchdog action if watchdog is enabled and can trigger reset */
+ if (enabled && reset_enabled) {
+ watchdog_perform_action();
+ }
+}
+
+static void allwinner_wdt_reset_enter(Object *obj, ResetType type)
+{
+ AwWdtState *s = AW_WDT(obj);
+
+ trace_allwinner_wdt_reset_enter();
+
+ /* Clear registers */
+ memset(s->regs, 0, sizeof(s->regs));
+}
+
+static const VMStateDescription allwinner_wdt_vmstate = {
+ .name = "allwinner-wdt",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_PTIMER(timer, AwWdtState),
+ VMSTATE_UINT32_ARRAY(regs, AwWdtState, AW_WDT_REGS_NUM),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static void allwinner_wdt_init(Object *obj)
+{
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+ AwWdtState *s = AW_WDT(obj);
+ const AwWdtClass *c = AW_WDT_GET_CLASS(s);
+
+ /* Memory mapping */
+ memory_region_init_io(&s->iomem, OBJECT(s), &allwinner_wdt_ops, s,
+ TYPE_AW_WDT, c->regmap_size * 4);
+ sysbus_init_mmio(sbd, &s->iomem);
+}
+
+static void allwinner_wdt_realize(DeviceState *dev, Error **errp)
+{
+ AwWdtState *s = AW_WDT(dev);
+
+ s->timer = ptimer_init(allwinner_wdt_expired, s,
+ PTIMER_POLICY_NO_IMMEDIATE_TRIGGER |
+ PTIMER_POLICY_NO_IMMEDIATE_RELOAD |
+ PTIMER_POLICY_NO_COUNTER_ROUND_DOWN);
+
+ ptimer_transaction_begin(s->timer);
+ /* Set to 2Hz (0.5s period); other periods are multiples of 0.5s. */
+ ptimer_set_freq(s->timer, 2);
+ ptimer_set_limit(s->timer, 0xff, 1);
+ ptimer_transaction_commit(s->timer);
+}
+
+static void allwinner_wdt_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
+
+ rc->phases.enter = allwinner_wdt_reset_enter;
+ dc->realize = allwinner_wdt_realize;
+ dc->vmsd = &allwinner_wdt_vmstate;
+}
+
+static void allwinner_wdt_sun4i_class_init(ObjectClass *klass, void *data)
+{
+ AwWdtClass *awc = AW_WDT_CLASS(klass);
+
+ awc->regmap = allwinner_wdt_sun4i_regmap;
+ awc->regmap_size = sizeof(allwinner_wdt_sun4i_regmap);
+ awc->read = allwinner_wdt_sun4i_read;
+ awc->write = allwinner_wdt_sun4i_write;
+ awc->can_reset_system = allwinner_wdt_sun4i_can_reset_system;
+ awc->is_key_valid = allwinner_wdt_sun4i_is_key_valid;
+ awc->get_intv_value = allwinner_wdt_sun4i_get_intv_value;
+}
+
+static void allwinner_wdt_sun6i_class_init(ObjectClass *klass, void *data)
+{
+ AwWdtClass *awc = AW_WDT_CLASS(klass);
+
+ awc->regmap = allwinner_wdt_sun6i_regmap;
+ awc->regmap_size = sizeof(allwinner_wdt_sun6i_regmap);
+ awc->read = allwinner_wdt_sun6i_read;
+ awc->write = allwinner_wdt_sun6i_write;
+ awc->can_reset_system = allwinner_wdt_sun6i_can_reset_system;
+ awc->is_key_valid = allwinner_wdt_sun6i_is_key_valid;
+ awc->get_intv_value = allwinner_wdt_sun6i_get_intv_value;
+}
+
+static const TypeInfo allwinner_wdt_info = {
+ .name = TYPE_AW_WDT,
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_init = allwinner_wdt_init,
+ .instance_size = sizeof(AwWdtState),
+ .class_init = allwinner_wdt_class_init,
+ .class_size = sizeof(AwWdtClass),
+ .abstract = true,
+};
+
+static const TypeInfo allwinner_wdt_sun4i_info = {
+ .name = TYPE_AW_WDT_SUN4I,
+ .parent = TYPE_AW_WDT,
+ .class_init = allwinner_wdt_sun4i_class_init,
+};
+
+static const TypeInfo allwinner_wdt_sun6i_info = {
+ .name = TYPE_AW_WDT_SUN6I,
+ .parent = TYPE_AW_WDT,
+ .class_init = allwinner_wdt_sun6i_class_init,
+};
+
+static void allwinner_wdt_register(void)
+{
+ type_register_static(&allwinner_wdt_info);
+ type_register_static(&allwinner_wdt_sun4i_info);
+ type_register_static(&allwinner_wdt_sun6i_info);
+}
+
+type_init(allwinner_wdt_register)
diff --git a/hw/watchdog/Kconfig b/hw/watchdog/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/Kconfig
+++ b/hw/watchdog/Kconfig
@@ -XXX,XX +XXX,XX @@ config WDT_IMX2

config WDT_SBSA
bool
+
+config ALLWINNER_WDT
+ bool
+ select PTIMER
diff --git a/hw/watchdog/meson.build b/hw/watchdog/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/meson.build
+++ b/hw/watchdog/meson.build
@@ -XXX,XX +XXX,XX @@
softmmu_ss.add(files('watchdog.c'))
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_WDT', if_true: files('allwinner-wdt.c'))
softmmu_ss.add(when: 'CONFIG_CMSDK_APB_WATCHDOG', if_true: files('cmsdk-apb-watchdog.c'))
softmmu_ss.add(when: 'CONFIG_WDT_IB6300ESB', if_true: files('wdt_i6300esb.c'))
softmmu_ss.add(when: 'CONFIG_WDT_IB700', if_true: files('wdt_ib700.c'))
diff --git a/hw/watchdog/trace-events b/hw/watchdog/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/trace-events
+++ b/hw/watchdog/trace-events
@@ -XXX,XX +XXX,XX @@
# See docs/devel/tracing.rst for syntax documentation.

+# allwinner-wdt.c
+allwinner_wdt_read(uint64_t offset, uint64_t data, unsigned size) "Allwinner watchdog read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
+allwinner_wdt_write(uint64_t offset, uint64_t data, unsigned size) "Allwinner watchdog write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
+allwinner_wdt_reset_enter(void) "Allwinner watchdog: reset"
+allwinner_wdt_update_timer(uint8_t count) "Allwinner watchdog: count %" PRIu8
+allwinner_wdt_expired(bool enabled, bool reset_enabled) "Allwinner watchdog: enabled %u reset_enabled %u"
+
# cmsdk-apb-watchdog.c
cmsdk_apb_watchdog_read(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB watchdog read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
cmsdk_apb_watchdog_write(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB watchdog write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
--
2.34.1

+ return;
+ }
+
+ if (low_align + high_align < b->size) {
+ printf("Block 0x%"PRIx64"+:0x%"PRIx64" will be aligned to "
+ "0x%"PRIx64"+:0x%"PRIx64"\n", b->paddr, b->size,
+ b->paddr + low_align, b->size - low_align - high_align);
+ b->size -= low_align + high_align;
+ } else {
+ printf("Block 0x%"PRIx64"+:0x%"PRIx64" is too small to align\n",
+ b->paddr, b->size);
+ b->size = 0;
+ }
+
+ b->addr += low_align;
+ b->paddr += low_align;
+}
+
int pa_space_create(struct pa_space *ps, QEMU_Elf *qemu_elf)
{
Elf64_Half phdr_nr = elf_getphdrnum(qemu_elf->map);
@@ -XXX,XX +XXX,XX @@ int pa_space_create(struct pa_space *ps, QEMU_Elf *qemu_elf)
.paddr = phdr[i].p_paddr,
.size = phdr[i].p_filesz,
};
- block_i++;
+ pa_block_align(&ps->block[block_i]);
+ block_i = ps->block[block_i].size ? (block_i + 1) : block_i;
}
}

+ ps->block_nr = block_i;
+
return 0;
}

diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/main.c
+++ b/contrib/elf2dmp/main.c
@@ -XXX,XX +XXX,XX @@ static int write_dump(struct pa_space *ps,
for (i = 0; i < ps->block_nr; i++) {
struct pa_block *b = &ps->block[i];

- printf("Writing block #%zu/%zu to file...\n", i, ps->block_nr);
+ printf("Writing block #%zu/%zu of %"PRIu64" bytes to file...\n", i,
+ ps->block_nr, b->size);
if (fwrite(b->addr, b->size, 1, dmp_file) != 1) {
- eprintf("Failed to write dump header\n");
+ eprintf("Failed to write block\n");
fclose(dmp_file);
return 1;
}
--
2.34.1
From: Axel Heider <axel.heider@hensoldt.net>

Fix issue reported by Coverity.

Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
Message-id: 168070611775.20412.2883242077302841473-1@git.sr.ht
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/timer/imx_epit.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/imx_epit.c
+++ b/hw/timer/imx_epit.c
@@ -XXX,XX +XXX,XX @@ static void imx_epit_update_compare_timer(IMXEPITState *s)
* the compare value. Otherwise it may fire at most once in the
* current round.
*/
- bool is_oneshot = (limit >= s->cmp);
+ is_oneshot = (limit >= s->cmp);
if (counter >= s->cmp) {
/* The compare timer fires in the current round. */
counter -= s->cmp;
--
2.34.1

From: Viktor Prutyanov <viktor@daynix.com>

DMP supports 42 physical memory runs at most. So, merge adjacent
physical memory ranges from QEMU ELF when possible to minimize total
number of runs.

Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230915170153.10959-4-viktor@daynix.com
[PMM: fixed format string for printing size_t values]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
contrib/elf2dmp/main.c | 56 ++++++++++++++++++++++++++++++++------
1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/main.c
+++ b/contrib/elf2dmp/main.c
@@ -XXX,XX +XXX,XX @@
#define PE_NAME "ntoskrnl.exe"

#define INITIAL_MXCSR 0x1f80
+#define MAX_NUMBER_OF_RUNS 42

typedef struct idt_desc {
uint16_t offset1; /* offset bits 0..15 */
@@ -XXX,XX +XXX,XX @@ static int fix_dtb(struct va_space *vs, QEMU_Elf *qe)
return 1;
}

+static void try_merge_runs(struct pa_space *ps,
+ WinDumpPhyMemDesc64 *PhysicalMemoryBlock)
+{
+ unsigned int merge_cnt = 0, run_idx = 0;
+
+ PhysicalMemoryBlock->NumberOfRuns = 0;
+
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
+ struct pa_block *blk = ps->block + idx;
+ struct pa_block *next = blk + 1;
+
+ PhysicalMemoryBlock->NumberOfPages += blk->size / ELF2DMP_PAGE_SIZE;
+
+ if (idx + 1 != ps->block_nr && blk->paddr + blk->size == next->paddr) {
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
+ " merged\n", idx, blk->paddr, blk->size, merge_cnt);
+ merge_cnt++;
+ } else {
+ struct pa_block *first_merged = blk - merge_cnt;
+
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
+ " merged to 0x%"PRIx64"+:0x%"PRIx64" (run #%u)\n",
+ idx, blk->paddr, blk->size, merge_cnt, first_merged->paddr,
+ blk->paddr + blk->size - first_merged->paddr, run_idx);
+ PhysicalMemoryBlock->Run[run_idx] = (WinDumpPhyMemRun64) {
+ .BasePage = first_merged->paddr / ELF2DMP_PAGE_SIZE,
+ .PageCount = (blk->paddr + blk->size - first_merged->paddr) /
+ ELF2DMP_PAGE_SIZE,
+ };
+ PhysicalMemoryBlock->NumberOfRuns++;
+ run_idx++;
+ merge_cnt = 0;
+ }
+ }
+}
+
static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
struct va_space *vs, uint64_t KdDebuggerDataBlock,
KDDEBUGGER_DATA64 *kdbg, uint64_t KdVersionBlock, int nr_cpus)
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
KUSD_OFFSET_PRODUCT_TYPE);
DBGKD_GET_VERSION64 kvb;
WinDumpHeader64 h;
- size_t i;

QEMU_BUILD_BUG_ON(KUSD_OFFSET_SUITE_MASK >= ELF2DMP_PAGE_SIZE);
QEMU_BUILD_BUG_ON(KUSD_OFFSET_PRODUCT_TYPE >= ELF2DMP_PAGE_SIZE);
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
.RequiredDumpSpace = sizeof(h),
};

- for (i = 0; i < ps->block_nr; i++) {
- h.PhysicalMemoryBlock.NumberOfPages +=
- ps->block[i].size / ELF2DMP_PAGE_SIZE;
- h.PhysicalMemoryBlock.Run[i] = (WinDumpPhyMemRun64) {
- .BasePage = ps->block[i].paddr / ELF2DMP_PAGE_SIZE,
- .PageCount = ps->block[i].size / ELF2DMP_PAGE_SIZE,
- };
+ if (h.PhysicalMemoryBlock.NumberOfRuns <= MAX_NUMBER_OF_RUNS) {
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
+ h.PhysicalMemoryBlock.NumberOfPages +=
+ ps->block[idx].size / ELF2DMP_PAGE_SIZE;
+ h.PhysicalMemoryBlock.Run[idx] = (WinDumpPhyMemRun64) {
+ .BasePage = ps->block[idx].paddr / ELF2DMP_PAGE_SIZE,
+ .PageCount = ps->block[idx].size / ELF2DMP_PAGE_SIZE,
+ };
+ }
+ } else {
+ try_merge_runs(ps, &h.PhysicalMemoryBlock);
}

h.RequiredDumpSpace +=
--
2.34.1
From: Feng Jiang <jiangfeng@kylinos.cn>

One of the debug printfs in exynos4210_gcomp_find() will
access outside the 's->g_timer.reg.comp[]' array if there
was no active comparator and 'res' is -1. Add a conditional
to avoid this.

This doesn't happen in normal use because the debug printfs
are by default not compiled in.

Signed-off-by: Feng Jiang <jiangfeng@kylinos.cn>
Message-id: 20230404074506.112615-1-jiangfeng@kylinos.cn
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: Adjusted commit message to clarify that the overrun
only happens if you've enabled debug printfs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/timer/exynos4210_mct.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/hw/timer/exynos4210_mct.c b/hw/timer/exynos4210_mct.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/exynos4210_mct.c
+++ b/hw/timer/exynos4210_mct.c
@@ -XXX,XX +XXX,XX @@ static int32_t exynos4210_gcomp_find(Exynos4210MCTState *s)
res = min_comp_i;
}

- DPRINTF("found comparator %d: comp 0x%llx distance 0x%llx, gfrc 0x%llx\n",
- res,
- s->g_timer.reg.comp[res],
- distance_min,
- gfrc);
+ if (res >= 0) {
+ DPRINTF("found comparator %d: "
+ "comp 0x%llx distance 0x%llx, gfrc 0x%llx\n",
+ res,
+ s->g_timer.reg.comp[res],
+ distance_min,
+ gfrc);
+ }

return res;
}
--
2.34.1

From: Viktor Prutyanov <viktor@daynix.com>

Glib's g_mapped_file_new maps file with PROT_READ|PROT_WRITE and
MAP_PRIVATE. This leads to premature physical memory allocation of dump
file size on Linux hosts and may fail. On Linux, mapping the file with
MAP_NORESERVE limits the allocation by available memory.

Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230915170153.10959-5-viktor@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
contrib/elf2dmp/qemu_elf.h | 2 ++
contrib/elf2dmp/qemu_elf.c | 68 +++++++++++++++++++++++++++++++-------
2 files changed, 58 insertions(+), 12 deletions(-)

diff --git a/contrib/elf2dmp/qemu_elf.h b/contrib/elf2dmp/qemu_elf.h
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/qemu_elf.h
+++ b/contrib/elf2dmp/qemu_elf.h
@@ -XXX,XX +XXX,XX @@ typedef struct QEMUCPUState {
int is_system(QEMUCPUState *s);

typedef struct QEMU_Elf {
+#ifndef CONFIG_LINUX
GMappedFile *gmf;
+#endif
size_t size;
void *map;
QEMUCPUState **state;
diff --git a/contrib/elf2dmp/qemu_elf.c b/contrib/elf2dmp/qemu_elf.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/qemu_elf.c
+++ b/contrib/elf2dmp/qemu_elf.c
@@ -XXX,XX +XXX,XX @@ static bool check_ehdr(QEMU_Elf *qe)
return true;
}

-int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)
+static int QEMU_Elf_map(QEMU_Elf *qe, const char *filename)
{
+#ifdef CONFIG_LINUX
+ struct stat st;
+ int fd;
+
+ printf("Using Linux mmap\n");
+
+ fd = open(filename, O_RDONLY, 0);
+ if (fd == -1) {
+ eprintf("Failed to open ELF dump file \'%s\'\n", filename);
+ return 1;
+ }
+
+ if (fstat(fd, &st)) {
+ eprintf("Failed to get size of ELF dump file\n");
+ close(fd);
+ return 1;
+ }
+ qe->size = st.st_size;
+
+ qe->map = mmap(NULL, qe->size, PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_NORESERVE, fd, 0);
+ if (qe->map == MAP_FAILED) {
+ eprintf("Failed to map ELF file\n");
+ close(fd);
+ return 1;
+ }
+
+ close(fd);
+#else
GError *gerr = NULL;
- int err = 0;
+
+ printf("Using GLib mmap\n");

qe->gmf = g_mapped_file_new(filename, TRUE, &gerr);
if (gerr) {
@@ -XXX,XX +XXX,XX @@ int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)

qe->map = g_mapped_file_get_contents(qe->gmf);
qe->size = g_mapped_file_get_length(qe->gmf);
+#endif
+
+ return 0;
+}
+
+static void QEMU_Elf_unmap(QEMU_Elf *qe)
+{
+#ifdef CONFIG_LINUX
+ munmap(qe->map, qe->size);
+#else
+ g_mapped_file_unref(qe->gmf);
+#endif
+}
+
+int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)
+{
+ if (QEMU_Elf_map(qe, filename)) {
+ return 1;
+ }

if (!check_ehdr(qe)) {
eprintf("Input file has the wrong format\n");
- err = 1;
- goto out_unmap;
+ QEMU_Elf_unmap(qe);
+ return 1;
}

if (init_states(qe)) {
eprintf("Failed to extract QEMU CPU states\n");
- err = 1;
- goto out_unmap;
+ QEMU_Elf_unmap(qe);
+ return 1;
}

return 0;
-
-out_unmap:
- g_mapped_file_unref(qe->gmf);
-
- return err;
}

void QEMU_Elf_exit(QEMU_Elf *qe)
{
exit_states(qe);
- g_mapped_file_unref(qe->gmf);
+ QEMU_Elf_unmap(qe);
}
--
2.34.1
From: Strahinja Jankovic <strahinjapjankovic@gmail.com>

Cubieboard tests end with comment "reboot not functioning; omit test".
Fix this so reboot is done at the end of each test.

Signed-off-by: Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20230326202256.22980-5-strahinja.p.jankovic@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/avocado/boot_linux_console.py | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux_console.py
+++ b/tests/avocado/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_initrd(self):
'Allwinner sun4i/sun5i')
exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
'system-control@1c00000')
- # cubieboard's reboot is not functioning; omit reboot test.
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()

def test_arm_cubieboard_sata(self):
"""
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_sata(self):
'Allwinner sun4i/sun5i')
exec_command_and_wait_for_pattern(self, 'cat /proc/partitions',
'sda')
- # cubieboard's reboot is not functioning; omit reboot test.
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()

@skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
def test_arm_cubieboard_openwrt_22_03_2(self):
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_openwrt_22_03_2(self):

exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
'Allwinner sun4i/sun5i')
- # cubieboard's reboot is not functioning; omit reboot test.
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()

@skipUnless(os.getenv('AVOCADO_TIMEOUT_EXPECTED'), 'Test might timeout')
def test_arm_quanta_gsj(self):
--
2.34.1

From: Viktor Prutyanov <viktor@daynix.com>

PDB for the Windows 11 kernel has a slightly different structure compared
to previous versions. Since elf2dmp doesn't use the other fields, copy
only the 'segments' field from PDB_STREAM_INDEXES.

Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230915170153.10959-6-viktor@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
contrib/elf2dmp/pdb.h | 2 +-
contrib/elf2dmp/pdb.c | 15 ++++-----------
2 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/contrib/elf2dmp/pdb.h b/contrib/elf2dmp/pdb.h
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/pdb.h
+++ b/contrib/elf2dmp/pdb.h
@@ -XXX,XX +XXX,XX @@ struct pdb_reader {
} ds;
uint32_t file_used[1024];
PDB_SYMBOLS *symbols;
- PDB_STREAM_INDEXES sidx;
+ uint16_t segments;
uint8_t *modimage;
char *segs;
size_t segs_size;
diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/pdb.c
+++ b/contrib/elf2dmp/pdb.c
@@ -XXX,XX +XXX,XX @@ static void *pdb_ds_read_file(struct pdb_reader* r, uint32_t file_number)
static int pdb_init_segments(struct pdb_reader *r)
{
char *segs;
- unsigned stream_idx = r->sidx.segments;
+ unsigned stream_idx = r->segments;

segs = pdb_ds_read_file(r, stream_idx);
if (!segs) {
@@ -XXX,XX +XXX,XX @@ static int pdb_init_symbols(struct pdb_reader *r)
{
int err = 0;
PDB_SYMBOLS *symbols;
- PDB_STREAM_INDEXES *sidx = &r->sidx;
-
- memset(sidx, -1, sizeof(*sidx));

symbols = pdb_ds_read_file(r, 3);
if (!symbols) {
@@ -XXX,XX +XXX,XX @@ static int pdb_init_symbols(struct pdb_reader *r)

r->symbols = symbols;

- if (symbols->stream_index_size != sizeof(PDB_STREAM_INDEXES)) {
- err = 1;
- goto out_symbols;
- }
-
- memcpy(sidx, (const char *)symbols + sizeof(PDB_SYMBOLS) +
+ r->segments = *(uint16_t *)((const char *)symbols + sizeof(PDB_SYMBOLS) +
symbols->module_size + symbols->offset_size +
symbols->hash_size + symbols->srcmodule_size +
- symbols->pdbimport_size + symbols->unknown2_size, sizeof(*sidx));
+ symbols->pdbimport_size + symbols->unknown2_size +
+ offsetof(PDB_STREAM_INDEXES, segments));

/* Read global symbol table */
r->modimage = pdb_ds_read_file(r, symbols->gsym_file);
--
2.34.1