Hi; here's this week's arm pullreq.  Mostly this is my
work on FEAT_MOPS and FEAT_HBC, but there are some
other bits and pieces in there too, including a recent
set of elf2dmp patches.

thanks
-- PMM

The following changes since commit 55394dcbec8f0c29c30e792c102a0edd50a52bf4:

  Merge tag 'pull-loongarch-20230920' of https://gitlab.com/gaosong/qemu into staging (2023-09-20 13:56:18 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230921

for you to fetch changes up to 231f6a7d66254a58bedbee458591b780e0a507b1:

  elf2dmp: rework PDB_STREAM_INDEXES::segments obtaining (2023-09-21 16:13:54 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/m68k: Add URL to semihosting spec
 * docs/devel/loads-stores: Fix git grep regexes
 * hw/arm/boot: Set SCR_EL3.FGTEn when booting kernel
 * linux-user: Correct SME feature names reported in cpuinfo
 * linux-user: Add missing arm32 hwcaps
 * Don't skip MTE checks for LDRT/STRT at EL0
 * Implement FEAT_HBC
 * Implement FEAT_MOPS
 * audio/jackaudio: Avoid dynamic stack allocation
 * sbsa-ref: add non-secure EL2 virtual timer
 * elf2dmp: improve Win2022, Win11 and large dumps

----------------------------------------------------------------
Fabian Vogt (1):
      hw/arm/boot: Set SCR_EL3.FGTEn when booting kernel

Marcin Juszkiewicz (1):
      sbsa-ref: add non-secure EL2 virtual timer

Peter Maydell (23):
      target/m68k: Add URL to semihosting spec
      docs/devel/loads-stores: Fix git grep regexes
      linux-user/elfload.c: Correct SME feature names reported in cpuinfo
      linux-user/elfload.c: Add missing arm and arm64 hwcap values
      linux-user/elfload.c: Report previously missing arm32 hwcaps
      target/arm: Update AArch64 ID register field definitions
      target/arm: Update user-mode ID reg mask values
      target/arm: Implement FEAT_HBC
      target/arm: Remove unused allocation_tag_mem() argument
      target/arm: Don't skip MTE checks for LDRT/STRT at EL0
      target/arm: Implement FEAT_MOPS enable bits
      target/arm: Pass unpriv bool to get_a64_user_mem_index()
      target/arm: Define syndrome function for MOPS exceptions
      target/arm: New function allocation_tag_mem_probe()
      target/arm: Implement MTE tag-checking functions for FEAT_MOPS
      target/arm: Implement the SET* instructions
      target/arm: Define new TB flag for ATA0
      target/arm: Implement the SETG* instructions
      target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies
      target/arm: Implement the CPY* instructions
      target/arm: Enable FEAT_MOPS for CPU 'max'
      audio/jackaudio: Avoid dynamic stack allocation in qjack_client_init
      audio/jackaudio: Avoid dynamic stack allocation in qjack_process()

Viktor Prutyanov (5):
      elf2dmp: replace PE export name check with PDB name check
      elf2dmp: introduce physical block alignment
      elf2dmp: introduce merging of physical memory runs
      elf2dmp: use Linux mmap with MAP_NORESERVE when possible
      elf2dmp: rework PDB_STREAM_INDEXES::segments obtaining

 docs/devel/loads-stores.rst    |  40 +-
 docs/system/arm/emulation.rst  |   2 +
 contrib/elf2dmp/addrspace.h    |   1 +
 contrib/elf2dmp/pdb.h          |   2 +-
 contrib/elf2dmp/qemu_elf.h     |   2 +
 target/arm/cpu.h               |  35 ++
 target/arm/internals.h         |  55 +++
 target/arm/syndrome.h          |  12 +
 target/arm/tcg/helper-a64.h    |  14 +
 target/arm/tcg/translate.h     |   4 +-
 target/arm/tcg/a64.decode      |  38 +-
 audio/jackaudio.c              |  21 +-
 contrib/elf2dmp/addrspace.c    |  31 +-
 contrib/elf2dmp/main.c         | 154 ++++----
 contrib/elf2dmp/pdb.c          |  15 +-
 contrib/elf2dmp/qemu_elf.c     |  68 +++-
 hw/arm/boot.c                  |   4 +
 hw/arm/sbsa-ref.c              |   2 +
 linux-user/elfload.c           |  72 +++-
 target/arm/helper.c            |  39 +-
 target/arm/tcg/cpu64.c         |   5 +
 target/arm/tcg/helper-a64.c    | 878 +++++++++++++++++++++++++++++++++++++++++
 target/arm/tcg/hflags.c        |  21 +
 target/arm/tcg/mte_helper.c    | 281 +++++++++++--
 target/arm/tcg/translate-a64.c | 164 +++++++-
 target/m68k/m68k-semi.c        |   4 +
 tests/tcg/aarch64/sysregs.c    |   4 +-
 27 files changed, 1768 insertions(+), 200 deletions(-)

The spec for m68k semihosting is documented in the libgloss
sources.  Add a comment with the URL for it, as we already
have for nios2 semihosting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230801154451.3505492-1-peter.maydell@linaro.org
---
 target/m68k/m68k-semi.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/m68k/m68k-semi.c b/target/m68k/m68k-semi.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/m68k-semi.c
+++ b/target/m68k/m68k-semi.c
@@ -XXX,XX +XXX,XX @@
  *
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * The semihosting protocol implemented here is described in the
+ * libgloss sources:
+ * https://sourceware.org/git/?p=newlib-cygwin.git;a=blob;f=libgloss/m68k/m68k-semi.txt;hb=HEAD
  */
 
 #include "qemu/osdep.h"
-- 
2.34.1

The loads-and-stores documentation includes git grep regexes to find
occurrences of the various functions.  Some of these regexes have
errors, typically failing to escape the '?', '(' and ')' when they
should be metacharacters (since these are POSIX basic REs).  We also
weren't consistent about whether to have a ':' on the end of the
line introducing the list of regexes in each section.

Fix the errors.

The following shell rune will complain about any REs in the
file which don't have any matches in the codebase:
  for re in $(sed -ne 's/ - ``\(\\<.*\)``/\1/p' docs/devel/loads-stores.rst); do git grep -q "$re" || echo "no matches for re $re"; done
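
Illustration only, not part of the patch: plain grep uses the same
POSIX basic REs as git grep, so the escaping rule is easy to check from
a shell.  In a BRE it is \( \) and \? that group and mark "optional",
while bare ( ) ? just match themselves:

  $ echo ldq_le_p | grep 'ld[us]\?[bwlq]\(_[hbl]e\)\?_p'   # matches
  $ echo ldq_le_p | grep 'ld[us]?[bwlq](_[hbl]e)?_p'       # no match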

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230904161703.3996734-1-peter.maydell@linaro.org
---
 docs/devel/loads-stores.rst | 40 ++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ which stores ``val`` to ``ptr`` as an ``{endian}`` order value
 of size ``sz`` bytes.
 
 
-Regexes for git grep
+Regexes for git grep:
  - ``\<ld[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<st[bwlq]\(_[hbl]e\)\?_p\>``
  - ``\<st24\(_[hbl]e\)\?_p\>``
- - ``\<ldn_\([hbl]e\)?_p\>``
- - ``\<stn_\([hbl]e\)?_p\>``
+ - ``\<ldn_\([hbl]e\)\?_p\>``
+ - ``\<stn_\([hbl]e\)\?_p\>``
 
 ``cpu_{ld,st}*_mmu``
 ~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmu(env, ptr, val, oi, retaddr)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[bwlq](_[bl]e)\?_mmu\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmu\>``
+ - ``\<cpu_ld[bwlq]\(_[bl]e\)\?_mmu\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmu\>``
 
 
 ``cpu_{ld,st}*_mmuidx_ra``
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_mmuidx_ra\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmuidx_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
 
 ``cpu_{ld,st}*_data_ra``
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``
 - ``_le`` : little endian
 
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data_ra\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_data_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data_ra\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data_ra\>``
 
 ``cpu_{ld,st}*_data``
 ~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data(env, ptr, val)``
 - ``_be`` : big endian
 - ``_le`` : little endian
 
-Regexes for git grep
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data\>``
- - ``\<cpu_st[bwlq](_[bl]e)\?_data\+\>``
+Regexes for git grep:
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data\>``
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data\+\>``
 
 ``cpu_ld*_code``
 ~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``
 - ``l`` : 32 bits
 - ``q`` : 64 bits
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
 
 ``helper_{ld,st}*_mmu``
@@ -XXX,XX +XXX,XX @@ store: ``helper_{size}_mmu(env, addr, val, opindex, retaddr)``
 - ``l`` : 32 bits
 - ``q`` : 64 bits
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<helper_ld[us]\?[bwlq]_mmu\>``
 - ``\<helper_st[bwlq]_mmu\>``
 
@@ -XXX,XX +XXX,XX @@ succeeded using a MemTxResult return code.
 
 The ``_{endian}`` suffix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<address_space_\(read\|write\|rw\)\>``
 - ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
 - ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
@@ -XXX,XX +XXX,XX @@ Note that portions of the write which attempt to write data to a
 device will be silently ignored -- only real RAM and ROM will
 be written to.
 
-Regexes for git grep
+Regexes for git grep:
 - ``address_space_write_rom``
 
 ``{ld,st}*_phys``
@@ -XXX,XX +XXX,XX @@ device doing the access has no way to report such an error.
 
 The ``_{endian}_`` infix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
 
@@ -XXX,XX +XXX,XX @@ For new code they are better avoided:
 
 ``cpu_physical_memory_rw``
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
 
 ``cpu_memory_rw_debug``
@@ -XXX,XX +XXX,XX @@ make sure our existing code is doing things correctly.
 
 ``dma_memory_rw``
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<dma_memory_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_dma\>``
@@ -XXX,XX +XXX,XX @@ correct address space for that device.
 
 The ``_{endian}_`` infix is omitted for byte accesses.
 
-Regexes for git grep
+Regexes for git grep:
 - ``\<pci_dma_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``
-- 
2.34.1

From: Fabian Vogt <fvogt@suse.de>

Just like d7ef5e16a17c sets SCR_EL3.HXEn for FEAT_HCX, this commit
handles SCR_EL3.FGTEn for FEAT_FGT:

When we direct boot a kernel on a CPU which emulates EL3, we need to
set up the EL3 system registers as the Linux kernel documentation
specifies:
    https://www.kernel.org/doc/Documentation/arm64/booting.rst

> For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:
> - If EL3 is present and the kernel is entered at EL2:
>   - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

Cc: qemu-stable@nongnu.org
Signed-off-by: Fabian Vogt <fvogt@suse.de>
Message-id: 4831384.GXAFRqVoOG@linux-e202.suse.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
         if (cpu_isar_feature(aa64_hcx, cpu)) {
             env->cp15.scr_el3 |= SCR_HXEN;
         }
+        if (cpu_isar_feature(aa64_fgt, cpu)) {
+            env->cp15.scr_el3 |= SCR_FGTEN;
+        }
+
         /* AArch64 kernels never boot in secure mode */
         assert(!info->secure_boot);
         /* This hook is only supported for AArch32 currently:
-- 
2.34.1

Some of the names we use for CPU features in linux-user's dummy
/proc/cpuinfo don't match the strings in the real kernel in
arch/arm64/kernel/cpuinfo.c.  Specifically, the SME related
features have an underscore in the HWCAP_FOO define name,
but (like the SVE ones) they do not have an underscore in the
string in cpuinfo.  Correct the errors.

Fixes: a55b9e7226708 ("linux-user: Emulate /proc/cpuinfo on aarch64 and arm")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
     [__builtin_ctz(ARM_HWCAP2_A64_RPRES      )] = "rpres",
     [__builtin_ctz(ARM_HWCAP2_A64_MTE3       )] = "mte3",
     [__builtin_ctz(ARM_HWCAP2_A64_SME        )] = "sme",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "sme_i16i64",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "sme_f64f64",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32  )] = "sme_i8i32",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "sme_f16f32",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "sme_b16f32",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "sme_f32f32",
-    [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64   )] = "sme_fa64",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "smei16i64",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "smef64f64",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32  )] = "smei8i32",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "smef16f32",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
+    [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64   )] = "smefa64",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
-- 
2.34.1

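Illustration only, not part of the patch: the corrected spellings can be
seen in the emulated cpuinfo (this assumes an AArch64 cat binary is
available to run under the user-mode emulator):

  $ qemu-aarch64 -cpu max /bin/cat /proc/cpuinfo | grep -o 'smef64f64'
  smef64f64

i.e. the SME flags now read "sme smei16i64 smef64f64 smei8i32 smef16f32
smeb16f32 smef32f32 smefa64", matching the real kernel, rather than the
underscored variants.
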
Our lists of Arm 32 and 64 bit hwcap values have lagged behind
the Linux kernel.  Update them to include all the bits defined
as of upstream Linux git commit a48fa7efaf1161c1 (in the middle
of the kernel 6.6 dev cycle).

For 64-bit, we don't yet implement any of the features reported via
these hwcap bits.  For 32-bit we do in fact already implement them
all; we'll add the code to set them in a subsequent commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ enum
     ARM_HWCAP_ARM_VFPD32    = 1 << 19,
     ARM_HWCAP_ARM_LPAE      = 1 << 20,
     ARM_HWCAP_ARM_EVTSTRM   = 1 << 21,
+    ARM_HWCAP_ARM_FPHP      = 1 << 22,
+    ARM_HWCAP_ARM_ASIMDHP   = 1 << 23,
+    ARM_HWCAP_ARM_ASIMDDP   = 1 << 24,
+    ARM_HWCAP_ARM_ASIMDFHM  = 1 << 25,
+    ARM_HWCAP_ARM_ASIMDBF16 = 1 << 26,
+    ARM_HWCAP_ARM_I8MM      = 1 << 27,
 };
 
 enum {
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_HWCAP2_ARM_SHA1     = 1 << 2,
     ARM_HWCAP2_ARM_SHA2     = 1 << 3,
     ARM_HWCAP2_ARM_CRC32    = 1 << 4,
+    ARM_HWCAP2_ARM_SB       = 1 << 5,
+    ARM_HWCAP2_ARM_SSBS     = 1 << 6,
 };
 
 /* The commpage only exists for 32 bit kernels */
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap_str(uint32_t bit)
     [__builtin_ctz(ARM_HWCAP_ARM_VFPD32   )] = "vfpd32",
     [__builtin_ctz(ARM_HWCAP_ARM_LPAE     )] = "lpae",
     [__builtin_ctz(ARM_HWCAP_ARM_EVTSTRM  )] = "evtstrm",
+    [__builtin_ctz(ARM_HWCAP_ARM_FPHP     )] = "fphp",
+    [__builtin_ctz(ARM_HWCAP_ARM_ASIMDHP  )] = "asimdhp",
+    [__builtin_ctz(ARM_HWCAP_ARM_ASIMDDP  )] = "asimddp",
+    [__builtin_ctz(ARM_HWCAP_ARM_ASIMDFHM )] = "asimdfhm",
+    [__builtin_ctz(ARM_HWCAP_ARM_ASIMDBF16)] = "asimdbf16",
+    [__builtin_ctz(ARM_HWCAP_ARM_I8MM     )] = "i8mm",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
     [__builtin_ctz(ARM_HWCAP2_ARM_SHA1 )] = "sha1",
     [__builtin_ctz(ARM_HWCAP2_ARM_SHA2 )] = "sha2",
     [__builtin_ctz(ARM_HWCAP2_ARM_CRC32)] = "crc32",
+    [__builtin_ctz(ARM_HWCAP2_ARM_SB   )] = "sb",
+    [__builtin_ctz(ARM_HWCAP2_ARM_SSBS )] = "ssbs",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_HWCAP2_A64_SME_B16F32   = 1 << 28,
     ARM_HWCAP2_A64_SME_F32F32   = 1 << 29,
     ARM_HWCAP2_A64_SME_FA64     = 1 << 30,
+    ARM_HWCAP2_A64_WFXT         = 1ULL << 31,
+    ARM_HWCAP2_A64_EBF16        = 1ULL << 32,
+    ARM_HWCAP2_A64_SVE_EBF16    = 1ULL << 33,
+    ARM_HWCAP2_A64_CSSC         = 1ULL << 34,
+    ARM_HWCAP2_A64_RPRFM        = 1ULL << 35,
+    ARM_HWCAP2_A64_SVE2P1       = 1ULL << 36,
+    ARM_HWCAP2_A64_SME2         = 1ULL << 37,
+    ARM_HWCAP2_A64_SME2P1       = 1ULL << 38,
+    ARM_HWCAP2_A64_SME_I16I32   = 1ULL << 39,
+    ARM_HWCAP2_A64_SME_BI32I32  = 1ULL << 40,
+    ARM_HWCAP2_A64_SME_B16B16   = 1ULL << 41,
+    ARM_HWCAP2_A64_SME_F16F16   = 1ULL << 42,
+    ARM_HWCAP2_A64_MOPS         = 1ULL << 43,
+    ARM_HWCAP2_A64_HBC          = 1ULL << 44,
 };
 
 #define ELF_HWCAP get_elf_hwcap()
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
     [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
     [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
     [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64   )] = "smefa64",
+    [__builtin_ctz(ARM_HWCAP2_A64_WFXT       )] = "wfxt",
+    [__builtin_ctzll(ARM_HWCAP2_A64_EBF16      )] = "ebf16",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SVE_EBF16  )] = "sveebf16",
+    [__builtin_ctzll(ARM_HWCAP2_A64_CSSC       )] = "cssc",
+    [__builtin_ctzll(ARM_HWCAP2_A64_RPRFM      )] = "rprfm",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SVE2P1     )] = "sve2p1",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME2       )] = "sme2",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME2P1     )] = "sme2p1",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME_I16I32 )] = "smei16i32",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME_BI32I32)] = "smebi32i32",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME_B16B16 )] = "smeb16b16",
+    [__builtin_ctzll(ARM_HWCAP2_A64_SME_F16F16 )] = "smef16f16",
+    [__builtin_ctzll(ARM_HWCAP2_A64_MOPS       )] = "mops",
+    [__builtin_ctzll(ARM_HWCAP2_A64_HBC        )] = "hbc",
     };
 
     return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
-- 
2.34.1

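Not part of the patch: a minimal guest-side sketch of how the new
AArch64 hwcap bits above are consumed.  The HWCAP2_MOPS/HWCAP2_HBC
macro names are the ones recent Linux uapi headers use; the fallback
values below are simply the bit positions from the enum added above.

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP2_MOPS
  #define HWCAP2_MOPS (1UL << 43)
  #endif
  #ifndef HWCAP2_HBC
  #define HWCAP2_HBC  (1UL << 44)
  #endif

  int main(void)
  {
      unsigned long hwcap2 = getauxval(AT_HWCAP2);

      /* Each bit reports whether the (emulated) CPU advertises the feature */
      printf("mops: %s\n", (hwcap2 & HWCAP2_MOPS) ? "yes" : "no");
      printf("hbc:  %s\n", (hwcap2 & HWCAP2_HBC) ? "yes" : "no");
      return 0;
  }
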
Add the code to report the arm32 hwcaps we were previously missing:
 sb, ssbs, fphp, asimdhp, asimddp, asimdfhm, asimdbf16, i8mm

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap(void)
         }
     }
     GET_FEATURE_ID(aa32_simdfmac, ARM_HWCAP_ARM_VFPv4);
+    /*
+     * MVFR1.FPHP and .SIMDHP must be in sync, and QEMU uses the same
+     * isar_feature function for both. The kernel reports them as two hwcaps.
+     */
+    GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_FPHP);
+    GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_ASIMDHP);
+    GET_FEATURE_ID(aa32_dp, ARM_HWCAP_ARM_ASIMDDP);
+    GET_FEATURE_ID(aa32_fhm, ARM_HWCAP_ARM_ASIMDFHM);
+    GET_FEATURE_ID(aa32_bf16, ARM_HWCAP_ARM_ASIMDBF16);
+    GET_FEATURE_ID(aa32_i8mm, ARM_HWCAP_ARM_I8MM);
 
     return hwcaps;
 }
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
     GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
     GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
+    GET_FEATURE_ID(aa32_sb, ARM_HWCAP2_ARM_SB);
+    GET_FEATURE_ID(aa32_ssbs, ARM_HWCAP2_ARM_SSBS);
     return hwcaps;
 }
 
-- 
2.34.1

Update our AArch64 ID register field definitions from the 2023-06
system register XML release:
  https://developer.arm.com/documentation/ddi0601/2023-06/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR0, SHA1, 8, 4)
 FIELD(ID_AA64ISAR0, SHA2, 12, 4)
 FIELD(ID_AA64ISAR0, CRC32, 16, 4)
 FIELD(ID_AA64ISAR0, ATOMIC, 20, 4)
+FIELD(ID_AA64ISAR0, TME, 24, 4)
 FIELD(ID_AA64ISAR0, RDM, 28, 4)
 FIELD(ID_AA64ISAR0, SHA3, 32, 4)
 FIELD(ID_AA64ISAR0, SM3, 36, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR2, APA3, 12, 4)
 FIELD(ID_AA64ISAR2, MOPS, 16, 4)
 FIELD(ID_AA64ISAR2, BC, 20, 4)
 FIELD(ID_AA64ISAR2, PAC_FRAC, 24, 4)
+FIELD(ID_AA64ISAR2, CLRBHB, 28, 4)
+FIELD(ID_AA64ISAR2, SYSREG_128, 32, 4)
+FIELD(ID_AA64ISAR2, SYSINSTR_128, 36, 4)
+FIELD(ID_AA64ISAR2, PRFMSLC, 40, 4)
+FIELD(ID_AA64ISAR2, RPRFM, 48, 4)
+FIELD(ID_AA64ISAR2, CSSC, 52, 4)
+FIELD(ID_AA64ISAR2, ATS1A, 60, 4)
 
 FIELD(ID_AA64PFR0, EL0, 0, 4)
 FIELD(ID_AA64PFR0, EL1, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64PFR1, SME, 24, 4)
 FIELD(ID_AA64PFR1, RNDR_TRAP, 28, 4)
 FIELD(ID_AA64PFR1, CSV2_FRAC, 32, 4)
 FIELD(ID_AA64PFR1, NMI, 36, 4)
+FIELD(ID_AA64PFR1, MTE_FRAC, 40, 4)
+FIELD(ID_AA64PFR1, GCS, 44, 4)
+FIELD(ID_AA64PFR1, THE, 48, 4)
+FIELD(ID_AA64PFR1, MTEX, 52, 4)
+FIELD(ID_AA64PFR1, DF2, 56, 4)
+FIELD(ID_AA64PFR1, PFAR, 60, 4)
 
 FIELD(ID_AA64MMFR0, PARANGE, 0, 4)
 FIELD(ID_AA64MMFR0, ASIDBITS, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, AFP, 44, 4)
 FIELD(ID_AA64MMFR1, NTLBPA, 48, 4)
 FIELD(ID_AA64MMFR1, TIDCP1, 52, 4)
 FIELD(ID_AA64MMFR1, CMOW, 56, 4)
+FIELD(ID_AA64MMFR1, ECBHB, 60, 4)
 
 FIELD(ID_AA64MMFR2, CNP, 0, 4)
 FIELD(ID_AA64MMFR2, UAO, 4, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
 FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
 FIELD(ID_AA64DFR0, PMUVER, 8, 4)
 FIELD(ID_AA64DFR0, BRPS, 12, 4)
+FIELD(ID_AA64DFR0, PMSS, 16, 4)
 FIELD(ID_AA64DFR0, WRPS, 20, 4)
+FIELD(ID_AA64DFR0, SEBEP, 24, 4)
 FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
 FIELD(ID_AA64DFR0, PMSVER, 32, 4)
 FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)
 FIELD(ID_AA64DFR0, TRACEBUFFER, 44, 4)
 FIELD(ID_AA64DFR0, MTPMU, 48, 4)
 FIELD(ID_AA64DFR0, BRBE, 52, 4)
+FIELD(ID_AA64DFR0, EXTTRCBUFF, 56, 4)
 FIELD(ID_AA64DFR0, HPMN0, 60, 4)
 
 FIELD(ID_AA64ZFR0, SVEVER, 0, 4)
 FIELD(ID_AA64ZFR0, AES, 4, 4)
 FIELD(ID_AA64ZFR0, BITPERM, 16, 4)
 FIELD(ID_AA64ZFR0, BFLOAT16, 20, 4)
+FIELD(ID_AA64ZFR0, B16B16, 24, 4)
 FIELD(ID_AA64ZFR0, SHA3, 32, 4)
 FIELD(ID_AA64ZFR0, SM4, 40, 4)
 FIELD(ID_AA64ZFR0, I8MM, 44, 4)
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ZFR0, F32MM, 52, 4)
 FIELD(ID_AA64ZFR0, F64MM, 56, 4)
 
 FIELD(ID_AA64SMFR0, F32F32, 32, 1)
+FIELD(ID_AA64SMFR0, BI32I32, 33, 1)
 FIELD(ID_AA64SMFR0, B16F32, 34, 1)
 FIELD(ID_AA64SMFR0, F16F32, 35, 1)
 FIELD(ID_AA64SMFR0, I8I32, 36, 4)
+FIELD(ID_AA64SMFR0, F16F16, 42, 1)
+FIELD(ID_AA64SMFR0, B16B16, 43, 1)
+FIELD(ID_AA64SMFR0, I16I32, 44, 4)
 FIELD(ID_AA64SMFR0, F64F64, 48, 1)
 FIELD(ID_AA64SMFR0, I16I64, 52, 4)
 FIELD(ID_AA64SMFR0, SMEVER, 56, 4)
-- 
2.34.1

For user-only mode we reveal a subset of the AArch64 ID registers
to the guest, to emulate the kernel's trap-and-emulate-ID-regs
handling.  Update the feature bit masks to match upstream kernel
commit a48fa7efaf1161c1c.

None of these features are yet implemented by QEMU, so this
doesn't yet have a behavioural change, but implementation of
FEAT_MOPS and FEAT_HBC is imminent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c         | 11 ++++++++++-
 tests/tcg/aarch64/sysregs.c |  4 ++--
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
                                R_ID_AA64ZFR0_F64MM_MASK },
             { .name = "ID_AA64SMFR0_EL1",
               .exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
+                               R_ID_AA64SMFR0_BI32I32_MASK |
                                R_ID_AA64SMFR0_B16F32_MASK |
                                R_ID_AA64SMFR0_F16F32_MASK |
                                R_ID_AA64SMFR0_I8I32_MASK |
+                               R_ID_AA64SMFR0_F16F16_MASK |
+                               R_ID_AA64SMFR0_B16B16_MASK |
+                               R_ID_AA64SMFR0_I16I32_MASK |
                                R_ID_AA64SMFR0_F64F64_MASK |
                                R_ID_AA64SMFR0_I16I64_MASK |
+                               R_ID_AA64SMFR0_SMEVER_MASK |
                                R_ID_AA64SMFR0_FA64_MASK },
             { .name = "ID_AA64MMFR0_EL1",
               .exported_bits = R_ID_AA64MMFR0_ECV_MASK,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
                                R_ID_AA64ISAR2_RPRES_MASK |
                                R_ID_AA64ISAR2_GPA3_MASK |
-                               R_ID_AA64ISAR2_APA3_MASK },
+                               R_ID_AA64ISAR2_APA3_MASK |
+                               R_ID_AA64ISAR2_MOPS_MASK |
+                               R_ID_AA64ISAR2_BC_MASK |
+                               R_ID_AA64ISAR2_RPRFM_MASK |
+                               R_ID_AA64ISAR2_CSSC_MASK },
             { .name = "ID_AA64ISAR*_EL1_RESERVED",
               .is_glob = true },
         };
diff --git a/tests/tcg/aarch64/sysregs.c b/tests/tcg/aarch64/sysregs.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/sysregs.c
+++ b/tests/tcg/aarch64/sysregs.c
@@ -XXX,XX +XXX,XX @@ int main(void)
      */
     get_cpu_reg_check_mask(id_aa64isar0_el1, _m(f0ff,ffff,f0ff,fff0));
     get_cpu_reg_check_mask(id_aa64isar1_el1, _m(00ff,f0ff,ffff,ffff));
-    get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(0000,0000,0000,ffff));
+    get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(00ff,0000,00ff,ffff));
     /* TGran4 & TGran64 as pegged to -1 */
     get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(f000,0000,ff00,0000));
     get_cpu_reg_check_mask(id_aa64mmfr1_el1, _m(0000,f000,0000,0000));
@@ -XXX,XX +XXX,XX @@ int main(void)
     get_cpu_reg_check_mask(id_aa64dfr0_el1, _m(0000,0000,0000,0006));
     get_cpu_reg_check_zero(id_aa64dfr1_el1);
     get_cpu_reg_check_mask(SYS_ID_AA64ZFR0_EL1, _m(0ff0,ff0f,00ff,00ff));
-    get_cpu_reg_check_mask(SYS_ID_AA64SMFR0_EL1, _m(80f1,00fd,0000,0000));
+    get_cpu_reg_check_mask(SYS_ID_AA64SMFR0_EL1, _m(8ff1,fcff,0000,0000));
 
     get_cpu_reg_check_zero(id_aa64afr0_el1);
     get_cpu_reg_check_zero(id_aa64afr1_el1);
-- 
2.34.1

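Not part of the patch: a small sketch of the trap-and-emulate path the
masks above feed into.  On AArch64 Linux (or under qemu-aarch64
user-mode emulation) an EL0 read of an ID register is trapped and
emulated, with only the exported bits visible.  S3_0_C0_C6_2 is the
numeric encoding of ID_AA64ISAR2_EL1, spelled out for assemblers that
do not know the symbolic name:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t isar2;

      /* Trapped and emulated; unexported bits read as zero */
      asm volatile("mrs %0, S3_0_C0_C6_2" : "=r"(isar2));
      printf("ID_AA64ISAR2_EL1 = 0x%016" PRIx64 "\n", isar2);
      return 0;
  }
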
FEAT_HBC (Hinted conditional branches) provides a new instruction
BC.cond, which behaves exactly like the existing B.cond except
that it provides a hint to the branch predictor about the
likely behaviour of the branch.

Since QEMU does not implement branch prediction, we can treat
this identically to B.cond.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 docs/system/arm/emulation.rst  | 1 +
 target/arm/cpu.h               | 5 +++++
 target/arm/tcg/a64.decode      | 3 ++-
 linux-user/elfload.c           | 1 +
 target/arm/tcg/cpu64.c         | 4 ++++
 target/arm/tcg/translate-a64.c | 4 ++++
 6 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_GTG (Guest translation granule size)
 - FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
+- FEAT_HBC (Hinted conditional branches)
 - FEAT_HCX (Support for the HCRX_EL2 register)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_HPDS2 (Translation table page-based hardware attributes)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
 }
 
+static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
+}
+
 static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
 {
     return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
 
 TBZ . 011011 nz:1 ..... .............. rt:5 &tbz  imm=%imm14 bitpos=%imm31_19
 
-B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
+# B.cond and BC.cond
+B_cond 0101010 0 ................... c:1 cond:4 imm=%imm19
 
 BR 1101011 0000 11111 000000 rn:5 00000 &r
 BLR 1101011 0001 11111 000000 rn:5 00000 &r
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa64_sme_f64f64, ARM_HWCAP2_A64_SME_F64F64);
     GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
     GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
+    GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC);
 
     return hwcaps;
 }
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);     /* FEAT_I8MM */
     cpu->isar.id_aa64isar1 = t;
 
+    t = cpu->isar.id_aa64isar2;
+    t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1);       /* FEAT_HBC */
+    cpu->isar.id_aa64isar2 = t;
+
     t = cpu->isar.id_aa64pfr0;
     t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);        /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);   /* FEAT_FP16 */
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_TBZ(DisasContext *s, arg_tbz *a)
 
 static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
 {
+    /* BC.cond is only present with FEAT_HBC */
+    if (a->c && !dc_isar_feature(aa64_hbc, s)) {
+        return false;
+    }
     reset_btype(s);
     if (a->cond < 0x0e) {
         /* genuinely conditional branches */
-- 
2.34.1

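Illustration only, not part of the patch: using the new instruction
from C.  This assumes an assembler recent enough to know FEAT_HBC
(for example one accepting -march=armv8.8-a); under QEMU the hint is
ignored and BC.EQ behaves exactly like B.EQ.

  static inline int is_zero_hinted(long x)
  {
      int r;

      asm("cmp   %x1, #0\n\t"
          "mov   %w0, #0\n\t"
          "bc.eq 1f\n\t"        /* hinted conditional branch */
          "b     2f\n\t"
          "1:\n\t"
          "mov   %w0, #1\n\t"
          "2:"
          : "=&r"(r) : "r"(x) : "cc");
      return r;
  }
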
The allocation_tag_mem() function takes an argument tag_size,
but it never uses it.  Remove the argument.  In mte_probe_int()
in particular this also lets us delete the code computing
the value we were passing in.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/arm/tcg/mte_helper.c | 42 +++++++++++++------------------------
 1 file changed, 14 insertions(+), 28 deletions(-)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
  * @ptr_access: the access to use for the virtual address
  * @ptr_size: the number of bytes in the normal memory access
  * @tag_access: the access to use for the tag memory
- * @tag_size: the number of bytes in the tag memory access
  * @ra: the return address for exception handling
  *
  * Our tag memory is formatted as a sequence of little-endian nibbles.
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
  * a pointer to the corresponding tag byte.  Exit with exception if the
  * virtual address is not accessible for @ptr_access.
  *
- * The @ptr_size and @tag_size values may not have an obvious relation
- * due to the alignment of @ptr, and the number of tag checks required.
- *
  * If there is no tag storage corresponding to @ptr, return NULL.
  */
 static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
                                    uint64_t ptr, MMUAccessType ptr_access,
                                    int ptr_size, MMUAccessType tag_access,
-                                   int tag_size, uintptr_t ra)
+                                   uintptr_t ra)
 {
 #ifdef CONFIG_USER_ONLY
     uint64_t clean_ptr = useronly_clean_ptr(ptr);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldg)(CPUARMState *env, uint64_t ptr, uint64_t xt)
 
     /* Trap if accessing an invalid page.  */
     mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD, 1,
-                             MMU_DATA_LOAD, 1, GETPC());
+                             MMU_DATA_LOAD, GETPC());
 
     /* Load if page supports tags. */
     if (mem) {
@@ -XXX,XX +XXX,XX @@ static inline void do_stg(CPUARMState *env, uint64_t ptr, uint64_t xt,
 
     /* Trap if accessing an invalid page.  */
     mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE, TAG_GRANULE,
-                             MMU_DATA_STORE, 1, ra);
+                             MMU_DATA_STORE, ra);
 
     /* Store if page supports tags. */
     if (mem) {
@@ -XXX,XX +XXX,XX @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
     if (ptr & TAG_GRANULE) {
         /* Two stores unaligned mod TAG_GRANULE*2 -- modify two bytes. */
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
-                                  TAG_GRANULE, MMU_DATA_STORE, 1, ra);
+                                  TAG_GRANULE, MMU_DATA_STORE, ra);
         mem2 = allocation_tag_mem(env, mmu_idx, ptr + TAG_GRANULE,
                                   MMU_DATA_STORE, TAG_GRANULE,
-                                  MMU_DATA_STORE, 1, ra);
+                                  MMU_DATA_STORE, ra);
 
         /* Store if page(s) support tags. */
         if (mem1) {
@@ -XXX,XX +XXX,XX @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
     } else {
         /* Two stores aligned mod TAG_GRANULE*2 -- modify one byte. */
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
-                                  2 * TAG_GRANULE, MMU_DATA_STORE, 1, ra);
+                                  2 * TAG_GRANULE, MMU_DATA_STORE, ra);
         if (mem1) {
             tag |= tag << 4;
             qatomic_set(mem1, tag);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
 
     /* Trap if accessing an invalid page.  */
     tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD,
-                                 gm_bs_bytes, MMU_DATA_LOAD,
-                                 gm_bs_bytes / (2 * TAG_GRANULE), ra);
+                                 gm_bs_bytes, MMU_DATA_LOAD, ra);
 
     /* The tag is squashed to zero if the page does not support tags.  */
     if (!tag_mem) {
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
 
     /* Trap if accessing an invalid page.  */
     tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
-                                 gm_bs_bytes, MMU_DATA_LOAD,
-                                 gm_bs_bytes / (2 * TAG_GRANULE), ra);
+                                 gm_bs_bytes, MMU_DATA_LOAD, ra);
 
     /*
      * Tag store only happens if the page support tags,
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
     ptr &= -dcz_bytes;
 
     mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE, dcz_bytes,
-                             MMU_DATA_STORE, tag_bytes, ra);
+                             MMU_DATA_STORE, ra);
     if (mem) {
         int tag_pair = (val & 0xf) * 0x11;
         memset(mem, tag_pair, tag_bytes);
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
     int mmu_idx, ptr_tag, bit55;
     uint64_t ptr_last, prev_page, next_page;
     uint64_t tag_first, tag_last;
-    uint64_t tag_byte_first, tag_byte_last;
-    uint32_t sizem1, tag_count, tag_size, n, c;
+    uint32_t sizem1, tag_count, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
     tag_last = QEMU_ALIGN_DOWN(ptr_last, TAG_GRANULE);
     tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
 
-    /* Round the bounds to twice the tag granule, and compute the bytes. */
-    tag_byte_first = QEMU_ALIGN_DOWN(ptr, 2 * TAG_GRANULE);
-    tag_byte_last = QEMU_ALIGN_DOWN(ptr_last, 2 * TAG_GRANULE);
-
     /* Locate the page boundaries. */
     prev_page = ptr & TARGET_PAGE_MASK;
     next_page = prev_page + TARGET_PAGE_SIZE;
 
     if (likely(tag_last - prev_page < TARGET_PAGE_SIZE)) {
         /* Memory access stays on one page. */
-        tag_size = ((tag_byte_last - tag_byte_first) / (2 * TAG_GRANULE)) + 1;
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
-                                  MMU_DATA_LOAD, tag_size, ra);
+                                  MMU_DATA_LOAD, ra);
         if (!mem1) {
             return 1;
         }
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
         n = checkN(mem1, ptr & TAG_GRANULE, ptr_tag, tag_count);
     } else {
         /* Memory access crosses to next page. */
-        tag_size = (next_page - tag_byte_first) / (2 * TAG_GRANULE);
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, next_page - ptr,
-                                  MMU_DATA_LOAD, tag_size, ra);
+                                  MMU_DATA_LOAD, ra);
 
-        tag_size = ((tag_byte_last - next_page) / (2 * TAG_GRANULE)) + 1;
         mem2 = allocation_tag_mem(env, mmu_idx, next_page, type,
                                   ptr_last - next_page + 1,
-                                  MMU_DATA_LOAD, tag_size, ra);
+                                  MMU_DATA_LOAD, ra);
 
         /*
          * Perform all of the comparisons.
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     (void) probe_write(env, ptr, 1, mmu_idx, ra);
     mem = allocation_tag_mem(env, mmu_idx, align_ptr, MMU_DATA_STORE,
-                             dcz_bytes, MMU_DATA_LOAD, tag_bytes, ra);
+                             dcz_bytes, MMU_DATA_LOAD, ra);
     if (!mem) {
         goto done;
     }
-- 
2.34.1

New patch
The LDRT/STRT "unprivileged load/store" instructions behave like
normal ones if executed at EL0. We handle this correctly for
the load/store semantics, but get the MTE checking wrong.

We always look at s->mte_active[is_unpriv] to see whether we should
be doing MTE checks, but in hflags.c when we set the TB flags that
will be used to fill the mte_active[] array we only set the
MTE0_ACTIVE bit if UNPRIV is true (i.e. we are not at EL0).

This means that a LDRT at EL0 will see s->mte_active[1] as 0,
and will not do MTE checks even when MTE is enabled.

To avoid the translate-time code having to do an explicit check on
s->unpriv to see if it is OK to index into the mte_active[] array,
duplicate MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false.

(This isn't a very serious bug because generally nobody executes
LDRT/STRT at EL0, because they have no use there.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-2-peter.maydell@linaro.org
---
 target/arm/tcg/hflags.c | 9 +++++++++
 1 file changed, 9 insertions(+)
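As a rough sketch (illustrative only, simplified from translate-a64.c),
the translate-time lookup whose behaviour this fixes looks like this:

/*
 * Sketch only: how the translator picks the MTE-active flag for a
 * load/store which has an unprivileged (LDRT/STRT) variant.  The
 * mte_active[] pair is filled in from the MTE_ACTIVE/MTE0_ACTIVE
 * TB flags set in rebuild_hflags_a64().
 */
static bool sketch_mte_active(const DisasContext *s, bool is_unpriv)
{
    /*
     * At EL0 an LDRT/STRT behaves like a normal access, so
     * mte_active[1] must mirror mte_active[0] there; otherwise this
     * lookup returns false and the MTE check is wrongly skipped,
     * which is the bug being fixed.
     */
    return s->mte_active[is_unpriv];
}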
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/tcg/hflags.c
31
+++ b/target/arm/tcg/hflags.c
32
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
33
&& !(env->pstate & PSTATE_TCO)
34
&& (sctlr & (el == 0 ? SCTLR_TCF0 : SCTLR_TCF))) {
35
DP_TBFLAG_A64(flags, MTE_ACTIVE, 1);
36
+ if (!EX_TBFLAG_A64(flags, UNPRIV)) {
37
+ /*
38
+ * In non-unpriv contexts (eg EL0), unpriv load/stores
39
+ * act like normal ones; duplicate the MTE info to
40
+ * avoid translate-a64.c having to check UNPRIV to see
41
+ * whether it is OK to index into MTE_ACTIVE[].
42
+ */
43
+ DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
44
+ }
45
}
46
}
47
/* And again for unprivileged accesses, if required. */
48
--
2.34.1
New patch
FEAT_MOPS defines a handful of new enable bits:
 * HCRX_EL2.MSCEn, SCTLR_EL1.MSCEn, SCTLR_EL2.MSCEn:
   define whether the new insns should UNDEF or not
 * HCRX_EL2.MCE2: defines whether memops exceptions from
   EL1 should be taken to EL1 or EL2

Since we don't sanitise what bits can be written for the SCTLR
registers, we only need to handle the new bits in HCRX_EL2, and
define SCTLR_MSCEN for the new SCTLR bit value.

The precedence of "HCRX bits act as 0 if SCR_EL3.HXEn is 0" versus
"bit acts as 1 if EL2 disabled" is not clear from the register
definition text, but it is clear in the CheckMOPSEnabled()
pseudocode, so we follow that. We'll have to check whether other
bits we need to implement in future follow the same logic or not.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-3-peter.maydell@linaro.org
---
 target/arm/cpu.h    |  6 ++++++
 target/arm/helper.c | 28 +++++++++++++++++++++-------
 2 files changed, 27 insertions(+), 7 deletions(-)
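A minimal sketch of that precedence, as a hypothetical standalone helper
rather than the real arm_hcrx_el2_eff() change in the diff below:

/*
 * Sketch of the effective value of HCRX_EL2.MSCEn for FEAT_MOPS,
 * following the ordering described above.
 */
static bool effective_mscen(bool el2_enabled, bool have_el3,
                            bool scr_hxen, bool hcrx_mscen)
{
    if (!el2_enabled) {
        return true;    /* the bit behaves as 1 if EL2 is disabled */
    }
    if (have_el3 && !scr_hxen) {
        return false;   /* HCRX_EL2 bits behave as 0 if SCR_EL3.HXEn is 0 */
    }
    return hcrx_mscen;  /* otherwise, whatever was written to HCRX_EL2.MSCEn */
}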
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
29
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
30
#define SCTLR_EnIB (1U << 30) /* v8.3, AArch64 only */
31
#define SCTLR_EnIA (1U << 31) /* v8.3, AArch64 only */
32
#define SCTLR_DSSBS_32 (1U << 31) /* v8.5, AArch32 only */
33
+#define SCTLR_MSCEN (1ULL << 33) /* FEAT_MOPS */
34
#define SCTLR_BT0 (1ULL << 35) /* v8.5-BTI */
35
#define SCTLR_BT1 (1ULL << 36) /* v8.5-BTI */
36
#define SCTLR_ITFSB (1ULL << 37) /* v8.5-MemTag */
37
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
38
return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
39
}
40
41
+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
42
+{
43
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
44
+}
45
+
46
/*
47
* Feature tests for "does this exist in either 32-bit or 64-bit?"
48
*/
49
diff --git a/target/arm/helper.c b/target/arm/helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/helper.c
52
+++ b/target/arm/helper.c
53
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
{
55
uint64_t valid_mask = 0;
56
57
- /* No features adding bits to HCRX are implemented. */
58
+ /* FEAT_MOPS adds MSCEn and MCE2 */
59
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
60
+ valid_mask |= HCRX_MSCEN | HCRX_MCE2;
61
+ }
62
63
/* Clear RES0 bits. */
64
env->cp15.hcrx_el2 = value & valid_mask;
65
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcrx_el2_eff(CPUARMState *env)
66
{
67
/*
68
* The bits in this register behave as 0 for all purposes other than
69
- * direct reads of the register if:
70
- * - EL2 is not enabled in the current security state,
71
- * - SCR_EL3.HXEn is 0.
72
+ * direct reads of the register if SCR_EL3.HXEn is 0.
73
+ * If EL2 is not enabled in the current security state, then the
74
+ * bit may behave as if 0, or as if 1, depending on the bit.
75
+ * For the moment, we treat the EL2-disabled case as taking
76
+ * priority over the HXEn-disabled case. This is true for the only
77
+ * bit for a feature which we implement where the answer is different
78
+ * for the two cases (MSCEn for FEAT_MOPS).
79
+ * This may need to be revisited for future bits.
80
*/
81
- if (!arm_is_el2_enabled(env)
82
- || (arm_feature(env, ARM_FEATURE_EL3)
83
- && !(env->cp15.scr_el3 & SCR_HXEN))) {
84
+ if (!arm_is_el2_enabled(env)) {
85
+ uint64_t hcrx = 0;
86
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
87
+ /* MSCEn behaves as 1 if EL2 is not enabled */
88
+ hcrx |= HCRX_MSCEN;
89
+ }
90
+ return hcrx;
91
+ }
92
+ if (arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_HXEN)) {
93
return 0;
94
}
95
return env->cp15.hcrx_el2;
96
--
2.34.1
New patch
In every place that we call the get_a64_user_mem_index() function
we do it like this:
 memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
Refactor so the caller passes in the bool that says whether they
want the 'unpriv' or 'normal' mem_index rather than having to
do the ?: themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230912140434.1333369-4-peter.maydell@linaro.org
---
 target/arm/tcg/translate-a64.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)
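For reference, the shape of a call site before and after the refactoring:

/* before */
memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
/* after */
memidx = get_a64_user_mem_index(s, a->unpriv);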
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ void a64_translate_init(void)
19
}
20
21
/*
22
- * Return the core mmu_idx to use for A64 "unprivileged load/store" insns
23
+ * Return the core mmu_idx to use for A64 load/store insns which
24
+ * have a "unprivileged load/store" variant. Those insns access
25
+ * EL0 if executed from an EL which has control over EL0 (usually
26
+ * EL1) but behave like normal loads and stores if executed from
27
+ * elsewhere (eg EL3).
28
+ *
29
+ * @unpriv : true for the unprivileged encoding; false for the
30
+ * normal encoding (in which case we will return the same
31
+ * thing as get_mem_index().
32
*/
33
-static int get_a64_user_mem_index(DisasContext *s)
34
+static int get_a64_user_mem_index(DisasContext *s, bool unpriv)
35
{
36
/*
37
* If AccType_UNPRIV is not used, the insn uses AccType_NORMAL,
38
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
39
*/
40
ARMMMUIdx useridx = s->mmu_idx;
41
42
- if (s->unpriv) {
43
+ if (unpriv && s->unpriv) {
44
/*
45
* We have pre-computed the condition for AccType_UNPRIV.
46
* Therefore we should never get here with a mmu_idx for
47
@@ -XXX,XX +XXX,XX @@ static void op_addr_ldst_imm_pre(DisasContext *s, arg_ldst_imm *a,
48
if (!a->p) {
49
tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
50
}
51
- memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
52
+ memidx = get_a64_user_mem_index(s, a->unpriv);
53
*clean_addr = gen_mte_check1_mmuidx(s, *dirty_addr, is_store,
54
a->w || a->rn != 31,
55
mop, a->unpriv, memidx);
56
@@ -XXX,XX +XXX,XX @@ static bool trans_STR_i(DisasContext *s, arg_ldst_imm *a)
57
{
58
bool iss_sf, iss_valid = !a->w;
59
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
60
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
61
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
62
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
63
64
op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
65
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_i(DisasContext *s, arg_ldst_imm *a)
66
{
67
bool iss_sf, iss_valid = !a->w;
68
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
69
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
70
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
71
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
72
73
op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
74
--
2.34.1
In rom_check_and_register_reset() we report to the user if there is
a "ROM region overlap". This has a couple of problems:
 * the reported information is not very easy to interpret
 * the function just prints the overlap to stderr (and relies on
   its single callsite in vl.c to do an error_report() and exit)
 * only the first overlap encountered is diagnosed

Make this function use error_report() and error_printf() and
print a more user-friendly report with all the overlaps
diagnosed.

Sample old output:

 rom: requested regions overlap (rom dtb. free=0x0000000000008000, addr=0x0000000000000000)
 qemu-system-aarch64: rom check and register reset failed

Sample new output:

 qemu-system-aarch64: Some ROM regions are overlapping
 These ROM regions might have been loaded by direct user request or by default.
 They could be BIOS/firmware images, a guest kernel, initrd or some other file loaded into guest memory.
 Check whether you intended to load all this guest code, and whether it has been built to load to the correct addresses.

 The following two regions overlap (in the cpu-memory-0 address space):
   phdr #0: /home/petmay01/linaro/qemu-misc-tests/ldmia-fault.axf (addresses 0x0000000000000000 - 0x0000000000008000)
   dtb (addresses 0x0000000000000000 - 0x0000000000100000)

 The following two regions overlap (in the cpu-memory-0 address space):
   phdr #1: /home/petmay01/linaro/qemu-misc-tests/bad-psci-call.axf (addresses 0x0000000040000000 - 0x0000000040000010)
   phdr #0: /home/petmay01/linaro/qemu-misc-tests/bp-test.elf (addresses 0x0000000040000000 - 0x0000000040000020)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201129203923.10622-3-peter.maydell@linaro.org
---
 hw/core/loader.c | 48 ++++++++++++++++++++++++++++++++++++++++++------
 softmmu/vl.c     |  1 -
 2 files changed, 42 insertions(+), 7 deletions(-)

The FEAT_MOPS memory operations can raise a Memory Copy or Memory Set
exception if a copy or set instruction is executed when the CPU
register state is not correct for that instruction. Define the
usual syn_* function that constructs the syndrome register value
for these exceptions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-5-peter.maydell@linaro.org
---
 target/arm/syndrome.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
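As an illustration of the field layout that syn_mop() encodes in the diff
below, the register numbers can be recovered with QEMU's existing
extract32(); these wrapper names are purely illustrative:

/* Illustrative sketch only; field positions match syn_mop() below. */
static int mop_destreg(uint32_t syndrome) { return extract32(syndrome, 10, 5); }
static int mop_srcreg(uint32_t syndrome)  { return extract32(syndrome, 5, 5); }
static int mop_sizereg(uint32_t syndrome) { return extract32(syndrome, 0, 5); }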
diff --git a/hw/core/loader.c b/hw/core/loader.c
14
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
41
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/core/loader.c
16
--- a/target/arm/syndrome.h
43
+++ b/hw/core/loader.c
17
+++ b/target/arm/syndrome.h
44
@@ -XXX,XX +XXX,XX @@ static bool roms_overlap(Rom *last_rom, Rom *this_rom)
18
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
45
last_rom->addr + last_rom->romsize > this_rom->addr;
19
EC_DATAABORT = 0x24,
20
EC_DATAABORT_SAME_EL = 0x25,
21
EC_SPALIGNMENT = 0x26,
22
+ EC_MOP = 0x27,
23
EC_AA32_FPTRAP = 0x28,
24
EC_AA64_FPTRAP = 0x2c,
25
EC_SERROR = 0x2f,
26
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_serror(uint32_t extra)
27
return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
46
}
28
}
47
29
48
+static const char *rom_as_name(Rom *rom)
30
+static inline uint32_t syn_mop(bool is_set, bool is_setg, int options,
31
+ bool epilogue, bool wrong_option, bool option_a,
32
+ int destreg, int srcreg, int sizereg)
49
+{
33
+{
50
+ const char *name = rom->as ? rom->as->name : NULL;
34
+ return (EC_MOP << ARM_EL_EC_SHIFT) | ARM_EL_IL |
51
+ return name ?: "anonymous";
35
+ (is_set << 24) | (is_setg << 23) | (options << 19) |
36
+ (epilogue << 18) | (wrong_option << 17) | (option_a << 16) |
37
+ (destreg << 10) | (srcreg << 5) | sizereg;
52
+}
38
+}
53
+
39
+
54
+static void rom_print_overlap_error_header(void)
55
+{
56
+ error_report("Some ROM regions are overlapping");
57
+ error_printf(
58
+ "These ROM regions might have been loaded by "
59
+ "direct user request or by default.\n"
60
+ "They could be BIOS/firmware images, a guest kernel, "
61
+ "initrd or some other file loaded into guest memory.\n"
62
+ "Check whether you intended to load all this guest code, and "
63
+ "whether it has been built to load to the correct addresses.\n");
64
+}
65
+
40
+
66
+static void rom_print_one_overlap_error(Rom *last_rom, Rom *rom)
41
#endif /* TARGET_ARM_SYNDROME_H */
67
+{
68
+ error_printf(
69
+ "\nThe following two regions overlap (in the %s address space):\n",
70
+ rom_as_name(rom));
71
+ error_printf(
72
+ " %s (addresses 0x" TARGET_FMT_plx " - 0x" TARGET_FMT_plx ")\n",
73
+ last_rom->name, last_rom->addr, last_rom->addr + last_rom->romsize);
74
+ error_printf(
75
+ " %s (addresses 0x" TARGET_FMT_plx " - 0x" TARGET_FMT_plx ")\n",
76
+ rom->name, rom->addr, rom->addr + rom->romsize);
77
+}
78
+
79
int rom_check_and_register_reset(void)
80
{
81
MemoryRegionSection section;
82
Rom *rom, *last_rom = NULL;
83
+ bool found_overlap = false;
84
85
QTAILQ_FOREACH(rom, &roms, next) {
86
if (rom->fw_file) {
87
@@ -XXX,XX +XXX,XX @@ int rom_check_and_register_reset(void)
88
}
89
if (!rom->mr) {
90
if (roms_overlap(last_rom, rom)) {
91
- fprintf(stderr, "rom: requested regions overlap "
92
- "(rom %s. free=0x" TARGET_FMT_plx
93
- ", addr=0x" TARGET_FMT_plx ")\n",
94
- rom->name, last_rom->addr + last_rom->romsize,
95
- rom->addr);
96
- return -1;
97
+ if (!found_overlap) {
98
+ found_overlap = true;
99
+ rom_print_overlap_error_header();
100
+ }
101
+ rom_print_one_overlap_error(last_rom, rom);
102
+ /* Keep going through the list so we report all overlaps */
103
}
104
last_rom = rom;
105
}
106
@@ -XXX,XX +XXX,XX @@ int rom_check_and_register_reset(void)
107
rom->isrom = int128_nz(section.size) && memory_region_is_rom(section.mr);
108
memory_region_unref(section.mr);
109
}
110
+ if (found_overlap) {
111
+ return -1;
112
+ }
113
+
114
qemu_register_reset(rom_reset, NULL);
115
roms_loaded = 1;
116
return 0;
117
diff --git a/softmmu/vl.c b/softmmu/vl.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/softmmu/vl.c
120
+++ b/softmmu/vl.c
121
@@ -XXX,XX +XXX,XX @@ static void qemu_machine_creation_done(void)
122
qemu_run_machine_init_done_notifiers();
123
124
if (rom_check_and_register_reset() != 0) {
125
- error_report("rom check and register reset failed");
126
exit(1);
127
}
128
129
--
2.20.1

--
2.34.1
1
The Nios2 architecture supports two different interrupt controller
1
For the FEAT_MOPS operations, the existing allocation_tag_mem()
2
options:
2
function almost does what we want, but it will take a watchpoint
3
exception even for an ra == 0 probe request, and it requires that the
4
caller guarantee that the memory is accessible. For FEAT_MOPS we
5
want a function that will not take any kind of exception, and will
6
return NULL for the not-accessible case.
3
7
4
* The IIC (Internal Interrupt Controller) is part of the CPU itself;
8
Rename allocation_tag_mem() to allocation_tag_mem_probe() and add an
5
it has 32 IRQ input lines and no NMI support. Interrupt status is
9
extra 'probe' argument that lets us distinguish these cases;
6
queried and controlled via the CPU's ipending and istatus
10
allocation_tag_mem() is now a wrapper that always passes 'false'.
7
registers.
8
11
9
* The EIC (External Interrupt Controller) interface allows the CPU
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
to connect to an external interrupt controller. The interface
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
allows the interrupt controller to present a packet of information
14
Message-id: 20230912140434.1333369-6-peter.maydell@linaro.org
12
containing:
15
---
13
- handler address
16
target/arm/tcg/mte_helper.c | 48 ++++++++++++++++++++++++++++---------
14
- interrupt level
17
1 file changed, 37 insertions(+), 11 deletions(-)
15
- register set
16
- NMI mode
17
18
18
QEMU does not model an EIC currently. We do model the IIC, but its
19
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
19
implementation is split across code in hw/nios2/cpu_pic.c and
20
hw/intc/nios2_iic.c. The code in those two files has no state of its
21
own -- the IIC state is in the Nios2CPU state struct.
22
23
Because CPU objects now inherit (indirectly) from TYPE_DEVICE, they
24
can have GPIO input lines themselves, so we can implement the IIC
25
directly in the CPU object the same way that real hardware does.
26
27
Create named "IRQ" GPIO inputs to the Nios2 CPU object, and make the
28
only user of the IIC wire up directly to those instead.
29
30
Note that the old code had an "NMI" concept which was entirely unused
31
and also as far as I can see not architecturally correct, since only
32
the EIC has a concept of an NMI.
33
34
This fixes a Coverity-reported trivial memory leak of the IRQ array
35
allocated in nios2_cpu_pic_init().
36
37
Fixes: Coverity CID 1421916
38
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
39
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
40
Message-id: 20201129174022.26530-2-peter.maydell@linaro.org
41
Reviewed-by: Wentong Wu <wentong.wu@intel.com>
42
Tested-by: Wentong Wu <wentong.wu@intel.com>
43
---
44
target/nios2/cpu.h | 1 -
45
hw/intc/nios2_iic.c | 95 ---------------------------------------
46
hw/nios2/10m50_devboard.c | 13 +-----
47
hw/nios2/cpu_pic.c | 31 -------------
48
target/nios2/cpu.c | 30 +++++++++++++
49
MAINTAINERS | 1 -
50
hw/intc/meson.build | 1 -
51
7 files changed, 32 insertions(+), 140 deletions(-)
52
delete mode 100644 hw/intc/nios2_iic.c
53
54
diff --git a/target/nios2/cpu.h b/target/nios2/cpu.h
55
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
56
--- a/target/nios2/cpu.h
21
--- a/target/arm/tcg/mte_helper.c
57
+++ b/target/nios2/cpu.h
22
+++ b/target/arm/tcg/mte_helper.c
58
@@ -XXX,XX +XXX,XX @@ void nios2_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
23
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
59
MMUAccessType access_type,
24
}
60
int mmu_idx, uintptr_t retaddr);
25
61
26
/**
62
-qemu_irq *nios2_cpu_pic_init(Nios2CPU *cpu);
27
- * allocation_tag_mem:
63
void nios2_check_interrupts(CPUNios2State *env);
28
+ * allocation_tag_mem_probe:
64
29
* @env: the cpu environment
65
void do_nios2_semihosting(CPUNios2State *env);
30
* @ptr_mmu_idx: the addressing regime to use for the virtual address
66
diff --git a/hw/intc/nios2_iic.c b/hw/intc/nios2_iic.c
31
* @ptr: the virtual address for which to look up tag memory
67
deleted file mode 100644
32
* @ptr_access: the access to use for the virtual address
68
index XXXXXXX..XXXXXXX
33
* @ptr_size: the number of bytes in the normal memory access
69
--- a/hw/intc/nios2_iic.c
34
* @tag_access: the access to use for the tag memory
70
+++ /dev/null
35
+ * @probe: true to merely probe, never taking an exception
71
@@ -XXX,XX +XXX,XX @@
36
* @ra: the return address for exception handling
72
-/*
37
*
73
- * QEMU Altera Internal Interrupt Controller.
38
* Our tag memory is formatted as a sequence of little-endian nibbles.
74
- *
39
@@ -XXX,XX +XXX,XX @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
75
- * Copyright (c) 2012 Chris Wulff <crwulff@gmail.com>
40
* for the higher addr.
76
- *
41
*
77
- * This library is free software; you can redistribute it and/or
42
* Here, resolve the physical address from the virtual address, and return
78
- * modify it under the terms of the GNU Lesser General Public
43
- * a pointer to the corresponding tag byte. Exit with exception if the
79
- * License as published by the Free Software Foundation; either
44
- * virtual address is not accessible for @ptr_access.
80
- * version 2.1 of the License, or (at your option) any later version.
45
+ * a pointer to the corresponding tag byte.
81
- *
46
*
82
- * This library is distributed in the hope that it will be useful,
47
* If there is no tag storage corresponding to @ptr, return NULL.
83
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
48
+ *
84
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
49
+ * If the page is inaccessible for @ptr_access, or has a watchpoint, there are
85
- * Lesser General Public License for more details.
50
+ * three options:
86
- *
51
+ * (1) probe = true, ra = 0 : pure probe -- we return NULL if the page is not
87
- * You should have received a copy of the GNU Lesser General Public
52
+ * accessible, and do not take watchpoint traps. The calling code must
88
- * License along with this library; if not, see
53
+ * handle those cases in the right priority compared to MTE traps.
89
- * <http://www.gnu.org/licenses/lgpl-2.1.html>
54
+ * (2) probe = false, ra = 0 : probe, no fault expected -- the caller guarantees
90
- */
55
+ * that the page is going to be accessible. We will take watchpoint traps.
91
-
56
+ * (3) probe = false, ra != 0 : non-probe -- we will take both memory access
92
-#include "qemu/osdep.h"
57
+ * traps and watchpoint traps.
93
-#include "qemu/module.h"
58
+ * (probe = true, ra != 0 is invalid and will assert.)
94
-#include "qapi/error.h"
59
*/
95
-
60
-static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
96
-#include "hw/irq.h"
61
- uint64_t ptr, MMUAccessType ptr_access,
97
-#include "hw/sysbus.h"
62
- int ptr_size, MMUAccessType tag_access,
98
-#include "cpu.h"
63
- uintptr_t ra)
99
-#include "qom/object.h"
64
+static uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
100
-
65
+ uint64_t ptr, MMUAccessType ptr_access,
101
-#define TYPE_ALTERA_IIC "altera,iic"
66
+ int ptr_size, MMUAccessType tag_access,
102
-OBJECT_DECLARE_SIMPLE_TYPE(AlteraIIC, ALTERA_IIC)
67
+ bool probe, uintptr_t ra)
103
-
68
{
104
-struct AlteraIIC {
69
#ifdef CONFIG_USER_ONLY
105
- SysBusDevice parent_obj;
70
uint64_t clean_ptr = useronly_clean_ptr(ptr);
106
- void *cpu;
71
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
107
- qemu_irq parent_irq;
72
uint8_t *tags;
108
-};
73
uintptr_t index;
109
-
74
110
-static void update_irq(AlteraIIC *pv)
75
+ assert(!(probe && ra));
111
-{
76
+
112
- CPUNios2State *env = &((Nios2CPU *)(pv->cpu))->env;
77
if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) {
113
-
78
cpu_loop_exit_sigsegv(env_cpu(env), ptr, ptr_access,
114
- qemu_set_irq(pv->parent_irq,
79
!(flags & PAGE_VALID), ra);
115
- env->regs[CR_IPENDING] & env->regs[CR_IENABLE]);
80
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
116
-}
81
* exception for inaccessible pages, and resolves the virtual address
117
-
82
* into the softmmu tlb.
118
-static void irq_handler(void *opaque, int irq, int level)
83
*
119
-{
84
- * When RA == 0, this is for mte_probe. The page is expected to be
120
- AlteraIIC *pv = opaque;
85
- * valid. Indicate to probe_access_flags no-fault, then assert that
121
- CPUNios2State *env = &((Nios2CPU *)(pv->cpu))->env;
86
- * we received a valid page.
122
-
87
+ * When RA == 0, this is either a pure probe or a no-fault-expected probe.
123
- env->regs[CR_IPENDING] &= ~(1 << irq);
88
+ * Indicate to probe_access_flags no-fault, then either return NULL
124
- env->regs[CR_IPENDING] |= !!level << irq;
89
+ * for the pure probe, or assert that we received a valid page for the
125
-
90
+ * no-fault-expected probe.
126
- update_irq(pv);
91
*/
127
-}
92
flags = probe_access_full(env, ptr, 0, ptr_access, ptr_mmu_idx,
128
-
93
ra == 0, &host, &full, ra);
129
-static void altera_iic_init(Object *obj)
94
+ if (probe && (flags & TLB_INVALID_MASK)) {
130
-{
95
+ return NULL;
131
- AlteraIIC *pv = ALTERA_IIC(obj);
96
+ }
132
-
97
assert(!(flags & TLB_INVALID_MASK));
133
- qdev_init_gpio_in(DEVICE(pv), irq_handler, 32);
98
134
- sysbus_init_irq(SYS_BUS_DEVICE(obj), &pv->parent_irq);
99
/* If the virtual page MemAttr != Tagged, access unchecked. */
135
-}
100
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
136
-
137
-static void altera_iic_realize(DeviceState *dev, Error **errp)
138
-{
139
- struct AlteraIIC *pv = ALTERA_IIC(dev);
140
-
141
- pv->cpu = object_property_get_link(OBJECT(dev), "cpu", &error_abort);
142
-}
143
-
144
-static void altera_iic_class_init(ObjectClass *klass, void *data)
145
-{
146
- DeviceClass *dc = DEVICE_CLASS(klass);
147
-
148
- /* Reason: needs to be wired up, e.g. by nios2_10m50_ghrd_init() */
149
- dc->user_creatable = false;
150
- dc->realize = altera_iic_realize;
151
-}
152
-
153
-static TypeInfo altera_iic_info = {
154
- .name = TYPE_ALTERA_IIC,
155
- .parent = TYPE_SYS_BUS_DEVICE,
156
- .instance_size = sizeof(AlteraIIC),
157
- .instance_init = altera_iic_init,
158
- .class_init = altera_iic_class_init,
159
-};
160
-
161
-static void altera_iic_register(void)
162
-{
163
- type_register_static(&altera_iic_info);
164
-}
165
-
166
-type_init(altera_iic_register)
167
diff --git a/hw/nios2/10m50_devboard.c b/hw/nios2/10m50_devboard.c
168
index XXXXXXX..XXXXXXX 100644
169
--- a/hw/nios2/10m50_devboard.c
170
+++ b/hw/nios2/10m50_devboard.c
171
@@ -XXX,XX +XXX,XX @@ static void nios2_10m50_ghrd_init(MachineState *machine)
172
ram_addr_t tcm_size = 0x1000; /* 1 kiB, but QEMU limit is 4 kiB */
173
ram_addr_t ram_base = 0x08000000;
174
ram_addr_t ram_size = 0x08000000;
175
- qemu_irq *cpu_irq, irq[32];
176
+ qemu_irq irq[32];
177
int i;
178
179
/* Physical TCM (tb_ram_1k) with alias at 0xc0000000 */
180
@@ -XXX,XX +XXX,XX @@ static void nios2_10m50_ghrd_init(MachineState *machine)
181
182
/* Create CPU -- FIXME */
183
cpu = NIOS2_CPU(cpu_create(TYPE_NIOS2_CPU));
184
-
185
- /* Register: CPU interrupt controller (PIC) */
186
- cpu_irq = nios2_cpu_pic_init(cpu);
187
-
188
- /* Register: Internal Interrupt Controller (IIC) */
189
- dev = qdev_new("altera,iic");
190
- object_property_add_const_link(OBJECT(dev), "cpu", OBJECT(cpu));
191
- sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
192
- sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, cpu_irq[0]);
193
for (i = 0; i < 32; i++) {
194
- irq[i] = qdev_get_gpio_in(dev, i);
195
+ irq[i] = qdev_get_gpio_in_named(DEVICE(cpu), "IRQ", i);
196
}
101
}
197
102
198
/* Register: Altera 16550 UART */
103
/* Any debug exception has priority over a tag check exception. */
199
diff --git a/hw/nios2/cpu_pic.c b/hw/nios2/cpu_pic.c
104
- if (unlikely(flags & TLB_WATCHPOINT)) {
200
index XXXXXXX..XXXXXXX 100644
105
+ if (!probe && unlikely(flags & TLB_WATCHPOINT)) {
201
--- a/hw/nios2/cpu_pic.c
106
int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
202
+++ b/hw/nios2/cpu_pic.c
107
assert(ra != 0);
203
@@ -XXX,XX +XXX,XX @@
108
cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
204
109
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
205
#include "boot.h"
206
207
-static void nios2_pic_cpu_handler(void *opaque, int irq, int level)
208
-{
209
- Nios2CPU *cpu = opaque;
210
- CPUNios2State *env = &cpu->env;
211
- CPUState *cs = CPU(cpu);
212
- int type = irq ? CPU_INTERRUPT_NMI : CPU_INTERRUPT_HARD;
213
-
214
- if (type == CPU_INTERRUPT_HARD) {
215
- env->irq_pending = level;
216
-
217
- if (level && (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
218
- env->irq_pending = 0;
219
- cpu_interrupt(cs, type);
220
- } else if (!level) {
221
- env->irq_pending = 0;
222
- cpu_reset_interrupt(cs, type);
223
- }
224
- } else {
225
- if (level) {
226
- cpu_interrupt(cs, type);
227
- } else {
228
- cpu_reset_interrupt(cs, type);
229
- }
230
- }
231
-}
232
-
233
void nios2_check_interrupts(CPUNios2State *env)
234
{
235
if (env->irq_pending &&
236
@@ -XXX,XX +XXX,XX @@ void nios2_check_interrupts(CPUNios2State *env)
237
cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HARD);
238
}
239
}
240
-
241
-qemu_irq *nios2_cpu_pic_init(Nios2CPU *cpu)
242
-{
243
- return qemu_allocate_irqs(nios2_pic_cpu_handler, cpu, 2);
244
-}
245
diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
246
index XXXXXXX..XXXXXXX 100644
247
--- a/target/nios2/cpu.c
248
+++ b/target/nios2/cpu.c
249
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_reset(DeviceState *dev)
250
#endif
110
#endif
251
}
111
}
252
112
253
+#ifndef CONFIG_USER_ONLY
113
+static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
254
+static void nios2_cpu_set_irq(void *opaque, int irq, int level)
114
+ uint64_t ptr, MMUAccessType ptr_access,
115
+ int ptr_size, MMUAccessType tag_access,
116
+ uintptr_t ra)
255
+{
117
+{
256
+ Nios2CPU *cpu = opaque;
118
+ return allocation_tag_mem_probe(env, ptr_mmu_idx, ptr, ptr_access,
257
+ CPUNios2State *env = &cpu->env;
119
+ ptr_size, tag_access, false, ra);
258
+ CPUState *cs = CPU(cpu);
120
+}
259
+
121
+
260
+ env->regs[CR_IPENDING] &= ~(1 << irq);
122
uint64_t HELPER(irg)(CPUARMState *env, uint64_t rn, uint64_t rm)
261
+ env->regs[CR_IPENDING] |= !!level << irq;
262
+
263
+ env->irq_pending = env->regs[CR_IPENDING] & env->regs[CR_IENABLE];
264
+
265
+ if (env->irq_pending && (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
266
+ env->irq_pending = 0;
267
+ cpu_interrupt(cs, CPU_INTERRUPT_HARD);
268
+ } else if (!env->irq_pending) {
269
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
270
+ }
271
+}
272
+#endif
273
+
274
static void nios2_cpu_initfn(Object *obj)
275
{
123
{
276
Nios2CPU *cpu = NIOS2_CPU(obj);
124
uint16_t exclude = extract32(rm | env->cp15.gcr_el1, 0, 16);
277
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_initfn(Object *obj)
278
279
#if !defined(CONFIG_USER_ONLY)
280
mmu_init(&cpu->env);
281
+
282
+ /*
283
+ * These interrupt lines model the IIC (internal interrupt
284
+ * controller). QEMU does not currently support the EIC
285
+ * (external interrupt controller) -- if we did it would be
286
+ * a separate device in hw/intc with a custom interface to
287
+ * the CPU, and boards using it would not wire up these IRQ lines.
288
+ */
289
+ qdev_init_gpio_in_named(DEVICE(cpu), nios2_cpu_set_irq, "IRQ", 32);
290
#endif
291
}
292
293
diff --git a/MAINTAINERS b/MAINTAINERS
294
index XXXXXXX..XXXXXXX 100644
295
--- a/MAINTAINERS
296
+++ b/MAINTAINERS
297
@@ -XXX,XX +XXX,XX @@ M: Marek Vasut <marex@denx.de>
298
S: Maintained
299
F: target/nios2/
300
F: hw/nios2/
301
-F: hw/intc/nios2_iic.c
302
F: disas/nios2.c
303
F: default-configs/nios2-softmmu.mak
304
305
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
306
index XXXXXXX..XXXXXXX 100644
307
--- a/hw/intc/meson.build
308
+++ b/hw/intc/meson.build
309
@@ -XXX,XX +XXX,XX @@ specific_ss.add(when: 'CONFIG_IBEX', if_true: files('ibex_plic.c'))
310
specific_ss.add(when: 'CONFIG_IOAPIC', if_true: files('ioapic.c'))
311
specific_ss.add(when: 'CONFIG_LOONGSON_LIOINTC', if_true: files('loongson_liointc.c'))
312
specific_ss.add(when: 'CONFIG_MIPS_CPS', if_true: files('mips_gic.c'))
313
-specific_ss.add(when: 'CONFIG_NIOS2', if_true: files('nios2_iic.c'))
314
specific_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_intc.c'))
315
specific_ss.add(when: 'CONFIG_OMPIC', if_true: files('ompic.c'))
316
specific_ss.add(when: 'CONFIG_OPENPIC_KVM', if_true: files('openpic_kvm.c'))
317
--
2.20.1

--
2.34.1
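A sketch of the calling convention for the probe variant added in the
patch above (illustrative fragment, not an exact QEMU call site):

/*
 * Pure probe (probe=true, ra=0): must never fault and must not fire
 * watchpoints; a NULL return means "not accessible / no tag memory".
 */
uint8_t *tagp = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_LOAD,
                                         size, MMU_DATA_LOAD,
                                         true, 0);
if (!tagp) {
    /* the caller handles the inaccessible case itself */
}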
The function nios2_check_interrupts() looks only at CPU-internal
state; it belongs in target/nios2, not hw/nios2. Move it into the
same file as its only caller, so it can just be local to that file.

This removes the only remaining code from cpu_pic.c, so we can delete
that file entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201129174022.26530-3-peter.maydell@linaro.org
Reviewed-by: Wentong Wu <wentong.wu@intel.com>
Tested-by: Wentong Wu <wentong.wu@intel.com>
---
 target/nios2/cpu.h       |  2 --
 hw/nios2/cpu_pic.c       | 36 ------------------------------------
 target/nios2/op_helper.c |  9 +++++++++
 hw/nios2/meson.build     |  2 +-
 4 files changed, 10 insertions(+), 39 deletions(-)
 delete mode 100644 hw/nios2/cpu_pic.c

The FEAT_MOPS instructions need a couple of helper routines that
check for MTE tag failures:
 * mte_mops_probe() checks whether there is going to be a tag
   error in the next up-to-a-page worth of data
 * mte_check_fail() is an existing function to record the fact
   of a tag failure, which we need to make global so we can
   call it from helper-a64.c

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-7-peter.maydell@linaro.org
---
 target/arm/internals.h      | 28 +++++++++++++++++++
 target/arm/tcg/mte_helper.c | 54 +++++++++++++++++++++++++++++++++++--
 2 files changed, 80 insertions(+), 2 deletions(-)
21
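A sketch of how these two helpers are intended to be used together by a
FEAT_MOPS helper; compare set_step() in the following patch (variable
names illustrative):

/*
 * Clamp one stage of a memory set/copy to the point where the next
 * MTE tag mismatch (if any) would occur.
 */
uint64_t mte_ok = mte_mops_probe(env, toaddr, stagesize, mtedesc);
if (mte_ok == 0) {
    /* Mismatch in the very first granule: report it (may not return). */
    mte_check_fail(env, mtedesc, toaddr, ra);
} else {
    stagesize = MIN(stagesize, mte_ok);
}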
diff --git a/target/nios2/cpu.h b/target/nios2/cpu.h
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/target/nios2/cpu.h
19
--- a/target/arm/internals.h
24
+++ b/target/nios2/cpu.h
20
+++ b/target/arm/internals.h
25
@@ -XXX,XX +XXX,XX @@ void nios2_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
21
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, SIZEM1, 12, SIMD_DATA_BITS - 12) /* size - 1 */
26
MMUAccessType access_type,
22
bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
27
int mmu_idx, uintptr_t retaddr);
23
uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
28
24
29
-void nios2_check_interrupts(CPUNios2State *env);
25
+/**
30
-
26
+ * mte_mops_probe: Check where the next MTE failure is for a FEAT_MOPS operation
31
void do_nios2_semihosting(CPUNios2State *env);
27
+ * @env: CPU env
32
28
+ * @ptr: start address of memory region (dirty pointer)
33
#define CPU_RESOLVING_TYPE TYPE_NIOS2_CPU
29
+ * @size: length of region (guaranteed not to cross a page boundary)
34
diff --git a/hw/nios2/cpu_pic.c b/hw/nios2/cpu_pic.c
30
+ * @desc: MTEDESC descriptor word (0 means no MTE checks)
35
deleted file mode 100644
31
+ * Returns: the size of the region that can be copied without hitting
36
index XXXXXXX..XXXXXXX
32
+ * an MTE tag failure
37
--- a/hw/nios2/cpu_pic.c
33
+ *
38
+++ /dev/null
34
+ * Note that we assume that the caller has already checked the TBI
39
@@ -XXX,XX +XXX,XX @@
35
+ * and TCMA bits with mte_checks_needed() and an MTE check is definitely
40
-/*
36
+ * required.
41
- * Altera Nios2 CPU PIC
37
+ */
42
- *
38
+uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
43
- * Copyright (c) 2016 Marek Vasut <marek.vasut@gmail.com>
39
+ uint32_t desc);
44
- *
40
+
45
- * This library is free software; you can redistribute it and/or
41
+/**
46
- * modify it under the terms of the GNU Lesser General Public
42
+ * mte_check_fail: Record an MTE tag check failure
47
- * License as published by the Free Software Foundation; either
43
+ * @env: CPU env
48
- * version 2.1 of the License, or (at your option) any later version.
44
+ * @desc: MTEDESC descriptor word
49
- *
45
+ * @dirty_ptr: Failing dirty address
50
- * This library is distributed in the hope that it will be useful,
46
+ * @ra: TCG retaddr
51
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
47
+ *
52
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
48
+ * This may never return (if the MTE tag checks are configured to fault).
53
- * Lesser General Public License for more details.
49
+ */
54
- *
50
+void mte_check_fail(CPUARMState *env, uint32_t desc,
55
- * You should have received a copy of the GNU Lesser General Public
51
+ uint64_t dirty_ptr, uintptr_t ra);
56
- * License along with this library; if not, see
52
+
57
- * <http://www.gnu.org/licenses/lgpl-2.1.html>
53
static inline int allocation_tag_from_addr(uint64_t ptr)
58
- */
54
{
59
-
55
return extract64(ptr, 56, 4);
60
-#include "qemu/osdep.h"
56
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
61
-#include "cpu.h"
62
-#include "hw/irq.h"
63
-
64
-#include "qemu/config-file.h"
65
-
66
-#include "boot.h"
67
-
68
-void nios2_check_interrupts(CPUNios2State *env)
69
-{
70
- if (env->irq_pending &&
71
- (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
72
- env->irq_pending = 0;
73
- cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HARD);
74
- }
75
-}
76
diff --git a/target/nios2/op_helper.c b/target/nios2/op_helper.c
77
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
78
--- a/target/nios2/op_helper.c
58
--- a/target/arm/tcg/mte_helper.c
79
+++ b/target/nios2/op_helper.c
59
+++ b/target/arm/tcg/mte_helper.c
80
@@ -XXX,XX +XXX,XX @@ void helper_mmu_write(CPUNios2State *env, uint32_t rn, uint32_t v)
60
@@ -XXX,XX +XXX,XX @@ static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
81
mmu_write(env, rn, v);
82
}
61
}
83
62
84
+static void nios2_check_interrupts(CPUNios2State *env)
63
/* Record a tag check failure. */
64
-static void mte_check_fail(CPUARMState *env, uint32_t desc,
65
- uint64_t dirty_ptr, uintptr_t ra)
66
+void mte_check_fail(CPUARMState *env, uint32_t desc,
67
+ uint64_t dirty_ptr, uintptr_t ra)
68
{
69
int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
70
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
71
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
72
done:
73
return useronly_clean_ptr(ptr);
74
}
75
+
76
+uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
77
+ uint32_t desc)
85
+{
78
+{
86
+ if (env->irq_pending &&
79
+ int mmu_idx, tag_count;
87
+ (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
80
+ uint64_t ptr_tag, tag_first, tag_last;
88
+ env->irq_pending = 0;
81
+ void *mem;
89
+ cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HARD);
82
+ bool w = FIELD_EX32(desc, MTEDESC, WRITE);
83
+ uint32_t n;
84
+
85
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
86
+ /* True probe; this will never fault */
87
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
88
+ w ? MMU_DATA_STORE : MMU_DATA_LOAD,
89
+ size, MMU_DATA_LOAD, true, 0);
90
+ if (!mem) {
91
+ return size;
92
+ }
93
+
94
+ /*
95
+ * TODO: checkN() is not designed for checks of the size we expect
96
+ * for FEAT_MOPS operations, so we should implement this differently.
97
+ * Maybe we should do something like
98
+ * if (region start and size are aligned nicely) {
99
+ * do direct loads of 64 tag bits at a time;
100
+ * } else {
101
+ * call checkN()
102
+ * }
103
+ */
104
+ /* Round the bounds to the tag granule, and compute the number of tags. */
105
+ ptr_tag = allocation_tag_from_addr(ptr);
106
+ tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
107
+ tag_last = QEMU_ALIGN_DOWN(ptr + size - 1, TAG_GRANULE);
108
+ tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
109
+ n = checkN(mem, ptr & TAG_GRANULE, ptr_tag, tag_count);
110
+ if (likely(n == tag_count)) {
111
+ return size;
112
+ }
113
+
114
+ /*
115
+ * Failure; for the first granule, it's at @ptr. Otherwise
116
+ * it's at the first byte of the nth granule. Calculate how
117
+ * many bytes we can access without hitting that failure.
118
+ */
119
+ if (n == 0) {
120
+ return 0;
121
+ } else {
122
+ return n * TAG_GRANULE - (ptr - tag_first);
90
+ }
123
+ }
91
+}
124
+}
92
+
93
void helper_check_interrupts(CPUNios2State *env)
94
{
95
qemu_mutex_lock_iothread();
96
diff --git a/hw/nios2/meson.build b/hw/nios2/meson.build
97
index XXXXXXX..XXXXXXX 100644
98
--- a/hw/nios2/meson.build
99
+++ b/hw/nios2/meson.build
100
@@ -XXX,XX +XXX,XX @@
101
nios2_ss = ss.source_set()
102
-nios2_ss.add(files('boot.c', 'cpu_pic.c'))
103
+nios2_ss.add(files('boot.c'))
104
nios2_ss.add(when: 'CONFIG_NIOS2_10M50', if_true: files('10m50_devboard.c'))
105
nios2_ss.add(when: 'CONFIG_NIOS2_GENERIC_NOMMU', if_true: files('generic_nommu.c'))
106
107
--
2.20.1

--
2.34.1
From: Vikram Garhwal <fnu.vikram@xilinx.com>

This patch adds a skeleton model of the DWC3 USB controller attached
to the xhci-sysbus device. It defines the global register space of
the DWC3 controller; these global registers control the AXI/AHB
interface properties, external FIFO support and event count support,
all of which are unimplemented at present. We only support core reset
and reading the ID register.

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1607023357-5096-3-git-send-email-sai.pavan.boddu@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/usb/hcd-dwc3.h |  55 +++
 hw/usb/hcd-dwc3.c         | 689 ++++++++++++++++++++++++++++++++++++++
 hw/usb/Kconfig            |   5 +
 hw/usb/meson.build        |   1 +
 4 files changed, 750 insertions(+)
 create mode 100644 include/hw/usb/hcd-dwc3.h
 create mode 100644 hw/usb/hcd-dwc3.c

Implement the SET* instructions which collectively implement a
"memset" operation. These come in a set of three, eg SETP
(prologue), SETM (main), SETE (epilogue), and each of those has
different flavours to indicate whether memory accesses should be
unpriv or non-temporal.

This commit does not include the "memset with tag setting"
SETG* instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-8-peter.maydell@linaro.org
---
 target/arm/tcg/helper-a64.h    |   4 +
 target/arm/tcg/a64.decode      |  16 ++
 target/arm/tcg/helper-a64.c    | 344 +++++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  49 +++++
 4 files changed, 413 insertions(+)
23
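A rough model (illustrative C, not guest code and not part of the patch)
of the "option A" register convention which the helpers below implement,
for a set of 'len' bytes starting at 'addr':

/* After SETP (prologue), an Option A implementation leaves: */
uint64_t xd = addr + len;      /* Xd: the final address       */
int64_t  xn = -(int64_t)len;   /* Xn: -(bytes remaining)      */
/* NZCV is set to 0000 so software can tell this is Option A. */

/* SETM and SETE then step Xn towards zero as bytes are written: */
while (xn != 0) {
    uint64_t chunk = set_some_bytes(xd + xn, (uint64_t)-xn); /* hypothetical helper */
    xn += chunk;
}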
diff --git a/include/hw/usb/hcd-dwc3.h b/include/hw/usb/hcd-dwc3.h
20
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
24
new file mode 100644
21
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX
22
--- a/target/arm/tcg/helper-a64.h
26
--- /dev/null
23
+++ b/target/arm/tcg/helper-a64.h
27
+++ b/include/hw/usb/hcd-dwc3.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64)
28
@@ -XXX,XX +XXX,XX @@
25
29
+/*
26
DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
30
+ * QEMU model of the USB DWC3 host controller emulation.
27
noreturn, env, i64, i32, i32)
28
+
29
+DEF_HELPER_3(setp, void, env, i32, i32)
30
+DEF_HELPER_3(setm, void, env, i32, i32)
31
+DEF_HELPER_3(sete, void, env, i32, i32)
32
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/tcg/a64.decode
35
+++ b/target/arm/tcg/a64.decode
36
@@ -XXX,XX +XXX,XX @@ LDGM 11011001 11 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
37
STZ2G 11011001 11 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
38
STZ2G 11011001 11 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
39
STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
40
+
41
+# Memory operations (memset, memcpy, memmove)
42
+# Each of these comes in a set of three, eg SETP (prologue), SETM (main),
43
+# SETE (epilogue), and each of those has different flavours to
44
+# indicate whether memory accesses should be unpriv or non-temporal.
45
+# We don't distinguish temporal and non-temporal accesses, but we
46
+# do need to report it in syndrome register values.
47
+
48
+# Memset
49
+&set rs rn rd unpriv nontemp
50
+# op2 bit 1 is nontemporal bit
51
+@set .. ......... rs:5 .. nontemp:1 unpriv:1 .. rn:5 rd:5 &set
52
+
53
+SETP 00 011001110 ..... 00 . . 01 ..... ..... @set
54
+SETM 00 011001110 ..... 01 . . 01 ..... ..... @set
55
+SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
56
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/tcg/helper-a64.c
59
+++ b/target/arm/tcg/helper-a64.c
60
@@ -XXX,XX +XXX,XX @@ void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr,
61
arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type,
62
mmu_idx, GETPC());
63
}
64
+
65
+/* Memory operations (memset, memmove, memcpy) */
66
+
67
+/*
68
+ * Return true if the CPY* and SET* insns can execute; compare
69
+ * pseudocode CheckMOPSEnabled(), though we refactor it a little.
70
+ */
71
+static bool mops_enabled(CPUARMState *env)
72
+{
73
+ int el = arm_current_el(env);
74
+
75
+ if (el < 2 &&
76
+ (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) &&
77
+ !(arm_hcrx_el2_eff(env) & HCRX_MSCEN)) {
78
+ return false;
79
+ }
80
+
81
+ if (el == 0) {
82
+ if (!el_is_in_host(env, 0)) {
83
+ return env->cp15.sctlr_el[1] & SCTLR_MSCEN;
84
+ } else {
85
+ return env->cp15.sctlr_el[2] & SCTLR_MSCEN;
86
+ }
87
+ }
88
+ return true;
89
+}
90
+
91
+static void check_mops_enabled(CPUARMState *env, uintptr_t ra)
92
+{
93
+ if (!mops_enabled(env)) {
94
+ raise_exception_ra(env, EXCP_UDEF, syn_uncategorized(),
95
+ exception_target_el(env), ra);
96
+ }
97
+}
98
+
99
+/*
100
+ * Return the target exception level for an exception due
101
+ * to mismatched arguments in a FEAT_MOPS copy or set.
102
+ * Compare pseudocode MismatchedCpySetTargetEL()
103
+ */
104
+static int mops_mismatch_exception_target_el(CPUARMState *env)
105
+{
106
+ int el = arm_current_el(env);
107
+
108
+ if (el > 1) {
109
+ return el;
110
+ }
111
+ if (el == 0 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
112
+ return 2;
113
+ }
114
+ if (el == 1 && (arm_hcrx_el2_eff(env) & HCRX_MCE2)) {
115
+ return 2;
116
+ }
117
+ return 1;
118
+}
119
+
120
+/*
121
+ * Check whether an M or E instruction was executed with a CF value
122
+ * indicating the wrong option for this implementation.
123
+ * Assumes we are always Option A.
124
+ */
125
+static void check_mops_wrong_option(CPUARMState *env, uint32_t syndrome,
126
+ uintptr_t ra)
127
+{
128
+ if (env->CF != 0) {
129
+ syndrome |= 1 << 17; /* Set the wrong-option bit */
130
+ raise_exception_ra(env, EXCP_UDEF, syndrome,
131
+ mops_mismatch_exception_target_el(env), ra);
132
+ }
133
+}
134
+
135
+/*
136
+ * Return the maximum number of bytes we can transfer starting at addr
137
+ * without crossing a page boundary.
138
+ */
139
+static uint64_t page_limit(uint64_t addr)
140
+{
141
+ return TARGET_PAGE_ALIGN(addr + 1) - addr;
142
+}
143
+
144
+/*
145
+ * Perform part of a memory set on an area of guest memory starting at
146
+ * toaddr (a dirty address) and extending for setsize bytes.
31
+ *
147
+ *
32
+ * Copyright (c) 2020 Xilinx Inc.
148
+ * Returns the number of bytes actually set, which might be less than
149
+ * setsize; the caller should loop until the whole set has been done.
150
+ * The caller should ensure that the guest registers are correct
151
+ * for the possibility that the first byte of the set encounters
152
+ * an exception or watchpoint. We guarantee not to take any faults
153
+ * for bytes other than the first.
154
+ */
155
+static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
156
+ uint64_t setsize, uint32_t data, int memidx,
157
+ uint32_t *mtedesc, uintptr_t ra)
158
+{
159
+ void *mem;
160
+
161
+ setsize = MIN(setsize, page_limit(toaddr));
162
+ if (*mtedesc) {
163
+ uint64_t mtesize = mte_mops_probe(env, toaddr, setsize, *mtedesc);
164
+ if (mtesize == 0) {
165
+ /* Trap, or not. All CPU state is up to date */
166
+ mte_check_fail(env, *mtedesc, toaddr, ra);
167
+ /* Continue, with no further MTE checks required */
168
+ *mtedesc = 0;
169
+ } else {
170
+ /* Advance to the end, or to the tag mismatch */
171
+ setsize = MIN(setsize, mtesize);
172
+ }
173
+ }
174
+
175
+ toaddr = useronly_clean_ptr(toaddr);
176
+ /*
177
+ * Trapless lookup: returns NULL for invalid page, I/O,
178
+ * watchpoints, clean pages, etc.
179
+ */
180
+ mem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, memidx);
181
+
182
+#ifndef CONFIG_USER_ONLY
183
+ if (unlikely(!mem)) {
184
+ /*
185
+ * Slow-path: just do one byte write. This will handle the
186
+ * watchpoint, invalid page, etc handling correctly.
187
+ * For clean code pages, the next iteration will see
188
+ * the page dirty and will use the fast path.
189
+ */
190
+ cpu_stb_mmuidx_ra(env, toaddr, data, memidx, ra);
191
+ return 1;
192
+ }
193
+#endif
194
+ /* Easy case: just memset the host memory */
195
+ memset(mem, data, setsize);
196
+ return setsize;
197
+}
198
+
199
+typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
200
+ uint64_t setsize, uint32_t data,
201
+ int memidx, uint32_t *mtedesc, uintptr_t ra);
202
+
203
+/* Extract register numbers from a MOPS exception syndrome value */
204
+static int mops_destreg(uint32_t syndrome)
205
+{
206
+ return extract32(syndrome, 10, 5);
207
+}
208
+
209
+static int mops_srcreg(uint32_t syndrome)
210
+{
211
+ return extract32(syndrome, 5, 5);
212
+}
213
+
214
+static int mops_sizereg(uint32_t syndrome)
215
+{
216
+ return extract32(syndrome, 0, 5);
217
+}
218
+
219
+/*
220
+ * Return true if TCMA and TBI bits mean we need to do MTE checks.
221
+ * We only need to do this once per MOPS insn, not for every page.
222
+ */
223
+static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
224
+{
225
+ int bit55 = extract64(ptr, 55, 1);
226
+
227
+ /*
228
+ * Note that tbi_check() returns true for "access checked" but
229
+ * tcma_check() returns true for "access unchecked".
230
+ */
231
+ if (!tbi_check(desc, bit55)) {
232
+ return false;
233
+ }
234
+ return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
235
+}
236
+
237
+/*
238
+ * For the Memory Set operation, our implementation chooses
239
+ * always to use "option A", where we update Xd to the final
240
+ * address in the SETP insn, and set Xn to be -(bytes remaining).
241
+ * On SETM and SETE insns we only need update Xn.
33
+ *
242
+ *
34
+ * Written by Vikram Garhwal<fnu.vikram@xilinx.com>
243
+ * @env: CPU
35
+ *
244
+ * @syndrome: syndrome value for mismatch exceptions
36
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
245
+ * (also contains the register numbers we need to use)
37
+ * of this software and associated documentation files (the "Software"), to deal
246
+ * @mtedesc: MTE descriptor word
38
+ * in the Software without restriction, including without limitation the rights
247
+ * @stepfn: function which does a single part of the set operation
39
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
248
+ * @is_setg: true if this is the tag-setting SETG variant
40
+ * copies of the Software, and to permit persons to whom the Software is
249
+ */
41
+ * furnished to do so, subject to the following conditions:
250
+static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
42
+ *
251
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
43
+ * The above copyright notice and this permission notice shall be included in
252
+{
44
+ * all copies or substantial portions of the Software.
253
+ /* Prologue: we choose to do up to the next page boundary */
45
+ *
254
+ int rd = mops_destreg(syndrome);
46
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
255
+ int rs = mops_srcreg(syndrome);
47
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
256
+ int rn = mops_sizereg(syndrome);
48
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
257
+ uint8_t data = env->xregs[rs];
49
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
258
+ uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
50
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
259
+ uint64_t toaddr = env->xregs[rd];
51
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
260
+ uint64_t setsize = env->xregs[rn];
52
+ * THE SOFTWARE.
261
+ uint64_t stagesetsize, step;
53
+ */
262
+
54
+#ifndef HCD_DWC3_H
263
+ check_mops_enabled(env, ra);
55
+#define HCD_DWC3_H
264
+
56
+
265
+ if (setsize > INT64_MAX) {
57
+#include "hw/usb/hcd-xhci.h"
266
+ setsize = INT64_MAX;
58
+#include "hw/usb/hcd-xhci-sysbus.h"
267
+ }
59
+
268
+
60
+#define TYPE_USB_DWC3 "usb_dwc3"
269
+ if (!mte_checks_needed(toaddr, mtedesc)) {
61
+
270
+ mtedesc = 0;
62
+#define USB_DWC3(obj) \
271
+ }
63
+ OBJECT_CHECK(USBDWC3, (obj), TYPE_USB_DWC3)
272
+
64
+
273
+ stagesetsize = MIN(setsize, page_limit(toaddr));
65
+#define USB_DWC3_R_MAX ((0x530 / 4) + 1)
274
+ while (stagesetsize) {
66
+#define DWC3_SIZE 0x10000
275
+ env->xregs[rd] = toaddr;
67
+
276
+ env->xregs[rn] = setsize;
68
+typedef struct USBDWC3 {
277
+ step = stepfn(env, toaddr, stagesetsize, data, memidx, &mtedesc, ra);
69
+ SysBusDevice parent_obj;
278
+ toaddr += step;
70
+ MemoryRegion iomem;
279
+ setsize -= step;
71
+ XHCISysbusState sysbus_xhci;
280
+ stagesetsize -= step;
72
+
281
+ }
73
+ uint32_t regs[USB_DWC3_R_MAX];
282
+ /* Insn completed, so update registers to the Option A format */
74
+ RegisterInfo regs_info[USB_DWC3_R_MAX];
283
+ env->xregs[rd] = toaddr + setsize;
75
+
284
+ env->xregs[rn] = -setsize;
76
+ struct {
285
+
77
+ uint8_t mode;
286
+ /* Set NZCV = 0000 to indicate we are an Option A implementation */
78
+ uint32_t dwc_usb3_user;
287
+ env->NF = 0;
79
+ } cfg;
288
+ env->ZF = 1; /* our env->ZF encoding is inverted */
80
+
289
+ env->CF = 0;
81
+} USBDWC3;
290
+ env->VF = 0;
82
+
291
+ return;
83
+#endif
292
+}
84
diff --git a/hw/usb/hcd-dwc3.c b/hw/usb/hcd-dwc3.c
293
+
85
new file mode 100644
294
+void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
86
index XXXXXXX..XXXXXXX
295
+{
87
--- /dev/null
296
+ do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
88
+++ b/hw/usb/hcd-dwc3.c
297
+}
89
@@ -XXX,XX +XXX,XX @@
298
+
90
+/*
299
+static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
91
+ * QEMU model of the USB DWC3 host controller emulation.
300
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
92
+ *
301
+{
93
+ * This model defines global register space of DWC3 controller. Global
302
+ /* Main: we choose to do all the full-page chunks */
94
+ * registers control the AXI/AHB interfaces properties, external FIFO support
303
+ CPUState *cs = env_cpu(env);
95
+ * and event count support. All of which are unimplemented at present. We are
304
+ int rd = mops_destreg(syndrome);
96
+ * only supporting core reset and read of ID register.
305
+ int rs = mops_srcreg(syndrome);
97
+ *
306
+ int rn = mops_sizereg(syndrome);
98
+ * Copyright (c) 2020 Xilinx Inc. Vikram Garhwal<fnu.vikram@xilinx.com>
307
+ uint8_t data = env->xregs[rs];
99
+ *
308
+ uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
100
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
309
+ uint64_t setsize = -env->xregs[rn];
101
+ * of this software and associated documentation files (the "Software"), to deal
310
+ uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
102
+ * in the Software without restriction, including without limitation the rights
311
+ uint64_t step, stagesetsize;
103
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
312
+
104
+ * copies of the Software, and to permit persons to whom the Software is
313
+ check_mops_enabled(env, ra);
105
+ * furnished to do so, subject to the following conditions:
314
+
106
+ *
315
+ /*
107
+ * The above copyright notice and this permission notice shall be included in
316
+ * We're allowed to NOP out "no data to copy" before the consistency
108
+ * all copies or substantial portions of the Software.
317
+ * checks; we choose to do so.
109
+ *
318
+ */
110
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
319
+ if (env->xregs[rn] == 0) {
111
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
320
+ return;
112
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
321
+ }
113
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
322
+
114
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
323
+ check_mops_wrong_option(env, syndrome, ra);
115
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
324
+
116
+ * THE SOFTWARE.
325
+ /*
117
+ */
326
+ * Our implementation will work fine even if we have an unaligned
118
+
327
+ * destination address, and because we update Xn every time around
119
+#include "qemu/osdep.h"
328
+ * the loop below and the return value from stepfn() may be less
120
+#include "hw/sysbus.h"
329
+ * than requested, we might find toaddr is unaligned. So we don't
121
+#include "hw/register.h"
330
+ * have an IMPDEF check for alignment here.
122
+#include "qemu/bitops.h"
331
+ */
123
+#include "qemu/log.h"
332
+
124
+#include "qom/object.h"
333
+ if (!mte_checks_needed(toaddr, mtedesc)) {
125
+#include "migration/vmstate.h"
334
+ mtedesc = 0;
126
+#include "hw/qdev-properties.h"
335
+ }
127
+#include "hw/usb/hcd-dwc3.h"
336
+
128
+#include "qapi/error.h"
337
+ /* Do the actual memset: we leave the last partial page to SETE */
129
+
338
+ stagesetsize = setsize & TARGET_PAGE_MASK;
130
+#ifndef USB_DWC3_ERR_DEBUG
339
+ while (stagesetsize > 0) {
131
+#define USB_DWC3_ERR_DEBUG 0
340
+ step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
132
+#endif
341
+ toaddr += step;
133
+
342
+ setsize -= step;
134
+#define HOST_MODE 1
343
+ stagesetsize -= step;
135
+#define FIFO_LEN 0x1000
344
+ env->xregs[rn] = -setsize;
136
+
345
+ if (stagesetsize > 0 && unlikely(cpu_loop_exit_requested(cs))) {
137
+REG32(GSBUSCFG0, 0x00)
346
+ cpu_loop_exit_restore(cs, ra);
138
+ FIELD(GSBUSCFG0, DATRDREQINFO, 28, 4)
139
+ FIELD(GSBUSCFG0, DESRDREQINFO, 24, 4)
140
+ FIELD(GSBUSCFG0, DATWRREQINFO, 20, 4)
141
+ FIELD(GSBUSCFG0, DESWRREQINFO, 16, 4)
142
+ FIELD(GSBUSCFG0, RESERVED_15_12, 12, 4)
143
+ FIELD(GSBUSCFG0, DATBIGEND, 11, 1)
144
+ FIELD(GSBUSCFG0, DESBIGEND, 10, 1)
145
+ FIELD(GSBUSCFG0, RESERVED_9_8, 8, 2)
146
+ FIELD(GSBUSCFG0, INCR256BRSTENA, 7, 1)
147
+ FIELD(GSBUSCFG0, INCR128BRSTENA, 6, 1)
148
+ FIELD(GSBUSCFG0, INCR64BRSTENA, 5, 1)
149
+ FIELD(GSBUSCFG0, INCR32BRSTENA, 4, 1)
150
+ FIELD(GSBUSCFG0, INCR16BRSTENA, 3, 1)
151
+ FIELD(GSBUSCFG0, INCR8BRSTENA, 2, 1)
152
+ FIELD(GSBUSCFG0, INCR4BRSTENA, 1, 1)
153
+ FIELD(GSBUSCFG0, INCRBRSTENA, 0, 1)
154
+REG32(GSBUSCFG1, 0x04)
155
+ FIELD(GSBUSCFG1, RESERVED_31_13, 13, 19)
156
+ FIELD(GSBUSCFG1, EN1KPAGE, 12, 1)
157
+ FIELD(GSBUSCFG1, PIPETRANSLIMIT, 8, 4)
158
+ FIELD(GSBUSCFG1, RESERVED_7_0, 0, 8)
159
+REG32(GTXTHRCFG, 0x08)
160
+ FIELD(GTXTHRCFG, RESERVED_31, 31, 1)
161
+ FIELD(GTXTHRCFG, RESERVED_30, 30, 1)
162
+ FIELD(GTXTHRCFG, USBTXPKTCNTSEL, 29, 1)
163
+ FIELD(GTXTHRCFG, RESERVED_28, 28, 1)
164
+ FIELD(GTXTHRCFG, USBTXPKTCNT, 24, 4)
165
+ FIELD(GTXTHRCFG, USBMAXTXBURSTSIZE, 16, 8)
166
+ FIELD(GTXTHRCFG, RESERVED_15, 15, 1)
167
+ FIELD(GTXTHRCFG, RESERVED_14, 14, 1)
168
+ FIELD(GTXTHRCFG, RESERVED_13_11, 11, 3)
169
+ FIELD(GTXTHRCFG, RESERVED_10_0, 0, 11)
170
+REG32(GRXTHRCFG, 0x0c)
171
+ FIELD(GRXTHRCFG, RESERVED_31_30, 30, 2)
172
+ FIELD(GRXTHRCFG, USBRXPKTCNTSEL, 29, 1)
173
+ FIELD(GRXTHRCFG, RESERVED_28, 28, 1)
174
+ FIELD(GRXTHRCFG, USBRXPKTCNT, 24, 4)
175
+ FIELD(GRXTHRCFG, USBMAXRXBURSTSIZE, 19, 5)
176
+ FIELD(GRXTHRCFG, RESERVED_18_16, 16, 3)
177
+ FIELD(GRXTHRCFG, RESERVED_15, 15, 1)
178
+ FIELD(GRXTHRCFG, RESERVED_14_13, 13, 2)
179
+ FIELD(GRXTHRCFG, RESVISOCOUTSPC, 0, 13)
180
+REG32(GCTL, 0x10)
181
+ FIELD(GCTL, PWRDNSCALE, 19, 13)
182
+ FIELD(GCTL, MASTERFILTBYPASS, 18, 1)
183
+ FIELD(GCTL, BYPSSETADDR, 17, 1)
184
+ FIELD(GCTL, U2RSTECN, 16, 1)
185
+ FIELD(GCTL, FRMSCLDWN, 14, 2)
186
+ FIELD(GCTL, PRTCAPDIR, 12, 2)
187
+ FIELD(GCTL, CORESOFTRESET, 11, 1)
188
+ FIELD(GCTL, U1U2TIMERSCALE, 9, 1)
189
+ FIELD(GCTL, DEBUGATTACH, 8, 1)
190
+ FIELD(GCTL, RAMCLKSEL, 6, 2)
191
+ FIELD(GCTL, SCALEDOWN, 4, 2)
192
+ FIELD(GCTL, DISSCRAMBLE, 3, 1)
193
+ FIELD(GCTL, U2EXIT_LFPS, 2, 1)
194
+ FIELD(GCTL, GBLHIBERNATIONEN, 1, 1)
195
+ FIELD(GCTL, DSBLCLKGTNG, 0, 1)
196
+REG32(GPMSTS, 0x14)
197
+REG32(GSTS, 0x18)
198
+ FIELD(GSTS, CBELT, 20, 12)
199
+ FIELD(GSTS, RESERVED_19_12, 12, 8)
200
+ FIELD(GSTS, SSIC_IP, 11, 1)
201
+ FIELD(GSTS, OTG_IP, 10, 1)
202
+ FIELD(GSTS, BC_IP, 9, 1)
203
+ FIELD(GSTS, ADP_IP, 8, 1)
204
+ FIELD(GSTS, HOST_IP, 7, 1)
205
+ FIELD(GSTS, DEVICE_IP, 6, 1)
206
+ FIELD(GSTS, CSRTIMEOUT, 5, 1)
207
+ FIELD(GSTS, BUSERRADDRVLD, 4, 1)
208
+ FIELD(GSTS, RESERVED_3_2, 2, 2)
209
+ FIELD(GSTS, CURMOD, 0, 2)
210
+REG32(GUCTL1, 0x1c)
211
+ FIELD(GUCTL1, RESUME_OPMODE_HS_HOST, 10, 1)
212
+REG32(GSNPSID, 0x20)
213
+REG32(GGPIO, 0x24)
214
+ FIELD(GGPIO, GPO, 16, 16)
215
+ FIELD(GGPIO, GPI, 0, 16)
216
+REG32(GUID, 0x28)
217
+REG32(GUCTL, 0x2c)
218
+ FIELD(GUCTL, REFCLKPER, 22, 10)
219
+ FIELD(GUCTL, NOEXTRDL, 21, 1)
220
+ FIELD(GUCTL, RESERVED_20_18, 18, 3)
221
+ FIELD(GUCTL, SPRSCTRLTRANSEN, 17, 1)
222
+ FIELD(GUCTL, RESBWHSEPS, 16, 1)
223
+ FIELD(GUCTL, RESERVED_15, 15, 1)
224
+ FIELD(GUCTL, USBHSTINAUTORETRYEN, 14, 1)
225
+ FIELD(GUCTL, ENOVERLAPCHK, 13, 1)
226
+ FIELD(GUCTL, EXTCAPSUPPTEN, 12, 1)
227
+ FIELD(GUCTL, INSRTEXTRFSBODI, 11, 1)
228
+ FIELD(GUCTL, DTCT, 9, 2)
229
+ FIELD(GUCTL, DTFT, 0, 9)
230
+REG32(GBUSERRADDRLO, 0x30)
231
+REG32(GBUSERRADDRHI, 0x34)
232
+REG32(GHWPARAMS0, 0x40)
233
+ FIELD(GHWPARAMS0, GHWPARAMS0_31_24, 24, 8)
234
+ FIELD(GHWPARAMS0, GHWPARAMS0_23_16, 16, 8)
235
+ FIELD(GHWPARAMS0, GHWPARAMS0_15_8, 8, 8)
236
+ FIELD(GHWPARAMS0, GHWPARAMS0_7_6, 6, 2)
237
+ FIELD(GHWPARAMS0, GHWPARAMS0_5_3, 3, 3)
238
+ FIELD(GHWPARAMS0, GHWPARAMS0_2_0, 0, 3)
239
+REG32(GHWPARAMS1, 0x44)
240
+ FIELD(GHWPARAMS1, GHWPARAMS1_31, 31, 1)
241
+ FIELD(GHWPARAMS1, GHWPARAMS1_30, 30, 1)
242
+ FIELD(GHWPARAMS1, GHWPARAMS1_29, 29, 1)
243
+ FIELD(GHWPARAMS1, GHWPARAMS1_28, 28, 1)
244
+ FIELD(GHWPARAMS1, GHWPARAMS1_27, 27, 1)
245
+ FIELD(GHWPARAMS1, GHWPARAMS1_26, 26, 1)
246
+ FIELD(GHWPARAMS1, GHWPARAMS1_25_24, 24, 2)
247
+ FIELD(GHWPARAMS1, GHWPARAMS1_23, 23, 1)
248
+ FIELD(GHWPARAMS1, GHWPARAMS1_22_21, 21, 2)
249
+ FIELD(GHWPARAMS1, GHWPARAMS1_20_15, 15, 6)
250
+ FIELD(GHWPARAMS1, GHWPARAMS1_14_12, 12, 3)
251
+ FIELD(GHWPARAMS1, GHWPARAMS1_11_9, 9, 3)
252
+ FIELD(GHWPARAMS1, GHWPARAMS1_8_6, 6, 3)
253
+ FIELD(GHWPARAMS1, GHWPARAMS1_5_3, 3, 3)
254
+ FIELD(GHWPARAMS1, GHWPARAMS1_2_0, 0, 3)
255
+REG32(GHWPARAMS2, 0x48)
256
+REG32(GHWPARAMS3, 0x4c)
257
+ FIELD(GHWPARAMS3, GHWPARAMS3_31, 31, 1)
258
+ FIELD(GHWPARAMS3, GHWPARAMS3_30_23, 23, 8)
259
+ FIELD(GHWPARAMS3, GHWPARAMS3_22_18, 18, 5)
260
+ FIELD(GHWPARAMS3, GHWPARAMS3_17_12, 12, 6)
261
+ FIELD(GHWPARAMS3, GHWPARAMS3_11, 11, 1)
262
+ FIELD(GHWPARAMS3, GHWPARAMS3_10, 10, 1)
263
+ FIELD(GHWPARAMS3, GHWPARAMS3_9_8, 8, 2)
264
+ FIELD(GHWPARAMS3, GHWPARAMS3_7_6, 6, 2)
265
+ FIELD(GHWPARAMS3, GHWPARAMS3_5_4, 4, 2)
266
+ FIELD(GHWPARAMS3, GHWPARAMS3_3_2, 2, 2)
267
+ FIELD(GHWPARAMS3, GHWPARAMS3_1_0, 0, 2)
268
+REG32(GHWPARAMS4, 0x50)
269
+ FIELD(GHWPARAMS4, GHWPARAMS4_31_28, 28, 4)
270
+ FIELD(GHWPARAMS4, GHWPARAMS4_27_24, 24, 4)
271
+ FIELD(GHWPARAMS4, GHWPARAMS4_23, 23, 1)
272
+ FIELD(GHWPARAMS4, GHWPARAMS4_22, 22, 1)
273
+ FIELD(GHWPARAMS4, GHWPARAMS4_21, 21, 1)
274
+ FIELD(GHWPARAMS4, GHWPARAMS4_20_17, 17, 4)
275
+ FIELD(GHWPARAMS4, GHWPARAMS4_16_13, 13, 4)
276
+ FIELD(GHWPARAMS4, GHWPARAMS4_12, 12, 1)
277
+ FIELD(GHWPARAMS4, GHWPARAMS4_11, 11, 1)
278
+ FIELD(GHWPARAMS4, GHWPARAMS4_10_9, 9, 2)
279
+ FIELD(GHWPARAMS4, GHWPARAMS4_8_7, 7, 2)
280
+ FIELD(GHWPARAMS4, GHWPARAMS4_6, 6, 1)
281
+ FIELD(GHWPARAMS4, GHWPARAMS4_5_0, 0, 6)
282
+REG32(GHWPARAMS5, 0x54)
283
+ FIELD(GHWPARAMS5, GHWPARAMS5_31_28, 28, 4)
284
+ FIELD(GHWPARAMS5, GHWPARAMS5_27_22, 22, 6)
285
+ FIELD(GHWPARAMS5, GHWPARAMS5_21_16, 16, 6)
286
+ FIELD(GHWPARAMS5, GHWPARAMS5_15_10, 10, 6)
287
+ FIELD(GHWPARAMS5, GHWPARAMS5_9_4, 4, 6)
288
+ FIELD(GHWPARAMS5, GHWPARAMS5_3_0, 0, 4)
289
+REG32(GHWPARAMS6, 0x58)
290
+ FIELD(GHWPARAMS6, GHWPARAMS6_31_16, 16, 16)
291
+ FIELD(GHWPARAMS6, BUSFLTRSSUPPORT, 15, 1)
292
+ FIELD(GHWPARAMS6, BCSUPPORT, 14, 1)
293
+ FIELD(GHWPARAMS6, OTG_SS_SUPPORT, 13, 1)
294
+ FIELD(GHWPARAMS6, ADPSUPPORT, 12, 1)
295
+ FIELD(GHWPARAMS6, HNPSUPPORT, 11, 1)
296
+ FIELD(GHWPARAMS6, SRPSUPPORT, 10, 1)
297
+ FIELD(GHWPARAMS6, GHWPARAMS6_9_8, 8, 2)
298
+ FIELD(GHWPARAMS6, GHWPARAMS6_7, 7, 1)
299
+ FIELD(GHWPARAMS6, GHWPARAMS6_6, 6, 1)
300
+ FIELD(GHWPARAMS6, GHWPARAMS6_5_0, 0, 6)
301
+REG32(GHWPARAMS7, 0x5c)
302
+ FIELD(GHWPARAMS7, GHWPARAMS7_31_16, 16, 16)
303
+ FIELD(GHWPARAMS7, GHWPARAMS7_15_0, 0, 16)
304
+REG32(GDBGFIFOSPACE, 0x60)
305
+ FIELD(GDBGFIFOSPACE, SPACE_AVAILABLE, 16, 16)
306
+ FIELD(GDBGFIFOSPACE, RESERVED_15_9, 9, 7)
307
+ FIELD(GDBGFIFOSPACE, FIFO_QUEUE_SELECT, 0, 9)
308
+REG32(GUCTL2, 0x9c)
309
+ FIELD(GUCTL2, RESERVED_31_26, 26, 6)
310
+ FIELD(GUCTL2, EN_HP_PM_TIMER, 19, 7)
311
+ FIELD(GUCTL2, NOLOWPWRDUR, 15, 4)
312
+ FIELD(GUCTL2, RST_ACTBITLATER, 14, 1)
313
+ FIELD(GUCTL2, RESERVED_13, 13, 1)
314
+ FIELD(GUCTL2, DISABLECFC, 11, 1)
315
+REG32(GUSB2PHYCFG, 0x100)
316
+ FIELD(GUSB2PHYCFG, U2_FREECLK_EXISTS, 30, 1)
317
+ FIELD(GUSB2PHYCFG, ULPI_LPM_WITH_OPMODE_CHK, 29, 1)
318
+ FIELD(GUSB2PHYCFG, RESERVED_25, 25, 1)
319
+ FIELD(GUSB2PHYCFG, LSTRD, 22, 3)
320
+ FIELD(GUSB2PHYCFG, LSIPD, 19, 3)
321
+ FIELD(GUSB2PHYCFG, ULPIEXTVBUSINDIACTOR, 18, 1)
322
+ FIELD(GUSB2PHYCFG, ULPIEXTVBUSDRV, 17, 1)
323
+ FIELD(GUSB2PHYCFG, RESERVED_16, 16, 1)
324
+ FIELD(GUSB2PHYCFG, ULPIAUTORES, 15, 1)
325
+ FIELD(GUSB2PHYCFG, RESERVED_14, 14, 1)
326
+ FIELD(GUSB2PHYCFG, USBTRDTIM, 10, 4)
327
+ FIELD(GUSB2PHYCFG, XCVRDLY, 9, 1)
328
+ FIELD(GUSB2PHYCFG, ENBLSLPM, 8, 1)
329
+ FIELD(GUSB2PHYCFG, PHYSEL, 7, 1)
330
+ FIELD(GUSB2PHYCFG, SUSPENDUSB20, 6, 1)
331
+ FIELD(GUSB2PHYCFG, FSINTF, 5, 1)
332
+ FIELD(GUSB2PHYCFG, ULPI_UTMI_SEL, 4, 1)
333
+ FIELD(GUSB2PHYCFG, PHYIF, 3, 1)
334
+ FIELD(GUSB2PHYCFG, TOUTCAL, 0, 3)
335
+REG32(GUSB2I2CCTL, 0x140)
336
+REG32(GUSB2PHYACC_ULPI, 0x180)
337
+ FIELD(GUSB2PHYACC_ULPI, RESERVED_31_27, 27, 5)
338
+ FIELD(GUSB2PHYACC_ULPI, DISUIPIDRVR, 26, 1)
339
+ FIELD(GUSB2PHYACC_ULPI, NEWREGREQ, 25, 1)
340
+ FIELD(GUSB2PHYACC_ULPI, VSTSDONE, 24, 1)
341
+ FIELD(GUSB2PHYACC_ULPI, VSTSBSY, 23, 1)
342
+ FIELD(GUSB2PHYACC_ULPI, REGWR, 22, 1)
343
+ FIELD(GUSB2PHYACC_ULPI, REGADDR, 16, 6)
344
+ FIELD(GUSB2PHYACC_ULPI, EXTREGADDR, 8, 8)
345
+ FIELD(GUSB2PHYACC_ULPI, REGDATA, 0, 8)
346
+REG32(GTXFIFOSIZ0, 0x200)
347
+ FIELD(GTXFIFOSIZ0, TXFSTADDR_N, 16, 16)
348
+ FIELD(GTXFIFOSIZ0, TXFDEP_N, 0, 16)
349
+REG32(GTXFIFOSIZ1, 0x204)
350
+ FIELD(GTXFIFOSIZ1, TXFSTADDR_N, 16, 16)
351
+ FIELD(GTXFIFOSIZ1, TXFDEP_N, 0, 16)
352
+REG32(GTXFIFOSIZ2, 0x208)
353
+ FIELD(GTXFIFOSIZ2, TXFSTADDR_N, 16, 16)
354
+ FIELD(GTXFIFOSIZ2, TXFDEP_N, 0, 16)
355
+REG32(GTXFIFOSIZ3, 0x20c)
356
+ FIELD(GTXFIFOSIZ3, TXFSTADDR_N, 16, 16)
357
+ FIELD(GTXFIFOSIZ3, TXFDEP_N, 0, 16)
358
+REG32(GTXFIFOSIZ4, 0x210)
359
+ FIELD(GTXFIFOSIZ4, TXFSTADDR_N, 16, 16)
360
+ FIELD(GTXFIFOSIZ4, TXFDEP_N, 0, 16)
361
+REG32(GTXFIFOSIZ5, 0x214)
362
+ FIELD(GTXFIFOSIZ5, TXFSTADDR_N, 16, 16)
363
+ FIELD(GTXFIFOSIZ5, TXFDEP_N, 0, 16)
364
+REG32(GRXFIFOSIZ0, 0x280)
365
+ FIELD(GRXFIFOSIZ0, RXFSTADDR_N, 16, 16)
366
+ FIELD(GRXFIFOSIZ0, RXFDEP_N, 0, 16)
367
+REG32(GRXFIFOSIZ1, 0x284)
368
+ FIELD(GRXFIFOSIZ1, RXFSTADDR_N, 16, 16)
369
+ FIELD(GRXFIFOSIZ1, RXFDEP_N, 0, 16)
370
+REG32(GRXFIFOSIZ2, 0x288)
371
+ FIELD(GRXFIFOSIZ2, RXFSTADDR_N, 16, 16)
372
+ FIELD(GRXFIFOSIZ2, RXFDEP_N, 0, 16)
373
+REG32(GEVNTADRLO_0, 0x300)
374
+REG32(GEVNTADRHI_0, 0x304)
375
+REG32(GEVNTSIZ_0, 0x308)
376
+ FIELD(GEVNTSIZ_0, EVNTINTRPTMASK, 31, 1)
377
+ FIELD(GEVNTSIZ_0, RESERVED_30_16, 16, 15)
378
+ FIELD(GEVNTSIZ_0, EVENTSIZ, 0, 16)
379
+REG32(GEVNTCOUNT_0, 0x30c)
380
+ FIELD(GEVNTCOUNT_0, EVNT_HANDLER_BUSY, 31, 1)
381
+ FIELD(GEVNTCOUNT_0, RESERVED_30_16, 16, 15)
382
+ FIELD(GEVNTCOUNT_0, EVNTCOUNT, 0, 16)
383
+REG32(GEVNTADRLO_1, 0x310)
384
+REG32(GEVNTADRHI_1, 0x314)
385
+REG32(GEVNTSIZ_1, 0x318)
386
+ FIELD(GEVNTSIZ_1, EVNTINTRPTMASK, 31, 1)
387
+ FIELD(GEVNTSIZ_1, RESERVED_30_16, 16, 15)
388
+ FIELD(GEVNTSIZ_1, EVENTSIZ, 0, 16)
389
+REG32(GEVNTCOUNT_1, 0x31c)
390
+ FIELD(GEVNTCOUNT_1, EVNT_HANDLER_BUSY, 31, 1)
391
+ FIELD(GEVNTCOUNT_1, RESERVED_30_16, 16, 15)
392
+ FIELD(GEVNTCOUNT_1, EVNTCOUNT, 0, 16)
393
+REG32(GEVNTADRLO_2, 0x320)
394
+REG32(GEVNTADRHI_2, 0x324)
395
+REG32(GEVNTSIZ_2, 0x328)
396
+ FIELD(GEVNTSIZ_2, EVNTINTRPTMASK, 31, 1)
397
+ FIELD(GEVNTSIZ_2, RESERVED_30_16, 16, 15)
398
+ FIELD(GEVNTSIZ_2, EVENTSIZ, 0, 16)
399
+REG32(GEVNTCOUNT_2, 0x32c)
400
+ FIELD(GEVNTCOUNT_2, EVNT_HANDLER_BUSY, 31, 1)
401
+ FIELD(GEVNTCOUNT_2, RESERVED_30_16, 16, 15)
402
+ FIELD(GEVNTCOUNT_2, EVNTCOUNT, 0, 16)
403
+REG32(GEVNTADRLO_3, 0x330)
404
+REG32(GEVNTADRHI_3, 0x334)
405
+REG32(GEVNTSIZ_3, 0x338)
406
+ FIELD(GEVNTSIZ_3, EVNTINTRPTMASK, 31, 1)
407
+ FIELD(GEVNTSIZ_3, RESERVED_30_16, 16, 15)
408
+ FIELD(GEVNTSIZ_3, EVENTSIZ, 0, 16)
409
+REG32(GEVNTCOUNT_3, 0x33c)
410
+ FIELD(GEVNTCOUNT_3, EVNT_HANDLER_BUSY, 31, 1)
411
+ FIELD(GEVNTCOUNT_3, RESERVED_30_16, 16, 15)
412
+ FIELD(GEVNTCOUNT_3, EVNTCOUNT, 0, 16)
413
+REG32(GHWPARAMS8, 0x500)
414
+REG32(GTXFIFOPRIDEV, 0x510)
415
+ FIELD(GTXFIFOPRIDEV, RESERVED_31_N, 6, 26)
416
+ FIELD(GTXFIFOPRIDEV, GTXFIFOPRIDEV, 0, 6)
417
+REG32(GTXFIFOPRIHST, 0x518)
418
+ FIELD(GTXFIFOPRIHST, RESERVED_31_16, 3, 29)
419
+ FIELD(GTXFIFOPRIHST, GTXFIFOPRIHST, 0, 3)
420
+REG32(GRXFIFOPRIHST, 0x51c)
421
+ FIELD(GRXFIFOPRIHST, RESERVED_31_16, 3, 29)
422
+ FIELD(GRXFIFOPRIHST, GRXFIFOPRIHST, 0, 3)
423
+REG32(GDMAHLRATIO, 0x524)
424
+ FIELD(GDMAHLRATIO, RESERVED_31_13, 13, 19)
425
+ FIELD(GDMAHLRATIO, HSTRXFIFO, 8, 5)
426
+ FIELD(GDMAHLRATIO, RESERVED_7_5, 5, 3)
427
+ FIELD(GDMAHLRATIO, HSTTXFIFO, 0, 5)
428
+REG32(GFLADJ, 0x530)
429
+ FIELD(GFLADJ, GFLADJ_REFCLK_240MHZDECR_PLS1, 31, 1)
430
+ FIELD(GFLADJ, GFLADJ_REFCLK_240MHZ_DECR, 24, 7)
431
+ FIELD(GFLADJ, GFLADJ_REFCLK_LPM_SEL, 23, 1)
432
+ FIELD(GFLADJ, RESERVED_22, 22, 1)
433
+ FIELD(GFLADJ, GFLADJ_REFCLK_FLADJ, 8, 14)
434
+ FIELD(GFLADJ, GFLADJ_30MHZ_SDBND_SEL, 7, 1)
435
+ FIELD(GFLADJ, GFLADJ_30MHZ, 0, 6)
436
+
437
+#define DWC3_GLOBAL_OFFSET 0xC100
438
+static void reset_csr(USBDWC3 * s)
439
+{
440
+ int i = 0;
441
+ /*
442
+ * We reset all CSR regs except GCTL, GUCTL, GSTS, GSNPSID, GGPIO, GUID,
443
+ * GUSB2PHYCFGn registers and GUSB3PIPECTLn registers. We will skip PHY
444
+ * register as we don't implement them.
445
+ */
446
+ for (i = 0; i < USB_DWC3_R_MAX; i++) {
447
+ switch (i) {
448
+ case R_GCTL:
449
+ break;
450
+ case R_GSTS:
451
+ break;
452
+ case R_GSNPSID:
453
+ break;
454
+ case R_GGPIO:
455
+ break;
456
+ case R_GUID:
457
+ break;
458
+ case R_GUCTL:
459
+ break;
460
+ case R_GHWPARAMS0...R_GHWPARAMS7:
461
+ break;
462
+ case R_GHWPARAMS8:
463
+ break;
464
+ default:
465
+ register_reset(&s->regs_info[i]);
466
+ break;
467
+ }
347
+ }
468
+ }
348
+ }
469
+
349
+}
470
+ xhci_sysbus_reset(DEVICE(&s->sysbus_xhci));
350
+
471
+}
351
+void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
472
+
352
+{
473
+static void usb_dwc3_gctl_postw(RegisterInfo *reg, uint64_t val64)
353
+ do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
474
+{
354
+}
475
+ USBDWC3 *s = USB_DWC3(reg->opaque);
355
+
476
+
356
+static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
477
+ if (ARRAY_FIELD_EX32(s->regs, GCTL, CORESOFTRESET)) {
357
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
478
+ reset_csr(s);
358
+{
479
+ }
359
+ /* Epilogue: do the last partial page */
480
+}
360
+ int rd = mops_destreg(syndrome);
481
+
361
+ int rs = mops_srcreg(syndrome);
482
+static void usb_dwc3_guid_postw(RegisterInfo *reg, uint64_t val64)
362
+ int rn = mops_sizereg(syndrome);
483
+{
363
+ uint8_t data = env->xregs[rs];
484
+ USBDWC3 *s = USB_DWC3(reg->opaque);
364
+ uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
485
+
365
+ uint64_t setsize = -env->xregs[rn];
486
+ s->regs[R_GUID] = s->cfg.dwc_usb3_user;
366
+ uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
487
+}
367
+ uint64_t step;
488
+
368
+
489
+static const RegisterAccessInfo usb_dwc3_regs_info[] = {
369
+ check_mops_enabled(env, ra);
490
+ { .name = "GSBUSCFG0", .addr = A_GSBUSCFG0,
370
+
491
+ .ro = 0xf300,
371
+ /*
492
+ .unimp = 0xffffffff,
372
+ * We're allowed to NOP out "no data to copy" before the consistency
493
+ },{ .name = "GSBUSCFG1", .addr = A_GSBUSCFG1,
373
+ * checks; we choose to do so.
494
+ .reset = 0x300,
374
+ */
495
+ .ro = 0xffffe0ff,
375
+ if (setsize == 0) {
496
+ .unimp = 0xffffffff,
497
+ },{ .name = "GTXTHRCFG", .addr = A_GTXTHRCFG,
498
+ .ro = 0xd000ffff,
499
+ .unimp = 0xffffffff,
500
+ },{ .name = "GRXTHRCFG", .addr = A_GRXTHRCFG,
501
+ .ro = 0xd007e000,
502
+ .unimp = 0xffffffff,
503
+ },{ .name = "GCTL", .addr = A_GCTL,
504
+ .reset = 0x30c13004, .post_write = usb_dwc3_gctl_postw,
505
+ },{ .name = "GPMSTS", .addr = A_GPMSTS,
506
+ .ro = 0xfffffff,
507
+ .unimp = 0xffffffff,
508
+ },{ .name = "GSTS", .addr = A_GSTS,
509
+ .reset = 0x7e800000,
510
+ .ro = 0xffffffcf,
511
+ .w1c = 0x30,
512
+ .unimp = 0xffffffff,
513
+ },{ .name = "GUCTL1", .addr = A_GUCTL1,
514
+ .reset = 0x198a,
515
+ .ro = 0x7800,
516
+ .unimp = 0xffffffff,
517
+ },{ .name = "GSNPSID", .addr = A_GSNPSID,
518
+ .reset = 0x5533330a,
519
+ .ro = 0xffffffff,
520
+ },{ .name = "GGPIO", .addr = A_GGPIO,
521
+ .ro = 0xffff,
522
+ .unimp = 0xffffffff,
523
+ },{ .name = "GUID", .addr = A_GUID,
524
+ .reset = 0x12345678, .post_write = usb_dwc3_guid_postw,
525
+ },{ .name = "GUCTL", .addr = A_GUCTL,
526
+ .reset = 0x0c808010,
527
+ .ro = 0x1c8000,
528
+ .unimp = 0xffffffff,
529
+ },{ .name = "GBUSERRADDRLO", .addr = A_GBUSERRADDRLO,
530
+ .ro = 0xffffffff,
531
+ },{ .name = "GBUSERRADDRHI", .addr = A_GBUSERRADDRHI,
532
+ .ro = 0xffffffff,
533
+ },{ .name = "GHWPARAMS0", .addr = A_GHWPARAMS0,
534
+ .ro = 0xffffffff,
535
+ },{ .name = "GHWPARAMS1", .addr = A_GHWPARAMS1,
536
+ .ro = 0xffffffff,
537
+ },{ .name = "GHWPARAMS2", .addr = A_GHWPARAMS2,
538
+ .ro = 0xffffffff,
539
+ },{ .name = "GHWPARAMS3", .addr = A_GHWPARAMS3,
540
+ .ro = 0xffffffff,
541
+ },{ .name = "GHWPARAMS4", .addr = A_GHWPARAMS4,
542
+ .ro = 0xffffffff,
543
+ },{ .name = "GHWPARAMS5", .addr = A_GHWPARAMS5,
544
+ .ro = 0xffffffff,
545
+ },{ .name = "GHWPARAMS6", .addr = A_GHWPARAMS6,
546
+ .ro = 0xffffffff,
547
+ },{ .name = "GHWPARAMS7", .addr = A_GHWPARAMS7,
548
+ .ro = 0xffffffff,
549
+ },{ .name = "GDBGFIFOSPACE", .addr = A_GDBGFIFOSPACE,
550
+ .reset = 0xa0000,
551
+ .ro = 0xfffffe00,
552
+ .unimp = 0xffffffff,
553
+ },{ .name = "GUCTL2", .addr = A_GUCTL2,
554
+ .reset = 0x40d,
555
+ .ro = 0x2000,
556
+ .unimp = 0xffffffff,
557
+ },{ .name = "GUSB2PHYCFG", .addr = A_GUSB2PHYCFG,
558
+ .reset = 0x40102410,
559
+ .ro = 0x1e014030,
560
+ .unimp = 0xffffffff,
561
+ },{ .name = "GUSB2I2CCTL", .addr = A_GUSB2I2CCTL,
562
+ .ro = 0xffffffff,
563
+ .unimp = 0xffffffff,
564
+ },{ .name = "GUSB2PHYACC_ULPI", .addr = A_GUSB2PHYACC_ULPI,
565
+ .ro = 0xfd000000,
566
+ .unimp = 0xffffffff,
567
+ },{ .name = "GTXFIFOSIZ0", .addr = A_GTXFIFOSIZ0,
568
+ .reset = 0x2c7000a,
569
+ .unimp = 0xffffffff,
570
+ },{ .name = "GTXFIFOSIZ1", .addr = A_GTXFIFOSIZ1,
571
+ .reset = 0x2d10103,
572
+ .unimp = 0xffffffff,
573
+ },{ .name = "GTXFIFOSIZ2", .addr = A_GTXFIFOSIZ2,
574
+ .reset = 0x3d40103,
575
+ .unimp = 0xffffffff,
576
+ },{ .name = "GTXFIFOSIZ3", .addr = A_GTXFIFOSIZ3,
577
+ .reset = 0x4d70083,
578
+ .unimp = 0xffffffff,
579
+ },{ .name = "GTXFIFOSIZ4", .addr = A_GTXFIFOSIZ4,
580
+ .reset = 0x55a0083,
581
+ .unimp = 0xffffffff,
582
+ },{ .name = "GTXFIFOSIZ5", .addr = A_GTXFIFOSIZ5,
583
+ .reset = 0x5dd0083,
584
+ .unimp = 0xffffffff,
585
+ },{ .name = "GRXFIFOSIZ0", .addr = A_GRXFIFOSIZ0,
586
+ .reset = 0x1c20105,
587
+ .unimp = 0xffffffff,
588
+ },{ .name = "GRXFIFOSIZ1", .addr = A_GRXFIFOSIZ1,
589
+ .reset = 0x2c70000,
590
+ .unimp = 0xffffffff,
591
+ },{ .name = "GRXFIFOSIZ2", .addr = A_GRXFIFOSIZ2,
592
+ .reset = 0x2c70000,
593
+ .unimp = 0xffffffff,
594
+ },{ .name = "GEVNTADRLO_0", .addr = A_GEVNTADRLO_0,
595
+ .unimp = 0xffffffff,
596
+ },{ .name = "GEVNTADRHI_0", .addr = A_GEVNTADRHI_0,
597
+ .unimp = 0xffffffff,
598
+ },{ .name = "GEVNTSIZ_0", .addr = A_GEVNTSIZ_0,
599
+ .ro = 0x7fff0000,
600
+ .unimp = 0xffffffff,
601
+ },{ .name = "GEVNTCOUNT_0", .addr = A_GEVNTCOUNT_0,
602
+ .ro = 0x7fff0000,
603
+ .unimp = 0xffffffff,
604
+ },{ .name = "GEVNTADRLO_1", .addr = A_GEVNTADRLO_1,
605
+ .unimp = 0xffffffff,
606
+ },{ .name = "GEVNTADRHI_1", .addr = A_GEVNTADRHI_1,
607
+ .unimp = 0xffffffff,
608
+ },{ .name = "GEVNTSIZ_1", .addr = A_GEVNTSIZ_1,
609
+ .ro = 0x7fff0000,
610
+ .unimp = 0xffffffff,
611
+ },{ .name = "GEVNTCOUNT_1", .addr = A_GEVNTCOUNT_1,
612
+ .ro = 0x7fff0000,
613
+ .unimp = 0xffffffff,
614
+ },{ .name = "GEVNTADRLO_2", .addr = A_GEVNTADRLO_2,
615
+ .unimp = 0xffffffff,
616
+ },{ .name = "GEVNTADRHI_2", .addr = A_GEVNTADRHI_2,
617
+ .unimp = 0xffffffff,
618
+ },{ .name = "GEVNTSIZ_2", .addr = A_GEVNTSIZ_2,
619
+ .ro = 0x7fff0000,
620
+ .unimp = 0xffffffff,
621
+ },{ .name = "GEVNTCOUNT_2", .addr = A_GEVNTCOUNT_2,
622
+ .ro = 0x7fff0000,
623
+ .unimp = 0xffffffff,
624
+ },{ .name = "GEVNTADRLO_3", .addr = A_GEVNTADRLO_3,
625
+ .unimp = 0xffffffff,
626
+ },{ .name = "GEVNTADRHI_3", .addr = A_GEVNTADRHI_3,
627
+ .unimp = 0xffffffff,
628
+ },{ .name = "GEVNTSIZ_3", .addr = A_GEVNTSIZ_3,
629
+ .ro = 0x7fff0000,
630
+ .unimp = 0xffffffff,
631
+ },{ .name = "GEVNTCOUNT_3", .addr = A_GEVNTCOUNT_3,
632
+ .ro = 0x7fff0000,
633
+ .unimp = 0xffffffff,
634
+ },{ .name = "GHWPARAMS8", .addr = A_GHWPARAMS8,
635
+ .ro = 0xffffffff,
636
+ },{ .name = "GTXFIFOPRIDEV", .addr = A_GTXFIFOPRIDEV,
637
+ .ro = 0xffffffc0,
638
+ .unimp = 0xffffffff,
639
+ },{ .name = "GTXFIFOPRIHST", .addr = A_GTXFIFOPRIHST,
640
+ .ro = 0xfffffff8,
641
+ .unimp = 0xffffffff,
642
+ },{ .name = "GRXFIFOPRIHST", .addr = A_GRXFIFOPRIHST,
643
+ .ro = 0xfffffff8,
644
+ .unimp = 0xffffffff,
645
+ },{ .name = "GDMAHLRATIO", .addr = A_GDMAHLRATIO,
646
+ .ro = 0xffffe0e0,
647
+ .unimp = 0xffffffff,
648
+ },{ .name = "GFLADJ", .addr = A_GFLADJ,
649
+ .reset = 0xc83f020,
650
+ .rsvd = 0x40,
651
+ .ro = 0x400040,
652
+ .unimp = 0xffffffff,
653
+ }
654
+};
655
+
656
+static void usb_dwc3_reset(DeviceState *dev)
657
+{
658
+ USBDWC3 *s = USB_DWC3(dev);
659
+ unsigned int i;
660
+
661
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
662
+ switch (i) {
663
+ case R_GHWPARAMS0...R_GHWPARAMS7:
664
+ break;
665
+ case R_GHWPARAMS8:
666
+ break;
667
+ default:
668
+ register_reset(&s->regs_info[i]);
669
+ };
670
+ }
671
+
672
+ xhci_sysbus_reset(DEVICE(&s->sysbus_xhci));
673
+}
674
+
675
+static const MemoryRegionOps usb_dwc3_ops = {
676
+ .read = register_read_memory,
677
+ .write = register_write_memory,
678
+ .endianness = DEVICE_LITTLE_ENDIAN,
679
+ .valid = {
680
+ .min_access_size = 4,
681
+ .max_access_size = 4,
682
+ },
683
+};
684
+
685
+static void usb_dwc3_realize(DeviceState *dev, Error **errp)
686
+{
687
+ USBDWC3 *s = USB_DWC3(dev);
688
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
689
+ Error *err = NULL;
690
+
691
+ sysbus_realize(SYS_BUS_DEVICE(&s->sysbus_xhci), &err);
692
+ if (err) {
693
+ error_propagate(errp, err);
694
+ return;
376
+ return;
695
+ }
377
+ }
696
+
378
+
697
+ memory_region_add_subregion(&s->iomem, 0,
379
+ check_mops_wrong_option(env, syndrome, ra);
698
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->sysbus_xhci), 0));
380
+
699
+ sysbus_init_mmio(sbd, &s->iomem);
381
+ /*
700
+
382
+ * Our implementation has no address alignment requirements, but
701
+ /*
383
+ * we do want to enforce the "less than a page" size requirement,
702
+ * Device Configuration
384
+ * so we don't need to have the "check for interrupts" here.
703
+ */
385
+ */
704
+ s->regs[R_GHWPARAMS0] = 0x40204048 | s->cfg.mode;
386
+ if (setsize >= TARGET_PAGE_SIZE) {
705
+ s->regs[R_GHWPARAMS1] = 0x222493b;
387
+ raise_exception_ra(env, EXCP_UDEF, syndrome,
706
+ s->regs[R_GHWPARAMS2] = 0x12345678;
388
+ mops_mismatch_exception_target_el(env), ra);
707
+ s->regs[R_GHWPARAMS3] = 0x618c088;
389
+ }
708
+ s->regs[R_GHWPARAMS4] = 0x47822004;
390
+
709
+ s->regs[R_GHWPARAMS5] = 0x4202088;
391
+ if (!mte_checks_needed(toaddr, mtedesc)) {
710
+ s->regs[R_GHWPARAMS6] = 0x7850c20;
392
+ mtedesc = 0;
711
+ s->regs[R_GHWPARAMS7] = 0x0;
393
+ }
712
+ s->regs[R_GHWPARAMS8] = 0x478;
394
+
713
+}
395
+ /* Do the actual memset */
714
+
396
+ while (setsize > 0) {
715
+static void usb_dwc3_init(Object *obj)
397
+ step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
716
+{
398
+ toaddr += step;
717
+ USBDWC3 *s = USB_DWC3(obj);
399
+ setsize -= step;
718
+ RegisterInfoArray *reg_array;
400
+ env->xregs[rn] = -setsize;
719
+
401
+ }
720
+ memory_region_init(&s->iomem, obj, TYPE_USB_DWC3, DWC3_SIZE);
402
+}
721
+ reg_array =
403
+
722
+ register_init_block32(DEVICE(obj), usb_dwc3_regs_info,
404
+void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
723
+ ARRAY_SIZE(usb_dwc3_regs_info),
405
+{
724
+ s->regs_info, s->regs,
406
+ do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
725
+ &usb_dwc3_ops,
407
+}
726
+ USB_DWC3_ERR_DEBUG,
408
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
727
+ USB_DWC3_R_MAX * 4);
728
+ memory_region_add_subregion(&s->iomem,
729
+ DWC3_GLOBAL_OFFSET,
730
+ &reg_array->mem);
731
+ object_initialize_child(obj, "dwc3-xhci", &s->sysbus_xhci,
732
+ TYPE_XHCI_SYSBUS);
733
+ qdev_alias_all_properties(DEVICE(&s->sysbus_xhci), obj);
734
+
735
+ s->cfg.mode = HOST_MODE;
736
+}
737
+
738
+static const VMStateDescription vmstate_usb_dwc3 = {
739
+ .name = "usb-dwc3",
740
+ .version_id = 1,
741
+ .fields = (VMStateField[]) {
742
+ VMSTATE_UINT32_ARRAY(regs, USBDWC3, USB_DWC3_R_MAX),
743
+ VMSTATE_UINT8(cfg.mode, USBDWC3),
744
+ VMSTATE_UINT32(cfg.dwc_usb3_user, USBDWC3),
745
+ VMSTATE_END_OF_LIST()
746
+ }
747
+};
748
+
749
+static Property usb_dwc3_properties[] = {
750
+ DEFINE_PROP_UINT32("DWC_USB3_USERID", USBDWC3, cfg.dwc_usb3_user,
751
+ 0x12345678),
752
+ DEFINE_PROP_END_OF_LIST(),
753
+};
754
+
755
+static void usb_dwc3_class_init(ObjectClass *klass, void *data)
756
+{
757
+ DeviceClass *dc = DEVICE_CLASS(klass);
758
+
759
+ dc->reset = usb_dwc3_reset;
760
+ dc->realize = usb_dwc3_realize;
761
+ dc->vmsd = &vmstate_usb_dwc3;
762
+ device_class_set_props(dc, usb_dwc3_properties);
763
+}
764
+
765
+static const TypeInfo usb_dwc3_info = {
766
+ .name = TYPE_USB_DWC3,
767
+ .parent = TYPE_SYS_BUS_DEVICE,
768
+ .instance_size = sizeof(USBDWC3),
769
+ .class_init = usb_dwc3_class_init,
770
+ .instance_init = usb_dwc3_init,
771
+};
772
+
773
+static void usb_dwc3_register_types(void)
774
+{
775
+ type_register_static(&usb_dwc3_info);
776
+}
777
+
778
+type_init(usb_dwc3_register_types)
779
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
780
index XXXXXXX..XXXXXXX 100644
409
index XXXXXXX..XXXXXXX 100644
781
--- a/hw/usb/Kconfig
410
--- a/target/arm/tcg/translate-a64.c
782
+++ b/hw/usb/Kconfig
411
+++ b/target/arm/tcg/translate-a64.c
783
@@ -XXX,XX +XXX,XX @@ config IMX_USBPHY
412
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZG, aa64_mte_insn_reg, do_STG, a, true, false)
784
bool
413
TRANS_FEAT(ST2G, aa64_mte_insn_reg, do_STG, a, false, true)
785
default y
414
TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)
786
depends on USB
415
787
+
416
+typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);
788
+config USB_DWC3
417
+
789
+ bool
418
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
790
+ select USB_XHCI_SYSBUS
419
+{
791
+ select REGISTER
420
+ int memidx;
792
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
421
+ uint32_t syndrome, desc = 0;
793
index XXXXXXX..XXXXXXX 100644
422
+
794
--- a/hw/usb/meson.build
423
+ /*
795
+++ b/hw/usb/meson.build
424
+ * UNPREDICTABLE cases: we choose to UNDEF, which allows
796
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_USB_XHCI_SYSBUS', if_true: files('hcd-xhci-sysbus.c
425
+ * us to pull this check before the CheckMOPSEnabled() test
797
softmmu_ss.add(when: 'CONFIG_USB_XHCI_NEC', if_true: files('hcd-xhci-nec.c'))
426
+ * (which we do in the helper function)
798
softmmu_ss.add(when: 'CONFIG_USB_MUSB', if_true: files('hcd-musb.c'))
427
+ */
799
softmmu_ss.add(when: 'CONFIG_USB_DWC2', if_true: files('hcd-dwc2.c'))
428
+ if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
800
+softmmu_ss.add(when: 'CONFIG_USB_DWC3', if_true: files('hcd-dwc3.c'))
429
+ a->rd == 31 || a->rn == 31) {
801
430
+ return false;
802
softmmu_ss.add(when: 'CONFIG_TUSB6010', if_true: files('tusb6010.c'))
431
+ }
803
softmmu_ss.add(when: 'CONFIG_IMX', if_true: files('chipidea.c'))
432
+
433
+ memidx = get_a64_user_mem_index(s, a->unpriv);
434
+
435
+ /*
436
+ * We pass option_a == true, matching our implementation;
437
+ * we pass wrong_option == false: helper function may set that bit.
438
+ */
439
+ syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
440
+ is_epilogue, false, true, a->rd, a->rs, a->rn);
441
+
442
+ if (s->mte_active[a->unpriv]) {
443
+ /* We may need to do MTE tag checking, so assemble the descriptor */
444
+ desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
445
+ desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
446
+ desc = FIELD_DP32(desc, MTEDESC, WRITE, true);
447
+ /* SIZEM1 and ALIGN we leave 0 (byte write) */
448
+ }
449
+ /* The helper function always needs the memidx even with MTE disabled */
450
+ desc = FIELD_DP32(desc, MTEDESC, MIDX, memidx);
451
+
452
+ /*
453
+ * The helper needs the register numbers, but since they're in
454
+ * the syndrome anyway, we let it extract them from there rather
455
+ * than passing in an extra three integer arguments.
456
+ */
457
+ fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(desc));
458
+ return true;
459
+}
460
+
461
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
462
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
463
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
464
+
465
typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
466
467
static bool gen_rri(DisasContext *s, arg_rri_sf *a,
804
--
468
--
805
2.20.1
469
2.34.1
806
807
New patch
Currently the only tag-setting instructions always do so in the
context of the current EL, and so we only need one ATA bit in the TB
flags. The FEAT_MOPS SETG instructions include ones which set tags
for a non-privileged access, so we now also need the equivalent "are
tags enabled?" information for EL0.

Add the new TB flag, and convert the existing 'bool ata' field in
DisasContext to a 'bool ata[2]' that can be indexed by the is_unpriv
bit in an instruction, similarly to mte[2].

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-9-peter.maydell@linaro.org
---
 target/arm/cpu.h               |  1 +
 target/arm/tcg/translate.h     |  4 ++--
 target/arm/tcg/hflags.c        | 12 ++++++++++++
 target/arm/tcg/translate-a64.c | 23 ++++++++++++-----------
 4 files changed, 27 insertions(+), 13 deletions(-)
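(Editor's illustrative sketch, not part of the patch: with DisasContext carrying ata[2], translate-time code can pick the right "are tags enabled?" flag by indexing with the unprivileged bit, exactly as mte_active[2] is already indexed. The TagsCtx type below is a hypothetical stand-in for the relevant DisasContext fields.)

#include <stdbool.h>

/* Hypothetical stand-in for the two DisasContext flag pairs */
typedef struct {
    bool ata[2];        /* [0]: current EL, [1]: unprivileged (EL0) view */
    bool mte_active[2]; /* same indexing convention */
} TagsCtx;

/* Select the "tags enabled" flag for a (possibly unprivileged) access */
static bool tags_enabled(const TagsCtx *s, bool is_unpriv)
{
    return s->ata[is_unpriv];
}
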
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SVL, 24, 4)
26
FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
27
FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
28
FIELD(TBFLAG_A64, NAA, 30, 1)
29
+FIELD(TBFLAG_A64, ATA0, 31, 1)
30
31
/*
32
* Helpers for using the above.
33
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate.h
36
+++ b/target/arm/tcg/translate.h
37
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
38
bool unpriv;
39
/* True if v8.3-PAuth is active. */
40
bool pauth_active;
41
- /* True if v8.5-MTE access to tags is enabled. */
42
- bool ata;
43
+ /* True if v8.5-MTE access to tags is enabled; index with is_unpriv. */
44
+ bool ata[2];
45
/* True if v8.5-MTE tag checks affect the PE; index with is_unpriv. */
46
bool mte_active[2];
47
/* True with v8.5-BTI and SCTLR_ELx.BT* set. */
48
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/tcg/hflags.c
51
+++ b/target/arm/tcg/hflags.c
52
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
53
&& allocation_tag_access_enabled(env, 0, sctlr)) {
54
DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
55
}
56
+ /*
57
+ * For unpriv tag-setting accesses we alse need ATA0. Again, in
58
+ * contexts where unpriv and normal insns are the same we
59
+ * duplicate the ATA bit to save effort for translate-a64.c.
60
+ */
61
+ if (EX_TBFLAG_A64(flags, UNPRIV)) {
62
+ if (allocation_tag_access_enabled(env, 0, sctlr)) {
63
+ DP_TBFLAG_A64(flags, ATA0, 1);
64
+ }
65
+ } else {
66
+ DP_TBFLAG_A64(flags, ATA0, EX_TBFLAG_A64(flags, ATA));
67
+ }
68
/* Cache TCMA as well as TBI. */
69
DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx));
70
}
71
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/tcg/translate-a64.c
74
+++ b/target/arm/tcg/translate-a64.c
75
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
76
clean_addr = clean_data_tbi(s, tcg_rt);
77
gen_probe_access(s, clean_addr, MMU_DATA_STORE, MO_8);
78
79
- if (s->ata) {
80
+ if (s->ata[0]) {
81
/* Extract the tag from the register to match STZGM. */
82
tag = tcg_temp_new_i64();
83
tcg_gen_shri_i64(tag, tcg_rt, 56);
84
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
85
clean_addr = clean_data_tbi(s, tcg_rt);
86
gen_helper_dc_zva(cpu_env, clean_addr);
87
88
- if (s->ata) {
89
+ if (s->ata[0]) {
90
/* Extract the tag from the register to match STZGM. */
91
tag = tcg_temp_new_i64();
92
tcg_gen_shri_i64(tag, tcg_rt, 56);
93
@@ -XXX,XX +XXX,XX @@ static bool trans_STGP(DisasContext *s, arg_ldstpair *a)
94
tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
95
96
/* Perform the tag store, if tag access enabled. */
97
- if (s->ata) {
98
+ if (s->ata[0]) {
99
if (tb_cflags(s->base.tb) & CF_PARALLEL) {
100
gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr);
101
} else {
102
@@ -XXX,XX +XXX,XX @@ static bool trans_STZGM(DisasContext *s, arg_ldst_tag *a)
103
tcg_gen_addi_i64(addr, addr, a->imm);
104
tcg_rt = cpu_reg(s, a->rt);
105
106
- if (s->ata) {
107
+ if (s->ata[0]) {
108
gen_helper_stzgm_tags(cpu_env, addr, tcg_rt);
109
}
110
/*
111
@@ -XXX,XX +XXX,XX @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
112
tcg_gen_addi_i64(addr, addr, a->imm);
113
tcg_rt = cpu_reg(s, a->rt);
114
115
- if (s->ata) {
116
+ if (s->ata[0]) {
117
gen_helper_stgm(cpu_env, addr, tcg_rt);
118
} else {
119
MMUAccessType acc = MMU_DATA_STORE;
120
@@ -XXX,XX +XXX,XX @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
121
tcg_gen_addi_i64(addr, addr, a->imm);
122
tcg_rt = cpu_reg(s, a->rt);
123
124
- if (s->ata) {
125
+ if (s->ata[0]) {
126
gen_helper_ldgm(tcg_rt, cpu_env, addr);
127
} else {
128
MMUAccessType acc = MMU_DATA_LOAD;
129
@@ -XXX,XX +XXX,XX @@ static bool trans_LDG(DisasContext *s, arg_ldst_tag *a)
130
131
tcg_gen_andi_i64(addr, addr, -TAG_GRANULE);
132
tcg_rt = cpu_reg(s, a->rt);
133
- if (s->ata) {
134
+ if (s->ata[0]) {
135
gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
136
} else {
137
/*
138
@@ -XXX,XX +XXX,XX @@ static bool do_STG(DisasContext *s, arg_ldst_tag *a, bool is_zero, bool is_pair)
139
tcg_gen_addi_i64(addr, addr, a->imm);
140
}
141
tcg_rt = cpu_reg_sp(s, a->rt);
142
- if (!s->ata) {
143
+ if (!s->ata[0]) {
144
/*
145
* For STG and ST2G, we need to check alignment and probe memory.
146
* TODO: For STZG and STZ2G, we could rely on the stores below,
147
@@ -XXX,XX +XXX,XX @@ static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a,
148
tcg_rn = cpu_reg_sp(s, a->rn);
149
tcg_rd = cpu_reg_sp(s, a->rd);
150
151
- if (s->ata) {
152
+ if (s->ata[0]) {
153
gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
154
tcg_constant_i32(imm),
155
tcg_constant_i32(a->uimm4));
156
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
157
if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
158
goto do_unallocated;
159
}
160
- if (s->ata) {
161
+ if (s->ata[0]) {
162
gen_helper_irg(cpu_reg_sp(s, rd), cpu_env,
163
cpu_reg_sp(s, rn), cpu_reg(s, rm));
164
} else {
165
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
166
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
167
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
168
dc->unpriv = EX_TBFLAG_A64(tb_flags, UNPRIV);
169
- dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
170
+ dc->ata[0] = EX_TBFLAG_A64(tb_flags, ATA);
171
+ dc->ata[1] = EX_TBFLAG_A64(tb_flags, ATA0);
172
dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
173
dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
174
dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
175
--
176
2.34.1
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
1
The FEAT_MOPS SETG* instructions are very similar to the SET*
2
instructions, but as well as setting memory contents they also
3
set the MTE tags. They are architecturally required to operate
4
on tag-granule aligned regions only.
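(Editor's illustrative sketch, not part of the patch: the tag-granule alignment requirement amounts to the predicate below, with TAG_GRANULE being the 16-byte MTE granule; the patch itself implements the equivalent test in check_setg_alignment() using QEMU_IS_ALIGNED().)

#include <stdbool.h>
#include <stdint.h>

#define TAG_GRANULE 16 /* MTE allocation-tag granule, in bytes */

/* A SETG* destination/size pair is acceptable only if both are
 * tag-granule aligned; a zero-length set cannot fault for alignment. */
static bool setg_operands_aligned(uint64_t ptr, uint64_t size)
{
    return (size == 0 || (ptr % TAG_GRANULE) == 0) &&
           (size % TAG_GRANULE) == 0;
}
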
2
5
3
This model is a top level integration wrapper for hcd-dwc3 and
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
versal-usb2-ctrl-regs modules, this is used by xilinx versal soc's and
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
future xilinx usb subsystems would also be part of it.
8
Message-id: 20230912140434.1333369-10-peter.maydell@linaro.org
9
---
10
target/arm/internals.h | 10 ++++
11
target/arm/tcg/helper-a64.h | 3 ++
12
target/arm/tcg/a64.decode | 5 ++
13
target/arm/tcg/helper-a64.c | 86 ++++++++++++++++++++++++++++++++--
14
target/arm/tcg/mte_helper.c | 40 ++++++++++++++++
15
target/arm/tcg/translate-a64.c | 20 +++++---
16
6 files changed, 155 insertions(+), 9 deletions(-)
6
17
7
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
18
diff --git a/target/arm/internals.h b/target/arm/internals.h
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
19
index XXXXXXX..XXXXXXX 100644
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
--- a/target/arm/internals.h
10
Message-id: 1607023357-5096-4-git-send-email-sai.pavan.boddu@xilinx.com
21
+++ b/target/arm/internals.h
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
12
---
23
void mte_check_fail(CPUARMState *env, uint32_t desc,
13
include/hw/usb/xlnx-usb-subsystem.h | 45 ++++++++++++++
24
uint64_t dirty_ptr, uintptr_t ra);
14
hw/usb/xlnx-usb-subsystem.c | 94 +++++++++++++++++++++++++++++
25
15
hw/usb/Kconfig | 5 ++
26
+/**
16
hw/usb/meson.build | 1 +
27
+ * mte_mops_set_tags: Set MTE tags for a portion of a FEAT_MOPS operation
17
4 files changed, 145 insertions(+)
28
+ * @env: CPU env
18
create mode 100644 include/hw/usb/xlnx-usb-subsystem.h
29
+ * @dirty_ptr: Start address of memory region (dirty pointer)
19
create mode 100644 hw/usb/xlnx-usb-subsystem.c
30
+ * @size: length of region (guaranteed not to cross page boundary)
20
31
+ * @desc: MTEDESC descriptor word
21
diff --git a/include/hw/usb/xlnx-usb-subsystem.h b/include/hw/usb/xlnx-usb-subsystem.h
32
+ */
22
new file mode 100644
33
+void mte_mops_set_tags(CPUARMState *env, uint64_t dirty_ptr, uint64_t size,
23
index XXXXXXX..XXXXXXX
34
+ uint32_t desc);
24
--- /dev/null
35
+
25
+++ b/include/hw/usb/xlnx-usb-subsystem.h
36
static inline int allocation_tag_from_addr(uint64_t ptr)
26
@@ -XXX,XX +XXX,XX @@
37
{
38
return extract64(ptr, 56, 4);
39
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/helper-a64.h
42
+++ b/target/arm/tcg/helper-a64.h
43
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
44
DEF_HELPER_3(setp, void, env, i32, i32)
45
DEF_HELPER_3(setm, void, env, i32, i32)
46
DEF_HELPER_3(sete, void, env, i32, i32)
47
+DEF_HELPER_3(setgp, void, env, i32, i32)
48
+DEF_HELPER_3(setgm, void, env, i32, i32)
49
+DEF_HELPER_3(setge, void, env, i32, i32)
50
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/a64.decode
53
+++ b/target/arm/tcg/a64.decode
54
@@ -XXX,XX +XXX,XX @@ STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
55
SETP 00 011001110 ..... 00 . . 01 ..... ..... @set
56
SETM 00 011001110 ..... 01 . . 01 ..... ..... @set
57
SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
58
+
59
+# Like SET, but also setting MTE tags
60
+SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
61
+SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
62
+SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
63
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/tcg/helper-a64.c
66
+++ b/target/arm/tcg/helper-a64.c
67
@@ -XXX,XX +XXX,XX @@ static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
68
return setsize;
69
}
70
27
+/*
71
+/*
28
+ * QEMU model of the Xilinx usb subsystem
72
+ * Similar, but setting tags. The architecture requires us to do this
29
+ *
73
+ * in 16-byte chunks. SETP accesses are not tag checked; they set
30
+ * Copyright (c) 2020 Xilinx Inc. Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
74
+ * the tags.
31
+ *
32
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
33
+ * of this software and associated documentation files (the "Software"), to deal
34
+ * in the Software without restriction, including without limitation the rights
35
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
36
+ * copies of the Software, and to permit persons to whom the Software is
37
+ * furnished to do so, subject to the following conditions:
38
+ *
39
+ * The above copyright notice and this permission notice shall be included in
40
+ * all copies or substantial portions of the Software.
41
+ *
42
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
43
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
44
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
45
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
46
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
47
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
48
+ * THE SOFTWARE.
49
+ */
75
+ */
50
+
76
+static uint64_t set_step_tags(CPUARMState *env, uint64_t toaddr,
51
+#ifndef _XLNX_VERSAL_USB_SUBSYSTEM_H_
77
+ uint64_t setsize, uint32_t data, int memidx,
52
+#define _XLNX_VERSAL_USB_SUBSYSTEM_H_
78
+ uint32_t *mtedesc, uintptr_t ra)
53
+
79
+{
54
+#include "hw/usb/xlnx-versal-usb2-ctrl-regs.h"
80
+ void *mem;
55
+#include "hw/usb/hcd-dwc3.h"
81
+ uint64_t cleanaddr;
56
+
82
+
57
+#define TYPE_XILINX_VERSAL_USB2 "xlnx.versal-usb2"
83
+ setsize = MIN(setsize, page_limit(toaddr));
58
+
84
+
59
+#define VERSAL_USB2(obj) \
85
+ cleanaddr = useronly_clean_ptr(toaddr);
60
+ OBJECT_CHECK(VersalUsb2, (obj), TYPE_XILINX_VERSAL_USB2)
86
+ /*
61
+
87
+ * Trapless lookup: returns NULL for invalid page, I/O,
62
+typedef struct VersalUsb2 {
88
+ * watchpoints, clean pages, etc.
63
+ SysBusDevice parent_obj;
89
+ */
64
+ MemoryRegion dwc3_mr;
90
+ mem = tlb_vaddr_to_host(env, cleanaddr, MMU_DATA_STORE, memidx);
65
+ MemoryRegion usb2Ctrl_mr;
91
+
66
+
92
+#ifndef CONFIG_USER_ONLY
67
+ VersalUsb2CtrlRegs usb2Ctrl;
93
+ if (unlikely(!mem)) {
68
+ USBDWC3 dwc3;
94
+ /*
69
+} VersalUsb2;
95
+ * Slow-path: just do one write. This will handle the
70
+
96
+ * watchpoint, invalid page, etc handling correctly.
97
+ * The architecture requires that we do 16 bytes at a time,
98
+ * and we know both ptr and size are 16 byte aligned.
99
+ * For clean code pages, the next iteration will see
100
+ * the page dirty and will use the fast path.
101
+ */
102
+ uint64_t repldata = data * 0x0101010101010101ULL;
103
+ MemOpIdx oi16 = make_memop_idx(MO_TE | MO_128, memidx);
104
+ cpu_st16_mmu(env, toaddr, int128_make128(repldata, repldata), oi16, ra);
105
+ mte_mops_set_tags(env, toaddr, 16, *mtedesc);
106
+ return 16;
107
+ }
71
+#endif
108
+#endif
72
diff --git a/hw/usb/xlnx-usb-subsystem.c b/hw/usb/xlnx-usb-subsystem.c
109
+ /* Easy case: just memset the host memory */
73
new file mode 100644
110
+ memset(mem, data, setsize);
74
index XXXXXXX..XXXXXXX
111
+ mte_mops_set_tags(env, toaddr, setsize, *mtedesc);
75
--- /dev/null
112
+ return setsize;
76
+++ b/hw/usb/xlnx-usb-subsystem.c
113
+}
77
@@ -XXX,XX +XXX,XX @@
114
+
78
+/*
115
typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
79
+ * QEMU model of the Xilinx usb subsystem
116
uint64_t setsize, uint32_t data,
80
+ *
117
int memidx, uint32_t *mtedesc, uintptr_t ra);
81
+ * Copyright (c) 2020 Xilinx Inc. Sai Pavan Boddu <sai.pava.boddu@xilinx.com>
118
@@ -XXX,XX +XXX,XX @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
82
+ *
119
return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
83
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
120
}
84
+ * of this software and associated documentation files (the "Software"), to deal
121
85
+ * in the Software without restriction, including without limitation the rights
122
+/* Take an exception if the SETG addr/size are not granule aligned */
86
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
123
+static void check_setg_alignment(CPUARMState *env, uint64_t ptr, uint64_t size,
87
+ * copies of the Software, and to permit persons to whom the Software is
124
+ uint32_t memidx, uintptr_t ra)
88
+ * furnished to do so, subject to the following conditions:
125
+{
89
+ *
126
+ if ((size != 0 && !QEMU_IS_ALIGNED(ptr, TAG_GRANULE)) ||
90
+ * The above copyright notice and this permission notice shall be included in
127
+ !QEMU_IS_ALIGNED(size, TAG_GRANULE)) {
91
+ * all copies or substantial portions of the Software.
128
+ arm_cpu_do_unaligned_access(env_cpu(env), ptr, MMU_DATA_STORE,
92
+ *
129
+ memidx, ra);
93
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
130
+
94
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
131
+ }
95
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
132
+}
96
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
133
+
97
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
134
/*
98
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
135
* For the Memory Set operation, our implementation chooses
99
+ * THE SOFTWARE.
136
* always to use "option A", where we update Xd to the final
100
+ */
137
@@ -XXX,XX +XXX,XX @@ static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
101
+
138
102
+#include "qemu/osdep.h"
139
if (setsize > INT64_MAX) {
103
+#include "hw/sysbus.h"
140
setsize = INT64_MAX;
104
+#include "hw/irq.h"
141
+ if (is_setg) {
105
+#include "hw/register.h"
142
+ setsize &= ~0xf;
106
+#include "qemu/bitops.h"
143
+ }
107
+#include "qemu/log.h"
144
}
108
+#include "qom/object.h"
145
109
+#include "qapi/error.h"
146
- if (!mte_checks_needed(toaddr, mtedesc)) {
110
+#include "hw/qdev-properties.h"
147
+ if (unlikely(is_setg)) {
111
+#include "hw/usb/xlnx-usb-subsystem.h"
148
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
112
+
149
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
113
+static void versal_usb2_realize(DeviceState *dev, Error **errp)
150
mtedesc = 0;
114
+{
151
}
115
+ VersalUsb2 *s = VERSAL_USB2(dev);
152
116
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
153
@@ -XXX,XX +XXX,XX @@ void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
117
+ Error *err = NULL;
154
do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
118
+
155
}
119
+ sysbus_realize(SYS_BUS_DEVICE(&s->dwc3), &err);
156
120
+ if (err) {
157
+void HELPER(setgp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
121
+ error_propagate(errp, err);
158
+{
159
+ do_setp(env, syndrome, mtedesc, set_step_tags, true, GETPC());
160
+}
161
+
162
static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
163
StepFn *stepfn, bool is_setg, uintptr_t ra)
164
{
165
@@ -XXX,XX +XXX,XX @@ static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
166
* have an IMPDEF check for alignment here.
167
*/
168
169
- if (!mte_checks_needed(toaddr, mtedesc)) {
170
+ if (unlikely(is_setg)) {
171
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
172
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
173
mtedesc = 0;
174
}
175
176
@@ -XXX,XX +XXX,XX @@ void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
177
do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
178
}
179
180
+void HELPER(setgm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
181
+{
182
+ do_setm(env, syndrome, mtedesc, set_step_tags, true, GETPC());
183
+}
184
+
185
static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
186
StepFn *stepfn, bool is_setg, uintptr_t ra)
187
{
188
@@ -XXX,XX +XXX,XX @@ static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
189
mops_mismatch_exception_target_el(env), ra);
190
}
191
192
- if (!mte_checks_needed(toaddr, mtedesc)) {
193
+ if (unlikely(is_setg)) {
194
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
195
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
196
mtedesc = 0;
197
}
198
199
@@ -XXX,XX +XXX,XX @@ void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
200
{
201
do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
202
}
203
+
204
+void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
205
+{
206
+ do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
207
+}
208
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
209
index XXXXXXX..XXXXXXX 100644
210
--- a/target/arm/tcg/mte_helper.c
211
+++ b/target/arm/tcg/mte_helper.c
212
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
213
return n * TAG_GRANULE - (ptr - tag_first);
214
}
215
}
216
+
217
+void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size,
218
+ uint32_t desc)
219
+{
220
+ int mmu_idx, tag_count;
221
+ uint64_t ptr_tag;
222
+ void *mem;
223
+
224
+ if (!desc) {
225
+ /* Tags not actually enabled */
122
+ return;
226
+ return;
123
+ }
227
+ }
124
+ sysbus_realize(SYS_BUS_DEVICE(&s->usb2Ctrl), &err);
228
+
125
+ if (err) {
229
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
126
+ error_propagate(errp, err);
230
+ /* True probe: this will never fault */
231
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE, size,
232
+ MMU_DATA_STORE, true, 0);
233
+ if (!mem) {
127
+ return;
234
+ return;
128
+ }
235
+ }
129
+ sysbus_init_mmio(sbd, &s->dwc3_mr);
236
+
130
+ sysbus_init_mmio(sbd, &s->usb2Ctrl_mr);
237
+ /*
131
+ qdev_pass_gpios(DEVICE(&s->dwc3.sysbus_xhci), dev, SYSBUS_DEVICE_GPIO_IRQ);
238
+ * We know that ptr and size are both TAG_GRANULE aligned; store
132
+}
239
+ * the tag from the pointer value into the tag memory.
133
+
240
+ */
134
+static void versal_usb2_init(Object *obj)
241
+ ptr_tag = allocation_tag_from_addr(ptr);
135
+{
242
+ tag_count = size / TAG_GRANULE;
136
+ VersalUsb2 *s = VERSAL_USB2(obj);
243
+ if (ptr & TAG_GRANULE) {
137
+
244
+ /* Not 2*TAG_GRANULE-aligned: store tag to first nibble */
138
+ object_initialize_child(obj, "versal.dwc3", &s->dwc3,
245
+ store_tag1_parallel(TAG_GRANULE, mem, ptr_tag);
139
+ TYPE_USB_DWC3);
246
+ mem++;
140
+ object_initialize_child(obj, "versal.usb2-ctrl", &s->usb2Ctrl,
247
+ tag_count--;
141
+ TYPE_XILINX_VERSAL_USB2_CTRL_REGS);
248
+ }
142
+ memory_region_init_alias(&s->dwc3_mr, obj, "versal.dwc3_alias",
249
+ memset(mem, ptr_tag | (ptr_tag << 4), tag_count / 2);
143
+ &s->dwc3.iomem, 0, DWC3_SIZE);
250
+ if (tag_count & 1) {
144
+ memory_region_init_alias(&s->usb2Ctrl_mr, obj, "versal.usb2Ctrl_alias",
251
+ /* Final trailing unaligned nibble */
145
+ &s->usb2Ctrl.iomem, 0, USB2_REGS_R_MAX * 4);
252
+ mem += tag_count / 2;
146
+ qdev_alias_all_properties(DEVICE(&s->dwc3), obj);
253
+ store_tag1_parallel(0, mem, ptr_tag);
147
+ qdev_alias_all_properties(DEVICE(&s->dwc3.sysbus_xhci), obj);
254
+ }
148
+ object_property_add_alias(obj, "dma", OBJECT(&s->dwc3.sysbus_xhci), "dma");
255
+}
149
+}
256
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
150
+
257
index XXXXXXX..XXXXXXX 100644
151
+static void versal_usb2_class_init(ObjectClass *klass, void *data)
258
--- a/target/arm/tcg/translate-a64.c
152
+{
259
+++ b/target/arm/tcg/translate-a64.c
153
+ DeviceClass *dc = DEVICE_CLASS(klass);
260
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)
154
+
261
155
+ dc->realize = versal_usb2_realize;
262
typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);
156
+}
263
157
+
264
-static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
158
+static const TypeInfo versal_usb2_info = {
265
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue,
159
+ .name = TYPE_XILINX_VERSAL_USB2,
266
+ bool is_setg, SetFn fn)
160
+ .parent = TYPE_SYS_BUS_DEVICE,
267
{
161
+ .instance_size = sizeof(VersalUsb2),
268
int memidx;
162
+ .class_init = versal_usb2_class_init,
269
uint32_t syndrome, desc = 0;
163
+ .instance_init = versal_usb2_init,
270
164
+};
271
+ if (is_setg && !dc_isar_feature(aa64_mte, s)) {
165
+
272
+ return false;
166
+static void versal_usb_types(void)
273
+ }
167
+{
274
+
168
+ type_register_static(&versal_usb2_info);
275
/*
169
+}
276
* UNPREDICTABLE cases: we choose to UNDEF, which allows
170
+
277
* us to pull this check before the CheckMOPSEnabled() test
171
+type_init(versal_usb_types)
278
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
172
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
279
* We pass option_a == true, matching our implementation;
173
index XXXXXXX..XXXXXXX 100644
280
* we pass wrong_option == false: helper function may set that bit.
174
--- a/hw/usb/Kconfig
281
*/
175
+++ b/hw/usb/Kconfig
282
- syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
176
@@ -XXX,XX +XXX,XX @@ config USB_DWC3
283
+ syndrome = syn_mop(true, is_setg, (a->nontemp << 1) | a->unpriv,
177
bool
284
is_epilogue, false, true, a->rd, a->rs, a->rn);
178
select USB_XHCI_SYSBUS
285
179
select REGISTER
286
- if (s->mte_active[a->unpriv]) {
180
+
287
+ if (is_setg ? s->ata[a->unpriv] : s->mte_active[a->unpriv]) {
181
+config XLNX_USB_SUBSYS
288
/* We may need to do MTE tag checking, so assemble the descriptor */
182
+ bool
289
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
183
+ default y if XLNX_VERSAL
290
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
184
+ select USB_DWC3
291
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
185
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
292
return true;
186
index XXXXXXX..XXXXXXX 100644
293
}
187
--- a/hw/usb/meson.build
294
188
+++ b/hw/usb/meson.build
295
-TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
189
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_TUSB6010', if_true: files('tusb6010.c'))
296
-TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
190
softmmu_ss.add(when: 'CONFIG_IMX', if_true: files('chipidea.c'))
297
-TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
191
softmmu_ss.add(when: 'CONFIG_IMX_USBPHY', if_true: files('imx-usb-phy.c'))
298
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, false, gen_helper_setp)
192
specific_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-usb2-ctrl-regs.c'))
299
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, false, gen_helper_setm)
193
+specific_ss.add(when: 'CONFIG_XLNX_USB_SUBSYS', if_true: files('xlnx-usb-subsystem.c'))
300
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, false, gen_helper_sete)
194
301
+TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
195
# emulated usb devices
302
+TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
196
softmmu_ss.add(when: 'CONFIG_USB', if_true: files('dev-hub.c'))
303
+TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)
304
305
typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
306
197
--
307
--
198
2.20.1
308
2.34.1
199
200
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
The FEAT_MOPS memory copy operations need an extra helper routine
2
for checking for MTE tag checking failures beyond the ones we
3
already added for memory set operations:
4
* mte_mops_probe_rev() does the same job as mte_mops_probe(), but
5
it checks tags starting at the provided address and working
6
backwards, rather than forwards
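(For orientation, here is a minimal sketch, condensed from the copy_step_rev() routine added later in this series, of how a backwards copy step consumes the new probe; the variable names are the patch's own and the error handling is simplified.)

    /* fromaddr points at the *last* byte to be read */
    if (rdesc) {
        uint64_t mtesize = mte_mops_probe_rev(env, fromaddr, copysize, rdesc);
        if (mtesize == 0) {
            /* the very first byte already fails its tag check */
            mte_check_fail(env, rdesc, fromaddr, ra);
        } else {
            /* otherwise stop the step just before the mismatching granule */
            copysize = MIN(copysize, mtesize);
        }
    }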
2
7
3
Connect the VersalUsb2 subsystem to the xlnx-versal SOC; it is placed
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
in the iou of the lpd domain and configured as a dual-port host controller.
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Add the respective guest dts nodes for "xlnx-versal-virt" machine.
10
Message-id: 20230912140434.1333369-11-peter.maydell@linaro.org
11
---
12
target/arm/internals.h | 17 +++++++
13
target/arm/tcg/mte_helper.c | 99 +++++++++++++++++++++++++++++++++++++
14
2 files changed, 116 insertions(+)
6
15
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
8
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
9
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 1607023357-5096-5-git-send-email-sai.pavan.boddu@xilinx.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/hw/arm/xlnx-versal.h | 9 ++++++
14
hw/arm/xlnx-versal-virt.c | 55 ++++++++++++++++++++++++++++++++++++
15
hw/arm/xlnx-versal.c | 26 +++++++++++++++++
16
3 files changed, 90 insertions(+)
17
18
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/xlnx-versal.h
18
--- a/target/arm/internals.h
21
+++ b/include/hw/arm/xlnx-versal.h
19
+++ b/target/arm/internals.h
22
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
23
#include "hw/net/cadence_gem.h"
21
uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
24
#include "hw/rtc/xlnx-zynqmp-rtc.h"
22
uint32_t desc);
25
#include "qom/object.h"
23
26
+#include "hw/usb/xlnx-usb-subsystem.h"
24
+/**
27
25
+ * mte_mops_probe_rev: Check where the next MTE failure is for a FEAT_MOPS
28
#define TYPE_XLNX_VERSAL "xlnx-versal"
26
+ * operation going in the reverse direction
29
OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
27
+ * @env: CPU env
30
@@ -XXX,XX +XXX,XX @@ struct Versal {
28
+ * @ptr: *end* address of memory region (dirty pointer)
31
PL011State uart[XLNX_VERSAL_NR_UARTS];
29
+ * @size: length of region (guaranteed not to cross a page boundary)
32
CadenceGEMState gem[XLNX_VERSAL_NR_GEMS];
30
+ * @desc: MTEDESC descriptor word (0 means no MTE checks)
33
XlnxZDMA adma[XLNX_VERSAL_NR_ADMAS];
31
+ * Returns: the size of the region that can be copied without hitting
34
+ VersalUsb2 usb;
32
+ * an MTE tag failure
35
} iou;
33
+ *
36
} lpd;
34
+ * Note that we assume that the caller has already checked the TBI
37
35
+ * and TCMA bits with mte_checks_needed() and an MTE check is definitely
38
@@ -XXX,XX +XXX,XX @@ struct Versal {
36
+ * required.
39
37
+ */
40
#define VERSAL_UART0_IRQ_0 18
38
+uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
41
#define VERSAL_UART1_IRQ_0 19
39
+ uint32_t desc);
42
+#define VERSAL_USB0_IRQ_0 22
43
#define VERSAL_GEM0_IRQ_0 56
44
#define VERSAL_GEM0_WAKE_IRQ_0 57
45
#define VERSAL_GEM1_IRQ_0 58
46
@@ -XXX,XX +XXX,XX @@ struct Versal {
47
#define MM_OCM 0xfffc0000U
48
#define MM_OCM_SIZE 0x40000
49
50
+#define MM_USB2_CTRL_REGS 0xFF9D0000
51
+#define MM_USB2_CTRL_REGS_SIZE 0x10000
52
+
40
+
53
+#define MM_USB_0 0xFE200000
41
/**
54
+#define MM_USB_0_SIZE 0x10000
42
* mte_check_fail: Record an MTE tag check failure
43
* @env: CPU env
44
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/mte_helper.c
47
+++ b/target/arm/tcg/mte_helper.c
48
@@ -XXX,XX +XXX,XX @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
49
return n;
50
}
51
52
+/**
53
+ * checkNrev:
54
+ * @tag: tag memory to test
55
+ * @odd: true to begin testing at tags at odd nibble
56
+ * @cmp: the tag to compare against
57
+ * @count: number of tags to test
58
+ *
59
+ * Return the number of successful tests.
60
+ * Thus a return value < @count indicates a failure.
61
+ *
62
+ * This is like checkN, but it runs backwards, checking the
63
+ * tags starting with @tag and then the tags preceding it.
64
+ * This is needed by the backwards-memory-copying operations.
65
+ */
66
+static int checkNrev(uint8_t *mem, int odd, int cmp, int count)
67
+{
68
+ int n = 0, diff;
55
+
69
+
56
#define MM_TOP_DDR 0x0
70
+ /* Replicate the test tag and compare. */
57
#define MM_TOP_DDR_SIZE 0x80000000U
71
+ cmp *= 0x11;
58
#define MM_TOP_DDR_2 0x800000000ULL
72
+ diff = *mem-- ^ cmp;
59
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/arm/xlnx-versal-virt.c
62
+++ b/hw/arm/xlnx-versal-virt.c
63
@@ -XXX,XX +XXX,XX @@ struct VersalVirt {
64
uint32_t ethernet_phy[2];
65
uint32_t clk_125Mhz;
66
uint32_t clk_25Mhz;
67
+ uint32_t usb;
68
+ uint32_t dwc;
69
} phandle;
70
struct arm_boot_info binfo;
71
72
@@ -XXX,XX +XXX,XX @@ static void fdt_create(VersalVirt *s)
73
s->phandle.clk_25Mhz = qemu_fdt_alloc_phandle(s->fdt);
74
s->phandle.clk_125Mhz = qemu_fdt_alloc_phandle(s->fdt);
75
76
+ s->phandle.usb = qemu_fdt_alloc_phandle(s->fdt);
77
+ s->phandle.dwc = qemu_fdt_alloc_phandle(s->fdt);
78
/* Create /chosen node for load_dtb. */
79
qemu_fdt_add_subnode(s->fdt, "/chosen");
80
81
@@ -XXX,XX +XXX,XX @@ static void fdt_add_timer_nodes(VersalVirt *s)
82
compat, sizeof(compat));
83
}
84
85
+static void fdt_add_usb_xhci_nodes(VersalVirt *s)
86
+{
87
+ const char clocknames[] = "bus_clk\0ref_clk";
88
+ const char irq_name[] = "dwc_usb3";
89
+ const char compatVersalDWC3[] = "xlnx,versal-dwc3";
90
+ const char compatDWC3[] = "snps,dwc3";
91
+ char *name = g_strdup_printf("/usb@%" PRIx32, MM_USB2_CTRL_REGS);
92
+
73
+
93
+ qemu_fdt_add_subnode(s->fdt, name);
74
+ if (!odd) {
94
+ qemu_fdt_setprop(s->fdt, name, "compatible",
75
+ goto start_even;
95
+ compatVersalDWC3, sizeof(compatVersalDWC3));
76
+ }
96
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
97
+ 2, MM_USB2_CTRL_REGS,
98
+ 2, MM_USB2_CTRL_REGS_SIZE);
99
+ qemu_fdt_setprop(s->fdt, name, "clock-names",
100
+ clocknames, sizeof(clocknames));
101
+ qemu_fdt_setprop_cells(s->fdt, name, "clocks",
102
+ s->phandle.clk_25Mhz, s->phandle.clk_125Mhz);
103
+ qemu_fdt_setprop(s->fdt, name, "ranges", NULL, 0);
104
+ qemu_fdt_setprop_cell(s->fdt, name, "#address-cells", 2);
105
+ qemu_fdt_setprop_cell(s->fdt, name, "#size-cells", 2);
106
+ qemu_fdt_setprop_cell(s->fdt, name, "phandle", s->phandle.usb);
107
+ g_free(name);
108
+
77
+
109
+ name = g_strdup_printf("/usb@%" PRIx32 "/dwc3@%" PRIx32,
78
+ while (1) {
110
+ MM_USB2_CTRL_REGS, MM_USB_0);
79
+ /* Test odd tag. */
111
+ qemu_fdt_add_subnode(s->fdt, name);
80
+ if (unlikely((diff) & 0xf0)) {
112
+ qemu_fdt_setprop(s->fdt, name, "compatible",
81
+ break;
113
+ compatDWC3, sizeof(compatDWC3));
82
+ }
114
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
83
+ if (++n == count) {
115
+ 2, MM_USB_0, 2, MM_USB_0_SIZE);
84
+ break;
116
+ qemu_fdt_setprop(s->fdt, name, "interrupt-names",
85
+ }
117
+ irq_name, sizeof(irq_name));
86
+
118
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
87
+ start_even:
119
+ GIC_FDT_IRQ_TYPE_SPI, VERSAL_USB0_IRQ_0,
88
+ /* Test even tag. */
120
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
89
+ if (unlikely((diff) & 0x0f)) {
121
+ qemu_fdt_setprop_cell(s->fdt, name,
90
+ break;
122
+ "snps,quirk-frame-length-adjustment", 0x20);
91
+ }
123
+ qemu_fdt_setprop_cells(s->fdt, name, "#stream-id-cells", 1);
92
+ if (++n == count) {
124
+ qemu_fdt_setprop_string(s->fdt, name, "dr_mode", "host");
93
+ break;
125
+ qemu_fdt_setprop_string(s->fdt, name, "phy-names", "usb3-phy");
94
+ }
126
+ qemu_fdt_setprop(s->fdt, name, "snps,dis_u2_susphy_quirk", NULL, 0);
95
+
127
+ qemu_fdt_setprop(s->fdt, name, "snps,dis_u3_susphy_quirk", NULL, 0);
96
+ diff = *mem-- ^ cmp;
128
+ qemu_fdt_setprop(s->fdt, name, "snps,refclk_fladj", NULL, 0);
97
+ }
129
+ qemu_fdt_setprop(s->fdt, name, "snps,mask_phy_reset", NULL, 0);
98
+ return n;
130
+ qemu_fdt_setprop_cell(s->fdt, name, "phandle", s->phandle.dwc);
131
+ qemu_fdt_setprop_string(s->fdt, name, "maximum-speed", "high-speed");
132
+ g_free(name);
133
+}
99
+}
134
+
100
+
135
static void fdt_add_uart_nodes(VersalVirt *s)
101
/**
136
{
102
* mte_probe_int() - helper for mte_probe and mte_check
137
uint64_t addrs[] = { MM_UART1, MM_UART0 };
103
* @env: CPU environment
138
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
104
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
139
fdt_add_gic_nodes(s);
140
fdt_add_timer_nodes(s);
141
fdt_add_zdma_nodes(s);
142
+ fdt_add_usb_xhci_nodes(s);
143
fdt_add_sd_nodes(s);
144
fdt_add_rtc_node(s);
145
fdt_add_cpu_nodes(s, psci_conduit);
146
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/xlnx-versal.c
149
+++ b/hw/arm/xlnx-versal.c
150
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
151
}
105
}
152
}
106
}
153
107
154
+static void versal_create_usbs(Versal *s, qemu_irq *pic)
108
+uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
109
+ uint32_t desc)
155
+{
110
+{
156
+ DeviceState *dev;
111
+ int mmu_idx, tag_count;
157
+ MemoryRegion *mr;
112
+ uint64_t ptr_tag, tag_first, tag_last;
113
+ void *mem;
114
+ bool w = FIELD_EX32(desc, MTEDESC, WRITE);
115
+ uint32_t n;
158
+
116
+
159
+ object_initialize_child(OBJECT(s), "usb2", &s->lpd.iou.usb,
117
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
160
+ TYPE_XILINX_VERSAL_USB2);
118
+ /* True probe; this will never fault */
161
+ dev = DEVICE(&s->lpd.iou.usb);
119
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
120
+ w ? MMU_DATA_STORE : MMU_DATA_LOAD,
121
+ size, MMU_DATA_LOAD, true, 0);
122
+ if (!mem) {
123
+ return size;
124
+ }
162
+
125
+
163
+ object_property_set_link(OBJECT(dev), "dma", OBJECT(&s->mr_ps),
126
+ /*
164
+ &error_abort);
127
+ * TODO: checkNrev() is not designed for checks of the size we expect
165
+ qdev_prop_set_uint32(dev, "intrs", 1);
128
+ * for FEAT_MOPS operations, so we should implement this differently.
166
+ qdev_prop_set_uint32(dev, "slots", 2);
129
+ * Maybe we should do something like
130
+ * if (region start and size are aligned nicely) {
131
+ * do direct loads of 64 tag bits at a time;
132
+ * } else {
133
+ * call checkN()
134
+ * }
135
+ */
136
+ /* Round the bounds to the tag granule, and compute the number of tags. */
137
+ ptr_tag = allocation_tag_from_addr(ptr);
138
+ tag_first = QEMU_ALIGN_DOWN(ptr - (size - 1), TAG_GRANULE);
139
+ tag_last = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
140
+ tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
141
+ n = checkNrev(mem, ptr & TAG_GRANULE, ptr_tag, tag_count);
142
+ if (likely(n == tag_count)) {
143
+ return size;
144
+ }
167
+
145
+
168
+ sysbus_realize(SYS_BUS_DEVICE(dev), &error_fatal);
146
+ /*
169
+
147
+ * Failure; for the first granule, it's at @ptr. Otherwise
170
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
148
+ * it's at the last byte of the nth granule. Calculate how
171
+ memory_region_add_subregion(&s->mr_ps, MM_USB_0, mr);
149
+ * many bytes we can access without hitting that failure.
172
+
150
+ */
173
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[VERSAL_USB0_IRQ_0]);
151
+ if (n == 0) {
174
+
152
+ return 0;
175
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
153
+ } else {
176
+ memory_region_add_subregion(&s->mr_ps, MM_USB2_CTRL_REGS, mr);
154
+ return (n - 1) * TAG_GRANULE + ((ptr + 1) - tag_last);
155
+ }
177
+}
156
+}
178
+
157
+
179
static void versal_create_gems(Versal *s, qemu_irq *pic)
158
void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size,
159
uint32_t desc)
180
{
160
{
181
int i;
182
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
183
versal_create_apu_cpus(s);
184
versal_create_apu_gic(s, pic);
185
versal_create_uarts(s, pic);
186
+ versal_create_usbs(s, pic);
187
versal_create_gems(s, pic);
188
versal_create_admas(s, pic);
189
versal_create_sds(s, pic);
190
--
161
--
191
2.20.1
162
2.34.1
192
193
1
In rom_check_and_register_reset() we detect overlaps by looking at
1
The FEAT_MOPS CPY* instructions implement memory copies. These
2
whether the ROM blob we're currently examining is in the same address
2
come in both "always forwards" (memcpy-style) and "overlap OK"
3
space and starts before the previous ROM blob ends. (This works
3
(memmove-style) flavours.
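(Concretely, the direction choice for the memmove-style insns reduces to an overlap test; this sketch mirrors the check in do_cpyp() below, using the patch's own variable names.)

    /* copy backwards only if the destination starts inside the source */
    bool backwards = fromaddr < toaddr && fromaddr + copysize > toaddr;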
4
because the ROM list is kept sorted in order by AddressSpace and then
5
by address.)
6
7
Instead of keeping the AddressSpace and last address of the previous ROM
8
blob in local variables, just keep a pointer to it.
9
10
This will allow us to print more useful information when we do detect
11
an overlap.
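(Spelled out, the check on the sorted ROM list is essentially the roms_overlap() helper added below.)

    /* list is sorted by AddressSpace, then by address */
    return last_rom->as == this_rom->as &&
           last_rom->addr + last_rom->romsize > this_rom->addr;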
12
4
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201129203923.10622-2-peter.maydell@linaro.org
7
Message-id: 20230912140434.1333369-12-peter.maydell@linaro.org
16
---
8
---
17
hw/core/loader.c | 23 +++++++++++++++--------
9
target/arm/tcg/helper-a64.h | 7 +
18
1 file changed, 15 insertions(+), 8 deletions(-)
10
target/arm/tcg/a64.decode | 14 +
11
target/arm/tcg/helper-a64.c | 454 +++++++++++++++++++++++++++++++++
12
target/arm/tcg/translate-a64.c | 60 +++++
13
4 files changed, 535 insertions(+)
19
14
20
diff --git a/hw/core/loader.c b/hw/core/loader.c
15
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/core/loader.c
17
--- a/target/arm/tcg/helper-a64.h
23
+++ b/hw/core/loader.c
18
+++ b/target/arm/tcg/helper-a64.h
24
@@ -XXX,XX +XXX,XX @@ static void rom_reset(void *unused)
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(sete, void, env, i32, i32)
25
}
20
DEF_HELPER_3(setgp, void, env, i32, i32)
21
DEF_HELPER_3(setgm, void, env, i32, i32)
22
DEF_HELPER_3(setge, void, env, i32, i32)
23
+
24
+DEF_HELPER_4(cpyp, void, env, i32, i32, i32)
25
+DEF_HELPER_4(cpym, void, env, i32, i32, i32)
26
+DEF_HELPER_4(cpye, void, env, i32, i32, i32)
27
+DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
28
+DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
29
+DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
30
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/a64.decode
33
+++ b/target/arm/tcg/a64.decode
34
@@ -XXX,XX +XXX,XX @@ SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
35
SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
36
SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
37
SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
38
+
39
+# Memmove/Memcopy: the CPY insns allow overlapping src/dest and
40
+# copy in the correct direction; the CPYF insns always copy forwards.
41
+#
42
+# options has the nontemporal and unpriv bits for src and dest
43
+&cpy rs rn rd options
44
+@cpy .. ... . ..... rs:5 options:4 .. rn:5 rd:5 &cpy
45
+
46
+CPYFP 00 011 0 01000 ..... .... 01 ..... ..... @cpy
47
+CPYFM 00 011 0 01010 ..... .... 01 ..... ..... @cpy
48
+CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy
49
+CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
50
+CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
51
+CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
52
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/tcg/helper-a64.c
55
+++ b/target/arm/tcg/helper-a64.c
56
@@ -XXX,XX +XXX,XX @@ static uint64_t page_limit(uint64_t addr)
57
return TARGET_PAGE_ALIGN(addr + 1) - addr;
26
}
58
}
27
59
28
+/* Return true if two consecutive ROMs in the ROM list overlap */
60
+/*
29
+static bool roms_overlap(Rom *last_rom, Rom *this_rom)
61
+ * Return the number of bytes we can copy starting from addr and working
30
+{
62
+ * backwards without crossing a page boundary.
31
+ if (!last_rom) {
63
+ */
64
+static uint64_t page_limit_rev(uint64_t addr)
65
+{
66
+ return (addr & ~TARGET_PAGE_MASK) + 1;
67
+}
68
+
69
/*
70
* Perform part of a memory set on an area of guest memory starting at
71
* toaddr (a dirty address) and extending for setsize bytes.
72
@@ -XXX,XX +XXX,XX @@ void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
73
{
74
do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
75
}
76
+
77
+/*
78
+ * Perform part of a memory copy from the guest memory at fromaddr
79
+ * and extending for copysize bytes, to the guest memory at
80
+ * toaddr. Both addresses are dirty.
81
+ *
82
+ * Returns the number of bytes actually set, which might be less than
83
+ * copysize; the caller should loop until the whole copy has been done.
84
+ * The caller should ensure that the guest registers are correct
85
+ * for the possibility that the first byte of the copy encounters
86
+ * an exception or watchpoint. We guarantee not to take any faults
87
+ * for bytes other than the first.
88
+ */
89
+static uint64_t copy_step(CPUARMState *env, uint64_t toaddr, uint64_t fromaddr,
90
+ uint64_t copysize, int wmemidx, int rmemidx,
91
+ uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
92
+{
93
+ void *rmem;
94
+ void *wmem;
95
+
96
+ /* Don't cross a page boundary on either source or destination */
97
+ copysize = MIN(copysize, page_limit(toaddr));
98
+ copysize = MIN(copysize, page_limit(fromaddr));
99
+ /*
100
+ * Handle MTE tag checks: either handle the tag mismatch for byte 0,
101
+ * or else copy up to but not including the byte with the mismatch.
102
+ */
103
+ if (*rdesc) {
104
+ uint64_t mtesize = mte_mops_probe(env, fromaddr, copysize, *rdesc);
105
+ if (mtesize == 0) {
106
+ mte_check_fail(env, *rdesc, fromaddr, ra);
107
+ *rdesc = 0;
108
+ } else {
109
+ copysize = MIN(copysize, mtesize);
110
+ }
111
+ }
112
+ if (*wdesc) {
113
+ uint64_t mtesize = mte_mops_probe(env, toaddr, copysize, *wdesc);
114
+ if (mtesize == 0) {
115
+ mte_check_fail(env, *wdesc, toaddr, ra);
116
+ *wdesc = 0;
117
+ } else {
118
+ copysize = MIN(copysize, mtesize);
119
+ }
120
+ }
121
+
122
+ toaddr = useronly_clean_ptr(toaddr);
123
+ fromaddr = useronly_clean_ptr(fromaddr);
124
+ /* Trapless lookup of whether we can get a host memory pointer */
125
+ wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
126
+ rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
127
+
128
+#ifndef CONFIG_USER_ONLY
129
+ /*
130
+ * If we don't have host memory for both source and dest then just
131
+ * do a single byte copy. This will handle watchpoints, invalid pages,
132
+ * etc correctly. For clean code pages, the next iteration will see
133
+ * the page dirty and will use the fast path.
134
+ */
135
+ if (unlikely(!rmem || !wmem)) {
136
+ uint8_t byte;
137
+ if (rmem) {
138
+ byte = *(uint8_t *)rmem;
139
+ } else {
140
+ byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
141
+ }
142
+ if (wmem) {
143
+ *(uint8_t *)wmem = byte;
144
+ } else {
145
+ cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
146
+ }
147
+ return 1;
148
+ }
149
+#endif
150
+ /* Easy case: just memmove the host memory */
151
+ memmove(wmem, rmem, copysize);
152
+ return copysize;
153
+}
154
+
155
+/*
156
+ * Do part of a backwards memory copy. Here toaddr and fromaddr point
157
+ * to the *last* byte to be copied.
158
+ */
159
+static uint64_t copy_step_rev(CPUARMState *env, uint64_t toaddr,
160
+ uint64_t fromaddr,
161
+ uint64_t copysize, int wmemidx, int rmemidx,
162
+ uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
163
+{
164
+ void *rmem;
165
+ void *wmem;
166
+
167
+ /* Don't cross a page boundary on either source or destination */
168
+ copysize = MIN(copysize, page_limit_rev(toaddr));
169
+ copysize = MIN(copysize, page_limit_rev(fromaddr));
170
+
171
+ /*
172
+ * Handle MTE tag checks: either handle the tag mismatch for byte 0,
173
+ * or else copy up to but not including the byte with the mismatch.
174
+ */
175
+ if (*rdesc) {
176
+ uint64_t mtesize = mte_mops_probe_rev(env, fromaddr, copysize, *rdesc);
177
+ if (mtesize == 0) {
178
+ mte_check_fail(env, *rdesc, fromaddr, ra);
179
+ *rdesc = 0;
180
+ } else {
181
+ copysize = MIN(copysize, mtesize);
182
+ }
183
+ }
184
+ if (*wdesc) {
185
+ uint64_t mtesize = mte_mops_probe_rev(env, toaddr, copysize, *wdesc);
186
+ if (mtesize == 0) {
187
+ mte_check_fail(env, *wdesc, toaddr, ra);
188
+ *wdesc = 0;
189
+ } else {
190
+ copysize = MIN(copysize, mtesize);
191
+ }
192
+ }
193
+
194
+ toaddr = useronly_clean_ptr(toaddr);
195
+ fromaddr = useronly_clean_ptr(fromaddr);
196
+ /* Trapless lookup of whether we can get a host memory pointer */
197
+ wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
198
+ rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
199
+
200
+#ifndef CONFIG_USER_ONLY
201
+ /*
202
+ * If we don't have host memory for both source and dest then just
203
+ * do a single byte copy. This will handle watchpoints, invalid pages,
204
+ * etc correctly. For clean code pages, the next iteration will see
205
+ * the page dirty and will use the fast path.
206
+ */
207
+ if (unlikely(!rmem || !wmem)) {
208
+ uint8_t byte;
209
+ if (rmem) {
210
+ byte = *(uint8_t *)rmem;
211
+ } else {
212
+ byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
213
+ }
214
+ if (wmem) {
215
+ *(uint8_t *)wmem = byte;
216
+ } else {
217
+ cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
218
+ }
219
+ return 1;
220
+ }
221
+#endif
222
+ /*
223
+ * Easy case: just memmove the host memory. Note that wmem and
224
+ * rmem here point to the *last* byte to copy.
225
+ */
226
+ memmove(wmem - (copysize - 1), rmem - (copysize - 1), copysize);
227
+ return copysize;
228
+}
229
+
230
+/*
231
+ * for the Memory Copy operation, our implementation chooses always
232
+ * to use "option A", where we update Xd and Xs to the final addresses
233
+ * in the CPYP insn, and then in CPYM and CPYE only need to update Xn.
234
+ *
235
+ * @env: CPU
236
+ * @syndrome: syndrome value for mismatch exceptions
237
+ * (also contains the register numbers we need to use)
238
+ * @wdesc: MTE descriptor for the writes (destination)
239
+ * @rdesc: MTE descriptor for the reads (source)
240
+ * @move: true if this is CPY (memmove), false for CPYF (memcpy forwards)
241
+ */
242
+static void do_cpyp(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
243
+ uint32_t rdesc, uint32_t move, uintptr_t ra)
244
+{
245
+ int rd = mops_destreg(syndrome);
246
+ int rs = mops_srcreg(syndrome);
247
+ int rn = mops_sizereg(syndrome);
248
+ uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
249
+ uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
250
+ bool forwards = true;
251
+ uint64_t toaddr = env->xregs[rd];
252
+ uint64_t fromaddr = env->xregs[rs];
253
+ uint64_t copysize = env->xregs[rn];
254
+ uint64_t stagecopysize, step;
255
+
256
+ check_mops_enabled(env, ra);
257
+
258
+
259
+ if (move) {
260
+ /*
261
+ * Copy backwards if necessary. The direction for a non-overlapping
262
+ * copy is IMPDEF; we choose forwards.
263
+ */
264
+ if (copysize > 0x007FFFFFFFFFFFFFULL) {
265
+ copysize = 0x007FFFFFFFFFFFFFULL;
266
+ }
267
+ uint64_t fs = extract64(fromaddr, 0, 56);
268
+ uint64_t ts = extract64(toaddr, 0, 56);
269
+ uint64_t fe = extract64(fromaddr + copysize, 0, 56);
270
+
271
+ if (fs < ts && fe > ts) {
272
+ forwards = false;
273
+ }
274
+ } else {
275
+ if (copysize > INT64_MAX) {
276
+ copysize = INT64_MAX;
277
+ }
278
+ }
279
+
280
+ if (!mte_checks_needed(fromaddr, rdesc)) {
281
+ rdesc = 0;
282
+ }
283
+ if (!mte_checks_needed(toaddr, wdesc)) {
284
+ wdesc = 0;
285
+ }
286
+
287
+ if (forwards) {
288
+ stagecopysize = MIN(copysize, page_limit(toaddr));
289
+ stagecopysize = MIN(stagecopysize, page_limit(fromaddr));
290
+ while (stagecopysize) {
291
+ env->xregs[rd] = toaddr;
292
+ env->xregs[rs] = fromaddr;
293
+ env->xregs[rn] = copysize;
294
+ step = copy_step(env, toaddr, fromaddr, stagecopysize,
295
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
296
+ toaddr += step;
297
+ fromaddr += step;
298
+ copysize -= step;
299
+ stagecopysize -= step;
300
+ }
301
+ /* Insn completed, so update registers to the Option A format */
302
+ env->xregs[rd] = toaddr + copysize;
303
+ env->xregs[rs] = fromaddr + copysize;
304
+ env->xregs[rn] = -copysize;
305
+ } else {
306
+ /*
307
+ * In a reverse copy the to and from addrs in Xs and Xd are the start
308
+ * of the range, but it's more convenient for us to work with pointers
309
+ * to the last byte being copied.
310
+ */
311
+ toaddr += copysize - 1;
312
+ fromaddr += copysize - 1;
313
+ stagecopysize = MIN(copysize, page_limit_rev(toaddr));
314
+ stagecopysize = MIN(stagecopysize, page_limit_rev(fromaddr));
315
+ while (stagecopysize) {
316
+ env->xregs[rn] = copysize;
317
+ step = copy_step_rev(env, toaddr, fromaddr, stagecopysize,
318
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
319
+ copysize -= step;
320
+ stagecopysize -= step;
321
+ toaddr -= step;
322
+ fromaddr -= step;
323
+ }
324
+ /*
325
+ * Insn completed, so update registers to the Option A format.
326
+ * For a reverse copy this is no different to the CPYP input format.
327
+ */
328
+ env->xregs[rn] = copysize;
329
+ }
330
+
331
+ /* Set NZCV = 0000 to indicate we are an Option A implementation */
332
+ env->NF = 0;
333
+ env->ZF = 1; /* our env->ZF encoding is inverted */
334
+ env->CF = 0;
335
+ env->VF = 0;
336
+ return;
337
+}
338
+
339
+void HELPER(cpyp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
340
+ uint32_t rdesc)
341
+{
342
+ do_cpyp(env, syndrome, wdesc, rdesc, true, GETPC());
343
+}
344
+
345
+void HELPER(cpyfp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
346
+ uint32_t rdesc)
347
+{
348
+ do_cpyp(env, syndrome, wdesc, rdesc, false, GETPC());
349
+}
350
+
351
+static void do_cpym(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
352
+ uint32_t rdesc, uint32_t move, uintptr_t ra)
353
+{
354
+ /* Main: we choose to copy until less than a page remaining */
355
+ CPUState *cs = env_cpu(env);
356
+ int rd = mops_destreg(syndrome);
357
+ int rs = mops_srcreg(syndrome);
358
+ int rn = mops_sizereg(syndrome);
359
+ uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
360
+ uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
361
+ bool forwards = true;
362
+ uint64_t toaddr, fromaddr, copysize, step;
363
+
364
+ check_mops_enabled(env, ra);
365
+
366
+ /* We choose to NOP out "no data to copy" before consistency checks */
367
+ if (env->xregs[rn] == 0) {
368
+ return;
369
+ }
370
+
371
+ check_mops_wrong_option(env, syndrome, ra);
372
+
373
+ if (move) {
374
+ forwards = (int64_t)env->xregs[rn] < 0;
375
+ }
376
+
377
+ if (forwards) {
378
+ toaddr = env->xregs[rd] + env->xregs[rn];
379
+ fromaddr = env->xregs[rs] + env->xregs[rn];
380
+ copysize = -env->xregs[rn];
381
+ } else {
382
+ copysize = env->xregs[rn];
383
+ /* This toaddr and fromaddr point to the *last* byte to copy */
384
+ toaddr = env->xregs[rd] + copysize - 1;
385
+ fromaddr = env->xregs[rs] + copysize - 1;
386
+ }
387
+
388
+ if (!mte_checks_needed(fromaddr, rdesc)) {
389
+ rdesc = 0;
390
+ }
391
+ if (!mte_checks_needed(toaddr, wdesc)) {
392
+ wdesc = 0;
393
+ }
394
+
395
+ /* Our implementation has no particular parameter requirements for CPYM */
396
+
397
+ /* Do the actual memmove */
398
+ if (forwards) {
399
+ while (copysize >= TARGET_PAGE_SIZE) {
400
+ step = copy_step(env, toaddr, fromaddr, copysize,
401
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
402
+ toaddr += step;
403
+ fromaddr += step;
404
+ copysize -= step;
405
+ env->xregs[rn] = -copysize;
406
+ if (copysize >= TARGET_PAGE_SIZE &&
407
+ unlikely(cpu_loop_exit_requested(cs))) {
408
+ cpu_loop_exit_restore(cs, ra);
409
+ }
410
+ }
411
+ } else {
412
+ while (copysize >= TARGET_PAGE_SIZE) {
413
+ step = copy_step_rev(env, toaddr, fromaddr, copysize,
414
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
415
+ toaddr -= step;
416
+ fromaddr -= step;
417
+ copysize -= step;
418
+ env->xregs[rn] = copysize;
419
+ if (copysize >= TARGET_PAGE_SIZE &&
420
+ unlikely(cpu_loop_exit_requested(cs))) {
421
+ cpu_loop_exit_restore(cs, ra);
422
+ }
423
+ }
424
+ }
425
+}
426
+
427
+void HELPER(cpym)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
428
+ uint32_t rdesc)
429
+{
430
+ do_cpym(env, syndrome, wdesc, rdesc, true, GETPC());
431
+}
432
+
433
+void HELPER(cpyfm)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
434
+ uint32_t rdesc)
435
+{
436
+ do_cpym(env, syndrome, wdesc, rdesc, false, GETPC());
437
+}
438
+
439
+static void do_cpye(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
440
+ uint32_t rdesc, uint32_t move, uintptr_t ra)
441
+{
442
+ /* Epilogue: do the last partial page */
443
+ int rd = mops_destreg(syndrome);
444
+ int rs = mops_srcreg(syndrome);
445
+ int rn = mops_sizereg(syndrome);
446
+ uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
447
+ uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
448
+ bool forwards = true;
449
+ uint64_t toaddr, fromaddr, copysize, step;
450
+
451
+ check_mops_enabled(env, ra);
452
+
453
+ /* We choose to NOP out "no data to copy" before consistency checks */
454
+ if (env->xregs[rn] == 0) {
455
+ return;
456
+ }
457
+
458
+ check_mops_wrong_option(env, syndrome, ra);
459
+
460
+ if (move) {
461
+ forwards = (int64_t)env->xregs[rn] < 0;
462
+ }
463
+
464
+ if (forwards) {
465
+ toaddr = env->xregs[rd] + env->xregs[rn];
466
+ fromaddr = env->xregs[rs] + env->xregs[rn];
467
+ copysize = -env->xregs[rn];
468
+ } else {
469
+ copysize = env->xregs[rn];
470
+ /* This toaddr and fromaddr point to the *last* byte to copy */
471
+ toaddr = env->xregs[rd] + copysize - 1;
472
+ fromaddr = env->xregs[rs] + copysize - 1;
473
+ }
474
+
475
+ if (!mte_checks_needed(fromaddr, rdesc)) {
476
+ rdesc = 0;
477
+ }
478
+ if (!mte_checks_needed(toaddr, wdesc)) {
479
+ wdesc = 0;
480
+ }
481
+
482
+    /* Check the size; we don't want to have to do a check-for-interrupts */
483
+ if (copysize >= TARGET_PAGE_SIZE) {
484
+ raise_exception_ra(env, EXCP_UDEF, syndrome,
485
+ mops_mismatch_exception_target_el(env), ra);
486
+ }
487
+
488
+ /* Do the actual memmove */
489
+ if (forwards) {
490
+ while (copysize > 0) {
491
+ step = copy_step(env, toaddr, fromaddr, copysize,
492
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
493
+ toaddr += step;
494
+ fromaddr += step;
495
+ copysize -= step;
496
+ env->xregs[rn] = -copysize;
497
+ }
498
+ } else {
499
+ while (copysize > 0) {
500
+ step = copy_step_rev(env, toaddr, fromaddr, copysize,
501
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
502
+ toaddr -= step;
503
+ fromaddr -= step;
504
+ copysize -= step;
505
+ env->xregs[rn] = copysize;
506
+ }
507
+ }
508
+}
509
+
510
+void HELPER(cpye)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
511
+ uint32_t rdesc)
512
+{
513
+ do_cpye(env, syndrome, wdesc, rdesc, true, GETPC());
514
+}
515
+
516
+void HELPER(cpyfe)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
517
+ uint32_t rdesc)
518
+{
519
+ do_cpye(env, syndrome, wdesc, rdesc, false, GETPC());
520
+}
521
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
522
index XXXXXXX..XXXXXXX 100644
523
--- a/target/arm/tcg/translate-a64.c
524
+++ b/target/arm/tcg/translate-a64.c
525
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
526
TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
527
TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)
528
529
+typedef void CpyFn(TCGv_env, TCGv_i32, TCGv_i32, TCGv_i32);
530
+
531
+static bool do_CPY(DisasContext *s, arg_cpy *a, bool is_epilogue, CpyFn fn)
532
+{
533
+ int rmemidx, wmemidx;
534
+ uint32_t syndrome, rdesc = 0, wdesc = 0;
535
+ bool wunpriv = extract32(a->options, 0, 1);
536
+ bool runpriv = extract32(a->options, 1, 1);
537
+
538
+ /*
539
+ * UNPREDICTABLE cases: we choose to UNDEF, which allows
540
+ * us to pull this check before the CheckMOPSEnabled() test
541
+ * (which we do in the helper function)
542
+ */
543
+ if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
544
+ a->rd == 31 || a->rs == 31 || a->rn == 31) {
32
+ return false;
545
+ return false;
33
+ }
546
+ }
34
+ return last_rom->as == this_rom->as &&
547
+
35
+ last_rom->addr + last_rom->romsize > this_rom->addr;
548
+ rmemidx = get_a64_user_mem_index(s, runpriv);
36
+}
549
+ wmemidx = get_a64_user_mem_index(s, wunpriv);
37
+
550
+
38
int rom_check_and_register_reset(void)
551
+ /*
39
{
552
+ * We pass option_a == true, matching our implementation;
40
- hwaddr addr = 0;
553
+ * we pass wrong_option == false: helper function may set that bit.
41
MemoryRegionSection section;
554
+ */
42
- Rom *rom;
555
+ syndrome = syn_mop(false, false, a->options, is_epilogue,
43
- AddressSpace *as = NULL;
556
+ false, true, a->rd, a->rs, a->rn);
44
+ Rom *rom, *last_rom = NULL;
557
+
45
558
+ /* If we need to do MTE tag checking, assemble the descriptors */
46
QTAILQ_FOREACH(rom, &roms, next) {
559
+ if (s->mte_active[runpriv]) {
47
if (rom->fw_file) {
560
+ rdesc = FIELD_DP32(rdesc, MTEDESC, TBI, s->tbid);
48
continue;
561
+ rdesc = FIELD_DP32(rdesc, MTEDESC, TCMA, s->tcma);
49
}
562
+ }
50
if (!rom->mr) {
563
+ if (s->mte_active[wunpriv]) {
51
- if ((addr > rom->addr) && (as == rom->as)) {
564
+ wdesc = FIELD_DP32(wdesc, MTEDESC, TBI, s->tbid);
52
+ if (roms_overlap(last_rom, rom)) {
565
+ wdesc = FIELD_DP32(wdesc, MTEDESC, TCMA, s->tcma);
53
fprintf(stderr, "rom: requested regions overlap "
566
+ wdesc = FIELD_DP32(wdesc, MTEDESC, WRITE, true);
54
"(rom %s. free=0x" TARGET_FMT_plx
567
+ }
55
", addr=0x" TARGET_FMT_plx ")\n",
568
+ /* The helper function needs these parts of the descriptor regardless */
56
- rom->name, addr, rom->addr);
569
+ rdesc = FIELD_DP32(rdesc, MTEDESC, MIDX, rmemidx);
57
+ rom->name, last_rom->addr + last_rom->romsize,
570
+ wdesc = FIELD_DP32(wdesc, MTEDESC, MIDX, wmemidx);
58
+ rom->addr);
571
+
59
return -1;
572
+ /*
60
}
573
+ * The helper needs the register numbers, but since they're in
61
- addr = rom->addr;
574
+ * the syndrome anyway, we let it extract them from there rather
62
- addr += rom->romsize;
575
+ * than passing in an extra three integer arguments.
63
- as = rom->as;
576
+ */
64
+ last_rom = rom;
577
+ fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(wdesc),
65
}
578
+ tcg_constant_i32(rdesc));
66
section = memory_region_find(rom->mr ? rom->mr : get_system_memory(),
579
+ return true;
67
rom->addr, 1);
580
+}
581
+
582
+TRANS_FEAT(CPYP, aa64_mops, do_CPY, a, false, gen_helper_cpyp)
583
+TRANS_FEAT(CPYM, aa64_mops, do_CPY, a, false, gen_helper_cpym)
584
+TRANS_FEAT(CPYE, aa64_mops, do_CPY, a, true, gen_helper_cpye)
585
+TRANS_FEAT(CPYFP, aa64_mops, do_CPY, a, false, gen_helper_cpyfp)
586
+TRANS_FEAT(CPYFM, aa64_mops, do_CPY, a, false, gen_helper_cpyfm)
587
+TRANS_FEAT(CPYFE, aa64_mops, do_CPY, a, true, gen_helper_cpyfe)
588
+
589
typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
590
591
static bool gen_rri(DisasContext *s, arg_rri_sf *a,
68
--
592
--
69
2.20.1
593
2.34.1
70
71
1
Enable FEAT_MOPS on the AArch64 'max' CPU, and add it to
2
the list of features we implement.
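(For example, an illustrative invocation, not part of the patch: the feature becomes visible to guests started with the 'max' CPU model.)

    qemu-system-aarch64 -machine virt -cpu max ...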
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230912140434.1333369-13-peter.maydell@linaro.org
7
---
8
docs/system/arm/emulation.rst | 1 +
9
linux-user/elfload.c | 1 +
10
target/arm/tcg/cpu64.c | 1 +
11
3 files changed, 3 insertions(+)
12
13
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
14
index XXXXXXX..XXXXXXX 100644
15
--- a/docs/system/arm/emulation.rst
16
+++ b/docs/system/arm/emulation.rst
17
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
18
- FEAT_LSE (Large System Extensions)
19
- FEAT_LSE2 (Large System Extensions v2)
20
- FEAT_LVA (Large Virtual Address space)
21
+- FEAT_MOPS (Standardization of memory operations)
22
- FEAT_MTE (Memory Tagging Extension)
23
- FEAT_MTE2 (Memory Tagging Extension)
24
- FEAT_MTE3 (MTE Asymmetric Fault Handling)
25
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/linux-user/elfload.c
28
+++ b/linux-user/elfload.c
29
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
30
GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
31
GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
32
GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC);
33
+ GET_FEATURE_ID(aa64_mops, ARM_HWCAP2_A64_MOPS);
34
35
return hwcaps;
36
}
37
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/cpu64.c
40
+++ b/target/arm/tcg/cpu64.c
41
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
42
cpu->isar.id_aa64isar1 = t;
43
44
t = cpu->isar.id_aa64isar2;
45
+ t = FIELD_DP64(t, ID_AA64ISAR2, MOPS, 1); /* FEAT_MOPS */
46
t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1); /* FEAT_HBC */
47
cpu->isar.id_aa64isar2 = t;
48
49
--
50
2.34.1
1
In nios2_cpu_set_irq(), use deposit32() rather than raw shift-and-mask
1
Avoid a dynamic stack allocation in qjack_client_init(), by using
2
operations to set the appropriate bit in the ipending register.
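(For reference, not part of the patch: deposit32(value, start, length, field) from qemu/bitops.h replaces a bit-field in place, so for a one-bit field the two forms below are equivalent.)

    reg = deposit32(reg, irq, 1, !!level);
    reg = (reg & ~(1U << irq)) | ((uint32_t)!!level << irq);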
2
a g_autofree heap allocation instead.
3
4
(We stick with allocate + snprintf() because the JACK API requires
5
the name to be no more than its maximum size, so g_strdup_printf()
6
would require an extra truncation step.)
7
8
The codebase has very few VLAs, and if we can get rid of them all we
9
can make the compiler error on new additions. This is a defensive
10
measure against security bugs where an on-stack dynamic allocation
11
isn't correctly size-checked (e.g. CVE-2021-3527).
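(The resulting pattern, condensed from the diff below, keeps an explicit length for snprintf() while letting the buffer be freed automatically when it goes out of scope.)

    int client_name_len = jack_client_name_size();    /* includes the NUL */
    g_autofree char *client_name = g_new(char, client_name_len);
    snprintf(client_name, client_name_len, "%s-%s",
             c->out ? "out" : "in",
             c->opt->client_name ? c->opt->client_name : audio_application_name());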
3
12
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
6
Message-id: 20201129174022.26530-4-peter.maydell@linaro.org
15
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
16
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
17
Message-id: 20230818155846.1651287-2-peter.maydell@linaro.org
7
---
18
---
8
target/nios2/cpu.c | 3 +--
19
audio/jackaudio.c | 5 +++--
9
1 file changed, 1 insertion(+), 2 deletions(-)
20
1 file changed, 3 insertions(+), 2 deletions(-)
10
21
11
diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
22
diff --git a/audio/jackaudio.c b/audio/jackaudio.c
12
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
13
--- a/target/nios2/cpu.c
24
--- a/audio/jackaudio.c
14
+++ b/target/nios2/cpu.c
25
+++ b/audio/jackaudio.c
15
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_set_irq(void *opaque, int irq, int level)
26
@@ -XXX,XX +XXX,XX @@ static void qjack_client_connect_ports(QJackClient *c)
16
CPUNios2State *env = &cpu->env;
27
static int qjack_client_init(QJackClient *c)
17
CPUState *cs = CPU(cpu);
28
{
18
29
jack_status_t status;
19
- env->regs[CR_IPENDING] &= ~(1 << irq);
30
- char client_name[jack_client_name_size()];
20
- env->regs[CR_IPENDING] |= !!level << irq;
31
+ int client_name_len = jack_client_name_size(); /* includes NUL */
21
+ env->regs[CR_IPENDING] = deposit32(env->regs[CR_IPENDING], irq, 1, !!level);
32
+ g_autofree char *client_name = g_new(char, client_name_len);
22
33
jack_options_t options = JackNullOption;
23
env->irq_pending = env->regs[CR_IPENDING] & env->regs[CR_IENABLE];
34
35
if (c->state == QJACK_STATE_RUNNING) {
36
@@ -XXX,XX +XXX,XX @@ static int qjack_client_init(QJackClient *c)
37
38
c->connect_ports = true;
39
40
- snprintf(client_name, sizeof(client_name), "%s-%s",
41
+ snprintf(client_name, client_name_len, "%s-%s",
42
c->out ? "out" : "in",
43
c->opt->client_name ? c->opt->client_name : audio_application_name());
24
44
25
--
45
--
26
2.20.1
46
2.34.1
27
47
28
48
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
Avoid a dynamic stack allocation in qjack_process(). Since this
2
function is a JACK process callback, we are not permitted to malloc()
3
here, so we allocate a working buffer in qjack_client_init() instead.
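(Condensed from the diff below, the shape of the fix is "allocate outside the real-time path, only reuse it inside".)

    /* qjack_client_init(): safe to allocate here */
    c->process_buffers = g_new(float *, c->nchannels);

    /* qjack_process(): no malloc() allowed, just refresh the pointers */
    for (int i = 0; i < c->nchannels; ++i) {
        c->process_buffers[i] = jack_port_get_buffer(c->port[i], nframes);
    }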
2
4
3
A malicious user can set the feedback divisor for the PLLs
5
The codebase has very few VLAs, and if we can get rid of them all we
4
to zero, triggering a floating-point exception (SIGFPE).
6
can make the compiler error on new additions. This is a defensive
7
measure against security bugs where an on-stack dynamic allocation
8
isn't correctly size-checked (e.g. CVE-2021-3527).
5
9
6
As the datasheet [*] is not clear how hardware behaves
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
when these bits are zeroes, use the maximum divisor
11
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
8
possible (128) to avoid the software FPE.
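(The guard itself, as in the diff below, just substitutes the widest divide ratio before dividing.)

    if (!mult) {
        /* treat a zero feedback divisor as the maximum divide ratio (128) */
        mult = 1 << R_xxx_PLL_CTRL_PLL_FPDIV_LENGTH;
    }
    /* frequency multiplier -> period division */
    return input / mult;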
12
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
13
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
14
Message-id: 20230818155846.1651287-3-peter.maydell@linaro.org
15
---
16
audio/jackaudio.c | 16 +++++++++++-----
17
1 file changed, 11 insertions(+), 5 deletions(-)
9
18
10
[*] Zynq-7000 TRM, UG585 (v1.12.2)
19
diff --git a/audio/jackaudio.c b/audio/jackaudio.c
11
B.28 System Level Control Registers (slcr)
12
-> "Register (slcr) ARM_PLL_CTRL"
13
25.10.4 PLLs
14
-> "Software-Controlled PLL Update"
15
16
Fixes: 38867cb7ec9 ("hw/misc/zynq_slcr: add clock generation for uarts")
17
Reported-by: Gaoning Pan <pgn@zju.edu.cn>
18
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
20
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
21
Reviewed-by: Damien Hedde <damien.hedde@greensocs.com>
22
Message-id: 20201210141610.884600-1-f4bug@amsat.org
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
25
hw/misc/zynq_slcr.c | 5 +++++
26
1 file changed, 5 insertions(+)
27
28
diff --git a/hw/misc/zynq_slcr.c b/hw/misc/zynq_slcr.c
29
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/misc/zynq_slcr.c
21
--- a/audio/jackaudio.c
31
+++ b/hw/misc/zynq_slcr.c
22
+++ b/audio/jackaudio.c
32
@@ -XXX,XX +XXX,XX @@ static uint64_t zynq_slcr_compute_pll(uint64_t input, uint32_t ctrl_reg)
23
@@ -XXX,XX +XXX,XX @@ typedef struct QJackClient {
33
return 0;
24
int buffersize;
25
jack_port_t **port;
26
QJackBuffer fifo;
27
+
28
+ /* Used as workspace by qjack_process() */
29
+ float **process_buffers;
30
}
31
QJackClient;
32
33
@@ -XXX,XX +XXX,XX @@ static int qjack_process(jack_nframes_t nframes, void *arg)
34
}
34
}
35
35
36
+ /* Consider zero feedback as maximum divide ratio possible */
36
/* get the buffers for the ports */
37
+ if (!mult) {
37
- float *buffers[c->nchannels];
38
+ mult = 1 << R_xxx_PLL_CTRL_PLL_FPDIV_LENGTH;
38
for (int i = 0; i < c->nchannels; ++i) {
39
+ }
39
- buffers[i] = jack_port_get_buffer(c->port[i], nframes);
40
+ c->process_buffers[i] = jack_port_get_buffer(c->port[i], nframes);
41
}
42
43
if (c->out) {
44
if (likely(c->enabled)) {
45
- qjack_buffer_read_l(&c->fifo, buffers, nframes);
46
+ qjack_buffer_read_l(&c->fifo, c->process_buffers, nframes);
47
} else {
48
for (int i = 0; i < c->nchannels; ++i) {
49
- memset(buffers[i], 0, nframes * sizeof(float));
50
+ memset(c->process_buffers[i], 0, nframes * sizeof(float));
51
}
52
}
53
} else {
54
if (likely(c->enabled)) {
55
- qjack_buffer_write_l(&c->fifo, buffers, nframes);
56
+ qjack_buffer_write_l(&c->fifo, c->process_buffers, nframes);
57
}
58
}
59
60
@@ -XXX,XX +XXX,XX @@ static int qjack_client_init(QJackClient *c)
61
jack_get_client_name(c->client));
62
}
63
64
+ /* Allocate working buffer for process callback */
65
+ c->process_buffers = g_new(float *, c->nchannels);
40
+
66
+
41
/* frequency multiplier -> period division */
67
jack_set_process_callback(c->client, qjack_process , c);
42
return input / mult;
68
jack_set_port_registration_callback(c->client, qjack_port_registration, c);
43
}
69
jack_set_xrun_callback(c->client, qjack_xrun, c);
70
@@ -XXX,XX +XXX,XX @@ static void qjack_client_fini_locked(QJackClient *c)
71
72
qjack_buffer_free(&c->fifo);
73
g_free(c->port);
74
+ g_free(c->process_buffers);
75
76
c->state = QJACK_STATE_DISCONNECTED;
77
/* fallthrough */
44
--
78
--
45
2.20.1
79
2.34.1
46
80
47
81
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
1
2
3
Armv8.1+ CPUs have the Virtual Host Extension (VHE), which added a non-secure
4
EL2 virtual timer.
5
6
This change adds it to fullfil Arm BSA (Base System Architecture)
7
requirements.
8
9
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
10
Message-id: 20230913140610.214893-2-marcin.juszkiewicz@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
hw/arm/sbsa-ref.c | 2 ++
15
1 file changed, 2 insertions(+)
16
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/sbsa-ref.c
20
+++ b/hw/arm/sbsa-ref.c
21
@@ -XXX,XX +XXX,XX @@
22
#define ARCH_TIMER_S_EL1_IRQ 13
23
#define ARCH_TIMER_NS_EL1_IRQ 14
24
#define ARCH_TIMER_NS_EL2_IRQ 10
25
+#define ARCH_TIMER_NS_EL2_VIRT_IRQ 12
26
27
enum {
28
SBSA_FLASH,
29
@@ -XXX,XX +XXX,XX @@ static void create_gic(SBSAMachineState *sms, MemoryRegion *mem)
30
[GTIMER_VIRT] = ARCH_TIMER_VIRT_IRQ,
31
[GTIMER_HYP] = ARCH_TIMER_NS_EL2_IRQ,
32
[GTIMER_SEC] = ARCH_TIMER_S_EL1_IRQ,
33
+ [GTIMER_HYPVIRT] = ARCH_TIMER_NS_EL2_VIRT_IRQ,
34
};
35
36
for (irq = 0; irq < ARRAY_SIZE(timer_irq); irq++) {
37
--
38
2.34.1
1
The openrisc code uses an old style of interrupt handling, where a
1
From: Viktor Prutyanov <viktor@daynix.com>
2
separate standalone set of qemu_irqs invoke a function
3
openrisc_pic_cpu_handler() which signals the interrupt to the CPU
4
proper by directly calling cpu_interrupt() and cpu_reset_interrupt().
5
Because CPU objects now inherit (indirectly) from TYPE_DEVICE, they
6
can have GPIO input lines themselves, and the neater modern way to
7
implement this is to simply have the CPU object itself provide the
8
input IRQ lines.
9
2
10
Create GPIO inputs to the OpenRISC CPU object, and make the only user
3
PE export name check introduced in d399d6b179 isn't reliable enough,
11
of cpu_openrisc_pic_init() wire up directly to those instead.
4
because a page with the export directory may be not present for some
5
reason. On the other hand, elf2dmp retrieves the PDB name in any case.
6
It can also be used to check that a PE image is the kernel image. So,
7
check PDB name when searching for Windows kernel image.
12
8
13
This allows us to delete the hw/openrisc/pic_cpu.c file entirely.
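(The before/after shape, condensed from the diff below: the CPU exposes named GPIO inputs and the board simply looks one up, instead of allocating a standalone qemu_irq array.)

    /* CPU object: expose the interrupt lines */
    qdev_init_gpio_in_named(DEVICE(cpu), openrisc_cpu_set_irq, "IRQ", NR_IRQS);

    /* board code: fetch one line to wire up */
    qemu_irq irq = qdev_get_gpio_in_named(DEVICE(cpus[cpunum]), "IRQ", irq_pin);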
9
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2165917
14
10
15
This fixes a trivial memory leak reported by Coverity of the IRQs
11
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
16
allocated in cpu_openrisc_pic_init().
12
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
13
Message-id: 20230915170153.10959-2-viktor@daynix.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
contrib/elf2dmp/main.c | 93 +++++++++++++++---------------------------
17
1 file changed, 33 insertions(+), 60 deletions(-)
17
18
18
Fixes: Coverity CID 1421934
19
diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Stafford Horne <shorne@gmail.com>
21
Message-id: 20201127225127.14770-4-peter.maydell@linaro.org
22
---
23
target/openrisc/cpu.h | 1 -
24
hw/openrisc/openrisc_sim.c | 3 +-
25
hw/openrisc/pic_cpu.c | 61 --------------------------------------
26
target/openrisc/cpu.c | 32 ++++++++++++++++++++
27
hw/openrisc/meson.build | 2 +-
28
5 files changed, 34 insertions(+), 65 deletions(-)
29
delete mode 100644 hw/openrisc/pic_cpu.c
30
31
diff --git a/target/openrisc/cpu.h b/target/openrisc/cpu.h
32
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
33
--- a/target/openrisc/cpu.h
21
--- a/contrib/elf2dmp/main.c
34
+++ b/target/openrisc/cpu.h
22
+++ b/contrib/elf2dmp/main.c
35
@@ -XXX,XX +XXX,XX @@ typedef struct CPUOpenRISCState {
23
@@ -XXX,XX +XXX,XX @@ static int write_dump(struct pa_space *ps,
36
uint32_t picmr; /* Interrupt mask register */
24
return fclose(dmp_file);
37
uint32_t picsr; /* Interrupt contrl register*/
38
#endif
39
- void *irq[32]; /* Interrupt irq input */
40
} CPUOpenRISCState;
41
42
/**
43
diff --git a/hw/openrisc/openrisc_sim.c b/hw/openrisc/openrisc_sim.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/openrisc/openrisc_sim.c
46
+++ b/hw/openrisc/openrisc_sim.c
47
@@ -XXX,XX +XXX,XX @@ static void main_cpu_reset(void *opaque)
48
49
static qemu_irq get_cpu_irq(OpenRISCCPU *cpus[], int cpunum, int irq_pin)
50
{
51
- return cpus[cpunum]->env.irq[irq_pin];
52
+ return qdev_get_gpio_in_named(DEVICE(cpus[cpunum]), "IRQ", irq_pin);
53
}
25
}
54
26
55
static void openrisc_sim_net_init(hwaddr base, hwaddr descriptors,
27
-static bool pe_check_export_name(uint64_t base, void *start_addr,
56
@@ -XXX,XX +XXX,XX @@ static void openrisc_sim_init(MachineState *machine)
28
- struct va_space *vs)
57
fprintf(stderr, "Unable to find CPU definition!\n");
29
-{
58
exit(1);
30
- IMAGE_EXPORT_DIRECTORY export_dir;
59
}
31
- const char *pe_name;
60
- cpu_openrisc_pic_init(cpus[n]);
61
62
cpu_openrisc_clock_init(cpus[n]);
63
64
diff --git a/hw/openrisc/pic_cpu.c b/hw/openrisc/pic_cpu.c
65
deleted file mode 100644
66
index XXXXXXX..XXXXXXX
67
--- a/hw/openrisc/pic_cpu.c
68
+++ /dev/null
69
@@ -XXX,XX +XXX,XX @@
70
-/*
71
- * OpenRISC Programmable Interrupt Controller support.
72
- *
73
- * Copyright (c) 2011-2012 Jia Liu <proljc@gmail.com>
74
- * Feng Gao <gf91597@gmail.com>
75
- *
76
- * This library is free software; you can redistribute it and/or
77
- * modify it under the terms of the GNU Lesser General Public
78
- * License as published by the Free Software Foundation; either
79
- * version 2.1 of the License, or (at your option) any later version.
80
- *
81
- * This library is distributed in the hope that it will be useful,
82
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
83
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
84
- * Lesser General Public License for more details.
85
- *
86
- * You should have received a copy of the GNU Lesser General Public
87
- * License along with this library; if not, see <http://www.gnu.org/licenses/>.
88
- */
89
-
32
-
90
-#include "qemu/osdep.h"
33
- if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_EXPORT_DIRECTORY,
91
-#include "hw/irq.h"
34
- &export_dir, sizeof(export_dir), vs)) {
92
-#include "cpu.h"
35
- return false;
93
-
94
-/* OpenRISC pic handler */
95
-static void openrisc_pic_cpu_handler(void *opaque, int irq, int level)
96
-{
97
- OpenRISCCPU *cpu = (OpenRISCCPU *)opaque;
98
- CPUState *cs = CPU(cpu);
99
- uint32_t irq_bit;
100
-
101
- if (irq > 31 || irq < 0) {
102
- return;
103
- }
36
- }
104
-
37
-
105
- irq_bit = 1U << irq;
38
- pe_name = va_space_resolve(vs, base + export_dir.Name);
106
-
39
- if (!pe_name) {
107
- if (level) {
40
- return false;
108
- cpu->env.picsr |= irq_bit;
109
- } else {
110
- cpu->env.picsr &= ~irq_bit;
111
- }
41
- }
112
-
42
-
113
- if (cpu->env.picsr & cpu->env.picmr) {
43
- return !strcmp(pe_name, PE_NAME);
114
- cpu_interrupt(cs, CPU_INTERRUPT_HARD);
115
- } else {
116
- cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
117
- cpu->env.picsr = 0;
118
- }
119
-}
44
-}
120
-
45
-
121
-void cpu_openrisc_pic_init(OpenRISCCPU *cpu)
46
-static int pe_get_pdb_symstore_hash(uint64_t base, void *start_addr,
122
-{
47
- char *hash, struct va_space *vs)
123
- int i;
48
+static bool pe_check_pdb_name(uint64_t base, void *start_addr,
124
- qemu_irq *qi;
49
+ struct va_space *vs, OMFSignatureRSDS *rsds)
125
- qi = qemu_allocate_irqs(openrisc_pic_cpu_handler, cpu, NR_IRQS);
50
{
51
const char sign_rsds[4] = "RSDS";
52
IMAGE_DEBUG_DIRECTORY debug_dir;
53
- OMFSignatureRSDS rsds;
54
- char *pdb_name;
55
- size_t pdb_name_sz;
56
- size_t i;
57
+ char pdb_name[sizeof(PDB_NAME)];
58
59
if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_DEBUG_DIRECTORY,
60
&debug_dir, sizeof(debug_dir), vs)) {
61
eprintf("Failed to get Debug Directory\n");
62
- return 1;
63
+ return false;
64
}
65
66
if (debug_dir.Type != IMAGE_DEBUG_TYPE_CODEVIEW) {
67
- return 1;
68
+ eprintf("Debug Directory type is not CodeView\n");
69
+ return false;
70
}
71
72
if (va_space_rw(vs,
73
base + debug_dir.AddressOfRawData,
74
- &rsds, sizeof(rsds), 0)) {
75
- return 1;
76
+ rsds, sizeof(*rsds), 0)) {
77
+ eprintf("Failed to resolve OMFSignatureRSDS\n");
78
+ return false;
79
}
80
81
- printf("CodeView signature is \'%.4s\'\n", rsds.Signature);
126
-
82
-
127
- for (i = 0; i < NR_IRQS; i++) {
83
- if (memcmp(&rsds.Signature, sign_rsds, sizeof(sign_rsds))) {
128
- cpu->env.irq[i] = qi[i];
84
- return 1;
85
+ if (memcmp(&rsds->Signature, sign_rsds, sizeof(sign_rsds))) {
86
+ eprintf("CodeView signature is \'%.4s\', \'%s\' expected\n",
87
+ rsds->Signature, sign_rsds);
88
+ return false;
89
}
90
91
- pdb_name_sz = debug_dir.SizeOfData - sizeof(rsds);
92
- pdb_name = malloc(pdb_name_sz);
93
- if (!pdb_name) {
94
- return 1;
95
+ if (debug_dir.SizeOfData - sizeof(*rsds) != sizeof(PDB_NAME)) {
96
+ eprintf("PDB name size doesn't match\n");
97
+ return false;
98
}
99
100
if (va_space_rw(vs, base + debug_dir.AddressOfRawData +
101
- offsetof(OMFSignatureRSDS, name), pdb_name, pdb_name_sz, 0)) {
102
- free(pdb_name);
103
- return 1;
104
+ offsetof(OMFSignatureRSDS, name), pdb_name, sizeof(PDB_NAME),
105
+ 0)) {
106
+ eprintf("Failed to resolve PDB name\n");
107
+ return false;
108
}
109
110
printf("PDB name is \'%s\', \'%s\' expected\n", pdb_name, PDB_NAME);
111
112
- if (strcmp(pdb_name, PDB_NAME)) {
113
- eprintf("Unexpected PDB name, it seems the kernel isn't found\n");
114
- free(pdb_name);
115
- return 1;
129
- }
116
- }
130
-}
117
+ return !strcmp(pdb_name, PDB_NAME);
131
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
118
+}
132
index XXXXXXX..XXXXXXX 100644
119
133
--- a/target/openrisc/cpu.c
120
- free(pdb_name);
134
+++ b/target/openrisc/cpu.c
121
-
135
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset(DeviceState *dev)
122
- sprintf(hash, "%.08x%.04x%.04x%.02x%.02x", rsds.guid.a, rsds.guid.b,
136
#endif
123
- rsds.guid.c, rsds.guid.d[0], rsds.guid.d[1]);
124
+static void pe_get_pdb_symstore_hash(OMFSignatureRSDS *rsds, char *hash)
125
+{
126
+ sprintf(hash, "%.08x%.04x%.04x%.02x%.02x", rsds->guid.a, rsds->guid.b,
127
+ rsds->guid.c, rsds->guid.d[0], rsds->guid.d[1]);
128
hash += 20;
129
- for (i = 0; i < 6; i++, hash += 2) {
130
- sprintf(hash, "%.02x", rsds.guid.e[i]);
131
+ for (unsigned int i = 0; i < 6; i++, hash += 2) {
132
+ sprintf(hash, "%.02x", rsds->guid.e[i]);
133
}
134
135
- sprintf(hash, "%.01x", rsds.age);
136
-
137
- return 0;
138
+ sprintf(hash, "%.01x", rsds->age);
137
}
139
}
138
140
139
+#ifndef CONFIG_USER_ONLY
141
int main(int argc, char *argv[])
140
+static void openrisc_cpu_set_irq(void *opaque, int irq, int level)
142
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
141
+{
143
KDDEBUGGER_DATA64 *kdbg;
142
+ OpenRISCCPU *cpu = (OpenRISCCPU *)opaque;
144
uint64_t KdVersionBlock;
143
+ CPUState *cs = CPU(cpu);
145
bool kernel_found = false;
144
+ uint32_t irq_bit;
146
+ OMFSignatureRSDS rsds;
145
+
147
146
+ if (irq > 31 || irq < 0) {
148
if (argc != 3) {
147
+ return;
149
eprintf("usage:\n\t%s elf_file dmp_file\n", argv[0]);
148
+ }
150
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
149
+
151
}
150
+ irq_bit = 1U << irq;
152
151
+
153
if (*(uint16_t *)nt_start_addr == 0x5a4d) { /* MZ */
152
+ if (level) {
154
- if (pe_check_export_name(KernBase, nt_start_addr, &vs)) {
153
+ cpu->env.picsr |= irq_bit;
155
+ printf("Checking candidate KernBase = 0x%016"PRIx64"\n", KernBase);
154
+ } else {
156
+ if (pe_check_pdb_name(KernBase, nt_start_addr, &vs, &rsds)) {
155
+ cpu->env.picsr &= ~irq_bit;
157
kernel_found = true;
156
+ }
158
break;
157
+
159
}
158
+ if (cpu->env.picsr & cpu->env.picmr) {
160
@@ -XXX,XX +XXX,XX @@ int main(int argc, char *argv[])
159
+ cpu_interrupt(cs, CPU_INTERRUPT_HARD);
161
printf("KernBase = 0x%016"PRIx64", signature is \'%.2s\'\n", KernBase,
160
+ } else {
162
(char *)nt_start_addr);
161
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
163
162
+ cpu->env.picsr = 0;
164
- if (pe_get_pdb_symstore_hash(KernBase, nt_start_addr, pdb_hash, &vs)) {
163
+ }
165
- eprintf("Failed to get PDB symbol store hash\n");
164
+}
166
- err = 1;
165
+#endif
167
- goto out_ps;
166
+
168
- }
167
static void openrisc_cpu_realizefn(DeviceState *dev, Error **errp)
169
+ pe_get_pdb_symstore_hash(&rsds, pdb_hash);
168
{
170
169
CPUState *cs = CPU(dev);
171
sprintf(pdb_url, "%s%s/%s/%s", SYM_URL_BASE, PDB_NAME, pdb_hash, PDB_NAME);
170
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_initfn(Object *obj)
172
printf("PDB URL is %s\n", pdb_url);
171
OpenRISCCPU *cpu = OPENRISC_CPU(obj);
172
173
cpu_set_cpustate_pointers(cpu);
174
+
175
+#ifndef CONFIG_USER_ONLY
176
+ qdev_init_gpio_in_named(DEVICE(cpu), openrisc_cpu_set_irq, "IRQ", NR_IRQS);
177
+#endif
178
}
179
180
/* CPU models */
181
diff --git a/hw/openrisc/meson.build b/hw/openrisc/meson.build
182
index XXXXXXX..XXXXXXX 100644
183
--- a/hw/openrisc/meson.build
184
+++ b/hw/openrisc/meson.build
185
@@ -XXX,XX +XXX,XX @@
186
openrisc_ss = ss.source_set()
187
-openrisc_ss.add(files('pic_cpu.c', 'cputimer.c'))
188
+openrisc_ss.add(files('cputimer.c'))
189
openrisc_ss.add(when: 'CONFIG_OR1K_SIM', if_true: files('openrisc_sim.c'))
190
191
hw_arch += {'openrisc': openrisc_ss}
192
--
173
--
193
2.20.1
174
2.34.1
194
195
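A side note on the pe_get_pdb_symstore_hash() hunk above: the string it builds is the usual Microsoft symbol-store path component, the PDB GUID as 32 hex digits followed by the age, which main() then splices into the download URL. Below is a minimal standalone sketch of that layout; the guid struct, the PDB name and the symbol-server base URL are illustrative assumptions, not the elf2dmp definitions.

/*
 * Sketch only: compose a symbol-store path from a (made-up) PDB GUID
 * and age, in the same format as the hunk above.
 */
#include <stdio.h>
#include <stdint.h>

struct guid_t {
    uint32_t a;
    uint16_t b, c;
    uint8_t  d[2];
    uint8_t  e[6];
};

static void symstore_hash(const struct guid_t *guid, uint32_t age, char *hash)
{
    /* GUID fields as 32 hex digits... */
    int n = sprintf(hash, "%.08x%.04x%.04x%.02x%.02x", guid->a, guid->b,
                    guid->c, guid->d[0], guid->d[1]);

    for (unsigned int i = 0; i < 6; i++) {
        n += sprintf(hash + n, "%.02x", guid->e[i]);
    }
    sprintf(hash + n, "%.01x", age);   /* ...followed by the PDB age */
}

int main(void)
{
    struct guid_t guid = {
        0x3844dbb9, 0x2017, 0x4767,
        { 0xa2, 0xf5 }, { 0xb7, 0x1c, 0x36, 0xfc, 0x73, 0x1d }
    };
    char hash[64], url[256];

    symstore_hash(&guid, 2, hash);
    /* base URL and PDB name here are assumptions for the example */
    snprintf(url, sizeof(url), "%s%s/%s/%s",
             "https://msdl.microsoft.com/download/symbols/",
             "ntkrnlmp.pdb", hash, "ntkrnlmp.pdb");
    printf("%s\n", url);
    return 0;
}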
1
From: Joe Komlodi <joe.komlodi@xilinx.com>
1
From: Viktor Prutyanov <viktor@daynix.com>
2
2
3
Numonyx chips determine the number of cycles to wait based on bits 7:4
3
Physical memory ranges may not be aligned to page size in QEMU ELF, but
4
in the volatile configuration register.
4
DMP can only contain page-aligned runs. So, align them.
5
5
6
However, if these bits are 0x0 or 0xF, the number of dummy cycles to
6
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
7
wait is 10 for QIOR and QIOR4 commands or when in QIO mode, and otherwise 8 for
7
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
8
the currently supported fast read commands. [1]
8
Message-id: 20230915170153.10959-3-viktor@daynix.com
9
10
[1]
11
https://www.micron.com/-/media/client/global/documents/products/data-sheet/nor-flash/serial-nor/mt25q/die-rev-b/mt25q_qlkt_u_02g_cbb_0.pdf?rev=9b167fbf2b3645efba6385949a72e453
12
13
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
14
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
15
Message-id: 1605568264-26376-5-git-send-email-komlodi@xilinx.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
10
---
18
hw/block/m25p80.c | 30 +++++++++++++++++++++++++++---
11
contrib/elf2dmp/addrspace.h | 1 +
19
1 file changed, 27 insertions(+), 3 deletions(-)
12
contrib/elf2dmp/addrspace.c | 31 +++++++++++++++++++++++++++++--
13
contrib/elf2dmp/main.c | 5 +++--
14
3 files changed, 33 insertions(+), 4 deletions(-)
20
15
21
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
16
diff --git a/contrib/elf2dmp/addrspace.h b/contrib/elf2dmp/addrspace.h
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/block/m25p80.c
18
--- a/contrib/elf2dmp/addrspace.h
24
+++ b/hw/block/m25p80.c
19
+++ b/contrib/elf2dmp/addrspace.h
25
@@ -XXX,XX +XXX,XX @@ static uint8_t numonyx_mode(Flash *s)
20
@@ -XXX,XX +XXX,XX @@
21
22
#define ELF2DMP_PAGE_BITS 12
23
#define ELF2DMP_PAGE_SIZE (1ULL << ELF2DMP_PAGE_BITS)
24
+#define ELF2DMP_PAGE_MASK (ELF2DMP_PAGE_SIZE - 1)
25
#define ELF2DMP_PFN_MASK (~(ELF2DMP_PAGE_SIZE - 1))
26
27
#define INVALID_PA UINT64_MAX
28
diff --git a/contrib/elf2dmp/addrspace.c b/contrib/elf2dmp/addrspace.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/contrib/elf2dmp/addrspace.c
31
+++ b/contrib/elf2dmp/addrspace.c
32
@@ -XXX,XX +XXX,XX @@ static struct pa_block *pa_space_find_block(struct pa_space *ps, uint64_t pa)
33
34
for (i = 0; i < ps->block_nr; i++) {
35
if (ps->block[i].paddr <= pa &&
36
- pa <= ps->block[i].paddr + ps->block[i].size) {
37
+ pa < ps->block[i].paddr + ps->block[i].size) {
38
return ps->block + i;
39
}
26
}
40
}
41
@@ -XXX,XX +XXX,XX @@ static uint8_t *pa_space_resolve(struct pa_space *ps, uint64_t pa)
42
return block->addr + (pa - block->paddr);
27
}
43
}
28
44
29
+static uint8_t numonyx_extract_cfg_num_dummies(Flash *s)
45
+static void pa_block_align(struct pa_block *b)
30
+{
46
+{
31
+ uint8_t num_dummies;
47
+ uint64_t low_align = ((b->paddr - 1) | ELF2DMP_PAGE_MASK) + 1 - b->paddr;
32
+ uint8_t mode;
48
+ uint64_t high_align = (b->paddr + b->size) & ELF2DMP_PAGE_MASK;
33
+ assert(get_man(s) == MAN_NUMONYX);
34
+
49
+
35
+ mode = numonyx_mode(s);
50
+ if (low_align == 0 && high_align == 0) {
36
+ num_dummies = extract32(s->volatile_cfg, 4, 4);
51
+ return;
37
+
38
+ if (num_dummies == 0x0 || num_dummies == 0xf) {
39
+ switch (s->cmd_in_progress) {
40
+ case QIOR:
41
+ case QIOR4:
42
+ num_dummies = 10;
43
+ break;
44
+ default:
45
+ num_dummies = (mode == MODE_QIO) ? 10 : 8;
46
+ break;
47
+ }
48
+ }
52
+ }
49
+
53
+
50
+ return num_dummies;
54
+ if (low_align + high_align < b->size) {
55
+ printf("Block 0x%"PRIx64"+:0x%"PRIx64" will be aligned to "
56
+ "0x%"PRIx64"+:0x%"PRIx64"\n", b->paddr, b->size,
57
+ b->paddr + low_align, b->size - low_align - high_align);
58
+ b->size -= low_align + high_align;
59
+ } else {
60
+ printf("Block 0x%"PRIx64"+:0x%"PRIx64" is too small to align\n",
61
+ b->paddr, b->size);
62
+ b->size = 0;
63
+ }
64
+
65
+ b->addr += low_align;
66
+ b->paddr += low_align;
51
+}
67
+}
52
+
68
+
53
static void decode_fast_read_cmd(Flash *s)
69
int pa_space_create(struct pa_space *ps, QEMU_Elf *qemu_elf)
54
{
70
{
55
s->needed_bytes = get_addr_length(s);
71
Elf64_Half phdr_nr = elf_getphdrnum(qemu_elf->map);
56
@@ -XXX,XX +XXX,XX @@ static void decode_fast_read_cmd(Flash *s)
72
@@ -XXX,XX +XXX,XX @@ int pa_space_create(struct pa_space *ps, QEMU_Elf *qemu_elf)
57
s->needed_bytes += 8;
73
.paddr = phdr[i].p_paddr,
58
break;
74
.size = phdr[i].p_filesz,
59
case MAN_NUMONYX:
75
};
60
- s->needed_bytes += extract32(s->volatile_cfg, 4, 4);
76
- block_i++;
61
+ s->needed_bytes += numonyx_extract_cfg_num_dummies(s);
77
+ pa_block_align(&ps->block[block_i]);
62
break;
78
+ block_i = ps->block[block_i].size ? (block_i + 1) : block_i;
63
case MAN_MACRONIX:
79
}
64
if (extract32(s->volatile_cfg, 6, 2) == 1) {
80
}
65
@@ -XXX,XX +XXX,XX @@ static void decode_dio_read_cmd(Flash *s)
81
66
);
82
+ ps->block_nr = block_i;
67
break;
83
+
68
case MAN_NUMONYX:
84
return 0;
69
- s->needed_bytes += extract32(s->volatile_cfg, 4, 4);
85
}
70
+ s->needed_bytes += numonyx_extract_cfg_num_dummies(s);
86
71
break;
87
diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
72
case MAN_MACRONIX:
88
index XXXXXXX..XXXXXXX 100644
73
switch (extract32(s->volatile_cfg, 6, 2)) {
89
--- a/contrib/elf2dmp/main.c
74
@@ -XXX,XX +XXX,XX @@ static void decode_qio_read_cmd(Flash *s)
90
+++ b/contrib/elf2dmp/main.c
75
);
91
@@ -XXX,XX +XXX,XX @@ static int write_dump(struct pa_space *ps,
76
break;
92
for (i = 0; i < ps->block_nr; i++) {
77
case MAN_NUMONYX:
93
struct pa_block *b = &ps->block[i];
78
- s->needed_bytes += extract32(s->volatile_cfg, 4, 4);
94
79
+ s->needed_bytes += numonyx_extract_cfg_num_dummies(s);
95
- printf("Writing block #%zu/%zu to file...\n", i, ps->block_nr);
80
break;
96
+ printf("Writing block #%zu/%zu of %"PRIu64" bytes to file...\n", i,
81
case MAN_MACRONIX:
97
+ ps->block_nr, b->size);
82
switch (extract32(s->volatile_cfg, 6, 2)) {
98
if (fwrite(b->addr, b->size, 1, dmp_file) != 1) {
99
- eprintf("Failed to write dump header\n");
100
+ eprintf("Failed to write block\n");
101
fclose(dmp_file);
102
return 1;
103
}
83
--
104
--
84
2.20.1
105
2.34.1
85
86
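To make the arithmetic in pa_block_align() above easier to follow, here is a self-contained sketch of the same trimming, assuming the 4 KiB page size used by elf2dmp: low is the distance up to the next page boundary, high is the tail hanging past the last one, and a run too small to hold a full page is dropped. Names are illustrative; the real function also advances the block's host address and logs what it did.

/*
 * Sketch only: trim a physical range to page boundaries, as
 * pa_block_align() does for each block taken from the ELF program headers.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PAGE_MASK (PAGE_SIZE - 1)

static void align_run(uint64_t *paddr, uint64_t *size)
{
    /* bytes from *paddr up to the next page boundary (0 if already aligned) */
    uint64_t low  = ((*paddr - 1) | PAGE_MASK) + 1 - *paddr;
    /* bytes hanging past the last page boundary at the end of the run */
    uint64_t high = (*paddr + *size) & PAGE_MASK;

    if (low + high >= *size) {
        *size = 0;                    /* too small to contain a full page */
        return;
    }
    *paddr += low;
    *size  -= low + high;
}

int main(void)
{
    uint64_t paddr = 0x1234, size = 0x10000;

    align_run(&paddr, &size);
    printf("aligned run: 0x%llx+:0x%llx\n",
           (unsigned long long)paddr, (unsigned long long)size);
    return 0;
}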
1
From: Joe Komlodi <joe.komlodi@xilinx.com>
1
From: Viktor Prutyanov <viktor@daynix.com>
2
2
3
Some Numonyx flash commands cannot be executed in DIO and QIO mode, such as
3
DMP supports 42 physical memory runs at most. So, merge adjacent
4
trying to do DPP or DOR when in QIO mode.
4
physical memory ranges from QEMU ELF when possible to minimize total
5
number of runs.
5
6
6
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
7
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
7
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
8
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
8
Message-id: 1605568264-26376-4-git-send-email-komlodi@xilinx.com
9
Message-id: 20230915170153.10959-4-viktor@daynix.com
10
[PMM: fixed format string for printing size_t values]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
hw/block/m25p80.c | 114 ++++++++++++++++++++++++++++++++++++++--------
13
contrib/elf2dmp/main.c | 56 ++++++++++++++++++++++++++++++++++++------
12
1 file changed, 95 insertions(+), 19 deletions(-)
14
1 file changed, 48 insertions(+), 8 deletions(-)
13
15
14
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
16
diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/block/m25p80.c
18
--- a/contrib/elf2dmp/main.c
17
+++ b/hw/block/m25p80.c
19
+++ b/contrib/elf2dmp/main.c
18
@@ -XXX,XX +XXX,XX @@ typedef enum {
20
@@ -XXX,XX +XXX,XX @@
19
MAN_GENERIC,
21
#define PE_NAME "ntoskrnl.exe"
20
} Manufacturer;
22
21
23
#define INITIAL_MXCSR 0x1f80
22
+typedef enum {
24
+#define MAX_NUMBER_OF_RUNS 42
23
+ MODE_STD = 0,
25
24
+ MODE_DIO = 1,
26
typedef struct idt_desc {
25
+ MODE_QIO = 2
27
uint16_t offset1; /* offset bits 0..15 */
26
+} SPIMode;
28
@@ -XXX,XX +XXX,XX @@ static int fix_dtb(struct va_space *vs, QEMU_Elf *qe)
29
return 1;
30
}
31
32
+static void try_merge_runs(struct pa_space *ps,
33
+ WinDumpPhyMemDesc64 *PhysicalMemoryBlock)
34
+{
35
+ unsigned int merge_cnt = 0, run_idx = 0;
27
+
36
+
28
#define M25P80_INTERNAL_DATA_BUFFER_SZ 16
37
+ PhysicalMemoryBlock->NumberOfRuns = 0;
29
38
+
30
struct Flash {
39
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
31
@@ -XXX,XX +XXX,XX @@ static void reset_memory(Flash *s)
40
+ struct pa_block *blk = ps->block + idx;
32
trace_m25p80_reset_done(s);
41
+ struct pa_block *next = blk + 1;
33
}
42
+
34
43
+ PhysicalMemoryBlock->NumberOfPages += blk->size / ELF2DMP_PAGE_SIZE;
35
+static uint8_t numonyx_mode(Flash *s)
44
+
36
+{
45
+ if (idx + 1 != ps->block_nr && blk->paddr + blk->size == next->paddr) {
37
+ if (!(s->enh_volatile_cfg & EVCFG_QUAD_IO_DISABLED)) {
46
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
38
+ return MODE_QIO;
47
+ " merged\n", idx, blk->paddr, blk->size, merge_cnt);
39
+ } else if (!(s->enh_volatile_cfg & EVCFG_DUAL_IO_DISABLED)) {
48
+ merge_cnt++;
40
+ return MODE_DIO;
49
+ } else {
41
+ } else {
50
+ struct pa_block *first_merged = blk - merge_cnt;
42
+ return MODE_STD;
51
+
52
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
53
+ " merged to 0x%"PRIx64"+:0x%"PRIx64" (run #%u)\n",
54
+ idx, blk->paddr, blk->size, merge_cnt, first_merged->paddr,
55
+ blk->paddr + blk->size - first_merged->paddr, run_idx);
56
+ PhysicalMemoryBlock->Run[run_idx] = (WinDumpPhyMemRun64) {
57
+ .BasePage = first_merged->paddr / ELF2DMP_PAGE_SIZE,
58
+ .PageCount = (blk->paddr + blk->size - first_merged->paddr) /
59
+ ELF2DMP_PAGE_SIZE,
60
+ };
61
+ PhysicalMemoryBlock->NumberOfRuns++;
62
+ run_idx++;
63
+ merge_cnt = 0;
64
+ }
43
+ }
65
+ }
44
+}
66
+}
45
+
67
+
46
static void decode_fast_read_cmd(Flash *s)
68
static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
47
{
69
struct va_space *vs, uint64_t KdDebuggerDataBlock,
48
s->needed_bytes = get_addr_length(s);
70
KDDEBUGGER_DATA64 *kdbg, uint64_t KdVersionBlock, int nr_cpus)
49
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
71
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
50
case ERASE4_32K:
72
KUSD_OFFSET_PRODUCT_TYPE);
51
case ERASE_SECTOR:
73
DBGKD_GET_VERSION64 kvb;
52
case ERASE4_SECTOR:
74
WinDumpHeader64 h;
53
- case READ:
75
- size_t i;
54
- case READ4:
76
55
- case DPP:
77
QEMU_BUILD_BUG_ON(KUSD_OFFSET_SUITE_MASK >= ELF2DMP_PAGE_SIZE);
56
- case QPP:
78
QEMU_BUILD_BUG_ON(KUSD_OFFSET_PRODUCT_TYPE >= ELF2DMP_PAGE_SIZE);
57
- case QPP_4:
79
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
58
case PP:
80
.RequiredDumpSpace = sizeof(h),
59
case PP4:
81
};
60
- case PP4_4:
82
61
case DIE_ERASE:
83
- for (i = 0; i < ps->block_nr; i++) {
62
case RDID_90:
84
- h.PhysicalMemoryBlock.NumberOfPages +=
63
case RDID_AB:
85
- ps->block[i].size / ELF2DMP_PAGE_SIZE;
64
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
86
- h.PhysicalMemoryBlock.Run[i] = (WinDumpPhyMemRun64) {
65
s->len = 0;
87
- .BasePage = ps->block[i].paddr / ELF2DMP_PAGE_SIZE,
66
s->state = STATE_COLLECTING_DATA;
88
- .PageCount = ps->block[i].size / ELF2DMP_PAGE_SIZE,
67
break;
89
- };
68
+ case READ:
90
+ if (h.PhysicalMemoryBlock.NumberOfRuns <= MAX_NUMBER_OF_RUNS) {
69
+ case READ4:
91
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
70
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) == MODE_STD) {
92
+ h.PhysicalMemoryBlock.NumberOfPages +=
71
+ s->needed_bytes = get_addr_length(s);
93
+ ps->block[idx].size / ELF2DMP_PAGE_SIZE;
72
+ s->pos = 0;
94
+ h.PhysicalMemoryBlock.Run[idx] = (WinDumpPhyMemRun64) {
73
+ s->len = 0;
95
+ .BasePage = ps->block[idx].paddr / ELF2DMP_PAGE_SIZE,
74
+ s->state = STATE_COLLECTING_DATA;
96
+ .PageCount = ps->block[idx].size / ELF2DMP_PAGE_SIZE,
75
+ } else {
97
+ };
76
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
77
+ "DIO or QIO mode\n", s->cmd_in_progress);
78
+ }
98
+ }
79
+ break;
99
+ } else {
80
+ case DPP:
100
+ try_merge_runs(ps, &h.PhysicalMemoryBlock);
81
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_QIO) {
101
}
82
+ s->needed_bytes = get_addr_length(s);
102
83
+ s->pos = 0;
103
h.RequiredDumpSpace +=
84
+ s->len = 0;
85
+ s->state = STATE_COLLECTING_DATA;
86
+ } else {
87
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
88
+ "QIO mode\n", s->cmd_in_progress);
89
+ }
90
+ break;
91
+ case QPP:
92
+ case QPP_4:
93
+ case PP4_4:
94
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_DIO) {
95
+ s->needed_bytes = get_addr_length(s);
96
+ s->pos = 0;
97
+ s->len = 0;
98
+ s->state = STATE_COLLECTING_DATA;
99
+ } else {
100
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
101
+ "DIO mode\n", s->cmd_in_progress);
102
+ }
103
+ break;
104
105
case FAST_READ:
106
case FAST_READ4:
107
+ decode_fast_read_cmd(s);
108
+ break;
109
case DOR:
110
case DOR4:
111
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_QIO) {
112
+ decode_fast_read_cmd(s);
113
+ } else {
114
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
115
+ "QIO mode\n", s->cmd_in_progress);
116
+ }
117
+ break;
118
case QOR:
119
case QOR4:
120
- decode_fast_read_cmd(s);
121
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_DIO) {
122
+ decode_fast_read_cmd(s);
123
+ } else {
124
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
125
+ "DIO mode\n", s->cmd_in_progress);
126
+ }
127
break;
128
129
case DIOR:
130
case DIOR4:
131
- decode_dio_read_cmd(s);
132
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_QIO) {
133
+ decode_dio_read_cmd(s);
134
+ } else {
135
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
136
+ "QIO mode\n", s->cmd_in_progress);
137
+ }
138
break;
139
140
case QIOR:
141
case QIOR4:
142
- decode_qio_read_cmd(s);
143
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) != MODE_DIO) {
144
+ decode_qio_read_cmd(s);
145
+ } else {
146
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute cmd %x in "
147
+ "DIO mode\n", s->cmd_in_progress);
148
+ }
149
break;
150
151
case WRSR:
152
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
153
break;
154
155
case JEDEC_READ:
156
- trace_m25p80_populated_jedec(s);
157
- for (i = 0; i < s->pi->id_len; i++) {
158
- s->data[i] = s->pi->id[i];
159
- }
160
- for (; i < SPI_NOR_MAX_ID_LEN; i++) {
161
- s->data[i] = 0;
162
- }
163
+ if (get_man(s) != MAN_NUMONYX || numonyx_mode(s) == MODE_STD) {
164
+ trace_m25p80_populated_jedec(s);
165
+ for (i = 0; i < s->pi->id_len; i++) {
166
+ s->data[i] = s->pi->id[i];
167
+ }
168
+ for (; i < SPI_NOR_MAX_ID_LEN; i++) {
169
+ s->data[i] = 0;
170
+ }
171
172
- s->len = SPI_NOR_MAX_ID_LEN;
173
- s->pos = 0;
174
- s->state = STATE_READING_DATA;
175
+ s->len = SPI_NOR_MAX_ID_LEN;
176
+ s->pos = 0;
177
+ s->state = STATE_READING_DATA;
178
+ } else {
179
+ qemu_log_mask(LOG_GUEST_ERROR, "M25P80: Cannot execute JEDEC read "
180
+ "in DIO or QIO mode\n");
181
+ }
182
break;
183
184
case RDCR:
185
--
104
--
186
2.20.1
105
2.34.1
187
188
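The merging described in the commit message above amounts to folding a run whose start address equals the previous run's end into that previous run. Here is a minimal sketch of the idea, assuming sorted, page-aligned input; it is a stand-in, not try_merge_runs() itself, which additionally fills in the WinDump run table and page counts.

/*
 * Sketch only: merge contiguous physical memory runs in place and
 * return the new run count.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct run {
    uint64_t base;
    uint64_t size;
};

static size_t merge_runs(struct run *runs, size_t nr)
{
    size_t out = 0;

    for (size_t i = 0; i < nr; i++) {
        if (out && runs[out - 1].base + runs[out - 1].size == runs[i].base) {
            runs[out - 1].size += runs[i].size;   /* extend the previous run */
        } else {
            runs[out++] = runs[i];                /* start a new run */
        }
    }
    return out;
}

int main(void)
{
    struct run runs[] = {
        { 0x00000000, 0x0009f000 },
        { 0x0009f000, 0x00061000 },   /* contiguous with the first run */
        { 0x00100000, 0x3ff00000 },
    };
    size_t nr = merge_runs(runs, 3);

    for (size_t i = 0; i < nr; i++) {
        printf("run %zu: base=0x%llx size=0x%llx\n", i,
               (unsigned long long)runs[i].base,
               (unsigned long long)runs[i].size);
    }
    return 0;
}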
1
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
1
From: Viktor Prutyanov <viktor@daynix.com>
2
2
3
This module emulates the control registers of the Versal USB2 controller; it is added
3
Glib's g_mapped_file_new maps the file with PROT_READ|PROT_WRITE and
4
just to keep the guest happy. In general this module would control the phy-reset
4
MAP_PRIVATE. This prematurely allocates physical memory equal to the dump
5
signal from usb controller, data coherency of the transactions, signals
5
file size on Linux hosts and may fail. On Linux, mapping the file with
6
the host system errors received from controller.
6
MAP_NORESERVE limits the allocation by available memory.
7
7
8
Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
8
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
9
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 20230915170153.10959-5-viktor@daynix.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1607023357-5096-2-git-send-email-sai.pavan.boddu@xilinx.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
12
---
15
include/hw/usb/xlnx-versal-usb2-ctrl-regs.h | 45 ++++
13
contrib/elf2dmp/qemu_elf.h | 2 ++
16
hw/usb/xlnx-versal-usb2-ctrl-regs.c | 229 ++++++++++++++++++++
14
contrib/elf2dmp/qemu_elf.c | 68 +++++++++++++++++++++++++++++++-------
17
hw/usb/meson.build | 1 +
15
2 files changed, 58 insertions(+), 12 deletions(-)
18
3 files changed, 275 insertions(+)
19
create mode 100644 include/hw/usb/xlnx-versal-usb2-ctrl-regs.h
20
create mode 100644 hw/usb/xlnx-versal-usb2-ctrl-regs.c
21
16
22
diff --git a/include/hw/usb/xlnx-versal-usb2-ctrl-regs.h b/include/hw/usb/xlnx-versal-usb2-ctrl-regs.h
17
diff --git a/contrib/elf2dmp/qemu_elf.h b/contrib/elf2dmp/qemu_elf.h
23
new file mode 100644
18
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX
19
--- a/contrib/elf2dmp/qemu_elf.h
25
--- /dev/null
20
+++ b/contrib/elf2dmp/qemu_elf.h
26
+++ b/include/hw/usb/xlnx-versal-usb2-ctrl-regs.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct QEMUCPUState {
27
@@ -XXX,XX +XXX,XX @@
22
int is_system(QEMUCPUState *s);
28
+/*
23
29
+ * QEMU model of the VersalUsb2CtrlRegs Register control/Status block for
24
typedef struct QEMU_Elf {
30
+ * USB2.0 controller
25
+#ifndef CONFIG_LINUX
31
+ *
26
GMappedFile *gmf;
32
+ * Copyright (c) 2020 Xilinx Inc. Vikram Garhwal <fnu.vikram@xilinx.com>
27
+#endif
33
+ *
28
size_t size;
34
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
29
void *map;
35
+ * of this software and associated documentation files (the "Software"), to deal
30
QEMUCPUState **state;
36
+ * in the Software without restriction, including without limitation the rights
31
diff --git a/contrib/elf2dmp/qemu_elf.c b/contrib/elf2dmp/qemu_elf.c
37
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
32
index XXXXXXX..XXXXXXX 100644
38
+ * copies of the Software, and to permit persons to whom the Software is
33
--- a/contrib/elf2dmp/qemu_elf.c
39
+ * furnished to do so, subject to the following conditions:
34
+++ b/contrib/elf2dmp/qemu_elf.c
40
+ *
35
@@ -XXX,XX +XXX,XX @@ static bool check_ehdr(QEMU_Elf *qe)
41
+ * The above copyright notice and this permission notice shall be included in
36
return true;
42
+ * all copies or substantial portions of the Software.
37
}
43
+ *
38
44
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
39
-int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)
45
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
40
+static int QEMU_Elf_map(QEMU_Elf *qe, const char *filename)
46
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
41
{
47
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
42
+#ifdef CONFIG_LINUX
48
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
43
+ struct stat st;
49
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
44
+ int fd;
50
+ * THE SOFTWARE.
51
+ */
52
+
45
+
53
+#ifndef _XLNX_USB2_REGS_H_
46
+ printf("Using Linux mmap\n");
54
+#define _XLNX_USB2_REGS_H_
55
+
47
+
56
+#define TYPE_XILINX_VERSAL_USB2_CTRL_REGS "xlnx.versal-usb2-ctrl-regs"
48
+ fd = open(filename, O_RDONLY, 0);
49
+ if (fd == -1) {
50
+ eprintf("Failed to open ELF dump file \'%s\'\n", filename);
51
+ return 1;
52
+ }
57
+
53
+
58
+#define XILINX_VERSAL_USB2_CTRL_REGS(obj) \
54
+ if (fstat(fd, &st)) {
59
+ OBJECT_CHECK(VersalUsb2CtrlRegs, (obj), TYPE_XILINX_VERSAL_USB2_CTRL_REGS)
55
+ eprintf("Failed to get size of ELF dump file\n");
56
+ close(fd);
57
+ return 1;
58
+ }
59
+ qe->size = st.st_size;
60
+
60
+
61
+#define USB2_REGS_R_MAX ((0x78 / 4) + 1)
61
+ qe->map = mmap(NULL, qe->size, PROT_READ | PROT_WRITE,
62
+ MAP_PRIVATE | MAP_NORESERVE, fd, 0);
63
+ if (qe->map == MAP_FAILED) {
64
+ eprintf("Failed to map ELF file\n");
65
+ close(fd);
66
+ return 1;
67
+ }
62
+
68
+
63
+typedef struct VersalUsb2CtrlRegs {
69
+ close(fd);
64
+ SysBusDevice parent_obj;
70
+#else
65
+ MemoryRegion iomem;
71
GError *gerr = NULL;
66
+ qemu_irq irq_ir;
72
- int err = 0;
67
+
73
+
68
+ uint32_t regs[USB2_REGS_R_MAX];
74
+ printf("Using GLib mmap\n");
69
+ RegisterInfo regs_info[USB2_REGS_R_MAX];
75
70
+} VersalUsb2CtrlRegs;
76
qe->gmf = g_mapped_file_new(filename, TRUE, &gerr);
71
+
77
if (gerr) {
72
+#endif
78
@@ -XXX,XX +XXX,XX @@ int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)
73
diff --git a/hw/usb/xlnx-versal-usb2-ctrl-regs.c b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
79
74
new file mode 100644
80
qe->map = g_mapped_file_get_contents(qe->gmf);
75
index XXXXXXX..XXXXXXX
81
qe->size = g_mapped_file_get_length(qe->gmf);
76
--- /dev/null
77
+++ b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
78
@@ -XXX,XX +XXX,XX @@
79
+/*
80
+ * QEMU model of the VersalUsb2CtrlRegs Register control/Status block for
81
+ * USB2.0 controller
82
+ *
83
+ * This module should control phy_reset, permanent device plugs, frame length
84
+ * time adjust & setting of coherency paths. None of which are emulated in
85
+ * present model.
86
+ *
87
+ * Copyright (c) 2020 Xilinx Inc. Vikram Garhwal <fnu.vikram@xilinx.com>
88
+ *
89
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
90
+ * of this software and associated documentation files (the "Software"), to deal
91
+ * in the Software without restriction, including without limitation the rights
92
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
93
+ * copies of the Software, and to permit persons to whom the Software is
94
+ * furnished to do so, subject to the following conditions:
95
+ *
96
+ * The above copyright notice and this permission notice shall be included in
97
+ * all copies or substantial portions of the Software.
98
+ *
99
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
100
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
101
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
102
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
103
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
104
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
105
+ * THE SOFTWARE.
106
+ */
107
+
108
+#include "qemu/osdep.h"
109
+#include "hw/sysbus.h"
110
+#include "hw/irq.h"
111
+#include "hw/register.h"
112
+#include "qemu/bitops.h"
113
+#include "qemu/log.h"
114
+#include "qom/object.h"
115
+#include "migration/vmstate.h"
116
+#include "hw/usb/xlnx-versal-usb2-ctrl-regs.h"
117
+
118
+#ifndef XILINX_VERSAL_USB2_CTRL_REGS_ERR_DEBUG
119
+#define XILINX_VERSAL_USB2_CTRL_REGS_ERR_DEBUG 0
120
+#endif
82
+#endif
121
+
83
+
122
+REG32(BUS_FILTER, 0x30)
123
+ FIELD(BUS_FILTER, BYPASS, 0, 4)
124
+REG32(PORT, 0x34)
125
+ FIELD(PORT, HOST_SMI_BAR_WR, 4, 1)
126
+ FIELD(PORT, HOST_SMI_PCI_CMD_REG_WR, 3, 1)
127
+ FIELD(PORT, HOST_MSI_ENABLE, 2, 1)
128
+ FIELD(PORT, PWR_CTRL_PRSNT, 1, 1)
129
+ FIELD(PORT, HUB_PERM_ATTACH, 0, 1)
130
+REG32(JITTER_ADJUST, 0x38)
131
+ FIELD(JITTER_ADJUST, FLADJ, 0, 6)
132
+REG32(BIGENDIAN, 0x40)
133
+ FIELD(BIGENDIAN, ENDIAN_GS, 0, 1)
134
+REG32(COHERENCY, 0x44)
135
+ FIELD(COHERENCY, USB_COHERENCY, 0, 1)
136
+REG32(XHC_BME, 0x48)
137
+ FIELD(XHC_BME, XHC_BME, 0, 1)
138
+REG32(REG_CTRL, 0x60)
139
+ FIELD(REG_CTRL, SLVERR_ENABLE, 0, 1)
140
+REG32(IR_STATUS, 0x64)
141
+ FIELD(IR_STATUS, HOST_SYS_ERR, 1, 1)
142
+ FIELD(IR_STATUS, ADDR_DEC_ERR, 0, 1)
143
+REG32(IR_MASK, 0x68)
144
+ FIELD(IR_MASK, HOST_SYS_ERR, 1, 1)
145
+ FIELD(IR_MASK, ADDR_DEC_ERR, 0, 1)
146
+REG32(IR_ENABLE, 0x6c)
147
+ FIELD(IR_ENABLE, HOST_SYS_ERR, 1, 1)
148
+ FIELD(IR_ENABLE, ADDR_DEC_ERR, 0, 1)
149
+REG32(IR_DISABLE, 0x70)
150
+ FIELD(IR_DISABLE, HOST_SYS_ERR, 1, 1)
151
+ FIELD(IR_DISABLE, ADDR_DEC_ERR, 0, 1)
152
+REG32(USB3, 0x78)
153
+
154
+static void ir_update_irq(VersalUsb2CtrlRegs *s)
155
+{
156
+ bool pending = s->regs[R_IR_STATUS] & ~s->regs[R_IR_MASK];
157
+ qemu_set_irq(s->irq_ir, pending);
158
+}
159
+
160
+static void ir_status_postw(RegisterInfo *reg, uint64_t val64)
161
+{
162
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(reg->opaque);
163
+ /*
164
+ * TODO: This should also clear USBSTS.HSE field in USB XHCI register.
165
+ * May be combine both the modules.
166
+ */
167
+ ir_update_irq(s);
168
+}
169
+
170
+static uint64_t ir_enable_prew(RegisterInfo *reg, uint64_t val64)
171
+{
172
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(reg->opaque);
173
+ uint32_t val = val64;
174
+
175
+ s->regs[R_IR_MASK] &= ~val;
176
+ ir_update_irq(s);
177
+ return 0;
84
+ return 0;
178
+}
85
+}
179
+
86
+
180
+static uint64_t ir_disable_prew(RegisterInfo *reg, uint64_t val64)
87
+static void QEMU_Elf_unmap(QEMU_Elf *qe)
181
+{
88
+{
182
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(reg->opaque);
89
+#ifdef CONFIG_LINUX
183
+ uint32_t val = val64;
90
+ munmap(qe->map, qe->size);
184
+
91
+#else
185
+ s->regs[R_IR_MASK] |= val;
92
+ g_mapped_file_unref(qe->gmf);
186
+ ir_update_irq(s);
93
+#endif
187
+ return 0;
188
+}
94
+}
189
+
95
+
190
+static const RegisterAccessInfo usb2_ctrl_regs_regs_info[] = {
96
+int QEMU_Elf_init(QEMU_Elf *qe, const char *filename)
191
+ { .name = "BUS_FILTER", .addr = A_BUS_FILTER,
97
+{
192
+ .rsvd = 0xfffffff0,
98
+ if (QEMU_Elf_map(qe, filename)) {
193
+ },{ .name = "PORT", .addr = A_PORT,
99
+ return 1;
194
+ .rsvd = 0xffffffe0,
195
+ },{ .name = "JITTER_ADJUST", .addr = A_JITTER_ADJUST,
196
+ .reset = 0x20,
197
+ .rsvd = 0xffffffc0,
198
+ },{ .name = "BIGENDIAN", .addr = A_BIGENDIAN,
199
+ .rsvd = 0xfffffffe,
200
+ },{ .name = "COHERENCY", .addr = A_COHERENCY,
201
+ .rsvd = 0xfffffffe,
202
+ },{ .name = "XHC_BME", .addr = A_XHC_BME,
203
+ .reset = 0x1,
204
+ .rsvd = 0xfffffffe,
205
+ },{ .name = "REG_CTRL", .addr = A_REG_CTRL,
206
+ .rsvd = 0xfffffffe,
207
+ },{ .name = "IR_STATUS", .addr = A_IR_STATUS,
208
+ .rsvd = 0xfffffffc,
209
+ .w1c = 0x3,
210
+ .post_write = ir_status_postw,
211
+ },{ .name = "IR_MASK", .addr = A_IR_MASK,
212
+ .reset = 0x3,
213
+ .rsvd = 0xfffffffc,
214
+ .ro = 0x3,
215
+ },{ .name = "IR_ENABLE", .addr = A_IR_ENABLE,
216
+ .rsvd = 0xfffffffc,
217
+ .pre_write = ir_enable_prew,
218
+ },{ .name = "IR_DISABLE", .addr = A_IR_DISABLE,
219
+ .rsvd = 0xfffffffc,
220
+ .pre_write = ir_disable_prew,
221
+ },{ .name = "USB3", .addr = A_USB3,
222
+ }
100
+ }
223
+};
101
224
+
102
if (!check_ehdr(qe)) {
225
+static void usb2_ctrl_regs_reset_init(Object *obj, ResetType type)
103
eprintf("Input file has the wrong format\n");
226
+{
104
- err = 1;
227
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(obj);
105
- goto out_unmap;
228
+ unsigned int i;
106
+ QEMU_Elf_unmap(qe);
229
+
107
+ return 1;
230
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
108
}
231
+ register_reset(&s->regs_info[i]);
109
232
+ }
110
if (init_states(qe)) {
233
+}
111
eprintf("Failed to extract QEMU CPU states\n");
234
+
112
- err = 1;
235
+static void usb2_ctrl_regs_reset_hold(Object *obj)
113
- goto out_unmap;
236
+{
114
+ QEMU_Elf_unmap(qe);
237
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(obj);
115
+ return 1;
238
+
116
}
239
+ ir_update_irq(s);
117
240
+}
118
return 0;
241
+
119
-
242
+static const MemoryRegionOps usb2_ctrl_regs_ops = {
120
-out_unmap:
243
+ .read = register_read_memory,
121
- g_mapped_file_unref(qe->gmf);
244
+ .write = register_write_memory,
122
-
245
+ .endianness = DEVICE_LITTLE_ENDIAN,
123
- return err;
246
+ .valid = {
124
}
247
+ .min_access_size = 4,
125
248
+ .max_access_size = 4,
126
void QEMU_Elf_exit(QEMU_Elf *qe)
249
+ },
127
{
250
+};
128
exit_states(qe);
251
+
129
- g_mapped_file_unref(qe->gmf);
252
+static void usb2_ctrl_regs_init(Object *obj)
130
+ QEMU_Elf_unmap(qe);
253
+{
131
}
254
+ VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(obj);
255
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
256
+ RegisterInfoArray *reg_array;
257
+
258
+ memory_region_init(&s->iomem, obj, TYPE_XILINX_VERSAL_USB2_CTRL_REGS,
259
+ USB2_REGS_R_MAX * 4);
260
+ reg_array =
261
+ register_init_block32(DEVICE(obj), usb2_ctrl_regs_regs_info,
262
+ ARRAY_SIZE(usb2_ctrl_regs_regs_info),
263
+ s->regs_info, s->regs,
264
+ &usb2_ctrl_regs_ops,
265
+ XILINX_VERSAL_USB2_CTRL_REGS_ERR_DEBUG,
266
+ USB2_REGS_R_MAX * 4);
267
+ memory_region_add_subregion(&s->iomem,
268
+ 0x0,
269
+ &reg_array->mem);
270
+ sysbus_init_mmio(sbd, &s->iomem);
271
+ sysbus_init_irq(sbd, &s->irq_ir);
272
+}
273
+
274
+static const VMStateDescription vmstate_usb2_ctrl_regs = {
275
+ .name = TYPE_XILINX_VERSAL_USB2_CTRL_REGS,
276
+ .version_id = 1,
277
+ .minimum_version_id = 1,
278
+ .fields = (VMStateField[]) {
279
+ VMSTATE_UINT32_ARRAY(regs, VersalUsb2CtrlRegs, USB2_REGS_R_MAX),
280
+ VMSTATE_END_OF_LIST(),
281
+ }
282
+};
283
+
284
+static void usb2_ctrl_regs_class_init(ObjectClass *klass, void *data)
285
+{
286
+ DeviceClass *dc = DEVICE_CLASS(klass);
287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
288
+
289
+ rc->phases.enter = usb2_ctrl_regs_reset_init;
290
+ rc->phases.hold = usb2_ctrl_regs_reset_hold;
291
+ dc->vmsd = &vmstate_usb2_ctrl_regs;
292
+}
293
+
294
+static const TypeInfo usb2_ctrl_regs_info = {
295
+ .name = TYPE_XILINX_VERSAL_USB2_CTRL_REGS,
296
+ .parent = TYPE_SYS_BUS_DEVICE,
297
+ .instance_size = sizeof(VersalUsb2CtrlRegs),
298
+ .class_init = usb2_ctrl_regs_class_init,
299
+ .instance_init = usb2_ctrl_regs_init,
300
+};
301
+
302
+static void usb2_ctrl_regs_register_types(void)
303
+{
304
+ type_register_static(&usb2_ctrl_regs_info);
305
+}
306
+
307
+type_init(usb2_ctrl_regs_register_types)
308
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
309
index XXXXXXX..XXXXXXX 100644
310
--- a/hw/usb/meson.build
311
+++ b/hw/usb/meson.build
312
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_USB_DWC2', if_true: files('hcd-dwc2.c'))
313
softmmu_ss.add(when: 'CONFIG_TUSB6010', if_true: files('tusb6010.c'))
314
softmmu_ss.add(when: 'CONFIG_IMX', if_true: files('chipidea.c'))
315
softmmu_ss.add(when: 'CONFIG_IMX_USBPHY', if_true: files('imx-usb-phy.c'))
316
+specific_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-usb2-ctrl-regs.c'))
317
318
# emulated usb devices
319
softmmu_ss.add(when: 'CONFIG_USB', if_true: files('dev-hub.c'))
320
--
132
--
321
2.20.1
133
2.34.1
322
323
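For the mapping change above, the key point is that a private, writable mapping of a read-only file descriptor is allowed, and MAP_NORESERVE keeps the kernel from reserving backing store for the whole dump up front. Below is a minimal Linux-only sketch of that pattern, with the error handling trimmed compared to the real QEMU_Elf_map().

/*
 * Sketch only: map a (possibly huge) file copy-on-write without
 * reserving backing store for it.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;
    void *map;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || fstat(fd, &st)) {
        perror("open/fstat");
        return 1;
    }
    /* Writes go to private copy-on-write pages, so O_RDONLY is enough;
     * MAP_NORESERVE avoids committing memory for pages never touched. */
    map = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    close(fd);
    if (map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapped %lld bytes at %p\n", (long long)st.st_size, map);
    munmap(map, st.st_size);
    return 0;
}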
New patch
1
From: Viktor Prutyanov <viktor@daynix.com>
1
2
3
The PDB for the Windows 11 kernel has a slightly different structure compared to
4
previous versions. Since elf2dmp doesn't use the other fields, copy only
5
the 'segments' field from PDB_STREAM_INDEXES.
6
7
Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
8
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
9
Message-id: 20230915170153.10959-6-viktor@daynix.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
contrib/elf2dmp/pdb.h | 2 +-
13
contrib/elf2dmp/pdb.c | 15 ++++-----------
14
2 files changed, 5 insertions(+), 12 deletions(-)
15
16
diff --git a/contrib/elf2dmp/pdb.h b/contrib/elf2dmp/pdb.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/contrib/elf2dmp/pdb.h
19
+++ b/contrib/elf2dmp/pdb.h
20
@@ -XXX,XX +XXX,XX @@ struct pdb_reader {
21
} ds;
22
uint32_t file_used[1024];
23
PDB_SYMBOLS *symbols;
24
- PDB_STREAM_INDEXES sidx;
25
+ uint16_t segments;
26
uint8_t *modimage;
27
char *segs;
28
size_t segs_size;
29
diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/contrib/elf2dmp/pdb.c
32
+++ b/contrib/elf2dmp/pdb.c
33
@@ -XXX,XX +XXX,XX @@ static void *pdb_ds_read_file(struct pdb_reader* r, uint32_t file_number)
34
static int pdb_init_segments(struct pdb_reader *r)
35
{
36
char *segs;
37
- unsigned stream_idx = r->sidx.segments;
38
+ unsigned stream_idx = r->segments;
39
40
segs = pdb_ds_read_file(r, stream_idx);
41
if (!segs) {
42
@@ -XXX,XX +XXX,XX @@ static int pdb_init_symbols(struct pdb_reader *r)
43
{
44
int err = 0;
45
PDB_SYMBOLS *symbols;
46
- PDB_STREAM_INDEXES *sidx = &r->sidx;
47
-
48
- memset(sidx, -1, sizeof(*sidx));
49
50
symbols = pdb_ds_read_file(r, 3);
51
if (!symbols) {
52
@@ -XXX,XX +XXX,XX @@ static int pdb_init_symbols(struct pdb_reader *r)
53
54
r->symbols = symbols;
55
56
- if (symbols->stream_index_size != sizeof(PDB_STREAM_INDEXES)) {
57
- err = 1;
58
- goto out_symbols;
59
- }
60
-
61
- memcpy(sidx, (const char *)symbols + sizeof(PDB_SYMBOLS) +
62
+ r->segments = *(uint16_t *)((const char *)symbols + sizeof(PDB_SYMBOLS) +
63
symbols->module_size + symbols->offset_size +
64
symbols->hash_size + symbols->srcmodule_size +
65
- symbols->pdbimport_size + symbols->unknown2_size, sizeof(*sidx));
66
+ symbols->pdbimport_size + symbols->unknown2_size +
67
+ offsetof(PDB_STREAM_INDEXES, segments));
68
69
/* Read global symbol table */
70
r->modimage = pdb_ds_read_file(r, symbols->gsym_file);
71
--
72
2.34.1
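The trick in this patch is to stop requiring the trailing index block to be exactly sizeof(PDB_STREAM_INDEXES) and instead read the single 16-bit field at its offsetof() position, so a PDB whose index block grew (as on Windows 11) still parses. Below is a small sketch of that pattern; the struct layout and byte values are illustrative, not the real PDB definitions.

/*
 * Sketch only: pull a single field out of a blob by its offset rather
 * than insisting on the full structure size.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct stream_indexes {
    uint16_t fpo;
    uint16_t segments;
    uint16_t unknown[4];
};

static uint16_t read_segments(const uint8_t *blob, size_t blob_size)
{
    uint16_t segments = 0xffff;   /* "not present" marker */

    /* Only the bytes covering 'segments' are required, so a larger
     * (newer) on-disk layout is still accepted. */
    if (blob_size >= offsetof(struct stream_indexes, segments) + sizeof(segments)) {
        memcpy(&segments, blob + offsetof(struct stream_indexes, segments),
               sizeof(segments));
    }
    return segments;
}

int main(void)
{
    /* fpo = 1, segments = 7 on a little-endian host */
    uint8_t blob[16] = { 0x01, 0x00, 0x07, 0x00 };

    printf("segments stream index: %u\n", read_segments(blob, sizeof(blob)));
    return 0;
}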