<meta>
This patch series has been through months of review and
refinement. It now has end-to-end Reviewed-by: tags and
all code patches but one have Tested-by: tags. No significant
issues have been found via review for some weeks.

The patch set creates two new subsystems:
  hw/display/apple-gfx
  hw/vmapple
so it doesn't fall within the responsibility of existing
maintainers. How do we proceed to get this merged now that
10.0 development is open?
</meta>

This patch set introduces a new ARM and macOS HVF specific machine type
called "vmapple", as well as a family of display devices based on the
ParavirtualizedGraphics.framework in macOS. One of the display adapter
variants, apple-gfx-mmio, is required for the new machine type, while
apple-gfx-pci can be used to enable 3D graphics acceleration with x86-64
...
hosts only because ParavirtualizedGraphics.framework is a black box
implementing most of the logic behind the apple-gfx device.)
* PCI devices use legacy IRQs, not MSI/MSI-X. As far as I can tell,
  we'd need to include the GICv3 ITS, but it's unclear to me what
  exactly needs wiring up.
* Due to a quirk (bug?) in the macOS XHCI driver when MSI-X is not
  available, correct functioning of the USB controller (and thus
  keyboard/tablet) requires a small workaround in the XHCI controller
  device. This is part of another patch series:
  https://patchew.org/QEMU/20241208191646.64857-1-phil@philjordan.eu/
* The guest OS must first be provisioned using Virtualization.framework;
  the disk images can subsequently be used in Qemu. (See docs.)

The apple-gfx device can be used independently from the vmapple machine
type, at least in the PCI variant. It mainly targets x86-64 macOS guests
...
CPU-based drawing. For maximum efficiency, the Metal texture
containing the guest framebuffer could be drawn directly to
a Metal view in the host window, staying on the GPU. (Similar
to the OpenGL/virgl render path on other platforms.)

Some of my part of this work has been sponsored by Sauce Labs Inc.

---

v2 -> v3:

...
docs.
* Various smaller fixes in apple-gfx/-mmio, apple-gfx-pci, vmapple/aes,
  vmapple/cfg, vmapple/virtio-blk, and vmapple machine type.
* Added SPDX license identifiers where they were missing.

v5 -> v6:

* 01/15 (main/Cocoa/runloop): Combined functions, fixed whitespace
* 02/15 (apple-gfx): Further refinement of PVG threading: reduced some callback
  tasks from BHs to merely acquiring RCU read lock; replaced some libdispatch
  tasks with BHs; last remaining synchronous BH now uses ephemeral
  QemuSemaphore.
* 02/15 (apple-gfx): Readability improvements and other smaller tweaks
  (see patch change notes for details)
* 04/15 (display modes): Replaced use of alloca() with NSMutableArray.

v6 -> v7:

* 02/15 (apple-gfx): Use g_ptr_array_find() helper function, coding style tweak
* 03/15 (apple-gfx-pci): Removed an unused function parameter
* 04/15 (apple-gfx display mode property): Simplified error handling in
  property parsing.
* 10/15 (vmapple/aes): Coding style tweaks.
* 12/15 (vmapple/cfg): Changed error messages for overrun of properties with
  fixed-length strings to be more useful to users than developers.
* 15/15 (vmapple machine type): Tiny error handling fix, un-inlined function

v7 -> v8:

* 02/15 (apple-gfx): Naming and type use improvements, fixes for a bug and a
  leak.
* 04/15 (apple-gfx display mode property): Type use improvement
* 10/15 (vmapple/aes): Guest error logging tweaks.
* 11/15 (vmapple/bdif): Replaced uses of cpu_physical_memory_read with
  dma_memory_read, and a g_free call with g_autofree.
* 12/15 (vmapple/cfg): Macro hygiene fix: consistently enclosing arguments in
  parens.
* 15/15 (vmapple machine type): Use less verbose pattern for defining uuid
  property.

v8 -> v9:

* 01/16 (ui & main loop): Set qemu_main to NULL for GTK UI as well.
* 02/16 (apple-gfx): Pass device pointer to graphic_console_init(), various
  non-functional changes.
* 03/16 (apple-gfx-pci): Fixup of changed common call, whitespace and comment
  formatting tweaks.
* 04/16 (apple-gfx display modes): Re-ordered type definitions so we can drop
  a 'struct' keyword.
* 10/16 (vmapple/aes): Replaced a use of cpu_physical_memory_write with
  dma_memory_write, minor style tweak.
* 11/16 (vmapple/bdif): Replaced uses of cpu_physical_memory_write with
  dma_memory_write.
* 13/16 (vmapple/virtio-blk): Correctly specify class_size for
  VMAppleVirtIOBlkClass.
* 15/16 (vmapple machine type): Documentation improvements, fixed variable
  name and struct field used during pvpanic device creation.
* 16/16 (NEW/RFC vmapple/virtio-blk): Proposed change to replace type hierarchy
  with a variant property. This seems cleaner and less confusing than the
  original approach to me, but I'm not sure if it warrants creation of a new
  QAPI enum and property type definition.

v9 -> v10:

* 01/15 (ui & main loop): Added comments to qemu_main declaration and GTK.
* 02/15 (apple-gfx): Reworked the way frame rendering code is threaded to use
  BHs for sections requiring BQL.
* 02/15 (apple-gfx): Fixed ./configure error on non-macOS platforms.
* 10/15 (vmapple/aes): Code style and comment improvements.
* 12/15 (vmapple/cfg): Slightly tidier error reporting for overlong property
  values.
* 13/15 (vmapple/virtio-blk): Folded v9 patch 16/16 into this one, changing
  the device type design to provide a single device type with a variant
  property instead of 2 different subtypes for aux and root volumes.
* 15/15 (vmapple machine type): Documentation fixup for changed virtio-blk
  device type; small improvements to shell commands in documentation;
  improved propagation of errors during cfg device instantiation.

v10 -> v11:

* 01/15 (ui & main loop): Simplified main.c, better comments & commit message
* 02/15 (apple-gfx): Give each PV display instance a unique serial number.
* 02 & 03/15 (apple-gfx, -pci): Formatting/style tweaks
* 15/15 (vmapple machine type): Improvements to shell code in docs

v11 -> v12:

* 01/15 (ui & main loop): More precise wording of code comments.
* 02/15 (apple-gfx): Fixed memory management regressions introduced in v10;
  improved error handling; various more cosmetic code adjustments.
* 09/15 (GPEX): Fixed uses of deleted GPEX_NUM_IRQS constant that have been
  added to QEMU since this patch was originally written.

v12 -> v13:

* 15/15 (vmapple machine type): Bumped the machine type version from 9.2
  to 10.0.
* All patches in the series have now been positively reviewed and received
  corresponding Reviewed-by tags.

v13 -> v14:

* 6/15 (hw/vmapple directory): Changed myself from reviewer
  to maintainer, as that seemed appropriate at this point.
* 15/15 (vmapple machine type): Gate creation of XHCI and
  USB HID devices behind if (defaults_enabled()); see the sketch below.
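
(For reference, a minimal illustration of that kind of gating. defaults_enabled()
is the existing QEMU helper; the wrapper function and the create_usb_devices()
call are illustrative stand-ins, not the actual patch code.)

    #include "qemu/osdep.h"
    #include "hw/boards.h"        /* MachineState */
    #include "sysemu/sysemu.h"    /* defaults_enabled() */

    static void vmapple_maybe_create_usb(MachineState *machine)
    {
        if (!defaults_enabled()) {
            /* -nodefaults given: leave USB wiring entirely to the user */
            return;
        }
        /* XHCI controller plus USB HID keyboard/tablet (illustrative call) */
        create_usb_devices(machine);
    }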

Alexander Graf (9):
  hw: Add vmapple subdir
  hw/misc/pvpanic: Add MMIO interface
  hvf: arm: Ignore writes to CNTP_CTL_EL0
  gpex: Allow more than 4 legacy IRQs
...
  hw/display/apple-gfx: Adds PCI implementation
  hw/display/apple-gfx: Adds configurable mode list
  MAINTAINERS: Add myself as maintainer for apple-gfx, reviewer for HVF
  hw/block/virtio-blk: Replaces request free function with g_free

 MAINTAINERS | 15 +
 contrib/vmapple/uuid.sh | 9 +
 docs/system/arm/vmapple.rst | 63 ++
 docs/system/target-arm.rst | 1 +
 hw/Kconfig | 1 +
 hw/arm/sbsa-ref.c | 2 +-
 hw/arm/virt.c | 2 +-
 hw/block/virtio-blk.c | 58 +-
 hw/core/qdev-properties-system.c | 8 +
 hw/display/Kconfig | 13 +
 hw/display/apple-gfx-mmio.m | 289 +++++++++
 hw/display/apple-gfx-pci.m | 157 +++++
 hw/display/apple-gfx.h | 77 +++
 hw/display/apple-gfx.m | 880 ++++++++++++++++++++++++++++
 hw/display/meson.build | 7 +
 hw/display/trace-events | 30 +
 hw/i386/microvm.c | 2 +-
 hw/loongarch/virt.c | 12 +-
 hw/meson.build | 1 +
 hw/mips/loongson3_virt.c | 2 +-
 hw/misc/Kconfig | 4 +
 hw/misc/meson.build | 1 +
 hw/misc/pvpanic-mmio.c | 61 ++
 hw/openrisc/virt.c | 12 +-
 hw/pci-host/gpex.c | 43 +-
 hw/riscv/virt.c | 12 +-
 hw/vmapple/Kconfig | 32 +
 hw/vmapple/aes.c | 581 ++++++++++++++++++
 hw/vmapple/bdif.c | 275 +++++++++
 hw/vmapple/cfg.c | 196 +++++++
 hw/vmapple/meson.build | 5 +
 hw/vmapple/trace-events | 21 +
 hw/vmapple/trace.h | 1 +
 hw/vmapple/virtio-blk.c | 205 +++++++
 hw/vmapple/vmapple.c | 648 ++++++++++++++++++++
 hw/xen/xen-pvh-common.c | 2 +-
 hw/xtensa/virt.c | 2 +-
 include/hw/misc/pvpanic.h | 1 +
 include/hw/pci-host/gpex.h | 7 +-
 include/hw/pci/pci_ids.h | 1 +
 include/hw/qdev-properties-system.h | 5 +
 include/hw/virtio/virtio-blk.h | 11 +-
 include/hw/vmapple/vmapple.h | 23 +
 include/qemu-main.h | 14 +-
 include/qemu/cutils.h | 15 +
 meson.build | 5 +
 qapi/virtio.json | 14 +
 system/main.c | 37 +-
 target/arm/hvf/hvf.c | 9 +
 ui/cocoa.m | 54 +-
 ui/gtk.c | 4 +
 ui/sdl2.c | 4 +
 util/hexdump.c | 18 +
 53 files changed, 3842 insertions(+), 110 deletions(-)
 create mode 100755 contrib/vmapple/uuid.sh
 create mode 100644 docs/system/arm/vmapple.rst
 create mode 100644 hw/display/apple-gfx-mmio.m
 create mode 100644 hw/display/apple-gfx-pci.m
 create mode 100644 hw/display/apple-gfx.h
...
 create mode 100644 hw/vmapple/virtio-blk.c
 create mode 100644 hw/vmapple/vmapple.c
 create mode 100644 include/hw/vmapple/vmapple.h

--
2.39.5 (Apple Git-154)

macOS's Cocoa event handling must be done on the initial (main) thread
of the process. Furthermore, if library or application code uses
libdispatch, the main dispatch queue must be handling events on the main
thread as well.

So far, this has affected Qemu in both the Cocoa and SDL UIs, although
in different ways: the Cocoa UI replaces the default qemu_main function
with one that spins Qemu's internal main event loop off onto a
background thread. SDL (which uses Cocoa internally) on the other hand
uses a polling approach within Qemu's main event loop. Events are
polled during the SDL UI's dpy_refresh callback, which happens to run
on the main thread by default.

As UIs are mutually exclusive, this works OK as long as nothing else
needs platform-native event handling. In the next patch, a new device is
introduced based on the ParavirtualizedGraphics.framework in macOS.
This uses libdispatch internally, and only works when events are being
handled on the main runloop. With the current system, it works when
using either the Cocoa or the SDL UI. However, it does not when running
headless. Moreover, any attempt to install a similar scheme to the
Cocoa UI's main thread replacement fails when combined with the SDL
UI.

This change tidies up main thread management to be more flexible.

* The qemu_main global function pointer is a custom function for the
  main thread, and it may now be NULL. When it is, the main thread
  runs the main Qemu loop. This represents the traditional setup.
* When non-null, spawning the main Qemu event loop on a separate
  thread is now done centrally rather than inside the Cocoa UI code.
* For most platforms, qemu_main is indeed NULL by default, but on
  Darwin, it defaults to a function that runs the CFRunLoop.
* The Cocoa UI sets qemu_main to a function which runs the
  NSApplication event handling runloop, as is usual for a Cocoa app.
* The SDL UI overrides the qemu_main function to NULL, thus
  specifying that Qemu's main loop must run on the main
  thread.
* For other UIs, or in the absence of UIs, the platform's default
  behaviour is followed.

This means that on macOS, the platform's runloop events are always
handled, regardless of chosen UI. The new PV graphics device will
thus work in all configurations. There is no functional change on other
operating systems.

Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
---

v5:

* Simplified the way of setting/clearing the main loop by going back
  to setting qemu_main directly, but narrowing the scope of what it
  needs to do, and it can now be NULL.
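
For illustration, the resulting main() dispatch boils down to the following
shape. This is a condensed sketch of the system/main.c changes below, not a
literal copy:

    int main(int argc, char **argv)
    {
        qemu_init(argc, argv);
        if (qemu_main) {
            /*
             * A platform/UI-specific main function is installed (CFRunLoop by
             * default on Darwin, [NSApp run] when the Cocoa UI is active):
             * run QEMU's own main loop on a new thread and hand the initial
             * thread over to that function.
             */
            qemu_run_default_main_on_new_thread();
            bql_unlock();
            return qemu_main();
        } else {
            /* Traditional setup: QEMU's main loop owns the initial thread. */
            return qemu_default_main();
        }
    }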

 include/qemu-main.h | 3 +--
 include/qemu/typedefs.h | 1 +
 system/main.c | 56 +++++++++++++++++++++++++++++++++++----
 ui/cocoa.m | 58 +++++++++++------------------------------
 ui/sdl2.c | 4 +++
 5 files changed, 72 insertions(+), 50 deletions(-)

diff --git a/include/qemu-main.h b/include/qemu-main.h
63
index XXXXXXX..XXXXXXX 100644
64
--- a/include/qemu-main.h
65
+++ b/include/qemu-main.h
66
@@ -XXX,XX +XXX,XX @@
67
#ifndef QEMU_MAIN_H
68
#define QEMU_MAIN_H
69
70
-int qemu_default_main(void);
71
-extern int (*qemu_main)(void);
72
+extern qemu_main_fn qemu_main;
73
74
#endif /* QEMU_MAIN_H */
75
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
76
index XXXXXXX..XXXXXXX 100644
77
--- a/include/qemu/typedefs.h
78
+++ b/include/qemu/typedefs.h
79
@@ -XXX,XX +XXX,XX @@ typedef struct IRQState *qemu_irq;
80
* Function types
81
*/
82
typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
83
+typedef int (*qemu_main_fn)(void);
84
85
#endif /* QEMU_TYPEDEFS_H */
86
diff --git a/system/main.c b/system/main.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/system/main.c
89
+++ b/system/main.c
90
@@ -XXX,XX +XXX,XX @@
91
92
#include "qemu/osdep.h"
93
#include "qemu-main.h"
94
+#include "qemu/main-loop.h"
95
#include "sysemu/sysemu.h"
96
97
-#ifdef CONFIG_SDL
98
-#include <SDL.h>
99
+#ifdef CONFIG_DARWIN
100
+#include <CoreFoundation/CoreFoundation.h>
101
#endif
102
103
-int qemu_default_main(void)
104
+static int qemu_default_main(void)
105
{
106
int status;
107
108
@@ -XXX,XX +XXX,XX @@ int qemu_default_main(void)
109
return status;
110
}
111
112
-int (*qemu_main)(void) = qemu_default_main;
113
+/*
114
+ * Various macOS system libraries, including the Cocoa UI and anything using
115
+ * libdispatch, such as ParavirtualizedGraphics.framework, requires that the
116
+ * main runloop, on the main (initial) thread be running or at least regularly
117
+ * polled for events. A special mode is therefore supported, where the QEMU
118
+ * main loop runs on a separate thread and the main thread handles the
119
+ * CF/Cocoa runloop.
120
+ */
121
+
122
+static void *call_qemu_default_main(void *opaque)
123
+{
124
+ int status;
125
+
126
+ bql_lock();
127
+ status = qemu_default_main();
128
+ bql_unlock();
129
+
130
+ exit(status);
131
+}
132
+
133
+static void qemu_run_default_main_on_new_thread(void)
134
+{
135
+ QemuThread thread;
136
+
137
+ qemu_thread_create(&thread, "qemu_main", call_qemu_default_main,
138
+ NULL, QEMU_THREAD_DETACHED);
139
+}
140
+
141
+
142
+#ifdef CONFIG_DARWIN
143
+static int os_darwin_cfrunloop_main(void)
144
+{
145
+ CFRunLoopRun();
146
+ abort();
147
+}
148
+
149
+qemu_main_fn qemu_main = os_darwin_cfrunloop_main;
150
+#else
151
+qemu_main_fn qemu_main;
152
+#endif
153
154
int main(int argc, char **argv)
155
{
156
qemu_init(argc, argv);
157
- return qemu_main();
158
+ if (qemu_main) {
159
+ qemu_run_default_main_on_new_thread();
160
+ bql_unlock();
161
+ return qemu_main();
162
+ } else {
163
+ qemu_default_main();
164
+ }
165
}
166
diff --git a/ui/cocoa.m b/ui/cocoa.m
167
index XXXXXXX..XXXXXXX 100644
168
--- a/ui/cocoa.m
169
+++ b/ui/cocoa.m
170
@@ -XXX,XX +XXX,XX @@
171
int height;
172
} QEMUScreen;
173
174
+@class QemuCocoaPasteboardTypeOwner;
175
+
176
static void cocoa_update(DisplayChangeListener *dcl,
177
int x, int y, int w, int h);
178
179
@@ -XXX,XX +XXX,XX @@ static void cocoa_switch(DisplayChangeListener *dcl,
180
static NSInteger cbchangecount = -1;
181
static QemuClipboardInfo *cbinfo;
182
static QemuEvent cbevent;
183
+static QemuCocoaPasteboardTypeOwner *cbowner;
184
185
// Utility functions to run specified code block with the BQL held
186
typedef void (^CodeBlock)(void);
187
@@ -XXX,XX +XXX,XX @@ - (void) dealloc
188
{
189
COCOA_DEBUG("QemuCocoaAppController: dealloc\n");
190
191
- if (cocoaView)
192
- [cocoaView release];
193
+ [cocoaView release];
194
+ [cbowner release];
195
+ cbowner = nil;
196
+
197
[super dealloc];
198
}
199
200
@@ -XXX,XX +XXX,XX @@ - (void)pasteboard:(NSPasteboard *)sender provideDataForType:(NSPasteboardType)t
201
202
@end
203
204
-static QemuCocoaPasteboardTypeOwner *cbowner;
205
-
206
static void cocoa_clipboard_notify(Notifier *notifier, void *data);
207
static void cocoa_clipboard_request(QemuClipboardInfo *info,
208
QemuClipboardType type);
209
@@ -XXX,XX +XXX,XX @@ static void cocoa_clipboard_request(QemuClipboardInfo *info,
210
}
211
}
212
213
-/*
214
- * The startup process for the OSX/Cocoa UI is complicated, because
215
- * OSX insists that the UI runs on the initial main thread, and so we
216
- * need to start a second thread which runs the qemu_default_main():
217
- * in main():
218
- * in cocoa_display_init():
219
- * assign cocoa_main to qemu_main
220
- * create application, menus, etc
221
- * in cocoa_main():
222
- * create qemu-main thread
223
- * enter OSX run loop
224
- */
225
-
226
-static void *call_qemu_main(void *opaque)
227
-{
228
- int status;
229
-
230
- COCOA_DEBUG("Second thread: calling qemu_default_main()\n");
231
- bql_lock();
232
- status = qemu_default_main();
233
- bql_unlock();
234
- COCOA_DEBUG("Second thread: qemu_default_main() returned, exiting\n");
235
- [cbowner release];
236
- exit(status);
237
-}
238
-
239
static int cocoa_main(void)
240
{
241
- QemuThread thread;
242
-
243
- COCOA_DEBUG("Entered %s()\n", __func__);
244
-
245
- bql_unlock();
246
- qemu_thread_create(&thread, "qemu_main", call_qemu_main,
247
- NULL, QEMU_THREAD_DETACHED);
248
-
249
- // Start the main event loop
250
COCOA_DEBUG("Main thread: entering OSX run loop\n");
251
[NSApp run];
252
COCOA_DEBUG("Main thread: left OSX run loop, which should never happen\n");
253
@@ -XXX,XX +XXX,XX @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)
254
255
COCOA_DEBUG("qemu_cocoa: cocoa_display_init\n");
256
257
- qemu_main = cocoa_main;
258
-
259
// Pull this console process up to being a fully-fledged graphical
260
// app with a menubar and Dock icon
261
ProcessSerialNumber psn = { 0, kCurrentProcess };
262
@@ -XXX,XX +XXX,XX @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)
263
qemu_clipboard_peer_register(&cbpeer);
264
265
[pool release];
266
+
267
+ /*
268
+ * The Cocoa UI will run the NSApplication runloop on the main thread
269
+ * rather than the default Core Foundation one.
270
+ */
271
+ qemu_main = cocoa_main;
272
}
273
274
static QemuDisplay qemu_display_cocoa = {
275
- .type = DISPLAY_TYPE_COCOA,
276
- .init = cocoa_display_init,
277
+ .type = DISPLAY_TYPE_COCOA,
278
+ .init = cocoa_display_init,
279
};
280
281
static void register_cocoa(void)
282
diff --git a/ui/sdl2.c b/ui/sdl2.c
283
index XXXXXXX..XXXXXXX 100644
284
--- a/ui/sdl2.c
285
+++ b/ui/sdl2.c
286
@@ -XXX,XX +XXX,XX @@
287
#include "sysemu/sysemu.h"
288
#include "ui/win32-kbd-hook.h"
289
#include "qemu/log.h"
290
+#include "qemu-main.h"
291
292
static int sdl2_num_outputs;
293
static struct sdl2_console *sdl2_console;
294
@@ -XXX,XX +XXX,XX @@ static void sdl2_display_init(DisplayState *ds, DisplayOptions *o)
295
}
296
297
atexit(sdl_cleanup);
298
+
299
+ /* SDL's event polling (in dpy_refresh) must happen on the main thread. */
300
+ qemu_main = NULL;
301
}
302
303
static QemuDisplay qemu_display_sdl2 = {
304
--
305
2.39.3 (Apple Git-145)
MacOS provides a framework (library) that allows any VMM to implement a
paravirtualized 3D graphics passthrough to the host Metal stack, called
ParavirtualizedGraphics.Framework (PVG). The library abstracts away
almost every aspect of the paravirtualized device model and only provides
and receives callbacks on MMIO access as well as to share memory address
space between the VM and PVG.

This patch implements a QEMU device that drives PVG for the VMApple
variant of it.
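
(As a rough orientation for reviewers: the glue essentially exposes an MMIO
region whose handlers forward each guest access to the PVG framework object.
The following is simplified from the apple-gfx code in this patch, with the
handler bodies omitted:)

    static uint64_t apple_gfx_read(void *opaque, hwaddr offset, unsigned size);
    static void apple_gfx_write(void *opaque, hwaddr offset, uint64_t val,
                                unsigned size);

    static const MemoryRegionOps apple_gfx_ops = {
        /* read forwards to -[PGDevice mmioReadAtOffset:] */
        .read = apple_gfx_read,
        /* write forwards to -[PGDevice mmioWriteAtOffset:value:] */
        .write = apple_gfx_write,
        .endianness = DEVICE_LITTLE_ENDIAN,
    };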

Signed-off-by: Alexander Graf <graf@amazon.com>
Co-authored-by: Alexander Graf <graf@amazon.com>

Subsequent changes:

* Cherry-pick/rebase conflict fixes, API use updates.
* Moved from hw/vmapple/ (useful outside that machine type)
* Overhaul of threading model, many thread safety improvements.
* Asynchronous rendering.
* Memory and object lifetime fixes.
* Refactoring to split generic and (vmapple) MMIO variant specific
  code.

Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
---
v2:

* Cherry-pick/rebase conflict fixes
* BQL function renaming
* Moved from hw/vmapple/ (useful outside that machine type)
* Code review comments: Switched to DEFINE_TYPES macro & little endian
  MMIO.
* Removed some dead/superfluous code
* Made set_mode thread & memory safe
* Added migration blocker due to lack of (de-)serialisation.
* Fixes to ObjC refcounting and autorelease pool usage.
* Fixed ObjC new/init misuse
* Switched to ObjC category extension for private property.
* Simplified task memory mapping and made it thread safe.
* Refactoring to split generic and vmapple MMIO variant specific
  code.
* Switched to asynchronous MMIO writes on x86-64
* Rendering and graphics update are now done asynchronously
* Fixed cursor handling
* Coding convention fixes
* Removed software cursor compositing

v3:

* Rebased on latest upstream, fixed breakages including switching to
  Resettable methods.
* Squashed patches dealing with dGPUs, MMIO area size, and GPU picking.
* Allow re-entrant MMIO; this simplifies the code and solves the divergence
  between x86-64 and arm64 variants.

v4:

* Renamed '-vmapple' device variant to '-mmio'
* MMIO device type now requires aarch64 host and guest
* Complete overhaul of the glue code for making Qemu's and
  ParavirtualizedGraphics.framework's threading and synchronisation models
  work together. Calls into PVG are from dispatch queues while the
  BQL-holding initiating thread processes AIO context events; callbacks from
  PVG are scheduled as BHs on the BQL/main AIO context, awaiting completion
  where necessary. (See the sketch after this list.)
* Guest frame rendering state is covered by the BQL, with only the PVG calls
  outside the lock, and serialised on the named render_queue.
* Simplified logic for dropping frames in-flight during mode changes, fixed
  bug in pending frames logic.
* Addressed smaller code review notes such as: function naming, object type
  declarations, type names/declarations/casts, code formatting, #include
  order, over-cautious ObjC retain/release, what goes in init vs realize,
  etc.
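
A minimal, self-contained sketch of that completion pattern (the real code
wraps it in the apple_gfx_await_bh_job() helper and uses one QemuCond per
callback type; the names here are illustrative):

    typedef struct Job {
        QemuMutex *mutex;   /* the device's job_mutex in the real code */
        QemuCond *cond;     /* per-callback condition variable */
        bool done;
    } Job;

    /* Runs as a BH on the main AIO context, i.e. with the BQL held. */
    static void do_job_bh(void *opaque)
    {
        Job *job = opaque;
        /* ... work that needs the BQL ... */
        qemu_mutex_lock(job->mutex);
        job->done = true;
        qemu_cond_broadcast(job->cond);
        qemu_mutex_unlock(job->mutex);
    }

    /* Called from a PVG callback on a libdispatch queue thread. */
    static void run_job_and_wait(Job *job)
    {
        aio_bh_schedule_oneshot(qemu_get_aio_context(), do_job_bh, job);
        qemu_mutex_lock(job->mutex);
        while (!job->done) {
            qemu_cond_wait(job->cond, job->mutex);
        }
        qemu_mutex_unlock(job->mutex);
    }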

v5:

* Smaller non-functional fixes in response to review comments, such as using
  NULL for the AIO_WAIT_WHILE context argument, type name formatting,
  deleting leftover debug code, logging improvements, state struct field
  order and documentation improvements, etc.
* Instead of a single condition variable for all synchronous BH job types,
  there is now one for each callback block. This reduces the number
  of threads being awoken unnecessarily to near zero.
* MMIO device variant: Unified the BH job for raising interrupts.
* Use DMA APIs for PVG framework's guest memory read requests.
* Thread safety improvements: ensure mutable AppleGFXState fields are not
  accessed outside the appropriate lock. Added dedicated mutex for the task
  list.
* Retain references to MemoryRegions for which there exist mappings in each
  PGTask, and for IOSurface mappings.

 hw/display/Kconfig | 9 +
 hw/display/apple-gfx-mmio.m | 387 ++++++++++++++++++
 hw/display/apple-gfx.h | 79 ++++
 hw/display/apple-gfx.m | 773 ++++++++++++++++++++++++++++++
 hw/display/meson.build | 4 +
 hw/display/trace-events | 28 ++
 meson.build | 4 +
 7 files changed, 1284 insertions(+)
 create mode 100644 hw/display/apple-gfx-mmio.m
 create mode 100644 hw/display/apple-gfx.h
 create mode 100644 hw/display/apple-gfx.m

diff --git a/hw/display/Kconfig b/hw/display/Kconfig
104
index XXXXXXX..XXXXXXX 100644
105
--- a/hw/display/Kconfig
106
+++ b/hw/display/Kconfig
107
@@ -XXX,XX +XXX,XX @@ config XLNX_DISPLAYPORT
108
109
config DM163
110
bool
111
+
112
+config MAC_PVG
113
+ bool
114
+ default y
115
+
116
+config MAC_PVG_MMIO
117
+ bool
118
+ depends on MAC_PVG && AARCH64
119
+
120
diff --git a/hw/display/apple-gfx-mmio.m b/hw/display/apple-gfx-mmio.m
121
new file mode 100644
122
index XXXXXXX..XXXXXXX
123
--- /dev/null
124
+++ b/hw/display/apple-gfx-mmio.m
125
@@ -XXX,XX +XXX,XX @@
126
+/*
127
+ * QEMU Apple ParavirtualizedGraphics.framework device, MMIO (arm64) variant
128
+ *
129
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
130
+ *
131
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
132
+ * See the COPYING file in the top-level directory.
133
+ *
134
+ * SPDX-License-Identifier: GPL-2.0-or-later
135
+ *
136
+ * ParavirtualizedGraphics.framework is a set of libraries that macOS provides
137
+ * which implements 3d graphics passthrough to the host as well as a
138
+ * proprietary guest communication channel to drive it. This device model
139
+ * implements support to drive that library from within QEMU as an MMIO-based
140
+ * system device for macOS on arm64 VMs.
141
+ */
142
+
143
+#include "qemu/osdep.h"
144
+#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
145
+#include "apple-gfx.h"
146
+#include "monitor/monitor.h"
147
+#include "hw/sysbus.h"
148
+#include "hw/irq.h"
149
+#include "trace.h"
150
+#include "qemu/log.h"
151
+
152
+OBJECT_DECLARE_SIMPLE_TYPE(AppleGFXMMIOState, APPLE_GFX_MMIO)
153
+
154
+/*
155
+ * ParavirtualizedGraphics.Framework only ships header files for the PCI
156
+ * variant which does not include IOSFC descriptors and host devices. We add
157
+ * their definitions here so that we can also work with the ARM version.
158
+ */
159
+typedef bool(^IOSFCRaiseInterrupt)(uint32_t vector);
160
+typedef bool(^IOSFCUnmapMemory)(
161
+ void *, void *, void *, void *, void *, void *);
162
+typedef bool(^IOSFCMapMemory)(
163
+ uint64_t phys, uint64_t len, bool ro, void **va, void *, void *);
164
+
165
+@interface PGDeviceDescriptor (IOSurfaceMapper)
166
+@property (readwrite, nonatomic) bool usingIOSurfaceMapper;
167
+@end
168
+
169
+@interface PGIOSurfaceHostDeviceDescriptor : NSObject
170
+-(PGIOSurfaceHostDeviceDescriptor *)init;
171
+@property (readwrite, nonatomic, copy, nullable) IOSFCMapMemory mapMemory;
172
+@property (readwrite, nonatomic, copy, nullable) IOSFCUnmapMemory unmapMemory;
173
+@property (readwrite, nonatomic, copy, nullable) IOSFCRaiseInterrupt raiseInterrupt;
174
+@end
175
+
176
+@interface PGIOSurfaceHostDevice : NSObject
177
+-(instancetype)initWithDescriptor:(PGIOSurfaceHostDeviceDescriptor *)desc;
178
+-(uint32_t)mmioReadAtOffset:(size_t)offset;
179
+-(void)mmioWriteAtOffset:(size_t)offset value:(uint32_t)value;
180
+@end
181
+
182
+struct AppleGFXMapSurfaceMemoryJob;
183
+struct AppleGFXMMIOState {
184
+ SysBusDevice parent_obj;
185
+
186
+ AppleGFXState common;
187
+
188
+ QemuCond iosfc_map_job_cond;
189
+ QemuCond iosfc_unmap_job_cond;
190
+ qemu_irq irq_gfx;
191
+ qemu_irq irq_iosfc;
192
+ MemoryRegion iomem_iosfc;
193
+ PGIOSurfaceHostDevice *pgiosfc;
194
+
195
+ GArray *iosfc_mapped_regions; /* array of AppleGFXMMIOMappedRegion */
196
+};
197
+
198
+typedef struct AppleGFXMMIOJob {
199
+ AppleGFXMMIOState *state;
200
+ uint64_t offset;
201
+ uint64_t value;
202
+ bool completed;
203
+} AppleGFXMMIOJob;
204
+
205
+static void iosfc_do_read(void *opaque)
206
+{
207
+ AppleGFXMMIOJob *job = opaque;
208
+ job->value = [job->state->pgiosfc mmioReadAtOffset:job->offset];
209
+ qatomic_set(&job->completed, true);
210
+ aio_wait_kick();
211
+}
212
+
213
+static uint64_t iosfc_read(void *opaque, hwaddr offset, unsigned size)
214
+{
215
+ AppleGFXMMIOJob job = {
216
+ .state = opaque,
217
+ .offset = offset,
218
+ .completed = false,
219
+ };
220
+ dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
221
+
222
+ dispatch_async_f(queue, &job, iosfc_do_read);
223
+ AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
224
+
225
+ trace_apple_gfx_mmio_iosfc_read(offset, job.value);
226
+ return job.value;
227
+}
228
+
229
+static void iosfc_do_write(void *opaque)
230
+{
231
+ AppleGFXMMIOJob *job = opaque;
232
+ [job->state->pgiosfc mmioWriteAtOffset:job->offset value:job->value];
233
+ qatomic_set(&job->completed, true);
234
+ aio_wait_kick();
235
+}
236
+
237
+static void iosfc_write(void *opaque, hwaddr offset, uint64_t val,
238
+ unsigned size)
239
+{
240
+ AppleGFXMMIOJob job = {
241
+ .state = opaque,
242
+ .offset = offset,
243
+ .value = val,
244
+ .completed = false,
245
+ };
246
+ dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
247
+
248
+ dispatch_async_f(queue, &job, iosfc_do_write);
249
+ AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
250
+
251
+ trace_apple_gfx_mmio_iosfc_write(offset, val);
252
+}
253
+
254
+static const MemoryRegionOps apple_iosfc_ops = {
255
+ .read = iosfc_read,
256
+ .write = iosfc_write,
257
+ .endianness = DEVICE_LITTLE_ENDIAN,
258
+ .valid = {
259
+ .min_access_size = 4,
260
+ .max_access_size = 8,
261
+ },
262
+ .impl = {
263
+ .min_access_size = 4,
264
+ .max_access_size = 8,
265
+ },
266
+};
267
+
268
+static void raise_irq(void *opaque)
269
+{
270
+ qemu_irq *irq = opaque;
271
+
272
+ qemu_irq_pulse(*irq);
273
+}
274
+
275
+typedef struct AppleGFXMapSurfaceMemoryJob {
276
+ uint64_t guest_physical_address;
277
+ uint64_t guest_physical_length;
278
+ void *result_mem;
279
+ AppleGFXMMIOState *state;
280
+ bool read_only;
281
+ bool success;
282
+ bool done;
283
+} AppleGFXMapSurfaceMemoryJob;
284
+
285
+typedef struct AppleGFXMMIOMappedRegion {
286
+ MemoryRegion *region;
287
+ uint64_t map_count;
288
+ uintptr_t host_virtual_start;
289
+ uintptr_t host_virtual_end;
290
+} AppleGFXMMIOMappedRegion;
291
+
292
+static void apple_gfx_mmio_map_surface_memory(void *opaque)
293
+{
294
+ AppleGFXMapSurfaceMemoryJob *job = opaque;
295
+ AppleGFXMMIOState *s = job->state;
296
+ mach_vm_address_t mem;
297
+ MemoryRegion *region = NULL;
298
+ GArray *regions = s->iosfc_mapped_regions;
299
+ AppleGFXMMIOMappedRegion *mapped_region;
300
+ size_t i;
301
+
302
+ mem = apple_gfx_host_address_for_gpa_range(job->guest_physical_address,
303
+ job->guest_physical_length,
304
+ job->read_only, &region);
305
+
306
+ if (mem != 0) {
307
+ for (i = 0; i < regions->len; ++i) {
308
+ mapped_region = &g_array_index(regions, AppleGFXMMIOMappedRegion, i);
309
+ if (region == mapped_region->region) {
310
+ ++mapped_region->map_count;
311
+ break;
312
+ }
313
+ }
314
+
315
+ if (i >= regions->len) {
316
+ /* No existing mapping to this region found, keep a reference and save
317
+ */
318
+ uintptr_t start = (uintptr_t)memory_region_get_ram_ptr(region);
319
+ AppleGFXMMIOMappedRegion new_region = {
320
+ region, 1,
321
+ start,
322
+ start + memory_region_size(region)
323
+ };
324
+ memory_region_ref(region);
325
+ g_array_append_val(regions, new_region);
326
+ trace_apple_gfx_iosfc_map_memory_new_region(
327
+ i, region, start, new_region.host_virtual_end);
328
+ }
329
+ }
330
+
331
+ qemu_mutex_lock(&s->common.job_mutex);
332
+ job->result_mem = (void *)mem;
333
+ job->success = mem != 0;
334
+ job->done = true;
335
+ qemu_cond_broadcast(&s->iosfc_map_job_cond);
336
+ qemu_mutex_unlock(&s->common.job_mutex);
337
+}
338
+
339
+typedef struct AppleGFXUnmapSurfaceMemoryJob {
340
+ void *virtual_address;
341
+ AppleGFXMMIOState *state;
342
+ bool done;
343
+} AppleGFXUnmapSurfaceMemoryJob;
344
+
345
+static AppleGFXMMIOMappedRegion *find_mapped_region_containing(GArray *regions,
346
+ uintptr_t va,
347
+ size_t *index)
348
+{
349
+ size_t i;
350
+ AppleGFXMMIOMappedRegion *mapped_region;
351
+
352
+ for (i = 0; i < regions->len; ++i) {
353
+ mapped_region = &g_array_index(regions, AppleGFXMMIOMappedRegion, i);
354
+ if (va >= mapped_region->host_virtual_start &&
355
+ va < mapped_region->host_virtual_end) {
356
+ *index = i;
357
+ return mapped_region;
358
+ }
359
+ }
360
+ return NULL;
361
+}
362
+
363
+static void apple_gfx_mmio_unmap_surface_memory(void *opaque)
364
+{
365
+ AppleGFXUnmapSurfaceMemoryJob *job = opaque;
366
+ AppleGFXMMIOState *s = job->state;
367
+ uintptr_t mem = (uintptr_t)job->virtual_address;
368
+ GArray *regions = s->iosfc_mapped_regions;
369
+ size_t region_index;
370
+ AppleGFXMMIOMappedRegion *mapped_region =
371
+ find_mapped_region_containing(regions, mem, &region_index);
372
+
373
+ if (mapped_region) {
374
+ trace_apple_gfx_iosfc_unmap_memory_region(mem, region_index, mapped_region->map_count, mapped_region->region);
375
+ if (--mapped_region->map_count == 0) {
376
+ memory_region_unref(mapped_region->region);
377
+ g_array_remove_index_fast(regions, region_index);
378
+ }
379
+ } else {
380
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: memory at %p to be unmapped not "
381
+ "found in any of %u mapped regions.\n",
382
+ __func__,
383
+ job->virtual_address, regions->len);
384
+ }
385
+
386
+ qemu_mutex_lock(&s->common.job_mutex);
387
+ job->done = true;
388
+ qemu_cond_broadcast(&s->iosfc_unmap_job_cond);
389
+ qemu_mutex_unlock(&s->common.job_mutex);
390
+}
391
+
392
+static PGIOSurfaceHostDevice *apple_gfx_prepare_iosurface_host_device(
393
+ AppleGFXMMIOState *s)
394
+{
395
+ PGIOSurfaceHostDeviceDescriptor *iosfc_desc =
396
+ [PGIOSurfaceHostDeviceDescriptor new];
397
+ PGIOSurfaceHostDevice *iosfc_host_dev = nil;
398
+
399
+ iosfc_desc.mapMemory =
400
+ ^bool(uint64_t phys, uint64_t len, bool ro, void **va, void *e, void *f) {
401
+ AppleGFXMapSurfaceMemoryJob job = {
402
+ .guest_physical_address = phys, .guest_physical_length = len,
403
+ .read_only = ro, .state = s,
404
+ };
405
+
406
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
407
+ apple_gfx_mmio_map_surface_memory, &job);
408
+ apple_gfx_await_bh_job(&s->common, &s->iosfc_map_job_cond, &job.done);
409
+
410
+ *va = job.result_mem;
411
+
412
+ trace_apple_gfx_iosfc_map_memory(phys, len, ro, va, e, f, *va,
413
+ job.success);
414
+
415
+ return job.success;
416
+ };
417
+
418
+ iosfc_desc.unmapMemory =
419
+ ^bool(void *va, void *b, void *c, void *d, void *e, void *f) {
420
+ AppleGFXUnmapSurfaceMemoryJob job = { va, s };
421
+ trace_apple_gfx_iosfc_unmap_memory(va, b, c, d, e, f);
422
+
423
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
424
+ apple_gfx_mmio_unmap_surface_memory, &job);
425
+ apple_gfx_await_bh_job(&s->common, &s->iosfc_unmap_job_cond, &job.done);
426
+
427
+ return true;
428
+ };
429
+
430
+ iosfc_desc.raiseInterrupt = ^bool(uint32_t vector) {
431
+ trace_apple_gfx_iosfc_raise_irq(vector);
432
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
433
+ raise_irq, &s->irq_iosfc);
434
+ return true;
435
+ };
436
+
437
+ iosfc_host_dev =
438
+ [[PGIOSurfaceHostDevice alloc] initWithDescriptor:iosfc_desc];
439
+ [iosfc_desc release];
440
+ return iosfc_host_dev;
441
+}
442
+
443
+static void apple_gfx_mmio_realize(DeviceState *dev, Error **errp)
444
+{
445
+ @autoreleasepool {
446
+ AppleGFXMMIOState *s = APPLE_GFX_MMIO(dev);
447
+ PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
448
+
449
+ desc.raiseInterrupt = ^(uint32_t vector) {
450
+ trace_apple_gfx_raise_irq(vector);
451
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
452
+ raise_irq, &s->irq_gfx);
453
+ };
454
+
455
+ desc.usingIOSurfaceMapper = true;
456
+ s->pgiosfc = apple_gfx_prepare_iosurface_host_device(s);
457
+
458
+ s->iosfc_mapped_regions =
459
+ g_array_sized_new(false /* no termination */, true /* clear */,
460
+ sizeof(AppleGFXMMIOMappedRegion),
461
+ 2 /* Usually no more RAM regions*/);
462
+
463
+ apple_gfx_common_realize(&s->common, desc, errp);
464
+ qemu_cond_init(&s->iosfc_map_job_cond);
465
+ qemu_cond_init(&s->iosfc_unmap_job_cond);
466
+
467
+ [desc release];
468
+ desc = nil;
469
+ }
470
+}
471
+
472
+static void apple_gfx_mmio_init(Object *obj)
473
+{
474
+ AppleGFXMMIOState *s = APPLE_GFX_MMIO(obj);
475
+
476
+ apple_gfx_common_init(obj, &s->common, TYPE_APPLE_GFX_MMIO);
477
+
478
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->common.iomem_gfx);
479
+ memory_region_init_io(&s->iomem_iosfc, obj, &apple_iosfc_ops, s,
480
+ TYPE_APPLE_GFX_MMIO, 0x10000);
481
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem_iosfc);
482
+ sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq_gfx);
483
+ sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq_iosfc);
484
+}
485
+
486
+static void apple_gfx_mmio_reset(Object *obj, ResetType type)
487
+{
488
+ AppleGFXMMIOState *s = APPLE_GFX_MMIO(obj);
489
+ [s->common.pgdev reset];
490
+}
491
+
492
+
493
+static void apple_gfx_mmio_class_init(ObjectClass *klass, void *data)
494
+{
495
+ DeviceClass *dc = DEVICE_CLASS(klass);
496
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
497
+
498
+ rc->phases.hold = apple_gfx_mmio_reset;
499
+ dc->hotpluggable = false;
500
+ dc->realize = apple_gfx_mmio_realize;
501
+}
502
+
503
+static TypeInfo apple_gfx_mmio_types[] = {
504
+ {
505
+ .name = TYPE_APPLE_GFX_MMIO,
506
+ .parent = TYPE_SYS_BUS_DEVICE,
507
+ .instance_size = sizeof(AppleGFXMMIOState),
508
+ .class_init = apple_gfx_mmio_class_init,
509
+ .instance_init = apple_gfx_mmio_init,
510
+ }
511
+};
512
+DEFINE_TYPES(apple_gfx_mmio_types)
513
diff --git a/hw/display/apple-gfx.h b/hw/display/apple-gfx.h
514
new file mode 100644
515
index XXXXXXX..XXXXXXX
516
--- /dev/null
517
+++ b/hw/display/apple-gfx.h
518
@@ -XXX,XX +XXX,XX @@
519
+/*
520
+ * Data structures and functions shared between variants of the macOS
521
+ * ParavirtualizedGraphics.framework based apple-gfx display adapter.
522
+ *
523
+ * SPDX-License-Identifier: GPL-2.0-or-later
524
+ */
525
+
526
+#ifndef QEMU_APPLE_GFX_H
527
+#define QEMU_APPLE_GFX_H
528
+
529
+#define TYPE_APPLE_GFX_MMIO "apple-gfx-mmio"
530
+#define TYPE_APPLE_GFX_PCI "apple-gfx-pci"
531
+
532
+#include "qemu/osdep.h"
533
+#include <dispatch/dispatch.h>
534
+#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
535
+#include "qemu/typedefs.h"
536
+#include "exec/memory.h"
537
+#include "ui/surface.h"
538
+
539
+@class PGDeviceDescriptor;
540
+@protocol PGDevice;
541
+@protocol PGDisplay;
542
+@protocol MTLDevice;
543
+@protocol MTLTexture;
544
+@protocol MTLCommandQueue;
545
+
546
+typedef QTAILQ_HEAD(, PGTask_s) PGTaskList;
547
+
548
+typedef struct AppleGFXState {
549
+ /* Initialised on init/realize() */
550
+ MemoryRegion iomem_gfx;
551
+ id<PGDevice> pgdev;
552
+ id<PGDisplay> pgdisp;
553
+ QemuConsole *con;
554
+ id<MTLDevice> mtl;
555
+ id<MTLCommandQueue> mtl_queue;
556
+ dispatch_queue_t render_queue;
557
+ /*
558
+ * QemuMutex & QemuConds for awaiting completion of PVG memory-mapping and
559
+ * reading requests after submitting them to run in the AIO context.
560
+ * QemuCond (rather than QemuEvent) are used so multiple concurrent jobs
561
+ * can be handled safely.
562
+ * The state associated with each job is tracked in a AppleGFX*Job struct
563
+ * for each kind of job; instances are allocated on the caller's stack.
564
+ * This struct also contains the completion flag which is used in
565
+ * conjunction with the condition variable.
566
+ */
567
+ QemuMutex job_mutex;
568
+ QemuCond task_map_job_cond;
569
+ QemuCond mem_read_job_cond;
570
+
571
+ /* tasks is protected by task_mutex */
572
+ QemuMutex task_mutex;
573
+ PGTaskList tasks;
574
+
575
+ /* Mutable state (BQL) */
576
+ QEMUCursor *cursor;
577
+ bool cursor_show;
578
+ bool gfx_update_requested;
579
+ bool new_frame_ready;
580
+ bool using_managed_texture_storage;
581
+ int32_t pending_frames;
582
+ void *vram;
583
+ DisplaySurface *surface;
584
+ id<MTLTexture> texture;
585
+} AppleGFXState;
586
+
587
+void apple_gfx_common_init(Object *obj, AppleGFXState *s, const char* obj_name);
588
+void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
589
+ Error **errp);
590
+uintptr_t apple_gfx_host_address_for_gpa_range(uint64_t guest_physical,
591
+ uint64_t length, bool read_only,
592
+ MemoryRegion **mapping_in_region);
593
+void apple_gfx_await_bh_job(AppleGFXState *s, QemuCond *job_cond,
594
+ bool *job_done_flag);
595
+
596
+#endif
597
+
598
diff --git a/hw/display/apple-gfx.m b/hw/display/apple-gfx.m
599
new file mode 100644
600
index XXXXXXX..XXXXXXX
601
--- /dev/null
602
+++ b/hw/display/apple-gfx.m
603
@@ -XXX,XX +XXX,XX @@
604
+/*
605
+ * QEMU Apple ParavirtualizedGraphics.framework device
606
+ *
607
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
608
+ *
609
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
610
+ * See the COPYING file in the top-level directory.
611
+ *
612
+ * SPDX-License-Identifier: GPL-2.0-or-later
613
+ *
614
+ * ParavirtualizedGraphics.framework is a set of libraries that macOS provides
615
+ * which implements 3d graphics passthrough to the host as well as a
616
+ * proprietary guest communication channel to drive it. This device model
617
+ * implements support to drive that library from within QEMU.
618
+ */
619
+
620
+#include "qemu/osdep.h"
621
+#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
622
+#include <mach/mach_vm.h>
623
+#include "apple-gfx.h"
624
+#include "trace.h"
625
+#include "qemu-main.h"
626
+#include "exec/address-spaces.h"
627
+#include "migration/blocker.h"
628
+#include "monitor/monitor.h"
629
+#include "qemu/main-loop.h"
630
+#include "qemu/cutils.h"
631
+#include "qemu/log.h"
632
+#include "qapi/visitor.h"
633
+#include "qapi/error.h"
634
+#include "sysemu/dma.h"
635
+#include "ui/console.h"
636
+
637
+static const PGDisplayCoord_t apple_gfx_modes[] = {
638
+ { .x = 1440, .y = 1080 },
639
+ { .x = 1280, .y = 1024 },
640
+};
641
+
642
+/* This implements a type defined in <ParavirtualizedGraphics/PGDevice.h>
643
+ * which is opaque from the framework's point of view. Typedef PGTask_t already
644
+ * exists in the framework headers. */
645
+struct PGTask_s {
646
+ QTAILQ_ENTRY(PGTask_s) node;
647
+ AppleGFXState *s;
648
+ mach_vm_address_t address;
649
+ uint64_t len;
650
+ /*
651
+ * All unique MemoryRegions for which a mapping has been created in in this
652
+ * task, and on which we have thus called memory_region_ref(). There are
653
+ * usually very few regions of system RAM in total, so we expect this array
654
+ * to be very short. Therefore, no need for sorting or fancy search
655
+ * algorithms, linear search will do. */
656
+ GArray *mapped_regions;
657
+};
658
+
659
+static Error *apple_gfx_mig_blocker;
660
+
661
+static void apple_gfx_render_frame_completed(AppleGFXState *s,
662
+ uint32_t width, uint32_t height);
663
+
664
+static dispatch_queue_t get_background_queue(void)
665
+{
666
+ return dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
667
+}
668
+
669
+static PGTask_t *apple_gfx_new_task(AppleGFXState *s, uint64_t len)
670
+{
671
+ mach_vm_address_t task_mem;
672
+ PGTask_t *task;
673
+ kern_return_t r;
674
+
675
+ r = mach_vm_allocate(mach_task_self(), &task_mem, len, VM_FLAGS_ANYWHERE);
676
+ if (r != KERN_SUCCESS) {
677
+ return NULL;
678
+ }
679
+
680
+ task = g_new0(PGTask_t, 1);
681
+ task->s = s;
682
+ task->address = task_mem;
683
+ task->len = len;
684
+ task->mapped_regions = g_array_sized_new(false /* no termination */,
685
+ true /* clear */,
686
+ sizeof(MemoryRegion *),
687
+ 2 /* Usually no more RAM regions*/);
688
+
689
+ QEMU_LOCK_GUARD(&s->task_mutex);
690
+ QTAILQ_INSERT_TAIL(&s->tasks, task, node);
691
+
692
+ return task;
693
+}
694
+
695
+typedef struct AppleGFXIOJob {
696
+ AppleGFXState *state;
697
+ uint64_t offset;
698
+ uint64_t value;
699
+ bool completed;
700
+} AppleGFXIOJob;
701
+
702
+static void apple_gfx_do_read(void *opaque)
703
+{
704
+ AppleGFXIOJob *job = opaque;
705
+ job->value = [job->state->pgdev mmioReadAtOffset:job->offset];
706
+ qatomic_set(&job->completed, true);
707
+ aio_wait_kick();
708
+}
709
+
710
+static uint64_t apple_gfx_read(void *opaque, hwaddr offset, unsigned size)
711
+{
712
+ AppleGFXIOJob job = {
713
+ .state = opaque,
714
+ .offset = offset,
715
+ .completed = false,
716
+ };
717
+ dispatch_queue_t queue = get_background_queue();
718
+
719
+ dispatch_async_f(queue, &job, apple_gfx_do_read);
720
+ AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
721
+
722
+ trace_apple_gfx_read(offset, job.value);
723
+ return job.value;
724
+}
725
+
726
+static void apple_gfx_do_write(void *opaque)
727
+{
728
+ AppleGFXIOJob *job = opaque;
729
+ [job->state->pgdev mmioWriteAtOffset:job->offset value:job->value];
730
+ qatomic_set(&job->completed, true);
731
+ aio_wait_kick();
732
+}
733
+
734
+static void apple_gfx_write(void *opaque, hwaddr offset, uint64_t val,
735
+ unsigned size)
736
+{
737
+ /* The methods mmioReadAtOffset: and especially mmioWriteAtOffset: can
738
+ * trigger and block on operations on other dispatch queues, which in turn
739
+ * may call back out on one or more of the callback blocks. For this reason,
740
+ * and as we are holding the BQL, we invoke the I/O methods on a pool
741
+ * thread and handle AIO tasks while we wait. Any work in the callbacks
742
+ * requiring the BQL will in turn schedule BHs which this thread will
743
+ * process while waiting. */
744
+ AppleGFXIOJob job = {
745
+ .state = opaque,
746
+ .offset = offset,
747
+ .value = val,
748
+ .completed = false,
749
+ };
750
+ dispatch_queue_t queue = get_background_queue();
751
+
752
+ dispatch_async_f(queue, &job, apple_gfx_do_write);
753
+ AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
754
+
755
+ trace_apple_gfx_write(offset, val);
756
+}
757
+
758
+static const MemoryRegionOps apple_gfx_ops = {
759
+ .read = apple_gfx_read,
760
+ .write = apple_gfx_write,
761
+ .endianness = DEVICE_LITTLE_ENDIAN,
762
+ .valid = {
763
+ .min_access_size = 4,
764
+ .max_access_size = 8,
765
+ },
766
+ .impl = {
767
+ .min_access_size = 4,
768
+ .max_access_size = 4,
769
+ },
770
+};
771
+
772
+static void apple_gfx_render_new_frame_bql_unlock(AppleGFXState *s)
773
+{
774
+ BOOL r;
775
+ bool managed_texture = s->using_managed_texture_storage;
776
+ uint32_t width = surface_width(s->surface);
777
+ uint32_t height = surface_height(s->surface);
778
+ MTLRegion region = MTLRegionMake2D(0, 0, width, height);
779
+ id<MTLCommandBuffer> command_buffer = [s->mtl_queue commandBuffer];
780
+ id<MTLTexture> texture = s->texture;
781
+
782
+ assert(bql_locked());
783
+ [texture retain];
784
+
785
+ bql_unlock();
786
+
787
+ /* This is not safe to call from the BQL due to PVG-internal locks causing
788
+ * deadlocks. */
789
+ r = [s->pgdisp encodeCurrentFrameToCommandBuffer:command_buffer
790
+ texture:texture
791
+ region:region];
792
+ if (!r) {
793
+ [texture release];
794
+ bql_lock();
795
+ --s->pending_frames;
796
+ bql_unlock();
797
+ qemu_log_mask(LOG_GUEST_ERROR,
798
+ "%s: encodeCurrentFrameToCommandBuffer:texture:region: "
799
+ "failed\n", __func__);
800
+ return;
801
+ }
802
+
803
+ if (managed_texture) {
804
+ /* "Managed" textures exist in both VRAM and RAM and must be synced. */
805
+ id<MTLBlitCommandEncoder> blit = [command_buffer blitCommandEncoder];
806
+ [blit synchronizeResource:texture];
807
+ [blit endEncoding];
808
+ }
809
+ [texture release];
810
+ [command_buffer addCompletedHandler:
811
+ ^(id<MTLCommandBuffer> cb)
812
+ {
813
+ dispatch_async(s->render_queue, ^{
814
+ apple_gfx_render_frame_completed(s, width, height);
815
+ });
816
+ }];
817
+ [command_buffer commit];
818
+}
819
+
820
+static void copy_mtl_texture_to_surface_mem(id<MTLTexture> texture, void *vram)
821
+{
822
+ /* TODO: Skip this entirely on a pure Metal or headless/guest-only
823
+ * rendering path, else use a blit command encoder? Needs careful
824
+ * (double?) buffering design. */
825
+ size_t width = texture.width, height = texture.height;
826
+ MTLRegion region = MTLRegionMake2D(0, 0, width, height);
827
+ [texture getBytes:vram
828
+ bytesPerRow:(width * 4)
829
+ bytesPerImage:(width * height * 4)
830
+ fromRegion:region
831
+ mipmapLevel:0
832
+ slice:0];
833
+}
834
+
835
+static void apple_gfx_render_frame_completed(AppleGFXState *s,
836
+ uint32_t width, uint32_t height)
837
+{
838
+ bql_lock();
839
+ --s->pending_frames;
840
+ assert(s->pending_frames >= 0);
841
+
842
+ /* Only update display if mode hasn't changed since we started rendering. */
843
+ if (width == surface_width(s->surface) &&
844
+ height == surface_height(s->surface)) {
845
+ copy_mtl_texture_to_surface_mem(s->texture, s->vram);
846
+ if (s->gfx_update_requested) {
847
+ s->gfx_update_requested = false;
848
+ dpy_gfx_update_full(s->con);
849
+ graphic_hw_update_done(s->con);
850
+ s->new_frame_ready = false;
851
+ } else {
852
+ s->new_frame_ready = true;
853
+ }
854
+ }
855
+ if (s->pending_frames > 0) {
856
+ apple_gfx_render_new_frame_bql_unlock(s);
857
+ } else {
858
+ bql_unlock();
859
+ }
860
+}
861
+
862
+static void apple_gfx_fb_update_display(void *opaque)
863
+{
864
+ AppleGFXState *s = opaque;
865
+
866
+ assert(bql_locked());
867
+ if (s->new_frame_ready) {
868
+ dpy_gfx_update_full(s->con);
869
+ s->new_frame_ready = false;
870
+ graphic_hw_update_done(s->con);
871
+ } else if (s->pending_frames > 0) {
872
+ s->gfx_update_requested = true;
873
+ } else {
874
+ graphic_hw_update_done(s->con);
875
+ }
876
+}
877
+
878
+static const GraphicHwOps apple_gfx_fb_ops = {
879
+ .gfx_update = apple_gfx_fb_update_display,
880
+ .gfx_update_async = true,
881
+};
882
+
883
+static void update_cursor(AppleGFXState *s)
884
+{
885
+ assert(bql_locked());
886
+ dpy_mouse_set(s->con, s->pgdisp.cursorPosition.x,
887
+ s->pgdisp.cursorPosition.y, s->cursor_show);
888
+}
889
+
890
+static void set_mode(AppleGFXState *s, uint32_t width, uint32_t height)
891
+{
892
+ MTLTextureDescriptor *textureDescriptor;
893
+
894
+ if (s->surface &&
895
+ width == surface_width(s->surface) &&
896
+ height == surface_height(s->surface)) {
897
+ return;
898
+ }
899
+
900
+ g_free(s->vram);
901
+ [s->texture release];
902
+
903
+ s->vram = g_malloc0_n(width * height, 4);
904
+ s->surface = qemu_create_displaysurface_from(width, height, PIXMAN_LE_a8r8g8b8,
905
+ width * 4, s->vram);
906
+
907
+ @autoreleasepool {
908
+ textureDescriptor =
909
+ [MTLTextureDescriptor
910
+ texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
911
+ width:width
912
+ height:height
913
+ mipmapped:NO];
914
+ textureDescriptor.usage = s->pgdisp.minimumTextureUsage;
915
+ s->texture = [s->mtl newTextureWithDescriptor:textureDescriptor];
916
+ }
917
+
918
+ s->using_managed_texture_storage =
919
+ (s->texture.storageMode == MTLStorageModeManaged);
920
+ dpy_gfx_replace_surface(s->con, s->surface);
921
+}
922
+
923
+static void create_fb(AppleGFXState *s)
924
+{
925
+ s->con = graphic_console_init(NULL, 0, &apple_gfx_fb_ops, s);
926
+ set_mode(s, 1440, 1080);
927
+
928
+ s->cursor_show = true;
929
+}
930
+
931
+static size_t apple_gfx_get_default_mmio_range_size(void)
932
+{
933
+ size_t mmio_range_size;
934
+ @autoreleasepool {
935
+ PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
936
+ mmio_range_size = desc.mmioLength;
937
+ [desc release];
938
+ }
939
+ return mmio_range_size;
940
+}
941
+
942
+void apple_gfx_common_init(Object *obj, AppleGFXState *s, const char* obj_name)
943
+{
944
+ size_t mmio_range_size = apple_gfx_get_default_mmio_range_size();
945
+
946
+ trace_apple_gfx_common_init(obj_name, mmio_range_size);
947
+ memory_region_init_io(&s->iomem_gfx, obj, &apple_gfx_ops, s, obj_name,
948
+ mmio_range_size);
949
+
950
+ /* TODO: PVG framework supports serialising device state: integrate it! */
951
+}
952
+
953
+typedef struct AppleGFXMapMemoryJob {
954
+ AppleGFXState *state;
955
+ PGTask_t *task;
956
+ uint64_t virtual_offset;
957
+ PGPhysicalMemoryRange_t *ranges;
958
+ uint32_t range_count;
959
+ bool read_only;
960
+ bool success;
961
+ bool done;
962
+} AppleGFXMapMemoryJob;
963
+
964
+uintptr_t apple_gfx_host_address_for_gpa_range(uint64_t guest_physical,
965
+ uint64_t length, bool read_only,
966
+ MemoryRegion **mapping_in_region)
967
+{
968
+ MemoryRegion *ram_region;
969
+ uintptr_t host_address;
970
+ hwaddr ram_region_offset = 0;
971
+ hwaddr ram_region_length = length;
972
+
973
+ ram_region = address_space_translate(&address_space_memory,
974
+ guest_physical,
975
+ &ram_region_offset,
976
+ &ram_region_length, !read_only,
977
+ MEMTXATTRS_UNSPECIFIED);
978
+
979
+ if (!ram_region || ram_region_length < length ||
980
+ !memory_access_is_direct(ram_region, !read_only)) {
981
+ return 0;
982
+ }
983
+
984
+ host_address = (uintptr_t)memory_region_get_ram_ptr(ram_region);
985
+ if (host_address == 0) {
986
+ return 0;
987
+ }
988
+ host_address += ram_region_offset;
989
+ *mapping_in_region = ram_region;
990
+ return host_address;
991
+}
992
+
993
+/* Returns false if the region is already in the array */
994
+static bool add_new_region(GArray *regions, MemoryRegion *region)
995
+{
996
+ MemoryRegion *existing;
997
+ size_t i;
998
+
999
+ for (i = 0; i < regions->len; ++i) {
1000
+ existing = g_array_index(regions, MemoryRegion *, i);
1001
+ if (existing == region) {
1002
+ return false;
1003
+ }
1004
+ }
1005
+ g_array_append_val(regions, region);
1006
+ return true;
1007
+}
1008
+
1009
+static void apple_gfx_map_memory(void *opaque)
1010
+{
1011
+ AppleGFXMapMemoryJob *job = opaque;
1012
+ AppleGFXState *s = job->state;
1013
+ PGTask_t *task = job->task;
1014
+ uint32_t range_count = job->range_count;
1015
+ uint64_t virtual_offset = job->virtual_offset;
1016
+ PGPhysicalMemoryRange_t *ranges = job->ranges;
1017
+ bool read_only = job->read_only;
1018
+ kern_return_t r;
1019
+ mach_vm_address_t target, source;
1020
+ vm_prot_t cur_protection, max_protection;
1021
+ bool success = true;
1022
+ MemoryRegion *region;
1023
+
1024
+ g_assert(bql_locked());
1025
+
1026
+ trace_apple_gfx_map_memory(task, range_count, virtual_offset, read_only);
1027
+ for (int i = 0; i < range_count; i++) {
1028
+ PGPhysicalMemoryRange_t *range = &ranges[i];
1029
+
1030
+ target = task->address + virtual_offset;
1031
+ virtual_offset += range->physicalLength;
1032
+
1033
+ trace_apple_gfx_map_memory_range(i, range->physicalAddress,
1034
+ range->physicalLength);
1035
+
1036
+ region = NULL;
1037
+ source = apple_gfx_host_address_for_gpa_range(range->physicalAddress,
1038
+ range->physicalLength,
1039
+ read_only, &region);
1040
+ if (source == 0) {
1041
+ success = false;
1042
+ continue;
1043
+ }
1044
+
1045
+ if (add_new_region(task->mapped_regions, region)) {
1046
+ memory_region_ref(region);
1047
+ }
1048
+
1049
+ cur_protection = 0;
1050
+ max_protection = 0;
1051
+ // Map guest RAM at range->physicalAddress into PG task memory range
1052
+ r = mach_vm_remap(mach_task_self(),
1053
+ &target, range->physicalLength, vm_page_size - 1,
1054
+ VM_FLAGS_FIXED | VM_FLAGS_OVERWRITE,
1055
+ mach_task_self(),
1056
+ source, false /* shared mapping, no copy */,
1057
+ &cur_protection, &max_protection,
1058
+ VM_INHERIT_COPY);
1059
+ trace_apple_gfx_remap(r, source, target);
1060
+ g_assert(r == KERN_SUCCESS);
1061
+ }
1062
+
1063
+ qemu_mutex_lock(&s->job_mutex);
1064
+ job->success = success;
1065
+ job->done = true;
1066
+ qemu_cond_broadcast(&s->task_map_job_cond);
1067
+ qemu_mutex_unlock(&s->job_mutex);
1068
+}
1069
+
1070
+void apple_gfx_await_bh_job(AppleGFXState *s, QemuCond *job_cond, bool *job_done_flag)
1071
+{
1072
+ qemu_mutex_lock(&s->job_mutex);
1073
+ while (!*job_done_flag) {
1074
+ qemu_cond_wait(job_cond, &s->job_mutex);
1075
+ }
1076
+ qemu_mutex_unlock(&s->job_mutex);
1077
+}
1078
+
1079
+typedef struct AppleGFXReadMemoryJob {
1080
+ AppleGFXState *s;
1081
+ hwaddr physical_address;
1082
+ uint64_t length;
1083
+ void *dst;
1084
+ bool done;
1085
+ bool success;
1086
+} AppleGFXReadMemoryJob;
1087
+
1088
+static void apple_gfx_do_read_memory(void *opaque)
1089
+{
1090
+ AppleGFXReadMemoryJob *job = opaque;
1091
+ AppleGFXState *s = job->s;
1092
+ MemTxResult r;
1093
+
1094
+ r = dma_memory_read(&address_space_memory, job->physical_address,
1095
+ job->dst, job->length, MEMTXATTRS_UNSPECIFIED);
1096
+ job->success = r == MEMTX_OK;
1097
+
1098
+ qemu_mutex_lock(&s->job_mutex);
1099
+ job->done = true;
1100
+ qemu_cond_broadcast(&s->mem_read_job_cond);
1101
+ qemu_mutex_unlock(&s->job_mutex);
1102
+}
1103
+
1104
+static bool apple_gfx_read_memory(AppleGFXState *s, hwaddr physical_address,
1105
+ uint64_t length, void *dst)
1106
+{
1107
+ AppleGFXReadMemoryJob job = {
1108
+ s, physical_address, length, dst
1109
+ };
1110
+
1111
+ trace_apple_gfx_read_memory(physical_address, length, dst);
1112
+
1113
+ /* Traversing the memory map requires RCU/BQL, so do it in a BH. */
1114
+ aio_bh_schedule_oneshot(qemu_get_aio_context(), apple_gfx_do_read_memory,
1115
+ &job);
1116
+ apple_gfx_await_bh_job(s, &s->mem_read_job_cond, &job.done);
1117
+ return job.success;
1118
+}
1119
+
1120
+static void apple_gfx_task_destroy(AppleGFXState *s, PGTask_t *task)
1121
+{
1122
+ GArray *regions = task->mapped_regions;
1123
+ MemoryRegion *region;
1124
+ size_t i;
1125
+
1126
+ for (i = 0; i < regions->len; ++i) {
1127
+ region = g_array_index(regions, MemoryRegion *, i);
1128
+ memory_region_unref(region);
1129
+ }
1130
+ g_array_unref(regions);
1131
+
1132
+ mach_vm_deallocate(mach_task_self(), task->address, task->len);
1133
+
1134
+ QEMU_LOCK_GUARD(&s->task_mutex);
1135
+ QTAILQ_REMOVE(&s->tasks, task, node);
1136
+ g_free(task);
1137
+}
1138
+
1139
+static void apple_gfx_register_task_mapping_handlers(AppleGFXState *s,
1140
+ PGDeviceDescriptor *desc)
1141
+{
1142
+ desc.createTask = ^(uint64_t vmSize, void * _Nullable * _Nonnull baseAddress) {
1143
+ PGTask_t *task = apple_gfx_new_task(s, vmSize);
1144
+ *baseAddress = (void *)task->address;
1145
+ trace_apple_gfx_create_task(vmSize, *baseAddress);
1146
+ return task;
1147
+ };
1148
+
1149
+ desc.destroyTask = ^(PGTask_t * _Nonnull task) {
1150
+ trace_apple_gfx_destroy_task(task, task->mapped_regions->len);
1151
+
1152
+ apple_gfx_task_destroy(s, task);
1153
+ };
1154
+
1155
+ desc.mapMemory = ^bool(PGTask_t * _Nonnull task, uint32_t range_count,
1156
+ uint64_t virtual_offset, bool read_only,
1157
+ PGPhysicalMemoryRange_t * _Nonnull ranges) {
1158
+ AppleGFXMapMemoryJob job = {
1159
+ .state = s,
1160
+ .task = task, .ranges = ranges, .range_count = range_count,
1161
+ .read_only = read_only, .virtual_offset = virtual_offset,
1162
+ .done = false, .success = true,
1163
+ };
1164
+ if (range_count > 0) {
1165
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
1166
+ apple_gfx_map_memory, &job);
1167
+ apple_gfx_await_bh_job(s, &s->task_map_job_cond, &job.done);
1168
+ }
1169
+ return job.success;
1170
+ };
1171
+
1172
+ desc.unmapMemory = ^bool(PGTask_t * _Nonnull task, uint64_t virtualOffset,
1173
+ uint64_t length) {
1174
+ kern_return_t r;
1175
+ mach_vm_address_t range_address;
1176
+
1177
+ trace_apple_gfx_unmap_memory(task, virtualOffset, length);
1178
+
1179
+ /* Replace task memory range with fresh pages, undoing the mapping
1180
+ * from guest RAM. */
1181
+ range_address = task->address + virtualOffset;
1182
+ r = mach_vm_allocate(mach_task_self(), &range_address, length,
1183
+ VM_FLAGS_FIXED | VM_FLAGS_OVERWRITE);
1184
+ g_assert(r == KERN_SUCCESS);
1185
+
1186
+ return true;
1187
+ };
1188
+
1189
+ desc.readMemory = ^bool(uint64_t physical_address, uint64_t length,
1190
+ void * _Nonnull dst) {
1191
+ return apple_gfx_read_memory(s, physical_address, length, dst);
1192
+ };
1193
+}
1194
+
1195
+static PGDisplayDescriptor *apple_gfx_prepare_display_descriptor(AppleGFXState *s)
1196
+{
1197
+ PGDisplayDescriptor *disp_desc = [PGDisplayDescriptor new];
1198
+
1199
+ disp_desc.name = @"QEMU display";
1200
+ disp_desc.sizeInMillimeters = NSMakeSize(400., 300.); /* A 20" display */
1201
+ disp_desc.queue = dispatch_get_main_queue();
1202
+ disp_desc.newFrameEventHandler = ^(void) {
1203
+ trace_apple_gfx_new_frame();
1204
+ dispatch_async(s->render_queue, ^{
1205
+ /* Drop frames if we get too far ahead. */
1206
+ bql_lock();
1207
+ if (s->pending_frames >= 2) {
1208
+ bql_unlock();
1209
+ return;
1210
+ }
1211
+ ++s->pending_frames;
1212
+ if (s->pending_frames > 1) {
1213
+ bql_unlock();
1214
+ return;
1215
+ }
1216
+ @autoreleasepool {
1217
+ apple_gfx_render_new_frame_bql_unlock(s);
1218
+ }
1219
+ });
1220
+ };
1221
+ disp_desc.modeChangeHandler = ^(PGDisplayCoord_t sizeInPixels,
1222
+ OSType pixelFormat) {
1223
+ trace_apple_gfx_mode_change(sizeInPixels.x, sizeInPixels.y);
1224
+
1225
+ BQL_LOCK_GUARD();
1226
+ set_mode(s, sizeInPixels.x, sizeInPixels.y);
1227
+ };
1228
+ disp_desc.cursorGlyphHandler = ^(NSBitmapImageRep *glyph,
1229
+ PGDisplayCoord_t hotSpot) {
1230
+ [glyph retain];
1231
+ dispatch_async(get_background_queue(), ^{
1232
+ BQL_LOCK_GUARD();
1233
+ uint32_t bpp = glyph.bitsPerPixel;
1234
+ size_t width = glyph.pixelsWide;
1235
+ size_t height = glyph.pixelsHigh;
1236
+ size_t padding_bytes_per_row = glyph.bytesPerRow - width * 4;
1237
+ const uint8_t* px_data = glyph.bitmapData;
1238
+
1239
+ trace_apple_gfx_cursor_set(bpp, width, height);
1240
+
1241
+ if (s->cursor) {
1242
+ cursor_unref(s->cursor);
1243
+ s->cursor = NULL;
1244
+ }
1245
+
1246
+ if (bpp == 32) { /* Shouldn't be anything else, but just to be safe...*/
1247
+ s->cursor = cursor_alloc(width, height);
1248
+ s->cursor->hot_x = hotSpot.x;
1249
+ s->cursor->hot_y = hotSpot.y;
1250
+
1251
+ uint32_t *dest_px = s->cursor->data;
1252
+
1253
+ for (size_t y = 0; y < height; ++y) {
1254
+ for (size_t x = 0; x < width; ++x) {
1255
+ /* NSBitmapImageRep's red & blue channels are swapped
1256
+ * compared to QEMUCursor's. */
1257
+ *dest_px =
1258
+ (px_data[0] << 16u) |
1259
+ (px_data[1] << 8u) |
1260
+ (px_data[2] << 0u) |
1261
+ (px_data[3] << 24u);
1262
+ ++dest_px;
1263
+ px_data += 4;
1264
+ }
1265
+ px_data += padding_bytes_per_row;
1266
+ }
1267
+ dpy_cursor_define(s->con, s->cursor);
1268
+ update_cursor(s);
1269
+ }
1270
+ [glyph release];
1271
+ });
1272
+ };
1273
+ disp_desc.cursorShowHandler = ^(BOOL show) {
1274
+ dispatch_async(get_background_queue(), ^{
1275
+ BQL_LOCK_GUARD();
1276
+ trace_apple_gfx_cursor_show(show);
1277
+ s->cursor_show = show;
1278
+ update_cursor(s);
1279
+ });
1280
+ };
1281
+ disp_desc.cursorMoveHandler = ^(void) {
1282
+ dispatch_async(get_background_queue(), ^{
1283
+ BQL_LOCK_GUARD();
1284
+ trace_apple_gfx_cursor_move();
1285
+ update_cursor(s);
1286
+ });
1287
+ };
1288
+
1289
+ return disp_desc;
1290
+}
1291
+
1292
+static NSArray<PGDisplayMode*>* apple_gfx_prepare_display_mode_array(void)
1293
+{
1294
+ PGDisplayMode *modes[ARRAY_SIZE(apple_gfx_modes)];
1295
+ NSArray<PGDisplayMode*>* mode_array = nil;
1296
+ int i;
1297
+
1298
+ for (i = 0; i < ARRAY_SIZE(apple_gfx_modes); i++) {
1299
+ modes[i] =
1300
+ [[PGDisplayMode alloc] initWithSizeInPixels:apple_gfx_modes[i] refreshRateInHz:60.];
1301
+ }
1302
+
1303
+ mode_array = [NSArray arrayWithObjects:modes count:ARRAY_SIZE(apple_gfx_modes)];
1304
+
1305
+ for (i = 0; i < ARRAY_SIZE(apple_gfx_modes); i++) {
1306
+ [modes[i] release];
1307
+ modes[i] = nil;
1308
+ }
1309
+
1310
+ return mode_array;
1311
+}
1312
+
1313
+static id<MTLDevice> copy_suitable_metal_device(void)
1314
+{
1315
+ id<MTLDevice> dev = nil;
1316
+ NSArray<id<MTLDevice>> *devs = MTLCopyAllDevices();
1317
+
1318
+ /* Prefer a unified memory GPU. Failing that, pick a non-removable GPU. */
1319
+ for (size_t i = 0; i < devs.count; ++i) {
1320
+ if (devs[i].hasUnifiedMemory) {
1321
+ dev = devs[i];
1322
+ break;
1323
+ }
1324
+ if (!devs[i].removable) {
1325
+ dev = devs[i];
1326
+ }
1327
+ }
1328
+
1329
+ if (dev != nil) {
1330
+ [dev retain];
1331
+ } else {
1332
+ dev = MTLCreateSystemDefaultDevice();
1333
+ }
1334
+ [devs release];
1335
+
1336
+ return dev;
1337
+}
1338
+
1339
+void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
1340
+ Error **errp)
1341
+{
1342
+ PGDisplayDescriptor *disp_desc = nil;
1343
+
1344
+ if (apple_gfx_mig_blocker == NULL) {
1345
+ error_setg(&apple_gfx_mig_blocker,
1346
+ "Migration state blocked by apple-gfx display device");
1347
+ if (migrate_add_blocker(&apple_gfx_mig_blocker, errp) < 0) {
1348
+ return;
1349
+ }
1350
+ }
1351
+
1352
+ qemu_mutex_init(&s->task_mutex);
1353
+ QTAILQ_INIT(&s->tasks);
1354
+ s->render_queue = dispatch_queue_create("apple-gfx.render",
1355
+ DISPATCH_QUEUE_SERIAL);
1356
+ s->mtl = copy_suitable_metal_device();
1357
+ s->mtl_queue = [s->mtl newCommandQueue];
1358
+
1359
+ desc.device = s->mtl;
1360
+
1361
+ apple_gfx_register_task_mapping_handlers(s, desc);
1362
+
1363
+ s->pgdev = PGNewDeviceWithDescriptor(desc);
1364
+
1365
+ disp_desc = apple_gfx_prepare_display_descriptor(s);
1366
+ s->pgdisp = [s->pgdev newDisplayWithDescriptor:disp_desc
1367
+ port:0 serialNum:1234];
1368
+ [disp_desc release];
1369
+ s->pgdisp.modeList = apple_gfx_prepare_display_mode_array();
1370
+
1371
+ create_fb(s);
1372
+
1373
+ qemu_mutex_init(&s->job_mutex);
1374
+ qemu_cond_init(&s->task_map_job_cond);
1375
+ qemu_cond_init(&s->mem_read_job_cond);
1376
+}
1377
diff --git a/hw/display/meson.build b/hw/display/meson.build
1378
index XXXXXXX..XXXXXXX 100644
1379
--- a/hw/display/meson.build
1380
+++ b/hw/display/meson.build
1381
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_ARTIST', if_true: files('artist.c'))
1382
1383
system_ss.add(when: 'CONFIG_ATI_VGA', if_true: [files('ati.c', 'ati_2d.c', 'ati_dbg.c'), pixman])
1384
1385
+system_ss.add(when: 'CONFIG_MAC_PVG', if_true: [files('apple-gfx.m'), pvg, metal])
1386
+if cpu == 'aarch64'
1387
+ system_ss.add(when: 'CONFIG_MAC_PVG_MMIO', if_true: [files('apple-gfx-mmio.m'), pvg, metal])
1388
+endif
1389
1390
if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
1391
virtio_gpu_ss = ss.source_set()
1392
diff --git a/hw/display/trace-events b/hw/display/trace-events
1393
index XXXXXXX..XXXXXXX 100644
1394
--- a/hw/display/trace-events
1395
+++ b/hw/display/trace-events
1396
@@ -XXX,XX +XXX,XX @@ dm163_bits_ppi(unsigned dest_width) "dest_width : %u"
1397
dm163_leds(int led, uint32_t value) "led %d: 0x%x"
1398
dm163_channels(int channel, uint8_t value) "channel %d: 0x%x"
1399
dm163_refresh_rate(uint32_t rr) "refresh rate %d"
1400
+
1401
+# apple-gfx.m
1402
+apple_gfx_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
1403
+apple_gfx_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
1404
+apple_gfx_create_task(uint32_t vm_size, void *va) "vm_size=0x%x base_addr=%p"
1405
+apple_gfx_destroy_task(void *task, unsigned int num_mapped_regions) "task=%p, task->mapped_regions->len=%u"
1406
+apple_gfx_map_memory(void *task, uint32_t range_count, uint64_t virtual_offset, uint32_t read_only) "task=%p range_count=0x%x virtual_offset=0x%"PRIx64" read_only=%d"
1407
+apple_gfx_map_memory_range(uint32_t i, uint64_t phys_addr, uint64_t phys_len) "[%d] phys_addr=0x%"PRIx64" phys_len=0x%"PRIx64
1408
+apple_gfx_remap(uint64_t retval, uint64_t source, uint64_t target) "retval=%"PRId64" source=0x%"PRIx64" target=0x%"PRIx64
1409
+apple_gfx_unmap_memory(void *task, uint64_t virtual_offset, uint64_t length) "task=%p virtual_offset=0x%"PRIx64" length=0x%"PRIx64
1410
+apple_gfx_read_memory(uint64_t phys_address, uint64_t length, void *dst) "phys_addr=0x%"PRIx64" length=0x%"PRIx64" dest=%p"
1411
+apple_gfx_raise_irq(uint32_t vector) "vector=0x%x"
1412
+apple_gfx_new_frame(void) ""
1413
+apple_gfx_mode_change(uint64_t x, uint64_t y) "x=%"PRId64" y=%"PRId64
1414
+apple_gfx_cursor_set(uint32_t bpp, uint64_t width, uint64_t height) "bpp=%d width=%"PRId64" height=0x%"PRId64
1415
+apple_gfx_cursor_show(uint32_t show) "show=%d"
1416
+apple_gfx_cursor_move(void) ""
1417
+apple_gfx_common_init(const char *device_name, size_t mmio_size) "device: %s; MMIO size: %zu bytes"
1418
+
1419
+# apple-gfx-mmio.m
1420
+apple_gfx_mmio_iosfc_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
1421
+apple_gfx_mmio_iosfc_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
1422
+apple_gfx_iosfc_map_memory(uint64_t phys, uint64_t len, uint32_t ro, void *va, void *e, void *f, void* va_result, int success) "phys=0x%"PRIx64" len=0x%"PRIx64" ro=%d va=%p e=%p f=%p -> *va=%p, success = %d"
1423
+apple_gfx_iosfc_map_memory_new_region(size_t i, void *region, uint64_t start, uint64_t end) "index=%zu, region=%p, 0x%"PRIx64"-0x%"PRIx64
1424
+apple_gfx_iosfc_unmap_memory(void *a, void *b, void *c, void *d, void *e, void *f) "a=%p b=%p c=%p d=%p e=%p f=%p"
1425
+apple_gfx_iosfc_unmap_memory_region(uint64_t mem, size_t region_index, uint64_t map_count, void *region) "mapping @ 0x%"PRIx64" in region %zu, map count %"PRIu64", memory region %p"
1426
+apple_gfx_iosfc_raise_irq(uint32_t vector) "vector=0x%x"
1427
+
1428
diff --git a/meson.build b/meson.build
1429
index XXXXXXX..XXXXXXX 100644
1430
--- a/meson.build
1431
+++ b/meson.build
1432
@@ -XXX,XX +XXX,XX @@ socket = []
1433
version_res = []
1434
coref = []
1435
iokit = []
1436
+pvg = []
1437
+metal = []
1438
emulator_link_args = []
1439
midl = not_found
1440
widl = not_found
1441
@@ -XXX,XX +XXX,XX @@ elif host_os == 'darwin'
1442
coref = dependency('appleframeworks', modules: 'CoreFoundation')
1443
iokit = dependency('appleframeworks', modules: 'IOKit', required: false)
1444
host_dsosuf = '.dylib'
1445
+ pvg = dependency('appleframeworks', modules: 'ParavirtualizedGraphics')
1446
+ metal = dependency('appleframeworks', modules: 'Metal')
1447
elif host_os == 'sunos'
1448
socket = [cc.find_library('socket'),
1449
cc.find_library('nsl'),
1450
--
1451
2.39.3 (Apple Git-145)
1452
1453
Deleted patch
1
This change wires up the PCI variant of the paravirtualised graphics
device, implemented by macOS's ParavirtualizedGraphics.framework and
mainly useful for x86-64 macOS guests. It builds on code shared with
the vmapple/mmio variant of the PVG device.
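
As a rough usage sketch (not taken from this series; everything except
the 'apple-gfx-pci' driver name is a placeholder), the PCI variant can
be added to a guest on an Intel macOS host along these lines:

    qemu-system-x86_64 -accel hvf -device apple-gfx-pci ...

plus whatever machine, disk and display options the guest otherwise
needs.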
5
1
6
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
7
---
8
9
v4:
10
11
* Threading improvements analogous to those in common apple-gfx code
12
and mmio device variant.
13
* Addressed some smaller issues raised in code review.
14
15
v5:
16
17
* Minor error handling improvement.
18
19
hw/display/Kconfig | 4 +
20
hw/display/apple-gfx-pci.m | 149 +++++++++++++++++++++++++++++++++++++
21
hw/display/meson.build | 1 +
22
3 files changed, 154 insertions(+)
23
create mode 100644 hw/display/apple-gfx-pci.m
24
25
diff --git a/hw/display/Kconfig b/hw/display/Kconfig
26
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/display/Kconfig
28
+++ b/hw/display/Kconfig
29
@@ -XXX,XX +XXX,XX @@ config MAC_PVG_MMIO
30
bool
31
depends on MAC_PVG && AARCH64
32
33
+config MAC_PVG_PCI
34
+ bool
35
+ depends on MAC_PVG && PCI
36
+ default y if PCI_DEVICES
37
diff --git a/hw/display/apple-gfx-pci.m b/hw/display/apple-gfx-pci.m
38
new file mode 100644
39
index XXXXXXX..XXXXXXX
40
--- /dev/null
41
+++ b/hw/display/apple-gfx-pci.m
42
@@ -XXX,XX +XXX,XX @@
43
+/*
44
+ * QEMU Apple ParavirtualizedGraphics.framework device, PCI variant
45
+ *
46
+ * Copyright © 2023-2024 Phil Dennis-Jordan
47
+ *
48
+ * SPDX-License-Identifier: GPL-2.0-or-later
49
+ *
50
+ * ParavirtualizedGraphics.framework is a set of libraries that macOS provides
51
+ * which implements 3d graphics passthrough to the host as well as a
52
+ * proprietary guest communication channel to drive it. This device model
53
+ * implements support to drive that library from within QEMU as a PCI device
54
+ * aimed primarily at x86-64 macOS VMs.
55
+ */
56
+
57
+#include "apple-gfx.h"
58
+#include "hw/pci/pci_device.h"
59
+#include "hw/pci/msi.h"
60
+#include "qapi/error.h"
61
+#include "trace.h"
62
+#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
63
+
64
+OBJECT_DECLARE_SIMPLE_TYPE(AppleGFXPCIState, APPLE_GFX_PCI)
65
+
66
+struct AppleGFXPCIState {
67
+ PCIDevice parent_obj;
68
+
69
+ AppleGFXState common;
70
+};
71
+
72
+static const char* apple_gfx_pci_option_rom_path = NULL;
73
+
74
+static void apple_gfx_init_option_rom_path(void)
75
+{
76
+ NSURL *option_rom_url = PGCopyOptionROMURL();
77
+ const char *option_rom_path = option_rom_url.fileSystemRepresentation;
78
+ apple_gfx_pci_option_rom_path = g_strdup(option_rom_path);
79
+ [option_rom_url release];
80
+}
81
+
82
+static void apple_gfx_pci_init(Object *obj)
83
+{
84
+ AppleGFXPCIState *s = APPLE_GFX_PCI(obj);
85
+
86
+ if (!apple_gfx_pci_option_rom_path) {
87
+ /* The following is done on device not class init to avoid running
88
+ * ObjC code before fork() in -daemonize mode. */
89
+ PCIDeviceClass *pci = PCI_DEVICE_CLASS(object_get_class(obj));
90
+ apple_gfx_init_option_rom_path();
91
+ pci->romfile = apple_gfx_pci_option_rom_path;
92
+ }
93
+
94
+ apple_gfx_common_init(obj, &s->common, TYPE_APPLE_GFX_PCI);
95
+}
96
+
97
+typedef struct AppleGFXPCIInterruptJob {
98
+ PCIDevice *device;
99
+ uint32_t vector;
100
+} AppleGFXPCIInterruptJob;
101
+
102
+static void apple_gfx_pci_raise_interrupt(void *opaque)
103
+{
104
+ AppleGFXPCIInterruptJob *job = opaque;
105
+
106
+ if (msi_enabled(job->device)) {
107
+ msi_notify(job->device, job->vector);
108
+ }
109
+ g_free(job);
110
+}
111
+
112
+static void apple_gfx_pci_interrupt(PCIDevice *dev, AppleGFXPCIState *s,
113
+ uint32_t vector)
114
+{
115
+ AppleGFXPCIInterruptJob *job;
116
+
117
+ trace_apple_gfx_raise_irq(vector);
118
+ job = g_malloc0(sizeof(*job));
119
+ job->device = dev;
120
+ job->vector = vector;
121
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
122
+ apple_gfx_pci_raise_interrupt, job);
123
+}
124
+
125
+static void apple_gfx_pci_realize(PCIDevice *dev, Error **errp)
126
+{
127
+ AppleGFXPCIState *s = APPLE_GFX_PCI(dev);
128
+ int ret;
129
+
130
+ pci_register_bar(dev, PG_PCI_BAR_MMIO,
131
+ PCI_BASE_ADDRESS_SPACE_MEMORY, &s->common.iomem_gfx);
132
+
133
+ ret = msi_init(dev, 0x0 /* config offset; 0 = find space */,
134
+ PG_PCI_MAX_MSI_VECTORS, true /* msi64bit */,
135
+ false /*msi_per_vector_mask*/, errp);
136
+ if (ret != 0) {
137
+ return;
138
+ }
139
+
140
+ @autoreleasepool {
141
+ PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
142
+ desc.raiseInterrupt = ^(uint32_t vector) {
143
+ apple_gfx_pci_interrupt(dev, s, vector);
144
+ };
145
+
146
+ apple_gfx_common_realize(&s->common, desc, errp);
147
+ [desc release];
148
+ desc = nil;
149
+ }
150
+}
151
+
152
+static void apple_gfx_pci_reset(Object *obj, ResetType type)
153
+{
154
+ AppleGFXPCIState *s = APPLE_GFX_PCI(obj);
155
+ [s->common.pgdev reset];
156
+}
157
+
158
+static void apple_gfx_pci_class_init(ObjectClass *klass, void *data)
159
+{
160
+ DeviceClass *dc = DEVICE_CLASS(klass);
161
+ PCIDeviceClass *pci = PCI_DEVICE_CLASS(klass);
162
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
163
+
164
+ rc->phases.hold = apple_gfx_pci_reset;
165
+ dc->desc = "macOS Paravirtualized Graphics PCI Display Controller";
166
+ dc->hotpluggable = false;
167
+ set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
168
+
169
+ pci->vendor_id = PG_PCI_VENDOR_ID;
170
+ pci->device_id = PG_PCI_DEVICE_ID;
171
+ pci->class_id = PCI_CLASS_DISPLAY_OTHER;
172
+ pci->realize = apple_gfx_pci_realize;
173
+
174
+ // TODO: Property for setting mode list
175
+}
176
+
177
+static TypeInfo apple_gfx_pci_types[] = {
178
+ {
179
+ .name = TYPE_APPLE_GFX_PCI,
180
+ .parent = TYPE_PCI_DEVICE,
181
+ .instance_size = sizeof(AppleGFXPCIState),
182
+ .class_init = apple_gfx_pci_class_init,
183
+ .instance_init = apple_gfx_pci_init,
184
+ .interfaces = (InterfaceInfo[]) {
185
+ { INTERFACE_PCIE_DEVICE },
186
+ { },
187
+ },
188
+ }
189
+};
190
+DEFINE_TYPES(apple_gfx_pci_types)
191
+
192
diff --git a/hw/display/meson.build b/hw/display/meson.build
193
index XXXXXXX..XXXXXXX 100644
194
--- a/hw/display/meson.build
195
+++ b/hw/display/meson.build
196
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_MAC_PVG', if_true: [files('apple-gfx.m'), pv
197
if cpu == 'aarch64'
198
system_ss.add(when: 'CONFIG_MAC_PVG_MMIO', if_true: [files('apple-gfx-mmio.m'), pvg, metal])
199
endif
200
+system_ss.add(when: 'CONFIG_MAC_PVG_PCI', if_true: [files('apple-gfx-pci.m'), pvg, metal])
201
202
if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
203
virtio_gpu_ss = ss.source_set()
204
--
205
2.39.3 (Apple Git-145)
206
207
Deleted patch
1
This change adds a 'display-modes' property on the graphics device
which permits specifying a list of display modes (screen resolution
and refresh rate).
4
1
5
The property is an array of a custom type to make the syntax slightly
6
less awkward to use, for example:
7
8
-device '{"driver":"apple-gfx-pci", "display-modes":["1920x1080@60", "3840x2160@60"]}'
9
10
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
11
---
12
13
v4:
14
15
* Switched to the native array property type, which recently gained
16
     command line support.
17
* The property has also been added to the -mmio variant.
18
* Tidied up the code a little.
19
20
v5:
21
22
* Better error handling and buffer management in property parsing and
23
output.
24
25
hw/display/apple-gfx-mmio.m | 8 +++
26
hw/display/apple-gfx-pci.m | 9 ++-
27
hw/display/apple-gfx.h | 12 ++++
28
hw/display/apple-gfx.m | 121 ++++++++++++++++++++++++++++++++----
29
hw/display/trace-events | 2 +
30
5 files changed, 139 insertions(+), 13 deletions(-)
31
32
diff --git a/hw/display/apple-gfx-mmio.m b/hw/display/apple-gfx-mmio.m
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/display/apple-gfx-mmio.m
35
+++ b/hw/display/apple-gfx-mmio.m
36
@@ -XXX,XX +XXX,XX @@ static void apple_gfx_mmio_reset(Object *obj, ResetType type)
37
[s->common.pgdev reset];
38
}
39
40
+static Property apple_gfx_mmio_properties[] = {
41
+ DEFINE_PROP_ARRAY("display-modes", AppleGFXMMIOState,
42
+ common.num_display_modes, common.display_modes,
43
+ qdev_prop_display_mode, AppleGFXDisplayMode),
44
+ DEFINE_PROP_END_OF_LIST(),
45
+};
46
47
static void apple_gfx_mmio_class_init(ObjectClass *klass, void *data)
48
{
49
@@ -XXX,XX +XXX,XX @@ static void apple_gfx_mmio_class_init(ObjectClass *klass, void *data)
50
rc->phases.hold = apple_gfx_mmio_reset;
51
dc->hotpluggable = false;
52
dc->realize = apple_gfx_mmio_realize;
53
+
54
+ device_class_set_props(dc, apple_gfx_mmio_properties);
55
}
56
57
static TypeInfo apple_gfx_mmio_types[] = {
58
diff --git a/hw/display/apple-gfx-pci.m b/hw/display/apple-gfx-pci.m
59
index XXXXXXX..XXXXXXX 100644
60
--- a/hw/display/apple-gfx-pci.m
61
+++ b/hw/display/apple-gfx-pci.m
62
@@ -XXX,XX +XXX,XX @@ static void apple_gfx_pci_reset(Object *obj, ResetType type)
63
[s->common.pgdev reset];
64
}
65
66
+static Property apple_gfx_pci_properties[] = {
67
+ DEFINE_PROP_ARRAY("display-modes", AppleGFXPCIState,
68
+ common.num_display_modes, common.display_modes,
69
+ qdev_prop_display_mode, AppleGFXDisplayMode),
70
+ DEFINE_PROP_END_OF_LIST(),
71
+};
72
+
73
static void apple_gfx_pci_class_init(ObjectClass *klass, void *data)
74
{
75
DeviceClass *dc = DEVICE_CLASS(klass);
76
@@ -XXX,XX +XXX,XX @@ static void apple_gfx_pci_class_init(ObjectClass *klass, void *data)
77
pci->class_id = PCI_CLASS_DISPLAY_OTHER;
78
pci->realize = apple_gfx_pci_realize;
79
80
- // TODO: Property for setting mode list
81
+ device_class_set_props(dc, apple_gfx_pci_properties);
82
}
83
84
static TypeInfo apple_gfx_pci_types[] = {
85
diff --git a/hw/display/apple-gfx.h b/hw/display/apple-gfx.h
86
index XXXXXXX..XXXXXXX 100644
87
--- a/hw/display/apple-gfx.h
88
+++ b/hw/display/apple-gfx.h
89
@@ -XXX,XX +XXX,XX @@
90
#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
91
#include "qemu/typedefs.h"
92
#include "exec/memory.h"
93
+#include "hw/qdev-properties.h"
94
#include "ui/surface.h"
95
96
@class PGDeviceDescriptor;
97
@@ -XXX,XX +XXX,XX @@
98
99
typedef QTAILQ_HEAD(, PGTask_s) PGTaskList;
100
101
+struct AppleGFXDisplayMode;
102
typedef struct AppleGFXState {
103
/* Initialised on init/realize() */
104
MemoryRegion iomem_gfx;
105
@@ -XXX,XX +XXX,XX @@ typedef struct AppleGFXState {
106
id<MTLDevice> mtl;
107
id<MTLCommandQueue> mtl_queue;
108
dispatch_queue_t render_queue;
109
+ struct AppleGFXDisplayMode *display_modes;
110
+ uint32_t num_display_modes;
111
/*
112
* QemuMutex & QemuConds for awaiting completion of PVG memory-mapping and
113
* reading requests after submitting them to run in the AIO context.
114
@@ -XXX,XX +XXX,XX @@ typedef struct AppleGFXState {
115
id<MTLTexture> texture;
116
} AppleGFXState;
117
118
+typedef struct AppleGFXDisplayMode {
119
+ uint16_t width_px;
120
+ uint16_t height_px;
121
+ uint16_t refresh_rate_hz;
122
+} AppleGFXDisplayMode;
123
+
124
void apple_gfx_common_init(Object *obj, AppleGFXState *s, const char* obj_name);
125
void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
126
Error **errp);
127
@@ -XXX,XX +XXX,XX @@ uintptr_t apple_gfx_host_address_for_gpa_range(uint64_t guest_physical,
128
void apple_gfx_await_bh_job(AppleGFXState *s, QemuCond *job_cond,
129
bool *job_done_flag);
130
131
+extern const PropertyInfo qdev_prop_display_mode;
132
+
133
#endif
134
135
diff --git a/hw/display/apple-gfx.m b/hw/display/apple-gfx.m
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/display/apple-gfx.m
138
+++ b/hw/display/apple-gfx.m
139
@@ -XXX,XX +XXX,XX @@
140
#include "sysemu/dma.h"
141
#include "ui/console.h"
142
143
-static const PGDisplayCoord_t apple_gfx_modes[] = {
144
- { .x = 1440, .y = 1080 },
145
- { .x = 1280, .y = 1024 },
146
+static const AppleGFXDisplayMode apple_gfx_default_modes[] = {
147
+ { 1920, 1080, 60 },
148
+ { 1440, 1080, 60 },
149
+ { 1280, 1024, 60 },
150
};
151
152
/* This implements a type defined in <ParavirtualizedGraphics/PGDevice.h>
153
@@ -XXX,XX +XXX,XX @@ static void set_mode(AppleGFXState *s, uint32_t width, uint32_t height)
154
static void create_fb(AppleGFXState *s)
155
{
156
s->con = graphic_console_init(NULL, 0, &apple_gfx_fb_ops, s);
157
- set_mode(s, 1440, 1080);
158
159
s->cursor_show = true;
160
}
161
@@ -XXX,XX +XXX,XX @@ static void apple_gfx_register_task_mapping_handlers(AppleGFXState *s,
162
return disp_desc;
163
}
164
165
-static NSArray<PGDisplayMode*>* apple_gfx_prepare_display_mode_array(void)
166
+static NSArray<PGDisplayMode*>* apple_gfx_create_display_mode_array(
167
+ const AppleGFXDisplayMode display_modes[], uint32_t display_mode_count)
168
{
169
- PGDisplayMode *modes[ARRAY_SIZE(apple_gfx_modes)];
170
+ PGDisplayMode **modes = alloca(sizeof(modes[0]) * display_mode_count);
171
NSArray<PGDisplayMode*>* mode_array = nil;
172
- int i;
173
+ uint32_t i;
174
175
- for (i = 0; i < ARRAY_SIZE(apple_gfx_modes); i++) {
176
+ for (i = 0; i < display_mode_count; i++) {
177
+ const AppleGFXDisplayMode *mode = &display_modes[i];
178
+ trace_apple_gfx_display_mode(i, mode->width_px, mode->height_px);
179
+ PGDisplayCoord_t mode_size = { mode->width_px, mode->height_px };
180
modes[i] =
181
- [[PGDisplayMode alloc] initWithSizeInPixels:apple_gfx_modes[i] refreshRateInHz:60.];
182
+ [[PGDisplayMode alloc] initWithSizeInPixels:mode_size
183
+ refreshRateInHz:mode->refresh_rate_hz];
184
}
185
186
- mode_array = [NSArray arrayWithObjects:modes count:ARRAY_SIZE(apple_gfx_modes)];
187
+ mode_array = [NSArray arrayWithObjects:modes count:display_mode_count];
188
189
- for (i = 0; i < ARRAY_SIZE(apple_gfx_modes); i++) {
190
+ for (i = 0; i < display_mode_count; i++) {
191
[modes[i] release];
192
modes[i] = nil;
193
}
194
@@ -XXX,XX +XXX,XX @@ void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
195
Error **errp)
196
{
197
PGDisplayDescriptor *disp_desc = nil;
198
+ const AppleGFXDisplayMode *display_modes = apple_gfx_default_modes;
199
+ int num_display_modes = ARRAY_SIZE(apple_gfx_default_modes);
200
201
if (apple_gfx_mig_blocker == NULL) {
202
error_setg(&apple_gfx_mig_blocker,
203
@@ -XXX,XX +XXX,XX @@ void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
204
s->pgdisp = [s->pgdev newDisplayWithDescriptor:disp_desc
205
port:0 serialNum:1234];
206
[disp_desc release];
207
- s->pgdisp.modeList = apple_gfx_prepare_display_mode_array();
208
+
209
+ if (s->display_modes != NULL && s->num_display_modes > 0) {
210
+ trace_apple_gfx_common_realize_modes_property(s->num_display_modes);
211
+ display_modes = s->display_modes;
212
+ num_display_modes = s->num_display_modes;
213
+ }
214
+ s->pgdisp.modeList =
215
+ apple_gfx_create_display_mode_array(display_modes, num_display_modes);
216
217
create_fb(s);
218
219
@@ -XXX,XX +XXX,XX @@ void apple_gfx_common_realize(AppleGFXState *s, PGDeviceDescriptor *desc,
220
qemu_cond_init(&s->task_map_job_cond);
221
qemu_cond_init(&s->mem_read_job_cond);
222
}
223
+
224
+static void apple_gfx_get_display_mode(Object *obj, Visitor *v,
225
+ const char *name, void *opaque,
226
+ Error **errp)
227
+{
228
+ Property *prop = opaque;
229
+ AppleGFXDisplayMode *mode = object_field_prop_ptr(obj, prop);
230
+ /* 3 uint16s (max 5 digits) + 2 separator characters + nul. */
231
+ char buffer[5 * 3 + 2 + 1];
232
+ char *pos = buffer;
233
+
234
+ int rc = snprintf(buffer, sizeof(buffer),
235
+ "%"PRIu16"x%"PRIu16"@%"PRIu16,
236
+ mode->width_px, mode->height_px,
237
+ mode->refresh_rate_hz);
238
+ assert(rc < sizeof(buffer));
239
+
240
+ visit_type_str(v, name, &pos, errp);
241
+}
242
+
243
+static void apple_gfx_set_display_mode(Object *obj, Visitor *v,
244
+ const char *name, void *opaque,
245
+ Error **errp)
246
+{
247
+ Property *prop = opaque;
248
+ AppleGFXDisplayMode *mode = object_field_prop_ptr(obj, prop);
249
+ Error *local_err = NULL;
250
+ const char *endptr;
251
+ g_autofree char *str = NULL;
252
+ int ret;
253
+ int val;
254
+
255
+ visit_type_str(v, name, &str, &local_err);
256
+ if (local_err) {
257
+ error_propagate(errp, local_err);
258
+ return;
259
+ }
260
+
261
+ endptr = str;
262
+
263
+ ret = qemu_strtoi(endptr, &endptr, 10, &val);
264
+ if (ret || val > UINT16_MAX || val <= 0) {
265
+ error_setg(errp, "width in '%s' must be a decimal integer number "
266
+ "of pixels in the range 1..65535", name);
267
+ return;
268
+ }
269
+ mode->width_px = val;
270
+ if (*endptr != 'x') {
271
+ goto separator_error;
272
+ }
273
+
274
+ ret = qemu_strtoi(endptr + 1, &endptr, 10, &val);
275
+ if (ret || val > UINT16_MAX || val <= 0) {
276
+ error_setg(errp, "height in '%s' must be a decimal integer number "
277
+ "of pixels in the range 1..65535", name);
278
+ return;
279
+ }
280
+ mode->height_px = val;
281
+ if (*endptr != '@') {
282
+ goto separator_error;
283
+ }
284
+
285
+ ret = qemu_strtoi(endptr + 1, &endptr, 10, &val);
286
+ if (ret || val > UINT16_MAX || val <= 0) {
287
+ error_setg(errp, "refresh rate in '%s'"
288
+ " must be a positive decimal integer (Hertz)", name);
289
+ }
290
+ mode->refresh_rate_hz = val;
291
+ return;
292
+
293
+separator_error:
294
+ error_setg(errp, "Each display mode takes the format "
295
+ "'<width>x<height>@<rate>'");
296
+}
297
+
298
+const PropertyInfo qdev_prop_display_mode = {
299
+ .name = "display_mode",
300
+ .description =
301
+ "Display mode in pixels and Hertz, as <width>x<height>@<refresh-rate> "
302
+ "Example: 3840x2160@60",
303
+ .get = apple_gfx_get_display_mode,
304
+ .set = apple_gfx_set_display_mode,
305
+};
306
diff --git a/hw/display/trace-events b/hw/display/trace-events
307
index XXXXXXX..XXXXXXX 100644
308
--- a/hw/display/trace-events
309
+++ b/hw/display/trace-events
310
@@ -XXX,XX +XXX,XX @@ apple_gfx_cursor_set(uint32_t bpp, uint64_t width, uint64_t height) "bpp=%d widt
311
apple_gfx_cursor_show(uint32_t show) "show=%d"
312
apple_gfx_cursor_move(void) ""
313
apple_gfx_common_init(const char *device_name, size_t mmio_size) "device: %s; MMIO size: %zu bytes"
314
+apple_gfx_common_realize_modes_property(uint32_t num_modes) "using %u modes supplied by 'display-modes' device property"
315
+apple_gfx_display_mode(uint32_t mode_idx, uint16_t width_px, uint16_t height_px) "mode %2"PRIu32": %4"PRIu16"x%4"PRIu16
316
317
# apple-gfx-mmio.m
318
apple_gfx_mmio_iosfc_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
319
--
320
2.39.3 (Apple Git-145)
Deleted patch
1
I'm happy to take responsibility for the macOS PV graphics code. As
2
HVF patches don't seem to get much attention at the moment, I'm also
3
adding myself as designated reviewer for HVF and x86 HVF to try and
4
improve that.
5
1
6
I anticipate that the resulting workload should be covered by the
7
funding I'm receiving for improving Qemu in combination with macOS. As
8
of right now this runs out at the end of 2024; I expect the workload on
9
apple-gfx should be relatively minor and manageable in my spare time
10
beyond that. I may have to remove myself from more general HVF duties
11
once the contract runs out if it's more than I can manage.
12
13
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
14
---
15
MAINTAINERS | 7 +++++++
16
1 file changed, 7 insertions(+)
17
18
diff --git a/MAINTAINERS b/MAINTAINERS
19
index XXXXXXX..XXXXXXX 100644
20
--- a/MAINTAINERS
21
+++ b/MAINTAINERS
22
@@ -XXX,XX +XXX,XX @@ F: target/arm/hvf/
23
X86 HVF CPUs
24
M: Cameron Esfahani <dirty@apple.com>
25
M: Roman Bolshakov <rbolshakov@ddn.com>
26
+R: Phil Dennis-Jordan <phil@philjordan.eu>
27
W: https://wiki.qemu.org/Features/HVF
28
S: Maintained
29
F: target/i386/hvf/
30
@@ -XXX,XX +XXX,XX @@ F: target/i386/hvf/
31
HVF
32
M: Cameron Esfahani <dirty@apple.com>
33
M: Roman Bolshakov <rbolshakov@ddn.com>
34
+R: Phil Dennis-Jordan <phil@philjordan.eu>
35
W: https://wiki.qemu.org/Features/HVF
36
S: Maintained
37
F: accel/hvf/
38
@@ -XXX,XX +XXX,XX @@ F: hw/display/edid*
39
F: include/hw/display/edid.h
40
F: qemu-edid.c
41
42
+macOS PV Graphics (apple-gfx)
43
+M: Phil Dennis-Jordan <phil@philjordan.eu>
44
+S: Maintained
45
+F: hw/display/apple-gfx*
46
+
47
PIIX4 South Bridge (i82371AB)
48
M: Hervé Poussineau <hpoussin@reactos.org>
49
M: Philippe Mathieu-Daudé <philmd@linaro.org>
50
--
51
2.39.3 (Apple Git-145)
52
53
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
We will introduce a number of devices that are specific to the vmapple
4
target machine. To keep them all tidily together, let's put them into
5
a single target directory.
6
7
Signed-off-by: Alexander Graf <graf@amazon.com>
8
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
9
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
10
---
11
MAINTAINERS | 7 +++++++
12
hw/Kconfig | 1 +
13
hw/meson.build | 1 +
14
hw/vmapple/Kconfig | 1 +
15
hw/vmapple/meson.build | 0
16
hw/vmapple/trace-events | 2 ++
17
hw/vmapple/trace.h | 1 +
18
meson.build | 1 +
19
8 files changed, 14 insertions(+)
20
create mode 100644 hw/vmapple/Kconfig
21
create mode 100644 hw/vmapple/meson.build
22
create mode 100644 hw/vmapple/trace-events
23
create mode 100644 hw/vmapple/trace.h
24
25
diff --git a/MAINTAINERS b/MAINTAINERS
26
index XXXXXXX..XXXXXXX 100644
27
--- a/MAINTAINERS
28
+++ b/MAINTAINERS
29
@@ -XXX,XX +XXX,XX @@ F: hw/hyperv/hv-balloon*.h
30
F: include/hw/hyperv/dynmem-proto.h
31
F: include/hw/hyperv/hv-balloon.h
32
33
+VMapple
34
+M: Alexander Graf <agraf@csgraf.de>
35
+R: Phil Dennis-Jordan <phil@philjordan.eu>
36
+S: Maintained
37
+F: hw/vmapple/*
38
+F: include/hw/vmapple/*
39
+
40
Subsystems
41
----------
42
Overall Audio backends
43
diff --git a/hw/Kconfig b/hw/Kconfig
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/Kconfig
46
+++ b/hw/Kconfig
47
@@ -XXX,XX +XXX,XX @@ source ufs/Kconfig
48
source usb/Kconfig
49
source virtio/Kconfig
50
source vfio/Kconfig
51
+source vmapple/Kconfig
52
source xen/Kconfig
53
source watchdog/Kconfig
54
55
diff --git a/hw/meson.build b/hw/meson.build
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/meson.build
58
+++ b/hw/meson.build
59
@@ -XXX,XX +XXX,XX @@ subdir('ufs')
60
subdir('usb')
61
subdir('vfio')
62
subdir('virtio')
63
+subdir('vmapple')
64
subdir('watchdog')
65
subdir('xen')
66
subdir('xenpv')
67
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
68
new file mode 100644
69
index XXXXXXX..XXXXXXX
70
--- /dev/null
71
+++ b/hw/vmapple/Kconfig
72
@@ -0,0 +1 @@
73
+
74
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
75
new file mode 100644
76
index XXXXXXX..XXXXXXX
77
diff --git a/hw/vmapple/trace-events b/hw/vmapple/trace-events
78
new file mode 100644
79
index XXXXXXX..XXXXXXX
80
--- /dev/null
81
+++ b/hw/vmapple/trace-events
82
@@ -XXX,XX +XXX,XX @@
83
+# See docs/devel/tracing.rst for syntax documentation.
84
+
85
diff --git a/hw/vmapple/trace.h b/hw/vmapple/trace.h
86
new file mode 100644
87
index XXXXXXX..XXXXXXX
88
--- /dev/null
89
+++ b/hw/vmapple/trace.h
90
@@ -0,0 +1 @@
91
+#include "trace/trace-hw_vmapple.h"
92
diff --git a/meson.build b/meson.build
93
index XXXXXXX..XXXXXXX 100644
94
--- a/meson.build
95
+++ b/meson.build
96
@@ -XXX,XX +XXX,XX @@ if have_system
97
'hw/usb',
98
'hw/vfio',
99
'hw/virtio',
100
+ 'hw/vmapple',
101
'hw/watchdog',
102
'hw/xen',
103
'hw/gpio',
104
--
105
2.39.3 (Apple Git-145)
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
In addition to the ISA and PCI variants of pvpanic, let's add an MMIO
4
platform device that we can use in embedded arm environments.
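
For illustration only (this wiring is not part of this patch, and the
base address is made up), a board could map the device as a plain
sysbus device, since it has no interrupt line:

    /* Hypothetical board code; needs "hw/sysbus.h" and "hw/misc/pvpanic.h". */
    sysbus_create_simple(TYPE_PVPANIC_MMIO_DEVICE, 0x0010e000, NULL);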
5
6
Signed-off-by: Alexander Graf <graf@amazon.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
10
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
11
---
12
13
v3:
14
* Rebased on upstream, updated a header path
15
16
hw/misc/Kconfig | 4 +++
17
hw/misc/meson.build | 1 +
18
hw/misc/pvpanic-mmio.c | 61 +++++++++++++++++++++++++++++++++++++++
19
include/hw/misc/pvpanic.h | 1 +
20
4 files changed, 67 insertions(+)
21
create mode 100644 hw/misc/pvpanic-mmio.c
22
23
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/misc/Kconfig
26
+++ b/hw/misc/Kconfig
27
@@ -XXX,XX +XXX,XX @@ config PVPANIC_ISA
28
depends on ISA_BUS
29
select PVPANIC_COMMON
30
31
+config PVPANIC_MMIO
32
+ bool
33
+ select PVPANIC_COMMON
34
+
35
config AUX
36
bool
37
select I2C
38
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/misc/meson.build
41
+++ b/hw/misc/meson.build
42
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_ARMSSE_MHU', if_true: files('armsse-mhu.c'))
43
44
system_ss.add(when: 'CONFIG_PVPANIC_ISA', if_true: files('pvpanic-isa.c'))
45
system_ss.add(when: 'CONFIG_PVPANIC_PCI', if_true: files('pvpanic-pci.c'))
46
+system_ss.add(when: 'CONFIG_PVPANIC_MMIO', if_true: files('pvpanic-mmio.c'))
47
system_ss.add(when: 'CONFIG_AUX', if_true: files('auxbus.c'))
48
system_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files(
49
'aspeed_hace.c',
50
diff --git a/hw/misc/pvpanic-mmio.c b/hw/misc/pvpanic-mmio.c
51
new file mode 100644
52
index XXXXXXX..XXXXXXX
53
--- /dev/null
54
+++ b/hw/misc/pvpanic-mmio.c
55
@@ -XXX,XX +XXX,XX @@
56
+/*
57
+ * QEMU simulated pvpanic device (MMIO frontend)
58
+ *
59
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
60
+ *
61
+ * SPDX-License-Identifier: GPL-2.0-or-later
62
+ */
63
+
64
+#include "qemu/osdep.h"
65
+
66
+#include "hw/qdev-properties.h"
67
+#include "hw/misc/pvpanic.h"
68
+#include "hw/sysbus.h"
69
+#include "standard-headers/misc/pvpanic.h"
70
+
71
+OBJECT_DECLARE_SIMPLE_TYPE(PVPanicMMIOState, PVPANIC_MMIO_DEVICE)
72
+
73
+#define PVPANIC_MMIO_SIZE 0x2
74
+
75
+struct PVPanicMMIOState {
76
+ SysBusDevice parent_obj;
77
+
78
+ PVPanicState pvpanic;
79
+};
80
+
81
+static void pvpanic_mmio_initfn(Object *obj)
82
+{
83
+ PVPanicMMIOState *s = PVPANIC_MMIO_DEVICE(obj);
84
+
85
+ pvpanic_setup_io(&s->pvpanic, DEVICE(s), PVPANIC_MMIO_SIZE);
86
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->pvpanic.mr);
87
+}
88
+
89
+static Property pvpanic_mmio_properties[] = {
90
+ DEFINE_PROP_UINT8("events", PVPanicMMIOState, pvpanic.events,
91
+ PVPANIC_PANICKED | PVPANIC_CRASH_LOADED),
92
+ DEFINE_PROP_END_OF_LIST(),
93
+};
94
+
95
+static void pvpanic_mmio_class_init(ObjectClass *klass, void *data)
96
+{
97
+ DeviceClass *dc = DEVICE_CLASS(klass);
98
+
99
+ device_class_set_props(dc, pvpanic_mmio_properties);
100
+ set_bit(DEVICE_CATEGORY_MISC, dc->categories);
101
+}
102
+
103
+static const TypeInfo pvpanic_mmio_info = {
104
+ .name = TYPE_PVPANIC_MMIO_DEVICE,
105
+ .parent = TYPE_SYS_BUS_DEVICE,
106
+ .instance_size = sizeof(PVPanicMMIOState),
107
+ .instance_init = pvpanic_mmio_initfn,
108
+ .class_init = pvpanic_mmio_class_init,
109
+};
110
+
111
+static void pvpanic_register_types(void)
112
+{
113
+ type_register_static(&pvpanic_mmio_info);
114
+}
115
+
116
+type_init(pvpanic_register_types)
117
diff --git a/include/hw/misc/pvpanic.h b/include/hw/misc/pvpanic.h
118
index XXXXXXX..XXXXXXX 100644
119
--- a/include/hw/misc/pvpanic.h
120
+++ b/include/hw/misc/pvpanic.h
121
@@ -XXX,XX +XXX,XX @@
122
123
#define TYPE_PVPANIC_ISA_DEVICE "pvpanic"
124
#define TYPE_PVPANIC_PCI_DEVICE "pvpanic-pci"
125
+#define TYPE_PVPANIC_MMIO_DEVICE "pvpanic-mmio"
126
127
#define PVPANIC_IOPORT_PROP "ioport"
128
129
--
130
2.39.3 (Apple Git-145)
131
132
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
macOS unconditionally disables interrupts of the physical timer on boot
and then continues to use the virtual one. We don't really want to support
a full physical timer emulation, so let's just ignore those writes.
6
7
Signed-off-by: Alexander Graf <graf@amazon.com>
8
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
9
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
10
---
11
target/arm/hvf/hvf.c | 9 +++++++++
12
1 file changed, 9 insertions(+)
13
14
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/hvf/hvf.c
17
+++ b/target/arm/hvf/hvf.c
18
@@ -XXX,XX +XXX,XX @@
19
20
#include "qemu/osdep.h"
21
#include "qemu/error-report.h"
22
+#include "qemu/log.h"
23
24
#include "sysemu/runstate.h"
25
#include "sysemu/hvf.h"
26
@@ -XXX,XX +XXX,XX @@ void hvf_arm_init_debug(void)
27
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
28
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
29
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
30
+#define SYSREG_CNTP_CTL_EL0 SYSREG(3, 3, 14, 2, 1)
31
#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
32
#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
33
#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
34
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
35
case SYSREG_OSLAR_EL1:
36
env->cp15.oslsr_el1 = val & 1;
37
return 0;
38
+ case SYSREG_CNTP_CTL_EL0:
39
+ /*
40
+ * Guests should not rely on the physical counter, but macOS emits
41
+ * disable writes to it. Let it do so, but ignore the requests.
42
+ */
43
+ qemu_log_mask(LOG_UNIMP, "Unsupported write to CNTP_CTL_EL0\n");
44
+ return 0;
45
case SYSREG_OSDLR_EL1:
46
/* Dummy register */
47
return 0;
48
--
49
2.39.3 (Apple Git-145)
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
Some boards such as vmapple don't do real legacy PCI IRQ swizzling.
Instead, they just keep allocating more board IRQ lines for each new
legacy IRQ. Let's support that mode by giving instantiators a new
"num-irqs" property they can use to support more than 4 legacy IRQ lines.
In this mode, GPEX will export more IRQ lines, one for each device.
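
As a minimal sketch of how a board might use this (illustrative only;
the IRQ count, 'board_intc' and 'irq_base' are made up, not taken from
this series):

    /* Hypothetical board code: a GPEX host with 16 INTx lines instead
     * of the default 4, each wired to its own board interrupt. */
    DeviceState *dev = qdev_new(TYPE_GPEX_HOST);
    qdev_prop_set_uint8(dev, "num-irqs", 16);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

    for (int i = 0; i < 16; i++) {
        sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
                           qdev_get_gpio_in(board_intc, irq_base + i));
        gpex_set_irq_num(GPEX_HOST(dev), i, irq_base + i);
    }

Note that, as with any static qdev property, the value has to be set
before the device is realized.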
8
9
Signed-off-by: Alexander Graf <graf@amazon.com>
10
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
11
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
12
---
13
14
v4:
15
16
* Turned pair of IRQ arrays into array of structs.
17
* Simplified swizzling logic selection.
18
19
hw/arm/sbsa-ref.c | 2 +-
20
hw/arm/virt.c | 2 +-
21
hw/i386/microvm.c | 2 +-
22
hw/loongarch/virt.c | 2 +-
23
hw/mips/loongson3_virt.c | 2 +-
24
hw/openrisc/virt.c | 12 +++++------
25
hw/pci-host/gpex.c | 43 ++++++++++++++++++++++++++++++--------
26
hw/riscv/virt.c | 12 +++++------
27
hw/xtensa/virt.c | 2 +-
28
include/hw/pci-host/gpex.h | 7 +++----
29
10 files changed, 55 insertions(+), 31 deletions(-)
30
31
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/arm/sbsa-ref.c
34
+++ b/hw/arm/sbsa-ref.c
35
@@ -XXX,XX +XXX,XX @@ static void create_pcie(SBSAMachineState *sms)
36
/* Map IO port space */
37
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_pio);
38
39
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
40
+ for (i = 0; i < PCI_NUM_PINS; i++) {
41
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
42
qdev_get_gpio_in(sms->gic, irq + i));
43
gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);
44
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/hw/arm/virt.c
47
+++ b/hw/arm/virt.c
48
@@ -XXX,XX +XXX,XX @@ static void create_pcie(VirtMachineState *vms)
49
/* Map IO port space */
50
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_pio);
51
52
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
53
+ for (i = 0; i < PCI_NUM_PINS; i++) {
54
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
55
qdev_get_gpio_in(vms->gic, irq + i));
56
gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);
57
diff --git a/hw/i386/microvm.c b/hw/i386/microvm.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/i386/microvm.c
60
+++ b/hw/i386/microvm.c
61
@@ -XXX,XX +XXX,XX @@ static void create_gpex(MicrovmMachineState *mms)
62
mms->gpex.mmio64.base, mmio64_alias);
63
}
64
65
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
66
+ for (i = 0; i < PCI_NUM_PINS; i++) {
67
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
68
x86ms->gsi[mms->gpex.irq + i]);
69
}
70
diff --git a/hw/loongarch/virt.c b/hw/loongarch/virt.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/hw/loongarch/virt.c
73
+++ b/hw/loongarch/virt.c
74
@@ -XXX,XX +XXX,XX @@ static void virt_devices_init(DeviceState *pch_pic,
75
memory_region_add_subregion(get_system_memory(), VIRT_PCI_IO_BASE,
76
pio_alias);
77
78
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
79
+ for (i = 0; i < PCI_NUM_PINS; i++) {
80
sysbus_connect_irq(d, i,
81
qdev_get_gpio_in(pch_pic, 16 + i));
82
gpex_set_irq_num(GPEX_HOST(gpex_dev), i, 16 + i);
83
diff --git a/hw/mips/loongson3_virt.c b/hw/mips/loongson3_virt.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/mips/loongson3_virt.c
86
+++ b/hw/mips/loongson3_virt.c
87
@@ -XXX,XX +XXX,XX @@ static inline void loongson3_virt_devices_init(MachineState *machine,
88
virt_memmap[VIRT_PCIE_PIO].base, s->pio_alias);
89
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, virt_memmap[VIRT_PCIE_PIO].base);
90
91
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
92
+ for (i = 0; i < PCI_NUM_PINS; i++) {
93
irq = qdev_get_gpio_in(pic, PCIE_IRQ_BASE + i);
94
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
95
gpex_set_irq_num(GPEX_HOST(dev), i, PCIE_IRQ_BASE + i);
96
diff --git a/hw/openrisc/virt.c b/hw/openrisc/virt.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/hw/openrisc/virt.c
99
+++ b/hw/openrisc/virt.c
100
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
101
{
102
int pin, dev;
103
uint32_t irq_map_stride = 0;
104
- uint32_t full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS * 6] = {};
105
+ uint32_t full_irq_map[PCI_NUM_PINS * PCI_NUM_PINS * 6] = {};
106
uint32_t *irq_map = full_irq_map;
107
108
/*
109
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
110
* possible slot) seeing the interrupt-map-mask will allow the table
111
* to wrap to any number of devices.
112
*/
113
- for (dev = 0; dev < GPEX_NUM_IRQS; dev++) {
114
+ for (dev = 0; dev < PCI_NUM_PINS; dev++) {
115
int devfn = dev << 3;
116
117
- for (pin = 0; pin < GPEX_NUM_IRQS; pin++) {
118
- int irq_nr = irq_base + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
119
+ for (pin = 0; pin < PCI_NUM_PINS; pin++) {
120
+ int irq_nr = irq_base + ((pin + PCI_SLOT(devfn)) % PCI_NUM_PINS);
121
int i = 0;
122
123
/* Fill PCI address cells */
124
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
125
}
126
127
qemu_fdt_setprop(fdt, nodename, "interrupt-map", full_irq_map,
128
- GPEX_NUM_IRQS * GPEX_NUM_IRQS *
129
+ PCI_NUM_PINS * PCI_NUM_PINS *
130
irq_map_stride * sizeof(uint32_t));
131
132
qemu_fdt_setprop_cells(fdt, nodename, "interrupt-map-mask",
133
@@ -XXX,XX +XXX,XX @@ static void openrisc_virt_pcie_init(OR1KVirtState *state,
134
memory_region_add_subregion(get_system_memory(), pio_base, alias);
135
136
/* Connect IRQ lines. */
137
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
138
+ for (i = 0; i < PCI_NUM_PINS; i++) {
139
pcie_irq = get_per_cpu_irq(cpus, num_cpus, irq_base + i);
140
141
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, pcie_irq);
142
diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
143
index XXXXXXX..XXXXXXX 100644
144
--- a/hw/pci-host/gpex.c
145
+++ b/hw/pci-host/gpex.c
146
@@ -XXX,XX +XXX,XX @@
147
#include "qemu/osdep.h"
148
#include "qapi/error.h"
149
#include "hw/irq.h"
150
+#include "hw/pci/pci_bus.h"
151
#include "hw/pci-host/gpex.h"
152
#include "hw/qdev-properties.h"
153
#include "migration/vmstate.h"
154
@@ -XXX,XX +XXX,XX @@
155
* GPEX host
156
*/
157
158
+struct GPEXIrq {
159
+ qemu_irq irq;
160
+ int irq_num;
161
+};
162
+
163
static void gpex_set_irq(void *opaque, int irq_num, int level)
164
{
165
GPEXHost *s = opaque;
166
167
- qemu_set_irq(s->irq[irq_num], level);
168
+ qemu_set_irq(s->irq[irq_num].irq, level);
169
}
170
171
int gpex_set_irq_num(GPEXHost *s, int index, int gsi)
172
{
173
- if (index >= GPEX_NUM_IRQS) {
174
+ if (index >= s->num_irqs) {
175
return -EINVAL;
176
}
177
178
- s->irq_num[index] = gsi;
179
+ s->irq[index].irq_num = gsi;
180
return 0;
181
}
182
183
@@ -XXX,XX +XXX,XX @@ static PCIINTxRoute gpex_route_intx_pin_to_irq(void *opaque, int pin)
184
{
185
PCIINTxRoute route;
186
GPEXHost *s = opaque;
187
- int gsi = s->irq_num[pin];
188
+ int gsi = s->irq[pin].irq_num;
189
190
route.irq = gsi;
191
if (gsi < 0) {
192
@@ -XXX,XX +XXX,XX @@ static PCIINTxRoute gpex_route_intx_pin_to_irq(void *opaque, int pin)
193
return route;
194
}
195
196
+static int gpex_swizzle_map_irq_fn(PCIDevice *pci_dev, int pin)
197
+{
198
+ PCIBus *bus = pci_device_root_bus(pci_dev);
199
+
200
+ return (PCI_SLOT(pci_dev->devfn) + pin) % bus->nirq;
201
+}
202
+
203
static void gpex_host_realize(DeviceState *dev, Error **errp)
204
{
205
PCIHostState *pci = PCI_HOST_BRIDGE(dev);
206
@@ -XXX,XX +XXX,XX @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
207
PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
208
int i;
209
210
+ s->irq = g_malloc0_n(s->num_irqs, sizeof(*s->irq));
211
+
212
pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
213
sysbus_init_mmio(sbd, &pex->mmio);
214
215
@@ -XXX,XX +XXX,XX @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
216
sysbus_init_mmio(sbd, &s->io_ioport);
217
}
218
219
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
220
- sysbus_init_irq(sbd, &s->irq[i]);
221
- s->irq_num[i] = -1;
222
+ for (i = 0; i < s->num_irqs; i++) {
223
+ sysbus_init_irq(sbd, &s->irq[i].irq);
224
+ s->irq[i].irq_num = -1;
225
}
226
227
pci->bus = pci_register_root_bus(dev, "pcie.0", gpex_set_irq,
228
- pci_swizzle_map_irq_fn, s, &s->io_mmio,
229
- &s->io_ioport, 0, 4, TYPE_PCIE_BUS);
230
+ gpex_swizzle_map_irq_fn,
231
+ s, &s->io_mmio, &s->io_ioport, 0,
232
+ s->num_irqs, TYPE_PCIE_BUS);
233
234
pci_bus_set_route_irq_fn(pci->bus, gpex_route_intx_pin_to_irq);
235
qdev_realize(DEVICE(&s->gpex_root), BUS(pci->bus), &error_fatal);
236
}
237
238
+static void gpex_host_unrealize(DeviceState *dev)
239
+{
240
+ GPEXHost *s = GPEX_HOST(dev);
241
+
242
+ g_free(s->irq);
243
+}
244
+
245
static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
246
PCIBus *rootbus)
247
{
248
@@ -XXX,XX +XXX,XX @@ static Property gpex_host_properties[] = {
249
gpex_cfg.mmio64.base, 0),
250
DEFINE_PROP_SIZE(PCI_HOST_ABOVE_4G_MMIO_SIZE, GPEXHost,
251
gpex_cfg.mmio64.size, 0),
252
+ DEFINE_PROP_UINT8("num-irqs", GPEXHost, num_irqs, PCI_NUM_PINS),
253
DEFINE_PROP_END_OF_LIST(),
254
};
255
256
@@ -XXX,XX +XXX,XX @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
257
258
hc->root_bus_path = gpex_host_root_bus_path;
259
dc->realize = gpex_host_realize;
260
+ dc->unrealize = gpex_host_unrealize;
261
set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
262
dc->fw_name = "pci";
263
device_class_set_props(dc, gpex_host_properties);
264
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
265
index XXXXXXX..XXXXXXX 100644
266
--- a/hw/riscv/virt.c
267
+++ b/hw/riscv/virt.c
268
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
269
{
270
int pin, dev;
271
uint32_t irq_map_stride = 0;
272
- uint32_t full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS *
273
+ uint32_t full_irq_map[PCI_NUM_PINS * PCI_NUM_PINS *
274
FDT_MAX_INT_MAP_WIDTH] = {};
275
uint32_t *irq_map = full_irq_map;
276
277
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
278
* possible slot) seeing the interrupt-map-mask will allow the table
279
* to wrap to any number of devices.
280
*/
281
- for (dev = 0; dev < GPEX_NUM_IRQS; dev++) {
282
+ for (dev = 0; dev < PCI_NUM_PINS; dev++) {
283
int devfn = dev * 0x8;
284
285
- for (pin = 0; pin < GPEX_NUM_IRQS; pin++) {
286
- int irq_nr = PCIE_IRQ + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
287
+ for (pin = 0; pin < PCI_NUM_PINS; pin++) {
288
+ int irq_nr = PCIE_IRQ + ((pin + PCI_SLOT(devfn)) % PCI_NUM_PINS);
289
int i = 0;
290
291
/* Fill PCI address cells */
292
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
293
}
294
295
qemu_fdt_setprop(fdt, nodename, "interrupt-map", full_irq_map,
296
- GPEX_NUM_IRQS * GPEX_NUM_IRQS *
297
+ PCI_NUM_PINS * PCI_NUM_PINS *
298
irq_map_stride * sizeof(uint32_t));
299
300
qemu_fdt_setprop_cells(fdt, nodename, "interrupt-map-mask",
301
@@ -XXX,XX +XXX,XX @@ static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
302
303
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, pio_base);
304
305
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
306
+ for (i = 0; i < PCI_NUM_PINS; i++) {
307
irq = qdev_get_gpio_in(irqchip, PCIE_IRQ + i);
308
309
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
310
diff --git a/hw/xtensa/virt.c b/hw/xtensa/virt.c
311
index XXXXXXX..XXXXXXX 100644
312
--- a/hw/xtensa/virt.c
313
+++ b/hw/xtensa/virt.c
314
@@ -XXX,XX +XXX,XX @@ static void create_pcie(MachineState *ms, CPUXtensaState *env, int irq_base,
315
/* Connect IRQ lines. */
316
extints = xtensa_get_extints(env);
317
318
- for (i = 0; i < GPEX_NUM_IRQS; i++) {
319
+ for (i = 0; i < PCI_NUM_PINS; i++) {
320
void *q = extints[irq_base + i];
321
322
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, q);
323
diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
324
index XXXXXXX..XXXXXXX 100644
325
--- a/include/hw/pci-host/gpex.h
326
+++ b/include/hw/pci-host/gpex.h
327
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(GPEXHost, GPEX_HOST)
328
#define TYPE_GPEX_ROOT_DEVICE "gpex-root"
329
OBJECT_DECLARE_SIMPLE_TYPE(GPEXRootState, GPEX_ROOT_DEVICE)
330
331
-#define GPEX_NUM_IRQS 4
332
-
333
struct GPEXRootState {
334
/*< private >*/
335
PCIDevice parent_obj;
336
@@ -XXX,XX +XXX,XX @@ struct GPEXConfig {
337
PCIBus *bus;
338
};
339
340
+typedef struct GPEXIrq GPEXIrq;
341
struct GPEXHost {
342
/*< private >*/
343
PCIExpressHost parent_obj;
344
@@ -XXX,XX +XXX,XX @@ struct GPEXHost {
345
MemoryRegion io_mmio;
346
MemoryRegion io_ioport_window;
347
MemoryRegion io_mmio_window;
348
- qemu_irq irq[GPEX_NUM_IRQS];
349
- int irq_num[GPEX_NUM_IRQS];
350
+ GPEXIrq *irq;
351
+ uint8_t num_irqs;
352
353
bool allow_unmapped_accesses;
354
355
--
356
2.39.3 (Apple Git-145)
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
VMApple contains an "aes" engine device that it uses to encrypt and
4
decrypt its nvram. It has trivial hard coded keys it uses for that
5
purpose.
6
7
Add device emulation for this device model.
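
For reviewers unfamiliar with the device, here is a rough sketch of the
guest-side command sequence this emulation expects, reconstructed purely from
the register and command definitions in this patch; mmio_write32() is a
hypothetical 32-bit MMIO store helper and fifo_reg stands for the address of
the FIFO register at offset 0x200 of the first MMIO region:

    /*
     * Illustrative only: decrypt `len` bytes from src_pa to dst_pa using
     * built-in key 1 in CBC mode, then raise a completion IRQ.
     */
    static void aes_fifo_decrypt(uint64_t fifo_reg, uint64_t src_pa,
                                 uint64_t dst_pa, uint32_t len,
                                 const uint32_t iv[4])
    {
        /* CMD_KEY: context 0, built-in key 1, decrypt, CBC block mode */
        mmio_write32(fifo_reg, (0x1u << 28) | (1u << 24) | (1u << 16));

        /* CMD_IV: context 0, followed by the four IV words */
        mmio_write32(fifo_reg, 0x2u << 28);
        for (int i = 0; i < 4; i++) {
            mmio_write32(fifo_reg, iv[i]);
        }

        /* CMD_DATA: length in the low 24 bits, then the address payload */
        mmio_write32(fifo_reg, (0x5u << 28) | (len & 0xffffffu));
        mmio_write32(fifo_reg, (uint32_t)(((src_pa >> 32) & 0xffff) << 16) |
                               (uint32_t)((dst_pa >> 32) & 0xffff));
        mmio_write32(fifo_reg, (uint32_t)src_pa);
        mmio_write32(fifo_reg, (uint32_t)dst_pa);

        /* CMD_FLAG: raise an IRQ and latch flag info value 1 */
        mmio_write32(fifo_reg, (0x8u << 28) | (1u << 27) | 0x1);
    }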
8
9
Signed-off-by: Alexander Graf <graf@amazon.com>
10
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
11
---
12
v3:
13
14
* Rebased on latest upstream and fixed minor breakages.
15
* Replaced legacy device reset method with Resettable method
16
17
v4:
18
19
* Improved logging of unimplemented functions and guest errors.
20
* Better adherence to naming and coding conventions.
21
* Cleaner error handling and recovery, including using g_autoptr
22
23
v5:
24
25
* More logging improvements
26
* Use xxx64_overflow() functions for hexdump buffer size calculations.
27
28
hw/vmapple/Kconfig | 2 +
29
hw/vmapple/aes.c | 578 +++++++++++++++++++++++++++++++++++
30
hw/vmapple/meson.build | 1 +
31
hw/vmapple/trace-events | 14 +
32
include/hw/vmapple/vmapple.h | 17 ++
33
include/qemu/cutils.h | 15 +
34
util/hexdump.c | 18 ++
35
7 files changed, 645 insertions(+)
36
create mode 100644 hw/vmapple/aes.c
37
create mode 100644 include/hw/vmapple/vmapple.h
38
39
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
40
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/vmapple/Kconfig
42
+++ b/hw/vmapple/Kconfig
43
@@ -1 +1,3 @@
44
+config VMAPPLE_AES
45
+ bool
46
47
diff --git a/hw/vmapple/aes.c b/hw/vmapple/aes.c
48
new file mode 100644
49
index XXXXXXX..XXXXXXX
50
--- /dev/null
51
+++ b/hw/vmapple/aes.c
52
@@ -XXX,XX +XXX,XX @@
53
+/*
54
+ * QEMU Apple AES device emulation
55
+ *
56
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
57
+ *
58
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
59
+ * See the COPYING file in the top-level directory.
60
+ *
61
+ * SPDX-License-Identifier: GPL-2.0-or-later
62
+ */
63
+
64
+#include "qemu/osdep.h"
65
+#include "trace.h"
66
+#include "crypto/hash.h"
67
+#include "crypto/aes.h"
68
+#include "crypto/cipher.h"
69
+#include "hw/irq.h"
70
+#include "hw/sysbus.h"
71
+#include "hw/vmapple/vmapple.h"
72
+#include "migration/vmstate.h"
73
+#include "qemu/cutils.h"
74
+#include "qemu/log.h"
75
+#include "qemu/module.h"
76
+#include "sysemu/dma.h"
77
+
78
+OBJECT_DECLARE_SIMPLE_TYPE(AESState, APPLE_AES)
79
+
80
+#define MAX_FIFO_SIZE 9
81
+
82
+#define CMD_KEY 0x1
83
+#define CMD_KEY_CONTEXT_SHIFT 27
84
+#define CMD_KEY_CONTEXT_MASK (0x1 << CMD_KEY_CONTEXT_SHIFT)
85
+#define CMD_KEY_SELECT_MAX_IDX 0x7
86
+#define CMD_KEY_SELECT_SHIFT 24
87
+#define CMD_KEY_SELECT_MASK (CMD_KEY_SELECT_MAX_IDX << CMD_KEY_SELECT_SHIFT)
88
+#define CMD_KEY_KEY_LEN_NUM 4u
89
+#define CMD_KEY_KEY_LEN_SHIFT 22
90
+#define CMD_KEY_KEY_LEN_MASK ((CMD_KEY_KEY_LEN_NUM - 1u) << CMD_KEY_KEY_LEN_SHIFT)
91
+#define CMD_KEY_ENCRYPT_SHIFT 20
92
+#define CMD_KEY_ENCRYPT_MASK (0x1 << CMD_KEY_ENCRYPT_SHIFT)
93
+#define CMD_KEY_BLOCK_MODE_SHIFT 16
94
+#define CMD_KEY_BLOCK_MODE_MASK (0x3 << CMD_KEY_BLOCK_MODE_SHIFT)
95
+#define CMD_IV 0x2
96
+#define CMD_IV_CONTEXT_SHIFT 26
97
+#define CMD_IV_CONTEXT_MASK (0x3 << CMD_IV_CONTEXT_SHIFT)
98
+#define CMD_DSB 0x3
99
+#define CMD_SKG 0x4
100
+#define CMD_DATA 0x5
101
+#define CMD_DATA_KEY_CTX_SHIFT 27
102
+#define CMD_DATA_KEY_CTX_MASK (0x1 << CMD_DATA_KEY_CTX_SHIFT)
103
+#define CMD_DATA_IV_CTX_SHIFT 25
104
+#define CMD_DATA_IV_CTX_MASK (0x3 << CMD_DATA_IV_CTX_SHIFT)
105
+#define CMD_DATA_LEN_MASK 0xffffff
106
+#define CMD_STORE_IV 0x6
107
+#define CMD_STORE_IV_ADDR_MASK 0xffffff
108
+#define CMD_WRITE_REG 0x7
109
+#define CMD_FLAG 0x8
110
+#define CMD_FLAG_STOP_MASK BIT(26)
111
+#define CMD_FLAG_RAISE_IRQ_MASK BIT(27)
112
+#define CMD_FLAG_INFO_MASK 0xff
113
+#define CMD_MAX 0x10
114
+
115
+#define CMD_SHIFT 28
116
+
117
+#define REG_STATUS 0xc
118
+#define REG_STATUS_DMA_READ_RUNNING BIT(0)
119
+#define REG_STATUS_DMA_READ_PENDING BIT(1)
120
+#define REG_STATUS_DMA_WRITE_RUNNING BIT(2)
121
+#define REG_STATUS_DMA_WRITE_PENDING BIT(3)
122
+#define REG_STATUS_BUSY BIT(4)
123
+#define REG_STATUS_EXECUTING BIT(5)
124
+#define REG_STATUS_READY BIT(6)
125
+#define REG_STATUS_TEXT_DPA_SEEDED BIT(7)
126
+#define REG_STATUS_UNWRAP_DPA_SEEDED BIT(8)
127
+
128
+#define REG_IRQ_STATUS 0x18
129
+#define REG_IRQ_STATUS_INVALID_CMD BIT(2)
130
+#define REG_IRQ_STATUS_FLAG BIT(5)
131
+#define REG_IRQ_ENABLE 0x1c
132
+#define REG_WATERMARK 0x20
133
+#define REG_Q_STATUS 0x24
134
+#define REG_FLAG_INFO 0x30
135
+#define REG_FIFO 0x200
136
+
137
+static const uint32_t key_lens[CMD_KEY_KEY_LEN_NUM] = {
138
+ [0] = 16,
139
+ [1] = 24,
140
+ [2] = 32,
141
+ [3] = 64,
142
+};
143
+
144
+typedef struct Key {
145
+ uint32_t key_len;
146
+ uint8_t key[32];
147
+} Key;
148
+
149
+typedef struct IV {
150
+ uint32_t iv[4];
151
+} IV;
152
+
153
+static Key builtin_keys[CMD_KEY_SELECT_MAX_IDX + 1] = {
154
+ [1] = {
155
+ .key_len = 32,
156
+ .key = { 0x1 },
157
+ },
158
+ [2] = {
159
+ .key_len = 32,
160
+ .key = { 0x2 },
161
+ },
162
+ [3] = {
163
+ .key_len = 32,
164
+ .key = { 0x3 },
165
+ }
166
+};
167
+
168
+struct AESState {
169
+ SysBusDevice parent_obj;
170
+
171
+ qemu_irq irq;
172
+ MemoryRegion iomem1;
173
+ MemoryRegion iomem2;
174
+ AddressSpace *as;
175
+
176
+ uint32_t status;
177
+ uint32_t q_status;
178
+ uint32_t irq_status;
179
+ uint32_t irq_enable;
180
+ uint32_t watermark;
181
+ uint32_t flag_info;
182
+ uint32_t fifo[MAX_FIFO_SIZE];
183
+ uint32_t fifo_idx;
184
+ Key key[2];
185
+ IV iv[4];
186
+ bool is_encrypt;
187
+ QCryptoCipherMode block_mode;
188
+};
189
+
190
+static void aes_update_irq(AESState *s)
191
+{
192
+ qemu_set_irq(s->irq, !!(s->irq_status & s->irq_enable));
193
+}
194
+
195
+static uint64_t aes1_read(void *opaque, hwaddr offset, unsigned size)
196
+{
197
+ AESState *s = opaque;
198
+ uint64_t res = 0;
199
+
200
+ switch (offset) {
201
+ case REG_STATUS:
202
+ res = s->status;
203
+ break;
204
+ case REG_IRQ_STATUS:
205
+ res = s->irq_status;
206
+ break;
207
+ case REG_IRQ_ENABLE:
208
+ res = s->irq_enable;
209
+ break;
210
+ case REG_WATERMARK:
211
+ res = s->watermark;
212
+ break;
213
+ case REG_Q_STATUS:
214
+ res = s->q_status;
215
+ break;
216
+ case REG_FLAG_INFO:
217
+ res = s->flag_info;
218
+ break;
219
+
220
+ default:
221
+ qemu_log_mask(LOG_UNIMP, "%s: Unknown AES MMIO offset %" PRIx64 "\n",
222
+ __func__, offset);
223
+ break;
224
+ }
225
+
226
+ trace_aes_read(offset, res);
227
+
228
+ return res;
229
+}
230
+
231
+static void fifo_append(AESState *s, uint64_t val)
232
+{
233
+ if (s->fifo_idx == MAX_FIFO_SIZE) {
234
+ /* Exceeded the FIFO. Bail out */
235
+ return;
236
+ }
237
+
238
+ s->fifo[s->fifo_idx++] = val;
239
+}
240
+
241
+static bool has_payload(AESState *s, uint32_t elems)
242
+{
243
+ return s->fifo_idx >= (elems + 1);
244
+}
245
+
246
+static bool cmd_key(AESState *s)
247
+{
248
+ uint32_t cmd = s->fifo[0];
249
+ uint32_t key_select = (cmd & CMD_KEY_SELECT_MASK) >> CMD_KEY_SELECT_SHIFT;
250
+ uint32_t ctxt = (cmd & CMD_KEY_CONTEXT_MASK) >> CMD_KEY_CONTEXT_SHIFT;
251
+ uint32_t key_len;
252
+
253
+ switch ((cmd & CMD_KEY_BLOCK_MODE_MASK) >> CMD_KEY_BLOCK_MODE_SHIFT) {
254
+ case 0:
255
+ s->block_mode = QCRYPTO_CIPHER_MODE_ECB;
256
+ break;
257
+ case 1:
258
+ s->block_mode = QCRYPTO_CIPHER_MODE_CBC;
259
+ break;
260
+ default:
261
+ return false;
262
+ }
263
+
264
+ s->is_encrypt = cmd & CMD_KEY_ENCRYPT_MASK;
265
+ key_len = key_lens[((cmd & CMD_KEY_KEY_LEN_MASK) >> CMD_KEY_KEY_LEN_SHIFT)];
266
+
267
+ if (key_select) {
268
+ trace_aes_cmd_key_select_builtin(ctxt, key_select,
269
+ s->is_encrypt ? "en" : "de",
270
+ QCryptoCipherMode_str(s->block_mode));
271
+ s->key[ctxt] = builtin_keys[key_select];
272
+ } else {
273
+ trace_aes_cmd_key_select_new(ctxt, key_len,
274
+ s->is_encrypt ? "en" : "de",
275
+ QCryptoCipherMode_str(s->block_mode));
276
+ if (key_len > sizeof(s->key[ctxt].key)) {
277
+ return false;
278
+ }
279
+ if (!has_payload(s, key_len / sizeof(uint32_t))) {
280
+ /* wait for payload */
281
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: No payload\n", __func__);
282
+ return false;
283
+ }
284
+ memcpy(&s->key[ctxt].key, &s->fifo[1], key_len);
285
+ s->key[ctxt].key_len = key_len;
286
+ }
287
+
288
+ return true;
289
+}
290
+
291
+static bool cmd_iv(AESState *s)
292
+{
293
+ uint32_t cmd = s->fifo[0];
294
+ uint32_t ctxt = (cmd & CMD_IV_CONTEXT_MASK) >> CMD_IV_CONTEXT_SHIFT;
295
+
296
+ if (!has_payload(s, 4)) {
297
+ /* wait for payload */
298
+ return false;
299
+ }
300
+ memcpy(&s->iv[ctxt].iv, &s->fifo[1], sizeof(s->iv[ctxt].iv));
301
+ trace_aes_cmd_iv(ctxt, s->fifo[1], s->fifo[2], s->fifo[3], s->fifo[4]);
302
+
303
+ return true;
304
+}
305
+
306
+static void dump_data(const char *desc, const void *p, size_t len)
307
+{
308
+ static const size_t MAX_LEN = 0x1000;
309
+ char hex[MAX_LEN * 2 + 1] = "";
310
+
311
+ if (len > MAX_LEN) {
312
+ return;
313
+ }
314
+
315
+ qemu_hexdump_to_buffer(hex, sizeof(hex), p, len);
316
+ trace_aes_dump_data(desc, hex);
317
+}
318
+
319
+static bool cmd_data(AESState *s)
320
+{
321
+ uint32_t cmd = s->fifo[0];
322
+ uint32_t ctxt_iv = 0;
323
+ uint32_t ctxt_key = (cmd & CMD_DATA_KEY_CTX_MASK) >> CMD_DATA_KEY_CTX_SHIFT;
324
+ uint32_t len = cmd & CMD_DATA_LEN_MASK;
325
+ uint64_t src_addr = s->fifo[2];
326
+ uint64_t dst_addr = s->fifo[3];
327
+ QCryptoCipherAlgo alg;
328
+ g_autoptr(QCryptoCipher) cipher = NULL;
329
+ g_autoptr(GByteArray) src = NULL;
330
+ g_autoptr(GByteArray) dst = NULL;
331
+ MemTxResult r;
332
+
333
+ src_addr |= ((uint64_t)s->fifo[1] << 16) & 0xffff00000000ULL;
334
+ dst_addr |= ((uint64_t)s->fifo[1] << 32) & 0xffff00000000ULL;
335
+
336
+ trace_aes_cmd_data(ctxt_key, ctxt_iv, src_addr, dst_addr, len);
337
+
338
+ if (!has_payload(s, 3)) {
339
+ /* wait for payload */
340
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: No payload\n", __func__);
341
+ return false;
342
+ }
343
+
344
+ if (ctxt_key >= ARRAY_SIZE(s->key) ||
345
+ ctxt_iv >= ARRAY_SIZE(s->iv)) {
346
+ /* Invalid input */
347
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid key or iv\n", __func__);
348
+ return false;
349
+ }
350
+
351
+ src = g_byte_array_sized_new(len);
352
+ g_byte_array_set_size(src, len);
353
+ dst = g_byte_array_sized_new(len);
354
+ g_byte_array_set_size(dst, len);
355
+
356
+ r = dma_memory_read(s->as, src_addr, src->data, len, MEMTXATTRS_UNSPECIFIED);
357
+ if (r != MEMTX_OK) {
358
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA read of %"PRIu32" bytes "
359
+ "from 0x%"PRIx64" failed. (r=%d)\n",
360
+ __func__, len, src_addr, r);
361
+ return false;
362
+ }
363
+
364
+ dump_data("cmd_data(): src_data=", src->data, len);
365
+
366
+ switch (s->key[ctxt_key].key_len) {
367
+ case 128 / 8:
368
+ alg = QCRYPTO_CIPHER_ALGO_AES_128;
369
+ break;
370
+ case 192 / 8:
371
+ alg = QCRYPTO_CIPHER_ALGO_AES_192;
372
+ break;
373
+ case 256 / 8:
374
+ alg = QCRYPTO_CIPHER_ALGO_AES_256;
375
+ break;
376
+ default:
377
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid key length\n", __func__);
378
+ return false;
379
+ }
380
+ cipher = qcrypto_cipher_new(alg, s->block_mode,
381
+ s->key[ctxt_key].key,
382
+ s->key[ctxt_key].key_len, NULL);
383
+ if (!cipher) {
384
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to create cipher object\n",
385
+ __func__);
386
+ return false;
387
+ }
388
+ if (s->block_mode != QCRYPTO_CIPHER_MODE_ECB) {
389
+ if (qcrypto_cipher_setiv(cipher, (void *)s->iv[ctxt_iv].iv,
390
+ sizeof(s->iv[ctxt_iv].iv), NULL) != 0) {
391
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to set IV\n", __func__);
392
+ return false;
393
+ }
394
+ }
395
+ if (s->is_encrypt) {
396
+ if (qcrypto_cipher_encrypt(cipher, src->data, dst->data, len, NULL) != 0) {
397
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Encryption failed\n", __func__);
398
+ return false;
399
+ }
400
+ } else {
401
+ if (qcrypto_cipher_decrypt(cipher, src->data, dst->data, len, NULL) != 0) {
402
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Decryption failed\n", __func__);
403
+ return false;
404
+ }
405
+ }
406
+
407
+ dump_data("cmd_data(): dst_data=", dst->data, len);
408
+ r = dma_memory_write(s->as, dst_addr, dst->data, len, MEMTXATTRS_UNSPECIFIED);
409
+ if (r != MEMTX_OK) {
410
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA write of %"PRIu32" bytes "
411
+ "to 0x%"PRIx64" failed. (r=%d)\n",
412
+ __func__, len, dst_addr, r);
413
+ return false;
414
+ }
415
+
416
+ return true;
417
+}
418
+
419
+static bool cmd_store_iv(AESState *s)
420
+{
421
+ uint32_t cmd = s->fifo[0];
422
+ uint32_t ctxt = (cmd & CMD_IV_CONTEXT_MASK) >> CMD_IV_CONTEXT_SHIFT;
423
+ uint64_t addr = s->fifo[1];
424
+
425
+ if (!has_payload(s, 1)) {
426
+ /* wait for payload */
427
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: No payload\n", __func__);
428
+ return false;
429
+ }
430
+
431
+ if (ctxt >= ARRAY_SIZE(s->iv)) {
432
+ /* Invalid context selected */
433
+ return false;
434
+ }
435
+
436
+ addr |= ((uint64_t)cmd << 32) & 0xff00000000ULL;
437
+ cpu_physical_memory_write(addr, &s->iv[ctxt].iv, sizeof(s->iv[ctxt].iv));
438
+
439
+ trace_aes_cmd_store_iv(ctxt, addr, s->iv[ctxt].iv[0], s->iv[ctxt].iv[1],
440
+ s->iv[ctxt].iv[2], s->iv[ctxt].iv[3]);
441
+
442
+ return true;
443
+}
444
+
445
+static bool cmd_flag(AESState *s)
446
+{
447
+ uint32_t cmd = s->fifo[0];
448
+ uint32_t raise_irq = cmd & CMD_FLAG_RAISE_IRQ_MASK;
449
+
450
+ /* We always process data when it's coming in, so fire an IRQ immediately */
451
+ if (raise_irq) {
452
+ s->irq_status |= REG_IRQ_STATUS_FLAG;
453
+ }
454
+
455
+ s->flag_info = cmd & CMD_FLAG_INFO_MASK;
456
+
457
+ trace_aes_cmd_flag(!!raise_irq, s->flag_info);
458
+
459
+ return true;
460
+}
461
+
462
+static void fifo_process(AESState *s)
463
+{
464
+ uint32_t cmd = s->fifo[0] >> CMD_SHIFT;
465
+ bool success = false;
466
+
467
+ if (!s->fifo_idx) {
468
+ return;
469
+ }
470
+
471
+ switch (cmd) {
472
+ case CMD_KEY:
473
+ success = cmd_key(s);
474
+ break;
475
+ case CMD_IV:
476
+ success = cmd_iv(s);
477
+ break;
478
+ case CMD_DATA:
479
+ success = cmd_data(s);
480
+ break;
481
+ case CMD_STORE_IV:
482
+ success = cmd_store_iv(s);
483
+ break;
484
+ case CMD_FLAG:
485
+ success = cmd_flag(s);
486
+ break;
487
+ default:
488
+ s->irq_status |= REG_IRQ_STATUS_INVALID_CMD;
489
+ break;
490
+ }
491
+
492
+ if (success) {
493
+ s->fifo_idx = 0;
494
+ }
495
+
496
+ trace_aes_fifo_process(cmd, success ? 1 : 0);
497
+}
498
+
499
+static void aes1_write(void *opaque, hwaddr offset, uint64_t val, unsigned size)
500
+{
501
+ AESState *s = opaque;
502
+
503
+ trace_aes_write(offset, val);
504
+
505
+ switch (offset) {
506
+ case REG_IRQ_STATUS:
507
+ s->irq_status &= ~val;
508
+ break;
509
+ case REG_IRQ_ENABLE:
510
+ s->irq_enable = val;
511
+ break;
512
+ case REG_FIFO:
513
+ fifo_append(s, val);
514
+ fifo_process(s);
515
+ break;
516
+ default:
517
+ qemu_log_mask(LOG_UNIMP,
518
+ "%s: Unknown AES MMIO offset %"PRIx64", data %"PRIx64"\n",
519
+ __func__, offset, val);
520
+ return;
521
+ }
522
+
523
+ aes_update_irq(s);
524
+}
525
+
526
+static const MemoryRegionOps aes1_ops = {
527
+ .read = aes1_read,
528
+ .write = aes1_write,
529
+ .endianness = DEVICE_NATIVE_ENDIAN,
530
+ .valid = {
531
+ .min_access_size = 4,
532
+ .max_access_size = 8,
533
+ },
534
+ .impl = {
535
+ .min_access_size = 4,
536
+ .max_access_size = 4,
537
+ },
538
+};
539
+
540
+static uint64_t aes2_read(void *opaque, hwaddr offset, unsigned size)
541
+{
542
+ uint64_t res = 0;
543
+
544
+ switch (offset) {
545
+ case 0:
546
+ res = 0;
547
+ break;
548
+ default:
549
+ qemu_log_mask(LOG_UNIMP,
550
+ "%s: Unknown AES MMIO 2 offset %"PRIx64"\n",
551
+ __func__, offset);
552
+ break;
553
+ }
554
+
555
+ trace_aes_2_read(offset, res);
556
+
557
+ return res;
558
+}
559
+
560
+static void aes2_write(void *opaque, hwaddr offset, uint64_t val, unsigned size)
561
+{
562
+ trace_aes_2_write(offset, val);
563
+
564
+ switch (offset) {
565
+ default:
566
+ qemu_log_mask(LOG_UNIMP,
567
+ "%s: Unknown AES MMIO 2 offset %"PRIx64", data %"PRIx64"\n",
568
+ __func__, offset, val);
569
+ return;
570
+ }
571
+}
572
+
573
+static const MemoryRegionOps aes2_ops = {
574
+ .read = aes2_read,
575
+ .write = aes2_write,
576
+ .endianness = DEVICE_NATIVE_ENDIAN,
577
+ .valid = {
578
+ .min_access_size = 4,
579
+ .max_access_size = 8,
580
+ },
581
+ .impl = {
582
+ .min_access_size = 4,
583
+ .max_access_size = 4,
584
+ },
585
+};
586
+
587
+static void aes_reset(Object *obj, ResetType type)
588
+{
589
+ AESState *s = APPLE_AES(obj);
590
+
591
+ s->status = 0x3f80;
592
+ s->q_status = 2;
593
+ s->irq_status = 0;
594
+ s->irq_enable = 0;
595
+ s->watermark = 0;
596
+}
597
+
598
+static void aes_init(Object *obj)
599
+{
600
+ AESState *s = APPLE_AES(obj);
601
+
602
+ memory_region_init_io(&s->iomem1, obj, &aes1_ops, s, TYPE_APPLE_AES, 0x4000);
603
+ memory_region_init_io(&s->iomem2, obj, &aes2_ops, s, TYPE_APPLE_AES, 0x4000);
604
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem1);
605
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem2);
606
+ sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq);
607
+ s->as = &address_space_memory;
608
+}
609
+
610
+static void aes_class_init(ObjectClass *klass, void *data)
611
+{
612
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
613
+
614
+ rc->phases.hold = aes_reset;
615
+}
616
+
617
+static const TypeInfo aes_info = {
618
+ .name = TYPE_APPLE_AES,
619
+ .parent = TYPE_SYS_BUS_DEVICE,
620
+ .instance_size = sizeof(AESState),
621
+ .class_init = aes_class_init,
622
+ .instance_init = aes_init,
623
+};
624
+
625
+static void aes_register_types(void)
626
+{
627
+ type_register_static(&aes_info);
628
+}
629
+
630
+type_init(aes_register_types)
631
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
632
index XXXXXXX..XXXXXXX 100644
633
--- a/hw/vmapple/meson.build
634
+++ b/hw/vmapple/meson.build
635
@@ -0,0 +1 @@
636
+system_ss.add(when: 'CONFIG_VMAPPLE_AES', if_true: files('aes.c'))
637
diff --git a/hw/vmapple/trace-events b/hw/vmapple/trace-events
638
index XXXXXXX..XXXXXXX 100644
639
--- a/hw/vmapple/trace-events
640
+++ b/hw/vmapple/trace-events
641
@@ -XXX,XX +XXX,XX @@
642
# See docs/devel/tracing.rst for syntax documentation.
643
644
+# aes.c
645
+aes_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
646
+aes_cmd_key_select_builtin(uint32_t ctx, uint32_t key_id, const char *direction, const char *cipher) "[%d] Selecting builtin key %d to %scrypt with %s"
647
+aes_cmd_key_select_new(uint32_t ctx, uint32_t key_len, const char *direction, const char *cipher) "[%d] Selecting new key size=%d to %scrypt with %s"
648
+aes_cmd_iv(uint32_t ctx, uint32_t iv0, uint32_t iv1, uint32_t iv2, uint32_t iv3) "[%d] 0x%08x 0x%08x 0x%08x 0x%08x"
649
+aes_cmd_data(uint32_t key, uint32_t iv, uint64_t src, uint64_t dst, uint32_t len) "[key=%d iv=%d] src=0x%"PRIx64" dst=0x%"PRIx64" len=0x%x"
650
+aes_cmd_store_iv(uint32_t ctx, uint64_t addr, uint32_t iv0, uint32_t iv1, uint32_t iv2, uint32_t iv3) "[%d] addr=0x%"PRIx64" -> 0x%08x 0x%08x 0x%08x 0x%08x"
651
+aes_cmd_flag(uint32_t raise, uint32_t flag_info) "raise=%d flag_info=0x%x"
652
+aes_fifo_process(uint32_t cmd, uint32_t success) "cmd=%d success=%d"
653
+aes_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
654
+aes_2_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
655
+aes_2_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
656
+aes_dump_data(const char *desc, const char *hex) "%s%s"
657
+
658
diff --git a/include/hw/vmapple/vmapple.h b/include/hw/vmapple/vmapple.h
659
new file mode 100644
660
index XXXXXXX..XXXXXXX
661
--- /dev/null
662
+++ b/include/hw/vmapple/vmapple.h
663
@@ -XXX,XX +XXX,XX @@
664
+/*
665
+ * Devices specific to the VMApple machine type
666
+ *
667
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
668
+ *
669
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
670
+ * See the COPYING file in the top-level directory.
671
+ *
672
+ * SPDX-License-Identifier: GPL-2.0-or-later
673
+ */
674
+
675
+#ifndef HW_VMAPPLE_VMAPPLE_H
676
+#define HW_VMAPPLE_VMAPPLE_H
677
+
678
+#define TYPE_APPLE_AES "apple-aes"
679
+
680
+#endif /* HW_VMAPPLE_VMAPPLE_H */
681
diff --git a/include/qemu/cutils.h b/include/qemu/cutils.h
682
index XXXXXXX..XXXXXXX 100644
683
--- a/include/qemu/cutils.h
684
+++ b/include/qemu/cutils.h
685
@@ -XXX,XX +XXX,XX @@ GString *qemu_hexdump_line(GString *str, const void *buf, size_t len,
686
void qemu_hexdump(FILE *fp, const char *prefix,
687
const void *bufptr, size_t size);
688
689
+/**
690
+ * qemu_hexdump_to_buffer:
691
+ * @buffer: output string buffer
692
+ * @buffer_size: amount of available space in buffer. Must be at least
693
+ * data_size*2+1.
694
+ * @data: input bytes
695
+ * @data_size: number of bytes in data
696
+ *
697
+ * Converts the @data_size bytes in @data into hex digit pairs, writing them to
698
+ * @buffer. Finally, a nul terminating character is written; @buffer therefore
699
+ * needs space for (data_size*2+1) chars.
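+ *
+ * Example (illustrative use only):
+ *     uint8_t bytes[4] = { 0xde, 0xad, 0xbe, 0xef };
+ *     char hex[sizeof(bytes) * 2 + 1];
+ *     qemu_hexdump_to_buffer(hex, sizeof(hex), bytes, sizeof(bytes));
+ *     (hex now holds "deadbeef")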
700
+ */
701
+void qemu_hexdump_to_buffer(char *restrict buffer, size_t buffer_size,
702
+ const uint8_t *restrict data, size_t data_size);
703
+
704
#endif
705
diff --git a/util/hexdump.c b/util/hexdump.c
706
index XXXXXXX..XXXXXXX 100644
707
--- a/util/hexdump.c
708
+++ b/util/hexdump.c
709
@@ -XXX,XX +XXX,XX @@
710
711
#include "qemu/osdep.h"
712
#include "qemu/cutils.h"
713
+#include "qemu/host-utils.h"
714
715
static inline char hexdump_nibble(unsigned x)
716
{
717
@@ -XXX,XX +XXX,XX @@ void qemu_hexdump(FILE *fp, const char *prefix,
718
}
719
720
}
721
+
722
+void qemu_hexdump_to_buffer(char *restrict buffer, size_t buffer_size,
723
+ const uint8_t *restrict data, size_t data_size)
724
+{
725
+ size_t i;
726
+ uint64_t required_buffer_size;
727
+ bool overflow = umul64_overflow(data_size, 2, &required_buffer_size);
728
+ overflow |= uadd64_overflow(required_buffer_size, 1, &required_buffer_size);
729
+ assert(buffer_size >= required_buffer_size && !overflow);
730
+
731
+ for (i = 0; i < data_size; i++) {
732
+ uint8_t val = data[i];
733
+ *(buffer++) = hexdump_nibble(val >> 4);
734
+ *(buffer++) = hexdump_nibble(val & 0xf);
735
+ }
736
+ *buffer = '\0';
737
+}
738
--
739
2.39.3 (Apple Git-145)
740
741
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
The VMApple machine exposes AUX and ROOT block devices (as well as USB OTG
4
emulation) via virtio-pci as well as a special, simple backdoor platform
5
device.
6
7
This patch implements this backdoor platform device to the best of my
8
understanding. I left out any USB OTG parts; they're only needed for
9
guest recovery and I don't understand the protocol yet.
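
For reference, this is how I understand a single read request is laid out by
the guest; the structure names match the VblkReq/VblkSector definitions below,
while guest_alloc(), pa_of() and mmio_write64() are hypothetical helpers
standing in for the guest's own allocation and MMIO plumbing:

    /* Illustrative only: read one 512-byte sector at `lba` from the root
     * volume. All fields are little-endian in guest memory. */
    VblkSector sector = { .sector = cpu_to_le32(lba) };
    uint8_t status = 0xff;
    uint8_t *buf = guest_alloc(512);

    VblkReq req = {
        .sector = { .addr  = cpu_to_le64(pa_of(&sector)),
                    .len   = cpu_to_le32(sizeof(sector)) },
        .data   = { .addr  = cpu_to_le64(pa_of(buf)),
                    .len   = cpu_to_le32(512),
                    .flags = cpu_to_le32(VBLK_DATA_FLAGS_READ) },
        .retval = { .addr  = cpu_to_le64(pa_of(&status)),
                    .len   = cpu_to_le32(1) },
    };

    /* DEVID_ROOT selects the root volume; writing the request's physical
     * address to its REG_CMD register kicks off the transfer. */
    mmio_write64(base + DEVID_ROOT + REG_CMD, pa_of(&req));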
10
11
Signed-off-by: Alexander Graf <graf@amazon.com>
12
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
13
---
14
15
v4:
16
17
* Moved most header code to .c, rest to vmapple.h
18
* Better compliance with coding, naming, and formatting conventions.
19
20
hw/vmapple/Kconfig | 3 +
21
hw/vmapple/bdif.c | 261 +++++++++++++++++++++++++++++++++++
22
hw/vmapple/meson.build | 1 +
23
hw/vmapple/trace-events | 5 +
24
include/hw/vmapple/vmapple.h | 2 +
25
5 files changed, 272 insertions(+)
26
create mode 100644 hw/vmapple/bdif.c
27
28
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/vmapple/Kconfig
31
+++ b/hw/vmapple/Kconfig
32
@@ -XXX,XX +XXX,XX @@
33
config VMAPPLE_AES
34
bool
35
36
+config VMAPPLE_BDIF
37
+ bool
38
+
39
diff --git a/hw/vmapple/bdif.c b/hw/vmapple/bdif.c
40
new file mode 100644
41
index XXXXXXX..XXXXXXX
42
--- /dev/null
43
+++ b/hw/vmapple/bdif.c
44
@@ -XXX,XX +XXX,XX @@
45
+/*
46
+ * VMApple Backdoor Interface
47
+ *
48
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
49
+ *
50
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
51
+ * See the COPYING file in the top-level directory.
52
+ *
53
+ * SPDX-License-Identifier: GPL-2.0-or-later
54
+ */
55
+
56
+#include "qemu/osdep.h"
57
+#include "qemu/units.h"
58
+#include "qemu/log.h"
59
+#include "qemu/module.h"
60
+#include "trace.h"
61
+#include "hw/vmapple/vmapple.h"
62
+#include "hw/sysbus.h"
63
+#include "hw/block/block.h"
64
+#include "qapi/error.h"
65
+#include "sysemu/block-backend.h"
66
+
67
+OBJECT_DECLARE_SIMPLE_TYPE(VMAppleBdifState, VMAPPLE_BDIF)
68
+
69
+struct VMAppleBdifState {
70
+ SysBusDevice parent_obj;
71
+
72
+ BlockBackend *aux;
73
+ BlockBackend *root;
74
+ MemoryRegion mmio;
75
+};
76
+
77
+#define VMAPPLE_BDIF_SIZE 0x00200000
78
+
79
+#define REG_DEVID_MASK 0xffff0000
80
+#define DEVID_ROOT 0x00000000
81
+#define DEVID_AUX 0x00010000
82
+#define DEVID_USB 0x00100000
83
+
84
+#define REG_STATUS 0x0
85
+#define REG_STATUS_ACTIVE BIT(0)
86
+#define REG_CFG 0x4
87
+#define REG_CFG_ACTIVE BIT(1)
88
+#define REG_UNK1 0x8
89
+#define REG_BUSY 0x10
90
+#define REG_BUSY_READY BIT(0)
91
+#define REG_UNK2 0x400
92
+#define REG_CMD 0x408
93
+#define REG_NEXT_DEVICE 0x420
94
+#define REG_UNK3 0x434
95
+
96
+typedef struct VblkSector {
97
+ uint32_t pad;
98
+ uint32_t pad2;
99
+ uint32_t sector;
100
+ uint32_t pad3;
101
+} VblkSector;
102
+
103
+typedef struct VblkReqCmd {
104
+ uint64_t addr;
105
+ uint32_t len;
106
+ uint32_t flags;
107
+} VblkReqCmd;
108
+
109
+typedef struct VblkReq {
110
+ VblkReqCmd sector;
111
+ VblkReqCmd data;
112
+ VblkReqCmd retval;
113
+} VblkReq;
114
+
115
+#define VBLK_DATA_FLAGS_READ 0x00030001
116
+#define VBLK_DATA_FLAGS_WRITE 0x00010001
117
+
118
+#define VBLK_RET_SUCCESS 0
119
+#define VBLK_RET_FAILED 1
120
+
121
+static uint64_t bdif_read(void *opaque, hwaddr offset, unsigned size)
122
+{
123
+ uint64_t ret = -1;
124
+ uint64_t devid = offset & REG_DEVID_MASK;
125
+
126
+ switch (offset & ~REG_DEVID_MASK) {
127
+ case REG_STATUS:
128
+ ret = REG_STATUS_ACTIVE;
129
+ break;
130
+ case REG_CFG:
131
+ ret = REG_CFG_ACTIVE;
132
+ break;
133
+ case REG_UNK1:
134
+ ret = 0x420;
135
+ break;
136
+ case REG_BUSY:
137
+ ret = REG_BUSY_READY;
138
+ break;
139
+ case REG_UNK2:
140
+ ret = 0x1;
141
+ break;
142
+ case REG_UNK3:
143
+ ret = 0x0;
144
+ break;
145
+ case REG_NEXT_DEVICE:
146
+ switch (devid) {
147
+ case DEVID_ROOT:
148
+ ret = 0x8000000;
149
+ break;
150
+ case DEVID_AUX:
151
+ ret = 0x10000;
152
+ break;
153
+ }
154
+ break;
155
+ }
156
+
157
+ trace_bdif_read(offset, size, ret);
158
+ return ret;
159
+}
160
+
161
+static void le2cpu_sector(VblkSector *sector)
162
+{
163
+ sector->sector = le32_to_cpu(sector->sector);
164
+}
165
+
166
+static void le2cpu_reqcmd(VblkReqCmd *cmd)
167
+{
168
+ cmd->addr = le64_to_cpu(cmd->addr);
169
+ cmd->len = le32_to_cpu(cmd->len);
170
+ cmd->flags = le32_to_cpu(cmd->flags);
171
+}
172
+
173
+static void le2cpu_req(VblkReq *req)
174
+{
175
+ le2cpu_reqcmd(&req->sector);
176
+ le2cpu_reqcmd(&req->data);
177
+ le2cpu_reqcmd(&req->retval);
178
+}
179
+
180
+static void vblk_cmd(uint64_t devid, BlockBackend *blk, uint64_t value,
181
+ uint64_t static_off)
182
+{
183
+ VblkReq req;
184
+ VblkSector sector;
185
+ uint64_t off = 0;
186
+ char *buf = NULL;
187
+ uint8_t ret = VBLK_RET_FAILED;
188
+ int r;
189
+
190
+ cpu_physical_memory_read(value, &req, sizeof(req));
191
+ le2cpu_req(&req);
192
+
193
+ if (req.sector.len != sizeof(sector)) {
194
+ ret = VBLK_RET_FAILED;
195
+ goto out;
196
+ }
197
+
198
+ /* Read the vblk command */
199
+ cpu_physical_memory_read(req.sector.addr, &sector, sizeof(sector));
200
+ le2cpu_sector(&sector);
201
+
202
+ off = sector.sector * 512ULL + static_off;
203
+
204
+ /* Sanity check that we're not allocating bogus sizes */
205
+ if (req.data.len > 128 * MiB) {
206
+ goto out;
207
+ }
208
+
209
+ buf = g_malloc0(req.data.len);
210
+ switch (req.data.flags) {
211
+ case VBLK_DATA_FLAGS_READ:
212
+ r = blk_pread(blk, off, req.data.len, buf, 0);
213
+ trace_bdif_vblk_read(devid == DEVID_AUX ? "aux" : "root",
214
+ req.data.addr, off, req.data.len, r);
215
+ if (r < 0) {
216
+ goto out;
217
+ }
218
+ cpu_physical_memory_write(req.data.addr, buf, req.data.len);
219
+ ret = VBLK_RET_SUCCESS;
220
+ break;
221
+ case VBLK_DATA_FLAGS_WRITE:
222
+ /* Not needed, iBoot only reads */
223
+ break;
224
+ default:
225
+ break;
226
+ }
227
+
228
+out:
229
+ g_free(buf);
230
+ cpu_physical_memory_write(req.retval.addr, &ret, 1);
231
+}
232
+
233
+static void bdif_write(void *opaque, hwaddr offset,
234
+ uint64_t value, unsigned size)
235
+{
236
+ VMAppleBdifState *s = opaque;
237
+ uint64_t devid = (offset & REG_DEVID_MASK);
238
+
239
+ trace_bdif_write(offset, size, value);
240
+
241
+ switch (offset & ~REG_DEVID_MASK) {
242
+ case REG_CMD:
243
+ switch (devid) {
244
+ case DEVID_ROOT:
245
+ vblk_cmd(devid, s->root, value, 0x0);
246
+ break;
247
+ case DEVID_AUX:
248
+ vblk_cmd(devid, s->aux, value, 0x0);
249
+ break;
250
+ }
251
+ break;
252
+ }
253
+}
254
+
255
+static const MemoryRegionOps bdif_ops = {
256
+ .read = bdif_read,
257
+ .write = bdif_write,
258
+ .endianness = DEVICE_NATIVE_ENDIAN,
259
+ .valid = {
260
+ .min_access_size = 1,
261
+ .max_access_size = 8,
262
+ },
263
+ .impl = {
264
+ .min_access_size = 1,
265
+ .max_access_size = 8,
266
+ },
267
+};
268
+
269
+static void bdif_init(Object *obj)
270
+{
271
+ VMAppleBdifState *s = VMAPPLE_BDIF(obj);
272
+
273
+ memory_region_init_io(&s->mmio, obj, &bdif_ops, obj,
274
+ "VMApple Backdoor Interface", VMAPPLE_BDIF_SIZE);
275
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
276
+}
277
+
278
+static Property bdif_properties[] = {
279
+ DEFINE_PROP_DRIVE("aux", VMAppleBdifState, aux),
280
+ DEFINE_PROP_DRIVE("root", VMAppleBdifState, root),
281
+ DEFINE_PROP_END_OF_LIST(),
282
+};
283
+
284
+static void bdif_class_init(ObjectClass *klass, void *data)
285
+{
286
+ DeviceClass *dc = DEVICE_CLASS(klass);
287
+
288
+ dc->desc = "VMApple Backdoor Interface";
289
+ device_class_set_props(dc, bdif_properties);
290
+}
291
+
292
+static const TypeInfo bdif_info = {
293
+ .name = TYPE_VMAPPLE_BDIF,
294
+ .parent = TYPE_SYS_BUS_DEVICE,
295
+ .instance_size = sizeof(VMAppleBdifState),
296
+ .instance_init = bdif_init,
297
+ .class_init = bdif_class_init,
298
+};
299
+
300
+static void bdif_register_types(void)
301
+{
302
+ type_register_static(&bdif_info);
303
+}
304
+
305
+type_init(bdif_register_types)
306
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
307
index XXXXXXX..XXXXXXX 100644
308
--- a/hw/vmapple/meson.build
309
+++ b/hw/vmapple/meson.build
310
@@ -1 +1,2 @@
311
system_ss.add(when: 'CONFIG_VMAPPLE_AES', if_true: files('aes.c'))
312
+system_ss.add(when: 'CONFIG_VMAPPLE_BDIF', if_true: files('bdif.c'))
313
diff --git a/hw/vmapple/trace-events b/hw/vmapple/trace-events
314
index XXXXXXX..XXXXXXX 100644
315
--- a/hw/vmapple/trace-events
316
+++ b/hw/vmapple/trace-events
317
@@ -XXX,XX +XXX,XX @@ aes_2_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
318
aes_2_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
319
aes_dump_data(const char *desc, const char *hex) "%s%s"
320
321
+# bdif.c
322
+bdif_read(uint64_t offset, uint32_t size, uint64_t value) "offset=0x%"PRIx64" size=0x%x value=0x%"PRIx64
323
+bdif_write(uint64_t offset, uint32_t size, uint64_t value) "offset=0x%"PRIx64" size=0x%x value=0x%"PRIx64
324
+bdif_vblk_read(const char *dev, uint64_t addr, uint64_t offset, uint32_t len, int r) "dev=%s addr=0x%"PRIx64" off=0x%"PRIx64" size=0x%x r=%d"
325
+
326
diff --git a/include/hw/vmapple/vmapple.h b/include/hw/vmapple/vmapple.h
327
index XXXXXXX..XXXXXXX 100644
328
--- a/include/hw/vmapple/vmapple.h
329
+++ b/include/hw/vmapple/vmapple.h
330
@@ -XXX,XX +XXX,XX @@
331
332
#define TYPE_APPLE_AES "apple-aes"
333
334
+#define TYPE_VMAPPLE_BDIF "vmapple-bdif"
335
+
336
#endif /* HW_VMAPPLE_VMAPPLE_H */
337
--
338
2.39.3 (Apple Git-145)
339
340
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
Instead of device tree or other more standardized means, VMApple passes
4
platform configuration to the first stage boot loader in a binary encoded
5
format that resides at a dedicated RAM region in physical address space.
6
7
This patch models this configuration space as a qdev device which we can
8
then map at the fixed location in the address space. That way, we can
9
influence and annotate all configuration fields easily.
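
For illustration, wiring this up from machine code would look roughly like
the sketch below; the 0x40000000 base address is a placeholder rather than
the address the real machine uses, and the property values are examples only:

    DeviceState *cfg = qdev_new(TYPE_VMAPPLE_CFG);

    qdev_prop_set_uint32(cfg, "nr-cpus", machine->smp.cpus);
    qdev_prop_set_uint64(cfg, "ram-size", machine->ram_size);
    qdev_prop_set_string(cfg, "serial", "QEMUVM0001");

    sysbus_realize_and_unref(SYS_BUS_DEVICE(cfg), &error_fatal);
    /* Map the backing RAM region at the address the boot loader expects */
    sysbus_mmio_map(SYS_BUS_DEVICE(cfg), 0, 0x40000000ULL /* placeholder */);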
10
11
Signed-off-by: Alexander Graf <graf@amazon.com>
12
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
13
---
14
v3:
15
16
* Replaced legacy device reset method with Resettable method
17
18
v4:
19
20
* Fixed initialisation of default values for properties
21
* Dropped superfluous endianness conversions
22
* Moved most header code to .c, device name #define goes in vmapple.h
23
24
v5:
25
26
* Improved error reporting in case of string property buffer overflow.
27
28
hw/vmapple/Kconfig | 3 +
29
hw/vmapple/cfg.c | 203 +++++++++++++++++++++++++++++++++++
30
hw/vmapple/meson.build | 1 +
31
include/hw/vmapple/vmapple.h | 2 +
32
4 files changed, 209 insertions(+)
33
create mode 100644 hw/vmapple/cfg.c
34
35
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/vmapple/Kconfig
38
+++ b/hw/vmapple/Kconfig
39
@@ -XXX,XX +XXX,XX @@ config VMAPPLE_AES
40
config VMAPPLE_BDIF
41
bool
42
43
+config VMAPPLE_CFG
44
+ bool
45
+
46
diff --git a/hw/vmapple/cfg.c b/hw/vmapple/cfg.c
47
new file mode 100644
48
index XXXXXXX..XXXXXXX
49
--- /dev/null
50
+++ b/hw/vmapple/cfg.c
51
@@ -XXX,XX +XXX,XX @@
52
+/*
53
+ * VMApple Configuration Region
54
+ *
55
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
56
+ *
57
+ * SPDX-License-Identifier: GPL-2.0-or-later
58
+ *
59
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
60
+ * See the COPYING file in the top-level directory.
61
+ */
62
+
63
+#include "qemu/osdep.h"
64
+#include "hw/vmapple/vmapple.h"
65
+#include "hw/sysbus.h"
66
+#include "qemu/log.h"
67
+#include "qemu/module.h"
68
+#include "qapi/error.h"
69
+#include "net/net.h"
70
+
71
+OBJECT_DECLARE_SIMPLE_TYPE(VMAppleCfgState, VMAPPLE_CFG)
72
+
73
+#define VMAPPLE_CFG_SIZE 0x00010000
74
+
75
+typedef struct VMAppleCfg {
76
+ uint32_t version; /* 0x000 */
77
+ uint32_t nr_cpus; /* 0x004 */
78
+ uint32_t unk1; /* 0x008 */
79
+ uint32_t unk2; /* 0x00c */
80
+ uint32_t unk3; /* 0x010 */
81
+ uint32_t unk4; /* 0x014 */
82
+ uint64_t ecid; /* 0x018 */
83
+ uint64_t ram_size; /* 0x020 */
84
+ uint32_t run_installer1; /* 0x028 */
85
+ uint32_t unk5; /* 0x02c */
86
+ uint32_t unk6; /* 0x030 */
87
+ uint32_t run_installer2; /* 0x034 */
88
+ uint32_t rnd; /* 0x038 */
89
+ uint32_t unk7; /* 0x03c */
90
+ MACAddr mac_en0; /* 0x040 */
91
+ uint8_t pad1[2];
92
+ MACAddr mac_en1; /* 0x048 */
93
+ uint8_t pad2[2];
94
+ MACAddr mac_wifi0; /* 0x050 */
95
+ uint8_t pad3[2];
96
+ MACAddr mac_bt0; /* 0x058 */
97
+ uint8_t pad4[2];
98
+ uint8_t reserved[0xa0]; /* 0x060 */
99
+ uint32_t cpu_ids[0x80]; /* 0x100 */
100
+ uint8_t scratch[0x200]; /* 0x180 */
101
+ char serial[32]; /* 0x380 */
102
+ char unk8[32]; /* 0x3a0 */
103
+ char model[32]; /* 0x3c0 */
104
+ uint8_t unk9[32]; /* 0x3e0 */
105
+ uint32_t unk10; /* 0x400 */
106
+ char soc_name[32]; /* 0x404 */
107
+} VMAppleCfg;
108
+
109
+struct VMAppleCfgState {
110
+ SysBusDevice parent_obj;
111
+ VMAppleCfg cfg;
112
+
113
+ MemoryRegion mem;
114
+ char *serial;
115
+ char *model;
116
+ char *soc_name;
117
+};
118
+
119
+static void vmapple_cfg_reset(Object *obj, ResetType type)
120
+{
121
+ VMAppleCfgState *s = VMAPPLE_CFG(obj);
122
+ VMAppleCfg *cfg;
123
+
124
+ cfg = memory_region_get_ram_ptr(&s->mem);
125
+ memset(cfg, 0, VMAPPLE_CFG_SIZE);
126
+ *cfg = s->cfg;
127
+}
128
+
129
+static bool strlcpy_set_error(char *restrict dst, const char *restrict src,
130
+ size_t dst_size, Error **errp,
131
+ const char *parent_func, const char *location,
132
+ const char *buffer_name, const char *extra_info)
133
+{
134
+ size_t len;
135
+
136
+ len = g_strlcpy(dst, src, dst_size);
137
+ if (len < dst_size) { /* len does not count nul terminator */
138
+ return true;
139
+ }
140
+
141
+ error_setg(errp,
142
+ "%s (%s) strlcpy error: Destination buffer %s too small "
143
+ "(need %zu, have %zu) %s",
144
+ parent_func, location, buffer_name, len + 1, dst_size, extra_info);
145
+ return false;
146
+}
147
+
148
+/*
149
+ * String copying wrapper that returns and reports a runtime error in
150
+ * case of truncation due to insufficient destination buffer space.
151
+ */
152
+#define strlcpy_array_return_error(dst_array, src, errp, extra_info) \
153
+ do { \
154
+ if (!strlcpy_set_error((dst_array), (src), ARRAY_SIZE(dst_array), \
155
+ (errp), __func__, stringify(__LINE__), \
156
+ # dst_array, extra_info)) { \
157
+ return; \
158
+ } \
159
+ } while (0)
160
+
161
+static void vmapple_cfg_realize(DeviceState *dev, Error **errp)
162
+{
163
+ VMAppleCfgState *s = VMAPPLE_CFG(dev);
164
+ uint32_t i;
165
+
166
+ if (!s->serial) {
167
+ s->serial = g_strdup("1234");
168
+ }
169
+ if (!s->model) {
170
+ s->model = g_strdup("VM0001");
171
+ }
172
+ if (!s->soc_name) {
173
+ s->soc_name = g_strdup("Apple M1 (Virtual)");
174
+ }
175
+
176
+ strlcpy_array_return_error(s->cfg.serial, s->serial, errp,
177
+ "setting 'serial' property on VMApple cfg device");
178
+ strlcpy_array_return_error(s->cfg.model, s->model, errp,
179
+ "setting 'model' property on VMApple cfg device");
180
+ strlcpy_array_return_error(s->cfg.soc_name, s->soc_name, errp,
181
+ "setting 'soc_name' property on VMApple cfg device");
182
+ strlcpy_array_return_error(s->cfg.unk8, "D/A", errp, "");
183
+ s->cfg.version = 2;
184
+ s->cfg.unk1 = 1;
185
+ s->cfg.unk2 = 1;
186
+ s->cfg.unk3 = 0x20;
187
+ s->cfg.unk4 = 0;
188
+ s->cfg.unk5 = 1;
189
+ s->cfg.unk6 = 1;
190
+ s->cfg.unk7 = 0;
191
+ s->cfg.unk10 = 1;
192
+
193
+ if (s->cfg.nr_cpus > ARRAY_SIZE(s->cfg.cpu_ids)) {
194
+ error_setg(errp,
195
+ "Failed to create %u CPUs, vmapple machine supports %zu max",
196
+ s->cfg.nr_cpus, ARRAY_SIZE(s->cfg.cpu_ids));
197
+ return;
198
+ }
199
+ for (i = 0; i < s->cfg.nr_cpus; i++) {
200
+ s->cfg.cpu_ids[i] = i;
201
+ }
202
+}
203
+
204
+static void vmapple_cfg_init(Object *obj)
205
+{
206
+ VMAppleCfgState *s = VMAPPLE_CFG(obj);
207
+
208
+ memory_region_init_ram(&s->mem, obj, "VMApple Config", VMAPPLE_CFG_SIZE,
209
+ &error_fatal);
210
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mem);
211
+}
212
+
213
+static Property vmapple_cfg_properties[] = {
214
+ DEFINE_PROP_UINT32("nr-cpus", VMAppleCfgState, cfg.nr_cpus, 1),
215
+ DEFINE_PROP_UINT64("ecid", VMAppleCfgState, cfg.ecid, 0),
216
+ DEFINE_PROP_UINT64("ram-size", VMAppleCfgState, cfg.ram_size, 0),
217
+ DEFINE_PROP_UINT32("run_installer1", VMAppleCfgState, cfg.run_installer1, 0),
218
+ DEFINE_PROP_UINT32("run_installer2", VMAppleCfgState, cfg.run_installer2, 0),
219
+ DEFINE_PROP_UINT32("rnd", VMAppleCfgState, cfg.rnd, 0),
220
+ DEFINE_PROP_MACADDR("mac-en0", VMAppleCfgState, cfg.mac_en0),
221
+ DEFINE_PROP_MACADDR("mac-en1", VMAppleCfgState, cfg.mac_en1),
222
+ DEFINE_PROP_MACADDR("mac-wifi0", VMAppleCfgState, cfg.mac_wifi0),
223
+ DEFINE_PROP_MACADDR("mac-bt0", VMAppleCfgState, cfg.mac_bt0),
224
+ DEFINE_PROP_STRING("serial", VMAppleCfgState, serial),
225
+ DEFINE_PROP_STRING("model", VMAppleCfgState, model),
226
+ DEFINE_PROP_STRING("soc_name", VMAppleCfgState, soc_name),
227
+ DEFINE_PROP_END_OF_LIST(),
228
+};
229
+
230
+static void vmapple_cfg_class_init(ObjectClass *klass, void *data)
231
+{
232
+ DeviceClass *dc = DEVICE_CLASS(klass);
233
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
234
+
235
+ dc->realize = vmapple_cfg_realize;
236
+ dc->desc = "VMApple Configuration Region";
237
+ device_class_set_props(dc, vmapple_cfg_properties);
238
+ rc->phases.hold = vmapple_cfg_reset;
239
+}
240
+
241
+static const TypeInfo vmapple_cfg_info = {
242
+ .name = TYPE_VMAPPLE_CFG,
243
+ .parent = TYPE_SYS_BUS_DEVICE,
244
+ .instance_size = sizeof(VMAppleCfgState),
245
+ .instance_init = vmapple_cfg_init,
246
+ .class_init = vmapple_cfg_class_init,
247
+};
248
+
249
+static void vmapple_cfg_register_types(void)
250
+{
251
+ type_register_static(&vmapple_cfg_info);
252
+}
253
+
254
+type_init(vmapple_cfg_register_types)
255
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
256
index XXXXXXX..XXXXXXX 100644
257
--- a/hw/vmapple/meson.build
258
+++ b/hw/vmapple/meson.build
259
@@ -XXX,XX +XXX,XX @@
260
system_ss.add(when: 'CONFIG_VMAPPLE_AES', if_true: files('aes.c'))
261
system_ss.add(when: 'CONFIG_VMAPPLE_BDIF', if_true: files('bdif.c'))
262
+system_ss.add(when: 'CONFIG_VMAPPLE_CFG', if_true: files('cfg.c'))
263
diff --git a/include/hw/vmapple/vmapple.h b/include/hw/vmapple/vmapple.h
264
index XXXXXXX..XXXXXXX 100644
265
--- a/include/hw/vmapple/vmapple.h
266
+++ b/include/hw/vmapple/vmapple.h
267
@@ -XXX,XX +XXX,XX @@
268
269
#define TYPE_VMAPPLE_BDIF "vmapple-bdif"
270
271
+#define TYPE_VMAPPLE_CFG "vmapple-cfg"
272
+
273
#endif /* HW_VMAPPLE_VMAPPLE_H */
274
--
275
2.39.3 (Apple Git-145)
276
277
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
Apple has its own virtio-blk PCI device ID where it deviates from the
4
official virtio-pci spec slightly: It puts a new "apple type"
5
field at a static offset in config space and introduces a new barrier
6
command.
7
8
This patch first creates a mechanism for virtio-blk downstream classes to
9
handle unknown commands. It then creates such a downstream class and a new
10
vmapple-virtio-blk-pci class which support the additional apple type config
11
identifier as well as the barrier command.
12
13
It then exposes two subclasses of that device which we can use to expose root and
14
aux virtio-blk devices: "vmapple-virtio-root" and "vmapple-virtio-aux".
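
For clarity, the new class hook is all another downstream device would need
to implement; a minimal hypothetical user of the interface (not part of this
series, and MYDEV_BLK_T_CUSTOM is a made-up request type) looks like this:

    static bool mydev_handle_unknown_request(VirtIOBlockReq *req,
                                             MultiReqBuffer *mrb,
                                             uint32_t type)
    {
        if (type != MYDEV_BLK_T_CUSTOM) {
            /* Unhandled: generic code responds with VIRTIO_BLK_S_UNSUPP */
            return false;
        }
        virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
        virtio_blk_free_request(req);
        return true;
    }

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        VIRTIO_BLK_CLASS(klass)->handle_unknown_request =
            mydev_handle_unknown_request;
    }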
15
16
Signed-off-by: Alexander Graf <graf@amazon.com>
17
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
18
---
19
20
v4:
21
22
* Use recommended object type declaration pattern.
23
* Correctly log unimplemented code paths.
24
* Most header code moved to .c, type name #defines moved to vmapple.h
25
26
v5:
27
28
* Corrected handling of potentially unaligned writes to virtio config area.
29
* Simplified passing through device variant type to subobject.
30
31
hw/block/virtio-blk.c | 19 ++-
32
hw/vmapple/Kconfig | 3 +
33
hw/vmapple/meson.build | 1 +
34
hw/vmapple/virtio-blk.c | 226 +++++++++++++++++++++++++++++++++
35
include/hw/pci/pci_ids.h | 1 +
36
include/hw/virtio/virtio-blk.h | 12 +-
37
include/hw/vmapple/vmapple.h | 4 +
38
7 files changed, 261 insertions(+), 5 deletions(-)
39
create mode 100644 hw/vmapple/virtio-blk.c
40
41
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/block/virtio-blk.c
44
+++ b/hw/block/virtio-blk.c
45
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
46
req->mr_next = NULL;
47
}
48
49
-static void virtio_blk_free_request(VirtIOBlockReq *req)
50
+void virtio_blk_free_request(VirtIOBlockReq *req)
51
{
52
g_free(req);
53
}
54
55
-static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
56
+void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
57
{
58
VirtIOBlock *s = req->dev;
59
VirtIODevice *vdev = VIRTIO_DEVICE(s);
60
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
61
break;
62
}
63
default:
64
- virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
65
- virtio_blk_free_request(req);
66
+ {
67
+ /*
68
+ * Give subclasses a chance to handle unknown requests. This way the
69
+ * class lookup is not in the hot path.
70
+ */
71
+ VirtIOBlkClass *vbk = VIRTIO_BLK_GET_CLASS(s);
72
+ if (!vbk->handle_unknown_request ||
73
+ !vbk->handle_unknown_request(req, mrb, type)) {
74
+ virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
75
+ virtio_blk_free_request(req);
76
+ }
77
+ }
78
}
79
return 0;
80
}
81
@@ -XXX,XX +XXX,XX @@ static const TypeInfo virtio_blk_info = {
82
.instance_size = sizeof(VirtIOBlock),
83
.instance_init = virtio_blk_instance_init,
84
.class_init = virtio_blk_class_init,
85
+ .class_size = sizeof(VirtIOBlkClass),
86
};
87
88
static void virtio_register_types(void)
89
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
90
index XXXXXXX..XXXXXXX 100644
91
--- a/hw/vmapple/Kconfig
92
+++ b/hw/vmapple/Kconfig
93
@@ -XXX,XX +XXX,XX @@ config VMAPPLE_BDIF
94
config VMAPPLE_CFG
95
bool
96
97
+config VMAPPLE_VIRTIO_BLK
98
+ bool
99
+
100
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
101
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/vmapple/meson.build
103
+++ b/hw/vmapple/meson.build
104
@@ -XXX,XX +XXX,XX @@
105
system_ss.add(when: 'CONFIG_VMAPPLE_AES', if_true: files('aes.c'))
106
system_ss.add(when: 'CONFIG_VMAPPLE_BDIF', if_true: files('bdif.c'))
107
system_ss.add(when: 'CONFIG_VMAPPLE_CFG', if_true: files('cfg.c'))
108
+system_ss.add(when: 'CONFIG_VMAPPLE_VIRTIO_BLK', if_true: files('virtio-blk.c'))
109
diff --git a/hw/vmapple/virtio-blk.c b/hw/vmapple/virtio-blk.c
110
new file mode 100644
111
index XXXXXXX..XXXXXXX
112
--- /dev/null
113
+++ b/hw/vmapple/virtio-blk.c
114
@@ -XXX,XX +XXX,XX @@
115
+/*
116
+ * VMApple specific VirtIO Block implementation
117
+ *
118
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
119
+ *
120
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
121
+ * See the COPYING file in the top-level directory.
122
+ *
123
+ * SPDX-License-Identifier: GPL-2.0-or-later
124
+ *
125
+ * VMApple uses almost standard VirtIO Block, but with a few key differences:
126
+ *
127
+ * - Different PCI device/vendor ID
128
+ * - An additional "type" identifier to differentiate AUX and Root volumes
129
+ * - An additional BARRIER command
130
+ */
131
+
132
+#include "qemu/osdep.h"
133
+#include "hw/vmapple/vmapple.h"
134
+#include "hw/virtio/virtio-blk.h"
135
+#include "hw/virtio/virtio-pci.h"
136
+#include "qemu/bswap.h"
137
+#include "qemu/log.h"
138
+#include "qemu/module.h"
139
+#include "qapi/error.h"
140
+
141
+OBJECT_DECLARE_TYPE(VMAppleVirtIOBlk, VMAppleVirtIOBlkClass, VMAPPLE_VIRTIO_BLK)
142
+
143
+typedef struct VMAppleVirtIOBlkClass {
144
+ VirtIOBlkClass parent;
145
+
146
+ void (*get_config)(VirtIODevice *vdev, uint8_t *config);
147
+} VMAppleVirtIOBlkClass;
148
+
149
+typedef struct VMAppleVirtIOBlk {
150
+ VirtIOBlock parent_obj;
151
+
152
+ uint32_t apple_type;
153
+} VMAppleVirtIOBlk;
154
+
155
+/*
156
+ * vmapple-virtio-blk-pci: This extends VirtioPCIProxy.
157
+ */
158
+#define TYPE_VMAPPLE_VIRTIO_BLK_PCI "vmapple-virtio-blk-pci-base"
159
+OBJECT_DECLARE_SIMPLE_TYPE(VMAppleVirtIOBlkPCI, VMAPPLE_VIRTIO_BLK_PCI)
160
+
161
+#define VIRTIO_BLK_T_APPLE_BARRIER 0x10000
162
+
163
+#define VIRTIO_APPLE_TYPE_ROOT 1
164
+#define VIRTIO_APPLE_TYPE_AUX 2
165
+
166
+static bool vmapple_virtio_blk_handle_unknown_request(VirtIOBlockReq *req,
167
+ MultiReqBuffer *mrb,
168
+ uint32_t type)
169
+{
170
+ switch (type) {
171
+ case VIRTIO_BLK_T_APPLE_BARRIER:
172
+ qemu_log_mask(LOG_UNIMP, "%s: Barrier requests are currently no-ops\n",
173
+ __func__);
174
+ virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
175
+ virtio_blk_free_request(req);
176
+ return true;
177
+ default:
178
+ return false;
179
+ }
180
+}
181
+
182
+/*
183
+ * VMApple virtio-blk uses the same config format as normal virtio, with one
184
+ * exception: It adds an "apple type" specifier at the same location that
185
+ * the spec reserves for max_secure_erase_sectors. Let's hook into the
186
+ * get_config code path here, run it as usual and then patch in the apple type.
187
+ */
188
+static void vmapple_virtio_blk_get_config(VirtIODevice *vdev, uint8_t *config)
189
+{
190
+ VMAppleVirtIOBlk *dev = VMAPPLE_VIRTIO_BLK(vdev);
191
+ VMAppleVirtIOBlkClass *vvbk = VMAPPLE_VIRTIO_BLK_GET_CLASS(dev);
192
+ struct virtio_blk_config *blkcfg = (struct virtio_blk_config *)config;
193
+
194
+ vvbk->get_config(vdev, config);
195
+
196
+ g_assert(dev->parent_obj.config_size >= endof(struct virtio_blk_config, zoned));
197
+
198
+ /* Apple abuses the field for max_secure_erase_sectors as type id */
199
+ stl_he_p(&blkcfg->max_secure_erase_sectors, dev->apple_type);
200
+}
201
+
202
+static void vmapple_virtio_blk_class_init(ObjectClass *klass, void *data)
203
+{
204
+ VirtIOBlkClass *vbk = VIRTIO_BLK_CLASS(klass);
205
+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
206
+ VMAppleVirtIOBlkClass *vvbk = VMAPPLE_VIRTIO_BLK_CLASS(klass);
207
+
208
+ vbk->handle_unknown_request = vmapple_virtio_blk_handle_unknown_request;
209
+ vvbk->get_config = vdc->get_config;
210
+ vdc->get_config = vmapple_virtio_blk_get_config;
211
+}
212
+
213
+static const TypeInfo vmapple_virtio_blk_info = {
214
+ .name = TYPE_VMAPPLE_VIRTIO_BLK,
215
+ .parent = TYPE_VIRTIO_BLK,
216
+ .instance_size = sizeof(VMAppleVirtIOBlk),
217
+ .class_init = vmapple_virtio_blk_class_init,
218
+};
219
+
220
+/* PCI Devices */
221
+
222
+struct VMAppleVirtIOBlkPCI {
223
+ VirtIOPCIProxy parent_obj;
224
+ VMAppleVirtIOBlk vdev;
225
+ uint32_t apple_type;
226
+};
227
+
228
229
+static Property vmapple_virtio_blk_pci_properties[] = {
230
+ DEFINE_PROP_UINT32("class", VirtIOPCIProxy, class_code, 0),
231
+ DEFINE_PROP_BIT("ioeventfd", VirtIOPCIProxy, flags,
232
+ VIRTIO_PCI_FLAG_USE_IOEVENTFD_BIT, true),
233
+ DEFINE_PROP_UINT32("vectors", VirtIOPCIProxy, nvectors,
234
+ DEV_NVECTORS_UNSPECIFIED),
235
+ DEFINE_PROP_END_OF_LIST(),
236
+};
237
+
238
+static void vmapple_virtio_blk_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
239
+{
240
+ VMAppleVirtIOBlkPCI *dev = VMAPPLE_VIRTIO_BLK_PCI(vpci_dev);
241
+ DeviceState *vdev = DEVICE(&dev->vdev);
242
+ VirtIOBlkConf *conf = &dev->vdev.parent_obj.conf;
243
+
244
+ if (conf->num_queues == VIRTIO_BLK_AUTO_NUM_QUEUES) {
245
+ conf->num_queues = virtio_pci_optimal_num_queues(0);
246
+ }
247
+
248
+ if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
249
+ vpci_dev->nvectors = conf->num_queues + 1;
250
+ }
251
+
252
+ /*
253
+ * We don't support zones, but we need the additional config space size.
254
+ * Let's just expose the feature so the rest of the virtio-blk logic
255
+ * allocates enough space for us. The guest will ignore zones anyway.
256
+ */
257
+ virtio_add_feature(&dev->vdev.parent_obj.host_features, VIRTIO_BLK_F_ZONED);
258
+ /* Propagate the apple type down to the virtio-blk device */
259
+ dev->vdev.apple_type = dev->apple_type;
260
+ /* and spawn the virtio-blk device */
261
+ qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
262
+
263
+ /*
264
+ * The virtio-pci machinery adjusts its vendor/device ID based on whether
265
+ * we support modern or legacy virtio. Let's patch it back to the Apple
266
+ * identifiers here.
267
+ */
268
+ pci_config_set_vendor_id(vpci_dev->pci_dev.config, PCI_VENDOR_ID_APPLE);
269
+ pci_config_set_device_id(vpci_dev->pci_dev.config,
270
+ PCI_DEVICE_ID_APPLE_VIRTIO_BLK);
271
+}
272
+
273
+static void vmapple_virtio_blk_pci_class_init(ObjectClass *klass, void *data)
274
+{
275
+ DeviceClass *dc = DEVICE_CLASS(klass);
276
+ VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
277
+ PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
278
+
279
+ set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
280
+ device_class_set_props(dc, vmapple_virtio_blk_pci_properties);
281
+ k->realize = vmapple_virtio_blk_pci_realize;
282
+ pcidev_k->vendor_id = PCI_VENDOR_ID_APPLE;
283
+ pcidev_k->device_id = PCI_DEVICE_ID_APPLE_VIRTIO_BLK;
284
+ pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
285
+ pcidev_k->class_id = PCI_CLASS_STORAGE_SCSI;
286
+}
287
+
288
+static void vmapple_virtio_blk_pci_instance_init(Object *obj)
289
+{
290
+ VMAppleVirtIOBlkPCI *dev = VMAPPLE_VIRTIO_BLK_PCI(obj);
291
+
292
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
293
+ TYPE_VMAPPLE_VIRTIO_BLK);
294
+}
295
+
296
+static const VirtioPCIDeviceTypeInfo vmapple_virtio_blk_pci_info = {
297
+ .base_name = TYPE_VMAPPLE_VIRTIO_BLK_PCI,
298
+ .generic_name = "vmapple-virtio-blk-pci",
299
+ .instance_size = sizeof(VMAppleVirtIOBlkPCI),
300
+ .instance_init = vmapple_virtio_blk_pci_instance_init,
301
+ .class_init = vmapple_virtio_blk_pci_class_init,
302
+};
303
+
304
+static void vmapple_virtio_root_instance_init(Object *obj)
305
+{
306
+ VMAppleVirtIOBlkPCI *dev = VMAPPLE_VIRTIO_BLK_PCI(obj);
307
+
308
+ dev->apple_type = VIRTIO_APPLE_TYPE_ROOT;
309
+}
310
+
311
+static const TypeInfo vmapple_virtio_root_info = {
312
+ .name = TYPE_VMAPPLE_VIRTIO_ROOT,
313
+ .parent = "vmapple-virtio-blk-pci",
314
+ .instance_size = sizeof(VMAppleVirtIOBlkPCI),
315
+ .instance_init = vmapple_virtio_root_instance_init,
316
+};
317
+
318
+static void vmapple_virtio_aux_instance_init(Object *obj)
319
+{
320
+ VMAppleVirtIOBlkPCI *dev = VMAPPLE_VIRTIO_BLK_PCI(obj);
321
+
322
+ dev->apple_type = VIRTIO_APPLE_TYPE_AUX;
323
+}
324
+
325
+static const TypeInfo vmapple_virtio_aux_info = {
326
+ .name = TYPE_VMAPPLE_VIRTIO_AUX,
327
+ .parent = "vmapple-virtio-blk-pci",
328
+ .instance_size = sizeof(VMAppleVirtIOBlkPCI),
329
+ .instance_init = vmapple_virtio_aux_instance_init,
330
+};
331
+
332
+static void vmapple_virtio_blk_register_types(void)
333
+{
334
+ type_register_static(&vmapple_virtio_blk_info);
335
+ virtio_pci_types_register(&vmapple_virtio_blk_pci_info);
336
+ type_register_static(&vmapple_virtio_root_info);
337
+ type_register_static(&vmapple_virtio_aux_info);
338
+}
339
+
340
+type_init(vmapple_virtio_blk_register_types)
341
diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h
342
index XXXXXXX..XXXXXXX 100644
343
--- a/include/hw/pci/pci_ids.h
344
+++ b/include/hw/pci/pci_ids.h
345
@@ -XXX,XX +XXX,XX @@
346
#define PCI_DEVICE_ID_APPLE_UNI_N_AGP 0x0020
347
#define PCI_DEVICE_ID_APPLE_U3_AGP 0x004b
348
#define PCI_DEVICE_ID_APPLE_UNI_N_GMAC 0x0021
349
+#define PCI_DEVICE_ID_APPLE_VIRTIO_BLK 0x1a00
350
351
#define PCI_VENDOR_ID_SUN 0x108e
352
#define PCI_DEVICE_ID_SUN_EBUS 0x1000
353
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
354
index XXXXXXX..XXXXXXX 100644
355
--- a/include/hw/virtio/virtio-blk.h
356
+++ b/include/hw/virtio/virtio-blk.h
357
@@ -XXX,XX +XXX,XX @@
358
#include "qapi/qapi-types-virtio.h"
359
360
#define TYPE_VIRTIO_BLK "virtio-blk-device"
361
-OBJECT_DECLARE_SIMPLE_TYPE(VirtIOBlock, VIRTIO_BLK)
362
+OBJECT_DECLARE_TYPE(VirtIOBlock, VirtIOBlkClass, VIRTIO_BLK)
363
364
/* This is the last element of the write scatter-gather list */
365
struct virtio_blk_inhdr
366
@@ -XXX,XX +XXX,XX @@ typedef struct MultiReqBuffer {
367
bool is_write;
368
} MultiReqBuffer;
369
370
+typedef struct VirtIOBlkClass {
371
+ /*< private >*/
372
+ VirtioDeviceClass parent;
373
+ /*< public >*/
374
+ bool (*handle_unknown_request)(VirtIOBlockReq *req, MultiReqBuffer *mrb,
375
+ uint32_t type);
376
+} VirtIOBlkClass;
377
+
378
void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
379
+void virtio_blk_free_request(VirtIOBlockReq *req);
380
+void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status);
381
382
#endif
383
diff --git a/include/hw/vmapple/vmapple.h b/include/hw/vmapple/vmapple.h
384
index XXXXXXX..XXXXXXX 100644
385
--- a/include/hw/vmapple/vmapple.h
386
+++ b/include/hw/vmapple/vmapple.h
387
@@ -XXX,XX +XXX,XX @@
388
389
#define TYPE_VMAPPLE_CFG "vmapple-cfg"
390
391
+#define TYPE_VMAPPLE_VIRTIO_BLK "vmapple-virtio-blk"
392
+#define TYPE_VMAPPLE_VIRTIO_ROOT "vmapple-virtio-root"
393
+#define TYPE_VMAPPLE_VIRTIO_AUX "vmapple-virtio-aux"
394
+
395
#endif /* HW_VMAPPLE_VMAPPLE_H */
396
--
397
2.39.3 (Apple Git-145)
398
399
Deleted patch
1
The virtio_blk_free_request() function has been a one-line wrapper around
2
g_free() for a while now. We may as well call g_free() on the request
3
pointer directly.
4
1
5
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
6
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
7
---
8
hw/block/virtio-blk.c | 43 +++++++++++++++-------------------
9
hw/vmapple/virtio-blk.c | 2 +-
10
include/hw/virtio/virtio-blk.h | 1 -
11
3 files changed, 20 insertions(+), 26 deletions(-)
12
13
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/block/virtio-blk.c
16
+++ b/hw/block/virtio-blk.c
17
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
18
req->mr_next = NULL;
19
}
20
21
-void virtio_blk_free_request(VirtIOBlockReq *req)
22
-{
23
- g_free(req);
24
-}
25
-
26
void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
27
{
28
VirtIOBlock *s = req->dev;
29
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error,
30
if (acct_failed) {
31
block_acct_failed(blk_get_stats(s->blk), &req->acct);
32
}
33
- virtio_blk_free_request(req);
34
+ g_free(req);
35
}
36
37
blk_error_action(s->blk, action, is_read, error);
38
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_rw_complete(void *opaque, int ret)
39
40
virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
41
block_acct_done(blk_get_stats(s->blk), &req->acct);
42
- virtio_blk_free_request(req);
43
+ g_free(req);
44
}
45
}
46
47
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_flush_complete(void *opaque, int ret)
48
49
virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
50
block_acct_done(blk_get_stats(s->blk), &req->acct);
51
- virtio_blk_free_request(req);
52
+ g_free(req);
53
}
54
55
static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
56
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
57
if (is_write_zeroes) {
58
block_acct_done(blk_get_stats(s->blk), &req->acct);
59
}
60
- virtio_blk_free_request(req);
61
+ g_free(req);
62
}
63
64
static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq)
65
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
66
67
fail:
68
virtio_blk_req_complete(req, status);
69
- virtio_blk_free_request(req);
70
+ g_free(req);
71
}
72
73
static inline void submit_requests(VirtIOBlock *s, MultiReqBuffer *mrb,
74
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_zone_report_complete(void *opaque, int ret)
75
76
out:
77
virtio_blk_req_complete(req, err_status);
78
- virtio_blk_free_request(req);
79
+ g_free(req);
80
g_free(data->zone_report_data.zones);
81
g_free(data);
82
}
83
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_handle_zone_report(VirtIOBlockReq *req,
84
return;
85
out:
86
virtio_blk_req_complete(req, err_status);
87
- virtio_blk_free_request(req);
88
+ g_free(req);
89
}
90
91
static void virtio_blk_zone_mgmt_complete(void *opaque, int ret)
92
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_zone_mgmt_complete(void *opaque, int ret)
93
}
94
95
virtio_blk_req_complete(req, err_status);
96
- virtio_blk_free_request(req);
97
+ g_free(req);
98
}
99
100
static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op)
101
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op)
102
return 0;
103
out:
104
virtio_blk_req_complete(req, err_status);
105
- virtio_blk_free_request(req);
106
+ g_free(req);
107
return err_status;
108
}
109
110
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_zone_append_complete(void *opaque, int ret)
111
112
out:
113
virtio_blk_req_complete(req, err_status);
114
- virtio_blk_free_request(req);
115
+ g_free(req);
116
g_free(data);
117
}
118
119
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_zone_append(VirtIOBlockReq *req,
120
121
out:
122
virtio_blk_req_complete(req, err_status);
123
- virtio_blk_free_request(req);
124
+ g_free(req);
125
return err_status;
126
}
127
128
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
129
virtio_blk_req_complete(req, VIRTIO_BLK_S_IOERR);
130
block_acct_invalid(blk_get_stats(s->blk),
131
is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ);
132
- virtio_blk_free_request(req);
133
+ g_free(req);
134
return 0;
135
}
136
137
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
138
VIRTIO_BLK_ID_BYTES));
139
iov_from_buf(in_iov, in_num, 0, serial, size);
140
virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
141
- virtio_blk_free_request(req);
142
+ g_free(req);
143
break;
144
}
145
case VIRTIO_BLK_T_ZONE_APPEND & ~VIRTIO_BLK_T_OUT:
146
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
147
if (unlikely(!(type & VIRTIO_BLK_T_OUT) ||
148
out_len > sizeof(dwz_hdr))) {
149
virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
150
- virtio_blk_free_request(req);
151
+ g_free(req);
152
return 0;
153
}
154
155
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
156
is_write_zeroes);
157
if (err_status != VIRTIO_BLK_S_OK) {
158
virtio_blk_req_complete(req, err_status);
159
- virtio_blk_free_request(req);
160
+ g_free(req);
161
}
162
163
break;
164
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
165
if (!vbk->handle_unknown_request ||
166
!vbk->handle_unknown_request(req, mrb, type)) {
167
virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
168
- virtio_blk_free_request(req);
169
+ g_free(req);
170
}
171
}
172
}
173
@@ -XXX,XX +XXX,XX @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
174
while ((req = virtio_blk_get_request(s, vq))) {
175
if (virtio_blk_handle_request(req, &mrb)) {
176
virtqueue_detach_element(req->vq, &req->elem, 0);
177
- virtio_blk_free_request(req);
178
+ g_free(req);
179
break;
180
}
181
}
182
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_dma_restart_bh(void *opaque)
183
while (req) {
184
next = req->next;
185
virtqueue_detach_element(req->vq, &req->elem, 0);
186
- virtio_blk_free_request(req);
187
+ g_free(req);
188
req = next;
189
}
190
break;
191
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_reset(VirtIODevice *vdev)
192
/* No other threads can access req->vq here */
193
virtqueue_detach_element(req->vq, &req->elem, 0);
194
195
- virtio_blk_free_request(req);
196
+ g_free(req);
197
}
198
}
199
200
diff --git a/hw/vmapple/virtio-blk.c b/hw/vmapple/virtio-blk.c
201
index XXXXXXX..XXXXXXX 100644
202
--- a/hw/vmapple/virtio-blk.c
203
+++ b/hw/vmapple/virtio-blk.c
204
@@ -XXX,XX +XXX,XX @@ static bool vmapple_virtio_blk_handle_unknown_request(VirtIOBlockReq *req,
205
qemu_log_mask(LOG_UNIMP, "%s: Barrier requests are currently no-ops\n",
206
__func__);
207
virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
208
- virtio_blk_free_request(req);
209
+ g_free(req);
210
return true;
211
default:
212
return false;
213
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
214
index XXXXXXX..XXXXXXX 100644
215
--- a/include/hw/virtio/virtio-blk.h
216
+++ b/include/hw/virtio/virtio-blk.h
217
@@ -XXX,XX +XXX,XX @@ typedef struct VirtIOBlkClass {
218
} VirtIOBlkClass;
219
220
void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
221
-void virtio_blk_free_request(VirtIOBlockReq *req);
222
void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status);
223
224
#endif
225
--
226
2.39.3 (Apple Git-145)
Deleted patch
1
From: Alexander Graf <graf@amazon.com>
2
1
3
Apple defines a new "vmapple" machine type as part of its proprietary
4
macOS Virtualization.Framework vmm. This machine type is similar to the
5
virt one, but with subtle differences in base devices, a few special
6
vmapple device additions and a vastly different boot chain.
7
8
This patch reimplements this machine type in QEMU. To use it, you
9
have to have a readily installed version of macOS for VMApple,
10
run on macOS with -accel hvf, pass the Virtualization.Framework
11
boot rom (AVPBooter) in via -bios, pass the aux and root volume as pflash
12
and pass aux and root volume as virtio drives. In addition, you also
13
need to find the machine UUID and pass that as -M vmapple,uuid= parameter:
14
15
$ qemu-system-aarch64 -accel hvf -M vmapple,uuid=0x1234 -m 4G \
16
-bios /System/Library/Frameworks/Virtualization.framework/Versions/A/Resources/AVPBooter.vmapple2.bin
17
-drive file=aux,if=pflash,format=raw \
18
-drive file=root,if=pflash,format=raw \
19
-drive file=aux,if=none,id=aux,format=raw \
20
-device vmapple-virtio-aux,drive=aux \
21
-drive file=root,if=none,id=root,format=raw \
22
-device vmapple-virtio-root,drive=root
23
24
With all these in place, you should be able to see macOS booting
25
successfully.
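
(For reference, and mirroring the example in docs/system/arm/vmapple.rst added
below: the machine UUID of a guest provisioned with macosvm can be extracted
with the helper script this patch adds as contrib/vmapple/uuid.sh, e.g.

$ UUID=$(contrib/vmapple/uuid.sh macosvm.json)
$ qemu-system-aarch64 -accel hvf -M vmapple,uuid=$UUID ...

where macosvm.json is the VM description file produced by macosvm.)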
26
27
Known issues:
28
- Keyboard and mouse/tablet input is laggy. The reason for this is
29
either that macOS's XHCI driver is broken when the device/platform
30
does not support MSI/MSI-X, or there's some unfortunate interplay
31
with QEMU's XHCI implementation in this scenario.
32
- Currently only macOS 12 guests are supported. The boot process for
33
13+ will need further investigation and adjustment.
34
35
Signed-off-by: Alexander Graf <graf@amazon.com>
36
Co-authored-by: Phil Dennis-Jordan <phil@philjordan.eu>
37
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
38
---
39
40
v3:
41
* Rebased on latest upstream, updated affinity and NIC creation
42
API usage
43
* Included Apple-variant virtio-blk in build dependency
44
* Updated API usage for setting 'redist-region-count' array-typed property on GIC.
45
* Switched from virtio HID devices (for which macOS 12 does not contain drivers) to an XHCI USB controller and USB HID devices.
46
47
v4:
48
* Fixups for v4 changes to the other patches in the set.
49
* Corrected the assert macro to use
50
* Removed superfluous endian conversions corresponding to cfg's.
51
* Init error handling improvement.
52
* No need to select CPU type on TCG, as only HVF is supported.
53
* Machine type version bumped to 9.2
54
* #include order improved
55
56
v5:
57
* Fixed memory reservation for ecam alias region.
58
* Better error handling setting properties on devices.
59
* Simplified the machine ECID/UUID extraction script and actually created a
60
file for it rather than quoting its code in documentation.
61
62
MAINTAINERS | 1 +
63
contrib/vmapple/uuid.sh | 9 +
64
docs/system/arm/vmapple.rst | 60 ++++
65
docs/system/target-arm.rst | 1 +
66
hw/vmapple/Kconfig | 20 ++
67
hw/vmapple/meson.build | 1 +
68
hw/vmapple/vmapple.c | 659 ++++++++++++++++++++++++++++++++++++
69
7 files changed, 751 insertions(+)
70
create mode 100755 contrib/vmapple/uuid.sh
71
create mode 100644 docs/system/arm/vmapple.rst
72
create mode 100644 hw/vmapple/vmapple.c
73
74
diff --git a/MAINTAINERS b/MAINTAINERS
75
index XXXXXXX..XXXXXXX 100644
76
--- a/MAINTAINERS
77
+++ b/MAINTAINERS
78
@@ -XXX,XX +XXX,XX @@ R: Phil Dennis-Jordan <phil@philjordan.eu>
79
S: Maintained
80
F: hw/vmapple/*
81
F: include/hw/vmapple/*
82
+F: docs/system/arm/vmapple.rst
83
84
Subsystems
85
----------
86
diff --git a/contrib/vmapple/uuid.sh b/contrib/vmapple/uuid.sh
87
new file mode 100755
88
index XXXXXXX..XXXXXXX
89
--- /dev/null
90
+++ b/contrib/vmapple/uuid.sh
91
@@ -XXX,XX +XXX,XX @@
92
+#!/bin/sh
93
+# Used for converting a guest provisioned using Virtualization.framework
94
+# for use with the QEMU 'vmapple' aarch64 machine type.
95
+#
96
+# Extracts the Machine UUID from a Virtualization.framework VM JSON file.
97
+# (as produced by 'macosvm', passed as command line argument)
98
+
99
+plutil -extract machineId raw "$1" | base64 -d | plutil -extract ECID raw -
100
+
101
diff --git a/docs/system/arm/vmapple.rst b/docs/system/arm/vmapple.rst
102
new file mode 100644
103
index XXXXXXX..XXXXXXX
104
--- /dev/null
105
+++ b/docs/system/arm/vmapple.rst
106
@@ -XXX,XX +XXX,XX @@
107
+VMApple machine emulation
108
+========================================================================================
109
+
110
+VMApple is the device model that the macOS built-in hypervisor called "Virtualization.framework"
111
+exposes to Apple Silicon macOS guests. The "vmapple" machine model in QEMU implements the same
112
+device model, but does not use any code from Virtualization.Framework.
113
+
114
+Prerequisites
115
+-------------
116
+
117
+To run the vmapple machine model, you need to
118
+
119
+ * Run on Apple Silicon
120
+ * Run on macOS 12.0 or above
121
+ * Have an already installed copy of a Virtualization.Framework macOS 12 virtual machine. I will
122
+ assume that you installed it using the macosvm CLI.
123
+
124
+First, we need to extract the UUID from the virtual machine that you installed. You can do this
125
+by running the shell script in contrib/vmapple/uuid.sh on the macosvm.json file.
126
+
127
+.. code-block:: bash
128
+ :caption: uuid.sh script to extract the UUID from a macosvm.json file
129
+
130
+ $ contrib/vmapple/uuid.sh "path/to/macosvm.json"
131
+
132
+Now we also need to trim the aux partition. It contains metadata that we can just discard:
133
+
134
+.. code-block:: bash
135
+ :caption: Command to trim the aux file
136
+
137
+ $ dd if="aux.img" of="aux.img.trimmed" bs=$(( 0x4000 )) skip=1
138
+
139
+How to run
140
+----------
141
+
142
+Then, we can launch QEMU with the Virtualization.Framework pre-boot environment and the readily
143
+installed target disk images. I recommend forwarding the VM's SSH and VNC ports to the host
145
+for better interactive access to the target system:
145
+
146
+.. code-block:: bash
147
+ :caption: Example execution command line
148
+
149
+ $ UUID=$(uuid.sh macosvm.json)
150
+ $ AVPBOOTER=/System/Library/Frameworks/Virtualization.framework/Resources/AVPBooter.vmapple2.bin
151
+ $ AUX=aux.img.trimmed
152
+ $ DISK=disk.img
153
+ $ qemu-system-aarch64 \
154
+ -serial mon:stdio \
155
+ -m 4G \
156
+ -accel hvf \
157
+ -M vmapple,uuid=$UUID \
158
+ -bios $AVPBOOTER \
159
+ -drive file="$AUX",if=pflash,format=raw \
160
+ -drive file="$DISK",if=pflash,format=raw \
161
+ -drive file="$AUX",if=none,id=aux,format=raw \
162
+ -drive file="$DISK",if=none,id=root,format=raw \
163
+ -device vmapple-virtio-aux,drive=aux \
164
+ -device vmapple-virtio-root,drive=root \
165
+ -net user,ipv6=off,hostfwd=tcp::2222-:22,hostfwd=tcp::5901-:5900 \
166
+ -net nic,model=virtio-net-pci
167
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
168
index XXXXXXX..XXXXXXX 100644
169
--- a/docs/system/target-arm.rst
170
+++ b/docs/system/target-arm.rst
171
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
172
arm/stellaris
173
arm/stm32
174
arm/virt
175
+ arm/vmapple
176
arm/xenpvh
177
arm/xlnx-versal-virt
178
arm/xlnx-zynq
179
diff --git a/hw/vmapple/Kconfig b/hw/vmapple/Kconfig
180
index XXXXXXX..XXXXXXX 100644
181
--- a/hw/vmapple/Kconfig
182
+++ b/hw/vmapple/Kconfig
183
@@ -XXX,XX +XXX,XX @@ config VMAPPLE_CFG
184
config VMAPPLE_VIRTIO_BLK
185
bool
186
187
+config VMAPPLE
188
+ bool
189
+ depends on ARM
190
+ depends on HVF
191
+ default y if ARM
192
+ imply PCI_DEVICES
193
+ select ARM_GIC
194
+ select PLATFORM_BUS
195
+ select PCI_EXPRESS
196
+ select PCI_EXPRESS_GENERIC_BRIDGE
197
+ select PL011 # UART
198
+ select PL031 # RTC
199
+ select PL061 # GPIO
200
+ select GPIO_PWR
201
+ select PVPANIC_MMIO
202
+ select VMAPPLE_AES
203
+ select VMAPPLE_BDIF
204
+ select VMAPPLE_CFG
205
+ select MAC_PVG_MMIO
206
+ select VMAPPLE_VIRTIO_BLK
207
diff --git a/hw/vmapple/meson.build b/hw/vmapple/meson.build
208
index XXXXXXX..XXXXXXX 100644
209
--- a/hw/vmapple/meson.build
210
+++ b/hw/vmapple/meson.build
211
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMAPPLE_AES', if_true: files('aes.c'))
212
system_ss.add(when: 'CONFIG_VMAPPLE_BDIF', if_true: files('bdif.c'))
213
system_ss.add(when: 'CONFIG_VMAPPLE_CFG', if_true: files('cfg.c'))
214
system_ss.add(when: 'CONFIG_VMAPPLE_VIRTIO_BLK', if_true: files('virtio-blk.c'))
215
+specific_ss.add(when: 'CONFIG_VMAPPLE', if_true: files('vmapple.c'))
216
diff --git a/hw/vmapple/vmapple.c b/hw/vmapple/vmapple.c
217
new file mode 100644
218
index XXXXXXX..XXXXXXX
219
--- /dev/null
220
+++ b/hw/vmapple/vmapple.c
221
@@ -XXX,XX +XXX,XX @@
222
+/*
223
+ * VMApple machine emulation
224
+ *
225
+ * Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
226
+ *
227
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
228
+ * See the COPYING file in the top-level directory.
229
+ *
230
+ * SPDX-License-Identifier: GPL-2.0-or-later
231
+ *
232
+ * VMApple is the device model that the macOS built-in hypervisor called
233
+ * "Virtualization.framework" exposes to Apple Silicon macOS guests. The
234
+ * machine model in this file implements the same device model in QEMU, but
235
+ * does not use any code from Virtualization.Framework.
236
+ */
237
+
238
+#include "qemu/osdep.h"
239
+#include "qemu/bitops.h"
240
+#include "qemu/datadir.h"
241
+#include "qemu/error-report.h"
242
+#include "qemu/guest-random.h"
243
+#include "qemu/help-texts.h"
244
+#include "qemu/log.h"
245
+#include "qemu/module.h"
246
+#include "qemu/option.h"
247
+#include "qemu/units.h"
248
+#include "monitor/qdev.h"
249
+#include "hw/boards.h"
250
+#include "hw/irq.h"
251
+#include "hw/loader.h"
252
+#include "hw/qdev-properties.h"
253
+#include "hw/sysbus.h"
254
+#include "hw/usb.h"
255
+#include "hw/arm/boot.h"
256
+#include "hw/arm/primecell.h"
257
+#include "hw/char/pl011.h"
258
+#include "hw/intc/arm_gic.h"
259
+#include "hw/intc/arm_gicv3_common.h"
260
+#include "hw/misc/pvpanic.h"
261
+#include "hw/pci-host/gpex.h"
262
+#include "hw/usb/xhci.h"
263
+#include "hw/virtio/virtio-pci.h"
264
+#include "hw/vmapple/vmapple.h"
265
+#include "net/net.h"
266
+#include "qapi/error.h"
267
+#include "qapi/qmp/qlist.h"
268
+#include "qapi/visitor.h"
269
+#include "qapi/qapi-visit-common.h"
270
+#include "standard-headers/linux/input.h"
271
+#include "sysemu/hvf.h"
272
+#include "sysemu/kvm.h"
273
+#include "sysemu/reset.h"
274
+#include "sysemu/runstate.h"
275
+#include "sysemu/sysemu.h"
276
+#include "target/arm/internals.h"
277
+#include "target/arm/kvm_arm.h"
278
+
279
+struct VMAppleMachineClass {
280
+ MachineClass parent;
281
+};
282
+
283
+struct VMAppleMachineState {
284
+ MachineState parent;
285
+
286
+ Notifier machine_done;
287
+ struct arm_boot_info bootinfo;
288
+ MemMapEntry *memmap;
289
+ const int *irqmap;
290
+ DeviceState *gic;
291
+ DeviceState *cfg;
292
+ Notifier powerdown_notifier;
293
+ PCIBus *bus;
294
+ MemoryRegion fw_mr;
295
+ MemoryRegion ecam_alias;
296
+ uint64_t uuid;
297
+};
298
+
299
+#define DEFINE_VMAPPLE_MACHINE_LATEST(major, minor, latest) \
300
+ static void vmapple##major##_##minor##_class_init(ObjectClass *oc, \
301
+ void *data) \
302
+ { \
303
+ MachineClass *mc = MACHINE_CLASS(oc); \
304
+ vmapple_machine_##major##_##minor##_options(mc); \
305
+ mc->desc = "QEMU " # major "." # minor " Apple Virtual Machine"; \
306
+ if (latest) { \
307
+ mc->alias = "vmapple"; \
308
+ } \
309
+ } \
310
+ static const TypeInfo machvmapple##major##_##minor##_info = { \
311
+ .name = MACHINE_TYPE_NAME("vmapple-" # major "." # minor), \
312
+ .parent = TYPE_VMAPPLE_MACHINE, \
313
+ .class_init = vmapple##major##_##minor##_class_init, \
314
+ }; \
315
+ static void machvmapple_machine_##major##_##minor##_init(void) \
316
+ { \
317
+ type_register_static(&machvmapple##major##_##minor##_info); \
318
+ } \
319
+ type_init(machvmapple_machine_##major##_##minor##_init);
320
+
321
+#define DEFINE_VMAPPLE_MACHINE_AS_LATEST(major, minor) \
322
+ DEFINE_VMAPPLE_MACHINE_LATEST(major, minor, true)
323
+#define DEFINE_VMAPPLE_MACHINE(major, minor) \
324
+ DEFINE_VMAPPLE_MACHINE_LATEST(major, minor, false)
325
+
326
+#define TYPE_VMAPPLE_MACHINE MACHINE_TYPE_NAME("vmapple")
327
+OBJECT_DECLARE_TYPE(VMAppleMachineState, VMAppleMachineClass, VMAPPLE_MACHINE)
328
+
329
+/* Number of external interrupt lines to configure the GIC with */
330
+#define NUM_IRQS 256
331
+
332
+enum {
333
+ VMAPPLE_FIRMWARE,
334
+ VMAPPLE_CONFIG,
335
+ VMAPPLE_MEM,
336
+ VMAPPLE_GIC_DIST,
337
+ VMAPPLE_GIC_REDIST,
338
+ VMAPPLE_UART,
339
+ VMAPPLE_RTC,
340
+ VMAPPLE_PCIE,
341
+ VMAPPLE_PCIE_MMIO,
342
+ VMAPPLE_PCIE_ECAM,
343
+ VMAPPLE_GPIO,
344
+ VMAPPLE_PVPANIC,
345
+ VMAPPLE_APV_GFX,
346
+ VMAPPLE_APV_IOSFC,
347
+ VMAPPLE_AES_1,
348
+ VMAPPLE_AES_2,
349
+ VMAPPLE_BDOOR,
350
+ VMAPPLE_MEMMAP_LAST,
351
+};
352
+
353
+static MemMapEntry memmap[] = {
354
+ [VMAPPLE_FIRMWARE] = { 0x00100000, 0x00100000 },
355
+ [VMAPPLE_CONFIG] = { 0x00400000, 0x00010000 },
356
+
357
+ [VMAPPLE_GIC_DIST] = { 0x10000000, 0x00010000 },
358
+ [VMAPPLE_GIC_REDIST] = { 0x10010000, 0x00400000 },
359
+
360
+ [VMAPPLE_UART] = { 0x20010000, 0x00010000 },
361
+ [VMAPPLE_RTC] = { 0x20050000, 0x00001000 },
362
+ [VMAPPLE_GPIO] = { 0x20060000, 0x00001000 },
363
+ [VMAPPLE_PVPANIC] = { 0x20070000, 0x00000002 },
364
+ [VMAPPLE_BDOOR] = { 0x30000000, 0x00200000 },
365
+ [VMAPPLE_APV_GFX] = { 0x30200000, 0x00010000 },
366
+ [VMAPPLE_APV_IOSFC] = { 0x30210000, 0x00010000 },
367
+ [VMAPPLE_AES_1] = { 0x30220000, 0x00004000 },
368
+ [VMAPPLE_AES_2] = { 0x30230000, 0x00004000 },
369
+ [VMAPPLE_PCIE_ECAM] = { 0x40000000, 0x10000000 },
370
+ [VMAPPLE_PCIE_MMIO] = { 0x50000000, 0x1fff0000 },
371
+
372
+ /* Actual RAM size depends on configuration */
373
+ [VMAPPLE_MEM] = { 0x70000000ULL, GiB},
374
+};
375
+
376
+static const int irqmap[] = {
377
+ [VMAPPLE_UART] = 1,
378
+ [VMAPPLE_RTC] = 2,
379
+ [VMAPPLE_GPIO] = 0x5,
380
+ [VMAPPLE_APV_IOSFC] = 0x10,
381
+ [VMAPPLE_APV_GFX] = 0x11,
382
+ [VMAPPLE_AES_1] = 0x12,
383
+ [VMAPPLE_PCIE] = 0x20,
384
+};
385
+
386
+#define GPEX_NUM_IRQS 16
387
+
388
+static void create_bdif(VMAppleMachineState *vms, MemoryRegion *mem)
389
+{
390
+ DeviceState *bdif;
391
+ SysBusDevice *bdif_sb;
392
+ DriveInfo *di_aux = drive_get(IF_PFLASH, 0, 0);
393
+ DriveInfo *di_root = drive_get(IF_PFLASH, 0, 1);
394
+
395
+ if (!di_aux) {
396
+ error_report("No AUX device. Please specify one as pflash drive.");
397
+ exit(1);
398
+ }
399
+
400
+ if (!di_root) {
401
+ /* Fall back to the first IF_VIRTIO device as root device */
402
+ di_root = drive_get(IF_VIRTIO, 0, 0);
403
+ }
404
+
405
+ if (!di_root) {
406
+ error_report("No root device. Please specify one as virtio drive.");
407
+ exit(1);
408
+ }
409
+
410
+ /* PV backdoor device */
411
+ bdif = qdev_new(TYPE_VMAPPLE_BDIF);
412
+ bdif_sb = SYS_BUS_DEVICE(bdif);
413
+ sysbus_mmio_map(bdif_sb, 0, vms->memmap[VMAPPLE_BDOOR].base);
414
+
415
+ qdev_prop_set_drive(DEVICE(bdif), "aux", blk_by_legacy_dinfo(di_aux));
416
+ qdev_prop_set_drive(DEVICE(bdif), "root", blk_by_legacy_dinfo(di_root));
417
+
418
+ sysbus_realize_and_unref(bdif_sb, &error_fatal);
419
+}
420
+
421
+static void create_pvpanic(VMAppleMachineState *vms, MemoryRegion *mem)
422
+{
423
+ SysBusDevice *cfg;
424
+
425
+ vms->cfg = qdev_new(TYPE_PVPANIC_MMIO_DEVICE);
426
+ cfg = SYS_BUS_DEVICE(vms->cfg);
427
+ sysbus_mmio_map(cfg, 0, vms->memmap[VMAPPLE_PVPANIC].base);
428
+
429
+ sysbus_realize_and_unref(cfg, &error_fatal);
430
+}
431
+
432
+static void create_cfg(VMAppleMachineState *vms, MemoryRegion *mem)
433
+{
434
+ SysBusDevice *cfg;
435
+ MachineState *machine = MACHINE(vms);
436
+ uint32_t rnd = 1;
437
+
438
+ vms->cfg = qdev_new(TYPE_VMAPPLE_CFG);
439
+ cfg = SYS_BUS_DEVICE(vms->cfg);
440
+ sysbus_mmio_map(cfg, 0, vms->memmap[VMAPPLE_CONFIG].base);
441
+
442
+ qemu_guest_getrandom_nofail(&rnd, sizeof(rnd));
443
+
444
+ qdev_prop_set_uint32(vms->cfg, "nr-cpus", machine->smp.cpus);
445
+ qdev_prop_set_uint64(vms->cfg, "ecid", vms->uuid);
446
+ qdev_prop_set_uint64(vms->cfg, "ram-size", machine->ram_size);
447
+ qdev_prop_set_uint32(vms->cfg, "rnd", rnd);
448
+
449
+ sysbus_realize_and_unref(cfg, &error_fatal);
450
+}
451
+
452
+static void create_gfx(VMAppleMachineState *vms, MemoryRegion *mem)
453
+{
454
+ int irq_gfx = vms->irqmap[VMAPPLE_APV_GFX];
455
+ int irq_iosfc = vms->irqmap[VMAPPLE_APV_IOSFC];
456
+ SysBusDevice *gfx;
457
+
458
+ gfx = SYS_BUS_DEVICE(qdev_new("apple-gfx-mmio"));
459
+ sysbus_mmio_map(gfx, 0, vms->memmap[VMAPPLE_APV_GFX].base);
460
+ sysbus_mmio_map(gfx, 1, vms->memmap[VMAPPLE_APV_IOSFC].base);
461
+ sysbus_connect_irq(gfx, 0, qdev_get_gpio_in(vms->gic, irq_gfx));
462
+ sysbus_connect_irq(gfx, 1, qdev_get_gpio_in(vms->gic, irq_iosfc));
463
+ sysbus_realize_and_unref(gfx, &error_fatal);
464
+}
465
+
466
+static void create_aes(VMAppleMachineState *vms, MemoryRegion *mem)
467
+{
468
+ int irq = vms->irqmap[VMAPPLE_AES_1];
469
+ SysBusDevice *aes;
470
+
471
+ aes = SYS_BUS_DEVICE(qdev_new(TYPE_APPLE_AES));
472
+ sysbus_mmio_map(aes, 0, vms->memmap[VMAPPLE_AES_1].base);
473
+ sysbus_mmio_map(aes, 1, vms->memmap[VMAPPLE_AES_2].base);
474
+ sysbus_connect_irq(aes, 0, qdev_get_gpio_in(vms->gic, irq));
475
+ sysbus_realize_and_unref(aes, &error_fatal);
476
+}
477
+
478
+static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
479
+{
480
+ return NUM_IRQS + cpu_nr * GIC_INTERNAL + ppi_index;
481
+}
482
+
483
+static void create_gic(VMAppleMachineState *vms, MemoryRegion *mem)
484
+{
485
+ MachineState *ms = MACHINE(vms);
486
+ /* We create a standalone GIC */
487
+ SysBusDevice *gicbusdev;
488
+ QList *redist_region_count;
489
+ int i;
490
+ unsigned int smp_cpus = ms->smp.cpus;
491
+
492
+ vms->gic = qdev_new(gicv3_class_name());
493
+ qdev_prop_set_uint32(vms->gic, "revision", 3);
494
+ qdev_prop_set_uint32(vms->gic, "num-cpu", smp_cpus);
495
+ /*
496
+ * Note that the num-irq property counts both internal and external
497
+ * interrupts; there are always 32 of the former (mandated by GIC spec).
498
+ */
499
+ qdev_prop_set_uint32(vms->gic, "num-irq", NUM_IRQS + 32);
500
+
501
+ uint32_t redist0_capacity =
502
+ vms->memmap[VMAPPLE_GIC_REDIST].size / GICV3_REDIST_SIZE;
503
+ uint32_t redist0_count = MIN(smp_cpus, redist0_capacity);
504
+
505
+ redist_region_count = qlist_new();
506
+ qlist_append_int(redist_region_count, redist0_count);
507
+ qdev_prop_set_array(vms->gic, "redist-region-count", redist_region_count);
508
+
509
+ gicbusdev = SYS_BUS_DEVICE(vms->gic);
510
+ sysbus_realize_and_unref(gicbusdev, &error_fatal);
511
+ sysbus_mmio_map(gicbusdev, 0, vms->memmap[VMAPPLE_GIC_DIST].base);
512
+ sysbus_mmio_map(gicbusdev, 1, vms->memmap[VMAPPLE_GIC_REDIST].base);
513
+
514
+ /*
515
+ * Wire the outputs from each CPU's generic timer and the GICv3
516
+ * maintenance interrupt signal to the appropriate GIC PPI inputs,
517
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
518
+ */
519
+ for (i = 0; i < smp_cpus; i++) {
520
+ DeviceState *cpudev = DEVICE(qemu_get_cpu(i));
521
+
522
+ /* Map the virt timer to PPI 27 */
523
+ qdev_connect_gpio_out(cpudev, GTIMER_VIRT,
524
+ qdev_get_gpio_in(vms->gic,
525
+ arm_gic_ppi_index(i, 27)));
526
+
527
+ /* Map the GIC IRQ and FIQ lines to CPU */
528
+ sysbus_connect_irq(gicbusdev, i, qdev_get_gpio_in(cpudev, ARM_CPU_IRQ));
529
+ sysbus_connect_irq(gicbusdev, i + smp_cpus,
530
+ qdev_get_gpio_in(cpudev, ARM_CPU_FIQ));
531
+ }
532
+}
533
+
534
+static void create_uart(const VMAppleMachineState *vms, int uart,
535
+ MemoryRegion *mem, Chardev *chr)
536
+{
537
+ hwaddr base = vms->memmap[uart].base;
538
+ int irq = vms->irqmap[uart];
539
+ DeviceState *dev = qdev_new(TYPE_PL011);
540
+ SysBusDevice *s = SYS_BUS_DEVICE(dev);
541
+
542
+ qdev_prop_set_chr(dev, "chardev", chr);
543
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
544
+ memory_region_add_subregion(mem, base,
545
+ sysbus_mmio_get_region(s, 0));
546
+ sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));
547
+}
548
+
549
+static void create_rtc(const VMAppleMachineState *vms)
550
+{
551
+ hwaddr base = vms->memmap[VMAPPLE_RTC].base;
552
+ int irq = vms->irqmap[VMAPPLE_RTC];
553
+
554
+ sysbus_create_simple("pl031", base, qdev_get_gpio_in(vms->gic, irq));
555
+}
556
+
557
+static DeviceState *gpio_key_dev;
558
+static void vmapple_powerdown_req(Notifier *n, void *opaque)
559
+{
560
+ /* use gpio Pin 3 for power button event */
561
+ qemu_set_irq(qdev_get_gpio_in(gpio_key_dev, 0), 1);
562
+}
563
+
564
+static void create_gpio_devices(const VMAppleMachineState *vms, int gpio,
565
+ MemoryRegion *mem)
566
+{
567
+ DeviceState *pl061_dev;
568
+ hwaddr base = vms->memmap[gpio].base;
569
+ int irq = vms->irqmap[gpio];
570
+ SysBusDevice *s;
571
+
572
+ pl061_dev = qdev_new("pl061");
573
+ /* Pull lines down to 0 if not driven by the PL061 */
574
+ qdev_prop_set_uint32(pl061_dev, "pullups", 0);
575
+ qdev_prop_set_uint32(pl061_dev, "pulldowns", 0xff);
576
+ s = SYS_BUS_DEVICE(pl061_dev);
577
+ sysbus_realize_and_unref(s, &error_fatal);
578
+ memory_region_add_subregion(mem, base, sysbus_mmio_get_region(s, 0));
579
+ sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));
580
+ gpio_key_dev = sysbus_create_simple("gpio-key", -1,
581
+ qdev_get_gpio_in(pl061_dev, 3));
582
+}
583
+
584
+static void vmapple_firmware_init(VMAppleMachineState *vms,
585
+ MemoryRegion *sysmem)
586
+{
587
+ hwaddr size = vms->memmap[VMAPPLE_FIRMWARE].size;
588
+ hwaddr base = vms->memmap[VMAPPLE_FIRMWARE].base;
589
+ const char *bios_name;
590
+ int image_size;
591
+ char *fname;
592
+
593
+ bios_name = MACHINE(vms)->firmware;
594
+ if (!bios_name) {
595
+ error_report("No firmware specified");
596
+ exit(1);
597
+ }
598
+
599
+ fname = qemu_find_file(QEMU_FILE_TYPE_BIOS, bios_name);
600
+ if (!fname) {
601
+ error_report("Could not find ROM image '%s'", bios_name);
602
+ exit(1);
603
+ }
604
+
605
+ memory_region_init_ram(&vms->fw_mr, NULL, "firmware", size, &error_fatal);
606
+ image_size = load_image_mr(fname, &vms->fw_mr);
607
+
608
+ g_free(fname);
609
+ if (image_size < 0) {
610
+ error_report("Could not load ROM image '%s'", bios_name);
611
+ exit(1);
612
+ }
613
+
614
+ memory_region_add_subregion(get_system_memory(), base, &vms->fw_mr);
615
+}
616
+
617
+static void create_pcie(VMAppleMachineState *vms)
618
+{
619
+ hwaddr base_mmio = vms->memmap[VMAPPLE_PCIE_MMIO].base;
620
+ hwaddr size_mmio = vms->memmap[VMAPPLE_PCIE_MMIO].size;
621
+ hwaddr base_ecam = vms->memmap[VMAPPLE_PCIE_ECAM].base;
622
+ hwaddr size_ecam = vms->memmap[VMAPPLE_PCIE_ECAM].size;
623
+ int irq = vms->irqmap[VMAPPLE_PCIE];
624
+ MemoryRegion *mmio_alias;
625
+ MemoryRegion *mmio_reg;
626
+ MemoryRegion *ecam_reg;
627
+ DeviceState *dev;
628
+ int i;
629
+ PCIHostState *pci;
630
+ DeviceState *usb_controller;
631
+ USBBus *usb_bus;
632
+
633
+ dev = qdev_new(TYPE_GPEX_HOST);
634
+ qdev_prop_set_uint32(dev, "num-irqs", GPEX_NUM_IRQS);
635
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
636
+
637
+ /* Map only the first size_ecam bytes of ECAM space */
638
+ ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
639
+ memory_region_init_alias(&vms->ecam_alias, OBJECT(dev), "pcie-ecam",
640
+ ecam_reg, 0, size_ecam);
641
+ memory_region_add_subregion(get_system_memory(), base_ecam,
642
+ &vms->ecam_alias);
643
+
644
+ /*
645
+ * Map the MMIO window from [0x50000000-0x7fff0000] in PCI space into
646
+ * system address space at [0x50000000-0x7fff0000].
647
+ */
648
+ mmio_alias = g_new0(MemoryRegion, 1);
649
+ mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
650
+ memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
651
+ mmio_reg, base_mmio, size_mmio);
652
+ memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
653
+
654
+ for (i = 0; i < GPEX_NUM_IRQS; i++) {
655
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
656
+ qdev_get_gpio_in(vms->gic, irq + i));
657
+ gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);
658
+ }
659
+
660
+ pci = PCI_HOST_BRIDGE(dev);
661
+ vms->bus = pci->bus;
662
+ g_assert(vms->bus);
663
+
664
+ while ((dev = qemu_create_nic_device("virtio-net-pci", true, NULL))) {
665
+ qdev_realize_and_unref(dev, BUS(vms->bus), &error_fatal);
666
+ }
667
+
668
+ usb_controller = qdev_new(TYPE_QEMU_XHCI);
669
+ qdev_realize_and_unref(usb_controller, BUS(pci->bus), &error_fatal);
670
+
671
+ usb_bus = USB_BUS(object_resolve_type_unambiguous(TYPE_USB_BUS,
672
+ &error_fatal));
673
+ usb_create_simple(usb_bus, "usb-kbd");
674
+ usb_create_simple(usb_bus, "usb-tablet");
675
+}
676
+
677
+static void vmapple_reset(void *opaque)
678
+{
679
+ VMAppleMachineState *vms = opaque;
680
+ hwaddr base = vms->memmap[VMAPPLE_FIRMWARE].base;
681
+
682
+ cpu_set_pc(first_cpu, base);
683
+}
684
+
685
+static void mach_vmapple_init(MachineState *machine)
686
+{
687
+ VMAppleMachineState *vms = VMAPPLE_MACHINE(machine);
688
+ MachineClass *mc = MACHINE_GET_CLASS(machine);
689
+ const CPUArchIdList *possible_cpus;
690
+ MemoryRegion *sysmem = get_system_memory();
691
+ int n;
692
+ unsigned int smp_cpus = machine->smp.cpus;
693
+ unsigned int max_cpus = machine->smp.max_cpus;
694
+
695
+ vms->memmap = memmap;
696
+ machine->usb = true;
697
+
698
+ possible_cpus = mc->possible_cpu_arch_ids(machine);
699
+ assert(possible_cpus->len == max_cpus);
700
+ for (n = 0; n < possible_cpus->len; n++) {
701
+ Object *cpu;
702
+ CPUState *cs;
703
+
704
+ if (n >= smp_cpus) {
705
+ break;
706
+ }
707
+
708
+ cpu = object_new(possible_cpus->cpus[n].type);
709
+ object_property_set_int(cpu, "mp-affinity",
710
+ possible_cpus->cpus[n].arch_id, &error_fatal);
711
+
712
+ cs = CPU(cpu);
713
+ cs->cpu_index = n;
714
+
715
+ numa_cpu_pre_plug(&possible_cpus->cpus[cs->cpu_index], DEVICE(cpu),
716
+ &error_fatal);
717
+
718
+ if (object_property_find(cpu, "has_el3")) {
719
+ object_property_set_bool(cpu, "has_el3", false, &error_fatal);
720
+ }
721
+ if (object_property_find(cpu, "has_el2")) {
722
+ object_property_set_bool(cpu, "has_el2", false, &error_fatal);
723
+ }
724
+ object_property_set_int(cpu, "psci-conduit", QEMU_PSCI_CONDUIT_HVC,
725
+ NULL);
726
+
727
+ /* Secondary CPUs start in PSCI powered-down state */
728
+ if (n > 0) {
729
+ object_property_set_bool(cpu, "start-powered-off", true,
730
+ &error_fatal);
731
+ }
732
+
733
+ object_property_set_link(cpu, "memory", OBJECT(sysmem), &error_abort);
734
+ qdev_realize(DEVICE(cpu), NULL, &error_fatal);
735
+ object_unref(cpu);
736
+ }
737
+
738
+ memory_region_add_subregion(sysmem, vms->memmap[VMAPPLE_MEM].base,
739
+ machine->ram);
740
+
741
+ create_gic(vms, sysmem);
742
+ create_bdif(vms, sysmem);
743
+ create_pvpanic(vms, sysmem);
744
+ create_aes(vms, sysmem);
745
+ create_gfx(vms, sysmem);
746
+ create_uart(vms, VMAPPLE_UART, sysmem, serial_hd(0));
747
+ create_rtc(vms);
748
+ create_pcie(vms);
749
+
750
+ create_gpio_devices(vms, VMAPPLE_GPIO, sysmem);
751
+
752
+ vmapple_firmware_init(vms, sysmem);
753
+ create_cfg(vms, sysmem);
754
+
755
+ /* connect powerdown request */
756
+ vms->powerdown_notifier.notify = vmapple_powerdown_req;
757
+ qemu_register_powerdown_notifier(&vms->powerdown_notifier);
758
+
759
+ vms->bootinfo.ram_size = machine->ram_size;
760
+ vms->bootinfo.board_id = -1;
761
+ vms->bootinfo.loader_start = vms->memmap[VMAPPLE_MEM].base;
762
+ vms->bootinfo.skip_dtb_autoload = true;
763
+ vms->bootinfo.firmware_loaded = true;
764
+ arm_load_kernel(ARM_CPU(first_cpu), machine, &vms->bootinfo);
765
+
766
+ qemu_register_reset(vmapple_reset, vms);
767
+}
768
+
769
+static CpuInstanceProperties
770
+vmapple_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
771
+{
772
+ MachineClass *mc = MACHINE_GET_CLASS(ms);
773
+ const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
774
+
775
+ assert(cpu_index < possible_cpus->len);
776
+ return possible_cpus->cpus[cpu_index].props;
777
+}
778
+
779
+
780
+static int64_t vmapple_get_default_cpu_node_id(const MachineState *ms, int idx)
781
+{
782
+ return idx % ms->numa_state->num_nodes;
783
+}
784
+
785
+static const CPUArchIdList *vmapple_possible_cpu_arch_ids(MachineState *ms)
786
+{
787
+ int n;
788
+ unsigned int max_cpus = ms->smp.max_cpus;
789
+
790
+ if (ms->possible_cpus) {
791
+ assert(ms->possible_cpus->len == max_cpus);
792
+ return ms->possible_cpus;
793
+ }
794
+
795
+ ms->possible_cpus = g_malloc0(sizeof(CPUArchIdList) +
796
+ sizeof(CPUArchId) * max_cpus);
797
+ ms->possible_cpus->len = max_cpus;
798
+ for (n = 0; n < ms->possible_cpus->len; n++) {
799
+ ms->possible_cpus->cpus[n].type = ms->cpu_type;
800
+ ms->possible_cpus->cpus[n].arch_id =
801
+ arm_build_mp_affinity(n, GICV3_TARGETLIST_BITS);
802
+ ms->possible_cpus->cpus[n].props.has_thread_id = true;
803
+ ms->possible_cpus->cpus[n].props.thread_id = n;
804
+ }
805
+ return ms->possible_cpus;
806
+}
807
+
808
+static void vmapple_get_uuid(Object *obj, Visitor *v, const char *name,
809
+ void *opaque, Error **errp)
810
+{
811
+ VMAppleMachineState *vms = VMAPPLE_MACHINE(obj);
812
+
813
+ visit_type_uint64(v, name, &vms->uuid, errp);
814
+}
815
+
816
+static void vmapple_set_uuid(Object *obj, Visitor *v, const char *name,
817
+ void *opaque, Error **errp)
818
+{
819
+ VMAppleMachineState *vms = VMAPPLE_MACHINE(obj);
820
+ Error *error = NULL;
821
+
822
+ visit_type_uint64(v, name, &vms->uuid, &error);
823
+ if (error) {
824
+ error_propagate(errp, error);
825
+ return;
826
+ }
827
+}
828
+
829
+static void vmapple_machine_class_init(ObjectClass *oc, void *data)
830
+{
831
+ MachineClass *mc = MACHINE_CLASS(oc);
832
+
833
+ mc->init = mach_vmapple_init;
834
+ mc->max_cpus = 32;
835
+ mc->block_default_type = IF_VIRTIO;
836
+ mc->no_cdrom = 1;
837
+ mc->pci_allow_0_address = true;
838
+ mc->minimum_page_bits = 12;
839
+ mc->possible_cpu_arch_ids = vmapple_possible_cpu_arch_ids;
840
+ mc->cpu_index_to_instance_props = vmapple_cpu_index_to_props;
841
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("host");
842
+ mc->get_default_cpu_node_id = vmapple_get_default_cpu_node_id;
843
+ mc->default_ram_id = "mach-vmapple.ram";
844
+
845
+ object_register_sugar_prop(TYPE_VIRTIO_PCI, "disable-legacy",
846
+ "on", true);
847
+
848
+ object_class_property_add(oc, "uuid", "uint64", vmapple_get_uuid,
849
+ vmapple_set_uuid, NULL, NULL);
850
+ object_class_property_set_description(oc, "uuid", "Machine UUID (SDOM)");
851
+}
852
+
853
+static void vmapple_instance_init(Object *obj)
854
+{
855
+ VMAppleMachineState *vms = VMAPPLE_MACHINE(obj);
856
+
857
+ vms->irqmap = irqmap;
858
+}
859
+
860
+static const TypeInfo vmapple_machine_info = {
861
+ .name = TYPE_VMAPPLE_MACHINE,
862
+ .parent = TYPE_MACHINE,
863
+ .abstract = true,
864
+ .instance_size = sizeof(VMAppleMachineState),
865
+ .class_size = sizeof(VMAppleMachineClass),
866
+ .class_init = vmapple_machine_class_init,
867
+ .instance_init = vmapple_instance_init,
868
+};
869
+
870
+static void machvmapple_machine_init(void)
871
+{
872
+ type_register_static(&vmapple_machine_info);
873
+}
874
+type_init(machvmapple_machine_init);
875
+
876
+static void vmapple_machine_9_2_options(MachineClass *mc)
877
+{
878
+}
879
+DEFINE_VMAPPLE_MACHINE_AS_LATEST(9, 2)
880
+
881
--
882
2.39.3 (Apple Git-145)
883
884